# Elasticsearch-Kibana
## Overview
[Elasticsearch-Kibana](https://www.elastic.co/elastic-stack) combines two components of the Elastic Stack. Elasticsearch is a search engine based on the Lucene library; it provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Kibana is a data visualization dashboard for Elasticsearch; it provides visualization capabilities on top of the content indexed on an Elasticsearch cluster, letting users create bar, line, and scatter plots, pie charts, and maps on top of large volumes of data.
## Big Bang Touchpoints
```mermaid
graph TB
subgraph "Ingress"
ingressgateway
end
subgraph "Operator"
eck-operator
end
subgraph "Kibana"
ingressgateway --> kibana
eck-operator --> kibana
end
subgraph "Elasticsearch"
kibana --> elasticsearch
eck-operator --> elasticsearch
end
subgraph "Metrics"
kibana --> prometheus
end
```
### Storage
Persistent storage for both Elasticsearch Master and Data nodes can be configured with the following values:
```yaml
logging:
  values:
    elasticsearch:
      master:
        persistence:
          storageClassName: ""
          size: 10Gi
      data:
        persistence:
          storageClassName: ""
          size: 20Gi
```
### Istio Configuration
Istio is disabled in the elasticsearch-kibana chart by default and can be enabled with the following values in the bigbang chart:
```yaml
hostname: bigbang.dev
istio:
  enabled: true
```
These values are passed into the logging chart [here](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/chart/templates/logging/elasticsearch-kibana/values.yaml#L6). This creates an Istio VirtualService mapped to the main Big Bang Istio gateway. The Kibana GUI is served behind this VirtualService, configured automatically at "kibana.{{ .Values.hostname }}" (using the hostname value set above), and can be further configured with the following values:
```yaml
logging:
  values:
    istio:
      kibana:
        # Toggle VirtualService creation
        enabled: true
        annotations: {}
        labels: {}
        gateways:
          - istio-system/main
        hosts:
          - kibana.{{ .Values.hostname }}
```
## High Availability
High availability can be accomplished by increasing the `count` (number of replicas) of each deployment in the stack:
```yaml
logging:
  values:
    kibana:
      count: 1
    elasticsearch:
      master:
        count: 3
      data:
        count: 4
```
## Single Sign On (SSO)
SSO integration for the eck stack requires a license (see below) and can be configured with the following values:
```yaml
sso:
  oidc:
    # -- Domain for keycloak used for configuring SSO
    host: login.dso.mil
    # -- Keycloak realm containing clients
    realm: baby-yoda
logging:
  sso:
    # -- Toggle OIDC SSO for Kibana/Elasticsearch on and off.
    # Enabling this option will auto-create any required secrets.
    enabled: true
    # -- Elasticsearch/Kibana OIDC client ID
    client_id: "EXAMPLE_OIDC_CLIENT"
    # -- Elasticsearch/Kibana OIDC client secret
    client_secret: "EXAMPLE_OIDC_CLIENT_SECRET"
```
## Licensing
Features like SSO integration, email/slack/pagerduty alerting, FIPS 140-2 mode, encryption at rest, and more for the ECK stack require a Platinum or Enterprise license. Information about licensing and all features is available [here](https://www.elastic.co/pricing/). A 30-day Enterprise trial can be enabled by setting `trial: true` in the settings below.
Licensing can be configured with the following values:
```yaml
logging:
  license:
    trial: false
    keyJSON: |
      {"license":{"uid":....}}
```
## Health Checks
Licensed ECK comes with [built-in health monitoring for Kibana and Elasticsearch](https://www.elastic.co/guide/en/kibana/current/monitoring-kibana.html). Within the Kibana UI this is called self-monitoring and is available under Stack Monitoring at `https://KIBANA_URL/app/monitoring#`.
Outside of the UI, it is possible to check the health of the Elasticsearch cluster via a port-forward:
```bash
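# Retrieve the auto-generated password for the built-in "elastic" user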
kubectl get secrets -n logging logging-ek-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
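# Forward the Elasticsearch HTTP service locally, then query cluster health with the password from above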
kubectl port-forward svc/logging-ek-es-http -n logging 9200:9200
curl -ku "elastic:ELASTIC_PASSWORD" "https://localhost:9200/_cluster/health?pretty"
```
# Fluentbit
## Overview
[FluentBit](https://fluentbit.io/) is an open source log processor and forwarder which allows you to collect data such as metrics and logs from different sources, enrich them with filters, and send them to multiple destinations. It is the preferred choice for containerized environments like Kubernetes.
## Big Bang Touchpoints
```mermaid
graph TB
subgraph "Fluent-Bit"
fluentbit
end
subgraph "Elasticsearch"
fluentbit --> elasticsearch
end
```
### Storage
Fluentbit itself does not use or require any persistent storage; however, it does need hostPath mounts on the Kubernetes nodes to tail and process log data. These hostPath volumes are `/var/log/containers`, to tail logs from containers running on the nodes, and `/var/log/flb-storage`, a configurable [storage buffer](https://docs.fluentbit.io/manual/administration/buffering-and-storage) path used in Big Bang production environments.
This storage buffer is configurable via the following values in Big Bang:
```yaml
fluentbit:
  values:
    storage_buffer:
      path: /var/log/flb-storage/
    extraVolumes:
      - hostPath:
          path: /var/log/flb-storage/
          type: DirectoryOrCreate
        name: flb-storage
    extraVolumeMounts:
      - mountPath: /var/log/flb-storage/
        name: flb-storage
```
This storage buffer hostPath mount, in conjunction with the hostPath mount of `/var/log/containers/` used to fetch logs, requires a `privileged` securityContext if SELinux is set to `Enforcing` on the Kubernetes nodes. To set this securityContext for the fluentbit pods, add the following values in Big Bang:
```yaml
fluentbit:
  values:
    securityContext:
      privileged: true
```
## Logging
Since Fluentbit is the method for shipping cluster logs to the ECK stack, fluentbit's own container logs are excluded from processing and shipping, to reduce the volume of logs fluentbit and ECK must handle. If you would like fluentbit container logs sent to ECK anyway, remove the `Exclude_Path` portion of this INPUT block (the entire block must be present even when changing a single line):
```yaml
fluentbit:
  values:
    config:
      inputs: |
        [INPUT]
            Name tail
            Path /var/log/containers/*.log
            Exclude_Path /var/log/containers/*fluent*.log,/var/log/containers/*gatekeeper-audit*.log
            Parser containerd
            Tag kube.*
            Mem_Buf_Limit 50MB
            Skip_Long_Lines On
            storage.type filesystem
```
## Health Checks
Fluentbit can be configured with a service port for the container, which can expose [all kinds of metrics](https://docs.fluentbit.io/manual/administration/monitoring), including metrics for Prometheus.
Starting with chart version 0.15.X, fluentbit comes packaged (when monitoring is enabled) with a ServiceMonitor for the prometheus-operator bundled with Big Bang, so that metrics are available in the Prometheus and Grafana UIs, the latter via this [Grafana Dashboard](https://docs.fluentbit.io/manual/administration/monitoring#grafana-dashboard).
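If the ServiceMonitor does not appear, the relevant toggles can be set explicitly. A minimal values sketch, assuming the packaged fluentbit chart exposes the upstream `serviceMonitor` toggle (verify the key name against your chart version):
```yaml
monitoring:
  enabled: true
fluentbit:
  values:
    # assumption: upstream fluent-bit chart key; verify for your chart version
    serviceMonitor:
      enabled: true
```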
# Mattermost
## Overview
[Mattermost](https://mattermost.com/) is an open-source, self-hostable online chat service with file sharing, search, and integrations.
Big Bang's implementation uses the [Mattermost operator](https://github.com/mattermost/mattermost-operator) to provide custom resources and manage the application.
### Basic Tier
```mermaid
graph LR
subgraph "Mattermost"
mattermostpods("Mattermost Pod(s)")
mmservice{{Mattermost Service}} --> mattermostpods("Mattermost Pod(s)")
end
subgraph "Ingress"
ig(Ingress Gateway) --"App Port"--> mmservice
end
subgraph "Database Storage (Postgres)"
mattermostpods("Mattermost Pod(s)") --"Chats/Config"--> database[(Mattermost DB)]
end
subgraph "File Storage (S3/Minio)"
mattermostpods("Mattermost Pod(s)") --"Files"--> bucket[(Mattermost Bucket)]
end
subgraph "Logging"
mattermostpods("Mattermost Pod(s)") --"Logs"--> fluent(Fluentbit) --> logging-ek-es-http
logging-ek-es-http{{Elastic Service<br />logging-ek-es-http}} --> elastic[(Elastic Storage)]
end
```
### Enterprise Tier with Integrations
```mermaid
graph LR
subgraph "Mattermost"
mattermostpods("Mattermost Pod(s)")
mmservice{{Mattermost Service}} --> mattermostpods("Mattermost Pod(s)")
end
subgraph "Ingress"
ig(Ingress Gateway) --"App Port"--> mmservice
end
subgraph "Database Storage (Postgres)"
mattermostpods("Mattermost Pod(s)") --"Chats/Config"--> database[(Mattermost DB)]
end
subgraph "File Storage (S3/Minio)"
mattermostpods("Mattermost Pod(s)") --"Files"--> bucket[(Mattermost Bucket)]
end
subgraph "Logging"
mattermostpods("Mattermost Pod(s)") --"Logs"--> fluent(Fluentbit) --> logging-ek-es-http
logging-ek-es-http{{Elastic Service<br />logging-ek-es-http}} --> elastic[(Elastic Storage)]
mattermostpods("Mattermost Pod(s)") --"Chat Indexing"--> logging-ek-es-http
end
subgraph "Monitoring"
svcmonitor("Service Monitor") --"Metrics Port"--> mmservice
Prometheus --> svcmonitor("Service Monitor")
end
```
## Big Bang Touchpoints
### UI
The Mattermost UI is the primary way of interacting with Mattermost. The UI is accessible via a web browser, desktop client, and mobile apps. It provides access to all Mattermost features as well as configuration of the instance via the settings (or "System Console").
### Logging
Mattermost provides access to the system logs via the "System Console" (under "Server Logs"). The UI also provides basic search functionality for these logs.
By default, logs are also shipped to Elastic via Fluentbit for advanced searching/indexing. Setting the filter `kubernetes.namespace_name` to `mattermost` provides an easy view of Mattermost-only logs.
Optional Enterprise Feature: Mattermost can make use of Elastic for improved performance with indexing of posts (which provides optimized search queries). For more details see the [dependencies section](#dependencies).
### Monitoring
Monitoring is available within Mattermost as a paid (E20) feature. If you have both `addons.mattermost.enterprise` and `monitoring` enabled within your Big Bang values, a ServiceMonitor will be deployed that automatically ships metrics to Prometheus.
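As a sketch, the relevant values combination looks like this (license string elided; see [Licensing](#licensing)):
```yaml
monitoring:
  enabled: true
addons:
  mattermost:
    enterprise:
      enabled: true
      license: "..."  # your full E20 license string
```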
### Health Checks
The Mattermost Operator ships by default with health checks against `/api/v4/system/ping` on port 8065 to verify that the system is healthy. Kubernetes handles cycling unhealthy pods, and all data persists in the database and file storage.
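To probe the endpoint manually, something like the following works (a sketch; the service name and namespace depend on how the Mattermost custom resource was deployed):
```bash
# assumption: the service is named "mattermost" in the "mattermost" namespace
kubectl port-forward -n mattermost svc/mattermost 8065:8065 &
# a healthy instance returns a JSON body containing "status":"OK"
curl http://localhost:8065/api/v4/system/ping
```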
## High Availability
**Important Note:** Mattermost by default handles scaling, and what it interprets as your needs, based on the number of users it is configured for. See this note from the Mattermost Operator:

> Size defines the size of the Mattermost. This is typically specified in number of users. This will override replica and resource requests/limits appropriately for the provided number of users. This is a write-only field - its value is erased after setting appropriate values of resources. Accepted values are: 100users, 1000users, 5000users, 10000users, and 250000users. If replicas and resource requests/limits are not specified, and Size is not provided the configuration for 5000users will be applied. Setting 'Replicas', 'Scheduling.Resources', 'FileStore.Replicas', 'FileStore.Resources', 'Database.Replicas', or 'Database.Resources' will override the values set by Size. Setting new Size will override previous values regardless if set by Size or manually.
To update the size (`users` value) for Mattermost, override the default of 100 in your values. Note that you do not need to include the word `users`; Big Bang handles this for you. As an example, to set 1000 users:
```yaml
addons:
  mattermost:
    values:
      users: 1000
```
To override Mattermost's replica handling and explicitly set replicas, you can use this workaround in your values:
```yaml
addons:
  mattermost:
    values:
      users: null
      replicaCount: 3
```
## Single Sign On (SSO)
SSO is built in for Mattermost and Big Bang uses the [Gitlab SSO integration](https://docs.mattermost.com/deployment/sso-gitlab.html) as its implementation since this option is available at the free tier. Mattermost also provides OAuth and SAML integration as paid features for its [enterprise tiers](#licensing) if you wish to use those.
If using Big Bang's SSO implementation, Keycloak is used behind the scenes to "spoof" the way the Gitlab interaction works for SSO. How to configure Keycloak to handle this is documented in the [Mattermost docs](https://repo1.dso.mil/platform-one/big-bang/apps/collaboration-tools/mattermost/-/blob/main/docs/keycloak.md).
See below for an example of the values to provide to Mattermost for SSO setup:
```yaml
addons:
  mattermost:
    sso:
      enabled: true
      client_id: platform1_a8604cc9-f5e9-4656-802d-d05624370245_bb8-mattermost
      client_secret: no-secret
      auth_endpoint: https://login.dso.mil/oauth/authorize
      token_endpoint: https://login.dso.mil/oauth/token
      user_api_endpoint: https://login.dso.mil/api/v4/user
```
## Licensing
Big Bang deploys the free version of Mattermost by default, but there are two additional tiers of paid licenses with additional features. Pricing for these licenses is typically based on the number of users. Full details can be viewed on [Mattermost's tier page](https://docs.mattermost.com/overview/product.html). If you want to trial the E20 features, you can request a trial via Mattermost's [request page](https://mattermost.com/trial/), or, after deploying, begin a 30-day trial via the System Console under the "Edition and License" page.
### Mattermost E10 Additional Features
- Active Directory/LDAP Single Sign-on
- OAuth 2.0 authentication for team creation, account creation, and user sign-in
- Encrypted push notifications with service level agreements (SLAs) via HPNS
- Advanced access control policy
- Next business day support via online ticketing system
- Scale to handle hundreds of users per team
### Mattermost E20 Additional Features
- Advanced SAML 2.0 authentication with Okta, OneLogin, and Active Directory Federation Services
- Active Directory/LDAP group sync
- OpenID Connect authentication for team creation, account creation, and user sign-in
- Compliance exports of message histories with oversight protection
- Custom retention policies for messages and files
- High Availability support with multi-node database deployment
- Horizontal scaling through cluster-based deployment
- Elasticsearch support for highly efficient database searches in a cluster environment
- Advanced performance monitoring
- Eligibility for Premier Support add-on
### License Values
Once you have obtained a license, it can be added to your Big Bang values to automatically set up your Mattermost instance with the license (replacing the `license:` value with your full license string):
```yaml
addons:
  mattermost:
    enterprise:
      enabled: true
      license: "ehjgjhh..."
```
## Storage
### Database Storage
Mattermost makes use of a database to store all chat information as well as persistent configuration for all of Mattermost. By default Big Bang deploys an in-cluster Postgresql instance for this purpose, but it is recommended to point to an external DB instance for HA. Currently Big Bang supports pointing to any external Postgres instance via values config. See the below example for values to point your database connection to an external instance:
```yaml
addons:
  mattermost:
    database:
      host: "mypostgreshost"
      port: "5432"
      username: "myusername"
      password: "mypassword"
      database: "mattermost"
      # OPTIONAL: Provide the postgres SSL mode
      ssl_mode: ""
```
### File Storage
Mattermost uses S3, Minio, or other S3-compatible storage for files. By default Big Bang deploys an in-cluster Minio instance for this purpose, but you have the option to point to an external Minio or S3 if desired. See the below example for the values to supply:
```yaml
addons:
  mattermost:
    objectStorage:
      endpoint: "s3.amazonaws.com"
      accessKey: "myAccessKey"
      accessSecret: "myAccessSecret"
      bucket: "myMattermostBucket"
```
## Dependencies
Mattermost requires only database storage, file storage, and the Mattermost operator. By default the operator is bundled in Big Bang, and the database/file storage can be provided in-cluster via Big Bang or externalized (see the [storage section](#storage) above). No additional external dependencies are required; everything can be done via Big Bang. There is an optional dependency on Elasticsearch to provide optimized searches rather than DB queries (E20 Enterprise license required); see the official [Mattermost doc](https://docs.mattermost.com/deployment/elasticsearch.html) for more details.
# OPA-Gatekeeper
## Overview
Gatekeeper is an auditing tool that allows administrators to see what resources are currently violating any given policy.
## Big Bang Touchpoints
```mermaid
graph LR
subgraph "OPA Gatekeeper"
collector("Collector") --> auditor{{Auditor}}
end
subgraph "Metrics"
auditor{{Auditor}} --> metrics("Metrics")
end
subgraph "Kubernetes API"
api("Kubernetes API") --> collector("Collector")
auditor{{Auditor}} --> api("Kubernetes API")
end
subgraph "kubectl"
ctl("kubectl") --> api("Kubernetes API")
end
```
### Storage
Gatekeeper does not store data; audit results are exposed via [metrics](https://open-policy-agent.github.io/gatekeeper/website/docs/metrics/).
### Database
Gatekeeper doesn't have a database.
### Istio Configuration
This package has no specific istio configuration.
## High Availability
High availability is accomplished by increasing the replicas in the values file of this helm chart.
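For example, in Big Bang values (a sketch; the key name should be verified against the gatekeeper chart version packaged with Big Bang):
```yaml
gatekeeper:
  values:
    # assumption: the upstream gatekeeper chart exposes a top-level "replicas" value
    replicas: 3
```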
## Single Sign On (SSO)
None. This service doesn't have a web interface.
## Licensing
[Apache License](https://github.com/open-policy-agent/gatekeeper/blob/master/LICENSE)
## Dependencies
None.
# Sonarqube
## Overview
[Sonarqube](https://www.sonarqube.org/) is an open-source platform for continuous inspection of code quality to perform automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities.
## Big Bang Touchpoints
```mermaid
graph TB
subgraph "Ingress"
ingressgateway
end
subgraph "Sonarqube"
ingressgateway --> sonarqube
end
subgraph "Metrics"
sonarqube --> prometheus
end
subgraph "Database"
sonarqube --- postgres
end
```
### Storage
Persistent storage can be enabled by setting the following values in the bigbang chart:
```yaml
addons:
  sonarqube:
    values:
      persistence:
        enabled: true
        annotations: {}
        storageClass:
        accessMode: ReadWriteOnce
        size: 10Gi
```
### Database
Sonarqube needs a postgres database to function. If one is not specified in the bigbang chart, Sonarqube will deploy one internally within its namespace.
```yaml
addons:
  sonarqube:
    database:
      host: ""
      port: 5432
      database: ""
      username: ""
      password: ""
```
### Istio Configuration
Istio is disabled in the sonarqube chart by default and can be enabled by setting the following values in the bigbang chart:
```yaml
hostname: bigbang.dev
istio:
  enabled: true
```
These values are passed into the sonarqube chart [here](https://repo1.dso.mil/platform-one/big-bang/apps/developer-tools/sonarqube/-/blob/main/chart/values.yaml#L358). This creates the VirtualService and maps it to the Istio gateway.
## High Availability
This can be accomplished by increasing the number of replicas in the deployment.
```yaml
addons:
  sonarqube:
    values:
      replicaCount: 2
```
## Single Sign On (SSO)
SSO integration can be configured by modifying the following settings in the bigbang chart.
```yaml
sso:
  oidc:
    host: login.dso.mil
    realm: baby-yoda
addons:
  sonarqube:
    enabled: true
    sso:
      enabled: true
      client_id: ""
      label: ""
      certificate: ""
      login: login
      name: name
      email: email
```
```mermaid
flowchart LR
S --> K[(Keycloak)]
subgraph external
K
end
ingress --> IP
subgraph "Sonarqube namespace"
subgraph "Sonarqube pod"
S["sonarqube"]
IP["istio proxy"] --> K
IP --> S
end
end
```
## Licensing
Sonarqube is released under the [Lesser GNU General Public License](https://en.wikipedia.org/wiki/Lesser_GNU_General_Public_License). The Big Bang chart uses the Community Edition of Sonarqube, but there are also paid, supported versions. Upgrades from Community Edition to the Developer or Enterprise editions are possible via the [upgrade path](https://docs.sonarqube.org/latest/setup/upgrading/). See their [Feature Comparison](https://www.sonarsource.com/plans-and-pricing/) for details.
## Dependencies
Node kernel mods: see the [Big Bang prerequisites](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/docs/d_prerequisites.md#sonarqube).
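These are typically Elasticsearch-related sysctl settings applied on each node; the values commonly documented as SonarQube minimums are shown below (a sketch; verify against the linked prerequisites):
```bash
# apply at runtime; persist via a file in /etc/sysctl.d/ to survive reboots
sudo sysctl -w vm.max_map_count=524288
sudo sysctl -w fs.file-max=131072
```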
# Twistlock
## Overview
[Twistlock Administration Guide](https://docs.paloaltonetworks.com/prisma/prisma-cloud/20-04/prisma-cloud-compute-edition-admin/welcome/getting_started.html)
## Contents
[Developer Guide](docs/developer-guide.md)
## Big Bang Touchpoints
```mermaid
graph LR
subgraph "Twistlock"
twistlockpods("Twistlock Pod(s)")
twistlockservice{{Twistlock Console}} --> twistlockpods("TwistlockPod(s)")
end
subgraph "Ingress"
ig(Ingress Gateway) --"App Port"--> twistlockservice
end
subgraph "Logging"
twistlockpods("Twistlock Pod(s)") --"Logs"--> fluent(Fluentbit) --> logging-ek-es-http
logging-ek-es-http{{Elastic Service<br />logging-ek-es-http}} --> elastic[(Elastic Storage)]
end
subgraph "Monitoring"
svcmonitor("Service Monitor") --"Metrics Port"--> twistlockservice
Prometheus --> svcmonitor("Service Monitor")
end
```
### UI
Twistlock Console serves as the user interface within Twistlock. The graphical user interface (GUI) lets you define policy, configure and control your Twistlock deployment, and view the overall health (from a security perspective) of your container environment.
### Install Defender
In Big Bang, the Twistlock Defender is installed manually. Follow this document to install the Defender as a DaemonSet:
https://repo1.dso.mil/platform-one/big-bang/apps/security-tools/twistlock/-/blob/main/README.md
### Storage
Twistlock Console requires access to persistent storage. Persistent storage values can be set/modified in the bigbang chart:
```yaml
console:
  persistence:
    size: 100Gi
    accessMode: ReadWriteOnce
```
### Database
N/A
### Istio Configuration
Istio is disabled in the twistlock chart by default and can be enabled by setting the following values in the bigbang chart:
```yaml
hostname: bigbang.dev
istio:
  enabled: true
```
NOTE: In Big Bang, setting `istio.enabled: true` for twistlock only exposes the Twistlock Console via a VirtualService. The Defender installation for twistlock in Big Bang is manual. By default, all traffic between the Twistlock Defender and the Console is TLS encrypted.
## Monitoring
Twistlock Prometheus metrics collection is implemented following the documentation: [Twistlock Prometheus Integration](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/audit/prometheus.html).
Monitoring is disabled in the twistlock chart by default and can be enabled by setting the following values in the bigbang chart:
```yaml
monitoring:
  enabled: true
```
## High Availability
Twistlock uses the orchestrator's built-in high availability capabilities.
## Single Sign On (SSO)
SSO can be configured for Twistlock manually using the documentation provided: [Twistlock SSO Integration](https://repo1.dso.mil/platform-one/big-bang/apps/security-tools/twistlock/-/blob/main/docs/KEYCLOAK.md)
## Licensing
Twistlock deployment requires a license to operate. Enter your license key in the Twistlock Console. \
[TwistLock License Documentation](https://docs.paloaltonetworks.com/prisma/prisma-cloud/20-04/prisma-cloud-compute-edition-admin/welcome/licensing.html)
### Health Checks
Twistlock provides API endpoints to monitor the health and availability of deployed components at `/api/v1/_ping`. \
Example command: `curl -u admin:Password 'https://<console-ip>:8083/api/v1/_ping'`
# K3D
To test airgapped Big Bang on k3d:
## Steps
- Launch an EC2 instance of size `c5.2xlarge` with at least 50GB of storage and SSH into it.
- Install the `k3d` and `docker` CLI tools.
- Download `images.tar.gz`, `repositories.tar.gz`, and `bigbang-version.tar.gz` from the BigBang release:
```bash
$ curl -O https://umbrella-bigbang-releases.s3-us-gov-west-1.amazonaws.com/umbrella/1.3.0/repositories.tar.gz
$ curl -O https://umbrella-bigbang-releases.s3-us-gov-west-1.amazonaws.com/umbrella/1.3.0/images.tar.gz
$ sudo apt install -y net-tools
```
- Follow the [Airgap Documentation](../README.md) to install the Git server and registry.
- Once the Git server and registry are up, set up the k3d mirroring configuration `registries.yaml`:
```yaml
mirrors:
  registry.dso.mil:
    endpoint:
      - https://host.k3d.internal:5443
  registry1.dso.mil:
    endpoint:
      - https://host.k3d.internal:5443
  docker.io:
    endpoint:
      - https://host.k3d.internal:5443
configs:
  host.k3d.internal:5443:
    tls:
      ca_file: "/etc/ssl/certs/registry1.pem"
```
- Launch k3d cluster
```bash
$ PRIVATEIP=$( curl http://169.254.169.254/latest/meta-data/local-ipv4 )
$ k3d cluster create --image "rancher/k3s:v1.20.5-rc1-k3s1" --api-port "33989" -s 1 -a 2 -v "${HOME}/registries.yaml:/etc/rancher/k3s/registries.yaml" -v /etc/machine-id:/etc/machine-id -v "${HOME}/certs/host.k3d.internal.public.pem:/etc/ssl/certs/registry1.pem" --k3s-server-arg "--disable=traefik" --k3s-server-arg "--disable=metrics-server" --k3s-server-arg "--tls-san=$PRIVATEIP" -p 80:80@loadbalancer -p 443:443@loadbalancer
```
- Block all egress with `iptables`, except traffic to the instance IP, before deploying bigbang by running [k3d_airgap.sh](./scripts/k3d_airgap.sh):
```bash
$ sudo ./k3d_airgap.sh
$ curl -k https://$PRIVATEIP:5443/v2/_catalog # should return a list of images
$ curl -k https://$PRIVATEIP:5443/v2/repositories/rancher/library-busybox/tags
```
To permanently save the iptables rules across reboots, see [this link](https://unix.stackexchange.com/questions/52376/why-do-iptables-rules-disappear-when-restarting-my-debian-system).
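On Debian/Ubuntu, one common approach is the `iptables-persistent` package:
```bash
sudo apt install -y iptables-persistent
# writes the current rules to /etc/iptables/rules.v4 and rules.v6
sudo netfilter-persistent save
```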
- Test that mirroring is working
```bash
$ curl -k -X GET https://$PRIVATEIP:5443/v2/rancher/local-path-provisioner/tags/list
$ kubectl run -i --tty test --image=registry1.dso.mil/rancher/local-path-provisioner:v0.0.19 --image-pull-policy='Always' --command sleep infinity -- sh
$ kubectl run test --image=registry1.dso.mil/rancher/library-busybox:1.31.1 --image-pull-policy='Always' --restart=Never --command sleep infinity
$ telnet default.kube-system.svc.cluster.local 443
$ kubectl describe po test
$ kubectl delete po test
```
- Test that the cluster cannot pull images from outside the private registry.
```bash
$ kubectl run test --image=nginx
$ kubectl describe po test # image pull should fail
$ kubectl delete po test
```
- Proceed to [bigbang deployment process](../README.md#installing-big-bang)
### Current Pipeline Outline and Notes
1. **.pre**
   1. **changelog**: does a diff to lint what has changed for the logs.
   2. **commits**: enforces the conventional commits rules.
   3. **pre vars**: pre-checks.
   4. **version**: gets various versions to build a complex version number for the build.
2. **smoke tests**
   1. **clean install**: doesn't really affect airgap; this sets up things like cluster names and such.
   2. **upgrade**: splits out testing and determines if there are breaking changes for testing of upgrades.
3. **network up**
   1. **airgap/network up**: creates a VPC and subnets for the cluster to be deployed in.
   2. **aws/airgap/package**: packages everything needed for the airgap install into a tar file. This leaves the repositories and images bundled in the Releases section for Big Bang (https://repo1.dso.mil/platform-one/big-bang/bigbang/-/releases).
4. **airgap up**
   1. **aws/airgap/utility up**: sets up proxies using Route 53 to essentially fake out where Repo 1 and Registry 1 exist for the purposes of using an airgap registry and git repo.
5. **cluster up**
   1. **airgap/rke2/cluster up**: stands up an RKE2 cluster for Big Bang in an airgapped network (uses terraform from ./gitlab-ci/jobs/rke2/dependencies/terraform/). Both this and the non-airgapped pipeline use the same image: registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/k3d-builder:0.0.1.
6. **bigbang up**
   1. **airgap/rke2/bigbang up**: stands up Big Bang.
7. **test**
   1. **airgap/rke2/bigbang test**: runs some basic tests to make sure that Big Bang is up and working.
8. **bigbang down**
   1. **airgap/rke2/bigbang down**: tears down the Big Bang instance.
9. **cluster down**
   1. **airgap/rke2/cluster down**
10. **airgap down**
    1. **aws/airgap/package delete**
    2. **aws/airgap/utility down**
11. **network down**
    1. **airgap/network down**
Terraform that creates a new VPC and two subnets. One subnet is public; the other is airgapped except for access to/from the public subnet. This allows a jump box or other resources to be easily moved in and out of the public subnet when setting up a development environment for the private subnet.
```terraform
# Locals
locals {
  az = format("%s%s", var.region_id, "a")
}

# Provider
provider "aws" {
  profile = var.profile_id
  region  = var.region_id
}

# VPC
resource "aws_vpc" "airgap_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  tags = {
    Name = "${var.cluster_id}-${random_string.random.result}-vpc"
  }
}

# Public subnet
resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.airgap_vpc.id
  cidr_block        = "10.0.0.0/24"
  availability_zone = local.az
  tags = {
    Name = "airgap-public-subnet"
  }
}

# IGW
resource "aws_internet_gateway" "airgap_vpc_igw" {
  vpc_id = aws_vpc.airgap_vpc.id
  tags = {
    Name = "airgap-igw"
  }
}

# Public route table
resource "aws_route_table" "airgap_vpc_region_public" {
  vpc_id = aws_vpc.airgap_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.airgap_vpc_igw.id
  }
  tags = {
    Name = "airgap-public-rt"
  }
}

# Public route table associations
resource "aws_route_table_association" "airgap_vpc_region_public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.airgap_vpc_region_public.id
}

# Private subnet
resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.airgap_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = local.az
  tags = {
    Name = "airgap-private-subnet"
  }
}

# Private routing table
resource "aws_route_table" "airgap_vpc_region_private" {
  vpc_id = aws_vpc.airgap_vpc.id
  tags = {
    Name = "airgap-private-rt"
  }
}

# Private routing table association
resource "aws_route_table_association" "airgap_vpc_region_private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.airgap_vpc_region_private.id
}

# Output
#output "connection_details" {
#  value = <<EOF
#  Use the following to connect to the bootstrap node and enjoy the ride...
#  ssh -J ${var.image_username}@${aws_instance.staging_instance.public_ip} ${var.image_username}@${aws_instance.bootstrap_instance.private_ip}
#  EOF
#}
#output "public_ip" {
#  description = "List of public IP addresses assigned to the instances, if applicable"
#  value       = "${aws_instance.staging_instance.*.public_ip}"
#}
#output "private_ip" {
#  description = "List of private IP addresses assigned to the instances, if applicable"
#  value       = "${aws_instance.bootstrap_instance.*.private_ip}"
#}
output "follow_up" {
  value = <<EOF
Nothing to see here but I have finished.
EOF
}

# Provider id based on Mesosphere account information
variable "profile_id" {
  description = ""
  # Default profile is default
  default = "default"
}

# AWS Region id
variable "region_id" {
  description = ""
  # Default region is us-gov-west-1
  default = "us-gov-west-1"
}

# Cluster UUID
resource "random_string" "random" {
  length  = 4
  special = false
  lower   = true
  upper   = false
}

# Cluster id
variable "cluster_id" {
  description = ""
  # Default cluster id is airgap-????
  default = "airgap-"
}

# ec2.tf
variable "image_id" {
  description = "Amazon AWS AMI"
  # default = "ami-06eeaf749779ed329"
  default = "ami-06eeaf749779ed329"
}

# ec2.tf
variable "image_username" {
  description = "Amazon AWS AMI username"
  default = "centos"
}

# ec2.tf
variable "ec2_instance_type" {
  description = "AWS EC2 Instance type"
  # Default instance type m5.xlarge
  default = "m5.xlarge"
}

# Ssh keyname
variable "ssh_key_name" {
  description = ""
  # Comment
  default = "airgap"
}

# Cluster owner
#variable "owner" {
#  description = "Owner of the cluster"
#  # Comment
#  default = "egoode"
#}
```
### Utility Scripts and Examples
This directory contains all of the utility scripts and examples.
```sh
#!/bin/sh
# Determine the interface used for the default route
PUBLICINTERFACE=$( route | grep '^default' | grep -o '[^ ]*$' )
# Drop all container traffic leaving via the public interface,
# but allow traffic to the k3s pod and service CIDRs
iptables -I DOCKER-USER -i ${PUBLICINTERFACE} -j DROP
iptables -I DOCKER-USER -d 10.42.0.0/16 -j RETURN
iptables -I DOCKER-USER -d 10.43.0.0/16 -j RETURN
iptables -A DOCKER-USER -j RETURN
```
```toml
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
plugin_dir = ""
disabled_plugins = []
required_plugins = []
oom_score = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[ttrpc]
  address = ""
  uid = 0
  gid = 0

[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""

[metrics]
  address = "{{ansible_host}}:1338"
  grpc_histogram = false

[cgroup]
  path = ""

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[plugins]
  [plugins."io.containerd.gc.v1.scheduler"]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"
  [plugins."io.containerd.grpc.v1.cri"]
    disable_tcp_service = true
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    stream_idle_timeout = "4h0m0s"
    enable_selinux = false
    sandbox_image = "prodmicroservicesregistry.azurecr.io/k8s.gcr.io/pause:3.1"
    stats_collect_period = 10
    systemd_cgroup = false
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    disable_cgroup = false
    disable_apparmor = false
    restrict_oom_score_adj = false
    max_concurrent_downloads = 3
    disable_proc_mount = false
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
      no_pivot = false
      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
        privileged_without_host_devices = false
      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
        privileged_without_host_devices = false
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v1"
          runtime_engine = ""
          runtime_root = ""
          privileged_without_host_devices = false
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime]
          runtime_type = "io.containerd.runc.v1"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime.options]
            BinaryName = "/usr/bin/nvidia-container-runtime"
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.elastic.co"]
          endpoint = ["{{ registry_endpoint }}"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["{{ registry_endpoint }}","https://registry-1.docker.io"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["{{ registry_endpoint }}"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["{{ registry_endpoint }}"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."{{registry_server}}".auth]
          username = "{{ registry_user }}"
          password = "{{ registry_password }}"
          auth = ""
          identitytoken = ""
    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"
  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"
  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"
  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false
  [plugins."io.containerd.runtime.v1.linux"]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    root_path = ""
    pool_name = ""
    base_image_size = ""
```
```yaml
---
- hosts: all
  gather_facts: no
  become: yes
  vars:
    registry_endpoint: http://localregistry/v2/
    registry_server: localregistry
    registry_user: someuser
  vars_prompt:
    - name: registry_password
      prompt: Enter registry password
      private: yes
  tasks:
    - name: Copy containerd config
      template:
        src: config.toml.tmpl
        dest: /etc/containerd/config.toml
        owner: root
        group: root
    - name: Restart containerd
      service:
        name: containerd
        state: restarted
```
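A hypothetical invocation of this playbook, assuming the nodes are listed in an `inventory.ini` file and the playbook above is saved as `containerd-registry.yml` (both filenames are placeholders):
```bash
# prompts for the registry password, then templates the config and restarts containerd on all hosts
ansible-playbook -i inventory.ini containerd-registry.yml
```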
```bash
#!/usr/bin/env bash
set -e

REGISTRY_PACKAGE_IMAGE="registry:package"
REGISTRY_PACKAGE_TGZ="${REGISTRY_PACKAGE_IMAGE}.tar.gz"

function purge_registry_containers {
  echo "Stopping local registry containers"
  docker stop registry &>/dev/null || true
  docker rm registry &>/dev/null || true
}

function purge_registry_images {
  echo "Removing local registry images"
  docker image rm ${REGISTRY_PACKAGE_IMAGE} &>/dev/null || true
}

purge_registry_containers
purge_registry_images

echo "Loading local registry package tgz"
docker load < ${REGISTRY_PACKAGE_TGZ}

echo "Creating package registry container"
docker run -d -p 5000:5000 --name registry ${REGISTRY_PACKAGE_IMAGE} >/dev/null

echo "Showing package container registry catalog"
sleep 1; curl -sX GET http://localhost:5000/v2/_catalog
```
```
docker.io/library/busybox:1.32
docker.io/rancher/coredns-coredns:1.8.0
docker.io/rancher/klipper-lb:v0.1.2
docker.io/rancher/library-busybox:1.31.1
```
```bash
#!/usr/bin/env bash
set -e

IMAGES_TXT="images.txt"
REGISTRY_IMAGE="registry:2"
REGISTRY_PACKAGE_IMAGE="registry:package"
REGISTRY_PACKAGE_TGZ="${REGISTRY_PACKAGE_IMAGE}.tar.gz"

# $1 = image_original - original full image url from existing repository
function get_image_sections {
  image_full=$(echo ${1} | sed -n 's/^.*\/\(.*:.*\)$/\1/p')
  image_base=$(echo ${image_full} | sed -n 's/\(^.*\):\(.*$\)/\1/p')
  image_tag=$(echo ${image_full} | sed -n 's/\(^.*\):\(.*$\)/\2/p')
  # [ -z "${image_full}" ] && { echo "Error: Unable to set image full variable"; exit 1; }
  # [ -z "${image_base}" ] && { echo "Error: Unable to set image base variable"; exit 1; }
  # [ -z "${image_tag}" ] && { echo "Error: Unable to set image tag variable"; exit 1; }
}

# $1 = image_base - image name only (nginx)
# $2 = image_tag - image tag only (latest)
function verify_catalog_image {
  echo "Verifying \"${1}\" exists in registry catalog with tag \"${2}\""
  reg_tag=$(curl -sX GET http://localhost:5000/v2/${1}/tags/list | jq -r '.tags | .[0]')
  if [ "${2}" != "${reg_tag}" ]; then
    echo "Error: Unable to verify ${1} exists in catalog"
    exit 1
  fi
}

function purge_registry_containers {
  echo "Stopping local registry containers"
  docker stop registry &>/dev/null || true
  docker rm registry &>/dev/null || true
}

function purge_registry_images {
  echo "Removing local registry images"
  docker image rm ${REGISTRY_IMAGE} &>/dev/null || true
  docker image rm ${REGISTRY_PACKAGE_IMAGE} &>/dev/null || true
}

echo "Removing local registry package tgz"
rm -rf ${REGISTRY_PACKAGE_TGZ}
purge_registry_containers
purge_registry_images

echo "Creating initial registry container"
docker run -d -p 5000:5000 --name registry ${REGISTRY_IMAGE} &>/dev/null

echo "--"
for image_original in $(sed '/^$/d' ${IMAGES_TXT}); do
  get_image_sections ${image_original}
  echo "Referencing \"${image_original}\""
  echo "Uploading to registry as \"${image_base}\" with tag \"${image_tag}\""
  docker pull ${image_original} >/dev/null
  docker tag ${image_original} localhost:5000/${image_full} >/dev/null
  docker push localhost:5000/${image_full} >/dev/null
  verify_catalog_image ${image_base} ${image_tag}
  echo "--"
done

# TODO - is a pass-through proxy needed? - https://docs.docker.com/registry/recipes/mirror/
echo "Creating persistent package inside registry container"
docker cp registry-config.yml registry:/etc/docker/registry/config.yml >/dev/null
docker exec -it registry cp -r /var/lib/registry/ /var/lib/registry-package >/dev/null

echo "Committing initial registry image to package registry image"
docker commit registry ${REGISTRY_PACKAGE_IMAGE} >/dev/null
purge_registry_containers

echo "Creating package registry container"
docker run -d -p 5000:5000 --name registry ${REGISTRY_PACKAGE_IMAGE} &>/dev/null

echo "--"
for image_original in $(sed '/^$/d' ${IMAGES_TXT}); do
  get_image_sections ${image_original}
  verify_catalog_image ${image_base} ${image_tag}
done
echo "--"
purge_registry_containers

echo "Saving local registry package tgz"
docker save ${REGISTRY_PACKAGE_IMAGE} | gzip --stdout > ${REGISTRY_PACKAGE_TGZ}
purge_registry_images
```
```bash
#!/usr/bin/env bash
set -x

GITEA_IMAGE="gitea/gitea:1.13.2"
GITEA_HTTP_METHOD="http"
GITEA_URL="localhost:3000"
GITEA_USERNAME="admin"
GITEA_PASSWORD="password"

# Create a test repository via the Gitea API
curl -X POST "${GITEA_HTTP_METHOD}://${GITEA_USERNAME}:${GITEA_PASSWORD}@${GITEA_URL}/api/v1/user/repos" -H "accept: application/json" -H "content-type: application/json" -d \
"{\"name\":\"test-repo\", \"description\": \"Sample description\" }"
```
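To verify the repository was created, the same credentials can list the user's repositories through Gitea's standard API (using the defaults from the script above; requires `jq`):
```bash
# expect "test-repo" in the returned list
curl -s "http://admin:password@localhost:3000/api/v1/user/repos" | jq '.[].name'
```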