UNCLASSIFIED - NO CUI

Commit 7caede0b authored by Jason Krause

Merge branch 'doc-reorg' into 'master'

Docs reorg

Closes #416

See merge request platform-one/big-bang/bigbang!464
parents c27fb7e2 9a876f1f
Pipeline #259744 passed
Showing 878 additions and 441 deletions
# BigBang Docs
## What is BigBang?
* BigBang is a Helm Chart that is used to deploy a DevSecOps Platform on a Kubernetes Cluster. The DevSecOps Platform is composed of application packages which are bundled as helm charts that leverage IronBank hardened container images.
* The BigBang Helm Chart deploys gitrepository and helmrelease Custom Resources to a Kubernetes Cluster that's running the Flux GitOps Operator; these can be seen using `kubectl get gitrepository,helmrelease -n=bigbang`. Flux then installs the helm charts defined by the Custom Resources into the cluster.
* The BigBang Helm Chart has a values.yaml file that does two main things:
  1. Defines which DevSecOps Platform packages/helm charts will be deployed.
  2. Defines what input parameters will be passed through to the chosen helm charts.
* You can see what applications are part of the platform by checking the following resources:
* [../Packages.md](../Packages.md) lists the packages and organizes them in categories.
* [Release Notes](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/releases) lists the packages and their versions.
* For a code-based source of truth, you can check [BigBang's default values.yaml](../chart/values.yaml) and search (`[CTRL] + [F]`) for "repo:" to quickly iterate through the list of applications supported by the BigBang team.
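For illustration, the pattern in values.yaml looks roughly like the following trimmed sketch (the package names, repo URL, and tag here are illustrative assumptions; the authoritative keys and defaults live in [BigBang's default values.yaml](../chart/values.yaml)):

```yaml
# Hypothetical excerpt -- consult chart/values.yaml for the real keys and defaults
gatekeeper:
  enabled: true

istio:
  enabled: true
  git:
    repo: https://repo1.dso.mil/platform-one/big-bang/apps/core/istio-controlplane.git
    tag: "1.9.5-bb.0"   # illustrative tag
```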
## How do I deploy BigBang?
**Note:** The Deployment Process and Pre-Requisites will vary depending on the deployment scenario. The [Quick Start Demo Deployment](guides/deployment_scenarios/quickstart.md), for example, allows some steps to be skipped thanks to a mixture of automation and generically reusable demo configuration that satisfies the pre-requisites.
The following is a general overview of the process; the [deployment guides](guides/deployment_scenarios) go into more detail.
1. Satisfy Pre-Requisites:
* Provision a Kubernetes Cluster according to [best practices](guides/prerequisites/kubernetes_preconfiguration.md#best-practices).
* Ensure the Cluster has network connectivity to a Git Repo you control.
* Install Flux GitOps Operator on the Cluster.
* Configure Flux, the Cluster, and the Git Repo for GitOps Deployments that support deploying encrypted values.
* Commit BigBang's values.yaml and encrypted secrets, configured to match the desired state of the cluster (including HTTPS Certs and DNS names), to the Git Repo.
2. `kubectl apply --filename bigbang.yaml`
* [bigbang.yaml](https://repo1.dso.mil/platform-one/big-bang/customers/template/-/blob/main/dev/bigbang.yaml) will trigger a chain reaction of GitOps Custom Resources that deploy other GitOps CRs, eventually deploying an instance of a DevSecOps Platform that's declaratively defined in your Git Repo.
* To be specific, the chain reaction pattern we consider best practice is to have:
* bigbang.yaml deploys a gitrepository and kustomization Custom Resource
* Flux reads the declarative configuration stored in the kustomization CR to do a GitOps equivalent of `kustomize build . | kubectl apply --filename -`, deploying a helmrelease CR of the BigBang Helm Chart that references input values.yaml files defined in the Git Repo.
* Flux reads the declarative configuration stored in the helmrelease CR to do a GitOps equivalent of `helm upgrade --install bigbang /chart --namespace=bigbang --values encrypted_values.yaml --values values.yaml --create-namespace=true`. The BigBang Helm Chart then deploys more CRs that Flux uses to deploy the packages specified in BigBang's values.yaml.
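As a sketch of the first link in that chain, the two Custom Resources deployed by bigbang.yaml generally take a shape like this (names, the URL, and intervals below are illustrative assumptions, not taken from the template repo):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: environment-repo   # illustrative name
  namespace: bigbang
spec:
  interval: 1m
  url: https://your-git-server/your-bigbang-config.git   # illustrative URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: environment   # illustrative name
  namespace: bigbang
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: environment-repo
  path: ./dev   # directory in the repo containing kustomization.yaml
```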
## New User Orientation
* New users are encouraged to read through the Useful Background Contextual Information present in the [understanding_bigbang folder](./understanding_bigbang)
# Appendix A - Big Bang FAQs
To test Airgap BigBang on k3d:
## Steps
- Launch an EC2 instance of size `c5.2xlarge` with at least 50GB storage and SSH into the instance.
- Install `k3d` and `docker` cli tools
- Download `images.tar.gz`, `repositories.tar.gz` and `bigbang-version.tar.gz` from the BigBang release.
```shell
curl -O https://umbrella-bigbang-releases.s3-us-gov-west-1.amazonaws.com/umbrella/1.3.0/repositories.tar.gz
curl -O https://umbrella-bigbang-releases.s3-us-gov-west-1.amazonaws.com/umbrella/1.3.0/images.tar.gz
sudo apt install -y net-tools
```
- Follow [Airgap Documentation](../README.md) to install the Git server and Registry.
- Once the Git Server and Registry are up, set up the k3d mirroring configuration `registries.yaml`
```yaml
      ca_file: "/etc/ssl/certs/registry1.pem"
```
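The fragment above shows only the TLS CA setting; a full k3s-style `registries.yaml` generally looks like the sketch below (the mirrored registry name and endpoint here are assumptions for illustration):

```yaml
mirrors:
  registry1.dso.mil:                       # registry to mirror (assumption)
    endpoint:
      - https://host.k3d.internal:5443     # private registry endpoint (assumption)
configs:
  registry1.dso.mil:
    tls:
      ca_file: "/etc/ssl/certs/registry1.pem"
```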
- Launch k3d cluster
```shell
PRIVATEIP=$( curl http://169.254.169.254/latest/meta-data/local-ipv4 )
k3d cluster create --image "rancher/k3s:v1.20.5-rc1-k3s1" --api-port "33989" -s 1 -a 2 -v "${HOME}/registries.yaml:/etc/rancher/k3s/registries.yaml" -v /etc/machine-id:/etc/machine-id -v "${HOME}/certs/host.k3d.internal.public.pem:/etc/ssl/certs/registry1.pem" --k3s-server-arg "--disable=traefik" --k3s-server-arg "--disable=metrics-server" --k3s-server-arg "--tls-san=$PRIVATEIP" -p 80:80@loadbalancer -p 443:443@loadbalancer
```
- Block all egress with `iptables`, except traffic going to the instance IP, before deploying BigBang by running [k3d_airgap.sh](./scripts/k3d_airgap.sh)
```shell
sudo ./k3d_airgap.sh
curl https://$PRIVATEIP:5443/v2/_catalog -k # Should return a list of images
curl https://$PRIVATEIP:5443/v2/repositories/rancher/library-busybox/tags
```
To permanently save the iptables rules across reboots, check out [this link](https://unix.stackexchange.com/questions/52376/why-do-iptables-rules-disappear-when-restarting-my-debian-system)
- Test that mirroring is working
```shell
curl -k -X GET https://$PRIVATEIP:5443/v2/rancher/local-path-provisioner/tags/list
kubectl run -i --tty test --image=registry1.dso.mil/rancher/local-path-provisioner:v0.0.19 --image-pull-policy='Always' --command sleep infinity -- sh
kubectl run test --image=registry1.dso.mil/rancher/library-busybox:1.31.1 --image-pull-policy='Always' --restart=Never --command sleep infinity
telnet default.kube-system.svc.cluster.local 443
kubectl describe po test
kubectl delete po test
```
- Test that the cluster cannot pull from outside the private registry.
```shell
kubectl run test --image=nginx
kubectl describe po test # Should fail
kubectl delete po test
```
- Proceed to the [bigbang deployment process](../README.md#installing-big-bang)
## Usage
Unpack the images archive:
```shell
tar -xvf images.tar.gz
```
Start a local registry based on the images we just unpacked.
```shell
cd ./var/lib/registry
docker load < registry.tar
docker run -p 25000:5000 -v $(pwd):/var/lib/registry registry:2
# Verify the registry mounted correctly
curl http://localhost:25000/v2/_catalog -k
# A list of Big Bang images should be displayed; if not, check the volume mount of the registry
```
Configure `./synker.yaml`
Example:
```yaml
destination:
  registry:
    # Hostname of the destination registry to push to
    # Port of the destination registry to push to
    port: 5000
```
If using Harbor, reference the project name.
```yaml
destination:
  registry:
    # Hostname of the destination registry to push to
    # Port of the destination registry to push to
    port: 443
```
If your destination repo requires credentials, add them to `~/.docker/config.json`
```json
{
  "auths": {
    "registry.dso.mil": {
    }
  }
}
```
**WARNING:** Verify your credentials with `docker login` before running synker. If your environment locks out accounts after failed login attempts, synker could trigger a lockout if your credentials are incorrect.
```shell
./synker push
```
Verify the images were pushed to your registry.
# Appendix D - Big Bang Prerequisites
BigBang is built to work on all the major kubernetes distributions. However, since distributions differ and may come configured out of the box with settings incompatible with BigBang, this document serves as a checklist of prerequisites for any distribution that may need it.
> Clusters are sorted _alphabetically_
## All Clusters
The following apply as prerequisites for all clusters
### Storage
BigBang assumes the cluster you're deploying to supports [dynamic volume provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/), which ultimately puts the burden on the cluster distro provider to ensure appropriate setup. In many cases, this is as simple as using the in-tree CSI drivers. Please refer to each supported distro's documentation for further details.
In the future, BigBang plans to support the provisioning and management of a cloud-agnostic container-attached storage solution, but until then, on-prem deployments require more involved setup, typically supported through the vendor.
#### Default `StorageClass`
A default `StorageClass` capable of resolving `ReadWriteOnce` `PersistentVolumeClaims` must exist. An example suitable for basic production workloads on AWS, supporting a highly available cluster across multiple availability zones, is provided below:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: WaitForFirstConsumer
```
It is up to the user to ensure the default storage class' performance is suitable for their workloads, or to specify different `StorageClasses` when necessary.
### `selinux`
Additional pre-requisites are needed for istio on systems with selinux set to `Enforcing`.
By default, BigBang will deploy istio configured to use `istio-init` (read more [here](https://istio.io/latest/docs/setup/additional-setup/cni/)). To ensure istio can properly initialize Envoy sidecars without container privileged escalation permissions, several system kernel modules must be pre-loaded before installing BigBang:
```bash
modprobe xt_REDIRECT
modprobe xt_owner
modprobe xt_statistic
```
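`modprobe` only loads the modules until the next reboot; on systemd-based systems they can be persisted with a modules-load drop-in file (the filename below is a conventional choice, not from this guide):

```
# /etc/modules-load.d/istio-iptables.conf
xt_REDIRECT
xt_owner
xt_statistic
```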
### Load Balancing
BigBang by default assumes the cluster you're deploying to supports dynamic load balancer provisioning, specifically during the creation of istio and its ingress gateways, which map to a "physical" load balancer usually provisioned by the cloud provider.
In almost all cases, the distro provides this capability through in-tree cloud providers appropriately configured through the IAC on repo1. For on-prem environments, please consult the vendor's support for the recommended way of handling automatic load balancing configuration.
If automatic load balancer provisioning is not supported or not desired, the default BigBang configuration can be modified to expose istio's ingressgateway through `NodePorts` that can be mapped manually (or via separate IAC) to a pre-existing load balancer.
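A hypothetical sketch of that override is shown below; the exact keys depend on the BigBang chart version, so verify them against chart/values.yaml before using:

```yaml
# Hypothetical values override -- confirm key names against chart/values.yaml
istio:
  ingressGateways:
    public-ingressgateway:
      type: NodePort   # instead of a dynamically provisioned LoadBalancer
```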
### Elasticsearch
Elasticsearch deployed by BigBang uses memory mapping by default. In most cases, the default address space is too low and must be configured.
To ensure containers with unnecessary privilege escalation are not used, these kernel settings should be applied before BigBang is deployed:
```bash
sysctl -w vm.max_map_count=262144
```
More information can be found in elasticsearch's documentation [here](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html#k8s-virtual-memory).
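`sysctl -w` does not survive a reboot; on most Linux distributions the setting can be persisted with a sysctl drop-in file (the path and filename below are a conventional choice, not from this guide):

```
# /etc/sysctl.d/99-elasticsearch.conf
vm.max_map_count = 262144
```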
## OpenShift
1) When deploying BigBang, set the OpenShift flag to true.
```yaml
# Inside a values.yaml being passed to the command installing bigbang
openshift: true
```
Or inline with the helm command:
```shell
helm install bigbang chart --set openshift=true
```
2) Patch the istio-cni daemonset to allow containers to run privileged (AFTER istio-cni daemonset exists).
Note: applying this setting via modifications to the helm chart was attempted without success; patching the live daemonset worked.
```shell
kubectl get daemonset istio-cni-node -n kube-system -o json | jq '.spec.template.spec.containers[] += {"securityContext":{"privileged":true}}' | kubectl replace -f -
```
3) Modify the OpenShift cluster(s) with the following scripts based on https://istio.io/v1.7/docs/setup/platform-setup/openshift/
```shell
# Istio Openshift configurations Post Install
oc -n istio-system expose svc/istio-ingressgateway --port=http2
oc adm policy add-scc-to-user privileged -z istio-cni -n kube-system
oc adm policy add-scc-to-group privileged system:serviceaccounts:logging
oc adm policy add-scc-to-group anyuid system:serviceaccounts:logging
oc adm policy add-scc-to-group privileged system:serviceaccounts:monitoring
oc adm policy add-scc-to-group anyuid system:serviceaccounts:monitoring
cat <<\EOF >> NetworkAttachmentDefinition.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: istio-cni
EOF
oc -n logging create -f NetworkAttachmentDefinition.yaml
oc -n monitoring create -f NetworkAttachmentDefinition.yaml
```
## RKE2
Since BigBang makes several assumptions about volume and load balancer provisioning by default, it's vital that the rke2 cluster be properly configured. The easiest way to do this is through the in-tree cloud providers, which can be configured through the `rke2` configuration file, for example:
```yaml
# aws, azure, gcp, etc...
cloud-provider-name: aws
# additionally, set below configuration for private AWS endpoints, or custom regions such as (T)C2S (us-iso-east-1, us-iso-b-east-1)
cloud-provider-config: ...
```
For example, if using the aws terraform modules provided [on repo1](https://repo1.dso.mil/platform-one/distros/rancher-federal/rke2/rke2-aws-terraform), setting the variable `enable_ccm = true` will ensure all the necessary resource tags are created.
In the absence of an in-tree cloud provider (such as on-prem), the requirements can be met through the instructions outlined in the [storage](#storage) and [load balancing](#load-balancing) prerequisites section above.
### OPA Gatekeeper
A core component of BigBang is OPA Gatekeeper, which operates as an elevated [validating admission controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to audit and enforce various [constraints](https://github.com/open-policy-agent/frameworks/tree/master/constraint) on all requests sent to the kubernetes api server.
By default, `rke2` will deploy with [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) that disable these types of deployments. However, since we trust BigBang (and OPA Gatekeeper), we can patch the default `rke2` PSPs to allow OPA.
Given a freshly installed `rke2` cluster, run the following commands _once_ before attempting to install BigBang.
```bash
kubectl patch psp system-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}'
kubectl patch psp global-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}'
kubectl patch psp global-restricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}'
```
### Istio
By default, BigBang will use `istio-init`, and `rke2` clusters will come with `selinux` in `Enforcing` mode; please see the [`istio-init`](#istio-pre-requisites-on-selinux-enforcing-systems) notes above for pre-requisites and warnings.
### Sonarqube
Sonarqube requires the following kernel configurations set at the node level:
```bash
sysctl -w vm.max_map_count=524288
sysctl -w fs.file-max=131072
ulimit -n 131072
ulimit -u 8192
```
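As with the Elasticsearch setting above, the `sysctl` values can be persisted across reboots via a sysctl drop-in, and the ulimits via `/etc/security/limits.d` (filenames below are conventional choices, not from this guide):

```
# /etc/sysctl.d/99-sonarqube.conf
vm.max_map_count = 524288
fs.file-max = 131072

# /etc/security/limits.d/99-sonarqube.conf
*   soft   nofile   131072
*   hard   nofile   131072
*   soft   nproc    8192
*   hard   nproc    8192
```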
Another option is running an init container to modify the kernel values on the host (this requires a busybox container run as root):
```yaml
addons:
  sonarqube:
    values:
      initSysctl:
        enabled: true
```
**This is not the recommended solution as it requires running an init container as privileged.**
Big Bang follows a [GitOps](https://www.weave.works/blog/what-is-gitops-really) approach.
1. Before pushing changes to Git, validate all configuration is syntactically correct.
```shell
# If everything is successful, YAML should be output
kustomize build ./dev
```
1. If you have not already done so, push configuration changes to Git
```shell
git push
```
1. Validate the Kubernetes context is correct
```shell
# This should match the environment you intend to deploy
kubectl config current-context
```
1. Deploy the Big Bang manifest to the cluster
```shell
kubectl apply -f dev.yaml
```
The following commands will help you monitor the progress of the Big Bang deployment.
1. Verify Flux is running
```shell
kubectl get deploy -n flux-system
# All resources should be in the 'Ready' state
```
1. Verify the environment was pulled from the Git repo
```shell
kubectl get gitrepository -A
# `environment-repo`: STATUS should be True
```
1. Verify the environment Kustomization properly worked
```shell
kubectl get kustomizations -A
# `environment`: READY should be True
```
1. Verify the ConfigMaps were deployed
```shell
kubectl get configmap -l kustomize.toolkit.fluxcd.io/namespace -A
# 'common' and 'environment' should exist
```
1. Verify the Secrets were deployed
```shell
kubectl get secrets -l kustomize.toolkit.fluxcd.io/namespace -A
# 'common-bb' and 'environment-bb' should exist
```
1. Verify the Big Bang Helm Chart was pulled
```shell
kubectl get gitrepositories -A
# 'bigbang' READY should be True
```
1. Verify the Big Bang Helm Chart was deployed
```shell
kubectl get hr -A
# 'bigbang' READY should be True
```
1. Verify Big Bang package Helm charts are pulled
```shell
kubectl get gitrepository -A
# The Git repository holding the Helm charts for each package can be seen in the URL column.
```
1. Verify the packages get deployed
```shell
# Use watch since it takes a long time to deploy
watch kubectl get hr,deployments,po -A
```
Package is the term we use for an application that has been prepared to be deployed with the BigBang helm chart.
1. There are two ways to start a new Package.
A. If there is no upstream helm chart we create a helm chart from scratch. Here is a T3 video that demonstrates creating a new helm chart. Create a directory called "chart" in your repo, change to the chart directory, and scaffold a new chart in the chart directory
```shell
# Scaffold new helm chart
mkdir chart
cd chart
helm create name-of-your-application
```
B. If there is an existing upstream chart we will use it and modify it. Essentially we create a "fork" of the upstream code. Use kpt to import the helm chart code into your repository. Note that kpt is not used to keep the Package code in sync with the upstream chart. It is a one time pull just to document where the upstream chart code came from. Kpt will generate a Kptfile that has the details. Do not manually create the "chart" directory. The kpt command will create it. Here is an example from when Gitlab Package was created. It is a good idea to push a commit "initial upstream chart with no changes" so you can refer back to the original code while you are developing.
```shell
kpt pkg get https://gitlab.com/gitlab-org/charts/gitlab.git@v4.8.0 chart
```
1. Run a helm dependency update that will download any external sub-chart dependencies. Commit any *.tgz files that are downloaded into the "charts" directory. The reason for doing this is that BigBang Packages must be able to be installed in an air-gap without any internet connectivity.
```shell
helm dependency update
```
1. In the values.yaml replace public upstream images with IronBank hardened images. The image version should be compatible with the chart version. Here is a command to identify the images that need to be changed.
```shell
# List images
helm template <releasename> ./chart -n <namespace> -f chart/values.yaml | grep image:
```
1. Add CI pipeline test values to the Package. A Package should be able to be deployed by itself, independently from the BigBang chart. The Package pipeline takes advantage of this to run a Package pipeline test. Create a tests directory and a test yaml file at "tests/test-values.yaml". Set any values that are necessary for this test to pass. The pipeline automatically creates an image pull secret "private-registry-mil". All you need to do is reference that secret in your test values. You can view the pipeline status from the Repo1 console. Keep iterating on your Package code and the test code until the pipeline passes. Refer to the test-values.yaml from other Packages to get started. The repo structure must match what the CI pipeline code expects.
```
|-- .gitlab-ci.yml
|-- chart
|   |-- Chart.yml
```
1. Add the following markdown files to complete the Package. Reference other Packages for examples of how to create them.
```shell
CHANGELOG.md    < standard history of changes made
CODEOWNERS      < list of the code maintainers. Minimum of two people from separate organizations
CONTRIBUTING.md < instructions for how to contribute to the project
...
```
1. Development Testing Cycle: Test your Package chart by deploying with helm. Test frequently so you don't pile up multiple layers of errors. The goal is for Packages to be deployable independently of the bigbang chart. Most upstream helm charts come with internal services like a database that can be toggled on or off. If available, use them for testing and CI pipelines. In some cases this is not an option. You can manually deploy required in-cluster services in order to complete your development testing.
Here is an example of an in-cluster postgres database:
```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgres bitnami/postgresql -n postgres --create-namespace --set postgresqlPostgresPassword=postgres --set postgresqlPassword=postgres
# Test it
kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=postgres" --command -- psql --host postgres-postgresql-headless.postgres.svc.cluster.local -U postgres -d postgres -p 5432
# Postgres commands
\l   < list databases
\du  < list users
\q   < quit
```
Here is an example of an in-cluster object storage service using MinIO (API compatible with AWS S3 storage):
```shell
helm repo add minio https://helm.min.io/
helm install minio minio/minio --set accessKey=myaccesskey --set secretKey=mysecretkey -n minio --create-namespace
# Test and configure it
...
```
Here are the dev test steps you can iterate:
```shell
# Test that the helm chart templates successfully and examine the output to ensure expected results
helm template <releasename> ./chart -n <namespace> -f chart/values.yaml
# Deploy with helm
helm upgrade -i <releasename> ./chart -n <namespace> --create-namespace -f chart/values.yaml
# Conduct testing
# Tear down
helm delete <releasename> -n <namespace>
# Manually delete the namespace to ensure that everything is gone
kubectl delete ns <namespace>
```
1. Wait to create a git tag release until integration testing with the BigBang chart is completed. You will very likely discover more Package changes that are needed during BigBang integration. When you are confident that the Package code is complete, squash commits and rebase your development branch with the "main" branch.
```shell
git rebase origin/main
git reset $(git merge-base origin/main $(git rev-parse --abbrev-ref HEAD))
git add -A
...
```
Edit the kubeconfig on your workstation. Replace the server host ```0.0.0.0``` with the public IP of the EC2 instance. Test cluster access from your local workstation.
```shell
kubectl cluster-info
kubectl get nodes
```
# Integrate a Package with BigBang helm chart
1. Make a branch from the BigBang chart repository master branch. You can automatically create a branch from the Repo1 Gitlab issue. Or, in some cases you might manually create the branch. You should name the branch with your issue number. If your issue number is 9999 then your branch name can be "9999-my-description". It is best practice to make branch names short and simple.
1. Create a directory for your package at `chart/templates/<your-package-name>`
1. Inside this folder will be 4 helm template files. You can copy one of the other package folders and tweak the code for your package. Gitlab is a good example to reference because it is one of the more complicated Packages. Note that the Istio VirtualService comes from the Package and is not created in the BigBang chart. The purpose of these helm template files is to create an easy-to-use spec for deploying supported applications. Reasonable and safe defaults are provided and any needed secrets are auto-created. We accept the trade-off of easy deployment for complicated template code. More details are in the following steps.
```shell
gitrepository.yaml # Flux GitRepository. Configured by BigBang chart values.
helmrelease.yaml   # Flux HelmRelease. Configured by BigBang chart values.
namespace.yaml     # Contains the namespace and any needed secrets
values.yaml        # Implements all the BigBang customizations of the package and passthrough for values.
```
1. More details about values.yaml: Code reasonable and safe defaults, but prioritize any user-defined passthrough values wherever this makes sense. Avoid duplicating tags that are provided in the upstream chart values. Instead, code reasonable defaults in the values.yaml template. The following is an example from Gitlab that handles SSO config. The code uses Package chart passthrough values if the user has entered them but otherwise defaults to the BigBang chart values or the Helm default values. Notice that the secret is not handled this way. The assumption is that if the user has enabled the BigBang SSO feature the secret will be auto-created. In this case the user should not be overriding the secret. If the user wants to create their own secret they should not be enabling the BigBang SSO feature.
Note that helm does not handle missing parent tags in the yaml tree. The 'if' statement and 'default' method throw 'nil' errors when parent tags are missing. The work-around is to inspect each level of the tree and assign an empty 'dict' if the value does not exist. Then you will be able to use 'hasKey' in your 'if' statements as shown below in this example from Gitlab. Having described all this, you should understand that coding conditional values is optional. The passthrough values will take priority regardless, but the overridden values will not show up in the deployed flux HelmRelease object if you don't code the conditional values; the value overrides will be obscured in the Package values secret. The only way to confirm that the overrides have been applied is to run `helm get values <releasename> -n bigbang` against the deployed helm release. When the passthrough values show up in the HelmRelease object the Package configuration is much easier to see and verify. Use your own judgement on when to code conditional values.
```yaml
global:
  ...
{{- end }}
```
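As a minimal, illustrative sketch of that work-around (the tag names below are hypothetical, not Gitlab's actual values tree):

```yaml
{{- /* Guard each level so 'hasKey' never dereferences a missing parent */ -}}
{{- $global := .Values.global | default (dict) }}
{{- $oauth := $global.oauth | default (dict) }}
{{- if hasKey $oauth "provider" }}
oauth:
  provider: {{ $oauth.provider }}
{{- end }}
```

With every level defaulted to an empty dict, the template renders cleanly whether or not the user supplied `global.oauth`.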
1. More details about namespace.yaml: This template is where the code for secrets goes. Typically you will see secrets for imagePullSecret, sso, and database. These secrets are a BigBang chart enhancement. They are created conditionally based on what the user enables in the config.
1. Edit the chart/templates/values.yaml. Add your Package to the list of Packages. Just copy one of the others and change the name. This supports adding chart values from a secret. Pay attention to whether this is a core Package or an add-on package; the toYaml values are different for add-ons. This template allows a Package to add chart values that need to be encrypted in a secret.
1. Edit the `chart/values.yaml`. Add your Package to the bottom of the core section if it is a core package, or the addons section if it is an add-on. You can copy from one of the other packages and modify appropriately. Some possible tags underneath your package are [enabled, git, sso, database, objectstorage]. Avoid duplicating value tags from the upstream chart in the BigBang chart. The goal is not to cover every edge case. Instead, code reasonable defaults in the helmrelease template and allow the customer to override values in `addons.<packageName>.values`
```yaml
addons:
  mypackage:
    ...
    values: {}
```
1. Edit tests/ci/k3d/values.yaml. These are the settings that the CI pipeline uses to run a deployment test. Set your Package to be enabled and add any other necessary values. Where possible, reduce the number of replicas to a minimum to reduce strain on the CI infrastructure. When you commit your code the pipeline will run. You can view the pipeline in the Repo1 Gitlab console. Fix any errors in the pipeline output. The pipeline automatically runs a "smoke" test: it deploys bigbang on a k3d cluster using the test values file.
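For orientation, a minimal addition to tests/ci/k3d/values.yaml might look like the following (the package name and the replica tag are hypothetical; check your package's upstream chart for the real value names):

```yaml
addons:
  mypackage:
    enabled: true
    values:
      replicaCount: 1   # keep CI resource usage minimal
```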
1. Add your package's name to the ORDERED_HELMRELEASES list in scripts/deploy/02_wait_for_helmreleases.sh.
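The edit can be sketched as follows (the existing list contents shown here are placeholders, not the script's actual list; the append pattern is the point):

```shell
# In scripts/deploy/02_wait_for_helmreleases.sh, ORDERED_HELMRELEASES is a
# space-separated list of HelmRelease names the deploy script waits on.
# Append your package's name at the end (list contents are illustrative).
ORDERED_HELMRELEASES="istio-operator istio monitoring gitlab"
ORDERED_HELMRELEASES="$ORDERED_HELMRELEASES mypackage"
echo "$ORDERED_HELMRELEASES"
```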
1. When you are done developing the BigBang chart features for your Package, make a merge request in "Draft" status and add a label corresponding to your package name (must match the name in `values.yaml`). Also add any labels for dependencies of the package that are NOT core apps. The merge request will start a pipeline and use the labels to determine which addons to deploy. Fix any errors that appear in the pipeline. When the pipeline has passed and the MR is ready, take it out of "Draft" and add the `status::review` label. Address any issues raised in the merge request comments.
## BigBang Development and Testing Cycle
There are two ways to test BigBang: imperative, or GitOps with Flux. Your initial development can start with imperative testing, but you should finish with GitOps to make sure that your code works with Flux.
1. **Imperative:** You can manually deploy bigbang with the helm command line. With this method you can test local code changes without committing to a repository. Here are the steps that you can iterate with "code a little, test a little". From the root of your local bigbang repo:
```shell
# Deploy with helm while pointing to your test values files
# Bigbang packages should create any needed secrets from the chart values
# If you have the values file encrypted with sops, temporarily decrypt it
helm upgrade -i bigbang ./chart -n bigbang --create-namespace -f ../customers/template/dev/configmap.yaml -f ./chart/ingress-certs.yaml -f ../customers/template/dev/registry-values.enc.yaml
# Conduct testing
# If you make code changes you can run another helm upgrade to pick up the new changes
helm upgrade -i bigbang ./chart -n bigbang --create-namespace -f ../customers/template/dev/configmap.yaml -f ./chart/ingress-certs.yaml -f ../customers/template/dev/registry-values.enc.yaml
# Tear down
helm delete bigbang -n bigbang
# Helm delete will not delete the bigbang namespace
kubectl delete ns bigbang
# Istio namespace will be stuck in "finalizing", so run the script to delete it
hack/remove-ns-finalizer.sh istio-system
```
2. **GitOps with Flux:** You can deploy your development code the same way a customer would deploy using GitOps. You must commit any code changes to your development branches because this is how GitOps works. There is a [customer template repository](https://repo1.dso.mil/platform-one/big-bang/customers/template) that has an example template for how to deploy using BigBang. You can create a branch from another developer's branch or start clean from the master branch. Make the necessary modifications as explained in the README.md. The setup information is not repeated here. This is a public repo so DO NOT commit unencrypted secrets. Before committing code it is a good idea to manually run `helm template` and a `helm install` with dry run. This will reveal many errors before you make a commit. Here are the steps you can iterate:
```shell
# Verify chart code before committing
helm template bigbang ./chart -n bigbang -f ../customers/template/dev/configmap.yaml --debug
helm install bigbang ./chart -n bigbang -f ../customers/template/dev/configmap.yaml --dry-run
# Commit and push your code
# Deploy your bigbang template
kubectl apply -f dev/bigbang.yaml
# Monitor rollout
watch kubectl get pod,helmrelease -A
# Conduct testing
# Tear down
kubectl delete -f dev/bigbang.yaml
# Istio namespace will be stuck in "finalizing", so run the script to delete it. You will need 'jq' installed
hack/remove-ns-finalizer.sh istio-system
# If you have pushed code changes before the tear down, occasionally the bigbang deployments are not terminated because Flux has not had enough time to reconcile the helmreleases
# Re-deploy bigbang
kubectl apply -f dev/bigbang.yaml
# Run the sync script
hack/sync.sh
# Tear down
kubectl delete -f dev/bigbang.yaml
hack/remove-ns-finalizer.sh istio-system
```
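For orientation, a dev/bigbang.yaml typically contains Flux resources along these lines. This is a hedged sketch only — the customer template repository's README is the source of truth for the exact fields, paths, and branch names:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: environment-repo
  namespace: bigbang
spec:
  interval: 1m
  url: https://repo1.dso.mil/platform-one/big-bang/customers/template.git
  ref:
    branch: main   # point this at your development branch
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: environment
  namespace: bigbang
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: environment-repo
  path: ./dev
```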
# Guides
## deployment_scenarios
Beginner-friendly how-to guides are intended to be added to these subfolders over time.
## prerequisites
Beginner-friendly comprehensive explanations of prerequisites that are generically applicable to multiple scenarios.
# Big Bang Quick Start
## Overview
This guide is designed to offer an easy-to-deploy preview of BigBang, so new users can get to a hands-on state as quickly as possible.
Note: The current implementation of the Quick Start limits the ability to customize the BigBang deployment. It is doing a GitOps-defined deployment from a repository you don't control.
## Step 1. Provision a Virtual Machine
The following requirements are recommended for demo purposes:
* 1 Virtual Machine with 64GB RAM and a 16-core CPU (this will become a single-node cluster)
* Ubuntu Server 20.04 LTS (Ubuntu comes up slightly faster than RHEL, although both work fine)
* Network connectivity to said Virtual Machine (provisioning with a public IP and a security group locked down to your IP should work; otherwise a bare-metal server or even a Vagrant box Virtual Machine configured for remote ssh works fine)
Note: The quick start repository's `init-k3d.sh` starts up k3d using flags to disable the default ingress controller and map the virtual machine's port 443 to a Docker-ized Load Balancer's port 443, which will eventually map to the istio ingress gateway. That, along with some other things (like leveraging a Let's Encrypt Free HTTPS wildcard certificate), is done to lower the prerequisites barrier and make basic demos easier.
## Step 2. SSH into machine and install prerequisite software
1. Setup SSH
```shell
# [User@Laptop:~]
touch ~/.ssh/config
chmod 600 ~/.ssh/config
cat ~/.ssh/config
# Append a host entry (trailing comments are not valid in ssh_config, so they go on their own lines)
cat <<'EOF' >> ~/.ssh/config
##########################
Host k3d
    # IP address of the k3d node
    Hostname 1.2.3.4
    # ssh key authorized to access the k3d node
    IdentityFile ~/.ssh/bb-onboarding-attendees.ssh.privatekey
    User ubuntu
    # Useful for vagrant where you'd reuse an IP from repeated tear downs
    StrictHostKeyChecking no
##########################
EOF
```
1. Install Docker
```shell
# [admin@Laptop:~]
ssh k3d
# [ubuntu@k3d:~]
curl -fsSL https://get.docker.com | bash
docker run hello-world
# docker: Got permission denied while trying to connect to the Docker daemon socket at
# unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create:
# dial unix /var/run/docker.sock: connect: permission denied.See 'docker run --help'.
sudo docker run hello-world
# If docker only works when you use sudo, you need to add your non-root user to the docker group.
sudo groupadd docker
sudo usermod --append --groups docker $USER
# When users are added to a group in linux, a new process needs to spawn in order for the new permissions to be recognized, due to a Linux security feature preventing running processes from gaining additional privileges on the fly. (log out and back in is the sure fire method)
exit
# [admin@Laptop:~]
ssh k3d
# [ubuntu@k3d:~]
docker run hello-world # validate install was successful
```
1. Install k3d
```shell
# [ubuntu@k3d:~]
wget -q -P /tmp https://github.com/rancher/k3d/releases/download/v3.0.1/k3d-linux-amd64
mv /tmp/k3d-linux-amd64 /tmp/k3d
sudo chmod +x /tmp/k3d
sudo mv -v /tmp/k3d /usr/local/bin/
k3d --version # validate install was successful
```
1. Install Kubectl
```shell
# [ubuntu@k3d:~]
wget -q -P /tmp "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo chmod +x /tmp/kubectl
sudo mv /tmp/kubectl /usr/local/bin/kubectl
sudo ln -s /usr/local/bin/kubectl /usr/local/bin/k #alternative to alias k=kubectl in ~/.bashrc
k version # validate install was successful
```
1. Install Terraform
```shell
# [ubuntu@k3d:~]
wget https://releases.hashicorp.com/terraform/0.14.9/terraform_0.14.9_linux_amd64.zip
sudo apt update && sudo apt install unzip
unzip terraform*
sudo mv terraform /usr/local/bin/
terraform version # validate install was successful
```
1. Run Operating System Pre-configuration
```shell
# [ubuntu@k3d:~]
# For ECK
sudo sysctl -w vm.max_map_count=262144
# Turn off all swap devices and files (won't last reboot)
sudo swapoff -a
# For swap to stay off you can remove any references found via
# cat /proc/swaps
# cat /etc/fstab
# For Sonarqube
sudo sysctl -w vm.max_map_count=524288
sudo sysctl -w fs.file-max=131072
ulimit -n 131072
ulimit -u 8192
```
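The `sysctl -w` and `ulimit` settings above do not survive a reboot. A hedged sketch for persisting the kernel settings, assuming a systemd-based distro with sysctl.d support (`SYSCTL_DROPIN` defaults to a temp path for a dry run; point it at `/etc/sysctl.d/99-bigbang.conf` with sudo on the real node):

```shell
# Target file for the drop-in; defaults to /tmp so this can be dry-run unprivileged
SYSCTL_DROPIN="${SYSCTL_DROPIN:-/tmp/99-bigbang.conf}"
cat > "$SYSCTL_DROPIN" <<'EOF'
# ECK and Sonarqube both raise vm.max_map_count; keep the larger value
vm.max_map_count = 524288
fs.file-max = 131072
EOF
# On the real node, apply all sysctl.d drop-ins (requires root):
# sudo sysctl --system
```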
## Step 3. Clone the Big Bang Quick Start Repo
<https://repo1.dso.mil/platform-one/quick-start/big-bang#big-bang-quick-start>
1. Clone the repo
```shell
# [ubuntu@k3d:~]
cd ~
git clone https://repo1.dso.mil/platform-one/quick-start/big-bang.git
cd ~/big-bang
```
1. Look up your IronBank image pull credentials from <https://registry1.dso.mil>
   1. In a web browser go to <https://registry1.dso.mil>
   2. Login via OIDC provider
   3. Top right of the page, click your name --> User Profile
   4. Your image pull username is labeled "Username"
   5. Your image pull password is labeled "CLI secret"

   (Note: The image pull credentials are tied to the life cycle of an OIDC token which expires after 30 days, so if 30 days have passed since your last login to IronBank, the credentials will stop working until you re-login to the <https://registry1.dso.mil> GUI)
1. Verify your credentials work
```shell
# [ubuntu@k3d:~/big-bang]
docker login https://registry1.dso.mil
# It'll prompt for "Username: " (type it out)
# It'll prompt for "Password: " (copy paste it, or blind type it as it will be masked)
# Login Succeeded
```
1. Create a terraform.tfvars file with your registry1 credentials in your copy of the cloned repo
```shell
# [ubuntu@k3d:~/big-bang]
vi ~/big-bang/terraform.tfvars
```
* Add the following contents to the newly created file
```plaintext
registry1_username="REPLACE_ME"
registry1_password="REPLACE_ME"
```
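As an alternative to editing with vi, the file can be created non-interactively with a heredoc (replace the REPLACE_ME values with your registry1 credentials, and keep this file out of version control since it holds secrets):

```shell
# Create terraform.tfvars with placeholder credentials; substitute real values before use
cat > terraform.tfvars <<'EOF'
registry1_username="REPLACE_ME"
registry1_password="REPLACE_ME"
EOF
```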
## Step 4. Follow the deployment directions on the Big Bang Quick Start Repo
[Link to Big Bang Quick Start Repo](https://repo1.dso.mil/platform-one/quick-start/big-bang#big-bang-quick-start)
## Step 5. Add the Let's Encrypt Free HTTPS Demo Certificate
* A Let's Encrypt Free HTTPS wildcard certificate for `*.bigbang.dev` is included in the repo; we'll apply it from a regularly updated upstream source of truth.
```shell
# [ubuntu@k3d:~/big-bang]
# Download Encrypted HTTPS Wildcard Demo Cert
curl https://repo1.dso.mil/platform-one/big-bang/bigbang/-/raw/master/hack/secrets/ingress-cert.yaml > ~/ingress-cert.enc.yaml
# Download BigBang's Demo GPG Key Pair to a local file
curl https://repo1.dso.mil/platform-one/big-bang/bigbang/-/raw/master/hack/bigbang-dev.asc > /tmp/demo-bigbang-gpg-keypair.dev
# Import the Big Bang Demo Key Pair into keychain
gpg --import /tmp/demo-bigbang-gpg-keypair.dev
# Install sops (Secret Operations CLI tool by Mozilla)
wget https://github.com/mozilla/sops/releases/download/v3.6.1/sops_3.6.1_amd64.deb
sudo dpkg -i sops_3.6.1_amd64.deb
# Decrypt and apply to the cluster
sops --decrypt ~/ingress-cert.enc.yaml | kubectl apply -f - --namespace=istio-system
```
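Once applied, a quick sanity check (a sketch; the secret's exact name comes from the decrypted manifest, so we list by type rather than by name):
```shell
# [ubuntu@k3d:~/big-bang]
# The decrypted manifest should have created a TLS secret in istio-system
kubectl get secrets --namespace=istio-system --field-selector type=kubernetes.io/tls
```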
## Step 6. Edit your Laptop's HostFile to access the web pages hosted on the BigBang Cluster
```shell
# [ubuntu@k3d:~/big-bang]
# Short version of, kubectl get virtualservices --all-namespaces
$ k get vs -A
NAMESPACE NAME GATEWAYS HOSTS AGE
monitoring monitoring-monitoring-kube-alertmanager ["istio-system/main"] ["alertmanager.bigbang.dev"] 8d
monitoring monitoring-monitoring-kube-grafana ["istio-system/main"] ["grafana.bigbang.dev"] 8d
monitoring monitoring-monitoring-kube-prometheus ["istio-system/main"] ["prometheus.bigbang.dev"] 8d
argocd argocd-argocd-server ["istio-system/main"] ["argocd.bigbang.dev"] 8d
kiali kiali ["istio-system/main"] ["kiali.bigbang.dev"] 8d
jaeger jaeger ["istio-system/main"] ["tracing.bigbang.dev"] 8d
```
* Linux/Mac Users:
```shell
# [admin@Laptop:~]
sudo vi /etc/hosts
```
* Windows Users:
1. Right click Notepad -> Run as Administrator
1. Open C:\Windows\System32\drivers\etc\hosts
* Add the following entries to the hostfile, where 1.2.3.4 = k3d virtual machine's IP
```plaintext
1.2.3.4 alertmanager.bigbang.dev
1.2.3.4 grafana.bigbang.dev
1.2.3.4 prometheus.bigbang.dev
1.2.3.4 argocd.bigbang.dev
1.2.3.4 kiali.bigbang.dev
1.2.3.4 tracing.bigbang.dev
```
* Remember to un-edit your hostfile when done
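The hostfile entries above can also be generated with a small loop. This is a sketch: `KUBE_IP` and `HOSTS` are placeholders; in practice the host list would come from the `kubectl get vs -A` output shown earlier.
```shell
# A sketch that prints ready-to-paste hostfile lines. KUBE_IP and HOSTS are
# placeholders; in practice the host list could come from:
#   kubectl get vs -A -o jsonpath='{.items[*].spec.hosts[0]}'
KUBE_IP="1.2.3.4"
HOSTS="alertmanager.bigbang.dev grafana.bigbang.dev prometheus.bigbang.dev argocd.bigbang.dev kiali.bigbang.dev tracing.bigbang.dev"
for h in $HOSTS; do
  printf '%s %s\n' "$KUBE_IP" "$h"
done
```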
# Prerequisites:
* How the Prerequisites docs are organized:
* This README.md is meant to be a high level overview of prerequisites.
* /docs/guides/prerequisites/(some_topic).md files are meant to offer more specific guidance on prerequisites while staying generic.
* The /docs/guides/deployment_scenarios/(some_topic).md files may also offer additional details on prerequisites specific to the scenario.
* Prerequisites vary depending on deployment scenario, thus a table is used to give an overview.
* Note for future edits: The following table was generated using tablesgenerator.com/markdown_tables, recommended to copy the table's raw text contents, visit tablesgenerator.com/markdown_tables, and File -> Paste table data when edits are needed.
| Prerequisites(rows) vs Deployment Scenarios(columns) | QuickStart | Internet Connected | Internet Disconnected |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **[OS Preconfigured](os_preconfiguration.md) and Prehardened** <br>(OS and level of hardening required depends on AO) | Prerequisite <br>Recommended: A non-hardened single VM with 8 cores and 64 GB ram <br>Minimum: 4 cores and 16 GB ram (requires overriding helm values) | Prerequisite <br>(CSPs usually have marketplaces with pre-hardened VM images) | Prerequisite <br>(configured to AO's risk tolerance / mission needs) |
| **[Kubernetes Distribution Preconfigured](kubernetes_preconfiguration.md) to Best Practices and Prehardened** <br>(Any CNCF Kubernetes Distribution will work as long as an AO signs off on it) | k3d is recommended for demos (It's quick to set up, ships with a dockerized LB, works on every cloud, and bare metal) | Prerequisite <br>(https://repo1.dso.mil/platform-one/distros) | Prerequisite <br>(users are responsible for airgap image import of container images needed by chosen Kubernetes Distribution) |
| **Default Storage Class** <br>((for Dynamic PVCs), the SC needs to support RWX (Read Write Many) Access Mode to support HA deployment of all BigBang AddOns) | Presatisfied* <br>(*if using k3d, which has dynamic local volume storage class baked in) | Prerequisite <br>It's recommended that users start with a CSP specific or Kubernetes Distro provided storage class | Prerequisite <br>[(These docs compare Cloud Agnostic Storage Solutions)](../../k8s-storage/README.md#kubernetes-storage-options) |
| **Support for Automated Provisioning of Service Type Load Balancer** <br>(is recommended) | Presatisfied* <br>(*if using k3d, which ships with the ability to add flags to treat the VM's port 443 as Kubernetes Service of Type LB's port 443, automation in the quick start repo leverages these flags) | Prerequisite <br>Kubernetes Distributions usually have CSP-specific flags you can pass to the kube-apiserver to support auto provisioning of CSP LBs. | Prerequisite <br>[(See docs for guidance on bare metal and no IAM scenarios)](kubernetes_preconfiguration.md#service-of-type-load-balancer) |
| **Access to Container Images** <br>(IronBank Image Pull Credentials or AirGap import from .tar.gz's) | Prerequisite <br>(Anyone can go to login.dso.mil, and self register against P1's SSO. That can be used to login to registry1.dso.mil to generate image pull credentials for the QuickStart) | BigBang customers are recommended to ask their BB Customer Liaisons for an IronBank image pull robot account, which lasts 6 months. | Prerequisite <br>(Airgap import of container images, [BigBang Releases](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/releases) includes a .tar.gz of IronBank Images) |
| **Customer Controlled Private Git Repo** <br>(for GitOps, the Cluster needs network access & Credentials for Read Access to said Git Repo) | Presatisfied <br>(the turn key demo points to public repo1, but you won't be able to customize it) | Prerequisite <br>(or follow Air gap docs) | Prerequisite <br>(Air gap docs assist with provisioning an ssh based git repo) |
| **Encrypt Secrets as Code** <br>(Use SOPS + CSP KMS or PGP to encrypt secrets that need to be stored in the GitRepo) | Presatisfied <br>(Demo Repo has mock secrets encrypted with a demo PGP public encryption key) | Prerequisite <br>(CSP KMS and IAM are more secure than a gpg key pair) | Prerequisite <br>(Use CSP KMS if available, PGP works universally, [Flux requires the private PGP key to not have a passphrase](https://toolkit.fluxcd.io/guides/mozilla-sops/#generate-a-gpg-key)) |
| **Install and Configure Flux** <br>(Flux needs Git Repo Credentials & CSP IAM rights for KMS decryption or a kubernetes secret containing a private PGP decryption key) | Presatisfied <br>(Demo Public Repo doesn't require RO Credentials, the demo PGP private decryption key is hosted cleartext in the repo) | Prerequisite <br>(see BigBang docs, [flux docs](https://toolkit.fluxcd.io/components/source/gitrepositories/#spec-examples) are also a good resource for this) | Prerequisite <br>(see BigBang docs) |
| **HTTPS Certificates** | Presatisfied <br>(Demo Public Repo contains a Let's Encrypt Free (public internet recognized certificate authority) HTTPS Certificate for *.bigbang.dev, alternatively mkcert can be used to generate demo certs for arbitrary DNS names that will only be trusted by the laptop that provisioned the mkcert) | Prerequisite <br>(HTTPS cert is provided by consumer) | Prerequisite <br>(HTTPS cert is provided by consumer) |
| **DNS** | Edit your Laptop's host file (/etc/hosts, C:\Windows\System32\drivers\etc\hosts), or use something like AWS VPC Private DNS and [sshuttle](https://github.com/sshuttle/sshuttle) to point to host VM (if using k3d) | Prerequisite <br>(point DNS names to Layer 4 CSP LB) | Prerequisite <br>(point DNS names to L4 LB) |
| **HTTPS Certificate, DNS Name, and hostnames in BigBang's helm values must match** <br>(in order for Ingress to work correctly.) | QuickStart leverages `*.bigbang.dev` HTTPS cert, and the BigBang Helm Chart's values.yaml's hostname defaults to bigbang.dev, just need to ensure multiple hostfile entries like "grafana.bigbang.dev " exist, or if you have access to DNS a wildcard entry to map CNAME `*.bigbang.dev` to k3d VM's IP | Prerequisite <br>(update bigbang helm values in git repo so hostnames match HTTPS cert) | Prerequisite <br>(update bigbang helm values in git repo so hostnames match HTTPS cert) |
| **SSO Identity Provider** <br>(Prerequisite for SSO Authentication Proxy feature) | Presatisfied* <br>(*depending on which quick start config is used). There exists a demo SSO config that leverages P1's CAC enabled SSO; it's coded to only work for localhost to balance turn key demo functionality against security concerns. | Prerequisite <br>(You don't have to use Keycloak, you can use any OIDC/SAML Identity Provider) ([Customer Deployable Keycloak is a feature coming soon](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/issues/291)) | Prerequisite* <br>(Install your own Keycloak cluster, leverage a pre-existing airgap SSO solution, or configure to not use SSO* if not needed for use case) |
| **Ops Team to integrate, configure, and maintain BigBang** <br>(needed skillsets: DevOps IaC/CaC all the things, automate most of the things, document the rest, linux administration, productionalization and maintenance of a Kubernetes Cluster.) | QuickStart Demo is designed to be self service. | Prerequisite <br>(BigBang Customer Integration Engineers are available to help long term Ops teams.) | Prerequisite |
# Default Storage Class prerequisite
* BigBang assumes the cluster you're deploying to supports [dynamic volume provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/).
* A BigBang Cluster should have 1 Storage Class annotated as the default SC.
* For Production Deployments it is recommended to leverage a Storage Class that supports the creation of volumes that support ReadWriteMany [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), since a few BigBang Addons require ReadWriteMany-capable storage when deployed in an HA configuration.
## How Dynamic volume provisioning works in a nutshell
* StorageClass + PersistentVolumeClaim = Dynamically Created Persistent Volume
* A PersistentVolumeClaim that does not reference a specific StorageClass will leverage the default StorageClass. (Of which there should only be 1, identified using kubernetes annotations.) Some Helm Charts allow a storage class to be explicitly specified so that multiple storage classes can be used simultaneously.
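As a minimal illustration (a generic Kubernetes sketch, not a BigBang-specific manifest), a PVC that omits `storageClassName` will be served by the default StorageClass:
```yaml
# Generic example: no storageClassName is set, so the cluster's default
# StorageClass dynamically provisions the backing PersistentVolume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```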
## How to check what storage classes are installed on your cluster
* `kubectl get storageclass` can be used to see what storage classes are available on a cluster; the default will be marked as such.
* Note: You can have multiple storage classes, but you should only have 1 default storage class.
```bash
kubectl get storageclass
# NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
# local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 47h
```
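If your distro ships a StorageClass but none is marked as the default, it can be annotated in place. A sketch (the class name `local-path` is illustrative; substitute your own):
```shell
# Mark an existing StorageClass as the cluster default
# ("local-path" is illustrative; substitute your storage class's name)
kubectl patch storageclass local-path \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```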
------------------------------------------------------
## AWS Specific Notes
### Example AWS Storage Class Configuration
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp2
annotations:
storageclass.kubernetes.io/is-default-class: 'true'
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2 #gp3 isn't supported by the in-tree plugin
fsType: ext4
# encrypted: 'true' #requires kubernetes nodes have IAM rights to a KMS key
# kmsKeyId: 'arn:aws-us-gov:kms:us-gov-west-1:110518024095:key/b6bf63f0-dc65-49b4-acb9-528308195fd6'
reclaimPolicy: Retain
allowVolumeExpansion: true
```
### AWS EBS Volumes:
* AWS EBS Volumes have the following limitations:
* An EBS volume can only be attached to a single Kubernetes Node at a time, thus ReadWriteMany Access Mode isn't supported.
* An EBS PersistentVolume in AZ1 (Availability Zone 1), cannot be mounted by a worker node in AZ2.
### AWS EFS Volumes:
* An AWS EFS Storage Class can be installed according to the [vendors docs](https://github.com/kubernetes-sigs/aws-efs-csi-driver#installation).
* AWS EFS Storage Class supports ReadWriteMany Access Mode.
* AWS EFS Persistent Volumes can be mounted by worker nodes in multiple AZs.
* AWS EFS is basically NFS (Network File System) as a Service. NFS cons like latency apply equally to EFS, so it's not a good fit for databases.
------------------------------------------------------
## Azure Specific Notes
### Azure Disk Storage Class Notes
* The Kubernetes Docs offer an Example [Azure Disk Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-disk)
* An Azure disk can only be mounted with Access mode type ReadWriteOnce, which makes it available to one node in AKS.
* An Azure Disk PersistentVolume in AZ1, can be mounted by a worker node in AZ2 (although some additional lag is involved in such transitions).
------------------------------------------------------
## Bare Metal/Cloud Agnostic Storage Class Notes
* The BigBang Product team put together a [Comparison Matrix of a few Cloud Agnostic Storage Class offerings](../../k8s-storage/README.md#kubernetes-storage-options)
* Note: No storage class specific container images exist in IronBank at this time.
* Approved IronBank Images will show up in https://registry1.dso.mil
* https://repo1.dso.mil/dsop can be used to check status of IronBank images.
# Install the flux cli tool
```bash
curl -s https://fluxcd.io/install.sh | sudo bash
```
> Fedora Note: kubectl is a prerequisite for flux, and flux expects it at `/usr/local/bin/kubectl`; symlink it or copy the binary there to fix errors.
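One way to satisfy that expectation (a sketch; adjust if your package manager installed kubectl somewhere unusual):
```shell
# Symlink the discovered kubectl binary to the path flux expects
sudo ln -sf "$(command -v kubectl)" /usr/local/bin/kubectl
```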
## Install flux.yaml to the cluster
```bash
export REGISTRY1_USER='REPLACE_ME'
export REGISTRY1_TOKEN='REPLACE_ME'
```
> In production, use robot credentials, e.g. `export REGISTRY1_USER='robot$bigbang-onboarding-imagepull'` (the single quotes are important due to the `$`)
```bash
kubectl create ns flux-system
kubectl create secret docker-registry private-registry \
--docker-server=registry1.dso.mil \
--docker-username=$REGISTRY1_USER \
--docker-password=$REGISTRY1_TOKEN \
--namespace flux-system
kubectl apply -f https://repo1.dso.mil/platform-one/big-bang/bigbang/-/raw/master/scripts/deploy/flux.yaml
```
> `kubectl apply -f flux.yaml` is equivalent to `flux install`, but it installs a version of flux that has been tested and gone through IronBank.
#### Now you can see new CRD object types inside the cluster
```bash
kubectl get crds | grep flux
```
# Advanced Installation
Clone the Big Bang repo and use the awesome installation [scripts](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/tree/master/scripts) directory
```bash
git clone https://repo1.dso.mil/platform-one/big-bang/bigbang.git
./bigbang/scripts/install_flux.sh
```
> **NOTE** install_flux.sh requires arguments to run properly; calling it without arguments will print a friendly USAGE message listing the arguments needed to complete installation.
# Kubernetes Cluster Preconfiguration:
## Best Practices:
* A CNI (Container Network Interface) that supports Network Policies (which are basically firewalls for the inner cluster network). (Note: k3d, which is recommended for the quickstart demo, defaults to Flannel, which does not support network policies.)
* All Kubernetes Nodes and the LB associated with the kube-apiserver should all use private IPs.
* In most cases User Application Facing LBs should have Private IP Addresses and be paired with a defense in depth Ingress Protection mechanism like [P1's CNAP](https://p1.dso.mil/#/products/cnap/), a CNAP equivalent (Advanced Edge Firewall), VPN, VDI, port forwarding through a bastion, or air gap deployment.
* CoreDNS in the kube-system namespace should be HA with pod anti-affinity rules
* Master Nodes should be HA and tainted.
* Consider using a licensed Kubernetes Distribution with a support contract.
* [A default storage class should exist](default_storageclass.md) to support dynamic provisioning of persistent volumes.
## Service of Type Load Balancer:
BigBang's default configuration assumes the cluster you're deploying to supports dynamic load balancer provisioning. Specifically Istio defaults to creating a Kubernetes Service of type Load Balancer, which usually creates an endpoint exposed outside of the cluster that can direct traffic inside the cluster to the istio ingress gateway.
How Kubernetes service of type LB works depends on implementation details, there are many ways of getting it to work, common methods are listed below:
* CSP API Method: (Recommended option for Cloud Deployments)
The Kubernetes Control Plane has a --cloud-provider flag that can be set to aws, azure, etc. If the Kubernetes Master Nodes have that flag set along with CSP IAM rights, the control plane will auto provision and configure CSP LBs. (Note: a vendor's Kubernetes distro automation may have IaC/CaC defaults that allow this to work turn key, but if you have issues when provisioning LBs, consult the vendor's support for the recommended way of configuring automatic LB provisioning.)
* External LB Method: (Good for bare metal and 0 IAM rights scenarios)
You can override bigbang's helm values so istio will provision a service of type NodePort instead of type LoadBalancer. Instead of randomly generating from the port range of 30000 - 32768, the NodePorts can be pinned to convention based port numbers like 30080 & 30443. If you're in a restricted cloud env or bare metal you can ask someone to provision a CSP LB where LB:443 would map to Nodeport:30443 (of every worker node), etc.
* No LB, Network Routing Methods: (Good options for bare metal)
* [MetalLB](https://metallb.universe.tf/)
* [kubevip](https://kube-vip.io/)
* [kube-router](https://www.kube-router.io)
## BigBang doesn't support PSPs (Pod Security Policies):
* [PSPs are being removed from Kubernetes and will be gone by version 1.25.x](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/issues/10)
* [Open Policy Agent Gatekeeper can enforce the same security controls as PSPs](https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy#pod-security-policies), and is a core component of BigBang, which operates as an elevated [validating admission controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to audit and enforce various [constraints](https://github.com/open-policy-agent/frameworks/tree/master/constraint) on all requests sent to the kubernetes api server.
* We recommend users disable PSPs completely given they're being removed, we have a replacement, and PSPs can prevent OPA from deploying (and if OPA is not able to deploy, nothing else gets deployed).
* Different ways of Disabling PSPs:
* Edit the kube-apiserver's flags (methods for doing this vary per distro.)
* ```bash
kubectl patch psp system-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}'
kubectl patch psp global-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}'
kubectl patch psp global-restricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}'
```
## Kubernetes Distribution Specific Notes
* Note: P1 has forks of various [Kubernetes Distribution Vendor Repos](https://repo1.dso.mil/platform-one/distros), there's nothing special about the P1 forks.
* We recommend you leverage the vendor's upstream docs in addition to any docs found in P1 repos; in fact, the vendor's upstream docs are far more likely to be up to date.
### VMWare Tanzu Kubernetes Grid:
[Prerequisites section of the VMware Kubernetes Distribution Docs](https://repo1.dso.mil/platform-one/distros/vmware/tkg#prerequisites)
### Cluster API
* Note that there are some OS hardening and VM Image Build automation tools in here, in addition to Cluster API.
* https://repo1.dso.mil/platform-one/distros/clusterapi
* https://repo1.dso.mil/platform-one/distros/cluster-api/gov-image-builder
### OpenShift
1) When deploying BigBang, set the OpenShift flag to true.
```
# inside a values.yaml being passed to the command installing bigbang
openshift: true
# OR inline with helm command
helm install bigbang chart --set openshift=true
```
2) Patch the istio-cni daemonset to allow containers to run privileged (AFTER istio-cni daemonset exists).
Note: attempts to apply this setting via modifications to the helm chart were unsuccessful; patching the live daemonset worked.
```
kubectl get daemonset istio-cni-node -n kube-system -o json | jq '.spec.template.spec.containers[] += {"securityContext":{"privileged":true}}' | kubectl replace -f -
```
3) Modify the OpenShift cluster(s) with the following scripts based on https://istio.io/v1.7/docs/setup/platform-setup/openshift/
```
# Istio Openshift configurations Post Install
oc -n istio-system expose svc/istio-ingressgateway --port=http2
oc adm policy add-scc-to-user privileged -z istio-cni -n kube-system
oc adm policy add-scc-to-group privileged system:serviceaccounts:logging
oc adm policy add-scc-to-group anyuid system:serviceaccounts:logging
oc adm policy add-scc-to-group privileged system:serviceaccounts:monitoring
oc adm policy add-scc-to-group anyuid system:serviceaccounts:monitoring
cat <<\EOF >> NetworkAttachmentDefinition.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: istio-cni
EOF
oc -n logging create -f NetworkAttachmentDefinition.yaml
oc -n monitoring create -f NetworkAttachmentDefinition.yaml
```
### Konvoy
* [Prerequisites can be found here](https://repo1.dso.mil/platform-one/distros/d2iq/konvoy/konvoy/-/tree/master/docs/1.5.0#prerequisites)
* [Different Deployment Scenarios have been documented here](https://repo1.dso.mil/platform-one/distros/d2iq/konvoy/konvoy/-/tree/master/docs/1.4.4/install)
### RKE2
* RKE2 turns PSPs on by default (see above for tips on disabling)
* RKE2 sets selinux to enforcing by default ([see os_preconfiguration.md for selinux config](os_preconfiguration.md))
Since BigBang makes several assumptions about volume and load balancer provisioning by default, it's vital that the rke2 cluster be properly configured. The easiest way to do this is through the in-tree cloud providers, which can be configured through the `rke2` configuration file, for example:
```yaml
# aws, azure, gcp, etc...
cloud-provider-name: aws
# additionally, set below configuration for private AWS endpoints, or custom regions such as (T)C2S (us-iso-east-1, us-iso-b-east-1)
cloud-provider-config: ...
```
For example, if using the aws terraform modules provided [on repo1](https://repo1.dso.mil/platform-one/distros/rancher-federal/rke2/rke2-aws-terraform), setting the variable `enable_ccm = true` will ensure all the necessary resource tags are applied.
In the absence of an in-tree cloud provider (such as on-prem), the requirements can be met by ensuring a default storage class and automatic load balancer provisioning exist.
# OS Configuration Pre-Requisites:
## Disable swap (Kubernetes Best Practice)
1. Identify configured swap devices and files with `cat /proc/swaps`.
2. Turn off all swap devices and files with `swapoff -a`.
3. Remove any matching reference found in `/etc/fstab`.
(Credit: Above copy pasted from Aaron Copley of [Serverfault.com](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux))
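The steps above as commands (a sketch; the `sed` edit comments out swap entries in `/etc/fstab` rather than deleting them, and keeps a `.bak` backup):
```shell
# 1. Identify configured swap devices and files
cat /proc/swaps
# 2. Turn off all swap devices and files
sudo swapoff -a
# 3. Comment out any swap entries in /etc/fstab (a .bak backup is kept)
sudo sed -i.bak -E 's|^([^#].*\sswap\s)|#\1|' /etc/fstab
```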
## ECK specific configuration (ECK is a Core BB App):
Elastic Cloud on Kubernetes (the Elasticsearch Operator) deployed by BigBang uses memory mapping by default. In most cases, the default memory map limit is too low and must be raised.
To avoid the use of unnecessary privilege escalation containers, these kernel settings should be applied before BigBang is deployed:
```bash
sudo sysctl -w vm.max_map_count=262144 #(ECK crash loops without this)
```
More information can be found in Elasticsearch's documentation [here](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html#k8s-virtual-memory)
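Note that `sysctl -w` does not survive a reboot; to persist the setting (assuming a standard `sysctl.d` layout), something like:
```shell
# Persist the ECK memory map setting across reboots
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system   # reload all sysctl configuration files
```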
## SELinux specific configuration:
* If SELinux is enabled and the OS hasn't received additional pre-configuration, then users will see the istio init-containers crash loop.
* Depending on security requirements it may be possible to set selinux in permissive mode: `sudo setenforce 0`.
* Additional OS and Kubernetes specific configuration is required for istio to work on systems with selinux set to `Enforcing`.
By default, BigBang will deploy istio configured to use `istio-init` (read more [here](https://istio.io/latest/docs/setup/additional-setup/cni/)). To ensure istio can properly initialize envoy sidecars without container privileged escalation permissions, several system kernel modules must be pre-loaded before installing BigBang:
```bash
modprobe xt_REDIRECT
modprobe xt_owner
modprobe xt_statistic
```
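Like the sysctl settings, `modprobe` does not persist across reboots; assuming a standard `modules-load.d` layout, the modules can be loaded at every boot with:
```shell
# Load the iptables extension modules istio-init needs at every boot
printf '%s\n' xt_REDIRECT xt_owner xt_statistic | sudo tee /etc/modules-load.d/istio-init.conf
```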
## Sonarqube specific configuration (Sonarqube is a BB Addon App):
Sonarqube requires the following kernel configurations set at the node level:
```bash
sysctl -w vm.max_map_count=524288
sysctl -w fs.file-max=131072
ulimit -n 131072
ulimit -u 8192
```
Another option is to run an init container that modifies the kernel values on the host (this requires a busybox container run as root):
```yaml
addons:
sonarqube:
values:
initSysctl:
enabled: true
```
**This is not the recommended solution as it requires running an init container as privileged.**