diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 0000000000000000000000000000000000000000..5d827dfea8d38c973c8980965850e89f6bdd1a1e --- /dev/null +++ b/docs/README.md @@ -0,0 +1,35 @@ +# BigBang Docs + +## What is BigBang? + +* BigBang is a Helm Chart that is used to deploy a DevSecOps Platform on a Kubernetes Cluster. The DevSecOps Platform is composed of application packages which are bundled as helm charts that leverage IronBank hardened container images. +* The BigBang Helm Chart deploys gitrepository and helmrelease Custom Resources to a Kubernetes Cluster that's running the Flux GitOps Operator; these can be seen using `kubectl get gitrepository,helmrelease -n=bigbang`. Flux then installs the helm charts defined by the Custom Resources into the cluster. +* The BigBang Helm Chart has a values.yaml file that does two main things: + 1. Defines which DevSecOps Platform packages/helm charts will be deployed + 2. Defines what input parameters will be passed through to the chosen helm charts. +* You can see what applications are part of the platform by checking the following resources: + * [../Packages.md](../Packages.md) lists the packages and organizes them into categories. + * [Release Notes](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/releases) lists the packages and their versions. + * For a code-based source of truth, you can check [BigBang's default values.yaml](../chart/values.yaml), and `[CTRL] + [F]` "repo:", to quickly iterate through the list of applications supported by the BigBang team. + +## How do I deploy BigBang? + +**Note:** The deployment process and prerequisites will vary depending on the deployment scenario. The [Quick Start Demo Deployment](guides/deployment_scenarios/quickstart.md), for example, allows some steps to be skipped due to a mixture of automation and generically reusable demo configuration that satisfies prerequisites.
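The `[CTRL] + [F]` "repo:" tip in the section above has a simple shell equivalent. This is a minimal sketch; the excerpt written below is an illustrative stand-in for the real `chart/values.yaml`, not its actual contents:

```shell
# Write a small values.yaml excerpt to grep against (stand-in data,
# not the real chart/values.yaml).
cat > /tmp/values-excerpt.yaml <<'EOF'
istio:
  git:
    repo: https://repo1.dso.mil/platform-one/big-bang/apps/core/istio-controlplane.git
kiali:
  git:
    repo: https://repo1.dso.mil/platform-one/big-bang/apps/core/kiali.git
EOF

# List every package chart repo referenced in the values file.
grep -n 'repo:' /tmp/values-excerpt.yaml
```

Run against the real `chart/values.yaml`, the same `grep` enumerates every package repository the chart can deploy.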
+The following is a general overview of the process; the [deployment guides](guides/deployment_scenarios) go into more detail. + +1. Satisfy Prerequisites: + * Provision a Kubernetes Cluster according to [best practices](guides/prerequisites/kubernetes_preconfiguration.md#best-practices). + * Ensure the Cluster has network connectivity to a Git Repo you control. + * Install the Flux GitOps Operator on the Cluster. + * Configure Flux, the Cluster, and the Git Repo for GitOps Deployments that support deploying encrypted values. + * Commit to the Git Repo BigBang's values.yaml and encrypted secrets that have been configured to match the desired state of the cluster (including HTTPS Certs and DNS names). +2. `kubectl apply --filename bigbang.yaml` + * [bigbang.yaml](https://repo1.dso.mil/platform-one/big-bang/customers/template/-/blob/main/dev/bigbang.yaml) will trigger a chain reaction of GitOps Custom Resources that will deploy other GitOps CRs that will eventually deploy an instance of a DevSecOps Platform that's declaratively defined in your Git Repo. + * To be specific, the chain reaction pattern we consider best practice is to have: + * bigbang.yaml deploys a gitrepository and a kustomization Custom Resource + * Flux reads the declarative configuration stored in the kustomization CR to do a GitOps equivalent of `kustomize build . | kubectl apply --filename -`, to deploy a helmrelease CR of the BigBang Helm Chart that references input values.yaml files defined in the Git Repo.
+ * Flux reads the declarative configuration stored in the helmrelease CR to do a GitOps equivalent of `helm upgrade --install bigbang ./chart --namespace=bigbang --values encrypted_values.yaml --values values.yaml --create-namespace=true`. The BigBang Helm Chart then deploys more CRs that Flux uses to deploy the packages specified in BigBang's values.yaml. + +## New User Orientation + +* New users are encouraged to read through the Useful Background Contextual Information present in the [understanding_bigbang folder](./understanding_bigbang) diff --git a/docs/a_faqs.md b/docs/a_faqs.md deleted file mode 100644 index 81a7e918e420b4b717bc13ba3ac2c22d44bd3eae..0000000000000000000000000000000000000000 --- a/docs/a_faqs.md +++ /dev/null @@ -1 +0,0 @@ -# Appendix A - Big Bang FAQs diff --git a/docs/airgap/README.md b/docs/airgap/README.md index a68e0eb78f0553def0447f3a5ea73e39bf2afeea..2f9bf5dbdc59d7808ec316a2186aff256ed0d0d4 100644 --- a/docs/airgap/README.md +++ b/docs/airgap/README.md @@ -21,6 +21,7 @@ This work was quickly developed to entertain certain paths for image packaging a * This is due to the fact that `/var/lib/registry` is a docker volume `deploy_images.sh` - Proof of concept script for image deployment + * Dependencies * `docker` - The docker CLI tool * `registry:package.tar.gz` - Modified `registry:2` container loaded with airgap images @@ -32,24 +33,25 @@ Hack commands: * `curl -sX GET http://localhost:5000/v2/_catalog | jq -r .` * Verify the catalog of a local running registry container -# Repository Packaging / Deployment +## Repository Packaging / Deployment -Airgap Deployment is a form of deployment which does not have any direct connection to the Internet or external network during cluster setup or runtime. During installation, bigbang requires certain images and git repos for installation. Since we will be installing in internet-disconnected environment, we need to perform extra steps to make sure these resources are available.
+Airgap Deployment is a form of deployment which does not have any direct connection to the Internet or external network during cluster setup or runtime. During installation, bigbang requires certain images and git repositories. Since we will be installing in an internet-disconnected environment, we need to perform extra steps to make sure these resources are available. ## Requirements and Prerequisites -### General Prereqs -- A kubernetes cluster with container mirroring support. There is a section below that covers mirroring in more detail with examples for supported clusters. -- BigBang(BB) [release artifacts](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/releases). -- Utility Server. +### General Prerequisites + +* A Kubernetes cluster with container mirroring support. There is a section below that covers mirroring in more detail with examples for supported clusters. +* BigBang (BB) [release artifacts](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/releases). +* Utility Server. -### Package Specific Prereqs +### Package Specific Prerequisites #### Elastic (Logging) Elastic requires a larger number of memory map areas than some OSes support by default. This can be changed at startup with a cloud config or later using sysctl. -``` +```shell MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="==MYBOUNDARY==" @@ -62,19 +64,15 @@ MIME-Version: 1.0 sysctl -w vm.max_map_count=262144 ``` - - ## Utility Server -Utility Server is an internet-disconected server that will host the private registry and git server that are required to deploy bigbang. It should include these commandline tools below; - -- `docker`: for running docker registry. - - `registry:2` image - - `openssl` for self-signed certificate. -- `curl`: For troubleshooting registry. -- `git`: for setup git server. - +Utility Server is an internet-disconnected server that will host the private registry and git server that are required to deploy bigbang.
It should include the following command-line tools: +* `docker`: for running the docker registry. + * `registry:2` image + * `openssl` for generating a self-signed certificate. +* `curl`: for troubleshooting the registry. +* `git`: for setting up the git server. ## Git Server @@ -84,49 +82,47 @@ As part of BB release, we provide `repositories.tar.gz` which contains all the You can follow the process below to set up git with `repositories.tar.gz` on the Utility Server. -- Create Git user and SSH key +* Create a Git user and SSH key -```bash -$ sudo useradd --create-home --shell /bin/bash git -$ ssh-keygen -b 4096 -t rsa -f ~/.ssh/identity -q -N "" +```shell +sudo useradd --create-home --shell /bin/bash git +ssh-keygen -b 4096 -t rsa -f ~/.ssh/identity -q -N "" ``` -- Create .SSH folder for `git` user +* Create a `.ssh` folder for the `git` user - ```bash - $ sudo su - git - $ mkdir -p .ssh && chmod 700 .ssh/ - $ touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys - $ exit + ```shell + sudo su - git + mkdir -p .ssh && chmod 700 .ssh/ + touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys + exit ``` -- Add client ssh key to `git` user `authorized_keys` +* Add the client SSH key to the `git` user's `authorized_keys` - ```bash - $ sudo su - $ cat /[client-public-key-path]/identity.pub >> /home/git/.ssh/authorized_keys - $ exit + ```shell + sudo su + cat /[client-public-key-path]/identity.pub >> /home/git/.ssh/authorized_keys + exit ``` -- Extract `repositories.tar.gz` to git user home directory +* Extract `repositories.tar.gz` to the git user's home directory - ```bash - $ sudo tar -xvf repositories.tar.gz --directory /home/git/ + ```shell + sudo tar -xvf repositories.tar.gz --directory /home/git/ ``` -- Add Hostname alias +* Add a Hostname alias - ```bash + ```shell PRIVATEIP=$( curl http://169.254.169.254/latest/meta-data/local-ipv4 ) sudo sed -i -e '1i'$PRIVATEIP' 'myhostname.com'\' /etc/hosts sudo sed -i -e '1i'$PRIVATEIP' 'host.k3d.internal'\' /etc/hosts #only for k3d ``` - - -- To test the client
key; +* To test the client key; - ```bash + ```shell GIT_SSH_COMMAND='ssh -i /[client-private-key-path] -o IdentitiesOnly=yes' git clone git@[hostname/IP]:/home/git/repos/[sample-repo] #For example; @@ -139,39 +135,36 @@ $ ssh-keygen -b 4096 -t rsa -f ~/.ssh/identity -q -N "" There are some cases where you do not have access to or cannot create an ssh user on the utility server. It is possible to run an ssh git server on a non-standard port using Docker. -- Create an SSH key +* Create an SSH key -```bash -$ ssh-keygen -b 4096 -t rsa -f ./identity -q -N "" +```shell +ssh-keygen -b 4096 -t rsa -f ./identity -q -N "" ``` -- Extract `repositories.tar.gz` to your working directory +* Extract `repositories.tar.gz` to your working directory -```bash -$ sudo tar -xvf repositories.tar.gz +```shell +sudo tar -xvf repositories.tar.gz ``` -- Start the provided Docker image (TODO: move this to an IB image when ready) +* Start the provided Docker image (TODO: move this to an IB image when ready) -```bash +```shell docker run -d -p 4001:22 -v ${PWD}/identity.pub:/home/git/.ssh/authorized_keys -v ${PWD}/repos:/home/git servicesengineering/gitshim:0.0.1 ``` You will now be able to test by checking out some of the code. 
- ```bash - GIT_SSH_COMMAND='ssh -i /[client-private-key-path] -o IdentitiesOnly=yes' git clone git@[hostname/IP]:[PORT]/home/git/repos/[sample-repo] - - #For example; - GIT_SSH_COMMAND='ssh -i ~/.ssh/identity -o IdentitiesOnly=yes' git clone git@host.k3d.internal:[PORT]/home/git/repos/bigbang - #checkout release branch - git checkout 1.3.0 - ``` - - +```shell +GIT_SSH_COMMAND='ssh -i /[client-private-key-path] -o IdentitiesOnly=yes' git clone git@[hostname/IP]:[PORT]/home/git/repos/[sample-repo] +# For example: +GIT_SSH_COMMAND='ssh -i ~/.ssh/identity -o IdentitiesOnly=yes' git clone git@host.k3d.internal:[PORT]/home/git/repos/bigbang +# Check out release branch +git checkout 1.3.0 +``` -## Private Registry +## Private Registry Images needed to run BB in your cluster are packaged as part of the release in `images.tar.gz`. You can see the list of required images in `images.txt`. In our airgap environment, we need to set up a registry that our cluster can pull required images from, or an existing cluster where we can copy images from `images.tar.gz` into. @@ -179,27 +172,25 @@ Images needed to run BB in your cluster is packaged as part of the release in `i To set up the registry, we will use `registry:2` to run a private registry with a self-signed certificate. -- First, untar `images.tar.gz`; +* First, untar `images.tar.gz`; -```bash +```shell tar -xvf images.tar.gz -C .
``` -- SCP `registry:2` tar file +* SCP `registry:2` tar file - ```bash - docker save -o registry2.tar registry:2 - docker save -o k3s.tar rancher/k3s:v1.20.5-rc1-k3s1 #check release matching version - scp registry2.tar k3s.tar ubuntu@hostname:~ #modify according to your environment - docker load -i registry2.tar #on your registry server - docker load -i k3s.tar - ``` - - +```shell +docker save -o registry2.tar registry:2 +docker save -o k3s.tar rancher/k3s:v1.20.5-rc1-k3s1 #check release matching version +scp registry2.tar k3s.tar ubuntu@hostname:~ #modify according to your environment +docker load -i registry2.tar #on your registry server +docker load -i k3s.tar +``` -- Use the script [registry.sh](./scripts/registry.sh) to create registry; +* Use the script [registry.sh](./scripts/registry.sh) to create registry; -```bash +```shell $ chmod +x registry.sh && sudo ./registry.sh Required information: @@ -240,25 +231,22 @@ To see images in the registry; ========================= curl https://myhostname.com:5443/v2/_catalog -k ========================= - ``` A folder is created with TLS certs that we are going to supply to our k8s cluster when pulling from the registry. You can ensure the images are now loaded in the registry; -```bash +```shell curl -k https://myhostname.com:5443/v2/_catalog {"repositories":["ironbank/anchore/engine/engine","ironbank/anchore/enterprise/enterprise","ironbank/anchore/enterpriseui/enterpriseui","ironbank/big-bang/argocd","ironbank/bitnami/analytics/redis-exporter","ironbank/elastic/eck-operator/eck-operator","ironbank/elastic/elasticsearch/elasticsearch","ironbank/elastic/kibana/kibana","ironbank/fluxcd/helm-controller","ironbank/fluxcd/kustomize-controller","ironbank/fluxcd/notification-controller","ironbank/fluxcd/source-controller","ironbank/gitlab/gitlab/alpine-certificates","ironbank/gitlab/gitlab/cfssl-self-sign","ironbank/gitlab/gitlab/gitaly",...] 
``` - - ### Mirroring - The images specified as part of the helm charts in BB are expected to be sourced from `registry1.dso.mil` hence this registry needs to be mirrored to the one setup above. To reduce the amount of work needed on the developer part, we will be taking advantage of container mirroring which is supported by `containerd` as well as `cri-o`. Check if your container runtime supports this as it is required for smooth developer experience when deploying BB. You should also check documentation on how your cluster supports passing these configuration to the runtime. For example, TKG and RKE2 support such configuration for `containerd` below to enable `registry.dso.mil` and `registry1.dso.mil` . +The images specified as part of the helm charts in BB are expected to be sourced from `registry1.dso.mil`; hence, this registry needs to be mirrored to the one set up above. To reduce the amount of work needed on the developer's part, we will take advantage of container mirroring, which is supported by `containerd` as well as `cri-o`. Check whether your container runtime supports this, as it is required for a smooth developer experience when deploying BB. You should also check the documentation on how your cluster supports passing this configuration to the runtime. For example, TKG and RKE2 support the `containerd` configuration below to enable `registry.dso.mil` and `registry1.dso.mil`. -​ You need to also configure your cluster with appropriate registry TLS. Please consult your cluster documentation on how to configure this. +You also need to configure your cluster with the appropriate registry TLS. Please consult your cluster documentation on how to configure this. If you need to handle mirroring manually, there is an example Ansible script provided that will update the containerd mirroring and restart the container runtimes for each node in your inventory.
(copy-containerd-config.yaml) @@ -320,31 +308,33 @@ configs: ca_file: "/etc/ssl/certs/registry1.pem" ``` - - ## Installing Big Bang -```bash -$ cd bigbang +```shell +cd bigbang ``` Install Flux 2 into the cluster using the provided artifacts. These are located in the scripts section of the Big Bang repository. - kubectl apply -f ./scripts/deploy/flux.yaml +```shell +kubectl apply -f ./scripts/deploy/flux.yaml +``` After Flux is up and running you are ready to deploy Big Bang. We will do this using Helm. To check whether Flux is ready, you can watch it reconcile the projects and follow the progress. -```bash +```shell watch kubectl get all -n flux-system ``` We need a namespace for our preparations and eventually for Big Bang to deploy into. - kubectl create ns bigbang +```shell +kubectl create ns bigbang +``` Installing Big Bang in an air gap environment currently uses the Helm charts from the **[Big Bang Repo](https://repo1.dso.mil/platform-one/big-bang/bigbang)**. @@ -352,48 +342,58 @@ All changes are modified in the custom [values.yaml](./examples/values.yaml) fil Change the hostname for the installation. It is currently set to the development domain: - # -- Domain used for BigBang created exposed services, can be overridden by individual packages. - hostname: bigbang.dev +```yaml +# -- Domain used for BigBang created exposed services, can be overridden by individual packages. +hostname: bigbang.dev +``` Add your registry URL. This will be the IP address or URL of the utility server or the registry in which you have loaded all of the Big Bang images (note: it is possible that your registry doesn't have a username or password; these will be ignored for insecure registries.): - # -- Single set of registry credentials used to pull all images deployed by BigBang.
- registryCredentials: - registry: 10.0.52.144 - username: "asdfasdfasdf" - password: "asdfasdfasdfasdfasdf" - email: "" +```yaml +# -- Single set of registry credentials used to pull all images deployed by BigBang. +registryCredentials: + registry: 10.0.52.144 + username: "asdfasdfasdf" + password: "asdfasdfasdfasdfasdf" + email: "" +``` For your Git repository you have two options for setting up the credentials. Option 1: Use an existing secret. - cd ~/.ssh - ssh-keygen -b 4096 -t rsa -f ~/.ssh/identity -q -N "" - ssh-keyscan <YOUR GIT URL HERE> ./known_hosts - - kubectl create secret generic -n bigbang ssh-credentials \ - --from-file=./identity \ - --from-file=./identity.pub \ - --from-file=./known_hosts +```shell +cd ~/.ssh +ssh-keygen -b 4096 -t rsa -f ~/.ssh/identity -q -N "" +ssh-keyscan <YOUR GIT URL HERE> > ./known_hosts + +kubectl create secret generic -n bigbang ssh-credentials \ + --from-file=./identity \ + --from-file=./identity.pub \ + --from-file=./known_hosts +``` -In the above example we created a new set of keys to use, you could also use an existing set of keys. These are just SSH keys, so any SSH key pair should work. The second command is going to create a known hosts file. There is no way to answer yes to the unknown hosts prompt, this alleviates that neeed. +In the above example we created a new set of keys to use; you could also use an existing set of keys. These are just SSH keys, so any SSH key pair should work. The `ssh-keyscan` command creates a known hosts file. There is no way to answer yes to the unknown hosts prompt; this alleviates that need. Once we have our private key, public key and the known hosts file, we place all of those into the secret using kubectl. This creates a BASE64 encoded secret of these values. !!! It is VERY important that the names of the files match above. So if you are using your own keypair change the names. Kubernetes uses the names of the files to create the keys inside of the secret.
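As a side note on the encoding: `kubectl create secret generic` stores each `--from-file` entry under a key named after the file, with the file's bytes base64-encoded. The encoding itself is plain base64, as a quick sketch shows (the sample string is arbitrary):

```shell
# The secret's .data values are plain base64 of the file contents,
# reversible with `base64 --decode` (sample string is arbitrary).
printf 'hello' | base64                      # prints aGVsbG8=
printf 'hello' | base64 | base64 --decode    # round-trips to hello
```

This is why the file names must match exactly: the file name becomes the key that Flux looks up inside the secret.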
If you want to create your secret and store it in the Kubernetes format, you can add `-o yaml --dry-run` to the above command to get that output. - kubectl create secret generic ssh-credentials \ - --from-file=./identity \ - --from-file=./identity.pub \ - --from-file=./known_hosts \ - -o yaml --dry-run +```shell +kubectl create secret generic ssh-credentials \ + --from-file=./identity \ + --from-file=./identity.pub \ + --from-file=./known_hosts \ + -o yaml --dry-run +``` -Once your secret is created you can add that value to the values.yaml that we were modifing above. +Once your secret is created you can add that value to the values.yaml that we were modifying above. - git: - # -- Existing secret to use for git credentials, must be in the appropriate format: https://toolkit.fluxcd.io/components/source/gitrepositories/#https-authentication - existingSecret: "ssh-credentials" +```yaml +git: + # -- Existing secret to use for git credentials, must be in the appropriate format: https://toolkit.fluxcd.io/components/source/gitrepositories/#https-authentication + existingSecret: "ssh-credentials" +``` ** Note that we substituted the name of the secret from the example with the name of the secret created above. This value is arbitrary, so if you created your secret with a different name, use that name instead. @@ -401,59 +401,64 @@ Option 2: Put the values of your ssh keys directly in the values.yaml file. You can also elect to just put the key values and the known hosts directly into the chart's values.yaml file. +```shell +ssh-keygen -q -N "" -f ./identity +ssh-keyscan <YOUR GIT URL HERE> > ./known_hosts - ssh-keygen -q -N "" -f ./identity - ssh-keyscan <YOUR GIT URL HERE> ./known_hosts - - cat identity - cat identity.pub - cat known_hosts +cat identity +cat identity.pub +cat known_hosts +``` Take the values from each of these files and place them in the correct fields in the values.yaml.
- git: - # -- SSH git credentials, privateKey, publicKey, and knownHosts must be provided - privateKey: | - -----BEGIN RSA PRIVATE KEY----- - MIIEowIBAAKCAQEAwcG6YKsqDC6728XZ7/8oiqnQaw3OkQnvMBrzvZjxd//PsEog - xVc+F9YqW4FIeTH57wN6JXIC4iMbE0QGd6+1yOoYiXkhi66tuO5FN+n4PeMnvKcC - JXtFWme4W/9YnEk/3sbNOgAMPlhMhTsudzLiXtHd3g+xCmNs1pdEIInaNadrolWn - QTM0krUCcC6VLCri7ae/pDloglX4cBJ+EfqFC94T6wUICPd1P7zYsy8WwIQtPhLT - lbY8CHj9iMlxlUdwdiXTlifqHsPgTh3X5e9Vptd+wi0+vfjvrXd/8SuM1q8xdQvY - bZ27AlhgfQsVl9WQrk/47xd3g430G4cqSbyhLQIDAQABAoIBAFlSu153akIFhXtz - Ad7fbcxHLxs7WUCKKOevdTCyApgEqbWm5uazKqAIjqxytHuS65shqjz7C5M/Beti - z+x7Z73BFiDCZBgmLNZ1mhmF1niJcTdKcvXel4FvEZHv7OTX7AcC9XfIr9xKDrTZ - LLmtDqkR7UvDRiX44iMnxzOM+bkDsHVva00e3IoSiOsQ4DKQ1l/HFseVlPIaGzfZ - Z2q0myUrBzlOYE06VJluhexsrrVDi7KdIfR8UGpN4kC5R/vOnOi7ycd4tfsZe2Wb - CjbKMTNYRFnVTt6/SXAhhFu+kz0FftDXNTIOhikVB8ryZ5iyNXszYqiptUI9VUZB - mQLdPuECgYEA9odVxlPUgSMLhbE5vD57jbtB6Cswy5ztAuyCHMABM4U6pVvFDSNb - 244y0ov0TzviaCZkb+0qrAM0ZSNItLQ1PmbeD0SnB4q/C8hDvVtpB+0SPBJMX8so - 49n1Wr5dH0axGMLaZXGmQ4DPEW/t0dNbYpN1Sxgn6KZPprISXigBufkCgYEAyTNe - kY3vaJ6Nla1pBVUmiK7hu1G3Ddihy1w56upHbOnDvJySuVOM5HRPm2ISFwW38/b5 - 5+cGKWnmu7UhFi1d8Iz3Kmr6kpfRxEDtbrk5rkgKJmTtduxAzBH8CTZfxuYIC5xS - 3fbcFpFYfrtE+3tjqlXJSOpLOuDqbA3uGwWFTdUCgYEAkSi9A8uGnAdDmJPzF/l+ - jMTPGOKdl7auBAO41S7lRi3Ti1xO2d6RDuVa3YiU8TakqIi6qQDwGFrGtiqhe+2E - UFsHs9vLsfArb8eaw1uYq5c7HpHzsJASYp+LDcR7VpgsXRUWvZa+vI6S3oSWdu9J - pvCGpxHxJdcPnWrKz/AknBkCgYAnej/U+W9/LJUFSFgx5qo/6Wh7M6ZiPh5I45it - ojhPg3KXgHU9jco4TSYNi+mWwNV+NfiE6wyHdbMDI6ARVOd4uoAIv6M9NDLBeifc - MNXDf3kWXXlGe0afg+va9uNGCH6NoKeVy8kVWIFvpFj9qxE8K8bp2qbWL6lveDA+ - 9w9X3QKBgGtkQi9OI7TyrloZ5F6/0/LnOJMGd/+e2cJUN6Pa10ZAjQh12JZ5fK7i - Vwh5l0P5CGQsuC96n4xPELoBnbTdr+y17f0o+kAuSDAsXnDf/Jjr0y/+uzL6YYCg - VD1yNitgcQw6oHKdTbGn4jni3/VemzONOz0uTB+/K7WhW2J7faaJ - -----END RSA PRIVATE KEY----- - publicKey: "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQDBwbpgqyoMLrvbxdnv/yiKqdBrDc6RCe8wGvO9mPF3/+wSiDFVz4X1ipbgUh3MfnvA2olcgLiIxsTRAZ8r7XI6hiJeSGLrq2123kU36fg94ye8pwIle0VaZ7hb/1icST/exs06AAw+WEyFOy53MuJe0d3e$" - knownHosts: "10.0.52.144 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPFZzQ6BmaswdhT8UWD5a/VYmZYrGv1qD3T+euf/gFjkPkeySYRIyM+Kg/UdHCHVBzc4aaFdBDmugHimZ4lbWpE=" +```yaml +git: + # -- SSH git credentials, privateKey, publicKey, and knownHosts must be provided + privateKey: | + -----BEGIN RSA PRIVATE KEY----- + MIIEowIBAAKCAQEAwcG6YKsqDC6728XZ7/8oiqnQaw3OkQnvMBrzvZjxd//PsEog + xVc+F9YqW4FIeTH57wN6JXIC4iMbE0QGd6+1yOoYiXkhi66tuO5FN+n4PeMnvKcC + JXtFWme4W/9YnEk/3sbNOgAMPlhMhTsudzLiXtHd3g+xCmNs1pdEIInaNadrolWn + QTM0krUCcC6VLCri7ae/pDloglX4cBJ+EfqFC94T6wUICPd1P7zYsy8WwIQtPhLT + lbY8CHj9iMlxlUdwdiXTlifqHsPgTh3X5e9Vptd+wi0+vfjvrXd/8SuM1q8xdQvY + bZ27AlhgfQsVl9WQrk/47xd3g430G4cqSbyhLQIDAQABAoIBAFlSu153akIFhXtz + Ad7fbcxHLxs7WUCKKOevdTCyApgEqbWm5uazKqAIjqxytHuS65shqjz7C5M/Beti + z+x7Z73BFiDCZBgmLNZ1mhmF1niJcTdKcvXel4FvEZHv7OTX7AcC9XfIr9xKDrTZ + LLmtDqkR7UvDRiX44iMnxzOM+bkDsHVva00e3IoSiOsQ4DKQ1l/HFseVlPIaGzfZ + Z2q0myUrBzlOYE06VJluhexsrrVDi7KdIfR8UGpN4kC5R/vOnOi7ycd4tfsZe2Wb + CjbKMTNYRFnVTt6/SXAhhFu+kz0FftDXNTIOhikVB8ryZ5iyNXszYqiptUI9VUZB + mQLdPuECgYEA9odVxlPUgSMLhbE5vD57jbtB6Cswy5ztAuyCHMABM4U6pVvFDSNb + 244y0ov0TzviaCZkb+0qrAM0ZSNItLQ1PmbeD0SnB4q/C8hDvVtpB+0SPBJMX8so + 49n1Wr5dH0axGMLaZXGmQ4DPEW/t0dNbYpN1Sxgn6KZPprISXigBufkCgYEAyTNe + kY3vaJ6Nla1pBVUmiK7hu1G3Ddihy1w56upHbOnDvJySuVOM5HRPm2ISFwW38/b5 + 5+cGKWnmu7UhFi1d8Iz3Kmr6kpfRxEDtbrk5rkgKJmTtduxAzBH8CTZfxuYIC5xS + 3fbcFpFYfrtE+3tjqlXJSOpLOuDqbA3uGwWFTdUCgYEAkSi9A8uGnAdDmJPzF/l+ + jMTPGOKdl7auBAO41S7lRi3Ti1xO2d6RDuVa3YiU8TakqIi6qQDwGFrGtiqhe+2E + UFsHs9vLsfArb8eaw1uYq5c7HpHzsJASYp+LDcR7VpgsXRUWvZa+vI6S3oSWdu9J + pvCGpxHxJdcPnWrKz/AknBkCgYAnej/U+W9/LJUFSFgx5qo/6Wh7M6ZiPh5I45it + ojhPg3KXgHU9jco4TSYNi+mWwNV+NfiE6wyHdbMDI6ARVOd4uoAIv6M9NDLBeifc + MNXDf3kWXXlGe0afg+va9uNGCH6NoKeVy8kVWIFvpFj9qxE8K8bp2qbWL6lveDA+ + 
9w9X3QKBgGtkQi9OI7TyrloZ5F6/0/LnOJMGd/+e2cJUN6Pa10ZAjQh12JZ5fK7i + Vwh5l0P5CGQsuC96n4xPELoBnbTdr+y17f0o+kAuSDAsXnDf/Jjr0y/+uzL6YYCg + VD1yNitgcQw6oHKdTbGn4jni3/VemzONOz0uTB+/K7WhW2J7faaJ + -----END RSA PRIVATE KEY----- + publicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBwbpgqyoMLrvbxdnv/yiKqdBrDc6RCe8wGvO9mPF3/+wSiDFVz4X1ipbgUh3MfnvA2olcgLiIxsTRAZ8r7XI6hiJeSGLrq2123kU36fg94ye8pwIle0VaZ7hb/1icST/exs06AAw+WEyFOy53MuJe0d3e$" + knownHosts: "10.0.52.144 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPFZzQ6BmaswdhT8UWD5a/VYmZYrGv1qD3T+euf/gFjkPkeySYRIyM+Kg/UdHCHVBzc4aaFdBDmugHimZ4lbWpE=" +``` ** Note the above values are all examples and are intentionally not operational keys. Then install Big Bang using Helm. +```shell helm upgrade -i bigbang chart -n bigbang --create-namespace -f values.yaml watch kubectl get gitrepositories,kustomizations,hr,po -A +``` ** Note that the --create-namespace isn't needed if you created it earlier, but it doesn't hurt anything. -You should see the diffent projects configure working through their reconciliation starting with "gatekeeper". +You should see the different projects working through their reconciliation, starting with "gatekeeper". ## Using 3rd Party Packages @@ -463,40 +468,47 @@ The third party guide assumes that you already have or are planning to install B Packaging your repository from Git -``` +```shell git clone --no-checkout https://repo1.dso.mil/platform-one/big-bang/apps/third-party/kafka.git && tar -zcvf kafka-repo.tar.gz kafka ``` This creates a tar of a full git repo without a checkout. After you have placed this git repo in its destination, you can get the files to view by running:
- git checkout +```shell +git checkout +``` ### Package your registry images -Package image -``` +Package the image + +```shell docker save -o image-name.tar image-name:image-version ``` Unpack the image on your utility server -``` + +```shell tar -xvf image-name.tar ``` Move the image to the location of your other images. Restart your local registry and it should pick up the new image. -``` + +```shell cd ./var/lib/registry docker run -p 25000:5000 -v $(pwd):/var/lib/registry registry:2 # verify the registry mounted correctly curl http://localhost:25000/v2/_catalog -k # a list of Big Bang images should be displayed; if not, check the volume mount of the registry ``` + Configure `./synker.yaml` Example -``` + +```yaml destination: registry: # Hostname of the destination registry to push to @@ -504,4 +516,5 @@ destination: # Port of the destination registry to push to port: 5000 ``` -If you are using runtime mirroring the new image should be available at the original location on your cluster. \ No newline at end of file + +If you are using runtime mirroring, the new image should be available at the original location on your cluster. diff --git a/docs/airgap/developer/developer.md b/docs/airgap/developer/developer.md index 47de786d7e677023d28dcf35b77ea0475e8cb15c..68c0dbcd394500e4f18aba93f0a6c7e1d51c3885 100644 --- a/docs/airgap/developer/developer.md +++ b/docs/airgap/developer/developer.md @@ -4,20 +4,18 @@ To test Airgap BigBang on k3d ## Steps -- Launch ec2 instance of size `c5.2xlarge` and ssh into the instance with at least 50GB storage. +- Launch an EC2 instance of size `c5.2xlarge` with at least 50GB of storage and SSH into it. - Install `k3d` and `docker` cli tools - Download `images.tar.gz`, `repositories.tar.gz` and `bigbang-version.tar.gz` from BigBang release.
- ```bash - $ curl -O https://umbrella-bigbang-releases.s3-us-gov-west-1.amazonaws.com/umbrella/1.3.0/repositories.tar.gz - $ curl -O https://umbrella-bigbang-releases.s3-us-gov-west-1.amazonaws.com/umbrella/1.3.0/images.tar.gz - $ sudo apt install -y net-tools + ```shell + curl -O https://umbrella-bigbang-releases.s3-us-gov-west-1.amazonaws.com/umbrella/1.3.0/repositories.tar.gz + curl -O https://umbrella-bigbang-releases.s3-us-gov-west-1.amazonaws.com/umbrella/1.3.0/images.tar.gz + sudo apt install -y net-tools ``` - - - Follow [Airgap Documentation](../README.md) to install Git server and Registry. - Once the Git Server and Registry are up, set up the k3d mirroring configuration `registries.yaml` @@ -39,46 +37,40 @@ To test Airgap BigBang on k3d ca_file: "/etc/ssl/certs/registry1.pem" ``` - - - Launch k3d cluster - ```bash - $ PRIVATEIP=$( curl http://169.254.169.254/latest/meta-data/local-ipv4 ) + ```shell + PRIVATEIP=$( curl http://169.254.169.254/latest/meta-data/local-ipv4 ) $ k3d cluster create --image "rancher/k3s:v1.20.5-rc1-k3s1" --api-port "33989" -s 1 -a 2 -v "${HOME}/registries.yaml:/etc/rancher/k3s/registries.yaml" -v /etc/machine-id:/etc/machine-id -v "${HOME}/certs/host.k3d.internal.public.pem:/etc/ssl/certs/registry1.pem" --k3s-server-arg "--disable=traefik" --k3s-server-arg "--disable=metrics-server" --k3s-server-arg "--tls-san=$PRIVATEIP" -p 80:80@loadbalancer -p 443:443@loadbalancer ``` - - -- Bock all egress with `iptables` except those going to instance IP before deploying bigbang by running [k3d_airgap.sh](./scripts/k3d_airgap.sh) +- Block all egress with `iptables` except traffic going to the instance IP before deploying bigbang by running [k3d_airgap.sh](./scripts/k3d_airgap.sh) - - - ```bash - $ sudo ./k3d_airgap.sh - $ curl https://$PRIVATEIP:5443/v2/_catalog -k #show return list of images - curl https://$PRIVATEIP:5443/v2/repositories/rancher/library-busybox/tags - ``` +```shell +sudo ./k3d_airgap.sh +curl https://$PRIVATEIP:5443/v2/_catalog -k # Should return a list of images +curl https://$PRIVATEIP:5443/v2/repositories/rancher/library-busybox/tags +``` -​ To permanently save the iptable rules across reboot, check out [link](https://unix.stackexchange.com/questions/52376/why-do-iptables-rules-disappear-when-restarting-my-debian-system) +To permanently save the iptables rules across reboots, check out [this link](https://unix.stackexchange.com/questions/52376/why-do-iptables-rules-disappear-when-restarting-my-debian-system) - Test that mirroring is working -```bash -$ curl -k -X GET https://$PRIVATEIP:5443/v2/rancher/local-path-provisioner/tags/list -$ kubectl run -i --tty test --image=registry1.dso.mil/rancher/local-path-provisioner:v0.0.19 --image-pull-policy='Always' --command sleep infinity -- sh -$ kubectl run test --image=registry1.dso.mil/rancher/library-busybox:1.31.1 --image-pull-policy='Always' --restart=Never --command sleep infinity -$ telnet default.kube-system.svc.cluster.local 443 -$ kubectl describe po test -$ kubectl delete po test +```shell +curl -k -X GET https://$PRIVATEIP:5443/v2/rancher/local-path-provisioner/tags/list +kubectl run -i --tty test --image=registry1.dso.mil/rancher/local-path-provisioner:v0.0.19 --image-pull-policy='Always' --command sleep infinity -- sh +kubectl run test --image=registry1.dso.mil/rancher/library-busybox:1.31.1 --image-pull-policy='Always' --restart=Never --command sleep infinity +telnet default.kube-system.svc.cluster.local 443 +kubectl describe po test +kubectl delete po test ``` - Test that the cluster cannot pull from outside the private registry.
-```bash -$ kubectl run test --image=nginx -$ kubectl describe po test #should fail -$ kubectl delete po test +```shell +kubectl run test --image=nginx +kubectl describe po test # Should fail +kubectl delete po test ``` -- Proceed to [bigbang deployment process](../README.md#installing-big-bang) \ No newline at end of file +- Proceed to [bigbang deployment process](../README.md#installing-big-bang) diff --git a/docs/airgap/scripts/synker.md b/docs/airgap/scripts/synker.md index 85f5b893c3d372ac75d6f35fe813ca633ca57349..9c14edf8e137d2f3b4a1cc0fed8ac61830a79847 100644 --- a/docs/airgap/scripts/synker.md +++ b/docs/airgap/scripts/synker.md @@ -8,24 +8,28 @@ ## Usage -Unpack -``` +Unpack + +```shell tar -xvf images.tar.gz ``` -Start a local registry based on the images we just unpacked -``` +Start a local registry based on the images we just unpacked. + +```shell cd ./var/lib/registry docker load < registry.tar docker run -p 25000:5000 -v $(pwd):/var/lib/registry registry:2 -# verify the registry mounted correctly +# Verify the registry mounted correctly curl http://localhost:25000/v2/_catalog -k -# a list of Big Bang images should be displayed, if not check the volume mount of the registry +# A list of Big Bang images should be displayed; if not, check the volume mount of the registry ``` + Configure `./synker.yaml` -Example -``` +Example: + +```yaml destination: registry: # Hostname of the destination registry to push to @@ -33,8 +37,10 @@ destination: # Port of the destination registry to push to port: 5000 ``` -If using Harbor reference the project name -``` + +If using Harbor, reference the project name. 
+ +```yaml destination: registry: # Hostname of the destination registry to push to @@ -42,8 +48,10 @@ destination: # Port of the destination registry to push to port: 443 ``` -If your destination repo requires credentials add them to ` ~/.docker/config.json` -``` + +If your destination repo requires credentials, add them to `~/.docker/config.json`. + +```json { "auths": { "registry.dso.mil": { @@ -61,11 +69,12 @@ If your destination repo requires credentials add them to ` ~/.docker/config.jso } } } -``` +``` **WARNING:** Verify your credentials with `docker login` before running synker. If your environment has login lockout after failed attempts, synker could trigger a lockout if your credentials are incorrect. -``` +```shell ./synker push ``` -Verify the images were pushed to your registry + +Verify the images were pushed to your registry. diff --git a/docs/4_configuration.md b/docs/configuration.md similarity index 100% rename from docs/4_configuration.md rename to docs/configuration.md diff --git a/docs/d_prerequisites.md b/docs/d_prerequisites.md deleted file mode 100644 index b6a570ef41b09883f6fde0dbe5bdc56103b31db1..0000000000000000000000000000000000000000 --- a/docs/d_prerequisites.md +++ /dev/null @@ -1,168 +0,0 @@ -# Appendix D - Big Bang Prerequisites - -BigBang is built to work on all the major kubernetes distributions. However, since distributions differ and may come -configured out the box with settings incompatible with BigBang, this document serves as a checklist of pre-requisites -for any distribution that may need it. - -> Clusters are sorted _alphabetically_ - -## All Clusters - -The following apply as prerequisites for all clusters - -### Storage - -BigBang assumes the cluster you're deploying to supports [dynamic volume provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/). Which ultimatley puts the burden on the cluster distro provider to ensure appropriate setup. 
In many cases, this is as simple as using the in-tree CSI drivers. Please refer to each supported distro's documentation for further details. - -In the future, BigBang plans to support the provisioning and management of a cloud agnostic container attached storage solution, but until then, on-prem deployments require more involved setup, typically supported through the vendor. - -#### Default `StorageClass` - -A default `StorageClass` capable of resolving `ReadWriteOnce` `PersistentVolumeClaims` must exist. An example suitable for basic production workloads on aws that supports a highly available cluster on multiple availability zones is provided below: - -```yaml -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: ebs - annotations: - storageclass.kubernetes.io/is-default-class: "true" -provisioner: kubernetes.io/aws-ebs -parameters: - type: gp2 -reclaimPolicy: Delete -allowVolumeExpansion: true -mountOptions: - - debug -volumeBindingMode: WaitForFirstConsumer -``` - -It is up to the user to ensure the default storage class' performance is suitable for their workloads, or to specify different `StorageClasses` when necessary. - -### `selinux` - -Additional pre-requisites are needed for istio on systems with selinux set to `Enforcing`. - -By default, BigBang will deploy istio configured to use `istio-init` (read more [here](https://istio.io/latest/docs/setup/additional-setup/cni/)). To ensure istio can properly initialize enovy sidecars without container privileged escalation permissions, several system kernel modules must be pre-loaded before installing BigBang: - -```bash -modprobe xt_REDIRECT -modprobe xt_owner -modprobe xt_statistic -``` - -### Load Balancing - -BigBang by default assumes the cluster you're deploying to supports dynamic load balancing provisioning. Specifically during the creation of istio and it's ingress gateways, which map to a "physical" load balancer usually provisioned by the cloud provider. 
- -In almost all cases, the distro provides this capability through in-tree cloud providers appropriately configured through the IAC on repo1. For on-prem environments, please consult with the vendors support for the recommended way of handling automatic load balancing configuration. - -If automatic load balancing provisioning is not support or not desired, the default BigBang configuration can be modified to expose istio's ingressgateway through `NodePorts` that can manually (or separate IAC) be mapped to a pre-existing loadbalancer. - -### Elasticsearch - -Elasticsearch deployed by BigBang uses memory mapping by default. In most cases, the default address space is too low and must be configured. - -To ensure unnecessary privileged escalation containers are not used, these kernel settings should be done before BigBang is deployed: - -```bash -sysctl -w vm.max_map_count=262144 -``` - -More information can be found from elasticsearch's documentation [here](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html#k8s-virtual-memory) - -## OpenShift - -1) When deploying BigBang, set the OpenShift flag to true. - -``` -# inside a values.yaml being passed to the command installing bigbang -openshift: true - -# OR inline with helm command -helm install bigbang chart --set openshift=true -``` - -2) Patch the istio-cni daemonset to allow containers to run privileged (AFTER istio-cni daemonset exists). -Note: it was unsuccessfully attempted to apply this setting via modifications to the helm chart. Online patching succeeded. 
- -``` -kubectl get daemonset istio-cni-node -n kube-system -o json | jq '.spec.template.spec.containers[] += {"securityContext":{"privileged":true}}' | kubectl replace -f - -``` - -3) Modify the OpenShift cluster(s) with the following scripts based on https://istio.io/v1.7/docs/setup/platform-setup/openshift/ - -``` -# Istio Openshift configurations Post Install -oc -n istio-system expose svc/istio-ingressgateway --port=http2 -oc adm policy add-scc-to-user privileged -z istio-cni -n kube-system -oc adm policy add-scc-to-group privileged system:serviceaccounts:logging -oc adm policy add-scc-to-group anyuid system:serviceaccounts:logging -oc adm policy add-scc-to-group privileged system:serviceaccounts:monitoring -oc adm policy add-scc-to-group anyuid system:serviceaccounts:monitoring - -cat <<\EOF >> NetworkAttachmentDefinition.yaml -apiVersion: "k8s.cni.cncf.io/v1" -kind: NetworkAttachmentDefinition -metadata: - name: istio-cni -EOF -oc -n logging create -f NetworkAttachmentDefinition.yaml -oc -n monitoring create -f NetworkAttachmentDefinition.yaml -``` - -## RKE2 - -Since BigBang makes several assumptions about volume and load balancing provisioning by default, it's vital that the rke2 cluster must be properly configured. The easiest way to do this is through the in tree cloud providers, which can be configured through the `rke2` configuration file such as: - -```yaml -# aws, azure, gcp, etc... -cloud-provider-name: aws - -# additionally, set below configuration for private AWS endpoints, or custom regions such as (T)C2S (us-iso-east-1, us-iso-b-east-1) -cloud-provider-config: ... -``` - -For example, if using the aws terraform modules provided [on repo1](https://repo1.dso.mil/platform-one/distros/rancher-federal/rke2/rke2-aws-terraform), setting the variable: `enable_ccm = true` will ensure all the necessary resources tags. 
- -In the absence of an in-tree cloud provider (such as on-prem), the requirements can be met through the instructions outlined in the [storage](#storage) and [load balancing](#load-balancing) prerequisites section above. - -### OPA Gatekeeper - -A core component to Bigbang is OPA Gatekeeper, which operates as an elevated [validating admission controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to audit and enforce various [constraints](https://github.com/open-policy-agent/frameworks/tree/master/constraint) on all requests sent to the kubernetes api server. - -By default, `rke2` will deploy with [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) that disable these type of deployments. However, since we trust Bigbang (and OPA gatekeeper), we can patch the default `rke2` psp's to allow OPA. - -Given a freshly installed `rke2` cluster, run the following commands _once_ before attempting to install BigBang. - -```bash -kubectl patch psp system-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}' -kubectl patch psp global-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}' -kubectl patch psp global-restricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}' -``` - -### Istio - -By default, BigBang will use `istio-init`, and `rke2` clusters will come with `selinux` in `Enforcing` mode, please see the [`istio-init`](#istio-pre-requisites-on-selinux-enforcing-systems) above for pre-requisites and warnings. 
- -### Sonarqube - -Sonarqube requires the following kernel configurations set at the node level: - -```bash -sysctl -w vm.max_map_count=524288 -sysctl -w fs.file-max=131072 -ulimit -n 131072 -ulimit -u 8192 -``` - -Another option includes running the init container to modify the kernel values on the host (this requires a busybox container run as root): - -```yaml -addons: - sonarqube: - values: - initSysctl: - enabled: true -``` -**This is not the recommended solution as it requires running an init container as privileged.** diff --git a/docs/5_deployment.md b/docs/deployment.md similarity index 98% rename from docs/5_deployment.md rename to docs/deployment.md index 97c4760da74d240b4ca1fc1569b9a3d90e4c92d3..416f553ed9f0bf9ad6e26bcf665ea08ca30746e5 100644 --- a/docs/5_deployment.md +++ b/docs/deployment.md @@ -16,27 +16,27 @@ Big Bang follows a [GitOps](https://www.weave.works/blog/what-is-gitops-really) 1. Before pushing changes to Git, validate all configuration is syntactically correct. - ```bash + ```shell # If everything is successful, YAML should be output kustomize build ./dev ``` 1. If you have not already done so, push configuration changes to Git - ```bash + ```shell git push ``` 1. Validate the Kubernetes context is correct - ```bash + ```shell # This should match the environment you intend to deploy kubectl config current-context ``` 1. Deploy the Big Bang manifest to the cluster - ```bash + ```shell kubectl apply -f dev.yaml ``` @@ -56,7 +56,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. Verify Flux is running - ```bash + ```shell kubectl get deploy -n flux-system # All resources should be in the 'Ready' state @@ -69,7 +69,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. 
Verify the environment was pulled from the Git repo - ```bash + ```shell kubectl get gitrepository -A # `environment-repo`: STATUS should be True @@ -79,7 +79,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. Verify the environment Kustomization properly worked - ```bash + ```shell kubectl get kustomizations -A # `environment`: READY should be True @@ -89,7 +89,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. Verify the ConfigMaps were deployed - ```bash + ```shell kubectl get configmap -l kustomize.toolkit.fluxcd.io/namespace -A # 'common' and 'environment' should exist @@ -100,7 +100,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. Verify the Secrets were deployed - ```bash + ```shell kubectl get secrets -l kustomize.toolkit.fluxcd.io/namespace -A # 'common-bb' and 'environment-bb' should exist @@ -111,7 +111,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. Verify the Big Bang Helm Chart was pulled - ```bash + ```shell kubectl get gitrepositories -A # 'bigbang' READY should be True @@ -121,7 +121,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. Verify the Big Bang Helm Chart was deployed - ```bash + ```shell kubectl get hr -A # 'bigbang' READY should be True @@ -131,7 +131,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. Verify Big Bang package Helm charts are pulled - ```bash + ```shell kubectl get gitrepository -A # The Git repository holding the Helm charts for each package can be seen in the URL column. @@ -149,7 +149,7 @@ The following commands will help you monitor the progress of the Big Bang deploy 1. 
Verify the packages get deployed - ```bash + ```shell # Use watch since it takes a long time to deploy watch kubectl get hr,deployments,po -A diff --git a/docs/developer/develop-package.md b/docs/developer/develop-package.md index 2312268d23d4066503c8d0c48b1a1e92e0520a74..fa24c158b868f9bb44b557f77a74bf87748df618 100644 --- a/docs/developer/develop-package.md +++ b/docs/developer/develop-package.md @@ -9,8 +9,8 @@ Package is the term we use for an application that has been prepared to be deplo 1. There are two ways to start a new Package. A. If there is no upstream helm chart, we create a helm chart from scratch. Here is a T3 video that demonstrates creating a new helm chart. Create a directory called "chart" in your repo, change to the chart directory, and scaffold a new chart in the chart directory - ```bash - # scaffold new helm chart + ```shell + # Scaffold new helm chart mkdir chart cd chart helm create name-of-your-application @@ -18,13 +18,13 @@ Package is the term we use for an application that has been prepared to be deplo B. If there is an existing upstream chart, we will use it and modify it. Essentially we create a "fork" of the upstream code. Use kpt to import the helm chart code into your repository. Note that kpt is not used to keep the Package code in sync with the upstream chart. It is a one-time pull just to document where the upstream chart code came from. Kpt will generate a Kptfile that has the details. Do not manually create the "chart" directory. The kpt command will create it. Here is an example from when the Gitlab Package was created. It is a good idea to push a commit "initial upstream chart with no changes" so you can refer back to the original code while you are developing. - ```bash + ```shell kpt pkg get https://gitlab.com/gitlab-org/charts/gitlab.git@v4.8.0 chart ``` 1. Run a helm dependency update that will download any external sub-chart dependencies. Commit any *.tgz files that are downloaded into the "charts" directory. 
The reason for doing this is that BigBang Packages must be able to be installed in an air-gap without any internet connectivity. - ```bash + ```shell helm dependency update ``` @@ -54,7 +54,7 @@ Package is the term we use for an application that has been prepared to be deplo 1. In the values.yaml replace public upstream images with IronBank hardened images. The image version should be compatible with the chart version. Here is a command to identify the images that need to be changed. - ```bash + ```shell # list images helm template <releasename> ./chart -n <namespace> -f chart/values.yaml | grep image: ``` @@ -78,7 +78,7 @@ Package is the term we use for an application that has been prepared to be deplo 1. Add CI pipeline test values to the Package. A Package should be able to be deployed by itself, independently from the BigBang chart. The Package pipeline takes advantage of this to run a Package pipeline test. Create a tests directory and a test yaml file at "tests/test-values.yaml". Set any values that are necessary for this test to pass. The pipeline automatically creates an image pull secret "private-registry-mil". All you need to do is reference that secret in your test values. You can view the pipeline status from the Repo1 console. Keep iterating on your Package code and the test code until the pipeline passes. Refer to the test-values.yaml from other Packages to get started. The repo structure must match what the CI pipeline code expects. - ``` + ```text |-- .gitlab-ci.yml |-- chart | |-- Chart.yml @@ -98,7 +98,7 @@ Package is the term we use for an application that has been prepared to be deplo 1. Add the following markdown files to complete the Package. Reference other Packages for examples of how to create them. - ``` + ```text CHANGELOG.md < standard history of changes made CODEOWNERS < list of the code maintainers. 
Minimum of two people from separate organizations CONTRIBUTING.md < instructions for how to contribute to the project @@ -116,12 +116,12 @@ Under Settings → Repository → Default Branch, ensure that main is selected. 1. Development Testing Cycle: Test your Package chart by deploying with helm. Test frequently so you don't pile up multiple layers of errors. The goal is for Packages to be deployable independently of the bigbang chart. Most upstream helm charts come with internal services like a database that can be toggled on or off. If available, use them for testing and CI pipelines. In some cases this is not an option. You can manually deploy required in-cluster services in order to complete your development testing. Here is an example of an in-cluster postgres database: - ```bash + ```shell helm repo add bitnami https://charts.bitnami.com/bitnami helm install postgres bitnami/postgresql -n postgres --create-namespace --set postgresqlPostgresPassword=postgres --set postgresqlPassword=postgres # test it kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=postgres" --command -- psql --host postgres-postgresql-headless.postgres.svc.cluster.local -U postgres -d postgres -p 5432 - # postgres commands + # Postgres commands \l < list databases \du < list users \q < quit @@ -129,7 +129,7 @@ Under Settings → Repository → Default Branch, ensure that main is selected. Here is an example of an in-cluster object storage service using MinIO (API-compatible with AWS S3 storage) - ```bash + ```shell helm repo add minio https://helm.min.io/ helm install minio minio/minio --set accessKey=myaccesskey --set secretKey=mysecretkey -n minio --create-namespace # test and configure it @@ -142,21 +142,21 @@ 
Here are the dev test steps you can iterate: - ```bash - # test that the helm chart templates successfully and examine the output to insure expected results + ```shell + # Test that the helm chart templates successfully and examine the output to ensure expected results helm template <releasename> ./chart -n <namespace> -f chart/values.yaml - # deploy with helm + # Deploy with helm helm upgrade -i <releasename> ./chart -n <namespace> --create-namespace -f chart/values.yaml - # conduct testing - # tear down + # Conduct testing + # Tear down helm delete <releasename> -n <namespace> - # manually delete the namespace to insure that everything is gone + # Manually delete the namespace to ensure that everything is gone kubectl delete ns <namespace> ``` 1. Wait to create a git tag release until integration testing with BigBang chart is completed. You will very likely discover more Package changes that are needed during BigBang integration. When you are confident that the Package code is complete, squash commits and rebase your development branch with the "main" branch. - ```bash + ```shell git rebase origin/main git reset $(git merge-base origin/main $(git rev-parse --abbrev-ref HEAD)) git add -A diff --git a/docs/developer/development-environment.md b/docs/developer/development-environment.md index c00566f19e16c79710d0be713feb5155e515f5c1..4c495bd01972726e959abd7576a3f7c3e2354b6b 100644 --- a/docs/developer/development-environment.md +++ b/docs/developer/development-environment.md @@ -172,7 +172,6 @@ scp -i ~/.ssh/your-ec2.pem ubuntu@$EC2_PUBLIC_IP:~/.kube/config ~/.kube/config Edit the kubeconfig on your workstation. Replace the server host `0.0.0.0` with the public IP of the EC2 instance. Test cluster access from your local workstation. 
- ```shell kubectl cluster-info kubectl get nodes diff --git a/docs/developer/package-integration.md b/docs/developer/package-integration.md index 9b0bab8c69016cb364e787a970c65deccad256c6..1ddc980329852189f18a1f5dd30d9b0dcefe0211 100644 --- a/docs/developer/package-integration.md +++ b/docs/developer/package-integration.md @@ -1,18 +1,21 @@ # Integrate a Package with BigBang helm chart -1. Make a branch from the BigBang chart repository master branch. You can automatically create a branch from the Repo1 Gitlab issue. Or, in some cases you might manually create the branch. You should name the branch with your issue number. If your issue number is 9999 then your branch name can be "9999-my-description". It is best practice to make branch names short and simple. -1. Create a directory for your package at chart/templates/<your-package-name> +1. Make a branch from the BigBang chart repository master branch. You can automatically create a branch from the Repo1 Gitlab issue. Or, in some cases you might manually create the branch. You should name the branch with your issue number. If your issue number is 9999 then your branch name can be "9999-my-description". It is best practice to make branch names short and simple. -1. Inside this folder will be 3 helm template files. You can copy one of the other package folders and tweak the code for your package. Gitlab is a good example to reference because it is one of the more complicated Packages. Note that the Istio VirtualService comes from the Package and is not created in the BigBang chart. The purpose of these helm template files is to create an easy-to-use spec for deploying supported applications. Reasonable and safe defaults are provided and any needed secrets are auto-created. We accept the trade off of easy deployment for complicated template code. More details are in the following steps. - ``` +1. Create a directory for your package at `chart/templates/<your-package-name>` + +1. 
Inside this folder will be four helm template files. You can copy one of the other package folders and tweak the code for your package. Gitlab is a good example to reference because it is one of the more complicated Packages. Note that the Istio VirtualService comes from the Package and is not created in the BigBang chart. The purpose of these helm template files is to create an easy-to-use spec for deploying supported applications. Reasonable and safe defaults are provided and any needed secrets are auto-created. We accept the trade-off of easy deployment for complicated template code. More details are in the following steps. + + ```shell gitrepository.yaml # Flux GitRepository. Is configured by BigBang chart values. helmrelease.yaml # Flux HelmRelease. Is configured by BigBang chart values. namespace.yaml # Contains the namespace and any needed secrets values.yaml # Implements all the BigBang customizations of the package and passthrough for values. ``` + 1. More details about values.yaml: Code reasonable and safe defaults but prioritize any user-defined passthrough values wherever this makes sense. Avoid duplicating tags that are provided in the upstream chart values. Instead, code reasonable defaults in the values.yaml template. The following is an example from Gitlab that handles SSO config. The code uses Package chart passthrough values if the user has entered them but otherwise defaults to the BigBang chart values or the Helm default values. Notice that the secret is not handled this way. The assumption is that if the user has enabled the BigBang SSO feature, the secret will be auto-created. In this case the user should not be overriding the secret. If the user wants to create their own secret they should not be enabling the BigBang SSO feature. - Note that helm does not handle any missing parent tags in the yaml tree. The 'if' statement and 'default' method throw 'nil' errors when parent tags are missing. 
The work-around is to inspect each level of the tree and assign an empty 'dict' if the value does not exist. Then you will be able to use 'hasKey' in your 'if' statements as shown below in this example from Gitlab. Having described all this, you should understand that coding conditional values is optional. The passthrough values will take priority regardless. But the overridden values will not show up in the deployed flux HelmRelease object if you don't code the conditional values. The value overrides will be obscured in the Package values secret. The only way to confirm that the overrides have been applied is to use "helm get values <releasename> -n bigbang" command on the deployed helm release. When the passthrough values show up in the HelmRelease object the Package configuration is much easier to see and verify. Use your own judgement on when to code conditional values. + Note that helm does not handle any missing parent tags in the yaml tree. The 'if' statement and 'default' method throw 'nil' errors when parent tags are missing. The workaround is to inspect each level of the tree and assign an empty 'dict' if the value does not exist. Then you will be able to use 'hasKey' in your 'if' statements as shown below in this example from Gitlab. Having described all this, you should understand that coding conditional values is optional. The passthrough values will take priority regardless. But the overridden values will not show up in the deployed flux HelmRelease object if you don't code the conditional values. The value overrides will be obscured in the Package values secret. The only way to confirm that the overrides have been applied is to use the `helm get values <releasename> -n bigbang` command on the deployed helm release. When the passthrough values show up in the HelmRelease object, the Package configuration is much easier to see and verify. Use your own judgement on when to code conditional values. ```yaml global: @@ -43,11 +46,13 @@ {{- end }} ``` + 1. 
More details about namespace.yaml: This template is where the code for secrets goes. Typically you will see secrets for imagePullSecret, sso, and database. These secrets are a BigBang chart enhancement. They are created conditionally based on what the user enables in the config. 1. Edit the chart/templates/values.yaml. Add your Package to the list of Packages. Just copy one of the others and change the name. This supports adding chart values from a secret. Pay attention to whether this is a core Package or an add-on package; the toYaml values are different for add-ons. This template allows a Package to add chart values that need to be encrypted in a secret. -1. Edit the chart/values.yaml. Add your Package to the bottom of the core section if a core package or addons section if an add-on. You can copy from one of the other packages and modify appropriately. Some possible tags underneath your package are [ enabled, git, sso, database, objectstorage ]. Avoid duplicating value tags from the upstream chart in the BigBang chart. The goal is not to cover every edge case. Instead code reasonable defaults in the helmrelease template and allow customer to override values in addons.<packageName>.values +1. Edit the `chart/values.yaml`. Add your Package to the bottom of the core section if a core package or addons section if an add-on. You can copy from one of the other packages and modify appropriately. Some possible tags underneath your package are [enabled, git, sso, database, objectstorage]. Avoid duplicating value tags from the upstream chart in the BigBang chart. The goal is not to cover every edge case. Instead, code reasonable defaults in the helmrelease template and allow the customer to override values in `addons.<packageName>.values` + ```yaml addons: mypackage: @@ -76,7 +81,7 @@ values: {} ``` -1. Edit tests/ci/k3d/values.yaml. These are the settings that the CI pipeline uses to run a deployment test. Set your Package to be enabled and add any other necessary values. 
Where possible reduce the number of replicas to a minumum to reduce straing on the CI infrastructure. When you commit your code the pipeline will run. You can view the pipeline in the Repo1 Gitlab console. Fix any errors in the pipeline output. The pipeline automatically runs a "smoke" test. It deploys bigbang on a k3d cluster using the test values file. +1. Edit tests/ci/k3d/values.yaml. These are the settings that the CI pipeline uses to run a deployment test. Set your Package to be enabled and add any other necessary values. Where possible, reduce the number of replicas to a minimum to reduce strain on the CI infrastructure. When you commit your code, the pipeline will run. You can view the pipeline in the Repo1 Gitlab console. Fix any errors in the pipeline output. The pipeline automatically runs a "smoke" test. It deploys bigbang on a k3d cluster using the test values file. 1. Add your package's name to the ORDERED_HELMRELEASES list in scripts/deploy/02_wait_for_helmreleases.sh. @@ -86,51 +91,54 @@ 1. When you are done developing the BigBang chart features for your Package, make a merge request in "Draft" status and add a label corresponding to your package name (must match the name in `values.yaml`). Also add any labels for dependencies of the package that are NOT core apps. The merge request will start a pipeline and use the labels to determine which addons to deploy. Fix any errors that appear in the pipeline. When the pipeline has passed and the MR is ready, take it out of "Draft" and add the `status::review` label. Address any issues raised in the merge request comments. -# BigBang Development and Testing Cycle +## BigBang Development and Testing Cycle + There are two ways to test BigBang: imperative, or GitOps with Flux. Your initial development can start with imperative testing. But you should finish with GitOps to make sure that your code works with Flux. 1. **Imperative:** you can manually deploy bigbang with the helm command line. 
With this method you can test local code changes without committing to a repository. Here are the steps that you can iterate with "code a little, test a little". From the root of your local bigbang repo: - ```bash - # deploy with helm while pointing to your test values files - # bigbang packages should create any needed secrets from the chart values - # if you have the values file encrypted with sops, temporarily decrypt it + + ```shell + # Deploy with helm while pointing to your test values files + # Bigbang packages should create any needed secrets from the chart values + # If you have the values file encrypted with sops, temporarily decrypt it helm upgrade -i bigbang ./chart -n bigbang --create-namespace -f ../customers/template/dev/configmap.yaml -f ./chart/ingress-certs.yaml -f ../customers/template/dev/registry-values.enc.yaml - # conduct testing - # if you make code changes you can run another helm upgrade to pick up the new changes + # Conduct testing + # If you make code changes you can run another helm upgrade to pick up the new changes helm upgrade -i bigbang ./chart -n bigbang --create-namespace -f ../customers/template/dev/configmap.yaml -f ./chart/ingress-certs.yaml -f ../customers/template/dev/registry-values.enc.yaml - # tear down + # Tear down helm delete bigbang -n bigbang - # helm delete will not delete the bigbang namespace + # Helm delete will not delete the bigbang namespace kubectl delete ns bigbang - # istio namespace will be stuck in "finalizing". So run the script to delete it. + # Istio namespace will be stuck in "finalizing". So run the script to delete it. hack/remove-ns-finalizer.sh istio-system ```
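The `hack/remove-ns-finalizer.sh` helper is not reproduced in this excerpt; a typical implementation of that pattern fetches the stuck namespace as JSON, empties `spec.finalizers` (this is the step `jq` is usually used for), and submits the result to the namespace's `/finalize` API endpoint. Below is a minimal Python sketch of just the JSON transformation — the namespace content is an illustrative assumption, and no cluster call is made here:

```python
import json


def strip_namespace_finalizers(ns_json: str) -> str:
    """Return the namespace JSON with spec.finalizers emptied.

    Mirrors the jq step of a typical remove-ns-finalizer script, e.g.:
      kubectl get ns NAME -o json | jq '.spec.finalizers = []' \
        | kubectl replace --raw /api/v1/namespaces/NAME/finalize -f -
    (endpoint shown for illustration only; this function does not call it)
    """
    ns = json.loads(ns_json)
    ns.setdefault("spec", {})["finalizers"] = []
    return json.dumps(ns)


if __name__ == "__main__":
    # A namespace stuck in Terminating, reduced to the relevant fields
    stuck = json.dumps({
        "kind": "Namespace",
        "metadata": {"name": "istio-system"},
        "spec": {"finalizers": ["kubernetes"]},
        "status": {"phase": "Terminating"},
    })
    cleaned = json.loads(strip_namespace_finalizers(stuck))
    print(cleaned["spec"]["finalizers"])  # []
```

Clearing finalizers forcibly tells the API server to stop waiting on cleanup, so use this only when the namespace is genuinely wedged, as in the istio-system tear-down case above.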
There is a [customer template repository](https://repo1.dso.mil/platform-one/big-bang/customers/template) that has an example template for how to deploy using BigBang. You can create a branch from one of the other developer's branch or start clean from the master branch. Make the necessary modifications as explained in the README.md. The setup information is not repeated here. This is a public repo so DO NOT commit unencrypted secrets. Before committing code it is a good idea to manually run "helm template" and a "helm install" with dry run. This will reveal many errors before you make a commit. Here are the steps you can iterate: - ```bash - # verify chart code before committing +2. **GitOps with Flux:** You can deploy your development code the same way a customer would deploy using GitOps. You must commit any code changes to your development branches because this is how GitOps works. There is a [customer template repository](https://repo1.dso.mil/platform-one/big-bang/customers/template) that has an example template for how to deploy using BigBang. You can create a branch from another developer's branch or start clean from the master branch. Make the necessary modifications as explained in the README.md. The setup information is not repeated here. This is a public repo so DO NOT commit unencrypted secrets. Before committing code it is a good idea to manually run `helm template` and a `helm install` with dry run. This will reveal many errors before you make a commit.
Here are the steps you can iterate: + + ```shell + # Verify chart code before committing helm template bigbang ./chart -n bigbang -f ../customers/template/dev/configmap.yaml --debug helm install bigbang ./chart -n bigbang -f ../customers/template/dev/configmap.yaml --dry-run - # commit and push your code - # deploy your bigbang template + # Commit and push your code + # Deploy your bigbang template kubectl apply -f dev/bigbang.yaml - # monitor rollout + # Monitor rollout watch kubectl get pod,helmrelease -A - # conduct testing - # tear down + # Conduct testing + # Tear down kubectl delete -f dev/bigbang.yaml - # istio namespace will be stuck in "finalizing". So run the script to delete it. You will need 'jq' installed + # Istio namespace will be stuck in "finalizing". So run the script to delete it. You will need 'jq' installed hack/remove-ns-finalizer.sh istio-system - # if you have pushed code changes before the tear down, occasionally the bigbang deployments are not terminated - # because Flux has not had enough time to reconcile the helmreleases - # re-deploy bigbang + # If you have pushed code changes before the tear down, occasionally the bigbang deployments are not terminated because Flux has not had enough time to reconcile the helmreleases + + # Re-deploy bigbang kubectl apply -f dev/bigbang.yaml - # run the sync script. + # Run the sync script. 
hack/sync.sh - # tear down + # Tear down kubectl delete -f dev/bigbang.yaml hack/remove-ns-finalizer.sh istio-system ``` diff --git a/docs/3_encryption.md b/docs/encryption.md similarity index 100% rename from docs/3_encryption.md rename to docs/encryption.md diff --git a/docs/2_getting_started.md b/docs/getting_started.md similarity index 100% rename from docs/2_getting_started.md rename to docs/getting_started.md diff --git a/docs/guides/README.md b/docs/guides/README.md new file mode 100644 index 0000000000000000000000000000000000000000..ed2472135df444ed4b0f3c94a7891346adedfa7e --- /dev/null +++ b/docs/guides/README.md @@ -0,0 +1,9 @@ +# Guides + +## deployment_scenarios + +Beginner-friendly how-to guides are intended to be added to these subfolders over time. + +## prerequisites + +Beginner-friendly comprehensive explanations of prerequisites that are generically applicable to multiple scenarios. diff --git a/docs/guides/deployment_scenarios/quickstart.md b/docs/guides/deployment_scenarios/quickstart.md new file mode 100644 index 0000000000000000000000000000000000000000..d48710531e0b630aea2b6a3d029e8c8a52666d5b --- /dev/null +++ b/docs/guides/deployment_scenarios/quickstart.md @@ -0,0 +1,232 @@ +# Big Bang Quick Start + +## Overview + +This guide is designed to offer an easy-to-deploy preview of BigBang, so new users can get to a hands-on state as quickly as possible. +Note: The current implementation of the Quick Start limits the ability to customize the BigBang Deployment. It is doing a GitOps-defined deployment from a repository you don't control. + +## Step 1.
Provision a Virtual Machine + +The following requirements are recommended for Demo Purposes: + +* 1 Virtual Machine with 64GB RAM, 16-Core CPU (This will become a single node cluster) +* Ubuntu Server 20.04 LTS (Ubuntu comes up slightly faster than RHEL, although both work fine) +* Network connectivity to said Virtual Machine (provisioning with a public IP and a security group locked down to your IP should work. Otherwise a Bare Metal server or even a vagrant box Virtual Machine configured for remote ssh works fine.) +Note: The quick start repository's `init-k3d.sh` starts up k3d using flags to disable the default ingress controller and map the virtual machine's port 443 to a Docker-ized Load Balancer's port 443, which will eventually map to the istio ingress gateway. That along with some other things (like leveraging a Let's Encrypt Free HTTPS Wildcard Certificate) are done to lower the prerequisites barrier to make basic demos easier. + +## Step 2. SSH into machine and install prerequisite software + +1. Setup SSH + +```shell +# [User@Laptop:~] +touch ~/.ssh/config +chmod 600 ~/.ssh/config +cat ~/.ssh/config +temp="""########################## +Host k3d + Hostname 1.2.3.4 #IP Address of k3d node + IdentityFile ~/.ssh/bb-onboarding-attendees.ssh.privatekey #ssh key authorized to access k3d node + User ubuntu + StrictHostKeyChecking no #Useful for vagrant where you'd reuse IP from repeated tear downs +#########################""" +echo "$temp" | sudo tee -a ~/.ssh/config #tee -a, appends to preexisting config file +``` + +1. Install Docker + +```shell +# [admin@Laptop:~] +ssh k3d +# [ubuntu@k3d:~] +curl -fsSL https://get.docker.com | bash + +docker run hello-world +# docker: Got permission denied while trying to connect to the Docker daemon socket at +# unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: +# dial unix /var/run/docker.sock: connect: permission denied.See 'docker run --help'.
+ +sudo docker run hello-world +# If docker only works when you use sudo, you need to add your non-root user to the docker group. + +sudo groupadd docker +sudo usermod --append --groups docker $USER + +# When users are added to a group in Linux, a new process needs to spawn in order for the new permissions to be recognized, due to a Linux security feature preventing running processes from gaining additional privileges on the fly. (log out and back in is the sure-fire method) + +exit + +# [admin@Laptop:~] +ssh k3d + +# [ubuntu@k3d:~] +docker run hello-world # validate install was successful +``` + +1. Install k3d + +```shell +# [ubuntu@k3d:~] +wget -q -P /tmp https://github.com/rancher/k3d/releases/download/v3.0.1/k3d-linux-amd64 +mv /tmp/k3d-linux-amd64 /tmp/k3d +sudo chmod +x /tmp/k3d +sudo mv -v /tmp/k3d /usr/local/bin/ +k3d --version # validate install was successful +``` + +1. Install Kubectl + +```shell +# [ubuntu@k3d:~] +wget -q -P /tmp "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" +sudo chmod +x /tmp/kubectl +sudo mv /tmp/kubectl /usr/local/bin/kubectl +sudo ln -s /usr/local/bin/kubectl /usr/local/bin/k #alternative to alias k=kubectl in ~/.bashrc +k version # validate install was successful +``` + +1. Install Terraform + +```shell +# [ubuntu@k3d:~] +wget https://releases.hashicorp.com/terraform/0.14.9/terraform_0.14.9_linux_amd64.zip +sudo apt update && sudo apt install unzip +unzip terraform* +sudo mv terraform /usr/local/bin/ +terraform version # validate install was successful +``` + +1.
Run Operating System Pre-configuration + +```shell +# [ubuntu@k3d:~] +# For ECK +sudo sysctl -w vm.max_map_count=262144 + +# Turn off all swap devices and files (won't persist across reboots) +sudo swapoff -a +# For swap to stay off you can remove any references found via +# cat /proc/swaps +# cat /etc/fstab + +# For Sonarqube (this higher value also satisfies the ECK setting above) +sudo sysctl -w vm.max_map_count=524288 +sudo sysctl -w fs.file-max=131072 +ulimit -n 131072 +ulimit -u 8192 +``` + +## Step 3. Clone the Big Bang Quick Start Repo + +<https://repo1.dso.mil/platform-one/quick-start/big-bang#big-bang-quick-start> + +1. Clone the repo + +```shell +# [ubuntu@k3d:~] +cd ~ +git clone https://repo1.dso.mil/platform-one/quick-start/big-bang.git +cd ~/big-bang +``` + +1. Look up your IronBank image pull credentials from <https://registry1.dso.mil> + + 1. In a web browser go to <https://registry1.dso.mil> + 2. Login via OIDC provider + 3. Top right of the page, click your name --> User Profile + 4. Your image pull username is labeled "Username" + 5. Your image pull password is labeled "CLI secret" + + (Note: The image pull credentials are tied to the life cycle of an OIDC token which expires after 30 days, so if 30 days have passed since your last login to IronBank, the credentials will stop working until you re-login to the <https://registry1.dso.mil> GUI) + +1. Verify your credentials work + +```shell +# [ubuntu@k3d:~/big-bang] +docker login https://registry1.dso.mil +# It'll prompt for "Username: " (type it out) +# It'll prompt for "Password: " (copy paste it, or blind type it as it will be masked) +# Login Succeeded +``` + +1. Create a terraform.tfvars file with your registry1 credentials in your copy of the cloned repo + +```shell +# [ubuntu@k3d:~/big-bang] +vi ~/big-bang/terraform.tfvars +``` + +* Add the following contents to the newly created file + +```plaintext +registry1_username="REPLACE_ME" +registry1_password="REPLACE_ME" +``` + +## Step 4.
Follow the deployment directions on the Big Bang Quick Start Repo + +[Link to Big Bang Quick Start Repo](https://repo1.dso.mil/platform-one/quick-start/big-bang#big-bang-quick-start) + +## Step 5. Add the LEF HTTPS Demo Certificate + +* A Let's Encrypt Free HTTPS Wildcard Certificate for `*.bigbang.dev` is included in the repo; we'll apply it from a regularly updated upstream source of truth. + +```shell +# [ubuntu@k3d:~/big-bang] +# Download Encrypted HTTPS Wildcard Demo Cert +curl https://repo1.dso.mil/platform-one/big-bang/bigbang/-/raw/master/hack/secrets/ingress-cert.yaml > ~/ingress-cert.enc.yaml + +# Download BigBang's Demo GPG Key Pair to a local file +curl https://repo1.dso.mil/platform-one/big-bang/bigbang/-/raw/master/hack/bigbang-dev.asc > /tmp/demo-bigbang-gpg-keypair.dev + +# Import the Big Bang Demo Key Pair into keychain +gpg --import /tmp/demo-bigbang-gpg-keypair.dev + +# Install sops (Secret Operations CLI tool by Mozilla) +wget https://github.com/mozilla/sops/releases/download/v3.6.1/sops_3.6.1_amd64.deb +sudo dpkg -i sops_3.6.1_amd64.deb + +# Decrypt and apply to the cluster +sops --decrypt ~/ingress-cert.enc.yaml | kubectl apply -f - --namespace=istio-system +``` + +## Step 6.
Edit your Laptop's hosts file to access the web pages hosted on the BigBang Cluster + +```shell +# [ubuntu@k3d:~/big-bang] +# Short version of: kubectl get virtualservices --all-namespaces +$ k get vs -A + +NAMESPACE NAME GATEWAYS HOSTS AGE +monitoring monitoring-monitoring-kube-alertmanager ["istio-system/main"] ["alertmanager.bigbang.dev"] 8d +monitoring monitoring-monitoring-kube-grafana ["istio-system/main"] ["grafana.bigbang.dev"] 8d +monitoring monitoring-monitoring-kube-prometheus ["istio-system/main"] ["prometheus.bigbang.dev"] 8d +argocd argocd-argocd-server ["istio-system/main"] ["argocd.bigbang.dev"] 8d +kiali kiali ["istio-system/main"] ["kiali.bigbang.dev"] 8d +jaeger jaeger ["istio-system/main"] ["tracing.bigbang.dev"] 8d +``` + +* Linux/Mac Users: + +```shell +# [admin@Laptop:~] +sudo vi /etc/hosts +``` + +* Windows Users: + +1. Right click Notepad -> Run as Administrator +1. Open C:\Windows\System32\drivers\etc\hosts + +* Add the following entries to the hosts file, where 1.2.3.4 = k3d virtual machine's IP + +```plaintext +1.2.3.4 alertmanager.bigbang.dev +1.2.3.4 grafana.bigbang.dev +1.2.3.4 prometheus.bigbang.dev +1.2.3.4 argocd.bigbang.dev +1.2.3.4 kiali.bigbang.dev +1.2.3.4 tracing.bigbang.dev +``` + +* Remember to revert your hosts file edits when done diff --git a/docs/guides/prerequisites/README.md b/docs/guides/prerequisites/README.md new file mode 100644 index 0000000000000000000000000000000000000000..5a884ca5b39cd56fe97bba6577bdab64c4148c9d --- /dev/null +++ b/docs/guides/prerequisites/README.md @@ -0,0 +1,23 @@ +# Prerequisites: +* How the Prerequisites docs are organized: + * This README.md is meant to be a high-level overview of prerequisites. + * /docs/guides/prerequisites/(some_topic).md files are meant to offer more specific guidance on prerequisites while staying generic. + * The /docs/guides/deployment_scenarios/(some_topic).md files may also offer additional details on prerequisites specific to the scenario.
+* Prerequisites vary depending on deployment scenario, thus a table is used to give an overview. +* Note for future edits: The following table was generated using tablesgenerator.com/markdown_tables. When edits are needed, copy the table's raw text contents, visit tablesgenerator.com/markdown_tables, and use File -> Paste table data. + +| Prerequisites (rows) vs Deployment Scenarios (columns) | QuickStart | Internet Connected | Internet Disconnected | +|---|---|---|---| +| **[OS Preconfigured](os_preconfiguration.md) and Prehardened** <br>(OS and level of hardening required depends on AO) | Prerequisite <br>Recommended: A non-hardened single VM with 8 cores and 64 GB RAM <br>Minimum: 4 cores and 16 GB RAM (requires overriding helm values) | Prerequisite <br>(CSPs usually have marketplaces with pre-hardened VM images) | Prerequisite <br>(configured to AO's risk tolerance / mission needs) | +| **[Kubernetes Distribution Preconfigured](kubernetes_preconfiguration.md) to Best Practices and Prehardened** <br>(Any CNCF Kubernetes
Distribution will work as long as an AO signs off on it) | k3d is recommended for demos (It's quick to set up, ships with a dockerized LB, works on every cloud, and bare metal) | Prerequisite <br>(https://repo1.dso.mil/platform-one/distros) | Prerequisite <br>(users are responsible for airgap image import of container images needed by chosen Kubernetes Distribution) | +| **Default Storage Class** <br>(for Dynamic PVCs; the SC needs to support RWX (Read Write Many) Access Mode to support HA deployment of all BigBang AddOns) | Presatisfied* <br>(*if using k3d, which has dynamic local volume storage class baked in) | Prerequisite <br>It's recommended that users start with a CSP specific or Kubernetes Distro provided storage class | Prerequisite <br>[(These docs compare Cloud Agnostic Storage Solutions)](../../k8s-storage/README.md#kubernetes-storage-options) | +| **Support for Automated Provisioning of Service Type Load Balancer** <br>(is recommended) | Presatisfied* <br>(*if using k3d, which ships with the ability to add flags to treat the VM's port 443 as Kubernetes Service of Type LB's port 443, automation in the quick start repo leverages these flags) | Prerequisite <br>Kubernetes Distributions usually have CSP specific flags you can pass to the kube-apiserver to support auto provisioning of CSP LBs. | Prerequisite <br>[(See docs for guidance on bare metal and no IAM scenarios)](kubernetes_preconfiguration.md#service-of-type-load-balancer) | +| **Access to Container Images** <br>(IronBank Image Pull Credentials or AirGap import from .tar.gz's) | Prerequisite <br>(Anyone can go to login.dso.mil, and self-register against P1's SSO. That can be used to login to registry1.dso.mil to generate image pull credentials for the QuickStart) | BigBang customers are recommended to ask their BB Customer Liaison for an IronBank Image pull robot account, which lasts 6 months.
| Prerequisite <br>(Airgap import of container images, [BigBang Releases](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/releases) includes a .tar.gz of IronBank Images) | +| **Customer Controlled Private Git Repo** <br>(for GitOps, the Cluster needs network access & Credentials for Read Access to said Git Repo) | Presatisfied <br>(the turnkey demo points to public repo1, but you won't be able to customize it) | Prerequisite <br>(or follow Air gap docs) | Prerequisite <br>(Air gap docs assist with provisioning an ssh based git repo) | +| **Encrypt Secrets as Code** <br>(Use SOPS + CSP KMS or PGP to encrypt secrets that need to be stored in the GitRepo) | Presatisfied <br>(Demo Repo has mock secrets encrypted with a demo PGP public encryption key) | Prerequisite <br>(CSP KMS and IAM is more secure than a gpg key pair) | Prerequisite <br>(Use CSP KMS if available, PGP works universally, [Flux requires the private PGP key to not have a passphrase](https://toolkit.fluxcd.io/guides/mozilla-sops/#generate-a-gpg-key)) | +| **Install and Configure Flux** <br>(Flux needs Git Repo Credentials & CSP IAM rights for KMS decryption or a kubernetes secret containing a private PGP decryption key) | Presatisfied <br>(Demo Public Repo doesn't require RO Credentials, the demo PGP private decryption key is hosted cleartext in the repo) | Prerequisite <br>(see BigBang docs, [flux docs](https://toolkit.fluxcd.io/components/source/gitrepositories/#spec-examples) are also a good resource for this) | Prerequisite <br>(see BigBang docs) | +| **HTTPS Certificates** | Presatisfied <br>(Demo Public Repo contains a Let's Encrypt Free (public internet recognised certificate authority) HTTPS Certificate for *.bigbang.dev, alternatively mkcert can be used to generate demo certs for arbitrary DNS names that will only be trusted by the laptop that provisioned the mkcert) | Prerequisite <br>(HTTPS cert is provided by consumer) | Prerequisite <br>(HTTPS cert is provided by consumer) | +| **DNS** 
| Edit your Laptop's host file (/etc/hosts, C:\Windows\System32\drivers\etc\hosts), or use something like AWS VPC Private DNS and [sshuttle](https://github.com/sshuttle/sshuttle) to point to host VM (if using k3d) | Prerequisite <br>(point DNS names to Layer 4 CSP LB) | Prerequisite <br>(point DNS names to L4 LB) | +| **HTTPS Certificate, DNS Name, and hostnames in BigBang's helm values must match** <br>(in order for Ingress to work correctly.) | QuickStart leverages `*.bigbang.dev` HTTPS cert, and the BigBang Helm Chart's values.yaml's hostname defaults to bigbang.dev; you just need to ensure multiple hosts file entries like "grafana.bigbang.dev" exist, or if you have access to DNS a wildcard entry to map CNAME `*.bigbang.dev` to k3d VM's IP | Prerequisite <br>(update bigbang helm values in git repo so hostnames match HTTPS cert) | Prerequisite <br>(update bigbang helm values in git repo so hostnames match HTTPS cert) | +| **SSO Identity Provider** <br>(Prerequisite for SSO Authentication Proxy feature) | Presatisfied* <br>(*depending on which quick start config is used). There exists a demo SSO config that leverages P1's CAC-enabled SSO; it's coded to only work for localhost to balance turnkey demo functionality against security concerns. | Prerequisite <br>(You don't have to use Keycloak, you can use any OIDC/SAML Identity Provider) ([Customer Deployable Keycloak is a feature coming soon](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/issues/291)) | Prerequisite* <br>(Install your own Keycloak cluster, leverage a pre-existing airgap SSO solution, or configure to not use SSO if not needed for use case) | +| **Ops Team to integrate, configure, and maintain BigBang** <br>(needed skillsets: DevOps IaC/CaC all the things, automate most of the things, document the rest, Linux administration, productionalization and maintenance of a Kubernetes Cluster.) | QuickStart Demo is designed to be self-service. 
| Prerequisite <br>(BigBang Customer Integration Engineers are available to help long-term Ops teams.) | Prerequisite | diff --git a/docs/guides/prerequisites/default_storageclass.md b/docs/guides/prerequisites/default_storageclass.md new file mode 100644 index 0000000000000000000000000000000000000000..3d63c5672a38fc1b98e248db5e59de879aaeaf0f --- /dev/null +++ b/docs/guides/prerequisites/default_storageclass.md @@ -0,0 +1,68 @@ +# Default Storage Class prerequisite +* BigBang assumes the cluster you're deploying to supports [dynamic volume provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/). +* A BigBang Cluster should have 1 Storage Class annotated as the default SC. +* For Production Deployments it is recommended to leverage a Storage Class that supports the creation of volumes that support ReadWriteMany [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), as there are a few BigBang Addons where an HA application configuration requires a storage class that supports ReadWriteMany. + + +## How Dynamic volume provisioning works in a nutshell +* StorageClass + PersistentVolumeClaim = Dynamically Created Persistent Volume +* A PersistentVolumeClaim that does not reference a specific StorageClass will leverage the default StorageClass (of which there should only be 1, identified using kubernetes annotations). Some Helm Charts allow a storage class to be explicitly specified so that multiple storage classes can be used simultaneously. + + +## How to check what storage classes are installed on your cluster +* `kubectl get storageclass` can be used to see what storage classes are available on a cluster; the default will be marked as such. +* Note: You can have multiple storage classes, but you should only have 1 default storage class. 
+```bash +kubectl get storageclass +# NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +# local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 47h +``` + +------------------------------------------------------ + +## AWS Specific Notes + +### Example AWS Storage Class Configuration +```yaml +kind: StorageClass +apiVersion: storage.k8s.io/v1 +metadata: + name: gp2 + annotations: + storageclass.kubernetes.io/is-default-class: 'true' +provisioner: kubernetes.io/aws-ebs +parameters: + type: gp2 #gp3 isn't supported by the in-tree plugin + fsType: ext4 +# encrypted: 'true' #requires kubernetes nodes have IAM rights to a KMS key +# kmsKeyId: 'arn:aws-us-gov:kms:us-gov-west-1:110518024095:key/b6bf63f0-dc65-49b4-acb9-528308195fd6' +reclaimPolicy: Retain +allowVolumeExpansion: true +``` + +### AWS EBS Volumes: +* AWS EBS Volumes have the following limitations: + * An EBS volume can only be attached to a single Kubernetes Node at a time, thus ReadWriteMany Access Mode isn't supported. + * An EBS PersistentVolume in AZ1 (Availability Zone 1) cannot be mounted by a worker node in AZ2. + +### AWS EFS Volumes: +* An AWS EFS Storage Class can be installed according to the [vendor's docs](https://github.com/kubernetes-sigs/aws-efs-csi-driver#installation). +* AWS EFS Storage Class supports ReadWriteMany Access Mode. +* AWS EFS Persistent Volumes can be mounted by worker nodes in multiple AZs. +* AWS EFS is basically NFS (Network File System) as a Service. NFS cons like latency apply equally to EFS, thus it's not a good fit for databases. + +------------------------------------------------------ + +## Azure Specific Notes +### Azure Disk Storage Class Notes +* The Kubernetes Docs offer an Example [Azure Disk Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-disk) +* An Azure disk can only be mounted with Access mode type ReadWriteOnce, which makes it available to one node in AKS.
+* An Azure Disk PersistentVolume in AZ1 can be mounted by a worker node in AZ2 (although some additional lag is involved in such transitions). + +------------------------------------------------------ + +## Bare Metal/Cloud Agnostic Storage Class Notes +* The BigBang Product team put together a [Comparison Matrix of a few Cloud Agnostic Storage Class offerings](../../k8s-storage/README.md#kubernetes-storage-options) +* Note: No storage class specific container images exist in IronBank at this time. + * Approved IronBank Images will show up in https://registry1.dso.mil + * https://repo1.dso.mil/dsop can be used to check status of IronBank images. diff --git a/docs/guides/prerequisites/install_flux.md b/docs/guides/prerequisites/install_flux.md new file mode 100644 index 0000000000000000000000000000000000000000..f72332b825483c8311a66391332484a5f9aaec72 --- /dev/null +++ b/docs/guides/prerequisites/install_flux.md @@ -0,0 +1,47 @@ +# Install the flux cli tool + +```bash +curl -s https://fluxcd.io/install.sh | sudo bash +``` +> Fedora Note: kubectl is a prereq for flux, and flux expects it at `/usr/local/bin/kubectl`; symlink it or copy the binary there to fix errors. + +## Install flux.yaml to the cluster +```bash +export REGISTRY1_USER='REPLACE_ME' +export REGISTRY1_TOKEN='REPLACE_ME' +``` +> In production use robot credentials; the single quotes are important due to the '$' +`export REGISTRY1_USER='robot$bigbang-onboarding-imagepull'` + + +```bash +kubectl create ns flux-system +kubectl create secret docker-registry private-registry \ + --docker-server=registry1.dso.mil \ + --docker-username=$REGISTRY1_USER \ + --docker-password=$REGISTRY1_TOKEN \ + --namespace flux-system +kubectl apply -f https://repo1.dso.mil/platform-one/big-bang/bigbang/-/raw/master/scripts/deploy/flux.yaml +``` +> `kubectl apply -f flux.yaml` is equivalent to `flux install`, but it installs a version of flux that's been tested and gone through IronBank.
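After applying the manifest, it can help to verify the Flux controllers actually came up before moving on. The commands below are a quick sanity check, not part of the official install steps; they assume the default `flux-system` namespace used above and a working kubeconfig:

```shell
# Block until every Flux controller deployment reports Available (5 minute timeout)
kubectl wait deployment --all \
  --for=condition=Available \
  --timeout=300s \
  --namespace flux-system

# Spot-check the pods; source-controller, kustomize-controller,
# helm-controller, and notification-controller should all be Running
kubectl get pods --namespace flux-system
```

If `kubectl wait` times out, `kubectl describe` the failing deployment; a common cause is the `private-registry` image pull secret being missing or containing expired credentials.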
+ + +## Now you can see new CRD object types inside the cluster +```bash +kubectl get crds | grep flux +``` + +## Advanced Installation +Clone the Big Bang repo and use the awesome installation [scripts](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/tree/master/scripts) directory + +```bash +git clone https://repo1.dso.mil/platform-one/big-bang/bigbang.git +./bigbang/scripts/install_flux.sh +``` +> **NOTE** install_flux.sh requires arguments to run properly; calling it without them will print out a friendly USAGE message with the required arguments needed to complete installation. + + + + + + diff --git a/docs/guides/prerequisites/kubernetes_preconfiguration.md b/docs/guides/prerequisites/kubernetes_preconfiguration.md new file mode 100644 index 0000000000000000000000000000000000000000..f77e0cc51c9410ae062c4a3f5e5eeae3441fc63a --- /dev/null +++ b/docs/guides/prerequisites/kubernetes_preconfiguration.md @@ -0,0 +1,115 @@ +# Kubernetes Cluster Preconfiguration: + + +## Best Practices: +* A CNI (Container Network Interface) that supports Network Policies (which are basically firewalls for the Inner Cluster Network). (Note: k3d, which is recommended for the quickstart demo, defaults to flannel, which does not support network policies.) +* All Kubernetes Nodes and the LB associated with the kube-apiserver should use private IPs. +* In most cases User Application Facing LBs should have Private IP Addresses and be paired with a defense in depth Ingress Protection mechanism like [P1's CNAP](https://p1.dso.mil/#/products/cnap/), a CNAP equivalent (Advanced Edge Firewall), VPN, VDI, port forwarding through a bastion, or air gap deployment. +* CoreDNS in the kube-system namespace should be HA with pod anti-affinity rules. +* Master Nodes should be HA and tainted. +* Consider using a licensed Kubernetes Distribution with a support contract. +* [A default storage class should exist](default_storageclass.md) to support dynamic provisioning of persistent volumes.
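On the first bullet above: a quick way to check whether your CNI actually enforces NetworkPolicies (flannel, for example, accepts the objects but silently ignores them) is to apply a default-deny policy in a scratch namespace and confirm traffic is blocked. This is an illustrative sketch, not part of BigBang; the `netpol-test` namespace name is arbitrary:

```shell
# Create a throwaway namespace and apply a default-deny ingress policy to it
kubectl create namespace netpol-test
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: netpol-test
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress       # no ingress rules listed, so all inbound traffic is denied
EOF

# With an enforcing CNI (Calico, Cilium, etc.) pods in netpol-test now reject
# all inbound connections; with flannel the object exists but has no effect.
kubectl delete namespace netpol-test   # clean up when done
```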
+ + +## Service of Type Load Balancer: +BigBang's default configuration assumes the cluster you're deploying to supports dynamic load balancer provisioning. Specifically, Istio defaults to creating a Kubernetes Service of type Load Balancer, which usually creates an endpoint exposed outside of the cluster that can direct traffic inside the cluster to the istio ingress gateway. + +How a Kubernetes service of type LB works depends on implementation details; there are many ways of getting it to work, and common methods are listed below: +* CSP API Method: (Recommended option for Cloud Deployments) +The Kubernetes Control Plane has a --cloud-provider flag that can be set to aws, azure, etc. If the Kubernetes Master Nodes have that flag set and CSP IAM rights, the control plane will auto provision and configure CSP LBs. (Note: a Vendor's Kubernetes Distro automation may have IaC/CaC defaults that allow this to work turn key, but if you have issues when provisioning LBs, consult with the Vendor's support for the recommended way of configuring automatic LB provisioning.) +* External LB Method: (Good for bare metal and 0 IAM rights scenarios) +You can override bigbang's helm values so istio will provision a service of type NodePort instead of type LoadBalancer. Instead of randomly generating from the port range of 30000 - 32768, the NodePorts can be pinned to convention-based port numbers like 30080 & 30443. If you're in a restricted cloud env or bare metal you can ask someone to provision a CSP LB where LB:443 would map to NodePort:30443 (of every worker node), etc.
+* No LB, Network Routing Methods: (Good options for bare metal) + * [MetalLB](https://metallb.universe.tf/) + * [kubevip](https://kube-vip.io/) + * [kube-router](https://www.kube-router.io) + + +## BigBang doesn't support PSPs (Pod Security Policies): +* [PSP's are being removed from Kubernetes and will be gone by version 1.25.x](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/issues/10) +* [Open Policy Agent Gatekeeper can enforce the same security controls as PSPs](https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy#pod-security-policies), and is a core component of BigBang, which operates as an elevated [validating admission controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to audit and enforce various [constraints](https://github.com/open-policy-agent/frameworks/tree/master/constraint) on all requests sent to the kubernetes api server. +* We recommend users disable PSPs completely given they're being removed, we have a replacement, and PSPs can prevent OPA from deploying (and if OPA is not able to deploy, nothing else gets deployed). +* Different ways of Disabling PSPs: + * Edit the kube-apiserver's flags (methods for doing this vary per distro.) + * ```bash + kubectl patch psp system-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}' + kubectl patch psp global-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}' + kubectl patch psp global-restricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}' + ``` + + +## Kubernetes Distribution Specific Notes +* Note: P1 has forks of various [Kubernetes Distribution Vendor Repos](https://repo1.dso.mil/platform-one/distros); there's nothing special about the P1 forks.
+* We recommend you leverage the Vendor's upstream docs in addition to any docs found in P1 Repos; in fact, the Vendor's upstream docs are far more likely to be up to date.
+
+### VMWare Tanzu Kubernetes Grid:
+[Prerequisites section of the VMware Kubernetes Distribution Docs](https://repo1.dso.mil/platform-one/distros/vmware/tkg#prerequisites)
+
+### Cluster API
+* Note that there are some OS hardening and VM Image Build automation tools in here, in addition to Cluster API.
+* https://repo1.dso.mil/platform-one/distros/clusterapi
+* https://repo1.dso.mil/platform-one/distros/cluster-api/gov-image-builder
+
+### OpenShift
+
+1) When deploying BigBang, set the OpenShift flag to true.
+
+```
+# inside a values.yaml being passed to the command installing bigbang
+openshift: true
+
+# OR inline with helm command
+helm install bigbang chart --set openshift=true
+```
+
+2) Patch the istio-cni daemonset to allow containers to run privileged (AFTER the istio-cni daemonset exists).
+Note: Applying this setting via modifications to the helm chart was attempted unsuccessfully; patching the live daemonset worked.
+
+```shell
+kubectl get daemonset istio-cni-node -n kube-system -o json | jq '.spec.template.spec.containers[] += {"securityContext":{"privileged":true}}' | kubectl replace -f -
+```
+
+3) Modify the OpenShift cluster(s) with the following scripts, based on https://istio.io/v1.7/docs/setup/platform-setup/openshift/
+
+```shell
+# Istio Openshift configurations Post Install
+oc -n istio-system expose svc/istio-ingressgateway --port=http2
+oc adm policy add-scc-to-user privileged -z istio-cni -n kube-system
+oc adm policy add-scc-to-group privileged system:serviceaccounts:logging
+oc adm policy add-scc-to-group anyuid system:serviceaccounts:logging
+oc adm policy add-scc-to-group privileged system:serviceaccounts:monitoring
+oc adm policy add-scc-to-group anyuid system:serviceaccounts:monitoring
+
+cat <<\EOF >> NetworkAttachmentDefinition.yaml
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+  name: istio-cni
+EOF
+oc -n logging create -f NetworkAttachmentDefinition.yaml
+oc -n monitoring create -f NetworkAttachmentDefinition.yaml
+```
+
+### Konvoy
+* [Prerequisites can be found here](https://repo1.dso.mil/platform-one/distros/d2iq/konvoy/konvoy/-/tree/master/docs/1.5.0#prerequisites)
+* [Different Deployment Scenarios have been documented here](https://repo1.dso.mil/platform-one/distros/d2iq/konvoy/konvoy/-/tree/master/docs/1.4.4/install)
+
+### RKE2
+* RKE2 turns PSPs on by default (see above for tips on disabling)
+* RKE2 sets selinux to enforcing by default ([see os_preconfiguration.md for selinux config](os_preconfiguration.md))
+
+Since BigBang makes several assumptions about volume and load balancer provisioning by default, it's vital that the RKE2 cluster is properly configured. The easiest way to do this is through the in-tree cloud providers, which can be configured through the `rke2` configuration file, such as:
+
+```yaml
+# aws, azure, gcp, etc...
+cloud-provider-name: aws
+
+# additionally, set the below configuration for private AWS endpoints, or custom regions such as (T)C2S (us-iso-east-1, us-iso-b-east-1)
+cloud-provider-config: ...
+```
+
+For example, if using the aws terraform modules provided [on repo1](https://repo1.dso.mil/platform-one/distros/rancher-federal/rke2/rke2-aws-terraform), setting the variable `enable_ccm = true` will ensure all the necessary resource tags are applied.
+
+In the absence of an in-tree cloud provider (such as on-prem), the requirements can be met by ensuring a default storage class and automatic load balancer provisioning exist.
+
diff --git a/docs/guides/prerequisites/os_preconfiguration.md b/docs/guides/prerequisites/os_preconfiguration.md
new file mode 100644
index 0000000000000000000000000000000000000000..c4db49e11b7402b58824ffb06cd039f5f0ec4f28
--- /dev/null
+++ b/docs/guides/prerequisites/os_preconfiguration.md
@@ -0,0 +1,56 @@
+# OS Configuration Pre-Requisites:
+
+
+## Disable swap (Kubernetes Best Practice)
+1. Identify configured swap devices and files with `cat /proc/swaps`.
+2. Turn off all swap devices and files with `swapoff -a`.
+3. Remove any matching reference found in `/etc/fstab`.
+(Credit: the above is adapted from Aaron Copley on [Serverfault.com](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux))
+
+
+## ECK specific configuration (ECK is a Core BB App):
+Elastic Cloud on Kubernetes (Elasticsearch Operator) deployed by BigBang uses memory mapping by default. In most cases, the default memory map address space limit is too low and must be configured.
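+Note that a runtime `sysctl -w` change does not survive reboots; to persist the required value, it can also be set in a sysctl drop-in file (the file name here is illustrative; any file under /etc/sysctl.d/ that is read at boot works):
+
+```
+# /etc/sysctl.d/99-elasticsearch.conf  (illustrative file name)
+vm.max_map_count=262144
+```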
+To avoid the need for privileged-escalation init containers to set this at runtime, these kernel settings should be applied before BigBang is deployed:
+
+```bash
+sudo sysctl -w vm.max_map_count=262144 #(ECK crash loops without this)
+```
+
+More information can be found in Elasticsearch's documentation [here](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html#k8s-virtual-memory)
+
+
+## SELinux specific configuration:
+* If SELinux is enabled and the OS hasn't received additional pre-configuration, then users will see the istio init-container crash loop.
+* Depending on security requirements, it may be possible to set selinux to permissive mode: `sudo setenforce 0`.
+* Additional OS and Kubernetes specific configuration is required for istio to work on systems with selinux set to `Enforcing`.
+
+By default, BigBang will deploy istio configured to use `istio-init` (read more [here](https://istio.io/latest/docs/setup/additional-setup/cni/)). To ensure istio can properly initialize envoy sidecars without container privilege escalation permissions, several system kernel modules must be pre-loaded before installing BigBang:
+
+```bash
+modprobe xt_REDIRECT
+modprobe xt_owner
+modprobe xt_statistic
+```
+
+
+## Sonarqube specific configuration (Sonarqube is a BB Addon App):
+Sonarqube requires the following kernel configurations set at the node level:
+
+```bash
+sysctl -w vm.max_map_count=524288
+sysctl -w fs.file-max=131072
+ulimit -n 131072
+ulimit -u 8192
+```
+
+Another option is running an init container to modify the kernel values on the host (this requires a busybox container run as root):
+
+```yaml
+addons:
+  sonarqube:
+    values:
+      initSysctl:
+        enabled: true
+```
+**This is not the recommended solution as it requires running an init container as privileged.**
+
diff --git a/docs/k8s-storage/README.md b/docs/k8s-storage/README.md
index ad4285db68838969558faf644dad43d5a44d711f..89c9a2f111fcfa671f1a129448859bf04f1c9224 100644
---
a/docs/k8s-storage/README.md +++ b/docs/k8s-storage/README.md @@ -1,17 +1,17 @@ -## Kubernetes Storage Options +# Kubernetes Storage Options Use this data to assist in your CSI decision. However, when using a cloud provider we suggest you use their Kubernetes CSI. ## Feature Matrix -| Product | BB Compatible | FOSS | In Ironbank | RWX/RWM Support | Airgap Compatible | Cloud Agnostic | +| Product | BB Compatible | License Type | In Ironbank | RWX/RWM Support | Airgap Compatible | Cloud Agnostic | | --------- | --------- | --------- | --------- | --------- | --------- | --------- | -Amazon EBS CSI | **X** | N/A | | **X** | AWS Dependent | No | -Azure Disk CSI | Not Tested | N/A | | **X** | Azure Dependent | No | -Longhorn v1.1.0 | **X** | **X** | | **X** | **X** - [Docs](https://longhorn.io/docs/1.1.0/advanced-resources/deploy/airgap/) | Yes, uses host storage | -OpenEBS (jiva) | **X** | **X** | | **X** **[Alpha](https://docs.openebs.io/docs/next/rwm.html)** | Manual Work Required | Yes, uses host storage | -Rook-Ceph | **X** | **X** | | **X** | Manual Work Required | Yes, uses host storage | -Portworx | **X** | | | **X** | **X** - [Docs](https://docs.portworx.com/portworx-install-with-kubernetes/operate-and-maintain-on-kubernetes/pxcentral-onprem/install/px-central/) | Yes, uses host storage | +Amazon EBS CSI | **X** | Apache License 2.0 | | **X** | AWS Dependent | No | +Azure Disk CSI | Not Tested | Apache License 2.0 | | **X** | Azure Dependent | No | +Longhorn v1.1.0 | **X** | Apache License 2.0 | | **X** | **X** - [Docs](https://longhorn.io/docs/1.1.0/advanced-resources/deploy/airgap/) | Yes, uses host storage | +OpenEBS (jiva) | **X** | Apache License 2.0 | | **X** **[Alpha](https://docs.openebs.io/docs/next/rwm.html)** | Manual Work Required | Yes, uses host storage | +Rook-Ceph | **X** | Rook - Apache License 2.0. 
Ceph - dual licensed under the LGPL version 2.1 or 3.0 | | **X** | Manual Work Required | Yes, uses host storage | +Portworx | **X** | Tiered License - [See website](https://docs.portworx.com/reference/knowledge-base/px-licensing/) | | **X** | **X** - [Docs](https://docs.portworx.com/portworx-install-with-kubernetes/operate-and-maintain-on-kubernetes/pxcentral-onprem/install/px-central/) | Yes, uses host storage | ## Benchmark Results @@ -20,7 +20,7 @@ Benchmarks were tested on AWS with GP2 ebs volumes using using FIO, see [example | Product | Random Read/Write IOPS | Average Latency (usec) | Sequential Read/Write | Mixed Random Read/Write IOPS | | --------- | --------- | --------- | --------- | --------- | Amazon EBS CSI | 2997/2996. BW: 128MiB/s / 128MiB/s | 1331.61 | 129MiB/s / 131MiB/s | 7203/2390 -Azure Disk CSI | | | | +Azure Disk CSI | | | | Longhorn v1.1.0 | 6155/1551 BW: 230MiB/s / 96.3MiB/s | 1042.53 | 319MiB/s / 130MiB/s | 3804/1267 OpenEBS (jiva) | 2183/770. BW: 76.8MiB/s / 45.8MiB/s | 2059.55 | 132MiB/s / 98.2MiB/s | 1590/533 Rook-Ceph | 10.7k/3205. BW: 503MiB/s / 148MiB/s | 548.36/s | 496MiB/s / 154MiB/s | 6664/2228 @@ -30,37 +30,42 @@ Portworx 2.6 | 3016/19.3k. BW: 74.5MiB/s / 85.1MiB/s | 1337.31 | 113MiB/s / 12 [Website/Docs](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) -### REQUIREMENTS +**REQUIREMENTS** - Must be using AWS -### Notes -- Super easy use, apply CSI and you done! +**Notes** + +- Apache License 2.0 +- Very easy to install and use, apply CSI spec and you are ready. ## Azure Disk CSI [Website/Docs](https://docs.microsoft.com/en-us/azure/aks/azure-disk-csi) -### REQUIREMENTS +**REQUIREMENTS** - Must be using Azure -### Notes -- Super easy use, apply CSI and you done! +**Notes** + +- Apache License 2.0 +- Very easy to install and use, apply CSI spec and you are ready. ## Longhorn [Website/Docs](https://longhorn.io/) -### REQUIREMENTS +**REQUIREMENTS** - RWX requires `nfs-common` to be installed on the nodes. 
[Longhorn RWX Docs](https://longhorn.io/docs/1.1.0/advanced-resources/rwx-workloads/) -### Notes +**Notes** -- 100% open source +- Apache License 2.0 - Easiest to install -- Documented airgap install process +- Built-in backup tool. +- Documented airgap install process. [Docs](https://longhorn.io/docs/1.1.0/advanced-resources/deploy/airgap/) - GUI provides data and observability; replica status, cluster health status, backup status, and backup initiation/recovery. - Native backup to S3 or NFS @@ -68,40 +73,51 @@ Portworx 2.6 | 3016/19.3k. BW: 74.5MiB/s / 85.1MiB/s | 1337.31 | 113MiB/s / 12 [Website/Docs](https://openebs.io/) -### REQUIREMENTS +**REQUIREMENTS** -- Blank, un-partitioned attached disk(s) +- Blank and un-partitioned attached disk(s) - RWX is in Alpha and requires work. [OpenEBS RWX Docs](https://docs.openebs.io/docs/next/rwm.html) -### Notes +**Notes** + +- Very flexible, supports multiple storage designs. +Application requirements | Storage Type | OpenEBS Volumes +| --------- | --------- | --------- | +Low Latency, High Availability, Synchronous replication, Snapshots, Clones, Thin provisioning | SSDs/Cloud Volumes | OpenEBS Mayastor +High Availability, Synchronous replication, Snapshots, Clones, Thin provisioning | Disks/SSDs/Cloud Volumes | OpenEBS cStor +High Availability, Synchronous replication, Thin provisioning | hostpath or external mounted storage | OpenEBS Jiva +Low latency, Local PV | hostpath or external mounted storage | Dynamic Local PV - Hostpath +Low latency, Local PV | Disks/SSDs/Cloud Volumes | Dynamic Local PV - Device +Low latency, Local PV, Snapshots, Clones | Disks/SSDs/Cloud Volumes | OpenEBS Dynamic Local PV - ZFS ## Rook-Ceph [Website/Docs](https://rook.io/) -### REQUIREMENTS +**REQUIREMENTS** -- Blank, un-partitioned attached disk(s) +- Blank and un-partitioned attached disk(s) -### Notes +**Notes** -- 100% open source +- Rook - Apache License 2.0. 
+- Ceph - dual licensed under the LGPL version 2.1 or 3.0 - Very Fast ## Portworx [Website/Docs](https://docs.portworx.com/portworx-install-with-kubernetes/) -### REQUIREMENTS +**REQUIREMENTS** -- Blank, un-partitioned attached disk(s) +- Blank and un-partitioned attached disk(s) -### Notes +**Notes** - Portworx Essentials is free **up to** 5nodes, 5TB Storage, 500 volumes -- Portworx Enterprise and PX-Backup require paid licenses +- Portworx Enterprise and PX-Backup require paid licenses - Best Mixed IOPS, average read/write performance - Install is very picky about the container runtime hostpath - Tested on Konvoy 1.6.1 due to Portworx issues when using RKE2 diff --git a/docs/1_overview.md b/docs/overview.md similarity index 100% rename from docs/1_overview.md rename to docs/overview.md diff --git a/docs/postrenderers.md b/docs/postrenderers.md index 8de23891b5dc3d120e90df618992656057ee92de..e536fa92e14766c77cddbfdb24fe252b124990a1 100644 --- a/docs/postrenderers.md +++ b/docs/postrenderers.md @@ -1,10 +1,8 @@ # Post Renderers +[Flux V2](https://toolkit.fluxcd.io/) provides the ability to apply kustomizations on a Helm Release after rendering using a [Post Renderer](https://toolkit.fluxcd.io/components/helm/helmreleases/#post-renderers). This feature provides significant flexibility to the Helm objects, and allows for adjusting values inside of Helm that are not exposed explicitly as part of the values file. Each `HelmRelease` is configured with a `postRenderer` pass through: -[Flux V2](https://toolkit.fluxcd.io/) provides the ability to apply kustomizations on a Helm Release after rendering using a [Post Renderer](https://toolkit.fluxcd.io/components/helm/helmreleases/#post-renderers). This feature provides significant flexibility to the Helm objects, and allows for adjusting values inside of Helm that are not exposed explicitly as part of the values file. Each `HelmRelease` is configured with a `postRenderer` pass through: - - -``` +```yaml ... 
jaeger: postRenderers: @@ -36,4 +34,4 @@ jaeger: - name: registry1.dso.mil/ironbank/opensource/jaegertracing/jaeger-operator newName: registry1.dso.mil/ironbank/opensource/jaegertracing/jaeger-operator newTag: 1.23.0 -``` \ No newline at end of file +``` diff --git a/docs/b_troubleshooting.md b/docs/troubleshooting.md similarity index 99% rename from docs/b_troubleshooting.md rename to docs/troubleshooting.md index 1cdec19860564c5302d910eb43ff083053142ea9..4ec0b236b87de06effff98b742ff2d4a7eccf40e 100644 --- a/docs/b_troubleshooting.md +++ b/docs/troubleshooting.md @@ -23,7 +23,7 @@ Big Bang is configured to retry failed package installations and upgrades. Befo Helpful debugging commands: -```bash +```shell # Get the status kubectl get pods -n flux-system @@ -40,7 +40,7 @@ kubectl get events -n flux-system Helpful debugging commands: -```bash +```shell # Get the status kubectl get gitrepositories -A @@ -65,7 +65,7 @@ kubectl get events --field-selector involvedObject.kind=GitRepository -A Helpful debugging commands: -```bash +```shell # Get the status kubectl get hr -A @@ -85,7 +85,7 @@ kubectl get events --field-selector involvedObject.kind=HelmRelease -A Helpful debugging commands: -```bash +```shell # Get the status kubectl get kustomizations -A @@ -108,7 +108,7 @@ kubectl get events --field-selector involvedObject.kind=Kustomization -A Helpful debugging commands: -```bash +```shell # Get the status kubectl get deployments,po -n <namespace of package> diff --git a/docs/understanding/logs_data_flow_diagram.md b/docs/understanding/logs_data_flow_diagram.md deleted file mode 100644 index a4a3cefd6db2b400955a17fb26349ad5dd1f2665..0000000000000000000000000000000000000000 --- a/docs/understanding/logs_data_flow_diagram.md +++ /dev/null @@ -1,14 +0,0 @@ -# Goals of this Diagram: -* Help new users understand the data flow of pod logs - -# Kubernetes Pod Logs Data Flow Diagram: - - -| Line Number | Protocol | Port | Description | -| --- | --- | --- | --- | -| N1 | Volume 
Mount | NA | Fluentbit reads pod logs from a host node volume mount |
-| N2 | HTTPS | TCP:9200 | Fluentbit sends logs to Elastic Search over the URL: https://logging-ek-es-http:9200 (This URL is only exposed over the Kubernetes Inner Cluster Network, and because Fluentbit and ElasticSearch have Istio Envoy Proxy sidecar containers the network traffic is protected by the service mesh.) |
-
-## Notes:
-1. The fluentbit log shipper is configured to send pod logs to the ElasticSearch Cluster in the logstash data format. Logstash_Format On
-2. By default: The log index logstash-%Y.%m.%d will create a new log index everyday, because %d will increment by one everyday. There are no default Index Lifecycle Management Policies that are created or applied to these indexes. It is recommended that customers create a Index Lifecycle policy to prevent disk space from filling up. (Example: Archive to s3 and then delete from PVC logs older than N days.)
diff --git a/docs/understanding_bigbang/README.md b/docs/understanding_bigbang/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9bed1711c7b128cb5b1f3ec8c866238be50db5d
--- /dev/null
+++ b/docs/understanding_bigbang/README.md
@@ -0,0 +1,69 @@
+# Useful Background Contextual Information
+
+## The purpose of this section is to help consumers of BigBang understand
+
+* BigBang's scope: what it is and isn't, goals and non-goals
+* The value add gained by using BigBang
+* What to expect in terms of prerequisites for those interested in using BigBang
+* Help those who want a deep, concrete understanding of BigBang quickly come up to speed via pre-reading materials that can act as a self-service new user orientation, pointing out features and nuances that new users wouldn't know to ask about.
+
+## BigBang's scope: what it is and isn't, goals and non-goals
+
+### What BigBang is
+
+* BigBang is a Helm Chart that is used to deploy a DevSecOps Platform composed of IronBank hardened container images on a Kubernetes Cluster.
+* See [/docs/README.md](../README.md#what-is-bigbang) for more details.
+
+### What BigBang isn't
+
+* BigBang by itself is not intended to be an End to End Secure Kubernetes Cluster Solution, but rather a reusable, secure component/piece of a full solution.
+* A Secure Kubernetes Cluster Solution will have multiple components, each of which can be swappable and, in some cases, considered optional depending on use case and risk tolerance.
+  Examples of potential components in a full End to End Solution:
+  * P1's Cloud Native Access Point to protect Ingress Traffic. (This can be swapped with an equivalent, or considered optional in an internet disconnected setup.)
+  * Hardened Host OS
+  * Hardened Kubernetes Cluster (BigBang assumes ByoC, Bring your own Cluster) (The BigBang team recommends consumers who are interested in a full solution partner with Vendors of Kubernetes Distributions to satisfy the prerequisite of a Hardened Kubernetes Cluster.)
+  * Hardened Applications running on the Cluster (BigBang helps solve this component)
+
+## Value add gained by using BigBang
+
+* Compliant with the [DoD DevSecOps Reference Architecture Design](https://dodcio.defense.gov/Portals/0/Documents/DoD%20Enterprise%20DevSecOps%20Reference%20Design%20v1.0_Public%20Release.pdf)
+* Can be used to check some, but not all, of the boxes needed to achieve a cATO (Continuous Authority to Operate).
+* Uses hardened IronBank Container Images. (left-shifted security concern)
+* BigBang leverages GitOps, which adds security benefits, and can be further extended using GitOps.
+  Security Benefits of GitOps:
+  * Prevents config drift between the state of a live cluster and the IaC/CaC source of truth: by avoiding giving any humans direct kubectl access and only allowing humans to deploy via git commits, out-of-band changes are limited.
+  * Git Repo based deployments create an audit trail.
+  * Secure Configurations become reusable, which lowers the burden of implementing secure configurations.
+* Lowers the maintenance overhead involved in keeping the DevSecOps Platform's images up to date and maintaining a secure posture over the long term. This is achieved by pairing the GitOps pattern with the Umbrella Helm Chart Pattern.
+  Let's walk through an example:
+  * Initially, a kustomization.yaml file in a git repo tells the Flux GitOps operator (a software deployment bot running in the cluster) to deploy version 1.0.0 of BigBang. BigBang could deploy 10 helm charts, and each helm chart could deploy 10 images. (So BigBang is managing 100 container images in this example.)
+  * After a 2-week sprint, version 1.1.0 of BigBang is released. A BigBang consumer updates the kustomization.yaml file in their git repo to point to version 1.1.0 of the BigBang Helm Chart. That triggers an update of the 10 helm charts to new versions, and each updated helm chart points to newer versions of the container images it manages.
+  * So when the end user edits the version in 1 kustomization.yaml file, that triggers a chain reaction that updates 100 container images.
+  * These upgrades are pre-tested. The BigBang team "eats our own dogfood": our CI jobs for developing the BigBang product run against a BigBang dogfood Cluster, and as part of our release process we upgrade our dogfood cluster before publishing each release. (Note: We don't test upgrades that skip multiple minor versions.)
+  * Auto updates are also possible: because BigBang follows semantic versioning, setting the version in kustomization.yaml to 1.x.x lets Flux resolve x to the most recent matching version number.
+* DoD Software Developers get a Developer User Experience of "SSO for free". Instead of developers coding SSO support 10 times for 10 apps, the complexity of SSO support is baked into the platform, and after an Ops team correctly configures the Platform's SSO settings, SSO works for all apps hosted on the platform. The developer's user experience for enabling SSO for their app then becomes as simple as adding the label istio-injection=enabled (which transparently injects mTLS service mesh protection into their application's Kubernetes YAML manifest) and adding the label protect=keycloak to each pod, which leverages an EnvoyFilter CustomResource to auto-inject an SSO Authentication Proxy in front of the data path to their application.
+
+## Acronyms
+
+* CSP: Cloud Service Provider
+* L4 LB: Layer 4 Load Balancer
+* KMS: Key Management System / Encryption as a Service (AWS/GCP KMS, Azure Key Vault, HashiCorp Vault Transit Secrets Engine)
+* PGP: Pretty Good Privacy (Asymmetric Encryption Key Pair, where the public key is used to encrypt and the private key is used to decrypt)
+* SOPS: "Secrets OPerationS", a CLI tool by Mozilla that leverages KMS or PGP to encrypt secrets in a Git Repo. (Flux and P1's modified ArgoCD can use SOPS to decrypt secrets stored in a Git Repo.)
+* cATO: continuous Authority to Operate
+* AO: Authorizing Official (the Government Official who determines OS and Kubernetes Cluster hardening requirements that result in a level of acceptable remaining risk that they're willing to sign off on for a Kubernetes Cluster to receive an ATO, and a BigBang Cluster to receive a cATO)
+* IaC: Infrastructure as Code
+* CaC: Configuration as Code
+* CAC: Common Access Card
+
+## Prerequisites
+
+* Prerequisites vary depending on deployment scenario
+* [Prerequisites can be found here](../guides/prerequisites)
+
+## Additional Useful Background Contextual Information
+
+* We are still migrating some docs from IL2 Confluence and the BigBang Onboarding Engineering Cohort into this repository's /docs folder; the planned future state is for this to be the primary location for docs going forward. (Any docs hosted in other repositories will at least have pointers hosted here.)
+* There are multiple implementations of Helm Charts (Helm repositories, .tgz files, and files and folders in a git repo); whenever P1 refers to a helm chart, we're always referring to the files-and-folders-in-a-git-repo implementation, which is stored in the /chart folder of a git repo.
+* Additional pre-reading materials to develop a better understanding of BigBang before deploying can be found in this understanding_bigbang folder.
+* If you see an issue with docs or packages, please [open an issue against the main BigBang Repo](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/issues) instead of the individual package repo.
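+
+### Example: GitOps version bump
+
+As a hedged sketch of the version-bump flow described above (the repo path and `base` folder follow BigBang's customer template pattern and may differ in your environment):
+
+```yaml
+# kustomization.yaml in your GitOps repo (illustrative layout)
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+bases:
+  # Bumping this ref (e.g. 1.0.0 -> 1.1.0) cascades through every package
+  # helm chart, and in turn every container image, that BigBang manages.
+  - https://repo1.dso.mil/platform-one/big-bang/bigbang.git//base?ref=1.1.0
+```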
diff --git a/docs/understanding/images/logs_data_flow_diagram.app.diagrams.net.png b/docs/understanding_bigbang/images/logs_data_flow_diagram.app.diagrams.net.png similarity index 100% rename from docs/understanding/images/logs_data_flow_diagram.app.diagrams.net.png rename to docs/understanding_bigbang/images/logs_data_flow_diagram.app.diagrams.net.png diff --git a/docs/understanding/images/logs_data_flow_diagram.app.diagrams.net.xml b/docs/understanding_bigbang/images/logs_data_flow_diagram.app.diagrams.net.xml similarity index 100% rename from docs/understanding/images/logs_data_flow_diagram.app.diagrams.net.xml rename to docs/understanding_bigbang/images/logs_data_flow_diagram.app.diagrams.net.xml diff --git a/docs/understanding/images/metrics_data_flow_diagram.app.diagrams.net.png b/docs/understanding_bigbang/images/metrics_data_flow_diagram.app.diagrams.net.png similarity index 100% rename from docs/understanding/images/metrics_data_flow_diagram.app.diagrams.net.png rename to docs/understanding_bigbang/images/metrics_data_flow_diagram.app.diagrams.net.png diff --git a/docs/understanding/images/metrics_data_flow_diagram.app.diagrams.net.xml b/docs/understanding_bigbang/images/metrics_data_flow_diagram.app.diagrams.net.xml similarity index 100% rename from docs/understanding/images/metrics_data_flow_diagram.app.diagrams.net.xml rename to docs/understanding_bigbang/images/metrics_data_flow_diagram.app.diagrams.net.xml diff --git a/docs/understanding_bigbang/logs_data_flow_diagram.md b/docs/understanding_bigbang/logs_data_flow_diagram.md new file mode 100644 index 0000000000000000000000000000000000000000..280a5d891fa57ae468f28418f960a9038288ad2a --- /dev/null +++ b/docs/understanding_bigbang/logs_data_flow_diagram.md @@ -0,0 +1,17 @@ +# Goals of this Diagram + +* Help new users understand the data flow of pod logs + +## Kubernetes Pod Logs Data Flow Diagram + + + +| Line Number | Protocol | Port | Description | +| --- | --- | --- | --- | +| N1 | Volume Mount | NA 
| Fluent Bit reads pod logs from a host node volume mount |
+| N2 | HTTPS | TCP:9200 | Fluent Bit sends logs to Elasticsearch over the URL: <https://logging-ek-es-http:9200> (This URL is only exposed over the Kubernetes Inner Cluster Network, and because Fluent Bit and Elasticsearch have Istio Envoy Proxy sidecar containers, the network traffic is protected by the service mesh.) |
+
+## Notes
+
+1. The Fluent Bit log shipper is configured to send pod logs to the Elasticsearch Cluster in the logstash data format. Logstash_Format On
+2. By default, the log index logstash-%Y.%m.%d will create a new log index every day, because %d will increment by one every day. There are no default Index Lifecycle Management Policies that are created or applied to these indexes. It is recommended that customers create an Index Lifecycle policy to prevent disk space from filling up. (Example: Archive to S3 and then delete from PVC logs older than N days.)
diff --git a/docs/understanding/metrics_data_flow_diagram.md b/docs/understanding_bigbang/metrics_data_flow_diagram.md
similarity index 80%
rename from docs/understanding/metrics_data_flow_diagram.md
rename to docs/understanding_bigbang/metrics_data_flow_diagram.md
index 482c17e1e625b1a0a1c733519a63337e444e7681..ebe992bd7998a1e41532b02b3248eda7f8f19305 100644
--- a/docs/understanding/metrics_data_flow_diagram.md
+++ b/docs/understanding_bigbang/metrics_data_flow_diagram.md
@@ -1,8 +1,10 @@
-# Goals of this Diagram:
+# Goals of this Diagram
+
 * Help new users understand the data flow of prometheus metrics
-# Prometheus Metrics Data Flow Diagram:
-
+## Prometheus Metrics Data Flow Diagram
+
+
 | Line Number | Protocol | Port | Description |
 | --- | --- | --- | --- |