diff --git a/docs/developer/ci-workflow.md b/docs/developer/ci-workflow.md index 748dc81905a46c9e62d9c85aaf04d1993bf626ab..15217915522f54fcaa3cb4c88d0e260bb6cc64f9 100644 --- a/docs/developer/ci-workflow.md +++ b/docs/developer/ci-workflow.md @@ -1,26 +1,15 @@ -## Gitlab-ci Workflow +# Gitlab-ci Workflow The following is meant to serve as an overview of the pipeline stages required to get a commit merged. There are package, bigbang, and infrastructure pipelines. -### Table of Contents: - -- [Generic Package Pipeline Stages](#generic-package-pipeline-stages) - - [Configuration Validation](#configuration-validation) - - [Package Tests](#package-tests) -- [BigBang Pipeline Stages](#bigbang-pipeline-stages) - - [Pre Vars](#pre-vars) - - [Smoke Tests](#smoke-tests) -- [Infrastructure Testing Pipeline Stages](#infrastructure-testing-pipeline-stages) - - [Network Creation](#network-creation) - - [Cluster Creation](#cluster-creation) - - [Big Bang Installation](#big-bang-installation) - - [Big Bang Tests](#big-bang-tests) - - [Teardown](#teardown) -### Generic Package Pipeline Stages - -This pipeline is triggered by the following for individual bigbang packages: +[[_TOC_]] + +## Generic Package Pipeline Stages + +This pipeline is triggered by the following for individual bigbang packages: + - merge request events - - Note: Currently upgrade step only runs during MR events + - Note: Currently upgrade step only runs during MR events - manual tag events - commits to default branch @@ -28,14 +17,15 @@ This pipeline is triggered by the following for individual bigbang packages: [Link to draw.io diagram file](diagrams/BB_gitlab_ci_diagram.drawio). This diagram file should be modified on draw.io and exported into this repository when the developer / ci workflow changes. It is provided here for ease of use. -#### Configuration Validation +### Configuration Validation This stage runs a `helm conftest` which is a plugin for testing helm charts with Open Policy Agent. It provides the following checks: - confirms that the helm chart is valid (should fail similar to how a helm lint fails if there is bad yaml, etc) - runs the helm chart against a set of rego policies - currently these tests will only raise warnings on "insecure" things and will allow pipeline to proceed. -#### Package Tests +### Package Tests + This stage verifies several easy to check assumptions such as: - does package successfully install @@ -44,11 +34,12 @@ This stage verifies several easy to check assumptions such as: If required, the upgrade step can skipped when MR title starts with 'SKIP UPGRADE' -### BigBang Pipeline Stages +## BigBang Pipeline Stages + +This pipeline is triggered by the following for individual bigbang packages: -This pipeline is triggered by the following for individual bigbang packages: - merge request events - - Note: Currently upgrade step only runs during MR events + - Note: Currently upgrade step only runs during MR events - manual tag events - commits to default branch @@ -57,10 +48,12 @@ The pipeline is split into several stages:  [Link to draw.io diagram file](diagrams/BB_gitlab_ci_diagram.drawio). This diagram file should be modified on draw.io and exported into this repository when the developer / ci workflow changes. It is provided here for ease of use. -#### Pre Vars + +### Pre Vars This stage currently has one purpose at this point which is to generate a terraform var. 
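+
+For orientation only, the sketch below shows one way such a job could hand a generated value off to later Terraform jobs, by writing it to a dotenv artifact that GitLab exposes as a variable (for example a `TF_VAR_*`). The job, image, and variable names are hypothetical placeholders, not the actual definition from the pipeline-templates repository.
+
+```yaml
+# Hypothetical sketch only; the real job lives in the pipeline-templates repo
+stages:
+  - pre-vars
+
+pre-vars:
+  stage: pre-vars
+  image: alpine:latest
+  script:
+    # Write a Terraform input variable so later jobs can consume it as TF_VAR_env
+    - echo "TF_VAR_env=ci-${CI_COMMIT_SHORT_SHA}" > pre_vars.env
+  artifacts:
+    reports:
+      # dotenv artifacts are exported as variables to the jobs that follow
+      dotenv: pre_vars.env
+```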
-#### Smoke Tests + +### Smoke Tests For fast feedback testing, an ephemeral in cluster pipeline is created using [`k3d`](https://k3d.io) that lives for the lifetime of the gitlab ci job. Within that cluster, BigBang is deployed, and an initial set of smoke tests are performed against the deployment to ensure basic conformance. @@ -71,6 +64,7 @@ This stage verifies several easy to check assumptions such as: - are endpoints routable This stage will fail if: + - script failures - gitrepositories status condition != ready - expected helm releases are not present @@ -85,14 +79,15 @@ This stage also serves as a guide for local development, and care is taken to en This stage is ran on every merge request event, and is a requirement for merging. -### Infrastructure Testing Pipeline Stages +## Infrastructure Testing Pipeline Stages Ultimately, BigBang is designed to deploy production ready workloads on real infrastructure. While local and ephemeral clusters are excellent for fast feedback during development, changes must ultimately be tested on real clusters on real infrastructure. As part of BigBang's [charter](https://repo1.dso.mil/platform-one/big-bang/charter), it is expected work on any CNCF conformant kubernetes cluster, on multiple clouds, and on premise environments. By very definition, this means infrastructure testing is _slow_. To strive for a pipeline with a happy medium of providing fast feedback while still exhaustively testing against environments that closely mirror production, __infrastructure testing only occurs on manual actions on merge request commits.__ -This requires adding `test-ci::infra` label to your MR. In addition, infrastructure testing pipeline is run nightly on a schedule. +This requires adding `test-ci::infra` label to your MR. In addition, infrastructure testing pipeline is run nightly on a schedule. + +Note: Due to the amount of resources and time required for this pipeline, the `test-ci::infra` label should be used sparingly. The scheduled nightly run will ideally catch issues if they are already in master. The `test-ci::infra` label should mainly be used when: -Note: Due to the amount of resources and time required for this pipeline, the `test-ci::infra` label should be used sparringly. The scheduled nightly run will ideally catch issues if they are already in master. The `test-ci::infra` label should mainly be used when: - your changes affect the infra ci - your changes are large in scope and likely to behave differently on "real" clusters @@ -107,13 +102,14 @@ More information on the full set of infrastructure tests are below:  [Link to draw.io diagram file](diagrams/BB_gitlab_ci_diagram.drawio). This diagram file should be modified on draw.io and exported into this repository when the developer / ci workflow changes. It is provided here for ease of use. -#### Network Creation + +### Network Creation For each cloud, a BigBang owned network will be created that conform with the appropriate set of tests about to be ran. For example, to validate that Big Bang deploys in a connected environment on AWS, a VPC, subnets, route tables, etc... are created, and the outputs are made available through terraform's remote `data` source. At this time the infrastructure testing pipeline is only utilizing internet-connect AWS govcloud. -#### Cluster Creation +### Cluster Creation The infrastructure pipeline is currently setup to standup an `rke2` cluster by default. 
@@ -121,13 +117,13 @@ An `rke2` cluster is created that leverages the upstream [terraform modules](htt It is a hard requirement at this stage that every cluster outputs an admin scoped `kubeconfig` as a gitlab ci artifact. This artifact will be leveraged in the following stages for interacting with the created cluster. -#### Big Bang Installation +### Big Bang Installation Given the kubeconfig created in the previous stage, BigBang is installed on the cluster using the same installation process used in the smoke tests. Like any BigBang installation, several cluster requirements (see [Pre-requisites](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/tree/master/docs/guides/prerequisites)) must be met before BigBang is installed, and it is up to the vendor to ensure those requirements are met. -#### Big Bang Tests +### Big Bang Tests Assuming BigBang has installed successfully, additional tests residing within the `./tests` folder of this repository are run against the deployed cluster. @@ -137,7 +133,7 @@ Currently there are 3 test scripts that test the following: - curl VirtualService endpoints, to validate istio works + the UIs are up - fetch a list of non-IB images (this test never fails but provides some contextual info) -#### Teardown +### Teardown Infrastructure teardown happens in the reverse sequence as to which they are deployed, and the pipeline will ensure these teardown jobs are _always_ ran, regardless of whether or not the previous jobs were successful. diff --git a/docs/developer/develop-package.md b/docs/developer/develop-package.md index 2171eb10bac05e5e4ee5d733fca6dffef3a1f9de..c8114c627561231b200f3c82fe0ff25c14c2e6aa 100644 --- a/docs/developer/develop-package.md +++ b/docs/developer/develop-package.md @@ -74,14 +74,17 @@ Package is the term we use for an application that has been prepared to be deplo CONTRIBUTING.md < instructions for how to contribute to the project README.md < introduction and high level information ``` + 1. Create a top-level tests directory and inside put a test-values.yaml file that includes any special values overrides that are needed for CI pipeline testing. Refer to other packages for examples. But this is specific to what is needed for your package. - ``` + + ```shell mkdir tests touch test-values.yaml ``` 1. At a high level, a Package structure should look like this when you are finished - ```text + + ```plaintext ├── chart/ └── templates/ └── bigbang/ @@ -136,11 +139,13 @@ Under Settings → Repository → Default Branch, ensure that main is selected. ``` Create a local directory on your workstation where you store your helm values override files. Don't make test changes in the Package values.yaml because they could accidentally be committed. The most convenient location is in a sibling directory next to the Package repo. Here is an example directory structure: - ```text + + ```plaintext ├── PackageRepo/ └── overrides/ └── override-values.yaml ``` + Here are the dev test steps you can iterate: ```shell @@ -170,7 +175,8 @@ Under Settings → Repository → Default Branch, ensure that main is selected. 1. After the merge create a git tag following the charter convention of {UpstreamChartVersion}-bb.{BigBangVersion}. The tag should exactly match the chart version in the Chart.yaml. example: 1.2.3-bb.0 -### Private registry secret creation +## Private registry secret creation + In some instances you may wish to manually create a private-registry secret in the namespace or during a helm deployment. There are a couple of ways to do this: 1. 
The first way is to add the secret manually using kubectl. This method is useful for standalone package testing/development. @@ -178,9 +184,11 @@ In some instances you may wish to manually create a private-registry secret in t ```shell kubectl create secret docker-registry private-registry --docker-server="https://registry1.dso.mil" --docker-username='Username' --docker-password="CLI secret" --docker-email=<your-email> --namespace=<package-namespace> ``` + 2. The second is to create a yaml file containing the secret and apply it during a helm install. This method is applicable when installing your new package as part of the Big Bang chart. In this example the file name is "reg-creds.yaml": Create the file with the secret contents: + ```yaml registryCredentials: registry: registry1.dso.mil @@ -190,6 +198,7 @@ Create the file with the secret contents: ``` Then include a reference to your file during your helm install command by adding the below `-f` to your Big Bang install command: + ```shell -f reg-creds.yaml - ``` \ No newline at end of file + ``` diff --git a/docs/developer/development-environment.md b/docs/developer/development-environment.md index 99cc35e34630a0c0a728ba537847cebd817fb044..ed198b2693c6c44b482eabdf6dda2392f3215ad8 100644 --- a/docs/developer/development-environment.md +++ b/docs/developer/development-environment.md @@ -8,7 +8,7 @@ It is not recommend to run k3d with BigBang on your local computer. Instead use There is a script in the [/docs/developer/scripts/](./scripts/) directory that automates the creation and teardown of a development environment. There is a video tutorial in the PlatformOne IL2 Confluence. Search for "T3" and click the link to the page. The video is #57 on 22-February-2022. -The manual steps included below are no longer maintained. The manual steps are only included for historical reference as a study guide to understand how the script works. The script is the singular focus for development environments. +The manual steps included below are no longer maintained. The manual steps are only included for historical reference as a study guide to understand how the script works. The script is the singular focus for development environments. ## Prerequisites @@ -81,7 +81,7 @@ ssh -i ~/.ssh/your-ec2.pem ubuntu@$EC2_PUBLIC_IP # Remove any old Docker items sudo apt remove docker docker-engine docker.io containerd runc -# Install all pre-reqs for Docker +# Install all prerequisites for Docker sudo apt update sudo apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common @@ -227,7 +227,8 @@ gatekeeper: ``` **Note2:** The information in this note is simply to give you awareness in advance. You should create local directory on your workstation where you store your helm values override files. Development changes made in the code for testing could accidentally be committed. That is why you should create a separate local directory to hold your override values for testing. The location can be anywhere on your workstation but it is most convenient to place them in a sibling directory next to the BigBang repos. Below is an example directory structure. The directory names are fake (for example only). Other documents will give more specific detail as needed. 
- ```text + + ```plaintext ├── BigBangCodeRepo/ └── overrides/ ├── override-values-1.yaml @@ -307,125 +308,145 @@ k3d cluster create \ --port 443:443@loadbalancer \ --api-port 6443 ``` - - This will create a K3D cluster just like before, except we need to ensure the built in "servicelb" add-on is disabled so we can use metallb. -2. Find the Subnet for your k3d cluster's Docker network +- This will create a K3D cluster just like before, except we need to ensure the built in "servicelb" add-on is disabled so we can use metallb. -```shell -docker network inspect k3d-k3s-default | jq .[0].IPAM.Config[0] -``` +1. Find the Subnet for your k3d cluster's Docker network - - k3d-k3s-default is the name of the default bridge network k3d creates when creating a k3d cluster. - - We need the "Subnet": value to populate the correct addresses in the ConfigMap below. - - If my output looks like: - ```json - { - "Subnet": "172.18.0.0/16", - "Gateway": "172.18.0.1" - } - ``` - - Then the addresses I want to input for metallb would be `172.18.1.240-172.18.1.243` so that I can reserve 4 IP addresses within the subnet of the Docker Network. + ```shell + docker network inspect k3d-k3s-default | jq .[0].IPAM.Config[0] + ``` -3. Before installing BigBang we will need to install and configure [metallb](https://metallb.universe.tf/concepts/) + - k3d-k3s-default is the name of the default bridge network k3d creates when creating a k3d cluster. + - We need the "Subnet": value to populate the correct addresses in the ConfigMap below. + - If my output looks like: -```shell -kubectl create -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml -kubectl create -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml -cat << EOF > metallb-config.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - namespace: metallb-system - name: config -data: - config: | - address-pools: - - name: default - protocol: layer2 - addresses: - - 172.18.1.240-172.18.1.243 -EOF -kubectl create -f metallb-config.yaml -``` + ```json + { + "Subnet": "172.18.0.0/16", + "Gateway": "172.18.0.1" + } + ``` - - The commands will create a metallb install and configure it to assign LoadBalancer IPs within the range `172.18.1.240-172.18.1.243` which is within the standard Docker Bridge Network CIDR meaning that the linux network stack will have a route to this network already. + - Then the addresses I want to input for metallb would be `172.18.1.240-172.18.1.243` so that I can reserve 4 IP addresses within the subnet of the Docker Network. + +1. Before installing BigBang we will need to install and configure [metallb](https://metallb.universe.tf/concepts/) + + ```shell + kubectl create -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml + kubectl create -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml + cat << EOF > metallb-config.yaml + apiVersion: v1 + kind: ConfigMap + metadata: + namespace: metallb-system + name: config + data: + config: | + address-pools: + - name: default + protocol: layer2 + addresses: + - 172.18.1.240-172.18.1.243 + EOF + kubectl create -f metallb-config.yaml + ``` -4. Deploy BigBang with istio ingress gateways configured. + - The commands will create a metallb install and configure it to assign LoadBalancer IPs within the range `172.18.1.240-172.18.1.243` which is within the standard Docker Bridge Network CIDR meaning that the linux network stack will have a route to this network already. -5. 
Verify LoadBalancers +1. Deploy BigBang with istio ingress gateways configured. -```shell -kubectl get svc -n istio-system -``` +1. Verify LoadBalancers - - You should see a result like: -``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -istiod ClusterIP 10.43.59.25 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 151m -private-ingressgateway LoadBalancer 10.43.221.12 172.18.1.240 15021:31000/TCP,80:31001/TCP,443:31002/TCP,15443:31003/TCP 150m -public-ingressgateway LoadBalancer 10.43.35.202 172.18.1.241 15021:30000/TCP,80:30001/TCP,443:30002/TCP,15443:30003/TCP 150m -passthrough-ingressgateway LoadBalancer 10.43.173.31 172.18.1.242 15021:32000/TCP,80:32001/TCP,443:32002/TCP,15443:32003/TCP 119m -``` + ```shell + kubectl get svc -n istio-system + ``` - - With the key information here being the assigned `EXTERNAL-IP` sections for the ingressgateways. + - You should see a result like: -6. Update Hosts file on ec2 instance with IPs above + ```console + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + istiod ClusterIP 10.43.59.25 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 151m + private-ingressgateway LoadBalancer 10.43.221.12 172.18.1.240 15021:31000/TCP,80:31001/TCP,443:31002/TCP,15443:31003/TCP 150m + public-ingressgateway LoadBalancer 10.43.35.202 172.18.1.241 15021:30000/TCP,80:30001/TCP,443:30002/TCP,15443:30003/TCP 150m + passthrough-ingressgateway LoadBalancer 10.43.173.31 172.18.1.242 15021:32000/TCP,80:32001/TCP,443:32002/TCP,15443:32003/TCP 119m + ``` -```shell -sudo vim /etc/hosts -``` + - With the key information here being the assigned `EXTERNAL-IP` sections for the ingressgateways. - - Update it with similar entries: - - Applications with the following values (eg for Jaeger): - ```yaml - jaeger: - ingress: - gateway: "" #(Defaults to public-ingressgateway) - ``` - We will need to set to the EXTERNAL-IP of the public-ingressgateway - ``` - 172.18.1.241 jaeger.bigbang.dev - ``` - - Applications with the following values (eg for Logging): - ```yaml - logging: - ingress: - gateway: "private" - ``` - We will need to set to the EXTERNAL-IP of the private-ingressgateway - ``` - 172.18.1.240 kibana.bigbang.dev - ``` - - Keycloak will need to be set to the External-IP of the passthrough-ingressgateway - ``` - 172.18.1.242 keycloak.bigbang.dev +1. Update Hosts file on ec2 instance with IPs above + + ```shell + sudo vim /etc/hosts ``` - - With these DNS settings in place you will now be able to reach the external *.bigbang.dev URLs from this EC2 instance. - - To reach outside the EC2 instance use either SSH or SSHUTTLE commands to specify a local port for Dynamic application-level port forwarding (ssh -D). Example - ```shell - sshuttle --dns -vr ubuntu@$EC2_PRIVATE_IP 172.31.0.0/16 --ssh-cmd 'ssh -i ~/.ssh/your.pem -D 127.0.0.1:12345' - ``` - - and utilize Firefox's built in SOCKS proxy configuration to route DNS and web traffic through the application-level port forward from the SSH command. - 1. Open Firefox browser - 1. Click on hamburger menu in upper right corner and select ```Settings``` - 1. At the bottom of ```Settings``` page in the ```Network Settings``` section select ```Settings``` - 1. Select ```Manual proxy configuration``` and the following values - ``` - SOCKS Host: localhost - Port: 12345 - ``` - and select SOCKS v5 - 1. Select ```Proxy DNS when using SOCKS v5``` - -7. To be able to test SSO between BigBang Package apps and your own Keycloak instance deployed in the same cluster you will need to take some extra steps. 
For SSO OIDC to work the app pod from within the cluster must be able to reach ```keycloak.bigbang.dev```. When using a development k3d environment with the development TLS cert the public DNS for ```keycloak.bigbang.dev``` points to localhost IP 127.0.0.1. This means that from within pod containers your Keycloak deployment can't be found. Therefore the SSO will fail. The development hack to fix this is situation is to edit the cluster coredns configmap and add a NodeHosts entry for Keycloak. + - Update it with similar entries: + - Applications with the following values (eg for Jaeger): + + ```yaml + jaeger: + ingress: + gateway: "" #(Defaults to public-ingressgateway) + ``` + + We will need to set to the EXTERNAL-IP of the public-ingressgateway + + ```plaintext + 172.18.1.241 jaeger.bigbang.dev + ``` + + - Applications with the following values (eg for Logging): + + ```yaml + logging: + ingress: + gateway: "private" + ``` + + We will need to set to the EXTERNAL-IP of the private-ingressgateway + + ```plaintext + 172.18.1.240 kibana.bigbang.dev + ``` + + - Keycloak will need to be set to the External-IP of the passthrough-ingressgateway + + ```plaintext + 172.18.1.242 keycloak.bigbang.dev + ``` + + - With these DNS settings in place you will now be able to reach the external *.bigbang.dev URLs from this EC2 instance. + + - To reach outside the EC2 instance use either SSH or SSHUTTLE commands to specify a local port for Dynamic application-level port forwarding (ssh -D). Example + + ```shell + sshuttle --dns -vr ubuntu@$EC2_PRIVATE_IP 172.31.0.0/16 --ssh-cmd 'ssh -i ~/.ssh/your.pem -D 127.0.0.1:12345' + ``` + + - and utilize Firefox's built in SOCKS proxy configuration to route DNS and web traffic through the application-level port forward from the SSH command. + 1. Open Firefox browser + 1. Click on hamburger menu in upper right corner and select ```Settings``` + 1. At the bottom of ```Settings``` page in the ```Network Settings``` section select ```Settings``` + 1. Select ```Manual proxy configuration``` and the following values + + ```plaintext + SOCKS Host: localhost + Port: 12345 + ``` + + and select SOCKS v5 + 1. Select ```Proxy DNS when using SOCKS v5``` + +1. To be able to test SSO between BigBang Package apps and your own Keycloak instance deployed in the same cluster you will need to take some extra steps. For SSO OIDC to work the app pod from within the cluster must be able to reach ```keycloak.bigbang.dev```. When using a development k3d environment with the development TLS cert the public DNS for ```keycloak.bigbang.dev``` points to localhost IP 127.0.0.1. This means that from within pod containers your Keycloak deployment can't be found. Therefore the SSO will fail. The development hack to fix this is situation is to edit the cluster coredns configmap and add a NodeHosts entry for Keycloak. 
- Edit the coredns configmap - ``` + ```shell kubectl edit configmap/coredns -n kube-system ``` + - add NodeHosts entry for Keycloak using using the passthrough-ingressgateway service EXTERNAL-IP - ``` + + ```yaml data: NodeHosts: | 172.18.0.2 k3d-k3s-default-server-0 @@ -434,10 +455,13 @@ sudo vim /etc/hosts 172.18.0.5 k3d-k3s-default-agent-2 172.18.1.242 keycloak.bigbang.dev ``` + - Restart the coredns pod so it can pick up the new config - ``` + + ```console kubectl rollout restart deployment coredns -n kube-system ``` + - You might also need to restart the Package app pods before they can detect the new coredns config - Deploy Keycloak using the example dev config values ```docs/developer/example_configs/keycloak-dev-values.yaml``` @@ -468,7 +492,6 @@ sudo wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh ### Setting an imagePullSecret on the cluster with k3d - **_This methodology is not recommended_** It is possible to set your image pull secret on the cluster so that you don't have to put your credentials in the code or in the command line in later steps diff --git a/docs/developer/package-integration/package-integration-database.md b/docs/developer/package-integration/package-integration-database.md index c9174c21004fccd1902c9b654afcdc409d47809a..ef3d9f4e7e77cf8e1f7cfc066ca5de40a797493f 100644 --- a/docs/developer/package-integration/package-integration-database.md +++ b/docs/developer/package-integration/package-integration-database.md @@ -20,7 +20,7 @@ Add database values for the package in bigbang/chart/values.yaml Note: Names of key/values may differ based on the application being integrated. Please refer to package chart values to ensure key/values coincide and application documentation for additional information on connecting to a database. -```yml +```yaml <package> database: # -- Hostname of a pre-existing PostgreSQL database to use. @@ -34,6 +34,7 @@ Add database values for the package in bigbang/chart/values.yaml # -- Database password for the username used to connect to the existing database. password: "" ``` + Example: [Anchore](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/10d43bea9351b91dfc6f14d3b0c2b2a60fe60c6a/chart/values.yaml#L882) **Next details the first way packages connect to a pre-existing database.** @@ -46,7 +47,7 @@ Example: [Anchore](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/10 If database values are NOT present then the internal database is enabled and default values declared in the package are used. -```yml +```yaml # External Postgres config {{- with .Values.<package>.database }} postgresql: @@ -64,14 +65,16 @@ postgresql: {{- end }} {{- end }} ``` + Example: [Anchore](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/10d43bea9351b91dfc6f14d3b0c2b2a60fe60c6a/chart/templates/anchore/values.yaml#L49) **The alternative way packages connect to a pre-existing database is detailed below.** -2. Package chart accepts a secret name where all the DB connection info is defined. In these cases we make the secret in the BB chart.. +1. Package chart accepts a secret name where all the DB connection info is defined. In these cases we make the secret in the BB chart.. - add conditional statement in `chart/templates/<package>/values.yaml` to add values for database secret, if database values exist. Otherwise the internal database is deployed. 
-```yml + +```yaml {{- with .Values.addons.<package>.database }} {{- if and .username .password .host .port .database }} database: @@ -88,10 +91,9 @@ postgresql: Example: [Mattermost](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/10d43bea9351b91dfc6f14d3b0c2b2a60fe60c6a/chart/templates/mattermost/mattermost/values.yaml#L49) - - create manifest that uses database values to create the database secret referenced above -```yml +```yaml {{- if .Values.addons.<package>.enabled }} {{- with .Values.addons.<package>.database }} {{- if and .username .password .host .port .database }} @@ -119,7 +121,7 @@ For validating connection to the external database in your environment or testin Mattermost Example: -```yml +```yaml addons: mattermost: enabled: true diff --git a/docs/developer/package-integration/package-integration-network-policies.md b/docs/developer/package-integration/package-integration-network-policies.md index 294b58bd0d0c77ee2cd3e77219f432b2902320b9..6f0023c814ce728f651043050784047c847ba83f 100644 --- a/docs/developer/package-integration/package-integration-network-policies.md +++ b/docs/developer/package-integration/package-integration-network-policies.md @@ -1,26 +1,24 @@ # Big Bang Package: Network Policies + To increase the overall security posture of Big Bang, network policies are put in place to only allow ingress and egress from package namespaces to other needed services. A deny by default policy is put in place to deny all traffic that is not explicitly allowed. The following is how to implement the network policies per Big Bang standards. ## Table of Contents -1. [Prerequisites](#prerequisites) -2. [Integration](#integration) - - [Default Deny](#default-deny) - - [Default Allow](#default-allow) - - [Was Something Important Blocked?](#something-important-blocked) - - [Allowing Exceptions](#allowing-exceptions) - - [Additional Configuration](#additional-configuration) -3. [Validation](#validation) +[[_TOC_]] + +## Prerequisites -## Prerequisites <a name="prerequisites"></a> - Understanding of ports and communications of applications and other components within BigBang - `chart/templates/bigbang` and `chart/templates/bigbang/networkpolicies` folders within package for committing bigbang specific templates -## Integration <a name="integration"></a> +## Integration + All examples in this documentation will center on [podinfo](https://repo1.dso.mil/platform-one/big-bang/apps/sandbox/podinfo). -### Default Deny <a name="default-deny"></a> +### Default Deny + In order to keep Big Bang secure, a default deny policy must be put into place for each package. Create `default-deny-all.yaml` inside `chart/templates/bigbang/networkpolicies` with the following details: + ```yaml {{ if .Values.networkPolicies.enabled }} # Default deny everything to/from this namespace @@ -38,8 +36,11 @@ spec: ingress: [] {{- end }} ``` -### Default Allow <a name="default-allow"></a> + +### Default Allow + For packages with more than one pod/deployment and those pods/deployments need to talk to each other, add a policy that allows all ingress/egress between pods in the namespace. Create `default-allow-ns.yaml` inside `chart/templates/bigbang/networkpolicies` with the following details: + ```yaml {{- if .Values.networkPolicies.enabled }} apiVersion: networking.k8s.io/v1 @@ -61,13 +62,15 @@ spec: {{- end }} ``` -### Was Something Important Blocked? <a name="something-important-blocked"></a> +### Was Something Important Blocked? 
+ There are a few ways to determine if a network policy is blocking egress or ingress to or from a pod. - Test things from the pod's perspective using ssh/exec. See [this portion](../../guides/deployment_scenarios/sso_quickstart.md#step-18-update-inner-cluster-dns-on-the-workload-cluster) of the keycloak quickstart for an example of how do to that. - Curl a pod's IP from another pod to see if network polices are blocking that traffic. Use `kubectl pod -o wide -n <podNamespace>` to see pod IP addresses. - Check the pod logs (or curl from one container to the service) for a `context deadline exceeded` or `connection refused` message. -### Allowing Exceptions <a name="allowing-exceptions"></a> +### Allowing Exceptions + - Egress exceptions to consider: - pod to pod - SSO @@ -82,11 +85,12 @@ There are a few ways to determine if a network policy is blocking egress or ingr - Prometheus - Istio for virtual service - web endpoints -- Once you have determined an exception needs to be made, create a template in `chart/templates/bigbang/networkpolicies`. -- NetworkPolicy templates follow the naming convention of `direction-destination.yaml` (eg: egress-dns.yaml). +- Once you have determined an exception needs to be made, create a template in `chart/templates/bigbang/networkpolicies`. +- NetworkPolicy templates follow the naming convention of `direction-destination.yaml` (eg: egress-dns.yaml). - Each networkPolicy template in the package will have an if statement checking for `networkPolicies.enabled` and will only be present when `enabled: true` For example, if the podinfo package needs to send information to istiod, add the following content to a file named `egress-istio-d.yaml`: + ```yaml {{- if and .Values.networkPolicies.enabled .Values.istio.enabled }} apiVersion: networking.k8s.io/v1 @@ -112,6 +116,7 @@ spec: ``` Similarly, if prometheus needs access to podinfo to scrape metrics, create an `ingress-monitoring-prometheus.yaml` file with the following contents: + ```yaml {{- if and .Values.networkPolicies.enabled .Values.monitoring.enabled }} apiVersion: networking.k8s.io/v1 @@ -139,8 +144,10 @@ spec: {{- end }} ``` -### Additional Configuration <a name="additional-configuration"></a> +### Additional Configuration + Sample `chart/values.yaml` code at the package level: + ```yaml # BigBang specific Network Policy Configuration networkPolicies: @@ -154,10 +161,11 @@ networkPolicies: istio: ingressgateway ``` -- Use the `enabled: false` code above in order to disable networkPolicy templates for the package. The networkPolicy templates will be enabled by default when deployed from BigBang because it will inherit the `networkPolicies.enabled` [value](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/chart/values.yaml#L102). -- The ingressLabels portion supports packages that have an externally accessible UIs. Values from BigBang will also be inherited in this portion to ensure traffic from the correct istio ingressgateway is whitelisted. +- Use the `enabled: false` code above in order to disable networkPolicy templates for the package. The networkPolicy templates will be enabled by default when deployed from BigBang because it will inherit the `networkPolicies.enabled` [value](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/chart/values.yaml#L102). +- The ingressLabels portion supports packages that have an externally accessible UIs. Values from BigBang will also be inherited in this portion to ensure traffic from the correct istio ingressgateway is whitelisted. 
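+
+As an illustration of the `ingressLabels` bullet above, the sketch below shows one way a package's ingress policy template could consume those values for podinfo (9898 is podinfo's HTTP port). The file name follows the `direction-destination.yaml` convention, but the selectors shown here are illustrative and may differ from what a real package ships.
+
+```yaml
+# chart/templates/bigbang/networkpolicies/ingress-istio-ingressgateway.yaml (illustrative sketch)
+{{- if and .Values.networkPolicies.enabled .Values.istio.enabled }}
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: ingress-istio-ingressgateway
+spec:
+  podSelector: {}
+  policyTypes:
+    - Ingress
+  ingress:
+    - from:
+        - namespaceSelector: {}
+          podSelector:
+            matchLabels:
+              # Inherit the ingressgateway labels passed down from Big Bang
+              {{- toYaml .Values.networkPolicies.ingressLabels | nindent 14 }}
+      ports:
+        - port: 9898
+          protocol: TCP
+{{- end }}
+```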
+ +Example of a BigBang value configuration, `bigbang/templates/podinfo/values.yaml`, when adding a package into BigBang with networkPolicies: -Example of a BigBang value configuration, `bigbang/templates/podinfo/values.yaml`, when adding a package into BigBang with networkPolicies: ```yaml networkPolicies: enabled: {{ .Values.networkPolicies.enabled }} @@ -172,6 +180,7 @@ networkPolicies: - The `controlPlaneCidr` will control egress to the kube-api and be wide open by default, but will inherit the `networkPolicies.controlPlaneCidr` value from BigBang so the range can be locked down. Sample `chart/templates/bigbang/networkpolicies/egress-kube-api.yaml`: + ```yaml {{- if .Values.networkPolicies.enabled }} apiVersion: networking.k8s.io/v1 @@ -194,7 +203,9 @@ spec: - Egress {{- end }} ``` + - The networkPolicy template for kube-api egress will look like the above, so that communication to the [AWS Instance Metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) and [Azure Instance Metadata](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service) can be limited unless required by the package. -## Validation <a name="validation"></a> +## Validation + - Package functions as expected and is able to communicate with all BigBang touchpoints. diff --git a/docs/developer/package-integration/package-integration-pipeline.md b/docs/developer/package-integration/package-integration-pipeline.md index 36d44e6c94de22ebdf9381f8b4ff3008f69d49ba..fda3560a7f29a86d8f0e10f7d0576d1d47e68e9e 100644 --- a/docs/developer/package-integration/package-integration-pipeline.md +++ b/docs/developer/package-integration/package-integration-pipeline.md @@ -52,15 +52,20 @@ Pipelines provide rapid feedback to changes in our Helm chart as we develop and 1. Update the repo's CI/CD settings to call the pipeline (`Settings > CI/CD > General pipelines > Expand > CI/CD configuration file`). For Bigbang - ```text + + ```plaintext pipelines/bigbang-package.yaml@platform-one/big-bang/pipeline-templates/pipeline-templates:master ``` + For Third party - ```text + + ```plaintext pipelines/third-party.yaml@platform-one/big-bang/pipeline-templates/pipeline-templates:master ``` + For Sandbox - ```text + + ```plaintext pipelines/sandbox.yaml@platform-one/big-bang/pipeline-templates/pipeline-templates:master ``` diff --git a/docs/developer/package-integration/package-integration-policy-enforcement.md b/docs/developer/package-integration/package-integration-policy-enforcement.md index b3eeea7ebde491a4d42e051031120ea5603699b9..378430164c38f66097c8c6ac5265bce3a15f46b7 100644 --- a/docs/developer/package-integration/package-integration-policy-enforcement.md +++ b/docs/developer/package-integration/package-integration-policy-enforcement.md @@ -11,15 +11,16 @@ When integrating your package, you must adhere to the policies that are enforced ## Integration -#### 1. Deploying a Policy Enforcement Tool (OPA Gatekeeper) +### 1. Deploying a Policy Enforcement Tool (OPA Gatekeeper) -The policy enforcement tool is deployed as the first package in the default Big Bang configuration. This is so that the enforcement tool can effectively protect the cluster from the start. Your package will be deployed on top of the Big Bang enforcement tool. The policy enforcment tool will control your pacakge's access to the cluster. +The policy enforcement tool is deployed as the first package in the default Big Bang configuration. 
This is so that the enforcement tool can effectively protect the cluster from the start. Your package will be deployed on top of the Big Bang enforcement tool. The policy enforcement tool will control your package's access to the cluster. -#### 2. Identifying Violations Found on Your Application +### 2. Identifying Violations Found on Your Application In the following section, you will be shown how to identify violations found in your package. The app [PodInfo](https://repo1.dso.mil/platform-one/big-bang/apps/sandbox/podinfo) will be used for all of the examples. Gatekeeper has three enforcement actions `deny`, `dryrun`, and `warn`. Only `deny` will prohibit access to the cluster, but the `warn` and `dryrun` constraints should be fixed as well as they are generally best practice. In this example we will be attempting to install PodInfo onto our cluster: + ```bash ➜ helm install flux-podinfo chart NAME: flux-podinfo @@ -32,7 +33,9 @@ NOTES: echo "Visit http://127.0.0.1:8080 to use your application" kubectl -n default port-forward deploy/flux-podinfo 8080:9898 ``` + Everything looks good with the deployment, but upon further inspection we can see that our app hasn't deployed properly. + ```bash ➜ kubectl get all -n default NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE @@ -45,14 +48,16 @@ deployment.apps/flux-podinfo 0/1 0 0 19s NAME DESIRED CURRENT READY AGE replicaset.apps/flux-podinfo-84d5bccfd6 1 0 0 19s ``` -In order to get more information on why our deployment isn't avaialable, we can check the events of the K8s cluster. This will show us if there are policy violations, but will also reveal any other issues in our cluster. + +In order to get more information on why our deployment isn't available, we can check the events of the K8s cluster. This will show us if there are policy violations, but will also reveal any other issues in our cluster. ```bash ➜ kubectl get events -n default NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE default 31s Warning FailedCreate replicaset/flux-podinfo-84d5bccfd6 Error creating: admission webhook "validation.gatekeeper.sh" denied the request: [no-privileged-containers] Privileged container is not allowed: podinfo, securityContext: {"privileged": true} ``` -We can see that the issue in our example is that our PodInfo application is running containers as privileged. + +We can see that the issue in our example is that our PodInfo application is running containers as privileged. To get more information as to how to fix this issue we can get the logs of the gatekeeper control plane @@ -61,7 +66,9 @@ This is going to output a lot of logs to sift through so we can do a simple `gre ```bash kubectl logs -l control-plane=controller-manager -n gatekeeper-system --tail=-1 | grep "no-privileged-containers" ``` + And we'll see one of the log lines will looks something like the following: + ```json { "level": "info", @@ -84,37 +91,40 @@ And we'll see one of the log lines will looks something like the following: } ``` -#### 3. Fixing Policy Violations +### 3. Fixing Policy Violations -We can see the `constraint_action: deny` indicates that our resource was denied access to the cluster. The `contstraint_name` and `constraint_kind` can provide us a way to get more information as to why our resource was denied. Running the following command will help you do so. +We can see the `constraint_action: deny` indicates that our resource was denied access to the cluster. 
The `constraint_name` and `constraint_kind` can provide us a way to get more information as to why our resource was denied. Running the following command will help you do so. ```bash kubectl get <constraint_kind>.constraints.gatekeeper.sh/<constraint_name> -o json | jq '.metadata.annotations' ``` + Replacing the command with our information give us the following: + ```bash kubectl get K8sPSPPrivilegedContainer2.constraints.gatekeeper.sh/no-privileged-containers -o json | jq '.metadata.annotations' ``` + ```json { "constraints.gatekeeper/description": "Containers must not run as privileged.", "constraints.gatekeeper/docs": "https://kubernetes.io/docs/concepts/workloads/pods/#privileged-mode-for-containers", - "constraints.gatekeeper/name": "Privilged Containers", + "constraints.gatekeeper/name": "Privileged Containers", "constraints.gatekeeper/source": "https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy/privileged-containers", "helm.sh/hook": "post-install,post-upgrade" } ``` + The annotations provide us documentation information for specific policies as well as the source code to view that policy. To fix this issue, navigate to your package's `chart/values.yaml` or `deployment.yaml` and remove `privileged: true` or explicitly set it to `false`. -#### 4. Exemptions to Policy Exceptions +### 4. Exemptions to Policy Exceptions Fixing the violation in the application is preferred, but sometimes we need to make an exception to the policy and leave the violation in place. If you require an exception to a policy, please reference our [exception doc](https://repo1.dso.mil/platform-one/big-bang/apps/core/policy/-/blob/main/docs/exceptions.md) for more information. - ## Validation After we fixed the violation, we can run `helm upgrade flux-podinfo chart`. We can now check all the events in our cluster. This will show us if we've fixed our policy violation, but will also reveal non-policy related issues. diff --git a/docs/developer/package-integration/package-integration-sso.md b/docs/developer/package-integration/package-integration-sso.md index 78abcfc9ab79b3e11b40770c1a4219f2db34f14c..e3cd2380d9cf39c0f3d93214ae5248bed42ed395 100644 --- a/docs/developer/package-integration/package-integration-sso.md +++ b/docs/developer/package-integration/package-integration-sso.md @@ -14,17 +14,19 @@ The development environment can be set up in one of two ways: All package SSO Integrations within BigBang require a `<package>.sso` block within the BigBang [chart values](../../../chart/values.yaml) for your package along with an enabled flag: -```yml +```yaml <package>: sso: enabled: true ``` + Based on the authentication protocol implemented by the package being integrated, either Security Access Markup Language (SAML) or OpenID (OIDC), follow the appropriate example below. #### OIDC + For SSO integration using OIDC, at a minimum this usually requires `sso.client_id` and `sso.client_secret` values under the same block above. We can then reference these values further down in either the template values for your package ([eg: Gitlab](../../../chart/templates/gitlab/values.yaml)) or [Authservice Values template](../../../chart/templates/authservice/values.yaml) if there is no built-in support for OIDC or SAML in the package. Authservice will be discussed in more detail further down. 
-```yml +```yaml <package>: sso: enabled: true @@ -41,15 +43,16 @@ For SSO integration using OIDC, at a minimum this usually requires `sso.client_i Example: [ECK Values template](../../../chart/templates/logging/elasticsearch-kibana/values.yaml) #### SAML -For SSO integration using SAML, review the upstream documentation specific to the package and create the necessary items to passthrough from BigBang to the package values under the `<package>.sso` key. For example, Sonarqube configures SSO settings through `sonarProperties` values, which are collected from defined values under `addons.sonarqube.sso` within BigBang and passed through in the [sonarqube Values template](../../../chart/templates/sonarqube/values.yaml). +For SSO integration using SAML, review the upstream documentation specific to the package and create the necessary items to passthrough from BigBang to the package values under the `<package>.sso` key. For example, Sonarqube configures SSO settings through `sonarProperties` values, which are collected from defined values under `addons.sonarqube.sso` within BigBang and passed through in the [sonarqube Values template](../../../chart/templates/sonarqube/values.yaml). ### AuthService Integration -If SSO is not availble on the package to be integrated, Istio AuthService can be used for authentication. For AuthService integration, add `<package>.sso.client_id` and `<package>.sso.client_secret` definitions for the package within `../../chart/values.yaml`. Authservice has `global` settings defined and any values not explicitly set in this file will be inherited from the global values (like `authorization_uri`, `certificate_authority`, `jwks`, etc). Review the example below below of the jaeger specific chain configured within BigBang and passed through to the authservice values. + +If SSO is not available on the package to be integrated, Istio AuthService can be used for authentication. For AuthService integration, add `<package>.sso.client_id` and `<package>.sso.client_secret` definitions for the package within `../../chart/values.yaml`. Authservice has `global` settings defined and any values not explicitly set in this file will be inherited from the global values (like `authorization_uri`, `certificate_authority`, `jwks`, etc). Review the example below below of the jaeger specific chain configured within BigBang and passed through to the authservice values. Example: [Jaeger chain in Authservice template values](../../../chart/templates/authservice/values.yaml) -In order to use Authservice, Istio injection is required and utilized to route all pod traffic through the Istio side car proxy and the associated Authentication and Authorization policies. +In order to use Authservice, Istio injection is required and utilized to route all pod traffic through the Istio side car proxy and the associated Authentication and Authorization policies. 1. The first step is to ensure your namespace template where you package is destined is istio injected, and the appropriate label is set in `chart/templates/<package>/namespace.yaml`. @@ -57,7 +60,7 @@ Example: [Jaeger Namespace template](../../../chart/templates/jaeger/namespace.y 1. Next is to make sure the following label is applied to the workload (pod/deployment/replicaset/daemonset/etc) that will be behind the Authservice gate: -```yml +```yaml ... 
{{- $<package>AuthserviceKey := (dig "selector" "key" "protect" .Values.addons.authservice.values) }} {{- $<package>AuthserviceValue := (dig "selector" "value" "keycloak" .Values.addons.authservice.values) }} @@ -72,7 +75,9 @@ This label is set in the Authservice package, and is set to `protect=keycloak` b Example: [Jaeger Values template](../../../chart/templates/jaeger/values.yaml) ## Validation + For validating package integration with Single Sign On (SSO), carry out the following basic steps: + 1. Enable the package and SSO within Big Bang through the values added in the sections above 2. Using an internet browser, browse to your application (e.g. sonarqube.bigbang.dev) 3. If using built-in SAML/OIDC, click the login button, confirm a redirect to the Identity Provider happens. If using Authservice, confirm a redirect to the Identity Provider happens, prompting user sign in. @@ -80,4 +85,4 @@ For validating package integration with Single Sign On (SSO), carry out the foll 5. Successful sign in should return you to the application page 6. Confirm you are in the expected account within the application and that you are able to use the application -Note: An unsuccessful sign in may result in an `x509` cert issues, `invalid client ID/group/user` error, `JWKS` error, or other issues. +Note: An unsuccessful sign in may result in an `x509` cert issues, `invalid client ID/group/user` error, `JWKS` error, or other issues. diff --git a/docs/developer/package-integration/package-integration-storage.md b/docs/developer/package-integration/package-integration-storage.md index 7526ad9bbbe041c203855c1f7925c19f8992be9b..bbc7b073fe93e48c60383d2ee0f0febb07931ef6 100644 --- a/docs/developer/package-integration/package-integration-storage.md +++ b/docs/developer/package-integration/package-integration-storage.md @@ -20,9 +20,10 @@ Both ways will first require the following step: Add objectStorage values for the package in bigbang/chart/values.yaml - Notes: - - Names of key/values may differ based on the application being integrated (eg: iamProfile for Gitlab objectStorage values). Please refer to package chart values to ensure key/values coincide and application documentation for additional information on connecting to object storage. - - Some packages may have in-built object storage and the implementation may vary. + Notes: + +- Names of key/values may differ based on the application being integrated (eg: iamProfile for Gitlab objectStorage values). Please refer to package chart values to ensure key/values coincide and application documentation for additional information on connecting to object storage. +- Some packages may have in-built object storage and the implementation may vary. ```yaml <package> @@ -60,11 +61,11 @@ Add objectStorage values for the package in bigbang/chart/values.yaml - add a conditional statement to `bigbang/chart/templates/<package>/values` that will check if the object storage values exist and creates the necessary object storage values. - If object storage values are present, then the internal object storage is disabled by setting `enabled: false` and the endpoint, accessKey, accessSecret, and bucket values are set. + If object storage values are present, then the internal object storage is disabled by setting `enabled: false` and the endpoint, accessKey, accessSecret, and bucket values are set. If object storage values are NOT present then the minio cluster is enabled and default values declared in the package are used. 
-```yml +```yaml {{- with .Values.addons.<package>.objectStorage }} {{- if and .endpoint .accessKey .accessSecret .bucket }} fileStore: @@ -75,13 +76,14 @@ fileStore: {{- end }} {{- end }} ``` -Example: [MatterMost](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/chart/templates/mattermost/mattermost/values.yaml#L66-68) passes the endpoint and bucket via chart values. +Example: [MatterMost](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/chart/templates/mattermost/mattermost/values.yaml#L66-68) passes the endpoint and bucket via chart values. -2. Package chart accepts a secret name where all the object storage connection info is defined. In these cases we make the secret in the BB chart. +1. Package chart accepts a secret name where all the object storage connection info is defined. In these cases we make the secret in the BB chart. - add conditional statement in `chart/templates/<package>/values.yaml` to add values for object storage secret, if object storage values exist. Otherwise the minio cluster is used. -```yml + +```yaml objectStorage: config: secret: <package>-object-storage @@ -90,8 +92,9 @@ objectStorage: Example: [GitLab](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/chart/templates/gitlab/values.yaml#L54-57) -- Create the secret in the Big Bang chart. (NOTE: Replace <package> with your package name in the example below) -```yml +- Create the secret in the Big Bang chart. (NOTE: Replace `<package>` with your package name in the example below) + +```yaml {{- if .Values.addons.<package>.enabled }} {{- with .Values.addons.<package>.objectStorage }} {{- if and .endpoint .accessKey .accessSecret }} @@ -110,14 +113,16 @@ stringData: {{- end }} {{- end }} ``` + Example: [GitLab secret-objectstore.yaml](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/chart/templates/gitlab/secret-objectstore.yaml) ## Validation + For validating connection to the object storage in your environment or testing in CI pipeline you will need to add the object storage specific values to your overrides file or `./tests/test-values.yaml` respectively. If you are using Minio, ensure `addons.minio.enabled: true`. Mattermost Example: -```yml +```yaml addons: mattermost: enabled: true @@ -127,9 +132,10 @@ addons: accessSecret: "LKSJF2343KS9LS21J3KK20" bucket: "myMMBucket" ``` + For testing with the CI pipeline, create a `tests/dependencies.yaml` and include Minio. -```yml +```yaml miniooperator: git: repo: "https://repo1.dso.mil/platform-one/big-bang/apps/application-utilities/minio-operator.git" @@ -142,6 +148,7 @@ minio: tag: "4.2.3-bb.6" namespace: minio ``` + Example: [Velero dependencies.yaml](https://repo1.dso.mil/platform-one/big-bang/apps/cluster-utilities/velero/-/blob/main/tests/dependencies.yaml) -In order to test that the object storage is working, perform an action that stores a file. For example, if using Mattermost, upload an image for a user avatar. \ No newline at end of file +In order to test that the object storage is working, perform an action that stores a file. For example, if using Mattermost, upload an image for a user avatar. 
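+
+If you want to confirm the upload actually landed in the bucket, one quick spot check (outside of Big Bang, and assuming you have the AWS CLI installed) is to list the bucket through the same endpoint and credentials used in the overrides above. The placeholder values below must be replaced with the ones from your overrides file.
+
+```shell
+# Use the same credentials the package was configured with
+export AWS_ACCESS_KEY_ID="<accessKey from your overrides>"
+export AWS_SECRET_ACCESS_KEY="<accessSecret from your overrides>"
+# List the bucket contents through the configured endpoint
+aws s3 ls "s3://myMMBucket" --endpoint-url "https://<endpoint from your overrides>"
+```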
diff --git a/docs/developer/package-integration/package-integration-upstream.md b/docs/developer/package-integration/package-integration-upstream.md index 49a9c48d91f23f3d78475ddf4dbddce3951d47ce..c98d1fbd7aa82d0ac6f3b3851fc2b68c49faad3d 100644 --- a/docs/developer/package-integration/package-integration-upstream.md +++ b/docs/developer/package-integration/package-integration-upstream.md @@ -81,7 +81,7 @@ To minimize maintenance, it is preferable to reuse existing Helm charts availabl Example: - ```text + ```plaintext * @gitlabuser ``` diff --git a/docs/developer/package-integration/product-integration-supported.md b/docs/developer/package-integration/product-integration-supported.md index 2cfcae872a684a187e23cb1caf2b18720158dd3c..912e478e2fdec4bbd6fbd0b99d56390e72700fc2 100644 --- a/docs/developer/package-integration/product-integration-supported.md +++ b/docs/developer/package-integration/product-integration-supported.md @@ -80,7 +80,7 @@ After [graduating your package](https://repo1.dso.mil/platform-one/bbtoc/-/tree/ 1. Create an overrrides directory as a sibling directory next to the bigbang code directory. Put your override yaml files in this directory. The reason we do this is to avoid modifying the bigbang values.yaml that is under source control. You could accidentally commit it with your secrets. Avoid that mistake and create a local overrides directory. One option is to copy the tests/ci/k3d/values.yaml to make the override-values.yaml and make modifications. The file structure is like this: - ```text + ```plaintext ├── bigbang/ └── overrides/ ├── override-values.yaml diff --git a/docs/developer/scripts/README.md b/docs/developer/scripts/README.md index afd59630cd5159b47c60ce353e407f88f03a2a89..0d982971ee9f5fdb535a0977b3dbab582aa2b606 100644 --- a/docs/developer/scripts/README.md +++ b/docs/developer/scripts/README.md @@ -4,7 +4,7 @@ The instance will automatically terminate in the middle of the night at 08:00 UTC. -# Install and Configure Dependencies +## Install and Configure Dependencies 1. Install aws cli @@ -31,15 +31,14 @@ The instance will automatically terminate in the middle of the night at 08:00 UT aws configure list ``` -1. Install jq - Follow jq installation instructions for your workstation operating system. - https://stedolan.github.io/jq/download/ +1. Install jq + Follow jq installation instructions for your workstation operating system. + <https://stedolan.github.io/jq/download/> +1. Mac users will need to install the GNU version of the sed command. + <https://medium.com/@bramblexu/install-gnu-sed-on-mac-os-and-set-it-as-default-7c17ef1b8f64> -1. Mac users will need to install the GNU version of the sed command. - https://medium.com/@bramblexu/install-gnu-sed-on-mac-os-and-set-it-as-default-7c17ef1b8f64 - -# Usage +## Usage The default with no options specified is to use the EC2 public IP for the k3d cluster and the security group. @@ -56,17 +55,17 @@ k3d-dev.sh -b -p -m -d -h -h output help ``` -# Troubleshooting +## Troubleshooting 1. If you are on a Mac insure that you have GNU sed command installed. Otherwise you will see this error and the kubeconfig will not be updated with the IP from the instance. - ``` + + ```console copy kubeconfig config 100% 3019 72.9KB/s 00:00 sed: 1: "...": extra characters at the end of p command ``` -2. If you get a failure from the script study and correct the error. Then run script with "-d" option to clean up resources. Then re-run your original command. +2. If you get a failure from the script study and correct the error. 
Then run the script with the "-d" option to clean up resources. Then re-run your original command.

3. Occasionally an ssh command will fail because of connection problems. If this happens, the script will fail with "unexpected EOF". Simply try again: run the script with ```-d``` to clean up resources, then re-run your original command.

-
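+
+For the failure cases in items 2 and 3, the recovery flow is the same. The example below assumes the original run used the defaults (no options); substitute whatever flags you originally passed.
+
+```shell
+# Clean up the AWS resources left behind by the failed run
+k3d-dev.sh -d
+# Re-run the script with your original options
+k3d-dev.sh
+```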