# Single Sign On (SSO)
Big Bang has configuration for Single Sign-On (SSO) authentication using an identity provider, like Keycloak. If the package supports SSO, you will need to integrate Big Bang's configuration with the package. If the package does not support SSO, an [authentication service](https://repo1.dso.mil/big-bang/product/packages/authservice) can be used to intercept traffic and provide SSO. This document details how to set up your package for either scenario.
## Prerequisites
The development environment can be set up in one of three ways:
1. Two k3d clusters with Keycloak in one cluster and Big Bang and all other apps in the second cluster (see [this quick start guide](../../guides/deployment-scenarios/sso-quickstart.md) for more information).
1. One k3d cluster using MetalLB to have Keycloak, Big Bang, and all other apps in the one cluster (see [this example config](../../assets/configs/example/keycloak-dev-values.yaml) for more information).
1. A single k3d cluster with two public IP addresses and the `-a` option on the `k3d-dev.sh` script. This will provision two Elastic IPs, MetalLB, and two specialized `k3d-proxy` containers for connecting the Elastic IPs to the MetalLB IPs. This allows both a Public and a Passthrough Istio Gateway to work simultaneously, specifically to allow x509 mTLS authentication with Keycloak. Keep in mind that `keycloak.bigbang.dev` will need to point to the secondary IP in your `/etc/hosts` file. The `k3d-dev.sh` script will inform you of this and return the secondary IP.
## Integration
...

For SSO integration using OIDC, at a minimum this usually requires `sso.client_id` and a client secret for the package (e.g., `client_secret: "XXXXXXXXXXXX"`).
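As a point of reference, here is a minimal, hedged sketch of what an addon's `sso` block in `bigbang/chart/values.yaml` often looks like (core packages sit at the top level instead of under `addons`; the exact keys depend on the package):

```yaml
addons:
  <package>:
    sso:
      enabled: true
      # Client registered in the identity provider (e.g., Keycloak)
      client_id: "<client-id>"
      client_secret: "XXXXXXXXXXXX"
```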
* A `bigbang/chart/templates/<package>/secret-sso.yaml` may need to be created in order to auto-generate secrets if required by the upstream documentation. We can see in the GitLab documentation for SSO that the configuration is handled [via JSON configuration](https://docs.gitlab.com/ee/administration/auth/oidc.html) [within a secret](https://docs.gitlab.com/charts/charts/globals.html#providers). This `secret-sso.yaml` can conditionally be created when `<package>.sso.enabled=true` within the Big Bang values.
Example: [GitLab SSO Secret template](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/chart/templates/gitlab/secret-sso.yaml)
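A heavily simplified, hypothetical sketch of such a conditional secret template (the real GitLab template linked above is more involved; the provider layout and the Big Bang-level `.Values.sso.url` value used here are assumptions for illustration):

```yaml
{{- if .Values.addons.<package>.sso.enabled }}
apiVersion: v1
kind: Secret
metadata:
  name: <package>-sso-provider
  namespace: <package>
type: Opaque
stringData:
  # Provider configuration consumed by the package, rendered from Big Bang's sso values
  provider: |
    name: openid_connect
    args:
      issuer: {{ .Values.sso.url }}
      identifier: {{ .Values.addons.<package>.sso.client_id }}
      secret: {{ .Values.addons.<package>.sso.client_secret }}
{{- end }}
```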
* If configuration isn't destined for a secret and the package supports SSO options directly via helm values, we can create and reference the necessary options from the `<package>.sso` values block. For example, the Elasticsearch documentation specifies a few [values required to enable and configure OIDC](https://www.elastic.co/guide/en/elasticsearch/reference/master/oidc-guide.html#oidc-enable-token) that we can configure and set to be conditional on `<package>.sso.enabled`.
Example: [ECK Values template](../../../chart/templates/elasticsearch-kibana/values.yaml)
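In that case, the conditional block in the Big Bang values template might look roughly like the following sketch (the `oidc` keys and `.Values.sso.url` here are illustrative; the real keys come from the package's upstream chart):

```yaml
{{- if .Values.addons.<package>.sso.enabled }}
# Hypothetical upstream keys -- consult the package chart for the real ones
oidc:
  enabled: true
  issuerUrl: {{ .Values.sso.url }}
  clientId: {{ .Values.addons.<package>.sso.client_id }}
{{- end }}
```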
...

In order to use Authservice, Istio injection is required and utilized to route a...
Example: [Jaeger Namespace template](../../../chart/templates/jaeger/namespace.yaml)
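A minimal sketch of such a namespace template, assuming the standard `istio-injection: enabled` label is what enables sidecar injection for the package:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <package>
  labels:
    istio-injection: enabled
```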
1. Next, ensure the following label is applied to the workload (e.g., pod, deployment, replicaset, and/or daemonset) that will be behind the Authservice gate:
```yaml
...
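# Illustrative only: the exact label is defined by the authservice selector
# configuration; `protect: keycloak` is the label commonly used in Big Bang.
labels:
  protect: keycloak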
```

Example: [Jaeger Values template](../../../chart/templates/jaeger/values.yaml)

...
## Validation
For validating package integration with SSO, carry out the following basic steps:
1. Enable the package and SSO within Big Bang through the values added in the sections above.
1. Using an internet browser, browse to your application (e.g., sonarqube.bigbang.dev).
1. If using built-in SAML/OIDC, click the login button and confirm a redirect to the Identity Provider occurs. If using Authservice, confirm a redirect to the Identity Provider occurs, prompting user sign in.
1. Sign in as a valid user.
1. Successful sign in should return you to the application page.
1. Confirm you are in the expected account within the application and that you are able to use the application.
**NOTE:** An unsuccessful sign in may result in `x509` certificate issues, an `invalid client ID/group/user` error, a `JWKS` error, or other issues.
# Object Storage
If the package you are integrating connects to object storage (e.g., S3 buckets), you will need to follow the instructions provided here to integrate this feature into Big Bang.
In Big Bang, MinIO provides a consistent, performant, and scalable object store. MinIO is Kubernetes-native by design and provides S3-compatible endpoints.
## Prerequisites
A blob storage bucket is available with the correct permissions, or the MinIO addon is enabled at the Big Bang level. Alternatively, you have either:
1. An existing MinIO instance, or
1. An AWS S3 AccessKey and SecretKey.
## Integration
There are currently two typical ways that packages in Big Bang connect to object storage:
1. Package charts accept values for endpoint, accessKey, and/or bucket values and the chart makes the necessary secret and/or configmap.
1. Package chart accepts a secret name where all the object storage connection info is defined. In these cases, we make the secret in the Big Bang chart.
Both ways will first require the following step:
Add objectStorage values for the package in `bigbang/chart/values.yaml`.
Notes:
* Names of key/values may differ based on the application being integrated (e.g., iamProfile for Gitlab objectStorage values). Please refer to package chart values to ensure key/values coincide and application documentation for additional information on connecting to object storage.
* Some packages may have in-built object storage and the implementation may vary.
```yaml
<package>:
  # ...
```
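A hedged sketch of the shape this block commonly takes (key names vary per package; GitLab, for example, also accepts `iamProfile`, and some packages use `bucketPrefix` instead of `bucket`):

```yaml
addons:
  <package>:
    objectStorage:
      endpoint: "https://s3.us-gov-west-1.amazonaws.com"
      region: "us-gov-west-1"
      accessKey: "<access-key>"
      accessSecret: "<secret-key>"
      bucket: "<bucket-name>"
```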
**Options for packages connecting to pre-existing object storage:**
1. Package charts accept values for endpoint, accessKey, and/or bucket values and the chart makes the necessary secret and/or configmap.
* Add a conditional statement to `bigbang/chart/templates/<package>/values` that checks whether the object storage values exist and creates the necessary object storage values.
* If object storage values are present, then the internal object storage is disabled by setting `enabled: false` and the endpoint, accessKey, accessSecret, and bucket values are set.
* If object storage values are NOT present, then the MinIO cluster is enabled and the default values declared in the package are used.
```yaml
{{- with .Values.addons.<package>.objectStorage }}
# ...
fileStore:
  # ...
```
Example: [MatterMost](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/chart/templates/mattermost/values.yaml#L101) passes the endpoint and bucket via chart values.
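A rough, illustrative sketch of that conditional (the `minio` and `fileStore` keys here are hypothetical; MatterMost's real template linked above shows the actual structure):

```yaml
{{- with .Values.addons.<package>.objectStorage }}
# Disable the package's bundled object store and point it at the external one
minio:
  enabled: false
fileStore:
  url: {{ .endpoint }}
  bucket: {{ .bucket }}
  secretName: <package>-objectstorage
{{- end }}
```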
1. Package chart accepts a secret name where all the object storage connection info is defined. In these cases, we make the secret in the Big Bang chart.
* Add a conditional statement in `chart/templates/<package>/values.yaml` to add values for the object storage secret if object storage values exist. Otherwise, the MinIO cluster is used.
```yaml
objectStorage:
  # ...
```
Example: [GitLab](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/chart/templates/gitlab/values.yaml#L76)
* Create the secret in the Big Bang chart. (**NOTE:** Replace `<package>` with your package name in the example below.)
```yaml
{{- if .Values.addons.<package>.enabled }}
  # ...
```

Example: [GitLab secret-objectstore.yaml](https://repo1.dso.mil/big-bang/bigbang/...)
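A simplified, hypothetical sketch of such a secret template (the real GitLab template linked above writes the connection details in the provider-specific format the package expects):

```yaml
{{- if .Values.addons.<package>.enabled }}
{{- with .Values.addons.<package>.objectStorage }}
apiVersion: v1
kind: Secret
metadata:
  name: <package>-objectstorage
  namespace: <package>
type: Opaque
stringData:
  # Connection details rendered from the Big Bang objectStorage values
  connection: |
    host: {{ .endpoint }}
    access_key_id: {{ .accessKey }}
    secret_access_key: {{ .accessSecret }}
{{- end }}
{{- end }}
```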
## Validation
For validating connection to the object storage in your environment, or for testing in the CI pipeline, you will need to add the object-storage-specific values to your overrides file or `./tests/test-values.yaml`, respectively. If you are using MinIO, ensure `addons.minio.enabled: true`.
Mattermost Example:
...
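For instance, a hedged sketch of MinIO-backed test values (illustrative only; the exact keys the package needs depend on its chart):

```yaml
addons:
  minio:
    enabled: true
  mattermost:
    enabled: true
    # objectStorage values would go here when pointing at pre-existing storage
```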
# Supported Package Integration
After [graduating your package](https://repo1.dso.mil/platform-one/bbtoc/-/tree/master/process) and getting approval to add it to Big Bang, the instructions provided here must be completed.
[[_TOC_]]
...
1. Clone the [Big Bang Git repository](https://repo1.dso.mil/big-bang/bigbang) to your machine using `git clone https://repo1.dso.mil/big-bang/bigbang`.
1. Make a branch from the Big Bang chart repository `master` branch. You can automatically create a branch from the Repo1 Gitlab issue or, in some cases, you might manually create the branch. Name the branch with your issue number. For example, if your issue number is `9999` then your branch name can be `9999-my-description`. It is best practice to make branch names short and simple.
1. Make sure the files described in this [document](./flux.md) have been generated in the `chart/templates/<your-package-name>` directory.
1. More details about secret-*.yaml: The secret template is where the code for secrets goes. Typically, you will see secrets for imagePullSecret, SSO, database, and possibly object storage. These secrets are a Big Bang chart enhancement; they are created conditionally, based on what the user enables in the config. For example, if the app supports SSO and needs a Certificate Authority supplied to trust the connection to the IdP, there should be a `secret-ca.yaml` template that populates a secret with the `sso.certificateAuthority.cert` value in the application namespace.
1. Merge your default package values from `<your-package-git-folder>/bigbang/values.yaml` into `chart/values.yaml`. Only the "standard" keys used across packages should be used. Keep in mind that values can be passed directly to the package using `.Values.<package>.values`.
> If your package is an `addon`, it falls into a different location than core packages. In this case, you will need to update all your references from `.Values.<package>` to `.Values.addons.<package>`.
...

```yaml
    # ...
    values: {}
```
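A hedged sketch of the kind of addon entry that typically ends up in `chart/values.yaml` (the git repo URL and tag below are placeholders; include only the standard keys your package actually supports):

```yaml
addons:
  <package>:
    enabled: false
    git:
      repo: https://repo1.dso.mil/big-bang/product/packages/<package>.git
      path: "./chart"
      tag: "1.2.3-bb.0"
    values: {}
```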
1. Edit `tests/test-values.yaml`. These are the settings that the CI pipeline uses to run a deployment test. Set your package to be enabled and add any other necessary values; a minimal example is sketched below. Where possible, reduce the number of replicas to a minimum to reduce strain on the CI infrastructure. When you commit your code, the pipeline will run. You can view the pipeline in the Repo1 Gitlab console. Fix any errors in the pipeline output. The pipeline automatically runs a "smoke" test: it deploys Big Bang on a k3d cluster using the test values file.
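An illustrative minimal entry in `tests/test-values.yaml` (the `replicaCount` key is hypothetical; use whatever your upstream chart actually exposes):

```yaml
addons:
  <package>:
    enabled: true
    values:
      # Keep CI load small
      replicaCount: 1
```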
1. You will also need to create an MR into the pipeline templates to update [03_wait_for_helmreleases.sh](https://repo1.dso.mil/big-bang/pipeline-templates/pipeline-templates/-/blob/master/scripts/deploy/03_wait_for_helmreleases.sh) and add your package's HR name to the core or addon lists.
...

```yaml
  # ...
  PIPELINE_REPO_BRANCH: 'your-branch'
```
1. Create an overrides directory as a sibling directory next to the Big Bang code directory and put your override yaml files in this directory. The reason we do this is to avoid modifying the bigbang `values.yaml` that is under source control; you could accidentally commit it with your secrets. Avoid that mistake by creating a local overrides directory. One option is to copy `tests/ci/k3d/values.yaml` to create your `override-values.yaml` and make modifications. The file structure looks like this:
```plaintext
├── bigbang/
...
```
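A hedged sketch of the intended layout (the override file names are illustrative):

```plaintext
├── bigbang/        # clone of the Big Bang repo; keep it free of local secrets
└── overrides/
    ├── override-values.yaml
    └── registry-values.yaml
```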
You will use these files as arguments in your helm commands.
1. Verify your package works when deployed through Big Bang. See the instructions below in the `BigBang Development and Testing Cycle` for the manual, `imperative` way to deploy with helm upgrade commands. While testing, you should use your package git branch instead of a tag. If you don't null the tag, your branch will not get deployed. Here is an example:
```yaml
addons:
  # ...
branch: "999-your-dev-branch-name"
```
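For instance, a hedged version of the full override, showing the tag being nulled out in favor of a branch:

```yaml
addons:
  <package>:
    git:
      # Null the release tag so the branch is actually used
      tag: null
      branch: "999-your-dev-branch-name"
```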
1. After you have tested Big Bang integration, complete a Package MR and contact the code owners to create a release tag. Package release tags follow the naming convention `{UpstreamChartVersion}-bb.{BigBangVersion}`, e.g., `1.2.3-bb.0`.
1. Make sure to change the `chart/values.yaml` file to point to the new release tag rather than your dev branch (i.e., `tag: "1.2.3-bb.0"` in place of `branch: "999-your-dev-branch-name"`). Example:
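A hedged illustration (repo URL and version are placeholders):

```yaml
addons:
  <package>:
    git:
      repo: https://repo1.dso.mil/big-bang/product/packages/<package>.git
      path: "./chart"
      tag: "1.2.3-bb.0"
```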
...

## BigBang Development and Testing Cycle

There are two ways to test BigBang: imperative, or GitOps with Flux. ...
### Imperative
You can manually deploy Big Bang with the Helm command line. With this method, you can test local code changes without committing to a repository. Here are the steps, which you can iterate with "code a little, test a little." You should have previously created the `../overrides` directory as described in step #10 above. From the root of your local bigbang repo:
```shell
# Deploy with helm while pointing to your override values files.
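# Illustrative sketch only: the release name, namespace, chart path, and override
# file names below are assumptions based on the layout described above.
helm upgrade -i bigbang ./chart -n bigbang --create-namespace \
  -f ../overrides/override-values.yaml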
# ...
```

...

`kubectl delete -f dev/bigbang.yaml`
### Validation
In order to validate that the new package is running as expected, we recommend checking the following things:
1. Make sure that the steps from the other documentation in the `package-integration` directory have been completed.
1. Deploy the package following the imperative step described [above](#imperative).
1. Make sure that a namespace has been created for the package deployed (`kubectl get ns`).
1. The Helm Release (HR) reconciled successfully for the package (`kubectl get hr -A`).
1. All the pods and services we expected are up and running (`kubectl get po -n <Package Namespace>`).
1. Make sure all the pods are in a healthy state and have the right specs.
1. Utilize Grafana to make sure the pods have the right resources, if needed.
1. Create a merge request and validate passing CI tests.