From d94f0ffed343178f72bdd86a2f9136b1d7282d17 Mon Sep 17 00:00:00 2001 From: Caitlin Bowman-Clare <caitlin.bowman-clare@intellibridge.us> Date: Thu, 11 Jul 2024 18:25:23 +0000 Subject: [PATCH] Update docs/developer/testing.md, docs/developer/vendor-distro-integration.md,... --- docs/developer/aws-k3d-script.md | 20 +++---- docs/developer/dev-oci-workflow.md | 43 ++++++++------- docs/developer/mdo-partybus-pipelines.md | 8 +-- docs/developer/oscal-contributing.md | 54 ++++++++++++------- .../bigbang-merge-request.md | 5 +- docs/developer/renovate-maintenance.md | 17 +++--- docs/developer/test-package-against-bb.md | 28 +++++----- docs/developer/testing.md | 16 +++--- docs/developer/vendor-distro-integration.md | 2 +- 9 files changed, 109 insertions(+), 84 deletions(-) diff --git a/docs/developer/aws-k3d-script.md b/docs/developer/aws-k3d-script.md index 5a5403234e..57965a67a8 100644 --- a/docs/developer/aws-k3d-script.md +++ b/docs/developer/aws-k3d-script.md @@ -1,12 +1,12 @@ -# Development k3d cluster automation +# Development k3d Cluster Automation -> NOTE: This script does not does not install Flux or deploy Big Bang. You must handle those deployments after your k3d dev cluster is ready. +> **NOTE:** This script does not install Flux or deploy Big Bang. You must handle those deployments after your k3d dev cluster is ready. -The instance will automatically terminate 8 hours after creation. +The instance will automatically terminate eight hours after creation. ## Install and Configure Dependencies -1. Install aws cli +1. Install aws cli. ```shell curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" @@ -18,7 +18,7 @@ The instance will automatically terminate 8 hours after creation. aws --version ``` -1. Configure aws cli +1. Configure aws cli. ```shell aws configure @@ -31,7 +31,7 @@ The instance will automatically terminate 8 hours after creation. aws configure list ``` -1. Install jq +1. Install jq. 
Follow jq installation instructions for your workstation operating system. <https://stedolan.github.io/jq/download/> @@ -98,7 +98,7 @@ The Big Bang product is tightly coupled with the GitOps tool FluxCD. Before you ## Deploy Bigbang -From the bigbang directory deploy BigBang via helm +From the bigbang directory, deploy Big Bang via helm. ```shell helm upgrade -i bigbang chart/ -n bigbang --create-namespace --set registryCredentials.username=XXXXX --set registryCredentials.password='XXXXX' -f chart/ingress-certs.yaml -f chart/values.yaml ``` @@ -114,7 +114,7 @@ Refer to this [documentation](package-integration/sso.md#Prerequisites) for vari ## Troubleshooting -1. If you are on a Mac insure that you have GNU sed command installed. Otherwise you will see this error and the kubeconfig will not be updated with the IP from the instance. +1. If you are on a Mac, ensure that you have the GNU sed command installed. Otherwise, you will see this error and the kubeconfig will not be updated with the IP from the instance. ```console copy kubeconfig @@ -123,6 +123,6 @@ Refer to this [documentation](package-integration/sso.md#Prerequisites) for vari ``` -2. If you get a failure from the script study and correct the error. Then run script with "-d" option to clean up resources. Then re-run your original command. +2. If you get a failure from the script, study and correct the error. Then run the script with the "-d" option to clean up resources. Then re-run your original command. -3. Occasionally a ssh command will fail because of connection problems. If this happens the script will fail with "unexpected EOF". Simply try again. Run the script with `-d` to clean up resources. Then re-run your original command. +3. Occasionally, an ssh command will fail because of connection problems. If this happens, the script will fail with "unexpected EOF". Simply try again: run the script with `-d` to clean up resources, then re-run your original command. 
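The GNU sed troubleshooting note above can be sketched as a quick pre-flight check before running the k3d dev script (a minimal sketch, assuming a macOS host with Homebrew available; the `gnu-sed` formula name is an assumption, not from the patch):

```shell
# Detect whether the sed on PATH is GNU sed. The BSD sed that ships with
# macOS exits non-zero on --version, while GNU sed prints "sed (GNU sed) x.y".
if sed --version 2>/dev/null | grep -q 'GNU sed'; then
  echo "GNU sed detected"
else
  # Assumed remediation on macOS via Homebrew (then put gsed first on PATH)
  echo "BSD or unknown sed detected; try: brew install gnu-sed"
fi
```

If the check fails, install GNU sed before re-running the script so the kubeconfig IP substitution described in the troubleshooting section works.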
diff --git a/docs/developer/dev-oci-workflow.md b/docs/developer/dev-oci-workflow.md index 5ee35124a4..1f48dea92d 100644 --- a/docs/developer/dev-oci-workflow.md +++ b/docs/developer/dev-oci-workflow.md @@ -2,11 +2,11 @@ ⚠️ **NOTE: This doc is a work in progress as OCI is not the expected or default workflow in Big Bang yet. Changes might be made to the structure or process at any time.** ⚠️ -If you want to test deployment of a package off of your dev branch you have two options. This doc covers the OCI workflow, the Git workflow requires nothing more than the values specified in [example git values](../assets/configs/example/git-repo-values.yaml) pointed to your development branch. +If you want to test deployment of a package off of your dev branch, you have two options. This doc covers the OCI workflow. The Git workflow requires nothing more than the values specified in [example git values](../assets/configs/example/git-repo-values.yaml) pointed to your development branch. ## Package Chart for OCI -After making your changes to a chart you will need to package it with `helm package chart`. You should see output similar to the below: +After making your changes to a chart, you will need to package it with `helm package chart`. You should see output similar to the following: ```console Successfully packaged chart and saved it to: /Users/me/bigbang/anchore/anchore-1.19.7-bb.4.tgz @@ -14,19 +14,21 @@ Successfully packaged chart and saved it to: /Users/me/bigbang/anchore/anchore-1 Note that Helm strictly enforces the OCI name and tag to match the chart name and version (see [HIP 0006](https://github.com/helm/community/blob/main/hips/hip-0006.md#3-chart-versions--oci-reference-tags)), and artifacts will always match the above syntax. -## Pushing OCI "somewhere" +## Pushing OCI "Somewhere" -In order to use this OCI artifact you will need to push it to an OCI compatible registry. You have a couple options here. 
+In order to use this OCI artifact, you will need to push it to an OCI-compatible registry. There are multiple options here: you can push to a self-hosted docker registry, push to Registry1 staging, or push to a Big Bang registry. -### Push to self-hosted docker registry +### Push to Self-Hosted Docker Registry -The preferred option for OCI storage is in your own personal registry. We can do this by running a registry with the standard docker `registry:2` image. Note that we have to host this as a TLS registry due to limitations with Helm. +The preferred option for OCI storage is in your own personal registry. We can do this by running a registry with the standard docker `registry:2` image. -You will want to spin up the registry on the same host as your cluster, i.e. your ec2 instance if following the normal developer workflow. +**NOTE:** We have to host this as a TLS registry due to limitations with Helm. + +You will want to spin up the registry on the same host as your cluster (i.e., your ec2 instance, if following the normal developer workflow). TODO: Make this all happen with a flag in the dev script, this should not be too challenging to automate. -1. Grab the `*.bigbang.dev` cert to use for the registry. If you follow the commands below, using `curl` and `yq`, this is pretty easy. +1. Grab the `*.bigbang.dev` cert to use for the registry. If you follow the commands below, using `curl` and `yq`, this is a simple process. ```console mkdir certs curl -sS https://repo1.dso.mil/big-bang/bigbang/-/raw/master/chart/ingress-certs.yaml | yq '.istio.gateways.public.tls.cert' > certs/tls.crt ``` -1. Setup a docker registry, mounting the certs to expose this as a TLS (HTTPS) registry. +1. Set up a docker registry, mounting the certs to expose this as a TLS (HTTPS) registry. 
```console docker volume create registry @@ -48,7 +50,7 @@ TODO: Make this all happen with a flag in the dev script, this should not be too 1. Spin up your development cluster as you normally would. Do not install Flux or Big Bang on top of the cluster yet. -1. Modify CoreDNS for your cluster to resolve your registry address to the private IP of your cluster host. In the example below we are using `oci.bigbang.dev`. Run the commands below from your cluster host (i.e. ec2 instance if using it): +1. Modify CoreDNS for your cluster to resolve your registry address to the private IP of your cluster host. In the example below we are using `oci.bigbang.dev`. Run the following commands from your cluster host (i.e., your ec2 instance, if using one): ```console # Note that these commands assume a Linux host and k3d cluster @@ -98,7 +100,9 @@ TODO: Make this all happen with a flag in the dev script, this should not be too ### Push to Registry1 Staging -One option is to push your OCI artifacts to the Big Bang Staging area of Registry1. This is a SHARED area that internal Big Bang team members have access to - note that you may overwrite other developer's artifacts if you take this approach. +Another option is to push your OCI artifacts to the Big Bang Staging area of Registry1. This is a SHARED area that internal Big Bang team members have access to. + +**NOTE:** You may overwrite other developers' artifacts if you take this approach. 1. Login to registry1 with helm: `helm registry login registry1.dso.mil`. Follow the prompts to add your normal username and CLI token for registry1 auth. @@ -117,7 +121,7 @@ One option is to push your OCI artifacts to the Big Bang Staging area of Registr Digest: sha256:3cb826ee59fab459aa3cd723ded448fc6d7ef2d025b55142b826b33c480f0a4c ``` -1. Configure your Big Bang values to setup an additional `HelmRepository` and point the package to that repository. See example below: +1. 
Configure your Big Bang values to set up an additional `HelmRepository` and point the package to that repository. An example is provided in the following: ```yaml helmRepositories: @@ -138,21 +142,22 @@ One option is to push your OCI artifacts to the Big Bang Staging area of Registr tag: "1.19.7-bb.4" ``` -### Push to a Big Bang registry +### Push to a Big Bang Registry Note that this has a limited use case, since this requires at minimum Istio + Registry to be installed in advance. This may not work well if you are testing Istio or the registry package itself. Currently you could leverage any of the following as your OCI registry: -- Gitlab Project Registries (in a Big Bang installed Gitlab, not Repo1) -- Nexus Registry (see CI test values for auto-creation of OCI registry) -- Harbor (currently in sandbox, but functioning well with the test values) -1. Install a minimal Big Bang on your cluster, not including the package you want to test. You should at least install Istio and the registry (Gitlab, Nexus, Harbor). +* Gitlab Project Registries (in a Big Bang installed Gitlab, not Repo1) +* Nexus Registry (refer to CI test values for auto-creation of OCI registry) +* Harbor (currently in sandbox, but functioning well with the test values) + +1. Install a minimal Big Bang on your cluster, not including the package you want to test. You should at least install Istio and the registry (i.e., Gitlab, Nexus, and/or Harbor). -1. Modify CoreDNS for your cluster to route traffic to `x.bigbang.dev` (ex: `harbor.bigbang.dev`) to the IP of the public ingress gateway. +1. Modify CoreDNS for your cluster to route traffic to `x.bigbang.dev` (e.g., `harbor.bigbang.dev`) to the IP of the public ingress gateway. 1. Modify `/etc/hosts` to route `x.bigbang.dev` to the Public IP of your instance (if using a remote/ec2 based cluster). 1. Push Helm tgz to your chosen registry. -1. Configure your Big Bang values to setup an additional `HelmRepository` and point the package to that repository. 
+1. Configure your Big Bang values to set up an additional `HelmRepository` and point the package to that repository. diff --git a/docs/developer/mdo-partybus-pipelines.md b/docs/developer/mdo-partybus-pipelines.md index 0c5574c8c2..e89aa1977b 100644 --- a/docs/developer/mdo-partybus-pipelines.md +++ b/docs/developer/mdo-partybus-pipelines.md @@ -2,7 +2,7 @@ At times, Big Bang will have code for a plugin, binary, and/or extension that we'll need to fork/create/re-host. When we do so, we should have the code ran through a Party Bus MDO pipeline and the resulting artifact used within the Platform. -1. Create a repo for the code within repo1 under https://repo1.dso.mil/big-bang/apps/product-tools/ -2. This repo will need to be mirrored to code.il2.dso.mil. Create issue for the MDO team within [Jira IL2](https://jira.il2.dso.mil/servicedesk/customer/portal/73) as a "New Pipeline Request" and state that you would like a pipeline and repo created from this repo1 link. -3. Create access token within repo1 project for the IL2 cloning, browse to Settings for the project > Access Tokens > check `read_repository` with a role of `Reporter` enter a name mentioning `partybus-il2` and ensure there is a date of expiration set for 1 year from this creation time > Click `Create project access token` and save the output at the top of the page to send to the MDO team over chat.il4 when prompted. -4. Once mirroring to code.il2 is successful, the pipeline will start running and depending on the language, will run it's specific lint and unit testing stages and eventually get to trufflehog, fortify, dependencyCheck & sonarqube stages at the end. If any of these are throwing errors, you will have to investigate why and can open issues to gain exceptions for any false-positives or other issues within [JIRA IL2](https://jira.il2.dso.mil/servicedesk/customer/portal/73) with a "Pipeline Exception Request". +1. 
Create a repo for the code within repo1 under https://repo1.dso.mil/big-bang/apps/product-tools/. +1. This repo will need to be mirrored to code.il2.dso.mil. Create an issue for the MDO team within [Jira IL2](https://jira.il2.dso.mil/servicedesk/customer/portal/73) as a "New Pipeline Request" and state that you would like a pipeline and repo created from this repo1 link. +1. Create an access token within the repo1 project for the IL2 cloning: browse to Settings for the project > Access Tokens > check `read_repository` with a role of `Reporter`, enter a name mentioning `partybus-il2`, and ensure there is an expiration date set for one year from this creation time > Click `Create project access token` and save the output at the top of the page to send to the MDO team over chat.il4 when prompted. +1. Once mirroring to code.il2 is successful, the pipeline will start running and, depending on the language, will run its specific lint and unit testing stages and eventually get to the trufflehog, fortify, dependencyCheck, and sonarqube stages at the end. If any of these are throwing errors, you will have to investigate why and can open issues to gain exceptions for any false-positives or other issues within [JIRA IL2](https://jira.il2.dso.mil/servicedesk/customer/portal/73) with a "Pipeline Exception Request". diff --git a/docs/developer/oscal-contributing.md b/docs/developer/oscal-contributing.md index c90eeca79f..c795a3368b 100644 --- a/docs/developer/oscal-contributing.md +++ b/docs/developer/oscal-contributing.md @@ -1,13 +1,17 @@ -# Contributing to Package OSCAL Documents within BigBang +# Contributing to Package OSCAL Documents within Big Bang -## Why we have OSCAL documents in BigBang packages -OSCAL (Open Security Controls Assessment Language) documents are used in BigBang packages to provide a standardized format for representing security controls and their implementation details. 
By using OSCAL documents, we ensure consistency, interoperability, and ease of understanding when working with security controls across different systems and tools. +## Why We Have OSCAL Documents in Big Bang Packages + +Open Security Controls Assessment Language (OSCAL) documents are used in Big Bang packages to provide a standardized format for representing security controls and their implementation details. By using OSCAL documents, we ensure consistency, interoperability, and ease of understanding when working with security controls across different systems and tools. + +## The Basics of OSCAL Component Schema -## The basics of OSCAL Component schema The OSCAL Component schema defines the structure and properties of a component in an OSCAL document. You can find detailed information about the OSCAL Component schema in the official OSCAL documentation: [OSCAL Component Schema](https://pages.nist.gov/OSCAL/reference/latest/component-definition/json-reference/#/component-definition/import-component-definitions). ### Example -below is an example of our oscal-component.yaml file + +An example of our oscal-component.yaml file is provided in the following: + ```yaml component-definition: uuid: <<unique uuid>> @@ -48,14 +52,19 @@ component-definition: ``` ## How to validate package OSCAL documents against JSON Schema -Validating package OSCAL documents against the JSON Schema ensures that they adhere to the defined structure and properties. In addition to having OSCAL component validation within the BB CI pipelines, it is possible to manually validate an OSCAL document against the JSON Schema, you can use JSON Schema validation tools or libraries available for your programming language of choice. -### Here's a general process for validating package OSCAL documents: +Validating package OSCAL documents against the JSON Schema ensures that they adhere to the defined structure and properties. 
In addition to the OSCAL component validation within the Big Bang CI pipelines, you can manually validate an OSCAL document against the JSON Schema using the JSON Schema validation tools or libraries available for your programming language of choice. + +### Process for Validating Package OSCAL Documents + +A general process for validating package OSCAL documents is provided in the following: + * Obtain the latest JSON Schema for OSCAL documents. -* Use a JSON Schema validation tool/library to validate the OSCAL document against the schema. +* Use a JSON Schema validation tool or library to validate the OSCAL document against the schema. * Verify that the document passes the validation without any errors or warnings. -### Example: +### Example + from the directory containing your oscal-component.yaml file ```shell @@ -63,27 +72,32 @@ yq eval oscal-component.yaml -o=json > tmp-oscal-component.json jsonschema -i tmp-oscal-component.json ${PATH_TO_OSCAL_SCHEMA}/oscal_component_schema.json -o pretty ``` -By validating package OSCAL documents, we maintain the integrity and quality of the documentation within BigBang. +By validating package OSCAL documents, we maintain the integrity and quality of the documentation within Big Bang. + +## Considerations when Updating Package OSCAL Documents + +When updating package OSCAL documents, it's essential to consider the following: -## Considerations when updating package OSCAL Documents -### When updating package OSCAL documents, it's essential to consider the following: -* Ensure partyID consistency: The partyID should remain consistent throughout all packages. Changing the partyID can cause confusion and potential errors. Always verify and ensure that the partyID remains unchanged during updates. -* Generate new UUID: Whenever a package OSCAL document is modified, a new UUID (Universally Unique Identifier) should be generated for the updated document. 
This ensures that the document retains its uniqueness and avoids potential conflicts. +* **Ensure PartyID Consistency:** The partyID should remain consistent throughout all packages. Changing the partyID can cause confusion and potential errors. Always verify and ensure that the partyID remains unchanged during updates. +* **Generate New UUID:** Whenever a package OSCAL document is modified, a new Universally Unique Identifier (UUID) should be generated for the updated document. This ensures that the document retains its uniqueness and avoids potential conflicts. ## How to add a control-implementation -### To add a control-implementation to a package OSCAL document within BigBang, follow these steps: + +To add a control-implementation to a package OSCAL document within Big Bang, follow these steps: + * Identify the appropriate component or control section in the OSCAL document where the new control-implementation should be added. * Create a new control-implementation element within the component or control section. * Populate the necessary properties and values for the control-implementation, such as control ID, implementation status, responsible roles, and associated resources. * Validate the updated OSCAL document against the JSON Schema to ensure its correctness. -Adding control-implementations allows for the documentation of specific control implementation details within the BigBang package. +Adding control-implementations allows for the documentation of specific control implementation details within the Big Bang package. + +## Unifying a Big Bang OSCAL Document -## A brief explanation of our intentions to aggregate package OSCAL documents into a unified Big Bang OSCAL document -One of our goals is to aggregate package OSCAL documents into a unified Big Bang OSCAL document. This unified document will serve as a comprehensive representation of security controls and their implementations across various packages within the BigBang ecosystem. 
This section provides a brief explanation of our intentions to aggregate package OSCAL documents into a unified Big Bang OSCAL document. This unified document will serve as a comprehensive representation of security controls and their implementations across various packages within the Big Bang ecosystem. By aggregating package OSCAL documents, we aim to provide a centralized reference point for understanding and managing security controls. It allows for easier comparison, analysis, and reporting of security control implementations across different systems, applications, and environments. -The unified Big Bang OSCAL document simplifies the process of ensuring consistency and standardization in security control implementations, ultimately enhancing the overall security posture and efficiency of the BigBang ecosystem. +The unified Big Bang OSCAL document simplifies the process of ensuring consistency and standardization in security control implementations, ultimately enhancing the overall security posture and efficiency of the Big Bang ecosystem. -* Remember to refer to the OSCAL documentation and guidelines provided by BigBang for specific implementation details and any updates to the contributing process. \ No newline at end of file +**NOTE:** Remember to refer to the OSCAL documentation and guidelines provided by Big Bang for specific implementation details and any updates to the contributing process. diff --git a/docs/developer/package-integration/bigbang-merge-request.md b/docs/developer/package-integration/bigbang-merge-request.md index 54c2936857..6c39ee9cd0 100644 --- a/docs/developer/package-integration/bigbang-merge-request.md +++ b/docs/developer/package-integration/bigbang-merge-request.md @@ -1,5 +1,6 @@ # Create a Big Bang Merge Request -Following the steps in the [flux integration](flux.md), create a merge request into big bang for your package. 
+Following the steps in the [flux integration](flux.md), create a Merge Request (MR) into Big Bang for your package. When ready, add the all-packages label to the MR and run the pipeline. This will trigger a pipeline with all big bang packages installed to a k3d cluster. -A passing all-packages pipeline is required prior to merging the new package. This validates that the additional package works with existing packages. \ No newline at end of file + +A passing all-packages pipeline is required prior to merging the new package. This validates that the additional package works with existing packages. diff --git a/docs/developer/renovate-maintenance.md b/docs/developer/renovate-maintenance.md index a184c3e819..82882f8704 100644 --- a/docs/developer/renovate-maintenance.md +++ b/docs/developer/renovate-maintenance.md @@ -1,19 +1,22 @@ # Renovate Package Maintenance -The Bread and butter of Big Bang is updating and providing timely releases for the Big Bang maintained helm charts. Most of these helm charts are based off of upstream vendor charts and repositories. This can be confirmed via seeing if a `chart/Kptfile` exists in our repository. +The bread and butter of Big Bang is updating and providing timely releases for the Big Bang-maintained helm charts. Most of these helm charts are based off of upstream vendor charts and repositories. This can be confirmed by checking whether a `chart/Kptfile` exists in our repository. 1. All Big Bang packages should contain a `docs/DEVELOPMENT_MAINTENANCE.md` file which should be reviewed and understood by codeowners and those working the package updates. It has all of the necessary changes to begin working a Renovate update for any given package. Repository CODEOWNERS should also be using this document when reviewing the updates for a Merge Request that is in `status::review` to ensure all items are being performed, or updating the documentation if a portion is no longer relevant or needed. 
This document is where all of our local changes and caveats should be documented when creating changes which deviate from upstream templates or values according to the [Develop Package](./develop-package.md) steps when onboarding a new package. -1. Once the package has been updated, tested and verified according to the DEVELOPMENT_MAINTENANCE.md guide it should then have the following steps taken: +1. Once the package has been updated, tested, and verified according to the DEVELOPMENT_MAINTENANCE.md guide, the following steps should then be taken: - 1. Review pipeline and ensure all items are passing or if warnings are found, they are notated in the Merge Request comments or description. If an item mentioned in the [CI Workflow Document](./ci-workflow.md) is added `SKIP UPGRADE/skip-bb-mr` ensure this is notated why so we have justification as to why this was needed. - 1. The `## Upgrade Notices` section is accurately filled out in the MR description. This notice should be treated as customer public facing and should not include any internal team or CI specific notes for the package. Good examples of relevant upgrade notices are when a template or value moves or is renamed like a ENV var or value for an admin password for example, some change that will require downstream consumers of Big Bang to read and perform a change for their environment. - 1. Ensure `SKIP UPDATE CHECK` is removed from the MR title and a pipeline has ran with a `chart update check` stage. - 1. Add `status::review` label to issue and Merge Request. + a. Review the pipeline and ensure all items are passing; if warnings are found, ensure they are notated in the Merge Request comments or description. If `SKIP UPGRADE/skip-bb-mr` (described in the [CI Workflow Document](./ci-workflow.md)) is added, ensure the reason is notated so we have justification for why this was needed. + + b. The `## Upgrade Notices` section is accurately filled out in the MR description. 
This notice should be treated as customer public facing and should not include any internal team or CI-specific notes for the package. Good examples of relevant upgrade notices are when a template or value moves or is renamed (e.g., an ENV var or an admin password value): any change that will require downstream consumers of Big Bang to read and perform a change for their environment. + + c. Ensure `SKIP UPDATE CHECK` is removed from the MR title and a pipeline has run with a `chart update check` stage. + + d. Add the `status::review` label to the issue and Merge Request. 1. Once merged into `main`, ensure post-MR pipelines for `main` and tag creation fully pass. Reach out to CODEOWNERS or anchors if issues arise. -1. After all MR, main, and tag pipelines pass and are successful, bigbang-bot will open a BigBang Merge Request. Link your issue your assigned in the description after the `Closes` eg: `Closes https://repo1.dso.mil/ISSUE`. Ensure pipeline is passing and package/change related issues are not present in pipeline. Once pipeline is passing and is linked take MR out of draft status. Assign anchors and BigBang codeowners as reviewers. +1. After all MR, main, and tag pipelines pass and are successful, bigbang-bot will open a Big Bang Merge Request. Link the issue you are assigned in the description after `Closes` (e.g., `Closes https://repo1.dso.mil/ISSUE`). Ensure the pipeline is passing and package/change-related issues are not present in the pipeline. Once the pipeline is passing and the issue is linked, take the MR out of draft status. Assign anchors and Big Bang codeowners as reviewers. 
## General Renovate Issue Workflow diff --git a/docs/developer/test-package-against-bb.md b/docs/developer/test-package-against-bb.md index c52c8104ca..ba429f6754 100644 --- a/docs/developer/test-package-against-bb.md +++ b/docs/developer/test-package-against-bb.md @@ -1,11 +1,13 @@ -# Testing your package branch against bigbang before package merge +# Testing your Package Branch against Big Bang before Package Merge -These instructions right now are written for istio changes, but the same is probably true for kyverno and maybe for others as well. CODEOWNERS reviewing MRs should enforce this. +These instructions are currently written for istio changes, but the same is probably true for kyverno and possibly for others. CODEOWNERS reviewing Merge Requests (MRs) should enforce this. -## Run bigbang tests against your branch -As part of your MR that modifies istio you will need to run bigbang tests against your branch. To do this, at a minimum, you will need to -1. Create a new branch on bigbang off of master `git checkout master && git pull && git checkout -b my-bigbang-branch-for-testing` -1. Modify the [test values](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/tests/test-values.yaml?ref_type=heads), yours will be different for your package, you may need more than this +## Run Big Bang Tests Against your Branch + +As part of your MR that modifies istio, you will need to run bigbang tests against your branch. To do this, at a minimum, you will need to complete the following: + +1. Create a new branch on bigbang off of master: `git checkout master && git pull && git checkout -b my-bigbang-branch-for-testing`. +1. Modify the [test values](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/tests/test-values.yaml?ref_type=heads). Yours will be different for your package; you may need more than this. ```yaml myAppPackage: git: @@ -16,10 +18,10 @@ As part of your MR that modifies istio you will need to run bigbang tests agains hardened: enabled: true ``` -1. 
Stage your changes `git add -A` -1. Commit your changes `git commit -m "prepping for test"` -1. Push your changes `git push -u origin my-bigbang-branch-for-testing` -1. Create the bigbang MR as a draft with `TEST ONLY DO NOT MERGE` in the title, and add the label of the package to test, e.g. `monitoring` -1. Wait for tests to finish, and do fixes on your package branch as needed until they pass -1. Close the bigbang MR by deleting the bigbang branch `git push -d origin my-bigbang-branch-for-testing` -1. Link the bigbang MR on your package MR as evidence of your package working in bigbang +1. Stage your changes: `git add -A`. +1. Commit your changes: `git commit -m "prepping for test"`. +1. Push your changes: `git push -u origin my-bigbang-branch-for-testing`. +1. Create the bigbang MR as a draft with `TEST ONLY DO NOT MERGE` in the title, and add the label of the package to test (e.g., `monitoring`). +1. Wait for tests to finish, and do fixes on your package branch as needed until they pass. +1. Close the bigbang MR by deleting the bigbang branch: `git push -d origin my-bigbang-branch-for-testing`. +1. Link the bigbang MR on your package MR as evidence of your package working in bigbang. diff --git a/docs/developer/testing.md b/docs/developer/testing.md index 53f81efefa..5e3889ca3f 100644 --- a/docs/developer/testing.md +++ b/docs/developer/testing.md @@ -11,7 +11,7 @@ There are multiple phases of testing for an application to get into a customer e ## Testing Platform -Big Bang Applications will leverage GitLab Runners to execute these common BigBang Pipelines. Each Big Bang application is required to use the Big Bang Pipelines, whose functionality is outlined here. +Big Bang Applications will leverage GitLab Runners to execute these common Big Bang Pipelines. Each Big Bang application is required to use the Big Bang Pipelines, whose functionality is outlined here. 
 A detailed description of the pipelines and how to execute the testing process on a local system is described in the README.md in <https://repo1.dso.mil/big-bang/pipeline-templates/pipeline-templates>.
@@ -23,13 +23,13 @@ A core feature of all testing capabilities is its ability to be run locally by d
 ### Linting

-Initial phases of the applications tests will focus on compliance with approved formatting and rendering policies for BigBang.
+Initial phases of the application's tests will focus on compliance with approved formatting and rendering policies for Big Bang.

 ### Smoke Deployments

-The next phase of testing for each application will be to stand up healthy on a lightweight Kubernetes cluster. The GitLab Runners will standup a ephemeral Kubernetes cluster for use for the deployment, deploy the application and its dependencies and ensure the application comes up "Healthy". The testing configuration will allow for a configuration of the application and the ability to define and test functionality.
+The next phase of testing for each application will be to stand up healthy on a lightweight Kubernetes cluster. The GitLab Runners will stand up an ephemeral Kubernetes cluster for the deployment, deploy the application and its dependencies, and ensure the application comes up "Healthy." The testing configuration will allow for configuration of the application and the ability to define and test functionality.

-Each "Test" scenario will contain the following information:
+Each "test" scenario will contain the following information:

 1. The Kubernetes cluster to stand up. Initial implementations will only allow customization of a k3d cluster.
 2. Application configuration files. Once a repository format/tool is decided, this may look like a Helm values file, or a set of Kustomization overlays on a base deployment.
@@ -51,7 +51,7 @@ The package pipelines have been enhanced to execute the "helm test" command and
 In order to add Helm Chart tests to your application, the following enhancements need to be made to the Helm Chart:

-* A test directory is added to templates/ directory within the helm chart. This directory contains Kubernetes object definitions which are deployed only when a "helm Test" command is executed. As an example, tests can be YAML files that execute pods with containers, deploy config maps, secrets, or other objects.
+* A test directory is added to the templates/ directory within the helm chart. This directory contains Kubernetes object definitions which are deployed only when a "helm test" command is executed. As an example, tests can be YAML files that execute pods with containers, deploy config maps, secrets, or other objects.
 * When a file contains a pod/container definition that executes tests, the container must return success or failure (i.e., the container should exit successfully with an exit 0 for a test to be considered a success).
 * Each test object definition must contain a "helm.sh/hook: test-success" annotation telling Helm that this object is a test and should only be deployed when tests are executed. The following example creates a configmap that is only created during testing.
@@ -83,7 +83,7 @@ The end consumable is the Umbrella Application. As new versions of Big Bang Appl
 The Umbrella application will be tested for functionality with customer-focused Kubernetes environments. As the Integration team works with customers to adopt Big Bang, the team will provide feedback to Umbrella Test Environments to provide representative environments to perform full end-to-end regression tests. A representative environment for the e2e tests is Mock Fences, which attempts to mirror the Fences environment owned by GBSD.

-Each Environment will contain the Infrastructure as Code (IaC) to deploy the base infrastructure that Big Bang will be deployed onto. These tests will not validate that upgrades to IaC are successful.
+Each environment will contain the Infrastructure as Code (IaC) to deploy the base infrastructure that Big Bang will be deployed onto. These tests will not validate that upgrades to IaC are successful.

 ### Upgrade Tests

@@ -101,7 +101,7 @@ Part of testing shall provide tests for Single Sign On verification that applica
 ### Application Testing Infrastructure

-The GitLab runners used for testing BigBang Applications will stand up dynamic [K3d](https://k3d.io/) or [Kind](https://kind.sigs.k8s.io/docs/) clusters. To do this dynamically in Kubernetes, the pods need access to the host. As a result, Big Bang with deploy and managed a separate Kubernetes cluster that GitLab will use to deploy ephemeral Kubernetes clusters for testing.
+The GitLab runners used for testing Big Bang Applications will stand up dynamic [K3d](https://k3d.io/) or [Kind](https://kind.sigs.k8s.io/docs/) clusters. To do this dynamically in Kubernetes, the pods need access to the host. As a result, Big Bang will deploy and manage a separate Kubernetes cluster that GitLab will use to deploy ephemeral Kubernetes clusters for testing.

 This cluster will remain separate from the environment running GitLab since the use of privileged containers could pose a security risk to adjacent pods on the nodes.

@@ -111,7 +111,7 @@ The GitLab Runners used for Umbrella testing will be provided appropriate servic
 #### Umbrella Clusters

-Clusters for testing the Umbrella app will be provisioned from vendors that allow for creation of dev and test clusters without licencing limitations. Vendors will be required to provide the following:
+Clusters for testing the Umbrella app will be provisioned from vendors that allow for the creation of dev and test clusters without licensing limitations. Vendors will be required to provide the following:

 1. A repository inside <https://repo1.dso.mil/platform-one/distros> to maintain code.
 2. A GitLab pipeline task that provisions their distribution: [Vendor Distribution Integration](vendor-distro-integration.md).
diff --git a/docs/developer/vendor-distro-integration.md b/docs/developer/vendor-distro-integration.md
index 4c15f8c7de..6fccb35f2b 100644
--- a/docs/developer/vendor-distro-integration.md
+++ b/docs/developer/vendor-distro-integration.md
@@ -4,7 +4,7 @@ Vendor distributions are tested within the umbrella project's ci [pipelines][0].

 These pipelines include jobs from the [umbrella-templates][1] repository.

-The main thing to take into account is your cluster should have:
+The main thing to take into account is that your cluster should have the following:

 * A single stage for spinning up.
 * A single stage for spinning down.
--
GitLab
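
For reviewers of the testing.md changes above: the "helm test" bullets reference an example configmap that falls outside the patch hunk. A minimal sketch of such a test template, with the "helm.sh/hook: test-success" annotation the doc describes, might look like the following (the filename, chart name, and values are hypothetical, not the repository's actual example):

```yaml
# templates/tests/test-connection.yaml -- hypothetical illustration only.
# Deployed only when "helm test" runs, because of the helm.sh/hook annotation.
apiVersion: v1
kind: Pod
metadata:
  name: "myapp-test-connection"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      # The container must exit 0 for the test to count as a success.
      command: ["wget"]
      args: ["myapp:8080"]
  restartPolicy: Never
```

Under this sketch, `helm test <release>` would deploy the pod and report success or failure based on its exit code.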