# Package Development
Package is the term we use for an application that has been prepared to be deployed with the Big Bang helm chart. Big Bang Packages are wrappers around Helm charts. All of the pertinent information should be included in the chart/values.yaml for configuration of the Package. These values are then available to be overridden in the Big Bang chart/values.yaml file. The goal of these Packages is to take something that might be very complex and simplify it for consumers of Big Bang. Rational and safe defaults should be used where possible while also allowing for overriding values when fine-grained control is needed. Test after each step so that errors don't pile up; "code a little, test a little." The steps are as follows:
1. Create a repository under the appropriate group (e.g., Security Tools, Developer Tools, Collaboration Tools) in [Repo1](https://repo1.dso.mil/big-bang/apps).
1. Create a "main" branch that will serve as the master branch.
2. Create a "main" branch that will serve as the master branch.
3. There are two ways to start a new Package.
a. If there is no upstream helm chart, create a helm chart from scratch. There is a T3 video that demonstrates creating a new helm chart. Create a directory called "chart" in your repo, change to the chart directory, and scaffold a new chart in the chart directory.
```shell
# Scaffold new helm chart
helm create name-of-your-application
```
b. If there is an existing upstream chart, we will use it and modify it. Essentially, we create a "fork" of the upstream code. Use kpt to import the helm chart code into your repository. Note that kpt is not used to keep the Package code in sync with the upstream chart; it is a one-time pull just to document where the upstream chart code came from. Kpt will generate a Kptfile that has the details. Do not manually create the "chart" directory; the kpt command will create it. Here is an example from when the Gitlab Package was created. It is a good idea to push a commit "initial upstream chart with no changes" so you can refer back to the original code while you are developing.
```shell
kpt pkg get https://gitlab.com/gitlab-org/charts/gitlab.git@v4.8.0 chart
```
4. Run a helm dependency update that will download any external sub-chart dependencies. Commit any *.tgz files that are downloaded into the "charts" directory. The reason for doing this is that Big Bang Packages must be able to be installed in an air gap without any internet connectivity.
```shell
helm dependency update
```
5. Edit the Chart.yaml and set the chart ```version:``` number to be compliant with the charter versioning, which is {UpstreamChartVersion}-bb.{BigBangVersion}. Note that the chart version is not the same thing as the application version. If this is a patch to an existing Package chart, then increment the {BigBangVersion}. Here is an excerpt from Gitlab Runner.
```yaml
apiVersion: v1
description: GitLab Runner
```
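Since the excerpt only shows a couple of fields, here is a hedged sketch of the fields that matter for this step (the name and version numbers are placeholders, not the actual Gitlab Runner values):

```yaml
apiVersion: v1
name: gitlab-runner
description: GitLab Runner
# Chart version: upstream chart version with the Big Bang revision appended (placeholder values)
version: 0.26.0-bb.0
# appVersion tracks the application itself and is not changed by Big Bang
appVersion: 13.7.0
```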
6. In the values.yaml, replace public upstream images with Iron Bank hardened images. The image version should be compatible with the chart version. Here is a command to identify the images that need to be changed.
```shell
# list images
...
```
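As a hedged sketch, one way to list the images a chart would deploy is to render the templates and grep for image references; the pull-secret reference then points at the `private-registry` secret that Big Bang creates in the Package namespace (the release name, paths, and value key names below are assumptions and vary by chart):

```shell
# Render the chart locally and pull out every image reference
helm dependency update ./chart
helm template ./chart | grep -E 'image:' | sort -u
```

```yaml
# Typical values.yaml override pointing the chart at Iron Bank images and the Big Bang pull secret
image:
  repository: registry1.dso.mil/ironbank/<vendor>/<app>
  tag: <ironbank-tag>
imagePullSecrets:
  - name: private-registry
```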
7. Add a VirtualService if your application has a back-end API or a front-end GUI. Create the VirtualService in the sub-directory "chart/templates/bigbang/VirtualService.yaml". You will need to manually create the "bigbang" directory. It is convenient to copy VirtualService code from one of the other Packages and then modify it; a minimal sketch is also shown below. You should be able to load the application in your browser if all the configuration is correct.
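A minimal sketch of such a VirtualService, assuming a gateway named `istio-system/main` and placeholder host and service names (real Packages template these from values):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
  namespace: my-app
spec:
  gateways:
    - istio-system/main
  hosts:
    - my-app.bigbang.dev
  http:
    - route:
        - destination:
            # Route traffic from the ingress gateway to the application's Service
            host: my-app.my-app.svc.cluster.local
            port:
              number: 8080
```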
8. Add NetworkPolicies templates in the sub-directory "chart/templates/bigbang/networkpolicies/*.yaml". The intent is to lock down all ingress and egress traffic except for what is required for the application to function properly. Start with a deny-all policy and then add additional policies to open traffic as needed; a sketch of the deny-all starting point is shown below. Refer to the other Packages' code for examples. The [Gitlab package](https://repo1.dso.mil/big-bang/product/packages/gitlab/-/tree/main/chart/templates/bigbang/networkpolicies) is a good, complete example.
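A sketch of a default deny-all policy to start from (the namespace is a placeholder; real Packages template it from the release):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app
spec:
  # Selects every pod in the namespace and, with no rules listed, denies all traffic
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```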
9. Add a continuous integration (CI) pipeline to the Package. A Package should be able to be deployed by itself, independently of the Big Bang chart. The Package pipeline takes advantage of this to run a Package pipeline test. The package testing is done with a helm test library. Reference the [pipeline documentation](https://repo1.dso.mil/big-bang/pipeline-templates/pipeline-templates#using-the-infrastructure-in-your-package-ci-gitlab-pipeline) for how to create a pipeline and also the [detailed instructions](https://repo1.dso.mil/big-bang/apps/library-charts/gluon/-/blob/master/docs/bb-tests.md) in the gluon library. Instructions are not repeated here.
10. Documentation for the Package should be included. A "docs" directory should include all detailed documentation. Reference other Packages for examples.
a. You should include a `DEVELOPMENT_MAINTENANCE.md` file in this directory. This file should outline the following:
* How to update the package.
* How to deploy the package in a test environment.
* How to test the package.
* A list of modifications that were made from the upstream chart.
11. Add the following markdown files to complete the Package. Reference other Packages for examples of how to create them.
```shell
CHANGELOG.md < standard history of changes made
...
README.md < introduction and high level information
```
12. Create a top-level tests directory and inside it put a test-values.yaml file that includes any special values overrides that are needed for CI pipeline testing. Refer to other packages for examples, but the contents are specific to what your package needs.
```shell
mkdir tests
touch tests/test-values.yaml
```
13. At a high level, a Package structure should look like the following when you are finished.
```plaintext
├── chart/
├── ...
└── README.md
```
14. Merging code should require approval from a minimum of two codeowners. To set up merge requests to work properly with CODEOWNERS approval, change these settings in your project:
a. Under Settings → General → Merge Request Approvals, set "Approvals required" for "Any eligible user" to 1. Also ensure that "Require new approvals when new commits are added to an MR" is checked.
b. Under Settings → Repository → Protected Branches, add the main branch with "Developers + Maintainers" allowed to merge, "No one" allowed to push, and "Codeowner approval required" turned on.
c. Under Settings → Repository → Default Branch, ensure that main is selected.
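For reference, a minimal CODEOWNERS file that pairs with these settings might look like this (the usernames are placeholders):

```plaintext
# Require review from the listed maintainers on every file in the repository
* @first-codeowner @second-codeowner
```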
15. Development Testing Cycle: Test your Package chart by deploying it with helm. Test frequently so you don't pile up multiple layers of errors. The goal is for Packages to be deployable independently of the Big Bang chart. Most upstream helm charts come with internal services, like a database, that can be toggled on or off. If available, use them for testing and CI pipelines. In some cases this is not an option, and you can manually deploy required in-cluster services in order to complete your development testing. Here is an example of an in-cluster postgres database.
```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
# ...
kubectl delete ns <namespace>
```
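A fuller hedged sketch of that workflow, assuming the Bitnami PostgreSQL chart and placeholder names (chart parameters vary by version):

```shell
# Add the Bitnami repo and install a throwaway Postgres instance for testing
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install test-postgres bitnami/postgresql -n <namespace> --create-namespace

# ...develop and test your Package against it, then tear everything down
helm uninstall test-postgres -n <namespace>
kubectl delete ns <namespace>
```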
16. Wait to create a git tag release until integration testing with the Big Bang chart is completed. You will very likely discover more Package changes that are needed during Big Bang integration. When you are confident that the Package code is complete, squash commits and rebase your development branch with the "main" branch.
```shell
git rebase origin/main
# ...
git push --force
```
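One assumed way to do the squash-and-rebase with plain git (your team's exact workflow may differ):

```shell
# Bring in the latest main, then interactively squash your commits onto it
git fetch origin
git rebase -i origin/main   # mark all but the first commit as "squash" or "fixup"
git push --force            # update the remote development branch with the rewritten history
```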
17. Then, create a merge request to branch "main".
18. After the merge, create a git tag following the charter convention of {UpstreamChartVersion}-bb.{BigBangVersion}. The tag should exactly match the chart version in the Chart.yaml.
example: 1.2.3-bb.0
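As a sketch, tagging and pushing that release with plain git (the tag value mirrors the example above; tags may also be created through the GitLab UI):

```shell
git tag -a 1.2.3-bb.0 -m "Release 1.2.3-bb.0"
git push origin 1.2.3-bb.0
```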
## Private registry secret creation
In some instances you may wish to manually create a private-registry secret in the namespace or during a helm deployment. There are a couple of ways to do this:
1. The first way is to add the secret manually using kubectl. This method is useful for standalone package testing/development.
```shell
kubectl create secret docker-registry private-registry --docker-server="https://registry1.dso.mil" --docker-username='Username' --docker-password="CLI secret" --docker-email=<your-email> --namespace=<package-namespace>
```
2. The second way is to create a yaml file containing the secret and apply it during a helm install. This method is applicable when installing your new package as part of the Big Bang chart. In this example, the file name is "reg-creds.yaml".
Create the file with the secret contents:
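As a hedged sketch, assuming the Big Bang chart's `registryCredentials` values schema, reg-creds.yaml might look like this (the credential values are placeholders):

```yaml
registryCredentials:
  registry: registry1.dso.mil
  username: "<registry1-username>"
  password: "<registry1-cli-secret>"
  email: "<your-email>"
```

It can then be passed to helm at install time, for example with `-f reg-creds.yaml`.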
# Development Environment
[[_TOC_]]
Big Bang developers use [k3d](https://k3d.io/), a lightweight wrapper to run [k3s](https://github.com/rancher/k3s) (Rancher Lab’s minimal Kubernetes distribution) in Docker. K3d is a virtualized Kubernetes cluster that is quick to start and tear down for fast development iteration. K3d is sufficient for 95% of Big Bang development work. In limited cases, developers will use real infrastructure k8s deployments with Rancher, Konvoy, EKS, and more. Only k3d is covered in this document.
It is not recommended to run k3d with Big Bang on your local workstation. Instead, use a remote k3d cluster running on an EC2 instance to shift the compute and network bandwidth to the cloud. Big Bang can be quite resource intensive, and it requires a huge download bandwidth for the images. If you do insist on running k3d locally, you should disable certain packages before deploying; you can do this in the values.yaml file by setting the package's deploy flag to false, as sketched below. One of the most resource-intensive packages is the logging package. You should also create a local image registry cache to minimize the amount of image downloading.
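A minimal sketch of such an override, assuming the Big Bang chart exposes an `enabled` toggle per package (the exact key names depend on the Big Bang chart version):

```yaml
# values override that skips deploying the resource-heavy logging stack
logging:
  enabled: false
```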
There is a script that automates the creation and teardown of a remote k3d development environment. First, read the [script instructions](aws-k3d-script.md), understand what it does, and install required dependencies. Then, run the script [docs/assets/scripts/developer/k3d-dev.sh](../assets/scripts/developer/k3d-dev.sh) from your workstation. The console output at the end of the script will give you the information necessary to access and use the dev environment. Also, there is a video tutorial in Platform One IL2 Confluence. Search for "T3" and click the link to the page. Scroll down the page to the 57th video on 22-February-2022.
### Required Access
* AWS GovCloud "Big Bang dev" account: talk to your team government lead for access
* [BigBang repository](https://repo1.dso.mil/big-bang/bigbang)
* [Iron Bank registry](https://registry1.dso.mil/)
### Local Utilities
* [Helm](https://helm.sh/docs/intro/install/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [AWS cli](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
* [jq](https://stedolan.github.io/jq/download/)
* [KPT pre v1](https://github.com/kptdev/kpt/releases/tag/v0.39.2)
* optional: [kustomize](https://kubectl.docs.kubernetes.io/installation/kustomize/)
# MDO Pipelines Overview
At times, Big Bang will have code for a plugin, binary, or extension that we'll need to fork, create, or re-host. When we do so, we should have the code run through a Party Bus MDO pipeline and the resulting artifact used within the Platform.
1. Create a repo for the code within repo1 under https://repo1.dso.mil/big-bang/apps/product-tools/
2. This repo will need to be mirrored to code.il2.dso.mil. Create an issue for the MDO team within [Jira IL2](https://jira.il2.dso.mil/servicedesk/customer/portal/73) as a "New Pipeline Request" and state that you would like a pipeline and repo created from this repo1 link.
3. Create an access token within the repo1 project for the IL2 cloning: browse to Settings for the project > Access Tokens > check `read_repository` with a role of `Reporter`, enter a name mentioning `partybus-il2`, and ensure an expiration date is set for one year from the creation time > click `Create project access token` and save the output at the top of the page to send to the MDO team over chat.il4 when prompted.
4. Once mirroring to code.il2 is successful, the pipeline will start running and, depending on the language, will run its specific lint and unit testing stages and eventually get to the trufflehog, fortify, dependencyCheck, and sonarqube stages at the end. If any of these are throwing errors, you will have to investigate why and can open issues to gain exceptions for any false positives or other issues within [Jira IL2](https://jira.il2.dso.mil/servicedesk/customer/portal/73) with a "Pipeline Exception Request".
# Release Process
Big Bang Applications shall implement a release process adhering to the following requirements:
* Each Application shall maintain a long-running release branch for all application versions "N-2," meaning the current upstream release and the previous two releases.
* The release process shall be automated by merging into this release branch.
* The release process shall validate the application against all **supported** dependency releases using the automated [Testing Framework](testing.md).