# Developer Documentation
[[_TOC_]]
## Charter
The [BigBang Charter](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/tree/master/charter) is required reading for BigBang developers. Study all the documents carefully before you start developing. The Charter lays out the policies, requirements, and responsibilities for the BigBang product and the supported Packages (applications). At a high level, the BigBang product is a helm chart that wraps the deployment of DevSecOps applications (Packages). The goal of BigBang is to hide the complexity of deploying and integrating the supported Packages. Customers should be able to easily deploy and configure a DevSecOps environment. BigBang is intended to deploy on any OCI/CNCF compliant Kubernetes cluster.
## Communications
Join the Mattermost channels to ask questions and communicate with the team. The team also has a daily Scrum at 8:15 am MST. The link for the stand-up is found on the [Big Bang Group Calendar](https://confluence.il2.dso.mil/display/BB1/calendar/dfb757d4-110c-4aac-80d8-516fd2d0cea8?calendarName=Big%20Bang%20Group%20Calendar). Here is the list of relevant Mattermost channels for BigBang development:
* [Value Stream - Big Bang](https://chat.il2.dso.mil/platform-one/channels/team---big-bang)
* [Topic - Big Bang Documentation](https://chat.il2.dso.mil/platform-one/channels/topic-big-bang-documentation)
## Big Bang Framework
Package is the term we use for an application that has been prepared to be deployed with the BigBang chart.
1. Add a continuous integration (CI) pipeline to the Package. A Package should be able to be deployed by itself, independently from the BigBang chart. The Package pipeline takes advantage of this to run a Package pipeline test. The package testing is done with a helm test library. Reference the [pipeline documentation](https://repo1.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates#using-the-infrastructure-in-your-package-ci-gitlab-pipeline) for how to create a pipeline and also [detailed instructions](https://repo1.dso.mil/platform-one/big-bang/apps/library-charts/gluon/-/blob/master/docs/bb-tests.md) in the gluon library. Instructions are not repeated here.
1. Documentation for the Package should be included. A "docs" directory should contain all detailed documentation. Reference other Packages for examples.
1. Add the following markdown files to complete the Package. Reference other Packages for examples of how to create them.
# Development Environment Overview
[[_TOC_]]
BigBang developers use [k3d](https://k3d.io/), a lightweight wrapper to run [k3s](https://github.com/rancher/k3s) (Rancher Lab’s minimal Kubernetes distribution) in docker.
It is not recommended to run k3d with BigBang on your local computer. BigBang is quite resource-intensive and requires a lot of download bandwidth for the images. It is best to use a remote k3d cluster running on an AWS EC2 instance. If you do insist on running k3d locally, you should disable certain packages before deploying; you can do this in the values.yaml file by setting the package deploy toggle to false. The logging package is one of the most resource-intensive. You should also create a local image registry cache to minimize the amount of image downloading. A script that shows how to create a local image cache is in the [BigBang Quick Start](https://repo1.dso.mil/platform-one/quick-start/big-bang/)
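If you do run locally, the snippet below is a minimal sketch of how to create such an override file. The package name and whether the toggle is called `enabled` or `deploy` depend on your chart version, so verify the keys against the chart's values.yaml before using it.
```shell
# Hedged sketch: write a small override file that turns off a resource-heavy
# package for local k3d runs. Verify the key names against chart/values.yaml;
# depending on the chart version the toggle may be named `enabled` or `deploy`.
cat <<'EOF' > ../overrides/local-dev-values.yaml
logging:
  enabled: false
EOF
```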
This page contains the manual steps to create your k3d dev environment. Various people have automated parts of these steps with scripts and Terraform, but we recommend doing it manually so that you understand how it works. Automation is left to each person. It might be helpful to get a live demonstration from someone who already knows how to do it until a good video tutorial is created. We strive to make the documentation as good as possible, but it is hard to keep it up-to-date and there are still pitfalls and gotchas.
## Prerequisites
### Required Access
- [AWS GovCloud "coder" account](https://927962728993.signin.amazonaws-us-gov.com/console)
- [BigBang repository](https://repo1.dso.mil/platform-one/big-bang/bigbang)
- [Iron Bank registry](https://registry1.dso.mil/)
### Local Utilities
- [Helm](https://helm.sh/docs/intro/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [flux](https://toolkit.fluxcd.io/guides/installation/) v2 CLI (release [downloads](https://github.com/fluxcd/flux2/releases))

**Note:** there is an issue with flux v0.15.0 that causes helm to fail with duplicate key errors. Brew/yum/apt-get will probably install that version or newer. Instead, please use the [install flux script](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/blob/master/scripts/install_flux.sh) or manually install an older version such as v0.14.2 from [fluxcd's git repo](https://github.com/fluxcd/flux2/releases/tag/v0.14.2).
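Once installed, a quick sanity check that the tools are on your PATH (standard version flags; nothing here is BigBang-specific):
```shell
# Confirm the local tooling is installed and report versions
kubectl version --client
helm version --short
flux --version
```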
## Manual Creation of a Development Environment
This section covers the manual creation of an environment. It is a good place to start because it builds an understanding of everything that the automated methods do for you, and it uses far fewer cloud resources.
### Step 1
Create an Ubuntu EC2 instance using the AWS console with the following attributes (a hedged AWS CLI sketch follows the list). Please clean up after yourself: stop or delete any instances that you are not currently using. See the addendum for using Amazon Linux 2 instead of Ubuntu and for using the AWS command line.
- Ubuntu Server 20.04 LTS (HVM), SSD Volume Type
```shell
#!/bin/bash
# Raise vm.max_map_count and fs.file-max.
# Required for Elastic to run correctly without OOM errors.
echo 'vm.max_map_count=524288' > /etc/sysctl.d/vm-max_map_count.conf
echo 'fs.file-max=131072' > /etc/sysctl.d/fs-file-max.conf
sysctl -p
ulimit -n 131072
ulimit -u 8192
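# Preload netfilter modules commonly needed for Istio's iptables-based traffic redirection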
modprobe xt_REDIRECT
modprobe xt_owner
modprobe xt_statistic
```
- 50 GB of disk space
- Security Group: All TCP, limited to your local IP address. If you already have a security group, select it; otherwise create a new one. See the addendum for a more secure method that opens only port 22 for SSH traffic and uses sshuttle.
- If you have an existing key pair that you still have access to, select it. If not, create a new key pair. Be sure to save the pem file.
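For reference, here is the hedged AWS CLI sketch mentioned above. The AMI ID, instance type, key name, security group ID, and user-data file name are hypothetical placeholders; substitute your own values (the addendum covers the CLI approach in full).
```shell
# All IDs and names below are hypothetical placeholders; replace them with your own.
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t3a.2xlarge \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxxxxxxxxxxx \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=50,VolumeType=gp2}' \
  --user-data file://userdata.txt
```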
### Step 2
Configure the EC2 instance. SSH into your new EC2 instance and configure it with the following:
- SSH: Find your instance's public IP. It may be in the output of your `run-instances` command; if not, search for your instance ID in the AWS web console and copy the public IPv4 address from the instance details. The example below assumes this value is `1.2.3.4`; replace it with the actual value.
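For example (a hedged sketch; the key path and IP are placeholders, and `ubuntu` is the default user on Ubuntu AMIs):
```shell
# Replace 1.2.3.4 with your instance's public IP and my-key.pem with your key file
ssh -i ~/.ssh/my-key.pem ubuntu@1.2.3.4
```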
Here is an explanation of what we are doing with this command:
- `--volume /etc/machine-id:/etc/machine-id` mounts the host's /etc/machine-id into the k3d nodes so that the fluentbit DaemonSet has a machine-id file to read.
- `--api-port 6443` is the port that your k8s API will use. 6443 is the standard default port for the k8s API.
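For context, a pared-down cluster-create command using these flags might look like the following. This is a hedged sketch using k3d v4 flag names; the full command in this guide includes additional ports, volumes, and k3s arguments not shown here.
```shell
# Minimal sketch (k3d v4 syntax); the real command in this guide passes more options
k3d cluster create dev-cluster \
  --servers 1 \
  --agents 3 \
  --volume /etc/machine-id:/etc/machine-id \
  --api-port 6443
```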
### Step 3
Test the cluster from your local workstation. Copy the contents of the k3d kubeconfig from the EC2 instance to your local workstation. Do it manually with copy and paste.
```shell
kubectl cluster-info
kubectl get nodes
```
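If you saved the pasted kubeconfig to a separate file, one hedged way to wire it up (the file name is an assumption; make sure the server address in the file points at your EC2 instance's public IP rather than a local address):
```shell
# Assumes the kubeconfig was pasted into ~/.kube/k3d-dev.yaml and its server
# address was edited to the EC2 instance's public IP
export KUBECONFIG=~/.kube/k3d-dev.yaml
kubectl config current-context
kubectl cluster-info
```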
### Step 4
Start deploying to your k3d cluster. The scope of this documentation is limited to creating your dev environment. How to deploy BigBang is intentionally NOT included here. Those steps are left to other documents. You will need to install flux in your cluster before deploying BigBang.
```shell
# git clone the bigbang repo somewhere on your workstation
git clone https://repo1.dso.mil/platform-one/big-bang/bigbang.git
# run the script to install flux in your cluster using your registry1.dso.mil image pull credentials
cd ./bigbang
./scripts/install_flux.sh -u your-user-name -p your-pull-secret
```
Alternatively, install flux from the upstream internet source:
```shell
flux install
```
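Either way, verify that the Flux controllers came up before deploying BigBang (assuming the default `flux-system` namespace):
```shell
# Confirm the flux controllers are installed and healthy
flux check
kubectl get pods -n flux-system
```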
## Addendum
### More secure method with `sshuttle`
Instead of opening all TCP traffic (all ports) to your local public IP address, you only need port 22 for SSH traffic; then use sshuttle to tunnel into your EC2 instance. All other steps are the same as above. The example below assumes that your EC2 instance is in the default VPC.
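This is a hedged sketch: the VPC CIDR (172.31.0.0/16 is the AWS default-VPC range), the public IP `1.2.3.4`, and the key file path are placeholders to adjust.
```shell
# Route traffic destined for the VPC through the EC2 instance over SSH
sshuttle --dns -r ubuntu@1.2.3.4 172.31.0.0/16 --ssh-cmd 'ssh -i ~/.ssh/my-key.pem'
```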
# Integrate a Package with the BigBang Helm Chart
[[_TOC_]]
1. Make a branch from the BigBang chart repository master branch. You can automatically create a branch from the Repo1 Gitlab issue. Or, in some cases you might manually create the branch. You should name the branch with your issue number. If your issue number is 9999 then your branch name can be "9999-my-description". It is best practice to make branch names short and simple.
There are two ways to test BigBang: imperative, or GitOps with Flux. Your initial development can start with imperative testing, but you should finish with GitOps to make sure that your code works with Flux.
### Imperative
You can manually deploy BigBang with the Helm command line. With this method you can test local code changes without committing to a repository. Here are the steps, which you can iterate with "code a little, test a little". From the root of your local bigbang repo:
```shell
# Deploy with helm while pointing to your override values files.
# In this example the files are placed on your workstation at ../overrides/*
# Bigbang packages should create any needed secrets from the chart values
# If you have the values file encrypted with sops, temporarily decrypt it
helm upgrade -i bigbang ./chart -n bigbang --create-namespace -f ../overrides/override-values.yaml -f ../overrides/registry-values.yaml -f ./chart/ingress-certs.yaml
# Conduct testing
# If you make code changes you can run another helm upgrade to pick up the new changes
helm upgrade -i bigbang ./chart -n bigbang --create-namespace -f ../overrides/override-values.yaml -f ../overrides/registry-values.yaml -f ./chart/ingress-certs.yaml
# Tear down
helm delete bigbang -n bigbang
# Helm delete will not delete the bigbang namespace
kubectl delete ns bigbang
# Istio namespace will be stuck in "finalizing". So run the script to delete it.
hack/remove-ns-finalizer.sh istio-system
```
### GitOps with Flux
You can deploy your development code the same way a customer would deploy using GitOps. You must commit any code changes to your development branch because this is how GitOps works. There is a [customer template repository](https://repo1.dso.mil/platform-one/big-bang/customers/template) with an example template for how to deploy using BigBang. You can create a branch from another developer's branch or start clean from the master branch. Make the necessary modifications as explained in the README.md; the setup information is not repeated here. This is a public repo, so DO NOT commit unencrypted secrets. Before committing code, it is a good idea to manually run `helm template` and a `helm install` with dry run. This will reveal many errors before you make a commit. Here are the steps you can iterate:
```shell
# Verify chart code before committing
helm template bigbang ./chart -n bigbang -f ../customers/template/dev/configmap.yaml --debug
helm install bigbang ./chart -n bigbang -f ../customers/template/dev/configmap.yaml --dry-run
# Commit and push your code
# Deploy your bigbang template
kubectl apply -f dev/bigbang.yaml
# Monitor rollout
watch kubectl get pod,helmrelease -A
# Conduct testing
# Tear down
kubectl delete -f dev/bigbang.yaml
# Istio namespace will be stuck in "finalizing". So run the script to delete it. You will need 'jq' installed
hack/remove-ns-finalizer.sh istio-system
# If you have pushed code changes before the tear down, occasionally the bigbang deployments are not terminated because Flux has not had enough time to reconcile the helmreleases
# Re-deploy bigbang
kubectl apply -f dev/bigbang.yaml
# Run the sync script.
hack/sync.sh
# Tear down
kubectl delete -f dev/bigbang.yaml
hack/remove-ns-finalizer.sh istio-system
```