# Release 1.28

Copy https://repo1.dso.mil/platform-one/big-bang/customers/bigbang/-/tree/master/docs/release into the description.

## Release Process

### 1. Release Prep
- Verify that the previous release branch commit hash matches the last release tag. If they do not match, investigate with the previous release engineer.
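This first check can be scripted from a clone of the repo; a minimal sketch (`check_release_tag` is a hypothetical helper, and the tag/branch names are examples):

```shell
# Hypothetical helper: verify a release tag and its release branch point at
# the same commit. Run from a clone with the tag and branch fetched.
check_release_tag() {
  tag_commit=$(git rev-list -n 1 "$1") || return 1
  branch_commit=$(git rev-parse "$2") || return 1
  if [ "$tag_commit" = "$branch_commit" ]; then
    echo "match"
  else
    echo "MISMATCH: investigate with the previous release engineer"
  fi
}
# e.g.: check_release_tag 1.8.0 origin/release-1.8.x
```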
- Create a release branch, named e.g. `release-1.9.x`
- Copy the markdown from the previous release notes and build new draft release notes in the dogfood repo's `/docs/release` directory, in a new file `release-notes-x-x-x.md`. Edit the contents and commit it to the repo for the benefit of the next release engineer. The command below will get you the BB versions for all packages to use in the package table; make sure to run it from the root of the repo while on the release branch:

  ```shell
  yq e '(.*.git.tag | select(. != null) | [{"path":(path | .[-3]), "value":.}], .addons.*.git.tag | select(. != null) | [{"Package":(path | .[-3]), "BB Version":.}])' chart/values.yaml
  ```
- For the package version you will need to check each package manually. Depending on the package, we may be tracking one or more of the image tags or the chart's `appVersion`.
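When a package tracks the chart's `appVersion`, a small helper can pull it out of the package's `Chart.yaml`; a sketch (`chart_app_version` is a hypothetical helper name, and the path in the example is illustrative):

```shell
# Hypothetical helper: print a chart's appVersion given the path to Chart.yaml.
chart_app_version() {
  grep '^appVersion:' "$1" | awk '{print $2}' | tr -d '"'
}
# e.g.: chart_app_version chart/Chart.yaml
```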
- Release-specific code changes. Make the following changes in a single commit so it can be cherry-picked into master later:
  - Bump the self-reference version in `base/gitrepository.yaml`
  - Update the chart release version in `chart/Chart.yaml`
  - Bump the badge at the top of `README.md`
  - Update `/Packages.md` with any new packages
  - Update `CHANGELOG.md` with links to MRs and any upgrade notices/known issues; update the release-diff link for the release
  - Update `README.md` using helm-docs. Overwrite the existing readme file, but restore the Homepage, Usage, and Getting Started content:

    ```shell
    # from root dir of your release branch
    docker run -v "$(pwd):/helm-docs" -u $(id -u) jnorwood/helm-docs:v1.5.0 -s file -t .gitlab/README.md.gotmpl --dry-run > README.md
    ```
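The two version bumps above are one-line edits; a sketch of the `sed` form, demonstrated here on a sample file rather than the real repo paths (release number is an example):

```shell
RELEASE=1.8.0  # example release number
# In the repo you would run the same sed against chart/Chart.yaml:
#   sed -i "s/^version: .*/version: $RELEASE/" chart/Chart.yaml
printf 'apiVersion: v2\nname: bigbang\nversion: 1.7.0\n' > /tmp/Chart.yaml
sed -i "s/^version: .*/version: $RELEASE/" /tmp/Chart.yaml
grep '^version:' /tmp/Chart.yaml  # prints: version: 1.8.0
```

Always review the resulting diff before committing; a greedy pattern can touch more lines than intended.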
### 2. Test and Validate Release Candidate

#### Deploy release branch on Dogfood cluster

- Connect to the cluster
- Review Elasticsearch health and trial license status, and follow these steps if it has expired:

  ```shell
  kubectl delete hr ek eck-operator fluentbit cluster-auditor -n bigbang
  kubectl delete ns eck-operator logging
  flux reconcile kustomization environment -n bigbang
  flux suspend hr bigbang -n bigbang
  flux resume hr bigbang -n bigbang
  ```
- Review Mattermost Enterprise trial license status and follow these steps if it has expired. To "renew" the Mattermost Enterprise trial license, connect to the RDS Postgres DB using `psql` (get the command and auth from Ryan/Micah/Branden):

  ```sql
  \c mattermost
  select * from "public"."licenses";
  delete from "public"."licenses";
  \q
  ```

  Then delete the Mattermost resource so it is recreated:

  ```shell
  kubectl delete mattermost mattermost -n mattermost
  ```
- If Flux has been updated in the latest release, check out your release branch on the BB repo and run `./scripts/install_flux.sh -s` (the `-s` option reuses the existing secret so you don't have to provide credentials)
- Update `bigbang/base/kustomization.yaml` and `bigbang/prod/kustomization.yaml` with the release branch
- Verify the cluster has updated to the new release:
  - Packages have fetched the new revision and match the new release
  - Packages have reconciled

  ```shell
  # check release
  watch kubectl get gitrepositories,kustomizations,hr,po -A
  # if flux has not updated after 10 minutes
  flux reconcile hr -n bigbang bigbang --with-source
  # if it is still not updating, delete the flux source controller
  kubectl get all -n flux-system
  kubectl delete pod/source-controller-xxxxxxxx-xxxxx -n flux-system
  ```
- Confirm app UIs are loading:
  - anchore
  - argocd
  - gitlab
  - tracing
  - kiali
  - kibana
  - mattermost (chat)
  - minio
  - alertmanager
  - grafana
  - prometheus
  - sonarqube
  - twistlock
  - nexus
  - keycloak
- TLS/SSL certs are valid
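One way to spot-check a serving cert from a workstation is to pipe it through `openssl`; a sketch (`cert_ok` is a hypothetical helper name and the hostname in the comment is an example):

```shell
# Hypothetical helper: succeed if the PEM cert on stdin is still valid
# for at least 7 more days (openssl's -checkend takes seconds).
cert_ok() {
  openssl x509 -noout -checkend $((7 * 24 * 3600))
}
# Against a live host (example hostname):
#   echo | openssl s_client -connect kibana.dogfood.bigbang.dev:443 \
#     -servername kibana.dogfood.bigbang.dev 2>/dev/null | cert_ok && echo valid
```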
#### Logging

- Login to kibana with SSO
- Kibana is actively indexing/logging
#### Monitoring

- Login to grafana with SSO
- Contains Kubernetes dashboards and metrics
- Contains Istio dashboards
- Login to prometheus
- All apps are being scraped, no errors
#### Cluster Auditor

- Login to grafana with SSO
- The OPA Violations dashboard is present and shows violations in namespaces (check the `gitlab-runners` namespace to validate violations over time)
#### Kiali

- Login to kiali with SSO
- Validate graphs and traces are visible under applications/workloads
- Validate no errors appear (a red notification bell will be visible if there are errors)
#### GitLab

- Login to gitlab with SSO
- Edit your profile and change the user avatar
- Create a new public group with the release name, e.g. `release-1-8-0`
- Create a new public project with the release name, e.g. `release-1-8-0`
- `git clone` the project
- Pick one of the project folders from https://github.com/SonarSource/sonar-scanning-examples/tree/master/sonarqube-scanner/src and copy all the files into your clone from dogfood, then push up
- `docker push` and `docker pull` an image to/from the registry:

  ```shell
  docker pull alpine
  docker tag alpine registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest
  docker login registry.dogfood.bigbang.dev
  docker push registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest
  ```
#### Sonarqube

- Login to sonarqube with SSO
- Add a project for your release
- Generate a token for the project and copy the token somewhere safe for use later
- Click "Other", then "Linux", and copy the projectKey from `-Dsonar.projectKey=XXXXXXX` for use later
- After completing the GitLab Runner test, return to sonar and check that your project now has analysis
#### Gitlab Runner

- Log back into gitlab and navigate to your project
- Under Settings -> CI/CD -> Variables, add two variables:
  - `SONAR_HOST_URL` set equal to `https://sonarqube.dogfood.bigbang.dev/`
  - `SONAR_TOKEN` set equal to the token you copied from Sonarqube earlier (make this one masked)
- Add a `.gitlab-ci.yml` file to the root of the project, paste in the contents of sample_ci.yaml, replacing `-Dsonar.projectKey=XXXXXXX` with what you copied earlier
- Commit, validate the pipeline runs and succeeds (you may need to retry if there is a connection error), then return to the last step of the sonar test
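If sample_ci.yaml is not at hand, a minimal scanner job looks roughly like the sketch below (the job name and image tag are assumptions, not the contents of sample_ci.yaml; `SONAR_HOST_URL` and `SONAR_TOKEN` are picked up from the CI variables set above):

```yaml
# Hypothetical minimal .gitlab-ci.yml for the sonar-scanner test
sonarqube-check:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  script:
    - sonar-scanner -Dsonar.projectKey=XXXXXXX  # replace with your projectKey
```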
#### Nexus

- Login to Nexus as admin; the password is in the `nexus-repository-manager-secret` secret
- Validate there are no errors displayed in the UI
- Push/pull an image to/from the nexus registry:

  ```shell
  # login with the creds from the encrypted values (or the admin user creds)
  docker login containers.dogfood.bigbang.dev
  # replace with your release number; pick a different image to tag if you want
  docker tag alpine:latest containers.dogfood.bigbang.dev/alpine:1-20-0
  docker push containers.dogfood.bigbang.dev/alpine:1-20-0
  # pull down the image for the previous release
  docker pull containers.dogfood.bigbang.dev/alpine:1-19-0
  ```
#### Anchore

- Login to Anchore with SSO
- Log out and log back in as the admin user; the password is in the `anchore-anchore-engine-admin-pass` secret (admin will have pull creds set up for the registries)
- Scan an image in the dogfood registry: `registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest`
- Scan an image in the nexus registry: `containers.dogfood.bigbang.dev/alpine:1-19-0` (use your release number)
- Validate scans complete and Anchore displays data (click the SHA value for each image)
#### Argocd

- Login to argocd with SSO
- Log out and log back in with username `admin`. The password is in the `argocd-initial-admin-secret` secret. If that doesn't work, attempt a password reset.
- Create an application: click "Create Application", fill in the following, then click "Create" at the top of the page:
  - Application Name: `podinfo`
  - Project: `default`
  - Sync Policy: Automatic, with both boxes checked
  - Sync Options: check "auto-create namespace"
  - Repository URL: `https://repo1.dso.mil/platform-one/big-bang/apps/sandbox/podinfo/`
  - Revision: `HEAD`
  - Path: `chart`
  - Cluster URL: `https://kubernetes.default.svc`
  - Namespace: `podinfo`
- Delete the application
#### Minio

- Log into the Minio UI; the access and secret keys are in the `minio-root-creds-secret` secret
- Create a bucket
- Store a file in the bucket
- Download the file from the bucket
- Delete the bucket and files
#### Mattermost

- Login to mattermost with SSO
- Update/modify your profile picture
- Send chats and validate that chats from previous releases are still around
#### Twistlock

- Login to twistlock/prisma cloud with the credentials encrypted in `bigbang/prod/environment-bb-secret.enc.yaml`
- Only complete the following if Twistlock was upgraded:
  - Navigate to Manage -> Defenders -> Deploy
  - Turn off "Use the official Twistlock registry" and paste the latest IB image for defenders into "Enter the full Defender image name", setting the numbered form fields as follows:
    - 3: `twistlock-console`
    - 11: On (toggle on "Monitor Istio")
    - 14: `registry1.dso.mil/ironbank/twistlock/defender/defender:latest`
    - 15: `private-registry`
    - 16: On (Deploy Defenders with SELinux Policy)
    - 16: On (Nodes use Container Runtime Interface (CRI), not Docker)
    - 16: On (Nodes run inside containerized environment)
    - 17b: download the yaml files
  - Apply the yaml in the dogfood cluster and validate the pods go to Running
- Under Manage -> Defenders -> Manage, make sure the number of defenders online is equal to the number of nodes in the cluster
- Under Radars -> Containers, validate pods are shown across all namespaces
#### Kyverno

- Test secret sync in a new namespace:

  ```shell
  # create the secret in the kyverno namespace
  kubectl create secret generic -n kyverno kyverno-bbtest-secret \
    --from-literal=username='username' \
    --from-literal=password='password'

  # create the Kyverno policy
  kubectl apply -f https://repo1.dso.mil/platform-one/big-bang/apps/sandbox/kyverno/-/raw/main/chart/tests/manifests/sync-secrets.yaml

  # check that the secret is created in the NEW namespace
  kubectl create ns kyverno-test
  # wait ~5s for the policy to be ready
  kubectl label ns kyverno-test kubernetes.io/metadata.name=kyverno-bbtest --overwrite=true
  kubectl get secrets kyverno-bbtest-secret -n kyverno-test  # test passed if found
  ```

- If successful, delete the test resources:

  ```shell
  kubectl delete -f https://repo1.dso.mil/platform-one/big-bang/apps/sandbox/kyverno/-/raw/main/chart/tests/manifests/sync-secrets.yaml
  kubectl delete secret kyverno-bbtest-secret -n kyverno
  kubectl delete ns kyverno-test
  ```
#### Velero

- Backup PVCs with `velero_test.yaml`:

  ```shell
  kubectl apply -f ./velero_test.yaml
  # exec into the velero-test container, take note of the log entries, then exit
  cat /mnt/velero-test/test.log
  # back outside the container:
  velero backup create velero-test-backup-1-8-0 -l app=velero-test
  velero backup get
  kubectl delete -f ./velero_test.yaml
  kubectl get pv | grep velero-test
  kubectl delete pv INSERT-PV-ID
  ```

- Restore PVCs:

  ```shell
  velero restore create velero-test-restore-1-8-0 --from-backup velero-test-backup-1-8-0
  # exec into the velero-test container; both the old log entries and the new
  # ones should be in the log if the backup was done correctly
  cat /mnt/velero-test/test.log
  ```

- Cleanup test:

  ```shell
  kubectl delete -f ./velero_test.yaml
  kubectl get pv | grep velero-test
  kubectl delete pv INSERT-PV-ID
  ```
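For reference, `velero_test.yaml` is essentially a pod appending to a log file on a PVC so the restore can be verified. A hypothetical sketch (the real manifest lives alongside these docs; names, image, and storage size here are assumptions):

```yaml
# Hypothetical sketch of a velero_test.yaml-style manifest: a pod that
# appends timestamps to a log on a PVC so a restore can be verified.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: velero-test
  labels:
    app: velero-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: velero-test
  labels:
    app: velero-test
spec:
  containers:
    - name: velero-test
      image: alpine
      command: ["/bin/sh", "-c"]
      args: ["while true; do date >> /mnt/velero-test/test.log; sleep 30; done"]
      volumeMounts:
        - name: data
          mountPath: /mnt/velero-test
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: velero-test
```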
#### Keycloak

- Login to the Keycloak admin console. The credentials are in the `keycloak-credentials` secret.
### 3. Create Release

- Re-run helm-docs in case any package tags changed as a result of issues found in testing
- Create a release candidate tag based on the release branch. Tag, e.g.: `1.8.0-rc.0`. Message: "release candidate". Release notes: **leave blank**
- Verify the tag pipeline passed
- Create the release tag based on the release branch. Tag, e.g.: `1.8.0`. Message: "release 1.x.x". Release notes: **leave blank**
- Verify the release pipeline passed
- Add the release notes to the release
- Cherry-pick release commit(s) as needed with a merge request back to the master branch
- Celebrate and announce the release
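If the tags are cut from the command line rather than the GitLab UI, the tagging step can be sketched as below (`make_rc_tag` is a hypothetical helper and the tag name is an example):

```shell
# Hypothetical helper: create an annotated release-candidate tag and echo it back.
make_rc_tag() {
  git tag -a "$1" -m "release candidate" && git tag -n1 -l "$1"
}
# e.g., from the release branch:
#   make_rc_tag 1.8.0-rc.0 && git push origin 1.8.0-rc.0
```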
## Draft Release Note

### Candidate Release Notes

Please see our documentation page for more information on how to consume and deploy BigBang.

#### Upgrade Notices

- TBD

#### Upgrades from previous releases

- TBD