Release 1.19.0
1. Release Prep
- Verify that the previous release branch's commit hash matches the last release tag. Investigate with the previous RE if they do not match.
- Create release branch with name. Ex: `release-1.9.x`
- Build draft release notes, see release_notes_template.md
- Release-specific code changes. Make the following changes in a single commit so it can be cherry-picked into master later:
  - Bump the self-reference version in `base/gitrepository.yaml`
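    A minimal sketch of the field in question, assuming the file follows the standard Flux `GitRepository` layout (the kind, apiVersion, and names below are assumptions about the file's contents):

    ```yaml
    # base/gitrepository.yaml (sketch): bump the self-referenced tag to the new release
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: bigbang
    spec:
      ref:
        tag: "1.19.0"
    ```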
  - Update the chart release version in `chart/Chart.yaml`
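    The change is the `version` field in the chart metadata (the chart `name` shown is an assumption):

    ```yaml
    # chart/Chart.yaml (sketch): bump the chart version to the release being cut
    apiVersion: v2
    name: bigbang
    version: 1.19.0
    ```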
  - Bump the version badge at the top of `README.md`
  - Update `/Packages.md` with any new packages
  - Update `CHANGELOG.md` with links to MRs and any upgrade notices/known issues; update the release-diff link for the release
  - Regenerate `README.md` using `helm-docs`, overwriting the existing file:

    ```shell
    # from root dir of your release branch
    docker run -v "$(pwd):/helm-docs" -u $(id -u) jnorwood/helm-docs:v1.5.0 -s file -t .gitlab-ci/README.md.gotmpl --dry-run > README.md
    ```
2. Test and Validate Release Candidate
Deploy release branch on Dogfood cluster
- Connect to Cluster
- Update `bigbang/base/kustomization.yaml` & `bigbang/prod/kustomization.yaml` with the release branch (sketch below).
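  One way the ref change typically looks, assuming the kustomization pulls Big Bang as a remote base (the URL and file layout are assumptions):

  ```yaml
  # bigbang/base/kustomization.yaml (sketch): point the ref at the release branch
  resources:
    - https://repo1.dso.mil/platform-one/big-bang/bigbang.git//base?ref=release-1.19.x
  ```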
- Verify cluster has updated to the new release
- Packages have fetched the new revision and match the new release
- Packages have reconciled:

  ```shell
  # check release
  watch kubectl get gitrepositories,kustomizations,hr,po -A

  # if flux has not updated after 10 minutes
  flux reconcile hr -n bigbang bigbang --with-source

  # if it is still not updating, delete the flux source controller pod
  kubectl get all -n flux-system
  kubectl delete pod/source-controller-xxxxxxxx-xxxxx -n flux-system
  ```

- Confirm app UIs are loading:
  - anchore
  - argocd
  - gitlab
  - tracing
  - kiali
  - kibana
  - mattermost
  - minio
  - alertmanager
  - grafana
  - prometheus
  - sonarqube
  - twistlock
  - nexus
- TLS/SSL certs are valid (spot-check sketched below)
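  A quick spot-check from the command line, assuming the apps follow the `*.dogfood.bigbang.dev` hostname pattern used elsewhere in this checklist:

  ```shell
  # print validity dates and issuer for one app's cert (hostname is an example)
  echo | openssl s_client -connect grafana.dogfood.bigbang.dev:443 -servername grafana.dogfood.bigbang.dev 2>/dev/null \
    | openssl x509 -noout -dates -issuer
  ```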
Logging
- Login to kibana with SSO
- Kibana is actively indexing/logging.
Cluster Auditor
- Login to kibana with SSO
- violations index is present and contains images that aren't from registry1
Monitoring
- Login to grafana with SSO
- Contains Kubernetes Dashboards and metrics
- Contains Istio dashboards
- Login to prometheus
- All apps are being scraped, no errors
Kiali
- Login to kiali with SSO
- Validate graphs and traces are visible under applications/workloads
- Validate no errors appear (red notification bell would be visible if there are errors)
Sonarqube
- Login to sonarqube with SSO
- Add a project for your release
- Find a repo to scan sonar against and clone it locally (you can just use the BB repo, or pick any upstream repo)
- Generate a token for the project, choose build type (when in doubt "Other"), your OS, and follow the instructions to run the scanner from local clone of the repo you chose
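  If you chose "Other" as the build type, the generated instructions boil down to something like this (the project key, host URL, and token below are hypothetical placeholders):

  ```shell
  # run from the root of the cloned repo
  sonar-scanner \
    -Dsonar.projectKey=release-1-19-0 \
    -Dsonar.host.url=https://sonarqube.dogfood.bigbang.dev \
    -Dsonar.login=YOUR_GENERATED_TOKEN
  ```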
GitLab & Runners
- Login to gitlab with SSO
- Create new public group with release name. Example: `release-1-8-0`
- Create new public project with release name. Example: `release-1-8-0`
- git clone and git push to the new project (sketch below)
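  A minimal smoke test, assuming the dogfood GitLab is reachable at `gitlab.dogfood.bigbang.dev` (hostname and default branch are assumptions):

  ```shell
  git clone https://gitlab.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE.git
  cd PROJECTNAMEHERE
  echo "release smoke test" > test.txt
  git add test.txt
  git commit -m "release smoke test"
  git push origin main   # or the project's default branch
  ```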
- docker push and docker pull image to registry:

  ```shell
  docker pull alpine
  docker tag alpine registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest
  docker login registry.dogfood.bigbang.dev
  docker push registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest
  ```
- Edit profile and change user avatar
- Test simple CI pipeline, see `sample_ci.yaml` (a minimal stand-in is sketched below)
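  If `sample_ci.yaml` is not handy, a minimal `.gitlab-ci.yml` that exercises a runner looks something like this (contents are an assumption, not the checked-in sample):

  ```yaml
  # minimal pipeline: one job a shared runner can pick up
  stages:
    - test

  smoke-test:
    stage: test
    image: alpine:latest
    script:
      - echo "runner smoke test"
  ```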
Nexus
- Login to Nexus as admin
- Validate there are no errors displaying in the UI
- `docker login containers.dogfood.bigbang.dev` with the creds from the encrypted values
- `docker tag alpine:latest containers.dogfood.bigbang.dev/alpine:1-19-0` (replace with your release number)
- `docker push containers.dogfood.bigbang.dev/alpine:1-19-0`
- Validate that the push succeeds, then pull the image back down (or pull down the image from the previous release)
Anchore
- Login to anchore with SSO
- Log out and log back in as the admin user (this user should have pull creds set up for the registries)
- Scan image in dogfood registry: `registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest`
- Scan image in nexus registry: `containers.dogfood.bigbang.dev/alpine:1-19-0`
- Validate scans complete and Anchore displays data (click the SHA value)
Argocd
- Login to argocd with SSO
- Log out and log back in with `admin`. The password is in the encrypted environment values file; initially it is the name of the argocd server pod, or perform a password reset.
- Create application:

  ```
  *click* Create Application
    Application Name: argocd-test
    Project: default
    Sync Policy: Automatic
    Sync Policy: check both boxes
    Sync Options: check "auto-create namespace"
    Repository URL: https://github.com/argoproj/argocd-example-apps
    Revision: HEAD
    Path: helm-guestbook
    Cluster URL: https://kubernetes.default.svc
    Namespace: argocd-test
  *click* Create (top of page)
  ```
- Delete application
Minio
- Log into the minio UI as `minio` with password `minio123` (a CLI alternative is sketched after this list)
- Create bucket
- Store file to bucket
- Download file from bucket
- Delete bucket and files
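  The same checks can be scripted with the MinIO client if you prefer the CLI (the endpoint is an assumption; the creds are the ones above):

  ```shell
  mc alias set dogfood https://minio.dogfood.bigbang.dev minio minio123
  mc mb dogfood/release-test                    # create bucket
  echo "minio test" > test.txt
  mc cp test.txt dogfood/release-test/          # store file to bucket
  mc cp dogfood/release-test/test.txt test.out  # download file from bucket
  mc rb --force dogfood/release-test            # delete bucket and files
  ```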
Mattermost
- Login to mattermost with SSO
- Verify the Elastic integration is working
Twistlock
- Login to twistlock/prisma cloud with the credentials encrypted in bigbang/prod/environment-bb-secret.enc.yaml
- Only complete the following steps if Twistlock was upgraded:
- Navigate to Manage -> Defenders -> Deploy
- Turn off "Use the official Twistlock registry" and in "Enter the full Defender image name" paste the latest IB image for defenders
- 3: `twistlock-console`
- 11: Toggle "Monitor Istio" to On
- 14: `registry1.dso.mil/ironbank/twistlock/defender/defender:latest`
- 15: `private-registry`
- 16: Toggle "Deploy Defenders with SELinux Policy" to On
- 16: Toggle "Nodes use Container Runtime Interface (CRI), not Docker" to On
- 16: Toggle "Nodes runs inside containerized environment" to On
- 17b: Download the YAML files

(The numbers refer to the numbered fields in the Defender deploy form.)
- Apply the YAML in the dogfood cluster, validate the pods go to Running (sketch below)
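  A sketch of the apply-and-verify commands (the file name and namespace are assumptions; use whatever the console download produced):

  ```shell
  kubectl apply -f ./defender.yaml
  kubectl get pods -n twistlock -w          # watch until the defender pods are Running
  kubectl get nodes --no-headers | wc -l    # node count, to compare with defenders online
  ```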
- Under Manage -> Defenders -> Manage, make sure # of defenders online is equal to number of nodes on the cluster
- Under Radars -> Containers, validate pods are shown across all namespaces
Velero
- Backup PVCs using velero_test.yaml:

  ```shell
  kubectl apply -f ./velero_test.yaml
  # exec into the velero_test container
  cat /mnt/velero-test/test.log
  # take note of the log entries and exit the exec session

  velero backup create velero-test-backup-1-8-0 -l app=velero-test
  velero backup get
  kubectl delete -f ./velero_test.yaml
  kubectl get pv | grep velero-test
  kubectl delete pv INSERT-PV-ID
  ```
- Restore PVCs:

  ```shell
  velero restore create velero-test-restore-1-8-0 --from-backup velero-test-backup-1-8-0
  # exec into the velero_test container
  cat /mnt/velero-test/test.log
  # both the old log entries and the new ones should appear if the backup was done correctly
  ```
- Cleanup test:

  ```shell
  kubectl delete -f ./velero_test.yaml
  kubectl get pv | grep velero-test
  kubectl delete pv INSERT-PV-ID
  ```
Keycloak
- Login to Keycloak admin console. The credentials are in the encrypted environment-bb values file.
3. Create Release
- Re-run helm-docs in case any package tags changed as a result of issues found in testing.
- Create release candidate tag based on release branch. Tag ex: `1.8.0-rc.0`. Message: `release candidate`. Release Notes: **Leave Blank** (CLI sketch below)
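  If tagging from the CLI rather than the GitLab UI (branch and tag names follow the examples above):

  ```shell
  git checkout release-1.8.x
  git tag -a 1.8.0-rc.0 -m "release candidate"
  git push origin 1.8.0-rc.0
  ```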
- Verify the tag pipeline passed.
- Create release tag based on release branch. Tag ex: `1.8.0`. Message: `release 1.x.x`. Release Notes: **Leave Blank**
- Verify the release pipeline passed.
- Add release notes to release.
- Cherry-pick release commit(s) as needed with a merge request back to the master branch (sketch below)
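  A typical flow, assuming a single release commit (the branch name and SHA are placeholders):

  ```shell
  git checkout master && git pull
  git checkout -b cherry-pick-release-changes
  git cherry-pick RELEASE-COMMIT-SHA
  git push -u origin cherry-pick-release-changes   # then open an MR back to master
  ```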
- Celebrate and announce release