UNCLASSIFIED - NO CUI

Release 1.15.0

1. Release Prep

  • Verify that the previous release branch commit hash matches the last release tag. Investigate with the previous RE if they do not match.
  • Create the release branch, named for the release. Ex: release-1.15.x
  • Build draft release notes, see release_notes_template.md
  • Release-specific code changes. Make the following changes in a single commit so it can be cherry-picked into master later.
    • Bump self-reference version in base/gitrepository.yaml

    • Update the chart release version in chart/Chart.yaml

    • Bump the version badge at the top of README.md

    • Update /Packages.md with any new Packages

    • Update CHANGELOG.md with links to MRs and any upgrade notices/known issues. Update the release-diff link for this release.

    • Update README.md using helm-docs, overwriting the existing README file.

      # from root dir of your release branch
      docker run -v "$(pwd):/helm-docs" -u $(id -u) jnorwood/helm-docs:v1.5.0 -s file -t .gitlab-ci/README.md.gotmpl --dry-run > README.md
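
The branch name in the second step follows a fixed convention (release-MAJOR.MINOR.x). A small sketch of deriving it from the version string, so the same variable can drive the later tagging steps (variable names are illustrative):

```shell
# Derive the release branch name from the full version string.
# Convention assumed from this checklist: release-<major>.<minor>.x
VERSION=1.15.0
BRANCH="release-${VERSION%.*}.x"   # strip the patch level, append .x
echo "$BRANCH"                     # release-1.15.x
# then: git checkout -b "$BRANCH" && git push -u origin "$BRANCH"
```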

2. Test and Validate Release Candidate

Deploy the release branch on the Dogfood cluster

  • Connect to Cluster
  • Update bigbang/base/kustomization.yaml and bigbang/prod/kustomization.yaml to point at the release branch.
  • Verify the cluster has updated to the new release
    • Packages have fetched the new revision and match the new release

    • Packages have reconciled

      # check release
      watch kubectl get gitrepositories,kustomizations,hr,po -A
      # if flux has not updated after 10 minutes.
      flux reconcile hr -n bigbang bigbang --with-source
      # if it is still not updating, delete the flux source-controller pod
      kubectl get all -n flux-system 
      kubectl delete pod/source-controller-xxxxxxxx-xxxxx -n flux-system
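
For the kustomization update step above, only the git ref changes. A hypothetical sketch of bigbang/base/kustomization.yaml (the repo URL and path come from your existing dogfood config; only the ref= value is release-specific):

```yaml
# Hypothetical example - only the ref= value changes for the release.
bases:
  - https://repo1.dso.mil/platform-one/big-bang/bigbang.git//base?ref=release-1.15.x
```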

Confirm app UIs are loading

Logging

  • Log in to Kibana with SSO
  • Verify Kibana is actively indexing/logging

Cluster Auditor

  • Log in to Kibana with SSO
  • Verify the violations index is present and contains images that aren't from Registry1

Monitoring

  • Log in to Grafana with SSO
  • Verify it contains Kubernetes dashboards and metrics
  • Verify it contains Istio dashboards
  • Log in to Prometheus
  • Verify all apps are being scraped, with no errors

Kiali

  • Log in to Kiali with SSO

Sonarqube

GitLab & Runners

  • Log in to GitLab with SSO

  • Create a new public group named for the release. Example: release-1-15-0

  • Create a new public project named for the release. Example: release-1-15-0

  • git clone the new project and git push a change to it

  • docker push and docker pull an image to/from the registry

    docker pull alpine
    docker tag alpine registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest
    docker login registry.dogfood.bigbang.dev
    docker push registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest
  • Edit your profile and change the user avatar

  • Test a simple CI pipeline using sample_ci.yaml
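
The clone/push step can be rehearsed end to end. This self-contained sketch uses a local bare repo as a stand-in for the new dogfood project (swap REMOTE for the real project URL when testing against the cluster; names are illustrative):

```shell
# Group/project naming convention from this checklist: dots become dashes.
VERSION=1.15.0
GROUP="release-$(echo "$VERSION" | tr '.' '-')"   # release-1-15-0
echo "$GROUP"

# Local bare repo stands in for the new dogfood project URL.
REMOTE="$(mktemp -d)/${GROUP}.git"
git init --bare -q "$REMOTE"

# Clone, commit, and push - the same flow as against the real project.
WORK="$(mktemp -d)"
git clone -q "$REMOTE" "$WORK/$GROUP"   # warns the repo is empty; that's fine
cd "$WORK/$GROUP"
git config user.email re@example.com
git config user.name "Release Engineer"
echo "clone/push smoke test" > README.md
git add README.md
git commit -q -m "smoke test"
git push -q origin HEAD:master
git ls-remote --heads "$REMOTE"
```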

Anchore

  • Log in to Anchore with SSO
  • Scan the image pushed to the dogfood registry, registry.dogfood.bigbang.dev/GROUPNAMEHERE/PROJECTNAMEHERE/alpine:latest

Argocd

  • Log in to Argocd with SSO
  • Log out and log back in as admin (reset the admin password if needed)
  • Create application
    Click Create Application
    Application Name: argocd-test
    Project: default
    Sync Policy: Automatic
    Sync Policy: check both boxes
    Sync Options: check both boxes
    Repository URL: https://github.com/argoproj/argocd-example-apps
    Revision: HEAD
    Path: helm-guestbook
    Cluster URL: https://kubernetes.default.svc
    Namespace: argocd-test
    Click Create (top of page)
  • Delete application
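
The UI steps above can also be expressed declaratively. A hypothetical Application manifest mirroring those form values (assuming the two Sync Policy checkboxes are prune and self-heal, and that Argocd runs in the argocd namespace):

```yaml
# Hypothetical equivalent of the UI steps - field values mirror the form above.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd-test
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: helm-guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd-test
  syncPolicy:
    automated:
      prune: true      # assumed to be one of the two checked boxes
      selfHeal: true   # assumed to be the other
```

Applying this with kubectl and then running kubectl delete application argocd-test -n argocd exercises the same create/delete path as the UI.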

Minio

  • Create bucket
  • Store file to bucket
  • Download file from bucket
  • Delete bucket and files

Mattermost

  • Log in to Mattermost with SSO
  • Verify the Elastic integration is working

Velero

  • Back up PVCs using velero_test.yaml

    kubectl apply -f ./velero_test.yaml
    # exec into the velero-test container
    cat /mnt/velero-test/test.log
    # take note of the log entries, then exit the exec session
    velero backup create velero-test-backup-1-15-0 -l app=velero-test
    velero backup get
    kubectl delete -f ./velero_test.yaml
    kubectl get pv | grep velero-test
    kubectl delete pv INSERT-PV-ID
  • Restore PVCs

    velero restore create velero-test-restore-1-15-0 --from-backup velero-test-backup-1-15-0
    # exec into the velero-test container
    cat /mnt/velero-test/test.log
    # both the old and new log entries should be present if the backup was done correctly

Keycloak

3. Create Release

  • Create a release candidate tag based on the release branch. Tag EX: 1.15.0-rc.0.
    Message: release candidate
    Release Notes: **Leave Blank**
  • Verify the tag pipeline passed.
  • Create the release tag based on the release branch. Tag EX: 1.15.0.
    Message: release 1.x.x
    Release Notes: **Leave Blank**
  • Verify the release pipeline passed.
  • Add release notes to the release.
  • Cherry-pick the release commit(s) as needed with a merge request back to the master branch
  • Celebrate and announce the release
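
The two tagging steps can be sketched end to end. This uses a throwaway repo so the commands can be rehearsed safely; on the real release branch, skip the init/commit setup and add a git push origin for each tag:

```shell
VERSION=1.15.0
RC_TAG="${VERSION}-rc.0"

# Throwaway repo standing in for the release branch tip.
REPO="$(mktemp -d)"
cd "$REPO"
git init -q
git config user.email re@example.com
git config user.name "Release Engineer"
git commit -q --allow-empty -m "release branch tip"

# Release candidate tag first, then the final release tag.
git tag -a "$RC_TAG" -m "release candidate"
git tag -a "$VERSION" -m "release ${VERSION}"
git tag --list "${VERSION}*"
```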

RELEASE NOTES

Release 1.15.0 Release Notes

Please see our documentation page for more information on how to consume and deploy BigBang.

Upgrade Notices

Resources

Big Bang has begun to implement resource requests and limits on pods in preparation for setting OPA constraints to deny. If you notice multiple pod restarts, check for OOMKilled termination errors; pod limits may need to be increased.

MINIO INSTANCE CRITICAL UPGRADE INFORMATION - PLEASE READ BEFORE UPGRADING

If you have enabled the Minio Cluster Instance in the 'minio' namespace, this upgrade requires a backup and restore of your Minio instance buckets. Failure to do so will result in data loss during the upgrade.

By default, the update of the MinIO instance Helm chart to v4.1.2 will keep the 2.0.9 instances in place and operational. This allows a backup to be performed on the operational MinIO instances. After the backup is complete, an upgrade to the v4.1.2 instances is required. This is accomplished by setting the upgrade key/value in the values file (shown below) to true.

# When true, upgradeTenants enables use of the V4.* MinIO Operator CRD for creation of tenants.
addons:
  minio:
    values:
      upgradeTenants:
        enabled: false

After deploying the Helm chart with this value set to true, the new V4 instances will be running and you can restore the backed-up data to the new instances.

NOTE: If you have not enabled the deployment of a MinIO instance before the V4.1.2 release, you must still set the above-mentioned upgradeTenants.enabled value to true or the Helm deployment will fail.

One of the easiest ways to back up your MinIO instance is to use the MinIO mc command-line tool on a different system. The mc tool can be found at https://github.com/minio/mc, or you can use the Iron Bank approved container located at registry1.dso.mil/ironbank/opensource/minio/mc:RELEASE.2021-06-08T01-29-37Z.

mc alias set <alias> <hostname> <access-key> <secret-key>
mc mirror <alias>/ <local-storage-location>

Istio upgrade from 1.8 to 1.9

This release upgrades Istio to 1.9.7. Because of this, all pods with an istio-proxy sidecar must be restarted to pull in the newest sidecar version.

Mattermost default value change

To allow for defining replica count and resource requests/limits, the users value is set to null by default. Setting users will negate the replica and resource values, and Mattermost may not run due to OPA Gatekeeper constraints.

Setting a replica count greater than 1 requires an enterprise license and can be configured as in the following example:

addons:
  mattermost:
    values:
      enterprise:
        enabled: true
      replicaCount: 3

Setting a users value is not supported due to OPA constraint issues.

If you want to use Mattermost's users/size value, you will need to handle OPA violations and exceptions yourself, since this is not supported by Big Bang. If all of these considerations have been accounted for and you still want to deploy with Mattermost's user sizing, set the value as in this example:

addons:
  mattermost:
    values:
      users: 1000

Packages

Updated         | Package              | Type  | Package Version                | BB Version
Updated: 1.15.0 | Istio Controlplane   | Core  | 1.9.7                          | 1.9.7-bb.0
Updated: 1.15.0 | Istio Operator       | Core  | 1.9.7                          | 1.9.7-bb.1
                | Jaeger               | Core  | 2.23.0                         | 2.23.0-bb.1
Updated: 1.15.0 | Kiali                | Core  | 1.37.0                         | 1.37.0-bb.3
Updated: 1.15.0 | Cluster Auditor      | Core  | 1.16.0                         | 0.3.0-bb.6
Updated: 1.15.0 | OPA Gatekeeper       | Core  | 3.5.1                          | 3.5.1-bb.16
Updated: 1.15.0 | Elasticsearch Kibana | Core  | 7.13.4                         | 0.1.20-bb.0
                | ECK Operator         | Core  | 1.6.0                          | 1.6.0-bb.2
                | Fluentbit            | Core  | 1.8.1                          | 0.16.1-bb.0
Updated: 1.15.0 | Monitoring           | Core  | G: 7.5.2, P: 2.25.0, A: 0.21.0 | 14.0.0-bb.8
Updated: 1.15.0 | Twistlock            | Core  | 21.04.439                      | 0.0.8-bb.1
Updated: 1.15.0 | Argocd               | Addon | 2.0.1 (w/ p1 plugins)          | 3.6.8-bb.6
Updated: 1.15.0 | Authservice          | Addon | 0.4.0                          | 0.4.0-bb.13
Updated: 1.15.0 | MinIO Operator       | Addon | 4.1.2                          | 4.1.2-bb.3
Updated: 1.15.0 | MinIO                | Addon | RELEASE.2021-06-17T00-10-46Z   | 4.1.2-bb.6
Updated: 1.15.0 | Gitlab               | Addon | 13.12.9                        | 4.12.9-bb.1
Updated: 1.15.0 | Gitlab Runners       | Addon | 13.12.0                        | 0.29.0-bb.0
                | Nexus                | Addon | 3.29.0                         | 29.1.0-bb.7
                | SonarQube            | Addon | 8.9 (w/ p1 plugins)            | 9.2.6-bb.13
Updated: 1.15.0 | Anchore              | Addon | ENG: 0.10.0, ENT: 3.1.0        | 1.13.0-bb.6
Updated: 1.15.0 | Mattermost Operator  | Addon | 1.14.0                         | 1.14.0-bb.3
Updated: 1.15.0 | Mattermost           | Addon | 5.37.0                         | 0.1.8-bb.1
Updated: 1.15.0 | Velero               | Addon | 1.6.2                          | 2.23.5-bb.1
Updated: 1.15.0 | Keycloak             | Addon | 14.0.0                         | 11.0.1-bb.2

Changes in v1.15.0

BigBang

Istio Operator

Istio Controlplane

Kiali

OPA Gatekeeper

Monitoring

Elasticsearch Kibana

Cluster Auditor

Twistlock

Gitlab

Gitlab Runners

Mattermost Operator

Mattermost

MinIO Operator

MinIO

Authservice

Anchore

Argocd

Velero

Keycloak

Documentation

Known Issues

Helpful Links

As always, we welcome and appreciate feedback from our community of users. Please feel free to:

Future

Don't see your feature and/or bug fix? Check out our roadmap for estimates on when you can expect things to drop, and as always, feel free to comment or create issues if you have questions, comments, or concerns.

Edited by Branden Cobb