
Verified commit 61c55420 authored by Micah Nagel

Merge branch 'master' into renovate/flux-flux

parents 952b6e19 7f1eb1cd
Merge request !1335: chore(deps): update flux docker tags
Pipeline #696818 canceled
@@ -10,7 +10,7 @@ domain: {{ $domainName }}
# Define variables to help with conditionals later
{{- $istioInjection := (and (eq (dig "istio" "injection" "enabled" .Values.addons.sonarqube) "enabled") .Values.istio.enabled) }}
-openshift:
+OpenShift:
enabled: {{ .Values.openshift }}
istio:
......
@@ -56,4 +56,10 @@ istio:
vault:
gateways:
- istio-system/{{ default "public" .Values.addons.vault.ingress.gateway }}
minio:
{{- if .Values.istio.enabled }}
annotations:
{{ include "istioAnnotation" . }}
{{- end }}
{{- end -}}
@@ -339,7 +339,7 @@ kyverno:
git:
repo: https://repo1.dso.mil/platform-one/big-bang/apps/sandbox/kyverno.git
path: "./chart"
-tag: "2.2.0-bb.1"
+tag: "2.2.0-bb.2"
# -- Flux reconciliation overrides specifically for the Kyverno Package
flux: {}
@@ -414,7 +414,7 @@ fluentbit:
git:
repo: https://repo1.dso.mil/platform-one/big-bang/apps/core/fluentbit.git
path: "./chart"
-tag: "0.19.16-bb.5"
+tag: "0.19.19-bb.0"
# -- Flux reconciliation overrides specifically for the Fluent-Bit Package
flux: {}
@@ -661,7 +661,7 @@ addons:
git:
repo: https://repo1.dso.mil/platform-one/big-bang/apps/application-utilities/minio.git
path: "./chart"
-tag: "4.4.3-bb.2"
+tag: "4.4.3-bb.3"
# -- Flux reconciliation overrides specifically for the Minio Package
flux: {}
@@ -694,7 +694,7 @@ addons:
git:
repo: https://repo1.dso.mil/platform-one/big-bang/apps/developer-tools/gitlab.git
path: "./chart"
-tag: "5.6.2-bb.4"
+tag: "5.6.2-bb.5"
# -- Flux reconciliation overrides specifically for the Gitlab Package
flux: {}
@@ -792,7 +792,7 @@ addons:
git:
repo: https://repo1.dso.mil/platform-one/big-bang/apps/developer-tools/nexus.git
path: "./chart"
-tag: "36.0.0-bb.4"
+tag: "37.3.0-bb.1"
# -- Base64 encoded license file.
license_key: ""
@@ -857,7 +857,7 @@ addons:
git:
repo: https://repo1.dso.mil/platform-one/big-bang/apps/developer-tools/sonarqube.git
path: "./chart"
-tag: "9.6.3-bb.15"
+tag: "9.6.3-bb.16"
# -- Flux reconciliation overrides specifically for the Sonarqube Package
flux: {}
@@ -1044,7 +1044,7 @@ addons:
git:
repo: https://repo1.dso.mil/platform-one/big-bang/apps/collaboration-tools/mattermost.git
path: "./chart"
-tag: "0.4.0-bb.3"
+tag: "0.5.0-bb.0"
# -- Flux reconciliation overrides specifically for the Mattermost Package
flux: {}
@@ -1217,7 +1217,7 @@ addons:
git:
repo: https://repo1.dso.mil/platform-one/big-bang/apps/sandbox/vault.git
path: "./chart"
-tag: "0.18.0-bb.6"
+tag: "0.18.0-bb.7"
# -- Flux reconciliation overrides specifically for the Vault Package
flux: {}
......
docs/guides/backups-and-migrations/images/doom.png

183 KiB

docs/guides/backups-and-migrations/images/volume-snapshot.png

276 KiB

# Migrating a Nexus Repository using Velero
This guide demonstrates how to perform a migration of Nexus repositories and
artifacts between Kubernetes clusters.
# Table of Contents
1. [Prerequisites/Assumptions](#prerequisitesassumptions)
2. [Preparation](#preparation)
3. [Backing Up Nexus](#backing-up-nexus)
4. [Restoring From Backup](#restoring-from-backup)
5. [Appendix](#appendix)
<a name="prerequisitesassumptions"></a>
# Prerequisites/Assumptions
- K8s running in AWS
- Nexus PersistentVolume is using AWS EBS
- Migration is between clusters on the same AWS instance and availability zone (due to known Velero [limitations](https://velero.io/docs/v1.6/locations/#limitations--caveats))
- Migration occurs between K8s clusters with the same version
- Velero CLI [tool](https://github.com/vmware-tanzu/velero/releases) installed
- Crane CLI [tool](https://github.com/google/go-containerregistry) installed
<a name="preparation"></a>
# Preparation
1. Ensure the Velero addon in the Big Bang values file is properly configured, sample configuration below:
```yaml
addons:
velero:
enabled: true
plugins:
- aws
values:
serviceAccount:
server:
name: velero
configuration:
provider: aws
backupStorageLocation:
bucket: nexus-velero-backup
volumeSnapshotLocation:
provider: aws
config:
region: us-east-1
credentials:
useSecret: true
secretContents:
cloud: |
[default]
aws_access_key_id = <CHANGE ME>
aws_secret_access_key = <CHANGE ME>
```
2. Manually create an S3 bucket to store the backup configuration (in this example it is named `nexus-velero-backup`). The bucket name must match the `configuration.backupStorageLocation.bucket` key above
3. The `credentials.secretContents.cloud` credentials must have the necessary permissions to read and write to S3, EBS volumes, and volume snapshots
4. As a sanity check, review the Velero logs to confirm that the backup location (the S3 bucket) is valid; you should see a message like:
```
level=info msg="Backup storage location valid, marking as available" backup-storage-location=default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"
```
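Assuming Velero is deployed in the `velero` namespace with the default deployment name (an assumption; adjust for your cluster), this message can be located with:

```shell
# Search the Velero server logs for the storage-location validation message
# (assumes the deployment is named "velero" in the "velero" namespace)
kubectl logs deployment/velero -n velero | grep "Backup storage location valid"
```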
5. Ensure there are images/artifacts in Nexus. As an example, we will use the [Doom DOS image](https://earthly.dev/blog/dos-gaming-in-docker/) and a simple nginx image. Running `crane catalog nexus-docker.bigbang.dev` will list all of the repositories in Nexus:
```
repository/nexus-docker/doom-dos
repository/nexus-docker/nginx
```
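If the backup bucket from step 2 has not been created yet, a minimal AWS CLI sketch follows; the bucket name and region are from the sample configuration above and must match your own values:

```shell
# Create the S3 bucket referenced by configuration.backupStorageLocation.bucket;
# the region should match volumeSnapshotLocation.config.region
aws s3 mb s3://nexus-velero-backup --region us-east-1
```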
<a name="backing-up-nexus"></a>
# Backing Up Nexus
In the cluster containing the Nexus repositories to migrate, run the following command to create a backup called `nexus-ns-backup`. It backs up all resources in the `nexus-repository-manager` namespace, including the associated PersistentVolume:
`velero backup create nexus-ns-backup --include-namespaces nexus-repository-manager --include-cluster-resources=true`
Specifically, this backs up all Nexus resources to the S3 bucket specified in `configuration.backupStorageLocation.bucket` above and creates a volume snapshot of the Nexus EBS volume.
**Double-check** in AWS that this is the case by reviewing the contents of the S3 bucket:
`aws s3 ls s3://nexus-velero-backup --recursive --human-readable --summarize`
Expected output:
```
backups/nexus-ns-backup/nexus-ns-backup-csi-volumesnapshotcontents.json.gz
backups/nexus-ns-backup/nexus-ns-backup-csi-volumesnapshots.json.gz
backups/nexus-ns-backup/nexus-ns-backup-logs.gz
backups/nexus-ns-backup/nexus-ns-backup-podvolumebackups.json.gz
backups/nexus-ns-backup/nexus-ns-backup-resource-list.json.gz
backups/nexus-ns-backup/nexus-ns-backup-volumesnapshots.json.gz
backups/nexus-ns-backup/nexus-ns-backup.tar.gz
backups/nexus-ns-backup/velero-backup.json
```
Also ensure an EBS volume snapshot has been created and the Snapshot status is `Completed`.
![volume-snapshot](images/volume-snapshot.png)
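Backup progress and any errors can also be checked from the CLI. The snapshot listing below is a sketch; the exact tags Velero applies to snapshots can vary by plugin version, so this simply lists snapshots owned by your account:

```shell
# Inspect the backup; its status should eventually report "Completed"
velero backup describe nexus-ns-backup --details

# List EBS snapshots owned by this account; the Velero-created snapshot
# of the Nexus volume should appear with State "completed"
aws ec2 describe-snapshots --owner-ids self \
  --query 'Snapshots[].{Id:SnapshotId,State:State,Started:StartTime}' \
  --output table
```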
<a name="restoring-from-backup"></a>
# Restoring From Backup
1. In the new cluster, ensure that Nexus and Velero are running and healthy
* It is critical to ensure that Nexus has been included in the new cluster's Big Bang deployment, otherwise the restored Nexus configuration will not be managed by the Big Bang Helm chart.
2. If you are using the same `velero.values` from above, Velero should automatically be configured to use the same backup location as before. Verify this with `velero backup get`; you should see output like:
```
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
nexus-ns-backup Completed 0 0 2022-02-08 12:34:46 +0100 CET 29d default <none>
```
3. To perform the migration, Nexus must be shut down. In the Nexus Deployment, bring the `spec.replicas` down to `0`.
4. Ensure that the Nexus PVC and PV are also removed (**you may have to delete these manually!**) and that the corresponding Nexus EBS volume has been deleted.
* If you must remove the Nexus PV and PVC manually, delete the PVC first, which should cascade to the PV; then manually delete the underlying EBS volume (if it still exists)
5. Now that Nexus is down and the new cluster is configured to use the same backup location as the old one, perform the migration by running:
`velero restore create --from-backup nexus-ns-backup`
6. The Nexus PV and PVC should be recreated (verify before continuing!), but the pod will fail to start due to the previous change in the Nexus deployment spec. Change the Nexus deployment `spec.replicas` back to `1`. This will bring up the Nexus pod which should connect to the PVC and PV created during the Velero restore.
7. Once the Nexus pod is running and healthy, log in to Nexus and verify that the repositories have been restored
* The credentials to log in will have been restored from the Nexus backup, so they should match the credentials of the Nexus that was migrated (not the new installation!)
* It is recommended to log in to Nexus and download a sampling of images/artifacts to ensure they are working as expected.
For example, log in to Nexus using the migrated credentials:
`docker login -u admin -p admin nexus-docker.bigbang.dev/repository/nexus-docker`
Running `crane catalog nexus-docker.bigbang.dev` should show the same output as before:
```text
repository/nexus-docker/doom-dos
repository/nexus-docker/nginx
```
To ensure the integrity of the migrated image, we will pull and run the `doom-dos` image and defeat evil!
```
docker pull nexus-docker.bigbang.dev/repository/nexus-docker/doom-dos:latest && \
docker run -p 8000:8000 nexus-docker.bigbang.dev/repository/nexus-docker/doom-dos:latest
```
<img src="images/doom.png" alt="doom" width="750"/>
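For reference, steps 3 through 6 above can be sketched as a short shell sequence. The label selector, and the assumption that the namespace contains only the Nexus PVC, are illustrative; verify the actual Deployment and PVC names in your cluster before running anything like this:

```shell
NS=nexus-repository-manager

# Step 3: scale Nexus down before restoring (the label selector is an
# assumption; confirm with `kubectl get deploy -n $NS` first)
kubectl scale deployment -n "$NS" \
  -l app.kubernetes.io/name=nexus-repository-manager --replicas=0

# Step 4: delete the claim; this should cascade to the PV
# (assumes the Nexus PVC is the only claim in this namespace)
kubectl delete pvc -n "$NS" --all

# Step 5: restore from the Velero backup
velero restore create --from-backup nexus-ns-backup

# Step 6: confirm the PV/PVC were recreated, then scale Nexus back up
kubectl get pvc -n "$NS"
kubectl scale deployment -n "$NS" \
  -l app.kubernetes.io/name=nexus-repository-manager --replicas=1
```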
<a name="appendix"></a>
# Appendix
### Sample Nexus values:
```yaml
addons:
nexus:
enabled: true
values:
nexus:
docker:
enabled: true
registries:
- host: nexus-docker.bigbang.dev
port: 5000
```
\ No newline at end of file
@@ -33,7 +33,8 @@ source:
- registry1.dso.mil/ironbank/anchore/enterpriseui/enterpriseui:3.2.1
- registry1.dso.mil/ironbank/big-bang/base:8.4
- registry1.dso.mil/ironbank/big-bang/base:1.0.0
-- registry1.dso.mil/ironbank/gitlab/gitlab/kubectl:13.9.0
+- registry1.dso.mil/ironbank/gitlab/gitlab/kubectl:14.6.2
+- registry1.dso.mil/ironbank/gitlab/gitlab/gitlab-exporter:14.6.2
- registry1.dso.mil/ironbank/opensource/kubernetes-1.21/kubectl:v1.21.2
- registry1.dso.mil/ironbank/opensource/istio/install-cni:1.11.5
# NOTE: We use the velero AWS plugin in CI so it isn't listed here
@@ -52,6 +53,6 @@ source:
# - registry.il2.dso.mil/platform-one/devops/pipeline-templates/pipeline-job/dependency-check616-sonar-scanner45-dotnet-31:052421
# gitlab-runner-helper image: This image does not get captured from the release deployment
# the gitlab-runner-helper image only gets pulled when a pipeline runs. So it must be listed here
-- registry1.dso.mil/ironbank/gitlab/gitlab-runner/gitlab-runner-helper:v14.3.1
+- registry1.dso.mil/ironbank/gitlab/gitlab-runner/gitlab-runner-helper:v14.4.0
# Don't include until fortify is supported
#- registry.il2.dso.mil/platform-one/devops/pipeline-templates/pipeline-job/dotnet-fortify:20.2.0
@@ -140,6 +140,10 @@ gatekeeper:
- istio-system/lb-port-.*
# Allow argocd to deploy a test app in its cypress test
- argocd/guestbook-ui.*
allowedHostFilesystem:
parameters:
excludedResources:
- nexus-repository-manager/nexus-repository-manager-cypress-test
allowedSecCompProfiles:
parameters:
excludedResources:
@@ -185,6 +189,10 @@ gatekeeper:
excludedResources:
# Allows k3d load balancer containers to not have readiness/liveness probes
- istio-system/lb-port-.*
volumeTypes:
parameters:
excludedResources:
- nexus-repository-manager/nexus-repository-manager-cypress-test
bbtests:
enabled: true
......