UNCLASSIFIED - NO CUI

Commit 899842f0 authored by Gedd Johnson's avatar Gedd Johnson
adds documentation for nexus migrations

# Migrating a Nexus Repository using Velero
This guide demonstrates how to perform a migration of Nexus repositories and
artifacts between Kubernetes clusters.
# Table of Contents
1. [Prerequisites/Assumptions](#prerequisitesassumptions)
2. [Preparation](#preparation)
3. [Backing Up Nexus](#backing-up-nexus)
4. [Restoring From Backup](#restoring-from-backup)
5. [Appendix](#appendix)
<a name="prerequisitesassumptions"></a>
# Prerequisites/Assumptions
- K8s running in AWS
- Nexus PersistentVolume is using AWS EBS
- Migration is between clusters in the same AWS region and availability zone (due to known Velero [limitations](https://velero.io/docs/v1.6/locations/#limitations--caveats))
- Migration occurs between K8s clusters running the same Kubernetes version
- Velero CLI [tool](https://github.com/vmware-tanzu/velero/releases)
- Crane CLI [tool](https://github.com/google/go-containerregistry)
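Before proceeding, you can confirm both CLI tools are installed and on your `PATH`:

```shell
# Print the Velero client version without contacting a cluster
velero version --client-only

# Print the crane version (crane ships with go-containerregistry)
crane version
```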
<a name="preparation"></a>
# Preparation
1. Ensure the Velero addon in the Big Bang values file is properly configured, sample configuration below:
```yaml
addons:
  velero:
    enabled: true
    plugins:
      - aws
    values:
      serviceAccount:
        server:
          name: velero
      configuration:
        provider: aws
        backupStorageLocation:
          bucket: nexus-velero-backup
        volumeSnapshotLocation:
          provider: aws
          config:
            region: us-east-1
      credentials:
        useSecret: true
        secretContents:
          cloud: |
            [default]
            aws_access_key_id = <CHANGE ME>
            aws_secret_access_key = <CHANGE ME>
```
2. Manually create an S3 bucket to store the backup configuration (in this case named `nexus-velero-backup`); the name must match the `configuration.backupStorageLocation.bucket` key above
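The bucket can be created with the AWS CLI; the bucket name and region here match the sample Velero values above:

```shell
# Create the backup bucket in the same region configured for Velero
aws s3 mb s3://nexus-velero-backup --region us-east-1
```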
3. The credentials in `credentials.secretContents.cloud` must have the necessary permissions to read and write to S3, EBS volumes, and volume snapshots
4. As a sanity check, review the Velero logs to confirm the backup location (the S3 bucket) is valid; you should see something like:
```
level=info msg="Backup storage location valid, marking as available" backup-storage-location=default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"
```
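Assuming Velero was deployed into the default `velero` namespace, the logs can be checked with:

```shell
# Tail the Velero server logs and look for the storage-location check
kubectl logs -n velero deployment/velero | grep "Backup storage location valid"
```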
5. Ensure there are images/artifacts in Nexus. As an example, we will use the [Doom DOS image](https://earthly.dev/blog/dos-gaming-in-docker/) and a simple nginx image. Running `crane catalog nexus-docker.bigbang.dev` will list all of the artifacts and images in Nexus:
```
repository/nexus-docker/doom-dos
repository/nexus-docker/nginx
```
<a name="backing-up-nexus"></a>
# Backing Up Nexus
In the cluster containing the Nexus repositories to migrate, run the following command to create a backup called `nexus-ns-backup` that backs up all resources in the `nexus-repository-manager` namespace, including the associated PersistentVolume:
`velero backup create nexus-ns-backup --include-namespaces nexus-repository-manager --include-cluster-resources=true`
Specifically, this backs up all Nexus resources to the S3 bucket specified in `configuration.backupStorageLocation.bucket` above and creates a volume snapshot of the Nexus EBS volume.
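Backup progress and any errors can be inspected with the Velero CLI before checking AWS directly:

```shell
# Watch the backup until its phase reaches Completed
velero backup describe nexus-ns-backup --details

# Review the backup logs if anything fails
velero backup logs nexus-ns-backup
```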
**Double-check** AWS to make sure this is the case by reviewing the contents of the S3 bucket:
`aws s3 ls s3://nexus-velero-backup --recursive --human-readable --summarize`
Expected output:
```
backups/nexus-ns-backup/nexus-ns-backup-csi-volumesnapshotcontents.json.gz
backups/nexus-ns-backup/nexus-ns-backup-csi-volumesnapshots.json.gz
backups/nexus-ns-backup/nexus-ns-backup-logs.gz
backups/nexus-ns-backup/nexus-ns-backup-podvolumebackups.json.gz
backups/nexus-ns-backup/nexus-ns-backup-resource-list.json.gz
backups/nexus-ns-backup/nexus-ns-backup-volumesnapshots.json.gz
backups/nexus-ns-backup/nexus-ns-backup.tar.gz
backups/nexus-ns-backup/velero-backup.json
```
Also ensure an EBS volume snapshot has been created and the Snapshot status is `Completed`.
![volume-snapshot](images/volume-snapshot.png)
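One way to confirm this from the CLI is to list your account's snapshots and check their state (adjust the region to match your configuration):

```shell
# List snapshots owned by this account; State should be "completed"
aws ec2 describe-snapshots --owner-ids self --region us-east-1 \
  --query 'Snapshots[].{Id:SnapshotId,State:State,Started:StartTime}'
```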
<a name="restoring-from-backup"></a>
# Restoring From Backup
1. In the new cluster, ensure that Nexus and Velero are running and healthy
* It is critical to ensure that Nexus has been included in the new cluster's Big Bang deployment, otherwise the restored Nexus configuration will not be managed by the Big Bang Helm chart.
2. If you are using the same `velero.values` from above, Velero should automatically be configured to use the same backup location as before. Verify this with `velero backup get`; you should see output that looks like:
```
NAME              STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
nexus-ns-backup   Completed   0        0          2022-02-08 12:34:46 +0100 CET   29d       default            <none>
```
3. To perform the migration, Nexus must be shut down. In the Nexus Deployment, bring the `spec.replicas` down to `0`.
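Scaling down can be done with `kubectl`; the deployment and namespace names below are illustrative and may differ in your cluster:

```shell
# Scale the Nexus deployment to zero replicas before restoring
kubectl scale deployment nexus-repository-manager \
  --replicas=0 -n nexus-repository-manager
```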
4. Ensure that the Nexus PVC and PV are also removed (**you may have to delete these manually!**), and that the corresponding Nexus EBS volume has been deleted.
* If you have to remove the Nexus PV and PVC manually, delete the PVC first, which should cascade to the PV; then, manually delete the underlying EBS volume (if it still exists)
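A sketch of the manual cleanup, assuming illustrative resource names (verify the actual PVC/PV names with `kubectl get pvc -n nexus-repository-manager` first):

```shell
# Delete the claim first; the bound PV should follow if its reclaim policy is Delete
kubectl delete pvc <nexus-pvc-name> -n nexus-repository-manager

# If the PV remains (e.g. reclaim policy Retain), delete it explicitly
kubectl get pv | grep nexus
kubectl delete pv <pv-name>

# Finally, remove the orphaned EBS volume if it still exists
aws ec2 delete-volume --volume-id <volume-id> --region us-east-1
```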
5. Now that Nexus is down and the new cluster is configured to use the same backup location as the old one, perform the migration by running:
`velero restore create --from-backup nexus-ns-backup`
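Restore progress can be monitored with the Velero CLI (the restore is auto-named after the backup with a timestamp suffix):

```shell
# Check restore status; STATUS should reach Completed
velero restore get

# Inspect details and warnings for a specific restore
velero restore describe nexus-ns-backup-<timestamp> --details
```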
6. The Nexus PV and PVC should be recreated (verify before continuing!), but no Nexus pod will start because the Deployment was previously scaled down. Change the Nexus deployment `spec.replicas` back to `1`. This will bring up the Nexus pod, which should connect to the PVC and PV created during the Velero restore.
7. Once the Nexus pod is running and healthy, log in to Nexus and verify that the repositories have been restored
* The credentials to log in will have been restored from the Nexus backup, so they should match the credentials of the Nexus that was migrated (not the new installation!)
* It is recommended to log in to Nexus and download a sampling of images/artifacts to ensure they are working as expected.
For example, log in to Nexus using the migrated credentials:
`docker login -u admin -p admin nexus-docker.bigbang.dev/repository/nexus-docker`
Running `crane catalog nexus-docker.bigbang.dev` should show the same output as before:
```text
repository/nexus-docker/doom-dos
repository/nexus-docker/nginx
```
To ensure the integrity of the migrated image, we will pull and run the `doom-dos` image and defeat evil!
```
docker pull nexus-docker.bigbang.dev/repository/nexus-docker/doom-dos:latest && \
docker run -p 8000:8000 nexus-docker.bigbang.dev/repository/nexus-docker/doom-dos:latest
```
<img src="images/doom.png" alt="doom" width="750"/>
<a name="appendix"></a>
# Appendix
### Sample Nexus values:
```yaml
addons:
  nexus:
    enabled: true
    values:
      nexus:
        docker:
          enabled: true
          registries:
            - host: nexus-docker.bigbang.dev
              port: 5000
```