# Gitlab Backups and Restores
## Gitlab Helm Chart Configuration
1. Follow the `Backup and rename gitlab-rails-secret` task within the [Production document](../../understanding-bigbang/configuration/sample-prod-config.md).
1. Fill in your externalStorage values, specifically `addons.gitlab.objectStorage.iamProfile` or both `.Values.addons.gitlab.objectStorage.accessKey` & `.Values.addons.gitlab.objectStorage.accessSecret`, along with `.Values.addons.gitlab.objectStorage.bucketPrefix`, or override the bucket name with your own (see the sketch below).
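A minimal sketch of these values is shown below; only the key paths named above come from this guide, and the placeholder values and comments are illustrative rather than authoritative defaults.
```yaml
addons:
  gitlab:
    objectStorage:
      # Option 1: use an IAM instance profile for bucket access
      iamProfile: ""
      # Option 2: use static credentials instead of an IAM profile
      accessKey: "<CHANGE ME>"
      accessSecret: "<CHANGE ME>"
      # Prefix used for the Gitlab object storage bucket names (e.g., backups)
      bucketPrefix: "<CHANGE ME>"
```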
## Backing up Gitlab
### Manual Steps
To perform a manual complete backup of Gitlab, exec into your Gitlab Toolbox pod and run the following:
1. Find your Gitlab Toolbox pod.
```shell
kubectl get pods -l release=gitlab,app=toolbox -n gitlab
kubectl exec -it gitlab-toolbox-XXXXXXXXX-XXXXX -n gitlab -- /bin/sh
```
1. Execute the backup-utility command, which will pull down data from the database, gitaly, and other portions of the ecosystem, tar it up, and push it to your configured cloud storage.
```shell
backup-utility --skip registry,lfs,artifacts,packages,uploads,pseudonymizer,terraformState,backups
```
You can read more in the upstream documentation: https://docs.gitlab.com/charts/backup-restore/backup.html#create-the-backup.
### Automatic Cron-based Backups
It is recommended to set up automatic backups via the Gitlab Toolbox's cron settings.
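A sketch of what those cron settings can look like is shown below, assuming the upstream chart's `gitlab.toolbox.backups.cron` keys are passed through `addons.gitlab.values`; the schedule and extra arguments are illustrative only.
```yaml
addons:
  gitlab:
    values:
      gitlab:
        toolbox:
          backups:
            cron:
              enabled: true
              # Nightly backup at 01:00; adjust the schedule to your needs
              schedule: "0 1 * * *"
              # Skip object-storage-backed data, matching the manual command above
              extraArgs: "--skip registry,lfs,artifacts,packages,uploads,pseudonymizer,terraformState,backups"
```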
You can read more in the upstream documentation: https://docs.gitlab.com/charts/charts/gitlab/toolbox/#configuration
## Restore Gitlab
1. Ensure your gitlab-rails secret is present in GitOps or in-cluster, and that it correctly matches the database the chart points to.
* If you need to replace or update your rails secret, be sure to restart the pods that consume it once it is updated, for example:
```shell
kubectl rollout -n gitlab restart deploy/gitlab-toolbox
```
2. Exec into the toolbox pod and run the backup-utility command:
1. Find your Gitlab Toolbox pod.
```shell
kubectl get pods -l release=gitlab,app=toolbox -n gitlab
kubectl exec -it gitlab-toolbox-XXXXXXXXX-XXXXX -n gitlab -- /bin/sh
# Restore using the timestamp prefix of the backup tar in your backups bucket
backup-utility --restore -t TIMESTAMP_VALUE
```
You can read more in the upstream documentation: https://docs.gitlab.com/charts/backup-restore/restore.html#restoring-the-backup-file.
# Nexus Repository Migration
This guide demonstrates how to perform a migration of Nexus repositories and artifacts between Kubernetes clusters.
## Prerequisites/Assumptions
* K8s running in AWS
* Nexus PersistentVolume is using AWS EBS
* Migration is between clusters on the same AWS instance and availability zone (due to known Velero [limitations](https://velero.io/docs/v1.6/locations/#limitations--caveats))
* Migration occurs between K8s clusters with the same version
* Velero CLI [tool](https://github.com/vmware-tanzu/velero/releases)
* Crane CLI [tool](https://github.com/google/go-containerregistry)
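To confirm the CLI prerequisites are available locally, a quick check might look like the following, assuming both binaries are already on your PATH.
```shell
# Confirm the Velero CLI is installed (client version only; no cluster connection required)
velero version --client-only

# Confirm the Crane CLI is installed
crane version
```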
## Preparation
1. Ensure the Velero addon in the Big Bang values file is properly configured; a sample configuration is sketched below.
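This sketch assumes the `addons.velero` values are passed through to the upstream Velero chart; the bucket name comes from this guide, while the provider, region, and credentials shown here are placeholders to adjust for your environment.
```yaml
addons:
  velero:
    enabled: true
    plugins:
      - aws
    values:
      configuration:
        provider: aws
        backupStorageLocation:
          bucket: nexus-velero-backup
          config:
            region: us-east-1  # CHANGE ME
      credentials:
        secretContents:
          cloud: |
            [default]
            aws_access_key_id = <CHANGE ME>
            aws_secret_access_key = <CHANGE ME>
```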
1. Manually create an S3 bucket in which the backup configuration will be stored (in this case it is named `nexus-velero-backup`); this should match the `configuration.backupStorageLocation.bucket` key above.
1. The `credentials.secretContents.cloud` credentials should have the necessary permissions to read/write to S3, volumes, and volume snapshots.
1. As a sanity check, take a look at the Velero logs to make sure the backup location (S3 bucket) is valid; you should see something similar to the following:
```plaintext
level=info msg="Backup storage location valid, marking as available" backup-storage-location=default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"
```
Also ensure an EBS volume snapshot has been created and the Snapshot status is `completed`.
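The backup that produces this volume snapshot can be created with the Velero CLI. A minimal sketch is shown below, assuming Nexus runs in the `nexus-repository-manager` namespace (adjust for your install) and using the `nexus-ns-backup` name referenced in the restore steps that follow.
```shell
# Back up the Nexus namespace, including snapshots of its EBS-backed volumes
velero backup create nexus-ns-backup --include-namespaces nexus-repository-manager --snapshot-volumes

# Watch the backup until its phase reports Completed
velero backup describe nexus-ns-backup
```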
## Restoring From Backup
1. In the new cluster, ensure that Nexus and Velero are running and healthy.
- It is critical to ensure that Nexus has been included in the new cluster's Big Bang deployment; otherwise, the restored Nexus configuration will not be managed by the Big Bang Helm chart.
1. If you are using the same `velero.values` from above, Velero should automatically be configured to use the same backup location as before. Verify this with `velero backup get`; you should see output similar to the following:
```console
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
```
1. To perform the migration, Nexus must be shut down. In the Nexus Deployment, bring the `spec.replicas` down to `0`.
1. Ensure that the Nexus PVC and PV are also removed (**you may have to delete these manually!**), and that the corresponding Nexus EBS volume has been deleted.
- If you have to remove the Nexus PV and PVC manually, delete the PVC first, which should cascade to the PV. Then, manually delete the underlying EBS volume (if it still exists).
1. Now that Nexus is down and the new cluster is configured to use the same backup location as the old one, perform the migration by running:
`velero restore create --from-backup nexus-ns-backup`
1. The Nexus PV and PVC should be recreated (**NOTE:** verify this before continuing!), but the pod will fail to start due to the previous change in the Nexus deployment spec. Change the Nexus deployment `spec.replicas` back to `1`. This will bring up the Nexus pod, which should connect to the PVC and PV created during the Velero restore.
1. Once the Nexus pod is running and healthy, log in to Nexus and verify that the repositories have been restored.
- The credentials to log in will have been restored from the Nexus backup, so they should match the credentials of the Nexus that was migrated (not the new installation!).
- It is recommended to log in to Nexus and download a sampling of images/artifacts to ensure they are working as expected.
For example, log in to Nexus using the migrated credentials:
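One way to do this is with the Crane CLI listed in the prerequisites; the registry hostname, repository, and tag below are placeholders for your environment.
```shell
# Authenticate against the migrated Nexus container registry (hostname is a placeholder)
crane auth login containers.example.mil -u admin -p <MIGRATED PASSWORD>

# Pull a known image to confirm artifacts survived the migration (repository and tag are placeholders)
crane pull containers.example.mil/myteam/myimage:1.0.0 myimage.tar
```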