# Default Storage Class Prerequisite
* Big Bang assumes the cluster you're deploying to supports [dynamic volume provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/).
* A Big Bang cluster should have 1 Storage Class (SC) annotated as the default SC.
* For production deployments, it is recommended to leverage a SC that supports the creation of ReadWriteMany [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) volumes, as a few Big Bang add-ons require a SC that supports ReadWriteMany for an HA application configuration.
## How Dynamic Volume Provisioning Works in a Nutshell
* StorageClass + PersistentVolumeClaim = Dynamically Created Persistent Volume
* A PersistentVolumeClaim that does not reference a specific SC will leverage the default SC, of which there should only be one, identified using Kubernetes annotations. Some Helm charts allow a SC to be explicitly specified so that multiple SCs can be used simultaneously.
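As an illustration, a PVC that sets no `storageClassName` at all will be dynamically provisioned from the default SC (the claim name and size below are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # hypothetical name
spec:
  # no storageClassName field, so the default StorageClass is used
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Applying this claim on a cluster with a default SC that supports dynamic provisioning results in a PersistentVolume being created and bound automatically.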
## How To Check What Storage Classes Are Installed on Your Cluster
* `kubectl get storageclass` can be used to see what storage classes are available on a cluster; the default will be marked accordingly.
**NOTE:** You can have multiple storage classes, but you should only have one default storage class.
```shell
kubectl get storageclass
```
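The default is designated with the `storageclass.kubernetes.io/is-default-class` annotation. A minimal sketch of what a default SC manifest can look like (the name and provisioner below are examples, not requirements):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-default-sc                              # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # marks this SC as the default
provisioner: ebs.csi.aws.com                            # example; use your cluster's provisioner
allowVolumeExpansion: true
```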
* AWS EBS Volumes have the following limitations:
* An EBS volume can only be attached to a single Kubernetes Node at a time, thus ReadWriteMany Access Mode isn't supported.
* An EBS PersistentVolume in Availability Zone (AZ) 1 cannot be mounted by a worker node in AZ2.
### AWS EFS Volumes
* An AWS EFS Storage Class can be installed according to the [vendor's docs](https://github.com/kubernetes-sigs/aws-efs-csi-driver#installation).
* AWS EFS Storage Class supports ReadWriteMany Access Mode.
* AWS EFS Persistent Volumes can be mounted by worker nodes in multiple AZs.
* AWS EFS is essentially Network File System (NFS) as a service. NFS drawbacks such as latency apply equally to EFS, so it is not a good fit for databases.
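As a sketch, a claim against EFS that requests ReadWriteMany might look like the following (this assumes the SC is named `efs-sc`, the name used in the vendor's installation docs; the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # hypothetical name
spec:
  storageClassName: efs-sc   # assumed name from the EFS CSI driver docs
  accessModes:
    - ReadWriteMany          # supported by EFS, unlike EBS
  resources:
    requests:
      storage: 5Gi           # required field; EFS capacity is elastic
```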
------------------------------------------------------
### Azure Disk Storage Class Notes
* The Kubernetes Docs offer an example [Azure Disk Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-disk)
* An Azure Disk can only be mounted with Access Mode ReadWriteOnce, which makes it available to one node in AKS.
* An Azure Disk PersistentVolume in AZ1 can be mounted by a worker node in AZ2, although some additional lag is involved in such transitions.
------------------------------------------------------
## Bare Metal/Cloud-Agnostic Storage Class Notes
* The Big Bang Product team put together a [Comparison Matrix of a few Cloud Agnostic Storage Class offerings](../developer/k8s-storage.md#kubernetes-storage-options)
**NOTE:** No storage class specific container images exist in IronBank at this time.
* Approved IronBank Images will show up in <https://registry1.dso.mil>.
* <https://repo1.dso.mil/dsop> can be used to check the status of IronBank images.
```shell
kubectl create secret docker-registry private-registry \
  --namespace flux-system
kubectl apply -k https://repo1.dso.mil/big-bang/bigbang.git//base/flux?ref=master
```
**NOTE:** You can replace `master` in the `kubectl apply -k` command above with the tag of the Big Bang release you need. For example:
```shell
kubectl apply -k https://repo1.dso.mil/big-bang/bigbang.git//base/flux?ref=2.14.0
```
## Advanced Installation
Clone the Big Bang repo and use the awesome installation [scripts](https://repo1.dso.mil/big-bang/bigbang/-/tree/master/scripts) directory.
```shell
git clone https://repo1.dso.mil/big-bang/bigbang.git
./bigbang/scripts/install_flux.sh
```
> **NOTE:** install_flux.sh requires arguments to run properly; calling it without arguments prints a friendly USAGE message with the required arguments needed to complete installation.
### Cluster API
**NOTE:** The repositories below contain some OS hardening and VM image build automation tools in addition to Cluster API.
* <https://repo1.dso.mil/platform-one/distros/clusterapi>
* <https://repo1.dso.mil/platform-one/distros/cluster-api/gov-image-builder>
### OpenShift
1. When deploying Big Bang, set the OpenShift flag to true.
```yaml
# inside a values.yaml being passed to the command installing bigbang
openshift: true
```
### RKE2
* RKE2 turns PSPs on by default (see above for tips on disabling).
* RKE2 sets SELinux to enforcing by default ([see os-preconfiguration.md for SELinux configuration](os-preconfiguration.md)).
Since Big Bang makes several assumptions about volume and load balancer provisioning by default, it's vital that the RKE2 cluster is properly configured. The easiest way to do this is through the in-tree cloud providers, which can be configured through the `rke2` configuration file such as:
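As a sketch only (the provider value is an assumption; consult the RKE2 docs for your environment), enabling an in-tree cloud provider via the RKE2 configuration file can look like:

```yaml
# /etc/rancher/rke2/config.yaml (illustrative)
cloud-provider-name: aws   # tells RKE2 to enable the in-tree AWS cloud provider
```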
# Minimum Hardware Requirements
To calculate the minimum Central Processing Unit (CPU), memory, and disk storage required to run Big Bang, open the [minimum hardware requirements Excel spreadsheet](./minimum-hardware-requirements.xlsx) and follow the instructions there. This will allow you to select exactly which packages and pods are enabled. In addition, it includes extra considerations to help you with sizing your cluster. The final values will be for the entire cluster and can be split between multiple nodes.
When running across multiple availability zones, keep in mind that some of your nodes may be down for zone maintenance and the remaining nodes need to be able to handle the CPU, memory, and disk space for your cluster.
## ECK Specific Configuration (ECK Is a Core BB App)
Elastic Cloud on Kubernetes (i.e., Elasticsearch Operator) deployed by Big Bang uses memory mapping by default. In most cases, the default address space is too low and must be configured.
To ensure unnecessary privileged escalation containers are not used, these kernel settings should be applied before Big Bang is deployed:
```shell
sudo sysctl -w vm.max_map_count=262144 #(ECK crash loops without this)
```
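To persist the setting across reboots, it can also be written to a sysctl drop-in file (the filename below is an assumption; any file under `/etc/sysctl.d/` works):

```
# /etc/sysctl.d/99-eck.conf  (hypothetical filename)
vm.max_map_count=262144
```

Reload with `sudo sysctl --system` after creating the file.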
* Depending on security requirements, it may be possible to set SELinux to permissive mode: `sudo setenforce 0`.
* Additional OS- and Kubernetes-specific configuration is required for Istio to work on systems with SELinux set to `Enforcing`.
By default, Big Bang will deploy Istio configured to use `istio-init` (read more [here](https://istio.io/latest/docs/setup/additional-setup/cni/)). To ensure Istio can properly initialize Envoy sidecars without container privileged escalation permissions, several system kernel modules must be pre-loaded before installing Big Bang:
```shell
modprobe xt_REDIRECT
```