UNCLASSIFIED - NO CUI

Commit 065ee9c0 authored by Michael McLeroy

fix: update hardcoded ingress gateways to new default

parent 5c5352c4
# Kubernetes Cluster Preconfiguration:
## Best Practices:
* A CNI (Container Network Interface) that supports Network Policies (which are basically firewalls for the inner cluster network). (Note: k3d, which is recommended for the quickstart demo, defaults to Flannel, which does not support network policies.)
* All Kubernetes nodes and the LB associated with the kube-apiserver should use private IPs.
* In most cases, user-application-facing LBs should have private IP addresses and be paired with a defense-in-depth ingress protection mechanism like [P1's CNAP](https://p1.dso.mil/#/products/cnap/), a CNAP equivalent (Advanced Edge Firewall), VPN, VDI, port forwarding through a bastion, or air gap deployment.
* CoreDNS in the kube-system namespace should be HA, with pod anti-affinity rules.
* Master Nodes should be HA and tainted.
* Consider using a licensed Kubernetes Distribution with a support contract.
* [A default storage class should exist](default_storageclass.md) to support dynamic provisioning of persistent volumes.
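To illustrate the first bullet: a Network Policy is an allowlist-style firewall rule for pod traffic, and it only has effect if the CNI enforces it. A minimal default-deny sketch (namespace name is illustrative) looks like this:

```yaml
# Illustrative default-deny policy for one namespace; it only takes
# effect if the cluster's CNI (e.g. Calico or Cilium) enforces
# NetworkPolicies — on Flannel this object is silently ignored.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app        # hypothetical namespace
spec:
  podSelector: {}          # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                # no ingress rules listed, so all ingress is denied
```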
## Service of Type Load Balancer:
BigBang's default configuration assumes the cluster you're deploying to supports dynamic load balancer provisioning. Specifically, Istio defaults to creating a Kubernetes Service of type LoadBalancer, which usually creates an endpoint exposed outside of the cluster that can direct traffic inside the cluster to the Istio ingress gateway.
How a Kubernetes Service of type LoadBalancer works depends on implementation details; there are many ways of getting it to work, and common methods are listed below:
* CSP API Method: (Recommended option for Cloud Deployments)
The Kubernetes control plane has a `--cloud-provider` flag that can be set to aws, azure, etc. If the Kubernetes master nodes have that flag set and the required CSP IAM rights, the control plane will auto-provision and configure CSP LBs. (Note: a vendor's Kubernetes distro automation may have IaC/CaC defaults that allow this to work turnkey, but if you have issues when provisioning LBs, consult the vendor's support for the recommended way of configuring automatic LB provisioning.)
* External LB Method: (Good for bare metal and 0 IAM rights scenarios)
You can override BigBang's helm values so Istio will provision a Service of type NodePort instead of type LoadBalancer. Instead of being randomly assigned from the default port range of 30000-32767, the NodePorts can be pinned to convention-based port numbers like 30080 & 30443. If you're in a restricted cloud environment or on bare metal, you can ask someone to provision a CSP LB where LB:443 would map to NodePort:30443 (on every worker node), etc.
* No LB, Network Routing Methods: (Good options for bare metal)
  * [MetalLB](https://metallb.universe.tf/)
  * [kubevip](https://kube-vip.io/)
  * [kube-router](https://www.kube-router.io)
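As a sketch of the External LB Method above, the resulting ingress gateway Service with pinned, convention-based NodePorts would look roughly like the following; the service name, selector, and targetPorts are illustrative assumptions, and in practice you'd set this through BigBang's Istio helm values rather than applying it directly:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: public-ingressgateway   # name assumed from BigBang defaults
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app: istio-ingressgateway   # illustrative selector
  ports:
  - name: http2
    port: 80
    targetPort: 8080
    nodePort: 30080             # pinned instead of random 30000-32767
  - name: https
    port: 443
    targetPort: 8443
    nodePort: 30443             # external LB maps 443 -> 30443 on every node
```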
## BigBang doesn't support PSPs (Pod Security Policies):
* [PSPs are being removed from Kubernetes and will be gone by version 1.25.x](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/issues/10)
* [Open Policy Agent Gatekeeper can enforce the same security controls as PSPs](https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy#pod-security-policies), and is a core component of BigBang, which operates as an elevated [validating admission controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to audit and enforce various [constraints](https://github.com/open-policy-agent/frameworks/tree/master/constraint) on all requests sent to the Kubernetes API server.
* We recommend users disable PSPs completely given they're being removed, we have a replacement, and PSPs can prevent OPA from deploying (and if OPA is not able to deploy, nothing else gets deployed).
* Different ways of Disabling PSPs:
* Edit the kube-apiserver's flags (methods for doing this vary per distro).
* ```bash
  kubectl patch psp system-unrestricted-psp -p '{"metadata": {"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames": "*"}}}'
  ```
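For example, the gatekeeper-library's `K8sPSPPrivilegedContainer` constraint template (one of the PSP-equivalent controls linked above) is enforced by creating a constraint like the following; the constraint's name is arbitrary:

```yaml
# Requires the K8sPSPPrivilegedContainer ConstraintTemplate from the
# gatekeeper-library to already be installed in the cluster.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: psp-privileged-container   # arbitrary constraint name
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]               # deny/audit privileged containers in Pods
```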
## Kubernetes Distribution Specific Notes
* Note: P1 has forks of various [Kubernetes Distribution Vendor Repos](https://repo1.dso.mil/platform-one/distros); there's nothing special about the P1 forks.
* We recommend you leverage the vendors' upstream docs in addition to any docs found in P1 repos; in fact, the vendors' upstream docs are far more likely to be up to date.
### VMWare Tanzu Kubernetes Grid:
[Prerequisites section of the VMware Kubernetes Distribution docs](https://repo1.dso.mil/platform-one/distros/vmware/tkg#prerequisites)
1) Deploy BigBang with the OpenShift toggle enabled, e.g. by setting `openshift: true` in the values or via the CLI:
```
helm install bigbang chart --set openshift=true
```
2) Patch the istio-cni daemonset to allow containers to run privileged (AFTER the istio-cni daemonset exists).
Note: attempts to apply this setting via modifications to the helm chart were unsuccessful; patching the live daemonset worked.
```
kubectl get daemonset istio-cni-node -n kube-system -o json | jq '.spec.template.spec.containers[] += {"securityContext":{"privileged":true}}' | kubectl replace -f -
```
3) Modify the OpenShift cluster(s) with the following scripts, based on https://istio.io/v1.7/docs/setup/platform-setup/openshift/
```
# Istio Openshift configurations Post Install
oc -n istio-system expose svc/public-ingressgateway --port=http2
oc adm policy add-scc-to-user privileged -z istio-cni -n kube-system
oc adm policy add-scc-to-group privileged system:serviceaccounts:logging
oc adm policy add-scc-to-group anyuid system:serviceaccounts:logging
oc -n monitoring create -f NetworkAttachmentDefinition.yaml
```
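The `NetworkAttachmentDefinition.yaml` referenced above is not included in this snippet; based on the Istio v1.7 OpenShift platform-setup docs that these steps follow, it is likely similar to this minimal manifest:

```yaml
# Allows the Multus CNI on OpenShift to invoke the istio-cni plugin
# for pods in the target namespace (sketch; verify against the Istio
# OpenShift platform-setup docs for your Istio version).
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
```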
### Konvoy
* [Prerequisites can be found here](https://repo1.dso.mil/platform-one/distros/d2iq/konvoy/konvoy/-/tree/master/docs/1.5.0#prerequisites)
* [Different Deployment Scenarios have been documented here](https://repo1.dso.mil/platform-one/distros/d2iq/konvoy/konvoy/-/tree/master/docs/1.4.4/install)
### RKE2
For example, if using the aws terraform modules provided [on repo1](https://repo1.dso.mil/platform-one/distros/rancher-federal/rke2/rke2-aws-terraform), setting the variable `enable_ccm = true` will ensure all the necessary resource tags are applied.
In the absence of an in-tree cloud provider (such as on-prem), the requirements can be met by ensuring a default storage class and automatic load balancer provisioning exist.
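For the on-prem case, the default storage class half of that requirement can be as simple as the following sketch; it assumes Rancher's local-path-provisioner is installed, but any dynamic provisioner works the same way:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    # PVCs that specify no storageClassName will bind to this class
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path   # assumes local-path-provisioner is deployed
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```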
```bash
#!/bin/bash
set -e
## Adds all the vs hostnames and LB IP to /etc/hosts
## Get the LB Hostname
INGRESS_LB_Hostname=$(kubectl get svc -n istio-system public-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
## Get IP address from Hostname
INGRESS_LB_IP=$(dig $INGRESS_LB_Hostname +search +short | head -1)
```
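The script above resolves the LB hostname to an IP so it can map every virtual-service hostname to that address in /etc/hosts. The line it ultimately builds can be sketched with a small helper; the function name `build_hosts_entry` is hypothetical, and the IP/hostnames below are example values:

```shell
#!/bin/bash
# Build the single /etc/hosts line that maps the ingress LB IP to every
# virtual-service hostname. Pure string formatting, so it can be checked
# without a cluster; build_hosts_entry is a hypothetical helper name.
build_hosts_entry() {
  local ip="$1"; shift
  printf '%s %s\n' "$ip" "$*"
}

# In the real script, the IP comes from dig and the hostnames from
# the cluster's Istio VirtualServices.
build_hosts_entry 10.0.0.5 kibana.bigbang.dev grafana.bigbang.dev
# -> 10.0.0.5 kibana.bigbang.dev grafana.bigbang.dev
```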
```bash
set -e
# Populate /etc/hosts
ip=$(kubectl -n istio-system get service public-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Checking "
```