UNCLASSIFIED

Commits (2)
{
"cSpell.words": [
"Autoscale",
"Autoscaled",
"CNAME",
"CODEOWNER",
"Hashicorp",
"Istio's",
"Kubeconfig",
"Kubectl",
"Kustomization",
"Kustomize",
"MYIP",
"Quickstart",
"RHEL",
"STIG",
"SonarQube",
"Twistlock",
"alertmanager",
"bigbang",
"bigbangkey",
"codeowners",
"configmap",
"configmaps",
"decryptable",
"fluxcd",
"forkable",
"grafana",
"kibana",
"kubernetes",
"kustomizations",
"nodepool",
"rebased",
"sshuttle",
"storageclass",
"terragrunt",
"tfstate",
"uncommenting",
"updatekeys",
"xlarge",
"yamldecode"
]
}
\ No newline at end of file
@@ -2,6 +2,21 @@
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.2.2]
### Added
- Added cSpell workspace configuration.
- Added Table of Contents.
### Changed
- Formatted Terraform configurations to canonical format.
- Updated CONTRIBUTING to reflect forking process.
- Updated link to Kubernetes cluster prerequisites.
- Updated spelling and markdown formatting.
- Updated CODEOWNERS.
## [1.2.1]
### Changed
...
* @michaelmcleroy @jasonkrause @cmcgrath @rkernick
\ No newline at end of file
@@ -12,7 +12,10 @@ Development requires the Kubernetes CLI tool as well as a Kubernetes cluster. [k
To contribute a change:
1. Create a fork of the existing repository
1. On the project’s home page, in the top right, click Fork.
1. Below *Select a namespace to fork the project*, select the namespace you want to fork to, and click Select. Only namespaces where you have the Developer role or higher are shown.
1. GitLab creates your fork, and redirects you to the project page for your new fork. The permissions you have in the namespace are your permissions in the fork.
1. Make the changes in code.
1. Test by deploying [Big Bang](https://repo1.dsop.io/platform-one/big-bang/bigbang) to your Kubernetes cluster.
1. Make commits using the [Conventional Commits](https://www.conventionalcommits.org/) format. This helps with changelog automation. Update `CHANGELOG.md` in the same commit using the [Keep a Changelog](https://keepachangelog.com) format. Depending on tooling maturity, this step may be automated.
...
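The commit-format step above can be checked mechanically before pushing; a minimal sketch (the type list and pattern are a simplified illustration, not the full Conventional Commits grammar):

```shell
# Check a commit subject against a simplified Conventional Commits pattern:
#   type(optional-scope): description
subject="fix(docs): correct prerequisite links"
if echo "$subject" | grep -Eq '^(feat|fix|chore|docs|refactor|perf|test)(\([a-z0-9-]+\))?: .+'; then
  echo "subject follows Conventional Commits"
else
  echo "subject does not follow Conventional Commits"
fi
```

A check like this could be wired into a `commit-msg` Git hook so malformed subjects are rejected locally.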
# BigBang Template
> _This is a mirror of a government repo hosted on [Repo1](https://repo1.dso.mil/) by [DoD Platform One](http://p1.dso.mil/). Please direct all code changes, issues and comments to <https://repo1.dso.mil/platform-one/big-bang/customers/template>_
[[_TOC_]]
This folder contains a template that you can replicate in your own Git repo to get started with Big Bang configuration. If you are new to Big Bang, it is recommended you start with the [Big Bang Quickstart](https://repo1.dso.mil/platform-one/quick-start/big-bang) before attempting customization.
@@ -9,7 +11,7 @@ The main benefits of this template include:
- Isolation of the Big Bang product and your custom configuration
- Allows you to easily consume upstream Big Bang changes since you never change the product
- Big Bang product tags are explicitly referenced in your configuration, giving you control over upgrades
- [GitOps](https://www.weave.works/technologies/gitops/) for your deployment configurations
- Single source of truth for the configurations deployed
- Historical tracking of changes made
- Allows tighter control of what is deployed to production (via merge requests)
@@ -21,11 +23,11 @@ The main benefits of this template include:
- Secrets (e.g. pull credentials) can be shared across deployments.
> NOTE: SOPS [supports multiple keys for encrypting the same secret](https://dev.to/stack-labs/manage-your-secrets-in-git-with-sops-common-operations-118g) so that each environment can use a different SOPS key but share a secret.
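The multiple-key note above maps to the `pgp` field in `.sops.yaml`, which accepts a comma-separated list of fingerprints; a sketch (the fingerprints and path pattern are placeholders, not values from this repo):

```yaml
# Hypothetical .sops.yaml: either key can decrypt secrets under base/,
# so dev and prod environments can share one encrypted file.
creation_rules:
  - path_regex: base/.*\.enc\.yaml$
    pgp: >-
      AAAA1111BBBB2222CCCC3333DDDD4444EEEE5555,
      FFFF6666AAAA7777BBBB8888CCCC9999DDDD0000
```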
## Prerequisites
To deploy Big Bang, the following items are required:
- Kubernetes cluster [ready for Big Bang](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/tree/master/docs/guides/prerequisites)
- A git repo for your configuration
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [GPG (Mac users need to read this important note)](https://repo1.dso.mil/platform-one/onboarding/big-bang/engineering-cohort/-/blob/master/lab_guides/01-Preflight-Access-Checks/A-software-check.md#gpg)
@@ -64,7 +66,7 @@ git checkout -b template-demo
### Create GPG Encryption Key
To make sure your pull secrets are not compromised when uploaded to Git, you must generate your own encryption key:
> Keys should be created without a passphrase so that Flux can use the private key to decrypt secrets in the Big Bang cluster.
@@ -101,7 +103,7 @@ The `base/configmap.yaml` is set up to use the domain `bigbang.dev` by default.
```shell
cd base
# Encrypt the existing certificate
sops -e bigbang-dev-cert.yaml > secrets.enc.yaml
# Save encrypted TLS certificate into Git
@@ -149,7 +151,7 @@ git commit -m "chore: added iron bank pull credentials"
git push
```
> Your private key to decrypt these secrets is stored in your GPG key ring. You must **NEVER** export this key and commit it to your Git repository since this would compromise your secrets.
### Configure for GitOps
@@ -245,7 +247,7 @@ Big Bang follows a [GitOps](https://www.weave.works/blog/what-is-gitops-really)
# Verify secrets and configmaps are deployed
# At a minimum, you will have the following:
# secrets: sops-gpg, private-git, common-bb, and environment-bb
# configmaps: common, environment
kubectl get -n bigbang secrets,configmaps
# Watch deployment
@@ -257,7 +259,7 @@ Big Bang follows a [GitOps](https://www.weave.works/blog/what-is-gitops-really)
```
> If you cannot get to the main page of Kiali, it may be due to an expired certificate. Check the expiration of the certificate in `base/configmap.yaml`.
>
> For troubleshooting deployment problems, refer to the [Big Bang](https://repo1.dsop.io/platform-one/big-bang/bigbang) documentation.
You now have successfully deployed Big Bang. Your next step is to customize the configuration.
@@ -284,7 +286,7 @@ You now have successfully deployed Big Bang. Your next step is to customize the
1. Big Bang will automatically pick up your change and make the necessary changes.
```shell
# Watch deployment for twistlock to be deployed
watch kubectl get hr,po -A
# Test deployment by opening a browser to "twistlock.bigbang.dev" to get to the Twistlock application
@@ -383,7 +385,7 @@ For additional configuration options, refer to the [Big Bang](https://repo1.dsop
### Additional resources
Using Kustomize, you can add additional resources to the deployment if needed. Read the [Kustomization](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/) documentation for further details.
## Secrets
@@ -405,7 +407,7 @@ You will need to update `.sops.yaml` with your configuration based on the links
If you need to [rotate your GPG encryption keys](#create-gpg-encryption-key) for any reason, you will also need to re-encrypt any encrypted secrets.
1. Update `.sops.yaml` configuration file
`.sops.yaml` holds all of the key fingerprints used for SOPS. Update `pgp`'s value to the new key's fingerprint. You can list your locally stored fingerprints using `gpg -k`.
```yaml
creation_rules:
@@ -472,7 +474,7 @@ In our template, we have a `dev` and a `prod` environment with a shared `base`.
- Shared Iron Bank pull credential
- Different database passwords for `dev` and `prod`
- Different SOPS keys for `dev` and `prod`
1. Set up `.sops.yaml` for multiple folders:
@@ -547,6 +549,6 @@ To start, we may have the following in each folder:
Big Bang `dev` value changes can be made by simply modifying `dev/configmap.yaml`. `base` and `dev` create two separate configmaps, named `common` and `environment` respectively, with the `environment` values taking precedence over `common` values in Big Bang.
The same concept applies to `dev` secret changes, with two separate secrets named `common-bb` and `environment-bb` used for values to Big Bang, with the `environment-bb` values taking precedence over the `common-bb` values in Big Bang.
If a new resource must be deployed, for example a TLS cert, you must add a `resources:` section to the `kustomization.yaml` to refer to the new file. See the base directory for an example.
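As a concrete illustration of the `resources:` addition described above, an environment overlay's `kustomization.yaml` might look like this (the file name `tls-cert.yaml` is illustrative, not a file in this repo):

```yaml
# Hypothetical dev/kustomization.yaml: pulls in the shared base and adds a
# TLS certificate manifest as an extra resource.
resources:
  - ../base       # shared configuration
  - tls-cert.yaml # new manifest to deploy alongside the base
```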
# Big Bang Infrastructure as Code (IaC)
> _This is a mirror of a government repo hosted on [Repo1](https://repo1.dso.mil/) by [DoD Platform One](http://p1.dso.mil/). Please direct all code changes, issues and comments to <https://repo1.dso.mil/platform-one/big-bang/customers/template>_
[[_TOC_]]
The terraform/terragrunt code in this directory will set up all the infrastructure for a Big Bang deployment in Amazon Web Services (AWS). It starts from scratch with a new VPC and finishes by deploying a multi-node [RKE2 Cluster](https://docs.rke2.io/). The infrastructure and cluster provisioned can then be used to deploy Big Bang.
> This code is intended to be a forkable starting point / example for users to get their infrastructure set up quickly. It is up to the users to further customize and secure the infrastructure for the intended use.
## Layout
@@ -15,7 +17,7 @@ terraform
└── main # Shared terraform code
└── us-gov-west-1 # Terragrunt code for a specific AWS region
    ├── region.yaml # Regional configuration
    └── prod # Terragrunt code for a specific environment (e.g. prod, stage, dev)
        └── env.yaml # Environment specific configuration
```
@@ -35,7 +37,7 @@ terraform
- Validate your configuration
```shell
cd ./terraform/us-gov-west-1/prod
terragrunt run-all validate
# Successful output: Success! The configuration is valid.
@@ -43,7 +45,7 @@ terraform
- Run the deployment
```shell
# Initialize
terragrunt run-all init
@@ -57,7 +59,7 @@ terraform
- Connect to cluster
```shell
# Set your cluster name (same as `name` in `env.yaml`)
export CNAME="bigbang-dev"
@@ -100,17 +102,17 @@ Prior to deploying Big Bang, you should setup the following in the Kubernetes cl
### Storage Class
By default, Big Bang will use the cluster's default `StorageClass` to dynamically provision the required persistent volumes. This means the cluster must be able to dynamically provision persistent volume claims (PVCs). Since we're on AWS, the simplest method is to use the [AWS EBS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) Storage Class using Kubernetes' in-tree [AWS cloud provider](https://kubernetes.io/docs/concepts/storage/storage-classes/#aws-ebs).
> Without a default storage class, some Big Bang components, like Elasticsearch, Jaeger, or Twistlock, will never reach the running state.
```shell
kubectl apply -f ./terraform/storageclass/ebs-gp2-storage-class.yaml
```
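The manifest applied above is not reproduced in this excerpt; a default gp2 `StorageClass` using the in-tree provisioner presumably resembles the following sketch (the name `ebs` matches the later `kubectl patch` command, but the exact file contents are an assumption):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs
  annotations:
    # Mark as the cluster default so Big Bang PVCs bind without naming a class
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
```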
If you have an alternative storage class, you can run the following to replace the EBS GP2 one provided.
```shell
kubectl patch storageclass ebs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# Install your storage of choice, for example...
@@ -125,7 +127,7 @@ kubectl patch storageclass <name of your storage class> -p '{"metadata": {"annot
To ensure ingress into the cluster, load balancers must be configured with the proper port mappings for `istio`. The simplest method is the default scenario, where the cluster (`rke2` in this example) is running the appropriate cloud provider capable of dynamically provisioning load balancers when requesting `Services` of `type: LoadBalancer`. This is the default configuration in BigBang, and if you choose to continue this way, you can skip the following steps.
However, for brevity in this example, we are introducing an alternative, where the load balancer is pre-provisioned and owned by terraform (from your earlier apply step). This provides more control over the load balancer, but also requires the extra step of informing `istio` on installation of the required ports to expose on each node that the pre-created load balancer should forward to. It's important to note these are the _exact_ same steps that the cloud provider would take if we let Kubernetes provision things for us.
The following configuration in Big Bang's values.yaml will set up the appropriate `NodePorts` to match the [Quickstart](#quickstart) configuration.
@@ -280,11 +282,11 @@ end
## Debug
After Big Bang deployment, if you wish to access your deployed web applications that are not exposed publicly, add an entry to your /etc/hosts to point the host name to the elastic load balancer.
> This bypasses load balancing since you are using the resolved IP address of one of the connected nodes in the pool
```shell
# Set cluster name from env.yaml
export CName="bigbang-dev"
@@ -298,7 +300,7 @@ export LBDNS=`aws elb describe-load-balancers --query "LoadBalancerDescriptions[
# Retrieve IP address of load balancer for /etc/hosts
export ELBIP=`dig $LBDNS +short | head -1`
# Now add the hostname of the web application into /etc/hosts (or `C:\Windows\System32\drivers\etc\hosts` on Windows)
# You may need to log out and back in for the hosts entry to take effect
printf "\nAdd the following line to /etc/hosts to alias Big Bang core products:\n${ELBIP} twistlock.bigbang.dev kibana.bigbang.dev prometheus.bigbang.dev grafana.bigbang.dev tracing.bigbang.dev kiali.bigbang.dev alertmanager.bigbang.dev\n\n"
```
@@ -320,7 +322,7 @@ terragrunt run-all destroy
## Optional Terraform
Depending on your needs, you may want to deploy additional infrastructure, such as Key Stores, S3 Buckets, or Databases, that can be used with your deployment. In the [options](./options) directory, you will find terraform / terragrunt snippets that can assist you in deploying these items.
> These examples may require updates to be compatible with the [Quickstart](#quickstart)
...
@@ -7,14 +7,14 @@
resource "aws_security_group" "bastion_sg" {
  name_prefix = "${var.name}-bastion-"
  description = "${var.name} bastion"
  vpc_id      = var.vpc_id

  # Allow all egress
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = var.tags
@@ -23,37 +23,37 @@ resource "aws_security_group" "bastion_sg" {
# Bastion Launch Template
resource "aws_launch_template" "bastion" {
  name_prefix   = "${var.name}-bastion-"
  description   = "Bastion launch template for ${var.name} cluster"
  image_id      = var.ami
  instance_type = var.instance_type
  key_name      = var.key_name

  network_interfaces {
    associate_public_ip_address = true
    security_groups             = [aws_security_group.bastion_sg.id]
  }

  update_default_version = true
  user_data              = filebase64("${path.module}/dependencies/install_python.sh")

  tag_specifications {
    resource_type = "instance"
    tags          = merge({ "Name" = "${var.name}-bastion" }, var.tags)
  }
}
# Bastion Auto-Scaling Group
resource "aws_autoscaling_group" "bastion" {
  name_prefix         = "${var.name}-bastion-"
  max_size            = 2
  min_size            = 1
  desired_capacity    = 1
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.bastion.id
    version = "$Latest"
  }
}
\ No newline at end of file
variable "name" {
  description = "The project name to prepend to resources"
  type        = string
  default     = "bigbang-dev"
}

variable "vpc_id" {
  description = "The VPC where the bastion should be deployed"
  type        = string
}

variable "subnet_ids" {
  description = "List of subnet ids where the bastion is allowed"
  type        = list(string)
}

variable "ami" {
  description = "The image to use for the bastion"
  type        = string
  default     = "ami-017e342d9500ef3b2" # RKE2 RHEL8 STIG (even though we don't need RHEL8, it is hardened)
}

variable "instance_type" {
  description = "The AWS EC2 instance type for the bastion"
  type        = string
  default     = "t2.micro"
}

variable "key_name" {
  description = "The key pair name to install on the bastion"
  type        = string
  default     = ""
}

variable "tags" {
  description = "The tags to apply to resources"
  type        = map(string)
  default     = {}
}
\ No newline at end of file
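The variables above suggest how the bastion module is consumed; a hedged sketch of a caller (the module source path, IDs, and key pair name are illustrative placeholders, not values from this repo):

```hcl
# Hypothetical caller of the bastion module.
module "bastion" {
  source = "./modules/bastion"

  name       = "bigbang-dev"
  vpc_id     = "vpc-0123456789abcdef0"
  subnet_ids = ["subnet-0123456789abcdef0"]
  key_name   = "bigbang-dev-keypair"

  tags = {
    project = "bigbang"
  }
}
```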
@@ -21,8 +21,8 @@ resource "aws_lb_target_group" "public_nlb_http" {
  vpc_id = var.vpc_id

  health_check {
    port = var.node_port_health_checks
    path = "/healthz/ready"
  }

  lifecycle {
    create_before_destroy = true
@@ -37,8 +37,8 @@ resource "aws_lb_target_group" "public_nlb_https" {
  vpc_id = var.vpc_id

  health_check {
    port = var.node_port_health_checks
    path = "/healthz/ready"
  }

  lifecycle {
    create_before_destroy = true
@@ -53,8 +53,8 @@ resource "aws_lb_target_group" "public_nlb_sni" {
  vpc_id = var.vpc_id

  health_check {
    port = var.node_port_health_checks
    path = "/healthz/ready"
  }

  lifecycle {
    create_before_destroy = true
@@ -114,39 +114,39 @@ data "aws_network_interface" "public_nlb" {
resource "aws_security_group" "public_nlb_pool" {
  name_prefix = "${var.name}-public-nlb-to-pool-"
  description = "${var.name} Traffic from public Network Load Balancer to server pool"
  vpc_id      = var.vpc_id

  # Allow all traffic from load balancer
  ingress {
    description = "Allow public Network Load Balancer traffic to health check"
    from_port   = var.node_port_health_checks
    to_port     = var.node_port_health_checks
    protocol    = "tcp"
    cidr_blocks = formatlist("%s/32", [for eni in data.aws_network_interface.public_nlb : eni.private_ip])
  }

  ingress {
    description = "Allow internet traffic to HTTP node port"
    from_port   = var.node_port_http
    to_port     = var.node_port_http
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow internet traffic to HTTPS node port"
    from_port   = var.node_port_https
    to_port     = var.node_port_https
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow internet traffic to SNI node port"
    from_port   = var.node_port_sni
    to_port     = var.node_port_sni
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = var.tags
......
output "pool_sg_id" {
  description = "The ID of the security group used as an inbound rule for load balancer's back-end server pool"
  value       = aws_security_group.public_nlb_pool.id
}

output "elb_target_group_arns" {
  description = "The load balancer target group ARNs"
  value       = [aws_lb_target_group.public_nlb_http.arn, aws_lb_target_group.public_nlb_https.arn, aws_lb_target_group.public_nlb_sni.arn]
}
\ No newline at end of file
variable "name" {
  description = "The name to apply to the external load balancer resources"
  type        = string
  default     = "bigbang-dev"
}

variable "vpc_id" {
  description = "The VPC where the load balancer should be deployed"
  type        = string
}

variable "subnet_ids" {
  description = "The subnet ids to load balance"
  type        = list(string)
}

variable "node_port_health_checks" {
  description = "The node port to use for Istio health check traffic"
  type        = string
  default     = "30000"
}

variable "node_port_http" {
  description = "The node port to use for HTTP traffic"
  type        = string
  default     = "30001"
}

variable "node_port_https" {
  description = "The node port to use for HTTPS traffic"
  type        = string
  default     = "30002"
}

variable "node_port_sni" {
  description = "The node port to use for Istio SNI traffic"
  type        = string
  default     = "30003"
}

variable "tags" {
  description = "The tags to apply to resources"
  type        = map(string)
  default     = {}
}
\ No newline at end of file
@@ -3,7 +3,7 @@
# See https://repo1.dso.mil/platform-one/distros/rancher-federal/rke2/rke2-aws-terraform/-/blob/master/modules/nodepool/main.tf#L113
resource "aws_autoscaling_attachment" "pool" {
  for_each               = toset(var.elb_target_group_arns)
  autoscaling_group_name = var.pool_asg_id
  alb_target_group_arn   = each.value
}
\ No newline at end of file
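The attachment above loops over the load balancer target group ARNs and binds each one to the server pool's autoscaling group. A minimal sketch of how a root module might wire the ELB outputs into it (the module names and local source paths here are hypothetical, not taken from this repository):

```hcl
# Hypothetical root-module wiring; "elb", "elb_attach", and the module
# source paths are illustrative names, not from this repository.
module "elb" {
  source     = "./modules/elb"
  name       = "bigbang-dev"
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.public_subnets
}

module "elb_attach" {
  source                = "./modules/elb_attach"
  elb_target_group_arns = module.elb.elb_target_group_arns
  pool_asg_id           = var.pool_asg_id # the server pool's autoscaling group ID
}
```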
variable "name" {
  description = "The name to apply to resources"
  type        = string
  default     = "bigbang-dev"
}

variable "elb_target_group_arns" {
  description = "The load balancer's target group ARNs to attach to the autoscale group"
  type        = list(string)
}

variable "pool_asg_id" {
  description = "The pool's autoscale group ID"
  type        = string
}
\ No newline at end of file
@@ -35,7 +35,7 @@ resource "null_resource" "kubeconfig" {
# Upload SSH private key
resource "aws_s3_bucket_object" "sshkey" {
  key    = "ssh-private-key.pem"
  # Get bucket name in middle of s3://<bucket name>/rke2.yaml
  bucket = replace(replace(var.kubeconfig_path, "/\\/[^/]*$/", ""), "/^[^/]*\\/\\//", "")
  source = pathexpand("${var.private_key_path}/${var.name}.pem")
...
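The nested `replace()` above peels the kubeconfig object path down to just the bucket name: the inner call strips the trailing object key, the outer call strips the leading scheme. A minimal sketch of the same expression with an illustrative literal value:

```hcl
locals {
  kubeconfig_path = "s3://my-bucket/rke2.yaml" # illustrative value, not from this repository

  # The inner replace drops the trailing "/rke2.yaml" (regex "\/[^/]*$"),
  # the outer replace drops the leading "s3://" (regex "^[^/]*\/\/"),
  # leaving just "my-bucket".
  bucket_name = replace(
    replace(local.kubeconfig_path, "/\\/[^/]*$/", ""),
    "/^[^/]*\\/\\//", ""
  )
}
```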
variable "name" {
  description = "The name of the SSH key"
  type        = string
  default     = "bigbang-dev"
}

variable "kubeconfig_path" {
  description = "Remote path to kubeconfig"
  type        = string
}

variable "private_key_path" {
  description = "Local path to SSH private key"
  type        = string
  default     = "~/.ssh"
}
\ No newline at end of file
@@ -17,6 +17,6 @@ resource "local_file" "pem" {
#
resource "aws_key_pair" "ssh" {
-  key_name   = "${var.name}"
+  key_name   = var.name
  public_key = tls_private_key.ssh.public_key_openssh
}
\ No newline at end of file
output "key_name" {
  description = "The name of the AWS SSH key pair"
  value       = aws_key_pair.ssh.key_name
}

output "public_key" {
  description = "The public SSH key"
  value       = tls_private_key.ssh.public_key_openssh
}
\ No newline at end of file
variable "private_key_path" {
  description = "Local path to store private key for SSH"
  type        = string
  default     = "~/.ssh"
}

variable "name" {
  description = "Name of the SSH keypair to create"
  type        = string
  default     = "bigbang"
}
\ No newline at end of file
data "aws_availability_zones" "available" {
  state = "available"
  filter {
    name   = "group-name"
    values = [var.aws_region]
  }
}
\ No newline at end of file
@@ -19,9 +19,9 @@ locals {
  cidr_step = max(10, local.num_azs)
  # Based on VPC CIDR, create subnet ranges
  cidr_index = range(local.num_azs)
-  public_subnet_cidrs = [ for i in local.cidr_index : cidrsubnet(var.vpc_cidr, local.cidr_size, i) ]
-  private_subnet_cidrs = [ for i in local.cidr_index : cidrsubnet(var.vpc_cidr, local.cidr_size, i + local.cidr_step) ]
+  public_subnet_cidrs  = [for i in local.cidr_index : cidrsubnet(var.vpc_cidr, local.cidr_size, i)]
+  private_subnet_cidrs = [for i in local.cidr_index : cidrsubnet(var.vpc_cidr, local.cidr_size, i + local.cidr_step)]
}
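The locals above carve non-overlapping public and private ranges out of the VPC CIDR, with the private subnets offset by `cidr_step` so the two sets never collide. A sketch with illustrative values (`cidr_size` and `num_azs` are defined earlier in the file and assumed here to be 8 and 3):

```hcl
locals {
  # Illustrative only: a /16 VPC split into /24 subnets across three AZs.
  # cidrsubnet("10.0.0.0/16", 8, i) yields the i-th /24 of the VPC range:
  #   example_public  -> ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
  #   example_private -> ["10.0.10.0/24", "10.0.11.0/24", "10.0.12.0/24"]
  example_public  = [for i in range(3) : cidrsubnet("10.0.0.0/16", 8, i)]
  example_private = [for i in range(3) : cidrsubnet("10.0.0.0/16", 8, i + 10)]
}
```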
# https://github.com/terraform-aws-modules/terraform-aws-vpc
@@ -39,8 +39,8 @@ module "vpc" {
  # and if the NAT gateway’s Availability Zone is down, resources in the other Availability
  # Zones lose internet access. To create an Availability Zone-independent architecture,
  # create a NAT gateway in each Availability Zone.
  enable_nat_gateway     = true
  single_nat_gateway     = false
  one_nat_gateway_per_az = true
  enable_dns_hostnames = true
@@ -52,12 +52,12 @@ module "vpc" {
  # Add in required tags for proper AWS CCM integration
  public_subnet_tags = merge({
    "kubernetes.io/cluster/${var.name}" = "shared"
    "kubernetes.io/role/elb"            = "1"
  }, var.tags)
  private_subnet_tags = merge({
    "kubernetes.io/cluster/${var.name}" = "shared"
    "kubernetes.io/role/internal-elb"   = "1"
  }, var.tags)
  tags = merge({
...