UNCLASSIFIED

{
  "cSpell.words": [
    "Autoscale",
    "Autoscaled",
    "CNAME",
    "CODEOWNER",
    "Hashicorp",
    "Istio's",
    "Kubeconfig",
    "Kubectl",
    "Kustomization",
    "Kustomize",
    "MYIP",
    "Quickstart",
    "RHEL",
    "STIG",
    "SonarQube",
    "Twistlock",
    "alertmanager",
    "bigbang",
    "bigbangkey",
    "codeowners",
    "configmap",
    "configmaps",
    "decryptable",
    "fluxcd",
    "forkable",
    "grafana",
    "kibana",
    "kubernetes",
    "kustomizations",
    "nodepool",
    "rebased",
    "sshuttle",
    "storageclass",
    "terragrunt",
    "tfstate",
    "uncommenting",
    "updatekeys",
    "xlarge",
    "yamldecode"
  ]
}
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.2.2]
### Added
- Added cSpell workspace configuration.
- Added Table of Contents.
### Changed
- Formatted Terraform configurations to canonical format.
- Updated CONTRIBUTING to reflect forking process.
- Updated link to Kubernetes cluster prerequisites.
- Updated spelling and markdown formatting.
- Updated CODEOWNERS.
## [1.2.1]
### Changed
* @michaelmcleroy @jasonkrause @cmcgrath @rkernick
Development requires the Kubernetes CLI tool as well as a Kubernetes cluster.
To contribute a change:
1. Create a fork of the existing repository
   1. On the project's home page, in the top right, click **Fork**.
   1. Below *Select a namespace to fork the project*, identify the namespace you want to fork to, and click **Select**. Only namespaces for which you have Developer or higher permissions are shown.
   1. GitLab creates your fork and redirects you to the project page for your new fork. The permissions you have in the namespace are your permissions in the fork.
1. Make the changes in code.
1. Test by deploying [Big Bang](https://repo1.dsop.io/platform-one/big-bang/bigbang) to your Kubernetes cluster.
1. Make commits using the [Conventional Commits](https://www.conventionalcommits.org/) format; this helps with changelog automation. Update `CHANGELOG.md` in the same commit using the [Keep a Changelog](https://keepachangelog.com) format. Depending on tooling maturity, this step may be automated.
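As a sketch, a documentation-only change might be committed like this (the scratch repository, file, and message below are purely illustrative):

```shell
# Create a scratch repository just to demonstrate the message format
demo="$(mktemp -d)"
cd "$demo"
git init -q
git config user.email "dev@example.com"
git config user.name "Example Dev"

# A Conventional Commits message is "<type>: <description>"
echo "- Updated CONTRIBUTING to reflect forking process." >> CHANGELOG.md
git add CHANGELOG.md
git commit -q -m "docs: update CONTRIBUTING to reflect forking process"

# The type prefix (docs, feat, fix, chore, ...) is what changelog tooling keys on
git log -1 --pretty=%s
```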
# BigBang Template

> _This is a mirror of a government repo hosted on [Repo1](https://repo1.dso.mil/) by [DoD Platform One](http://p1.dso.mil/). Please direct all code changes, issues and comments to <https://repo1.dso.mil/platform-one/big-bang/customers/template>_
[[_TOC_]]
This folder contains a template that you can replicate in your own Git repo to get started with Big Bang configuration. If you are new to Big Bang it is recommended you start with the [Big Bang Quickstart](https://repo1.dso.mil/platform-one/quick-start/big-bang) before attempting customization.
The main benefits of this template include:
- Isolation of the Big Bang product and your custom configuration
  - Allows you to easily consume upstream Big Bang changes since you never change the product
  - Big Bang product tags are explicitly referenced in your configuration, giving you control over upgrades
- [GitOps](https://www.weave.works/technologies/gitops/) for your deployment configurations
  - Single source of truth for the configurations deployed
  - Historical tracking of changes made
  - Allows tighter control of what is deployed to production (via merge requests)
- Secrets (e.g. pull credentials) can be shared across deployments.

> NOTE: SOPS [supports multiple keys for encrypting the same secret](https://dev.to/stack-labs/manage-your-secrets-in-git-with-sops-common-operations-118g) so that each environment can use a different SOPS key but share a secret.
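For example, a `.sops.yaml` creation rule can list multiple comma-separated PGP fingerprints so the same encrypted file can be decrypted by either environment's key (the fingerprints below are placeholders, not real keys):

```yaml
creation_rules:
  - path_regex: .*/secrets.enc.yaml
    # Placeholder fingerprints: dev key first, prod key second
    pgp: >-
      AAAA1111BBBB2222CCCC3333DDDD4444EEEE5555,
      FFFF6666AAAA7777BBBB8888CCCC9999DDDD0000
```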
## Prerequisites

To deploy Big Bang, the following items are required:

- Kubernetes cluster [ready for Big Bang](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/tree/master/docs/guides/prerequisites)
- A git repo for your configuration
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [GPG (Mac users need to read this important note)](https://repo1.dso.mil/platform-one/onboarding/big-bang/engineering-cohort/-/blob/master/lab_guides/01-Preflight-Access-Checks/A-software-check.md#gpg)
### Create GPG Encryption Key

To make sure your pull secrets are not compromised when uploaded to Git, you must generate your own encryption key:

> Keys should be created without a passphrase so that Flux can use the private key to decrypt secrets in the Big Bang cluster.
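A minimal unattended-generation sketch, assuming GnuPG 2.x; the key name and the throwaway key ring are illustrative:

```shell
# Use a throwaway key ring for the example; omit GNUPGHOME to use your real one
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# %no-protection creates the key without a passphrase (needed for Flux decryption)
cat > keyspec <<EOF
%no-protection
Key-Type: default
Subkey-Type: default
Name-Real: bigbangkey
Expire-Date: 0
%commit
EOF

gpg --batch --generate-key keyspec

# The fpr line holds the fingerprint that .sops.yaml needs
gpg --list-secret-keys --with-colons | grep '^fpr' | head -1
```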
The `base/configmap.yaml` is set up to use the domain `bigbang.dev` by default.
```shell
cd base

# Encrypt the existing certificate
sops -e bigbang-dev-cert.yaml > secrets.enc.yaml

# Save encrypted TLS certificate into Git
```
```shell
git commit -m "chore: added iron bank pull credentials"
git push
```
> Your private key to decrypt these secrets is stored in your GPG key ring. You must **NEVER** export this key and commit it to your Git repository since this would compromise your secrets.

### Configure for GitOps
```shell
# Verify secrets and configmaps are deployed
# At a minimum, you will have the following:
#   secrets: sops-gpg, private-git, common-bb, and environment-bb
#   configmaps: common, environment
kubectl get -n bigbang secrets,configmaps

# Watch deployment
watch kubectl get hr,po -A
```
> If you cannot get to the main page of Kiali, it may be due to an expired certificate. Check the expiration of the certificate in `base/configmap.yaml`.
>
> For troubleshooting deployment problems, refer to the [Big Bang](https://repo1.dsop.io/platform-one/big-bang/bigbang) documentation.
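One way to check a certificate's expiration is with `openssl`. The sketch below generates a throwaway self-signed certificate purely to illustrate the check; in practice you would extract the PEM block embedded in `base/configmap.yaml` to a file first:

```shell
# Create a short-lived self-signed cert for demonstration only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=bigbang.dev" \
  -keyout /tmp/demo.key -out /tmp/demo.crt

# Print the expiration date of the certificate
openssl x509 -noout -enddate -in /tmp/demo.crt
```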
You now have successfully deployed Big Bang. Your next step is to customize the configuration.
1. Big Bang will automatically pick up your change and make the necessary updates.
```shell
# Watch deployment for twistlock to be deployed
watch kubectl get hr,po -A

# Test deployment by opening a browser to "twistlock.bigbang.dev" to get to the Twistlock application
```

For additional configuration options, refer to the [Big Bang](https://repo1.dsop.io/platform-one/big-bang/bigbang) documentation.
### Additional resources

Using Kustomize, you can add additional resources to the deployment if needed. Read the [Kustomization](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/) documentation for further details.

## Secrets
If you need to [rotate your GPG encryption keys](#create-gpg-encryption-key) for any reason, you will also need to re-encrypt any encrypted secrets.

1. Update `.sops.yaml` configuration file

`.sops.yaml` holds all of the key fingerprints used for SOPS. Update `pgp`'s value to the new key's fingerprint. You can list your locally stored fingerprints using `gpg -k`.
```yaml
creation_rules:
```
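With the new fingerprint in place, SOPS can re-encrypt an existing file against the updated key list; a sketch (the file path is illustrative):

```shell
# Re-encrypt the secret with every key currently listed in .sops.yaml
sops updatekeys base/secrets.enc.yaml
```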
In our template, we have a `dev` and a `prod` environment with a shared `base`.
- Shared Iron Bank pull credential
- Different database passwords for `dev` and `prod`
- Different SOPS keys for `dev` and `prod`

1. Setup `.sops.yaml` for multiple folders:
Big Bang `dev` value changes can be made by simply modifying `dev/configmap.yaml`. `base` and `dev` create two separate configmaps, named `common` and `environment` respectively, with the `environment` values taking precedence over `common` values in Big Bang.

The same concept applies to `dev` secret changes, with two separate secrets named `common-bb` and `environment-bb` used for values to Big Bang, with the `environment-bb` values taking precedence over the `common-bb` values in Big Bang.

If a new resource must be deployed, for example a TLS cert, you must add a `resources:` section to the `kustomization.yaml` to refer to the new file. See the base directory for an example.
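Such an addition might look like the following sketch of a `dev/kustomization.yaml` (the file name `tls-cert.yaml` is illustrative):

```yaml
# dev/kustomization.yaml (resource file name below is illustrative)
resources:
  - tls-cert.yaml
```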
# Big Bang Infrastructure as Code (IaC)

> _This is a mirror of a government repo hosted on [Repo1](https://repo1.dso.mil/) by [DoD Platform One](http://p1.dso.mil/). Please direct all code changes, issues and comments to <https://repo1.dso.mil/platform-one/big-bang/customers/template>_
[[_TOC_]]
The terraform/terragrunt code in this directory will set up all the infrastructure for a Big Bang deployment in Amazon Web Services (AWS). It starts from scratch with a new VPC and finishes by deploying a multi-node [RKE2 Cluster](https://docs.rke2.io/). The infrastructure and cluster provisioned can then be used to deploy Big Bang.

> This code is intended to be a forkable starting point / example for users to get their infrastructure set up quickly. It is up to the users to further customize and secure the infrastructure for the intended use.

## Layout
```
terraform
└── main                # Shared terraform code
└── us-gov-west-1       # Terragrunt code for a specific AWS region
    ├── region.yaml     # Regional configuration
    └── prod            # Terragrunt code for a specific environment (e.g. prod, stage, dev)
        └── env.yaml    # Environment specific configuration
```
- Validate your configuration
```shell
cd ./terraform/us-gov-west-1/prod
terragrunt run-all validate

# Successful output: Success! The configuration is valid.
```
- Run the deployment
```shell
# Initialize
terragrunt run-all init
```
- Connect to cluster
```shell
# Setup your cluster name (same as `name` in `env.yaml`)
export CNAME="bigbang-dev"
```

Prior to deploying Big Bang, you should set up the following in the Kubernetes cluster:
### Storage Class

By default, Big Bang will use the cluster's default `StorageClass` to dynamically provision the required persistent volumes. This means the cluster must be able to dynamically provision persistent volume claims (PVCs). Since we're on AWS, the simplest method is to use the [AWS EBS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) Storage Class using Kubernetes' in-tree [AWS cloud provider](https://kubernetes.io/docs/concepts/storage/storage-classes/#aws-ebs).

> Without a default storage class, some Big Bang components, like Elasticsearch, Jaeger, or Twistlock, will never reach the running state.
```shell
kubectl apply -f ./terraform/storageclass/ebs-gp2-storage-class.yaml
```
If you have an alternative storage class, you can run the following to replace the EBS GP2 one provided.
```shell
kubectl patch storageclass ebs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# Install your storage of choice, for example...
kubectl patch storageclass <name of your storage class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
To ensure ingress into the cluster, load balancers must be configured to ensure proper port mappings for `istio`. The simplest method is the default scenario, where the cluster (`rke2` in this example) is running the appropriate cloud provider capable of dynamically provisioning load balancers when requesting `Services` of `type: LoadBalancer`. This is the default configuration in BigBang, and if you choose to continue this way, you can skip the following steps.

However, for brevity in this example, we are introducing an alternative, where the load balancer is pre-provisioned and owned by terraform (from your earlier apply step). This provides more control over the load balancer, but also requires the extra step of informing `istio` on installation of the required ports to expose on each node that the pre-created load balancer should forward to. It's important to note these are the _exact_ same steps that the cloud provider would take if we let Kubernetes provision things for us.

The following configuration in Big Bang's values.yaml will set up the appropriate `NodePorts` to match the [Quickstart](#quickstart) configuration.
## Debug

After Big Bang deployment, if you wish to access your deployed web applications that are not exposed publicly, add an entry into your /etc/hosts to point the host name to the elastic load balancer.

> This bypasses load balancing since you are using the resolved IP address of one of the connected nodes in the pool
```shell
# Setup cluster name from env.yaml
export CName="bigbang-dev"

export LBDNS=`aws elb describe-load-balancers --query "LoadBalancerDescriptions[

# Retrieve IP address of load balancer for /etc/hosts
export ELBIP=`dig $LBDNS +short | head -1`

# Now add the hostname of the web application into /etc/hosts (or `C:\Windows\System32\drivers\etc\hosts` on Windows)
# You may need to log out and back in for the hosts change to take effect
printf "\nAdd the following line to /etc/hosts to alias Big Bang core products:\n${ELBIP} twistlock.bigbang.dev kibana.bigbang.dev prometheus.bigbang.dev grafana.bigbang.dev tracing.bigbang.dev kiali.bigbang.dev alertmanager.bigbang.dev\n\n"
```
## Optional Terraform

Depending on your needs, you may want to deploy additional infrastructure, such as Key Stores, S3 Buckets, or Databases, that can be used with your deployment. In the [options](./options) directory, you will find terraform / terragrunt snippets that can assist you in deploying these items.

> These examples may require updates to be compatible with the [Quickstart](#quickstart)
```hcl
resource "aws_security_group" "bastion_sg" {
  name_prefix = "${var.name}-bastion-"
  description = "${var.name} bastion"
  vpc_id      = var.vpc_id

  # Allow all egress
  egress {
```
```hcl
resource "aws_launch_template" "bastion" {
  name_prefix   = "${var.name}-bastion-"
  description   = "Bastion launch template for ${var.name} cluster"
  image_id      = var.ami
  instance_type = var.instance_type
  key_name      = var.key_name

  network_interfaces {
    associate_public_ip_address = true
  }

  tag_specifications {
    resource_type = "instance"
    tags          = merge({ "Name" = "${var.name}-bastion" }, var.tags)
  }
}
```
```hcl
resource "aws_security_group" "public_nlb_pool" {
  name_prefix = "${var.name}-public-nlb-to-pool-"
  description = "${var.name} Traffic from public Network Load Balancer to server pool"
  vpc_id      = var.vpc_id

  # Allow all traffic from load balancer
  ingress {
```
```hcl
resource "aws_key_pair" "ssh" {
  key_name   = var.name
  public_key = tls_private_key.ssh.public_key_openssh
}
```
```hcl
locals {
  # Based on VPC CIDR, create subnet ranges
  cidr_index           = range(local.num_azs)
  public_subnet_cidrs  = [for i in local.cidr_index : cidrsubnet(var.vpc_cidr, local.cidr_size, i)]
  private_subnet_cidrs = [for i in local.cidr_index : cidrsubnet(var.vpc_cidr, local.cidr_size, i + local.cidr_step)]
}

# https://github.com/terraform-aws-modules/terraform-aws-vpc
```
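For reference, `cidrsubnet(prefix, newbits, netnum)` appends `newbits` to the prefix length and selects the `netnum`-th resulting subnet. Assuming a hypothetical `/16` VPC CIDR with `cidr_size = 4` and `cidr_step = 8` (example values, not from this repo's config):

```hcl
# cidrsubnet("10.0.0.0/16", 4, 0) == "10.0.0.0/20"    # first public subnet
# cidrsubnet("10.0.0.0/16", 4, 1) == "10.0.16.0/20"   # second public subnet
# cidrsubnet("10.0.0.0/16", 4, 8) == "10.0.128.0/20"  # first private subnet (netnum = 0 + cidr_step)
```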