UNCLASSIFIED

{
"cSpell.words": [
"Autoscale",
"Autoscaled",
"CNAME",
"CODEOWNER",
"Hashicorp",
"Istio's",
"Kubeconfig",
"Kubectl",
"Kustomization",
"Kustomize",
"MYIP",
"Quickstart",
"RHEL",
"STIG",
"SonarQube",
"Twistlock",
"alertmanager",
"bigbang",
"bigbangkey",
"configmap",
"configmaps",
"decryptable",
"fluxcd",
"forkable",
"grafana",
"kibana",
"kustomizations",
"nodepool",
"rebased",
"sshuttle",
"storageclass",
"terragrunt",
"tfstate",
"uncommenting",
"updatekeys",
"xlarge",
"yamldecode"
]
}
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.2.2]
### Changed
- Updated spelling and markdown formatting.
- Formatted Terraform configurations to canonical format.
- Updated CODEOWNERS.
- Updated CONTRIBUTING to reflect forking process.
## [1.2.1]
### Changed
* @michaelmcleroy @jasonkrause @cmcgrath @rkernick
# BigBang Template
> _This is a mirror of a government repo hosted on [Repo1](https://repo1.dso.mil/) by [DoD Platform One](http://p1.dso.mil/). Please direct all code changes, issues, and comments to <https://repo1.dso.mil/platform-one/big-bang/customers/template>._
This folder contains a template that you can replicate in your own Git repo to get started with Big Bang configuration. If you are new to Big Bang, it is recommended that you start with the [Big Bang Quickstart](https://repo1.dso.mil/platform-one/quick-start/big-bang) before attempting customization.
The main benefits of this template include:
- Isolation of the Big Bang product and your custom configuration
- Allows you to easily consume upstream Big Bang changes since you never change the product
- Big Bang product tags are explicitly referenced in your configuration, giving you control over upgrades
- [GitOps](https://www.weave.works/technologies/gitops/) for your deployment configurations
- Single source of truth for the configurations deployed
- Historical tracking of changes made
- Allows tighter control of what is deployed to production (via merge requests)
- Secrets (e.g. pull credentials) can be shared across deployments.
> NOTE: SOPS [supports multiple keys for encrypting the same secret](https://dev.to/stack-labs/manage-your-secrets-in-git-with-sops-common-operations-118g) so that each environment can use a different SOPS key but share a secret.
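For illustration, a `.sops.yaml` rule can list several fingerprints so that any one of the corresponding keys can decrypt the secret (the fingerprints and regex below are placeholders, not real keys):

```yaml
# Placeholder example; substitute the fingerprints from your own environments
creation_rules:
  - encrypted_regex: "^(data|stringData)$"
    pgp: >-
      AAAA1111BBBB2222CCCC3333DDDD4444EEEE5555,
      FFFF6666AAAA7777BBBB8888CCCC9999DDDD0000
```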
## Prerequisites
To deploy Big Bang, the following items are required:
- Kubernetes cluster [ready for Big Bang](https://repo1.dso.mil/platform-one/big-bang/bigbang/-/tree/master/docs/guides/prerequisites)
- A git repo for your configuration
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [GPG (Mac users need to read this important note)](https://repo1.dso.mil/platform-one/onboarding/big-bang/engineering-cohort/-/blob/master/lab_guides/01-Preflight-Access-Checks/A-software-check.md#gpg)
### Create GPG Encryption Key
To make sure your pull secrets are not compromised when uploaded to Git, you must generate your own encryption key:
> Keys should be created without a passphrase so that Flux can use the private key to decrypt secrets in the Big Bang cluster.
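A minimal sketch of generating such a key in batch mode (the key name is an example; adjust it to your environment):

```shell
# Example: create a passphrase-less GPG key that Flux/SOPS can use
KEY_NAME="bigbang-demo-sops"

gpg --batch --gen-key <<EOF
%no-protection
Key-Type: RSA
Key-Length: 4096
Name-Real: ${KEY_NAME}
Expire-Date: 0
%commit
EOF

# Print the new key's fingerprint for use in .sops.yaml
gpg --list-keys --with-colons "${KEY_NAME}" | awk -F: '/^fpr/ {print $10; exit}'
```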
The `base/configmap.yaml` is set up to use the domain `bigbang.dev` by default.
```shell
cd base
# Encrypt the existing certificate
sops -e bigbang-dev-cert.yaml > secrets.enc.yaml
# Save encrypted TLS certificate into Git
git commit -m "chore: added iron bank pull credentials"
git push
```
> Your private key to decrypt these secrets is stored in your GPG key ring. You must **NEVER** export this key and commit it to your Git repository since this would compromise your secrets.
### Configure for GitOps
```shell
# Verify secrets and configmaps are deployed
# At a minimum, you will have the following:
# secrets: sops-gpg, private-git, common-bb, and environment-bb
# configmaps: common, environment
kubectl get -n bigbang secrets,configmaps
# Watch deployment
```
> If you cannot get to the main page of Kiali, it may be due to an expired certificate. Check the expiration of the certificate in `base/configmap.yaml`.
>
> For troubleshooting deployment problems, refer to the [Big Bang](https://repo1.dsop.io/platform-one/big-bang/bigbang) documentation.
You have now successfully deployed Big Bang. Your next step is to customize the configuration.
1. Big Bang will automatically pick up your change and apply the necessary updates.
```shell
# Watch deployment for twistlock to be deployed
watch kubectl get hr,po -A
# Test deployment by opening a browser to "twistlock.bigbang.dev" to get to the Twistlock application
```
### Additional resources
Using Kustomize, you can add additional resources to the deployment if needed. Read the [Kustomization](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/) documentation for further details.
## Secrets
If you need to [rotate your GPG encryption keys](#create-gpg-encryption-key) for any reason, you will also need to re-encrypt any encrypted secrets.
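Once `.sops.yaml` points at the new fingerprint, each encrypted file can be re-encrypted in place with `sops updatekeys`; a sketch, with example file paths:

```shell
# Re-encrypt an existing secret against the key fingerprints now listed in .sops.yaml
# (the file path is an example from this template layout)
sops updatekeys --yes base/secrets.enc.yaml
```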
1. Update `.sops.yaml` configuration file
`.sops.yaml` holds all of the key fingerprints used for SOPS. Update `pgp`'s value to the new key's fingerprint. You can list your locally stored fingerprints using `gpg -k`.
```yaml
creation_rules:
```

In our template, we have a `dev` and a `prod` environment with a shared `base`.
- Shared Iron Bank pull credential
- Different database passwords for `dev` and `prod`
- Different SOPS keys for `dev` and `prod`
1. Set up `.sops.yaml` for multiple folders:
Big Bang `dev` value changes can be made by simply modifying `dev/configmap.yaml`. `base` and `dev` create two separate configmaps, named `common` and `environment` respectively, with the `environment` values taking precedence over `common` values in Big Bang.
The same concept applies to `dev` secret changes: two separate secrets, named `common-bb` and `environment-bb`, pass values to Big Bang, with the `environment-bb` values taking precedence over the `common-bb` values.
If a new resource must be deployed, for example a TLS cert, you must add a `resources:` section to the `kustomization.yaml` to refer to the new file. See the base directory for an example.
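As an illustrative sketch (the certificate file name is hypothetical), the `dev/kustomization.yaml` addition could look like:

```yaml
# dev/kustomization.yaml (sketch; file names are examples)
bases:
  - ../base
resources:
  - tls-cert.enc.yaml # newly added encrypted TLS certificate
```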
# Big Bang Infrastructure as Code (IaC)
> _This is a mirror of a government repo hosted on [Repo1](https://repo1.dso.mil/) by [DoD Platform One](http://p1.dso.mil/). Please direct all code changes, issues, and comments to <https://repo1.dso.mil/platform-one/big-bang/customers/template>._
The terraform/terragrunt code in this directory will set up all the infrastructure for a Big Bang deployment in Amazon Web Services (AWS). It starts from scratch with a new VPC and finishes by deploying a multi-node [RKE2 cluster](https://docs.rke2.io/). The infrastructure and cluster provisioned can then be used to deploy Big Bang.
> This code is intended to be a forkable starting point / example for users to get their infrastructure set up quickly. It is up to the users to further customize and secure the infrastructure for the intended use.
## Layout
```
terraform
└── main # Shared terraform code
└── us-gov-west-1 # Terragrunt code for a specific AWS region
├── region.yaml # Regional configuration
└── prod # Terragrunt code for a specific environment (e.g. prod, stage, dev)
└── env.yaml # Environment specific configuration
```
- Validate your configuration
```shell
cd ./terraform/us-gov-west-1/prod
terragrunt run-all validate
# Successful output: Success! The configuration is valid.
```
- Run the deployment
```shell
# Initialize
terragrunt run-all init
```
- Connect to cluster
```shell
# Setup your cluster name (same as `name` in `env.yaml`)
export CNAME="bigbang-dev"
```

Prior to deploying Big Bang, you should set up the following in the Kubernetes cluster:
### Storage Class
By default, Big Bang will use the cluster's default `StorageClass` to dynamically provision the required persistent volumes. This means the cluster must be able to dynamically provision persistent volume claims (PVCs). Since we're on AWS, the simplest method is to use the [AWS EBS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) Storage Class using Kubernetes' in tree [AWS cloud provider](https://kubernetes.io/docs/concepts/storage/storage-classes/#aws-ebs).
> Without a default storage class, some Big Bang components, like Elasticsearch, Jaeger, or Twistlock, will never reach the running state.
```shell
kubectl apply -f ./terraform/storageclass/ebs-gp2-storage-class.yaml
```
If you have an alternative storage class, you can run the following to replace the EBS GP2 one provided.
```shell
kubectl patch storageclass ebs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# Install your storage of choice, for example...
```
To provide ingress into the cluster, load balancers must be configured with the proper port mappings for `istio`. The simplest method is the default scenario, where the cluster (`rke2` in this example) is running the appropriate cloud provider capable of dynamically provisioning load balancers when requesting `Services` of `type: LoadBalancer`. This is the default configuration in Big Bang, and if you choose to continue this way, you can skip the following steps.
However, for brevity in this example, we are introducing an alternative, where the load balancer is pre-provisioned and owned by terraform (from your earlier apply step). This provides more control over the load balancer, but also requires the extra step of informing `istio` on installation of the required ports to expose on each node that the pre-created load balancer should forward to. It's important to note these are the _exact_ same steps that the cloud provider would take if we let Kubernetes provision things for us.
The following configuration in Big Bang's values.yaml will set up the appropriate `NodePorts` to match the [Quickstart](#quickstart) configuration.
## Debug
After Big Bang deployment, if you wish to access your deployed web applications that are not exposed publicly, add an entry into your /etc/hosts to point the host name to the elastic load balancer.
> This bypasses load balancing since you are using the resolved IP address of one of the connected nodes in the pool
```shell
# Setup cluster name from env.yaml
export CName="bigbang-dev"
# Retrieve IP address of load balancer for /etc/hosts
export ELBIP=`dig $LBDNS +short | head -1`
# Now add the hostname of the web application into /etc/hosts (or `C:\Windows\System32\drivers\etc\hosts` on Windows)
# You may need to log out and back in for hosts to take effect
printf "\nAdd the following line to /etc/hosts to alias Big Bang core products:\n${ELBIP} twistlock.bigbang.dev kibana.bigbang.dev prometheus.bigbang.dev grafana.bigbang.dev tracing.bigbang.dev kiali.bigbang.dev alertmanager.bigbang.dev\n\n"
```
## Optional Terraform
Depending on your needs, you may want to deploy additional infrastructure, such as Key Stores, S3 Buckets, or Databases, that can be used with your deployment. In the [options](./options) directory, you will find terraform / terragrunt snippets that can assist you in deploying these items.
> These examples may require updates to be compatible with the [Quickstart](#quickstart)
resource "aws_security_group" "bastion_sg" {
name_prefix = "${var.name}-bastion-"
description = "${var.name} bastion"
vpc_id = var.vpc_id
# Allow all egress
egress {
from_port   = 0
to_port     = 0
protocol    = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = var.tags
}
# Bastion Launch Template
resource "aws_launch_template" "bastion" {
name_prefix = "${var.name}-bastion-"
description = "Bastion launch template for ${var.name} cluster"
image_id = var.ami
instance_type = var.instance_type
key_name = var.key_name
network_interfaces {
associate_public_ip_address = true
security_groups             = [aws_security_group.bastion_sg.id]
}
update_default_version = true
user_data = filebase64("${path.module}/dependencies/install_python.sh")
tag_specifications {
resource_type = "instance"
tags = merge({ "Name" = "${var.name}-bastion" }, var.tags)
}
}
# Bastion Auto-Scaling Group
resource "aws_autoscaling_group" "bastion" {
name_prefix = "${var.name}-bastion-"
max_size = 2
min_size = 1
desired_capacity = 1
vpc_zone_identifier = var.subnet_ids
launch_template {
id = aws_launch_template.bastion.id
version = "$Latest"
}
}
variable "name" {
description = "The project name to prepend to resources"
type = string
default = "bigbang-dev"
}
variable "vpc_id" {
description = "The VPC where the bastion should be deployed"
type = string
}
variable "subnet_ids" {
description = "List of subnet ids where the bastion is allowed"
type = list(string)
}
variable "ami" {
description = "The image to use for the bastion"
type = string
default = "ami-017e342d9500ef3b2" # RKE2 RHEL8 STIG (even though we don't need RHEL8, it is hardened)
}
variable "instance_type" {
description = "The AWS EC2 instance type for the bastion"
type = string
default = "t2.micro"
}
variable "key_name" {
description = "The key pair name to install on the bastion"
type = string
default = ""
}
variable "tags" {
description = "The tags to apply to resources"
type = map(string)
default = {}
}
resource "aws_lb_target_group" "public_nlb_http" {
vpc_id = var.vpc_id
health_check {
port = var.node_port_health_checks
path = "/healthz/ready"
}
lifecycle {
create_before_destroy = true
}
}

resource "aws_lb_target_group" "public_nlb_https" {
vpc_id = var.vpc_id
health_check {
port = var.node_port_health_checks
path = "/healthz/ready"
}
lifecycle {
create_before_destroy = true
}
}

resource "aws_lb_target_group" "public_nlb_sni" {
vpc_id = var.vpc_id
health_check {
port = var.node_port_health_checks
path = "/healthz/ready"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_security_group" "public_nlb_pool" {
name_prefix = "${var.name}-public-nlb-to-pool-"
description = "${var.name} Traffic from public Network Load Balancer to server pool"
vpc_id = var.vpc_id
# Allow all traffic from load balancer
ingress {
description = "Allow public Network Load Balancer traffic to health check"
from_port = var.node_port_health_checks
to_port = var.node_port_health_checks
protocol = "tcp"
cidr_blocks = formatlist("%s/32", [for eni in data.aws_network_interface.public_nlb : eni.private_ip])
}
ingress {
description = "Allow internet traffic to HTTP node port"
from_port = var.node_port_http
to_port = var.node_port_http
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "Allow internet traffic to HTTPS node port"
from_port = var.node_port_https
to_port = var.node_port_https
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "Allow internet traffic to SNI node port"
from_port = var.node_port_sni
to_port = var.node_port_sni
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = var.tags
}
output "pool_sg_id" {
description = "The ID of the security group used as an inbound rule for load balancer's back-end server pool"
value = aws_security_group.public_nlb_pool.id
}
output "elb_target_group_arns" {
description = "The load balancer target group ARNs"
value = [aws_lb_target_group.public_nlb_http.arn, aws_lb_target_group.public_nlb_https.arn, aws_lb_target_group.public_nlb_sni.arn]
}
variable "name" {
description = "The name to apply to the external load balancer resources"
type = string
default = "bigbang-dev"
}
variable "vpc_id" {
description = "The VPC where the load balancer should be deployed"
type = string
}
variable "subnet_ids" {
description = "The subnet ids to load balance"
type = list(string)
}
variable "node_port_health_checks" {
description = "The node port to use for Istio health check traffic"
type = string
default = "30000"
}
variable "node_port_http" {
description = "The node port to use for HTTP traffic"
type = string
default = "30001"
}
variable "node_port_https" {
description = "The node port to use for HTTPS traffic"
type = string
default = "30002"
}
variable "node_port_sni" {
description = "The node port to use for Istio SNI traffic"
type = string
default = "30003"
}
variable "tags" {
description = "The tags to apply to resources"
type = map(string)
default = {}
}
# See https://repo1.dso.mil/platform-one/distros/rancher-federal/rke2/rke2-aws-terraform/-/blob/master/modules/nodepool/main.tf#L113
resource "aws_autoscaling_attachment" "pool" {
for_each = toset(var.elb_target_group_arns)
autoscaling_group_name = var.pool_asg_id
alb_target_group_arn = each.value
}
variable "name" {
description = "The name to apply to resources"
type = string
default = "bigbang-dev"
}
variable "elb_target_group_arns" {
description = "The load balancer's target group ARNs to attach to the autoscale group"
type = list(string)
}
variable "pool_asg_id" {
description = "The pool's autoscale group ID"
type = string
}
# Upload SSH private key
resource "aws_s3_bucket_object" "sshkey" {
key = "ssh-private-key.pem"
# Get the bucket name from the middle of s3://<bucket name>/rke2.yaml
bucket = replace(replace(var.kubeconfig_path, "/\\/[^/]*$/", ""), "/^[^/]*\\/\\//", "")
source = pathexpand("${var.private_key_path}/${var.name}.pem")
}
variable "name" {
description = "The name of the SSH key"
type = string
default = "bigbang-dev"
}
variable "kubeconfig_path" {
description = "Remote path to kubeconfig"
type = string
}
variable "private_key_path" {
description = "Local path to SSH private key"
type = string
default = "~/.ssh"
}
#
resource "aws_key_pair" "ssh" {
key_name = var.name
public_key = tls_private_key.ssh.public_key_openssh
}
output "key_name" {
description = "The name of the AWS SSH key pair"
value = aws_key_pair.ssh.key_name
}
output "public_key" {
description = "The public SSH key"
value = tls_private_key.ssh.public_key_openssh
}
variable "private_key_path" {
description = "Local path to store private key for SSH"
type = string
default = "~/.ssh"
}
variable "name" {
description = "Name of the SSH keypair to create"
type = string
default = "bigbang"
}
data "aws_availability_zones" "available" {
state = "available"
filter {
name = "group-name"
values = [var.aws_region]
}
}
locals {
cidr_step = max(10, local.num_azs)
# Based on VPC CIDR, create subnet ranges
cidr_index = range(local.num_azs)
public_subnet_cidrs = [for i in local.cidr_index : cidrsubnet(var.vpc_cidr, local.cidr_size, i)]
private_subnet_cidrs = [for i in local.cidr_index : cidrsubnet(var.vpc_cidr, local.cidr_size, i + local.cidr_step)]
}
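As a quick sanity check on the `cidrsubnet()` math above, you can evaluate the expressions in `terraform console`. The values below are illustrative assumptions: `vpc_cidr = "10.0.0.0/16"`, `cidr_size = 4`, and `cidr_step = 10`, so index 0 is the first public subnet and index 10 is the first private subnet:

```shell
# Evaluate cidrsubnet() interactively (values here are example assumptions)
terraform console <<'EOF'
cidrsubnet("10.0.0.0/16", 4, 0)
cidrsubnet("10.0.0.0/16", 4, 10)
EOF
# Expected: "10.0.0.0/20" and "10.0.160.0/20"
```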
# https://github.com/terraform-aws-modules/terraform-aws-vpc
module "vpc" {
# and if the NAT gateway’s Availability Zone is down, resources in the other Availability
# Zones lose internet access. To create an Availability Zone-independent architecture,
# create a NAT gateway in each Availability Zone.
enable_nat_gateway = true
single_nat_gateway = false
one_nat_gateway_per_az = true
enable_dns_hostnames = true
# Add in required tags for proper AWS CCM integration
public_subnet_tags = merge({
"kubernetes.io/cluster/${var.name}" = "shared"
"kubernetes.io/role/elb" = "1"
}, var.tags)
private_subnet_tags = merge({
"kubernetes.io/cluster/${var.name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}, var.tags)
tags = merge({
}, var.tags)
}
output "vpc_id" {
description = "The Virtual Private Cloud (VPC) ID"
value = module.vpc.vpc_id
}
output "private_subnet_ids" {
description = "The list of private subnet IDs in the VPC"
value = module.vpc.private_subnets
}
output "public_subnet_ids" {
description = "The list of public subnet IDs in the VPC"
value = module.vpc.public_subnets
}