UNCLASSIFIED

Commit d0d65996 authored by Andy Maksymowicz

Merge branch 'development' into 'master'

Updated Dockerfile

See merge request !19
parents 5864302d eada8a70
Pipeline #465629 failed with stages in 51 minutes and 48 seconds
If you need to contact the Container Hardening team, please identify your assignee.
If you have no assignee, feel free to tag Container Hardening leadership in your issue by commenting on this issue with your questions/concerns and then add `/cc @ironbank-notifications/leadership`. Gitlab will automatically notify all Container Hardening leadership to look at this issue and respond.
## Get Unstuck/AMA:
Iron Bank Get Unstuck/AMA working sessions are held every Wednesday from 1630-1730 EST.
Need some help with your containers getting through Iron Bank? Have questions on where things are at? Are you feeling stuck and want to figure out the next steps? This is the meeting for you! Come meet with the Iron Bank leadership and engineers to get answers to your questions.
Register in advance for this meeting: https://www.zoomgov.com/meeting/register/vJIsf-ytpz8qHSN_JW8Hl9Qf0AZZXSCSmfo
After registering, you will receive a confirmation email containing information about joining the meeting.
If you have any questions, please come to our Get Unstuck/AMA sessions. There we will have the right combination of business folks and engineers to get your questions answered.
## Responsibilities
If this application is owned by a Contributor or Vendor (identified as `Owner::Co`
## Definition of Done
Hardening:
- [ ] Hardening manifest is created and adheres to the schema (https://repo1.dsop.io/ironbank-tools/ironbank-pipeline/-/blob/master/schema/hardening_manifest.schema.json)
- [ ] Container builds successfully through the Gitlab CI pipeline
- [ ] Branch has been merged into `development`
- [ ] Project is configured for automatic renovate updates (if possible)
Justifications:
- [ ] All findings have been justified per the above documentation
This checklist is meant to provide a high-level overview of the process and steps for getting your container(s) onto Iron Bank.
- [ ] Create a Repo1 account (https://repo1.dso.mil/users/sign_in) to get access to the public repository of containers. You can register by clicking the 'Sign in with Iron Bank SSO' button on the sign-in page, followed by the Register button
- [ ] Fill out the onboarding form: https://p1.dso.mil/#/products/iron-bank/getting-started
- [ ] Attend our once weekly onboarding session where you can ask questions. [Register here](https://www.zoomgov.com/meeting/register/vJIsce6rpzkqGq9hHHRscNfGENYqvRL1s10).
- [ ] Your Onboarding form will be processed by the Iron Bank team, who will then assign it a priority level and create your repository. You will receive an email that your Gitlab issue has been created and is ready for you to complete the hardening process
- [ ] Ensure that all POCs are assigned to the issue to ensure proper tracking and notifications
## Hardening Process
### Repository Requirements
[Full documentation](https://repo1.dso.mil/dsop/dccscr/-/blob/master/Hardening/structure_requirements.md)
- [ ] A Dockerfile has been created in the root of the repository
- [ ] `hardening_manifest.yaml` has been created in the root of the repository
- [ ] The project has a LICENSE or a copy of the EULA
- [ ] The project has a README in the root of the repository with sufficient instructions on using the Iron Bank version of the image
- [ ] If your container is an enterprise/commercial container, the opensource version is ready
- [ ] Scripts used in the Dockerfile are placed into a `scripts` directory
- [ ] Configuration files are placed into a `config` directory
- [ ] Project is [configured for automatic renovate updates](https://repo1.dso.mil/dsop/dccscr/-/blob/master/Hardening/Renovate.md) (if possible)
- [ ] Renovate.json is present in root of repository
- [ ] Reviewers have been specified for notifications on new merge requests
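Taken together, the requirements above imply a repository layout along these lines:

```
.
├── Dockerfile
├── hardening_manifest.yaml
├── LICENSE
├── README.md
├── renovate.json
├── config/          # configuration files
└── scripts/         # scripts used in the Dockerfile
```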
### Dockerfile Requirements
[Full documentation](https://repo1.dso.mil/dsop/dccscr/-/blob/master/Hardening/Dockerfile_Requirements.md)
- [ ] There is one Dockerfile named Dockerfile
- [ ] The Dockerfile has the BASE_REGISTRY, BASE_IMAGE, and BASE_TAG arguments (used for local builds; the values in hardening_manifest.yaml are what will be used in the Container Hardening Pipeline)
- [ ] The Dockerfile is [based on a hardened Iron Bank image](https://repo1.dso.mil/dsop/dccscr/-/blob/master/Hardening/Dockerfile_Requirements.md#requirements)
- [ ] The Dockerfile includes a HEALTHCHECK (required if it is an application container)
- [ ] The Dockerfile starts the container as a non-root USER. Otherwise, if you must run as root, you must have proper justification.
- [ ] If your ENTRYPOINT entails using a script, the script is copied from a scripts directory on the project root
- [ ] No ADD instructions are used in the Dockerfile
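A minimal sketch of a Dockerfile that satisfies the checklist above. The `entrypoint.sh` script and the healthcheck command are illustrative assumptions, not Iron Bank mandates:

```dockerfile
# ARGs for local builds; the pipeline overrides these from hardening_manifest.yaml.
ARG BASE_REGISTRY=registry1.dso.mil
ARG BASE_IMAGE=ironbank/redhat/ubi/ubi8
ARG BASE_TAG=8.4

FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG}

# COPY (never ADD) scripts from the project's scripts/ directory.
COPY scripts/entrypoint.sh /usr/local/bin/entrypoint.sh

# Required for application containers.
HEALTHCHECK --interval=5m --timeout=3s CMD pgrep -f entrypoint.sh || exit 1

# Start as a non-root user; running as root requires justification.
USER 1001

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```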
## Hardening Manifest
[Full documentation](https://repo1.dso.mil/dsop/dccscr/-/tree/master/hardening%20manifest)
- [ ] Begin with this example and update with relevant information: https://repo1.dso.mil/dsop/dccscr/-/blob/master/hardening%20manifest/hardening_manifest.yaml
- [ ] Hardening manifest adheres to the following schema: https://repo1.dsop.io/ironbank-tools/ironbank-pipeline/-/blob/master/schema/hardening_manifest.schema.json
- [ ] The BASE_IMAGE and BASE_TAG arguments refer to a hardened/approved Iron Bank image (BASE_REGISTRY defaults to `registry1.dso.mil/ironbank` in the pipeline)
- [ ] Relevant image metadata has been entered for the corresponding labels
- [ ] Any downloaded resources include a checksum for verification (letters must be lowercase)
- [ ] For resource URLs that require authentication, credentials have been provided to an Iron Bank team member
- [ ] The maintainers' contact information has been provided in the `maintainers` section
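The lowercase checksum required for each downloaded resource can be generated locally before filling in the manifest; a sketch using a stand-in file:

```shell
# Generate the lowercase sha256 for a `resources` entry in hardening_manifest.yaml.
# /tmp/artifact.bin is a stand-in; point this at the file you actually download.
printf 'sample artifact\n' > /tmp/artifact.bin
sha256sum /tmp/artifact.bin | awk '{print $1}'
```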
## Gitlab CI Pipeline
[Full documentation](https://repo1.dso.mil/dsop/dccscr/-/tree/master/pipeline)
- [ ] Validate your container builds successfully through the Gitlab CI pipeline. When viewing the repository in repo1.dso.mil, go to `CI/CD > Pipelines` on the left. From there, you can see the status of your pipelines.
- [ ] Review scan output from `csv output` stage of the pipeline. For instructions on downloading the findings spreadsheet, click [here](https://repo1.dso.mil/dsop/dccscr/-/blob/master/pre-approval/spreadsheet.md)
- [ ] Fix vulnerabilities that were found and run the pipeline again before requesting a merge to the development branch
## Pre-Approval:
[Full documentation](https://repo1.dso.mil/dsop/dccscr/-/tree/master/pre-approval)
- [ ] Submit a Merge Request to the development branch
- [ ] Feature branch has been merged into development
- [ ] All findings from the development branch pipeline have been justified per the above documentation
- [ ] Justifications have been attached to this issue
- [ ] Apply the `Approval` label and remove the `Doing` label to indicate this container is ready for the approval phase
_Note: The justifications must be provided in a timely fashion. Failure to do so could result in new findings being identified which may start this process over._
## Approval Process (Container Hardening Team processes):
[Full documentation](https://repo1.dso.mil/dsop/dccscr/-/tree/master/approval)
- [ ] Peer review from Container Hardening Team
- [ ] Findings Approver has reviewed and approved all justifications
- [ ] Approval request has been sent to Authorizing Official
- [ ] Approval request has been processed by Authorizing Official
One of the following statuses is assigned:
- [ ] Conditional approval has been granted by the Authorizing Official for this container (`Approval::Expiring` label is applied)
- [ ] This container has been approved by the Authorizing Official (`Approved` label is applied)
_Note: If the above approval process is kicked back for any reason, the `Approval` label will be removed and the issue will be sent back to `Open`. Any comments will be listed in this issue for you to address. Once they have been addressed, you may re-add the `Approval` label._
## Post-Approval
[Full documentation](https://repo1.dso.mil/dsop/dccscr/-/tree/master/post%20approval)
- [ ] Your issue has been closed
- [ ] Your project has been merged into master
- [ ] Master branch pipeline has completed successfully (at this point, the image is made available on `ironbank.dso.mil` and `registry1.dso.mil` )
_Note: Now that your application has been approved, your container(s) will be subjected to continuous monitoring. If new CVEs are discovered or bugs are identified, you will need to address the issues and return to step 5 (Gitlab CI Pipeline). As you make changes, please make sure you are adhering to all of the requirements of the hardening process._
## Post Approval
### Continuous Monitoring
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
ARG BASE_REGISTRY=registry1.dso.mil
ARG BASE_IMAGE=ironbank/redhat/ubi/ubi8
ARG BASE_TAG=8.4
ARG TERRAFORM_VERSION=1.0.0
ARG AZURERM_VERSION=2.67.0
ARG RANDOM_VERSION=3.1.0
ARG TIME_VERSION=0.7.1
ARG RELTAG=2021.06.0
FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG} as intermediate
# Hardening: Update packages and remove cache
RUN dnf update -y --nodocs && \
dnf clean all && \
rm -rf /var/cache/dnf
# Install unzip package
RUN dnf --nodocs -y install --setopt=install_weak_deps=False \
unzip
# Download MLZ source code
RUN mkdir /workspaces
WORKDIR /workspaces
ARG MLZ_DEPENDENCY=v2021.06.0.zip
COPY ["${MLZ_DEPENDENCY}", "/tmp"]
RUN unzip /tmp/${MLZ_DEPENDENCY} -d /workspaces/missionlz \
&& rm /tmp/${MLZ_DEPENDENCY}
WORKDIR /workspaces/missionlz
FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG}
# Hardening: Update packages and remove cache
RUN dnf update -y --nodocs && \
dnf clean all && \
rm -rf /var/cache/dnf
# Install required packages
RUN dnf --nodocs -y install --setopt=install_weak_deps=False \
unzip \
ca-certificates \
sudo
# Delete cached files we don't need anymore:
RUN dnf clean all
# Install the Microsoft package key
# This file is downloaded using the hardening_manifest file
ARG KEY_DEPENDENCY=microsoft.asc
COPY ["${KEY_DEPENDENCY}", "/tmp"]
RUN rpm --import /tmp/${KEY_DEPENDENCY}
# Install azure-cli
ARG AZURE_CLI_DEPENDENCY=azure-cli-2.26.1-1.el7.x86_64.rpm
COPY ["${AZURE_CLI_DEPENDENCY}", "/tmp"]
RUN dnf install /tmp/${AZURE_CLI_DEPENDENCY} -y && \
dnf clean all && \
rm -rf /tmp/${AZURE_CLI_DEPENDENCY}
# Register the azure-cli yum repo (printf so ${KEY_DEPENDENCY} expands;
# the original $'...' quoting left the variable unexpanded)
RUN printf '%s\n' \
      '[azure-cli]' \
      'name=Azure CLI' \
      'baseurl=https://packages.microsoft.com/yumrepos/azure-cli' \
      'enabled=1' \
      'gpgcheck=1' \
      "gpgkey=/tmp/${KEY_DEPENDENCY}" \
      >> /etc/yum.repos.d/azure-cli.repo
# Install Terraform
ARG TERRAFORM_DEPENDENCY=terraform.zip
COPY ["${TERRAFORM_DEPENDENCY}", "/tmp"]
RUN unzip /tmp/${TERRAFORM_DEPENDENCY} -d /usr/local/bin/ \
&& rm /tmp/${TERRAFORM_DEPENDENCY}
# Download Terraform providers (plugins)
# Setting the TF_PLUGIN_CACHE_DIR environment variable instructs Terraform to search that folder for plugins first
ENV TF_PLUGIN_CACHE_DIR=/usr/lib/tf-plugins
ARG AZURERM_LOCAL_PATH="${TF_PLUGIN_CACHE_DIR}/registry.terraform.io/hashicorp/azurerm/2.55.0/linux_amd64"
ARG RANDOM_LOCAL_PATH="${TF_PLUGIN_CACHE_DIR}/registry.terraform.io/hashicorp/random/3.1.0/linux_amd64"
ARG TIME_LOCAL_PATH="${TF_PLUGIN_CACHE_DIR}/registry.terraform.io/hashicorp/time/0.7.1/linux_amd64"
RUN mkdir -p ${AZURERM_LOCAL_PATH} \
&& mkdir -p ${RANDOM_LOCAL_PATH} \
&& mkdir -p ${TIME_LOCAL_PATH}
ARG AZURERM_PROVIDER=terraform-provider-azurerm_2.55.0_linux_amd64.zip
COPY ["${AZURERM_PROVIDER}", "/tmp"]
RUN unzip /tmp/${AZURERM_PROVIDER} -d ${AZURERM_LOCAL_PATH} \
&& rm /tmp/${AZURERM_PROVIDER}
ARG RANDOM_PROVIDER=terraform-provider-random_3.1.0_linux_amd64.zip
COPY ["${RANDOM_PROVIDER}", "/tmp"]
RUN unzip /tmp/${RANDOM_PROVIDER} -d ${RANDOM_LOCAL_PATH} \
&& rm /tmp/${RANDOM_PROVIDER}
ARG TIME_PROVIDER=terraform-provider-time_0.7.1_linux_amd64.zip
COPY ["${TIME_PROVIDER}", "/tmp"]
RUN unzip /tmp/${TIME_PROVIDER} -d ${TIME_LOCAL_PATH} \
&& rm /tmp/${TIME_PROVIDER}
# Copy cloud-init script into image
COPY ./scripts/cloud-init.sh /usr/local/bin
# Add repo source files
ARG RELTAG
RUN mkdir /workspaces
RUN mkdir /workspaces/missionlz
COPY --from=intermediate /workspaces/missionlz/missionlz-${RELTAG}/src /workspaces/missionlz/src
# Add environment variables
WORKDIR /workspaces/missionlz/src
# Hardening: Healthcheck
# Check every five minutes that we're able to retrieve file system info within three seconds:
HEALTHCHECK --interval=5m --timeout=3s \
CMD df -h || exit 1
CMD ["/bin/bash", "/usr/local/bin/cloud-init.sh"]
Microsoft Public License (MS-PL)
This license governs use of the accompanying software. If you use the software, you accept this license. If you do not accept the license, do not use the software.
1. Definitions
The terms "reproduce," "reproduction," "derivative works," and "distribution" have the
same meaning here as under U.S. copyright law.
A "contribution" is the original software, or any additions or changes to the software.
A "contributor" is any person that distributes its contribution under this license.
"Licensed patents" are a contributor's patent claims that read directly on its contribution.
2. Grant of Rights
(A) Copyright Grant- Subject to the terms of this license, including the license conditions and limitations in section 3, each contributor grants you a non-exclusive, worldwide, royalty-free copyright license to reproduce its contribution, prepare derivative works of its contribution, and distribute its contribution or any derivative works that you create.
(B) Patent Grant- Subject to the terms of this license, including the license conditions and limitations in section 3, each contributor grants you a non-exclusive, worldwide, royalty-free license under its licensed patents to make, have made, use, sell, offer for sale, import, and/or otherwise dispose of its contribution in the software or derivative works of the contribution in the software.
3. Conditions and Limitations
(A) No Trademark License- This license does not grant you rights to use any contributors' name, logo, or trademarks.
(B) If you bring a patent claim against any contributor over patents that you claim are infringed by the software, your patent license from such contributor to the software ends automatically.
(C) If you distribute any portion of the software, you must retain all copyright, patent, trademark, and attribution notices that are present in the software.
(D) If you distribute any portion of the software in source code form, you may do so only under this license by including a complete copy of this license with your distribution. If you distribute any portion of the software in compiled or object code form, you may only do so under a license that complies with this license.
(E) The software is licensed "as-is." You bear the risk of using it. The contributors give no express warranties, guarantees or conditions. You may have additional consumer rights under your local laws which this license cannot change. To the extent permitted under your local laws, the contributors exclude the implied warranties of merchantability, fitness for a particular purpose and non-infringement.
# Mission LZ
Mission Landing Zone is a highly opinionated template which IT oversight organizations can use to create a cloud management system to deploy Azure environments for their teams. It addresses a narrowly scoped, specific need for an SCCA compliant hub and spoke infrastructure.
Mission LZ is:
- Designed for US Gov mission customers​
- Implements [SCCA](https://docs.microsoft.com/en-us/azure/azure-government/compliance/secure-azure-computing-architecture) requirements following Microsoft's [SACA](https://aka.ms/saca) implementation guidance
- Deployable in commercial, government, and air-gapped Azure clouds
- A narrow scope for a specific common need​
- A simple solution with low configuration​
- Written in Terraform and Linux shell scripts
Mission Landing Zone is the right solution when:
- A simple, secure, and scalable hub and spoke infrastructure is needed
- Various teams need separate, secure cloud environments administered by a central IT team
- There is a need to implement SCCA
- Hosting any workload requiring a secure environment, for example: data warehousing, AI/ML, and containerized applications
Design goals include:
- A simple, minimal set of code that is easy to configure
- Good defaults that allow experimentation and testing in a single subscription
- Deployment via command line or with a user interface
- Uses Azure PaaS products
Our intent is to enable IT Admins to use this software to:
- Test and evaluate the landing zone using a single Azure subscription
- Develop a known good configuration that can be used for production with multiple Azure subscriptions
- Optionally, customize the Terraform deployment configuration to suit specific needs
- Deploy multiple customer workloads in production
## Scope
Mission LZ has the following scope:
- Hub and spoke networking intended to comply with SCCA controls
- Remote access
- Shared services, i.e., services available to all workloads via the networking hub
- Ability to create multiple workloads or team subscriptions
- Compatibility with SCCA compliance (and other compliance frameworks)
- Security using standard Azure tools with sensible defaults
<!-- markdownlint-disable MD033 -->
<!-- allow html for images so that they can be sized -->
<img src="src/docs/images/scope.png" alt="Mission LZ Scope" width="600" />
<!-- markdownlint-enable MD033 -->
## Networking
Networking is set up in a hub and spoke design, separated by tiers: T0 (Identity and Authorization), T1 (Infrastructure Operations), T2 (DevSecOps and Shared Services), and multiple T3s (Workloads). Security can be configured to allow separation of duties between all tiers. Most customers will deploy each tier to a separate Azure subscription, but multiple subscriptions are not required.
<!-- markdownlint-disable MD033 -->
<img src="src/docs/images/networking.png" alt="Mission LZ Networking" width="600" />
<!-- markdownlint-enable MD033 -->
## Deploying the Mission LZ Container
1. On the system running Docker, enter the command below:
   - `export ARM_CLOUD_METADATA_URL=<metadata_url_for_target_cloud>`
2. Using Docker, authenticate to registry1.
3. Run the following commands:
   - `image_name=<copy_pull_link_from_registry1>`
   - `container_name=<name_of_container>`
   - `docker run -it -d --name ${container_name} ${image_name}`
   - `docker exec -it -e ARM_CLOUD_METADATA_URL=${ARM_CLOUD_METADATA_URL} ${container_name} /bin/bash`
After performing Steps 1-3, proceed with the deployment of the Mission Landing Zone architecture by following the documentation located in the folder below:
https://repo1.dso.mil/dsop/microsoft/azure/mission-landing-zone/src/docs
## Getting Started using Mission LZ
See our [Getting Started Guide](src/docs/getting-started.md) in the docs.
## Product Roadmap
See the [Projects](https://github.com/Azure/missionlz/projects) page for the release timeline and feature areas.
Here's what the repo consists of as of May 2021:
<!-- markdownlint-disable MD033 -->
<img src="src/docs/images/missionlz_as_of_may2021.png" alt="Mission LZ as of May 2021" width="600" />
<!-- markdownlint-enable MD033 -->
## Contributing
This project welcomes contributions and suggestions. See our [Contributing Guide](CONTRIBUTING.md) for details.
## Feedback, Support, and How to Contact Us
Please see the [Support and Feedback Guide](SUPPORT.md). To report a security issue please see our [security guidance](./SECURITY.md).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.
---
apiVersion: v1
# The repository name in registry1, excluding /ironbank/
name: "microsoft/azure/mission-landing-zone"
# List of tags to push for the repository in registry1
# The most specific version should be the first tag and will be shown
# on ironbank.dsop.io
tags:
- "mlz-v2021.06"
- "latest"
# Build args passed to Dockerfile ARGs
args:
BASE_IMAGE: "redhat/ubi/ubi8"
BASE_TAG: "8.4"
# Docker image labels
labels:
org.opencontainers.image.title: "mission-landing-zone"
## Human-readable description of the software packaged in the image
org.opencontainers.image.description: "Mission Landing Zone is a template that IT oversight organizations can use to create a cloud management system for quick cloud adoption"
## License(s) under which contained software is distributed
org.opencontainers.image.licenses: "Microsoft Public License (MS-PL)"
## URL to find more information on the image
org.opencontainers.image.url: "https://github.com/Azure/missionlz"
## Name of the distributing entity, organization or individual
org.opencontainers.image.vendor: "opensource"
org.opencontainers.image.version: "mlz-v2021.06"
## Keywords to help with search (ex. "cicd,gitops,golang")
mil.dso.ironbank.image.keywords: "mission,landing,zone,mlz"
## This value can be "opensource" or "commercial"
mil.dso.ironbank.image.type: "opensource"
## Product the image belongs to for grouping multiple images
mil.dso.ironbank.product.name: "mission-landing-zone"
# List of resources to make available to the offline build context
resources:
- url: https://packages.microsoft.com/keys/microsoft.asc
filename: microsoft.asc
validation:
type: sha256
value: 2cfd20a306b2fa5e25522d78f2ef50a1f429d35fd30bd983e2ebffc2b80944fa
- url: https://packages.microsoft.com/yumrepos/azure-cli/azure-cli-2.26.1-1.el7.x86_64.rpm
filename: azure-cli-2.26.1-1.el7.x86_64.rpm
validation:
type: sha256
value: a42784024da7805fda8cd51f80b647ccf54f37437cc686a5d0cc7d00e81b989b
- url: https://releases.hashicorp.com/terraform/1.0.0/terraform_1.0.0_linux_amd64.zip
filename: terraform.zip
validation:
type: sha256
value: 8be33cc3be8089019d95eb8f546f35d41926e7c1e5deff15792e969dde573eb5
- url: https://github.com/Azure/missionlz/archive/refs/tags/v2021.06.0.zip
filename: v2021.06.0.zip
validation:
type: sha256
value: 28ed59538c1e45afdee5e4cbcdab17d976d04389b7700cacc103713cd6e38799
- url: https://releases.hashicorp.com/terraform-provider-azurerm/2.55.0/terraform-provider-azurerm_2.55.0_linux_amd64.zip
filename: terraform-provider-azurerm_2.55.0_linux_amd64.zip
validation:
type: sha256
value: 7e26b4b1e91a608a51169830b26fb26b039b1ec7457b445d98718a3f5eb969ee
- url: https://releases.hashicorp.com/terraform-provider-random/3.1.0/terraform-provider-random_3.1.0_linux_amd64.zip
filename: terraform-provider-random_3.1.0_linux_amd64.zip
validation:
type: sha256
value: d9e13427a7d011dbd654e591b0337e6074eef8c3b9bb11b2e39eaaf257044fd7
- url: https://releases.hashicorp.com/terraform-provider-time/0.7.1/terraform-provider-time_0.7.1_linux_amd64.zip
filename: terraform-provider-time_0.7.1_linux_amd64.zip
validation:
type: sha256
value: 96c3da650bda44b31ba5513e322fd1902d3cfa9cc99129ede70929c71ca74364
# List of project maintainers
maintainers:
- email: "Byron.Boudreaux@microsoft.com"
name: "Byron Boudreaux"
username: "Phydeauxman"
- email: "jeromejansen@microsoft.com"
name: "Jerome Jansen"
username: "jjansen23"
#!/bin/bash
# Build the Mission LZ image locally. The BASE_* build args mirror the
# defaults in the Dockerfile; the pipeline injects its own values from
# hardening_manifest.yaml.
image_name="mission-landing-zone:local"
docker build \
  --build-arg BASE_REGISTRY=registry1.dso.mil \
  --build-arg BASE_IMAGE=ironbank/redhat/ubi/ubi8 \
  --build-arg BASE_TAG=8.4 \
  -t "${image_name}" .
{
"$schema": "https://json-schema.org/draft/2019-09/schema",
"$id": "https://repo1.dsop.io/ironbank-tools/ironbank-pipeline/schema/hardening_manifest.schema.json",
"definitions": {
"printable-characters-without-newlines": {
"type": "string",
"pattern": "^[ -~]*$",
"minLength": 1
},
"printable-characters-without-newlines-or-slashes": {
"type": "string",
"pattern": "^[A-Za-z0-9][ -.0-~]*$",
"minLength": 1
},
"docker-NameRegexp-without-domain": {
"$comment": "https://github.com/docker/distribution/blob/master/reference/regexp.go",
"type": "string",
"pattern": "^[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?(?:(?:/[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?)+)?$"
},
"docker-TagRegexp": {
"$comment": "https://github.com/docker/distribution/blob/master/reference/regexp.go",
"type": "string",
"pattern": "^[\\w][\\w.-]{0,127}$"
},
"docker-TagRegexp-non-latest": {
"$comment": "https://github.com/docker/distribution/blob/master/reference/regexp.go",
"type": "string",
"pattern": "^(?!latest$)[\\w][\\w.-]{0,127}$"
},
"docker-ReferenceRegexp-url": {
"$comment": "https://github.com/docker/distribution/blob/master/reference/regexp.go",
"type": "string",
"pattern": "^docker://((?:(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])(?:(?:\\.(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]))+)?(?::[0-9]+)?/)?[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?(?:(?:/[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?)+)?)(?::([\\w][\\w.-]{0,127}))?(?:@([A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][0-9A-Fa-f]{32,}))?$"
},
"docker-name-and-tag": {
"$comment": "https://github.com/docker/distribution/blob/master/reference/regexp.go",
"type": "string",
"pattern": "^[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?(?:(?:/[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?)+)?:[\\w][\\w.-]{0,127}$"
},
"docker-label-name": {
"$comment": "https://docs.docker.com/config/labels-custom-metadata/",
"type": "string",
"pattern": "^[a-z0-9]([.-]?[a-z0-9]+)*$"
},
"github-ReferenceRegexp-url": {
"$comment": "https://github.com/docker/distribution/blob/master/reference/regexp.go",
"type": "string",
"pattern": "^docker.pkg.github.com/((?:(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])(?:(?:\\.(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]))+)?(?::[0-9]+)?/)?[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?(?:(?:/[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?)+)?)(?::([\\w][\\w.-]{0,127}))?(?:@([A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][0-9A-Fa-f]{32,}))?$"
},
"environment-variable-name": {
"type": "string",
"pattern": "^[a-zA-Z0-9][a-zA-Z0-9_.-]*$"
}
},
"title": "IronBank",
"description": "Metadata surrounding an Iron Bank Container",
"type": "object",
"properties": {
"apiVersion": {
"description": "Version of Iron Bank metadata file",
"type": "string",
"const": "v1"
},
"name": {
"description": "Name of the Iron Bank container",
"$ref": "#/definitions/docker-NameRegexp-without-domain"
},
"tags": {
"description": "Tags to tag an image with when pushed to registry1",
"type": "array",
"items": [
{
"$ref": "#/definitions/docker-TagRegexp-non-latest"
}
],
"additionalItems": {
"$ref": "#/definitions/docker-TagRegexp"
},
"minItems": 1,
"uniqueItems": true
},
"args": {
"description": "Arguments passed to image build",
"type": "object",
"properties": {
"BASE_IMAGE": {
"$comment": "May be an empty string if the Dockerfile does not use this variable",
"oneOf": [
{
"$ref": "#/definitions/docker-NameRegexp-without-domain"
},
{
"const": ""
}
]
},
"BASE_TAG": {
"$comment": "May be an empty string if the Dockerfile does not use this variable",
"oneOf": [
{
"$ref": "#/definitions/docker-TagRegexp"
},
{
"const": ""
}
]
}
},
"additionalProperties": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"propertyNames": {
"$ref": "#/definitions/environment-variable-name"
},
"required": ["BASE_IMAGE", "BASE_TAG"]
},
"labels": {
"description": "Labels added to Iron Bank containers",
"type": "object",
"properties": {
"org.opencontainers.image.title": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"org.opencontainers.image.description": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"org.opencontainers.image.licenses": {
"$comment": "See https://spdx.org/licenses/",
"$ref": "#/definitions/printable-characters-without-newlines"
},
"org.opencontainers.image.url": {
"format": "uri",
"$ref": "#/definitions/printable-characters-without-newlines"
},
"org.opencontainers.image.vendor": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"org.opencontainers.image.version": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"mil.dso.ironbank.image.keywords": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"mil.dso.ironbank.image.type": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"mil.dso.ironbank.product.name": {
"$ref": "#/definitions/printable-characters-without-newlines"
}
},
"propertyNames": {
"$ref": "#/definitions/docker-label-name"
},
"additionalProperties": false,
"required": [
"org.opencontainers.image.description",
"org.opencontainers.image.licenses",
"org.opencontainers.image.title",
"org.opencontainers.image.vendor",
"org.opencontainers.image.version"
]
},
"resources": {
"description": "Resources to download before building the image",
"type": "array",
"items": {
"oneOf": [
{
"type": "object",
"properties": {
"url": {
"type": "string",
"pattern": "^https?://.+$"
},
"filename": {
"$ref": "#/definitions/printable-characters-without-newlines-or-slashes"
},
"validation": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["sha256", "sha512"]
},
"value": {
"type": "string",
"pattern": "^[a-f0-9]+$"
}
},
"additionalProperties": false,
"required": ["type", "value"]
},
"auth": {
"type": "object",
"properties": {
"id": {
"$ref": "#/definitions/environment-variable-name"
},
"type": {
"type": "string",
"const": "basic"
}
},
"additionalProperties": false,
"required": ["id"]
}
},
"additionalProperties": false,
"required": ["url", "filename"]
},
{
"type": "object",
"properties": {
"url": {
"type": "string",
"pattern": "^s3://.+$"
},
"filename": {
"$ref": "#/definitions/printable-characters-without-newlines-or-slashes"
},
"validation": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["sha256", "sha512"]
},
"value": {
"type": "string",
"pattern": "^[a-f0-9]+$"
}
},
"additionalProperties": false,
"required": ["type", "value"]
},
"auth": {
"type": "object",
"properties": {
"id": {
"$ref": "#/definitions/environment-variable-name"
},
"region": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"type": {
"$comment": "aws is left for backwards compatibility. Please use s3 moving forward",
"type": "string",
"enum": ["aws", "s3"]
}
},
"additionalProperties": false,
"required": ["id"]
}
},
"additionalProperties": false,
"required": ["url", "filename"]
},
{
"type": "object",
"properties": {
"url": {
"$ref": "#/definitions/docker-ReferenceRegexp-url"
},
"tag": {
"$ref": "#/definitions/docker-name-and-tag"
},
"auth": {
"type": "object",
"properties": {
"id": {
"$ref": "#/definitions/environment-variable-name"
},
"type": {
"type": "string",
"const": "basic"
}
},
"additionalProperties": false,
"required": ["id"]
}
},
"additionalProperties": false,
"required": ["url", "tag"]
},
{
"type": "object",
"properties": {
"url": {
"$ref": "#/definitions/github-ReferenceRegexp-url"
},
"tag": {
"$ref": "#/definitions/docker-name-and-tag"
}
},
"additionalProperties": false,
"required": ["url"]
}
]
},
"uniqueItems": true
},
"maintainers": {
"description": "Maintainers for this specific container",
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"username": {
"$ref": "#/definitions/printable-characters-without-newlines"
},
"email": {
"$ref": "#/definitions/printable-characters-without-newlines",
"format": "email"
},
"cht_member": {
"type": "boolean"
}
},
"additionalProperties": false,
"required": ["name", "username"]
},
"minItems": 1,
"uniqueItems": true
}
},
"required": ["apiVersion", "name", "tags", "args", "labels", "maintainers"],
"additionalProperties": false
}
#!/bin/bash
###########################################################################################
# This script pulls the CA certificates from the host wireserver, parses them,            #
# and installs them into the system CA trust store. Recommended execution method          #
# is cloud-init or VM custom-data for execution at provisioning time #
# #
# NOTES: #
# Many Linux applications use their own CA bundle instead of the system one. #
# If you are still seeing TLS certificate validation errors, ensure that #
# you have also copied these certificates to the calling application's CA Bundle #
# or identify the environment setting to direct it to use the system CA bundle #
# Example: export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt for python/az cli #
# #
# This script must be run as root or else calls to the wireserver will time out #
# #
# This script will run in a container, but does require sed so that must be installed #
###########################################################################################
metadata=$(curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2019-08-01")
cloudEnvironment=$(echo "$metadata" | grep -oP '(?<=azEnvironment\":\")[^\"]*')
if [[ "$cloudEnvironment" == "USNat" || "$cloudEnvironment" == "USSec" ]]; then
if [[ ! -d /root/AzureCACertificates ]]; then
mkdir -p /root/AzureCACertificates
# http://168.63.129.16 is a constant for the host's wireserver endpoint
certs=$(curl "http://168.63.129.16/machine?comp=acmspackage&type=cacertificates&ext=json")
IFS_backup=$IFS
IFS=$'\r\n'
certNames=($(echo "$certs" | grep -oP '(?<=Name\": \")[^\"]*'))
certBodies=($(echo "$certs" | grep -oP '(?<=CertBody\": \")[^\"]*'))
for i in "${!certBodies[@]}"; do
echo "${certBodies[$i]}" | sed 's/\\r\\n/\n/g' | sed 's/\\//g' > "/root/AzureCACertificates/$(echo "${certNames[$i]}" | sed 's/\.cer$/.crt/')"
done
IFS=$IFS_backup
cp /root/AzureCACertificates/*.crt /etc/pki/ca-trust/source/anchors
update-ca-trust extract
fi
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /etc/bashrc
fi
tail -f /dev/null
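The certificate loop in the script above turns each JSON-escaped certificate body into a PEM file. A self-contained sketch of just that transformation (the sample body below is a hypothetical stand-in, not a real certificate):

```shell
#!/bin/bash
# Literal "\r\n" escape sequences from the wireserver's JSON become real
# newlines, then any remaining backslashes are stripped, yielding PEM text.
body='-----BEGIN CERTIFICATE-----\r\nMIIBsampleonly\r\n-----END CERTIFICATE-----'
printf '%s\n' "$body" | sed 's/\\r\\n/\n/g' | sed 's/\\//g'
```

With GNU sed this prints a three-line PEM block; the real script writes the same output into a `.crt` file under /root/AzureCACertificates.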
#!/bin/bash
# Variables
cloud_storage_suffix=""
cloud_keyvault_suffix=""
sas_expiry_time=""
source_rg_name=""
source_acr_name=""
source_kv_name=""
source_sas_secret=""
source_sa_name=""
source_container_name=""
export_pipeline_name=""
# Create Storage SAS Token
export_sas=?$(az storage container generate-sas \
--name ${source_container_name} \
--account-name ${source_sa_name} \
--expiry ${sas_expiry_time} \
--permissions rwalc \
--https-only \
--output tsv)
# Store Token in Key Vault
az keyvault secret set \
--name ${source_sas_secret} \
--value "${export_sas}" \
--vault-name ${source_kv_name}
# Add transfer extension to azcli
az extension add --source https://acrtransferext.${cloud_storage_suffix}/dist/acrtransfer-1.0.0-py2.py3-none-any.whl
# Create export pipeline
az acr export-pipeline create \
--resource-group ${source_rg_name} \
--registry ${source_acr_name} \
--name ${export_pipeline_name} \
--secret-uri https://${source_kv_name}.${cloud_keyvault_suffix}/secrets/${source_sas_secret} \
--storage-container-uri https://${source_sa_name}.${cloud_storage_suffix}/${source_container_name}
# Get Principal ID created as part of export pipeline
principal_id=$(az acr export-pipeline show \
--resource-group ${source_rg_name} \
--registry ${source_acr_name} \
--name ${export_pipeline_name} \
--query identity.principalId \
--output tsv)
# Get Resource ID of Key Vault
key_vault_id=$(az keyvault show \
--name ${source_kv_name} \
--query id \
--output tsv)
# Assign Key Vault Secrets User role to pipeline principal
az role assignment create \
--role "Key Vault Secrets User" \
--scope ${key_vault_id} \
--assignee-object-id ${principal_id}
#!/bin/bash
# Variables
cloud_storage_suffix=""
cloud_keyvault_suffix=""
sas_expiry_time=""
target_rg_name=""
target_acr_name=""
target_kv_name=""
target_sas_secret=""
target_sa_name=""
target_container_name=""
import_pipeline_name=""
# Create Storage SAS Token
import_sas=?$(az storage container generate-sas \
--name ${target_container_name} \
--account-name ${target_sa_name} \
--expiry ${sas_expiry_time} \
--permissions rwalc \
--https-only \
--output tsv)
# Store Token in Key Vault
az keyvault secret set \
--name ${target_sas_secret} \
--value "${import_sas}" \
--vault-name ${target_kv_name}
# Add transfer extension to azcli
az extension add --source https://acrtransferext.${cloud_storage_suffix}/dist/acrtransfer-1.0.0-py2.py3-none-any.whl
# Create import pipeline
az acr import-pipeline create \
--resource-group ${target_rg_name} \
--registry ${target_acr_name} \
--name ${import_pipeline_name} \
--secret-uri https://${target_kv_name}.${cloud_keyvault_suffix}/secrets/${target_sas_secret} \
--storage-container-uri https://${target_sa_name}.${cloud_storage_suffix}/${target_container_name}
# Get Principal ID created as part of import pipeline
principal_id=$(az acr import-pipeline show \
--resource-group ${target_rg_name} \
--registry ${target_acr_name} \
--name ${import_pipeline_name} \
--query identity.principalId \
--output tsv)
# Get Resource ID of Key Vault
key_vault_id=$(az keyvault show \
--name ${target_kv_name} \
--query id \
--output tsv)
# Assign Key Vault Secrets User role to pipeline principal
az role assignment create \
--role "Key Vault Secrets User" \
--scope ${key_vault_id} \
--assignee-object-id ${principal_id}
#!/bin/bash
# Variables
acr_name=""
image_name=""
image_tag=""
kv_name=""
rg_name=""
container_name=""
container_dns_name=""
acr_sp_pwd_secret_name=""
acr_sp_appid_secret_name=""
arm_endpoint=""
cloud_metadata_api="metadata/endpoints?api-version=2020-06-01"
# Get ACR login server
acr_login_server=$(az acr show \
--name ${acr_name} \
--resource-group ${rg_name} \
--query "loginServer" \
--output tsv)
# Create container instance
az container create \
--name ${container_name} \
--resource-group ${rg_name} \
--image ${acr_login_server}/${image_name}:${image_tag} \
--registry-login-server ${acr_login_server} \
--registry-username $(az keyvault secret show --vault-name ${kv_name} --name ${acr_sp_appid_secret_name} --query value --output tsv) \
--registry-password $(az keyvault secret show --vault-name ${kv_name} --name ${acr_sp_pwd_secret_name} --query value --output tsv) \
--dns-name-label ${container_dns_name} \
--environment-variables "ARM_CLOUD_METADATA_URL=${arm_endpoint}${cloud_metadata_api}" \
--query ipAddress.fqdn
#!/bin/bash
# Variables
image_name=""
container_name=""
mlz_metadatahost=""
cloud_metadata_api="metadata/endpoints?api-version=2020-06-01"
# Create instance from image
docker run -it -d --env ARM_CLOUD_METADATA_URL="${mlz_metadatahost}${cloud_metadata_api}" --name ${container_name} ${image_name}
# Login to running instance
docker exec -it ${container_name} /bin/bash
#!/bin/bash
# Variables
acr_name=""
rg_name=""
image_name=""
image_tag=""
container_name=""
mlz_metadatahost=""
cloud_metadata_api="metadata/endpoints?api-version=2020-06-01"
# Login to Azure registry
acr_login_server=$(az acr show \
--name ${acr_name} \
--resource-group ${rg_name} \
--query "loginServer" \
--output tsv)
az acr login --name ${acr_name}
# Pull down image from ACR
docker pull "${acr_login_server}/${image_name}:${image_tag}"
# Create instance from image
docker run -it -d --env ARM_CLOUD_METADATA_URL="${mlz_metadatahost}${cloud_metadata_api}" --name ${container_name} "${acr_login_server}/${image_name}:${image_tag}"
# Login to running instance
docker exec -it ${container_name} /bin/bash
#!/bin/bash
# Bash "strict mode", to help catch problems and bugs in the shell
# script. Every bash script you write should include this. See
# http://redsymbol.net/articles/unofficial-bash-strict-mode/ for
# details.
set -euo pipefail
# Install security updates, bug fixes and enhancements only.
# --nodocs skips documentation, which we don't need in production
# Docker images.
dnf --nodocs -y upgrade-minimal
# Install a new package, without unnecessary recommended packages:
dnf --nodocs -y install --setopt=install_weak_deps=False \
wget \
python3 \
unzip \
ca-certificates \
sudo \
azure-cli
# Delete cached files we don't need anymore:
dnf clean all
#!/bin/bash
# Bash "strict mode", to help catch problems and bugs in the shell
# script. Every bash script you write should include this. See
# http://redsymbol.net/articles/unofficial-bash-strict-mode/ for
# details.
set -euo pipefail
# Install security updates, bug fixes and enhancements only.
# --nodocs skips documentation, which we don't need in production
# Docker images.
dnf --nodocs -y upgrade-minimal
# Install a new package, without unnecessary recommended packages:
dnf --nodocs -y install --setopt=install_weak_deps=False \
git
# Delete cached files we don't need anymore:
dnf clean all
#!/bin/bash
# Bash "strict mode", to help catch problems and bugs in the shell
# script. Every bash script you write should include this. See
# http://redsymbol.net/articles/unofficial-bash-strict-mode/ for
# details.
set -euo pipefail
# Install security updates, bug fixes and enhancements only.
# --nodocs skips documentation, which we don't need in production
# Docker images.
dnf --nodocs -y upgrade-minimal
# Install a new package, without unnecessary recommended packages:
dnf --nodocs -y install --setopt=install_weak_deps=False \
wget \
python3 \
unzip \
ca-certificates \
sudo \
azure-cli
# Delete cached files we don't need anymore:
dnf clean all
#!/bin/bash
# Variables
acr_name=""
rg_name=""
image_name=""
image_tag="latest"
# Login to Azure registry
acr_login_server=$(az acr show \
--name ${acr_name} \
--resource-group ${rg_name} \
--query "loginServer" \
--output tsv)
az acr login --name ${acr_name}
# Tag image for Azure and push to registry
docker tag "${image_name}:${image_tag}" "${acr_login_server}/${image_name}:${image_tag}"
docker push "${acr_login_server}/${image_name}:${image_tag}"
#!/bin/bash
# Variables
source_rg_name=""
source_acr_name=""
export_pipeline_name=""
pipeline_run_name=""
image_name=""
image_version=""
image_blob_name=""
# Run the export pipeline
az acr pipeline-run create \
--resource-group ${source_rg_name} \
--registry ${source_acr_name} \
--pipeline ${export_pipeline_name} \
--name ${pipeline_run_name} \
--pipeline-type export \
--storage-blob ${image_blob_name} \
--artifacts ${image_name}:${image_version} \
--force-redeploy
#!/bin/bash
# Variables
target_rg_name=""
target_acr_name=""
import_pipeline_name=""
pipeline_run_name=""
image_name=""
image_version=""
image_blob_name=""
# Run the import pipeline
az acr pipeline-run create \
--resource-group ${target_rg_name} \
--registry ${target_acr_name} \
--pipeline ${import_pipeline_name} \
--name ${pipeline_run_name} \
--pipeline-type import \
--storage-blob ${image_blob_name} \
--artifacts ${image_name}:${image_version} \
--force-redeploy
# build
This folder contains scripts that an automation tool can use to apply or destroy the Terraform configurations in this repo.
This is a work in progress. Future work will be done to integrate this into a GitHub Actions workflow.
## Why
Provide an unattended way to ensure the configurations in this repo remain deployable.
## What you need
- Terraform CLI
- Azure CLI
- Deployed MLZ Config resources (Service Principal for deployment, Key Vault)
- An MLZ Config file
- A global.tfvars
- .tfvars for saca-hub, tier-0, tier-1, tier-2
## How
See the root [README's "Configure the Terraform Backend"](../README.md#Configure-the-Terraform-Backend) for how to deploy the MLZ Config resources and create an MLZ Config file.
Today, the global.tfvars file and the .tfvars files for saca-hub and tiers 0-2 are well known and stored elsewhere. Reach out to the team if you need them.
Then, to apply and destroy pass those files as arguments to the relevant script.
There's an [optional argument to display terraform output](#Optionally-display-Terraform-output).
```shell
usage() {
echo "apply_tf.sh: Automation that calls apply terraform given a MLZ configuration and some tfvars"
error_log "usage: apply_tf.sh <mlz config> <mlz.tfvars> <display terraform output (y/n)>"
}
```
```shell
# assuming src/scripts/config/create_required_resources.sh has been run before...
./apply_tf.sh \
./path-to/mlz.config \
./path-to/mlz.tfvars \
y
```
```shell
# assuming src/scripts/config/create_required_resources.sh has been run before...
./destroy_tf.sh \
./path-to/mlz.config \
./path-to/mlz.tfvars \
y
```
### Optionally display Terraform output
There's an optional argument at the end to specify whether or not to display terraform's output. Set it to 'y' if you want to see things as they happen.
By default, if you do not set this argument, terraform output will be sent to /dev/null (to support clean logs in a CI/CD environment) and your logs will look like:
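A minimal sketch of how this conditional redirection can be implemented (the function and variable names here are hypothetical, not necessarily what apply_tf.sh uses):

```shell
#!/bin/bash
# The optional third argument gates whether a wrapped command's output is
# streamed ('y') or discarded, keeping logs clean in a CI/CD environment.
display_output="${3:-n}"
run_step() {
  if [[ "${display_output}" == "y" ]]; then
    "$@"                  # show output as it happens
  else
    "$@" &> /dev/null     # send output to /dev/null for clean logs
  fi
}
run_step echo "Applying saca-hub (1/5)..."
```

Invoked without the third argument, `run_step` silently discards the wrapped command's output; passing `y` streams it through.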
```plaintext
Applying saca-hub (1/5)...
Finished applying saca-hub!
Applying tier-0 (1/5)...
Finished applying tier-0!
Applying tier-1 (1/5)...
Finished applying tier-1!
Applying tier-2 (1/5)...
Finished applying tier-2!
```
## Gotchas
There's wonky behavior with how Log Analytics Workspaces and Azure Monitor diagnostic log settings are deleted at the Azure Resource Manager level.
For example, if you deployed your environment with Terraform, then deleted it with Azure CLI or the Portal, you can end up with orphan/ghost resources that will be deleted at some other unknown time.
To ensure you're able to deploy on-top of existing resources over and over again, __use Terraform to apply and destroy your environment.__