Running with gitlab-runner 13.6.0 (8fa89735)
  on bigbang-public-runner-gitlab-runner-848b4ffbcd-gxfzz pP4YiAQX
section_start:1614170043:resolve_secrets
Resolving secrets
section_end:1614170043:resolve_secrets
section_start:1614170043:prepare_executor
Preparing the "kubernetes" executor
Using Kubernetes namespace: private-bigbang-runner
Using Kubernetes executor with image registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/k3d-builder:afdd9b77 ...
section_end:1614170043:prepare_executor
section_start:1614170043:prepare_script
Preparing environment
Waiting for pod private-bigbang-runner/runner-pp4yiaqx-project-4885-concurrent-0wxzdt to be running, status is Pending
Running on runner-pp4yiaqx-project-4885-concurrent-0wxzdt via bigbang-public-runner-gitlab-runner-848b4ffbcd-gxfzz...
section_end:1614170046:prepare_script
section_start:1614170046:get_sources
Getting source from Git repository
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/platform-one/big-bang/apps/sandbox/podinfo/.git/
Created fresh repository.
Checking out 01b86a4d as master...
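The `section_start:…`/`section_end:…` pairs above are GitLab's collapsible-section markers, which the runner emits around each job stage. A job script can emit its own to fold noisy output; a minimal sketch (the section name `my_section` and header text are arbitrary examples, not from this pipeline):

```shell
#!/bin/sh
# GitLab folds everything between a section_start and a section_end
# marker that share the same section name. The marker format is
# section_start:<unix-timestamp>:<name>, and the trailing \r\033[0K
# keeps the marker itself from rendering as visible text in the log.
section_start() {
  printf 'section_start:%s:%s\r\033[0K%s\n' "$(date +%s)" "$1" "$2"
}
section_end() {
  printf 'section_end:%s:%s\r\033[0K\n' "$(date +%s)" "$1"
}

section_start my_section "Doing some work"
echo "work happens here"
section_end my_section
```

In the raw log (as seen here), the markers print as plain text; the GitLab web UI is what turns them into fold controls.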
Skipping Git submodules setup
section_end:1614170047:get_sources
section_start:1614170047:step_script
Executing "step_script" stage of the job script
$ docker run -d -p 53:53/udp -p 53:53 registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/go-dnsmasq:87fca1d1
Unable to find image 'registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/go-dnsmasq:87fca1d1' locally
87fca1d1: Pulling from platform-one/big-bang/pipeline-templates/pipeline-templates/go-dnsmasq
a6b97b4963f5: Pulling fs layer
13948a011eec: Pulling fs layer
420065a186b9: Pulling fs layer
3b4ee3c09e52: Pulling fs layer
13948a011eec: Verifying Checksum
13948a011eec: Download complete
420065a186b9: Verifying Checksum
420065a186b9: Download complete
3b4ee3c09e52: Verifying Checksum
a6b97b4963f5: Download complete
a6b97b4963f5: Pull complete
13948a011eec: Pull complete
420065a186b9: Pull complete
3b4ee3c09e52: Pull complete
Digest: sha256:9d8416babf25e66a6a7e47a545b98a36e5d4fb8aeadf083ed5bd4d6e2b80e923
Status: Downloaded newer image for registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/go-dnsmasq:87fca1d1
ad677b78987b9a0b9c244d8b6736c57a6fc755dab609d2de6a34bab7bfaab5e4
$ echo "nameserver 127.0.0.1" >> /etc/resolv.conf
$ k3d cluster create ${CI_PROJECT_NAME} --servers 1 --k3s-server-arg "--disable=metrics-server" --k3s-server-arg "--disable=traefik" -p 80:80@loadbalancer -p 443:443@loadbalancer --wait
INFO[0000] Created network 'k3d-podinfo'
INFO[0000] Created volume 'k3d-podinfo-images'
INFO[0001] Creating node 'k3d-podinfo-server-0'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.19.4-k3s1'
INFO[0011] Creating LoadBalancer 'k3d-podinfo-serverlb'
INFO[0012] Pulling image 'docker.io/rancher/k3d-proxy:v3.4.0'
INFO[0016] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0019] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
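The cluster-create step above names the cluster after the project (`${CI_PROJECT_NAME}`), disables the k3s add-ons that Big Bang replaces, and publishes 80/443 through the k3d load balancer. A sketch of how that invocation is assembled, using a hypothetical `build_k3d_args` helper (`k3d` itself is not executed here, only the command line is composed):

```shell
#!/bin/sh
# Compose the k3d invocation used by the job from its parts.
# CI_PROJECT_NAME is one of GitLab's predefined variables; the
# default below is only for local experimentation.
CI_PROJECT_NAME="${CI_PROJECT_NAME:-podinfo}"

build_k3d_args() {
  # One server node; metrics-server and traefik disabled (the test
  # installs Istio for ingress instead); HTTP/HTTPS published via the
  # k3d load balancer; --wait blocks until the cluster is ready.
  echo "cluster create ${CI_PROJECT_NAME}" \
       "--servers 1" \
       "--k3s-server-arg --disable=metrics-server" \
       "--k3s-server-arg --disable=traefik" \
       "-p 80:80@loadbalancer -p 443:443@loadbalancer --wait"
}

echo "k3d $(build_k3d_args)"
```

The `docker run … go-dnsmasq` step just before it, plus the `nameserver 127.0.0.1` line appended to `/etc/resolv.conf`, points the runner's DNS at that local dnsmasq container so test hostnames resolve inside the job.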
INFO[0019] Cluster 'podinfo' created successfully!
INFO[0019] You can now use it like this:
kubectl cluster-info
$ kubectl wait --for=condition=available --timeout 600s -A deployment --all > /dev/null
$ kubectl wait --for=condition=ready --timeout 600s -A pods --all --field-selector status.phase=Running > /dev/null
$ if [ ! -z ${PROJECT_NAME} ]; then # collapsed multi-line command
namespace/podinfo created
secret/private-registry created
secret/private-registry-mil created
$ git clone -b ${PIPELINE_REPO_BRANCH} ${PIPELINE_REPO} ${PIPELINE_REPO_DESTINATION}
Cloning into '../pipeline-repo'...
$ source ${YAML_PARSE_PATH}
$ source ${WAIT_PATH}
$ if [[ "${CI_PROJECT_NAME}" != *"istio"* ]]; then # collapsed multi-line command
- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod. Waiting for Deployment/istio-system/istiod
✔ Istiod installed
- Processing resources for Ingress gateways.
- Processing resources for Ingress gateways. Waiting for Deployment/istio-system/istio-ingressgat...
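The repeated `kubectl wait` calls above block until every deployment and pod reports ready, or a timeout expires. The same poll-until-ready shape can be sketched in plain shell; the readiness probe here is a stand-in file check rather than a real `kubectl` call, so the loop can run anywhere:

```shell
#!/bin/sh
# Generic poll-until-ready loop, mirroring what `kubectl wait` does:
# re-check a condition until it holds or a deadline passes.
# Usage: wait_for <timeout-seconds> <probe-command> [args...]
wait_for() {
  timeout="$1"; shift   # seconds to wait before giving up
  elapsed=0
  until "$@"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
}

# Example: succeed once a sentinel file appears (placeholder probe).
( sleep 2; touch "/tmp/ready.$$" ) &
wait_for 10 test -e "/tmp/ready.$$" && echo "condition met"
rm -f "/tmp/ready.$$"
```

In the real job, `kubectl wait` does this server-side against resource conditions (`available`, `ready`, `established`), which is why the pipeline can use long 600s timeouts without busy-polling the API.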
✔ Ingress gateways installed
- Pruning removed resources
✔ Installation complete
$ if [ -f "tests/main-test-gateway.yaml" ]; then # collapsed multi-line command
Generating a RSA private key
....+++++
.....+++++
writing new private key to 'tls.key'
-----
secret/wildcard-cert created
gateway.networking.istio.io/main created
$ if [ -f "tests/dependencies.yaml" ]; then # collapsed multi-line command
$ sleep 10
$ kubectl wait --for=condition=established --timeout 60s -A crd --all > /dev/null
$ if [ -f ${dep_repo_folder}/tests/wait.sh ]; then # collapsed multi-line command
$ wait_sts
$ kubectl wait --for=condition=available --timeout 600s -A deployment --all > /dev/null
$ kubectl wait --for=condition=ready --timeout 600s -A pods --all --field-selector status.phase=Running > /dev/null
$ if [ -f "tests/test-sysctl-mod.yml" ]; then # collapsed multi-line command
$ echo "Package install"
Package install
$ if [ -f "tests/test-values.yml" ] ; then # collapsed multi-line command
Helm installing podinfo/chart into podinfo namespace using podinfo/tests/test-values.yml for values
NAME: podinfo
LAST DEPLOYED: Wed Feb 24 12:35:21 2021
NAMESPACE: podinfo
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl -n podinfo port-forward deploy/podinfo 8080:9898
$ sleep 10
$ kubectl wait --for=condition=established --timeout 60s -A crd --all > /dev/null
$ if [ -f tests/wait.sh ]; then # collapsed multi-line command
$ wait_sts
$ kubectl wait --for=condition=available --timeout 600s -A deployment --all > /dev/null
$ kubectl wait --for=condition=ready --timeout 600s -A pods --all --field-selector status.phase=Running > /dev/null
$ echo "Package tests"
Package tests
$ if [ $CI_PROJECT_NAME == "istio-system" ]; then # collapsed multi-line command
$ if [ ! -z $(kubectl get services -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') ] && [ ! -z $(kubectl get vs -A -o jsonpath='{.items[0].spec.hosts[0]}') ]; then # collapsed multi-line command
$ if [ -f "tests/cypress.json" ]; then # collapsed multi-line command
====================================================================================================

  (Run Starting)

tput: No value for $TERM and no -T specified

  ┌────────────────────────────────────────────────────────────┐
  │ Cypress:    6.1.0                                          │
  │ Browser:    Electron 87 (headless)                         │
  │ Specs:      1 found (podinfo-health.spec.js)               │
  └────────────────────────────────────────────────────────────┘

────────────────────────────────────────────────────────────────────────────────────────────────────

  Running:  podinfo-health.spec.js                                                          (1 of 1)

  Basic Podinfo
    ✓ Check Podinfo is accessible (1728ms)

  1 passing (3s)

  (Results)

  ┌────────────────────────────────────────────────────────────┐
  │ Tests:        1                                            │
  │ Passing:      1                                            │
  │ Failing:      0                                            │
  │ Pending:      0                                            │
  │ Skipped:      0                                            │
  │ Screenshots:  0                                            │
  │ Video:        true                                         │
  │ Duration:     3 seconds                                    │
  │ Spec Ran:     podinfo-health.spec.js                       │
  └────────────────────────────────────────────────────────────┘

  (Video)

  - Started processing: Compressing to 32 CRF
  - Finished processing: /builds/platform-one/big-bang/apps/sandbox/podinfo/tests/cypress/videos/podinfo-health.spec.js.mp4 (0 seconds)

tput: No value for $TERM and no -T specified

====================================================================================================

  (Run Finished)

  Spec                                Tests  Passing  Failing  Pending  Skipped
  ┌────────────────────────────────────────────────────────────────────────────┐
  │ ✔ podinfo-health.spec.js  00:03      1        1        -        -        - │
  └────────────────────────────────────────────────────────────────────────────┘
  ✔ All specs passed!
    00:03        1        1        -        -        -
$ touch $CI_PROJECT_DIR/success
section_end:1614170148:step_script
section_start:1614170148:after_script
Running after_script
Running after script...
$ if [ -e success ]; then # collapsed multi-line command
Job Succeeded
$ k3d cluster delete ${CI_PROJECT_NAME}
INFO[0000] Deleting cluster 'podinfo'
INFO[0000] Deleted k3d-podinfo-serverlb
INFO[0001] Deleted k3d-podinfo-server-0
INFO[0001] Deleting cluster network '5f7e987cf019b856ac41da74b2da8459e1aed8136f89835e0cb8cd0969b48175'
INFO[0001] Deleting image volume 'k3d-podinfo-images'
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster podinfo!
section_end:1614170150:after_script
section_start:1614170150:upload_artifacts_on_success
Uploading artifacts for successful job
Uploading artifacts...
WARNING: tests/cypress/screenshots: no matching files
tests/cypress/videos: found 2 matching files and directories
Uploading artifacts as "archive" to coordinator... ok  id=2143821 responseStatus=201 Created token=zSNZuMfb
section_end:1614170151:upload_artifacts_on_success
section_start:1614170151:cleanup_file_variables
Cleaning up file based variables
section_end:1614170151:cleanup_file_variables
Job succeeded
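The final `touch $CI_PROJECT_DIR/success` followed by the `if [ -e success ]` check in `after_script` is a success-sentinel pattern: GitLab runs `after_script` even when the main script fails, so the sentinel file is how the cleanup stage tells the two outcomes apart. A minimal sketch of the pattern, assuming a temporary `workdir` in place of the real `$CI_PROJECT_DIR`:

```shell
#!/bin/sh
# Success-sentinel pattern: the main script only reaches the final
# `touch` if every earlier command succeeded, so after_script can
# distinguish success from failure even though it always runs.
workdir="$(mktemp -d)"

main_script() {
  # ... test steps would run here; any failure aborts before touch
  touch "${workdir}/success"
}

after_script() {
  if [ -e "${workdir}/success" ]; then
    echo "Job Succeeded"
  else
    echo "Job Failed"
  fi
  # teardown happens here regardless of outcome, which is why the
  # real job's `k3d cluster delete` lives in after_script
}

main_script
after_script
rm -rf "${workdir}"
```

This also explains the artifact-upload section at the end: cluster deletion and artifact collection both run whether or not the tests passed, while the "Job Succeeded" line depends on the sentinel.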