Running with gitlab-runner 13.6.0 (8fa89735)
  on bigbang-public-runner-gitlab-runner-848b4ffbcd-gxfzz pP4YiAQX
section_start:1615231172:resolve_secrets
Resolving secrets
section_end:1615231172:resolve_secrets
section_start:1615231172:prepare_executor
Preparing the "kubernetes" executor
Using Kubernetes namespace: private-bigbang-runner
Using Kubernetes executor with image registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/k3d-builder:afdd9b77 ...
section_end:1615231172:prepare_executor
section_start:1615231172:prepare_script
Preparing environment
Waiting for pod private-bigbang-runner/runner-pp4yiaqx-project-3874-concurrent-0nqmk6 to be running, status is Pending
Running on runner-pp4yiaqx-project-3874-concurrent-0nqmk6 via bigbang-public-runner-gitlab-runner-848b4ffbcd-gxfzz...
section_end:1615231175:prepare_script
section_start:1615231175:get_sources
Getting source from Git repository
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/platform-one/big-bang/apps/developer-tools/haproxy/.git/
Created fresh repository.
Checking out 427a56fb as test-values...
Skipping Git submodules setup
section_end:1615231176:get_sources
section_start:1615231176:step_script
Executing "step_script" stage of the job script
$ docker run -d -p 53:53/udp -p 53:53 registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/go-dnsmasq:87fca1d1
Unable to find image 'registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/go-dnsmasq:87fca1d1' locally
87fca1d1: Pulling from platform-one/big-bang/pipeline-templates/pipeline-templates/go-dnsmasq
a6b97b4963f5: Pulling fs layer
13948a011eec: Pulling fs layer
420065a186b9: Pulling fs layer
3b4ee3c09e52: Pulling fs layer
3b4ee3c09e52: Waiting
13948a011eec: Download complete
420065a186b9: Download complete
3b4ee3c09e52: Download complete
a6b97b4963f5: Verifying Checksum
a6b97b4963f5: Download complete
a6b97b4963f5: Pull complete
13948a011eec: Pull complete
420065a186b9: Pull complete
3b4ee3c09e52: Pull complete
Digest: sha256:9d8416babf25e66a6a7e47a545b98a36e5d4fb8aeadf083ed5bd4d6e2b80e923
Status: Downloaded newer image for registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/go-dnsmasq:87fca1d1
9f030e157d02932982f758101d5e2f1994cf6e18f0190fb767e6770ed76b1a22
$ echo "nameserver 127.0.0.1" >> /etc/resolv.conf
$ k3d cluster create ${CI_PROJECT_NAME} --servers 1 --k3s-server-arg "--disable=metrics-server" --k3s-server-arg "--disable=traefik" -p 80:80@loadbalancer -p 443:443@loadbalancer --wait
INFO[0000] Created network 'k3d-haproxy'
INFO[0000] Created volume 'k3d-haproxy-images'
INFO[0001] Creating node 'k3d-haproxy-server-0'
INFO[0004] Pulling image 'docker.io/rancher/k3s:v1.19.4-k3s1'
INFO[0015] Creating LoadBalancer 'k3d-haproxy-serverlb'
INFO[0016] Pulling image 'docker.io/rancher/k3d-proxy:v3.4.0'
INFO[0019] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0022] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0022] Cluster 'haproxy' created successfully!
INFO[0023] You can now use it like this:
kubectl cluster-info
$ kubectl wait --for=condition=available --timeout 600s -A deployment --all > /dev/null
$ kubectl wait --for=condition=ready --timeout 600s -A pods --all --field-selector status.phase=Running > /dev/null
$ if [ ! -z ${PROJECT_NAME} ]; then # collapsed multi-line command
namespace/haproxy created
secret/private-registry created
secret/private-registry-mil created
$ git clone -b ${PIPELINE_REPO_BRANCH} ${PIPELINE_REPO} ${PIPELINE_REPO_DESTINATION}
Cloning into '../pipeline-repo'...
$ source ${YAML_PARSE_PATH}
$ source ${WAIT_PATH}
$ if [[ "${CI_PROJECT_NAME}" != *"istio"* ]]; then # collapsed multi-line command
- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod. Waiting for Deployment/istio-system/istiod
✔ Istiod installed
- Processing resources for Ingress gateways.
- Processing resources for Ingress gateways. Waiting for Deployment/istio-system/istio-ingressgat...
✔ Ingress gateways installed
- Pruning removed resources
✔ Installation complete
$ if [ -f "tests/main-test-gateway.yaml" ]; then # collapsed multi-line command
Generating a RSA private key
...............................+++++
............+++++
writing new private key to 'tls.key'
-----
secret/wildcard-cert created
gateway.networking.istio.io/main created
$ if [ -f "tests/dependencies.yaml" ]; then # collapsed multi-line command
$ sleep 10
$ kubectl wait --for=condition=established --timeout 60s -A crd --all > /dev/null
$ if [ -f ${dep_repo_folder}/tests/wait.sh ]; then # collapsed multi-line command
$ wait_sts
$ kubectl wait --for=condition=available --timeout 600s -A deployment --all > /dev/null
$ kubectl wait --for=condition=ready --timeout 600s -A pods --all --field-selector status.phase=Running > /dev/null
$ if [ -f "tests/test-sysctl-mod.yml" ]; then # collapsed multi-line command
$ echo "Package install"
Package install
$ if [ -f "tests/test-values.yml" ] ; then # collapsed multi-line command
Helm installing haproxy/chart into haproxy namespace using haproxy/tests/test-values.yml for values
Error: timed out waiting for the condition
section_end:1615231854:step_script
section_start:1615231854:after_script
Running after_script
Running after script...
$ if [ -e success ]; then # collapsed multi-line command
Job Failed
Printing Debug Logs
kubectl get all -A
NAMESPACE      NAME                                         READY   STATUS             RESTARTS   AGE
kube-system    pod/local-path-provisioner-7ff9579c6-ndwzj   1/1     Running            0          10m
kube-system    pod/coredns-66c464876b-fsh78                 1/1     Running            0          10m
istio-system   pod/istiod-6df7c99878-wglgv                  1/1     Running            0          10m
istio-system   pod/svclb-istio-ingressgateway-76xm8         5/5     Running            1          10m
istio-system   pod/istio-ingressgateway-748fbb4988-z59bn    1/1     Running            0          10m
haproxy        pod/haproxy-58784b49f9-nc28s                 0/1     CrashLoopBackOff   6          10m

NAMESPACE      NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                       AGE
default        service/kubernetes             ClusterIP      10.43.0.1       <none>        443/TCP                                                                       11m
kube-system    service/kube-dns               ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP                                                        11m
istio-system   service/istiod                 ClusterIP      10.43.246.16    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                         10m
istio-system   service/istio-ingressgateway   LoadBalancer   10.43.153.92    172.18.0.2    15021:31576/TCP,80:30293/TCP,443:31881/TCP,15012:32179/TCP,15443:30146/TCP   10m
haproxy        service/haproxy                ClusterIP      10.43.132.135   <none>        8080/TCP,8443/TCP,10024/TCP                                                   10m

NAMESPACE      NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
istio-system   daemonset.apps/svclb-istio-ingressgateway   1         1         1       1            1           <none>          10m

NAMESPACE      NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system    deployment.apps/local-path-provisioner   1/1     1            1           11m
kube-system    deployment.apps/coredns                  1/1     1            1           11m
istio-system   deployment.apps/istiod                   1/1     1            1           10m
istio-system   deployment.apps/istio-ingressgateway     1/1     1            1           10m
haproxy        deployment.apps/haproxy                  0/1     1            0           10m

NAMESPACE      NAME                                               DESIRED   CURRENT   READY   AGE
kube-system    replicaset.apps/local-path-provisioner-7ff9579c6   1         1         1       10m
kube-system    replicaset.apps/coredns-66c464876b                 1         1         1       10m
istio-system   replicaset.apps/istiod-6df7c99878                  1         1         1       10m
istio-system   replicaset.apps/istio-ingressgateway-748fbb4988    1         1         1       10m
haproxy        replicaset.apps/haproxy-58784b49f9                 1         1         0       10m

NAMESPACE      NAME                                                       REFERENCE                         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
istio-system   horizontalpodautoscaler.autoscaling/istiod                 Deployment/istiod                 <unknown>/80%   1         5         1          10m
istio-system   horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   <unknown>/80%   1         5         1          10m
$ k3d cluster delete ${CI_PROJECT_NAME}
INFO[0000] Deleting cluster 'haproxy'
INFO[0000] Deleted k3d-haproxy-serverlb
INFO[0001] Deleted k3d-haproxy-server-0
INFO[0001] Deleting cluster network 'af2795554349b0f3bbdfec92236c2205c7c5db4791e5fc55efedb339e4a2973c'
INFO[0001] Deleting image volume 'k3d-haproxy-images'
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster haproxy!
section_end:1615231856:after_script
section_start:1615231856:upload_artifacts_on_failure
Uploading artifacts for failed job
Uploading artifacts...
WARNING: tests/cypress/screenshots: no matching files
WARNING: tests/cypress/videos: no matching files
ERROR: No files to upload
section_end:1615231857:upload_artifacts_on_failure
section_start:1615231857:cleanup_file_variables
Cleaning up file based variables
section_end:1615231857:cleanup_file_variables
ERROR: Job failed: command terminated with exit code 1
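The `section_start:<unix-ts>:<name>` / `section_end:<unix-ts>:<name>` markers scattered through the raw log are GitLab's job-section delimiters. Pairing each start with its matching end shows where the job spent its time: here `step_script` runs from 1615231176 to 1615231854, roughly 11 minutes, consistent with the Helm install waiting out its timeout. A minimal sketch of that pairing (the regex and helper are illustrative, not part of the pipeline):

```python
import re

# Matches GitLab raw-log markers, e.g. "section_start:1615231176:step_script".
MARKER = re.compile(r"section_(start|end):(\d+):([\w.]+)")

def section_durations(log: str) -> dict:
    """Pair section_start/section_end markers and return seconds spent per section."""
    starts = {}
    durations = {}
    for kind, ts, name in MARKER.findall(log):
        if kind == "start":
            starts[name] = int(ts)
        else:
            # Unmatched ends fall back to a zero duration rather than raising.
            durations[name] = int(ts) - starts.pop(name, int(ts))
    return durations

log = """
section_start:1615231176:step_script
... job output ...
section_end:1615231854:step_script
"""
print(section_durations(log))  # {'step_script': 678}
```

Running this over the full log would show every other section finishing in a second or two, which narrows the investigation to the `step_script` stage.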
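In the debug dump above, the only failing workload is `pod/haproxy-58784b49f9-nc28s`, stuck at 0/1 ready in `CrashLoopBackOff` with 6 restarts, which is why both the Helm install and the `kubectl wait --for=condition=available` call timed out. A small filter over `kubectl get pods`-style output can surface such pods automatically; this sketch assumes the whitespace-separated column layout shown in the table above and is not part of the pipeline scripts:

```python
def unhealthy_pods(table: str) -> list:
    """Return (namespace, pod, status) for pods not fully ready and Running."""
    rows = []
    for line in table.strip().splitlines()[1:]:  # skip the header row
        ns, name, ready, status, *_ = line.split()
        have, want = ready.split("/")            # e.g. "0/1" -> ("0", "1")
        if status != "Running" or have != want:
            rows.append((ns, name, status))
    return rows

table = """\
NAMESPACE      NAME                                     READY   STATUS             RESTARTS   AGE
kube-system    local-path-provisioner-7ff9579c6-ndwzj   1/1     Running            0          10m
haproxy        haproxy-58784b49f9-nc28s                 0/1     CrashLoopBackOff   6          10m
"""
print(unhealthy_pods(table))  # [('haproxy', 'haproxy-58784b49f9-nc28s', 'CrashLoopBackOff')]
```

From there the usual next step would be `kubectl describe` and `kubectl logs --previous` on the flagged pod to see why its container keeps exiting.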