Running with gitlab-runner 14.1.0 (8925d9a0)
  on gitlab-runners-bigbang-gl-packages-privileged-gitlab-runnelz84t TzvAZ4Y8
Resolving secrets
Preparing the "kubernetes" executor
Using Kubernetes namespace: gitlab-runners
Using Kubernetes executor with image registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/k3d-builder:0.0.5 ...
Using attach strategy to execute scripts...
Preparing environment
Waiting for pod gitlab-runners/runner-tzvaz4y8-project-2489-concurrent-0jt5dc to be running, status is Pending
Running on runner-tzvaz4y8-project-2489-concurrent-0jt5dc via gitlab-runners-bigbang-gl-packages-privileged-gitlab-runnelz84t...
Getting source from Git repository
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/TzvAZ4Y8/0/platform-one/big-bang/apps/application-utilities/minio/.git/
Created fresh repository.
Checking out 3d833297 as refs/merge-requests/34/head...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ if [ -z ${PIPELINE_REPO_BRANCH} ]; then # collapsed multi-line command
$ git clone -b ${PIPELINE_REPO_BRANCH} ${PIPELINE_REPO} ${PIPELINE_REPO_DESTINATION}
Cloning into '../pipeline-repo'...
$ source ${WAIT_PATH}
$ docker network create ${CI_JOB_ID} --driver=bridge -o "com.docker.network.driver.mtu"="1450"
27ac57e149d56c8fcce283e38853ee67556f9cf5b2924937cbb100aba13c8880
$ k3d cluster create ${CI_JOB_ID} --config ${K3D_CONFIG_PATH} --network ${CI_JOB_ID}
INFO[0000] Using config file ../pipeline-repo/jobs/k3d-ci/config.yaml
INFO[0000] Prep: Network
INFO[0000] Network with name '5171981' already exists with ID '27ac57e149d56c8fcce283e38853ee67556f9cf5b2924937cbb100aba13c8880'
INFO[0000] Created volume 'k3d-5171981-images'
INFO[0001] Creating node 'k3d-5171981-server-0'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.20.4-k3s1'
INFO[0004] Creating LoadBalancer 'k3d-5171981-serverlb'
INFO[0009] Pulling image 'docker.io/rancher/k3d-proxy:v4.3.0'
INFO[0011] Starting cluster '5171981'
INFO[0011] Starting servers...
INFO[0011] Starting Node 'k3d-5171981-server-0'
INFO[0017] Starting agents...
INFO[0017] Starting helpers...
INFO[0017] Starting Node 'k3d-5171981-serverlb'
INFO[0017] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0020] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0020] Cluster '5171981' created successfully!
INFO[0020] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0020] You can now use it like this:
kubectl config use-context k3d-5171981
kubectl cluster-info
$ until kubectl get deployment coredns -n kube-system -o go-template='{{.status.availableReplicas}}' | grep -v -e ''; do sleep 1s; done
1
$ if [ ! -z ${PROJECT_NAME} ]; then # collapsed multi-line command
namespace/minio created
secret/private-registry created
secret/private-registry-mil created
$ if [[ "${CI_PROJECT_NAME}" != *"istio"* ]]; then # collapsed multi-line command
- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod.
Waiting for Deployment/istio-system/istiod
✔ Istiod installed
- Processing resources for Ingress gateways.
- Processing resources for Ingress gateways.
Waiting for Deployment/istio-system/istio-ingressgat...
✔ Ingress gateways installed
- Pruning removed resources
✔ Installation complete
namespace/istio-system labeled
$ if [[ "${PACKAGE_NAMESPACE}" != "istio-operator" ]]; then # collapsed multi-line command
Generating a RSA private key
......+++++
.........................................+++++
writing new private key to 'tls.key'
-----
secret/wildcard-cert created
$ if [ -f "tests/main-test-gateway.yaml" ]; then # collapsed multi-line command
$ if [ -f "tests/dependencies.yaml" ]; then # collapsed multi-line command
Cloning default branch from https://repo1.dso.mil/platform-one/big-bang/apps/application-utilities/minio-operator.git
Cloning into 'repos/dependencyname'...
Installing dependency: repos/dependencyname into minio-operator namespace
namespace/minio-operator created
secret/private-registry created
secret/private-registry-mil created
Helm installing repos/dependencyname/chart into minio-operator namespace using repos/dependencyname/tests/test-values.yaml for values
NAME: dependencyname
LAST DEPLOYED: Tue Jul 27 23:28:37 2021
NAMESPACE: minio-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the JWT for logging in to the console:
   kubectl get secret $(kubectl get serviceaccount console-sa --namespace minio-operator -o jsonpath="{.secrets[0].name}") --namespace minio-operator -o jsonpath="{.data.token}" | base64 --decode
2. Get the Operator Console URL by running these commands:
   kubectl --namespace minio-operator port-forward svc/console 9090:9090
   echo "Visit the Operator Console at http://127.0.0.1:9090"
$ sleep 10
$ kubectl wait --for=condition=established --timeout 60s -A crd --all > /dev/null
$ if [ -f tests/dependencies.yaml ]; then # collapsed multi-line command
$ wait_sts
$ wait_daemonset
$ kubectl wait --for=condition=available --timeout 600s -A deployment --all > /dev/null
$ kubectl wait --for=condition=ready --timeout 600s -A pods --all --field-selector status.phase=Running > /dev/null
$ echo "Package install"
Package install
$ if [ ! -z ${PROJECT_NAME} ]; then # collapsed multi-line command
$ if [ $(ls -1 tests/test-values.y*ml 2>/dev/null | wc -l) -gt 0 ]; then # collapsed multi-line command
Helm installing minio/chart into minio namespace using minio/tests/test-values.yaml for values
NAME: minio
LAST DEPLOYED: Tue Jul 27 23:28:57 2021
NAMESPACE: minio
STATUS: deployed
REVISION: 1
$ sleep 10
$ kubectl wait --for=condition=established --timeout 60s -A crd --all > /dev/null
$ if [ -f tests/wait.sh ]; then # collapsed multi-line command
$ wait_sts
$ wait_daemonset
$ kubectl wait --for=condition=available --timeout 600s -A deployment --all > /dev/null
$ kubectl wait --for=condition=ready --timeout 600s -A pods --all --field-selector status.phase=Running > /dev/null
$ echo "Package tests"
Package tests
$ if [ ! -z $(kubectl get services -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}' &> /dev/null) ] && [ ! -z $(kubectl get vs -A -o jsonpath='{.items[0].spec.hosts[0]}' &> /dev/null) ]; then # collapsed multi-line command
$ if [ -f "tests/cypress.json" ]; then # collapsed multi-line command
$ if [ -d "chart/templates/tests" ]; then # collapsed multi-line command
NAME: minio
LAST DEPLOYED: Tue Jul 27 23:28:57 2021
NAMESPACE: minio
STATUS: deployed
REVISION: 1
TEST SUITE: minio-instance-cypress-sa
Last Started: Tue Jul 27 23:29:11 2021
Last Completed: Tue Jul 27 23:29:11 2021
Phase: Succeeded
TEST SUITE: minio-instance-cypress-config
Last Started: Tue Jul 27 23:29:11 2021
Last Completed: Tue Jul 27 23:29:11 2021
Phase: Succeeded
TEST SUITE: minio-instance-script-config
Last Started: Tue Jul 27 23:29:11 2021
Last Completed: Tue Jul 27 23:29:11 2021
Phase: Succeeded
TEST SUITE: minio-instance-cypress-role
Last Started: Tue Jul 27 23:29:11 2021
Last Completed: Tue Jul 27 23:29:11 2021
Phase: Succeeded
TEST SUITE: minio-instance-cypress-rolebinding
Last Started: Tue Jul 27 23:29:11 2021
Last Completed: Tue Jul 27 23:29:11 2021
Phase: Succeeded
TEST SUITE: minio-instance-cypress-test
Last Started: Tue Jul 27 23:29:11 2021
Last Completed: Tue Jul 27 23:30:06 2021
Phase: Failed
Error: pod minio-instance-cypress-test failed
***** Start Helm Test Logs *****
====================================================================================================
  (Run Starting)
  ┌────────────────────────────────────────────────────────────┐
  │ Cypress:    5.0.0                                          │
  │ Browser:    Chrome 83 (headless)                           │
  │ Specs:      2 found (minio-health.spec.js, minio-login.js) │
  └────────────────────────────────────────────────────────────┘
────────────────────────────────────────────────────────────────────────────────────────────────────
  Running:  minio-health.spec.js                                                            (1 of 2)
Browserslist: caniuse-lite is outdated. Please run: npx browserslist@latest --update-db
  Basic Minio
    1) Check Minio UI is accessible
  0 passing (7s)
  1 failing
  1) Basic Minio
       Check Minio UI is accessible:
     CypressError: `cy.visit()` failed trying to load: http://minio/
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
  > Error: connect ECONNREFUSED 10.43.181.32:80
Common situations why this would fail:
  - you don't have internet access
  - you forgot to run / boot your web server
  - your web server isn't accessible
  - you have weird network configuration settings on your computer
      at http://localhost:42325/__cypress/runner/cypress_runner.js:157498:23
      at visitFailedByErr (http://localhost:42325/__cypress/runner/cypress_runner.js:156857:12)
      at http://localhost:42325/__cypress/runner/cypress_runner.js:157497:11
      at tryCatcher (http://localhost:42325/__cypress/runner/cypress_runner.js:9852:23)
      at Promise._settlePromiseFromHandler (http://localhost:42325/__cypress/runner/cypress_runner.js:7787:31)
      at Promise._settlePromise (http://localhost:42325/__cypress/runner/cypress_runner.js:7844:18)
      at Promise._settlePromise0 (http://localhost:42325/__cypress/runner/cypress_runner.js:7889:10)
      at Promise._settlePromises (http://localhost:42325/__cypress/runner/cypress_runner.js:7965:18)
      at _drainQueueStep (http://localhost:42325/__cypress/runner/cypress_runner.js:4559:12)
      at _drainQueue (http://localhost:42325/__cypress/runner/cypress_runner.js:4552:9)
      at Async.../../node_modules/bluebird/js/release/async.js.Async._drainQueues (http://localhost:42325/__cypress/runner/cypress_runner.js:4568:5)
      at Async.drainQueues (http://localhost:42325/__cypress/runner/cypress_runner.js:4438:14)
  From Your Spec Code:
      at Context.eval (http://localhost:42325/__cypress/tests?p=cypress/integration/minio-health.spec.js:101:8)
  From Node.js Internals:
    Error: connect ECONNREFUSED 10.43.181.32:80
        at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1134:16)
  (Results)
  ┌────────────────────────────────────────┐
  │ Tests:        1                        │
  │ Passing:      0                        │
  │ Failing:      1                        │
  │ Pending:      0                        │
  │ Skipped:      0                        │
  │ Screenshots:  1                        │
  │ Video:        true                     │
  │ Duration:     6 seconds                │
  │ Spec Ran:     minio-health.spec.js     │
  └────────────────────────────────────────┘
  (Screenshots)
  - /test/cypress/screenshots/minio-health.spec.js/Basic Minio -- Check Minio UI is accessible (failed).png (1280x720)
  (Video)
  - Started processing: Compressing to 32 CRF
  - Finished processing: /test/cypress/videos/minio-health.spec.js.mp4 (0 seconds)
────────────────────────────────────────────────────────────────────────────────────────────────────
  Running:  minio-login.js                                                                  (2 of 2)
  Minio Login
    1) Check Minio Login
  0 passing (7s)
  1 failing
  1) Minio Login
       Check Minio Login:
     CypressError: `cy.visit()` failed trying to load: http://minio/minio/login
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
  > Error: connect ECONNREFUSED 10.43.181.32:80
Common situations why this would fail:
  - you don't have internet access
  - you forgot to run / boot your web server
  - your web server isn't accessible
  - you have weird network configuration settings on your computer
      at http://localhost:42325/__cypress/runner/cypress_runner.js:157498:23
      at visitFailedByErr (http://localhost:42325/__cypress/runner/cypress_runner.js:156857:12)
      at http://localhost:42325/__cypress/runner/cypress_runner.js:157497:11
      at tryCatcher (http://localhost:42325/__cypress/runner/cypress_runner.js:9852:23)
      at Promise._settlePromiseFromHandler (http://localhost:42325/__cypress/runner/cypress_runner.js:7787:31)
      at Promise._settlePromise (http://localhost:42325/__cypress/runner/cypress_runner.js:7844:18)
      at Promise._settlePromise0 (http://localhost:42325/__cypress/runner/cypress_runner.js:7889:10)
      at Promise._settlePromises (http://localhost:42325/__cypress/runner/cypress_runner.js:7965:18)
      at _drainQueueStep (http://localhost:42325/__cypress/runner/cypress_runner.js:4559:12)
      at _drainQueue (http://localhost:42325/__cypress/runner/cypress_runner.js:4552:9)
      at Async.../../node_modules/bluebird/js/release/async.js.Async._drainQueues (http://localhost:42325/__cypress/runner/cypress_runner.js:4568:5)
      at Async.drainQueues (http://localhost:42325/__cypress/runner/cypress_runner.js:4438:14)
  From Your Spec Code:
      at Context.eval (http://localhost:42325/__cypress/tests?p=cypress/integration/minio-login.js:101:8)
  From Node.js Internals:
    Error: connect ECONNREFUSED 10.43.181.32:80
        at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1134:16)
  (Results)
  ┌────────────────────────────────────────┐
  │ Tests:        1                        │
  │ Passing:      0                        │
  │ Failing:      1                        │
  │ Pending:      0                        │
  │ Skipped:      0                        │
  │ Screenshots:  1                        │
  │ Video:        true                     │
  │ Duration:     6 seconds                │
  │ Spec Ran:     minio-login.js           │
  └────────────────────────────────────────┘
  (Screenshots)
  - /test/cypress/screenshots/minio-login.js/Minio Login -- Check Minio Login (failed).png (1280x720)
  (Video)
  - Started processing: Compressing to 32 CRF
  - Finished processing: /test/cypress/videos/minio-login.js.mp4 (0 seconds)
====================================================================================================
  (Run Finished)
       Spec                              Tests  Passing  Failing  Pending  Skipped
  ┌──────────────────────────────────────────────────────────────────────────────┐
  │ ✖  minio-health.spec.js     00:06        1        -        1        -        - │
  ├──────────────────────────────────────────────────────────────────────────────┤
  │ ✖  minio-login.js           00:06        1        -        1        -        - │
  └──────────────────────────────────────────────────────────────────────────────┘
    ✖  2 of 2 failed (100%)     00:13        2        -        2        -        -
tar: Removing leading `/' from member names
configmap/cypress-screenshots created
tar: Removing leading `/' from member names
configmap/cypress-videos created
***** End Helm Test Logs *****
Running after_script
Running after script...
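Both specs failed at the network level rather than with an HTTP status, i.e. nothing was accepting connections on the address Cypress tried to reach. As a small, hedged sketch (the sample line is copied from the test logs above; `addr` is just an illustrative variable name), the refused `host:port` can be pulled out of such an error line:

```shell
# Extract the "host:port" Cypress could not reach from an
# "ECONNREFUSED" error line like the ones in the test logs above.
# Sample line copied from this log; "addr" is illustrative only.
err='> Error: connect ECONNREFUSED 10.43.181.32:80'
addr=$(printf '%s\n' "$err" | sed -n 's/.*ECONNREFUSED \([0-9.]*:[0-9]*\).*/\1/p')
echo "$addr"   # prints "10.43.181.32:80"
```

That address matches the ClusterIP of `service/minio` in the debug output below, which suggests the minio pods behind the service were not ready, rather than a DNS or test-harness problem.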
$ if [ -e success ]; then # collapsed multi-line command
Job Failed
Printing Debug Logs
kubectl get all -A
NAMESPACE        NAME                                          READY   STATUS             RESTARTS   AGE
kube-system      pod/local-path-provisioner-5ff76fc89d-wb2g7   1/1     Running            0          116s
kube-system      pod/metrics-server-86cbb8457f-h8hn5           1/1     Running            0          116s
kube-system      pod/coredns-854c77959c-hb4zh                  1/1     Running            0          116s
istio-system     pod/istiod-7b57d88d9c-ql8ls                   1/1     Running            0          106s
istio-system     pod/svclb-istio-ingressgateway-rdg7z          5/5     Running            0          98s
istio-system     pod/istio-ingressgateway-69c8589df9-wg4g8     1/1     Running            0          98s
minio-operator   pod/minio-operator-86d6c9ccd6-z2hxp           1/1     Running            0          90s
minio            pod/minio-minio-instance-ss-0-3               0/1     CrashLoopBackOff   1          54s
minio            pod/minio-minio-instance-ss-0-0               0/1     CrashLoopBackOff   2          55s
minio            pod/minio-minio-instance-ss-0-1               0/1     Error              2          55s
minio            pod/minio-minio-instance-ss-0-2               1/1     Running            2          55s
minio            pod/minio-instance-cypress-test               0/1     Error              0          56s
NAMESPACE        NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
default          service/kubernetes                ClusterIP      10.43.0.1                     443/TCP                                                                      2m13s
kube-system      service/kube-dns                  ClusterIP      10.43.0.10                    53/UDP,53/TCP,9153/TCP                                                       2m11s
kube-system      service/metrics-server            ClusterIP      10.43.114.157                 443/TCP                                                                      2m11s
istio-system     service/istiod                    ClusterIP      10.43.160.25                  15010/TCP,15012/TCP,443/TCP,15014/TCP                                        106s
istio-system     service/istio-ingressgateway      LoadBalancer   10.43.244.18    172.18.0.2    15021:32543/TCP,80:31437/TCP,443:31630/TCP,15012:32539/TCP,15443:31710/TCP   98s
minio-operator   service/operator                  ClusterIP      10.43.31.80                   4222/TCP,4233/TCP                                                            90s
minio            service/minio                     ClusterIP      10.43.181.32                  80/TCP                                                                       65s
minio            service/minio-minio-instance-hl   ClusterIP      None                          9000/TCP                                                                     65s
NAMESPACE        NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
istio-system     daemonset.apps/svclb-istio-ingressgateway   1         1         1       1            1                           98s
NAMESPACE        NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system      deployment.apps/local-path-provisioner   1/1     1            1           2m11s
kube-system      deployment.apps/metrics-server           1/1     1            1           2m11s
kube-system      deployment.apps/coredns                  1/1     1            1           2m11s
istio-system     deployment.apps/istiod                   1/1     1            1           106s
istio-system     deployment.apps/istio-ingressgateway     1/1     1            1           98s
minio-operator   deployment.apps/minio-operator           1/1     1            1           90s
NAMESPACE        NAME                                                DESIRED   CURRENT   READY   AGE
kube-system      replicaset.apps/local-path-provisioner-5ff76fc89d   1         1         1       116s
kube-system      replicaset.apps/metrics-server-86cbb8457f           1         1         1       116s
kube-system      replicaset.apps/coredns-854c77959c                  1         1         1       116s
istio-system     replicaset.apps/istiod-7b57d88d9c                   1         1         1       106s
istio-system     replicaset.apps/istio-ingressgateway-69c8589df9     1         1         1       98s
minio-operator   replicaset.apps/minio-operator-86d6c9ccd6           1         1         1       90s
NAMESPACE        NAME                                         READY   AGE
minio            statefulset.apps/minio-minio-instance-ss-0   1/4     55s
NAMESPACE        NAME                                                       REFERENCE                         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
istio-system     horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   24%/80%   1         5         1          98s
istio-system     horizontalpodautoscaler.autoscaling/istiod                 Deployment/istiod                 1%/80%    1         5         1          106s
$ docker exec -i k3d-${CI_JOB_ID}-server-0 crictl images -o json | jq -r '.images[].repoTags[0] | select(. != null)' > images.txt
$ sed -i '/docker.io\/istio\//d' images.txt
$ sed -i '/docker.io\/rancher\//d' images.txt
$ if [ -f tests/images.txt ]; then # collapsed multi-line command
$ k3d cluster delete ${CI_JOB_ID}
INFO[0000] Deleting cluster '5171981'
INFO[0000] Deleted k3d-5171981-serverlb
INFO[0004] Deleted k3d-5171981-server-0
INFO[0004] Deleting image volume 'k3d-5171981-images'
INFO[0004] Removing cluster details from default kubeconfig...
INFO[0004] Removing standalone kubeconfig file (if there is one)...
INFO[0004] Successfully deleted cluster 5171981!
$ docker network rm ${CI_JOB_ID}
5171981
Uploading artifacts for failed job
Uploading artifacts...
images.txt: found 1 matching files and directories
WARNING: tests/cypress/screenshots: no matching files
WARNING: tests/cypress/videos: no matching files
cypress-artifacts: found 9 matching files and directories
Uploading artifacts as "archive" to coordinator... ok  id=5171981 responseStatus=201 Created token=cne6Nskx
Cleaning up file based variables
ERROR: Job failed: command terminated with exit code 1
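The root cause is visible in the debug dump above: the minio StatefulSet came up only 1/4 ready, with pods in CrashLoopBackOff and Error, so `service/minio` had nothing healthy behind it when the Cypress tests ran. As a hedged post-mortem sketch (the sample rows are taken from this job's debug logs, collapsed to single spaces; the `awk` field positions assume that layout), the failing pods can be picked out of such a dump:

```shell
# List pods whose STATUS column is not "Running" from a
# "kubectl get all -A"-style dump. Sample rows taken from this job's
# debug logs; fields assumed as $1=namespace, $2=name, $4=status.
cat > pods.txt <<'EOF'
minio pod/minio-minio-instance-ss-0-3 0/1 CrashLoopBackOff 1 54s
minio pod/minio-minio-instance-ss-0-0 0/1 CrashLoopBackOff 2 55s
minio pod/minio-minio-instance-ss-0-1 0/1 Error 2 55s
minio pod/minio-minio-instance-ss-0-2 1/1 Running 2 55s
minio pod/minio-instance-cypress-test 0/1 Error 0 56s
EOF
awk '$4 != "Running" {print $1 "/" $2}' pods.txt
```

Against a live cluster the equivalent check would be something like `kubectl get pods -A --field-selector status.phase!=Running`; note that the pipeline's `kubectl wait --for=condition=ready ... --field-selector status.phase=Running` only waits on pods already in the Running phase, so crash-looping pods can slip past it.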