Running with gitlab-runner 14.1.0 (8925d9a0)
  on gitlab-runners-bigbang-gl-packages-privileged-gitlab-runneb7dvc L_wAsvdS
section_start:1630337718:resolve_secrets
Resolving secrets
section_end:1630337718:resolve_secrets
section_start:1630337718:prepare_executor
Preparing the "kubernetes" executor
Using Kubernetes namespace: gitlab-runners
Using Kubernetes executor with image registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/k3d-builder:0.0.5 ...
Using attach strategy to execute scripts...
section_end:1630337718:prepare_executor
section_start:1630337718:prepare_script
Preparing environment
Waiting for pod gitlab-runners/runner-lwasvds-project-2317-concurrent-069cgq to be running, status is Pending
Running on runner-lwasvds-project-2317-concurrent-069cgq via gitlab-runners-bigbang-gl-packages-privileged-gitlab-runneb7dvc...
section_end:1630337724:prepare_script
section_start:1630337724:get_sources
Getting source from Git repository
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/L_wAsvdS/0/platform-one/big-bang/apps/developer-tools/sonarqube/.git/
Created fresh repository.
Checking out 7d969ac7 as refs/merge-requests/39/head...
Skipping Git submodules setup
section_end:1630337725:get_sources
section_start:1630337725:step_script
Executing "step_script" stage of the job script
$ echo -e "\e[0Ksection_start:`date +%s`:cluster_setup[collapsed=true]\r\e[0KCluster Setup"
section_start:1630337725:cluster_setup[collapsed=true]
Cluster Setup
$ if [ -z ${PIPELINE_REPO_BRANCH} ]; then # collapsed multi-line command
$ git clone -b ${PIPELINE_REPO_BRANCH} ${PIPELINE_REPO} ${PIPELINE_REPO_DESTINATION}
Cloning into '../pipeline-repo'...
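The collapsed command above implies the job falls back to a default branch when `PIPELINE_REPO_BRANCH` is unset before cloning the pipeline repo. A minimal sketch of that pattern, assuming a "master" default — the actual fallback value is hidden inside the collapsed command and is not shown in this log:

```shell
#!/usr/bin/env bash
# Sketch of the collapsed branch-default check (the "master" fallback is an
# assumption for illustration; the real default is not visible in the log).
default_branch() {
  # $1 is the branch requested by the pipeline, possibly empty.
  if [ -z "${1}" ]; then
    echo "master"
  else
    echo "${1}"
  fi
}
```

With this helper, `git clone -b "$(default_branch "${PIPELINE_REPO_BRANCH}")" …` would behave like the collapsed `if [ -z … ]` block appears to.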
$ source ${WAIT_PATH}
$ i=0; while [ "$i" -lt 12 ]; do docker info &>/dev/null && break; sleep 5; i=$(( i + 1 )) ; done
$ docker network create ${CI_JOB_ID} --driver=bridge -o "com.docker.network.driver.mtu"="1450"
d7a4a69c11483dca0d7fe2b125cecf2bc7c47432b48504b03d25d910aa54d386
$ k3d cluster create ${CI_JOB_ID} --config ${K3D_CONFIG_PATH} --network ${CI_JOB_ID}
INFO[0000] Using config file ../pipeline-repo/jobs/k3d-ci/config.yaml
INFO[0000] Prep: Network
INFO[0000] Network with name '6078675' already exists with ID 'd7a4a69c11483dca0d7fe2b125cecf2bc7c47432b48504b03d25d910aa54d386'
INFO[0000] Created volume 'k3d-6078675-images'
INFO[0001] Creating node 'k3d-6078675-server-0'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.20.4-k3s1'
INFO[0004] Creating LoadBalancer 'k3d-6078675-serverlb'
INFO[0005] Pulling image 'docker.io/rancher/k3d-proxy:v4.3.0'
INFO[0007] Starting cluster '6078675'
INFO[0007] Starting servers...
INFO[0007] Starting Node 'k3d-6078675-server-0'
INFO[0012] Starting agents...
INFO[0012] Starting helpers...
INFO[0012] Starting Node 'k3d-6078675-serverlb'
INFO[0012] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0016] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0016] Cluster '6078675' created successfully!
INFO[0016] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0016] You can now use it like this:
kubectl config use-context k3d-6078675
kubectl cluster-info
$ until kubectl get deployment coredns -n kube-system -o go-template='{{.status.availableReplicas}}' | grep -v -e ''; do sleep 1s; done
1
$ if [ ! -z ${PROJECT_NAME} ]; then # collapsed multi-line command
namespace/sonarqube created
secret/private-registry created
secret/private-registry-mil created
$ echo -e "\e[0Ksection_end:`date +%s`:cluster_setup\r\e[0K"
section_end:1630337762:cluster_setup
$ echo -e "\e[0Ksection_start:`date +%s`:dependency_clean[collapsed=true]\r\e[0KDependency Install and Wait"
section_start:1630337762:dependency_clean[collapsed=true]
Dependency Install and Wait
$ if [ -f "tests/dependencies.yaml" ]; then # collapsed multi-line command
$ if [ -f "tests/dependencies.yaml" ]; then # collapsed multi-line command
$ echo -e "\e[0Ksection_end:`date +%s`:dependency_clean\r\e[0K"
section_end:1630337762:dependency_clean
$ if [ ! -z ${PROJECT_NAME} ]; then # collapsed multi-line command
Helm installing sonarqube/chart into sonarqube namespace using sonarqube/tests/test-values.yaml for values
Release "sonarqube" does not exist. Installing it now.
NAME: sonarqube
LAST DEPLOYED: Mon Aug 30 15:36:03 2021
NAMESPACE: sonarqube
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace sonarqube -l "app=sonarqube,release=sonarqube" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:9000 -n sonarqube
$ sleep 10 # collapsed multi-line command
$ if [ -d "chart/templates/tests" ]; then # collapsed multi-line command
NAME: sonarqube
LAST DEPLOYED: Mon Aug 30 15:36:03 2021
NAMESPACE: sonarqube
STATUS: deployed
REVISION: 1
TEST SUITE:     sonarqube-cypress-sa
Last Started:   Mon Aug 30 15:37:54 2021
Last Completed: Mon Aug 30 15:37:54 2021
Phase:          Succeeded
TEST SUITE:     sonarqube-cypress-config
Last Started:   Mon Aug 30 15:37:53 2021
Last Completed: Mon Aug 30 15:37:54 2021
Phase:          Succeeded
TEST SUITE:     sonarqube-cypress-role
Last Started:   Mon Aug 30 15:37:54 2021
Last Completed: Mon Aug 30 15:37:54 2021
Phase:          Succeeded
TEST SUITE:     sonarqube-cypress-rolebinding
Last Started:   Mon Aug 30 15:37:54 2021
Last Completed: Mon Aug 30 15:37:54 2021
Phase:          Succeeded
TEST SUITE:     sonarqube-cypress-test
Last Started:   Mon Aug 30 15:37:54 2021
Last Completed: Mon Aug 30 15:38:38 2021
Phase:          Succeeded
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace sonarqube -l "app=sonarqube,release=sonarqube" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:9000 -n sonarqube
***** Start Helm Test Logs *****
====================================================================================================
  (Run Starting)
  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ Cypress:    5.0.0                                                                              │
  │ Browser:    Chrome 83 (headless)                                                               │
  │ Specs:      1 found (sonarqube-health.spec.js)                                                 │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘
────────────────────────────────────────────────────────────────────────────────────────────────────
  Running:  sonarqube-health.spec.js (1 of 1)
Browserslist: caniuse-lite is outdated. Please run: npx browserslist@latest --update-db
  Basic Sonarqube
    ✓ Check Sonarqube is accessible (4122ms)
  1 passing (5s)
  (Results)
  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ Tests:        1                                                                                │
  │ Passing:      1                                                                                │
  │ Failing:      0                                                                                │
  │ Pending:      0                                                                                │
  │ Skipped:      0                                                                                │
  │ Screenshots:  0                                                                                │
  │ Video:        true                                                                             │
  │ Duration:     4 seconds                                                                        │
  │ Spec Ran:     sonarqube-health.spec.js                                                         │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘
  (Video)
  - Started processing:  Compressing to 32 CRF
  - Finished processing: /test/cypress/videos/sonarqube-health.spec.js.mp4 (0 seconds)
====================================================================================================
  (Run Finished)
       Spec                            Tests  Passing  Failing  Pending  Skipped
  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ ✔  sonarqube-health.spec.js        00:04      1        1        -        -        -            │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘
    ✔  All specs passed!               00:04      1        1        -        -        -
tar: Removing leading `/' from member names
configmap/cypress-videos created
***** End Helm Test Logs *****
$ touch $CI_PROJECT_DIR/success
$ echo -e "\e[0Ksection_start:`date +%s`:image_fetch[collapsed=true]\r\e[0KFetch Images"
section_start:1630337918:image_fetch[collapsed=true]
Fetch Images
$ images=$(timeout 65 bash -c "until docker exec -i k3d-${CI_JOB_ID}-server-0 crictl images -o json; do sleep 10; done;")
$ echo $images | jq -r '.images[].repoTags[0] | select(. != null)' | tee images.txt
docker.io/rancher/coredns-coredns:1.8.0
docker.io/rancher/library-busybox:1.32.1
docker.io/rancher/local-path-provisioner:v0.0.19
docker.io/rancher/metrics-server:v0.3.6
docker.io/rancher/pause:3.1
registry.dso.mil/platform-one/big-bang/apps/developer-tools/sonarqube/postgresql:11.7.0-debian-10-r26
registry.dso.mil/platform-one/big-bang/apps/developer-tools/sonarqube/sonarqube8-community-bb:8.9-community-bb
registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/cypress/kubectl:5.0.0
registry1.dso.mil/ironbank/opensource/postgres/postgresql96:9.6.20
$ sed -i '/docker.io\/rancher\//d' images.txt
$ if [ -f tests/images.txt ]; then # collapsed multi-line command
$ echo -e "\e[0Ksection_end:`date +%s`:image_fetch\r\e[0K"
section_end:1630337918:image_fetch
section_end:1630337918:step_script
section_start:1630337918:after_script
Running after_script
Running after script...
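In the Fetch Images section above, the job pulls the image list out of the k3d node with `crictl`, flattens it to one tag per line via `jq`, then deletes the bundled `docker.io/rancher/*` base images with `sed`. The filtering step can be reproduced with plain shell on a sample `images.txt` (the file name matches the log; the sample list is abbreviated):

```shell
# Recreate a small images.txt like the one tee'd in the log (abbreviated sample).
printf '%s\n' \
  'docker.io/rancher/coredns-coredns:1.8.0' \
  'docker.io/rancher/pause:3.1' \
  'registry1.dso.mil/ironbank/opensource/postgres/postgresql96:9.6.20' \
  > images.txt

# Same filter as the job runs: delete every docker.io/rancher/* line so
# only the application's own images remain for the artifact.
sed -i '/docker.io\/rancher\//d' images.txt
```

After the filter, `images.txt` holds only the registry1 image, which matches why the artifact upload later reports `images.txt: found 1 matching files and directories`.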
$ if [ -e success ]; then # collapsed multi-line command
section_start:1630337919:cluster_status[collapsed=true]
Cluster Status
Job Succeeded, cluster status:
NAMESPACE     NAME                                          READY   STATUS      RESTARTS   AGE
kube-system   pod/metrics-server-86cbb8457f-k92t7           1/1     Running     0          2m46s
kube-system   pod/local-path-provisioner-5ff76fc89d-gt8xm   1/1     Running     0          2m46s
kube-system   pod/coredns-854c77959c-d6hjg                  1/1     Running     0          2m46s
sonarqube     pod/sonarqube-postgresql-0                    1/1     Running     0          2m36s
sonarqube     pod/sonarqube-sonarqube-658f5db98c-76cp6      1/1     Running     0          2m36s
sonarqube     pod/sonarqube-cypress-test                    0/1     Completed   0          45s
NAMESPACE     NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes                      ClusterIP   10.43.0.1                     443/TCP                  3m2s
kube-system   service/kube-dns                        ClusterIP   10.43.0.10                    53/UDP,53/TCP,9153/TCP   2m59s
kube-system   service/metrics-server                  ClusterIP   10.43.100.176                 443/TCP                  2m59s
sonarqube     service/sonarqube-postgresql-headless   ClusterIP   None                          5432/TCP                 2m36s
sonarqube     service/sonarqube-postgresql            ClusterIP   10.43.60.193                  5432/TCP                 2m36s
sonarqube     service/sonarqube-sonarqube             ClusterIP   10.43.27.71                   9000/TCP                 2m36s
NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/metrics-server           1/1     1            1           2m59s
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           2m59s
kube-system   deployment.apps/coredns                  1/1     1            1           2m59s
sonarqube     deployment.apps/sonarqube-sonarqube      1/1     1            1           2m36s
NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/metrics-server-86cbb8457f           1         1         1       2m46s
kube-system   replicaset.apps/local-path-provisioner-5ff76fc89d   1         1         1       2m46s
kube-system   replicaset.apps/coredns-854c77959c                  1         1         1       2m46s
sonarqube     replicaset.apps/sonarqube-sonarqube-658f5db98c      1         1         1       2m36s
NAMESPACE   NAME                                    READY   AGE
sonarqube   statefulset.apps/sonarqube-postgresql   1/1     2m36s
section_end:1630337919:cluster_status
$ echo -e "\e[0Ksection_start:`date +%s`:cluster_clean[collapsed=true]\r\e[0KCluster Cleanup"
section_start:1630337919:cluster_clean[collapsed=true]
Cluster Cleanup
$ k3d cluster delete ${CI_JOB_ID}
INFO[0000] Deleting cluster '6078675'
INFO[0000] Deleted k3d-6078675-serverlb
INFO[0004] Deleted k3d-6078675-server-0
INFO[0004] Deleting image volume 'k3d-6078675-images'
INFO[0004] Removing cluster details from default kubeconfig...
INFO[0004] Removing standalone kubeconfig file (if there is one)...
INFO[0004] Successfully deleted cluster 6078675!
$ docker network rm ${CI_JOB_ID}
6078675
$ echo -e "\e[0Ksection_end:`date +%s`:cluster_clean\r\e[0K"
section_end:1630337924:cluster_clean
section_end:1630337924:after_script
section_start:1630337924:upload_artifacts_on_success
Uploading artifacts for successful job
Uploading artifacts...
images.txt: found 1 matching files and directories
WARNING: tests/cypress/screenshots: no matching files
WARNING: tests/cypress/videos: no matching files
cypress-artifacts: found 3 matching files and directories
Uploading artifacts as "archive" to coordinator... ok  id=6078675 responseStatus=201 Created token=Sg_Sq5xp
section_end:1630337925:upload_artifacts_on_success
section_start:1630337925:cleanup_file_variables
Cleaning up file based variables
section_end:1630337925:cleanup_file_variables
Job succeeded
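The after_script's collapsed `if [ -e success ]` check hinges on the `touch $CI_PROJECT_DIR/success` marker written at the end of step_script: after_script runs even on failure, so the marker tells it which path was taken. A minimal sketch of that gate — the success message matches the log, while the failure-branch message is an assumption, since this log only shows the success path:

```shell
#!/usr/bin/env bash
# Sketch of the marker-file gate used in after_script. The main script
# touches "success" only if every step completed; after_script checks
# for the file to decide what to report. The failure message below is
# illustrative (not shown in the log).
report_status() {
  if [ -e success ]; then
    echo "Job Succeeded, cluster status:"
  else
    echo "Job failed"
  fi
}
```

In the real job, the success branch goes on to dump the `kubectl get` tables shown in the Cluster Status section above.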