Running with gitlab-runner 13.12.0 (7a6612da)
  on gitlab-runners-bigbang-gl-packages-privileged-gitlab-runne2wzft meVv6eaZ
  feature flags: FF_GITLAB_REGISTRY_HELPER_IMAGE:true
section_start:1625070363:resolve_secrets
Resolving secrets
section_end:1625070363:resolve_secrets
section_start:1625070363:prepare_executor
Preparing the "kubernetes" executor
Using Kubernetes namespace: gitlab-runners
Using Kubernetes executor with image registry.dso.mil/platform-one/big-bang/pipeline-templates/pipeline-templates/k3d-builder:0.0.5 ...
section_end:1625070363:prepare_executor
section_start:1625070363:prepare_script
Preparing environment
Waiting for pod gitlab-runners/runner-mevv6eaz-project-2324-concurrent-2df9kz to be running, status is Pending
Running on runner-mevv6eaz-project-2324-concurrent-2df9kz via gitlab-runners-bigbang-gl-packages-privileged-gitlab-runne2wzft...
section_end:1625070366:prepare_script
section_start:1625070366:get_sources
Getting source from Git repository
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/meVv6eaZ/2/platform-one/big-bang/apps/security-tools/keycloak/.git/
Created fresh repository.
Checking out 0cd3cb82 as refs/merge-requests/31/head...
Skipping Git submodules setup
section_end:1625070367:get_sources
section_start:1625070367:step_script
Executing "step_script" stage of the job script
$ if [ -z ${PIPELINE_REPO_BRANCH} ]; then # collapsed multi-line command
$ git clone -b ${PIPELINE_REPO_BRANCH} ${PIPELINE_REPO} ${PIPELINE_REPO_DESTINATION}
Cloning into '../pipeline-repo'...
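
The collapsed `if [ -z ${PIPELINE_REPO_BRANCH} ]` conditional above guards the clone of the pipeline-templates repo. A minimal sketch of that default-branch pattern, assuming the job falls back to some default when the variable is unset — the `master` fallback here is an illustrative guess, not taken from the log:

```shell
# Sketch of the collapsed conditional: use PIPELINE_REPO_BRANCH if the CI
# variable is set, otherwise fall back to a default. "master" is an
# illustrative default, not confirmed by this log.
PIPELINE_REPO_BRANCH="${PIPELINE_REPO_BRANCH:-master}"
echo "branch: ${PIPELINE_REPO_BRANCH}"

# The job then clones with the resolved value:
# git clone -b "${PIPELINE_REPO_BRANCH}" "${PIPELINE_REPO}" "${PIPELINE_REPO_DESTINATION}"
```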
$ source ${WAIT_PATH}
$ docker network create ${CI_JOB_ID} --driver=bridge -o "com.docker.network.driver.mtu"="1450"
697c9d6877e23982828c7d8fa9569a731e1e13c825661116379a2759e717cef1
$ k3d cluster create ${CI_JOB_ID} --config ${K3D_CONFIG_PATH} --network ${CI_JOB_ID}
INFO[0000] Using config file ../pipeline-repo/jobs/k3d-ci/config.yaml
INFO[0000] Prep: Network
INFO[0000] Network with name '4484513' already exists with ID '697c9d6877e23982828c7d8fa9569a731e1e13c825661116379a2759e717cef1'
INFO[0000] Created volume 'k3d-4484513-images'
INFO[0001] Creating node 'k3d-4484513-server-0'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.20.4-k3s1'
INFO[0004] Creating LoadBalancer 'k3d-4484513-serverlb'
INFO[0006] Pulling image 'docker.io/rancher/k3d-proxy:v4.3.0'
INFO[0008] Starting cluster '4484513'
INFO[0008] Starting servers...
INFO[0008] Starting Node 'k3d-4484513-server-0'
INFO[0013] Starting agents...
INFO[0013] Starting helpers...
INFO[0013] Starting Node 'k3d-4484513-serverlb'
INFO[0014] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0017] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0017] Cluster '4484513' created successfully!
INFO[0017] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0017] You can now use it like this:
kubectl config use-context k3d-4484513
kubectl cluster-info
$ until kubectl get deployment coredns -n kube-system -o go-template='{{.status.availableReplicas}}' | grep -v -e ''; do sleep 1s; done
1
$ if [ ! -z ${PROJECT_NAME} ]; then # collapsed multi-line command
namespace/keycloak created
secret/private-registry created
secret/private-registry-mil created
$ if [[ "${CI_PROJECT_NAME}" != *"istio"* ]]; then # collapsed multi-line command
- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod.
Waiting for Deployment/istio-system/istiod
✔ Istiod installed
- Processing resources for Ingress gateways.
- Processing resources for Ingress gateways. Waiting for Deployment/istio-system/istio-ingressgat...
✔ Ingress gateways installed
- Pruning removed resources
✔ Installation complete
namespace/istio-system labeled
$ if [[ "${PACKAGE_NAMESPACE}" != "istio-operator" ]]; then # collapsed multi-line command
Generating a RSA private key
..............................+++++
............+++++
writing new private key to 'tls.key'
-----
secret/wildcard-cert created
$ if [ -f "tests/main-test-gateway.yaml" ]; then # collapsed multi-line command
$ if [ -f "tests/dependencies.yaml" ]; then # collapsed multi-line command
$ sleep 10
$ kubectl wait --for=condition=established --timeout 60s -A crd --all > /dev/null
$ if [ -f tests/dependencies.yaml ]; then # collapsed multi-line command
$ wait_sts
$ wait_daemonset
$ kubectl wait --for=condition=available --timeout 600s -A deployment --all > /dev/null
$ kubectl wait --for=condition=ready --timeout 600s -A pods --all --field-selector status.phase=Running > /dev/null
$ echo "Package install"
Package install
$ if [ ! -z ${PROJECT_NAME} ]; then # collapsed multi-line command
$ if [ $(ls -1 tests/test-values.y*ml 2>/dev/null | wc -l) -gt 0 ]; then # collapsed multi-line command
Helm installing keycloak/chart into keycloak namespace using keycloak/tests/test-values.yaml for values
Error: timed out waiting for the condition
section_end:1625071039:step_script
section_start:1625071039:after_script
Running after_script
Running after script...
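
The step_script above leans on a poll-until-ready pattern throughout: the `until ... grep` loop for coredns, the `wait_sts`/`wait_daemonset` helpers, and the `kubectl wait` calls that ultimately time out. A minimal sketch of that pattern as a reusable helper — `retry_until` is a hypothetical name, not a function from the job's `${WAIT_PATH}` script:

```shell
# Sketch of the poll-until-ready pattern used in step_script. retry_until
# re-runs a command once per second until it succeeds or the deadline
# (first argument, in seconds) passes; it returns 1 on timeout.
retry_until() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}

retry_until 5 true && echo "condition met"   # prints "condition met"

# Against a live cluster one would wrap a kubectl check, e.g.:
# retry_until 600 kubectl wait --for=condition=available --timeout 10s -A deployment --all
```

In this job the equivalent loop is what reports `Error: timed out waiting for the condition` once the 600s deadline passes without keycloak becoming ready.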
$ if [ -e success ]; then # collapsed multi-line command
Job Failed
Printing Debug Logs
kubectl get all -A
NAMESPACE      NAME                                          READY   STATUS             RESTARTS   AGE
kube-system    pod/local-path-provisioner-5ff76fc89d-jkbt5   1/1     Running            0          10m
kube-system    pod/metrics-server-86cbb8457f-d8ns5           1/1     Running            0          10m
kube-system    pod/coredns-854c77959c-ssmn6                  1/1     Running            0          10m
istio-system   pod/istiod-7b57d88d9c-v7q56                   1/1     Running            0          10m
istio-system   pod/svclb-istio-ingressgateway-6tdq4          5/5     Running            0          10m
istio-system   pod/istio-ingressgateway-69c8589df9-nwk9d     1/1     Running            0          10m
keycloak       pod/keycloak-postgresql-0                     1/1     Running            0          10m
keycloak       pod/keycloak-0                                0/1     CrashLoopBackOff   6          10m

NAMESPACE      NAME                                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
default        service/kubernetes                     ClusterIP      10.43.0.1       <none>        443/TCP                                                                      10m
kube-system    service/kube-dns                       ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP                                                       10m
kube-system    service/metrics-server                 ClusterIP      10.43.155.182   <none>        443/TCP                                                                      10m
istio-system   service/istiod                         ClusterIP      10.43.138.154   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        10m
istio-system   service/istio-ingressgateway           LoadBalancer   10.43.151.224   172.18.0.2    15021:30821/TCP,80:32169/TCP,443:30325/TCP,15012:31027/TCP,15443:30729/TCP   10m
keycloak       service/keycloak-headless              ClusterIP      None            <none>        80/TCP                                                                       10m
keycloak       service/keycloak-postgresql-headless   ClusterIP      None            <none>        5432/TCP                                                                     10m
keycloak       service/keycloak-postgresql            ClusterIP      10.43.204.102   <none>        5432/TCP                                                                     10m
keycloak       service/keycloak-http                  ClusterIP      10.43.236.217   <none>        80/TCP,8443/TCP,9990/TCP,7600/TCP                                            10m

NAMESPACE      NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
istio-system   daemonset.apps/svclb-istio-ingressgateway   1         1         1       1            1           <none>          10m

NAMESPACE      NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system    deployment.apps/local-path-provisioner   1/1     1            1           10m
kube-system    deployment.apps/metrics-server           1/1     1            1           10m
kube-system    deployment.apps/coredns                  1/1     1            1           10m
istio-system   deployment.apps/istiod                   1/1     1            1           10m
istio-system   deployment.apps/istio-ingressgateway     1/1     1            1           10m

NAMESPACE      NAME                                                DESIRED   CURRENT   READY   AGE
kube-system    replicaset.apps/local-path-provisioner-5ff76fc89d   1         1         1       10m
kube-system    replicaset.apps/metrics-server-86cbb8457f           1         1         1       10m
kube-system    replicaset.apps/coredns-854c77959c                  1         1         1       10m
istio-system   replicaset.apps/istiod-7b57d88d9c                   1         1         1       10m
istio-system   replicaset.apps/istio-ingressgateway-69c8589df9     1         1         1       10m

NAMESPACE   NAME                                   READY   AGE
keycloak    statefulset.apps/keycloak              0/1     10m
keycloak    statefulset.apps/keycloak-postgresql   1/1     10m

NAMESPACE      NAME                                                       REFERENCE                         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
istio-system   horizontalpodautoscaler.autoscaling/istiod                 Deployment/istiod                 1%/80%    1         5         1          10m
istio-system   horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   32%/80%   1         5         1          10m
$ docker exec -i k3d-${CI_JOB_ID}-server-0 crictl images -o json | jq -r '.images[].repoTags[0] | select(. != null)' > images.txt
$ sed -i '/docker.io\/istio\//d' images.txt
$ sed -i '/docker.io\/rancher\//d' images.txt
$ if [ -f tests/images.txt ]; then # collapsed multi-line command
$ k3d cluster delete ${CI_JOB_ID}
INFO[0000] Deleting cluster '4484513'
INFO[0000] Deleted k3d-4484513-serverlb
INFO[0002] Deleted k3d-4484513-server-0
INFO[0002] Deleting image volume 'k3d-4484513-images'
INFO[0002] Removing cluster details from default kubeconfig...
INFO[0002] Removing standalone kubeconfig file (if there is one)...
INFO[0002] Successfully deleted cluster 4484513!
$ docker network rm ${CI_JOB_ID}
4484513
section_end:1625071041:after_script
section_start:1625071041:upload_artifacts_on_failure
Uploading artifacts for failed job
Uploading artifacts...
images.txt: found 1 matching files and directories
WARNING: tests/cypress/screenshots: no matching files
WARNING: tests/cypress/videos: no matching files
WARNING: cypress-artifacts: no matching files
Uploading artifacts as "archive" to coordinator...
ok id=4484513 responseStatus=201 Created token=n4hzHWNu
section_end:1625071042:upload_artifacts_on_failure
section_start:1625071042:cleanup_file_variables
Cleaning up file based variables
section_end:1625071042:cleanup_file_variables
ERROR: Job failed: command terminated with exit code 1
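
The root cause surfaces in the debug output above: `pod/keycloak-0` sits in `CrashLoopBackOff` with 0/1 containers ready, so the `kubectl wait` step and the Helm install both time out. A small sketch for picking unhealthy pods out of `kubectl get pods`-style output, run here against a pasted sample (copied from the debug log) rather than a live cluster:

```shell
# Filter pods whose READY column (e.g. "0/1") shows fewer ready containers
# than expected. The sample rows are taken from the debug output above.
cat > pods.txt <<'EOF'
keycloak   pod/keycloak-postgresql-0   1/1   Running            0   10m
keycloak   pod/keycloak-0              0/1   CrashLoopBackOff   6   10m
EOF

# Split the READY column on "/" and report pods where ready != desired.
awk '{ split($3, r, "/"); if (r[1] != r[2]) print $2 }' pods.txt
# -> pod/keycloak-0
```

With a live cluster, the next step would be inspecting that pod directly, e.g. `kubectl -n keycloak describe pod keycloak-0` and `kubectl -n keycloak logs keycloak-0 --previous` to see why the container keeps crashing.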