Running with gitlab-runner 17.5.4 (d8d88d9e)
  on graduated-runner-graduated-runner-gitlab-runner-c8fd66bb8-7t2sw t2_cmeAC, system ID: r_RjKnXc9h57C2
Resolving secrets
section_start:1736875936:prepare_executor
Preparing the "kubernetes" executor
Using Kubernetes namespace: graduated-runner
Using Kubernetes executor with image registry1.dso.mil/bigbang-ci/bb-ci:2.21.0 ...
Using attach strategy to execute scripts...
section_end:1736875936:prepare_executor
section_start:1736875936:prepare_script
Preparing environment
Using FF_USE_POD_ACTIVE_DEADLINE_SECONDS, the Pod activeDeadlineSeconds will be set to the job timeout: 1h0m0s...
Waiting for pod graduated-runner/runner-t2cmeac-project-6751-concurrent-0-zlfdf9fr to be running, status is Pending
Waiting for pod graduated-runner/runner-t2cmeac-project-6751-concurrent-0-zlfdf9fr to be running, status is Pending
	ContainersNotInitialized: "containers with incomplete status: [istio-proxy init-permissions]"
	ContainersNotReady: "containers with unready status: [istio-proxy build helper svc-0]"
	ContainersNotReady: "containers with unready status: [istio-proxy build helper svc-0]"
Waiting for pod graduated-runner/runner-t2cmeac-project-6751-concurrent-0-zlfdf9fr to be running, status is Pending
	ContainersNotReady: "containers with unready status: [build helper svc-0]"
	ContainersNotReady: "containers with unready status: [build helper svc-0]"
Running on runner-t2cmeac-project-6751-concurrent-0-zlfdf9fr via graduated-runner-graduated-runner-gitlab-runner-c8fd66bb8-7t2sw...

section_end:1736875946:prepare_script
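
Note: the job pod spent a few seconds in Pending while the init-permissions init container and the istio-proxy sidecar started alongside the build, helper and svc-0 containers. If a pod stalls in that state, it can be inspected from the runner cluster roughly like this (hypothetical troubleshooting; the pod name is taken from the wait messages above and changes per job):

kubectl -n graduated-runner describe pod runner-t2cmeac-project-6751-concurrent-0-zlfdf9fr
kubectl -n graduated-runner get events --sort-by=.metadata.creationTimestamp
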
section_start:1736875946:get_sources
Getting source from Git repository
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/big-bang/product/packages/gluon/.git/
Created fresh repository.
Checking out 4b85f1c0 as detached HEAD (ref is refs/merge-requests/103/head)...

Skipping Git submodules setup

section_end:1736875947:get_sources
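
Note: the runner did a shallow (depth 20) fetch and checked out 4b85f1c0, the head of merge request !103, as a detached HEAD. A rough local equivalent, assuming the usual remote URL for this project (the URL itself is not in the log):

# Reproduce the MR checkout locally (remote URL is an assumption based on the build path).
git clone --depth 20 https://repo1.dso.mil/big-bang/product/packages/gluon.git
cd gluon
git fetch --depth 20 origin refs/merge-requests/103/head
git checkout --detach 4b85f1c0   # same commit as the fetched MR head
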
section_start:1736875947:step_script
Executing "step_script" stage of the job script
$ echo -e "\e[0Ksection_start:`date +%s`:k3d_up[collapsed=true]\r\e[0K\e[33;1mK3D Cluster Create\e[37m"
section_start:1736875947:k3d_up[collapsed=true]
K3D Cluster Create
$ git clone -b ${PIPELINE_REPO_BRANCH} ${PIPELINE_REPO} ${PIPELINE_REPO_DESTINATION}
Cloning into '../pipeline-repo'...
$ source ${PIPELINE_REPO_DESTINATION}/library/templates.sh
DEBUG_ENABLED is set to true, setting -x in bash
+++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
++ echo '$ source ${PIPELINE_REPO_DESTINATION}/library/bigbang-functions.sh'
$ source ${PIPELINE_REPO_DESTINATION}/library/bigbang-functions.sh
++ source ../pipeline-repo/library/bigbang-functions.sh
+++ [[ ../pipeline-repo/library/bigbang-functions.sh == /scripts-6751-41553679/step_script ]]
+++ [[ '' == \t\r\u\e ]]
+++ [[ SKIP UPDATE CHECK Resolve "Replace asterisk with dash to appease markdownlint" == *\D\E\B\U\G* ]]
+++ [[ debug,status::review,team::Pipelines & Infrastructure == *\d\e\b\u\g* ]]
+++ echo 'DEBUG_ENABLED is set to true, setting -x in bash'
DEBUG_ENABLED is set to true, setting -x in bash
+++ DEBUG=true
+++ set -x
+++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
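
Note: every sourced library re-runs the same debug check, which is why "DEBUG_ENABLED is set to true, setting -x in bash" repeats once per file: the commit message is tested for DEBUG and the merge request labels for debug, then tracing is enabled and the ERR trap installed. A condensed sketch of that logic (the CI_COMMIT_MESSAGE and CI_MERGE_REQUEST_LABELS variable names are assumptions inferred from the values visible in the trace):

# Condensed form of the per-library debug toggle traced above; variable names are assumed.
if [[ "${CI_COMMIT_MESSAGE}" == *DEBUG* ]] || [[ "${CI_MERGE_REQUEST_LABELS}" == *debug* ]]; then
  echo "DEBUG_ENABLED is set to true, setting -x in bash"
  DEBUG=true
  set -x
fi
trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
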
++ echo '$ source ${PIPELINE_REPO_DESTINATION}/library/package-functions.sh'
$ source ${PIPELINE_REPO_DESTINATION}/library/package-functions.sh
++ source ../pipeline-repo/library/package-functions.sh
+++ [[ ../pipeline-repo/library/package-functions.sh == /scripts-6751-41553679/step_script ]]
+++ [[ '' == \t\r\u\e ]]
+++ [[ SKIP UPDATE CHECK Resolve "Replace asterisk with dash to appease markdownlint" == *\D\E\B\U\G* ]]
+++ [[ debug,status::review,team::Pipelines & Infrastructure == *\d\e\b\u\g* ]]
+++ echo 'DEBUG_ENABLED is set to true, setting -x in bash'
DEBUG_ENABLED is set to true, setting -x in bash
+++ DEBUG=true
+++ set -x
+++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
++ echo '$ source ${PIPELINE_REPO_DESTINATION}/library/k8s-functions.sh'
$ source ${PIPELINE_REPO_DESTINATION}/library/k8s-functions.sh
++ source ../pipeline-repo/library/k8s-functions.sh
+++ [[ ../pipeline-repo/library/k8s-functions.sh == /scripts-6751-41553679/step_script ]]
+++ [[ '' == \t\r\u\e ]]
+++ [[ SKIP UPDATE CHECK Resolve "Replace asterisk with dash to appease markdownlint" == *\D\E\B\U\G* ]]
+++ [[ debug,status::review,team::Pipelines & Infrastructure == *\d\e\b\u\g* ]]
+++ echo 'DEBUG_ENABLED is set to true, setting -x in bash'
DEBUG_ENABLED is set to true, setting -x in bash
+++ DEBUG=true
+++ set -x
+++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
++ echo '$ source ${PIPELINE_REPO_DESTINATION}/library/rds-functions.sh'
$ source ${PIPELINE_REPO_DESTINATION}/library/rds-functions.sh
++ source ../pipeline-repo/library/rds-functions.sh
+++ [[ ../pipeline-repo/library/rds-functions.sh == /scripts-6751-41553679/step_script ]]
+++ [[ '' == \t\r\u\e ]]
+++ [[ SKIP UPDATE CHECK Resolve "Replace asterisk with dash to appease markdownlint" == *\D\E\B\U\G* ]]
+++ [[ debug,status::review,team::Pipelines & Infrastructure == *\d\e\b\u\g* ]]
+++ echo 'DEBUG_ENABLED is set to true, setting -x in bash'
DEBUG_ENABLED is set to true, setting -x in bash
+++ DEBUG=true
+++ set -x
+++ export ACCESSOR_ROLE_ACCESS_KEY_ID=
+++ ACCESSOR_ROLE_ACCESS_KEY_ID=
+++ export ACCESSOR_ROLE_SECRET_KEY=
+++ ACCESSOR_ROLE_SECRET_KEY=
+++ export ACCESSOR_ROLE_SESSION_TOKEN=
+++ ACCESSOR_ROLE_SESSION_TOKEN=
+++ export ACCESSOR_ROLE_TIME=0
+++ ACCESSOR_ROLE_TIME=0
++ echo '$ package_auth_setup'
$ package_auth_setup
++ package_auth_setup
++ mkdir -p /root/.docker
++ jq -n '{"auths": {"registry1.dso.mil": {"auth": $registry1_auth}, "registry.il2.dso.mil": {"auth": $il2_registry_auth}, "docker.io": {"auth": $bb_docker_auth} } }' --arg registry1_auth [MASKED] --arg il2_registry_auth [MASKED] --arg bb_docker_auth [MASKED]
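
Note: package_auth_setup assembles a Docker config.json with auth entries for registry1.dso.mil, registry.il2.dso.mil and docker.io; the credential values are masked in the log. A minimal sketch of the step, shown for registry1.dso.mil only, assuming the jq output is redirected to /root/.docker/config.json and that each auth value is a base64-encoded user:password pair (REGISTRY1_USER and REGISTRY1_PASSWORD are hypothetical placeholders):

# Sketch only: destination path and base64 encoding are assumptions; credentials are masked above.
mkdir -p /root/.docker
registry1_auth=$(printf '%s:%s' "${REGISTRY1_USER}" "${REGISTRY1_PASSWORD}" | base64 -w0)
jq -n '{"auths": {"registry1.dso.mil": {"auth": $registry1_auth}}}' \
  --arg registry1_auth "${registry1_auth}" > /root/.docker/config.json
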
++ echo '$ i=0; while [ "$i" -lt 12 ]; do docker info &>/dev/null && break; sleep 5; i=$(( i + 1 )) ; done'
$ i=0; while [ "$i" -lt 12 ]; do docker info &>/dev/null && break; sleep 5; i=$(( i + 1 )) ; done
++ i=0
++ '[' 0 -lt 12 ']'
++ docker info
++ break
++ echo '$ docker network create --opt com.docker.network.bridge.name=${CI_JOB_ID} ${CI_JOB_ID} --driver=bridge -o "com.docker.network.driver.mtu"="1450" --subnet=172.20.0.0/16 --gateway 172.20.0.1'
$ docker network create --opt com.docker.network.bridge.name=${CI_JOB_ID} ${CI_JOB_ID} --driver=bridge -o "com.docker.network.driver.mtu"="1450" --subnet=172.20.0.0/16 --gateway 172.20.0.1
++ docker network create --opt com.docker.network.bridge.name=41553679 41553679 --driver=bridge -o com.docker.network.driver.mtu=1450 --subnet=172.20.0.0/16 --gateway 172.20.0.1
909a02ebb38d69278865fd838a7e15c9f8a207b8e85929e539aec4f754101a93
++ echo '$ chmod +x ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh; echo "Executing ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh..."; ./${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh'
$ chmod +x ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh; echo "Executing ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh..."; ./${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh
++ chmod +x ../pipeline-repo/clusters/k3d/dependencies/k3d/deploy_k3d.sh
++ echo 'Executing ../pipeline-repo/clusters/k3d/dependencies/k3d/deploy_k3d.sh...'
Executing ../pipeline-repo/clusters/k3d/dependencies/k3d/deploy_k3d.sh...
++ ./../pipeline-repo/clusters/k3d/dependencies/k3d/deploy_k3d.sh
DEBUG_ENABLED is set to true, setting -x in bash
++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
+++ dirname -- ./../pipeline-repo/clusters/k3d/dependencies/k3d/deploy_k3d.sh
++ cd -- ./../pipeline-repo/clusters/k3d/dependencies/k3d
++ pwd
+ SCRIPT_DIR=/builds/big-bang/product/packages/pipeline-repo/clusters/k3d/dependencies/k3d
+ mkdir -p /cypress/logs
+ chown 1000:1000 /cypress/logs
+ mkdir -p /cypress/screenshots
+ chown 1000:1000 /cypress/screenshots
+ mkdir -p /cypress/videos
+ chown 1000:1000 /cypress/videos
+ [[ '' == \t\r\u\e ]]
+ [[ '' == \t\r\u\e ]]
+ [[ ! -z '' ]]
+ [[ '' == \B\B ]]
+ [[ false == \t\r\u\e ]]
+ ARGS+=' --volume /etc/machine-id:/etc/machine-id@all:*'
+ [[ '' == \B\B ]]
+ [[ '' == \B\B ]]
+ [[ '' == \B\B ]]
+ [[ '' == \B\B ]]
+ [[ '' == \I\N\T\E\G\R\A\T\I\O\N ]]
+ [[ '' == \t\r\u\e ]]
+ echo 'Creating k3d cluster with default metrics server'
Creating k3d cluster with default metrics server
+ k3d cluster create 41553679 --config ../pipeline-repo/clusters/k3d/dependencies/k3d/config.yaml --network 41553679 --volume '/etc/machine-id:/etc/machine-id@all:*'
INFO[0000] Using config file ../pipeline-repo/clusters/k3d/dependencies/k3d/config.yaml (k3d.io/v1alpha4#simple) 
WARN[0000] Default config apiVersion is 'k3d.io/v1alpha5', but you're using 'k3d.io/v1alpha4': consider migrating. 
INFO[0000] portmapping '80:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] portmapping '443:443' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] Prep: Network                                
INFO[0000] Re-using existing network '41553679' (909a02ebb38d69278865fd838a7e15c9f8a207b8e85929e539aec4f754101a93) 
INFO[0000] Created image volume k3d-41553679-images     
INFO[0000] Starting new tools node...                   
INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.7.5' 
INFO[0001] Creating node 'k3d-41553679-server-0'        
INFO[0001] Starting node 'k3d-41553679-tools'           
INFO[0002] Pulling image 'rancher/k3s:v1.31.4-k3s1'     
INFO[0005] Creating LoadBalancer 'k3d-41553679-serverlb' 
INFO[0005] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.7.5' 
INFO[0007] Using the k3d-tools node to gather environment information 
INFO[0008] HostIP: using network gateway 172.20.0.1 address 
INFO[0008] Starting cluster '41553679'                  
INFO[0008] Starting servers...                          
INFO[0008] Starting node 'k3d-41553679-server-0'        
INFO[0012] All agents already running.                  
INFO[0012] Starting helpers...                          
INFO[0013] Starting node 'k3d-41553679-serverlb'        
INFO[0020] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... 
INFO[0022] Cluster '41553679' created successfully!     
INFO[0022] You can now use it like this:                
kubectl cluster-info
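
Note: the cluster is built from a committed v1alpha4 k3d config plus flags appended on the command line. The config file itself is not printed, but the INFO lines show it maps ports 80 and 443 through the load balancer. A minimal config with that shape (everything beyond the apiVersion and the two port mappings is an assumption):

# Minimal stand-in for the committed config; only apiVersion and the 80/443 mappings are implied by the log.
cat > config.yaml <<'EOF'
apiVersion: k3d.io/v1alpha4
kind: Simple
ports:
  - port: 80:80
    nodeFilters:
      - loadbalancer
  - port: 443:443
    nodeFilters:
      - loadbalancer
EOF
k3d cluster create "${CI_JOB_ID}" --config config.yaml \
  --network "${CI_JOB_ID}" --volume '/etc/machine-id:/etc/machine-id@all:*'
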
++ echo '$ until kubectl get deployment coredns -n kube-system -o go-template='\''{{.status.availableReplicas}}'\'' | grep -v -e '\''<no value>'\''; do sleep 1s; done'
$ until kubectl get deployment coredns -n kube-system -o go-template='{{.status.availableReplicas}}' | grep -v -e '<no value>'; do sleep 1s; done
++ kubectl get deployment coredns -n kube-system -o 'go-template={{.status.availableReplicas}}'
++ grep -v -e '<no value>'
++ sleep 1s
++ kubectl get deployment coredns -n kube-system -o 'go-template={{.status.availableReplicas}}'
++ grep -v -e '<no value>'
++ sleep 1s
++ kubectl get deployment coredns -n kube-system -o 'go-template={{.status.availableReplicas}}'
++ grep -v -e '<no value>'
++ sleep 1s
++ kubectl get deployment coredns -n kube-system -o 'go-template={{.status.availableReplicas}}'
++ grep -v -e '<no value>'
1
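
Note: the until loop above polls the coredns Deployment until .status.availableReplicas is populated (the trailing "1" is the value that finally breaks the loop). For reference, kubectl's built-in condition handling expresses the same wait:

# Equivalent readiness wait using kubectl's built-in checks.
kubectl -n kube-system rollout status deployment/coredns --timeout=120s
# or:
kubectl -n kube-system wait --for=condition=Available --timeout=120s deployment/coredns
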
++ echo '$ chmod +x ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh; echo "Executing ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh...";./${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh ;'
$ chmod +x ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh; echo "Executing ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh...";./${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh ;
++ chmod +x ../pipeline-repo/clusters/k3d/dependencies/metallb/install_metallb.sh
++ echo 'Executing ../pipeline-repo/clusters/k3d/dependencies/metallb/install_metallb.sh...'
Executing ../pipeline-repo/clusters/k3d/dependencies/metallb/install_metallb.sh...
++ ./../pipeline-repo/clusters/k3d/dependencies/metallb/install_metallb.sh
DEBUG_ENABLED is set to true, setting -x in bash
++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
+ kubectl create ns metallb-system
namespace/metallb-system created
+ kubectl label ns metallb-system app=metallb
namespace/metallb-system labeled
+ kubectl create -n metallb-system secret docker-registry private-registry --docker-server=https://registry1.dso.mil --docker-username=[MASKED] --docker-password=[MASKED]
secret/private-registry created
+ kubectl create -f ../pipeline-repo/clusters/k3d/dependencies/metallb/metallb.yaml
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
+ echo 'Waiting on MetalLB controller/webhook...'
Waiting on MetalLB controller/webhook...
+ kubectl wait --for=condition=available --timeout 120s -n metallb-system deployment controller
deployment.apps/controller condition met
+ kubectl create -f ../pipeline-repo/clusters/k3d/dependencies/metallb/metallb-config.yaml
ipaddresspool.metallb.io/default created
l2advertisement.metallb.io/l2advertisement1 created
+ kubectl rollout status daemonset speaker -n metallb-system
Waiting for daemon set "speaker" rollout to finish: 0 of 1 updated pods are available...
daemon set "speaker" successfully rolled out
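
Note: metallb-config.yaml is not printed, but the objects it creates are named above: an IPAddressPool called "default" and an L2Advertisement called "l2advertisement1". A sketch of a config with that shape (only the kinds and names come from the log; the address range is an assumption and would normally be carved out of the job's 172.20.0.0/16 docker network):

kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 172.20.1.240-172.20.1.250   # assumed range, not taken from the log
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement1
  namespace: metallb-system
EOF
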
++ echo '$ get_all'
$ get_all
++ get_all
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875998:all_resources[collapsed=true]\r\e[0K\e[33;1mAll Cluster Resources\e[37m'
section_start:1736875998:all_resources[collapsed=true]
All Cluster Resources
++ kubectl get all -A
NAMESPACE        NAME                                          READY   STATUS    RESTARTS   AGE
kube-system      pod/coredns-ccb96694c-lp987                   1/1     Running   0          32s
kube-system      pod/local-path-provisioner-5cf85fd84d-rshtv   1/1     Running   0          32s
kube-system      pod/metrics-server-5985cbc9d7-5q2nf           1/1     Running   0          32s
metallb-system   pod/controller-5f67f69db-kmgn6                1/1     Running   0          24s
metallb-system   pod/speaker-ppqc9                             1/1     Running   0          24s

NAMESPACE        NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default          service/kubernetes        ClusterIP   172.20.0.1       <none>        443/TCP                  38s
kube-system      service/kube-dns          ClusterIP   172.20.0.10      <none>        53/UDP,53/TCP,9153/TCP   36s
kube-system      service/metrics-server    ClusterIP   172.20.72.72     <none>        443/TCP                  36s
metallb-system   service/webhook-service   ClusterIP   172.20.240.148   <none>        443/TCP                  24s

NAMESPACE        NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
metallb-system   daemonset.apps/speaker   1         1         1       1            1           kubernetes.io/os=linux   24s

NAMESPACE        NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system      deployment.apps/coredns                  1/1     1            1           36s
kube-system      deployment.apps/local-path-provisioner   1/1     1            1           36s
kube-system      deployment.apps/metrics-server           1/1     1            1           36s
metallb-system   deployment.apps/controller               1/1     1            1           24s

NAMESPACE        NAME                                                DESIRED   CURRENT   READY   AGE
kube-system      replicaset.apps/coredns-ccb96694c                   1         1         1       32s
kube-system      replicaset.apps/local-path-provisioner-5cf85fd84d   1         1         1       32s
kube-system      replicaset.apps/metrics-server-5985cbc9d7           1         1         1       32s
metallb-system   replicaset.apps/controller-5f67f69db                1         1         1       24s
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875998:all_resources\r\e[0K'
section_end:1736875998:all_resources

++ echo '$ echo -e "\e[0Ksection_end:`date +%s`:k3d_up\r\e[0K"'
$ echo -e "\e[0Ksection_end:`date +%s`:k3d_up\r\e[0K"
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875998:k3d_up\r\e[0K'
section_end:1736875998:k3d_up

++ echo '$ helm dependency update ${VALIDATION_CHART_NAME}'
$ helm dependency update ${VALIDATION_CHART_NAME}
++ helm dependency update validate-chart
Error: can't get a valid version for 1 subchart(s): "gluon" (repository "file://../chart", version "0.5.10"). Make sure a matching chart version exists in the repo, or change the version constraint in Chart.yaml
+++ echo $'\342\235\214' exit at /scripts-6751-41553679/step_script:262, command was: helm dependency update '${VALIDATION_CHART_NAME}'
❌ exit at /scripts-6751-41553679/step_script:262, command was: helm dependency update ${VALIDATION_CHART_NAME}
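
Note: this is the step that fails the job. "helm dependency update validate-chart" cannot resolve the subchart "gluon" at version 0.5.10 from the local repository "file://../chart", which most likely means the version in chart/Chart.yaml no longer matches the constraint pinned in validate-chart/Chart.yaml. The ERR trap turns that into the ❌ line above, and the job ultimately exits with code 1 after the after_script debug collection below. An illustrative local check (paths follow the repo layout implied by the log):

# Compare the packaged chart version with the constraint the validation chart pins.
grep '^version:' chart/Chart.yaml
grep -A 3 'name: gluon' validate-chart/Chart.yaml
# Align the dependency version in validate-chart/Chart.yaml with chart/Chart.yaml, then re-run:
helm dependency update validate-chart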

WARNING: Event retrieved from the cluster: policy require-labels/check-for-labels fail: validation error: The pod is missing a required label. rule check-for-labels failed at path /metadata/labels/app.kubernetes.io/name/
WARNING: Event retrieved from the cluster: policy require-labels/check-for-labels fail: validation error: The pod is missing a required label. rule check-for-labels failed at path /metadata/labels/app.kubernetes.io/name/
WARNING: Event retrieved from the cluster: policy require-labels/check-for-labels fail: validation error: The pod is missing a required label. rule check-for-labels failed at path /metadata/labels/app.kubernetes.io/name/
section_end:1736875998:step_script
section_start:1736875998:after_script
Running after_script
Running after script...
$ source ${PIPELINE_REPO_DESTINATION}/library/templates.sh
DEBUG_ENABLED is set to true, setting -x in bash
+++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
++ echo '$ source ${PIPELINE_REPO_DESTINATION}/library/bigbang-functions.sh'
$ source ${PIPELINE_REPO_DESTINATION}/library/bigbang-functions.sh
++ source ../pipeline-repo/library/bigbang-functions.sh
+++ [[ ../pipeline-repo/library/bigbang-functions.sh == /scripts-6751-41553679/after_script ]]
+++ [[ '' == \t\r\u\e ]]
+++ [[ SKIP UPDATE CHECK Resolve "Replace asterisk with dash to appease markdownlint" == *\D\E\B\U\G* ]]
+++ [[ debug,status::review,team::Pipelines & Infrastructure == *\d\e\b\u\g* ]]
+++ echo 'DEBUG_ENABLED is set to true, setting -x in bash'
DEBUG_ENABLED is set to true, setting -x in bash
+++ DEBUG=true
+++ set -x
+++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
++ echo '$ source ${PIPELINE_REPO_DESTINATION}/library/package-functions.sh'
$ source ${PIPELINE_REPO_DESTINATION}/library/package-functions.sh
++ source ../pipeline-repo/library/package-functions.sh
+++ [[ ../pipeline-repo/library/package-functions.sh == /scripts-6751-41553679/after_script ]]
+++ [[ '' == \t\r\u\e ]]
+++ [[ SKIP UPDATE CHECK Resolve "Replace asterisk with dash to appease markdownlint" == *\D\E\B\U\G* ]]
+++ [[ debug,status::review,team::Pipelines & Infrastructure == *\d\e\b\u\g* ]]
+++ echo 'DEBUG_ENABLED is set to true, setting -x in bash'
DEBUG_ENABLED is set to true, setting -x in bash
+++ DEBUG=true
+++ set -x
+++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
++ echo '$ source ${PIPELINE_REPO_DESTINATION}/library/k8s-functions.sh'
$ source ${PIPELINE_REPO_DESTINATION}/library/k8s-functions.sh
++ source ../pipeline-repo/library/k8s-functions.sh
+++ [[ ../pipeline-repo/library/k8s-functions.sh == /scripts-6751-41553679/after_script ]]
+++ [[ '' == \t\r\u\e ]]
+++ [[ SKIP UPDATE CHECK Resolve "Replace asterisk with dash to appease markdownlint" == *\D\E\B\U\G* ]]
+++ [[ debug,status::review,team::Pipelines & Infrastructure == *\d\e\b\u\g* ]]
+++ echo 'DEBUG_ENABLED is set to true, setting -x in bash'
DEBUG_ENABLED is set to true, setting -x in bash
+++ DEBUG=true
+++ set -x
+++ trap 'echo ❌ exit at ${0}:${LINENO}, command was: ${BASH_COMMAND} 1>&2' ERR
++ echo '$ get_ns'
$ get_ns
++ get_ns
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875998:namespaces[collapsed=true]\r\e[0K\e[33;1mNamespaces\e[37m'
section_start:1736875998:namespaces[collapsed=true]
Namespaces
++ kubectl get namespace --show-labels
NAME              STATUS   AGE   LABELS
default           Active   40s   kubernetes.io/metadata.name=default
kube-node-lease   Active   40s   kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   41s   kubernetes.io/metadata.name=kube-public
kube-system       Active   41s   kubernetes.io/metadata.name=kube-system
metallb-system    Active   26s   app=metallb,kubernetes.io/metadata.name=metallb-system
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:namespaces\r\e[0K'
section_end:1736875999:namespaces

++ echo '$ get_all'
$ get_all
++ get_all
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:all_resources[collapsed=true]\r\e[0K\e[33;1mAll Cluster Resources\e[37m'
section_start:1736875999:all_resources[collapsed=true]
All Cluster Resources
++ kubectl get all -A
NAMESPACE        NAME                                          READY   STATUS    RESTARTS   AGE
kube-system      pod/coredns-ccb96694c-lp987                   1/1     Running   0          33s
kube-system      pod/local-path-provisioner-5cf85fd84d-rshtv   1/1     Running   0          33s
kube-system      pod/metrics-server-5985cbc9d7-5q2nf           1/1     Running   0          33s
metallb-system   pod/controller-5f67f69db-kmgn6                1/1     Running   0          25s
metallb-system   pod/speaker-ppqc9                             1/1     Running   0          25s

NAMESPACE        NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default          service/kubernetes        ClusterIP   172.20.0.1       <none>        443/TCP                  39s
kube-system      service/kube-dns          ClusterIP   172.20.0.10      <none>        53/UDP,53/TCP,9153/TCP   37s
kube-system      service/metrics-server    ClusterIP   172.20.72.72     <none>        443/TCP                  37s
metallb-system   service/webhook-service   ClusterIP   172.20.240.148   <none>        443/TCP                  25s

NAMESPACE        NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
metallb-system   daemonset.apps/speaker   1         1         1       1            1           kubernetes.io/os=linux   25s

NAMESPACE        NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system      deployment.apps/coredns                  1/1     1            1           37s
kube-system      deployment.apps/local-path-provisioner   1/1     1            1           37s
kube-system      deployment.apps/metrics-server           1/1     1            1           37s
metallb-system   deployment.apps/controller               1/1     1            1           25s

NAMESPACE        NAME                                                DESIRED   CURRENT   READY   AGE
kube-system      replicaset.apps/coredns-ccb96694c                   1         1         1       33s
kube-system      replicaset.apps/local-path-provisioner-5cf85fd84d   1         1         1       33s
kube-system      replicaset.apps/metrics-server-5985cbc9d7           1         1         1       33s
metallb-system   replicaset.apps/controller-5f67f69db                1         1         1       25s
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:all_resources\r\e[0K'
section_end:1736875999:all_resources

++ echo '$ get_events'
$ get_events
++ get_events
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:show_event_log[collapsed=true]\r\e[0K\e[33;1mCluster Event Log\e[37m'
section_start:1736875999:show_event_log[collapsed=true]
Cluster Event Log
++ echo -e '\e[31mNOTICE: Cluster events can be found in artifact events.txt\e[0m'
NOTICE: Cluster events can be found in artifact events.txt
++ kubectl get events -A --sort-by=.metadata.creationTimestamp
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:show_event_log\r\e[0K'
section_end:1736875999:show_event_log

++ echo '$ bigbang_pipeline'
$ bigbang_pipeline
++ bigbang_pipeline
++ [[ '' == \B\B ]]
++ [[ '' == \I\N\T\E\G\R\A\T\I\O\N ]]
++ echo 'Pipeline type is not BB, skipping'
Pipeline type is not BB, skipping
++ echo '$ get_debug'
$ get_debug
++ get_debug
++ [[ -n true ]]
++ describe_hr
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:describehr[collapsed=true]\r\e[0K\e[33;1mDescribe Helmreleases\e[37m'
section_start:1736875999:describehr[collapsed=true]
Describe Helmreleases
++ kubectl describe helmrelease -A
error: the server doesn't have a resource type "helmrelease"
++ true
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:describehr\r\e[0K'
section_end:1736875999:describehr

++ get_kustomize
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:kust[collapsed=true]\r\e[0K\e[33;1mKustomize\e[37m'
section_start:1736875999:kust[collapsed=true]
Kustomize
++ kubectl get kustomizations -A
error: the server doesn't have a resource type "kustomizations"
++ true
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:kust\r\e[0K'
section_end:1736875999:kust

++ get_gateways
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:gateways[collapsed=true]\r\e[0K\e[33;1mIstio Gateways\e[37m'
section_start:1736875999:gateways[collapsed=true]
Istio Gateways
++ kubectl get gateways -A
error: the server doesn't have a resource type "gateways"
++ true
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:gateways\r\e[0K'
section_end:1736875999:gateways

++ get_virtualservices
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:virtual_services[collapsed=true]\r\e[0K\e[33;1mVirtual Services\e[37m'
section_start:1736875999:virtual_services[collapsed=true]
Virtual Services
++ kubectl get virtualservices -A
error: the server doesn't have a resource type "virtualservices"
++ true
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:virtual_services\r\e[0K'
section_end:1736875999:virtual_services

++ get_hosts
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:hosts[collapsed=true]\r\e[0K\e[33;1mHosts File Contents\e[37m'
section_start:1736875999:hosts[collapsed=true]
Hosts File Contents
++ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
fe00::0	ip6-mcastprefix
fe00::1	ip6-allnodes
fe00::2	ip6-allrouters
10.42.37.198	runner-t2cmeac-project-6751-concurrent-0-zlfdf9fr

# Entries added by HostAliases.
127.0.0.1	registry1.dso.mil-bigbang-ci-bb-runner-dind
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:hosts\r\e[0K'
section_end:1736875999:hosts

++ get_dns_config
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:dns[collapsed=true]\r\e[0K\e[33;1mDNS Config\e[37m'
section_start:1736875999:dns[collapsed=true]
DNS Config
++ kubectl get configmap -n kube-system coredns
++ kubectl get configmap -n kube-system coredns -o 'jsonpath={.data.NodeHosts}'
172.20.0.1 host.k3d.internal
172.20.0.3 k3d-41553679-serverlb
172.20.0.2 k3d-41553679-server-0
+++ date +%s
++ echo -e '\e[0Ksection_end:1736875999:dns\r\e[0K'
section_end:1736875999:dns

++ get_log_dump
+++ date +%s
++ echo -e '\e[0Ksection_start:1736875999:log_dump[collapsed=true]\r\e[0K\e[33;1mLog Dump\e[37m'
section_start:1736875999:log_dump[collapsed=true]
Log Dump
++ echo -e '\e[31mNOTICE: Logs can be found in artifacts pod_logs/<namespace>/<pod_name>.txt\e[0m'
NOTICE: Logs can be found in artifacts pod_logs/<namespace>/<pod_name>.txt
++ mkdir -p pod_logs
+++ kubectl get pods -A --template '{{range .items}}{{.metadata.namespace}} {{.metadata.name}}{{"\n"}}{{end}}'
++ pods='kube-system coredns-ccb96694c-lp987
kube-system local-path-provisioner-5cf85fd84d-rshtv
kube-system metrics-server-5985cbc9d7-5q2nf
metallb-system controller-5f67f69db-kmgn6
metallb-system speaker-ppqc9'
++ echo 'kube-system coredns-ccb96694c-lp987
kube-system local-path-provisioner-5cf85fd84d-rshtv
kube-system metrics-server-5985cbc9d7-5q2nf
metallb-system controller-5f67f69db-kmgn6
metallb-system speaker-ppqc9'
++ read -r line
+++ echo 'kube-system coredns-ccb96694c-lp987'
+++ awk '{print $1}'
++ namespace=kube-system
+++ echo 'kube-system coredns-ccb96694c-lp987'
+++ awk '{print $2}'
++ pod=coredns-ccb96694c-lp987
++ does_pod_exist coredns-ccb96694c-lp987 kube-system
+++ kubectl -n kube-system get pods
+++ awk 'NR>1{print $1}'
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ coredns-ccb96694c-lp987 == \c\o\r\e\d\n\s\-\c\c\b\9\6\6\9\4\c\-\l\p\9\8\7 ]]
++ return 0
++ mkdir -p pod_logs/kube-system
++ kubectl -n kube-system logs --all-containers=true --prefix=true --previous=true --ignore-errors=true coredns-ccb96694c-lp987
++ kubectl -n kube-system logs --all-containers=true --prefix=true --ignore-errors=true coredns-ccb96694c-lp987
++ read -r line
+++ echo 'kube-system local-path-provisioner-5cf85fd84d-rshtv'
+++ awk '{print $1}'
++ namespace=kube-system
+++ echo 'kube-system local-path-provisioner-5cf85fd84d-rshtv'
+++ awk '{print $2}'
++ pod=local-path-provisioner-5cf85fd84d-rshtv
++ does_pod_exist local-path-provisioner-5cf85fd84d-rshtv kube-system
+++ kubectl -n kube-system get pods
+++ awk 'NR>1{print $1}'
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ coredns-ccb96694c-lp987 == \l\o\c\a\l\-\p\a\t\h\-\p\r\o\v\i\s\i\o\n\e\r\-\5\c\f\8\5\f\d\8\4\d\-\r\s\h\t\v ]]
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ local-path-provisioner-5cf85fd84d-rshtv == \l\o\c\a\l\-\p\a\t\h\-\p\r\o\v\i\s\i\o\n\e\r\-\5\c\f\8\5\f\d\8\4\d\-\r\s\h\t\v ]]
++ return 0
++ mkdir -p pod_logs/kube-system
++ kubectl -n kube-system logs --all-containers=true --prefix=true --previous=true --ignore-errors=true local-path-provisioner-5cf85fd84d-rshtv
++ kubectl -n kube-system logs --all-containers=true --prefix=true --ignore-errors=true local-path-provisioner-5cf85fd84d-rshtv
++ read -r line
+++ echo 'kube-system metrics-server-5985cbc9d7-5q2nf'
+++ awk '{print $1}'
++ namespace=kube-system
+++ echo 'kube-system metrics-server-5985cbc9d7-5q2nf'
+++ awk '{print $2}'
++ pod=metrics-server-5985cbc9d7-5q2nf
++ does_pod_exist metrics-server-5985cbc9d7-5q2nf kube-system
+++ kubectl -n kube-system get pods
+++ awk 'NR>1{print $1}'
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ coredns-ccb96694c-lp987 == \m\e\t\r\i\c\s\-\s\e\r\v\e\r\-\5\9\8\5\c\b\c\9\d\7\-\5\q\2\n\f ]]
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ local-path-provisioner-5cf85fd84d-rshtv == \m\e\t\r\i\c\s\-\s\e\r\v\e\r\-\5\9\8\5\c\b\c\9\d\7\-\5\q\2\n\f ]]
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ metrics-server-5985cbc9d7-5q2nf == \m\e\t\r\i\c\s\-\s\e\r\v\e\r\-\5\9\8\5\c\b\c\9\d\7\-\5\q\2\n\f ]]
++ return 0
++ mkdir -p pod_logs/kube-system
++ kubectl -n kube-system logs --all-containers=true --prefix=true --previous=true --ignore-errors=true metrics-server-5985cbc9d7-5q2nf
++ kubectl -n kube-system logs --all-containers=true --prefix=true --ignore-errors=true metrics-server-5985cbc9d7-5q2nf
++ read -r line
+++ echo 'metallb-system controller-5f67f69db-kmgn6'
+++ awk '{print $1}'
++ namespace=metallb-system
+++ echo 'metallb-system controller-5f67f69db-kmgn6'
+++ awk '{print $2}'
++ pod=controller-5f67f69db-kmgn6
++ does_pod_exist controller-5f67f69db-kmgn6 metallb-system
+++ kubectl -n metallb-system get pods
+++ awk 'NR>1{print $1}'
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ controller-5f67f69db-kmgn6 == \c\o\n\t\r\o\l\l\e\r\-\5\f\6\7\f\6\9\d\b\-\k\m\g\n\6 ]]
++ return 0
++ mkdir -p pod_logs/metallb-system
++ kubectl -n metallb-system logs --all-containers=true --prefix=true --previous=true --ignore-errors=true controller-5f67f69db-kmgn6
++ kubectl -n metallb-system logs --all-containers=true --prefix=true --ignore-errors=true controller-5f67f69db-kmgn6
++ read -r line
+++ echo 'metallb-system speaker-ppqc9'
+++ awk '{print $1}'
++ namespace=metallb-system
+++ echo 'metallb-system speaker-ppqc9'
+++ awk '{print $2}'
++ pod=speaker-ppqc9
++ does_pod_exist speaker-ppqc9 metallb-system
+++ kubectl -n metallb-system get pods
+++ awk 'NR>1{print $1}'
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ controller-5f67f69db-kmgn6 == \s\p\e\a\k\e\r\-\p\p\q\c\9 ]]
++ for pod in $(kubectl -n $2 get pods | awk 'NR>1{print $1}')
++ [[ speaker-ppqc9 == \s\p\e\a\k\e\r\-\p\p\q\c\9 ]]
++ return 0
++ mkdir -p pod_logs/metallb-system
++ kubectl -n metallb-system logs --all-containers=true --prefix=true --previous=true --ignore-errors=true speaker-ppqc9
++ kubectl -n metallb-system logs --all-containers=true --prefix=true --ignore-errors=true speaker-ppqc9
++ read -r line
+++ date +%s
++ echo -e '\e[0Ksection_end:1736876000:log_dump\r\e[0K'
section_end:1736876000:log_dump
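
Note: the loop traced above collects current and previous container logs for every pod. Condensed, the pattern is (the redirect into pod_logs/<namespace>/<pod>.txt is implied by the artifact notice rather than shown in the trace):

kubectl get pods -A --template '{{range .items}}{{.metadata.namespace}} {{.metadata.name}}{{"\n"}}{{end}}' |
while read -r namespace pod; do
  mkdir -p "pod_logs/${namespace}"
  # Previous-container logs may not exist; errors are tolerated.
  kubectl -n "${namespace}" logs --all-containers=true --prefix=true --previous=true --ignore-errors=true "${pod}" \
    >> "pod_logs/${namespace}/${pod}.txt" 2>/dev/null || true
  kubectl -n "${namespace}" logs --all-containers=true --prefix=true --ignore-errors=true "${pod}" \
    >> "pod_logs/${namespace}/${pod}.txt"
done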

++ get_cluster_info_dump
+++ date +%s
++ echo -e '\e[0Ksection_start:1736876000:cluster_info_dump[collapsed=true]\r\e[0K\e[33;1mCluster Info Dump\e[37m'
section_start:1736876000:cluster_info_dump[collapsed=true]
Cluster Info Dump
++ echo -e '\e[31mNOTICE: cluster-info can be found in artifact cluster_info_dump.txt\e[0m'
NOTICE: cluster-info can be found in artifact cluster_info_dump.txt
++ kubectl cluster-info dump
+++ date +%s
++ echo -e '\e[0Ksection_end:1736876000:cluster_info_dump\r\e[0K'
section_end:1736876000:cluster_info_dump

++ describe_resources
+++ date +%s
++ echo -e '\e[0Ksection_start:1736876000:describe_resources[collapsed=true]\r\e[0K\e[33;1mDescribe Cluster Resources\e[37m'
section_start:1736876000:describe_resources[collapsed=true]
Describe Cluster Resources
++ echo -e '\e[31mNOTICE: Cluster resource describes can be found in artifacts kubectl_describes\e[0m'
NOTICE: Cluster resource describes can be found in artifacts kubectl_describes
++ echo -e 'Running '\''kubectl describe'\'' on all resources...'
Running 'kubectl describe' on all resources...
++ additional_resources=("NetworkPolicy")
+++ kubectl get all -A --template '{{range .items}} {{.kind}}{{"\n"}}{{end}}'
+++ uniq
++ default_resources=' Pod
 Service
 DaemonSet
 Deployment
 ReplicaSet'
+++ echo -e ' Pod
 Service
 DaemonSet
 Deployment
 ReplicaSet\nNetworkPolicy'
+++ sort -u
++ default_resources=' DaemonSet
 Deployment
 Pod
 ReplicaSet
 Service
NetworkPolicy'
+++ kubectl get crds --template '{{range .items}} {{.status.acceptedNames.plural}} {{.spec.scope}}{{"\n"}}{{end}}'
++ custom_resources=' addons Namespaced
 addresspools Namespaced
 bfdprofiles Namespaced
 bgpadvertisements Namespaced
 bgppeers Namespaced
 communities Namespaced
 etcdsnapshotfiles Cluster
 helmchartconfigs Namespaced
 helmcharts Namespaced
 ipaddresspools Namespaced
 l2advertisements Namespaced'
++ echo ' DaemonSet
 Deployment
 Pod
 ReplicaSet
 Service
NetworkPolicy'
++ read -r line
+++ echo DaemonSet
+++ awk '{print $1}'
++ default_resource=DaemonSet
+++ kubectl get DaemonSet -A --template '{{range .items}} {{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ namespaces=' metallb-system'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/metallb-system
++ kubectl -n metallb-system describe DaemonSet
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
+++ echo Deployment
+++ awk '{print $1}'
++ default_resource=Deployment
+++ kubectl get Deployment -A --template '{{range .items}} {{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ namespaces=' kube-system
 metallb-system'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/kube-system
++ kubectl -n kube-system describe Deployment
++ sed '/^$/d;/^Name:.*/i ---'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/metallb-system
++ kubectl -n metallb-system describe Deployment
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
+++ echo Pod
+++ awk '{print $1}'
++ default_resource=Pod
+++ kubectl get Pod -A --template '{{range .items}} {{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ namespaces=' kube-system
 metallb-system'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/kube-system
++ kubectl -n kube-system describe Pod
++ sed '/^$/d;/^Name:.*/i ---'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/metallb-system
++ kubectl -n metallb-system describe Pod
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
+++ echo ReplicaSet
+++ awk '{print $1}'
++ default_resource=ReplicaSet
+++ kubectl get ReplicaSet -A --template '{{range .items}} {{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ namespaces=' kube-system
 metallb-system'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/kube-system
++ kubectl -n kube-system describe ReplicaSet
++ sed '/^$/d;/^Name:.*/i ---'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/metallb-system
++ kubectl -n metallb-system describe ReplicaSet
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
+++ echo Service
+++ awk '{print $1}'
++ default_resource=Service
+++ kubectl get Service -A --template '{{range .items}} {{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ namespaces=' default
 kube-system
 metallb-system'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/default
++ kubectl -n default describe Service
++ sed '/^$/d;/^Name:.*/i ---'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/kube-system
++ kubectl -n kube-system describe Service
++ sed '/^$/d;/^Name:.*/i ---'
++ for namespace in ${namespaces}
++ mkdir -p kubectl_describes/namespaces/metallb-system
++ kubectl -n metallb-system describe Service
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
+++ echo NetworkPolicy
+++ awk '{print $1}'
++ default_resource=NetworkPolicy
+++ kubectl get NetworkPolicy -A --template '{{range .items}} {{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ namespaces=
++ read -r line
++ echo ' addons Namespaced
 addresspools Namespaced
 bfdprofiles Namespaced
 bgpadvertisements Namespaced
 bgppeers Namespaced
 communities Namespaced
 etcdsnapshotfiles Cluster
 helmchartconfigs Namespaced
 helmcharts Namespaced
 ipaddresspools Namespaced
 l2advertisements Namespaced'
++ read -r line
+++ echo 'addons Namespaced'
+++ awk '{print $1}'
++ crd=addons
+++ echo 'addons Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get addons -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=kube-system
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ for namespace in ${crd_namespaces}
++ mkdir -p kubectl_describes/namespaces/kube-system
++ kubectl -n kube-system describe addons
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
+++ echo 'addresspools Namespaced'
+++ awk '{print $1}'
++ crd=addresspools
+++ echo 'addresspools Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get addresspools -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
Warning: metallb.io v1beta1 AddressPool is deprecated, consider using IPAddressPool
++ crd_namespaces=
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ read -r line
+++ echo 'bfdprofiles Namespaced'
+++ awk '{print $1}'
++ crd=bfdprofiles
+++ echo 'bfdprofiles Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get bfdprofiles -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ read -r line
+++ echo 'bgpadvertisements Namespaced'
+++ awk '{print $1}'
++ crd=bgpadvertisements
+++ echo 'bgpadvertisements Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get bgpadvertisements -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ read -r line
+++ echo 'bgppeers Namespaced'
+++ awk '{print $1}'
++ crd=bgppeers
+++ echo 'bgppeers Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get bgppeers -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ read -r line
+++ echo 'communities Namespaced'
+++ awk '{print $1}'
++ crd=communities
+++ echo 'communities Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get communities -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ read -r line
+++ echo 'etcdsnapshotfiles Cluster'
+++ awk '{print $1}'
++ crd=etcdsnapshotfiles
+++ echo 'etcdsnapshotfiles Cluster'
+++ awk '{print $2}'
++ crd_scope=Cluster
+++ kubectl get etcdsnapshotfiles -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=
++ [[ Cluster = \C\l\u\s\t\e\r ]]
++ mkdir -p kubectl_describes/cluster_resources
++ kubectl describe etcdsnapshotfiles
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
+++ echo 'helmchartconfigs Namespaced'
+++ awk '{print $1}'
++ crd=helmchartconfigs
+++ echo 'helmchartconfigs Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get helmchartconfigs -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ read -r line
+++ echo 'helmcharts Namespaced'
+++ awk '{print $1}'
++ crd=helmcharts
+++ echo 'helmcharts Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get helmcharts -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ read -r line
+++ echo 'ipaddresspools Namespaced'
+++ awk '{print $1}'
++ crd=ipaddresspools
+++ echo 'ipaddresspools Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get ipaddresspools -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=metallb-system
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ for namespace in ${crd_namespaces}
++ mkdir -p kubectl_describes/namespaces/metallb-system
++ kubectl -n metallb-system describe ipaddresspools
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
+++ echo 'l2advertisements Namespaced'
+++ awk '{print $1}'
++ crd=l2advertisements
+++ echo 'l2advertisements Namespaced'
+++ awk '{print $2}'
++ crd_scope=Namespaced
+++ kubectl get l2advertisements -A --template '{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}'
+++ sort -u
++ crd_namespaces=metallb-system
++ [[ Namespaced = \C\l\u\s\t\e\r ]]
++ [[ Namespaced = \N\a\m\e\s\p\a\c\e\d ]]
++ for namespace in ${crd_namespaces}
++ mkdir -p kubectl_describes/namespaces/metallb-system
++ kubectl -n metallb-system describe l2advertisements
++ sed '/^$/d;/^Name:.*/i ---'
++ read -r line
++ find kubectl_describes/ -empty -delete
+++ date +%s
++ echo -e '\e[0Ksection_end:1736876002:describe_resources\r\e[0K'
section_end:1736876002:describe_resources
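
Note: describe_resources walks every built-in workload kind plus NetworkPolicy and every namespaced CRD, writing "kubectl describe" output (blank lines stripped, a --- separator inserted before each Name:) under kubectl_describes/, then prunes empty files. A condensed equivalent of the traced loop (the output file names are an assumption; the trace does not show the redirects):

for kind in DaemonSet Deployment Pod ReplicaSet Service NetworkPolicy; do
  for ns in $(kubectl get "${kind}" -A --template '{{range .items}} {{.metadata.namespace}}{{"\n"}}{{end}}' | sort -u); do
    mkdir -p "kubectl_describes/namespaces/${ns}"
    kubectl -n "${ns}" describe "${kind}" | sed '/^$/d;/^Name:.*/i ---' \
      > "kubectl_describes/namespaces/${ns}/${kind}.txt"
  done
done
find kubectl_describes/ -empty -delete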

++ get_cpumem
+++ date +%s
++ echo -e '\e[0Ksection_start:1736876002:get_cpumem[collapsed=true]\r\e[0K\e[33;1mCPU and Memory usage\e[37m'
section_start:1736876002:get_cpumem[collapsed=true]
CPU and Memory usage
++ echo -e '\e[31mNOTICE: Logs can be found in artifacts get_cpumem.txt\e[0m'
NOTICE: Logs can be found in artifacts get_cpumem.txt
++ MAX_RETRIES=5
++ RETRY_DELAY=5
++ COUNT=0
++ kubectl top pods --all-namespaces --use-protocol-buffers
++ tee get_cpumem.txt
NAMESPACE        NAME                                      CPU(cores)   MEMORY(bytes)   
kube-system      coredns-ccb96694c-lp987                   3m           14Mi            
kube-system      local-path-provisioner-5cf85fd84d-rshtv   1m           8Mi             
kube-system      metrics-server-5985cbc9d7-5q2nf           19m          20Mi            
metallb-system   controller-5f67f69db-kmgn6                105m         18Mi            
metallb-system   speaker-ppqc9                             3m           15Mi            
+++ date +%s
++ echo -e '\e[0Ksection_end:1736876002:get_cpumem\r\e[0K'
section_end:1736876002:get_cpumem
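
Note: get_cpumem sets MAX_RETRIES, RETRY_DELAY and COUNT before calling "kubectl top", suggesting a retry loop around the metrics call; the first attempt succeeded here, so the loop body never appears in the trace. A plausible shape for that retry (the loop itself is an assumption):

# Assumed retry loop around the kubectl top call traced above.
set -o pipefail
MAX_RETRIES=5 RETRY_DELAY=5 COUNT=0
until kubectl top pods --all-namespaces --use-protocol-buffers | tee get_cpumem.txt; do
  COUNT=$((COUNT + 1))
  [ "${COUNT}" -ge "${MAX_RETRIES}" ] && break
  sleep "${RETRY_DELAY}"
done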

++ echo '$ k3d cluster delete ${CI_JOB_ID}'
$ k3d cluster delete ${CI_JOB_ID}
++ k3d cluster delete 41553679
INFO[0000] Deleting cluster '41553679'                  
INFO[0001] Deleting 1 attached volumes...               
INFO[0001] Removing cluster details from default kubeconfig... 
INFO[0001] Removing standalone kubeconfig file (if there is one)... 
INFO[0001] Successfully deleted cluster 41553679!       
++ echo '$ docker network rm ${CI_JOB_ID}'
$ docker network rm ${CI_JOB_ID}
++ docker network rm 41553679
41553679

section_end:1736876004:after_script
section_start:1736876004:upload_artifacts_on_failure
Uploading artifacts for failed job
Uploading artifacts...
events.txt: found 1 matching artifact files and directories 
get_cpumem.txt: found 1 matching artifact files and directories 
WARNING: images.txt: no matching files. Ensure that the artifact path is relative to the working directory (/builds/big-bang/product/packages/gluon) 
pod_logs: found 8 matching artifact files and directories 
cluster_info_dump.txt: found 1 matching artifact files and directories 
kubectl_describes: found 18 matching artifact files and directories 
WARNING: cypress-artifacts: no matching files. Ensure that the artifact path is relative to the working directory (/builds/big-bang/product/packages/gluon) 
Uploading artifacts as "archive" to coordinator... 201 Created  id=41553679 responseStatus=201 Created token=glcbt-64

section_end:1736876006:upload_artifacts_on_failure
section_start:1736876006:cleanup_file_variables
Cleaning up project directory and file based variables

section_end:1736876006:cleanup_file_variables
ERROR: Job failed: command terminated with exit code 1