Running with gitlab-runner 17.8.0 (e4f782b3)
  on graduated-runner-graduated-runner-gitlab-runner-766f6ddc85tl2fg t2_yNF_V, system ID: r_54DTYspBjyJp
Resolving secrets
section_start:1742245068:prepare_executor
Preparing the "kubernetes" executor
Using Kubernetes namespace: graduated-runner
Using Kubernetes executor with image registry1.dso.mil/bigbang-ci/bb-ci:2.21.1 ...
Using attach strategy to execute scripts...
section_end:1742245068:prepare_executor
section_start:1742245068:prepare_script
Preparing environment
Using FF_USE_POD_ACTIVE_DEADLINE_SECONDS, the Pod activeDeadlineSeconds will be set to the job timeout: 1h0m0s...
Waiting for pod graduated-runner/runner-t2ynfv-project-2489-concurrent-0-ffh7zo0b to be running, status is Pending
Waiting for pod graduated-runner/runner-t2ynfv-project-2489-concurrent-0-ffh7zo0b to be running, status is Pending
	ContainersNotInitialized: "containers with incomplete status: [istio-proxy init-permissions]"
	ContainersNotReady: "containers with unready status: [istio-proxy build helper svc-0]"
	ContainersNotReady: "containers with unready status: [istio-proxy build helper svc-0]"
Running on runner-t2ynfv-project-2489-concurrent-0-ffh7zo0b via graduated-runner-graduated-runner-gitlab-runner-766f6ddc85tl2fg...

section_end:1742245075:prepare_script
section_start:1742245075:get_sources
Getting source from Git repository
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/big-bang/product/packages/minio/.git/
Created fresh repository.
Checking out d4db6ec1 as detached HEAD (ref is refs/merge-requests/229/head)...

Skipping Git submodules setup

section_end:1742245076:get_sources
section_start:1742245076:step_script
Executing "step_script" stage of the job script
$ echo -e "\e[0Ksection_start:`date +%s`:k3d_up[collapsed=true]\r\e[0K\e[33;1mK3D Cluster Create\e[37m"
section_start:1742245076:k3d_up[collapsed=true]
K3D Cluster Create
$ git clone -b ${PIPELINE_REPO_BRANCH} ${PIPELINE_REPO} ${PIPELINE_REPO_DESTINATION}
Cloning into '../pipeline-repo'...
$ source ${PIPELINE_REPO_DESTINATION}/library/templates.sh
$ source ${PIPELINE_REPO_DESTINATION}/library/bigbang-functions.sh
$ source ${PIPELINE_REPO_DESTINATION}/library/package-functions.sh
$ source ${PIPELINE_REPO_DESTINATION}/library/k8s-functions.sh
$ source ${PIPELINE_REPO_DESTINATION}/library/rds-functions.sh
$ package_auth_setup
$ i=0; while [ "$i" -lt 12 ]; do docker info &>/dev/null && break; sleep 5; i=$(( i + 1 )) ; done
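The inline loop above polls `docker info` up to 12 times, 5 seconds apart, waiting for the Docker daemon to come up. The same bounded-retry shape can be factored into a helper; a minimal sketch (the `wait_for` name and the `WAIT_DELAY` override are illustrative additions, not part of the pipeline library — the retry count and default delay match the log):

```shell
#!/bin/sh
# wait_for CMD...: retry CMD up to 12 times, sleeping between attempts.
# Returns 0 as soon as CMD succeeds, 1 on timeout.
# WAIT_DELAY (default 5s) is an assumed knob, added here for illustration.
wait_for() {
  i=0
  while [ "$i" -lt 12 ]; do
    "$@" >/dev/null 2>&1 && return 0
    sleep "${WAIT_DELAY:-5}"
    i=$(( i + 1 ))
  done
  return 1
}

# Usage mirroring the job (waits up to ~60s for the daemon):
#   wait_for docker info
```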
$ docker network create --opt com.docker.network.bridge.name=${CI_JOB_ID} ${CI_JOB_ID} --driver=bridge -o "com.docker.network.driver.mtu"="1450" --subnet=172.20.0.0/16 --gateway 172.20.0.1
49165f3948f9ff84558fbcea6b56032c422ea195c4e903246037b4797d8b8bf8
$ chmod +x ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh; echo "Executing ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh..."; ./${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/k3d/deploy_k3d.sh
Executing ../pipeline-repo/clusters/k3d/dependencies/k3d/deploy_k3d.sh...
Creating k3d cluster with default metrics server
Configuring DNS for k3d-43431107-agent-0
Configuring DNS for k3d-43431107-agent-2
Configuring DNS for k3d-43431107-agent-1
INFO[0000] Using config file ../pipeline-repo/clusters/k3d/dependencies/k3d/config.yaml (k3d.io/v1alpha5#simple) 
INFO[0000] portmapping '80:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] portmapping '443:443' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
Waiting for k3d-43431107-agent-1 to start... (0s elapsed)
Waiting for k3d-43431107-agent-0 to start... (0s elapsed)
Waiting for k3d-43431107-agent-2 to start... (0s elapsed)
INFO[0000] Prep: Network                                
INFO[0000] Re-using existing network '43431107' (49165f3948f9ff84558fbcea6b56032c422ea195c4e903246037b4797d8b8bf8) 
INFO[0000] Created image volume k3d-43431107-images     
INFO[0000] Starting new tools node...                   
INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.7.5' 
INFO[0001] Creating node 'k3d-43431107-server-0'        
INFO[0001] Starting node 'k3d-43431107-tools'           
INFO[0002] Pulling image 'rancher/k3s:v1.31.4-k3s1'     
Waiting for k3d-43431107-agent-1 to start... (2s elapsed)
Waiting for k3d-43431107-agent-2 to start... (2s elapsed)
Waiting for k3d-43431107-agent-0 to start... (2s elapsed)
Waiting for k3d-43431107-agent-0 to start... (4s elapsed)
Waiting for k3d-43431107-agent-1 to start... (4s elapsed)
Waiting for k3d-43431107-agent-2 to start... (4s elapsed)
INFO[0006] Creating LoadBalancer 'k3d-43431107-serverlb' 
Waiting for k3d-43431107-agent-1 to start... (6s elapsed)
Waiting for k3d-43431107-agent-0 to start... (6s elapsed)
Waiting for k3d-43431107-agent-2 to start... (6s elapsed)
INFO[0006] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.7.5' 
INFO[0008] Using the k3d-tools node to gather environment information 
Waiting for k3d-43431107-agent-1 to start... (8s elapsed)
Waiting for k3d-43431107-agent-0 to start... (8s elapsed)
Waiting for k3d-43431107-agent-2 to start... (8s elapsed)
INFO[0008] HostIP: using network gateway 172.20.0.1 address 
INFO[0008] Starting cluster '43431107'                  
INFO[0008] Starting servers...                          
INFO[0008] Starting node 'k3d-43431107-server-0'        
Waiting for k3d-43431107-agent-0 to start... (10s elapsed)
Waiting for k3d-43431107-agent-1 to start... (10s elapsed)
Waiting for k3d-43431107-agent-2 to start... (10s elapsed)
INFO[0011] All agents already running.                  
INFO[0011] Starting helpers...                          
INFO[0011] Starting node 'k3d-43431107-serverlb'        
Waiting for k3d-43431107-agent-0 to start... (12s elapsed)
Waiting for k3d-43431107-agent-1 to start... (12s elapsed)
Waiting for k3d-43431107-agent-2 to start... (12s elapsed)
Waiting for k3d-43431107-agent-0 to start... (14s elapsed)
Waiting for k3d-43431107-agent-2 to start... (14s elapsed)
Waiting for k3d-43431107-agent-1 to start... (14s elapsed)
Timeout waiting for k3d-43431107-agent-0, skipping DNS configuration
Timeout waiting for k3d-43431107-agent-2, skipping DNS configuration
Timeout waiting for k3d-43431107-agent-1, skipping DNS configuration
INFO[0018] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... 
INFO[0020] Cluster '43431107' created successfully!     
INFO[0020] You can now use it like this:                
kubectl cluster-info
K3d cluster creation completed
$ until kubectl get deployment coredns -n kube-system -o go-template='{{.status.availableReplicas}}' | grep -v -e '<no value>'; do sleep 1s; done
1
$ chmod +x ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh; echo "Executing ${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh...";./${PIPELINE_REPO_DESTINATION}/clusters/k3d/dependencies/metallb/install_metallb.sh ;
Executing ../pipeline-repo/clusters/k3d/dependencies/metallb/install_metallb.sh...
namespace/metallb-system created
namespace/metallb-system labeled
secret/private-registry created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
Waiting on MetalLB controller/webhook...
deployment.apps/controller condition met
ipaddresspool.metallb.io/default created
l2advertisement.metallb.io/l2advertisement1 created
Waiting for daemon set "speaker" rollout to finish: 0 of 1 updated pods are available...
daemon set "speaker" successfully rolled out
$ get_all
section_start:1742245126:all_resources[collapsed=true]
All Cluster Resources
NAMESPACE        NAME                                          READY   STATUS    RESTARTS   AGE
kube-system      pod/coredns-ccb96694c-9pk5r                   1/1     Running   0          31s
kube-system      pod/local-path-provisioner-5cf85fd84d-ddnjg   1/1     Running   0          31s
kube-system      pod/metrics-server-5985cbc9d7-q4dbr           1/1     Running   0          31s
metallb-system   pod/controller-5f67f69db-wl96s                1/1     Running   0          24s
metallb-system   pod/speaker-vqnvv                             1/1     Running   0          24s

NAMESPACE        NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default          service/kubernetes        ClusterIP   172.20.0.1       <none>        443/TCP                  37s
kube-system      service/kube-dns          ClusterIP   172.20.0.10      <none>        53/UDP,53/TCP,9153/TCP   34s
kube-system      service/metrics-server    ClusterIP   172.20.187.106   <none>        443/TCP                  34s
metallb-system   service/webhook-service   ClusterIP   172.20.190.254   <none>        443/TCP                  24s

NAMESPACE        NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
metallb-system   daemonset.apps/speaker   1         1         1       1            1           kubernetes.io/os=linux   24s

NAMESPACE        NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system      deployment.apps/coredns                  1/1     1            1           34s
kube-system      deployment.apps/local-path-provisioner   1/1     1            1           34s
kube-system      deployment.apps/metrics-server           1/1     1            1           34s
metallb-system   deployment.apps/controller               1/1     1            1           24s

NAMESPACE        NAME                                                DESIRED   CURRENT   READY   AGE
kube-system      replicaset.apps/coredns-ccb96694c                   1         1         1       31s
kube-system      replicaset.apps/local-path-provisioner-5cf85fd84d   1         1         1       31s
kube-system      replicaset.apps/metrics-server-5985cbc9d7           1         1         1       31s
metallb-system   replicaset.apps/controller-5f67f69db                1         1         1       24s
section_end:1742245126:all_resources

$ echo -e "\e[0Ksection_end:`date +%s`:k3d_up\r\e[0K"
section_end:1742245126:k3d_up
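The `echo -e "\e[0Ksection_start:...\r\e[0K..."` / `section_end` pairs echoed throughout this job are GitLab's collapsible-section escape sequences. The pattern can be wrapped in small helpers; a sketch (function names are illustrative, not from the pipeline's library — the escape format matches what the job emits):

```shell
#!/bin/sh
# Emit GitLab collapsible-section markers, as echoed manually in the job above.
section_start() {
  # $1 = section name, $2 = optional options such as "[collapsed=true]"
  printf '\033[0Ksection_start:%s:%s%s\r\033[0K' "$(date +%s)" "$1" "${2:-}"
}
section_end() {
  printf '\033[0Ksection_end:%s:%s\r\033[0K' "$(date +%s)" "$1"
}

# Usage, as in the K3D step:
#   section_start k3d_up "[collapsed=true]"; echo "K3D Cluster Create"
#   ... cluster creation ...
#   section_end k3d_up
```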

$ rds_create
section_start:1742245126:dependency_install1[collapsed=true]
RDS Database Dependency Creation
$ dependency_install
section_start:1742245126:dependency_install[collapsed=true]
Dependency Install
Cloning default branch from https://repo1.dso.mil/platform-one/big-bang/apps/application-utilities/minio-operator.git
Cloning into 'repos/minio-operator'...
Installing dependency: repos/minio-operator into minio-operator namespace
namespace/minio-operator created
namespace/minio-operator labeled
secret/private-registry created
Helm installing repos/minio-operator/chart into minio-operator namespace using repos/minio-operator/tests/test-values.yaml for values
Release "minio-operator" does not exist. Installing it now.
NAME: minio-operator
LAST DEPLOYED: Mon Mar 17 20:58:47 2025
NAMESPACE: minio-operator
STATUS: deployed
REVISION: 1
NOTES:
1. Get the JWT for logging in to the console:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: console-sa-secret
  namespace: minio-operator
  annotations:
    kubernetes.io/service-account.name: console-sa
type: kubernetes.io/service-account-token
EOF
kubectl -n minio-operator get secret console-sa-secret -o jsonpath="{.data.token}" | base64 --decode

2. Get the Operator Console URL by running these commands:
  kubectl --namespace minio-operator port-forward svc/console 9090:9090
  echo "Visit the Operator Console at http://127.0.0.1:9090"
section_end:1742245132:dependency_install

$ dependency_wait
section_start:1742245132:dependency_wait[collapsed=true]
Dependency Wait
Waiting on CRDs ... done.
Waiting on stateful sets ... done.
Waiting on daemon sets ... done.
Waiting on deployments ... done.
Waiting on terminating pods ... done.
done.
section_end:1742245145:dependency_wait

$ dependency_images
section_start:1742245145:dep_images[collapsed=true]
Getting List of Dependency Images
section_end:1742245145:dep_images

$ chart_images
section_start:1742245145:chart_images[collapsed=true]
Getting List of Chart.yaml Images
registry1.dso.mil/ironbank/opensource/minio/minio:RELEASE.2025-01-20T14-49-07Z
registry1.dso.mil/ironbank/opensource/minio/operator-sidecar:v7.0.0
registry1.dso.mil/ironbank/opensource/kubernetes/kubectl:v1.29.6
section_end:1742245145:chart_images

$ get_upstream_ib_helm_test_values_overrides
section_start:1742245145:get_upstream_ib_helm_test_values_overrides[collapsed=true]
Getting any Test Values Overrides from Upstream IB Pipeline
Not triggered by an upstream IB pipeline
section_end:1742245145:get_upstream_ib_helm_test_values_overrides

$ package_install
section_start:1742245145:package_install[collapsed=true]
Package Install
namespace/minio created
namespace/minio labeled
secret/private-registry created
Helm installing minio/chart into minio namespace using minio/tests/test-values.yaml for values
Release "minio" does not exist. Installing it now.
W0317 20:59:05.812795    1066 warnings.go:70] unknown field "spec.pools[0].securityContext.capabilities"
NAME: minio
LAST DEPLOYED: Mon Mar 17 20:59:05 2025
NAMESPACE: minio
STATUS: deployed
REVISION: 1
NOTES:
To connect to the myminio tenant if it doesn't have a service exposed, you can port-forward to it by running:
  kubectl --namespace minio port-forward svc/myminio-console 9090:9090

  Then visit the MinIO Console at http://127.0.0.1:9090
section_end:1742245182:package_install

$ package_wait
section_start:1742245182:package_wait[collapsed=true]
Package Wait
Waiting on CRDs ... done.
Waiting on stateful sets ... done.
Waiting on daemon sets ... done.
Waiting on deployments ... done.
Waiting on terminating pods ... done.
done.
section_end:1742245195:package_wait

$ installed_images
section_start:1742245195:inst_images[collapsed=true]
Getting List of Installed Images
section_end:1742245196:inst_images

$ image_list_creation
section_start:1742245196:image_fetch[collapsed=true]
Image List Creation
docker.io/rancher/mirrored-library-busybox:1.36.1
registry1.dso.mil/ironbank/opensource/kubernetes/kubectl:v1.29.6
registry1.dso.mil/ironbank/opensource/minio/minio:RELEASE.2025-01-20T14-49-07Z
registry1.dso.mil/ironbank/opensource/minio/operator-sidecar:v7.0.0
section_end:1742245196:image_fetch

$ post_install_packages
section_start:1742245196:post_install_packages[collapsed=true]
Post Install Packages
section_end:1742245196:post_install_packages

$ post_install_wait
section_start:1742245196:post_install_wait[collapsed=true]
Post Install Wait
section_end:1742245196:post_install_wait

$ package_test
section_start:1742245196:package_test[collapsed=true]
Package Test
NAME: minio
LAST DEPLOYED: Mon Mar 17 20:59:05 2025
NAMESPACE: minio
STATUS: deployed
REVISION: 1
TEST SUITE:     minio-instance-cypress-config
Last Started:   Mon Mar 17 20:59:56 2025
Last Completed: Mon Mar 17 20:59:56 2025
Phase:          Succeeded
TEST SUITE:     minio-instance-script-config
Last Started:   Mon Mar 17 20:59:56 2025
Last Completed: Mon Mar 17 20:59:56 2025
Phase:          Succeeded
TEST SUITE:     minio-instance-cypress-test
Last Started:   Mon Mar 17 20:59:56 2025
Last Completed: Mon Mar 17 21:00:36 2025
Phase:          Succeeded
TEST SUITE:     minio-instance-script-test
Last Started:   Mon Mar 17 21:00:36 2025
Last Completed: Mon Mar 17 21:00:40 2025
Phase:          Succeeded
NOTES:
To connect to the myminio tenant if it doesn't have a service exposed, you can port-forward to it by running:
  kubectl --namespace minio port-forward svc/myminio-console 9090:9090

  Then visit the MinIO Console at http://127.0.0.1:9090
***** Start Helm Test Logs *****
--2025-03-17 21:00:11--  https://repo1.dso.mil/big-bang/product/packages/gluon/-/raw/master/common/commands.js
Resolving repo1.dso.mil (repo1.dso.mil)... 15.205.173.153
Connecting to repo1.dso.mil (repo1.dso.mil)|15.205.173.153|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4656 (4.5K) [text/plain]
Saving to: '/test/cypress/common/commands.js'

     0K ....                                                  100%  147M=0s

2025-03-17 21:00:12 (147 MB/s) - '/test/cypress/common/commands.js' saved [4656/4656]


DevTools listening on ws://127.0.0.1:42933/devtools/browser/e2091951-82e5-4837-839c-8b7f35125921
This folder is not writable: /test

Writing to this directory is required by Cypress in order to store screenshots and videos.

Enable write permissions to this directory to ensure screenshots and videos are stored.

If you don't require screenshots or videos to be stored you can safely ignore this warning.

tput: No value for $TERM and no -T specified
================================================================================

  (Run Starting)

  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ Cypress:        13.17.0                                                                        │
  │ Browser:        Chrome 132 (headless)                                                          │
  │ Node Version:   v22.13.0 (/usr/local/bin/node)                                                 │
  │ Specs:          1 found (01-minio-login.spec.cy.js)                                            │
  │ Searched:       cypress/e2e/**/*.cy.{js,jsx,ts,tsx}                                            │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘


────────────────────────────────────────────────────────────────────────────────────────────────────
                                                                                                    
  Running:  01-minio-login.spec.cy.js                                                       (1 of 1)


  Minio Login
    ✓ Check Minio Login (1224ms)


  1 passing (3s)


  (Results)

  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ Tests:        1                                                                                │
  │ Passing:      1                                                                                │
  │ Failing:      0                                                                                │
  │ Pending:      0                                                                                │
  │ Skipped:      0                                                                                │
  │ Screenshots:  0                                                                                │
  │ Video:        true                                                                             │
  │ Duration:     2 seconds                                                                        │
  │ Spec Ran:     01-minio-login.spec.cy.js                                                        │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘


  (Video)

  -  Started compressing: Compressing to 35 CRF                                                     
  -  Finished compressing: 0 seconds                                                 

  -  Video output: /test/cypress/videos/01-minio-login.spec.cy.js.mp4


tput: No value for $TERM and no -T specified
================================================================================

  (Run Finished)


       Spec                                              Tests  Passing  Failing  Pending  Skipped  
  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ ✔  01-minio-login.spec.cy.js                00:02        1        1        -        -        - │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘
    ✔  All specs passed!                        00:02        1        1        -        -        -  

npm notice
npm notice New major version of npm available! 10.9.2 -> 11.2.0
npm notice Changelog: https://github.com/npm/cli/releases/tag/v11.2.0
npm notice To update run: npm install -g npm@11.2.0
npm notice
found cypress logs from the pod
no cypress screenshots found from the pod
found cypress videos from the pod
---
Running test-write.sh...
---
+ attempt_counter=0
+ max_attempts=25
+++ '[' -n 80 ']'
+++ echo :
++ mc --config-dir /test config host add bigbang http://minio:80 minio minio123
++ echo 0
+ '[' 0 -eq 0 ']'
+ mc --config-dir /test rb bigbang/foobar --force
+ mc --config-dir /test mb bigbang/foobar
Bucket created successfully `bigbang/foobar`.
+ mc --config-dir /test ls bigbang/foobar
+ base64 /dev/urandom
+ head -c 10000000
+ md5sum /test/file.txt
+ mc --config-dir /test cp /test/file.txt bigbang/foobar/file.txt
`/test/file.txt` -> `bigbang/foobar/file.txt`
Total: 9.54 MiB, Transferred: 9.54 MiB, Speed: 159.42 MiB/s
+ mc --config-dir /test ls bigbang/foobar/file.txt
[2025-03-17 21:00:38 UTC] 9.5MiB STANDARD file.txt
+ mc --config-dir /test cp bigbang/foobar/file.txt /test/file.txt
`bigbang/foobar/file.txt` -> `/test/file.txt`
Total: 9.54 MiB, Transferred: 9.54 MiB, Speed: 444.87 MiB/s
+ mc --config-dir /test rb bigbang/foobar --force
Removed `bigbang/foobar` successfully.
+ md5sum -c /test/filesig
/test/file.txt: OK
***** End Helm Test Logs *****
Cypress logs found from the pipe
No Cypress screenshots found from the pipe
Cypress videos found from the pipe
section_end:1742245241:package_test
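The test-write.sh trace above follows a standard round-trip integrity pattern: generate random data, record its checksum, upload, download, and verify the bytes survived. A minimal local sketch of the same pattern (using `cp` into a scratch directory as a stand-in for `mc cp`, which needs a live MinIO endpoint):

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
# 1. Generate ~10 MB of random data (as in the log: base64 /dev/urandom | head -c 10000000).
base64 /dev/urandom | head -c 10000000 > "$work/file.txt"
# 2. Record the checksum before "upload".
md5sum "$work/file.txt" > "$work/filesig"
# 3. Upload then download -- stand-ins for `mc cp` to/from the bucket.
mkdir "$work/bucket"
cp "$work/file.txt" "$work/bucket/file.txt"
cp "$work/bucket/file.txt" "$work/file.txt"
# 4. Verify the round trip preserved the bytes (prints "<tmpdir>/file.txt: OK").
md5sum -c "$work/filesig"
rm -rf "$work"
```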

$ cluster_deprecation_check
section_start:1742245241:kubent_check[collapsed=true]
In Cluster Deprecation Check
9:00PM INF >>> Kube No Trouble `kubent` <<<
9:00PM INF version 0.7.3 (git sha 57480c07b3f91238f12a35d0ec88d9368aae99aa)
9:00PM INF Initializing collectors and retrieving data
9:00PM INF Target K8s version is 1.31.5-eks-8cce635
9:00PM INF Retrieved 0 resources from collector name=Cluster
9:00PM INF Retrieved 762 resources from collector name="Helm v3"
9:00PM INF Loaded ruleset name=custom.rego.tmpl
9:00PM INF Loaded ruleset name=deprecated-1-16.rego
9:00PM INF Loaded ruleset name=deprecated-1-22.rego
9:00PM INF Loaded ruleset name=deprecated-1-25.rego
9:00PM INF Loaded ruleset name=deprecated-1-26.rego
9:00PM INF Loaded ruleset name=deprecated-1-27.rego
9:00PM INF Loaded ruleset name=deprecated-1-29.rego
9:00PM INF Loaded ruleset name=deprecated-1-32.rego
9:00PM INF Loaded ruleset name=deprecated-future.rego
section_end:1742245250:kubent_check

$ image_annotation_validation
section_start:1742245250:image_annot[collapsed=true]
Image Annotation Validation
2025/03/17 21:00:50 logged in via /root/.docker/config.json
section_end:1742245250:image_annot

$ package_control_validate
section_start:1742245250:package_control_validate[collapsed=true]
Package Control Validation
section_end:1742245250:package_control_validate

$ touch $CI_PROJECT_DIR/success

section_end:1742245250:step_script
section_start:1742245250:after_script
Running after_script
Running after script...
$ source ${PIPELINE_REPO_DESTINATION}/library/templates.sh
$ source ${PIPELINE_REPO_DESTINATION}/library/bigbang-functions.sh
$ source ${PIPELINE_REPO_DESTINATION}/library/package-functions.sh
$ source ${PIPELINE_REPO_DESTINATION}/library/k8s-functions.sh
$ get_ns
section_start:1742245250:namespaces[collapsed=true]
Namespaces
NAME              STATUS   AGE     LABELS
default           Active   2m42s   kubernetes.io/metadata.name=default
kube-node-lease   Active   2m42s   kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   2m43s   kubernetes.io/metadata.name=kube-public
kube-system       Active   2m43s   kubernetes.io/metadata.name=kube-system
metallb-system    Active   2m28s   app=metallb,kubernetes.io/metadata.name=metallb-system
minio             Active   105s    app.kubernetes.io/name=minio,kubernetes.io/metadata.name=minio
minio-operator    Active   2m3s    app.kubernetes.io/name=minio-operator,kubernetes.io/metadata.name=minio-operator
section_end:1742245250:namespaces

$ get_all
section_start:1742245250:all_resources[collapsed=true]
All Cluster Resources
NAMESPACE        NAME                                          READY   STATUS      RESTARTS   AGE
kube-system      pod/coredns-ccb96694c-9pk5r                   1/1     Running     0          2m35s
kube-system      pod/local-path-provisioner-5cf85fd84d-ddnjg   1/1     Running     0          2m35s
kube-system      pod/metrics-server-5985cbc9d7-q4dbr           1/1     Running     0          2m35s
metallb-system   pod/controller-5f67f69db-wl96s                1/1     Running     0          2m28s
metallb-system   pod/speaker-vqnvv                             1/1     Running     0          2m28s
minio-operator   pod/minio-operator-5bcd8579fb-72cd4           1/1     Running     0          2m3s
minio-operator   pod/minio-operator-5bcd8579fb-c4cfg           1/1     Running     0          2m3s
minio            pod/minio-instance-cypress-test               0/1     Completed   0          54s
minio            pod/minio-instance-script-test                0/1     Completed   0          14s
minio            pod/minio-minio-instance-pool-0-0             2/2     Running     0          98s
minio            pod/minio-minio-instance-pool-0-1             2/2     Running     0          98s

NAMESPACE        NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default          service/kubernetes                     ClusterIP   172.20.0.1       <none>        443/TCP                  2m41s
kube-system      service/kube-dns                       ClusterIP   172.20.0.10      <none>        53/UDP,53/TCP,9153/TCP   2m38s
kube-system      service/metrics-server                 ClusterIP   172.20.187.106   <none>        443/TCP                  2m38s
metallb-system   service/webhook-service                ClusterIP   172.20.190.254   <none>        443/TCP                  2m28s
minio-operator   service/operator                       ClusterIP   172.20.252.14    <none>        4222/TCP                 2m3s
minio-operator   service/sts                            ClusterIP   172.20.108.153   <none>        4223/TCP                 2m3s
minio            service/minio                          ClusterIP   172.20.176.93    <none>        80/TCP                   100s
minio            service/minio-minio-instance-console   ClusterIP   172.20.195.53    <none>        9090/TCP                 100s
minio            service/minio-minio-instance-hl        ClusterIP   None             <none>        9000/TCP                 100s

NAMESPACE        NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
metallb-system   daemonset.apps/speaker   1         1         1       1            1           kubernetes.io/os=linux   2m28s

NAMESPACE        NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system      deployment.apps/coredns                  1/1     1            1           2m38s
kube-system      deployment.apps/local-path-provisioner   1/1     1            1           2m38s
kube-system      deployment.apps/metrics-server           1/1     1            1           2m38s
metallb-system   deployment.apps/controller               1/1     1            1           2m28s
minio-operator   deployment.apps/minio-operator           2/2     2            2           2m3s

NAMESPACE        NAME                                                DESIRED   CURRENT   READY   AGE
kube-system      replicaset.apps/coredns-ccb96694c                   1         1         1       2m35s
kube-system      replicaset.apps/local-path-provisioner-5cf85fd84d   1         1         1       2m35s
kube-system      replicaset.apps/metrics-server-5985cbc9d7           1         1         1       2m35s
metallb-system   replicaset.apps/controller-5f67f69db                1         1         1       2m28s
minio-operator   replicaset.apps/minio-operator-5bcd8579fb           2         2         2       2m3s

NAMESPACE   NAME                                           READY   AGE
minio       statefulset.apps/minio-minio-instance-pool-0   2/2     98s
section_end:1742245250:all_resources

$ get_events
section_start:1742245250:show_event_log[collapsed=true]
Cluster Event Log
NOTICE: Cluster events can be found in artifact events.txt
section_end:1742245250:show_event_log

$ bigbang_pipeline
Pipeline type is not BB, skipping
$ get_debug
Debug not enabled, skipping
$ k3d cluster delete ${CI_JOB_ID}
INFO[0000] Deleting cluster '43431107'                  
INFO[0003] Deleting 1 attached volumes...               
INFO[0003] Removing cluster details from default kubeconfig... 
INFO[0003] Removing standalone kubeconfig file (if there is one)... 
INFO[0003] Successfully deleted cluster 43431107!       
$ docker network rm ${CI_JOB_ID}
43431107

section_end:1742245255:after_script
section_start:1742245255:upload_artifacts_on_success
Uploading artifacts for successful job
Uploading artifacts...
events.txt: found 1 matching artifact files and directories 
WARNING: get_cpumem.txt: no matching files. Ensure that the artifact path is relative to the working directory (/builds/big-bang/product/packages/minio) 
WARNING: db_values.yaml: no matching files. Ensure that the artifact path is relative to the working directory (/builds/big-bang/product/packages/minio) 
images.txt: found 1 matching artifact files and directories 
WARNING: pod_logs: no matching files. Ensure that the artifact path is relative to the working directory (/builds/big-bang/product/packages/minio) 
WARNING: cluster_info_dump.txt: no matching files. Ensure that the artifact path is relative to the working directory (/builds/big-bang/product/packages/minio) 
WARNING: kubectl_describes: no matching files. Ensure that the artifact path is relative to the working directory (/builds/big-bang/product/packages/minio) 
cypress-artifacts: found 6 matching artifact files and directories 
WARNING: oscal-assessment-results.yaml: no matching files. Ensure that the artifact path is relative to the working directory (/builds/big-bang/product/packages/minio) 
Uploading artifacts as "archive" to coordinator... 201 Created  id=43431107 responseStatus=201 Created token=glcbt-64

section_end:1742245256:upload_artifacts_on_success
section_start:1742245256:cleanup_file_variables
Cleaning up project directory and file based variables

section_end:1742245256:cleanup_file_variables
Job succeeded