Commit 78c59da8 authored by Zach Callahan

Resolve "Elasticsearch AutoRollingUpgrade is getting error"

parent 36eb4351
1 merge request: !304 Resolve "Elasticsearch AutoRollingUpgrade is getting error"
@@ -4,6 +4,17 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
---
## [1.19.0-bb.2] - 2024-09-30
### Removed
- The auto rolling upgrade job has been removed entirely. The ECK operator (which this package depends on)
  already performs rolling upgrades for Elasticsearch and Kibana version changes, which is all the upgrade job
  tried to do. The upgrade job has also been nonfunctional for some time when Kyverno policies are enabled.
[Here](https://github.com/elastic/cloud-on-k8s/blob/7323879c77aecede9971cee8a4b4988906725d7b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc#upgrading-the-cluster)
are the relevant docs from the ECK operator project outlining the operator's upgrade logic.
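  As a minimal sketch (the `logging-ek` release name and `logging` namespace are illustrative assumptions), an operator-driven upgrade after a version bump can be monitored with:

  ```bash
  # Compare the desired version (.spec) with the version the operator has rolled out (.status).
  kubectl get elasticsearch logging-ek -n logging \
    -o jsonpath='{.spec.version}{" -> "}{.status.version}{" ("}{.status.phase}{")"}{"\n"}'
  ```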
## [1.19.0-bb.1] - 2024-09-26
### Changed
......
<!-- Warning: Do not manually edit this file. See notes on gluon + helm-docs at the end of this file for more information. -->
# elasticsearch-kibana
![Version: 1.19.0-bb.1](https://img.shields.io/badge/Version-1.19.0--bb.1-informational?style=flat-square) ![AppVersion: 8.15.1](https://img.shields.io/badge/AppVersion-8.15.1-informational?style=flat-square)
![Version: 1.19.0-bb.2](https://img.shields.io/badge/Version-1.19.0--bb.2-informational?style=flat-square) ![AppVersion: 8.15.1](https://img.shields.io/badge/AppVersion-8.15.1-informational?style=flat-square)
Configurable Deployment of Elasticsearch and Kibana Custom Resources Wrapped Inside a Helm Chart.
@@ -41,7 +41,6 @@ helm install elasticsearch-kibana chart/
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| domain | string | `"dev.bigbang.mil"` | Domain used for BigBang created exposed services. |
| autoRollingUpgrade.enabled | bool | `false` | Enable BigBang specific autoRollingUpgrade support |
| imagePullPolicy | string | `"IfNotPresent"` | Pull Policy for all non-init containers in this package. |
| fluentbit | object | `{"enabled":false}` | Toggle for networkpolicies to allow fluentbit ingress |
| kibana.version | string | `"8.15.1"` | Kibana version |
......
apiVersion: v2
name: elasticsearch-kibana
description: Configurable Deployment of Elasticsearch and Kibana Custom Resources Wrapped Inside a Helm Chart.
version: 1.19.0-bb.1
version: 1.19.0-bb.2
appVersion: 8.15.1
dependencies:
- name: gluon
......
{{- if .Values.autoRollingUpgrade.enabled }}
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: api-egress-upgrade-job
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "-10"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed,before-hook-creation
spec:
egress:
- to:
- ipBlock:
cidr: {{ .Values.networkPolicies.controlPlaneCidr }}
{{- if eq .Values.networkPolicies.controlPlaneCidr "0.0.0.0/0" }}
# ONLY Block requests to AWS metadata IP
except:
- 169.254.169.254/32
{{- end }}
podSelector:
matchLabels:
app.kubernetes.io/name: bigbang-ek-upgrade-job
policyTypes:
- Egress
{{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}-bb-upgrade
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "-10"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed,before-hook-creation
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: bb-{{ .Release.Name }}-upgrade-view
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "-10"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed,before-hook-creation
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
verbs: ["get", "list", "watch"]
- apiGroups: ["kibana.k8s.elastic.co"]
resources: ["kibanas"]
verbs: ["get", "list", "update", "patch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Release.Name }}-bb-upgrade
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "-10"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed,before-hook-creation
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: bb-{{ .Release.Name }}-upgrade-view
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-bb-upgrade
namespace: {{ .Release.Namespace }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: bb-{{ .Release.Name }}-upgrade
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "-5"
spec:
backoffLimit: 3
ttlSecondsAfterFinished: 480
template:
metadata:
name: bb-{{ .Release.Name }}-upgrade
labels:
app.kubernetes.io/name: bigbang-ek-upgrade-job
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
serviceAccountName: {{ .Release.Name }}-bb-upgrade
containers:
- name: bb-{{ .Release.Name }}-upgrade
image: {{ $.Values.upgradeJob.image.repository }}:{{ $.Values.upgradeJob.image.tag }}
command:
- /bin/bash
- -ec
- |
if [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 7.9.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 7.10.* ]]; then
export ES_DESIRED_VERSION="7.10.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 7.10.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 7.12.* ]]; then
export ES_DESIRED_VERSION="7.12.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 7.12.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 7.13.* ]]; then
export ES_DESIRED_VERSION="7.13.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 7.13.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 7.14.* ]]; then
export ES_DESIRED_VERSION="7.14.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 7.13.* || $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 7.14.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 7.16.* ]]; then
export ES_DESIRED_VERSION="7.16.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 7.16.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 7.17.* ]]; then
export ES_DESIRED_VERSION="7.17.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 7.17.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.2.* ]]; then
export ES_DESIRED_VERSION="8.2.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.2.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.3.* ]]; then
export ES_DESIRED_VERSION="8.3.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.3.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.4.* ]]; then
export ES_DESIRED_VERSION="8.4.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.4.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.5.* ]]; then
export ES_DESIRED_VERSION="8.5.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.5.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.6.* ]]; then
export ES_DESIRED_VERSION="8.6.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.6.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.7.* ]]; then
export ES_DESIRED_VERSION="8.7.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.7.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.8.* ]]; then
export ES_DESIRED_VERSION="8.8.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.8.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.9.* ]]; then
export ES_DESIRED_VERSION="8.9.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.9.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.10.* ]]; then
export ES_DESIRED_VERSION="8.10.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.10.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.11.* ]]; then
export ES_DESIRED_VERSION="8.11.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.11.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.12.* ]]; then
export ES_DESIRED_VERSION="8.12.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.12.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.13.* ]]; then
export ES_DESIRED_VERSION="8.13.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.13.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.14.* ]]; then
export ES_DESIRED_VERSION="8.14.*"
export ROLLING_UPGRADE="true"
elif [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == 8.14.* ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.spec.version}') == 8.15.* ]]; then
export ES_DESIRED_VERSION="8.15.*"
export ROLLING_UPGRADE="true"
fi
if [[ "$ROLLING_UPGRADE" == "true" ]]; then
echo "Running Rolling Upgrade Prep Commands"
kubectl annotate --overwrite kibana {{ .Release.Name }} -n {{ .Release.Namespace }} 'eck.k8s.elastic.co/managed=false'
kubectl delete deployment -l kibana.k8s.elastic.co/name={{ .Release.Name }},common.k8s.elastic.co/type=kibana -n {{ .Release.Namespace }}
curl -XPUT -ku "elastic:$elastic" "https://{{ .Release.Name }}-es-http.{{ .Release.Namespace }}.svc:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d' { "persistent": { "cluster.routing.allocation.enable": "primaries" } }'
curl -XPOST -ku "elastic:$elastic" "https://{{ .Release.Name }}-es-http.{{ .Release.Namespace }}.svc:9200/_flush/synced?pretty"
echo "Rolling Upgrade Prep Commands Completed"
else
echo "No Upgrade Prep Necessary :D"
if {{ .Values.istio.enabled }}; then
echo "Killing Istio Sidecar"
curl -X POST http://localhost:15020/quitquitquit
fi
exit 0
fi
until [[ $( kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.phase}' ) == "Ready" ]] && [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.version}') == $ES_DESIRED_VERSION ]]; do
echo "ES cluster version $ES_DESIRED_VERSION not yet Ready" && sleep 10;
done
if [[ $( curl -ku "elastic:$elastic" -k "https://{{ .Release.Name }}-es-http.{{ .Release.Namespace }}.svc:9200/_cluster/settings?pretty" | jq '.persistent.cluster.routing.allocation.enable' | tr -d '"' ) == "primaries" ]]; then
echo "Running Post-Upgrade Commands"
curl -XPUT -ku "elastic:$elastic" "https://{{ .Release.Name }}-es-http.{{ .Release.Namespace }}.svc:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d' { "persistent": { "cluster.routing.allocation.enable": null } }'
until [[ $(kubectl get elasticsearch {{ .Release.Name }} -n {{ .Release.Namespace }} -o jsonpath='{.status.health}') == "green" ]]; do
echo "Waiting for ES cluster to be green" && sleep 5;
done
kubectl annotate kibana {{ .Release.Name }} -n {{ .Release.Namespace }} 'eck.k8s.elastic.co/managed-'
echo "Post-Upgrade Commands completed"
if {{ .Values.istio.enabled }}; then
echo "Killing Istio Sidecar"
curl -X POST http://localhost:15020/quitquitquit
fi
exit 0
else
kubectl annotate kibana {{ .Release.Name }} -n {{ .Release.Namespace }} 'eck.k8s.elastic.co/managed-'
echo "No post-upgrade commands necessary"
if {{ .Values.istio.enabled }}; then
echo "Killing Istio Sidecar"
curl -X POST http://localhost:15020/quitquitquit
fi
exit 0
fi
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 100m
memory: 256Mi
envFrom:
- secretRef:
name: {{ .Release.Name }}-es-elastic-user
securityContext:
capabilities:
drop:
- ALL
{{- with $.Values.elasticsearch.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: OnFailure
{{- end }}
# -- Domain used for BigBang created exposed services.
domain: dev.bigbang.mil
autoRollingUpgrade:
# -- Enable BigBang specific autoRollingUpgrade support
enabled: false
# -- Pull Policy for all non-init containers in this package.
imagePullPolicy: IfNotPresent
......
@@ -41,14 +41,12 @@ Chart.yaml](https://github.com/prometheus-community/helm-charts/blob/main/charts
6. Generate the `README.md` updates by following the [guide in gluon](https://repo1.dso.mil/platform-one/big-bang/apps/library-charts/gluon/-/blob/master/docs/bb-package-readme.md).
- Renovate bot may have already performed this step for you as well! 🤖
7. If this is a new minor version of Elastic you will likely need to add a new section to `chart/templates/bigbang/upgrade-job.yaml` for the new version upgrade. Follow the existing examples to update the job to support upgrades between old version -> new version.
8. Push up your changes, add upgrade notices if applicable, validate that CI passes.
7. Push up your changes, add upgrade notices if applicable, validate that CI passes.
- If there are any failures, follow the information in the pipeline to make the necessary updates.
- Add the `debug` label to the MR for more detailed information.
- Reach out to the CODEOWNERS if needed.
9. As part of your MR that modifies bigbang packages, you should modify the bigbang [bigbang/tests/test-values.yaml](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/tests/test-values.yaml?ref_type=heads) against your branch for the CI/CD MR testing by enabling your packages.
8. As part of your MR that modifies bigbang packages, you should modify the bigbang [bigbang/tests/test-values.yaml](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/tests/test-values.yaml?ref_type=heads) against your branch for the CI/CD MR testing by enabling your packages.
- To do this, at a minimum, you will need to follow the instructions at [bigbang/docs/developer/test-package-against-bb.md](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/docs/developer/test-package-against-bb.md?ref_type=heads) with Elasticsearch-Kibana enabled (the sketch below is a reference; the actual changes could be more extensive depending on what changes were made to Elasticsearch-Kibana in the package MR).
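As a reference, the package can be toggled on in the test values with something like the following (a hedged sketch: the `elasticsearchKibana` key and `git.branch` field follow common Big Bang value conventions and should be verified against the linked doc; requires `yq` v4):

```bash
# Enable the package in bigbang/tests/test-values.yaml and point it at the MR branch.
# Key names are assumptions -- verify against test-package-against-bb.md.
yq -i '
  .elasticsearchKibana.enabled = true |
  .elasticsearchKibana.git.branch = "your-feature-branch"
' tests/test-values.yaml
```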
@@ -216,7 +214,6 @@ fluentbit:
Testing Steps:
- Ensure all pods go to running (NOTE: this is especially important for the upgrade testing since Big Bang has an "auto rolling upgrade" job in place)
- Log in to Elasticsearch [default credentials](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/docs/guides/using-bigbang/default-credentials.md) to ensure that the Elasticsearch endpoint is available
- Log in to Kibana with [default credentials](https://repo1.dso.mil/big-bang/bigbang/-/blob/master/docs/guides/using-bigbang/default-credentials.md), using the password in the `logging-ek-es-elastic-user` secret and username `elastic`
```shell
......
## Troubleshooting
#### AutoRollingUpgrade
Once upgraded, an Elasticsearch cluster cannot be rolled back to the previous version, even if the cluster is unhealthy after a minor version upgrade and the `autoRollingUpgrade` commands are attempted.
- If your Elasticsearch pods are not restarting and you have 1 data node and 1 master node, the ECK-Operator will not automatically re-deploy the pods, because as soon as 1 node goes offline the cluster health goes Red. In that case you need to manually kick the pods, starting with the data node first.
Check if your Elasticsearch cluster is unhealthy with status "Red":
```bash
kubectl get elasticsearches -A
```
- If the ek HelmRelease states "Upgrade tries Exhausted" and Elasticsearch or Kibana is in a bad state, check the logs for the ECK-Operator to see whether nodes need to be manually restarted:
```bash
kubectl logs elastic-operator-0 -n eck-operator
```
If you see logs like the following:
```
{"log.level":"info","@timestamp":"2021-04-16T20:57:24.771Z","log.logger":"driver","message":"Cannot restart some nodes for upgrade at this time","service.version":"1.3.0+6db1914b","service.type":"eck","ecs.version":"1.4.0","namespace":"logging","es_name":"logging-ek","failed_predicates":{"...":["logging-ek-es-data-0","logging-ek-es-master-0"]}}
```
Manually delete the pods mentioned in the log, e.g. starting with "logging-ek-es-data-0", and then "logging-ek-es-master-0" if it still is not terminating on its own after the data pod is 2/2 Ready (see the sketch below).
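A minimal sketch of that manual kick, assuming the `logging` namespace and the pod names from the log above:

```bash
# Delete the data pod first and wait for it to come back Ready,
# then delete the master pod if the operator has not already cycled it.
kubectl delete pod logging-ek-es-data-0 -n logging
kubectl wait --for=condition=Ready pod/logging-ek-es-data-0 -n logging --timeout=10m
kubectl delete pod logging-ek-es-master-0 -n logging
```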
- If Elasticsearch has upgraded and is showing Green/Yellow health status but the new Kibana pods are stuck at 1/2, check the logs for Kibana and Elasticsearch:
```bash
kubectl logs -l common.k8s.elastic.co/type=kibana -n logging -c kibana
kubectl logs logging-ek-es-data-0 -n logging -c elasticsearch
```
If Kibana shows the following logs:
```
"message":"[search_phase_execution_exception]: all shards failed"}
```
Check Elasticsearch logs for the troublesome indexes:
```
"Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.kibana_task_manager_1][0]]. Consider using `allow_partial_search_results` setting to bypass this error."
```
Perform the following commands to delete the `.kibana_task_manager_X` index. WARNING: this will erase any Kibana configuration, index mappings, dashboards, role mappings, etc.
```bash
kubectl port-forward svc/logging-ek-es-http -n logging 9200:9200
curl -XDELETE -ku "elastic:ELASTIC_USER_PASSWORD" "https://localhost:9200/.kibana_task_manager_1"
```
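To confirm the index is gone (Kibana recreates it when its pods restart), the remaining `.kibana*` indices can be listed; this sketch assumes the same port-forward is still running:

```bash
curl -XGET -ku "elastic:ELASTIC_USER_PASSWORD" "https://localhost:9200/_cat/indices/.kibana*?v"
```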
#### Error Failed to Flush Chunk
The Fluentbit pods on the Release Cluster may have occasional issues with reliably sending their 2000+ logs per minute to Elasticsearch because ES is not tuned properly
@@ -114,15 +62,13 @@ fluentbit:
#### Yellow ES Health Status and Unassigned Shards
After a BigBang `autoRollingUpgrade` job, cluster shard allocation may not have been properly re-enabled, resulting in a yellow health status for the Elasticsearch cluster and unassigned shards.
To check Cluster Health run:
```
kubectl get elasticsearch -A
```
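If allocation was left restricted to `primaries`, it can be re-enabled manually. This sketch mirrors the call the removed upgrade job made, assuming a port-forward to the `logging-ek-es-http` service and the `elastic` user's password:

```bash
kubectl port-forward svc/logging-ek-es-http -n logging 9200:9200 &
curl -XPUT -ku "elastic:ELASTIC_USER_PASSWORD" "https://localhost:9200/_cluster/settings?pretty" \
  -H 'Content-Type: application/json' \
  -d '{ "persistent": { "cluster.routing.allocation.enable": null } }'
```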
To view the sttus of shards run:
To view the status of shards run:
```
curl -XGET -H 'Content-Type: application/json' -ku "elastic:$(kubectl get secrets -n logging logging-ek-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')" "https://localhost:9200/_cat/shards?h=index,shard,prirep,state,un
......