Open Discussion: App and Version Label Inconsistency When Using BigBang, BigBang Packages, Istio, and Kiali
Not sure if this is the right place for this, but I copied it from a wiki where I'm documenting my findings related to Kiali errors I've been trying to fix and understand. I'm hoping to get some insights on the proper use and configuration of these labels, and whether anything can be done BigBang-wide to standardize them and satisfy both Istio and Kiali when deploying BigBang.
Reconciling App and Version Label Inconsistency When Using BigBang, Istio, and Kiali
Istio's Recommendations
Istio recommends the use of `app` and `version` labels on pods in the mesh. Istio is able to reconcile the use of the labels `app` and `version`, and/or the Kubernetes recommended labels `app.kubernetes.io/name` and `app.kubernetes.io/version`.
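As a sketch of what this looks like in practice, a Deployment pod template could carry both label conventions side by side (the app name, version, and image below are illustrative, not from any BigBang package):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: example-app
  template:
    metadata:
      labels:
        # Istio's classic labels
        app: example-app
        version: "1.2.3"
        # Kubernetes recommended labels
        app.kubernetes.io/name: example-app
        app.kubernetes.io/version: "1.2.3"
    spec:
      containers:
        - name: example-app
          image: example/app:1.2.3
```

Istio can work with either pair; the friction described below comes from Kiali only reading one pair at a time.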
Kiali's Needs
Kiali, on the other hand, can only be configured to use a single set of these Istio labels to reconcile the telemetry of the workloads.
Kiali Documentation related to this.
Gravity's Configuration
Gravity's Kiali configuration is currently set to use these Istio labels:
```yaml
cr:
  spec:
    istio_labels:
      app_label_name: "app.kubernetes.io/name"
      version_label_name: "app.kubernetes.io/version"
```
BigBang, Istio, Kiali, and BigBang Packages
On the Gravity platform, we are using BigBang Core and many of its Packages. There are many inconsistencies across them when it comes to configuring these `app` and `version` labels on pods.
Different Scenarios
The package chart defines one or both sets of these labels in the Deployment pod templates.
In this case, Kiali is happy because it can gather telemetry data using the `istio_labels` it is looking for.
Defined in the BigBang Package Chart here and here
The package chart does NOT define one or both of the desired `istio_labels`.
In this case, we see a `Missing App` or `Missing Version` error in our Kiali dashboard.
In some cases, we can configure additional pod labels to satisfy the need for these labels if the upstream chart doesn't do it by default. Take these ArgoCD redis-bb workloads for example.
We are able to add the following pod labels to our ArgoCD configuration to create the version label for the redis-bb pods and set it to the image tag used by those pods.
```yaml
redis-bb:
  master:
    podLabels:
      app.kubernetes.io/version: "{{ .Values.image.tag }}"
  replica:
    podLabels:
      app.kubernetes.io/version: "{{ .Values.image.tag }}"
```
This works because the BigBang Redis chart templates can properly interpret that label here.
Once this is configured, our ArgoCD redis-bb pods contain a valid version label. Although "valid" is a strong term: should it be the chart version? The image tag? Something else? I've seen variations.
In other cases, we cannot fix this as easily, or perhaps at all.
Take this case for example. Jaeger is also missing the version label on its pods. We do have the option to add additional pod labels, but the Jaeger template can't interpret a configured label in a way that allows us to set it to the image tag. It just reads it in using `toYaml` here.
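For reference, the difference between the redis-bb case and the Jaeger case comes down to whether the chart passes pod labels through Helm's `tpl` function. A minimal sketch of the two rendering styles (the `podLabels` value name mirrors the examples above; the surrounding chart is hypothetical):

```yaml
# A chart template that renders pod labels literally leaves the configured
# value "{{ .Values.image.tag }}" as that exact raw string in the manifest:
labels:
  {{- toYaml .Values.podLabels | nindent 2 }}

# A chart template that passes the same value through tpl renders the
# embedded template, so the label resolves to the actual image tag:
labels:
  {{- tpl (toYaml .Values.podLabels) . | nindent 2 }}
```

Charts that only do the first form can't be fixed from the values side; the template itself would need to change.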
Additional Inconsistencies In BigBang Packages
Note: Most, if not all, of these inconsistencies are inherited by BigBang from the upstream open source charts.
- Some Deployment templates have these labels, while not including them in the spec template.
- Some charts define only the labels `app` and `version` vs `app.kubernetes.io/name` and `app.kubernetes.io/version`.
  - This is valid in Istio-only land, but Kiali can only handle a single pair of labels representing this data.
What Should We Be Doing?
- Patch what we can?
- Update BigBang package charts to standardize these labels so they are properly configured to satisfy both Istio's and Kiali's needs?
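If standardization is the path, one option (a sketch, not an agreed convention; the helper name is made up) would be a shared named template that each package chart includes in its pod templates, emitting both label pairs from chart metadata:

```yaml
{{/* templates/_helpers.tpl -- hypothetical shared labels helper */}}
{{- define "bigbang.istioLabels" -}}
app: {{ .Chart.Name }}
version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
```

Pod templates would then pull it in with `{{- include "bigbang.istioLabels" . | nindent 8 }}`, and Kiali's `istio_labels` setting could stay pointed at either pair. Whether `.Chart.AppVersion` or the image tag is the right source for the version label is still the open question raised above.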