gitlab-exporter / redis Prometheus scrape failing with 503 connection refused
Problem:
The gitlab-exporter target is DOWN in Prometheus, returning a 503 Service Unavailable error during release testing.
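The target state can be confirmed from the Prometheus side via the targets API (a sketch; the Prometheus host/port and the job label "gitlab-exporter" are illustrative, and jq is assumed to be available):
curl -s 'http://<prometheus-host>:9090/api/v1/targets' | jq '.data.activeTargets[] | select(.labels.job == "gitlab-exporter") | {health, lastError}'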
Evidence:
The istio-proxy log for the pod shows a critical error:
"GET /metrics HTTP/2" 503 URX,UF ... delayed_connect_error:_Connection_refused
Analysis:
The Envoy response flags UF (upstream connection failure) and URX (upstream retry limit exceeded) mean the Istio sidecar inside the gitlab-exporter pod tried to forward the scrape request to the application on port 9168, but the connection was refused. This points to a problem with the gitlab-exporter application container itself (e.g., it has crashed, is not running, or is not listening on the expected port), rather than a network issue between Prometheus and the pod.
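This can be reproduced from inside the pod by bypassing the mesh and hitting the port directly from the sidecar container (a sketch, assuming the istio-proxy image ships curl; distroless proxy images do not):
kubectl -n gitlab exec <gitlab-exporter-pod-name> -c istio-proxy -- curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:9168/metrics
A "Connection refused" here matches the sidecar's view and rules out anything between Prometheus and the pod.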
Next Steps:
- Check Application Container (Most Likely Cause):
  - Describe Pod: Look for crashes and events:
    kubectl -n gitlab describe pod <gitlab-exporter-pod-name>
  - Check App Logs: Review logs from the application container, not the proxy (add --previous if the container has restarted):
    kubectl -n gitlab logs <gitlab-exporter-pod-name> -c gitlab-exporter
  - Verify Port: Ensure the process is listening on 9168 inside the container (if netstat is missing from the image, see the first sketch after this list):
    kubectl -n gitlab exec <pod> -c gitlab-exporter -- netstat -tlnp | grep 9168
- Check for Restrictive Policies (Less Likely):
  - If the application appears healthy, investigate Kubernetes NetworkPolicies and Istio AuthorizationPolicies or PeerAuthentication settings that might be blocking the connection (the second sketch after this list enumerates the relevant resources).
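If the gitlab-exporter image does not include netstat, an ephemeral debug container can inspect the listening sockets instead (a sketch, assuming the cluster supports ephemeral containers and that the nicolaka/netshoot image is acceptable in this environment):
kubectl -n gitlab debug <gitlab-exporter-pod-name> -it --image=nicolaka/netshoot --target=gitlab-exporter -- ss -tlnp
The --target flag shares the application container's process namespace, so ss sees its sockets directly.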
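To enumerate the policies in play (namespaces are illustrative; mesh-wide PeerAuthentication usually lives in istio-system):
kubectl -n gitlab get networkpolicies
kubectl -n gitlab get authorizationpolicies.security.istio.io
kubectl -n gitlab get peerauthentications.security.istio.io
kubectl -n istio-system get peerauthentications.security.istio.io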