Bug 2128677
| Summary: | Prometheus pods in the openshift-storage namespace in a production cluster breaking | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Yashvardhan Kukreja <ykukreja> |
| Component: | odf-managed-service | Assignee: | Leela Venkaiah Gangavarapu <lgangava> |
| Status: | CLOSED WONTFIX | QA Contact: | Neha Berry <nberry> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.10 | CC: | aeyal, lgangava, ocs-bugs, odf-bz-bot |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-10-03 10:36:09 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Yashvardhan Kukreja
2022-09-21 11:45:19 UTC
This issue was investigated and the root cause was not identified. To the best of our investigation, it appears to have been a flake, for the following reasons:

- We got shell access inside the `prometheus` container of the `prometheus-managed-ocs-prometheus-0` pod and simulated the API calls that the `prometheus` container was making against the API server. Those calls reached the API server and returned responses, contrary to what the Prometheus container's errors suggested.
- We could not even reach Prometheus at `localhost:9090` from inside the container itself, indicating that the Prometheus process was not actually serving requests in the pod.

Ultimately, we restarted the pod with `oc rollout restart statefulset/prometheus-managed-ocs-prometheus -n openshift-storage` and everything worked fine afterwards.

@ykukreja, since we weren't able to find the root cause in the bridge, there are no reproduction steps, and a restart of the pod fixed it, can I close this?

Sure; we can reopen this ticket if the issue starts occurring regularly again.
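For reference, a minimal sketch of the checks described above. The pod, container, and StatefulSet names are taken from this report; the service-account token path, the `kubernetes.default.svc` address, and the `/-/healthy` endpoint are standard Kubernetes/Prometheus conventions and are assumptions, not details from the original comment:

```shell
# Open a shell in the prometheus container of the affected pod
# (pod/container names as given in this report).
oc exec -it prometheus-managed-ocs-prometheus-0 \
  -c prometheus -n openshift-storage -- sh

# From inside the container, simulate a call to the API server using the
# pod's service-account token (standard in-cluster paths; assumed here).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api

# Check whether Prometheus itself is serving locally; /-/healthy is the
# standard Prometheus health endpoint.
curl -s http://localhost:9090/-/healthy

# If the local check fails while the API server is reachable, restart the
# StatefulSet, as was done in this case.
oc rollout restart statefulset/prometheus-managed-ocs-prometheus -n openshift-storage
```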