Bug 1762698
| Summary: | After deleting ocsinit-cephfilesystem and rook-ceph-mds pods, in the dashboard, it shows: `rook-ceph is not available` | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Servesha <sdudhgao> |
| Component: | Console Storage Plugin | Assignee: | umanga <uchapaga> |
| Status: | CLOSED WORKSFORME | QA Contact: | Raz Tamir <ratamir> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.3.0 | CC: | aos-bugs, kaushal, nthomas, rhhi-next-mgmt-qe, uchapaga |
| Target Milestone: | --- | | |
| Target Release: | 4.3.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-11-06 12:10:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Servesha 2019-10-17 09:31:48 UTC
@ Servesha, can you provide the requested info?

@ Nishanth, here is the needed info:

> What was the Health Status before deleting? (Maybe it was already broken?)

- Before deleting `ocsinit-cephfilesystem`, all pods except the ceph-mds pods (which were pending) were up and running, so the ceph health status was `HEALTH_WARN`.

> Please check if your rook-ceph-mgr pod is running. Also, provide rook-operator logs.

- The ceph-mds pods were not running at that time; they were in the pending state. Unfortunately, I do not have the rook-operator logs from that instance, since the setup has since been deleted.

> Did deleting the said resources cause deletion of any other resources?

- The notable deleted resources were the two ceph-mds pods, removed after deleting `ocsinit-cephfilesystem`. After that, the dashboard showed `rook-ceph unavailable`. Everything else was fine.

I am unable to reproduce this. Deleting or recreating the cephfilesystem did not affect monitoring at all; ceph-mgr is actively talking to Prometheus.

Works for me, and there are no further instructions to replicate the issue. Closing this.
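For anyone trying to replicate the scenario described above, a minimal reproduction sketch could look like the following. The `openshift-storage` namespace, the presence of a `rook-ceph-tools` toolbox pod, and the `rook-ceph-mgr` service name are assumptions about the deployment and may differ in your cluster; this is not an exact transcript of the reporter's steps.

```sh
# Assumed namespace for the OCS/Rook deployment; adjust for your cluster.
NS=openshift-storage

# 1. Delete the CephFilesystem resource (this is what removes the rook-ceph-mds pods).
oc -n "$NS" delete cephfilesystem ocsinit-cephfilesystem

# 2. Confirm the MDS pods are gone (or pending, as reported above).
oc -n "$NS" get pods -l app=rook-ceph-mds

# 3. Check overall Ceph health from the toolbox pod
#    (assumes a rook-ceph-tools pod is deployed in the same namespace).
TOOLS_POD=$(oc -n "$NS" get pod -l app=rook-ceph-tools -o name | head -n 1)
oc -n "$NS" rsh "$TOOLS_POD" ceph status

# 4. Check that the ceph-mgr pod and its metrics service are still up, since the
#    dashboard's "rook-ceph is not available" message is driven by monitoring data.
oc -n "$NS" get pods -l app=rook-ceph-mgr
oc -n "$NS" get svc rook-ceph-mgr
```

If the mgr pod and its metrics service are healthy while the dashboard still reports `rook-ceph is not available`, that would point at the console/monitoring path rather than Ceph itself, which is consistent with the "ceph-mgr is actively talking to Prometheus" observation above.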