Bug 2153695
| Summary: | [KMS] rook-ceph-osd-prepare pod in CLBO state after deleting rook OSD deployment | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Rakshith <rar> |
| Component: | rook | Assignee: | Rakshith <rar> |
| Status: | CLOSED ERRATA | QA Contact: | Rachael <rgeorge> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.10 | CC: | kbg, kramdoss, nberry, ocs-bugs, odf-bz-bot, sheggodu |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.10.10 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.10.10-1 | Doc Type: | Bug Fix |
| Doc Text: | Previously, deleting an OSD deployment in an encrypted cluster backed by a CSI-provisioned PVC left behind a stale encrypted device, which caused the `rook-ceph-osd-prepare` job for that OSD to get stuck in the `CrashLoopBackOff` (CLBO) state and never come up. With this fix, the `rook-ceph-osd-prepare` job removes the stale encrypted device and opens it again, avoiding the CLBO state. As a result, the job runs as expected and the OSD comes up. | | |
| Story Points: | --- | | |
| Clone Of: | 2153675 | Environment: | |
| Last Closed: | 2023-02-20 15:40:44 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 2153675 | | |
| Bug Blocks: | | | |
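The recovery described in the Doc Text can be sketched as follows. This is a minimal illustration in Go (the language Rook is written in), not the actual Rook implementation: the function name, parameters, and device names are hypothetical, and only the `cryptsetup luksClose`/`luksOpen` sequence reflects the described fix of closing the stale dm-crypt mapping before reopening the encrypted device.

```go
package main

import (
	"fmt"
	"strings"
)

// ensureOpenCommands returns the cryptsetup invocations needed to (re)open a
// LUKS-encrypted OSD device. If a stale device-mapper entry from a deleted OSD
// deployment still exists, it is closed first, so that reopening the device
// does not fail and crash-loop the prepare job.
// Hypothetical helper for illustration only; not Rook's real API.
func ensureOpenCommands(mapperName, blockDevice string, staleMappingExists bool) [][]string {
	var cmds [][]string
	if staleMappingExists {
		// Remove the stale dm-crypt mapping left behind by the deleted OSD.
		cmds = append(cmds, []string{"cryptsetup", "luksClose", mapperName})
	}
	// Open the encrypted device again under the expected mapper name.
	cmds = append(cmds, []string{"cryptsetup", "luksOpen", blockDevice, mapperName})
	return cmds
}

func main() {
	// Simulate the failure scenario: a stale mapping exists, so the
	// prepare job must close it before reopening the device.
	for _, c := range ensureOpenCommands("ocs-deviceset-0-data-0-block", "/dev/xvdb", true) {
		fmt.Println(strings.Join(c, " "))
	}
}
```

Without the `luksClose` step, the `luksOpen` would fail because the mapper name is already in use, which is what kept the prepare job crash-looping.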
Comment 18
errata-xmlrpc
2023-02-20 15:40:44 UTC