Bug 2076890
| Summary: | The ocs-operator pod can't become ready and the "healthz check failed" error can be seen in pod logs after reinstallation | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Container Storage | Reporter: | yhe |
| Component: | ocs-operator | Assignee: | Nobody <nobody> |
| Status: | NEW --- | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 4.8 | CC: | etamir, kkagoshi, pibanezr, sostapov |
| Target Milestone: | --- | Flags: | yhe: needinfo? (jrivera) |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
IIRC, the problem here was related to reinstallation on the same disks. Although we don't support reinstalling on the same disks, the issue was resolved by improving the customer's scripts. I believe we can close this one.
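The comment above attributes the failure to leftover state on reused disks and mentions that the customer's scripts were improved. As a purely illustrative sketch (not a supported procedure, and not the customer's actual script), cleaning old partition tables and on-disk signatures before a reinstall typically looks like this; the device paths are placeholders:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: wipe leftover Ceph/OSD state from reused disks before
# reinstalling OCS. NOT a supported procedure; device names are assumptions.
# DRY_RUN=1 (the default here) only prints the commands it would run.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

wipe_disks() {
  for dev in "$@"; do
    run sgdisk --zap-all "$dev"   # clear GPT/MBR partition tables
    run wipefs --all "$dev"       # clear filesystem/LVM/Ceph signatures
  done
}

# /dev/sdb and /dev/sdc are placeholder device paths.
wipe_disks /dev/sdb /dev/sdc
```

Run with `DRY_RUN=0` only after verifying the device list; wiping the wrong disk is unrecoverable.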
Description of problem (please be detailed as possible and provide log snippets):

After deleting and reinstalling OCS, the ocs-operator pod can't become ready, and the "healthz check failed" error can be seen in the pod logs:

2022-04-18T09:19:13.428178174Z {"level":"info","ts":1650273553.42811,"logger":"controller-runtime.healthz","msg":"healthz check failed","statuses":[{}]}
2022-04-18T09:19:23.428500397Z {"level":"info","ts":1650273563.4284315,"logger":"controller-runtime.healthz","msg":"healthz check failed","statuses":[{}]}

Version of all relevant components (if applicable):
OCS 4.8.5

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes. If the ocs-operator pod can't become ready, the installation of the OCS operator is stuck in the Installing state and cannot finish.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
2

Can this issue be reproduced?
Yes (in the customer's environment)

Can this issue be reproduced from the UI?
No

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Delete OCS.
2. Reinstall OCS.
3. Check the status of the ocs-operator pod.

Actual results:
The ocs-operator pod can't become ready and the installation is stuck in the Installing state.

Expected results:
The ocs-operator pod becomes ready and the installation finishes.

Additional info:
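The log lines quoted above are a container-runtime timestamp followed by a structured JSON record from controller-runtime's logger. A small sketch (Python, purely illustrative; `raw` is one line copied from the report) of splitting off the prefix and reading the fields:

```python
import json
from datetime import datetime, timezone

# One line copied from the pod log above: CRI timestamp, then a JSON payload.
raw = ('2022-04-18T09:19:13.428178174Z {"level":"info","ts":1650273553.42811,'
       '"logger":"controller-runtime.healthz","msg":"healthz check failed",'
       '"statuses":[{}]}')

# Split off the container-runtime timestamp prefix, then parse the JSON body.
prefix, _, body = raw.partition(" ")
entry = json.loads(body)

print(entry["msg"])        # -> healthz check failed
# The "ts" field is a Unix timestamp; it matches the prefix timestamp.
print(datetime.fromtimestamp(entry["ts"], tz=timezone.utc).isoformat())
```

The empty `statuses` array (`[{}]`) is why the log gives no hint about which registered checker failed, which makes this symptom hard to diagnose from logs alone.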