Bug 2044447
| Summary: | ODF 4.9 deployment fails when deployed using the ODF managed service deployer (ocs-osd-deployer) | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Ohad <omitrani> |
| Component: | odf-operator | Assignee: | Dhruv Bindra <dbindra> |
| Status: | CLOSED ERRATA | QA Contact: | suchita <sgatfane> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.9 | CC: | dbindra, jarrpa, muagarwa, nberry, ocs-bugs, odf-bz-bot, rperiyas |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.10.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.10.0-141 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-04-13 18:51:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

**Description** (Ohad, 2022-01-24 16:19:35 UTC)
A possible resolution: replace `SetControllerReference` with `SetOwnerReference` here: https://github.com/red-hat-storage/odf-operator/blob/release-4.9/controllers/vendors.go#L77. This should allow the StorageCluster to be owned by the StorageSystem without marking the StorageSystem as the controller of the StorageCluster.

I agree that this should be fixed, so giving devel_ack+. I'll leave it up to others to determine which versions this will need to be backported to.

Able to deploy a cluster using the managed service add-ons ocs-provider-qe and ocs-consumer-qe; this issue is resolved. Verified onboarding on OCS 4.10.0-197, OCP 4.9.23:

```
$ oc get csv
NAME                                      DISPLAY                       VERSION           REPLACES                                  PHASE
mcg-operator.v4.10.0                      NooBaa Operator               4.10.0                                                      Succeeded
ocs-operator.v4.10.0                      OpenShift Container Storage   4.10.0                                                      Succeeded
ocs-osd-deployer.v2.0.0                   OCS OSD Deployer              2.0.0                                                       Succeeded
odf-csi-addons-operator.v4.10.0           CSI Addons                    4.10.0                                                      Succeeded
odf-operator.v4.10.0                      OpenShift Data Foundation     4.10.0                                                      Succeeded
ose-prometheus-operator.4.8.0             Prometheus Operator           4.8.0                                                       Succeeded
route-monitor-operator.v0.1.406-54ff884   Route Monitor Operator        0.1.406-54ff884   route-monitor-operator.v0.1.404-e29b74b   Succeeded
```

Provider:

```
======= storagecluster ==========
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   24m   Ready              2022-03-22T07:21:56Z

======= cephcluster ==========
NAME                             DATADIRHOSTPATH   MONCOUNT   AGE   PHASE   MESSAGE                        HEALTH      EXTERNAL
ocs-storagecluster-cephcluster   /var/lib/rook     3          23m   Ready   Cluster created successfully   HEALTH_OK

======= cluster health status =====
HEALTH_OK
```

Consumer:

```
$ oc get storagecluster
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   24h   Ready   true       2022-03-22T08:46:20Z

======= cluster health status =====
HEALTH_OK

====== cephcluster ==========
NAME                             DATADIRHOSTPATH   MONCOUNT   AGE   PHASE       MESSAGE                          HEALTH      EXTERNAL
ocs-storagecluster-cephcluster                                36m   Connected   Cluster connected successfully   HEALTH_OK   true
```

Both the consumer and the provider onboarded successfully, hence moving this BZ to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372
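For context on the proposed fix: in controller-runtime, `controllerutil.SetControllerReference` marks the owner as the object's single managing controller and fails if another controller reference already exists, while `controllerutil.SetOwnerReference` only records ownership and can coexist with a controller reference. The sketch below models that distinction with simplified structs (not the real `metav1.OwnerReference` or `controllerutil` types), and the specific conflict between ocs-osd-deployer and the StorageSystem is an illustrative assumption, not taken from the bug report:

```go
package main

import "fmt"

// OwnerRef models the relevant fields of a Kubernetes OwnerReference
// (simplified sketch, not the real metav1.OwnerReference type).
type OwnerRef struct {
	Name       string
	Controller bool // true => this owner is the managing controller
}

// Object models any owned Kubernetes resource, e.g. a StorageCluster.
type Object struct {
	Name      string
	OwnerRefs []OwnerRef
}

// setOwnerReference mimics controllerutil.SetOwnerReference: it records
// ownership without claiming controller status, so it never conflicts.
func setOwnerReference(owner string, obj *Object) {
	obj.OwnerRefs = append(obj.OwnerRefs, OwnerRef{Name: owner, Controller: false})
}

// setControllerReference mimics controllerutil.SetControllerReference: it
// marks the owner as the controller and fails if one already exists.
func setControllerReference(owner string, obj *Object) error {
	for _, ref := range obj.OwnerRefs {
		if ref.Controller {
			return fmt.Errorf("%s already has a controller: %s", obj.Name, ref.Name)
		}
	}
	obj.OwnerRefs = append(obj.OwnerRefs, OwnerRef{Name: owner, Controller: true})
	return nil
}

func main() {
	sc := &Object{Name: "ocs-storagecluster"}

	// Assume another operator already claimed the controller reference.
	_ = setControllerReference("ocs-osd-deployer", sc)

	// A second SetControllerReference fails: only one controller is allowed.
	if err := setControllerReference("odf-storagesystem", sc); err != nil {
		fmt.Println("error:", err)
	}

	// The suggested fix: a plain owner reference coexists with the
	// existing controller reference.
	setOwnerReference("odf-storagesystem", sc)
	fmt.Println("owner refs:", len(sc.OwnerRefs)) // prints "owner refs: 2"
}
```

This is why swapping the call in `controllers/vendors.go` avoids the deployment failure: the StorageSystem still appears in the StorageCluster's `ownerReferences` (so garbage collection and UI linkage keep working), but it no longer competes for the single controller slot.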