Bug 1872755
| Summary: | [RFE][External mode] Re-sync OCS and RHCS for any changes on the RHCS cluster that will affect the OCS cluster | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Rachael <rgeorge> |
| Component: | ocs-operator | Assignee: | Mudit Agarwal <muagarwa> |
| Status: | CLOSED WONTFIX | QA Contact: | Elad <ebenahar> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 4.5 | CC: | muagarwa, nberry, ocs-bugs, odf-bz-bot, owasserm, sostapov |
| Target Milestone: | --- | Keywords: | AutomationBackLog, FutureFeature |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2023-08-01 13:24:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Description
Rachael
2020-08-26 14:50:03 UTC
Comment 3 (Travis Nielsen)

OCS does not control the external cluster. If the admin of the external cluster performs destructive actions, such as deleting a pool, there is nothing OCS can do to recover the PVCs that use that pool. As you noticed, new volumes can at least be created.

Moving the RFE out to 4.7 for discussion. But this is a very broad request; we will really need to evaluate individual scenarios to see whether they can even be supported.

In some scenarios, the admin could delete the OCS initialization CR and the issue could be resolved by re-creating the storage class. But in other cases there may be nothing we can do. We need specific scenarios.

In some scenarios, the Rook CRs can be updated. In other cases, it may be OCS CRs that need to be updated. Either way, in OCS this would be driven by the OCS operator.

Moving to the OCS component in case Jose sees a more general solution, but I would recommend that this general BZ be closed and more specific scenarios be tracked individually instead.

Comment

Maybe I'm missing the point of this RFE, but if there are changes in the cluster resources that are not reflected on the OCS side, isn't that the most important thing in external mode?

Comment

(In reply to Travis Nielsen from comment #3)
> OCS does not control the external cluster. If the admin of the external
> cluster does destructive actions like deleting a pool, there is nothing OCS
> can do to recover the PVCs using that pool. As you noticed, new volumes can
> at least be created.

New volumes can be created only if the re-created pool has the same name as the old one; otherwise, we cannot create new PVCs either. So we should have a way to reconfigure/re-initialize the StorageClass with the new pool name (in case the pool name differs from the original) to enable new PVC creation.

As for old volumes, we agree that we cannot expect to use them if the underlying pool is deleted. But at least for the noobaa-db PV, there should be a way to recover NooBaa without an OCS re-install. The same issue will be seen in internal mode too, if the noobaa-db PV has problems. We should have a way to recover NooBaa: even if old data is lost, new things should work, and that can only happen if we have a way to recover NooBaa when its DB is lost.

> Moving the RFE out to 4.7 for discussion. But this is a very broad request.
> We will really need to evaluate individual scenarios to see if they can even
> be supported.
>
> In some scenarios, the admin could delete the OCS initialization CR and the
> issue could be resolved by re-creating the storage class. But in other cases
> there may be nothing we can do. We need specific scenarios.
>
> In some scenarios, the Rook CRs can be updated. In other cases, they may be
> OCS CRs to be updated. Either way, in OCS this would be driven by the OCS
> operator.
>
> Moving to the OCS component in case Jose sees a more general solution, but I
> would recommend this general BZ be closed and instead track more specific
> scenarios individually.

Comment

Don't think this will ever be prioritized.
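The StorageClass re-initialization discussed in the thread could be sketched roughly as follows. This is only an illustration, not a procedure from the bug: the StorageClass and pool names are hypothetical, and the exact object names depend on the deployed OCS version. It relies on the fact that StorageClass `parameters` are immutable in Kubernetes, so pointing at a re-created pool under a new name means deleting and re-creating the object:

```shell
# Hypothetical sketch: re-point an external-mode RBD StorageClass at a
# pool that was re-created under a new name, so that *new* PVCs work again.
# (PVs bound to the deleted pool are not recoverable, per the discussion.)

# 1. Save the existing StorageClass definition (name is hypothetical).
kubectl get storageclass ocs-external-storagecluster-ceph-rbd -o yaml > sc.yaml

# 2. Edit sc.yaml so that .parameters.pool names the re-created pool,
#    e.g. change "pool: old-rbd-pool" to "pool: new-rbd-pool".

# 3. StorageClass parameters are immutable, so delete and re-create it.
kubectl delete storageclass ocs-external-storagecluster-ceph-rbd
kubectl apply -f sc.yaml
```

Note that the ocs-operator may reconcile the StorageClass back from its own CRs, which is why the thread mentions deleting the OCS initialization CR so the operator regenerates its resources instead of fighting a manual edit.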