Description of problem (please be as detailed as possible and provide log snippets):

Version of all relevant components (if applicable):
ACM 2.10 GA'ed
ODF 4.15 GA'ed
ceph version 17.2.6-196.el9cp (cbbf2cfb549196ca18c0c9caff9124d83ed681a4) quincy (stable)
OCP 4.15.0-0.nightly-2024-03-24-023440
VolSync 0.9.0
Submariner 0.17 (GA'ed alongside ACM 2.10)

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
*****Active hub co-situated with primary managed cluster*****
1. Deploy 6 DR-protected RBD and 6 CephFS workloads on C1 over an RDR setup, of both subscription and appset types (1 each). Fail them over and relocate them (with all clusters up and running) such that they finally run on C1 and each maintains a unique state, i.e. Deployed, FailedOver, or Relocated (check the drpc output below). For example, if busybox-1 is failed over to C2, it is failed over back to C1, and so on. We also have 4 workloads on C2 (2 RBD and 2 CephFS, 1 each of subscription and appset types), and they remain as is in the Deployed state.
2. After the 2nd operation, when the workloads are finally running on C1, let IOs continue for some time and configure hub recovery by bringing in another OCP cluster to act as the passive hub.
3. After successful backup creation on both sides, bring C1 and the active hub down (i.e. perform a site failure for the co-situated hub recovery scenario).
4. Restore backups and move to the passive hub.
5. Ensure C2 is successfully imported on the passive hub.
6. Check that the drpolicy is validated and that drpc details for all the workloads are available.
7. Refresh the OCP console of the passive hub so that the Data policies page appears under Data Services.
8. On Data policies --> Disaster recovery, check that the correct app count is shown attached to the drpolicy on the setup. If everything looks good, click the hyperlinks under "Connected applications" and verify all the apps running on C1 and C2, then move to the Data policies --> Overview page.
9. Select cluster C1 from the cluster dropdown and check whether all the applications running on C1 are listed under the application dropdown. All appset based apps are listed, but the subscription apps are missing. On C2, however, everything looks good.

Actual results:
[RDR] [UI] Subscription based apps go missing on the passive hub from the application dropdown of the Data policies page after site failure. Refer to the attached screencast.

Expected results:
Subscription based apps running on the C1 cluster (which goes down during the site failure and remains down) should also be listed under the application dropdown on the Data policies page.

Additional info:
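The cluster-side checks in step 6 can be sketched with `oc` commands run against the passive hub. This is a hedged illustration only: output shape and namespaces depend on the setup, and the `grep` patterns below are assumptions about the condition/state strings, not guaranteed field values.

```shell
# Sketch: verify DRPolicy validation after restoring backups on the passive hub.
# The Validated condition on the DRPolicy status should report True.
oc get drpolicy -o yaml | grep -A2 'type: Validated'

# Sketch: list the DRPC for every protected workload across all namespaces.
# Each of the 16 workloads should appear with its last action state
# (Deployed / FailedOver / Relocated), matching the pre-failure drpc output.
oc get drpc -A -o wide
```

If any DRPC is missing or its state does not match the expected Deployed/FailedOver/Relocated mix, the restore on the passive hub is incomplete and the UI comparison in steps 7-9 is not meaningful.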
RDT not required for 4.16.0 BZ (will add it for the 4.15 z-stream BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2276052)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:4591