Bug 2316022
| Summary: | [RDR] Communication issues within the cluster after removal of network multiClusterService on Regional DR | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | khover |
| Component: | odf-dr | Assignee: | Shyamsundar <srangana> |
| odf-dr sub component: | multicluster-orchestrator | QA Contact: | krishnaram Karthick <kramdoss> |
| Status: | NEW --- | Docs Contact: | |
| Severity: | urgent | | |
| Priority: | urgent | CC: | cblum, etamir, kramdoss, mduasope, muagarwa, sapillai, tasano |
| Version: | 4.14 | Flags: | uchapaga: needinfo? (sapillai), uchapaga: needinfo? (kramdoss), mduasope: needinfo? (uchapaga) |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description of problem (please be as detailed as possible and provide log snippets):

All mons are stuck in the probing state, attempting to connect to the other mons:

```
2024-10-01T15:03:19.958979558Z debug 2024-10-01T15:03:19.957+0000 7f95864d8640 -1 mon.f@1(probing) e12 get_health_metrics reporting 1164 slow ops, oldest is log(1 entries from seq 1 at 2024-10-01T13:25:59.169996+0000)
2024-10-01T15:03:24.959314216Z debug 2024-10-01T15:03:24.958+0000 7f95864d8640 -1 mon.f@1(probing) e12 get_health_metrics reporting 1164 slow ops, oldest is log(1 entries from seq 1 at 2024-10-01T13:25:59.169996+0000)
```

The issue started after removing the Regional DR components and the following service export configuration from the StorageCluster CR:

```yaml
spec:
  network:
    multiClusterService:
      clusterID: <clustername>
      enabled: true
```

Version of all relevant components (if applicable):
ODF 4.14.10

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?
Yes, communication between all ODF pods is down.

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex):
5

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
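A minimal triage sketch (editor's note, not part of the original report): with `network.multiClusterService` enabled, the mon endpoints recorded by Rook are expected to use Submariner clusterset DNS names, so removing the configuration (and the associated ServiceExports) can leave the mons probing addresses that no longer resolve. The namespace (`openshift-storage`), toolbox deployment (`rook-ceph-tools`), and mon name (`a`) below are assumptions; substitute the values from the affected cluster.

```sh
# Check whether the mon endpoints Rook recorded still point at Submariner
# clusterset DNS names (e.g. *.svc.clusterset.local), which would no longer
# resolve after the ServiceExports were removed:
oc get configmap rook-ceph-mon-endpoints -n openshift-storage -o jsonpath='{.data.data}'

# List any ServiceExports remaining for the Ceph services:
oc get serviceexport -n openshift-storage

# If the toolbox is deployed, verify resolution of one mon's clusterset name
# (mon "a" is hypothetical; use a name from the configmap output above):
oc -n openshift-storage rsh deploy/rook-ceph-tools \
  getent hosts rook-ceph-mon-a.openshift-storage.svc.clusterset.local
```

If the configmap still lists clusterset names while the ServiceExports are gone, that would be consistent with the probing behavior in the logs above.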