Bug 2276664 - [4.15 clone][RDR] Unexpected webhook MCO error when creating additional DRPolicy from the ACM console
Summary: [4.15 clone][RDR] Unexpected webhook MCO error when creating additional DRPolicy from the ACM console
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.15.3
Assignee: Vineet
QA Contact: Annette Clewett
URL:
Whiteboard:
Depends On: 2273533
Blocks:
 
Reported: 2024-04-23 15:20 UTC by Vineet
Modified: 2024-10-23 04:25 UTC
CC List: 8 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 2273533
Environment:
Last Closed: 2024-06-11 16:41:43 UTC
Embargoed:
kramdoss: needinfo+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2024:3806 0 None None None 2024-06-11 16:41:44 UTC

Description Vineet 2024-04-23 15:20:21 UTC
+++ This bug was initially created as a clone of Bug #2273533 +++

Description of problem (please be as detailed as possible and provide log
snippets):
Created an RDR test environment with 5 OCP clusters. Cluster hyper2 is the hub with ACM, plus two sets of managed clusters: hyper3<->hyper4 and hyper5<->hyper6. All 4 managed clusters had ODF 4.15 installed.

After the clusters were imported into ACM and Submariner was configured between the peer clusters, MCO was installed and the first DRPolicy was created between hyper3<->hyper4. I validated that mirroring was enabled, object buckets were created, DR cluster operators were installed, and secrets were exchanged.

I then attempted to create a second DRPolicy for hyper5<->hyper6. This failed with a webhook error (graphic attached). The only resource created was the new DRPolicy on the hub (hyper2) cluster.
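
For reference, the console wizard ultimately creates a DRPolicy custom resource on the hub. Below is a minimal sketch of creating the second policy directly against the hub API with the Python kubernetes client, as a way to separate the webhook failure from the UI. The policy name and schedulingInterval value are assumptions, not taken from this report; the field names follow the ramendr.openshift.io/v1alpha1 DRPolicy CRD.

# Sketch: create the second DRPolicy directly on the hub, bypassing the
# ACM console wizard. Name and schedulingInterval are assumed values.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the hub (hyper2)
api = client.CustomObjectsApi()

drpolicy = {
    "apiVersion": "ramendr.openshift.io/v1alpha1",
    "kind": "DRPolicy",
    "metadata": {"name": "drpolicy-hyper5-hyper6"},  # assumed name
    "spec": {
        "drClusters": ["hyper5", "hyper6"],  # second peer cluster pair
        "schedulingInterval": "5m",          # assumed replication interval
    },
}

# DRPolicy is cluster-scoped, so no namespace argument is needed.
api.create_cluster_custom_object(
    group="ramendr.openshift.io",
    version="v1alpha1",
    plural="drpolicies",
    body=drpolicy,
)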

Version of all relevant components (if applicable):
OCP 4.15.3
ODF 4.15.0
ACM 2.10.0
Submariner 0.17

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
Yes, DR is not configured for the second set of peer clusters.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
4

Is this issue reproducible?
Only tested once.

Can this issue be reproduced from the UI?
Yes

Steps to Reproduce:
1. Create an RDR config with a hub + 4 managed clusters (2 peer cluster pairs)
2. Create a DRPolicy for the first peer cluster pair and validate it (a validation sketch follows this list)
3. Create a DRPolicy for the second peer cluster pair
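
For step 2, validation in the report was done manually (mirroring, object buckets, DR cluster operators, secrets). A minimal sketch of checking just the DRPolicy itself with the Python kubernetes client is below; the policy name is hypothetical, and the "Validated" condition type follows the ramendr.openshift.io DRPolicy CRD.

# Sketch: poll a DRPolicy on the hub until its "Validated" condition
# reports True. Timeout and interval are arbitrary illustration values.
import time
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

def wait_for_validated(name, timeout=600, interval=15):
    """Return True once the DRPolicy reports Validated=True, else False."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        obj = api.get_cluster_custom_object(
            "ramendr.openshift.io", "v1alpha1", "drpolicies", name)
        for cond in obj.get("status", {}).get("conditions", []):
            if cond.get("type") == "Validated" and cond.get("status") == "True":
                return True
        time.sleep(interval)
    return False

print(wait_for_validated("drpolicy-hyper5-hyper6"))  # hypothetical name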


Actual results:
There is a UI error in the Create DRPolicy wizard, and only the DRPolicy resource is created.

Expected results:
There is no UI error, and the DR configuration for the second peer cluster pair is created exactly as it was for the first.

--- Additional comment from RHEL Program Management on 2024-04-04 21:08:18 UTC ---

This bug previously had no release flag set; the release flag 'odf-4.16.0' has now been set to '?', proposing the fix for the ODF 4.16.0 release. Note that any of the 3 Acks (pm_ack, devel_ack, qa_ack) set while the release flag was missing have been reset, since Acks are to be set against a release flag.

--- Additional comment from RHEL Program Management on 2024-04-04 21:08:18 UTC ---

The 'Target Release' is not to be set manually at the Red Hat OpenShift Data Foundation product.

The 'Target Release' will be auto set appropriately, after the 3 Acks (pm,devel,qa) are set to "+" for a specific release flag and that release flag gets auto set to "+".

--- Additional comment from Annette Clewett on 2024-04-04 21:09:49 UTC ---



--- Additional comment from Vineet on 2024-04-17 08:35:30 UTC ---

@aclewett I am unable to replicate it on my cluster. It seems to be an intermittent network issue. Can you retry after giving the operator pod a restart?
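
For anyone retrying this, below is a minimal sketch of the suggested operator restart, done as a rollout restart of the operator Deployment with the Python kubernetes client. The deployment name and namespace are assumptions for illustration; check the actual values with `oc get deployments -A`.

# Sketch: rollout-restart the operator Deployment by patching the pod
# template annotation, the same effect as `oc rollout restart`.
from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/restartedAt":
                        datetime.now(timezone.utc).isoformat()
                }
            }
        }
    }
}
apps.patch_namespaced_deployment(
    name="odfmo-controller-manager",  # assumed MCO operator deployment
    namespace="openshift-operators",  # assumed namespace
    body=patch,
)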

--- Additional comment from Annette Clewett on 2024-04-17 21:42:20 UTC ---

@vbadrina I completely started over with ODF 4.15.1, reinstalled MCO, and updated ODF on the managed clusters. This time the first DRPolicy was between hyper5 and hyper6, the pair that had failed before with the UI issue. The DRPolicy went to Validated in ~5 minutes.

The 2nd DRPolicy was for hyper2 and hyper3, and it again failed immediately as the second DRPolicy with the UI error (attached to the BZ). It seems that this is not a network issue. Are you reproducing with ACM 2.10.1 and the odf-multicluster plugin via the Data Policies UI?

Comment 2 krishnaram Karthick 2024-05-02 11:41:13 UTC
Moving the bug to 4.15.4. We need to understand why this bug needs to be backported.

Comment 12 errata-xmlrpc 2024-06-11 16:41:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.15.3 Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:3806

Comment 13 Red Hat Bugzilla 2024-10-23 04:25:06 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 120 days.

