Bug 2183198 - [MDR] After upgrade (redhat-operators) on hub from 4.12.1 to 4.12.2, noticed 2 token-exchange-agent pods on managed clusters, one of them in CrashLoopBackOff (CLBO)
Summary: [MDR] After upgrade(redhat-operators) on hub from 4.12.1 to 4.12.2 noticed 2 ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.12
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.12.2
Assignee: Vineet
QA Contact: Shrivaibavi Raghaventhiran
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-30 14:43 UTC by Shrivaibavi Raghaventhiran
Modified: 2023-08-09 17:00 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-04-17 22:34:07 UTC
Embargoed:


Attachments


Links
* GitHub: red-hat-storage/odf-multicluster-orchestrator pull 156 (open) - Bug 2183198: [release-4.12] Disables leader election for addons - last updated 2023-04-03 11:39:11 UTC
* Red Hat Product Errata: RHSA-2023:1816 - last updated 2023-04-17 22:34:51 UTC

Description Shrivaibavi Raghaventhiran 2023-03-30 14:43:45 UTC
Description of problem:
-----------------------
After the hub upgrade from 4.12.1 to 4.12.2, noticed two token-exchange-agent pods on the managed clusters, one of them in CrashLoopBackOff (CLBO) on C1.

Note:
* The C1 managed cluster was also upgraded from 4.12.1 to 4.12.2, and the token-exchange-agent pod did not respin during that process.
* The two token-exchange-agent pods run different images (seen on both managed clusters, irrespective of the ODF version).

$ oc get pods -n openshift-storage | grep token
token-exchange-agent-8698c5669d-5qt5z              0/1     CrashLoopBackOff   34 (4m59s ago)   159m
token-exchange-agent-887b49977-cdp46               1/1     Running            0                5h27m

[sraghave@localhost ~]$ oc describe pod token-exchange-agent-887b49977-cdp46 -n openshift-storage | grep -i image
    Image:         registry.redhat.io/odf4/odf-multicluster-rhel8-operator@sha256:15212c0de68d394fd4efc73b97222dbe966d0198eed3ac04e6f7ec7dac0920c7
    Image ID:      registry.redhat.io/odf4/odf-multicluster-rhel8-operator@sha256:15212c0de68d394fd4efc73b97222dbe966d0198eed3ac04e6f7ec7dac0920c7
[sraghave@localhost ~]$ oc describe pod token-exchange-agent-8698c5669d-5qt5z -n openshift-storage | grep -i image
    Image:         quay.io/rhceph-dev/odf4-odf-multicluster-rhel8-operator@sha256:1ae1009c34f5b41e543cc3b4a6d3c8d9138b3316e288fec5b913ad94961e0f56
    Image ID:      quay.io/rhceph-dev/odf4-odf-multicluster-rhel8-operator@sha256:1ae1009c34f5b41e543cc3b4a6d3c8d9138b3316e288fec5b913ad94961e0f56
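
For convenience, the images of all token-exchange-agent pods can also be listed in one shot with a standard jsonpath query (a sketch; adjust the namespace if it differs in your environment):

$ oc get pods -n openshift-storage \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}' \
    | grep token-exchange-agent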

Version of all relevant components (if applicable):
---------------------------------------------------
OCP version on Managed cluster1 = 4.12.0-0.nightly-2023-03-28-180259
ODF version on Managed cluster1 = 4.12.2-2

OCP version on Managed cluster2 = 4.12.0-0.nightly-2023-03-15-050003
ODF version on Managed cluster2 = 4.12.1

OCP version on hub = 4.12.0-0.nightly-2023-03-15-050003
Other DR operators/MCO on hub = 4.12.2

ACM - 2.7.2 GA
RHCEPH-5.3-RHEL-8-20230220.ci.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
1/1

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
--------------------
1. Install the GA'ed ODF version 4.12.1 on the managed clusters, install MCO at the same version (4.12.1) on the hub cluster, and configure MDR.
2. Deploy applications on both C1 and C2, with the apps in different states (FailedOver, Relocated, Deployed).
3. Upgrade ODF from 4.12.1 to 4.12.2 on C1 by disabling the old catalog source and recreating a new catalog source pointing to 4.12.2 (see the sketch after this list).
4. Repeat the upgrade process from step 3 on the hub cluster.
5. Observe two token-exchange-agent pods on C1 and C2, one of which is in CrashLoopBackOff (CLBO) on C1.
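
For reference, the catalog source swap in step 3 can be done along these lines. This is only a sketch: the CatalogSource name (redhat-operators), the openshift-marketplace namespace, and the catalog image are assumptions/placeholders and must match the actual environment.

$ oc delete catalogsource redhat-operators -n openshift-marketplace
$ cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <catalog-image-for-4.12.2>   # placeholder, use the actual 4.12.2 catalog image
  displayName: Red Hat Operators (4.12.2)
  publisher: Red Hat
EOF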


Actual results:
----------------
Noticed two token-exchange-agent pods on each of C1 and C2, and one of them was in CrashLoopBackOff (CLBO) on C1.


Expected results:
-----------------
Only one token-exchange-agent pod each on C1 and C2; the old pod should get deleted.
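
As a quick check of the expected state on each managed cluster (a sketch; the deployment name token-exchange-agent is inferred from the pod names above and may differ):

$ oc get pods -n openshift-storage | grep token-exchange-agent
# expect exactly one pod, 1/1 Running

$ oc get deployment token-exchange-agent -n openshift-storage \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
# should report only the 4.12.2 image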


Additional info:

Comment 2 Shrivaibavi Raghaventhiran 2023-03-30 17:45:55 UTC
Logs are being copied here: http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-2183198/

Comment 3 Harish NV Rao 2023-04-03 06:16:12 UTC
Proposing this BZ for 4.12.2

Comment 9 Shrivaibavi Raghaventhiran 2023-04-10 10:32:13 UTC
Tested on versions:

ODF - 4.12.2-4
OCP - 4.12.0-0.nightly-2023-04-07-033506

Upgraded OCP and ODF on the hub and managed clusters from 4.12.1 to 4.12.2-4; the old token-exchange-agent pod was deleted and a new one was created during the hub upgrade.

Hence, moving the BZ to Verified.

Comment 13 errata-xmlrpc 2023-04-17 22:34:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.12.2 Bug Fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:1816

