Bug 1977358 - [4.8.0] KUBE-API: Support move agent to different cluster in the same namespace
Summary: [4.8.0] KUBE-API: Support move agent to different cluster in the same namespace
Alias: None
Product: Red Hat Advanced Cluster Management for Kubernetes
Classification: Red Hat
Component: Infrastructure Operator
Version: rhacm-2.3
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: rhacm-2.3.1
Assignee: Fred Rolland
QA Contact: bjacot
Docs Contact: Christopher Dawson
Whiteboard: AI-Team-Hive KNI-EDGE-4.8
Depends On: 1977354
Reported: 2021-06-29 14:03 UTC by Michael Filanov
Modified: 2021-10-19 22:07 UTC
CC: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1977354
Last Closed: 2021-10-13 21:08:28 UTC
Target Upstream Version:
Flags: rhacm-2.3.z+


Links:
- GitHub open-cluster-management/backlog issue 15675 — last updated 2021-08-27 15:58:50 UTC
- GitHub openshift/assisted-service pull 2156 (open): [ocm-2.3] Bug 1977358: Move agent to different cluster same NS — last updated 2021-07-05 06:45:12 UTC
- Red Hat Bugzilla 1977354 (medium, CLOSED): [master] KUBE-API: Support move agent to different cluster in the same namespace — last updated 2021-10-18 17:37:20 UTC

Comment 12 ximhan 2021-08-20 07:26:57 UTC
OpenShift engineering has decided NOT to ship 4.8.6 on 8/23 due to the following issue.
All of the fixes will now be included in 4.8.7 on 8/30.

Comment 16 Mike Ng 2021-09-03 13:52:31 UTC
G2Bsync 912085739 comment
CrystalChun Thu, 02 Sep 2021 21:54:56 UTC
Fix was merged as part of ACM 2.3 GA.
Picked up in #14081

Comment 17 Yona First 2021-09-30 10:37:20 UTC
Hi @frolland, some questions about this bug:

1. What is the use case for this bug? Before, during, or after installation?
2. To reproduce and verify the bug, should booting and rebooting the host be done manually, or via ZTP and a BareMetalHost (BMH)?


Comment 18 Fred Rolland 2021-09-30 12:38:30 UTC
1. Before installation.
2. It should work with both methods.
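
For reference, the end state of the move is that the Agent's spec.clusterDeploymentName references the second cluster's ClusterDeployment in the same namespace (this is the field queried in the verification below). A minimal sketch of the relevant Agent fields; the names cluster2 and mynamespace are hypothetical, and the agent name is taken from the host ID in this bug:

```yaml
# Sketch only: an Agent re-pointed at the second cluster's ClusterDeployment.
# cluster2 and mynamespace are hypothetical example names.
apiVersion: agent-install.openshift.io/v1beta1
kind: Agent
metadata:
  name: 67de17b9-ce08-45ce-98e2-a027c46f80b0
  namespace: mynamespace
spec:
  clusterDeploymentName:
    name: cluster2        # the target cluster
    namespace: mynamespace  # same namespace as the original cluster
```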

Comment 19 Yona First 2021-10-12 09:33:46 UTC
Verified with:
Hub Cluster OCP version: 4.8.0
RHACM version: 2.3.3-DOWNSTREAM-2021-09-30-13-28-42

Used both the manual and ZTP methods to reboot the host with the new InfraEnv. Output with the old InfraEnv:

oc get agent -o jsonpath='{.items[0].spec.clusterDeploymentName.name}{"\n"}'

With the new InfraEnv:

oc get agent -o jsonpath='{.items[0].spec.clusterDeploymentName.name}{"\n"}'

From the Agent events log, with InfraEnv1:

{"cluster_id":"06202dc3-1ce0-4661-a3f1-5ac682ccbe09","event_time":"2021-10-11T14:49:06.585Z","host_id":"67de17b9-ce08-45ce-98e2-a027c46f80b0","message":"Host 67de17b9-ce08-45ce-98e2-a027c46f80b0: registered to cluster","severity":"info"}

After changing to InfraEnv2:

{"cluster_id":"04af3f0a-6af0-4048-ba83-88ad08f6a220","event_time":"2021-10-11T15:00:10.060Z","host_id":"67de17b9-ce08-45ce-98e2-a027c46f80b0","message":"Host 67de17b9-ce08-45ce-98e2-a027c46f80b0: registered to cluster","severity":"info"}
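
The two events above can be checked programmatically: the host_id should stay constant while the cluster_id changes after the move. A small sketch, with the event JSON copied verbatim from this comment:

```python
import json

# Event emitted while the host was registered via InfraEnv1 (copied from above).
before = json.loads(
    '{"cluster_id":"06202dc3-1ce0-4661-a3f1-5ac682ccbe09",'
    '"event_time":"2021-10-11T14:49:06.585Z",'
    '"host_id":"67de17b9-ce08-45ce-98e2-a027c46f80b0",'
    '"message":"Host 67de17b9-ce08-45ce-98e2-a027c46f80b0: registered to cluster",'
    '"severity":"info"}'
)

# Event emitted after switching to InfraEnv2 (copied from above).
after = json.loads(
    '{"cluster_id":"04af3f0a-6af0-4048-ba83-88ad08f6a220",'
    '"event_time":"2021-10-11T15:00:10.060Z",'
    '"host_id":"67de17b9-ce08-45ce-98e2-a027c46f80b0",'
    '"message":"Host 67de17b9-ce08-45ce-98e2-a027c46f80b0: registered to cluster",'
    '"severity":"info"}'
)

# Same physical host...
assert before["host_id"] == after["host_id"]
# ...registered to a different cluster after the move.
assert before["cluster_id"] != after["cluster_id"]
print("host", before["host_id"], "moved to cluster", after["cluster_id"])
```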
