Bug 1973258 - The status of the successfully deployed SNO spoke isn't updated in ACM UI - shows "creating".
Summary: The status of the successfully deployed SNO spoke isn't updated in ACM UI - shows "creating".
Keywords:
Status: CLOSED DUPLICATE of bug 1972687
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: assisted-installer
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Michael Filanov
QA Contact: Yuri Obshansky
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-06-17 13:57 UTC by Alexander Chuzhoy
Modified: 2023-09-15 01:10 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-21 12:13:56 UTC
Target Upstream Version:
Embargoed:



Description Alexander Chuzhoy 2021-06-17 13:57:35 UTC
Version:
HUB: 4.8.0-0.nightly-2021-06-16-020345
SPOKE: 4.8.0-fc.9
ACM: advanced-cluster-management.v2.3.0

Steps to reproduce:

Follow the steps to deploy a spoke SNO cluster with ACM and the assisted-installer (AI) operator.

Result:
The spoke cluster is deployed successfully, yet the ACM UI still shows the cluster in "creating" status.

Expected result: The status should switch from "creating" to "Ready".
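
For reference, the availability that the ACM console reflects can also be read from the hub CLI. A minimal check, assuming the spoke is registered as ManagedCluster "elvis" (as in the comments below):

oc get managedcluster elvis -o jsonpath='{.status.conditions[?(@.type=="ManagedClusterConditionAvailable")].status}'

A healthy spoke should report "True" here; in this case it stays "Unknown" (see Comment 1).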

Comment 1 Alexander Chuzhoy 2021-06-17 13:58:59 UTC
oc get managedcluster elvis  -o yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  creationTimestamp: "2021-06-17T04:50:19Z"
  finalizers:
  - agent.open-cluster-management.io/klusterletaddonconfig-cleanup
  - managedclusterinfo.finalizers.open-cluster-management.io
  - open-cluster-management.io/managedclusterrole
  - cluster.open-cluster-management.io/api-resource-cleanup
  generation: 1
  name: elvis
  resourceVersion: "699671"
  uid: bef34974-b1c6-4993-bd3f-4598a67ce181
spec:
  hubAcceptsClient: true
  leaseDurationSeconds: 60
status:
  capacity:
    core_worker: "0"
    socket_worker: "0"
  conditions:
  - lastTransitionTime: "2021-06-17T04:50:20Z"
    message: Accepted by hub cluster admin
    reason: HubClusterAdminAccepted
    status: "True"
    type: HubAcceptedManagedCluster
  - lastTransitionTime: "2021-06-17T05:00:20Z"
    message: Registration agent stopped updating its lease.
    reason: ManagedClusterLeaseUpdateStopped
    status: Unknown
    type: ManagedClusterConditionAvailable
  version: {}
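
The "Registration agent stopped updating its lease" condition points at the klusterlet registration agent on the spoke itself. A minimal sketch of that check, assuming the default ACM agent namespace and Klusterlet name on the spoke (<spoke-kubeconfig> is a placeholder for the spoke's kubeconfig):

oc --kubeconfig=<spoke-kubeconfig> get pods -n open-cluster-management-agent
oc --kubeconfig=<spoke-kubeconfig> get klusterlet klusterlet -o yaml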





 oc get KlusterletAddonConfig elvis  -o yaml
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  creationTimestamp: "2021-06-17T04:50:19Z"
  finalizers:
  - agent.open-cluster-management.io/klusterletaddonconfig-cleanup
  generation: 2
  name: elvis
  namespace: elvis
  resourceVersion: "687405"
  uid: 88a7a216-bd96-4b68-a190-ee742d48dee8
spec:
  applicationManager:
    argocdCluster: false
    enabled: true
  certPolicyController:
    enabled: true
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  clusterName: elvis
  clusterNamespace: elvis
  iamPolicyController:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: false
  version: ""



oc get  AgentClusterInstall -n elvis -o=custom-columns='STATUS:status.conditions[-3].message'
STATUS
The installation has completed: Cluster is installed
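
The same message can be selected by condition type instead of a positional index, which is less fragile if the condition list changes. A sketch, assuming the relevant AgentClusterInstall condition type is "Completed":

oc get agentclusterinstall -n elvis -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Completed")].message}{"\n"}{end}'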

Comment 3 Alexander Chuzhoy 2021-06-17 22:02:15 UTC
I checked things with an older hub cluster:
Version:
hub cluster running 4.8.0-fc.7
spoke running 4.8.0-fc.9
quay.io/acm-d/acm-custom-registry:2.3.0-DOWNSTREAM-2021-06-17-01-26-58

I repeated the same steps and the spoke cluster became Ready automatically, so it looks like we have a regression.

Comment 4 Michael Filanov 2021-06-20 08:06:06 UTC
Shouldn't this be assigned to ACM? I don't really know what they are looking at in the cluster deployment.
Can you please attach the cluster deployment output?

Comment 5 Osher De Paz 2021-06-21 12:13:56 UTC
This is a duplicate of another bug in openshift-apiserver that has already been handled.
TL;DR: openshift-apiserver proxies requests to the kube-apiserver, and there are impersonation headers that were not escaped properly.
The Hive installation uses those headers, so the installation of the Hive operator most probably started but did not succeed.
Make sure to also check the HiveConfig object for a successful installation (we forgot to do that in our automation as well).
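
A minimal sketch of that check, assuming the default cluster-scoped HiveConfig named "hive":

oc get hiveconfig hive -o yaml

and inspect the status conditions to confirm the Hive deployment is reported as ready.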

*** This bug has been marked as a duplicate of bug 1972687 ***

Comment 6 Red Hat Bugzilla 2023-09-15 01:10:03 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

