Bug 1995380 - failed ClusterDeployment validation errors do not surface through the ClusterPool UI
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Advanced Cluster Management for Kubernetes
Classification: Red Hat
Component: Cluster Lifecycle
Version: rhacm-2.4.z
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: rhacm-2.4.4
Assignee: Jian Qiu
QA Contact: Hui Chen
Docs Contact: Christopher Dawson
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-18 23:49 UTC by Dale Bewley
Modified: 2022-05-03 16:44 UTC (History)
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-03 16:44:03 UTC
Target Upstream Version:
Embargoed:
dhuynh: rhacm-2.4.z+
juhsu: rhacm-2.5+




Links
System ID Private Priority Status Summary Last Updated
Github open-cluster-management backlog issues 15413 0 None None None 2021-08-19 13:12:00 UTC
Red Hat Product Errata RHSA-2022:1681 0 None None None 2022-05-03 16:44:35 UTC

Description Dale Bewley 2021-08-18 23:49:10 UTC
Description of the problem:

When provisioning a cluster via a ClusterPool fails, the UI presents a spinner labeled "Creating" indefinitely. Clicking the link to the logs has no effect because the hive provision pod does not exist.

Release version: 2.3.1

Operator snapshot version:

OCP version: 4.8.4

Browser Info: Chrome

Steps to reproduce:
1. Create a ClusterPool with a misspelled (invalid) Azure region
2. Create an install-config.yaml for the same pool with the Azure region spelled correctly


Actual results:

The UI gives the impression that cluster creation is proceeding, when in fact it never started.

Expected results:

The error condition found on the ClusterDeployment should be surfaced in the ClusterPool UI, and the cluster creation should be reported as failed.

Additional info:
https://coreos.slack.com/archives/CTZTHFQRH/p1629327600047100


Inputs:
---
apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: az-edge-bad
  namespace: edge-pool
  labels:
    cluster.open-cluster-management.io/clusterset: edge
    cloud: Azure
    region: badvalue
    vendor: OpenShift
spec:
  baseDomain: az.domain.com
  imageSetRef:
    name: img4.8.4-x86-64-appsub
  skipMachinePools: true
  installConfigSecretTemplateRef:
    name: az-install-config
  platform:
    azure:
      baseDomainResourceGroupName: ocp
      credentialsSecretRef:
        name: azure-creds
      region: badvalue
  pullSecretRef:
    name: pull-secret
  size: 2

---
apiVersion: v1
baseDomain: az.domain.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 1
metadata:
  creationTimestamp: null
  name: edge
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: ocp
    cloudName: AzurePublicCloud
    outboundType: Loadbalancer
    region: westus2
publish: External
pullSecret: "" 


Resulting ClusterDeployment:

status:
  cliImage: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c7210ebe8b9c7b0799b4188be59bb712e50e1e26a28e0dfa4c7b6f4e9745a1c7
  conditions:
  - lastProbeTime: "2021-08-18T23:25:52Z"
    lastTransitionTime: "2021-08-18T23:25:52Z"
    message: install config region does not match cluster deployment region
    reason: InstallConfigValidationFailed
    status: "False"
    type: RequirementsMet
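This `RequirementsMet: "False"` condition is exactly the information the ClusterPool UI could surface instead of the indefinite spinner. A minimal sketch (function name hypothetical) of extracting the failing condition from a ClusterDeployment status of the shape shown above:

```python
# Sketch: given a ClusterDeployment status dict, return a human-readable error
# when the RequirementsMet condition is False, else None.
def failing_condition(status):
    for cond in status.get("conditions", []):
        if cond.get("type") == "RequirementsMet" and cond.get("status") == "False":
            return "{}: {}".format(cond["reason"], cond["message"])
    return None

# Status reduced from the YAML above.
status = {
    "conditions": [
        {
            "type": "RequirementsMet",
            "status": "False",
            "reason": "InstallConfigValidationFailed",
            "message": "install config region does not match cluster deployment region",
        }
    ]
}

print(failing_condition(status))
# → InstallConfigValidationFailed: install config region does not match cluster deployment region
```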

Comment 1 bot-tracker-sync 2022-04-13 21:06:31 UTC
G2Bsync 1090269971 comment 
 chenz4027 Wed, 06 Apr 2022 13:25:51 UTC 
 G2Bsync @dtthuynh Yep, this is merged in 2.5 and awaiting for 2.4.4 to be open for commits (likely mid April). You can move this out of 2.4.3

Comment 8 errata-xmlrpc 2022-05-03 16:44:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Advanced Cluster Management 2.4.4 security updates and bug fixes), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1681

