Bug 2031049 - [vsphere upi] machine-config-operator pod cannot start due to a panic
Summary: [vsphere upi] machine-config-operator pod cannot start due to a panic
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.10.0
Assignee: Joseph Callen
QA Contact: Rio Liu
URL:
Whiteboard:
Duplicates: 2031056
Depends On:
Blocks:
 
Reported: 2021-12-10 11:58 UTC by Rio Liu
Modified: 2022-03-10 16:33 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-10 16:33:22 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift machine-config-operator pull 2865 0 None open Bug 2031049: Fix panic when PlatformStatus VSphere is nil 2021-12-10 17:57:46 UTC
Red Hat Bugzilla 1859230 1 None None None 2021-12-10 12:30:33 UTC
Red Hat Product Errata RHSA-2022:0056 0 None None None 2022-03-10 16:33:35 UTC

Description Rio Liu 2021-12-10 11:58:27 UTC
Description of problem:
On a vSphere UPI cluster, the machine-config-operator pod crash-loops with a nil pointer panic, preventing the installation from completing.

Version-Release number of MCO (Machine Config Operator) (if applicable): 
4.10.0-0.nightly-2021-12-10-033652

Platform (AWS, VSphere, Metal, etc.): VSphere UPI

Did you catch this issue by running a Jenkins job? If yes, please list:
1. Jenkins job: 
https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Flexy-install/58863/console

2. Profile:
https://gitlab.cee.redhat.com/aosqe/flexy-templates/-/blob/master/functionality-testing/aos-4_10/upi-on-vsphere/versioned-installer-vmc7-secureboot_enabled-static_network

Steps to Reproduce:
1. Install an OCP cluster with payload 4.10.0-0.nightly-2021-12-10-033652 on a vSphere UPI environment


Actual results:
Installation fails: the machine-config-operator pod cannot start because it panics on startup

Expected results:
The MCO runs correctly and the installation completes successfully

Additional info:

 oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          3h57m   Unable to apply 4.10.0-0.nightly-2021-12-10-033652: some cluster operators have not yet rolled out

oc get co/machine-config -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  annotations:
    exclude.release.openshift.io/internal-openshift-hosted: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
  creationTimestamp: "2021-12-10T07:47:47Z"
  generation: 1
  managedFields:
  - apiVersion: config.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:exclude.release.openshift.io/internal-openshift-hosted: {}
          f:include.release.openshift.io/self-managed-high-availability: {}
          f:include.release.openshift.io/single-node-developer: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"e22b9bb9-81bf-49b3-a364-7a598c8c89a7"}: {}
      f:spec: {}
    manager: cluster-version-operator
    operation: Update
    time: "2021-12-10T07:47:47Z"
  - apiVersion: config.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status: {}
    manager: cluster-version-operator
    operation: Update
    subresource: status
    time: "2021-12-10T07:47:48Z"
  - apiVersion: config.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions: {}
        f:extension: {}
        f:relatedObjects: {}
    manager: machine-config-operator
    operation: Update
    subresource: status
    time: "2021-12-10T07:49:07Z"
  name: machine-config
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: e22b9bb9-81bf-49b3-a364-7a598c8c89a7
  resourceVersion: "4226"
  uid: 956bb928-b31f-47e3-b671-24bdeddfdb85
spec: {}
status:
  conditions:
  - lastTransitionTime: "2021-12-10T07:49:06Z"
    message: Working towards 4.10.0-0.nightly-2021-12-10-033652
    status: "True"
    type: Progressing
  - lastTransitionTime: "2021-12-10T07:49:07Z"
    message: 'Unable to apply 4.10.0-0.nightly-2021-12-10-033652: openshift-config-managed/kube-cloud-config configmap is required on platform VSphere but not found: configmap "kube-cloud-config" not found'
    reason: RenderConfigFailed
    status: "True"
    type: Degraded
  - lastTransitionTime: "2021-12-10T07:49:07Z"
    message: Cluster not available for []
    status: "False"
    type: Available
  - lastTransitionTime: "2021-12-10T07:49:07Z"
    message: 'An error occurred when checking kubelet version skew: kube-apiserver does not yet have a version'
    reason: KubeletSkewUnchecked
    status: "True"
    type: Upgradeable
  extension: {}
  relatedObjects:
  - group: ""
    name: openshift-machine-config-operator
    resource: namespaces
  - group: machineconfiguration.openshift.io
    name: ""
    resource: machineconfigpools
  - group: machineconfiguration.openshift.io
    name: ""
    resource: controllerconfigs
  - group: machineconfiguration.openshift.io
    name: ""
    resource: kubeletconfigs
  - group: machineconfiguration.openshift.io
    name: ""
    resource: containerruntimeconfigs
  - group: machineconfiguration.openshift.io
    name: ""
    resource: machineconfigs
  - group: ""
    name: ""
    resource: nodes
  - group: ""
    name: openshift-kni-infra
    resource: namespaces
  - group: ""
    name: openshift-openstack-infra
    resource: namespaces
  - group: ""
    name: openshift-ovirt-infra
    resource: namespaces
  - group: ""
    name: openshift-vsphere-infra
    resource: namespaces

oc get pod -n openshift-machine-config-operator
NAME                                       READY   STATUS             RESTARTS        AGE
machine-config-operator-5b98d5894f-bh7jt   0/1     CrashLoopBackOff   38 (2m3s ago)   4h5m

oc get pod/machine-config-operator-5b98d5894f-bh7jt -n openshift-machine-config-operator -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.129.0.16"
          ],
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.129.0.16"
          ],
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: hostmount-anyuid
  creationTimestamp: "2021-12-10T07:48:26Z"
  generateName: machine-config-operator-5b98d5894f-
  labels:
    k8s-app: machine-config-operator
    pod-template-hash: 5b98d5894f
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:target.workload.openshift.io/management: {}
        f:generateName: {}
        f:labels:
          .: {}
          f:k8s-app: {}
          f:pod-template-hash: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"d62458f8-037e-4b31-9091-1e2e04f07193"}: {}
      f:spec:
        f:containers:
          k:{"name":"machine-config-operator"}:
            .: {}
            f:args: {}
            f:env:
              .: {}
              k:{"name":"RELEASE_VERSION"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources:
              .: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/etc/mco/images"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/etc/ssl/kubernetes/ca.crt"}:
                .: {}
                f:mountPath: {}
                f:name: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:nodeSelector: {}
        f:priorityClassName: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext:
          .: {}
          f:runAsNonRoot: {}
          f:runAsUser: {}
        f:terminationGracePeriodSeconds: {}
        f:tolerations: {}
        f:volumes:
          .: {}
          k:{"name":"images"}:
            .: {}
            f:configMap:
              .: {}
              f:defaultMode: {}
              f:name: {}
            f:name: {}
          k:{"name":"root-ca"}:
            .: {}
            f:hostPath:
              .: {}
              f:path: {}
              f:type: {}
            f:name: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-12-10T07:48:26Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          .: {}
          k:{"type":"PodScheduled"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
    manager: kube-scheduler
    operation: Update
    subresource: status
    time: "2021-12-10T07:48:26Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:k8s.v1.cni.cncf.io/network-status: {}
          f:k8s.v1.cni.cncf.io/networks-status: {}
    manager: multus
    operation: Update
    subresource: status
    time: "2021-12-10T07:49:06Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.129.0.16"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    subresource: status
    time: "2021-12-10T11:52:07Z"
  name: machine-config-operator-5b98d5894f-bh7jt
  namespace: openshift-machine-config-operator
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: machine-config-operator-5b98d5894f
    uid: d62458f8-037e-4b31-9091-1e2e04f07193
  resourceVersion: "123825"
  uid: a4f7eef1-0e3b-47d3-b1a2-1f92c706996c
spec:
  containers:
  - args:
    - start
    - --images-json=/etc/mco/images/images.json
    env:
    - name: RELEASE_VERSION
      value: 4.10.0-0.nightly-2021-12-10-033652
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fdca36de89abc881d24f5d84e4b47d5e5ada05f9dbb3cd6518cce692608dbccb
    imagePullPolicy: IfNotPresent
    name: machine-config-operator
    resources:
      requests:
        cpu: 20m
        memory: 50Mi
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /etc/ssl/kubernetes/ca.crt
      name: root-ca
    - mountPath: /etc/mco/images
      name: images
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-zqqkh
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: control-plane-1
  nodeSelector:
    node-role.kubernetes.io/master: ""
  preemptionPolicy: PreemptLowerPriority
  priority: 2000000000
  priorityClassName: system-cluster-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    runAsNonRoot: true
    runAsUser: 65534
    seLinuxOptions:
      level: s0:c18,c12
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 120
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 120
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - configMap:
      defaultMode: 420
      name: machine-config-operator-images
    name: images
  - hostPath:
      path: /etc/kubernetes/ca.crt
      type: ""
    name: root-ca
  - name: kube-api-access-zqqkh
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-12-10T07:48:58Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-12-10T11:52:07Z"
    message: 'containers with unready status: [machine-config-operator]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-12-10T11:52:07Z"
    message: 'containers with unready status: [machine-config-operator]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-12-10T07:48:58Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://ab69e76cd4e104c708458b64cdbde25c6fdb58c857948d869ae7f643abf63552
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fdca36de89abc881d24f5d84e4b47d5e5ada05f9dbb3cd6518cce692608dbccb
    imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fdca36de89abc881d24f5d84e4b47d5e5ada05f9dbb3cd6518cce692608dbccb
    lastState:
      terminated:
        containerID: cri-o://ab69e76cd4e104c708458b64cdbde25c6fdb58c857948d869ae7f643abf63552
        exitCode: 2
        finishedAt: "2021-12-10T11:52:06Z"
        message: "/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:304\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f43f5e03570)\n\t/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00059d9e0, {0x1af8fe0, 0xc00089cf00}, 0x1, 0xc00009ce40)\n\t/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6\nk8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000610ba0, 0x3b9aca00, 0x0, 0x80, 0xc00009fe50)\n\t/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89\nk8s.io/apimachinery/pkg/util/wait.Until(0xc00009ffd0, 0x888906, 0xc000616d40)\n\t/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25\ncreated by github.com/openshift/machine-config-operator/pkg/operator.(*Operator).Run\n\t/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:273 +0x6c9\n"
        reason: Error
        startedAt: "2021-12-10T11:50:10Z"
    name: machine-config-operator
    ready: false
    restartCount: 38
    started: false
    state:
      waiting:
        message: back-off 5m0s restarting failed container=machine-config-operator pod=machine-config-operator-5b98d5894f-bh7jt_openshift-machine-config-operator(a4f7eef1-0e3b-47d3-b1a2-1f92c706996c)
        reason: CrashLoopBackOff
  hostIP: 172.31.248.226
  phase: Running
  podIP: 10.129.0.16
  podIPs:
  - ip: 10.129.0.16
  qosClass: Burstable
  startTime: "2021-12-10T07:48:58Z"

oc logs -n openshift-machine-config-operator machine-config-operator-5b98d5894f-bh7jt
I1210 11:50:10.344967       1 start.go:43] Version: 4.10.0-0.nightly-2021-12-10-033652 (Raw: v4.10.0-202112100221.p0.g3cc5461.assembly.stream-dirty, Hash: 3cc546103bc9042d899c34ada12e429b86aa6547)
I1210 11:50:10.346750       1 leaderelection.go:248] attempting to acquire leader lease openshift-machine-config-operator/machine-config...
I1210 11:52:05.997658       1 leaderelection.go:258] successfully acquired lease openshift-machine-config-operator/machine-config
I1210 11:52:06.435710       1 operator.go:267] Starting MachineConfigOperator
E1210 11:52:06.462769       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 253 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x16a1f00, 0x28267f0})
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001bf5c0})
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x16a1f00, 0x28267f0})
	/usr/lib/golang/src/runtime/panic.go:1038 +0x215
github.com/openshift/machine-config-operator/pkg/operator.getIgnitionHost(0xc0004a95e8)
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/sync.go:356 +0x22a
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).syncRenderConfig(0xc0002d2240, 0x4b7477)
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/sync.go:320 +0xdbb
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).syncAll(0xc0002d2240, {0xc001253d00, 0x6, 0x90})
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/sync.go:113 +0x3b8
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).sync(0xc0002d2240, {0xc00025e120, 0x30})
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:359 +0x453
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).processNextWorkItem(0xc0002d2240)
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:315 +0xe5
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).worker(...)
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:304
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f43f5e03570)
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00059d9e0, {0x1af8fe0, 0xc00089cf00}, 0x1, 0xc00009ce40)
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000610ba0, 0x3b9aca00, 0x0, 0x80, 0xc00009fe50)
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0xc00009ffd0, 0x888906, 0xc000616d40)
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by github.com/openshift/machine-config-operator/pkg/operator.(*Operator).Run
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:273 +0x6c9
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x14d8e6a]

goroutine 253 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001bf5c0})
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x16a1f00, 0x28267f0})
	/usr/lib/golang/src/runtime/panic.go:1038 +0x215
github.com/openshift/machine-config-operator/pkg/operator.getIgnitionHost(0xc0004a95e8)
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/sync.go:356 +0x22a
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).syncRenderConfig(0xc0002d2240, 0x4b7477)
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/sync.go:320 +0xdbb
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).syncAll(0xc0002d2240, {0xc001253d00, 0x6, 0x90})
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/sync.go:113 +0x3b8
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).sync(0xc0002d2240, {0xc00025e120, 0x30})
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:359 +0x453
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).processNextWorkItem(0xc0002d2240)
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:315 +0xe5
github.com/openshift/machine-config-operator/pkg/operator.(*Operator).worker(...)
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:304
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f43f5e03570)
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00059d9e0, {0x1af8fe0, 0xc00089cf00}, 0x1, 0xc00009ce40)
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000610ba0, 0x3b9aca00, 0x0, 0x80, 0xc00009fe50)
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0xc00009ffd0, 0x888906, 0xc000616d40)
	/go/src/github.com/openshift/machine-config-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by github.com/openshift/machine-config-operator/pkg/operator.(*Operator).Run
	/go/src/github.com/openshift/machine-config-operator/pkg/operator/operator.go:273 +0x6c9
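
Both traces point at getIgnitionHost (pkg/operator/sync.go:356), called from syncRenderConfig, hitting a nil pointer dereference. A minimal, self-contained Go sketch of the failure pattern (illustrative stand-in types, not the MCO's actual code):

package main

import "fmt"

// Simplified stand-ins for the config.openshift.io/v1 status types.
type VSphereStatus struct{ APIServerInternalIP string }

type PlatformStatus struct {
	Type    string
	VSphere *VSphereStatus // stays nil when the installer never populates it
}

func main() {
	ps := &PlatformStatus{Type: "VSphere"} // VSphere pointer left nil, as on UPI
	// Panics with: runtime error: invalid memory address or nil pointer dereference
	fmt.Println(ps.VSphere.APIServerInternalIP)
}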


oc get infrastructure/cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  creationTimestamp: "2021-12-10T07:47:40Z"
  generation: 1
  managedFields:
  - apiVersion: config.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:cloudConfig:
          .: {}
          f:key: {}
          f:name: {}
        f:platformSpec:
          .: {}
          f:type: {}
    manager: cluster-bootstrap
    operation: Update
    time: "2021-12-10T07:47:40Z"
  - apiVersion: config.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:apiServerInternalURI: {}
        f:apiServerURL: {}
        f:controlPlaneTopology: {}
        f:etcdDiscoveryDomain: {}
        f:infrastructureName: {}
        f:infrastructureTopology: {}
        f:platform: {}
        f:platformStatus:
          .: {}
          f:type: {}
    manager: cluster-bootstrap
    operation: Update
    subresource: status
    time: "2021-12-10T07:47:40Z"
  name: cluster
  resourceVersion: "582"
  uid: af2247b5-d013-431d-80b9-bf9523506d5a
spec:
  cloudConfig:
    key: config
    name: cloud-provider-config
  platformSpec:
    type: VSphere
status:
  apiServerInternalURI: https://api-int.jimavsp1210b.qe.devcluster.openshift.com:6443
  apiServerURL: https://api.jimavsp1210b.qe.devcluster.openshift.com:6443
  controlPlaneTopology: HighlyAvailable
  etcdDiscoveryDomain: ""
  infrastructureName: jimavsp1210b-9b9b7
  infrastructureTopology: HighlyAvailable
  platform: VSphere
  platformStatus:
    type: VSphere
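
Note that status.platformStatus carries only type: VSphere with no vsphere sub-object; on UPI installs the installer does not populate platform-specific status, so the nested pointer the operator dereferences is nil. A hedged sketch of the kind of guard the linked PR ("Fix panic when PlatformStatus VSphere is nil") implies; the names below are illustrative stand-ins, the real change lives in pkg/operator/sync.go:

package main

import (
	"errors"
	"fmt"
)

type VSphereStatus struct{ APIServerInternalIP string }

type PlatformStatus struct {
	Type    string
	VSphere *VSphereStatus
}

// getIgnitionHost mirrors the shape of the panicking function: check the
// nested pointer before using it instead of assuming an IPI install filled it in.
func getIgnitionHost(ps *PlatformStatus) (string, error) {
	if ps.Type == "VSphere" {
		if ps.VSphere == nil {
			// UPI: no vSphere-specific status; fail gracefully rather than panic.
			return "", errors.New("vSphere platformStatus is not populated")
		}
		return ps.VSphere.APIServerInternalIP, nil
	}
	return "", nil
}

func main() {
	upi := &PlatformStatus{Type: "VSphere"}
	host, err := getIgnitionHost(upi)
	fmt.Println(host, err) // returns an error instead of crashing
}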

Comment 1 jima 2021-12-10 12:14:40 UTC
*** Bug 2031056 has been marked as a duplicate of this bug. ***

Comment 12 errata-xmlrpc 2022-03-10 16:33:22 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056

