Bug 2029750

Summary: CVO keeps restarting because it fails to get the feature gate value during the initial start stage
Product: OpenShift Container Platform Reporter: liujia <jiajliu>
Component: Cluster Version Operator    Assignee: W. Trevor King <wking>
Status: CLOSED ERRATA QA Contact: liujia <jiajliu>
Severity: medium Docs Contact:
Priority: medium    
Version: 4.10    CC: aos-bugs, wking
Target Milestone: ---   
Target Release: 4.10.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:    Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-03-10 16:32:20 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description liujia 2021-12-07 08:37:53 UTC
Description of problem:
The CVO keeps restarting during the initial start stage because it fails to get the feature gate value. It eventually settles and works, but it leaves many raw log lines in the container status, which are hard to read and confusing.

# ./oc get po -n openshift-cluster-version
NAME                                        READY   STATUS    RESTARTS      AGE
cluster-version-operator-7c99bcfc6c-w8w8m   1/1     Running   6 (37m ago)   43m

# ./oc -n openshift-cluster-version get po -ojson|jq .items[].status.containerStatuses
[
  {
    "containerID": "cri-o://92636ec17d558f22655a8b76339bbc8582d8fd59b54587562885c61922fb00cd",
    "image": "registry.ci.openshift.org/ocp/release@sha256:0ab310bf32a549764f37387a1da2f148485ce03a412ac9afea4f7173e4d43d5e",
    "imageID": "registry.ci.openshift.org/ocp/release@sha256:0ab310bf32a549764f37387a1da2f148485ce03a412ac9afea4f7173e4d43d5e",
    "lastState": {
      "terminated": {
        "containerID": "cri-o://a10a30306228bbca855cb99ad1662944985f95775a765e3bbe4ccf795ad16a13",
        "exitCode": 255,
        "finishedAt": "2021-12-07T02:04:00Z",
        "message": "b1.assembly.stream-de6f4b1\nI1207 02:04:00.439091       1 merged_client_builder.go:121] Using in-cluster configuration\nF1207 02:04:00.439590       1 start.go:29] error: error getting featuregate value: Get \"https://127.0.0.1:6443/apis/config.openshift.io/v1/featuregates/cluster\": dial tcp 127.0.0.1:6443: connect: connection refused\ngoroutine 1 [running]:\nk8s.io/klog/v2.stacks(0x1)\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1026 +0x8a\nk8s.io/klog/v2.(*loggingT).output(0x2af1ca0, 0x3, {0x0, 0x0}, 0xc0002c02a0, 0x0, {0x214a773, 0xc000181fb0}, 0x0, 0x0)\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:975 +0x63d\nk8s.io/klog/v2.(*loggingT).printf(0xc0001c3d70, 0x590625, {0x0, 0x0}, {0x0, 0x0}, {0x1a6ce54, 0x9}, {0xc000181fb0, 0x1, ...})\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:753 +0x1e5\nk8s.io/klog/v2.Fatalf(...)\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1514\nmain.init.3.func1(0xc000470280, {0x1a6717c, 0x7, 0x7})\n\t/go/src/github.com/openshift/cluster-version-operator/cmd/start.go:29 +0x1f5\ngithub.com/spf13/cobra.(*Command).execute(0xc000470280, {0xc0002c01c0, 0x7, 0x7})\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:856 +0x5f8\ngithub.com/spf13/cobra.(*Command).ExecuteC(0x2ada8c0)\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:960 +0x3ad\ngithub.com/spf13/cobra.(*Command).Execute(...)\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:897\nmain.main()\n\t/go/src/github.com/openshift/cluster-version-operator/cmd/main.go:29 +0x46\n\ngoroutine 19 [chan receive]:\nk8s.io/klog/v2.(*loggingT).flushDaemon(0x0)\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1169 +0x6a\ncreated by k8s.io/klog/v2.init.0\n\t/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:420 +0xfb\n",
        "reason": "Error",
        "startedAt": "2021-12-07T02:04:00Z"
      }
    },
    "name": "cluster-version-operator",
    "ready": true,
    "restartCount": 6,
    "started": true,
    "state": {
      "running": {
        "startedAt": "2021-12-07T02:06:41Z"
      }
    }
  }
]
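
For reference, the traceback in lastState.terminated.message above shows the operator exiting via klog.Fatalf at cmd/start.go:29 when the FeatureGate GET fails with "connection refused" (the local apiserver endpoint is not up yet during bootstrap), which is why the container exits 255 and the kubelet keeps restarting it. A minimal Go sketch of that fatal path, for illustration only (this is not the actual cluster-version-operator source; the client wiring here is an assumption):

// Illustrative sketch, not the actual CVO code: a single FeatureGate GET
// guarded by Fatalf. If the apiserver refuses the connection, the process
// exits non-zero and the kubelet restarts the pod.
package main

import (
	"context"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatalf("error loading in-cluster config: %v", err)
	}
	client := configclient.NewForConfigOrDie(cfg)

	// During early bootstrap https://127.0.0.1:6443 can refuse connections,
	// so this one-shot lookup fails and Fatalf kills the whole process.
	gate, err := client.ConfigV1().FeatureGates().Get(context.TODO(), "cluster", metav1.GetOptions{})
	if err != nil {
		klog.Fatalf("error: error getting featuregate value: %v", err)
	}
	klog.Infof("feature set: %s", gate.Spec.FeatureSet)
}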

Version-Release number of the following components:
4.10.0-0.nightly-2021-12-06-201335

How reproducible:
always

Steps to Reproduce:
1. Check the CVO pod status after a fresh installation.

Actual results:
The CVO keeps restarting during the initial start stage.

Expected results:
The CVO should tolerate the apiserver being temporarily unreachable during initial startup instead of exiting fatally and restarting.

Additional info:

Comment 2 liujia 2021-12-13 02:40:56 UTC
Verified on 4.10.0-0.nightly-2021-12-12-184227

# ./oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2021-12-12-184227   True        False         26m     Cluster version is 4.10.0-0.nightly-2021-12-12-184227

# ./oc get po -n openshift-cluster-version
NAME                                        READY   STATUS    RESTARTS   AGE
cluster-version-operator-654c4cfb9f-sbbgh   1/1     Running   0          48m

# ./oc -n openshift-cluster-version get po -ojson|jq .items[].status.containerStatuses
[
  {
    "containerID": "cri-o://f91c59354204d7d7600851743c4dc5c395de9e2324a11a91467b97c740a01aff",
    "image": "registry.ci.openshift.org/ocp/release@sha256:685c7444567c2759f314a819da023b72b88d065d33ab9ce7e6a326e114aeca75",
    "imageID": "registry.ci.openshift.org/ocp/release@sha256:685c7444567c2759f314a819da023b72b88d065d33ab9ce7e6a326e114aeca75",
    "lastState": {},
    "name": "cluster-version-operator",
    "ready": true,
    "restartCount": 0,
    "started": true,
    "state": {
      "running": {
        "startedAt": "2021-12-13T01:54:16Z"
      }
    }
  }
]

# ./oc -n openshift-cluster-version logs cluster-version-operator-654c4cfb9f-sbbgh|grep "Error getting featuregate"
W1213 01:54:16.279051       1 start.go:145] Error getting featuregate value: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 127.0.0.1:6443: connect: connection refused
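
The warning above comes from start.go:145, whereas the original report died at start.go:29, so the failed lookup is now logged and retried instead of being fatal. A minimal Go sketch of that warn-and-retry pattern, for illustration only (waitForFeatureGate, the 2-second interval, and the 5-minute timeout are assumptions, not the operator's actual code):

// Illustrative sketch of warn-and-retry instead of Fatalf when the
// FeatureGate lookup fails while the apiserver is still coming up.
package main

import (
	"context"
	"time"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/rest"
	"k8s.io/klog/v2"
)

// waitForFeatureGate polls for the cluster FeatureGate, warning (rather than
// exiting) on transient errors such as "connection refused".
func waitForFeatureGate(client configclient.Interface) (*configv1.FeatureGate, error) {
	var gate *configv1.FeatureGate
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		g, err := client.ConfigV1().FeatureGates().Get(context.TODO(), "cluster", metav1.GetOptions{})
		if err != nil {
			klog.Warningf("Error getting featuregate value: %v", err)
			return false, nil // keep retrying
		}
		gate = g
		return true, nil
	})
	return gate, err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatalf("error loading in-cluster config: %v", err)
	}
	gate, err := waitForFeatureGate(configclient.NewForConfigOrDie(cfg))
	if err != nil {
		klog.Fatalf("gave up waiting for featuregate: %v", err)
	}
	klog.Infof("feature set: %s", gate.Spec.FeatureSet)
}

With this shape, a bootstrap-time "connection refused" produces the single warning line seen above instead of a container restart.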

Comment 6 errata-xmlrpc 2022-03-10 16:32:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056