Bug 1891742
| Summary: | 4.6: OperatorStatusChanged is noisy | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Stefan Schimanski <sttts> |
| Component: | kube-controller-manager | Assignee: | Maciej Szulik <maszulik> |
| Status: | CLOSED ERRATA | QA Contact: | zhou ying <yinzhou> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.6 | CC: | aos-bugs, geliu, kewang, lxia, mfojtik, mifiedle, sbatsche, sttts, xxia |
| Target Milestone: | --- | Keywords: | UpcomingSprint |
| Target Release: | 4.6.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1891740 | Environment: | |
| Last Closed: | 2021-01-25 20:02:12 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1891740 | | |
| Bug Blocks: | | | |
Comment 3
Xingxing Xia
2020-11-12 09:53:49 UTC
I'm following JIRA issue DPTP-660 to do the pre-merge verification. I used cluster-bot to launch an env with the still-open but dev-approved PR(s): `launch openshift/cluster-kube-apiserver-operator#1029`. Checking the env with the steps in the previous comment, the kube-apiserver-operator logs no longer show the noise. The KCM and kube-scheduler operator pods' logs still do, as in the previous step. Are there PRs for the KCM and kube-scheduler operators? Moving back to ASSIGNED. Feel free to move it back if the answer is no, or attach their PRs if the answer is yes.

Moving to the workloads team so they can bump library-go in KCM and KS.

PR in the queue.

The bug attaches 3 PRs, but only the KAS PR was pre-merge verified, so the BZ robot should not have automatically moved it to VERIFIED. Manually moving it back to ON_QA to verify the other 2 KCM/KS PRs.

Checked with the latest payload; the issue has been fixed:

```
[root@dhcp-140-138 ~]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2021-01-18-070340   True        False         45m     Cluster version is 4.6.0-0.nightly-2021-01-18-070340
[root@dhcp-140-138 ~]# oc logs deployment/kube-controller-manager-operator -n openshift-kube-controller-manager-operator > logs/kcm_o.log
[root@dhcp-140-138 ~]# oc logs deployment/openshift-kube-scheduler-operator -n openshift-kube-scheduler-operator > logs/ks_o.log
[root@dhcp-140-138 ~]# grep 'OperatorStatusChanged.*StaticPodsDegraded: pod.*kube-.*container.*is running.*but not ready: unknown reason"' logs/kcm_o.log | wc -l
0
[root@dhcp-140-138 ~]# grep 'OperatorStatusChanged.*StaticPodsDegraded: pod.*kube-.*container.*is running.*but not ready: unknown reason"$' logs/kcm_o.log | wc -l
0
[root@dhcp-140-138 ~]# grep 'OperatorStatusChanged.*StaticPodsDegraded: pod.*kube-.*container.*is running.*but not ready: unknown reason"$' logs/ks_o.log | wc -l
0
[root@dhcp-140-138 ~]# grep 'OperatorStatusChanged.*StaticPodsDegraded: pod.*kube-.*container.*is running.*but not ready: unknown reason"' logs/ks_o.log | wc -l
0
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.6.13 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0171
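The per-operator greps used in the verification above can be wrapped into a small helper. A minimal sketch, assuming logs were already dumped with `oc logs deployment/<operator> > logs/<name>.log` as in the transcript; the `count_noise` function name and the `logs/*.log` paths are my own illustration, not from the bug:

```shell
# count_noise LOGFILE — print how many noisy "unknown reason" status-change
# events a dumped operator log contains (pattern adapted from the greps above).
count_noise() {
  pattern='OperatorStatusChanged.*StaticPodsDegraded: pod.*kube-.*container.*is running.*but not ready: unknown reason'
  # grep -c exits non-zero when there are no matches but still prints 0,
  # so neutralize the exit status and keep the count.
  grep -c "$pattern" "$1" || true
}

# Typical use over the logs dumped during verification; every count
# should be 0 on a fixed payload.
for log in logs/kcm_o.log logs/ks_o.log; do
  if [ -f "$log" ]; then
    echo "$log: $(count_noise "$log")"
  fi
done
```

This only checks the specific message this bug was filed about; other `OperatorStatusChanged` events are expected and left alone.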