Bug 1938636

Summary: Can't set the loglevel of the cluster-policy-controller and kube-controller-manager-recovery-controller containers
Product: OpenShift Container Platform
Component: kube-controller-manager
Version: 4.8
Target Release: 4.8.0
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Status: CLOSED ERRATA
Reporter: zhou ying <yinzhou>
Assignee: Mike Dame <mdame>
QA Contact: RamaKasturi <knarra>
CC: aos-bugs, knarra, mfojtik
Type: Bug
Last Closed: 2021-07-27 22:53:18 UTC

Description zhou ying 2021-03-15 02:34:50 UTC
Description of problem:
When I set the kubecontrollermanager loglevel to Debug, only the kube-controller-manager container restarts with -v=4; the cluster-policy-controller and kube-controller-manager-recovery-controller containers still start with -v=2.

Version-Release number of selected component (if applicable):
[root@localhost ~]# oc version 
Client Version: 4.8.0-202103080232.p0-f749845
Server Version: 4.8.0-0.nightly-2021-03-14-134919
Kubernetes Version: v1.20.0+e1bc274

How reproducible:
Always

Steps to Reproduce:
1) Set loglevel=Debug on the kubecontrollermanager (see the sketch after this list);
2) Check the logs of all the kubecontrollermanager containers
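
For reference, a minimal sketch of step 1, assuming the default name "cluster" for the cluster-scoped kubecontrollermanager object:

# Set the operator loglevel to Debug (other accepted values include Normal and Trace)
oc patch kubecontrollermanager cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'

# Watch the static pods roll out with the new setting
oc -n openshift-kube-controller-manager get pods -w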

Actual results:
2) Only the kube-controller-manager container restarts with -v=4; the cluster-policy-controller and kube-controller-manager-recovery-controller containers still start with -v=2.

Expected results:
2) All the containers should restart with the configured loglevel
Additional info:

Comment 2 Mike Dame 2021-03-18 14:49:07 UTC
Waiting on PR review

Comment 3 RamaKasturi 2021-06-09 16:27:43 UTC
Verified with a cluster built by cluster-bot: after setting the kubecontrollermanager loglevel to Debug, the kube-controller-manager restarts with -v=4, and the cluster-policy-controller and kube-controller-manager-recovery-controller now also start with -v=4:


exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \
        --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \
        --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=4

exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml -v=4

exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=4
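
The command lines above come from the kube-controller-manager static pod spec; one rough way to extract just the verbosity flags (assuming the static pods carry the app=kube-controller-manager label) is:

# Grep the container start commands for their -v flags
oc -n openshift-kube-controller-manager get pods -l app=kube-controller-manager -o yaml | grep -- '-v='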

Also set the kubecontrollermanager loglevel to Trace: the kube-controller-manager restarts with -v=6, and the cluster-policy-controller and kube-controller-manager-recovery-controller also start with -v=6:

exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \
        --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \
        --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=6

exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml -v=6

exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=6
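
For completeness, the observed mapping from spec.logLevel to klog verbosity is Normal -> -v=2, Debug -> -v=4, Trace -> -v=6 (TraceAll -> -v=8 to my understanding, though that value was not exercised here). The Trace run can be triggered the same way as Debug, e.g.:

# Bump the operator loglevel to Trace (assumption: default object name "cluster")
oc patch kubecontrollermanager cluster --type=merge -p '{"spec":{"logLevel":"Trace"}}'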

Comment 5 RamaKasturi 2021-06-11 17:45:04 UTC
Have already verified by building a cluster using cluster-bot; waiting for the bot to move the bug to the verified state.

Comment 6 RamaKasturi 2021-06-15 06:07:33 UTC
Moving the bug to the verified state, as it has been verified via pre-merge verification per comment 3.

Comment 9 errata-xmlrpc 2021-07-27 22:53:18 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438