Description of problem:
When I set the kubecontrollermanager with loglevel=Debug, only the kube-controller-manager restarts with -v=4; the cluster-policy-controller and kube-controller-manager-recovery-controller still start with -v=2.

Version-Release number of selected component (if applicable):
[root@localhost ~]# oc version
Client Version: 4.8.0-202103080232.p0-f749845
Server Version: 4.8.0-0.nightly-2021-03-14-134919
Kubernetes Version: v1.20.0+e1bc274

How reproducible:
Always

Steps to Reproduce:
1) Set loglevel=Debug on the kubecontrollermanager;
2) Check the logs of all the containers of the kubecontrollermanager.

Actual results:
2) Only the kube-controller-manager restarts with -v=4; the cluster-policy-controller and kube-controller-manager-recovery-controller still start with -v=2.

Expected results:
2) All the containers should restart with the configured loglevel.

Additional info:
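For reference, the reproduction steps above can be run against a live cluster roughly as follows. This is a sketch: the resource name `cluster` and the `spec.logLevel` field are the standard ones for this operator CR, and the container names are taken from this report; the pod name placeholder must be filled in from the `oc get pods` output.

```
# Set the operator log level (valid values include Normal, Debug, Trace).
oc patch kubecontrollermanager cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'

# After the static pods roll out, check which -v flag each container started with.
oc get pods -n openshift-kube-controller-manager
oc logs -n openshift-kube-controller-manager <kube-controller-manager-pod> -c kube-controller-manager | head
oc logs -n openshift-kube-controller-manager <kube-controller-manager-pod> -c cluster-policy-controller | head
oc logs -n openshift-kube-controller-manager <kube-controller-manager-pod> -c kube-controller-manager-recovery-controller | head
```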
PR created here: https://github.com/openshift/cluster-kube-controller-manager-operator/pull/511
Waiting for PR review.
Verified with a cluster built by clusterbot. When I set the kubecontrollermanager with loglevel=Debug, the kube-controller-manager restarts with -v=4, and the cluster-policy-controller and kube-controller-manager-recovery-controller now start with -v=4 as well:

exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \
  --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
  --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \
  --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=4

exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml -v=4

exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=4

Also set the kubecontrollermanager with loglevel=Trace: the kube-controller-manager restarts with -v=6, and the cluster-policy-controller and kube-controller-manager-recovery-controller start with -v=6 as well:

exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \
  --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
  --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \
  --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=6

exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml -v=6

exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=6
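The verification above exercises two points of the logLevel-to-verbosity mapping (Debug → -v=4, Trace → -v=6; Normal → -v=2 is the default the broken containers fell back to). A minimal sketch of that mapping, with the "TraceAll" → 8 entry assumed from the usual OpenShift operator convention rather than shown in this report:

```python
# logLevel -> klog -v verbosity, as observed in this bug report.
# "TraceAll" -> 8 is an assumption based on the common operator convention.
LOGLEVEL_TO_V = {
    "Normal": 2,   # default; the level the unfixed containers stayed at
    "Debug": 4,    # verified: all three containers restart with -v=4
    "Trace": 6,    # verified: all three containers restart with -v=6
    "TraceAll": 8, # assumed, not exercised in this report
}

def verbosity(level: str) -> int:
    """Return the -v flag for a given logLevel, defaulting to Normal (2)."""
    return LOGLEVEL_TO_V.get(level, 2)
```

The fix in PR 511 amounts to applying this same mapping to the cluster-policy-controller and cert-recovery-controller container arguments, not just to kube-controller-manager.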
Already verified by building a cluster with clusterbot; waiting for the bot to move the bug to the verified state.
Moving the bug to verified state as it has been verified via pre-merge verification as per comment 3.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438