Bug 1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: Mike Dame
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2021-03-15 02:34 UTC by zhou ying
Modified: 2021-07-27 22:53 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 22:53:18 UTC
Target Upstream Version:
Embargoed:


Links:
- GitHub: openshift/cluster-kube-controller-manager-operator pull 511 (open): "Bug 1938636: Set logLevel for policy and recovery controllers", last updated 2021-03-16 14:23:21 UTC
- Red Hat Product Errata: RHSA-2021:2438, last updated 2021-07-27 22:53:42 UTC

Description zhou ying 2021-03-15 02:34:50 UTC
Description of problem:
When I set the kubecontrollermanager loglevel to Debug, only the kube-controller-manager container restarts with -v=4; the cluster-policy-controller and kube-controller-manager-recovery-controller containers still start with -v=2.
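
(For reference: a minimal sketch of setting spec.logLevel on the operator resource. The exact commands used are not recorded in this report, so the resource name "cluster" and the namespace below are the usual defaults rather than values taken from the reproducer.)

# Sketch: raise the log level to Debug on the cluster-scoped kubecontrollermanager resource.
oc patch kubecontrollermanager cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'

# The operator should roll out new kube-controller-manager static pods; watch the rollout.
oc get pods -n openshift-kube-controller-manager -w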

Version-Release number of selected component (if applicable):
[root@localhost ~]# oc version 
Client Version: 4.8.0-202103080232.p0-f749845
Server Version: 4.8.0-0.nightly-2021-03-14-134919
Kubernetes Version: v1.20.0+e1bc274

How reproducible:
Always

Steps to Reproduce:
1) Set loglevel=Debug on the kubecontrollermanager;
2) Check the logs and startup arguments of all the containers of the kubecontrollermanager static pod (a rough sketch of this check is shown below).
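
The check in step 2, sketched (pod and namespace names are the usual defaults, not recorded in this report; adjust for your cluster):

# Pick one kube-controller-manager static pod and print each container's name and args,
# so the -v flag each container starts with is visible.
KCM_POD=$(oc get pods -n openshift-kube-controller-manager -o name | grep '^pod/kube-controller-manager-' | grep -v guard | head -n 1)
oc get "$KCM_POD" -n openshift-kube-controller-manager \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{.args}{"\n\n"}{end}'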

Actual results:
2) Only the kube-controller-manager container restarts with -v=4; the cluster-policy-controller and kube-controller-manager-recovery-controller containers still start with -v=2.
Expected results:
2) All the containers should restart with the configured loglevel.
Additional info:

Comment 2 Mike Dame 2021-03-18 14:49:07 UTC
Waiting on PR review.

Comment 3 RamaKasturi 2021-06-09 16:27:43 UTC
Verified with a cluster built by cluster-bot. When I set the kubecontrollermanager loglevel to Debug, the kube-controller-manager restarts with -v=4, and the cluster-policy-controller and kube-controller-manager-recovery-controller start with -v=4 as well:


 exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \
        --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \
        --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=4

exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml -v=4
      exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=4

After setting the kubecontrollermanager loglevel to Trace, the kube-controller-manager restarts with -v=6, and the cluster-policy-controller and kube-controller-manager-recovery-controller start with -v=6 as well:

exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \
        --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
        --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \
        --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=6 
exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml -v=6
      exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=6
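
For anyone re-checking this, a quick way to confirm the verbosity of all the containers after changing spec.logLevel (a sketch; it assumes the static pods carry the app=kube-controller-manager label, which is not shown in this report):

# All containers of the static pod should report the same -v value.
oc get pods -n openshift-kube-controller-manager -l app=kube-controller-manager -o yaml | grep -- '-v='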

Comment 5 RamaKasturi 2021-06-11 17:45:04 UTC
Have already verified this by building a cluster using cluster-bot; waiting for the bot to move it to the verified state.

Comment 6 RamaKasturi 2021-06-15 06:07:33 UTC
Moving the bug to the verified state, as it has been verified via pre-merge verification per comment 3.

Comment 9 errata-xmlrpc 2021-07-27 22:53:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

