Description of problem:

The `logLevel` value set in the Service CA CRD is not reconciled immediately; the ServiceCA Controller Deployment must be deleted by hand before the controller's log level changes. The available options are described here:

https://docs.openshift.com/container-platform/4.8/rest_api/operator_apis/serviceca-operator-openshift-io-v1.html

Version-Release number of selected component (if applicable):

OpenShift 4.x
OpenShift Service CA Operator

How reproducible:

Every time

Steps to Reproduce:
1. Change `servicecas.spec.logLevel` to Debug, Trace, or TraceAll
2. Check the ServiceCA Controller pods - the verbosity is still level 2

Actual results:

The log level is not increased.

Expected results:

The log level is increased (without requiring deletion of the whole Deployment).

Additional info:

To force a redeployment of the ServiceCA Controller, we can delete the whole Deployment, and the verbosity will increase:

`oc delete deploy -n openshift-service-ca --all`

As the links below show, `needsDeploy` does not take changes to the ServiceCA CRD into account:

https://github.com/openshift/service-ca-operator/blob/master/pkg/operator/sync.go#L15-L37

The ServiceCA Operator then skips (re)deploying the ServiceCA Controller when `needsDeploy || caModified` is not true:

https://github.com/openshift/service-ca-operator/blob/master/pkg/operator/sync_common.go#L200-L214
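For reference, one way to do step 1 is `oc patch servicecas cluster --type=merge -p '{"spec":{"logLevel":"Trace"}}'`. The controller's `-v=` flag follows the usual OpenShift mapping from `logLevel` to a klog verbosity (Normal=2, Debug=4, Trace=6, TraceAll=8), so a sync loop only needs to compare the requested verbosity against the running Deployment's arguments to know a redeploy is due. A minimal sketch of that comparison - the helper names here are hypothetical, not the operator's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// verbosityFor maps a ServiceCA spec.logLevel to a klog -v level using
// the conventional OpenShift operator mapping (Normal=2, Debug=4,
// Trace=6, TraceAll=8).
func verbosityFor(logLevel string) int {
	switch logLevel {
	case "Debug":
		return 4
	case "Trace":
		return 6
	case "TraceAll":
		return 8
	default: // "Normal" or unset
		return 2
	}
}

// argsNeedUpdate (hypothetical helper, not the operator's actual code)
// reports whether the running controller's arguments carry a different
// verbosity than the CRD currently requests.
func argsNeedUpdate(currentArgs []string, logLevel string) bool {
	want := fmt.Sprintf("-v=%d", verbosityFor(logLevel))
	for _, arg := range currentArgs {
		if strings.HasPrefix(arg, "-v=") {
			return arg != want
		}
	}
	return true // no -v flag present at all
}

func main() {
	running := []string{"controller", "-v=2"}
	fmt.Println(argsNeedUpdate(running, "Trace"))  // true: wants -v=6
	fmt.Println(argsNeedUpdate(running, "Normal")) // false: already at -v=2
}
```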
*** Bug 2048348 has been marked as a duplicate of this bug. ***
I was able to reproduce the issue. The aforementioned `needsDeploy` logic is somewhat flawed in general; in fact, changes to most, if not all, of those resources should not be what forces a redeployment.
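For what it's worth, the common pattern in OpenShift operators is to render the fully expected Deployment (including the `-v=` flag derived from the current `logLevel`) on every sync and apply it whenever it drifts, rather than gating the apply on a separate `needsDeploy` flag. A rough sketch of that shape, assuming standard client-go; `renderDeployment`, the image, and the comparison are simplified for illustration and are not the operator's actual code:

```go
package operator

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// renderDeployment builds the fully expected Deployment from the current
// operator config, including the verbosity flag, so any logLevel change
// surfaces as a spec diff. (Hypothetical helper; image and probes omitted.)
func renderDeployment(verbosity int) *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"app": "service-ca"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "service-ca", Namespace: "openshift-service-ca"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "service-ca-controller",
						Image: "example.invalid/service-ca-operator:latest", // placeholder
						Args:  []string{"controller", fmt.Sprintf("-v=%d", verbosity)},
					}},
				},
			},
		},
	}
}

// syncDeployment applies the expected Deployment unconditionally:
// create it if missing, update it if the pod template drifted. Updating
// the pod template triggers a rolling restart, so a logLevel change
// takes effect without deleting the Deployment.
func syncDeployment(ctx context.Context, client kubernetes.Interface, verbosity int) error {
	expected := renderDeployment(verbosity)
	deployments := client.AppsV1().Deployments(expected.Namespace)

	existing, err := deployments.Get(ctx, expected.Name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		_, err = deployments.Create(ctx, expected, metav1.CreateOptions{})
		return err
	}
	if err != nil {
		return err
	}
	if !equality.Semantic.DeepEqual(existing.Spec.Template, expected.Spec.Template) {
		existing.Spec.Template = expected.Spec.Template
		_, err = deployments.Update(ctx, existing, metav1.UpdateOptions{})
	}
	return err
}
```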
The issue has been fixed:

[root@localhost ~]# oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-0.nightly-2022-09-28-204419   True        False         37m     Cluster version is 4.12.0-0.nightly-2022-09-28-204419

[root@localhost ~]# oc get servicecas cluster -o=jsonpath='{.spec.logLevel}'
Trace
[root@localhost ~]# oc get pod
NAME                          READY   STATUS    RESTARTS   AGE
service-ca-6495bddb88-w7tkk   1/1     Running   0          48s
[root@localhost ~]# oc exec po/service-ca-6495bddb88-w7tkk -- ps ax
    PID TTY      STAT   TIME COMMAND
      1 ?        Ssl    0:01 service-ca-operator controller -v=6
     16 ?        Rs     0:00 ps ax

[root@localhost ~]# oc get servicecas cluster -o=jsonpath='{.spec.logLevel}'
Normal
[root@localhost ~]# oc exec pod/service-ca-5f9bc879d8-4mt9t -- ps ax
    PID TTY      STAT   TIME COMMAND
      1 ?        Ssl    0:00 service-ca-operator controller -v=2
     16 ?        Rs     0:00 ps ax
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.12.0 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7399