Bug 2048349

Summary: Service CA Operator does not reconcile for spec.loglevel changes in ServiceCA CRD
Product: OpenShift Container Platform Reporter: Michael Washer <mwasher>
Component: service-ca    Assignee: Standa Laznicka <slaznick>
Status: CLOSED ERRATA QA Contact: zhou ying <yinzhou>
Severity: medium Docs Contact:
Priority: medium    
Version: 4.9    CC: aos-bugs, kostrows, mfojtik, slaznick, surbania, xxia
Target Milestone: ---   
Target Release: 4.12.0   
Hardware: Unspecified   
OS: Unspecified   
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-01-17 19:47:08 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:

Description Michael Washer 2022-01-31 03:52:44 UTC
The logLevel value set in the ServiceCA CRD is not reconciled immediately; manual intervention (deleting the ServiceCA Controller Deployment) is required for the application's log level to change.

The options are described here:

Version-Release number of selected component (if applicable):
OpenShift 4.x
OpenShift Service CA Operator

How reproducible:
Every time

Steps to Reproduce:
1. Change the `servicecas.spec.logLevel` options to Debug, Trace or TraceAll
2. Check the ServiceCA Controller Pods - they are still running at verbosity level 2 (`-v=2`)
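The steps above can be reproduced on a live cluster with commands like the following. This is a sketch: the resource name `servicecas.operator.openshift.io/cluster` and the `service-ca` Deployment in `openshift-service-ca` match this report, but the exact verbosity flag mapping (e.g. Trace → `-v=6`, as seen in Comment 6) may vary by release.

```shell
# Raise the log level on the ServiceCA CR (requires cluster-admin).
oc patch servicecas.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"logLevel":"Trace"}}'

# Inspect the controller process; with the bug present the -v flag
# stays at its old value (-v=2) until the Deployment is deleted.
oc -n openshift-service-ca exec deploy/service-ca -- ps ax
```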

Actual results:
Loglevel is not increased

Expected results:
LogLevel is increased (without requiring deletion of the whole Deployment)

Additional info:
To force a redeployment of the ServiceCA Controller, delete the whole Deployment and the verbosity will increase:
Delete the ServiceCA Controller Deployment: `oc delete deploy -n openshift-service-ca --all`

We can see in the links below:
`needsDeploy` doesn't take into account changes to the ServiceCA CRD:

ServiceCAOperator then avoids (re)deploying the ServiceCA Controller when `needsDeploy || caModified` is not true:
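The flawed gate can be sketched as follows. This is a simplified illustration with hypothetical names (`state`, `buggyNeedsDeploy`, `fixedNeedsDeploy`), not the operator's actual code: the point is that `needsDeploy || caModified` never becomes true for a spec-only change such as `logLevel`, whereas comparing the CR's `metadata.generation` against the last generation applied to the Deployment would catch it.

```go
package main

import "fmt"

// state collects the inputs the operator could consult when deciding
// whether the controller Deployment must be (re)deployed.
type state struct {
	deploymentMissing bool  // controller Deployment not found
	caModified        bool  // signing CA was rotated or replaced
	crGeneration      int64 // metadata.generation of the ServiceCA CR
	observedGen       int64 // generation last applied to the Deployment
}

// buggyNeedsDeploy mirrors the reported behavior: CR spec changes
// (e.g. spec.logLevel) are ignored entirely.
func buggyNeedsDeploy(s state) bool {
	return s.deploymentMissing || s.caModified
}

// fixedNeedsDeploy also redeploys when the CR spec changed, i.e. the
// generation has advanced past what was last observed.
func fixedNeedsDeploy(s state) bool {
	return s.deploymentMissing || s.caModified || s.crGeneration != s.observedGen
}

func main() {
	// User sets spec.logLevel: Trace -> generation bumps from 1 to 2.
	s := state{crGeneration: 2, observedGen: 1}
	fmt.Println(buggyNeedsDeploy(s)) // false: pod keeps running with -v=2
	fmt.Println(fixedNeedsDeploy(s)) // true: controller gets redeployed
}
```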

Comment 1 Michael Washer 2022-01-31 03:58:02 UTC
*** Bug 2048348 has been marked as a duplicate of this bug. ***

Comment 2 Standa Laznicka 2022-01-31 12:01:14 UTC
I was able to reproduce the issue. The aforementioned `needsDeploy` logic is somewhat flawed in general; in fact, most if not all of the resources should not force a redeployment.

Comment 6 zhou ying 2022-09-29 03:27:49 UTC
The issue has been fixed:
[root@localhost ~]# oc get clusterversion 
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-0.nightly-2022-09-28-204419   True        False         37m     Cluster version is 4.12.0-0.nightly-2022-09-28-204419

 oc get servicecas cluster -o=jsonpath='{.spec.logLevel}'
Trace[root@localhost ~]# oc get pod
NAME                          READY   STATUS    RESTARTS   AGE
service-ca-6495bddb88-w7tkk   1/1     Running   0          48s
[root@localhost ~]# oc exec po/service-ca-6495bddb88-w7tkk  -- ps ax
      1 ?        Ssl    0:01 service-ca-operator controller -v=6
     16 ?        Rs     0:00 ps ax

[root@localhost ~]# oc get servicecas cluster -o=jsonpath='{.spec.logLevel}'
Normal[root@localhost ~]# oc exec pod/service-ca-5f9bc879d8-4mt9t -- ps ax
      1 ?        Ssl    0:00 service-ca-operator controller -v=2
     16 ?        Rs     0:00 ps ax

Comment 9 errata-xmlrpc 2023-01-17 19:47:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.12.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.