Bug 2048349 - Service CA Operator does not reconcile for spec.loglevel changes in ServiceCA CRD
Summary: Service CA Operator does not reconcile for spec.loglevel changes in ServiceCA CRD
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: service-ca
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.12.0
Assignee: Standa Laznicka
QA Contact: zhou ying
URL:
Whiteboard:
Duplicates: 2048348
Depends On:
Blocks:
 
Reported: 2022-01-31 03:52 UTC by Michael Washer
Modified: 2023-01-17 19:47 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-01-17 19:47:08 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github openshift service-ca-operator pull 196 0 None open Bug 2048349: make the operator react to workload logLevel configuration 2022-07-01 13:35:39 UTC
Red Hat Product Errata RHSA-2022:7399 0 None None None 2023-01-17 19:47:32 UTC

Description Michael Washer 2022-01-31 03:52:44 UTC
The logLevel value set in the ServiceCA CRD is not reconciled immediately; changing the application's log level requires manual intervention, namely deleting the ServiceCA Controller Deployment.

The options are described here:
https://docs.openshift.com/container-platform/4.8/rest_api/operator_apis/serviceca-operator-openshift-io-v1.html

Version-Release number of selected component (if applicable):
OpenShift 4.x
OpenShift Service CA Operator

How reproducible:
Every time

Steps to Reproduce:
1. Change `servicecas.spec.logLevel` to Debug, Trace, or TraceAll
2. Check the ServiceCA Controller Pods: the controller still runs at verbosity level 2

Actual results:
The log level is not increased


Expected results:
LogLevel is increased (without requiring deletion of the whole Deployment)

Additional info:
To force a redeployment of the ServiceCA Controller, we can delete the whole Deployment; the verbosity then increases.
Delete the ServiceCA Controller Deployment: `oc delete deploy -n openshift-service-ca --all`

We can see in the links below:
`needsDeploy` doesn't take into account changes to the ServiceCA CRD:
https://github.com/openshift/service-ca-operator/blob/master/pkg/operator/sync.go#L15-L37

The ServiceCAOperator then skips (re)deploying the ServiceCA Controller when `needsDeploy || caModified` is not true:
https://github.com/openshift/service-ca-operator/blob/master/pkg/operator/sync_common.go#L200-L214
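The shape of the fix can be sketched as follows. This is an illustrative Go sketch, not the operator's actual code: the `deploymentSpec` type and `needsRedeploy` function are hypothetical names standing in for the fields the operator compares. The point is that the redeploy decision must also compare the logLevel from the ServiceCA CRD, which the original `needsDeploy` logic omitted.

```go
package main

import "fmt"

// deploymentSpec stands in for the fields the operator compares when deciding
// whether to redeploy. The names here are hypothetical.
type deploymentSpec struct {
	Image    string
	LogLevel string
}

// needsRedeploy sketches the fixed decision: redeploy when the CA was rotated
// OR any watched field, including the logLevel, changed. The buggy logic did
// not include the logLevel comparison, so spec.logLevel changes were ignored.
func needsRedeploy(current, desired deploymentSpec, caModified bool) bool {
	return caModified || current != desired
}

func main() {
	current := deploymentSpec{Image: "service-ca:v4.9", LogLevel: "Normal"}
	desired := deploymentSpec{Image: "service-ca:v4.9", LogLevel: "Trace"}
	fmt.Println(needsRedeploy(current, desired, false)) // prints "true"
}
```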

Comment 1 Michael Washer 2022-01-31 03:58:02 UTC
*** Bug 2048348 has been marked as a duplicate of this bug. ***

Comment 2 Standa Laznicka 2022-01-31 12:01:14 UTC
I was able to reproduce the issue. The aforementioned `needsDeploy` logic is somewhat flawed in general: most, if not all, of the resources should in fact not force a redeployment.

Comment 6 zhou ying 2022-09-29 03:27:49 UTC
The issue has been fixed:
[root@localhost ~]# oc get clusterversion 
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-0.nightly-2022-09-28-204419   True        False         37m     Cluster version is 4.12.0-0.nightly-2022-09-28-204419


[root@localhost ~]# oc get servicecas cluster -o=jsonpath='{.spec.logLevel}'
Trace
[root@localhost ~]# oc get pod
NAME                          READY   STATUS    RESTARTS   AGE
service-ca-6495bddb88-w7tkk   1/1     Running   0          48s
[root@localhost ~]# oc exec po/service-ca-6495bddb88-w7tkk  -- ps ax
    PID TTY      STAT   TIME COMMAND
      1 ?        Ssl    0:01 service-ca-operator controller -v=6
     16 ?        Rs     0:00 ps ax


[root@localhost ~]# oc get servicecas cluster -o=jsonpath='{.spec.logLevel}'
Normal
[root@localhost ~]# oc exec pod/service-ca-5f9bc879d8-4mt9t -- ps ax
    PID TTY      STAT   TIME COMMAND
      1 ?        Ssl    0:00 service-ca-operator controller -v=2
     16 ?        Rs     0:00 ps ax
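The `-v` values in the verification above (Trace runs with `-v=6`, Normal with `-v=2`) suggest a simple level-to-verbosity mapping. The following is a minimal Go sketch of such a mapping, consistent with the two values observed in this comment; the intermediate values for Debug and TraceAll are assumptions, and the function name is hypothetical rather than taken from the operator's source.

```go
package main

import "fmt"

// logLevelToVerbosity maps a ServiceCA spec.logLevel value to a klog -v flag.
// Trace -> 6 and Normal -> 2 match the observed process arguments; the Debug
// and TraceAll values are assumed for illustration.
func logLevelToVerbosity(level string) int {
	switch level {
	case "Debug":
		return 4
	case "Trace":
		return 6
	case "TraceAll":
		return 8
	default: // "Normal" or unset
		return 2
	}
}

func main() {
	for _, l := range []string{"Normal", "Debug", "Trace", "TraceAll"} {
		fmt.Printf("logLevel=%s -> -v=%d\n", l, logLevelToVerbosity(l))
	}
}
```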

Comment 9 errata-xmlrpc 2023-01-17 19:47:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.12.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7399

