Description of problem:
Some log levels in ibm-vpc-block-csi-controller are hard-coded.

Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2021-12-23-153012

How reproducible:
Always

Steps to Reproduce:
1. Install an OCP cluster.
2. oc get deployment.apps/ibm-vpc-block-csi-controller -o yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    operator.openshift.io/spec-hash: a5931b5990a06b02338852795be0fe02752bdd7fb118f9d4bd0dcea3bc49b998
  creationTimestamp: "2022-01-04T02:02:34Z"
  generation: 1
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    app: ibm-vpc-block-csi-driver
  name: ibm-vpc-block-csi-controller
  namespace: openshift-cluster-csi-drivers
  resourceVersion: "24188"
  uid: 9ada1081-a6bc-4cfa-9dc5-bace21901401
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ibm-vpc-block-csi-driver
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ibm-vpc-block-csi-driver
    spec:
      containers:
      - args:
        - --v=5
        - --csi-address=/csi/csi.sock
        - --timeout=900s
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:231f3a0782eb8fbc97f9922b4de24febe4f24bda0f02282db1377ca3e1ee894d
        imagePullPolicy: Always
        name: csi-resizer
        resources:
          limits:
            cpu: 80m
            memory: 160Mi
          requests:
            cpu: 20m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: socket-dir
      - args:
        - --v=5
        - --csi-address=$(ADDRESS)
        - --timeout=600s
        - --feature-gates=Topology=true
        env:
        - name: ADDRESS
          value: /csi/csi.sock
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc05d08aac841a74a191d93a6186cc380f056e63cd324cc2969686d401f878e
        imagePullPolicy: Always
        name: csi-provisioner
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 20Mi
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: socket-dir
      - args:
        - --v=5
        - --csi-address=/csi/csi.sock
        - --timeout=900s
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5ccd6be4c6ded9eefb1715cce26adde6498afefd62fe2610323d90852df8c51c
        imagePullPolicy: Always
        name: csi-attacher
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 20Mi
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: socket-dir
      - args:
        - --csi-address=/csi/csi.sock
        - --v=2
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:806003f3eede5838f694de5347eb5ca1b64174f37aa388104305996e890f56ca
        imagePullPolicy: IfNotPresent
        name: liveness-probe
        resources:
          limits:
            cpu: 50m
            memory: 50Mi
          requests:
            cpu: 5m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: socket-dir
      - args:
        - --v=5
        - --endpoint=$(CSI_ENDPOINT)
        - --lock_enabled=false
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        envFrom:
        - configMapRef:
            name: ibm-vpc-block-csi-configmap
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16b98894889e6519528edc2e07d0ec217852bd4f7d48d791ea767f6c3d6e7e81
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: healthz
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        name: iks-vpc-block-driver
        ports:
        - containerPort: 9808
          name: healthz
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 50m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: socket-dir
        - mountPath: /etc/storage_ibmc
          name: customer-auth
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ibm-vpc-block-controller-sa
      serviceAccountName: ibm-vpc-block-controller-sa
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: socket-dir
      - name: customer-auth
        secret:
          defaultMode: 420
          secretName: storage-secret-store
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-01-04T02:12:10Z"
    lastUpdateTime: "2022-01-04T02:12:10Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-01-04T02:02:34Z"
    lastUpdateTime: "2022-01-04T02:12:10Z"
    message: ReplicaSet "ibm-vpc-block-csi-controller-5dc949cf6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

3. oc get deployment.apps/ibm-vpc-block-csi-controller -o yaml | grep "\-\-v="
        - --v=5
        - --v=5
        - --v=5
        - --v=2
        - --v=5

4. Update the clustercsidriver log level to "Trace" (see the patch example under Additional info) and check again:
oc get deployment.apps/ibm-vpc-block-csi-controller -o yaml | grep "\-\-v"
        - --v=5
        - --v=5
        - --v=5
        - --v=6
        - --v=5

Only the liveness-probe container picks up the new verbosity (--v=2 -> --v=6); the sidecar and driver containers stay at the hard-coded --v=5.

Actual results:
Some log levels in ibm-vpc-block-csi-controller are hard-coded and do not follow the clustercsidriver logLevel.

Expected results:
The log levels should update when the clustercsidriver logLevel is changed.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
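For reference, one way to make the step-4 change is a merge patch on the clustercsidriver object. The instance name vpc.block.csi.ibm.io below is an assumption (it matches the IBM VPC Block CSI provisioner name); confirm the actual name with "oc get clustercsidriver":

oc patch clustercsidriver vpc.block.csi.ibm.io --type=merge -p '{"spec":{"logLevel":"Trace"}}'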
oc get deployment.apps/ibm-vpc-block-csi-controller -o json | grep "\-\-v"
    "--v=2",
    "--v=2",
    "--v=2",
    "--v=2"
    "--v=2",

Update the clustercsidriver:
spec:
  logLevel: TraceAll
  managementState: Managed
  observedConfig: null
  operatorLogLevel: Normal
  unsupportedConfigOverrides: null

oc get deployment.apps/ibm-vpc-block-csi-controller -o json | grep "\-\-v"
    "--v=8",
    "--v=8",
    "--v=8",
    "--v=8"
    "--v=8",

Passed with 4.10.0-0.nightly-2022-01-16-191814
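For context, the --v values above line up with the standard OpenShift operator mapping from logLevel to klog verbosity (Normal=2, Debug=4, Trace=6, TraceAll=8, as in openshift/library-go's loglevel package). A minimal, self-contained Go sketch of that mapping; the function name logLevelToVerbosity is illustrative, not necessarily the operator's actual symbol:

package main

import "fmt"

// logLevelToVerbosity mirrors the logLevel -> klog -v mapping observed in
// this bug: Normal=2, Debug=4, Trace=6, TraceAll=8; any other value falls
// back to the Normal verbosity.
func logLevelToVerbosity(logLevel string) int {
	switch logLevel {
	case "Normal":
		return 2
	case "Debug":
		return 4
	case "Trace":
		return 6
	case "TraceAll":
		return 8
	default:
		return 2
	}
}

func main() {
	// Reproduces the values seen in this report: Trace -> --v=6 (step 4),
	// TraceAll -> --v=8 (verification above).
	for _, lvl := range []string{"Normal", "Trace", "TraceAll"} {
		fmt.Printf("logLevel=%s -> --v=%d\n", lvl, logLevelToVerbosity(lvl))
	}
}

With the fix, the operator renders this verbosity into every container's args, which is why all five "--v=" values move together from 2 to 8 once logLevel is set to TraceAll.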
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056