Bug 2139099 - `PodSecurityViolation` alert is triggering for openshift-storage pods.
Summary: `PodSecurityViolation` alert is triggering for openshift-storage pods.
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Nitin Goyal
QA Contact: Martin Bukatovic
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-01 13:10 UTC by Rahul Rajendran
Modified: 2023-08-09 17:00 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-01 13:45:54 UTC
Embargoed:



Description Rahul Rajendran 2022-11-01 13:10:21 UTC
Description of problem (please be as detailed as possible and provide log
snippets):


The alert `PodSecurityViolation` is triggering for pods in the openshift-storage namespace.

The alert definition is:

~~~
openshift-kube-apiserver-podsecurity-951cc1a0-6193-44ff-901c-c3e0873f0145.yaml: |
    groups:
    - name: pod-security-violation
      rules:
      - alert: PodSecurityViolation
        annotations:
          description: A workload (pod, deployment, deamonset, ...) was created somewhere
            in the cluster but it did not match the PodSecurity "{{ $labels.policy_level
            }}" profile defined by its namespace either via the cluster-wide configuration
            (which triggers on a "restricted" profile violations) or by the namespace
            local Pod Security labels. Refer to Kubernetes documentation on Pod Security
            Admission to learn more about these violations.
          summary: One or more workloads users created in the cluster don't match their
            Pod Security profile
        expr: |
          sum(increase(pod_security_evaluations_total{decision="deny",mode="audit",resource="pod"}[1d])) by (policy_level) > 0
        labels:
          namespace: openshift-kube-apiserver
          severity: info
~~~
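
Because the expression aggregates only by `policy_level`, the alert itself does not identify which namespace or workload caused the audit-mode denials. The raw metric behind the alert can be queried manually; a rough sketch, assuming cluster-admin access and the default `thanos-querier` route in `openshift-monitoring` (names may differ on your cluster):

~~~
# Run the same expression as the alert against the in-cluster Thanos/Prometheus API.
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  --data-urlencode 'query=sum(increase(pod_security_evaluations_total{decision="deny",mode="audit",resource="pod"}[1d])) by (policy_level)' \
  "https://${HOST}/api/v1/query"
~~~

The same query can also be run from the web console under Observe > Metrics.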


The affected workloads were identified by following the KCS article https://access.redhat.com/solutions/6976583:

~~~ 
openshift-storage ceph-file-controller-detect-version jobs
openshift-storage ceph-object-controller-detect-version jobs
openshift-storage csi-addons-controller-manager deployments
openshift-storage csi-cephfsplugin daemonsets
openshift-storage csi-cephfsplugin-provisioner deployments
openshift-storage csi-rbdplugin daemonsets
openshift-storage csi-rbdplugin-provisioner deployments
openshift-storage noobaa-db-pg-0 pods
openshift-storage noobaa-endpoint deployments
openshift-storage noobaa-operator deployments
openshift-storage ocs-metrics-exporter deployments
openshift-storage ocs-operator deployments
openshift-storage odf-console deployments
openshift-storage odf-operator-controller-manager deployments
openshift-storage  pods
openshift-storage rook-ceph-crashcollector-uslpreprod1-br7h4-storage-6cw54-589df669cc replicasets
openshift-storage rook-ceph-crashcollector-uslpreprod1-br7h4-storage-6cw54 deployments
openshift-storage rook-ceph-crashcollector-uslpreprod1-br7h4-storage-kk7n2-5f6b486fb7 replicasets
openshift-storage rook-ceph-crashcollector-uslpreprod1-br7h4-storage-kk7n2-7689f7cffc replicasets
openshift-storage rook-ceph-crashcollector-uslpreprod1-br7h4-storage-kk7n2 deployments
openshift-storage rook-ceph-crashcollector-uslpreprod1-br7h4-storage-qsbdh-66666b654 replicasets
openshift-storage rook-ceph-crashcollector-uslpreprod1-br7h4-storage-qsbdh-86769545d8 replicasets
openshift-storage rook-ceph-crashcollector-uslpreprod1-br7h4-storage-qsbdh deployments
openshift-storage rook-ceph-detect-version jobs
openshift-storage rook-ceph-mds-ocs-storagecluster-cephfilesystem-a deployments
openshift-storage rook-ceph-mds-ocs-storagecluster-cephfilesystem-b deployments
openshift-storage rook-ceph-mgr-a deployments
openshift-storage rook-ceph-mon-a deployments
openshift-storage rook-ceph-mon-b deployments
openshift-storage rook-ceph-mon-c deployments
openshift-storage rook-ceph-operator deployments
openshift-storage rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a deployments
openshift-storage rook-ceph-tools deployments 
~~~
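
For reference, the audit-mode violations behind these entries can also be pulled directly from the kube-apiserver audit logs; a rough sketch (not the exact KCS procedure), assuming cluster-admin access and the documented audit-log paths:

~~~
# List the PodSecurity audit-violation annotations recorded for the
# openshift-storage namespace, grouped by violation message.
oc adm node-logs --role=master --path=kube-apiserver/audit.log \
  | grep '"pod-security.kubernetes.io/audit-violations"' \
  | grep '"namespace":"openshift-storage"' \
  | grep -o '"pod-security.kubernetes.io/audit-violations":"[^"]*"' \
  | sort | uniq -c | sort -rn
~~~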


The operator versions in the namespace are:

~~~
# oc get csv -n openshift-storage
NAME                              DISPLAY                            VERSION   REPLACES                          PHASE
elasticsearch-operator.5.5.2      OpenShift Elasticsearch Operator   5.5.2     elasticsearch-operator.5.5.1      Succeeded
mcg-operator.v4.11.1              NooBaa Operator                    4.11.1    mcg-operator.v4.11.0              Succeeded
ocs-operator.v4.11.1              OpenShift Container Storage        4.11.1    ocs-operator.v4.11.0              Succeeded
odf-csi-addons-operator.v4.11.1   CSI Addons                         4.11.1    odf-csi-addons-operator.v4.11.0   Succeeded
odf-operator.v4.11.1              OpenShift Data Foundation          4.11.1    odf-operator.v4.11.0              Succeeded 
~~~

Version of all relevant components (if applicable):

v4.11.1
Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?

No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:



The alert indicates that the openshift-storage pods do not comply with the pod security profile.



Expected results:

The openshift-storage pods should comply with the pod security profile.
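
To see which PodSecurity profile the namespace advertises (and therefore which profile its workloads are audited against), the namespace labels can be checked; a minimal sketch:

~~~
# The pod-security.kubernetes.io/{enforce,audit,warn} labels, if set, define the
# profile for the namespace; otherwise the cluster-wide default applies.
oc get namespace openshift-storage --show-labels
~~~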

Additional info:

Comment 2 Mudit Agarwal 2022-11-01 13:45:54 UTC
This is expected with OCP 4.11; the issue has been fixed in OCP 4.12.

