Bug 2088541
| Summary: | Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity "restricted:v1.24"` | ||
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Anik <anbhatta> |
| Component: | OLM | Assignee: | Per da Silva <pegoncal> |
| OLM sub component: | OLM | QA Contact: | Jian Zhang <jiazha> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | ||
| Priority: | high | CC: | jiazha, pegoncal, surbania, tflannag |
| Version: | 4.11 | Keywords: | Triaged |
| Target Milestone: | --- | Flags: | anbhatta:
needinfo-
|
| Target Release: | 4.11.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Known Issue | |
| Doc Text: |
Cause:
PSA `baseline` policy was introduced as default cluster-wide
for all namespaces, with a default warning level of `restricted`.
Consequence:
Warnings like
/apis/batch/v1/namespaces/jian/jobs would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (containers "util", "pull", "extract" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "util", "pull", "extract" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "util", "pull", "extract" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
were being emitted in the openshift-marketplace namespace.
Workaround (if any):
Result:
The fix introduced in the PR suppresses the warnings by reducing the warn level to `baseline`.
|
Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2022-08-10 11:13:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
|
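For context, pod security admission levels are configured per namespace through labels. An illustrative manifest fragment showing a `baseline` warn level, as described in the Doc Text above (the exact labels the operator applies may differ):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-marketplace
  labels:
    # Hypothetical sketch: lower only the warn level to baseline so the
    # "would violate PodSecurity" warnings stop being emitted.
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/warn-version: v1.24
```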
Description
Anik
2022-05-19 16:30:53 UTC
Hi Anik,
The pods backing the jobs don't set `securityContext.runAsNonRoot` either; do I need to create a separate bug to track it? Or am I missing something? Thanks!
mac:~ jianzhang$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.11.0-0.nightly-2022-05-20-213928 True False 9h Cluster version is 4.11.0-0.nightly-2022-05-20-213928
mac:~ jianzhang$ oc get job -n openshift-marketplace
NAME COMPLETIONS DURATION AGE
584f82d478a964a7ea525ac52979004bb406d4eff8427acd7cee176180c49c3 1/1 68s 9h
9fe57ab70b517dfc544ee68749bb66b0da14ad6ca7dd32654f8c850e154193f 1/1 69s 9h
mac:~ jianzhang$ oc get pods -n openshift-marketplace
NAME READY STATUS RESTARTS AGE
584f82d478a964a7ea525ac52979004bb406d4eff8427acd7cee176180cbnsn 0/1 Completed 0 9h
9fe57ab70b517dfc544ee68749bb66b0da14ad6ca7dd32654f8c850e156xgrh 0/1 Completed 0 9h
certified-operators-wvhgr 1/1 Running 0 10h
community-operators-lqv58 1/1 Running 0 10h
marketplace-operator-67dbd44ff-89r9r 1/1 Running 0 10h
qe-app-registry-lvcxc 1/1 Running 0 38m
redhat-marketplace-hncv4 1/1 Running 0 10h
redhat-operators-hddrs 1/1 Running 0 75m
mac:~ jianzhang$ oc get pods -n openshift-marketplace 584f82d478a964a7ea525ac52979004bb406d4eff8427acd7cee176180cbnsn -o yaml|grep "securityContext" -A5
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsUser: 1000220000
--
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsUser: 1000220000
--
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsUser: 1000220000
--
securityContext:
fsGroup: 1000220000
seLinuxOptions:
level: s0:c15,c5
seccompProfile:
type: RuntimeDefault
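For comparison, the violation message quoted in the Doc Text lists exactly the fields the `restricted` profile checks. A securityContext that would satisfy it looks roughly like this (a sketch assembled from the warning text, not the exact spec the fix ships):

```yaml
spec:
  securityContext:                  # pod level
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: extract                   # the same applies to "util" and "pull"
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
```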
Jian, that is a really good call out, thank you. It's probably reasonable to look into that too as part of this report. I've just pushed up a downsync PR that updates the security context for the catalog source pods and the bundle unpack job. I hope this will be sufficient. If we still need to be able to configure the catalog source pod and container security contexts, we can look into it.

1, Create a cluster with the fixed PR via the cluster-bot.
mac:~ jianzhang$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.11.0-0.ci.test-2022-06-14-023803-ci-ln-l86bm2k-latest True False 5m47s Cluster version is 4.11.0-0.ci.test-2022-06-14-023803-ci-ln-l86bm2k-latest
2, Install some operators so that the unpack bundle job is generated.
mac:~ jianzhang$ oc get sub -n default
NAME PACKAGE SOURCE CHANNEL
etcd etcd community-operators singlenamespace-alpha
mac:~ jianzhang$ oc get ip -n default
NAME CSV APPROVAL APPROVED
install-qpvgg etcdoperator.v0.9.4 Automatic true
mac:~ jianzhang$ oc get csv -n default
No resources found in default namespace.
mac:~ jianzhang$ oc get ip -n default install-qpvgg -o=jsonpath={.status.bundleLookups[0].conditions}
[{"message":"bundle contents have not yet been persisted to installplan status","reason":"BundleNotUnpacked","status":"True","type":"BundleLookupNotPersisted"},{"lastTransitionTime":"2022-06-14T03:27:10Z","message":"unpack job not completed","reason":"JobIncomplete","status":"True","type":"BundleLookupPending"},{"lastTransitionTime":"2022-06-14T03:37:53Z","message":"Job was active longer than specified deadline","reason":"DeadlineExceeded","status":"True","type":"BundleLookupFailed"}]
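The `bundleLookups` conditions above can be checked mechanically. A small sketch (condition type and field names taken from the jsonpath output above; `lastTransitionTime` fields trimmed):

```python
import json

# Conditions copied from the `oc get ip ... -o=jsonpath` output above.
conditions_json = """[
 {"message": "bundle contents have not yet been persisted to installplan status",
  "reason": "BundleNotUnpacked", "status": "True", "type": "BundleLookupNotPersisted"},
 {"message": "unpack job not completed",
  "reason": "JobIncomplete", "status": "True", "type": "BundleLookupPending"},
 {"message": "Job was active longer than specified deadline",
  "reason": "DeadlineExceeded", "status": "True", "type": "BundleLookupFailed"}
]"""

def unpack_failure(conditions):
    """Return the reason of a true BundleLookupFailed condition, or None."""
    for cond in conditions:
        if cond.get("type") == "BundleLookupFailed" and cond.get("status") == "True":
            return cond.get("reason")
    return None

print(unpack_failure(json.loads(conditions_json)))  # DeadlineExceeded
```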
But the bundle unpack failed, as follows:
mac:~ jianzhang$ oc get job -n openshift-marketplace
NAME COMPLETIONS DURATION AGE
40faf9b09dfee4dc1387f3870c6826a5164498299c605195a02d22c8af6a1c6 0/1 24m 24m
e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332beb645 0/1 21m 21m
mac:~ jianzhang$ oc get job e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332beb645 -o yaml -n openshift-marketplace
apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: "2022-06-14T03:27:10Z"
generation: 1
labels:
controller-uid: f7c70846-dba0-45f0-9ef6-896ebd46de0d
job-name: e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332beb645
name: e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332beb645
namespace: openshift-marketplace
ownerReferences:
- apiVersion: v1
blockOwnerDeletion: false
controller: false
kind: ConfigMap
name: e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332beb645
uid: ed467b09-6cb2-4db9-a20b-eae440dbfba4
resourceVersion: "41362"
uid: f7c70846-dba0-45f0-9ef6-896ebd46de0d
spec:
activeDeadlineSeconds: 600
backoffLimit: 3
completionMode: NonIndexed
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: f7c70846-dba0-45f0-9ef6-896ebd46de0d
suspend: false
template:
metadata:
creationTimestamp: null
labels:
controller-uid: f7c70846-dba0-45f0-9ef6-896ebd46de0d
job-name: e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332beb645
name: e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332beb645
spec:
containers:
- command:
- opm
- alpha
- bundle
- extract
- -m
- /bundle/
- -n
- openshift-marketplace
- -c
- e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332beb645
- -z
env:
- name: CONTAINER_IMAGE
value: quay.io/openshift-community-operators/etcd@sha256:94346b5ee6149d1411b2f37f815526db3b86e62a03879337f6194428d52c336e
image: registry.build01.ci.openshift.org/ci-ln-l86bm2k/stable@sha256:d94e790504c0347dcdc461b3b66175d27441e8d91c9cbf5c2f0b6e33260cde08
imagePullPolicy: IfNotPresent
name: extract
resources:
requests:
cpu: 10m
memory: 50Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bundle
name: bundle
dnsPolicy: ClusterFirst
initContainers:
- command:
- /bin/cp
- -Rv
- /bin/cpb
- /util/cpb
image: registry.build01.ci.openshift.org/ci-ln-l86bm2k/stable@sha256:c5601714fef9ebece3d39300d46b403fb537e01ba89614e9838ed18f3d0f0375
imagePullPolicy: IfNotPresent
name: util
resources:
requests:
cpu: 10m
memory: 50Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /util
name: util
- command:
- /util/cpb
- /bundle
image: quay.io/openshift-community-operators/etcd@sha256:94346b5ee6149d1411b2f37f815526db3b86e62a03879337f6194428d52c336e
imagePullPolicy: Always
name: pull
resources:
requests:
cpu: 10m
memory: 50Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bundle
name: bundle
- mountPath: /util
name: util
restartPolicy: Never
schedulerName: default-scheduler
securityContext:
runAsNonRoot: true
runAsUser: 1001
seccompProfile:
type: RuntimeDefault
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: bundle
- emptyDir: {}
name: util
status:
conditions:
- lastProbeTime: "2022-06-14T03:37:50Z"
lastTransitionTime: "2022-06-14T03:37:50Z"
message: Job was active longer than specified deadline
reason: DeadlineExceeded
status: "True"
type: Failed
ready: 0
startTime: "2022-06-14T03:27:10Z"
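The DeadlineExceeded failure is consistent with the manifest above: `spec.activeDeadlineSeconds` is 600, and the Failed condition appears 640 seconds after `status.startTime`. A quick arithmetic check:

```python
from datetime import datetime, timedelta, timezone

# Values taken from the Job manifest above.
active_deadline = timedelta(seconds=600)  # spec.activeDeadlineSeconds
start = datetime(2022, 6, 14, 3, 27, 10, tzinfo=timezone.utc)   # status.startTime
failed = datetime(2022, 6, 14, 3, 37, 50, tzinfo=timezone.utc)  # Failed condition lastTransitionTime

elapsed = failed - start
print(elapsed.total_seconds())  # 640.0 -- past the 600s deadline
assert elapsed >= active_deadline
```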
Set the status back to ASSIGNED.
1, Install an OCP that contains the fixed PR.

mac:~ jianzhang$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-22-061133   True        False         23m     Cluster version is 4.11.0-0.nightly-2022-06-22-061133

mac:~ jianzhang$ oc -n openshift-operator-lifecycle-manager exec deploy/catalog-operator -- olm --version
OLM version: 0.19.0
git commit: 8ee785c8646e0f8395ada5e10ebb04ac161331a0

2, Subscribe to some operators so that Job pods are generated.

mac:~ jianzhang$ oc get sub -A
NAMESPACE                    NAME                     PACKAGE                  SOURCE            CHANNEL
openshift-logging            cluster-logging          cluster-logging          qe-app-registry   stable
openshift-operators-redhat   elasticsearch-operator   elasticsearch-operator   qe-app-registry   stable

mac:~ jianzhang$ oc get pods -n openshift-marketplace
NAME                                                              READY   STATUS      RESTARTS      AGE
4758eeea14451f2ff6e90b9e3cd5a12bfadc05987a97b004e6717bcca645rv8   0/1     Completed   0             17m
6c6159e26bb5008db8dac0c68f536da61d681221edec3462c9ba565467pfrb9   0/1     Completed   0             17m
certified-operators-6dxcm                                         1/1     Running     0             41m
community-operators-6hsq7                                         1/1     Running     0             41m
marketplace-operator-6cc4dc7496-9sf6k                             1/1     Running     5 (33m ago)   45m
qe-app-registry-45xqq                                             1/1     Running     0             17m
redhat-marketplace-gz8pl                                          1/1     Running     0             41m
redhat-operators-nzkc6                                            1/1     Running     0             41m

3, Run the below security checking script:

mac:~ jianzhang$ cat security_test.sh
# All workloads creation is audited on masters with below annotation. Below cmd checks all workloads that would violate PodSecurity.
cat > cmd.txt << EOF
grep -hir 'would violate PodSecurity' /var/log/kube-apiserver/ | jq -r '.requestURI + " " + .annotations."pod-security.kubernetes.io/audit-violations"'
EOF
CMD="`cat cmd.txt`"
oc new-project jian-test
# With admin, run above cmd on all masters:
MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
for i in $MASTERS
do
  oc debug -n jian-test no/$i -- chroot /host bash -c "$CMD || true"
done > all-violations.txt
cat all-violations.txt | grep -E 'namespaces/openshift-marketplace' | sort | uniq > all-violations_system_components.txt
cat all-violations_system_components.txt

mac:~ jianzhang$ ./security_test.sh
...
/apis/batch/v1/namespaces/openshift-marketplace/jobs would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "util", "pull", "extract" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "util", "pull", "extract" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "util", "pull", "extract" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

Seems like some related containers still need to be updated.

1, Build a cluster with the fixed PR: https://github.com/openshift/operator-framework-olm/pull/323

mac:~ jianzhang$ oc get clusterversion
NAME      VERSION                                                    AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.ci.test-2022-06-22-110518-ci-ln-3mws142-latest    True        False         10m     Cluster version is 4.11.0-0.ci.test-2022-06-22-110518-ci-ln-3mws142-latest

2, Subscribe an operator so that the Job pods are generated.
mac:~ jianzhang$ oc get sub -n default
NAME   PACKAGE   SOURCE                CHANNEL
etcd   etcd      community-operators   singlenamespace-alpha

mac:~ jianzhang$ oc get ip -n default
NAME            CSV                   APPROVAL    APPROVED
install-ghmm4   etcdoperator.v0.9.4   Automatic   true

mac:~ jianzhang$ oc get csv -n default
NAME                  DISPLAY   VERSION   REPLACES              PHASE
etcdoperator.v0.9.4   etcd      0.9.4     etcdoperator.v0.9.2   Succeeded

mac:~ jianzhang$ oc get pods -n openshift-marketplace
NAME                                                              READY   STATUS      RESTARTS      AGE
certified-operators-ptjf6                                         1/1     Running     0             95s
community-operators-c4s6g                                         1/1     Running     0             95s
e8c9651078ae45ddb2807e3a07727d459b82d7def5572a7b7ccaae332bcgp9b   0/1     Completed   0             40s
marketplace-operator-7577dd46b-2tgd6                              1/1     Running     1 (19m ago)   28m
redhat-marketplace-pbrr6                                          1/1     Running     0             95s

3, Run the security checking script.

mac:~ jianzhang$ cat security_test.sh
# All workloads creation is audited on masters with below annotation. Below cmd checks all workloads that would violate PodSecurity.
cat > cmd.txt << EOF
grep -hir 'would violate PodSecurity' /var/log/kube-apiserver/ | jq -r '.requestURI + " " + .annotations."pod-security.kubernetes.io/audit-violations"'
EOF
CMD="`cat cmd.txt`"
oc new-project jian-test
# With admin, run above cmd on all masters:
MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
for i in $MASTERS
do
  oc debug -n jian-test no/$i -- chroot /host bash -c "$CMD || true"
done > all-violations.txt
cat all-violations.txt | grep -E 'namespaces/(openshift-marketplace|openshift-operator-lifecycle-manager)' | sort | uniq > all-violations_system_components.txt
cat all-violations_system_components.txt

mac:~ jianzhang$ ./security_test.sh
Now using project "jian-test" on server "https://api.ci-ln-3mws142-72292.origin-ci-int-gce.dev.rhcloud.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby.
Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname

Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ci-ln-3mws142-72292-m6m2d-master-0-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ci-ln-3mws142-72292-m6m2d-master-1-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ci-ln-3mws142-72292-m6m2d-master-2-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...

No issue found for the OLM-related containers. LGTM, verify it.

Because of the side effects of the change (blocking legacy sqlite registries from deploying correctly), Ben Parees, Joe Lanford and I decided to revert the changes for 4.11. We will label the operator-marketplace namespace to suppress PSA warnings and add a release note to say that creating catalog sources in other namespaces will create a warning. We will target fixing this properly in 4.12.

Hi Per,

>> Because of the side effects of the change (blocking legacy sqlite registries from deploying correctly), Ben Parees, Joe Lanford and I decided to revert the changes for 4.11.

Sorry, maybe I missed something: are you going to remove the fix? If yes, could you paste the revert PR here? Thanks!

PS: I tested the latest available payload; it seems this bug has been fixed.
mac:~ jianzhang$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-25-081133   True        False         6h56m   Cluster version is 4.11.0-0.nightly-2022-06-25-081133

mac:~ jianzhang$ ./security_test.sh
Now using project "jian-test" on server "https://api.qe-daily-0627.qe.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby.

Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname

Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-143-50ap-southeast-1computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-173-128ap-southeast-1computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-217-199ap-southeast-1computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...

I change the status back to ASSIGNED until Per's confirmation.
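The grep + jq pipeline in `security_test.sh` can be mirrored in a self-contained sketch. The sample audit entry below is hypothetical but has the same shape as the lines the script matches under /var/log/kube-apiserver/:

```python
import json

# Hypothetical audit-log entry of the shape the script greps for.
sample = json.dumps({
    "requestURI": "/apis/batch/v1/namespaces/openshift-marketplace/jobs",
    "annotations": {
        "pod-security.kubernetes.io/audit-violations":
            'would violate PodSecurity "restricted:latest": runAsNonRoot != true (...)',
    },
})

def violations(audit_lines, namespaces):
    """Keep entries carrying a PSA violation annotation whose requestURI
    targets one of the given namespaces (mirrors grep | jq | grep -E | sort | uniq)."""
    hits = []
    for line in audit_lines:
        entry = json.loads(line)
        ann = entry.get("annotations", {}).get(
            "pod-security.kubernetes.io/audit-violations", "")
        uri = entry.get("requestURI", "")
        if "would violate PodSecurity" in ann and any(
                "/namespaces/%s/" % ns in uri for ns in namespaces):
            hits.append(uri + " " + ann)
    return sorted(set(hits))

print(violations([sample], ["openshift-marketplace"]))
```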
The revert PR: https://github.com/openshift/operator-framework-olm/pull/325

After the revert:

1, Run the above security script; no warning found for the openshift-marketplace project. Looks good. But, according to the email `[aos-devel] <IMPORTANT> Enabling pod security admission with restricted profile by default - next steps for you and your workloads` and the sample https://github.com/openshift/cluster-kube-apiserver-operator/pull/1234/files, it's better to add the `pod-security.kubernetes.io/enforce: privileged` label. @Per any thoughts?

>> add a release note to say that creating catalog sources in other namespaces will create a warning.

Yes, detailed test as follows.

2, Test the pods created in another project.

2-1, Create a CatalogSource in a project called 'jian'.

mac:~ jianzhang$ cat cs-qe.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: qe-app-registry
  namespace: jian
spec:
  displayName: Production Operators
  image: quay.io/openshift-qe-optional-operators/ocp4-index:latest
  publisher: OpenShift QE
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 15m

mac:~ jianzhang$ oc create -f cs-qe.yaml
catalogsource.operators.coreos.com/qe-app-registry created

2-2, Subscribe an operator from it.
mac:~ jianzhang$ cat sub-learn.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: learn
  namespace: jian
spec:
  channel: beta
  installPlanApproval: Automatic
  name: learn
  source: qe-app-registry
  sourceNamespace: jian
  startingCSV: learn-operator.v0.0.3

mac:~ jianzhang$ oc create -f sub-learn.yaml
subscription.operators.coreos.com/learn created

mac:~ jianzhang$ oc get pods -n jian
NAME                                                              READY   STATUS      RESTARTS   AGE
552b4660850a7fe1e1f142091eb5e4305f18af151727c56f70aa5dffc1dg8cg   0/1     Completed   0          71s
learn-operator-666b687bfb-7qppm                                   1/1     Running     0          50s
qe-app-registry-hbzxg                                             1/1     Running     0          4m23s

2-3, Run the above security script; it reports the below violation:

/apis/batch/v1/namespaces/jian/jobs would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (containers "util", "pull", "extract" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "util", "pull", "extract" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "util", "pull", "extract" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

As discussed with Per on Slack, we will add the `pod-security.kubernetes.io/enforce` label for 4.next. I created a bug https://bugzilla.redhat.com/show_bug.cgi?id=2101367 to track the legacy issues. Verified this one.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069