If we define a custom SCC like this:

allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: []
apiVersion: security.openshift.io/v1
defaultAddCapabilities: []
fsGroup:
  type: MustRunAs
groups:
- system:authenticated
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: MCP Vault Unsealer
    meta.helm.sh/release-name: vault
    meta.helm.sh/release-namespace: mcp-vault
  creationTimestamp: "2022-07-25T11:09:53Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-unsealer
    app.kubernetes.io/version: 3.7.0
    helm.sh/chart: vault-unsealer-3.7.1
  name: vault-unsealer
  resourceVersion: "1793493"
  uid: 6b6d88be-03c0-476d-8602-2e94e4ecfcb5
priority: null
readOnlyRootFilesystem: true
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:mcp-vault:vault-unsealer
volumes:
- configMap
- hostPath
- secret

we can see that the pod originally has this SCC:

$ oc get pod machine-config-operator-7f57686f5c-g895k -o yaml | grep scc
    openshift.io/scc: hostmount-anyuid

After applying the new SCC (even if we set a higher priority), the pod shows the following after a restart:

$ oc get pod machine-config-operator-7f57686f5c-jg2jv -o yaml | grep scc
    openshift.io/scc: vault-unsealer
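A possible explanation for this behavior: because the SCC above lists the system:authenticated group under `groups`, every authenticated workload in the cluster, including the machine-config-operator pod, is allowed to match it during admission, so it can be selected over hostmount-anyuid. A hedged sketch of a more narrowly scoped version, assuming the SCC is only meant for the vault-unsealer service account (this is an illustration, not the fix shipped in the advisory):

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: vault-unsealer
# Leave groups empty so no broad group (e.g. system:authenticated)
# can match this SCC; grant it only to the intended service account.
groups: []
users:
- system:serviceaccount:mcp-vault:vault-unsealer
# Leaving priority unset (null) keeps this SCC from outranking the
# default SCCs for pods that merely happen to satisfy its constraints.
priority: null
allowHostDirVolumePlugin: true
readOnlyRootFilesystem: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
volumes:
- configMap
- hostPath
- secret
```

With the group entry removed, admission for unrelated pods such as machine-config-operator should fall back to the SCCs granted to their own service accounts.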
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.13.0 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:1326