Created attachment 1766059 [details]
collector-scc.yaml

Description of problem:

The openshift-apiserver pods need write access to their root filesystem. If you create an SCC with readOnlyRootFilesystem: true and the openshift-apiserver pods get admitted under it, startup fails.

Version-Release number of selected component (if applicable):

OpenShift 4.7.3

How reproducible:

$ oc get pods -o "custom-columns=NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc,SERVICEACCOUNT:.spec.serviceAccountName"
NAME                        SCC             SERVICEACCOUNT
apiserver-598d4c595-59fdh   node-exporter   openshift-apiserver-sa
apiserver-598d4c595-rgvp4   node-exporter   openshift-apiserver-sa
apiserver-598d4c595-vkf8b   node-exporter   openshift-apiserver-sa

$ oc create -f - <<EOF
> apiVersion: security.openshift.io/v1
> kind: SecurityContextConstraints
> metadata:
>   name: collector
>   labels:
>     app.kubernetes.io/instance: stackrox-secured-cluster-services
>     app.kubernetes.io/managed-by: Helm
>     app.kubernetes.io/name: stackrox
>     app.kubernetes.io/part-of: stackrox-secured-cluster-services
>     app.kubernetes.io/version: 3.0.56.1
>     helm.sh/chart: sensor-56.1.0
>     auto-upgrade.stackrox.io/component: "sensor"
>   annotations:
>     email: support
>     meta.helm.sh/release-name: stackrox-secured-cluster-services
>     meta.helm.sh/release-namespace: stackrox
>     owner: stackrox
>     kubernetes.io/description: This SCC is based on privileged, hostaccess, and hostmount-anyuid
> users:
> - system:serviceaccount:stackrox:collector
> allowHostDirVolumePlugin: true
> allowPrivilegedContainer: true
> fsGroup:
>   type: RunAsAny
> groups: []
> priority: 0
> readOnlyRootFilesystem: true
> runAsUser:
>   type: RunAsAny
> seLinuxContext:
>   type: RunAsAny
> seccompProfiles:
> - '*'
> supplementalGroups:
>   type: RunAsAny
> allowHostIPC: false
> allowHostNetwork: false
> allowHostPID: false
> allowHostPorts: false
> allowPrivilegeEscalation: true
> allowedCapabilities: []
> defaultAddCapabilities: []
> requiredDropCapabilities: []
> volumes:
> - configMap
> - downwardAPI
> - emptyDir
> - hostPath
> - secret
> EOF
securitycontextconstraints.security.openshift.io/collector created

$ oc delete pod apiserver-598d4c595-59fdh
pod "apiserver-598d4c595-59fdh" deleted

$ oc get pods
NAME                        READY   STATUS             RESTARTS   AGE
apiserver-598d4c595-rgvp4   2/2     Running            0          5h27m
apiserver-598d4c595-vkf8b   2/2     Running            0          5h33m
apiserver-598d4c595-z2rvf   0/2     CrashLoopBackOff   4          119s

$ oc get pods -o "custom-columns=NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc,SERVICEACCOUNT:.spec.serviceAccountName"
NAME                        SCC             SERVICEACCOUNT
apiserver-598d4c595-rgvp4   node-exporter   openshift-apiserver-sa
apiserver-598d4c595-vkf8b   node-exporter   openshift-apiserver-sa
apiserver-598d4c595-z2rvf   collector       openshift-apiserver-sa

$ oc logs apiserver-598d4c595-z2rvf -c openshift-apiserver
Copying system trust bundle
cp: cannot remove '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem': Read-only file system

Steps to Reproduce:
1. Create the attached SCC (collector-scc.yaml).
2. Delete one openshift-apiserver pod and wait for the replacement.

Actual results:

The new pod fails to start because it is admitted under the wrong SCC:

Copying system trust bundle
cp: cannot remove '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem': Read-only file system

Expected results:

A proper SCC is chosen for the openshift-apiserver pods.

Additional info:

* We had exactly the same problem with the OpenShift authentication deployment: Bug 1824800 - openshift authentication operator is in a CrashLoopBackOff
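* Untested recovery sketch (assumptions: cluster-admin access, and that the affected pods live in the openshift-apiserver namespace): the same custom-columns query can be run cluster-wide to see which other pods were admitted under the new SCC; removing the SCC and the crashlooping pod should let the replacement fall back to one of the default SCCs.

$ oc get pods -A -o "custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc"
$ oc delete scc collector
$ oc delete pod apiserver-598d4c595-z2rvf -n openshift-apiserver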
*** This bug has been marked as a duplicate of bug 1942725 ***