Bug 1996620

Summary: [SCC] openshift-oauth-apiserver degraded when an SCC with high priority is created
Product: OpenShift Container Platform
Reporter: liyao
Component: oauth-apiserver
Assignee: Sergiusz Urbaniak <surbania>
Status: CLOSED ERRATA
QA Contact: liyao
Severity: high
Priority: high
Version: 4.9
CC: aos-bugs, mfojtik, surbania
Target Release: 4.9.0
Last Closed: 2021-10-18 17:47:55 UTC
Type: Bug

Description liyao 2021-08-23 10:30:04 UTC
Description of problem:
After installing an SCC with priority 0, any recreated openshift-oauth-apiserver pod ends up in Init:CreateContainerConfigError status.

Version-Release number of selected component (if applicable):
4.9

How reproducible:
Always

Steps to Reproduce:
1. Create the k10-k10 SCC
2. Delete one openshift-oauth-apiserver pod
3. Check the recreated pod (a command sketch follows these steps)
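
A minimal command sketch of these steps (assuming the SCC YAML from Additional info below is saved as k10-k10.yaml; pod names are placeholders, not the exact names from this cluster):
~~~
# 1. Create the k10-k10 SCC
oc apply -f k10-k10.yaml

# 2. Delete one openshift-oauth-apiserver pod so the deployment recreates it
oc get pods -n openshift-oauth-apiserver
oc delete pod <apiserver-pod> -n openshift-oauth-apiserver

# 3. Check the recreated pod's status and which SCC admitted it
oc get pods -n openshift-oauth-apiserver
oc get pod <new-apiserver-pod> -n openshift-oauth-apiserver -o yaml | grep 'openshift.io/scc'
~~~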

Actual results:
The recreated openshift-oauth-apiserver pod is not Running but stuck in Init:CreateContainerConfigError:
$ oc get pods -n openshift-oauth-apiserver
NAME                         READY   STATUS                            RESTARTS   AGE
apiserver-65bb8f7684-tfpzg   1/1     Running                           0          29m
apiserver-65bb8f7684-tsxdb   0/1     Init:CreateContainerConfigError   0          13m
apiserver-65bb8f7684-tvwtj   1/1     Running                           0          15m

The recreated pod was admitted under the k10-k10 SCC:
$ oc get pods apiserver-65bb8f7684-tsxdb -n openshift-oauth-apiserver -o yaml | grep scc
    openshift.io/scc: k10-k10


Expected results:
The recreated openshift-oauth-apiserver pod should reach the Running state in this case.

Additional info:
The main content of the SCC:
~~~
# k10-k10.yaml

allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: []
apiVersion: security.openshift.io/v1
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups: []
kind: SecurityContextConstraints
metadata:
[...]
[...]
priority: 0
readOnlyRootFilesystem: false
requiredDropCapabilities:
- CHOWN
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAsNonRoot
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:k10:k10-k10
- system:serviceaccount:k10:k10-k10
volumes:
- '*'
~~~
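
Once testing is done, the custom SCC can be inspected and removed again (a quick sketch; deleting it simply takes it out of SCC admission):
~~~
# Inspect the custom SCC's priority and strategies
oc get scc k10-k10 -o yaml | grep -E 'priority|type:'

# Remove the custom SCC
oc delete scc k10-k10
~~~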

Comment 3 liyao 2021-08-31 05:42:37 UTC
Tested in 4.9.0-0.nightly-2021-08-29-010334
1. Create the above k10-k10 SCC
2. Delete one pod in openshift-oauth-apiserver, then check the newly created pod; it is Running, as expected
3. Check the SCC used by the new pod; it now uses a system SCC instead of the custom one:
$ oc get pods apiserver-579c684cb9-wtqcl -n openshift-oauth-apiserver -o yaml | grep scc
    openshift.io/scc: node-exporter

Comment 6 errata-xmlrpc 2021-10-18 17:47:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759