Bug 1996620 - [SCC] openshift-oauth-apiserver degraded when a SCC with high priority is created
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oauth-apiserver
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.9.0
Assignee: Sergiusz Urbaniak
QA Contact: liyao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-23 10:30 UTC by liyao
Modified: 2021-10-18 17:48 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-18 17:47:55 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-authentication-operator pull 474 0 None None None 2021-08-23 15:00:36 UTC
Red Hat Product Errata RHSA-2021:3759 0 None None None 2021-10-18 17:48:09 UTC

Description liyao 2021-08-23 10:30:04 UTC
Description of problem:
After installing an SCC with priority 0, the openshift-oauth-apiserver pod goes into Init:CreateContainerConfigError status when it is recreated.

Version-Release number of selected component (if applicable):
4.9

How reproducible:
Always

Steps to Reproduce:
1. Create the k10-k10 SCC
2. Delete one openshift-oauth-apiserver pod
3. Check the recreated pod (a scripted sketch of these steps follows below)
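
The reproduction can be scripted; a minimal sketch, assuming the SCC definition from the "Additional info" section below is saved as k10-k10.yaml:
~~~
# 1. Create the custom SCC (full definition under "Additional info").
oc apply -f k10-k10.yaml

# 2. Pick one apiserver pod and delete it; its controller recreates it.
POD=$(oc get pods -n openshift-oauth-apiserver -o name | head -n 1)
oc delete "$POD" -n openshift-oauth-apiserver

# 3. Watch the recreated pod's status.
oc get pods -n openshift-oauth-apiserver -w
~~~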

Actual results:
The recreated openshift-oauth-apiserver pod is not Running but stuck in Init:CreateContainerConfigError:
$ oc get pods -n openshift-oauth-apiserver
NAME                         READY   STATUS                            RESTARTS   AGE
apiserver-65bb8f7684-tfpzg   1/1     Running                           0          29m
apiserver-65bb8f7684-tsxdb   0/1     Init:CreateContainerConfigError   0          13m
apiserver-65bb8f7684-tvwtj   1/1     Running                           0          15m

The recreated pod is admitted under the k10-k10 SCC:
$ oc get pods apiserver-65bb8f7684-tsxdb -n openshift-oauth-apiserver -o yaml | grep scc
    openshift.io/scc: k10-k10


Expected results:
The recreated openshift-oauth-apiserver pod should reach Running status in this case.

Additional info:
The main content of the SCC (a triage sketch follows the YAML):
~~~
# k10-k10.yaml

allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: []
apiVersion: security.openshift.io/v1
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups: []
kind: SecurityContextConstraints
metadata:
[...]
[...]
priority: 0
readOnlyRootFilesystem: false
requiredDropCapabilities:
- CHOWN
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAsNonRoot
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:k10:k10-k10
- system:serviceaccount:k10:k10-k10
volumes:
- '*'
~~~
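
For triage, the kubelet's reason for the config error can be read from the pod events. Assuming the failure comes from the runAsUser: MustRunAsNonRoot constraint in k10-k10 being applied to an image that runs as root, the Events section should show a corresponding message; a minimal sketch, using the pod name from the output above:
~~~
# Show why container creation fails; the reason appears under Events.
oc describe pod apiserver-65bb8f7684-tsxdb -n openshift-oauth-apiserver

# Confirm the effective priority of the custom SCC.
oc get scc k10-k10 -o jsonpath='{.priority}{"\n"}'
~~~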

Comment 3 liyao 2021-08-31 05:42:37 UTC
Tested in 4.9.0-0.nightly-2021-08-29-010334:
1. Created the k10-k10 SCC above.
2. Deleted one pod in openshift-oauth-apiserver, then checked the newly created pod; it reaches Running, which is expected.
3. Checked the SCC used by the new pod; it uses a system SCC, not the custom one (a bulk-check one-liner follows the output below):
$ oc get pods apiserver-579c684cb9-wtqcl -n openshift-oauth-apiserver -o yaml | grep scc
    openshift.io/scc: node-exporter
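
For convenience, the SCC annotation can be listed for every pod in the namespace at once; a one-liner sketch using JSONPath (dots inside the annotation key escaped with backslashes):
~~~
oc get pods -n openshift-oauth-apiserver -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'
~~~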

Comment 6 errata-xmlrpc 2021-10-18 17:47:55 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

