Bug 2079034 - [4.10] OpenShift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
Summary: [4.10] OpenShift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.9
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.10.z
Assignee: Chad Scribner
QA Contact: Shudi Li
URL:
Whiteboard:
Depends On: 2007246
Blocks:
 
Reported: 2022-04-26 18:14 UTC by Chad Scribner
Modified: 2022-08-04 22:35 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The default IngressController Deployment creates a container named "router" without requesting sufficient permissions in the container's `securityContext`.
Consequence: Normally this does not cause an issue, but on clusters with a Security Context Constraint (SCC) that is similar enough to the hostnetwork SCC, the router pods could fail to start.
Fix: Set `allowPrivilegeEscalation: true` in the `router` container's `securityContext` so that it matches the default hostnetwork SCC (a sketch of the resulting stanza appears after the Links section below).
Result: The router pods are admitted to the correct SCC and are created without error.
Clone Of: 2007246
Environment:
Last Closed: 2022-08-01 11:34:48 UTC
Target Upstream Version:
Embargoed:




Links
GitHub openshift/cluster-ingress-operator pull 748 (open): [release-4.10] Bug 2079034: Add allowPrivilegeEscalation to the router container (last updated 2022-04-27 16:39:19 UTC)
Red Hat Product Errata RHSA-2022:5730 (last updated 2022-08-01 11:36:13 UTC)
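
The fix described in the Doc Text boils down to the stanza below on the router container. This is a minimal sketch of the relevant Deployment fragment (container name and field taken from the Doc Text; the surrounding Deployment structure is assumed for context), not the exact diff from the linked PR:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: router-default
  namespace: openshift-ingress
spec:
  template:
    spec:
      containers:
      - name: router
        securityContext:
          # Matches the default hostnetwork SCC so the pod is admitted without error.
          allowPrivilegeEscalation: true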

Comment 3 Shudi Li 2022-06-06 04:12:04 UTC
Tested it with 4.10.0-0.ci.test-2022-06-06-025347-ci-ln-gh5jzz2-latest and passed.
1. securityContext with allowPrivilegeEscalation: true is added to deployment/router-default (an alternative jsonpath check follows the output below)
a.
% oc -n openshift-ingress get deployment.apps/router-default -o yaml | grep -i -A1 securityContext
        securityContext:
          allowPrivilegeEscalation: true
--
      securityContext: {}
      serviceAccount: router
%
b.
% oc -n openshift-ingress get pod router-default-644887b75b-bm7wb -o yaml | grep -i -A1 securityContext
    securityContext:
      allowPrivilegeEscalation: true
--
  securityContext:
    fsGroup: 1000580000
% 
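
An alternative check that avoids grep, using a jsonpath filter keyed on the container name (the name "router" is taken from the Doc Text); per the output above, this should print "true":

% oc -n openshift-ingress get deployment/router-default \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="router")].securityContext.allowPrivilegeEscalation}'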

2. Create the custom SCC, delete a router pod, and verify that a new router pod is created successfully (a supplementary check of the admitting SCC follows step 2d below)
a. Create the SecurityContextConstraints
% oc create -f bug2007246_scc
securitycontextconstraints.security.openshift.io/custom-restricted created
% cat bug2007246_scc
{
    "allowHostDirVolumePlugin": false,
    "allowHostIPC": false,
    "allowHostNetwork": false,
    "allowHostPID": false,
    "allowHostPorts": false,
    "allowPrivilegeEscalation": false,
    "allowPrivilegedContainer": false,
    "allowedCapabilities": null,
    "apiVersion": "security.openshift.io/v1",
    "defaultAddCapabilities": null,
    "fsGroup": {
        "type": "MustRunAs"
    },
    "groups": [
        "system:authenticated"
    ],
    "kind": "SecurityContextConstraints",
    "metadata": {
        "annotations": {
            "include.release.openshift.io/ibm-cloud-managed": "true",
            "include.release.openshift.io/self-managed-high-availability": "true",
            "include.release.openshift.io/single-node-developer": "true",
            "kubernetes.io/description": "restricted denies access to all host features and requires pods to be run with a UID, and SELinux context that are allocated to the namespace.  This is the most restrictive SCC and it is used by default for authenticated users.",
            "release.openshift.io/create-only": "true"
        },
        "name": "custom-restricted"
    },
    "priority": null,
    "readOnlyRootFilesystem": false,
    "requiredDropCapabilities": [
        "KILL",
        "MKNOD",
        "SETUID",
        "SETGID"
    ],
    "runAsUser": {
        "type": "MustRunAsRange"
    },
    "seLinuxContext": {
        "type": "MustRunAs"
    },
    "supplementalGroups": {
        "type": "RunAsAny"
    },
    "users": [],
    "volumes": [
        "configMap",
        "downwardAPI",
        "emptyDir",
        "persistentVolumeClaim",
        "projected",
        "secret"
    ]
}
% 

b.
% oc -n openshift-ingress get pods                                                                     
NAME                              READY   STATUS    RESTARTS   AGE
router-default-644887b75b-bm7wb   1/1     Running   0          35m
router-default-644887b75b-psppl   1/1     Running   0          35m
% 

c. delete one router pod
% oc -n openshift-ingress delete pod router-default-644887b75b-bm7wb
pod "router-default-644887b75b-bm7wb" deleted
%

d. new router pod is created successfully
% oc -n openshift-ingress get pods
NAME                              READY   STATUS    RESTARTS   AGE
router-default-644887b75b-mfwxl   1/1     Running   0          5m7s
router-default-644887b75b-psppl   1/1     Running   0          41m
% 
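
As a supplementary check (not part of the original test run), the SCC that admitted the replacement pod can be read from the openshift.io/scc annotation; the pod name is taken from step 2d above:

% oc -n openshift-ingress get pod router-default-644887b75b-mfwxl -o yaml | grep "openshift.io/scc"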

3. Create an additional IngressController; securityContext with allowPrivilegeEscalation: true is also added to its deployment (an illustrative IngressController sketch follows the output below)
% oc -n openshift-ingress get deployment.apps/router-internalapps   -o yaml | grep -i -A1 securityContext  
        securityContext:
          allowPrivilegeEscalation: true
--
      securityContext: {}
      serviceAccount: router
% 
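
For reference, a minimal sketch of the kind of IngressController that produces a router-internalapps deployment; the name, domain, and replica count here are illustrative assumptions, not the exact object used in this test:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internalapps        # hypothetical name; the operator creates deployment router-internalapps
  namespace: openshift-ingress-operator
spec:
  domain: internalapps.example.com   # placeholder domain
  replicas: 2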

4. Edit deployment/router-default and try to set spec.template.spec.containers[0].securityContext.allowPrivilegeEscalation to false; the value is reconciled back to true (an equivalent oc patch command follows step 4d below)
a. % oc -n openshift-ingress edit deployment/router-default
deployment.apps/router-default edited
%

b. % oc -n openshift-ingress get deployment/router-default  -o yaml | grep -i -A1 securityContext  
        securityContext:
          allowPrivilegeEscalation: true
--
      securityContext: {}
      serviceAccount: router
%

c. 
% oc -n openshift-ingress get pods                       
NAME                                  READY   STATUS        RESTARTS   AGE
router-default-644887b75b-8tmmr       0/1     Pending       0          20s
router-default-644887b75b-mfwxl       1/1     Terminating   0          14m
router-default-644887b75b-psppl       1/1     Running       0          50m
router-default-7f6d86767c-wbv92       1/1     Terminating   0          20s
router-internalapps-898ff699c-72gn8   1/1     Running       0          6m51s
router-internalapps-898ff699c-lkk5v   1/1     Running       0          6m51s
% 
% oc -n openshift-ingress get pods                                                              
NAME                                  READY   STATUS        RESTARTS   AGE
router-default-644887b75b-8tmmr       1/1     Running       0          5m12s
router-default-644887b75b-psppl       1/1     Running       0          55m
router-default-7f6d86767c-wbv92       1/1     Terminating   0          5m12s
router-internalapps-898ff699c-72gn8   1/1     Running       0          11m
router-internalapps-898ff699c-lkk5v   1/1     Running       0          11m
%

d. Edit deployment/router-default again and try to set spec.template.spec.containers[0].securityContext.allowPrivilegeEscalation to false
% oc -n openshift-ingress edit deployment/router-default                                        
deployment.apps/router-default edited
% 
% oc -n openshift-ingress get deployment/router-default  -o yaml | grep -i -A1 securityContext  
        securityContext:
          allowPrivilegeEscalation: true
--
      securityContext: {}
      serviceAccount: router
%
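
A non-interactive way to reproduce steps 4a/4d, assuming the router container is at index 0 of the pod template; as the transcripts above show, the ingress operator reconciles the Deployment, so re-reading the field afterwards returns true again:

% oc -n openshift-ingress patch deployment/router-default --type=json \
    -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/securityContext/allowPrivilegeEscalation", "value": false}]'
% oc -n openshift-ingress get deployment/router-default \
    -o jsonpath='{.spec.template.spec.containers[0].securityContext.allowPrivilegeEscalation}'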

Comment 6 Hongan Li 2022-07-27 05:41:12 UTC
It was verified with the pre-merge process (see Comment 3) and the PR has been merged into 4.10.0-0.nightly-2022-07-25-110002, so moving to Verified.

Comment 8 errata-xmlrpc 2022-08-01 11:34:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.25 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5730

