Bug 2084433
| Summary: | Podsecurity violation error getting logged for ingresscontroller during deployment | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Arvind iyengar <aiyengar> |
| Component: | Networking | Assignee: | Miciah Dashiel Butler Masters <mmasters> |
| Networking sub component: | router | QA Contact: | Arvind iyengar <aiyengar> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | high | CC: | mmasters |
| Version: | 4.11 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-10 11:11:30 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
|
Description
Arvind iyengar 2022-05-12 06:40:18 UTC

Verified in the "4.11.0-0.nightly-2022-05-18-010528" release version. There are no more "podsecurity" errors noted during ingresscontroller pod creation, and the canary and router resources now have their security contexts set properly:
-------
oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.11.0-0.nightly-2022-05-18-010528 True False 3h35m Cluster version is 4.11.0-0.nightly-2022-05-18-010528
oc -n openshift-ingress-operator get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-operator-54c67bbfdf-2dwj4 2/2 Running 1 (4h1m ago) 4h12m 10.128.0.32 aiyengartq-xtngl-master-0 <none> <none>
oc -n openshift-ingress-operator logs pod/ingress-operator-54c67bbfdf-2dwj4 -c ingress-operator | grep -i "podsecurity" | wc -l
0
oc get ns openshift-ingress -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c24,c19
    openshift.io/sa.scc.supplemental-groups: 1000590000/10000
    openshift.io/sa.scc.uid-range: 1000590000/10000
    workload.openshift.io/allowed: management
  creationTimestamp: "2022-05-18T08:33:46Z"
  labels:
    kubernetes.io/metadata.name: openshift-ingress
    name: openshift-ingress
    network.openshift.io/policy-group: ingress
    olm.operatorgroup.uid/1f62d689-46a4-49e1-8fc4-8260c31d95e9: ""
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: privileged
    policy-group.network.openshift.io/ingress: ""
  name: openshift-ingress
  resourceVersion: "16153"
  uid: 0e212e1c-705e-4789-98f4-86ac5bf3a201
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
oc -n openshift-ingress-canary get daemonset -o yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    annotations:
      deprecated.daemonset.template.generation: "1"
    creationTimestamp: "2022-05-18T08:38:05Z"
  ....
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
  ....
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
  ...
-------
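The verification above can also be run against saved manifests rather than a live cluster. The following is a minimal sketch of that approach: the inline heredocs stand in for the output of `oc get ns openshift-ingress -o yaml` and the canary daemonset shown above, and the file names (`openshift-ingress-ns.yaml`, `canary-ds-fragment.yaml`) are hypothetical.

```shell
#!/bin/sh
# Sketch: check the pod-security labels and securityContext fields offline.
# The heredoc below is a sample fragment of the namespace manifest shown in
# this comment; in practice it would come from `oc get ns ... -o yaml`.
cat > openshift-ingress-ns.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  labels:
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: privileged
  name: openshift-ingress
EOF

# All three pod-security labels (audit, enforce, warn) must be "privileged".
labels=$(grep -c 'pod-security.kubernetes.io/.*: privileged' openshift-ingress-ns.yaml)
echo "privileged pod-security labels: $labels"
[ "$labels" -eq 3 ] && echo "PASS: namespace labels" || echo "FAIL: namespace labels"

# Sample of the pod-level securityContext from the canary daemonset above.
cat > canary-ds-fragment.yaml <<'EOF'
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
EOF
grep -q 'runAsNonRoot: true' canary-ds-fragment.yaml \
  && echo "PASS: pod securityContext" || echo "FAIL: pod securityContext"
```

Run directly, this prints the label count followed by a PASS/FAIL line per check; the same greps could be pointed at real `oc get -o yaml` output captured from a cluster.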
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069