Description of problem:

When deploying externalDNS operand pods, PodSecurity violation warnings started appearing in the operator pod logs. This appears to be due to the PodSecurity feature gate being active by default as of the Kubernetes v1.23 release; from OCP 4.11, the pod security admission level is set to "restricted" by default. Ref: https://kubernetes.io/docs/concepts/security/pod-security-admission/

OpenShift release version:

4.11.0-0.nightly-2022-05-11-054135
external-dns-operator.v0.1.2

How reproducible:

Frequently

Steps to Reproduce (in detail):

1. Deploy the externalDNS operator on the OCP platform from OperatorHub
2. Deploy a route- or service-type externalDNS operand pod
3. Check the operator logs

Actual results:

The externalDNS operator logs the warnings below:
------
oc -n external-dns-operator logs pod/external-dns-operator-7dd9d5984d-5pn7q -c operator | grep -i "violate PodSecurity"

2022-05-16T04:37:19.996Z  INFO  KubeAPIWarningLogger  would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "external-dns-n56fh6dh59ch5fcq" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "external-dns-n56fh6dh59ch5fcq" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "external-dns-n56fh6dh59ch5fcq" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "external-dns-n56fh6dh59ch5fcq" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
------

As per the documentation, pod security admission works in conjunction with namespace-level labels and places requirements on a Pod's security context and related fields.
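Each of the four violations above maps to a specific securityContext field. A minimal sketch of a container securityContext fragment that would satisfy the "restricted" profile (derived only from the violation messages; the surrounding Deployment spec is omitted):

```yaml
# Container-level securityContext satisfying "restricted:latest".
# Field values are taken from the admission warnings above.
securityContext:
  allowPrivilegeEscalation: false   # fixes: allowPrivilegeEscalation != false
  capabilities:
    drop:
    - ALL                           # fixes: unrestricted capabilities
  runAsNonRoot: true                # fixes: runAsNonRoot != true
  seccompProfile:
    type: RuntimeDefault            # fixes: seccompProfile not set ("Localhost" is also accepted)
```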
Currently, there appear to be no pod-security labels defined on the externalDNS operator and operand namespaces:

# oc get ns external-dns -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/sa.scc.mcs: s0:c26,c15
    openshift.io/sa.scc.supplemental-groups: 1000680000/10000
    openshift.io/sa.scc.uid-range: 1000680000/10000
  creationTimestamp: "2022-05-16T03:52:15Z"
  labels:
    kubernetes.io/metadata.name: external-dns
  name: external-dns
  resourceVersion: "36294"
  uid: 4dd415c3-8066-4c57-9e0e-a120b39b8088
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

# oc get ns external-dns-operator -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/sa.scc.mcs: s0:c26,c20
    openshift.io/sa.scc.supplemental-groups: 1000690000/10000
    openshift.io/sa.scc.uid-range: 1000690000/10000
  creationTimestamp: "2022-05-16T03:53:01Z"
  labels:
    kubernetes.io/metadata.name: external-dns-operator
    olm.operatorgroup.uid/de6bdbb6-4198-4e0a-926b-ed8e83e166cb: ""
  name: external-dns-operator
  resourceVersion: "36654"
  uid: dd5543d5-3963-45cd-9679-40eafa233075
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

Expected results:

The warning should not occur during externalDNS operand pod deployments.
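For reference, pod security admission is driven by the standard pod-security.kubernetes.io/* namespace labels, none of which are set on either namespace above. A sketch of what an explicit configuration could look like (the mode/level values here are illustrative, not what the eventual fix applies):

```yaml
# Illustrative only: standard pod security admission labels on a namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: external-dns
  labels:
    kubernetes.io/metadata.name: external-dns
    pod-security.kubernetes.io/enforce: restricted   # reject violating pods
    pod-security.kubernetes.io/warn: restricted      # warn clients on violation
    pod-security.kubernetes.io/audit: restricted     # add audit-log annotation
```

Absent these labels, the cluster-wide admission defaults apply, which is why the 4.11 default of "restricted" surfaces here.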
This is a blocker because OpenShift 4.11 is going to enforce PodSecurity.
The latest plan I have heard for pod security admission is to enable it but default to alert in OpenShift 4.11, and then move to restricted in OpenShift 4.12. This means it isn't absolutely necessary to block the release for this BZ. For that reason, I am changing this BZ to blocker-. However, it will save time and frustration if we resolve this BZ in 4.11.0, so we should still consider this a priority even if it isn't strictly a blocker.
Verified with the latest externalDNS build image. The pod security warnings no longer appear when the operand pod is spawned:
-----
oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-06-201913   True        False         134m    Cluster version is 4.11.0-0.nightly-2022-06-06-201913

oc -n external-dns-operator get all
NAME                                           READY   STATUS    RESTARTS   AGE
pod/external-dns-aws-svc-rc-565b685784-lb4l7   1/1     Running   0          37m
pod/external-dns-operator-5bd9f5df9b-5hxqf     2/2     Running   0          41m

NAME                                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/external-dns-operator-metrics-service   ClusterIP   172.30.174.238   <none>        8443/TCP   41m
service/external-dns-operator-service           ClusterIP   172.30.201.103   <none>        443/TCP    41m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/external-dns-aws-svc-rc   1/1     1            1           37m
deployment.apps/external-dns-operator     1/1     1            1           41m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/external-dns-aws-svc-rc-565b685784   1         1         1       37m
replicaset.apps/external-dns-operator-5bd9f5df9b     1         1         1       41m

Deployment config reference:

      name: external-dns-n56fh6dh59ch5fcq
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
        privileged: false
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault

operator logs:

2022-06-10T05:06:58.391Z  INFO  controller.external_dns_controller  Starting workers  {"worker count": 1}
2022-06-10T05:06:58.392Z  INFO  controller.credentials_secret_controller  Starting workers  {"worker count": 1}
2022-06-10T05:10:41.238Z  DEBUG controller-runtime.webhook.webhooks  received request  {"webhook": "/validate-externaldns-olm-openshift-io-v1beta1-externaldns", "UID": "cfdc0b52-4cda-4558-9076-aa7645d4fdcd", "kind": "externaldns.olm.openshift.io/v1beta1, Kind=ExternalDNS", "resource": {"group":"externaldns.olm.openshift.io","version":"v1beta1","resource":"externaldnses"}}
2022-06-10T05:10:41.238Z  INFO  validating-webhook  validate create  {"name": "aws-svc-rc"}
2022-06-10T05:10:41.239Z  DEBUG controller-runtime.webhook.webhooks  wrote response  {"webhook": "/validate-externaldns-olm-openshift-io-v1beta1-externaldns", "code": 200, "reason": "", "UID": "cfdc0b52-4cda-4558-9076-aa7645d4fdcd", "allowed": true}
2022-06-10T05:10:41.243Z  INFO  credentials_secret_controller  reconciling credentials secret for externalDNS instance  {"externaldns": "/aws-svc-rc"}
2022-06-10T05:10:41.243Z  INFO  external_dns_controller  reconciling externalDNS  {"externaldns": "/aws-svc-rc"}
2022-06-10T05:10:41.250Z  INFO  credentials_secret_controller  credentials secret is reconciled for externalDNS instance  {"externaldns": "/aws-svc-rc"}
-----
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (ExternalDNS Operator 1.0 operator/operand containers), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2022:5867