Description of problem:
Cannot block registries using image policies with a whitelist. Not sure if it's a bug; maybe it's just the logic of image policies. I want to allow my OCP cluster to run images only from our corporate registry (and block docker.io) without manually changing /etc/containers/registries.conf on the nodes in my cluster (i.e. without creating/applying new MachineConfigs). I want to apply a whitelist of registries that users can use to build and deploy their apps, without docker.io in that list, so that deploying images from docker.io is not possible. To do this I set up image policies with a whitelist, and it looks like image policies don't work as expected.

Version-Release number of selected component (if applicable):
$ oc4 get clusterversion
NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-rc.5   True        False         3h34m   Cluster version is 4.1.0-rc.5

How reproducible:
Always

Steps to Reproduce:
1. Set up image policies with a whitelist:

[vadim@vadim openshift4-main]$ oc4 get images.config.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Image
  metadata:
    annotations:
      release.openshift.io/create-only: "true"
    name: cluster
  spec:
    registrySources:
      allowedRegistries:
      - registry.redhat.io
      - registry.access.redhat.com
      - quay.io
      - image-registry.openshift-image-registry.svc:5000

2. Deploy a pod from docker.io:

wget https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json && sed -i 's/openshift\/hello-openshift/docker.io\/openshift\/hello-openshift/g' hello-pod.json && oc create -f hello-pod.json

3. The pod starts:

[vadim@vadim tools]$ oc4 get pods
NAME              READY   STATUS    RESTARTS   AGE
hello-openshift   1/1     Running   0          13s

4.
Check /etc/containers/registries.conf on the nodes (no blocked registries):

[vadim@vadim openshift4-main]$ oc4 get machineconfig 01-master-container-runtime -o yaml | grep docker.io -A 4 -B 3
      storage:
        files:
        - contents:
            source: data:,%5Bregistries.search%5D%0Aregistries%20%3D%20%5B'registry.access.redhat.com'%2C%20'docker.io'%5D%0A%0A%5Bregistries.insecure%5D%0Aregistries%20%3D%20%5B%5D%0A%0A%5Bregistries.block%5D%0Aregistries%20%3D%20%5B%5D%0A
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/containers/registries.conf

It looks like the whitelist works only for builds (push/pull).

Actual results:
The pod starts even though docker.io is not in the whitelist.

Expected results:
Users cannot run containers from registries that are not in the whitelist.

Additional info:
However, if I switch the policy to a blacklist:

1. [vadim@vadim openshift4-main]$ oc4 get images.config.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Image
  metadata:
    annotations:
      release.openshift.io/create-only: "true"
    name: cluster
  spec:
    registrySources:
      blockedRegistries:
      - docker.io

2. And try to deploy the same pod:

wget https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json && sed -i 's/openshift\/hello-openshift/docker.io\/openshift\/hello-openshift/g' hello-pod.json && oc create -f hello-pod.json

3.
I will see the error:

[vadim@vadim tools]$ oc4 get pods
NAME              READY   STATUS              RESTARTS   AGE
hello-openshift   0/1     ImageInspectError   0          3m46s

[vadim@vadim tools]$ oc4 describe pod hello-openshift | tail -n 6
Events:
  Type     Reason         Age                     From                                                   Message
  ----     ------         ----                    ----                                                   -------
  Normal   Scheduled      4m34s                   default-scheduler                                      Successfully assigned vadim-test/hello-openshift to XXXXXXXXXXXXXXXXX.us-west-2.compute.internal
  Warning  InspectFailed  2m29s (x12 over 4m27s)  kubelet, XXXXXXXXXXXXXXXXX.us-west-2.compute.internal  Failed to inspect image "docker.io/docker.io/docker.io/openshift/hello-openshift": rpc error: code = Unknown desc = cannot use "docker.io/docker.io/docker.io/openshift/hello-openshift:latest" because it's blocked
  Warning  Failed         2m15s (x13 over 4m27s)  kubelet, XXXXXXXXXXXXXXXXX.us-west-2.compute.internal  Error: ImageInspectError

I checked, and it looks like when you apply a blacklist policy, the operator changes /etc/containers/registries.conf on all nodes and adds the registries from the image policy to the blocked registries:

[vadim@vadim openshift4-main]$ oc4 get machineconfig 99-master-a36047b0-7cd1-11e9-b96d-068e4e6ec940-registries -o yaml | grep docker.io -A 4 -B 3
      storage:
        files:
        - contents:
            source: data:text/plain,%5Bregistries%5D%0A%20%20%5Bregistries.search%5D%0A%20%20%20%20registries%20%3D%20%5B%22registry.access.redhat.com%22%2C%20%22docker.io%22%5D%0A%20%20%5Bregistries.insecure%5D%0A%20%20%20%20registries%20%3D%20%5B%5D%0A%20%20%5Bregistries.block%5D%0A%20%20%20%20registries%20%3D%20%5B%22docker.io%22%5D%0A
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/containers/registries.conf

I cannot create both a whitelist and a blacklist (i.e. a whitelist with my registries plus a blacklist with only docker.io) because of https://github.com/openshift/origin/pull/22296.
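The percent-encoded data: URL in the 99-master-a36047b0-7cd1-11e9-b96d-068e4e6ec940-registries MachineConfig above can be decoded to see the registries.conf the operator writes, for example with Python's standard library (payload copied verbatim from the output above):

```python
from urllib.parse import unquote

# Percent-encoded registries.conf payload from the
# 99-master-...-registries MachineConfig shown above.
payload = (
    "%5Bregistries%5D%0A%20%20%5Bregistries.search%5D%0A%20%20%20%20"
    "registries%20%3D%20%5B%22registry.access.redhat.com%22%2C%20%22docker.io%22%5D%0A"
    "%20%20%5Bregistries.insecure%5D%0A%20%20%20%20registries%20%3D%20%5B%5D%0A"
    "%20%20%5Bregistries.block%5D%0A%20%20%20%20registries%20%3D%20%5B%22docker.io%22%5D%0A"
)

# Decoding shows docker.io listed under [registries.block].
print(unquote(payload))
```

This confirms that the blacklist path works by adding docker.io to the [registries.block] section of /etc/containers/registries.conf.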
Made a mistake above: the actual result is that the pod was started.
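The expected whitelist semantics from the report (a pod image is rejected unless its registry appears in allowedRegistries) can be sketched in a few lines. This is purely illustrative, not the cluster's actual enforcement path; the registry_of helper and its default-to-docker.io rule are assumptions based on standard image-reference parsing:

```python
def registry_of(image_ref: str) -> str:
    """Return the registry component of an image reference.

    The first path component counts as a registry only if it looks like
    a hostname (contains '.' or ':') or is 'localhost'; otherwise the
    reference is registry-less and defaults to docker.io.
    """
    first, _, rest = image_ref.partition("/")
    if rest and ("." in first or ":" in first or first == "localhost"):
        return first
    return "docker.io"


def is_allowed(image_ref: str, allowed_registries: list[str]) -> bool:
    """Expected whitelist semantics: reject anything not whitelisted."""
    return registry_of(image_ref) in allowed_registries


# Whitelist from the bug report's Image config.
whitelist = [
    "registry.redhat.io",
    "registry.access.redhat.com",
    "quay.io",
    "image-registry.openshift-image-registry.svc:5000",
]

# docker.io is not in the whitelist, so this pod image should be rejected.
print(is_allowed("docker.io/openshift/hello-openshift", whitelist))  # False
```

Under these semantics the hello-openshift pod from step 2 would never reach Running, which is what the reporter expected and did not observe.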
ContainerRuntimeConfig doesn't support allow registries; it only supports blocking registries today. We will add this in the next release.
PR is currently open at https://github.com/openshift/machine-config-operator/pull/803. I plan to go through the comments, get it patched up, and merge it this week.
https://github.com/openshift/machine-config-operator/pull/803 got in. Should be available for testing by tomorrow.
// AllowedRegistries scenario:

➜  ~ oc get images.config.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Image
  metadata:
    annotations:
      release.openshift.io/create-only: "true"
    creationTimestamp: "2019-09-11T07:09:19Z"
    generation: 2
    name: cluster
    resourceVersion: "17064"
    selfLink: /apis/config.openshift.io/v1/images/cluster
    uid: 12ee94d8-d463-11e9-9344-02dc5ed3e938
  spec:
    registrySources:
      allowedRegistries:
      - brewregistry.stage.redhat.io
      - cloud.openshift.com
      - registry.access.redhat.com
      - quay.io
      - registry.connect.redhat.com
      - registry.redhat.io
      - registry.svc.ci.openshift.org

➜  ~ oc describe pods h-1-mrrrw
Name:               h-1-mrrrw
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               ip-10-0-137-72.us-east-2.compute.internal/10.0.137.72
Start Time:         Wed, 11 Sep 2019 16:33:29 +0800
Labels:             deployment=h-1
                    deploymentconfig=h
                    run=h
Annotations:        openshift.io/deployment-config.latest-version: 1
                    openshift.io/deployment-config.name: h
                    openshift.io/deployment.name: h-1
Status:             Pending
IP:                 10.128.2.35
Controlled By:      ReplicationController/h-1
Containers:
  h:
    Container ID:
    Image:          docker.io/openshift/hello-openshift
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w2vdb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-w2vdb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w2vdb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age   From                                                Message
  ----     ------     ----  ----                                                -------
  Normal   Scheduled  12s   default-scheduler                                   Successfully assigned default/h-1-mrrrw to ip-10-0-137-72.us-east-2.compute.internal
  Normal   Pulling    10s   kubelet, ip-10-0-137-72.us-east-2.compute.internal  Pulling image "docker.io/openshift/hello-openshift"
  Warning  Failed     9s    kubelet, ip-10-0-137-72.us-east-2.compute.internal  Failed to pull image "docker.io/openshift/hello-openshift": rpc error: code = Unknown desc = Source image rejected: Running image docker://openshift/hello-openshift:latest is rejected by policy.
  Warning  Failed     9s    kubelet, ip-10-0-137-72.us-east-2.compute.internal  Error: ErrImagePull
  Normal   BackOff    8s    kubelet, ip-10-0-137-72.us-east-2.compute.internal  Back-off pulling image "docker.io/openshift/hello-openshift"
  Warning  Failed     8s    kubelet, ip-10-0-137-72.us-east-2.compute.internal  Error: ImagePullBackOff

// BlockedRegistries scenario:

➜  ~ oc get images.config.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Image
  metadata:
    annotations:
      release.openshift.io/create-only: "true"
    creationTimestamp: "2019-09-11T07:09:19Z"
    generation: 3
    name: cluster
    resourceVersion: "38942"
    selfLink: /apis/config.openshift.io/v1/images/cluster
    uid: 12ee94d8-d463-11e9-9344-02dc5ed3e938
  spec:
    registrySources:
      blockedRegistries:
      - docker.io

➜  ~ oc describe pods h-1-fslbs
Name:               h-1-fslbs
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               ip-10-0-150-250.us-east-2.compute.internal/10.0.150.250
Start Time:         Wed, 11 Sep 2019 17:26:42 +0800
Labels:             deployment=h-1
                    deploymentconfig=h
                    run=h
Annotations:        openshift.io/deployment-config.latest-version: 1
                    openshift.io/deployment-config.name: h
                    openshift.io/deployment.name: h-1
Status:             Pending
IP:                 10.131.0.50
Controlled By:      ReplicationController/h-1
Containers:
  h:
    Container ID:
    Image:          docker.io/openshift/hello-openshift
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImageInspectError
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w2vdb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-w2vdb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w2vdb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason         Age              From                                                 Message
  ----     ------         ----             ----                                                 -------
  Normal   Scheduled      8s               default-scheduler                                    Successfully assigned default/h-1-fslbs to ip-10-0-150-250.us-east-2.compute.internal
  Warning  InspectFailed  5s (x2 over 6s)  kubelet, ip-10-0-150-250.us-east-2.compute.internal  Failed to inspect image "docker.io/openshift/hello-openshift": rpc error: code = Unknown desc = cannot use "docker.io/openshift/hello-openshift:latest" because it's blocked
  Warning  Failed         5s (x2 over 6s)  kubelet, ip-10-0-150-250.us-east-2.compute.internal  Error: ImageInspectError
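The "Source image rejected: ... is rejected by policy" message in the AllowedRegistries scenario comes from a containers/image signature policy rather than from registries.conf. A registry whitelist can be expressed in policy.json form roughly as below; this is a hedged sketch of the format (reject by default, insecureAcceptAnything per allowed registry), not necessarily byte-for-byte what the machine-config-operator generates:

```python
import json

# Allowed registries from the verification scenario above.
allowed = [
    "brewregistry.stage.redhat.io",
    "cloud.openshift.com",
    "registry.access.redhat.com",
    "quay.io",
    "registry.connect.redhat.com",
    "registry.redhat.io",
    "registry.svc.ci.openshift.org",
]

# Sketch of an allow-list /etc/containers/policy.json: reject everything
# by default, accept images from each whitelisted registry.
policy = {
    "default": [{"type": "reject"}],
    "transports": {
        "docker": {reg: [{"type": "insecureAcceptAnything"}] for reg in allowed},
    },
}

print(json.dumps(policy, indent=2))
```

Because docker.io has no entry under the docker transport, pulls from it fall through to the default reject rule, which matches the ErrImagePull seen for docker.io/openshift/hello-openshift.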
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922