Bug 1686509 - Blacklisting release payload repo destroys cluster
Summary: Blacklisting release payload repo destroys cluster
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Containers
Version: 4.1.0
Hardware: x86_64
OS: Linux
Target Milestone: ---
Target Release: 4.1.0
Assignee: Urvashi Mohnani
QA Contact: weiwei jiang
Depends On:
Blocks: 1685185
Reported: 2019-03-07 15:29 UTC by Adam Kaplan
Modified: 2019-06-04 10:45 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2019-06-04 10:45:20 UTC
Target Upstream Version:


System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 0 None None None 2019-06-04 10:45:26 UTC

Description Adam Kaplan 2019-03-07 15:29:31 UTC
Description of problem: MCO was recently updated to watch image repository blacklists and whitelists from `image.config.openshift.io/cluster`. If the release payload repository is blacklisted (or is not included in the whitelist), the subsequent node roll out will destroy the cluster.

Version-Release number of selected component (if applicable): 4.0.0

How reproducible: Always

Steps to Reproduce:
1. Update `image.config.openshift.io/cluster` to include the payload repository (ex: quay.io) in the blacklist
2. Alternatively, run the same update excluding the payload repository from the whitelist.
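A minimal sketch of the step 1 update, as a patch to the cluster-scoped Image config (the `registrySources.blockedRegistries` field follows the `config.openshift.io/v1` Image API; `quay.io` here stands in for whichever registry hosts the release payload):

```yaml
# Sketch of step 1: block the payload registry cluster-wide.
# Applying this on a live cluster reproduces the failure described below.
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    blockedRegistries:
    - quay.io   # payload registry; blocking it breaks node image pulls
```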

Actual results: Cluster dies because nodes cannot pull payload images.

Expected results: Cluster remains alive.

Additional info: https://github.com/openshift/origin/pull/22228#discussion_r263148713

Comment 1 Antonio Murdaca 2019-03-07 15:34:31 UTC
do we want to always disallow adding the release payload repository to the blacklist? or always ensuring it's in the whitelist? also, who gives us (the MCO) that repository hostname so we can add logic for it?

Comment 2 Ben Parees 2019-03-07 16:12:56 UTC
you should be able to determine the payload registry from the CVO payload (ie the same way your operator gets its image pullspecs)

Comment 3 Adam Kaplan 2019-03-07 16:28:21 UTC
Blacklist and whitelist should be mutually exclusive - for builds you can set one or the other, but not both (build controller will ignore your config in that case). Are you generating the container runtime `policy.json` from these lists?

I suggest the following:
1. If a whitelist is set, make sure the release payload repo is present (add if it is not).
2. If a blacklist is set, remove the release payload repo if present.

Comment 4 Antonio Murdaca 2019-03-08 09:32:32 UTC
Looks like this is gonna be taken care of by the ValidateImage admission plugin (reference is the email thread with Derek).

Who should I move the BZ to?

Comment 5 Ben Parees 2019-03-08 15:12:24 UTC
Antonio, that plan ended up getting rejected, see:

https://github.com/openshift/origin/pull/22228#issuecomment-470602025

so i think we're back to expecting consuming operators (in this case the MCO?) to filter the configuration as appropriate.

Comment 6 Antonio Murdaca 2019-03-08 15:24:21 UTC
(In reply to Ben Parees from comment #5)
> Antonio, that plan ended up getting rejected, see:
> https://github.com/openshift/origin/pull/22228#issuecomment-470602025
> so i think we're back to expecting consuming operators (in this case the
> MCO?) to filter the configuration as appropriate.

Thanks Ben, I'll open an issue in the MCO repo for that. Do we extract the registry from this image:

oc get clusterversion/version -o json | jq '.status.desired.image'

is that reasonable (even if it requires parsing)?

Comment 7 Antonio Murdaca 2019-03-08 15:29:31 UTC
ContainerRuntimeCRD/registries is owned by the Runtimes team in the MCO repo, moving to them.

The way to fix this would be to disallow the registry obtained by parsing `oc get clusterversion/version -o json | jq '.status.desired.image'` from being added to the registries.conf blacklist (or to add it to the whitelist by default and prevent anyone from deleting it).
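Extracting the registry host from a payload pullspec like the one above is mostly string handling; a rough shell sketch (the pullspec below is a made-up example, not read from a live cluster):

```shell
# Illustrative pullspec; real code would take it from the ClusterVersion status.
image="registry.svc.ci.openshift.org/ocp/release@sha256:0123456789abcdef"

# The registry host is everything before the first slash of the pullspec.
registry="${image%%/*}"
echo "$registry"

# Caveat: a short name like "nginx" has no registry component (docker.io is
# implied), so real code must special-case pullspecs without a slash.
```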

Comment 8 Adam Kaplan 2019-03-12 15:20:04 UTC
@Antonio - my understanding is that blacklists and whitelists should be done via image signature policy config (policy.json) - see thread in containers/image [1].

This is informing my PR to only allow one of (allowedRegistries|blockedRegistries) to be set on `image.config.openshift.io` objects [2].

[1] https://github.com/containers/image/issues/548
[2] https://github.com/openshift/origin/pull/22296

Comment 9 Urvashi Mohnani 2019-03-14 00:53:09 UTC
@Adam and @Ben: if https://github.com/openshift/origin/pull/22296 can't ensure that the CVO registry doesn't end up being blacklisted, should the MCO just make sure that the CVO registry doesn't end up under blockedRegistries in registries.conf?
I don't think the containerRuntimeConfig controller has the power to stop the user from adding that to the image CRD. We can log a warning saying that blocking the CVO registry is not allowed, skip it, and continue adding the other registries from the image CRD.
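The warn-and-skip behavior described here could look roughly like this (a sketch only, not the actual MCO controller code; the registry names are hypothetical):

```shell
# Hypothetical inputs: the payload registry and the user-requested blocklist.
payload_registry="registry.svc.ci.openshift.org"
requested="quay.io registry.svc.ci.openshift.org docker.io"

# Copy every requested registry except the payload registry, warning on skip
# instead of failing the whole sync.
blocked=""
for reg in $requested; do
  if [ "$reg" = "$payload_registry" ]; then
    echo "warning: cannot block payload registry $reg, skipping" >&2
  else
    blocked="$blocked $reg"
  fi
done
blocked="${blocked# }"   # trim leading space
echo "$blocked"
```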

Comment 10 Ben Parees 2019-03-14 03:05:32 UTC
Urvashi, I think that's exactly what we're proposing needs to be done.  (By "CVO registry" I assume you mean the registry referenced by the payload content. Note that I think we anticipate a future state in which the payload can reference multiple registries, and I don't think we currently have an answer about where the MCO can get the full list of referenced registries from.)

Comment 11 Urvashi Mohnani 2019-03-21 19:06:36 UTC
Fixed in https://github.com/openshift/machine-config-operator/pull/569

Comment 12 Urvashi Mohnani 2019-03-22 17:34:45 UTC
Fix got merged into machine-config-operator/master

Comment 13 weiwei jiang 2019-03-25 05:36:35 UTC
Checked with 4.0.0-0.nightly-2019-03-23-222829 and the fix is still not in; waiting for a new build to try again.

Comment 14 weiwei jiang 2019-03-26 05:03:15 UTC
Checked and the issue has been fixed.

#> oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-03-25-180911   True        False         42m     Cluster version is 4.0.0-0.nightly-2019-03-25-180911

#> oc get image.config.openshift.io cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  creationTimestamp: 2019-03-26T02:27:14Z
  generation: 2
  name: cluster
  resourceVersion: "31716"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: ab1d3566-4f6e-11e9-b0b5-0eb2688208c2
spec:
  additionalTrustedCA:
    name: ""
  registrySources:
    blockedRegistries:
    - registry.svc.ci.openshift.org
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000

#> oc -n openshift-machine-config-operator logs machine-config-controller-64f5ddb575-hbgwj
I0326 03:13:58.877690       1 container_runtime_config_controller.go:582] error adding "registry.svc.ci.openshift.org" to blocked registries, cannot block the registry being used by the payload, skipping....

Comment 16 errata-xmlrpc 2019-06-04 10:45:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.
