Description of problem:

The MCO was recently updated to watch image repository blacklists and whitelists from `image.config.openshift.io/cluster`. If the release payload repository is blacklisted (or is not included in the whitelist), the subsequent node rollout will destroy the cluster.

Version-Release number of selected component (if applicable):
4.0.0

How reproducible:
Always

Steps to Reproduce:
1. Update `image.config.openshift.io/cluster` to include the payload repository (ex: quay.io) in the blacklist (an example patch is shown below).
2. Alternatively, run the same update excluding the payload repository from the whitelist.

Actual results:
Cluster dies because nodes cannot pull payload images.

Expected results:
Cluster remains alive.

Additional info:
https://github.com/openshift/origin/pull/22228#discussion_r263148713
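For illustration, the blacklist step can be applied with a merge patch along these lines (quay.io is just an example; substitute whatever registry serves your payload):

```
oc patch image.config.openshift.io/cluster --type merge \
  -p '{"spec":{"registrySources":{"blockedRegistries":["quay.io"]}}}'
```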
Do we want to always disallow adding the release payload repository to the blacklist, or always ensure it's in the whitelist? Also, who gives us (the MCO) that repository hostname so we can add logic for it?
You should be able to determine the payload registry from the CVO payload (i.e. the same way your operator gets its image pullspecs).
Blacklist and whitelist should be mutually exclusive - for builds you can set one or the other, but not both (the build controller will ignore your config in that case). Are you generating the container runtime `policy.json` from these lists?

I suggest the following (sketched in code after this list):
1. If a whitelist is set, make sure the release payload repo is present (add it if it is not).
2. If a blacklist is set, remove the release payload repo if present.
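A minimal sketch of those two rules, assuming hypothetical function names (this is not the build controller's or the MCO's actual code):

```go
package main

import "fmt"

func contains(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

// sanitizeRegistryLists enforces mutual exclusivity and keeps the release
// payload repository reachable (hypothetical helper, for illustration only).
func sanitizeRegistryLists(allowed, blocked []string, payloadRepo string) ([]string, []string, error) {
	if len(allowed) > 0 && len(blocked) > 0 {
		return nil, nil, fmt.Errorf("whitelist and blacklist are mutually exclusive")
	}
	// 1. Whitelist set: make sure the payload repo is present.
	if len(allowed) > 0 && !contains(allowed, payloadRepo) {
		allowed = append(allowed, payloadRepo)
	}
	// 2. Blacklist set: remove the payload repo if present.
	var cleaned []string
	for _, r := range blocked {
		if r != payloadRepo {
			cleaned = append(cleaned, r)
		}
	}
	return allowed, cleaned, nil
}

func main() {
	allowed, blocked, _ := sanitizeRegistryLists(nil, []string{"quay.io", "untrusted.example.com"}, "quay.io")
	fmt.Println(allowed, blocked) // [] [untrusted.example.com]
}
```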
Looks like this is gonna be taken care of by the ValidateImage admission plugin (reference is the email thread with Derek). Who should I move the BZ to?
Antonio, that plan ended up getting rejected, see: https://github.com/openshift/origin/pull/22228#issuecomment-470602025 so I think we're back to expecting consuming operators (in this case the MCO?) to filter the configuration as appropriate.
(In reply to Ben Parees from comment #5)
> Antonio, that plan ended up getting rejected, see:
> https://github.com/openshift/origin/pull/22228#issuecomment-470602025
>
> so I think we're back to expecting consuming operators (in this case the
> MCO?) to filter the configuration as appropriate.

Thanks Ben, I'll open an issue in the MCO repo for that. Do we extract the registry from this image:

oc get clusterversion/version -o json | jq '.status.desired.image'

Is that reasonable (even if it requires parsing)?
ContainerRuntimeCRD/registries is owned by the Runtimes team in the MCO repo, moving to them.

The way to fix this would be disallowing the registry obtained from parsing `oc get clusterversion/version -o json | jq '.status.desired.image'` from being added to the registries.conf blacklist (or adding it by default to the whitelist and preventing anyone from deleting it). A sketch of the parsing step is below.
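For the parsing step, a rough sketch using a simplified docker-reference heuristic (a real implementation would likely use a reference-parsing library rather than string splitting; the function name here is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// registryFromPullspec extracts the registry hostname from a pullspec such as
// the one returned by:
//   oc get clusterversion/version -o json | jq '.status.desired.image'
func registryFromPullspec(pullspec string) string {
	parts := strings.SplitN(pullspec, "/", 2)
	// The first path component is a registry only if it looks like a
	// hostname (contains a dot or a port, or is "localhost"); otherwise
	// the pullspec refers to a docker.io repository.
	if len(parts) == 2 && (strings.ContainsAny(parts[0], ".:") || parts[0] == "localhost") {
		return parts[0]
	}
	return "docker.io"
}

func main() {
	fmt.Println(registryFromPullspec("registry.svc.ci.openshift.org/ocp/release@sha256:0123abc"))
	// Output: registry.svc.ci.openshift.org
}
```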
@Antonio - my understanding is that blacklists and whitelists should be done via image signature policy config (policy.json) - see the thread in containers/image [1]. This is informing my PR to only allow one of (allowedRegistries|blockedRegistries) to be set on `image.config.openshift.io` objects [2].

[1] https://github.com/containers/image/issues/548
[2] https://github.com/openshift/origin/pull/22296
@Adam and @Ben: if https://github.com/openshift/origin/pull/22296 can't ensure that the CVO registry doesn't end up being blacklisted, should the MCO just make sure that the CVO registry doesn't end up under blockedRegistries in registries.conf? I don't think the containerRuntimeConfig controller has the power to stop the user from adding that to the image CRD. We could just log a warning saying that blocking the CVO registry is not allowed, skip it, and continue adding the other registries from the image CRD (see the sketch below).
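A sketch of that warn-and-skip behavior, with hypothetical names (the eventual fix lives in the MCO's container runtime config controller, not necessarily in this exact form):

```go
package main

import (
	"fmt"
	"log"
)

// filterBlockedRegistries drops the payload registry from the blocked list,
// logging a warning instead of failing, so the remaining registries from the
// image CRD are still applied.
func filterBlockedRegistries(blocked []string, payloadReg string) []string {
	var out []string
	for _, reg := range blocked {
		if reg == payloadReg {
			log.Printf("error adding %q to blocked registries, cannot block the registry being used by the payload, skipping", reg)
			continue
		}
		out = append(out, reg)
	}
	return out
}

func main() {
	fmt.Println(filterBlockedRegistries(
		[]string{"registry.svc.ci.openshift.org", "untrusted.example.com"},
		"registry.svc.ci.openshift.org",
	))
	// Output: [untrusted.example.com]
}
```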
Uvashi, I think that's exactly what we're proposing needs to be done. (By "CVO registry" I assume you mean the registry referenced by the payload content.) Note that I think we anticipate a future state in which the payload can reference multiple registries, and I don't think we currently have an answer about where the MCO can get the full list of referenced registries from.
Fixed in https://github.com/openshift/machine-config-operator/pull/569
The fix has been merged into machine-config-operator/master.
Checked with 4.0.0-0.nightly-2019-03-23-222829 and the fix is still not in; waiting for a new build to try again.
Checked and the issue has been fixed.

#> oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-03-25-180911   True        False         42m     Cluster version is 4.0.0-0.nightly-2019-03-25-180911

#> oc get image.config.openshift.io cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  creationTimestamp: 2019-03-26T02:27:14Z
  generation: 2
  name: cluster
  resourceVersion: "31716"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: ab1d3566-4f6e-11e9-b0b5-0eb2688208c2
spec:
  additionalTrustedCA:
    name: ""
  registrySources:
    blockedRegistries:
    - registry.svc.ci.openshift.org
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000

#> oc -n openshift-machine-config-operator logs machine-config-controller-64f5ddb575-hbgwj
I0326 03:13:58.877690       1 container_runtime_config_controller.go:582] error adding "registry.svc.ci.openshift.org" to blocked registries, cannot block the registry being used by the payload, skipping....
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758