Bug 1879777 - Overlapping, divergent openshift-machine-api namespace manifests
Summary: Overlapping, divergent openshift-machine-api namespace manifests
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.7.0
Assignee: Joel Speed
QA Contact: Milind Yadav
Depends On:
Reported: 2020-09-16 23:53 UTC by W. Trevor King
Modified: 2021-02-24 15:19 UTC (History)

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Last Closed: 2021-02-24 15:18:31 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Github openshift cluster-autoscaler-operator pull 185 0 None closed Bug 1879777: Remove namespace manifest 2021-01-08 10:53:42 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:19:03 UTC

Description W. Trevor King 2020-09-16 23:53:47 UTC
Description of problem:

From [1]:

  $ oc adm release extract --to manifests quay.io/openshift-release-dev/ocp-release:4.6.0-fc.6-x86_64
  Extracted release payload from digest sha256:933f3d6f61ddec9f3b88a0932b47c438d7dfc15ff1873ab176284b66c9cff76e created at 2020-09-14T21:50:05Z
  $ diff -u0 <(yaml2json <manifests/0000_30_machine-api-operator_00_namespace.yaml | jq -S .) <(yaml2json <manifests/0000_50_cluster-autoscaler-operator_00_namespace.yaml | jq -S .)
  --- /dev/fd/63	2020-09-15 21:59:06.566442814 -0700
  +++ /dev/fd/62	2020-09-15 21:59:06.569442850 -0700
  @@ -5,3 +4,0 @@
  -    "annotations": {
  -      "openshift.io/node-selector": ""
  -    },

This is similar to bug 1879365.

Ideally, we'd either have a single manifest, or [2,3] would match.  Seems like the divergent overlap dates back to Feb. [4] or April [5] 2019.  Diverging by annotation isn't terrible, because the CVO merges annotations [6,7] and only attempts to stomp the in-cluster object if the in-cluster object diverges in a manifest-specified key [8].  But it would be nice to remove the risk of more serious divergence.
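To illustrate the "merge, don't replace" annotation semantics described above, here is a rough sketch using jq's object-merge operator (this only mimics the behavior; it is NOT the CVO's actual code, and the `other/key` annotation is a made-up example):

```shell
# In-cluster object carries an extra annotation the manifest doesn't specify.
in_cluster='{"openshift.io/node-selector": "", "other/key": "kept"}'
manifest='{"openshift.io/node-selector": ""}'
# jq's `*` merges objects, with the right-hand (manifest) side winning on
# conflicts; keys present only in the in-cluster object survive the merge.
echo "$in_cluster$manifest" | jq -s '.[0] * .[1]'
```

Since the manifest-specified key already matches, a merge like this leaves the in-cluster object untouched; the CVO would only stomp if a manifest-specified key diverged.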

Based on the manifest runlevel docs [9,10,11] (we need to clean up and unify those a bit on the CVO side), you should be able to rely on the runlevel-30 manifest from the machine-api operator being in place by the time the runlevel-50 autoscaler manifests come along.  Or you could leave the overlapping manifests, update one to remove any divergence, and punt unification into a single manifest to some future work if/when you ever need to make a change to the manifest that would tempt the CVO into stomping back and forth on itself.
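The runlevel ordering above follows from the manifest filename prefixes in the extracted payload: the CVO applies manifests in lexical order, so the `0000_30_` prefix sorts (and is applied) before `0000_50_`. A quick sketch using the two namespace manifests from this bug:

```shell
# Lexical sort of the two overlapping namespace manifests; the runlevel-30
# machine-api-operator manifest comes first, so its namespace already exists
# when the runlevel-50 cluster-autoscaler-operator manifests are applied.
printf '%s\n' \
  0000_50_cluster-autoscaler-operator_00_namespace.yaml \
  0000_30_machine-api-operator_00_namespace.yaml \
  | sort
```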

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1879184#c2
[2]: https://github.com/openshift/machine-api-operator/blob/2b0f59dd296a1f2eab8c6088a6b9225a0d17d55b/install/0000_30_machine-api-operator_00_namespace.yaml
[3]: https://github.com/openshift/cluster-autoscaler-operator/blob/f7d3793fc0268e43d8998027d247000f9602b48a/install/00_namespace.yaml
[4]: https://github.com/openshift/cluster-autoscaler-operator/commit/06cb044b1c43c9658594903c523b4d7d50808dda
[5]: https://github.com/openshift/machine-api-operator/commit/94f548ea2b9a421cd01dcfb560ad6d2ed05149e9
[6]: https://github.com/openshift/cluster-version-operator/blob/6d56c655ea16f6faee4b65ffef43dcd912657bc6/lib/resourceapply/core.go#L32
[7]: https://github.com/openshift/cluster-version-operator/blob/6d56c655ea16f6faee4b65ffef43dcd912657bc6/lib/resourcemerge/meta.go#L14
[8]: https://github.com/openshift/cluster-version-operator/blob/6d56c655ea16f6faee4b65ffef43dcd912657bc6/lib/resourcemerge/meta.go#L37
[9]: https://github.com/openshift/cluster-version-operator/blob/6d56c655ea16f6faee4b65ffef43dcd912657bc6/docs/dev/operators.md#what-is-the-order-that-resources-get-createdupdated-in
[10]: https://github.com/openshift/cluster-version-operator/blob/6d56c655ea16f6faee4b65ffef43dcd912657bc6/docs/user/reconciliation.md#manifest-graph
[11]: https://github.com/openshift/cluster-version-operator/blob/6d56c655ea16f6faee4b65ffef43dcd912657bc6/docs/dev/upgrades.md#generalized-ordering

Comment 1 Joel Speed 2020-09-30 16:45:26 UTC
I'd like to spend some time digging into this to work out how we want to proceed; I'll bring it up at our team arch call next sprint.

Comment 2 Joel Speed 2020-11-13 13:39:56 UTC
I've added a PR for this; it should get merged during the next sprint.

Comment 4 Milind Yadav 2020-12-01 12:33:28 UTC
Validated at:
[miyadav@miyadav ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2020-11-30-172451   True        False         8h      Cluster version is 4.7.0-0.nightly-2020-11-30-172451

[miyadav@miyadav ~]$ oc adm release extract --to manifests registry.svc.ci.openshift.org/ocp/release:4.7.0-0.nightly-2020-11-30-172451
Extracted release payload from digest sha256:052ffda485ebd94c0772abdb88e326afc0b03188f9b6a2037ff0ff34b891fbbd created at 2020-11-30T17:27:01Z

[miyadav@miyadav ~]$ diff -u0 <(yaml2json <manifests/0000_30_machine-api-operator_00_namespace.yaml | jq -S .) <(yaml2json <manifests/0000_50_cluster-autoscaler-operator_00_namespace.yaml | jq -S .)
bash: manifests/0000_50_cluster-autoscaler-operator_00_namespace.yaml: No such file or directory
--- /dev/fd/63	2020-12-01 17:54:01.104853692 +0530
+++ /dev/fd/62	2020-12-01 17:54:01.104853692 +0530
@@ -1,16 +0,0 @@
-  "apiVersion": "v1",
-  "kind": "Namespace",
-  "metadata": {
-    "annotations": {
-      "include.release.openshift.io/ibm-cloud-managed": "true",
-      "include.release.openshift.io/self-managed-high-availability": "true",
-      "openshift.io/node-selector": ""
-    },
-    "labels": {
-      "name": "openshift-machine-api",
-      "openshift.io/cluster-monitoring": "true"
-    },
-    "name": "openshift-machine-api"
-  }

Moved to VERIFIED based on the above: the cluster-autoscaler-operator namespace manifest no longer exists in the payload, so only the machine-api-operator manifest defines the openshift-machine-api namespace.

Comment 7 errata-xmlrpc 2021-02-24 15:18:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.
