Bug 1936543 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
Summary: Infrastructure status.platform not migrated to status.platformStatus causes warnings
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: config-operator
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.z
Assignee: Matthew Staebler
QA Contact: Ke Wang
URL:
Whiteboard:
Duplicates: 1936454
Depends On: 1890038
Blocks:
 
Reported: 2021-03-08 17:40 UTC by OpenShift BugZilla Robot
Modified: 2024-06-14 00:42 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-14 14:53:37 UTC
Target Upstream Version:
Embargoed:



Comment 1 Joel Speed 2021-03-08 17:52:27 UTC
*** Bug 1936454 has been marked as a duplicate of this bug. ***

Comment 2 Lalatendu Mohanty 2021-03-12 19:49:43 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the UpgradeBlocker flag has been added to this bug. It will be removed if the assessment indicates that this should not block upgrade edges. The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: Customers upgrading from 4.y.z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time
What is the impact?  Is it serious enough to warrant blocking edges?
  example: Up to 2 minutes of disruption in edge routing
  example: Up to 90 seconds of API downtime
  example: etcd loses quorum and you have to restore from backup
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Issue resolves itself after five minutes
  example: Admin uses oc to fix things
  example: Admin must SSH to hosts, restore from backups, or perform other non-standard admin activities
Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
  example: No, it’s always been like this; we just never noticed
  example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1

Comment 3 Matthew Staebler 2021-03-12 20:08:25 UTC
> Who is impacted?  If we have to block upgrade edges based on this issue,
> which edges would need blocking?

This impacts all non-AWS clusters that were created by OpenShift 4.1.
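
For illustration, one rough way to check whether a given cluster falls into this set (a sketch, assuming jq is available; the oldest ClusterVersion history entry is normally the original install version):

$ oc get clusterversion -o json | jq '.items[0].status.history[-1].version'
$ oc get infrastructures.config.openshift.io/cluster -o jsonpath='{.status.platformStatus}'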

> What is the impact?  Is it serious enough to warrant blocking edges?

The machine-api-controllers will crash loop, preventing the upgrade from completing. It will not affect workloads or server operations on the cluster. It will only affect the capability of the machine-api-operator to manage machines.
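
For context, the symptom can be observed with standard commands such as the following (a sketch; exact pod names and statuses vary per cluster):

$ oc get pods -n openshift-machine-api
$ oc get clusteroperator machine-api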

> How involved is remediation (even moderately serious impacts might be
> acceptable if they are easy to mitigate)?

The remediation is to manually patch the status of the infrastructure resource to supply the expected `.status.platformStatus` field. Patching the status is a bit cumbersome because it involves patching a subresource, which is not possible directly using `oc`. It is doable using `oc proxy` and `curl`.
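
For illustration only, a minimal sketch of that workaround, assuming the proxy listens on the default 127.0.0.1:8001 and a cluster whose platform is "None"; the platformStatus payload must match the cluster's actual platform:

# terminal 1: open a local proxy to the API server
$ oc proxy --port=8001

# terminal 2: PATCH the status subresource, which `oc patch` cannot target directly
$ curl -X PATCH \
    -H 'Content-Type: application/merge-patch+json' \
    --data '{"status":{"platformStatus":{"type":"None"}}}' \
    http://127.0.0.1:8001/apis/config.openshift.io/v1/infrastructures/cluster/status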

> Is this a regression (if all previous versions were also vulnerable,
> updating to the new, vulnerable version does not increase exposure)?
>   example: No, it’s always been like this we just never noticed
>   example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

This is not a regression. The code in the machine-api-operator that expects the `.status.platformStatus` field was added in 4.6.

Comment 4 Lalatendu Mohanty 2021-03-30 18:32:14 UTC
> This impacts all non-AWS clusters that were created by OpenShift 4.1.

Based on comment 3, removing the "UpgradeBlocker" keyword.

Comment 5 Ke Wang 2021-07-19 03:14:28 UTC
This bug's PR is dev-approved but not yet merged, so I'm following issue DPTP-660 to do pre-merge verification for the QE pre-merge verification goal of issue OCPQE-815: I used the bot to build an OCP image with the open PR and performed an upgrade with it. Here are the verification steps:

Upgraded one bare metal cluster from 4.1.41 to 4.6.39 successfully.

$ oc get clusterversion -o json|jq ".items[0].status.history"
[
  {
    "completionTime": "2021-07-16T18:37:31Z",
    "image": "registry.ci.openshift.org/ocp/release@sha256:0e84fb79931b8ee5c7c2f868f0de8b0720dce208ac324664afa544db81fa21ab",
    "startedTime": "2021-07-16T17:10:58Z",
    "state": "Completed",
    "verified": true,
    "version": "4.6.39"
  },
  {
    "completionTime": "2021-07-16T16:26:28Z",
    "image": "registry.ci.openshift.org/ocp/release@sha256:c67fe644d1c06e6d7694e648a40199cb06e25e1c3cfd5cd4fdac87fd696d2297",
    "startedTime": "2021-07-16T15:03:47Z",
    "state": "Completed",
    "verified": true,
    "version": "4.5.41"
  },
  {
    "completionTime": "2021-07-16T13:25:56Z",
    "image": "registry.ci.openshift.org/ocp/release@sha256:a035dddd8a5e5c99484138951ef4aba021799b77eb9046f683a5466c23717738",
    "startedTime": "2021-07-16T09:48:26Z",
    "state": "Completed",
    "verified": true,
    "version": "4.4.33"
  },
  {
    "completionTime": "2021-07-16T08:53:02Z",
    "image": "registry.ci.openshift.org/ocp/release@sha256:9ff90174a170379e90a9ead6e0d8cf6f439004191f80762764a5ca3dbaab01dc",
    "startedTime": "2021-07-16T07:38:48Z",
    "state": "Completed",
    "verified": true,
    "version": "4.3.40"
  },
  {
    "completionTime": "2021-07-16T07:19:33Z",
    "image": "registry.ci.openshift.org/ocp/release@sha256:449b9b839d2cdf33ff2a494a5863d24dc1b436f995c286d8f8d58821ae293b82",
    "startedTime": "2021-07-16T06:09:59Z",
    "state": "Completed",
    "verified": true,
    "version": "4.2.36"
  },
  {
    "completionTime": "2021-07-16T04:22:14Z",
    "image": "quay.io/openshift-release-dev/ocp-release@sha256:a8f706d139c8e77d884ccedbf67d69eefd67b66dcf69ee1032b507fe3acbf8c8",
    "startedTime": "2021-07-16T03:51:47Z",
    "state": "Completed",
    "verified": false,
    "version": "4.1.41"
  }
]

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.39    True        False         7h41m   Cluster version is 4.6.39

$ oc get infrastructures.config.openshift.io/cluster -ojson
{
    "apiVersion": "config.openshift.io/v1",
    "kind": "Infrastructure",
    "metadata": {
        "creationTimestamp": "2021-07-16T03:51:42Z",
        "generation": 1,
        "name": "cluster",
        "resourceVersion": "254",
        "selfLink": "/apis/config.openshift.io/v1/infrastructures/cluster",
        "uid": "21ce86ea-e5e9-11eb-9a48-fa163e4d9e60"
    },
    "spec": {
        "cloudConfig": {
            "name": ""
        }
    },
    "status": {
        "apiServerInternalURI": "https://api-int.kewang16upb1.....com:6443",
        "apiServerURL": "https://api.kewang16upb1.....com:6443",
        "etcdDiscoveryDomain": "kewang16upb1.....com",
        "infrastructureName": "kewang16upb1-56sws",
        "platform": "None"
    }
}
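
As a supplementary check (not part of the original steps), one could also confirm the machine-api components are healthy after the upgrade and query the field directly, for example:

$ oc get clusteroperator machine-api
$ oc get infrastructures.config.openshift.io/cluster -o jsonpath='{.status.platformStatus}'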


So the bug is pre-merge verified. After the PR is merged, the bug will be moved to VERIFIED automatically by the bot or, if that does not happen, manually by me.

Comment 7 Maciej Szulik 2021-12-14 14:53:37 UTC
4.6 is mostly out of support, and the EUS stream only accepts critical fixes (see https://access.redhat.com/support/policy/updates/openshift#dates), so I'm going to close this as WONTFIX.

