Bug 1940806 - [4.7z] CNO: nodes and masters are upgrading simultaneously
Summary: [4.7z] CNO: nodes and masters are upgrading simultaneously
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.7.z
Assignee: Federico Paolinelli
QA Contact: Anurag saxena
URL:
Whiteboard:
Depends On: 1939060
Blocks:
 
Reported: 2021-03-19 09:08 UTC by OpenShift BugZilla Robot
Modified: 2021-04-05 13:56 UTC
CC List: 10 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-05 13:56:19 UTC
Target Upstream Version:
Embargoed:


Attachments: cno logs (attachment 1763387)


Links
Github openshift/cluster-network-operator pull 1029 (open): [release-4.7] Bug 1940806: OVN Upgrade: fix upgrade order of node and master (last updated 2021-03-23 16:59:29 UTC)
Red Hat Product Errata RHSA-2021:1005 (last updated 2021-04-05 13:56:36 UTC)

Description OpenShift BugZilla Robot 2021-03-19 09:08:36 UTC
+++ This bug was initially created as a clone of Bug #1939060 +++

Description of problem:
While upgrading ovn-kubernetes via CNO, I can see workers and masters both getting upgraded simultaneously. The method I used is:
cat override-cno-control-patch.yaml

- op: add
  path: /spec/overrides
  value: []
- op: add
  path: /spec/overrides/-
  value:
    kind: Deployment
    name: network-operator
    group: operator.openshift.io
    namespace: openshift-network-operator
    unmanaged: true

# set the network-operator deployment to unmanaged via ClusterVersion overrides
oc patch --type=json -p "$(cat ~/override-cno-control-patch.yaml)" clusterversion version
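
One way to confirm the override was accepted (this is just a sanity check; it assumes the field names used in the patch above):

# the network-operator entry should now appear under spec.overrides
oc get clusterversion version -o jsonpath='{.spec.overrides}'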

cat override-ovn-kubernetes-image-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: network-operator
        env:
        - name: OVN_IMAGE
          # build from images/Dockerfile.bf and https://github.com/ovn-org/ovn-kubernetes/pull/2005
          value: quay.io/zshi/ovn-daemonset:openshift-454-3

# overrides ovn-kubernetes image
oc patch -p "$(cat ~/override-ovn-kubernetes-image-patch.yaml)" deploy network-operator -n openshift-network-operator
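
To see the rollout ordering (or lack of it) after the image override, one option is to watch the daemonsets; this assumes the default ovnkube-master and ovnkube-node daemonset names in the openshift-ovn-kubernetes namespace:

# with the bug present, ovnkube-master and ovnkube-node start rolling new pods at the same time
oc -n openshift-ovn-kubernetes get daemonset -w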

--- Additional comment from trozet on 2021-03-15 14:17:17 UTC ---

Note: I saw this behavior on 4.8, but I assume it also exists in 4.7, so I'm targeting that version.

--- Additional comment from trozet on 2021-03-15 14:18:27 UTC ---

Created attachment 1763387 [details]
cno logs

--- Additional comment from fpaoline on 2021-03-17 17:12:28 UTC ---

Update: I suspect this happens because this is not a real version upgrade; only the image was changed. The code path is here:
https://github.com/openshift/cluster-network-operator/pull/961/files#diff-3a72fb129233dbf79f270cfcc408ec08f67a3b21e8ecf4fba9ae8d8dd849a83eR445
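
If it helps, a quick way to double-check that only the image env was overridden on the operator deployment (deployment and namespace names as used in the steps above; the exact env contents may differ per release):

# dump the container env of the network-operator deployment; only OVN_IMAGE should have been changed by the patch
oc -n openshift-network-operator get deploy network-operator -o jsonpath='{.spec.template.spec.containers[0].env}'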

Comment 3 zhaozhanqi 2021-03-29 08:58:28 UTC
Upgraded from 4.6.23 to 4.7.0-0.nightly-2021-03-27-082615 successfully with the OVN plugin.
Moving this to 'verified'.
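
For reference, one way to spot-check the fixed ordering during such an upgrade (assuming the default daemonset names):

# ovnkube-master should report a complete rollout before ovnkube-node starts rolling
oc -n openshift-ovn-kubernetes rollout status daemonset/ovnkube-master
oc -n openshift-ovn-kubernetes rollout status daemonset/ovnkube-node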

Comment 4 Lalatendu Mohanty 2021-03-30 16:22:07 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the UpgradeBlocker flag has been added to this bug. It will be removed if the assessment indicates that this should not block upgrade edges. The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time
What is the impact?  Is it serious enough to warrant blocking edges?
  example: Up to 2 minute disruption in edge routing
  example: Up to 90 seconds of API downtime
  example: etcd loses quorum and you have to restore from backup
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Issue resolves itself after five minutes
  example: Admin uses oc to fix things
  example: Admin must SSH to hosts, restore from backups, or other non standard admin activities
Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
  example: No, it’s always been like this; we just never noticed
  example: Yes, from 4.y.z to 4.y+1.z or from 4.y.z to 4.y.z+1

Comment 6 Dan Williams 2021-03-30 17:45:46 UTC
(In reply to Lalatendu Mohanty from comment #4)
> We're asking the following questions to evaluate whether or not this bug
> warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The
> ultimate goal is to avoid delivering an update which introduces new risk or
> reduces cluster functionality in any way. Sample answers are provided to
> give more context and the UpgradeBlocker flag has been added to this bug. It
> will be removed if the assessment indicates that this should not block
> upgrade edges. The expectation is that the assignee answers these questions.
> 
> Who is impacted?  If we have to block upgrade edges based on this issue,
> which edges would need blocking?

This fixes an issue upgrading from 4.6 -> 4.7 for customers using the ovn-kubernetes network plugin (which is not the default plugin yet).

4.7 -> 4.7 is *NOT* problematic.

> What is the impact?  Is it serious enough to warrant blocking edges?

Nodes whose ovnkube-node pod has not yet been upgraded, while the ovnkube-master pods *have* already been upgraded, may experience connection failures to Services, especially Services with HostNetwork endpoints.

> How involved is remediation (even moderately serious impacts might be
> acceptable if they are easy to mitigate)?

The problem will resolve itself after all ovnkube-node pods are upgraded, but the interruption during the upgrade may cause disruption to the cluster, or it might cause the upgrade to fail if non-host-network pods are scheduled on non-upgraded nodes.
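
The simplest way to watch for that to complete (assuming the default daemonset name) is something like:

# blocks until every ovnkube-node pod has been rolled to the new version
oc -n openshift-ovn-kubernetes rollout status daemonset/ovnkube-node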

> Is this a regression (if all previous versions were also vulnerable,
> updating to the new, vulnerable version does not increase exposure)?

4.6 -> 4.7 is affected. 4.7 -> 4.7 is *not* affected. Given that we have always upgraded masters before nodes in ovnkube, we just got lucky before. So I guess technically a regression from the user point of view.

Comment 8 W. Trevor King 2021-03-30 19:59:12 UTC
Based on comment 6, 4.7 -> 4.7 is not impacted.  And 4.6 -> 4.7 is blocked already on bug 1935539 and the vSphere HW thing.  This bugfix is beating the fix for the bug 1935539 series into 4.7.z.  So we don't have to decide if we consider this an update-blocker, because the edges it impacts are already blocked.  Dropping UpgradeBlocker.

Comment 10 errata-xmlrpc 2021-04-05 13:56:19 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.5 security and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1005

