Bug 1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present
Summary: Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image Registry
Version: 4.8
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.8.0
Assignee: Oleg Bulatov
QA Contact: Wenjing Zheng
URL:
Whiteboard:
Duplicates: 1944768 1945458 (view as bug list)
Depends On:
Blocks:
 
Reported: 2021-03-30 15:43 UTC by Prashanth Sundararaman
Modified: 2021-07-27 22:56 UTC
CC List: 10 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 22:56:34 UTC
Target Upstream Version:
Embargoed:




Links:
Github openshift/cluster-image-registry-operator pull 676 (open): Bug 1944762: Allow disruptions when operand has only one replica (last updated 2021-03-30 20:37:56 UTC)
Red Hat Product Errata RHSA-2021:2438 (last updated 2021-07-27 22:56:56 UTC)

Description Prashanth Sundararaman 2021-03-30 15:43:14 UTC
Description of problem:
During an OCP upgrade from 4.7 to 4.8, I noticed that one of the worker nodes was stuck at "Ready,SchedulingDisabled". The machine-config-daemon logs showed many repetitions of:

E0330 12:39:45.132218  167828 daemon.go:329] error when evicting pods/"image-registry-cfb9c9c7c-2sq54" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.

Version-Release number of selected component (if applicable):
OCP 4.8

How reproducible:
Reproducible every time on a cluster with a single image registry replica.


After talking to the image-registry folks, it looks like the recent change that added a PDB to the image registry deployment caused this issue; the non-HA (single-replica) case needs to be handled.

On libvirt deployments there is no storage backend and we use emptyDir, so we hit this issue.
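
For reference, the conflict can be seen directly on an affected cluster. A minimal sketch, assuming the operator-created PDB uses minAvailable: 1 (which would match the eviction error above); the deployment name is taken from the pod name in the log:

# With one replica and minAvailable: 1, the PDB's allowed disruptions
# is 0, so every eviction attempt during the drain is rejected.
oc get poddisruptionbudget -n openshift-image-registry

# Confirm the registry deployment is running only a single replica.
oc get deployment image-registry -n openshift-image-registry -o jsonpath='{.spec.replicas}'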

Comment 1 Lalatendu Mohanty 2021-03-30 20:11:09 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the UpgradeBlocker flag has been added to this bug. It will be removed if the assessment indicates that this should not block upgrade edges. The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time
What is the impact?  Is it serious enough to warrant blocking edges?
  example: Up to 2 minute disruption in edge routing
  example: Up to 90 seconds of API downtime
  example: etcd loses quorum and you have to restore from backup
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Issue resolves itself after five minutes
  example: Admin uses oc to fix things
  example: Admin must SSH to hosts, restore from backups, or other non standard admin activities
Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
  example: No, it's always been like this; we just never noticed
  example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1

Comment 2 Lalatendu Mohanty 2021-03-30 20:12:27 UTC
Increasing the bug's severity to urgent, as we need to know whether this can impact 4.6.z update edges to 4.7.z.

Comment 3 Prashanth Sundararaman 2021-03-30 20:24:13 UTC
My understanding is this:

This bug only affects upgrades to 4.8 because of a recent change: https://github.com/openshift/cluster-image-registry-operator/pull/671

Based on https://github.com/openshift/cluster-image-registry-operator/blob/master/pkg/storage/storage.go#L140, the affected platforms are anything other than AWS, GCP, and Azure.

The workaround would be to edit configs.imageregistry and increase replicas to 2 before starting the upgrade.
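
A minimal sketch of that workaround with oc (merge-patching the operator's cluster config resource; verify the current replica count and storage configuration on your cluster first):

# Scale the registry to two replicas before starting the upgrade so
# the PDB can tolerate one eviction during the node drain.
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"replicas":2}}'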

Comment 4 W. Trevor King 2021-03-30 20:28:52 UTC
Based on comment 3, I'm going to drop UpgradeBlocker and add blocker+.

If this doesn't seem like it will get fixed before 4.8 GAs and folks are OK with that, move to blocker-, restore UpgradeBlocker, and go into more detail in the "What is the impact?" answer so we can judge blocker-ness.

Comment 5 W. Trevor King 2021-03-31 23:16:03 UTC
Adding the relevant test-case string so Sippy associates this bug with the failure, although the test-case is very broad and could fail for many orthogonal reasons.

Comment 6 W. Trevor King 2021-03-31 23:16:16 UTC
*** Bug 1944768 has been marked as a duplicate of this bug. ***

Comment 7 W. Trevor King 2021-03-31 23:18:24 UTC
PDBs for the registry are very new (bug 1939731), so no impact on 4.7 or earlier.

Comment 8 W. Trevor King 2021-04-01 04:45:00 UTC
*** Bug 1945458 has been marked as a duplicate of this bug. ***

Comment 10 Prashanth Sundararaman 2021-04-01 15:08:26 UTC
Tested upgrades with the latest nightlies and they work as expected.
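
For anyone re-checking this manually, a minimal verification sketch (the node name is a placeholder; note the emptyDir flag is spelled --delete-local-data on older oc builds):

# With the fix, draining the worker that hosts the single registry
# replica should complete instead of looping on PDB eviction errors.
oc adm drain <worker-node> --ignore-daemonsets --delete-emptydir-data
oc adm uncordon <worker-node>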

Comment 13 errata-xmlrpc 2021-07-27 22:56:34 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

