Bug 1878980 - Manila volumes cannot be unmounted after NFS driver pod restart
Summary: Manila volumes cannot be unmounted after NFS driver pod restart
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.5.z
Assignee: aos-storage-staff@redhat.com
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On: 1867152
Blocks:
 
Reported: 2020-09-15 06:43 UTC by Qin Ping
Modified: 2021-04-05 17:47 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1867152
Environment:
Last Closed: 2020-09-30 12:48:40 UTC
Target Upstream Version:
Embargoed:


Links
Github openshift/csi-driver-manila-operator pull 61 (closed): Bug 1878980: Use hostNetwork for the NFS CSI driver. Last updated 2021-01-14 08:53:12 UTC. (See the first sketch below.)
Github openshift/csi-driver-nfs pull 33 (closed): Bug 1878980: 4.5: <carry>: Umount volumes with force. Last updated 2021-01-14 08:53:12 UTC. (See the second sketch below.)
Red Hat Product Errata RHBA-2020:3768. Last updated 2020-09-30 12:48:45 UTC.
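
The operator-side fix (pull 61 above) runs the NFS CSI driver pods on the host network. Below is a minimal Go sketch of that idea, assuming hypothetical object names, labels, and an illustrative image; it is not the operator's actual manifest. The line that matters is HostNetwork: true, which makes NFS mounts originate from the host network namespace instead of a short-lived pod network namespace, so they stay reachable for unmount across driver pod restarts.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeDaemonSet builds a skeletal DaemonSet for the NFS CSI node plugin.
// All names, namespaces, and images here are illustrative placeholders.
func nodeDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"app": "csi-driver-nfs-node"} // hypothetical label
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "csi-driver-nfs-node",         // hypothetical name
			Namespace: "openshift-manila-csi-driver", // hypothetical namespace
		},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// The fix: with hostNetwork, NFS mount traffic is bound
					// to the host network namespace rather than the pod's,
					// so existing mounts do not break when the pod restarts.
					HostNetwork: true,
					Containers: []corev1.Container{{
						Name:  "csi-driver-nfs",
						Image: "example.com/csi-driver-nfs:tag", // illustrative
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println("hostNetwork:", nodeDaemonSet().Spec.Template.Spec.HostNetwork)
}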
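
The driver-side carry patch (pull 33 above) unmounts with force when an ordinary unmount cannot complete, which is the failure mode here: after the driver pod restarts, the NFS server may be unreachable from the mount's original network context and a plain umount hangs. The following is a hedged, Linux-only Go sketch of that technique using MNT_FORCE, the syscall equivalent of "umount -f"; the timeout value and mount path are invented for illustration, and this is not the driver's actual code.

package main

import (
	"fmt"
	"syscall"
	"time"
)

// forceUnmount first attempts a normal unmount; if that errors or does not
// return within the timeout (a hung NFS unmount can block indefinitely), it
// retries with MNT_FORCE, the equivalent of "umount -f <target>".
func forceUnmount(target string, timeout time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- syscall.Unmount(target, 0) }() // may block forever on a dead NFS server

	select {
	case err := <-done:
		if err == nil {
			return nil // clean unmount succeeded
		}
		// Normal unmount failed; fall through to the forced attempt.
	case <-time.After(timeout):
		// Normal unmount is hung; abandon it and force the unmount.
	}
	return syscall.Unmount(target, syscall.MNT_FORCE)
}

func main() {
	// Hypothetical kubelet target path for a Manila-provisioned NFS volume.
	target := "/var/lib/kubelet/pods/POD-UID/volumes/kubernetes.io~csi/PV-NAME/mount"
	fmt.Println("unmount:", forceUnmount(target, 10*time.Second))
}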

Comment 1 Jack Ottofaro 2020-09-15 15:51:25 UTC
We're asking the following questions to evaluate whether this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update that introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context, and the UpgradeBlocker flag has been added to this bug; it will be removed if the assessment indicates that this should not block upgrade edges. The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time
What is the impact?  Is it serious enough to warrant blocking edges?
  example: Up to 2 minute disruption in edge routing
  example: Up to 90 seconds of API downtime
  example: etcd loses quorum and you have to restore from backup
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Issue resolves itself after five minutes
  example: Admin uses oc to fix things
  example: Admin must SSH to hosts, restore from backups, or other non standard admin activities
Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
  example: No, it’s always been like this; we just never noticed
  example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1

Comment 7 errata-xmlrpc 2020-09-30 12:48:40 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.5.11 optional CSI driver Operators bug fix update) and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3768

Comment 8 W. Trevor King 2021-04-05 17:47:42 UTC
Removing UpgradeBlocker from this older bug to remove it from the suspect queue described in [1]. If you feel this bug still needs to be a suspect, please add the keyword again.

[1]: https://github.com/openshift/enhancements/pull/475

