Bug 1957133 - Unable to attach Vsphere volume shows the error "failed to get canonical path"
Summary: Unable to attach Vsphere volume shows the error "failed to get canonical path"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.8
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.8.z
Assignee: Hemant Kumar
QA Contact: Wei Duan
URL:
Whiteboard:
Depends On: 1981477
Blocks: 1973766
 
Reported: 2021-05-05 08:45 UTC by Prasad Deshpande
Modified: 2021-10-28 12:12 UTC (History)
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1973766 1981477
Environment:
Last Closed: 2021-08-16 18:32:11 UTC
Target Upstream Version:
Embargoed:
prdeshpa: needinfo-




Links
System ID Private Priority Status Summary Last Updated
Github openshift kubernetes pull 854 0 None open Bug 1957133: do not throw error when we can't get canonical path 2021-07-13 12:49:24 UTC
Red Hat Product Errata RHBA-2021:3121 0 None None None 2021-08-16 18:32:28 UTC
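
The linked PR (openshift/kubernetes pull 854) changes the attach path so that a failed canonical-path lookup is no longer fatal. A minimal Go sketch of that pattern follows; the function names and the simulated lookup failure are hypothetical stand-ins, not the actual kubernetes code:

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalDatastorePath is a hypothetical stand-in for the vSphere
// canonical-path lookup. Here it simulates the reported failure mode:
// the lookup errors for datastores nested inside a storage folder or
// datastore cluster (paths containing "/").
func canonicalDatastorePath(path string) (string, error) {
	if strings.Contains(path, "/") {
		return "", fmt.Errorf("failed to get canonical path for %q", path)
	}
	return path, nil
}

// resolveDatastorePath applies the fix pattern described by the PR
// title: instead of propagating the error (which aborted the volume
// attach), fall back to the user-supplied path.
func resolveDatastorePath(path string) string {
	canonical, err := canonicalDatastorePath(path)
	if err != nil {
		// Before the fix this error failed the attach; now the
		// original path is used as-is.
		return path
	}
	return canonical
}

func main() {
	fmt.Println(resolveDatastorePath("datastore3"))
	fmt.Println(resolveDatastorePath("qe/datastore3"))
}
```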

Comment 22 W. Trevor King 2021-06-18 18:28:06 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z.  The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way.  Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug.  When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label.  The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact?  Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or other non standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

Comment 23 Hemant Kumar 2021-06-18 18:40:38 UTC
1. Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?

A. Customers upgrading from 4.6.z to 4.8.z (all releases of 4.7 are broken as well) who use datastores that are inside a datastore cluster (an unsupported configuration) or inside a storage folder (a supported configuration).

2. What is the impact?  Is it serious enough to warrant blocking edges?
A. Prometheus, Alertmanager, or any pods that use PVs will not be able to come up, unless the PV is re-created (data loss) or carefully deleted and re-created in such a way that it binds to the same disk (possible but complicated).

3. How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
A. It is possible to work around the bug by removing the storage folder or datastore cluster name from the datastore name and specifying just the bare datastore name. This requires re-creating PVs: for existing PVs, the customer must carefully delete and re-create them in such a way that each PV binds to the same disk. There is a risk of data loss during deletion of any PV.

4. Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
A. Yes, this is a regression from 4.6; it worked fine in 4.6.
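
The workaround in the answer to question 3 amounts to referencing the datastore by its bare name wherever it is specified (the `datastore` parameter of a StorageClass, or `default-datastore` in cm/cloud-provider-config). A hypothetical StorageClass illustrating the shape; the class name is a placeholder, while the `kubernetes.io/vsphere-volume` provisioner and its `datastore`/`diskformat` parameters are standard:

```yaml
# Hypothetical example of the workaround: reference the datastore by
# bare name ("datastore3") instead of a folder-qualified path
# ("qe/datastore3"). Only newly provisioned PVs benefit; existing PVs
# still need the careful delete/re-create described above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin-bare-datastore
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: datastore3
```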

Comment 25 W. Trevor King 2021-06-22 16:03:42 UTC
We are not considering this as an UpgradeBlocker because we don't know how many clusters are impacted, as this information isn't submitted via Telemetry/Insights. However, if we see more customer cases we might change our stance.

Comment 28 Wei Duan 2021-08-11 11:30:31 UTC
Verified pass on 4.8.0-0.nightly-2021-08-09-135211


With default-datastore = "qe/datastore3" set in cm/cloud-provider-config, PV provisioning succeeds and the pod is running.
$ govc ls /Datacenter/datastore/qe
/Datacenter/datastore/qe/datastore3

$ oc get pv pvc-d0001754-7b33-46d6-ac16-6846f6dd50fa -ojson | jq .spec.vsphereVolume.volumePath
"[qe/datastore3] kubevols/reliability01-h7kh5-dy-pvc-d0001754-7b33-46d6-ac16-6846f6dd50fa.vmdk"

Comment 29 Wei Duan 2021-08-11 11:36:24 UTC
Also verified by:

With default-datastore = "/Datacenter/datastore/qe/datastore3" set in cm/cloud-provider-config, PV provisioning succeeds and the pod is running.

$ oc get pv pvc-d146860a-17f4-4785-988b-b71e1f21a50a -ojson | jq .spec.vsphereVolume.volumePath
"[/Datacenter/datastore/qe/datastore3] kubevols/reliability01-h7kh5-dy-pvc-d146860a-17f4-4785-988b-b71e1f21a50a.vmdk"
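
For reference, both verified settings live in the `[Workspace]` section of the vSphere cloud config carried by cm/cloud-provider-config. A sketch of that section, in which only the default-datastore values come from the verification above and the server/datacenter/folder values are placeholders:

```ini
[Workspace]
# Placeholder connection details -- not taken from this bug report.
server = vcenter.example.com
datacenter = Datacenter
folder = /Datacenter/vm/placeholder-folder
# Either form was verified on 4.8.0-0.nightly-2021-08-09-135211:
default-datastore = "qe/datastore3"
# default-datastore = "/Datacenter/datastore/qe/datastore3"
```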

Comment 31 errata-xmlrpc 2021-08-16 18:32:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.8.5 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3121

Comment 32 W. Trevor King 2021-08-18 22:31:07 UTC
Dropping ImpactStatementProposed per comment 25.

