Bug 1957133
Summary: | Unable to attach vSphere volume shows the error "failed to get canonical path" | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Prasad Deshpande <prdeshpa>
Component: | Storage | Assignee: | Hemant Kumar <hekumar>
Storage sub component: | Storage | QA Contact: | Wei Duan <wduan>
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | urgent | |
Priority: | urgent | CC: | acarlos, aos-bugs, hekumar, jocolema, jsafrane, mrbraga, wking
Version: | 4.8 | Keywords: | Regression, Reopened, Upgrades
Target Milestone: | --- | Flags: | prdeshpa: needinfo-
Target Release: | 4.8.z | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | |
: | 1973766 1981477 (view as bug list) | Environment: |
Last Closed: | 2021-08-16 18:32:11 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1981477 | |
Bug Blocks: | 1973766 | |
Comment 22
W. Trevor King
2021-06-18 18:28:06 UTC
1. Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?

A. Customers upgrading from 4.6.z to 4.8.z (all releases of 4.7 are broken as well) who use datastores that sit inside a datastore cluster (an unsupported configuration) or inside a storage folder (a supported configuration).

2. What is the impact? Is it serious enough to warrant blocking edges?

A. Prometheus, Alertmanager, or any other pods that use PVs will not be able to come up unless the PVs are re-created (data loss) or carefully deleted and re-created in such a way that they bind to the same disk (possible, but complicated).

3. How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?

A. It is possible to work around the bug by removing the storage folder or datastore cluster name from the datastore setting and specifying just the datastore name, but this requires re-creation of PVs. For existing PVs, the customer must carefully delete and re-create each PV in such a way that the new PV binds to the same disk; there is a risk of data loss whenever a PV is deleted. (Illustrative sketches of both steps are included at the end of this comment.)

4. Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?

A. This is a regression from 4.6; it worked fine in 4.6.

We are not considering this an UpgradeBlocker because we don't know how many clusters are impacted, as this information isn't submitted via Telemetry/Insights. However, if we see more customer cases we might change our stance.

Verified pass on 4.8.0-0.nightly-2021-08-09-135211.

Setting default-datastore = "qe/datastore3" in cm/cloud-provider-config: PV provisioning succeeds and the pod is running.

$ govc ls /Datacenter/datastore/qe
/Datacenter/datastore/qe/datastore3
$ oc get pv pvc-d0001754-7b33-46d6-ac16-6846f6dd50fa -ojson | jq .spec.vsphereVolume.volumePath
"[qe/datastore3] kubevols/reliability01-h7kh5-dy-pvc-d0001754-7b33-46d6-ac16-6846f6dd50fa.vmdk"

Also verified by setting default-datastore = "/Datacenter/datastore/qe/datastore3" in cm/cloud-provider-config: PV provisioning succeeds and the pod is running.

$ oc get pv pvc-d146860a-17f4-4785-988b-b71e1f21a50a -ojson | jq .spec.vsphereVolume.volumePath
"[/Datacenter/datastore/qe/datastore3] kubevols/reliability01-h7kh5-dy-pvc-d146860a-17f4-4785-988b-b71e1f21a50a.vmdk"

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.8.5 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3121

Dropping ImpactStatementProposed per comment 25.
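
For illustration, a minimal sketch of the workaround described in the remediation answer above: setting default-datastore to the plain datastore name (no datastore-cluster or folder prefix) in the cluster's vSphere cloud provider configuration. This assumes the usual OpenShift 4.x layout where that configuration lives in the cloud-provider-config ConfigMap in the openshift-config namespace; the server, datacenter, folder, and datastore3 values below are placeholders, not taken from this bug.

$ oc edit configmap cloud-provider-config -n openshift-config

    # Relevant portion of the INI payload under the ConfigMap's "config" key
    # (placeholder values; only default-datastore matters for the workaround):
    [Workspace]
        server            = vcenter.example.com
        datacenter        = Datacenter
        default-datastore = datastore3    # plain datastore name, no "qe/" folder prefix
        folder            = /Datacenter/vm/example-cluster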
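
Similarly, a sketch of how an existing PV could be deleted and re-created so that it binds to the same disk and no data is lost. The PV/PVC names, size, and filesystem type are examples (the PV name is borrowed from the verification output above); the essential point is that the replacement PV's spec.vsphereVolume.volumePath references the exact same .vmdk file, written with the plain datastore name.

    # 1. Keep the backing VMDK when the claim/volume is removed.
    $ oc patch pv pvc-d0001754-7b33-46d6-ac16-6846f6dd50fa \
        -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

    # 2. Delete the PVC and the PV; the Retain policy leaves the .vmdk on the datastore.
    $ oc delete pvc example-claim -n example-namespace
    $ oc delete pv pvc-d0001754-7b33-46d6-ac16-6846f6dd50fa

    # 3. Re-create the PV against the same VMDK, using only the datastore name,
    #    then create (or pre-bind) a PVC that matches it.
    $ cat <<'EOF' | oc apply -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: recovered-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        fsType: ext4
        volumePath: "[datastore3] kubevols/reliability01-h7kh5-dy-pvc-d0001754-7b33-46d6-ac16-6846f6dd50fa.vmdk"
    EOF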