Bug 1768487
| Summary: | Glusterblock RWO PV is not getting correct default storageclass | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Erik Nelson <ernelson> |
| Component: | Migration Tooling | Assignee: | Scott Seago <sseago> |
| Status: | CLOSED ERRATA | QA Contact: | Xin jiang <xjiang> |
| Severity: | unspecified | Docs Contact: | Scott Seago <sseago> |
| Priority: | unspecified | | |
| Version: | 4.2.0 | CC: | aclewett, dymurray, jmatthew, sregidor |
| Target Milestone: | --- | | |
| Target Release: | 4.2.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| | 1772072 (view as bug list) | Environment: | |
| Last Closed: | 2019-12-11 22:36:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1772072 | | |
| Bug Blocks: | | | |
|
Description (Erik Nelson, 2019-11-04 14:54:44 UTC)
Could you also provide information on the storage classes available on both the source and destination clusters? `oc get sc` should give both name and provisioner. What is described above would happen if either the destination cluster did not have a storage class with a provisioner ending with "rbd.csi.ceph.com" (unlikely, since the glusterfs volume worked) or the source PVC's storage class provisioner does not match either "kubernetes.io/glusterfs" or "gluster.org/glusterblock". The latter seems like the most likely explanation for what you're seeing. If neither of these provisioner issues matches what you're running into, then we'll need to dig more deeply.

Based on a follow-on email discussion, it looks like the glusterblock provisioner has changed from "gluster.org/glusterblock" to "gluster.org/glusterblock-glusterfs". We need to update the code to check for both, and I also need to add an explicit check for the glusterblock metadata on the PV, which should catch this even if the provisioner string changes again: essentially a glusterblock equivalent of what we already have for glusterfs, the check that pv.Spec.Glusterfs is not nil. I'll just need to create another glusterblock volume and make sure I'm testing for the right thing.

A follow-on update: it looks like we need to match the substring "gluster.org/glusterblock" at the beginning of the provisioner, expecting that users might namespace it with something appended. We may need to do the same with glusterfs.

Fix is here: https://github.com/fusor/mig-controller/pull/361. We're now matching volumes with a storage class whose provisioner begins with "gluster.org/glusterblock", so it will match "gluster.org/glusterblock" as well as "gluster.org/glusterblock-foo".
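The prefix match described in the fix can be sketched as below. Note this is an illustration, not the actual mig-controller code, and `isGlusterblockProvisioner` is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strings"
)

// isGlusterblockProvisioner reports whether a storage class provisioner
// identifies a glusterblock volume. Matching on the prefix covers both the
// plain "gluster.org/glusterblock" and namespaced variants such as
// "gluster.org/glusterblock-glusterfs".
func isGlusterblockProvisioner(provisioner string) bool {
	return strings.HasPrefix(provisioner, "gluster.org/glusterblock")
}

func main() {
	for _, p := range []string{
		"gluster.org/glusterblock",
		"gluster.org/glusterblock-glusterfs",
		"gluster.org/glusterblock-openshift-storage",
		"kubernetes.io/glusterfs",
	} {
		fmt.Printf("%-45s %v\n", p, isGlusterblockProvisioner(p))
	}
}
```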
I was unable to add an equivalent fallback (in case the provisioner is completely different) like I did with glusterfs, because instead of pv.Spec.Glusterfs we have pv.Spec.Iscsi for glusterblock, and I'm assuming that the presence of an iSCSI volume source does not uniquely identify a glusterblock volume. The result is that if a customer has installed glusterblock with a nonstandard provisioner string (something other than "gluster.org/glusterblock" followed by a namespace), we will not automatically suggest ceph/rbd migration for those volumes. This can probably be handled as a documentation issue.

Verified on the 1.0.1 version stored on the stage repository.
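The glusterfs fallback referred to above, and why it cannot be mirrored for glusterblock, can be illustrated with simplified stand-ins for the corev1 types (the real fields live in k8s.io/api/core/v1; these structs and the helper name are assumptions for the sketch):

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 PersistentVolume volume sources
// discussed above.
type GlusterfsSource struct{ EndpointsName, Path string }
type ISCSISource struct{ TargetPortal, IQN string }

type PVSpec struct {
	Glusterfs *GlusterfsSource
	ISCSI     *ISCSISource
}

// isGlusterfsPV is the fallback that works for glusterfs: a non-nil
// Glusterfs volume source uniquely identifies the volume type. No such
// fallback exists for glusterblock, because an ISCSI source could belong
// to any iSCSI-backed volume, not just glusterblock.
func isGlusterfsPV(spec PVSpec) bool {
	return spec.Glusterfs != nil
}

func main() {
	gfs := PVSpec{Glusterfs: &GlusterfsSource{EndpointsName: "glusterfs-cluster", Path: "vol1"}}
	blk := PVSpec{ISCSI: &ISCSISource{TargetPortal: "10.0.0.1:3260"}}
	// The glusterfs PV is identified by its volume source; the
	// iSCSI-backed PV cannot be identified as glusterblock this way.
	fmt.Println(isGlusterfsPV(gfs), isGlusterfsPV(blk))
}
```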
With a storage class that includes the namespace in the provisioner's name:

```
$ oc get sc glusterfs-storage-block
NAME                      PROVISIONER                                  AGE
glusterfs-storage-block   gluster.org/glusterblock-openshift-storage   4h
```

we get the right default value for the migration:

```yaml
pvc:
  accessModes:
    - ReadWriteOnce
  name: nginx-logs
  namespace: ng-gblock2def
selection:
  action: copy
  copyMethod: filesystem
  storageClass: csi-rbd
storageClass: glusterfs-storage-block
```
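The suggested target class (csi-rbd) follows from the rule stated earlier: pick a destination storage class whose provisioner ends with "rbd.csi.ceph.com". A minimal sketch of that suffix match, with `suggestCephRBD` and the trimmed-down `StorageClass` struct as illustrative assumptions (the real type is storagev1.StorageClass from k8s.io/api/storage/v1):

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal storage-class record for the sketch.
type StorageClass struct {
	Name        string
	Provisioner string
}

// suggestCephRBD returns the first destination storage class whose
// provisioner ends with "rbd.csi.ceph.com", covering namespaced
// provisioners such as "openshift-storage.rbd.csi.ceph.com".
func suggestCephRBD(classes []StorageClass) (string, bool) {
	for _, sc := range classes {
		if strings.HasSuffix(sc.Provisioner, "rbd.csi.ceph.com") {
			return sc.Name, true
		}
	}
	return "", false
}

func main() {
	dest := []StorageClass{
		{Name: "gp2", Provisioner: "kubernetes.io/aws-ebs"},
		{Name: "csi-rbd", Provisioner: "openshift-storage.rbd.csi.ceph.com"},
	}
	name, ok := suggestCephRBD(dest)
	fmt.Println(name, ok)
}
```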
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:4093