Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1768487

Summary: Glusterblock RWO PV is not getting correct default storageclass
Product: OpenShift Container Platform
Reporter: Erik Nelson <ernelson>
Component: Migration Tooling
Assignee: Scott Seago <sseago>
Status: CLOSED ERRATA
QA Contact: Xin jiang <xjiang>
Severity: unspecified
Docs Contact: Scott Seago <sseago>
Priority: unspecified
Version: 4.2.0
CC: aclewett, dymurray, jmatthew, sregidor
Target Milestone: ---
Target Release: 4.2.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1772072 (view as bug list)
Environment:
Last Closed: 2019-12-11 22:36:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1772072
Bug Blocks:

Description Erik Nelson 2019-11-04 14:54:44 UTC
Description of problem:

A glusterfs (file) volume worked correctly for the RWO accessMode and defaulted to the cephrbd storageclass, which is the correct choice for RWO. However, when creating a plan for a glusterblock RWO volume, the default was gp2 (EBS), which is not correct; the default should also be the cephrbd storageclass, given that the PVC's accessMode was RWO.

Here is the PVC in question on the OCP 3.11 source cluster:

$ oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
mysql     Bound     pvc-915d75d8-fb24-11e9-8546-06cdb92c9cb6   5Gi        RWO            glusterfs-storage-block   2d

Comment 1 Scott Seago 2019-11-04 15:01:06 UTC
Could you also provide information on the storageclasses available on both src and destination? `oc get sc` should give both name and provisioner.

What is described above would happen if either the dest cluster did not have a storageclass with a provisioner ending with "rbd.csi.ceph.com" (unlikely since the glusterfs volume worked) or the src pvc's storageclass provisioner does not match either "kubernetes.io/glusterfs" or "gluster.org/glusterblock". The latter seems like the most likely explanation for what you're seeing. If neither of these provisioner issues matches what you're running into, then we'll need to dig more deeply.
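The two conditions described above can be sketched as follows. This is a hypothetical illustration with made-up helper names, not the actual mig-controller code; it shows the suffix match on the destination provisioner and the exact-string match on the source provisioner that was in effect at the time of this comment:

```go
package main

import (
	"fmt"
	"strings"
)

// destHasCephRBD reports whether any destination-cluster storageclass
// provisioner ends with "rbd.csi.ceph.com". (Hypothetical helper.)
func destHasCephRBD(provisioners []string) bool {
	for _, p := range provisioners {
		if strings.HasSuffix(p, "rbd.csi.ceph.com") {
			return true
		}
	}
	return false
}

// srcIsGluster reports whether the source PVC's storageclass provisioner
// exactly matches one of the known gluster provisioners -- the exact-match
// behavior described in this comment. (Hypothetical helper.)
func srcIsGluster(provisioner string) bool {
	return provisioner == "kubernetes.io/glusterfs" ||
		provisioner == "gluster.org/glusterblock"
}

func main() {
	fmt.Println(destHasCephRBD([]string{"openshift-storage.rbd.csi.ceph.com"})) // true
	// An exact match misses a namespaced provisioner string:
	fmt.Println(srcIsGluster("gluster.org/glusterblock-glusterfs")) // false
}
```

The second call shows why a renamed provisioner such as "gluster.org/glusterblock-glusterfs" would silently fail the source-side check, producing the behavior reported above.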

Comment 2 Scott Seago 2019-11-05 13:44:48 UTC
Based on a follow-on email discussion, it looks like the glusterblock provisioner has changed from "gluster.org/glusterblock" to "gluster.org/glusterblock-glusterfs". We need to update the code to check for both, and I also need to add an explicit check for the glusterblock metadata on the PV, which should catch this even if the provisioner name changes again -- basically a glusterblock equivalent of what we already have for glusterfs, the check that pv.Spec.Glusterfs is not nil. I'll just need to create another glusterblock volume and make sure I'm testing for the right thing.

Comment 3 Scott Seago 2019-11-05 14:04:33 UTC
A follow-on update. It looks like we need to match the prefix "gluster.org/glusterblock" at the beginning of the provisioner name, since users might namespace it with something appended after. We may need to do the same for glusterfs.

Comment 4 Scott Seago 2019-11-05 16:28:48 UTC
Fix is here: https://github.com/fusor/mig-controller/pull/361

We're now matching volumes with a storageclass whose provisioner begins with "gluster.org/glusterblock", so it will match "gluster.org/glusterblock" as well as "gluster.org/glusterblock-foo". I was unable to add an equivalent fallback (in case the provisioner is completely different) like I did with glusterfs, because instead of "pv.Spec.Glusterfs", we have "pv.Spec.Iscsi" for glusterblock, and I'm assuming that the presence of an iscsi volume source will not uniquely identify a glusterblock volume. The result is that if a customer has installed glusterblock with a nonstandard provisioner string (something other than "gluster.org/glusterblock" followed by a namespace), we will not automatically suggest ceph/rbd migration for those volumes. This can probably be handled as a documentation issue.
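The prefix matching described in the fix amounts to a single strings.HasPrefix check. A minimal sketch (hypothetical helper name, not the actual code from PR 361):

```go
package main

import (
	"fmt"
	"strings"
)

// isGlusterblockProvisioner reports whether a storageclass provisioner
// identifies a glusterblock volume. A prefix match is used because
// deployments may append a namespace to the provisioner name, e.g.
// "gluster.org/glusterblock-openshift-storage".
// (Hypothetical helper; the real check lives in mig-controller.)
func isGlusterblockProvisioner(provisioner string) bool {
	return strings.HasPrefix(provisioner, "gluster.org/glusterblock")
}

func main() {
	for _, p := range []string{
		"gluster.org/glusterblock",                   // matches
		"gluster.org/glusterblock-glusterfs",         // matches
		"gluster.org/glusterblock-openshift-storage", // matches
		"kubernetes.io/aws-ebs",                      // does not match
	} {
		fmt.Printf("%-45s %v\n", p, isGlusterblockProvisioner(p))
	}
}
```

As noted above, a completely different provisioner string would still be missed, since there is no PV-spec fallback equivalent to the pv.Spec.Glusterfs nil check.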

Comment 7 Sergio 2019-12-04 18:41:51 UTC
Verified on version 1.0.1, stored in the stage repository.

With a storage class that includes the namespace in the provisioner's name:

$ oc get sc glusterfs-storage-block
NAME                      PROVISIONER                                  AGE
glusterfs-storage-block   gluster.org/glusterblock-openshift-storage   4h


We get the correct default storageClass for the migration:

    pvc:
      accessModes:
      - ReadWriteOnce
      name: nginx-logs
      namespace: ng-gblock2def
    selection:
      action: copy
      copyMethod: filesystem
      storageClass: csi-rbd
    storageClass: glusterfs-storage-block

Comment 9 errata-xmlrpc 2019-12-11 22:36:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:4093