Bug 1658913 - Allow creation of PVC with long names ( > 45 chars) in GlusterFS driver of OCS
Summary: Allow creation of PVC with long names ( > 45 chars) in GlusterFS driver of OCS
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: kubernetes
Version: ocs-3.11
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: OCS 3.11.z Batch Update 4
Assignee: Humble Chirammal
QA Contact: Rachael
URL:
Whiteboard:
Depends On: 1707789
Blocks:
 
Reported: 2018-12-13 07:23 UTC by vinutha
Modified: 2019-12-03 10:00 UTC
CC: 6 users

Fixed In Version: atomic-openshift-3.11.123-1.git.0.4bd9283.el7.x86_64.rpm
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-30 12:50:47 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:3262 (last updated 2019-10-30 12:50:49 UTC)

Description vinutha 2018-12-13 07:23:54 UTC
Description of problem:

+++++++ This bug is created based on bug https://bugzilla.redhat.com/show_bug.cgi?id=1622493#c49 +++++++++++++++++++

The above bug handled the length restriction of the PVC endpoint to be under 63 characters. If we create a PVC with a name longer than 45 characters, glusterfs will in turn create an endpoint name of the form glusterfs-dynamic-<claimname>, which will eventually fail because the resulting name exceeds 63 characters, the maximum allowed for a DNS label.
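As an illustration of the arithmetic (not taken from the bug report itself): the `glusterfs-dynamic-` prefix is 18 characters long, so 63 - 18 = 45 is the longest claim name that still yields a valid endpoint name. The claim names below are the ones used in the reproduction steps.

```shell
# Illustrative only: why a 45-character claim name is the tipping point.
prefix="glusterfs-dynamic-"                                   # 18 characters
claim="openshift-sample-postgresql-another-feature-abc-xyz"   # 51 characters
ep="${prefix}${claim}"
echo "prefix=${#prefix} claim=${#claim} endpoint=${#ep}"      # endpoint=69 > 63
```

Any claim name of 46 characters or more pushes the derived endpoint name past the 63-character DNS label limit enforced by Kubernetes metadata validation.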

Version-Release number of selected component (if applicable):

# rpm -qa | grep openshift
openshift-ansible-docs-3.11.43-1.git.0.fa69a02.el7.noarch
atomic-openshift-excluder-3.11.43-1.git.0.647ac05.el7.noarch
atomic-openshift-hyperkube-3.11.43-1.git.0.647ac05.el7.x86_64
atomic-openshift-node-3.11.43-1.git.0.647ac05.el7.x86_64
openshift-ansible-playbooks-3.11.43-1.git.0.fa69a02.el7.noarch
openshift-ansible-3.11.43-1.git.0.fa69a02.el7.noarch
atomic-openshift-clients-3.11.43-1.git.0.647ac05.el7.x86_64
atomic-openshift-3.11.43-1.git.0.647ac05.el7.x86_64
openshift-ansible-roles-3.11.43-1.git.0.fa69a02.el7.noarch
atomic-openshift-docker-excluder-3.11.43-1.git.0.647ac05.el7.noarch


How reproducible:
Always

Steps to Reproduce:
1. Created two file PVCs with names >45 characters. The PVCs were successfully Bound.

# oc get pvc 
NAME                                                  STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
openshift-sample-postgresql-another-feature-abc       Bound     pvc-332bda3f-f92b-11e8-b816-005056a5b276   2Gi        RWO            glusterfs-storage   15m
openshift-sample-postgresql-another-feature-abc-xyz   Bound     pvc-4e400b22-f92b-11e8-b816-005056a5b276   2Gi        RWO            glusterfs-storage   14m


# heketi-cli  volume list 
Id:18597c1326cbe9af0b77c228f04d492f    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_18597c1326cbe9af0b77c228f04d492f [block]
Id:6c69fb9f8584e06240e7fd555dc10b07    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_6c69fb9f8584e06240e7fd555dc10b07
Id:7670de9f4adad53fd04de0aa5991c7c1    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_7670de9f4adad53fd04de0aa5991c7c1
Id:b84fb15e049c39aae69b9f1746734762    Cluster:71d18c877d8828c4271a648121decda0    Name:heketidbstorage




2. Created a file PVC with a name >63 characters. The PVC creation failed as expected with the below error:

# oc describe pvc openshift-sample-postgresql-another-feature-abc-xyz-openshift-123
Name:          openshift-sample-postgresql-another-feature-abc-xyz-openshift-123
Namespace:     glusterfs
StorageClass:  glusterfs-storage
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class=glusterfs-storage
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type     Reason              Age                 From                         Message
  ----     ------              ----                ----                         -------
  Warning  ProvisioningFailed  11m (x53 over 36m)  persistentvolume-controller  Failed to provision volume with StorageClass "glusterfs-storage": failed to create volume: failed to create endpoint/service glusterfs/glusterfs-dynamic-801ec2b9-f92d-11e8-b816-005056a5b276: failed to create endpoint: Endpoints "glusterfs-dynamic-801ec2b9-f92d-11e8-b816-005056a5b276" is invalid: metadata.labels: Invalid value: "openshift-sample-postgresql-another-feature-abc-xyz-openshift-123": must be no more than 63 characters



# oc get pvc 
NAME                                                                STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
openshift-sample-postgresql-another-feature-abc                     Bound     pvc-332bda3f-f92b-11e8-b816-005056a5b276   2Gi        RWO            glusterfs-storage   52m
openshift-sample-postgresql-another-feature-abc-xyz                 Bound     pvc-4e400b22-f92b-11e8-b816-005056a5b276   2Gi        RWO            glusterfs-storage   51m
openshift-sample-postgresql-another-feature-abc-xyz-openshift-123   Pending                                                                        glusterfs-storage   36m


# oc get ep
NAME                                                     ENDPOINTS                                              AGE
glusterfs-dynamic-332bda3f-f92b-11e8-b816-005056a5b276   10.70.46.20:1,10.70.46.72:1,10.70.47.3:1 + 1 more...   23m
glusterfs-dynamic-4e400b22-f92b-11e8-b816-005056a5b276   10.70.46.20:1,10.70.46.72:1,10.70.47.3:1 + 1 more...   23m
heketi-db-storage-endpoints                              10.70.46.20:1,10.70.46.72:1,10.70.47.3:1 + 1 more...   2d
heketi-storage                                           10.131.0.7:8080                                        2d


----------------------------------------------------------------------------

However, it was observed that volume requests for the Pending PVC keep being sent to heketi, so the corresponding volume keeps getting created and deleted.

heketi volume list output captured during the volume creates and deletes:

# heketi-cli volume list
Id:18597c1326cbe9af0b77c228f04d492f    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_18597c1326cbe9af0b77c228f04d492f [block]
Id:47ffce266dd0a7f3d22eb237107f6e4b    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_47ffce266dd0a7f3d22eb237107f6e4b
Id:6c69fb9f8584e06240e7fd555dc10b07    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_6c69fb9f8584e06240e7fd555dc10b07
Id:7670de9f4adad53fd04de0aa5991c7c1    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_7670de9f4adad53fd04de0aa5991c7c1
Id:b84fb15e049c39aae69b9f1746734762    Cluster:71d18c877d8828c4271a648121decda0    Name:heketidbstorage



# heketi-cli volume list
Id:18597c1326cbe9af0b77c228f04d492f    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_18597c1326cbe9af0b77c228f04d492f [block]
Id:6c69fb9f8584e06240e7fd555dc10b07    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_6c69fb9f8584e06240e7fd555dc10b07
Id:7670de9f4adad53fd04de0aa5991c7c1    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_7670de9f4adad53fd04de0aa5991c7c1
Id:b84fb15e049c39aae69b9f1746734762    Cluster:71d18c877d8828c4271a648121decda0    Name:heketidbstorage
Id:ddd6d4112ee920a281f73fe490c8a351    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_ddd6d4112ee920a281f73fe490c8a351


# heketi-cli  volume list 
Id:18597c1326cbe9af0b77c228f04d492f    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_18597c1326cbe9af0b77c228f04d492f [block]
Id:6c69fb9f8584e06240e7fd555dc10b07    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_6c69fb9f8584e06240e7fd555dc10b07
Id:7670de9f4adad53fd04de0aa5991c7c1    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_7670de9f4adad53fd04de0aa5991c7c1
Id:b84fb15e049c39aae69b9f1746734762    Cluster:71d18c877d8828c4271a648121decda0    Name:heketidbstorage
Id:d7072400e568efe15693850a316a95da    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_d7072400e568efe15693850a316a95da

# heketi-cli volume list
Id:18597c1326cbe9af0b77c228f04d492f    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_18597c1326cbe9af0b77c228f04d492f [block]
Id:6c69fb9f8584e06240e7fd555dc10b07    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_6c69fb9f8584e06240e7fd555dc10b07
Id:7670de9f4adad53fd04de0aa5991c7c1    Cluster:71d18c877d8828c4271a648121decda0    Name:vol_7670de9f4adad53fd04de0aa5991c7c1
Id:b84fb15e049c39aae69b9f1746734762    Cluster:71d18c877d8828c4271a648121decda0    Name:heketidbstorage


--------- snip of heketi logs --------------------------------
Result:
[cmdexec] INFO 2018/12/06 08:06:18 Creating volume vol_adba6e7491b65b0fd63d2c898440287f replica 3
[kubeexec] DEBUG 2018/12/06 08:06:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-72.lab.eng.blr.redhat.com Pod: glusterfs-storage-kxqg7 Command: gluster --mode=script volume create vol_adba6e7491b65b0fd63d2c898440287f replica 3 10.70.46.72:/var/lib/heketi/mounts/vg_6be54f8c2432154cc7454dc50879362e/brick_cd333d4924ed26ba738d3be76f674c49/brick 10.70.46.20:/var/lib/heketi/mounts/vg_6b3fed18ec3ccad0e42f9175eee067b5/brick_6e4a4763984bba82506ebfb844a5e020/brick 10.70.47.61:/var/lib/heketi/mounts/vg_248dac6786e216b71e45589b23647945/brick_b7fd355cf957561b5b6bdf710e7234d9/brick
Result: volume create: vol_adba6e7491b65b0fd63d2c898440287f: success: please start the volume to access data
[negroni] Started GET /queue/2107a08666e8ddd0892971872d5dce68
[negroni] Completed 200 OK in 130.962µs
[kubeexec] DEBUG 2018/12/06 08:06:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-72.lab.eng.blr.redhat.com Pod: glusterfs-storage-kxqg7 Command: gluster --mode=script volume set vol_adba6e7491b65b0fd63d2c898440287f server.tcp-user-timeout 42
Result: volume set: success
.
.
.
[kubeexec] DEBUG 2018/12/06 08:06:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-20.lab.eng.blr.redhat.com Pod: glusterfs-storage-4pfcr Command: gluster --mode=script volume stop vol_adba6e7491b65b0fd63d2c898440287f force
Result: volume stop: vol_adba6e7491b65b0fd63d2c898440287f: success
[kubeexec] DEBUG 2018/12/06 08:06:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-20.lab.eng.blr.redhat.com Pod: glusterfs-storage-4pfcr Command: gluster --mode=script volume delete vol_adba6e7491b65b0fd63d2c898440287f
Result: volume delete: vol_adba6e7491b65b0fd63d2c898440287f: success

--------------------- snip of heketi logs ------------------------

Actual results:
Heketi keeps creating and deleting the corresponding volumes for this PVC continuously, which can cause memory leaks.

Expected results:
Heketi should not keep creating and deleting volumes for a PVC that will never get bound because its name is >45 characters.

Maybe we would need a check that the PVC can be bound before heketi proceeds with the volume creation.
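One conceivable mitigation, purely as a sketch and not a description of the shipped fix: derive the endpoint name from the fixed-length PVC UID (as the bound endpoints listed above already appear to do) rather than from the claim name, so the result can never exceed 63 characters.

```shell
# Sketch only (hypothetical, not the actual fix): name the endpoint after the
# PVC UID, whose length is always 36 characters, instead of the claim name.
uid="801ec2b9-f92d-11e8-b816-005056a5b276"   # UID taken from the event above
ep="glusterfs-dynamic-${uid}"
echo "${#ep}"   # 54, always within the 63-character limit
```

Because 18 (prefix) + 36 (UID) = 54, a UID-derived endpoint name stays valid regardless of how long the user's claim name is.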

Additional info:
Logs will be attached

Comment 9 Humble Chirammal 2019-06-27 08:05:53 UTC
Thanks for the verification, Rachael!

>The initial bug description mentioned not allowing create requests for PVC names greater than 63 characters to reach heketi since it resulted in continuous creation and deletion of volumes. From the above output it appears that the PVC requests are getting bound irrespective of the size of the PVC name.
>
>Is this the expected behavior? If yes, shouldn't the bug summary or title be modified to reflect the same?

Yeah, we could change the title of the bugzilla accordingly.

Comment 13 errata-xmlrpc 2019-10-30 12:50:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3262

