Bug 1541323

Summary: [GSS] Glusterfs pvc bound fail with error creating volume Token used before issued
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Rajnikant <rkant>
Component: heketi
Assignee: Raghavendra Talur <rtalur>
Status: CLOSED ERRATA
QA Contact: vinutha <vinug>
Severity: high
Docs Contact:
Priority: high
Version: cns-3.6
CC: akrishna, anli, aos-bugs, aos-storage-staff, asriram, bkunal, dluong, hchiramm, hongkliu, jhou, jmulligan, jsafrane, kramdoss, madam, ncredi, nigoyal, piqin, pprakash, qixuan.wang, rcyriac, rhs-bugs, rkant, rtalur, sankarshan, schoudha, storage-qa-internal, vinug, weshi, wmeng
Target Milestone: ---
Target Release: CNS 3.10
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, some heketi client requests failed with the 'Token used before issued' error because clock skew between client and server was not tolerated by JSON Web Token validation. This update adds a margin of 120 seconds to iat claim validation so that client requests succeed in this situation. The margin can be changed by setting the 'HEKETI_JWT_IAT_LEEWAY_SECONDS' environment variable.
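The iat check described above can be sketched as follows. This is an illustrative outline only, not heketi's actual Go implementation; the function name iat_is_valid and the constant DEFAULT_LEEWAY are hypothetical, with the 120-second default mirroring the margin the fix introduces (in heketi it would come from HEKETI_JWT_IAT_LEEWAY_SECONDS):

```python
import time

# Default leeway in seconds, mirroring the 120-second margin added by the fix.
DEFAULT_LEEWAY = 120

def iat_is_valid(iat, now=None, leeway=DEFAULT_LEEWAY):
    """Accept a token whose iat ("issued at") claim is at most
    `leeway` seconds in the future relative to the server clock."""
    if now is None:
        now = time.time()
    return iat <= now + leeway

# A client clock 30 seconds ahead of the server: the strict check
# rejects the token ("Token used before issued"), the leeway accepts it.
server_now = 1_000_000
print(iat_is_valid(server_now + 30, now=server_now, leeway=0))  # False
print(iat_is_valid(server_now + 30, now=server_now))            # True
```

A skew larger than the leeway (e.g. 200 seconds) would still be rejected, so grossly desynchronized clocks are still caught.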
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-12 09:22:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1568861    

Description Rajnikant 2018-02-02 09:43:33 UTC
Description of problem:
GlusterFS PVC binding fails with the error: error creating volume Token used before issued

Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.6

How reproducible:
- GlusterFS PVC provisioning fails with the error shown at [0].

- A fresh OSE 3.6 installation with GlusterFS fails with the same error: the PVC cannot be bound and the installation fails.

[0]
---
54s        3d          17434     registry-claim            PersistentVolumeClaim               Warning   ProvisioningFailed   persistentvolume-controller   Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume Token used before issued.
---

Steps to Reproduce:
1.
2.
3.

Actual results:
PVC binding fails with the following error:
54s        3d          17434     registry-claim            PersistentVolumeClaim               Warning   ProvisioningFailed   persistentvolume-controller   Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume Token used before issued.

Expected results:
PVC provisioning should succeed.

Additional info:
Please let me know if any further logs or information are required.

Comment 2 Humble Chirammal 2018-02-02 09:58:57 UTC
The error is thrown by heketi during authentication, most likely due to a time synchronization issue between the client and the server.

Comment 3 Humble Chirammal 2018-02-02 10:00:02 UTC
Ref: https://github.com/heketi/heketi/issues/646

Comment 9 Humble Chirammal 2018-03-16 05:56:37 UTC
Rajnikant, can you try the step in comment #8?

Comment 22 Raghavendra Talur 2018-06-04 17:14:45 UTC
Make sure the node has a time synchronization service running. Gluster pods don't run any time sync service within them anymore.

Comment 25 Nitin Goyal 2018-07-31 08:33:27 UTC
I was creating a block device when I hit this bug again in CNS 3.10.

rpm ->
heketi-7.0.0-5.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64
gluster-block-0.2.1-23.el7rhgs.x86_64

container images ->
rhgs-volmanager-rhel7           3.3.1-22
rhgs-server-rhel7               3.3.1-28

[root@dhcp47-153 ~]# oc describe pvc c101 
Name:          c101
Namespace:     glusterfs
StorageClass:  block-sc
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"aaabf029-948d-11e8-8b97-0a580a830006","leaseDurationSeconds":15,"acquireTime":"2018-07-31T07:02:39Z","renewTime":"2018-07-31T07:19:53Z","lea...
               volume.beta.kubernetes.io/storage-class=block-sc
               volume.beta.kubernetes.io/storage-provisioner=gluster.org/glusterblock
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type     Reason              Age               From                                                           Message
  ----     ------              ----              ----                                                           -------
  Warning  ProvisioningFailed  1h (x13 over 1h)  gluster.org/glusterblock aaabf029-948d-11e8-8b97-0a580a830006  Failed to provision volume with StorageClass "block-sc": failed to create volume: [heketi] failed to create volume: Invalid JWT token: Token used before issued


Time was in sync on all the nodes. Hence marking this as FailedQA.

[root@dhcp47-153 ~]# date
Tue Jul 31 13:54:49 IST 2018
[root@dhcp47-165 ~]# date
Tue Jul 31 13:54:49 IST 2018
[root@dhcp46-217 ~]# date
Tue Jul 31 13:54:49 IST 2018
[root@dhcp47-138 ~]# date
Tue Jul 31 13:54:49 IST 2018

logs and sosreports
http://rhsqe-repo.lab.eng.blr.redhat.com/cns/bugs/BZ-1541323/

Comment 27 Raghavendra Talur 2018-08-10 17:25:05 UTC
patch merged upstream at https://github.com/heketi/heketi/pull/1303

Comment 29 Humble Chirammal 2018-08-14 06:02:53 UTC
Fixed in version: rhgs-volmanager-rhel7:3.4.0-1

Comment 33 Anjana KD 2018-08-30 13:15:30 UTC
Updated doc text in the Doc Text field. Please review for technical accuracy.

Comment 34 Humble Chirammal 2018-08-30 15:27:38 UTC
John/Talur, can you please help QE with the steps to validate this fix?

Comment 42 John Mulligan 2018-09-07 17:50:33 UTC
Doc Text looks OK

Comment 45 errata-xmlrpc 2018-09-12 09:22:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2686