Bug 1541323
Summary: | [GSS] Glusterfs pvc bound fail with error creating volume Token used before issued | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rajnikant <rkant> |
Component: | heketi | Assignee: | Raghavendra Talur <rtalur> |
Status: | CLOSED ERRATA | QA Contact: | vinutha <vinug> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | cns-3.6 | CC: | akrishna, anli, aos-bugs, aos-storage-staff, asriram, bkunal, dluong, hchiramm, hongkliu, jhou, jmulligan, jsafrane, kramdoss, madam, ncredi, nigoyal, piqin, pprakash, qixuan.wang, rcyriac, rhs-bugs, rkant, rtalur, sankarshan, schoudha, storage-qa-internal, vinug, weshi, wmeng |
Target Milestone: | --- | ||
Target Release: | CNS 3.10 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Bug Fix | |
Doc Text: |
Previously, some heketi client requests failed with a ‘Token used before issued’ error because JSON Web Token validation did not tolerate clock skew between the client and the server. This update adds a leeway of 120 seconds to iat claim validation so that client requests succeed despite small time differences. This leeway can be changed by setting the ‘HEKETI_JWT_IAT_LEEWAY_SECONDS’ environment variable.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2018-09-12 09:22:12 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1568861 |
Description
Rajnikant
2018-02-02 09:43:33 UTC
The error is thrown from heketi during authentication, most likely due to a time synchronization issue. Rajnikant, can you try the steps in comment #8? Make sure the node has a time synchronization service running; Gluster pods no longer run any time sync service within them.

I hit this bug again in CNS 3.10 while creating a block device.

RPMs:

```
heketi-7.0.0-5.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64
gluster-block-0.2.1-23.el7rhgs.x86_64
```

Container images:

```
rhgs-volmanager-rhel7 3.3.1-22
rhgs-server-rhel7 3.3.1-28
```

```
[root@dhcp47-153 ~]# oc describe pvc c101
Name:          c101
Namespace:     glusterfs
StorageClass:  block-sc
Status:        Pending
Volume:
Labels:        <none>
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"aaabf029-948d-11e8-8b97-0a580a830006","leaseDurationSeconds":15,"acquireTime":"2018-07-31T07:02:39Z","renewTime":"2018-07-31T07:19:53Z","lea...
               volume.beta.kubernetes.io/storage-class=block-sc
               volume.beta.kubernetes.io/storage-provisioner=gluster.org/glusterblock
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason              Age               From                                                          Message
  ----     ------              ----              ----                                                          -------
  Warning  ProvisioningFailed  1h (x13 over 1h)  gluster.org/glusterblock aaabf029-948d-11e8-8b97-0a580a830006 Failed to provision volume with StorageClass "block-sc": failed to create volume: [heketi] failed to create volume: Invalid JWT token: Token used before issued
```

Time was in sync on all the nodes, hence marking this as FailedQA.
```
[root@dhcp47-153 ~]# date
Tue Jul 31 13:54:49 IST 2018
[root@dhcp47-165 ~]# date
Tue Jul 31 13:54:49 IST 2018
[root@dhcp46-217 ~]# date
Tue Jul 31 13:54:49 IST 2018
[root@dhcp47-138 ~]# date
Tue Jul 31 13:54:49 IST 2018
```

Logs and sosreports: http://rhsqe-repo.lab.eng.blr.redhat.com/cns/bugs/BZ-1541323/

Patch merged upstream at https://github.com/heketi/heketi/pull/1303

Fixed in version: rhgs-volmanager-rhel7:3.4.0-1

Updated doc text in the Doc Text field. Please review for technical accuracy.

John/Talur, can you please help QE with the steps to validate this fix?

Doc Text looks OK.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2686
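For operators hitting this error, a hedged sketch of how the leeway might be adjusted and node clocks checked. The deployment config name `heketi` is an assumption from a typical CNS install; adjust it to your environment, and note that the exact leeway value to use is deployment-specific.

```shell
# Set the iat leeway on the heketi deployment (name "heketi" is assumed;
# run "oc get dc" to find the actual deployment config in your cluster).
oc set env deploymentconfig/heketi HEKETI_JWT_IAT_LEEWAY_SECONDS=120

# Confirm a time synchronization service is active on each node,
# since Gluster pods no longer run one themselves.
timedatectl status
```

`oc set env` triggers a new deployment so the heketi pod picks up the changed variable; `timedatectl status` reports whether the system clock is NTP-synchronized.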