Bug 1581864
| Summary: | CNS: failed to create volume: Token used before issued | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Hongkai Liu <hongkliu> |
| Component: | heketi | Assignee: | Raghavendra Talur <rtalur> |
| Status: | CLOSED ERRATA | QA Contact: | vinutha <vinug> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.0 | CC: | akrishna, aos-bugs, aos-storage-staff, bchilds, ekuric, hchiramm, hongkliu, jmulligan, kramdoss, mifiedle, pprakash, rhs-bugs, rtalur, sankarshan, storage-qa-internal, vlaad, wmeng |
| Target Milestone: | --- | | |
| Target Release: | CNS 3.10 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Previously, some heketi client requests failed with a "Token used before issued" error because JSON Web Token validation did not tolerate clock skew. This update adds a 120-second leeway to iat claim validation so that such client requests succeed. The leeway can be changed with the HEKETI_JWT_IAT_LEEWAY_SECONDS environment variable. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-09-12 09:22:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1600160 | | |
| Bug Blocks: | 1568862 | | |
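The fix described in the Doc Text adds a leeway window to the JWT `iat` ("issued at") claim check. The following is only a minimal sketch of that idea, not heketi's actual code: it shows how a token from a client whose clock runs slightly ahead of the server is rejected without leeway and accepted with the 120-second default. The environment variable name and the 120-second default come from the Doc Text; the function names and the 30-second skew are assumptions made for illustration.

```go
// Sketch of "issued at" (iat) validation with clock-skew leeway.
// Not heketi's real implementation; see the upstream PR linked below.
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// iatLeeway reads the leeway from the environment variable named in the
// Doc Text and falls back to the documented 120-second default.
func iatLeeway() time.Duration {
	if v, err := strconv.Atoi(os.Getenv("HEKETI_JWT_IAT_LEEWAY_SECONDS")); err == nil {
		return time.Duration(v) * time.Second
	}
	return 120 * time.Second
}

// validateIAT rejects a token whose iat claim lies further in the future
// than the allowed leeway, i.e. the client clock is ahead of the server.
func validateIAT(iat, now time.Time, leeway time.Duration) error {
	if iat.After(now.Add(leeway)) {
		return fmt.Errorf("token used before issued (iat=%v, now=%v)", iat, now)
	}
	return nil
}

func main() {
	now := time.Now()
	// Simulate a client clock 30 seconds ahead of the server clock.
	iat := now.Add(30 * time.Second)

	// Without leeway the request fails exactly as in the bug report.
	fmt.Println("no leeway: ", validateIAT(iat, now, 0))
	// With the 120-second leeway the same token is accepted.
	fmt.Println("with leeway:", validateIAT(iat, now, iatLeeway()))
}
```

The actual change landed in the upstream pull request referenced in the comments below (https://github.com/heketi/heketi/pull/1223); this sketch only mirrors the behaviour the Doc Text describes.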
Are those logs normal?

heketi:
[negroni] Completed 401 Unauthorized in 58.847µs

master-controller:
E0523 16:21:25.001057 1 glusterfs.go:708] failed to create volume: failed to create volume: Token used before issued

After reading the comments in https://bugzilla.redhat.com/show_bug.cgi?id=1541323, I tried deleting the heketi pod and checked that the time on the nodes is synced. That did not solve the problem; the failures still show up in the heketi and controller logs.

The last time we reached 1000 was with the same CNS images and atomic-openshift.x86_64 3.10.0-0.27.0.git.0.baf1ec4.el7.

Patch posted upstream at https://github.com/heketi/heketi/pull/1223

Fixed in version: rhgs-volmanager-rhel7:3.3.1-20

Not seeing this bug after 1000 gluster.file PVCs were created.
Tested with
# oc get pod -n glusterfs -o yaml | grep "image:" | sort -u
image: registry.reg-aws.openshift.com:443/rhgs3/rhgs-gluster-block-prov-rhel7:3.3.1-20
image: registry.reg-aws.openshift.com:443/rhgs3/rhgs-server-rhel7:3.3.1-27
image: registry.reg-aws.openshift.com:443/rhgs3/rhgs-volmanager-rhel7:3.3.1-21
# yum list installed | grep openshift
atomic-openshift.x86_64 3.10.18-1.git.0.13dc4a0.el7
Updated doc text in the Doc Text field. Please review for technical accuracy.

Doc Text looks OK.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2686
Description of problem:
Create n pods, each of which uses a PVC. The target is 1000; in this run, fewer than 600 were created. This has been tested with at least 3 clusters, and the problem occurs between 500 and 600.

Version-Release number of selected component (if applicable):
# oc get pod -n glusterfs -o yaml | grep "image:" | sort -u
    image: registry.reg-aws.openshift.com:443/rhgs3/rhgs-gluster-block-prov-rhel7:3.3.1-10
    image: registry.reg-aws.openshift.com:443/rhgs3/rhgs-server-rhel7:3.3.1-13
    image: registry.reg-aws.openshift.com:443/rhgs3/rhgs-volmanager-rhel7:3.3.1-10
# yum list installed | grep openshift
atomic-openshift.x86_64 3.10.0-0.50.0.git.0.db6dfd6.el7

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
The logs will be attached.
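The "Steps to Reproduce" fields above are empty, so the following is only a hedged sketch of one way to drive the PVC half of this scenario: generate a large number of PVC manifests against a glusterfs StorageClass and feed them to `oc create`, which triggers the heketi volume-create requests. The StorageClass name `glusterfs-storage`, the 1Gi request, and the `scale-pvc-` naming are assumptions for illustration only, not taken from the bug report.

```go
// Emit N PersistentVolumeClaim manifests so they can be piped into
// `oc create -f -`. Purely a reproduction aid, not part of the bug report.
package main

import (
	"fmt"
	"os"
	"strconv"
)

// pvcTemplate is a hypothetical PVC manifest; the StorageClass name and
// size are assumptions.
const pvcTemplate = `---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scale-pvc-%04d
spec:
  storageClassName: glusterfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
`

func main() {
	// Default to the target of 1000 from the description; an optional
	// command-line argument overrides it.
	n := 1000
	if len(os.Args) > 1 {
		if v, err := strconv.Atoi(os.Args[1]); err == nil {
			n = v
		}
	}
	for i := 0; i < n; i++ {
		fmt.Printf(pvcTemplate, i)
	}
}
```

Usage would be something like `go run gen_pvcs.go 600 | oc create -n <project> -f -`, after which the pods that mount the claims can be created the same way.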