Bug 1048811
Summary: | [gswauth] python traceback seen while using gswauth-clean-token with -v option | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | pushpesh sharma <psharma>
Component: | gluster-swift | Assignee: | crisbud <crisbud>
Status: | CLOSED ERRATA | QA Contact: | pushpesh sharma <psharma>
Severity: | medium | Docs Contact: |
Priority: | high | |
Version: | 2.1 | CC: | bbandari, grajaiya, rhs-bugs, vagarwal
Target Milestone: | --- | Keywords: | ZStream
Target Release: | RHGS 2.1.2 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | gluster-swift-plugin-1.10.0-4.el6rhs | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2014-02-25 08:14:12 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
pushpesh sharma
2014-01-06 11:30:40 UTC
Steps:

```
# gswauth-cleanup-tokens -v
GET .token_0?marker=None
Traceback (most recent call last):
  File "/usr/bin/gswauth-cleanup-tokens", line 105, in <module>
    objs = conn.get_container(container, marker=marker)[1]
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1177, in get_container
    full_listing=full_listing)
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1103, in _retry
    self.url, self.token = self.get_auth()
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1079, in get_auth
    insecure=self.insecure)
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 273, in get_auth
    kwargs.get('snet'))
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 185, in get_auth_1_0
    {'X-Auth-User': user, 'X-Auth-Key': key})
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 164, in request_escaped
    validate_headers(headers)
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 99, in validate_headers
    if '\n' in value:
TypeError: argument of type 'NoneType' is not iterable
```

Version Info: glusterfs-openstack-swift-1.10.0-1.31.fc19.noarch

Patch posted for review: http://review.gluster.org/#/c/6664/

Fixed in gluster-swift-plugin-1.10.0-4.el6rhs. Moving to ON_QA.

```
[root@mater ~]# gswauth-cleanup-tokens -v -K gswauthkey
GET .token_0?marker=None
Container .token_0 not found. gswauth-prep needs to be rerun

[root@mater ~]# gswauth-cleanup-tokens -v
Usage: gswauth-cleanup-tokens [options]

Options:
  -h, --help            show this help message and exit
  -t TOKEN_LIFE, --token-life=TOKEN_LIFE
                        The expected life of tokens; token objects modified
                        more than this number of seconds ago will be checked
                        for expiration (default: 86400).
  -s SLEEP, --sleep=SLEEP
                        The number of seconds to sleep between token checks
                        (default: 0.1)
  -v, --verbose         Outputs everything done instead of just the deletions.
  -A ADMIN_URL, --admin-url=ADMIN_URL
                        The URL to the auth subsystem (default:
                        http://127.0.0.1:8080/auth/)
  -K ADMIN_KEY, --admin-key=ADMIN_KEY
                        The key for .super_admin is required.
  --purge=PURGE_ACCOUNT
                        Purges all tokens for a given account whether the
                        tokens have expired or not.
  --purge-all           Purges all tokens for all accounts and users whether
                        the tokens have expired or not.
```

Verified On:

```
[root@mater ~]# rpm -qa | grep gluster
gluster-swift-account-1.10.0-2.el6rhs.noarch
glusterfs-geo-replication-3.4.0.57rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.57rhs-1.el6rhs.x86_64
gluster-swift-1.10.0-2.el6rhs.noarch
glusterfs-3.4.0.57rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.57rhs-1.el6rhs.x86_64
gluster-swift-object-1.10.0-2.el6rhs.noarch
gluster-swift-container-1.10.0-2.el6rhs.noarch
samba-glusterfs-3.6.9-167.9.el6rhs.x86_64
vdsm-gluster-4.13.0-24.el6rhs.noarch
gluster-swift-plugin-1.10.0-5.el6rhs.noarch
glusterfs-server-3.4.0.57rhs-1.el6rhs.x86_64
gluster-swift-proxy-1.10.0-2.el6rhs.noarch
glusterfs-libs-3.4.0.57rhs-1.el6rhs.x86_64
glusterfs-api-3.4.0.57rhs-1.el6rhs.x86_64
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
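The traceback comes down to swiftclient's header validation doing a substring check (`'\n' in value`) on a header value that is `None` when the admin key is omitted. The sketch below is a minimal illustration of that failure mode and of a guard-style fix, assuming nothing about the actual patch in http://review.gluster.org/#/c/6664/: `validate_headers` mirrors the swiftclient check from the traceback, and `require_admin_key` is a hypothetical helper showing the kind of early check that would replace the traceback with a usable error message.

```python
def validate_headers(headers):
    # Mirrors the swiftclient check from the traceback: reject header
    # values containing newlines. If a value is None, the membership
    # test itself raises TypeError: argument of type 'NoneType' is not
    # iterable -- which is exactly the crash reported in this bug.
    for name, value in headers.items():
        if '\n' in value:
            raise ValueError('header value contains newline: %s' % name)


def require_admin_key(key):
    # Hypothetical guard (not the actual gswauth patch): fail early with
    # a clear message instead of letting None propagate into the
    # X-Auth-Key header and crash deep inside swiftclient.
    if not key:
        raise SystemExit('Missing admin key. Rerun with -K ADMIN_KEY.')
    return key


# Without the guard, running the tool without -K sends key=None downstream:
try:
    validate_headers({'X-Auth-User': '.super_admin', 'X-Auth-Key': None})
except TypeError as err:
    print('TypeError:', err)
```

With the guard in place, a missing `-K` option exits with a one-line usage error before any request is built, which is the behavior shown in the verified output above ("The key for .super_admin is required.").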