Bug 833179 - X-Account-Container-Count and X-Container-Count numbers are not reflecting accurate count
Summary: X-Account-Container-Count and X-Container-Count numbers are not reflecting ac...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-swift
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Luis Pabón
QA Contact: pushpesh sharma
URL:
Whiteboard:
Depends On:
Blocks: 858417
 
Reported: 2012-06-18 19:30 UTC by Justin Bautista
Modified: 2016-11-08 22:24 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 858417
Environment:
Last Closed: 2013-09-23 22:32:17 UTC
Embargoed:



Description Justin Bautista 2012-06-18 19:30:15 UTC
Description of problem:

After creating 10 containers and deleting 1, the container count listed is 11 (it should be 9).  I have not tested any other number combinations, but there seems to be an issue with calculating the total.



Version-Release number of selected component (if applicable):
RHS 2.0 (RC2)

How reproducible:
Every Time


Steps to Reproduce:
1. PUT 10 Containers (dir1 - dir10)
2. List containers and take note of count
3. DELETE 1 Container (dir6)
4. List Containers and take note of count
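
For reference, the steps above correspond to plain Swift API calls along these lines (a sketch only; the proxy address, account, and token below are placeholders, not values taken from this report):

# 1. Create 10 containers (dir1 - dir10)
for i in $(seq 1 10); do
    curl -X PUT -H "X-Auth-Token: $TOKEN" http://<proxy>:8080/v1/AUTH_<account>/dir$i
done

# 2. and 4. Check the reported counts with a HEAD on the account
#    (look at X-Account-Container-Count / X-Container-Count in the response headers)
curl -I -H "X-Auth-Token: $TOKEN" http://<proxy>:8080/v1/AUTH_<account>

# 3. Delete one container
curl -X DELETE -H "X-Auth-Token: $TOKEN" http://<proxy>:8080/v1/AUTH_<account>/dir6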
  
Actual results:
Number of containers listed is inaccurate (lists 11 containers)


Expected results:
Would expect the number of containers to be reported accurately (in this case, 9)

Additional info:

Comment 2 Saurabh 2012-06-26 10:55:00 UTC
It would be good if we could get the value of the "object_only" variable in the /etc/swift/fs.conf file. With "object_only = no" this should work properly.
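
For example, the current setting can be checked with something like this (illustrative command only):

grep object_only /etc/swift/fs.conf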

Comment 3 Justin Bautista 2012-06-26 14:39:15 UTC
Hi Saurabh,

Here are the contents of /etc/swift/fs.conf:


[DEFAULT]
mount_path = /mnt/gluster-object
auth_account = auth
#ip of the fs server.
mount_ip = localhost
#fs server need not be local, remote server can also be used,
#set remote_cluster=yes for using remote server.
remote_cluster = no
object_only = no

Comment 4 Saurabh 2012-06-27 13:06:31 UTC
Justin, 

  I have tried this scenario several times with "object_only=no" and I have always received the correct information.

  Could you please collect information from /var/log/messages while trying these operations?

  Running "tail -f /var/log/messages" while executing the scenario can help capture the information at that time.
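
For example, something along these lines (a sketch; the output file name is arbitrary):

# In one terminal, capture syslog while the repro is running (Ctrl-C when done)
tail -f /var/log/messages | tee /tmp/container-count-repro.log

# In another terminal, run the PUT / HEAD / DELETE steps from the description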

Comment 5 Justin Bautista 2012-07-02 14:23:20 UTC
Hi Saurabh,

I'll run through the process again with the GA bits and let you know if the issue is still reproducible.

Comment 6 Justin Bautista 2012-07-05 14:44:57 UTC
Saurabh,

I was still able to reproduce the issue on the GA bits.  I'll upload a file showing the process I'm using later today.  I'll also run the repro again and gather the logs you requested previously.

Comment 7 Peter Portante 2012-11-19 21:44:50 UTC
This might be fixed as part of the changes leading up to the swift.diff removal; see http://review.gluster.org/4180.

Comment 8 Junaid 2013-03-21 07:23:52 UTC
Fixed in upstream.

Comment 11 pushpesh sharma 2013-07-16 09:54:57 UTC
Issue no longer seen in RHS 2.1.

Verified with object_only=yes/no, as follows:

[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir1
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir2
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir3
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir4
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir5
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir6
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir7
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir8
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir9
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir10
[psharma@dhcp193-66 dummy_files]$ curl -v -X HEAD -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2
* About to connect() to 10.65.207.210 port 8080 (#0)
*   Trying 10.65.207.210... connected
* Connected to 10.65.207.210 (10.65.207.210) port 8080 (#0)
> HEAD /v1/AUTH_test2 HTTP/1.1
> User-Agent: curl/7.21.7 (x86_64-redhat-linux-gnu) libcurl/7.21.7 NSS/3.13.5.0 zlib/1.2.5 libidn/1.22 libssh2/1.2.7
> Host: 10.65.207.210:8080
> Accept: */*
> X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268
> 
< HTTP/1.1 204 No Content
< Content-Length: 0
< X-Account-Container-Count: 10
< Accept-Ranges: bytes
< X-Account-Object-Count: 0
< X-Bytes-Used: 0
< X-Timestamp: 1373872319.01700
< X-Object-Count: 0
< X-Account-Bytes-Used: 0
< X-Type: Account
< Content-Type: text/plain; charset=utf-8
< X-Container-Count: 10
< Date: Tue, 16 Jul 2013 09:51:35 GMT
< 
* Connection #0 to host 10.65.207.210 left intact
* Closing connection #0
[psharma@dhcp193-66 dummy_files]$ curl -X DELETE -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir6
[psharma@dhcp193-66 dummy_files]$ curl -v -X HEAD -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2
* About to connect() to 10.65.207.210 port 8080 (#0)
*   Trying 10.65.207.210... connected
* Connected to 10.65.207.210 (10.65.207.210) port 8080 (#0)
> HEAD /v1/AUTH_test2 HTTP/1.1
> User-Agent: curl/7.21.7 (x86_64-redhat-linux-gnu) libcurl/7.21.7 NSS/3.13.5.0 zlib/1.2.5 libidn/1.22 libssh2/1.2.7
> Host: 10.65.207.210:8080
> Accept: */*
> X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268
> 
< HTTP/1.1 204 No Content
< Content-Length: 0
< X-Account-Container-Count: 9
< Accept-Ranges: bytes
< X-Account-Object-Count: 0
< X-Bytes-Used: 0
< X-Timestamp: 1373872319.01700
< X-Object-Count: 0
< X-Account-Bytes-Used: 0
< X-Type: Account
< Content-Type: text/plain; charset=utf-8
< X-Container-Count: 9
< Date: Tue, 16 Jul 2013 09:51:49 GMT
< 
* Connection #0 to host 10.65.207.210 left intact
* Closing connection #0
[psharma@dhcp193-66 dummy_files]$


[root@dhcp207-210 ~]# rpm -qa|grep gluster
gluster-swift-object-1.8.0-6.3.el6rhs.noarch
vdsm-gluster-4.10.2-22.7.el6rhs.noarch
gluster-swift-plugin-1.8.0-2.el6rhs.noarch
glusterfs-geo-replication-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-1.8.0-6.3.el6rhs.noarch
glusterfs-server-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-proxy-1.8.0-6.3.el6rhs.noarch
gluster-swift-account-1.8.0-6.3.el6rhs.noarch
glusterfs-rdma-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-container-1.8.0-6.3.el6rhs.noarch

Comment 12 Scott Haines 2013-09-23 22:32:17 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

