Description of problem:
After creating 10 containers and deleting 1, the account lists 11 containers (it should list 9). I have not tested any other number combinations, but there appears to be an issue with calculating the total.

Version-Release number of selected component (if applicable):
RHS 2.0 (RC2)

How reproducible:
Every time

Steps to Reproduce:
1. PUT 10 containers (dir1 - dir10)
2. List containers and take note of the count
3. DELETE 1 container (dir6)
4. List containers and take note of the count

Actual results:
The number of containers listed is inaccurate (11 containers).

Expected results:
The container count should be accurate (in this case, 9).

Additional info:
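The steps above amount to a simple invariant: the account's container count should track PUTs and DELETEs exactly. A minimal in-memory sketch of that expected accounting (illustrative plain Python only, not gluster-swift code):

```python
# Illustrative model of the expected container accounting; not gluster-swift code.
containers = set()

# Step 1: PUT 10 containers (dir1 - dir10)
for i in range(1, 11):
    containers.add("dir%d" % i)

# Step 2: the account listing should now report 10
assert len(containers) == 10

# Step 3: DELETE 1 container (dir6)
containers.discard("dir6")

# Step 4: the account listing should report 9 (the bug reports 11)
assert len(containers) == 9
```

The bug is that the real account metadata reports 11 after the DELETE, as if the deletion were counted as an addition.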
It would be good to get the value of the "object_only" variable in the /etc/swift/fs.conf file. With "object_only = no", this should work properly.
Hi Saurabh,

Here are the contents of /etc/swift/fs.conf:

[DEFAULT]
mount_path = /mnt/gluster-object
auth_account = auth
#ip of the fs server.
mount_ip = localhost
#fs server need not be local, remote server can also be used,
#set remote_cluster=yes for using remote server.
remote_cluster = no
object_only = no
Justin, I have tried this scenario several times with "object_only = no" and have always received the correct information. Can you please collect information from /var/log/messages while running these operations? Executing "tail -f /var/log/messages" while reproducing the scenario can help collect the information at that time.
Hi Saurabh, I'll run through the process again with the GA bits and let you know if the issue is still reproducible.
Saurabh, I was still able to reproduce the issue on GA bits. I'll upload a file showing the process I'm using later today. I'll also run the reproduction again and gather the logs you requested previously.
This might be fixed as part of the changes leading up to the swift.diff removal, see http://review.gluster.org/4180.
Fixed in upstream.
Issue no longer seen in RHS 2.1. Verified with object_only=yes/no, as follows:

[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir1
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir2
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir3
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir4
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir5
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir6
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir7
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir8
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir9
[psharma@dhcp193-66 dummy_files]$ curl -X PUT -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir10

[psharma@dhcp193-66 dummy_files]$ curl -v -X HEAD -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2
* About to connect() to 10.65.207.210 port 8080 (#0)
*   Trying 10.65.207.210... connected
* Connected to 10.65.207.210 (10.65.207.210) port 8080 (#0)
> HEAD /v1/AUTH_test2 HTTP/1.1
> User-Agent: curl/7.21.7 (x86_64-redhat-linux-gnu) libcurl/7.21.7 NSS/3.13.5.0 zlib/1.2.5 libidn/1.22 libssh2/1.2.7
> Host: 10.65.207.210:8080
> Accept: */*
> X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268
>
< HTTP/1.1 204 No Content
< Content-Length: 0
< X-Account-Container-Count: 10
< Accept-Ranges: bytes
< X-Account-Object-Count: 0
< X-Bytes-Used: 0
< X-Timestamp: 1373872319.01700
< X-Object-Count: 0
< X-Account-Bytes-Used: 0
< X-Type: Account
< Content-Type: text/plain; charset=utf-8
< X-Container-Count: 10
< Date: Tue, 16 Jul 2013 09:51:35 GMT
<
* Connection #0 to host 10.65.207.210 left intact
* Closing connection #0

[psharma@dhcp193-66 dummy_files]$ curl -X DELETE -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2/dir6

[psharma@dhcp193-66 dummy_files]$ curl -v -X HEAD -H 'X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268' http://10.65.207.210:8080/v1/AUTH_test2
* About to connect() to 10.65.207.210 port 8080 (#0)
*   Trying 10.65.207.210... connected
* Connected to 10.65.207.210 (10.65.207.210) port 8080 (#0)
> HEAD /v1/AUTH_test2 HTTP/1.1
> User-Agent: curl/7.21.7 (x86_64-redhat-linux-gnu) libcurl/7.21.7 NSS/3.13.5.0 zlib/1.2.5 libidn/1.22 libssh2/1.2.7
> Host: 10.65.207.210:8080
> Accept: */*
> X-Auth-Token: AUTH_tk333998bd65df4c9aade2da934af97268
>
< HTTP/1.1 204 No Content
< Content-Length: 0
< X-Account-Container-Count: 9
< Accept-Ranges: bytes
< X-Account-Object-Count: 0
< X-Bytes-Used: 0
< X-Timestamp: 1373872319.01700
< X-Object-Count: 0
< X-Account-Bytes-Used: 0
< X-Type: Account
< Content-Type: text/plain; charset=utf-8
< X-Container-Count: 9
< Date: Tue, 16 Jul 2013 09:51:49 GMT
<
* Connection #0 to host 10.65.207.210 left intact
* Closing connection #0
[psharma@dhcp193-66 dummy_files]$

[root@dhcp207-210 ~]# rpm -qa | grep gluster
gluster-swift-object-1.8.0-6.3.el6rhs.noarch
vdsm-gluster-4.10.2-22.7.el6rhs.noarch
gluster-swift-plugin-1.8.0-2.el6rhs.noarch
glusterfs-geo-replication-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-1.8.0-6.3.el6rhs.noarch
glusterfs-server-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-proxy-1.8.0-6.3.el6rhs.noarch
gluster-swift-account-1.8.0-6.3.el6rhs.noarch
glusterfs-rdma-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-container-1.8.0-6.3.el6rhs.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html