Description of problem:

In UFO, even when implemented with swift_plugin, a directory created in the root of the volume should be considered a "container", but X-Account-Container-Count does not show a matching result.

[root@node130 ~]# ls /mnt/gluster-object/AUTH_test
container2  tmp
[root@node130 ~]# cd /mnt/gluster-object/AUTH_test
[root@node130 AUTH_test]# mkdir dir0 ; touch file1
[root@node130 AUTH_test]# ls
container2  dir0  file1  tmp
[root@node130 AUTH_test]# curl -v -X HEAD -H 'X-Auth-Token: AUTH_tk8c2393268b5d4a02a470f4e4442deb5d' http://127.0.0.1:8080/v1/AUTH_test
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> HEAD /v1/AUTH_test HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.12.9.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
> Host: 127.0.0.1:8080
> Accept: */*
> X-Auth-Token: AUTH_tk8c2393268b5d4a02a470f4e4442deb5d
>
< HTTP/1.1 204 No Content
< X-Account-Container-Count: 1
< X-Account-Object-Count: 0
< X-Bytes-Used: 0
< X-Object-Count: 0
< X-Account-Bytes-Used: 0
< X-Type: Account
< X-Container-Count: 1
< Accept-Ranges: bytes
< Content-Length: 0
< Date: Thu, 15 Dec 2011 07:31:01 GMT
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create some directory inside the root of the volume via the FUSE mount.
2. Use curl with the HEAD command.

Actual results:
X-Account-Container-Count includes only the containers that were created via the PUT API; it does not count the directory as a container.

Expected results:
X-Account-Container-Count should be more than "one" in the present scenario.

Additional info:
Junaid tried out the scenario for this bug: objects and containers created via an NFS mount are not counted until a GET or some other operation is performed.
The present change definitely works as a fix, but along with it the X-Account-Object-Count metadata should also be updated; presently it is not:

< HTTP/1.1 204 No Content
< X-Account-Container-Count: 4
< X-Account-Object-Count: 0
< X-Bytes-Used: 0
< X-Object-Count: 0
< X-Account-Bytes-Used: 0
< X-Type: Account
< X-Container-Count: 4
< Accept-Ranges: bytes
< Content-Length: 0
< Date: Wed, 11 Apr 2012 04:39:30 GMT

though there are objects inside the container (X-Container-Object-Count: 31).
Fix is available in the latest RPMs.
The X-Object-Count is still not getting updated.
This is the behaviour with OpenStack Swift as well.
Based on Junaid's comment 5, moving this bug to CLOSED, as things are working the same way as in Swift: there, too, these values are not updated.
Not to cause any problems here, but I am not sure this is comparable to the default OpenStack Swift behavior. Swift container creation only occurs via the Swift REST APIs. Also, containers are not file system objects in non-Gluster Swift, so it is not even possible to do such an operation like creating a directory on a disk to show up as a container. For Gluster OpenStack, a directory created via a file system operation in the root of the gluster mount point (as far as I can tell) will eventually show up in the list of containers. This is because the container list might have to be rebuilt if the cached list is evicted from memcache for whatever reason. Note that for the above, if I understand the code correctly, memcache is used when /etc/swift/fs.conf has the object_only setting set to "yes". When object_only is set to "no", it appears that the code will check the file system each time a request for the container list is made. So we should probably only close this bug as NOTABUG if object_only was set to "yes", and that the manually created container shows up when object_only is set to "no".
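For reference, the setting in question lives in /etc/swift/fs.conf. A minimal fragment might look like the following (the section name is an assumption here; check the packaged default fs.conf for the exact layout):

```ini
[DEFAULT]
# yes: serve object requests only, relying on the cached (memcache)
#      account/container listings; directories created manually on the
#      mount may not appear until the cached list is evicted and rebuilt.
# no:  consult the file system on each container-list request.
object_only = no
```

Per the reasoning above, the NOTABUG outcome should only apply to the object_only = yes case.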
I think X-Container-Object-Count is also not updated accordingly. I'm not sure if this is a duplicate of this bug or if this behavior is normal. However, X-Container-Object-Count is displayed correctly on subsequent GETs on containers.

[root@vm1 ppai]# curl -v -X PUT http://127.0.0.1:8080/v1/AUTH_test/c1
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1...
* connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> PUT /v1/AUTH_test/c1 HTTP/1.1
> User-Agent: curl/7.27.0
> Host: 127.0.0.1:8080
> Accept: */*
>
< HTTP/1.1 201 Created
< Content-Length: 0
< Content-Type: text/html; charset=UTF-8
< X-Trans-Id: txebe1e4a5414b4e5094f06815f736d13c
< Date: Mon, 19 Aug 2013 12:29:56 GMT
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
[root@vm1 ppai]# touch /mnt/gluster-object/test/c1/f{1..5}
[root@vm1 ppai]# curl -v -X GET http://127.0.0.1:8080/v1/AUTH_test/c1
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1...
* connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /v1/AUTH_test/c1 HTTP/1.1
> User-Agent: curl/7.27.0
> Host: 127.0.0.1:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 15
< X-Container-Object-Count: 0
< Accept-Ranges: bytes
< X-Timestamp: 1
< X-Container-Bytes-Used: 0
< Content-Type: text/plain; charset=utf-8
< X-Trans-Id: tx0e91371690ae450aab4d138bd67522d2
< Date: Mon, 19 Aug 2013 12:31:35 GMT
<
f1
f2
f3
f4
f5
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
[root@vm1 ppai]# curl -v -X GET http://127.0.0.1:8080/v1/AUTH_test/c1
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1...
* connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /v1/AUTH_test/c1 HTTP/1.1
> User-Agent: curl/7.27.0
> Host: 127.0.0.1:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 15
< X-Container-Object-Count: 5
< Accept-Ranges: bytes
< X-Timestamp: 1
< X-Container-Bytes-Used: 0
< Content-Type: text/plain; charset=utf-8
< X-Trans-Id: txccf8c790729f41f99eecac9013e1448e
< Date: Mon, 19 Aug 2013 12:32:05 GMT
<
f1
f2
f3
f4
f5
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
Ah, I missed this configurable option - container_update_object_count in fs.conf. My bad.
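For anyone else hitting this, that option is also read from /etc/swift/fs.conf. A sketch of the relevant fragment (the section name and inline description are assumptions; consult the packaged fs.conf for the authoritative wording):

```ini
[DEFAULT]
# When enabled, the container's object count is recalculated for each
# request rather than served from the previously stored value, so
# X-Container-Object-Count reflects files added directly on the mount.
container_update_object_count = yes
```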
Test scenario: mount a gluster volume, PUT 4 containers using curl, then create one file manually using the touch command directly on the gluster volume, and perform a GET on the account.

[root@cbox chetan]# curl -v -X PUT http://127.0.0.1:8080/v1/AUTH_test/c^C
[root@cbox chetan]# ^C
[root@cbox chetan]# ^C
[root@cbox chetan]# curl -v -X GET http://127.0.0.1:8080/v1/AUTH_test
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1...
* connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /v1/AUTH_test HTTP/1.1
> User-Agent: curl/7.27.0
> Host: 127.0.0.1:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 12
< X-Account-Container-Count: 4
< Accept-Ranges: bytes
< X-Account-Object-Count: 0
< X-Bytes-Used: 0
< X-Timestamp: 1390803036.41345
< X-Object-Count: 0
< X-Account-Bytes-Used: 0
< X-Type: Account
< Content-Type: text/plain; charset=utf-8
< X-Container-Count: 4
< X-Trans-Id: txe4f7f85b98be4c24a8fcb-0052e5fdf8
< Date: Mon, 27 Jan 2014 06:34:32 GMT
<
c1
c2
c3
c4
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

It did not list the file as an object either, and it did not update X-Account-Object-Count. Even after subsequent requests the results were the same. I tried the scenario with object_only = yes and later with object_only = no in /etc/swift/fs.conf.

Is asking for the object count on a container an invalid thing to do in a gluster-swift environment?
The "pre-release" version is ambiguous and about to be removed as a choice. If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.