Description of problem:
A PUT of an object close to 5 GB fails with "503 Service Unavailable". The object server logs "[Errno 5] Input/output error" while writing the body and returns 500, which the proxy server converts to 503 for the client.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Send a PUT request:

#curl -X PUT -T 5g_1.txt -H 'X-Auth-Token: AUTH_tka17fd6ad14e34759a2a1b3922ec8277f' -H 'Transfer-Encoding: chunked' http://10.65.207.210:8080/v1/AUTH_test/dir1/5g_less.txt
<html><h1>Service Unavailable</h1><p>The server is currently unavailable. Please try again at a later time.</p></html>

2. Size of the file being uploaded:

[psharma@dhcp193-66 dummy_files]$ ls -lh
-rw-rw-r-- 1 psharma psharma 4.7G Jul 18 14:49 5g_1.txt

3. tail /var/log/messages

Jul 18 17:03:02 dhcp207-210 account-server 127.0.0.1 - - [18/Jul/2013:11:33:02 +0000] "HEAD /test/0/AUTH_test" 204 - "tx0cf2888f56c54793836e60bad32bb119" "-" "-" 0.0824 ""
Jul 18 17:03:02 dhcp207-210 container-server 127.0.0.1 - - [18/Jul/2013:11:33:02 +0000] "HEAD /test/0/AUTH_test/dir1" 204 - "tx0cf2888f56c54793836e60bad32bb119" "-" "-" 0.0037
Jul 18 17:04:11 dhcp207-210 object-server ERROR __call__ error with PUT /test/0/AUTH_test/dir1/5g_less.txt : [Errno 5] Input/output error (txn: tx0cf2888f56c54793836e60bad32bb119)
Jul 18 17:04:11 dhcp207-210 object-server 127.0.0.1 - - [18/Jul/2013:11:34:11 +0000] "PUT /test/0/AUTH_test/dir1/5g_less.txt" 500 864 "-" "tx0cf2888f56c54793836e60bad32bb119" "curl/7.21.7 (x86_64-redhat-linux-gnu) libcurl/7.21.7 NSS/3.13.5.0 zlib/1.2.5 libidn/1.22 libssh2/1.2.7" 68.7530
Jul 18 17:04:37 dhcp207-210 proxy-server ERROR 500 Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/swift/obj/server.py", line 928, in __call__
    res = method(req)
  File "/usr/lib/python2.6/site-packages/swift/common/utils.py", line 1558, in wrapped
    return func(*a, **kw)
  File "/usr/lib/python2.6/site-packages/swift/common/utils.py", line 520, in _timing_stats
    resp = func(ctrl, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/swift/obj/server.py", line 705, in PUT
    file.put(fd, metadata)
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib/python2.6/site-packages/gluster/swift/common/DiskFile.py", line 764, in mkstemp
    yield fd
  File "/usr/lib/python2.6/site-packages/swift/obj/server.py", line 661, in PUT
    written = os.write(fd, chunk)
OSError: [Errno 5] Input/output error
 From Object Server re: /v1/AUTH_test/dir1/5g_less.txt 127.0.0.1:6010 (txn: tx0cf2888f56c54793836e60bad32bb119) (client_ip: 10.65.193.24)
Jul 18 17:04:37 dhcp207-210 proxy-server Object PUT returning 503 for [500] (txn: tx0cf2888f56c54793836e60bad32bb119) (client_ip: 10.65.193.24)

4. Strace is attached.

Actual results:
The PUT fails with "503 Service Unavailable"; the object server logs "[Errno 5] Input/output error" and returns 500, which the proxy server turns into 503.

Expected results:
The PUT either succeeds or fails with a meaningful error code and message.

Additional info:
Created attachment 775289 [details] strace output
[swift-constraints]
# max_file_size is the largest "normal" object that can be saved in
# the cluster. This is also the limit on the size of each segment of
# a "large" object when using the large object manifest support.
# This value is set in bytes. Setting it to lower than 1MiB will cause
# some tests to fail. It is STRONGLY recommended to leave this value at
# the default (5 * 2**30 + 2).
# FIXME: Really? Gluster can handle a 2^64 sized file? And can the fronting
# web service handle such a size? I think with UFO, we need to keep with the
# default size from Swift and encourage users to research what size their web
# services infrastructure can handle.
max_file_size = 18446744073709551616
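For context, here is a minimal sketch (not Swift's actual code; names are illustrative) of how a max_file_size constraint behaves on PUT. With the limit raised to 2**64 as in the config above, a ~5 GiB upload is never rejected up front, so any failure only shows up later, while chunks are written to the brick:

    # Illustrative model of a max_file_size check; not Swift's real helper.
    MAX_FILE_SIZE = 5 * 2**30 + 2   # Swift's stock default

    def check_object_creation(content_length, max_file_size=MAX_FILE_SIZE):
        """Return an HTTP status for the size check, or None if it passes."""
        if content_length is not None and content_length > max_file_size:
            return 413  # Request Entity Too Large
        # Chunked uploads have no Content-Length, so nothing is rejected here.
        return None

    print(check_object_creation(5 * 2**30, max_file_size=2**64))  # None: passes
    print(check_object_creation(6 * 2**30))                       # 413 with the default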
Good test. We may even add this to the functional tests.
I have been able to successfully write more than 5GiB in both unit tests and on a real deployment using XFS only. I will try on GlusterFS after. Here is a sample result of running with RHS 2.1:

[root@heka-client-09 ~]# curl -v -X PUT -T 6g.dat -H 'X-Auth-Token: AUTH_tk507dd81c3fac4a13954fc6bb65f0aaae' http://127.0.0.1:8080/v1/AUTH_test/c/6g.dat
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> PUT /v1/AUTH_test/c/6g.dat HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:8080
> Accept: */*
> X-Auth-Token: AUTH_tk507dd81c3fac4a13954fc6bb65f0aaae
> Content-Length: 6442451968
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 201 Created
< Last-Modified: Fri, 19 Jul 2013 19:27:56 GMT
< Content-Length: 0
< Etag: b5e14581099e94f0a372de0fad526d95
< Content-Type: text/html; charset=UTF-8
< Date: Fri, 19 Jul 2013 19:28:58 GMT
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

[root@heka-client-09 ~]# ls -alh 6g.dat
-rw-r--r--. 1 root root 6.1G Jul 19 15:27 6g.dat
[root@heka-client-09 ~]# ls -alh /mnt/gluster-object/test/c/6g.dat
-rwxr-xr-x. 1 root root 6.1G Jul 19 15:28 /mnt/gluster-object/test/c/6g.dat

[root@heka-client-09 ~]# rpm -qa | grep swift
gluster-swift-object-1.8.0-6.3.el6rhs.noarch
gluster-swift-doc-1.4.8-4.el6.noarch
gluster-swift-1.8.0-6.3.el6rhs.noarch
gluster-swift-container-1.8.0-6.3.el6rhs.noarch
gluster-swift-account-1.8.0-6.3.el6rhs.noarch
gluster-swift-proxy-1.8.0-6.3.el6rhs.noarch
gluster-swift-plugin-1.8.0-1.el6rhs.noarch
I was able to successfully create a 6.1GB file on Gluster for Swift with a GlusterFS volume:

[root@heka-client-09 ~]# curl -v -X PUT -T 6g.dat -H 'X-Auth-Token: AUTH_tk8ff42d6206ba4322bca2e31743fb4110' http://127.0.0.1:8080/v1/AUTH_glustervol/c/6g.dat
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> PUT /v1/AUTH_glustervol/c/6g.dat HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:8080
> Accept: */*
> X-Auth-Token: AUTH_tk8ff42d6206ba4322bca2e31743fb4110
> Content-Length: 6442451968
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 201 Created
< Last-Modified: Fri, 19 Jul 2013 20:45:26 GMT
< Content-Length: 0
< Etag: b5e14581099e94f0a372de0fad526d95
< Content-Type: text/html; charset=UTF-8
< Date: Fri, 19 Jul 2013 20:46:32 GMT
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

[root@heka-client-09 ~]# ls -alh 6g.dat
-rw-r--r--. 1 root root 6.1G Jul 19 15:27 6g.dat
[root@heka-client-09 ~]# ls -alh /mnt/gluster-object/glustervol/c/6g.dat
-rwxr-xr-x. 1 root root 6.1G Jul 19 16:46 /mnt/gluster-object/glustervol/c/6g.dat
[root@heka-client-09 ~]# ls -alh /mnt/brick/glustervol/c/6g.dat
-rwxr-xr-x. 2 root root 6.1G Jul 19 16:46 /mnt/brick/glustervol/c/6g.dat
[root@heka-client-09 ~]#

Pushpesh, you may want to check your environment.
Luis,

True, it might have passed on your setup, because I was hitting a situation where the file size exceeded the brick size of the underlying gluster volume. However, I would like a fix that handles this I/O error and returns a proper error message/response code to the REST request; 507 Insufficient Storage (WebDAV) would be a good fit (see the sketch below). I would like to have a separate BZ for this. In the meantime, I am in discussion with other folks about properly documenting this limitation of RHS.
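A minimal sketch of the kind of handling being asked for; this is not the actual gluster-swift code, and the DiskFileNoSpace/write_chunks names are illustrative. Note that in this bug GlusterFS surfaced EIO (Errno 5) rather than ENOSPC, which is what bug 986812 tracks, so a mapping like this only helps once the filesystem reports the right errno:

    import errno
    import os

    class DiskFileNoSpace(Exception):
        """Illustrative exception; the real gluster-swift class name may differ."""

    def write_chunks(fd, chunks):
        """Write request-body chunks, turning a full filesystem into a distinct
        exception instead of letting a bare OSError become a 500."""
        for chunk in chunks:
            try:
                os.write(fd, chunk)
            except OSError as err:
                # A full brick should surface as ENOSPC (or EDQUOT with quotas).
                if err.errno in (errno.ENOSPC, errno.EDQUOT):
                    raise DiskFileNoSpace()
                raise

    # In the object server's PUT handler (pseudocode):
    #     try:
    #         write_chunks(fd, iter_request_body())
    #     except DiskFileNoSpace:
    #         return HTTPInsufficientStorage(request=request)  # 507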
We can track the status of https://bugzilla.redhat.com/show_bug.cgi?id=986812 to see what the final error message in this case looks like.
Luis, Please add the known issue details in the Doc Text field.
Luis, I have edited the content of the Doc Text field. Kindly review and sign off.
Looks good
This is documented as a known issue in the 2.1 release notes.
Created attachment 797447 [details] known issue
The GA link is here for the known issue: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/2.1_Release_Notes/index.html#chap-Documentation-2.1_Release_Notes-Known_Issues
Moving it back to Assigned state. The known issue is documented in the link: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/2.1_Release_Notes/index.html#chap-Documentation-2.1_Release_Notes-Known_Issues
I have updated the parent glusterfs bug with some more info: https://bugzilla.redhat.com/show_bug.cgi?id=986812

Even if the above bug is fixed and glusterfs returns ENOSPC correctly, this bug is unlikely to be fixed in gluster-swift grizzly. Even if the gluster-swift object server is made to return 507 (HTTPInsufficientStorage) to swift, the swift proxy server would still send a 503. This is because of the following function in the swift proxy server (its effect is illustrated in the sketch after the snippet):

    def best_response(self, req, statuses, reasons, bodies, server_type, etag=None):
        """
        Given a list of responses from several servers, choose the best to
        return to the API.
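To illustrate the point, a simplified model of best_response (not Swift's actual implementation): it looks for a majority of responses in the 2xx, 3xx, or 4xx class, and 5xx responses never qualify, so with a single gluster-swift object server a lone 500 or 507 always degrades to 503 Service Unavailable:

    def best_response_sketch(statuses):
        """Simplified model of the proxy's best_response: pick the status class
        (2xx/3xx/4xx) that holds a majority; otherwise fall back to 503."""
        for hundred in (200, 300, 400):
            hstatuses = [s for s in statuses if hundred <= s < hundred + 100]
            if len(hstatuses) > len(statuses) / 2:
                return max(hstatuses)
        return 503  # Service Unavailable fallback

    print(best_response_sketch([500]))  # 503 -- what this bug reports
    print(best_response_sketch([507]))  # still 503, even with a 507 from the object server
    print(best_response_sketch([201]))  # 201 Created on success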
http://review.gluster.org/#/c/6199/
gluster-swift does handle ENOSPC currently. However, the fix to make the proxy server return HTTPInsufficientStorage cannot be made in gluster-swift; it has to be done in OpenStack Swift itself. So closing this.
No need for info since this is a 'wontfix' bug.