Bug 985862

| Field | Value |
| --- | --- |
| Summary | Input/output error (when brick size is less than file size) should be handled properly; returned response code should be 507 |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | doc-Release_Notes |
| Version | 2.1 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED WONTFIX |
| Severity | low |
| Priority | low |
| Reporter | pushpesh sharma <psharma> |
| Assignee | Prashanth Pai <ppai> |
| QA Contact | Sudhir D <sdharane> |
| Docs Contact | |
| CC | asriram, bbandari, divya, gluster-bugs, lpabon, mhideo, nlevinki, psriniva, rhs-bugs, sharne, storage-doc, vraman |
| Target Milestone | --- |
| Target Release | --- |
| Keywords | Reopened |
| Flags | lpabon: needinfo+ |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | Known Issue |
| Doc Text | Issue: When you try to copy a file whose size exceeds that of a brick, an HTTP return code of 503 is returned. Workaround: Increase the amount of storage available in the volume and retry. |
| Story Points | --- |
| Clone Of | |
| Environment | |
| Last Closed | 2014-02-06 10:37:13 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | 986812 |
| Bug Blocks | |
| Attachments | strace output (attachment 775289), known issue (attachment 797447) |
Description (pushpesh sharma, 2013-07-18 11:42:19 UTC)

Created attachment 775289 [details]: strace output
    [swift-constraints]
    # max_file_size is the largest "normal" object that can be saved in
    # the cluster. This is also the limit on the size of each segment of
    # a "large" object when using the large object manifest support.
    # This value is set in bytes. Setting it to lower than 1MiB will cause
    # some tests to fail. It is STRONGLY recommended to leave this value at
    # the default (5 * 2**30 + 2).
    # FIXME: Really? Gluster can handle a 2^64 sized file? And can the fronting
    # web service handle such a size? I think with UFO, we need to keep with the
    # default size from Swift and encourage users to research what size their web
    # services infrastructure can handle.
    max_file_size = 18446744073709551616

Good test.. we may even add this to the functional tests. I have been able to successfully write more than 5 GiB in both unit tests and on a real deployment using XFS only. I will try on GlusterFS after. Here is a sample result of running with RHS 2.1:

    [root@heka-client-09 ~]# curl -v -X PUT -T 6g.dat -H 'X-Auth-Token: AUTH_tk507dd81c3fac4a13954fc6bb65f0aaae' http://127.0.0.1:8080/v1/AUTH_test/c/6g.dat
    * About to connect() to 127.0.0.1 port 8080 (#0)
    * Trying 127.0.0.1... connected
    * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
    > PUT /v1/AUTH_test/c/6g.dat HTTP/1.1
    > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
    > Host: 127.0.0.1:8080
    > Accept: */*
    > X-Auth-Token: AUTH_tk507dd81c3fac4a13954fc6bb65f0aaae
    > Content-Length: 6442451968
    > Expect: 100-continue
    >
    < HTTP/1.1 100 Continue
    < HTTP/1.1 201 Created
    < Last-Modified: Fri, 19 Jul 2013 19:27:56 GMT
    < Content-Length: 0
    < Etag: b5e14581099e94f0a372de0fad526d95
    < Content-Type: text/html; charset=UTF-8
    < Date: Fri, 19 Jul 2013 19:28:58 GMT
    <
    * Connection #0 to host 127.0.0.1 left intact
    * Closing connection #0
    [root@heka-client-09 ~]# ls -alh 6g.dat
    -rw-r--r--. 1 root root 6.1G Jul 19 15:27 6g.dat
    [root@heka-client-09 ~]# ls -alh /mnt/gluster-object/test/c/6g.dat
    -rwxr-xr-x. 1 root root 6.1G Jul 19 15:28 /mnt/gluster-object/test/c/6g.dat
    [root@heka-client-09 ~]# rpm -qa | grep swift
    gluster-swift-object-1.8.0-6.3.el6rhs.noarch
    gluster-swift-doc-1.4.8-4.el6.noarch
    gluster-swift-1.8.0-6.3.el6rhs.noarch
    gluster-swift-container-1.8.0-6.3.el6rhs.noarch
    gluster-swift-account-1.8.0-6.3.el6rhs.noarch
    gluster-swift-proxy-1.8.0-6.3.el6rhs.noarch
    gluster-swift-plugin-1.8.0-1.el6rhs.noarch

I was able to successfully create a 6.1 GB file on Gluster for Swift with a GlusterFS volume:

    [root@heka-client-09 ~]# curl -v -X PUT -T 6g.dat -H 'X-Auth-Token: AUTH_tk8ff42d6206ba4322bca2e31743fb4110' http://127.0.0.1:8080/v1/AUTH_glustervol/c/6g.dat
    * About to connect() to 127.0.0.1 port 8080 (#0)
    * Trying 127.0.0.1... connected
    * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
    > PUT /v1/AUTH_glustervol/c/6g.dat HTTP/1.1
    > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
    > Host: 127.0.0.1:8080
    > Accept: */*
    > X-Auth-Token: AUTH_tk8ff42d6206ba4322bca2e31743fb4110
    > Content-Length: 6442451968
    > Expect: 100-continue
    >
    < HTTP/1.1 100 Continue
    < HTTP/1.1 201 Created
    < Last-Modified: Fri, 19 Jul 2013 20:45:26 GMT
    < Content-Length: 0
    < Etag: b5e14581099e94f0a372de0fad526d95
    < Content-Type: text/html; charset=UTF-8
    < Date: Fri, 19 Jul 2013 20:46:32 GMT
    <
    * Connection #0 to host 127.0.0.1 left intact
    * Closing connection #0
    [root@heka-client-09 ~]# ls -alh 6g.dat
    -rw-r--r--. 1 root root 6.1G Jul 19 15:27 6g.dat
    [root@heka-client-09 ~]# ls -alh /mnt/gluster-object/glustervol/c/6g.dat
    -rwxr-xr-x. 1 root root 6.1G Jul 19 16:46 /mnt/gluster-object/glustervol/c/6g.dat
    [root@heka-client-09 ~]# ls -alh /mnt/brick/glustervol/c/6g.dat
    -rwxr-xr-x. 2 root root 6.1G Jul 19 16:46 /mnt/brick/glustervol/c/6g.dat
    [root@heka-client-09 ~]#

Pushpesh, you may want to check your environment.

Luis, true, it may have passed on your setup, because I was hitting a situation where my file size exceeded the brick size of the underlying gluster volume. I would like a fix that handles this I/O error and returns a proper error message/response code to the REST request; 507 Insufficient Storage (WebDAV) would be a good fix. I would like a separate BZ for this. However, I am in discussion with other folks about properly documenting this limitation of RHS. We can track the status of https://bugzilla.redhat.com/show_bug.cgi?id=986812 to see what the final error message in this case looks like.

Luis, please add the known issue details in the Doc Text field.

Luis, I have edited the content of the Doc Text field. Kindly review and sign off.

Looks good.

This is documented as a known issue in the 2.1 release notes.

Created attachment 797447 [details]: known issue
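A note on the 507 suggestion above: the sketch below shows one way an object server could map out-of-space and I/O errors from the underlying brick to 507 Insufficient Storage instead of a generic 5xx. It is a minimal, hypothetical illustration in plain WSGI; the handler name, file path, and errno list are assumptions, and this is not the actual gluster-swift code.

```python
import errno
from wsgiref.simple_server import make_server


def object_put_app(environ, start_response):
    """Illustrative PUT handler (not gluster-swift): translate brick
    out-of-space / I/O errors into 507 Insufficient Storage."""
    length = int(environ.get('CONTENT_LENGTH') or 0)
    try:
        # Hypothetical write path; a real object server would stream the
        # body to a temp file on the GlusterFS mount and rename it into place.
        with open('/tmp/demo-object', 'wb') as f:
            f.write(environ['wsgi.input'].read(length))
    except OSError as err:
        if err.errno in (errno.ENOSPC, errno.EDQUOT, errno.EIO):
            # The brick (or a quota) ran out of room: report it explicitly
            # rather than letting it surface as a 500/503.
            start_response('507 Insufficient Storage', [('Content-Length', '0')])
            return [b'']
        raise
    start_response('201 Created', [('Content-Length', '0')])
    return [b'']


if __name__ == '__main__':
    # e.g. curl -v -X PUT -T somefile http://127.0.0.1:8080/
    make_server('127.0.0.1', 8080, object_put_app).serve_forever()
```

As the later comments explain, mapping the errno on the object server is only half of the problem; the status the client finally sees is decided by the proxy server.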
The GA link for the known issue is here: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/2.1_Release_Notes/index.html#chap-Documentation-2.1_Release_Notes-Known_Issues

Moving it back to the Assigned state.

The known issue is documented in the link: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/2.1_Release_Notes/index.html#chap-Documentation-2.1_Release_Notes-Known_Issues

I have updated the parent glusterfs bug with some more info: https://bugzilla.redhat.com/show_bug.cgi?id=986812

Even if the above bug is fixed and glusterfs returns ENOSPC correctly, this bug is unlikely to be fixed in gluster-swift grizzly. Even if the gluster-swift object-server is made to return 507 (HTTPInsufficientStorage) to swift, the swift proxy-server would still send a 503. This is because of the following function in the swift proxy-server:

    def best_response(self, req, statuses, reasons, bodies, server_type, etag=None):
        """
        Given a list of responses from several servers, choose the best to
        return to the API.
        ...

gluster-swift does handle ENOSPC currently. However, the fix to make the proxy-server return HTTPInsufficientStorage cannot be made in gluster-swift but has to be done in OpenStack Swift. So closing this.

No need for info since this is a 'wontfix' bug.
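To make the proxy-side limitation concrete, here is a deliberately simplified, hypothetical stand-in for the response-selection step quoted above; it is not the actual Swift best_response() implementation. It assumes the proxy discards a backend's 507 as a node error rather than counting it toward a quorum, so a PUT against a full single-brick volume surfaces to the client as 503 Service Unavailable.

```python
from collections import Counter


def best_response_sketch(statuses, quorum):
    """Rough stand-in for the proxy behaviour described above; NOT the
    actual Swift best_response() implementation."""
    # Assumption: a 507 from a backend is treated as a node problem
    # (error-limited) instead of a response that can win the vote.
    countable = [s for s in statuses if s != 507]
    families = Counter(s // 100 for s in countable)
    for family in (2, 3, 4, 5):
        if families[family] >= quorum:
            # Most common concrete status within the winning family.
            winners = Counter(s for s in countable if s // 100 == family)
            return winners.most_common(1)[0][0]
    return 503  # no quorum among the backends -> Service Unavailable


# A single-replica gluster-swift volume whose brick is full: the object
# server's 507 never wins, so the client sees 503 (what this bug reports).
print(best_response_sketch([507], quorum=1))            # -> 503
print(best_response_sketch([201, 201, 507], quorum=2))  # -> 201
```

Under this model, nothing gluster-swift returns can change the final status on its own, which is why the comment concludes the fix has to land in OpenStack Swift itself.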