Bug 849758

Summary: UFO large object support, chunked transfer encoding
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: gluster-swift
Version: 2.0
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Reporter: Kaleb KEITHLEY <kkeithle>
Assignee: Luis Pabón <lpabon>
QA Contact: pushpesh sharma <psharma>
CC: bbandari, gluster-bugs, madam, vagarwal
Keywords: FutureFeature
Doc Type: Enhancement
Hardware: Unspecified
OS: Unspecified
Last Closed: 2013-09-23 22:29:54 UTC
Type: Bug

Description Kaleb KEITHLEY 2012-08-20 19:09:16 UTC
Description of problem:

Blocker BZ for RHS 2.1


Comment 2 Kaleb KEITHLEY 2012-09-05 17:47:40 UTC
http://docs.openstack.org/api/openstack-object-storage/1.0/content/chunked-transfer-encoding.html

Users can upload data without needing to know in advance the amount of data to be uploaded. Users can do this by specifying an HTTP header of Transfer-Encoding: chunked and not using a Content-Length header. A good use of this feature would be doing a DB dump, piping the output through gzip, then piping the data directly into OpenStack Object Storage without having to buffer the data to disk to compute the file size.

If users attempt to upload more than 5GB with this method, the server will close the TCP/IP connection after 5GB and purge the customer data from the system. Users must take responsibility for ensuring the data they transfer will be less than 5GB or for splitting it into 5GB chunks, each in its own storage object.

If you have files that are larger than 5GB and still want to use Object Storage, you can segment them prior to upload, upload them to the same container, and then use a manifest file to allow downloading of all the segmented objects concatenated as a single object.
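The streaming pattern described above can be sketched in Python (not part of the original report). python-requests sends a request body with Transfer-Encoding: chunked and no Content-Length when the body is a generator; the URL, token, and file name in the commented usage are placeholders.

```python
def chunked_body(stream, chunk_size=64 * 1024):
    """Yield fixed-size chunks from a file-like object until EOF.

    Passing a generator like this as the body of a requests.put()
    call makes the library send the data with
    Transfer-Encoding: chunked and omit the Content-Length header.
    """
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Illustrative usage (placeholder URL and token):
#
# import requests
# with open("db_dump.sql.gz", "rb") as f:
#     requests.put(
#         "http://proxy.example.com:8080/v1/AUTH_test/dir/db_dump.sql.gz",
#         headers={"X-Auth-Token": "AUTH_tk..."},
#         data=chunked_body(f),
#     )
```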

Comment 3 Kaleb KEITHLEY 2012-10-01 13:38:36 UTC
See BZ 849756. The 5G object size limit is configurable, and our override configuration already makes the object size effectively unlimited.

On top of that, the swift server already handles 'Transfer-Encoding: chunked', and both the curl and swift utilities will do chunked transfer encoding.

Comment 4 Kaleb KEITHLEY 2012-10-11 14:30:18 UTC
To be clear, it's configurable in the sense that our swift.diff patch changes the default maximum size (to a number much larger than 5G).

Upstream has a change that didn't make it into Folsom that allows for run-time "constraints configuration". We have asked for it to be back-ported into the Fedora 1.4.8 (F17) and 1.7.4 (F18) swift packages.

And our own clean-up of UFO "plugins" will incorporate the backport into our copy of swift as well, until such time as the backport is incorporated into the openstack-swift packaging and we can dispense with our copy.
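For context, a sketch of what such a run-time constraints override looks like: the [swift-constraints] section of swift.conf and the max_file_size option are upstream Swift's constraints-configuration mechanism; the specific value below is illustrative, not necessarily the one shipped in our swift.diff patch.

```ini
# /etc/swift/swift.conf
[swift-constraints]
# Upstream's default max_file_size is 5368709122 bytes (5 GiB + 2).
# Raising it makes the effective object size limit much larger:
max_file_size = 18446744073709551616
```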

Comment 5 pushpesh sharma 2013-08-06 13:16:38 UTC
Chunked object upload was tested as follows:

1.
[root@luigi ~]# dd if=/dev/zero of=10g.img bs=1000 count=0 seek=$[1000*1000*10]
0+0 records in
0+0 records out
0 bytes (0 B) copied, 3.0346e-05 s, 0.0 kB/s
[root@luigi ~]# curl -v -X PUT -T 10g.img -H 'X-Auth-Token: AUTH_tk59353ff9ae0f45ae9cf12861e70e765a' -k http://luigi.lab.eng.blr.redhat.com:8080/v1/AUTH_test/dir/10g.img
* About to connect() to luigi.lab.eng.blr.redhat.com port 8080 (#0)
*   Trying 10.70.34.102... connected
* Connected to luigi.lab.eng.blr.redhat.com (10.70.34.102) port 8080 (#0)
> PUT /v1/AUTH_test/dir/10g.img HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: luigi.lab.eng.blr.redhat.com:8080
> Accept: */*
> X-Auth-Token: AUTH_tk59353ff9ae0f45ae9cf12861e70e765a
> Content-Length: 10000000000
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
< HTTP/1.1 201 Created
< Last-Modified: Tue, 06 Aug 2013 00:11:33 GMT
< Content-Length: 0
< Etag: 2213b2de79c457359da646cdd78f840c
< Content-Type: text/html; charset=UTF-8
< Date: Tue, 06 Aug 2013 00:15:09 GMT
< 
* Connection #0 to host luigi.lab.eng.blr.redhat.com left intact
* Closing connection #0

2. 
[root@luigi ~]# ls -lh /mnt/gluster-object/test/dir/
total 9.4G
-rwxr-xr-x 1 root root 9.4G Aug  6 05:45 10g.img

3.
[root@luigi ~]# curl -v -X PUT -T 10g.img -H 'X-Auth-Token: AUTH_tk59353ff9ae0f45ae9cf12861e70e765a' -H 'Transfer-Encoding: chunked' -k http://luigi.lab.eng.blr.redhat.com:8080/v1/AUTH_test/dir/10g.img
* About to connect() to luigi.lab.eng.blr.redhat.com port 8080 (#0)
*   Trying 10.70.34.102... connected
* Connected to luigi.lab.eng.blr.redhat.com (10.70.34.102) port 8080 (#0)
> PUT /v1/AUTH_test/dir/10g.img HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: luigi.lab.eng.blr.redhat.com:8080
> Accept: */*
> X-Auth-Token: AUTH_tk59353ff9ae0f45ae9cf12861e70e765a
> Transfer-Encoding: chunked
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
< HTTP/1.1 201 Created
< Last-Modified: Tue, 06 Aug 2013 00:17:57 GMT
< Content-Length: 0
< Etag: 2213b2de79c457359da646cdd78f840c
< Content-Type: text/html; charset=UTF-8
< Date: Tue, 06 Aug 2013 00:21:36 GMT
< 
* Connection #0 to host luigi.lab.eng.blr.redhat.com left intact
* Closing connection #0

4. 
[root@luigi ~]# md5sum /mnt/gluster-object/test/dir/10g.img
2213b2de79c457359da646cdd78f840c  /mnt/gluster-object/test/dir/10g.img
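The md5sum check in step 4 can equivalently be done in Python, hashing the file in fixed-size chunks so a 10G object never has to be held in memory (a sketch; the path is the one from the transcript). The hex digest should match the Etag returned in the PUT responses above, since Swift's Etag is the MD5 of the object body.

```python
import hashlib

def md5_of_file(path, chunk_size=1024 * 1024):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# e.g. md5_of_file("/mnt/gluster-object/test/dir/10g.img")
```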

Comment 6 pushpesh sharma 2013-08-06 13:17:16 UTC
Verified on RHS 2.1.
[root@luigi catalyst]# rpm -qa|grep gluster
gluster-swift-container-1.8.0-6.11.el6rhs.noarch
gluster-swift-1.8.0-6.11.el6rhs.noarch
glusterfs-fuse-3.4.0.14rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.14rhs-1.el6rhs.x86_64
gluster-swift-proxy-1.8.0-6.11.el6rhs.noarch
gluster-swift-account-1.8.0-6.11.el6rhs.noarch
glusterfs-rdma-3.4.0.14rhs-1.el6rhs.x86_64
vdsm-gluster-4.10.2-23.0.1.el6rhs.noarch
glusterfs-3.4.0.14rhs-1.el6rhs.x86_64
gluster-swift-object-1.8.0-6.11.el6rhs.noarch
glusterfs-geo-replication-3.4.0.14rhs-1.el6rhs.x86_64
gluster-swift-plugin-1.8.0-4.el6rhs.noarch

Comment 7 Scott Haines 2013-09-23 22:29:54 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html