Bug 1670321 - [GSS] Downloads are corrupted when using RGW with civetweb as frontend
Summary: [GSS] Downloads are corrupted when using RGW with civetweb as frontend
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RGW
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: z2
Target Release: 3.2
Assignee: Casey Bodley
QA Contact: ceph-qe-bugs
Docs Contact: Bara Ancincova
Depends On:
Reported: 2019-01-29 09:21 UTC by Karun Josy
Modified: 2019-11-12 13:25 UTC (History)
13 users

Fixed In Version: RHEL: ceph-12.2.8-113.el7cp Ubuntu: ceph_12.2.8-96redhat1xenial
Doc Type: Bug Fix
Doc Text:
.CivetWeb was rebased to upstream version 1.10 and the `enable_keep_alive` CivetWeb option works as expected
When using the Ceph Object Gateway with the CivetWeb front end, CivetWeb connections timed out even with the `enable_keep_alive` option enabled. Consequently, S3 clients that did not reconnect or retry were unreliable. With this update, CivetWeb has been rebased to upstream version 1.10, and the `enable_keep_alive` option works as expected. As a result, CivetWeb connections no longer time out in this case. In addition, the new CivetWeb version introduces stricter header checks. This behavior can cause certain return codes to change because invalid requests are detected sooner. For example, where the previous version of CivetWeb returned a `403 Forbidden` error on an invalid HTTP request, the new version returns a `400 Bad Request` error instead.
Clone Of:
Last Closed: 2019-04-30 15:56:46 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2019:0911 0 None None None 2019-04-30 15:57:00 UTC

Description Karun Josy 2019-01-29 09:21:36 UTC
* Description of problem:

While using the Ceph RADOS Gateway with CivetWeb as the frontend, segmented downloads of large files get corrupted. This issue is seen only when the downloaded files are very large and multiple instances of the download run in parallel.

* Version-Release number of selected component (if applicable):

* How reproducible:

* Steps to Reproduce:
The issue can be reproduced by putting a large file in a test bucket, then using the 'aria2' download manager to download 10 instances of the file with 8 segments each and a 250 KB/s rate limit, calculating the MD5 sums, and comparing them.
For example:
- Copy an ISO file to a test bucket
    # aws s3 cp  CentOS-7-x86_64-NetInstall-1810.iso s3://test --acl public-read-write --endpoint-url
- Install aria2 in another server
    # wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    # rpm -ivvh epel-release-latest-7.noarch.rpm
    # yum install aria2
- Run the test command :
    # for i in {1..10}; do aria2c --max-overall-download-limit=250K -s 8 -x 8 -o test$i "" && md5sum test$i; done 

* Actual results:
The downloaded files have different md5sums, which implies they are corrupted.

* Expected results:
All downloaded files should have the same md5sum.

* Additional info:
We ran the same test using the 'Beast' frontend and were not able to recreate the issue.

Comment 2 Karun Josy 2019-01-29 10:53:51 UTC

There is a correction to the command mentioned in the description for downloading the files in parallel. It should be:
for i in {1..10}; do aria2c --max-overall-download-limit=250K -s 8 -x 8 -o test$i "" && md5sum test$i >> testmd5 & done

This downloads the ISO file 10 times, saving the copies as 'test1' through 'test10'. The md5sums of the downloaded files are recorded in the file 'testmd5' and can be compared to see whether any download is corrupted.
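The checksum comparison can be automated. A small sketch, assuming a `testmd5` file in the format produced by the loop above; the sample contents written below are placeholders, not real checksums:

```shell
# Stand-in for the 'testmd5' file the aria2 loop produces:
# one "<md5sum>  <filename>" line per downloaded copy.
cat > testmd5 <<'EOF'
0123456789abcdef0123456789abcdef  test1
0123456789abcdef0123456789abcdef  test2
EOF

# All downloads are intact only if every line carries the same checksum.
distinct=$(awk '{print $1}' testmd5 | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "all downloads match"
else
    echo "corruption detected: $distinct distinct checksums"
fi
```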

Thanks and regards!

Comment 23 errata-xmlrpc 2019-04-30 15:56:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

