Bug 1670321
Summary: | [GSS] Downloads are corrupted when using RGW with civetweb as frontend | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Karun Josy <kjosy> |
Component: | RGW | Assignee: | Casey Bodley <cbodley> |
Status: | CLOSED ERRATA | QA Contact: | ceph-qe-bugs <ceph-qe-bugs> |
Severity: | high | Docs Contact: | Bara Ancincova <bancinco> |
Priority: | high | ||
Version: | 3.2 | CC: | assingh, bancinco, cbodley, ceph-eng-bugs, ceph-qe-bugs, edonnell, kbader, kjosy, mbenjamin, sweil, tchandra, tserlin, vimishra |
Target Milestone: | z2 | ||
Target Release: | 3.2 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | RHEL: ceph-12.2.8-113.el7cp Ubuntu: ceph_12.2.8-96redhat1xenial | Doc Type: | Bug Fix |
Doc Text: |
.CivetWeb was rebased to upstream version 1.10 and the `enable_keep_alive` CivetWeb option works as expected
When using the Ceph Object Gateway with the CivetWeb front end, CivetWeb connections timed out even when the `enable_keep_alive` option was enabled. Consequently, S3 clients that did not reconnect or retry were unreliable. With this update, CivetWeb has been rebased to the 1.10 upstream version, and the `enable_keep_alive` option works as expected. As a result, CivetWeb connections no longer time out in this case.
In addition, the new CivetWeb version introduces stricter header checks. This new behavior can cause certain return codes to change because invalid requests are detected sooner. For example, in the previous version CivetWeb returned the `403 Forbidden` error on an invalid HTTP request, but in the new version it returns the `400 Bad Request` error instead.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2019-04-30 15:56:46 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
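The `enable_keep_alive` option discussed in the Doc Text is passed to CivetWeb through the gateway's front-end setting. A minimal sketch of how it might look in `ceph.conf` (the port number and client section name are illustrative, not taken from this report):

```ini
[client.rgw.gateway-node1]
# Pass CivetWeb options via the front-end string; enable_keep_alive
# keeps client connections open between requests.
rgw_frontends = civetweb port=8080 enable_keep_alive=yes
```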
Description
Karun Josy
2019-01-29 09:21:36 UTC
Hello,

There is a correction in the command mentioned in the description to download the files in parallel. It should be:

====
for i in {1..10}; do aria2c --max-overall-download-limit=250K -s 8 -x 8 -o test$i "http://10.74.255.176:8080/test/CentOS-7-x86_64-NetInstall-1810.iso" && md5sum test$i >> testmd5 & done
====

This will download the ISO file 10 times and save it as 'test1' through 'test10'. We can then compare the md5sums of the downloaded files, recorded in the file 'testmd5', to see whether any of them are corrupted.

Thanks and regards!
Karun

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911
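The comparison step the reporter describes can be sketched as a small script. This is not from the bug report: it uses stand-in files instead of the real aria2c downloads, and simply counts distinct checksums in a 'testmd5'-style file; one distinct checksum means all copies are byte-identical.

```shell
#!/bin/sh
set -e

# Stand-ins for the downloaded files test1..test10; in the real
# reproducer these come from the aria2c loop above.
for i in 1 2 3; do
    printf 'same payload\n' > "test$i"
done

# Record checksums, as the '>> testmd5' redirection does in the loop.
md5sum test1 test2 test3 > testmd5

# If every download is intact, there is exactly one distinct checksum.
distinct=$(awk '{print $1}' testmd5 | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "all downloads identical"
else
    echo "corruption detected: $distinct distinct checksums"
    exit 1
fi
```

Running this against real downloads only requires replacing the stand-in loop with the actual files produced by aria2c.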