Bug 1183182

Summary: RadosGW urlencode
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: bandrus
Component: RGW
Assignee: Yehuda Sadeh <yehuda>
Status: CLOSED ERRATA
QA Contact: Warren <wusui>
Severity: high
Docs Contact:
Priority: unspecified
Version: 1.2.2
CC: cbodley, ceph-eng-bugs, flucifre, icolle, kbader, kdreyer, mbenjamin, nlevine, owasserm, sweil, tmuthami, vumrao, wschulze, yehuda
Target Milestone: pre-dev-freeze
Target Release: 1.2.4
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-09-02 14:07:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description bandrus 2015-01-16 23:10:06 UTC
Description of problem:
Some S3 clients use "/" in their multipart upload IDs. RadosGW cannot handle these correctly and returns a 403.

Version-Release number of selected component (if applicable):
v0.80.7 (firefly)

How reproducible:
Easily reproducible.

Steps to Reproduce:
1. Use a client that includes a "/" in the upload ID for multipart uploads (confirmed with TNTDrive, Updraft Plus, and AWS SDK for JavaScript v2.0.29)
2. Receive 403 from RadosGW

Actual results:
RadosGW cannot properly decode "/" and returns a 403

Expected results:
RadosGW should properly decode "/" in the upload ID and continue

Additional info:
Regression introduced by the fix for this upstream bug: http://tracker.ceph.com/issues/8702

Comment 1 Yehuda Sadeh 2015-01-28 22:14:46 UTC
Note that it's not that radosgw cannot decode the slash properly; rather, the client fails to sign the request correctly when the slash is there. The workaround is to avoid slashes in upload IDs, as the client doesn't handle them correctly.
The upstream bug for this issue is #10271, and a fix has already been pushed upstream (master, giant, firefly).
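To make the signing failure described above concrete: with AWS S3 v2-style authentication, the signature is base64(HMAC-SHA1(secret, StringToSign)), and the StringToSign includes the canonicalized resource. If the client builds that resource with the raw (decoded) uploadId while the server reconstructs it from the percent-encoded request line, the two signatures diverge and the request is rejected with a 403. A sketch under those assumptions (key, bucket, key name, date, and upload ID are all illustrative):

```python
import base64
import hashlib
import hmac

SECRET_KEY = b"examplesecretkey"  # illustrative only, not a real credential

def sign_v2(string_to_sign: str) -> str:
    # AWS S3 v2 signatures are base64(HMAC-SHA1(secret, StringToSign)).
    digest = hmac.new(SECRET_KEY, string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def string_to_sign(canonical_resource: str) -> str:
    # Minimal v2 StringToSign: Verb, Content-MD5, Content-Type, Date, resource.
    return "\n".join(["PUT", "", "", "Fri, 16 Jan 2015 23:10:06 GMT",
                      canonical_resource])

upload_id = "2/FDmhvi3z1HTbSYNXg0Vc9lU3Ej2eBM"   # hypothetical upload ID

# Client signs using the raw (decoded) uploadId in the canonical resource...
client_sig = sign_v2(string_to_sign(f"/bucket/key?uploadId={upload_id}"))

# ...but the other side reconstructs it from the percent-encoded request line:
encoded = upload_id.replace("/", "%2F")
server_sig = sign_v2(string_to_sign(f"/bucket/key?uploadId={encoded}"))

# The strings-to-sign differ, so the signatures do not match -> 403.
print(client_sig == server_sig)   # False
```

The fix referenced below makes both sides agree on the canonical form, so the signatures match regardless of the slash.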

Comment 2 Neil Levine 2015-02-12 00:48:09 UTC
Yehuda, is this fix in 0.80.8?

Comment 3 Yehuda Sadeh 2015-02-12 00:56:56 UTC
(In reply to Neil Levine from comment #2)
> Yehuda, is this fix in 0.80.8?

No, it didn't make the cutoff. It will be in the following version.

Comment 4 Ken Dreyer (Red Hat) 2015-04-23 16:12:02 UTC
I don't see a firefly-specific PR upstream so I guess the patches were pushed directly to the firefly branch.

https://github.com/ceph/ceph/commit/24c13d87039d4f61df0bcabdb8862e0e94fe575d
https://github.com/ceph/ceph/commit/617002d3ff469ef409a83e35d4f4fd6a0b5b1278

These are present in upstream's v0.80.9.

Comment 7 Yehuda Sadeh 2015-06-22 21:18:00 UTC
*** Bug 1233529 has been marked as a duplicate of this bug. ***

Comment 8 Yehuda Sadeh 2015-06-22 21:21:11 UTC
*** Bug 1233530 has been marked as a duplicate of this bug. ***

Comment 9 Federico Lucifredi 2015-07-11 00:55:38 UTC
Presuming this already fixed in Hammer/1.3.0 — if not, please dupe bug and NEEDINFO me for ack.

Comment 10 Ken Dreyer (Red Hat) 2015-07-22 15:27:15 UTC
(In reply to Federico Lucifredi from comment #9)
> Presuming this already fixed in Hammer/1.3.0 — if not, please dupe bug and
> NEEDINFO me for ack.

Correct, the necessary commits are already in v0.94.1 upstream (and RHCS 1.3.0), as 5fc7a0be67a03ed63fcc8408f8d71a31a1841076 and 21e07eb6abacb085f81b65acd706b46af29ffc03.

Comment 11 Warren 2015-08-26 05:07:50 UTC
Works on the 1.2.3.2 ISOs for trusty and precise.

Comment 12 Warren 2015-09-01 02:13:40 UTC
Works on the 1.2.3.2 ISOs for CentOS 6.7.

Comment 14 errata-xmlrpc 2015-09-02 14:07:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1703.html