Bug 1491739

Summary: Swift post via RGW ends up with random 500 errors
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Mike Hackett <mhackett>
Component: RGW    Assignee: Adam C. Emerson <aemerson>
Status: CLOSED ERRATA QA Contact: Tejas <tchandra>
Severity: high Docs Contact: Aron Gunn <agunn>
Priority: high    
Version: 2.3    CC: aemerson, agunn, bschmaus, cbodley, ceph-eng-bugs, hnallurv, kbader, kdreyer, mbenjamin, mhackett, mwatts, owasserm, sweil, tserlin, vumrao
Target Milestone: rc   
Target Release: 2.5   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: RHEL: ceph-10.2.10-9.el7cp Ubuntu: ceph_10.2.10-6redhat1xenial Doc Type: Bug Fix
Doc Text:
.Swift POST operations no longer generate random 500 errors
Previously, when making changes to the same bucket through multiple Ceph Object Gateways, under certain circumstances under heavy load, the Ceph Object Gateway returned a 500 error. With this release, the chance of triggering the underlying race condition is reduced.
Story Points: ---
Clone Of:
Clones: 1530801 (view as bug list)    Environment:
Last Closed: 2018-02-21 19:43:32 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1530801    
Bug Blocks: 1491723, 1536401    
Attachments:
Description: PUT success    Flags: none

Comment 2 Mike Hackett 2017-09-14 14:25:11 UTC
Created attachment 1326062 [details]
PUT success

Comment 11 Adam C. Emerson 2017-11-10 19:25:14 UTC
I have a potential fix in the works. Just making sure I don't break caching or cause other regressions. I should have a PR up sometime early next week, I think.

I haven't been able to reproduce the bug successfully, but from looking at the logs and code and talking to people, I think we just have a place where we aren't refreshing the cache. I'm being careful since I'm unfamiliar with this area.
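
A minimal sketch of that theory, assuming the fix amounts to "drop the cached entry and retry on ECANCELED". This is illustrative Python, not RGW source; every name in it (Store, Gateway, write_if_version, the retry bound) is a hypothetical stand-in:

import errno

class Cancelled(OSError):
    """Stands in for a backend returning -ECANCELED on a stale conditional write."""
    def __init__(self):
        super().__init__(errno.ECANCELED, "stale version, write cancelled")

class Store:
    """Stand-in for the authoritative backend (RADOS in the real system)."""
    def __init__(self):
        self.version = 0
        self.meta = {}

    def read(self):
        return self.version, dict(self.meta)

    def write_if_version(self, expected, meta):
        if expected != self.version:
            raise Cancelled()              # another writer won the race
        self.meta = dict(meta)
        self.version += 1

class Gateway:
    """Stand-in for one RGW instance with its own metadata cache."""
    MAX_RETRIES = 10                       # hypothetical retry bound

    def __init__(self, store):
        self.store = store
        self.cached = None                 # (version, meta) or None

    def post_metadata(self, updates):
        for _ in range(self.MAX_RETRIES):
            if self.cached is None:
                self.cached = self.store.read()
            version, meta = self.cached
            merged = {**meta, **updates}
            try:
                self.store.write_if_version(version, merged)
                self.cached = (version + 1, merged)
                return 202                 # Swift POST success
            except Cancelled:
                # The suspected bug: keeping self.cached here means every
                # retry reuses the stale version and fails with ECANCELED
                # until the process restarts. The fix is to drop it:
                self.cached = None         # re-read from the store, retry
        return 500                         # only if conflicts exceed the bound

store = Store()
gw1, gw2 = Gateway(store), Gateway(store)
assert gw1.post_metadata({"X-Container-Meta-A": "1"}) == 202
assert gw2.post_metadata({"X-Container-Meta-B": "2"}) == 202
assert gw1.post_metadata({"X-Container-Meta-C": "3"}) == 202  # exercises the stale-cache retry
print("all POSTs succeeded")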

Comment 15 Ken Dreyer (Red Hat) 2018-01-03 19:43:52 UTC
This bug is targeted for RHCEPH 2.5 and this fix is not in RHCEPH 3.

Would you please cherry-pick the change to ceph-3.0-rhel-patches (with the RHCEPH 3 clone ID number, "Resolves: rhbz#1530801") so customers do not experience a regression?

Comment 21 Adam C. Emerson 2018-01-08 15:10:37 UTC
In accordance with our current theory, the setup is:

A cluster with at least three RGWs
Under heavy enough load that we get occasional "failed to transmit cache" errors

And to reproduce:

Make metadata changes to the same buckets through multiple RGWs (such as the POST mentioned in the original filing; see the sketch at the end of this comment)
The behavior to look for is a 500 error that /persists/ until one of the RGWs is restarted
The log should mention receiving -ECANCELED

If the fix works:

You should not get the 500 error.
(A 500 is still possible in principle if enough races occur to exceed the retry count, but even in that case the failure shouldn't be /durable/.)
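
For anyone scripting this, a reproduction sketch along the lines above, assuming python-swiftclient and Swift auth v1 subuser credentials; the endpoints, credentials, and container name below are placeholders, not from this bug:

import itertools
from swiftclient.client import Connection
from swiftclient.exceptions import ClientException

# Placeholder endpoints for the three RGWs; adjust to the real cluster.
ENDPOINTS = [
    "http://rgw1.example.com:8080/auth/v1.0",
    "http://rgw2.example.com:8080/auth/v1.0",
    "http://rgw3.example.com:8080/auth/v1.0",
]
USER, KEY = "test:swift", "secret"        # placeholder Swift subuser creds
CONTAINER = "race-test"

conns = [Connection(authurl=url, user=USER, key=KEY) for url in ENDPOINTS]
conns[0].put_container(CONTAINER)

# Hammer the same container's metadata through all gateways in turn.
# With the bug, expect 500s that persist until an RGW restart (and
# -ECANCELED in its log); with the fix, the POSTs should keep succeeding.
for i, conn in zip(range(10000), itertools.cycle(conns)):
    try:
        conn.post_container(CONTAINER,
                            headers={"X-Container-Meta-Iter": str(i)})
    except ClientException as e:
        print(f"iteration {i}: HTTP {e.http_status}")

Running several copies of this loop concurrently would approximate the heavy-load condition better than a single sequential loop.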

Comment 22 Harish NV Rao 2018-01-08 15:14:46 UTC
Thanks Adam!

Comment 26 Adam C. Emerson 2018-02-05 18:14:37 UTC
I clicked 'verified' and selected 'Any', but it still shows the warning about the doc text not being verified?

Comment 28 Adam C. Emerson 2018-02-05 20:05:37 UTC
Oh. Well.

Ack!

Comment 31 errata-xmlrpc 2018-02-21 19:43:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0340