Bug 1491739 - Swift post via RGW ends up with random 500 errors
Summary: Swift post via RGW ends up with random 500 errors
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 2.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 2.5
Assignee: Adam C. Emerson
QA Contact: Tejas
Docs Contact: Aron Gunn
URL:
Whiteboard:
Depends On: 1530801
Blocks: 1491723 1536401
Reported: 2017-09-14 14:24 UTC by Mike Hackett
Modified: 2021-12-10 15:30 UTC
CC List: 15 users

Fixed In Version: RHEL: ceph-10.2.10-9.el7cp Ubuntu: ceph_10.2.10-6redhat1xenial
Doc Type: Bug Fix
Doc Text:
.Swift POST operations no longer generate random 500 errors
Previously, when metadata changes were made to the same bucket through multiple Ceph Object Gateways, the Ceph Object Gateway could, under certain circumstances and heavy load, return a 500 error. With this release, the likelihood of triggering the underlying race condition is reduced.
Clone Of:
Clones: 1530801
Environment:
Last Closed: 2018-02-21 19:43:32 UTC
Embargoed:


Attachments
PUT success (47.73 KB, text/plain)
2017-09-14 14:25 UTC, Mike Hackett


Links
System | ID | Private | Priority | Status | Summary | Last Updated
Ceph Project Bug Tracker | 22517 | 0 | None | None | None | 2017-12-20 21:48:42 UTC
Github ceph ceph | pull 18954 | 0 | None | None | None | 2017-12-20 21:50:26 UTC
Github ceph ceph | pull 19581 | 0 | None | None | None | 2017-12-20 21:51:26 UTC
Github ceph ceph | pull 19601 | 0 | None | None | None | 2017-12-21 21:19:00 UTC
Red Hat Issue Tracker | RHCEPH-2600 | 0 | None | None | None | 2021-12-10 15:30:16 UTC
Red Hat Product Errata | RHBA-2018:0340 | 0 | normal | SHIPPED_LIVE | Red Hat Ceph Storage 2.5 bug fix and enhancement update | 2018-02-22 00:50:32 UTC

Comment 2 Mike Hackett 2017-09-14 14:25:11 UTC
Created attachment 1326062 [details]
PUT success

Comment 11 Adam C. Emerson 2017-11-10 19:25:14 UTC
I have a potential fix in the works. Just making sure I don't break caching or introduce other problems. I should have a PR up sometime early next week, I think.

I haven't been able to reproduce the bug successfully, but from looking at the logs and code and talking to people, I think we just have a place where we aren't refreshing the cache. I'm being careful since I'm unfamiliar with this area of the code.
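
The stale-cache failure mode described above can be illustrated with a small model. The sketch below is only that: every class and method name in it (Store, CachingGateway, write_if_version, and so on) is a hypothetical stand-in for the real RGW/RADOS machinery, and the actual fix is in the pull requests linked in this bug. The point it shows is that a gateway which loses a conditional-write race (the analogue of -ECANCELED) and never refreshes its cached metadata will keep failing on every later request.

# Hypothetical model of the failure mode; these names do not correspond to
# real RGW code, they only illustrate why a conditional write keeps failing
# until the cached metadata is refreshed.

class ConflictError(Exception):
    """Stand-in for the -ECANCELED a losing racer gets back."""

class Store:
    """Authoritative metadata store: accepts a write only at the expected version."""
    def __init__(self):
        self.version = 0
        self.data = {}

    def read(self):
        return self.version, dict(self.data)

    def write_if_version(self, expected_version, data):
        if expected_version != self.version:
            raise ConflictError("version mismatch")   # another gateway won the race
        self.version += 1
        self.data = data

class CachingGateway:
    """Gateway holding a metadata cache; may or may not refresh it on conflict."""
    def __init__(self, store, refresh_on_conflict, retries=3):
        self.store = store
        self.refresh_on_conflict = refresh_on_conflict
        self.retries = retries
        self.cached = store.read()                    # (version, data)

    def post_metadata(self, update):
        for _ in range(self.retries):
            version, data = self.cached
            try:
                self.store.write_if_version(version, {**data, **update})
                self.cached = self.store.read()
                return 200
            except ConflictError:
                if not self.refresh_on_conflict:
                    return 500                        # stale entry reused forever: durable 500s
                self.cached = self.store.read()       # refresh and retry
        return 500                                    # retries exhausted: possible, but not durable

Driving two CachingGateway instances against the same Store shows the difference: with refresh_on_conflict=False, a gateway that loses one race returns 500 for every later POST until its cache is rebuilt (i.e. a restart); with refresh_on_conflict=True it recovers on the next attempt.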

Comment 15 Ken Dreyer (Red Hat) 2018-01-03 19:43:52 UTC
This bug is targeted for RHCEPH 2.5 and this fix is not in RHCEPH 3.

Would you please cherry-pick the change to ceph-3.0-rhel-patches (with the RHCEPH 3 clone ID number, "Resolves: rhbz#1530801") so customers do not experience a regression?

Comment 21 Adam C. Emerson 2018-01-08 15:10:37 UTC
In accordance with our current theory, the setup is:

A cluster with at least three RGWs
Under heavy enough load that we get occasional "failed to transmit cache" errors

And to reproduce:

Make metadata changes to the same buckets through multiple RGWs (such as the POST mentioned in the original filing); a rough driver for this is sketched at the end of this comment
The behavior to look for is a 500 error that /persists/ until one of the RGWs is restarted
The log should mention receiving -ECANCELED

If the fix works:

You should not get the 500 error.
(It's still possible in principle, if you end up with enough races to exceed the retry count, but even in that case the failure shouldn't be /durable/.)
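
For anyone setting this up, here is a rough load driver along the lines of the steps above. It is only a sketch, assuming python-swiftclient and Swift v1 auth against the gateways; the endpoint URLs, credentials, and container name are placeholders to be replaced with values from the cluster under test.

#!/usr/bin/env python3
# Rough driver for the reproduction steps above: POST metadata changes to the
# same container through several RGW endpoints at once. Endpoints, credentials,
# and the container name below are placeholders.
import threading

from swiftclient import Connection
from swiftclient.exceptions import ClientException

RGW_ENDPOINTS = [                      # one Swift auth URL per gateway (placeholders)
    "http://rgw1.example.com:8080/auth/v1.0",
    "http://rgw2.example.com:8080/auth/v1.0",
    "http://rgw3.example.com:8080/auth/v1.0",
]
USER = "test:swift"                    # placeholder Swift subuser
KEY = "secret"                         # placeholder Swift key
CONTAINER = "race-test"
REQUESTS_PER_GATEWAY = 500

def hammer(authurl, worker_id):
    # retries=0 so 500s surface immediately instead of being retried internally
    conn = Connection(authurl=authurl, user=USER, key=KEY,
                      auth_version="1.0", retries=0)
    for i in range(REQUESTS_PER_GATEWAY):
        try:
            conn.post_container(CONTAINER, {"X-Container-Meta-Race": f"{worker_id}-{i}"})
        except ClientException as e:
            # A 500 that keeps recurring until an RGW restart is the bug;
            # the RGW log should show -ECANCELED around the same time.
            print(f"gateway {worker_id}: HTTP {e.http_status} on request {i}")

# Create the container once, then change its metadata from every gateway in parallel.
Connection(authurl=RGW_ENDPOINTS[0], user=USER, key=KEY,
           auth_version="1.0").put_container(CONTAINER)
threads = [threading.Thread(target=hammer, args=(url, n))
           for n, url in enumerate(RGW_ENDPOINTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

While this runs, watch the Swift client output for 500s and the RGW logs for -ECANCELED; the bug is the case where the 500s keep coming back from the same gateway until it is restarted.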

Comment 22 Harish NV Rao 2018-01-08 15:14:46 UTC
Thanks Adam!

Comment 26 Adam C. Emerson 2018-02-05 18:14:37 UTC
I clicked on verified and selected 'Any' but it still shows the warning about the doc text not being verified?

Comment 28 Adam C. Emerson 2018-02-05 20:05:37 UTC
Oh. Well.

Ack!

Comment 31 errata-xmlrpc 2018-02-21 19:43:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0340

