Bug 1491739
| Field | Value |
| --- | --- |
| Summary | Swift post via RGW ends up with random 500 errors |
| Product | [Red Hat Storage] Red Hat Ceph Storage |
| Component | RGW |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Version | 2.3 |
| Target Milestone | rc |
| Target Release | 2.5 |
| Hardware | x86_64 |
| OS | Linux |
| Reporter | Mike Hackett <mhackett> |
| Assignee | Adam C. Emerson <aemerson> |
| QA Contact | Tejas <tchandra> |
| Docs Contact | Aron Gunn <agunn> |
| CC | aemerson, agunn, bschmaus, cbodley, ceph-eng-bugs, hnallurv, kbader, kdreyer, mbenjamin, mhackett, mwatts, owasserm, sweil, tserlin, vumrao |
| Whiteboard | |
| Fixed In Version | RHEL: ceph-10.2.10-9.el7cp; Ubuntu: ceph_10.2.10-6redhat1xenial |
| Doc Type | Bug Fix |
| Doc Text | Swift POST operations no longer generate random 500 errors. Previously, when changes were made to the same bucket through multiple Ceph Object Gateways, the Ceph Object Gateway could return a 500 error under certain circumstances under heavy load. With this release, the chance of triggering the underlying race condition is reduced. |
| Story Points | --- |
| Clone Of | |
| Cloned To | 1530801 (view as bug list) |
| Environment | |
| Last Closed | 2018-02-21 19:43:32 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | 1530801 |
| Bug Blocks | 1491723, 1536401 |
| Attachments | |
I have a potential fix in the works; I'm just making sure I don't break caching or cause other problems. I should have a PR up sometime early next week. I haven't been able to reproduce the bug successfully, but from looking at the logs and code and talking to people, I think we simply have a place where we aren't refreshing the cache. I'm being careful since I'm unfamiliar with this area.

This bug is targeted for RHCEPH 2.5 and this fix is not in RHCEPH 3. Would you please cherry-pick the change to ceph-3.0-rhel-patches (with the RHCEPH 3 clone ID number, "Resolves: rhbz#1530801") so customers do not experience a regression?

In accordance with our current theory, the setup is:

- A cluster with at least three RGWs
- Heavy enough load that we get occasional "failed to transmit cache" errors

To reproduce (see the sketch after these comments):

- Make metadata changes to the same buckets through multiple RGWs (such as the POST mentioned in the original filing)

The behavior to look for:

- A 500 error that *persists* until one of the RGWs is restarted
- The log should mention receiving -ECANCELED

If the fix works, you should not get the 500 error. (It is still possible in principle if you hit enough races to exceed the retry count, but even in that case the failure should not be *durable*.)

Thanks Adam! I clicked on Verified and selected 'Any', but it still shows the warning about the doc text not being verified?

Oh. Well. Ack!

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0340
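As a rough aid for the reproduction steps above, here is a minimal sketch (not part of the original report) that drives concurrent Swift POST metadata updates at the same container through several RGWs and reports any 5xx responses. The endpoint URLs, auth token, and container name are placeholder assumptions; checking the RGW logs for -ECANCELED still has to be done separately.

```python
import concurrent.futures
import requests

# All of the following values are hypothetical placeholders.
RGW_ENDPOINTS = [
    "http://rgw1.example.com:8080/swift/v1",
    "http://rgw2.example.com:8080/swift/v1",
    "http://rgw3.example.com:8080/swift/v1",
]
TOKEN = "AUTH_tk-placeholder"   # a valid Swift auth token goes here
CONTAINER = "testcontainer"     # an existing container (bucket)

def post_metadata(i: int) -> int:
    """Change container metadata via one of the gateways and return the HTTP status."""
    endpoint = RGW_ENDPOINTS[i % len(RGW_ENDPOINTS)]
    resp = requests.post(
        f"{endpoint}/{CONTAINER}",
        headers={
            "X-Auth-Token": TOKEN,
            # A different value each time, so the same bucket's metadata
            # keeps being rewritten through different RGWs.
            "X-Container-Meta-Test": f"value-{i}",
        },
        timeout=30,
    )
    return resp.status_code

if __name__ == "__main__":
    # Hammer the same container through all gateways concurrently and
    # report any server errors; per the bug, the symptom is a 500 that
    # persists until an RGW is restarted.
    with concurrent.futures.ThreadPoolExecutor(max_workers=30) as pool:
        for status in pool.map(post_metadata, range(3000)):
            if status >= 500:
                print("server error:", status)
```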
Created attachment 1326062: PUT success
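To illustrate the retry-count remark in the verification notes above, the following is a purely schematic sketch of a bounded read-refresh-retry loop; it is not RGW source, and MAX_RETRIES, Canceled, read_current, and try_write are hypothetical names. The point is that a write that loses a race is retried against a refreshed view, so only exhausting the retry budget surfaces an error, and even that error does not persist across later requests.

```python
# Schematic only; not RGW source. Names below are hypothetical.
MAX_RETRIES = 10  # assumed bound on how many lost races are tolerated

class Canceled(Exception):
    """Stands in for the -ECANCELED a write against a stale view gets back."""

def apply_update(read_current, try_write):
    """Retry a metadata write against a freshly refreshed view, a bounded number of times."""
    for _ in range(MAX_RETRIES):
        current = read_current()       # refresh the (possibly stale) cached view
        try:
            return try_write(current)  # write conditioned on that view
        except Canceled:
            continue                   # lost the race; refresh and try again
    # Only after exhausting the budget does the caller see an error, and a
    # later call can still succeed, so the failure is not durable.
    raise RuntimeError("update raced too many times")
```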