Bug 1810121 - rgw: cls/queue: fix data corruption in urgent data
Summary: rgw: cls/queue: fix data corruption in urgent data
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: high
Target Milestone: rc
Target Release: 4.1
Assignee: Yuval Lifshitz
QA Contact: Tejas
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-04 15:14 UTC by Matt Benjamin (redhat)
Modified: 2020-05-19 17:33 UTC (History)
9 users

Fixed In Version: ceph-14.2.8-3.el8, ceph-14.2.8-3.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-19 17:32:46 UTC
Embargoed:
hyelloji: needinfo-




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2020:2231 0 None None None 2020-05-19 17:33:06 UTC

Description Matt Benjamin (redhat) 2020-03-04 15:14:04 UTC
Fixes an internal invariant in the new queuing mechanism introduced for
GC omap offload.
 
  When the queue size exceeded 1K, urgent data was corrupted;
  this happened even when the urgent data size was set correctly.
  
For the purposes of this bz, it should be sufficient to verify that all
GC cases work correctly.

Comment 1 RHEL Program Management 2020-03-04 15:14:12 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Matt Benjamin (redhat) 2020-03-04 15:19:14 UTC
Fix tracks https://github.com/ceph/ceph/pull/33686

Comment 11 errata-xmlrpc 2020-05-19 17:32:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2231

