Bug 1674436 - GC erratic performance, very slow deletion performance
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 3.1
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: z1
Target Release: 3.2
Assignee: Mark Kogan
QA Contact: Tejas
URL:
Whiteboard:
Duplicates: 1672433 (view as bug list)
Depends On:
Blocks: 1629656 1680050
 
Reported: 2019-02-11 11:01 UTC by Ilan Green
Modified: 2022-03-13 17:28 UTC
CC List: 16 users

Fixed In Version: RHEL: ceph-12.2.8-85.el7cp Ubuntu: ceph_12.2.8-71redhat1
Doc Type: Bug Fix
Doc Text:
.Garbage collection no longer consumes bandwidth without making forward progress
Previously, some underlying bugs prevented garbage collection (GC) from making forward progress: the marker was not always advanced, GC could not process entries with zero-length chains, and the truncated flag was not always set correctly. As a result, GC consumed bandwidth without making any forward progress, did not free up disk space, slowed down other cluster work, and allowed GC-related OMAP entries to keep increasing. With this update, the underlying bugs have been fixed, and GC makes progress as expected, freeing up disk space and OMAP entries. (A diagnostic sketch follows the metadata fields below.)
Clone Of:
Clones: 1680050 (view as bug list)
Environment:
Last Closed: 2019-03-07 15:51:42 UTC
Embargoed:
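
The GC-related OMAP growth described in the doc text above can be watched from the command line. A minimal sketch, assuming the default zone's log pool name (default.rgw.log) and the default rgw_gc_max_objs of 32 GC shard objects (gc.0 through gc.31); both names are assumptions and may differ on a given deployment:

    # Count pending GC OMAP entries on each GC shard object.
    # Pool name and shard count are assumptions; adjust for your cluster.
    for i in $(seq 0 31); do
        echo -n "gc.$i: "
        rados -p default.rgw.log --namespace gc listomapkeys gc.$i | wc -l
    done

On a healthy cluster these counts shrink as GC runs; on a cluster hitting this bug they keep growing even while GC consumes bandwidth.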


Attachments


Links
System                             ID              Private  Priority  Status  Summary  Last Updated
Ceph Project Bug Tracker           38408           0        None      None    None     2019-02-21 17:17:39 UTC
Red Hat Issue Tracker              RHCEPH-3771     0        None      None    None     2022-03-13 17:28:39 UTC
Red Hat Knowledge Base (Solution)  3936141         0        None      None    None     2019-02-24 17:29:45 UTC
Red Hat Product Errata             RHBA-2019:0475  0        None      None    None     2019-03-07 15:51:50 UTC

Description Ilan Green 2019-02-11 11:01:56 UTC
Description of problem:
GC draining is very slow.
About 235 objects were deleted in 24 hours, and the deletions happen in bursts rather than at a steady rate.

Version-Release number of selected component (if applicable):
3.0z5

How reproducible:
This currently happens only on the customer's system; the GC backlog has grown to over 50 million entries by now.

Steps to Reproduce:

Actual results:
GC backlog is increasing

Expected results:
GC to drain objects at a much faster rate


Additional info:
ceph.conf file to be provided soon.
Output of radosgw-admin gc process --debug-rgw=20 --debug-ms=1 |& tee ./gc_proc.log to be provided soon as well.
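
In the meantime, a rough way to gauge the size of the backlog from the command line; a sketch only, since the JSON emitted by radosgw-admin gc list can differ slightly between versions:

    # Dump all pending GC entries, including ones not yet due for processing.
    # With a backlog above 50 million this takes a long time and produces a
    # very large file.
    radosgw-admin gc list --include-all > gc_list.json

    # Rough count of GC entries, assuming each entry carries a "tag" field
    # (field name is an assumption; check the actual output first).
    grep -c '"tag"' gc_list.json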

Comment 38 Vikhyat Umrao 2019-02-22 20:16:52 UTC
*** Bug 1672433 has been marked as a duplicate of this bug. ***

Comment 54 errata-xmlrpc 2019-03-07 15:51:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0475

