Bug 1859257

Summary: radosgw-admin user stats --reset-stats causing OSD flapping when issued against users with thousands of buckets
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Mike Hackett <mhackett>
Component: RGW
Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED ERRATA
QA Contact: Tejas <tchandra>
Severity: medium
Docs Contact: Karen Norteman <knortema>
Priority: medium
Version: 3.2CC: aagrawal159, agangopadhy2, agunn, assingh, cbodley, ceph-eng-bugs, ceph-qe-bugs, jdurgin, jmundackal, jzhu116, kbader, kdreyer, knortema, mbenjamin, mhackett, mleonard33, mlu136, mmuench, racpatel, rkallu, roemerso, sbaldwin, sweil, swu497, tchandra, tserlin, vereddy, vimishra, vumrao
Target Milestone: ---   
Target Release: 4.2z1   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-14.2.11-100.el7cp ceph-14.2.11-100.el8cp
Doc Type: Bug Fix
Doc Text:
.The `--reset-stats` option updates buckets in groups for users with large numbers of buckets

Previously, the `radosgw-admin user stats --reset-stats` command simultaneously updated the stats for all buckets owned by a user. For users with very large numbers of buckets, the time required to make the updates could exceed the length of the associated RADOS operation. This could cause Ceph to mark OSDs as down, and could cause the OSDs to flap. With this release, the `--reset-stats` option updates the stats in groups of 1000 buckets. This allows large numbers of buckets to be updated without causing OSD flapping.
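The batching behavior described above can be illustrated with a small sketch. This is not the actual RGW code (which is C++); the `reset_batch` callback and function names are hypothetical stand-ins for the RADOS call that updates a group of bucket stats. The point is that each operation touches at most 1000 buckets, so no single RADOS op has to cover a user's entire bucket list.

```python
BATCH_SIZE = 1000  # group size described in the fix


def chunked(items, size):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def reset_user_stats(buckets, reset_batch):
    """Reset stats for all of a user's buckets, one batch at a time.

    `reset_batch` is a hypothetical stand-in for the RADOS operation
    that updates a group of bucket stats; invoking it once per batch
    keeps each individual operation short.
    """
    batches = 0
    for group in chunked(buckets, BATCH_SIZE):
        reset_batch(group)
        batches += 1
    return batches
```

For example, a user with 2500 buckets would be processed in three operations (1000 + 1000 + 500) instead of one long-running operation over all 2500.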
Story Points: ---
Clone Of: 1737163
Environment:
Last Closed: 2021-04-28 20:12:31 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1737163    
Bug Blocks: 1733598, 1890121    

Comment 16 errata-xmlrpc 2021-04-28 20:12:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage security, bug fix, and enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1452

Comment 17 Red Hat Bugzilla 2023-09-15 00:34:24 UTC
The needinfo request(s) on this closed bug have been removed because they remained unresolved for 500 days.