Bug 1829646

Summary: [RADOS] osdmaps not being cleaned up automatically on healthy cluster
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Prashant Dhange <pdhange>
Component: RADOS
Assignee: Neha Ojha <nojha>
Status: CLOSED ERRATA
QA Contact: Manohar Murthy <mmurthy>
Severity: high
Docs Contact:
Priority: high
Version: 4.0
CC: akupczyk, bhubbard, ceph-eng-bugs, dzafman, hyelloji, jbrier, jdurgin, kchai, mmuench, mmurthy, mzink, nojha, pdhiran, rzarzyns, sseshasa, tserlin, vumrao
Target Milestone: z1
Flags: pdhange: automate_bug?, pdhange: needinfo?
Target Release: 4.1
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-14.2.8-68.el8cp, ceph-14.2.8-68.el7cp
Doc Type: Bug Fix
Doc Text:
.Disk space usage does not increase when OSDs are down for a long time
Previously, when an OSD was down for a long time, a large number of osdmaps were stored and never trimmed, which led to excessive disk usage. With {storage-product} 4.1z1, osdmaps are trimmed regardless of whether any OSDs are down, so disk space is no longer overused. (A verification sketch follows the metadata fields below.)
Story Points: ---
Clone Of: 1798833
Environment:
Last Closed: 2020-07-20 14:21:03 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
Bug Depends On: 1798833    
Bug Blocks: 1816167    
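
As a rough way to check whether osdmap trimming is keeping up on a running cluster, the span of osdmap epochs the monitors still retain can be watched over time. The following is a minimal sketch, not taken from this bug report: it assumes `ceph report` is run with admin credentials and that its JSON output exposes the osdmap_first_committed and osdmap_last_committed fields.

#!/usr/bin/env python3
# Sketch (assumption, not from this report): parse `ceph report` JSON and
# show how many osdmap epochs the monitors are still holding on to.
import json
import subprocess

def retained_osdmap_epochs():
    # `ceph report` prints a large JSON document describing cluster state;
    # skip any leading non-JSON output before parsing, just in case.
    out = subprocess.check_output(["ceph", "report"])
    report = json.loads(out[out.index(b"{"):])
    first = report["osdmap_first_committed"]   # oldest epoch still stored
    last = report["osdmap_last_committed"]     # newest committed epoch
    return first, last, last - first

if __name__ == "__main__":
    first, last, retained = retained_osdmap_epochs()
    print("osdmap_first_committed:", first)
    print("osdmap_last_committed: ", last)
    print("epochs retained:       ", retained)
    # On a healthy cluster this span should stay bounded; a steadily
    # growing gap is the "osdmaps not being cleaned up" symptom this
    # bug describes.

With the fixed builds listed above, trimming is expected to proceed even while some OSDs are down, so the reported span should stay roughly constant rather than growing until the down OSDs return.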

Comment 7 errata-xmlrpc 2020-07-20 14:21:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3003