Bug 1671585

Summary: [RFE] Implement lazy omap usage statistics per pg/osd
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: RADOS
Assignee: Brad Hubbard <bhubbard>
Status: CLOSED ERRATA
QA Contact: Manohar Murthy <mmurthy>
Severity: medium
Docs Contact: Bara Ancincova <bancinco>
Priority: medium
Version: 3.2
CC: agunn, bhubbard, ceph-eng-bugs, dzafman, jdurgin, kchai, mhackett, nojha, tchandra, tserlin
Target Milestone: z1
Keywords: FutureFeature
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-12.2.12-64.el7cp Ubuntu: ceph_12.2.12-57redhat1
Doc Type: Enhancement
Doc Text:
.New `omap` usage statistics per PG and OSD
This update adds better reporting of `omap` data usage at the per placement group (PG) and per OSD level. PG-level data is gathered opportunistically during a deep scrub. Additional fields have been added to the output of the `ceph osd df` command and various `ceph pg` commands to display the new values (see the sample output below the header fields).
Last Closed: 2019-10-22 13:29:00 UTC
Type: Bug
Bug Blocks: 1726135    
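
For illustration only, this is roughly what the new per-OSD reporting looks like: the OMAP and META columns in `ceph osd df` carry the new values. The figures below are made-up sample values, and the exact column layout varies between Ceph releases:

    $ ceph osd df
    ID CLASS WEIGHT  REWEIGHT SIZE    USE    DATA   OMAP    META    AVAIL   %USE VAR  PGS
     0 hdd   0.90970  1.00000 931 GiB 70 GiB 69 GiB 1.1 GiB 1.0 GiB 861 GiB 7.53 1.00 180
     1 hdd   0.90970  1.00000 931 GiB 68 GiB 67 GiB 0.9 GiB 1.0 GiB 863 GiB 7.32 0.97 176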

Description Vikhyat Umrao 2019-02-01 00:32:09 UTC
Description of problem:
[RFE] Luminous: adding an option in BlueStore asok for checking the omap size in the OSD
http://tracker.ceph.com/issues/38136


Version-Release number of selected component (if applicable):
RHCS 3.2

How reproducible:
Always
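
Because the PG-level statistics are gathered opportunistically during deep scrub, a quick way to exercise the feature is to force a deep scrub on a PG and then read back the per-PG and per-OSD values. A minimal sketch, assuming a test pool with PG 2.0 (a placeholder id); the starred column names shown by `ceph pg ls` flag values that are only as fresh as the last deep scrub, and column naming may differ slightly between releases:

    # force a deep scrub so the PG-level omap statistics are (re)populated
    $ ceph pg deep-scrub 2.0

    # per-PG omap usage appears in the OMAP_BYTES* / OMAP_KEYS* columns
    $ ceph pg ls

    # the per-OSD rollup shows up in the OMAP column
    $ ceph osd df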

Comment 7 Vikhyat Umrao 2019-07-11 23:02:36 UTC
I think we want to take the following:

http://tracker.ceph.com/issues/40638
http://tracker.ceph.com/issues/38551

Comment 8 Brad Hubbard 2019-07-11 23:05:53 UTC
(In reply to Vikhyat Umrao from comment #7)
> I think we want to take the following:
> 
> http://tracker.ceph.com/issues/40638
> http://tracker.ceph.com/issues/38551

That's right, mate. Note that 38551 is still in progress.

Comment 9 Vikhyat Umrao 2019-07-12 00:11:29 UTC
(In reply to Brad Hubbard from comment #8)
> (In reply to Vikhyat Umrao from comment #7)
> > I think we want to take the following:
> > 
> > http://tracker.ceph.com/issues/40638
> > http://tracker.ceph.com/issues/38551
> 
> That's right, mate. Note that 38551 is still in progress.

Thanks for the feedback, buddy.

Comment 10 Giridhar Ramaraju 2019-08-05 13:12:01 UTC
Updating the QA Contact to Hemant. Hemant will reroute these bugs to the appropriate QE Associate.

Regards,
Giri

Comment 21 errata-xmlrpc 2019-10-22 13:29:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3173