Bug 1671585 - [RFE] Implement lazy omap usage statistics per pg/osd
Summary: [RFE] Implement lazy omap usage statistics per pg/osd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z1
Target Release: 3.3
Assignee: Brad Hubbard
QA Contact: Manohar Murthy
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1726135
 
Reported: 2019-02-01 00:32 UTC by Vikhyat Umrao
Modified: 2019-10-22 13:29 UTC
CC List: 10 users

Fixed In Version: RHEL: ceph-12.2.12-64.el7cp Ubuntu: ceph_12.2.12-57redhat1
Doc Type: Enhancement
Doc Text:
.New `omap` usage statistics per PG and OSD
This update adds better reporting of `omap` data usage at the per-placement-group (PG) and per-OSD level. PG-level data is gathered opportunistically during deep scrub. Additional fields have been added to the output of `ceph osd df` and various `ceph pg` commands to display the new values (see the sketch after this header block for one way to read the new per-OSD fields).
Clone Of:
Environment:
Last Closed: 2019-10-22 13:29:00 UTC
Embargoed:
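
As a hedged illustration of the Doc Text above, here is a minimal sketch that reads the new per-OSD `omap` values from `ceph osd df`. The JSON field names (`kb_used_omap`, `kb_used_meta`) are assumptions based on the upstream enhancement, not confirmed in this bug; verify them against the output of your build before relying on this.

#!/usr/bin/env python3
# Sketch: report per-OSD omap/meta usage via `ceph osd df` JSON output.
# ASSUMPTION: the build exposes kb_used_omap/kb_used_meta per node, as the
# upstream enhancement does; verify field names against your Ceph release.
import json
import subprocess

def osd_df():
    # `ceph osd df --format json` returns a JSON document with a "nodes" list.
    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    return json.loads(out)

def main():
    for node in osd_df().get("nodes", []):
        omap_kb = node.get("kb_used_omap", 0)  # omap usage, KiB (assumed field)
        meta_kb = node.get("kb_used_meta", 0)  # metadata usage, KiB (assumed field)
        print(f"osd.{node['id']:<4} omap={omap_kb} KiB  meta={meta_kb} KiB")

if __name__ == "__main__":
    main()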




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 38136 0 None None None 2019-02-01 00:32:08 UTC
Ceph Project Bug Tracker 38550 0 None None None 2019-04-19 23:07:54 UTC
Red Hat Product Errata RHBA-2019:3173 0 None None None 2019-10-22 13:29:20 UTC

Description Vikhyat Umrao 2019-02-01 00:32:09 UTC
Description of problem:
[RFE] Luminous: add an option to the BlueStore admin socket (asok) for checking the omap size in the OSD; see the sketch at the end of this comment.
http://tracker.ceph.com/issues/38136


Version-Release number of selected component (if applicable):
RHCS 3.2

How reproducible:
Always
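
A minimal sketch of how the resulting per-PG statistics can be queried, assuming the `num_omap_bytes`/`num_omap_keys` fields that the upstream lazy omap stats change adds to `stat_sum`; since the data is gathered lazily during deep scrub, a zero may only mean the PG has not been deep-scrubbed yet. The exact JSON layout is an assumption and varies across releases.

#!/usr/bin/env python3
# Sketch: rank PGs by omap usage from `ceph pg dump` JSON output.
# ASSUMPTION: stat_sum carries num_omap_bytes/num_omap_keys (added by the
# lazy omap stats change). Values refresh only on deep scrub, so a zero can
# simply mean the PG has not been deep-scrubbed since the upgrade.
import json
import subprocess

def pg_stats():
    out = subprocess.check_output(
        ["ceph", "pg", "dump", "pgs", "--format", "json"])
    data = json.loads(out)
    # The JSON layout differs across releases; try both known shapes.
    return data.get("pg_stats") or data.get("pg_map", {}).get("pg_stats", [])

def main():
    top = sorted(pg_stats(),
                 key=lambda pg: pg["stat_sum"].get("num_omap_bytes", 0),
                 reverse=True)[:10]
    for pg in top:
        s = pg["stat_sum"]
        print(f"{pg['pgid']:<10} omap_bytes={s.get('num_omap_bytes', 0)} "
              f"omap_keys={s.get('num_omap_keys', 0)}")

if __name__ == "__main__":
    main()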

Comment 7 Vikhyat Umrao 2019-07-11 23:02:36 UTC
I think we want to take the following:

http://tracker.ceph.com/issues/40638
http://tracker.ceph.com/issues/38551

Comment 8 Brad Hubbard 2019-07-11 23:05:53 UTC
(In reply to Vikhyat Umrao from comment #7)
> I think we want to take the following:
> 
> http://tracker.ceph.com/issues/40638
> http://tracker.ceph.com/issues/38551

That's right, mate. Note that 38551 is still in progress.

Comment 9 Vikhyat Umrao 2019-07-12 00:11:29 UTC
(In reply to Brad Hubbard from comment #8)
> (In reply to Vikhyat Umrao from comment #7)
> > I think we want to take the following:
> > 
> > http://tracker.ceph.com/issues/40638
> > http://tracker.ceph.com/issues/38551
> 
> That's right mate. Note that 38551 is still currently in progress.

Thanks for the feedback, buddy.

Comment 10 Giridhar Ramaraju 2019-08-05 13:12:01 UTC
Updating the QA Contact to Hemant. Hemant will reroute these bugs to the appropriate QE associate.

Regards,
Giri

Comment 21 errata-xmlrpc 2019-10-22 13:29:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3173

