Bug 1859181 - mds: send scrub status to ceph-mgr only when scrub is running (or paused, etc..)
Summary: mds: send scrub status to ceph-mgr only when scrub is running (or paused, etc..)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 4.1
Hardware: All
OS: All
Priority: low
Severity: low
Target Milestone: ---
: 4.2z2
Assignee: Venky Shankar
QA Contact: Yogesh Mane
URL:
Whiteboard:
Depends On: 1859179
Blocks:
 
Reported: 2020-07-21 11:57 UTC by Venky Shankar
Modified: 2024-10-01 16:43 UTC (History)
7 users

Fixed In Version: ceph-14.2.11-181.el8cp, ceph-14.2.11-181.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1859179
Environment:
Last Closed: 2021-06-15 17:13:06 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 46480 0 None None None 2020-09-04 21:26:57 UTC
Red Hat Issue Tracker RHCEPH-5520 0 None None None 2022-10-28 17:05:25 UTC
Red Hat Product Errata RHSA-2021:2445 0 None None None 2021-06-15 17:13:24 UTC

Description Venky Shankar 2020-07-21 11:57:52 UTC
+++ This bug was initially created as a clone of Bug #1859179 +++

Description of problem:
Currently, the task status field (in ceph status) always displays the (mds) scrub status. This is unnecessary and a bit annoying when no scrub is running. Scrub status should show up only when an active scrub is running, a scrub operation is paused, or an abort operation is in progress.
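The intended behavior can be sketched as follows. This is a hypothetical Python illustration of the fix's logic (the actual change lives in the C++ MDS code); the `ScrubState` enum and `task_status` function are invented names for this sketch, not Ceph APIs:

```python
# Sketch: include a scrub entry in the task-status payload sent to
# ceph-mgr only when a scrub is active, paused, or aborting -- never
# when idle, so "ceph status" stays clean between scrubs.

from enum import Enum

class ScrubState(Enum):
    IDLE = "idle"
    ACTIVE = "active"
    PAUSED = "paused"
    ABORTING = "aborting"

def task_status(scrub_state: ScrubState) -> dict:
    """Build the task-status map reported to ceph-mgr.

    Before the fix, the scrub entry was always present (showing
    "idle"); after the fix, it appears only while scrub work is
    in flight.
    """
    status = {}
    if scrub_state is not ScrubState.IDLE:
        status["scrub status"] = scrub_state.value
    return status
```

With this logic, an idle filesystem reports an empty map and the task status line disappears from ceph status output.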

Version-Release number of selected component (if applicable):


How reproducible:
Always


Steps to Reproduce:
1. Create a Ceph Filesystem
2. Check the ceph status output -- scrub status is displayed as "idle"

Also, after a scrub finishes, the "idle" status is always displayed.

Actual results:
Scrub status is always displayed in ceph status.

Expected results:

Scrub status should only be displayed when a filesystem scrub is running (or paused, etc.).

Additional info:

Comment 1 RHEL Program Management 2020-07-21 11:58:01 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 5 Venky Shankar 2020-09-28 04:18:51 UTC
Hey Prerna,

I'm not sure what info you require from me on this BZ.

Also note that BZ https://bugzilla.redhat.com/show_bug.cgi?id=1852806 is a related tracker.

Comment 6 Venky Shankar 2020-09-30 04:56:55 UTC
Clearing needinfo -- please re-ask when required.

Comment 10 Venky Shankar 2021-05-21 06:01:19 UTC
Patrick, https://gitlab.cee.redhat.com/ceph/ceph/-/merge_requests/51

Comment 16 errata-xmlrpc 2021-06-15 17:13:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2445

