Bug 2091773 - (RHCS 5.3) [GSS] "deep-scrub starts" message missing in RHCS 5.1
Summary: (RHCS 5.3) [GSS] "deep-scrub starts" message missing in RHCS 5.1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 5.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3
Assignee: Prashant Dhange
QA Contact: Pawan
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2094955 2126049
 
Reported: 2022-05-31 03:29 UTC by Karun Josy
Modified: 2023-01-11 17:41 UTC
CC List: 18 users

Fixed In Version: ceph-16.2.10-93.el8cp
Doc Type: Bug Fix
Doc Text:
.End users can now see the scrub or deep-scrub `starts` message in the Ceph cluster log
Previously, because the scrub or deep-scrub `starts` message was missing from the Ceph cluster log, end users could not tell from the cluster log whether scrubbing had started for a PG. With this fix, the scrub or deep-scrub `starts` message is reintroduced, and the Ceph cluster log now shows the message whenever a PG begins the scrubbing or deep-scrubbing process.
Clone Of:
: 2094955 (view as bug list)
Environment:
Last Closed: 2023-01-11 17:39:46 UTC
Embargoed:
akraj: needinfo-


Attachments: none


Links
- Ceph Project Bug Tracker 55798 (last updated 2022-05-31 05:20:07 UTC)
- GitHub ceph/ceph pull 46438: osd/scrub: Reintroduce scrub starts message (open; last updated 2022-05-31 05:20:07 UTC)
- GitHub ceph/ceph pull 48070: pacific: osd/scrub: Reintroduce scrub starts message (open; last updated 2022-09-13 15:34:06 UTC)
- Red Hat Issue Tracker RHCEPH-4421 (last updated 2022-05-31 03:30:57 UTC)
- Red Hat Product Errata RHSA-2023:0076 (last updated 2023-01-11 17:41:11 UTC)

Description Karun Josy 2022-05-31 03:29:51 UTC
* Description of problem:

In RHCS 4 and earlier versions, when a PG is deep-scrubbed, both a "starts" and an "ok" message are logged:
~~~
[root@ceph4e ceph]# journalctl | grep -i deep-scrub | sort -k14,15 | head
May 26 10:45:33 ceph4e.test conmon[2258]: 2022-05-26 10:45:33.215 7f498d75f700  0 log_channel(cluster) log [DBG] : 10.0 deep-scrub starts
May 26 10:45:33 ceph4e.test conmon[2258]: 2022-05-26 10:45:33.760 7f498d75f700  0 log_channel(cluster) log [DBG] : 10.0 deep-scrub ok
~~~

But in RHCS 5, only the "ok" messages are logged:
~~~
May 30 21:35:53 ceph5-osds1 ceph-02a1fef2-df33-11ec-a6ef-001a4a0005b0-osd-0[109864]: debug 2022-05-30T11:35:53.616+0000 7f101011b700  0 log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
May 31 05:16:35 ceph5-osds1 ceph-02a1fef2-df33-11ec-a6ef-001a4a0005b0-osd-0[109864]: debug 2022-05-30T19:16:35.067+0000 7f101111d700  0 log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
~~~

The "start" logline is useful in many situations in a troubleshooting perspective as it helps us to get the actual time taken for a deep-scrub to complete.

We would need this log to be reinstated or provide another option to check the time taken for a deep-scrub process to complete. 
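For reference, here is a minimal, illustrative sketch (not part of the product or of the fix) of how the paired "starts"/"ok" lines can be used to compute the duration of each deep-scrub. It assumes the RHCS 4-style log lines shown above are fed in on stdin; the script name is hypothetical, and the regex would need adjusting for the ISO-8601 timestamps seen in the RHCS 5 lines.

~~~
#!/usr/bin/env python3
# Illustrative sketch only: pair "deep-scrub starts" / "deep-scrub ok" cluster log
# lines per PG (read from stdin) and print the elapsed time for each deep-scrub.
import re
import sys
from datetime import datetime

# Matches e.g. "... 2022-05-26 10:45:33.215 ... log [DBG] : 10.0 deep-scrub starts"
PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+).*"
    r": (?P<pg>\S+) deep-scrub (?P<event>starts|ok)"
)

starts = {}
for line in sys.stdin:
    m = PATTERN.search(line)
    if not m:
        continue
    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
    pg = m.group("pg")
    if m.group("event") == "starts":
        starts[pg] = ts
    elif pg in starts:
        took = (ts - starts.pop(pg)).total_seconds()
        print(f"{pg}: deep-scrub took {took:.3f}s")
~~~

It could be run as, for example, `journalctl | grep deep-scrub | python3 deep_scrub_times.py` (hypothetical filename).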


* Version-Release number of selected component (if applicable):
RHCS 5.1

Comment 7 Scott Ostapovicz 2022-12-14 04:05:48 UTC
Moved to 5.3.  Please clone to the 6.0 release as required.

Comment 36 errata-xmlrpc 2023-01-11 17:39:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076

