* Description of problem:

In RHCS 4 and earlier versions, when a PG is deep-scrubbed, both a "starts" and an "ok" message are logged:

~~~
[root@ceph4e ceph]# journalctl | grep -i deep-scrub | sort -k14,15 | head
May 26 10:45:33 ceph4e.test conmon[2258]: 2022-05-26 10:45:33.215 7f498d75f700  0 log_channel(cluster) log [DBG] : 10.0 deep-scrub starts
May 26 10:45:33 ceph4e.test conmon[2258]: 2022-05-26 10:45:33.760 7f498d75f700  0 log_channel(cluster) log [DBG] : 10.0 deep-scrub ok
~~~

In RHCS 5, only the "ok" messages are logged:

~~~
May 30 21:35:53 ceph5-osds1 ceph-02a1fef2-df33-11ec-a6ef-001a4a0005b0-osd-0[109864]: debug 2022-05-30T11:35:53.616+0000 7f101011b700  0 log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
May 31 05:16:35 ceph5-osds1 ceph-02a1fef2-df33-11ec-a6ef-001a4a0005b0-osd-0[109864]: debug 2022-05-30T19:16:35.067+0000 7f101111d700  0 log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
~~~

The "starts" log line is useful in many situations from a troubleshooting perspective, as it lets us determine the actual time a deep-scrub took to complete. We need this log line reinstated, or another option for checking the time taken by a deep-scrub.

* Version-Release number of selected component (if applicable):
RHCS 5.1
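To illustrate why the "starts" line matters, here is a minimal sketch of how the two lines can be paired to compute a per-PG deep-scrub duration. It assumes the RHCS 4 cluster-log format shown above (timestamp, thread id, then `<pg> deep-scrub starts|ok`), embeds sample lines in place of a real `journalctl` capture, and relies on GNU `date` for timestamp parsing; the `journalctl_sample` helper is hypothetical, not part of Ceph.

~~~
#!/usr/bin/env bash
# Hypothetical sample standing in for: journalctl | grep deep-scrub
# (RHCS 4 log format, with the journald prefix already stripped).
journalctl_sample() {
cat <<'EOF'
2022-05-26 10:45:33.215 7f498d75f700  0 log_channel(cluster) log [DBG] : 10.0 deep-scrub starts
2022-05-26 10:45:33.760 7f498d75f700  0 log_channel(cluster) log [DBG] : 10.0 deep-scrub ok
EOF
}

# Pair each "starts" line with its "ok" line, keyed on the PG id
# ($(NF-2), e.g. "10.0"), and print the elapsed wall-clock time.
# Uses GNU date (+%s.%3N) to convert timestamps to epoch seconds.
out=$(journalctl_sample | awk '
  /deep-scrub starts/ { start[$(NF-2)] = $1 " " $2 }
  /deep-scrub ok/ && start[$(NF-2)] != "" {
      pg = $(NF-2)
      cmd = "date -d \"" start[pg] "\" +%s.%3N"; cmd | getline t0; close(cmd)
      cmd = "date -d \"" $1 " " $2 "\" +%s.%3N";  cmd | getline t1; close(cmd)
      printf "PG %s deep-scrub took %.3f s\n", pg, t1 - t0
  }')
echo "$out"
~~~

With only "ok" lines in the RHCS 5 logs, there is no second timestamp to subtract, which is exactly the gap this report describes.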
Moved to 5.3. Please clone to the 6.0 release as required.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and bug fix), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076