
Bug 1907706

Summary: [RFE] Multiple MDS Scrub
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Patrick Donnelly <pdonnell>
Component: CephFS
Assignee: Patrick Donnelly <pdonnell>
Status: CLOSED ERRATA
QA Contact: Hemanth Kumar <hyelloji>
Severity: high
Docs Contact: Ranjini M N <rmandyam>
Priority: high
Version: 5.0
CC: ceph-eng-bugs, hyelloji, kdreyer, rmandyam, sweil
Target Milestone: ---
Keywords: FutureFeature
Target Release: 5.0
Hardware: All
OS: All
Whiteboard:
Fixed In Version: ceph-16.0.0-8633.el8cp
Doc Type: Enhancement
Doc Text:
.Ceph File System (CephFS) scrub now works with multiple active MDS
Previously, users had to set the parameter `max_mds=1` and wait until only one active metadata server (MDS) was running before executing Ceph File System (CephFS) scrub operations. With this release, users can execute scrub on rank `0` with multiple active MDS, irrespective of the value of `max_mds`. See the link:{cephfs-guide}#configuring-multiple-active-metadata-server-daemons_fs[_Configuring multiple active Metadata Server daemons_] section in the _{storage-product} File System Guide_ for more information.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:27:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1794781    
Bug Blocks: 1959686    

Description Patrick Donnelly 2020-12-15 02:41:07 UTC
Description of problem:

In RHCS 4, we gated scrub with multiple active MDS due to numerous known bugs. Upstream now considers it stable. Documentation for scrub is here:

https://docs.ceph.com/en/latest/cephfs/scrub/
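As a rough sketch of the workflow this enables (command forms follow the upstream scrub documentation linked above; the file system name `cephfs`, the `max_mds` value, and the path `/` are placeholder assumptions), scrub can now be initiated on rank 0 while multiple MDS daemons remain active:

```shell
# Multiple active MDS daemons can stay up; there is no longer a need
# to reduce the file system to a single active MDS (max_mds=1) first.
ceph fs set cephfs max_mds 2

# Start a recursive scrub on rank 0 of the file system.
ceph tell mds.cephfs:0 scrub start / recursive

# Check progress of the running scrub.
ceph tell mds.cephfs:0 scrub status
```

Scrub commands are still directed at rank 0, which coordinates the operation across the active MDS ranks.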

Comment 8 errata-xmlrpc 2021-08-30 08:27:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294