Bug 1200967 - [RFE] Provide method to automatically suspend scrubs during backfill and recovery
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RADOS
Version: 1.2.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: 2.3
Assigned To: David Zafman
QA Contact: Parikshith
Docs Contact: Erin Donnelly
Keywords: FutureFeature
Depends On:
Blocks: 1258382 1437916
Reported: 2015-03-11 13:53 EDT by Tupper Cole
Modified: 2017-07-30 11:12 EDT
CC: 14 users

See Also:
Fixed In Version: RHEL: ceph-10.2.7-2.el7cp Ubuntu: ceph_10.2.7-3redhat1xenial
Doc Type: Enhancement
Doc Text:
.Scrub processes can now be disabled during recovery
A new option, `osd_scrub_during_recovery`, has been added with this release. Setting this option to `false` in the Ceph configuration file prevents new scrub processes from starting during recovery. As a result, recovery speed is improved.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-06-19 09:24:55 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Ceph Project Bug Tracker 17866 None None None 2017-04-11 21:56 EDT
Red Hat Product Errata RHBA-2017:1497 normal SHIPPED_LIVE Red Hat Ceph Storage 2.3 bug fix and enhancement update 2017-06-19 13:24:11 EDT

Description Tupper Cole 2015-03-11 13:53:29 EDT
Description of problem: Manually suspending scrubs and deep scrubs during backfill, recovery, and rebalancing seems to enhance speed and reduce the number of slow requests logged. A method to make this the default behavior is requested for customers that suffer poor performance during these operations.
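For reference, the option that eventually addressed this request (per the Doc Text field above) is `osd_scrub_during_recovery`. A minimal sketch of how it would be set in `ceph.conf`; placing it in the `[osd]` section follows the usual convention for OSD options, but the exact placement is an assumption, not taken from this bug:

```ini
# ceph.conf (illustrative sketch; option name taken from this bug's Doc Text)
[osd]
# Do not start new scrubs while recovery is in progress
osd_scrub_during_recovery = false
```

At runtime the same option can typically be changed without a restart via the standard `injectargs` mechanism, e.g. `ceph tell osd.* injectargs '--osd_scrub_during_recovery=false'`; exact CLI behavior depends on the installed Ceph release.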


Version-Release number of selected component (if applicable): Firefly


How reproducible: Consistent behavior


Steps to Reproduce:
1. Add or remove OSDs during scrub operations.

Actual results:


Expected results:


Additional info:
Comment 1 Federico Lucifredi 2015-03-25 21:13:35 EDT
This is a committed Tufnell feature.
Comment 2 Anthony D'Atri 2015-06-02 13:57:51 EDT
Thanks. What's the timeline for Tufnell? T seems a long way down the alphabet; does the change from the cephalopod naming scheme to Spinal Tap skip a bunch of letters?
Comment 3 Neil Levine 2015-06-02 14:25:48 EDT
The Cephalopod names refer to the upstream community releases on which the downstream is based.

Tufnell is the codename for Red Hat Ceph Storage v2.0, i.e. the downstream product. This will likely be based on Ceph v10 (Jewel), due out later this year; Tufnell is likely to arrive shortly after, in Q1 2016.
Comment 4 Anthony D'Atri 2015-06-02 14:40:54 EDT
Ah, gotcha, thanks.  Had heard something about community and RCS forking but had not known of the naming roadmap.

-- aad
Comment 8 Erin Donnelly 2017-05-22 13:02:31 EDT
Hi David,

I’m proposing to add this BZ to the 2.3 release notes. If you agree, could you set the “Doc Type” and “Doc Text” fields?

Thanks,
Erin
Comment 11 Parikshith 2017-05-25 00:04:14 EDT
Hello,

When a recovery/backfill operation starts, will scrubbing be suspended for scheduled scrubs, for manual scrubs (or will a manual scrub override it?), or for both?
Comment 16 John Poelstra 2017-05-31 11:11:17 EDT
Discussed at the program meeting; nobody is clear on what should happen to this bug for 2.3. Neil to discuss with engineering and figure out next steps.
Comment 19 Parikshith 2017-06-01 07:37:40 EDT
As per the clarification given, I ran the following steps:

1. Started a long recovery and ran scrub/deep scrub on several OSDs.
2. Monitored the cluster status and `ceph pg dump`; found no PGs that were both recovering and scrubbing.
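The check in step 2 can be sketched as a simple text filter over the PG state column. This is a minimal illustration only: the `pg_states` sample lines below are made up to stand in for real `ceph pg dump pgs_brief` output, which on a live cluster would come from the actual command rather than a hard-coded string.

```shell
# Made-up sample standing in for `ceph pg dump pgs_brief` output (pgid, state)
pg_states='1.0 active+recovering
1.1 active+clean+scrubbing
1.2 active+clean'

# Print any PG whose state shows recovery and scrubbing at the same time;
# exit status is non-zero if such a PG is found, so the message below only
# appears when the two activities never overlap.
echo "$pg_states" | awk '/recover/ && /scrub/ {print; found=1} END {exit found}' \
  && echo "no PG is both recovering and scrubbing"
```

With `osd_scrub_during_recovery = false` in effect, the expectation verified in this comment is that the filter matches nothing.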
Comment 20 David Zafman 2017-06-12 21:22:30 EDT
This has now been verified, so I believe any issues from me have been resolved.
Comment 22 errata-xmlrpc 2017-06-19 09:24:55 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1497
