Bug 1481830 - [RFE] [RHCS 2.y] osd: osd_scrub_during_recovery only considers primary, not replicas
Summary: [RFE] [RHCS 2.y] osd: osd_scrub_during_recovery only considers primary, not replicas
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 2.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 2.4
Assignee: David Zafman
QA Contact: ceph-qe-bugs
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Duplicates: 1636475
Depends On:
Blocks: 1473436 1479701
 
Reported: 2017-08-15 20:16 UTC by Vikhyat Umrao
Modified: 2021-12-10 15:22 UTC
CC List: 11 users

Fixed In Version: RHEL: ceph-10.2.7-48.el7cp Ubuntu: ceph_10.2.7-48redhat1
Doc Type: Enhancement
Doc Text:
.Scrubbing is blocked for any PG if the primary or any replica OSD is recovering
The `osd_scrub_during_recovery` parameter now defaults to `false`, so that when an OSD is recovering, scrubbing is not initiated on it. Previously, `osd_scrub_during_recovery` was set to `true` by default, allowing scrubbing and recovery to run simultaneously. In addition, in previous releases, if the user set `osd_scrub_during_recovery` to `false`, only the primary OSD was checked for recovery activity.
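As an illustrative sketch only (not part of the official doc text), the option can also be pinned explicitly on an RHCS 2.x (Jewel-based) cluster so it survives upgrades and restarts; the file path and section shown are the standard defaults:

    # /etc/ceph/ceph.conf -- persist the setting across OSD restarts
    [osd]
    osd_scrub_during_recovery = false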
Clone Of:
Environment:
Last Closed: 2017-10-17 18:12:51 UTC
Embargoed:




Links
System                      ID               Private  Priority  Status        Summary                                                   Last Updated
Ceph Project Bug Tracker    18206            0        None      None          None                                                      2017-08-15 20:17:05 UTC
Ceph Project Bug Tracker    21117            0        None      None          None                                                      2017-08-24 17:51:58 UTC
Github ceph/ceph            pull 17815       0        None      None          None                                                      2017-10-09 20:53:36 UTC
Red Hat Issue Tracker       RHCEPH-2598      0        None      None          None                                                      2021-12-10 15:22:44 UTC
Red Hat Product Errata      RHBA-2017:2903   0        normal    SHIPPED_LIVE  Red Hat Ceph Storage 2.4 enhancement and bug fix update  2017-10-17 22:12:30 UTC

Description Vikhyat Umrao 2017-08-15 20:16:34 UTC
Description of problem:
[RFE] [RHCS 2.y] osd: osd_scrub_during_recovery only considers primary, not replicas

http://tracker.ceph.com/issues/18206



Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 2.3

Comment 2 Vikhyat Umrao 2017-08-24 17:51:59 UTC
Jewel backport tracker:
http://tracker.ceph.com/issues/21117

Comment 4 Vikhyat Umrao 2017-09-25 17:03:12 UTC
jewel backport: https://github.com/ceph/ceph/pull/17815

Comment 5 Christina Meno 2017-10-02 20:52:16 UTC
We're going to need this applied downstream Vikhyat. Would you please let me know when that will be ready?

Comment 6 Vikhyat Umrao 2017-10-02 21:25:48 UTC
(In reply to Gregory Meno from comment #5)
> We're going to need this applied downstream Vikhyat. Would you please let me
> know when that will be ready?

Thanks Gregory. I just checked the upstream PR - https://github.com/ceph/ceph/pull/17815. It is in Kefu's testing branch, and as soon as upstream testing completes, David should be able to take it downstream.

I am changing the needinfo to David.

Comment 7 Ian Colle 2017-10-04 15:48:56 UTC
Kefu,

When do you think https://github.com/ceph/ceph/labels/wip-kefu-testing will be merged?

Comment 13 David Zafman 2017-10-11 20:40:04 UTC
Ken: To test this change, you need to include https://bugzilla.redhat.com/show_bug.cgi?id=1482749 or set osd_scrub_during_recovery to false.
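A minimal sketch of setting and checking the option on a running Jewel-based cluster; osd.0 below is only an example daemon id:

    # apply the setting at runtime on all OSDs (runtime-only; a restart reverts to ceph.conf)
    ceph tell osd.* injectargs '--osd_scrub_during_recovery=false'

    # confirm the active value through an OSD's admin socket
    ceph daemon osd.0 config get osd_scrub_during_recovery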

Comment 21 errata-xmlrpc 2017-10-17 18:12:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2903

Comment 24 Vikhyat Umrao 2018-10-05 20:11:50 UTC
*** Bug 1636475 has been marked as a duplicate of this bug. ***

