Description of problem:

Setting osd_max_scrubs to a value greater than 1 (the default) can lead to the following crash:

2021-06-23T20:55:00.571+0000 7f3a45c0b700 -1 /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-5425-gda5d094f/rpm/el8/BUILD/ceph-17.0.0-5425-gda5d094f/src/osd/PG.cc: In function 'bool PG::sched_scrub()' thread 7f3a45c0b700 time 2021-06-23T20:55:00.569263+0000
/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-5425-gda5d094f/rpm/el8/BUILD/ceph-17.0.0-5425-gda5d094f/src/osd/PG.cc: 1333: FAILED ceph_assert(!is_scrubbing())

 ceph version 17.0.0-5425-gda5d094f (da5d094f2647a6a32316cdd11e40fce91c572df8) quincy (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55be96e7959c]
 2: ceph-osd(+0x5b47a4) [0x55be96e797a4]
 3: (PG::sched_scrub()+0x561) [0x55be970293a1]
 4: (OSD::sched_scrub()+0x8e6) [0x55be96f736f6]
 5: (OSD::tick_without_osd_lock()+0x678) [0x55be96f83248]
 6: (Context::complete(int)+0xd) [0x55be96fb865d]
 7: (SafeTimer::timer_thread()+0x1c0) [0x55be9761e7b0]
 8: (SafeTimerThread::entry()+0x11) [0x55be97621351]
 9: (Thread::_entry_func(void*)+0xd) [0x55be9761042d]
 10: /lib64/libpthread.so.0(+0x814a) [0x7f3a5304114a]
 11: clone()
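For reference, the configuration change that exposes the problem can be applied and reverted with the standard ceph config commands. This is an illustrative sketch, not taken from the original report; the value 3 is an arbitrary example of a setting greater than the default of 1:

```shell
# Raise osd_max_scrubs above its default of 1; on affected builds this
# could lead the scrub scheduler to hit ceph_assert(!is_scrubbing()).
ceph config set osd osd_max_scrubs 3

# Confirm the currently active value for the osd section.
ceph config get osd osd_max_scrubs

# Revert to the default to avoid the crash condition on unfixed versions.
ceph config set osd osd_max_scrubs 1
```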
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294