Bug 1611056 - MDS cluster degraded flag not set before handling MDS failures
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: CephFS
Version: 3.0
Hardware: All
OS: All
Target Milestone: z5
Target Release: 3.0
Assignee: Yan, Zheng
QA Contact: ceph-qe-bugs
Depends On: 1593100
Reported: 2018-08-02 02:41 UTC by Patrick Donnelly
Modified: 2018-08-13 20:58 UTC
CC List: 8 users

Fixed In Version: RHEL: ceph-12.2.4-42.el7cp Ubuntu: ceph_12.2.4-46redhat1xenial
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-08-09 18:27:51 UTC
Target Upstream Version:
anharris: needinfo+

Attachments

Links:
- GitHub: ceph/ceph pull 23381 (last updated 2018-08-02 11:58:36 UTC)
- Red Hat Product Errata: RHBA-2018:2375 (last updated 2018-08-09 18:28:24 UTC)

Description Patrick Donnelly 2018-08-02 02:41:32 UTC
Description of problem:

The MDS does not mark the cluster degraded before asking the Migrator to handle MDS failures. This causes the Migrator to abort with an assertion failure.

Version-Release number of selected component (if applicable):


Steps to Reproduce:

Found in upstream testing: http://pulpito.ceph.com/pdonnell-2018-08-01_19:07:35-multimds-wip-pdonnell-testing-20180801.165617-testing-basic-smithi/2848127/

Comment 5 Yan, Zheng 2018-08-02 07:19:00 UTC
Steps to reproduce:

1. Two active MDS daemons, no standby MDS; one client mounted.
2. Set rank 1's config option mds_inject_migrator_message_loss to 73.
3. cd /mnt/cephfs/; mkdir testdir; sync; setfattr -n ceph.dir.pin -v 1 testdir;
4. Wait 10 seconds, then restart mds.1 (kill -9 <mds pid>; ceph-mds -i xx).

The recovering MDS should crash.

Comment 15 errata-xmlrpc 2018-08-09 18:27:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

