Description of problem:
The MDS does not properly mark the cluster as degraded before asking the Migrator to handle MDS failures. This causes the Migrator to abort with an assertion failure.
Version-Release number of selected component (if applicable):
Found in upstream testing: http://pulpito.ceph.com/pdonnell-2018-08-01_19:07:35-multimds-wip-pdonnell-testing-20180801.165617-testing-basic-smithi/2848127/
Steps to Reproduce:
1. two active MDS daemons, no standby MDS, one client mount
2. set rank 1's config option mds_inject_migrator_message_loss to 73
3. cd /mnt/cephfs/; mkdir testdir; sync; setfattr -n ceph.dir.pin -v 1 testdir;
4. wait 10 seconds, then restart mds.1 (kill -9 <mds pid>; ceph-mds -i xx)
Actual results: the recovering MDS crashes with an assertion failure.
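The steps above can be sketched as a script. This is a hedged sketch, not the exact reproducer: the mount point /mnt/cephfs, the daemon name mds.1, and the use of "ceph tell ... injectargs" to set the fault-injection option are assumptions; adjust them for your cluster. It defaults to a dry-run mode that only prints the commands.

```shell
#!/bin/sh
# Sketch of the reproduction steps above. With DRY_RUN=1 (the default)
# the commands are only printed; set DRY_RUN=0 to run them against a
# cluster with two active MDS ranks, no standbys, and one client mount.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Step 2: make rank 1 drop Migrator messages (injected fault).
# "ceph tell mds.1 injectargs" is an assumed way to set this option.
run ceph tell mds.1 injectargs '--mds_inject_migrator_message_loss 73'

# Step 3: create a directory and pin it to rank 1 to trigger an export.
run mkdir /mnt/cephfs/testdir
run sync
run setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/testdir

# Step 4: wait 10 seconds, then kill and restart mds.1.
# The pidof lookup assumes a single ceph-mds process on this host.
run sleep 10
run sh -c 'kill -9 $(pidof ceph-mds)'
run ceph-mds -i 1
```

In dry-run mode the script simply lists the commands, which is useful for reviewing them before pointing the script at a disposable test cluster.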
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.