Bug 1611056
| Summary: | MDS cluster degraded flag not set before handling MDS failures | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Patrick Donnelly <pdonnell> |
| Component: | CephFS | Assignee: | Yan, Zheng <zyan> |
| Status: | CLOSED ERRATA | QA Contact: | ceph-qe-bugs <ceph-qe-bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 3.0 | CC: | anharris, ceph-eng-bugs, ceph-qe-bugs, john.spray, kdreyer, rperiyas, tchandra, tserlin |
| Target Milestone: | z5 | Flags: | anharris: needinfo+ |
| Target Release: | 3.0 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-12.2.4-42.el7cp Ubuntu: ceph_12.2.4-46redhat1xenial | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-08-09 18:27:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1593100 | | |
| Bug Blocks: | | | |
Description
Patrick Donnelly
2018-08-02 02:41:32 UTC
Steps to reproduce:

1. Run two active MDS daemons with no standby MDS, and mount one client.
2. Set rank 1's config mds_inject_migrator_message_loss to 73.
3. cd /mnt/cephfs/; mkdir testdir; sync; setfattr -n ceph.dir.pin -v 1 testdir;
4. Wait 10 seconds and restart mds.1 (kill -9 <mds pid>; ceph-mds -i xx).

The recovering MDS should crash.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2375
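The steps above can be rolled into a rough reproduction script. This is only a sketch based on the reporter's steps: the mount point (/mnt/cephfs), the rank 1 daemon id (xx), and the use of `ceph tell ... injectargs` to set mds_inject_migrator_message_loss at runtime are assumptions and should be adjusted to the actual cluster.

```sh
#!/bin/bash
# Reproduction sketch (assumes: CephFS mounted at /mnt/cephfs, two active
# MDS ranks with no standby, and the MDS holding rank 1 runs as daemon id
# "xx" -- substitute the real daemon id for your cluster).

MNT=/mnt/cephfs
MDS_ID=xx   # hypothetical daemon id of the MDS holding rank 1

# Step 2: inject message loss on rank 1. injectargs is one way to change
# the option at runtime; setting it in ceph.conf and restarting also works.
ceph tell mds.1 injectargs '--mds_inject_migrator_message_loss 73'

# Step 3: create a directory and pin it to rank 1, triggering a subtree
# migration from rank 0 to rank 1.
cd "$MNT"
mkdir testdir
sync
setfattr -n ceph.dir.pin -v 1 testdir

# Step 4: wait, then kill and restart the rank 1 MDS. With the bug present,
# the recovering MDS is expected to crash.
sleep 10
pkill -9 -f "ceph-mds -i $MDS_ID"
ceph-mds -i "$MDS_ID"
```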