Bug 1570597 - [CephFS]: MDS assert, ceph-12.2.1/src/mds/MDCache.cc: 5080: FAILED assert(isolated_inodes.empty())
Summary: [CephFS]: MDS assert, ceph-12.2.1/src/mds/MDCache.cc: 5080: FAILED assert(isolated_inodes.empty())
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z4
Target Release: 3.0
Assignee: Yan, Zheng
QA Contact: Ramakrishnan Periyasamy
URL:
Whiteboard:
Duplicates: 1590241 (view as bug list)
Depends On:
Blocks: 1585023
 
Reported: 2018-04-23 10:08 UTC by Ramakrishnan Periyasamy
Modified: 2018-09-21 23:35 UTC (History)
CC List: 8 users

Fixed In Version: RHEL: ceph-12.2.4-22.el7cp Ubuntu: 12.2.4-27redhat1xenial
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1585023 (view as bug list)
Environment:
Last Closed: 2018-07-11 18:11:08 UTC
Embargoed:


Attachments (Terms of Use)
Failed MDS logs. (402.66 KB, text/plain)
2018-04-23 10:08 UTC, Ramakrishnan Periyasamy


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 24108 0 None None None 2018-05-22 15:23:09 UTC
Red Hat Product Errata RHSA-2018:2177 0 None None None 2018-07-11 18:11:55 UTC

Description Ramakrishnan Periyasamy 2018-04-23 10:08:02 UTC
Created attachment 1425629 [details]
Failed MDS logs.

Description of problem:
MDS asserted after service restart:

/builddir/build/BUILD/ceph-12.2.1/src/mds/MDCache.cc: 5080: FAILED assert(isolated_inodes.empty())

 ceph version 12.2.1-46.el7cp (b6f6f1b141c306a43f669b974971b9ec44914cb0) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x564975ec7b40]
 2: (MDCache::handle_cache_rejoin_ack(MMDSCacheRejoin*)+0x25a0) [0x564975cb4e60]
 3: (MDCache::handle_cache_rejoin(MMDSCacheRejoin*)+0x213) [0x564975cc12a3]
 4: (MDCache::dispatch(Message*)+0xa5) [0x564975cc6905]
 5: (MDSRank::handle_deferrable_message(Message*)+0x5c4) [0x564975baf734]
 6: (MDSRank::_dispatch(Message*, bool)+0x1e3) [0x564975bbcd43]
 7: (MDSRankDispatcher::ms_dispatch(Message*)+0x15) [0x564975bbdb85]
 8: (MDSDaemon::ms_dispatch(Message*)+0xf3) [0x564975ba7023]
 9: (DispatchQueue::entry()+0x792) [0x5649761ab952]
 10: (DispatchQueue::DispatchThread::entry()+0xd) [0x564975f4dfbd]
 11: (()+0x7dd5) [0x7f577c615dd5]
 12: (clone()+0x6d) [0x7f577b6f5b3d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.


Version-Release number of selected component (if applicable):
ceph version 12.2.1-46.el7cp

How reproducible:
1/1

Steps to Reproduce:
1. Configure an active-active MDS setup with a standby
2. Run client I/O in a fresh cluster
3. After 20 minutes of I/O, restart the active MDS
4. The MDS daemon comes back up after the assert

Actual results:
Observed MDS assert in the MDS logs.

Expected results:
The MDS should restart cleanly, without asserting.

Additional info:
Attached MDS logs

Comment 4 Yan, Zheng 2018-04-24 12:48:41 UTC
Where can I find the source code of 12.2.1-46.el7cp (b6f6f1b141c306a43f669b974971b9ec44914cb0)?

Comment 15 Yan, Zheng 2018-06-04 12:35:59 UTC
Already cherry-picked to 3.1; see https://bugzilla.redhat.com/show_bug.cgi?id=1585023

Comment 16 Ramakrishnan Periyasamy 2018-06-19 08:15:53 UTC
Moving this bug to the verified state. No MDS assert was observed during testing.

Verified in ceph version 12.2.4-27.el7cp

CI Automation regression runs passed without any issues.
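
To check whether a given node already carries the fix, the installed build can be compared against the "Fixed In Version" from this bug using version sort. A minimal sketch; in practice the installed version would be parsed from `ceph -v` output, which is hard-coded here for illustration.

```shell
#!/bin/sh
# Compare an installed ceph version against the RHEL fixed-in version
# from this bug (ceph-12.2.4-22.el7cp).
fixed='12.2.4-22.el7cp'
installed='12.2.4-27.el7cp'   # normally parsed from `ceph -v`

# sort -V orders version strings numerically per component; if the
# fixed version sorts first (or equal), the installed build is at
# least as new as the fix.
lowest=$(printf '%s\n%s\n' "$fixed" "$installed" | sort -V | head -n 1)
if [ "$lowest" = "$fixed" ]; then
    echo "fix present: $installed >= $fixed"
else
    echo "fix missing: $installed < $fixed"
fi
```

With the 12.2.4-27.el7cp build verified above, this reports the fix as present; the original 12.2.1-46.el7cp build would report it as missing.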

Comment 18 errata-xmlrpc 2018-07-11 18:11:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2177

Comment 19 Patrick Donnelly 2018-09-21 23:35:02 UTC
*** Bug 1590241 has been marked as a duplicate of this bug. ***

