Bug 2272979 - mds: ensure snapclient is synced before corruption check
Summary: mds: ensure snapclient is synced before corruption check
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.1
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 7.1
Assignee: Patrick Donnelly
QA Contact: Hemanth Kumar
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2267614 2272981 2272983 2298578 2298579
 
Reported: 2024-04-03 14:23 UTC by Patrick Donnelly
Modified: 2024-07-18 07:59 UTC
CC List: 7 users

Fixed In Version: ceph-18.2.1-122.el9cp
Doc Type: Bug Fix
Doc Text:
.MDS no longer fails with an assert during recovery. Previously, the MDS would sometimes incorrectly report metadata damage while recovering a failed rank and, as a result, abort on an assert. With this fix, the startup procedure is corrected and the MDS no longer aborts during recovery.
Clone Of:
Environment:
Last Closed: 2024-06-13 14:31:09 UTC
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 64058 0 None None None 2024-06-04 15:49:41 UTC
Ceph Project Bug Tracker 64922 0 None None None 2024-04-03 14:23:56 UTC
Red Hat Issue Tracker RHCEPH-8735 0 None None None 2024-04-03 14:26:45 UTC
Red Hat Knowledge Base (Solution) 7073314 0 None None None 2024-06-04 15:49:41 UTC
Red Hat Product Errata RHSA-2024:3925 0 None None None 2024-06-13 14:31:21 UTC

Description Patrick Donnelly 2024-04-03 14:23:57 UTC
Description of problem:

See: https://tracker.ceph.com/issues/64058
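
For context, the summary and the Doc Text describe the fix only at a high level: the MDS startup procedure is corrected so that the snap client has synced before the dentry corruption check can declare damage. The sketch below illustrates that gating idea in a self-contained way; it is not the upstream Ceph patch, and the SnapClientStub/DentryStub types, method names, and the first_snap > last_snap test are simplified assumptions made purely for illustration.

~~~
// Illustrative stand-in only -- not the upstream Ceph patch. The types and
// names below are simplified assumptions used to show the gating idea from
// the bug summary: do not treat a dentry as "newly corrupt" until the snap
// client has synced, because a recovering rank may still hold stale state.

#include <cassert>
#include <cstdint>
#include <iostream>
#include <string>

// Stand-in for the MDS snap table client: tracks whether the cached
// snapshot table has been synced after startup/recovery.
struct SnapClientStub {
  bool synced = false;
  bool is_synced() const { return synced; }
};

// Stand-in for a dentry with just enough state for the example.
struct DentryStub {
  std::string name;
  uint64_t first_snap = 0;   // lowest snapid this dentry covers
  uint64_t last_snap = 0;    // highest snapid this dentry covers
};

// Sketch of a corruption check gated on snapclient sync.
// Returns true only if the dentry should be treated as corrupt.
bool check_corruption(const DentryStub& dn, const SnapClientStub& snapclient) {
  if (!snapclient.is_synced()) {
    // Snapshot metadata is not authoritative yet (e.g. the rank is still
    // recovering), so a "corruption" verdict based on snap bounds would be
    // unreliable. Defer the check instead of aborting the MDS.
    std::cout << "snapclient not synced; deferring corruption check for "
              << dn.name << "\n";
    return false;
  }
  // With a synced snapclient the bounds are trustworthy; an inverted range
  // really is damage.
  bool corrupt = dn.first_snap > dn.last_snap;
  if (corrupt) {
    std::cout << "newly corrupt dentry detected: " << dn.name << "\n";
  }
  return corrupt;
}

int main() {
  SnapClientStub snapclient;                          // not yet synced
  DentryStub dn{"volumes", /*first*/ 2, /*last*/ 1};  // bounds look inverted

  // During recovery: no abort, the check is deferred.
  assert(!check_corruption(dn, snapclient));

  // After the startup procedure syncs the snapclient, the check is real.
  snapclient.synced = true;
  assert(check_corruption(dn, snapclient));
  return 0;
}
~~~

In the crash signature below, check_corruption() fires from EMetaBlob::add_dir_context() while journaling a subtree map during blocklist handling; the summary suggests this happened before the snap client had synced, which is what made the "newly corrupt dentry" verdict spurious.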

Comment 9 Manny 2024-06-04 15:53:14 UTC
Crash signature:

~~~
2024-04-28T13:25:34.748+0000 7fedc6158700 -1 mds.4.cache.den(0x1 volumes) newly corrupt dentry to be committed: [dentry #0x1/volumes [2,head] rep@-2,-2.0 (dversion lock) v=0 ino=0x10000000000 state=0 | inodepin=1 0x55ff14692000]
2024-04-28T13:25:34.748+0000 7fedc6158700 -1 log_channel(cluster) log [ERR] : MDS abort because newly corrupt dentry to be committed: [dentry #0x1/volumes [2,head] rep@-2,-2.0 (dversion lock) v=0 ino=0x10000000000 state=0 | inodepin=1 0x55ff14692000]
2024-04-28T13:25:34.751+0000 7fedc6158700 -1 /builddir/build/BUILD/ceph-16.2.10/src/mds/CDentry.cc: In function 'bool CDentry::check_corruption(bool)' thread 7fedc6158700 time 2024-04-28T13:25:34.750059+0000
/builddir/build/BUILD/ceph-16.2.10/src/mds/CDentry.cc: 699: ceph_abort_msg("abort() called")

 ceph version 16.2.10-187.el8cp (5d6355e2bccd18b5c6457a34cb666d773f21823d) pacific (stable)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xe5) [0x7fedceda6c74]
 2: (CDentry::check_corruption(bool)+0x456) [0x55ff0dfe98a6]
 3: (EMetaBlob::add_dir_context(CDir*, int)+0x241) [0x55ff0e132931]
 4: (MDCache::create_subtree_map()+0x1acd) [0x55ff0defef0d]
 5: (MDLog::_journal_segment_subtree_map(MDSContext*)+0x126) [0x55ff0e0c35d6]
 6: (MDLog::_submit_entry(LogEvent*, MDSLogContextBase*)+0x358) [0x55ff0e0c3a28]
 7: (Server::journal_close_session(Session*, int, Context*)+0x78c) [0x55ff0dded24c]
 8: (Server::kill_session(Session*, Context*)+0x212) [0x55ff0dded9a2]
 9: (Server::apply_blocklist()+0x10d) [0x55ff0ddedc5d]
 10: (MDSRank::apply_blocklist(std::set<entity_addr_t, std::less<entity_addr_t>, std::allocator<entity_addr_t> > const&, unsigned int)+0x34) [0x55ff0dda9824]
 11: (MDSRankDispatcher::handle_osd_map()+0x122) [0x55ff0dda9b92]
 12: (MDSDaemon::handle_core_message(boost::intrusive_ptr<Message const> const&)+0x33b) [0x55ff0dd93b8b]
 13: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0xc3) [0x55ff0dd94473]
 14: (DispatchQueue::entry()+0x126a) [0x7fedcefef8da]
 15: (DispatchQueue::DispatchThread::entry()+0x11) [0x7fedcf0a2e21]
 16: /lib64/libpthread.so.0(+0x81ca) [0x7fedcdd851ca]
 17: clone()

2024-04-28T13:25:34.753+0000 7fedc6158700 -1 *** Caught signal (Aborted) **
 in thread 7fedc6158700 thread_name:ms_dispatch

 ceph version 16.2.10-187.el8cp (5d6355e2bccd18b5c6457a34cb666d773f21823d) pacific (stable)
 1: /lib64/libpthread.so.0(+0x12cf0) [0x7fedcdd8fcf0]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1b6) [0x7fedceda6d45]
 5: (CDentry::check_corruption(bool)+0x456) [0x55ff0dfe98a6]
 6: (EMetaBlob::add_dir_context(CDir*, int)+0x241) [0x55ff0e132931]
 7: (MDCache::create_subtree_map()+0x1acd) [0x55ff0defef0d]
 8: (MDLog::_journal_segment_subtree_map(MDSContext*)+0x126) [0x55ff0e0c35d6]
 9: (MDLog::_submit_entry(LogEvent*, MDSLogContextBase*)+0x358) [0x55ff0e0c3a28]
 10: (Server::journal_close_session(Session*, int, Context*)+0x78c) [0x55ff0dded24c]
 11: (Server::kill_session(Session*, Context*)+0x212) [0x55ff0dded9a2]
 12: (Server::apply_blocklist()+0x10d) [0x55ff0ddedc5d]
 13: (MDSRank::apply_blocklist(std::set<entity_addr_t, std::less<entity_addr_t>, std::allocator<entity_addr_t> > const&, unsigned int)+0x34) [0x55ff0dda9824]
 14: (MDSRankDispatcher::handle_osd_map()+0x122) [0x55ff0dda9b92]
 15: (MDSDaemon::handle_core_message(boost::intrusive_ptr<Message const> const&)+0x33b) [0x55ff0dd93b8b]
 16: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0xc3) [0x55ff0dd94473]
 17: (DispatchQueue::entry()+0x126a) [0x7fedcefef8da]
 18: (DispatchQueue::DispatchThread::entry()+0x11) [0x7fedcf0a2e21]
 19: /lib64/libpthread.so.0(+0x81ca) [0x7fedcdd851ca]
 20: clone()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
~~~

See also the Knowledge Base solution (https://access.redhat.com/solutions/7073314)

BR
Manny

Comment 10 errata-xmlrpc 2024-06-13 14:31:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

