Description of problem: see https://tracker.ceph.com/issues/64058
Crash signature:

~~~
2024-04-28T13:25:34.748+0000 7fedc6158700 -1 mds.4.cache.den(0x1 volumes) newly corrupt dentry to be committed: [dentry #0x1/volumes [2,head] rep@-2,-2.0 (dversion lock) v=0 ino=0x10000000000 state=0 | inodepin=1 0x55ff14692000]
2024-04-28T13:25:34.748+0000 7fedc6158700 -1 log_channel(cluster) log [ERR] : MDS abort because newly corrupt dentry to be committed: [dentry #0x1/volumes [2,head] rep@-2,-2.0 (dversion lock) v=0 ino=0x10000000000 state=0 | inodepin=1 0x55ff14692000]
2024-04-28T13:25:34.751+0000 7fedc6158700 -1 /builddir/build/BUILD/ceph-16.2.10/src/mds/CDentry.cc: In function 'bool CDentry::check_corruption(bool)' thread 7fedc6158700 time 2024-04-28T13:25:34.750059+0000
/builddir/build/BUILD/ceph-16.2.10/src/mds/CDentry.cc: 699: ceph_abort_msg("abort() called")

 ceph version 16.2.10-187.el8cp (5d6355e2bccd18b5c6457a34cb666d773f21823d) pacific (stable)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xe5) [0x7fedceda6c74]
 2: (CDentry::check_corruption(bool)+0x456) [0x55ff0dfe98a6]
 3: (EMetaBlob::add_dir_context(CDir*, int)+0x241) [0x55ff0e132931]
 4: (MDCache::create_subtree_map()+0x1acd) [0x55ff0defef0d]
 5: (MDLog::_journal_segment_subtree_map(MDSContext*)+0x126) [0x55ff0e0c35d6]
 6: (MDLog::_submit_entry(LogEvent*, MDSLogContextBase*)+0x358) [0x55ff0e0c3a28]
 7: (Server::journal_close_session(Session*, int, Context*)+0x78c) [0x55ff0dded24c]
 8: (Server::kill_session(Session*, Context*)+0x212) [0x55ff0dded9a2]
 9: (Server::apply_blocklist()+0x10d) [0x55ff0ddedc5d]
 10: (MDSRank::apply_blocklist(std::set<entity_addr_t, std::less<entity_addr_t>, std::allocator<entity_addr_t> > const&, unsigned int)+0x34) [0x55ff0dda9824]
 11: (MDSRankDispatcher::handle_osd_map()+0x122) [0x55ff0dda9b92]
 12: (MDSDaemon::handle_core_message(boost::intrusive_ptr<Message const> const&)+0x33b) [0x55ff0dd93b8b]
 13: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0xc3) [0x55ff0dd94473]
 14: (DispatchQueue::entry()+0x126a) [0x7fedcefef8da]
 15: (DispatchQueue::DispatchThread::entry()+0x11) [0x7fedcf0a2e21]
 16: /lib64/libpthread.so.0(+0x81ca) [0x7fedcdd851ca]
 17: clone()

2024-04-28T13:25:34.753+0000 7fedc6158700 -1 *** Caught signal (Aborted) **
 in thread 7fedc6158700 thread_name:ms_dispatch

 ceph version 16.2.10-187.el8cp (5d6355e2bccd18b5c6457a34cb666d773f21823d) pacific (stable)
 1: /lib64/libpthread.so.0(+0x12cf0) [0x7fedcdd8fcf0]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1b6) [0x7fedceda6d45]
 5: (CDentry::check_corruption(bool)+0x456) [0x55ff0dfe98a6]
 6: (EMetaBlob::add_dir_context(CDir*, int)+0x241) [0x55ff0e132931]
 7: (MDCache::create_subtree_map()+0x1acd) [0x55ff0defef0d]
 8: (MDLog::_journal_segment_subtree_map(MDSContext*)+0x126) [0x55ff0e0c35d6]
 9: (MDLog::_submit_entry(LogEvent*, MDSLogContextBase*)+0x358) [0x55ff0e0c3a28]
 10: (Server::journal_close_session(Session*, int, Context*)+0x78c) [0x55ff0dded24c]
 11: (Server::kill_session(Session*, Context*)+0x212) [0x55ff0dded9a2]
 12: (Server::apply_blocklist()+0x10d) [0x55ff0ddedc5d]
 13: (MDSRank::apply_blocklist(std::set<entity_addr_t, std::less<entity_addr_t>, std::allocator<entity_addr_t> > const&, unsigned int)+0x34) [0x55ff0dda9824]
 14: (MDSRankDispatcher::handle_osd_map()+0x122) [0x55ff0dda9b92]
 15: (MDSDaemon::handle_core_message(boost::intrusive_ptr<Message const> const&)+0x33b) [0x55ff0dd93b8b]
 16: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0xc3) [0x55ff0dd94473]
 17: (DispatchQueue::entry()+0x126a) [0x7fedcefef8da]
 18: (DispatchQueue::DispatchThread::entry()+0x11) [0x7fedcf0a2e21]
 19: /lib64/libpthread.so.0(+0x81ca) [0x7fedcdd851ca]
 20: clone()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
~~~

See also the Knowledge Base article: https://access.redhat.com/solutions/7073314

BR
Manny
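For context on the abort path above: the MDS validates each dentry as it is added to a journal event (EMetaBlob::add_dir_context → CDentry::check_corruption), and when corruption is detected at commit time it aborts rather than persisting the bad metadata. The sketch below is a minimal, self-contained illustration of that guard pattern only; the Dentry struct, the invariant checked, and the abort_on_newly_corrupt_dentry flag are invented stand-ins, not Ceph's actual implementation (the real check lives in src/mds/CDentry.cc, and in releases that carry it the abort is gated by the mds_abort_on_newly_corrupt_dentry option).

~~~
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <string>

// Illustrative stand-in for an MDS dentry; fields are invented for this sketch.
struct Dentry {
    std::string name;
    uint64_t ino;       // backing inode number; 0 means no inode linked
    bool is_primary;    // a primary link is expected to reference a valid inode
};

// Hypothetical analogue of CDentry::check_corruption(bool): validate before
// commit, and either abort or merely report depending on the flag.
bool check_corruption(const Dentry& dn, bool abort_on_newly_corrupt_dentry) {
    // Invented invariant for illustration: a primary dentry must have an inode.
    const bool corrupt = dn.is_primary && dn.ino == 0;
    if (corrupt) {
        std::cerr << "newly corrupt dentry to be committed: " << dn.name << "\n";
        if (abort_on_newly_corrupt_dentry) {
            // Mirrors the ceph_abort_msg("abort() called") in the signature:
            // crash now rather than journal corrupt metadata.
            std::abort();
        }
    }
    return corrupt;
}

int main() {
    Dentry ok{"volumes", 0x10000000000ULL, true};
    Dentry bad{"broken", 0, true};
    check_corruption(ok, true);    // healthy dentry: passes silently
    check_corruption(bad, false);  // corrupt, flag off: logged, not fatal
    check_corruption(bad, true);   // corrupt, flag on: logged, then abort()
    return 0;
}
~~~

Failing fast here is the point of the check: an abort keeps the corrupt dentry out of the on-disk journal, where it would otherwise be replayed on every MDS restart.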
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925