Bug 2272983 - mds: ensure snapclient is synced before corruption check
Summary: mds: ensure snapclient is synced before corruption check
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.1
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.1z7
Assignee: Patrick Donnelly
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On: 2272979 2272981
Blocks:
 
Reported: 2024-04-03 14:30 UTC by Patrick Donnelly
Modified: 2024-08-28 17:58 UTC
CC List: 5 users

Fixed In Version: ceph-17.2.6-227.el9cp
Doc Type: Bug Fix
Doc Text:
Previously, some metadata was marked as corrupted because the state tests for the file system snap systems were incorrect. Due to this, the MDS wrongly indicated that some metadata was damaged. With this fix, the state tests for the file system snap systems are corrected and the MDS no longer marks correct metadata as corrupt.
Clone Of:
Environment:
Last Closed: 2024-08-28 17:58:44 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 64058 0 None None None 2024-06-04 15:57:40 UTC
Ceph Project Bug Tracker 64921 0 None None None 2024-04-03 14:30:30 UTC
Ceph Project Bug Tracker 66634 0 None None None 2024-07-22 14:05:23 UTC
Red Hat Issue Tracker RHCEPH-8737 0 None None None 2024-04-03 14:34:07 UTC
Red Hat Knowledge Base (Solution) 7073314 0 None None None 2024-06-04 15:57:40 UTC
Red Hat Product Errata RHBA-2024:5960 0 None None None 2024-08-28 17:58:46 UTC

Description Patrick Donnelly 2024-04-03 14:30:30 UTC
This bug was initially created as a copy of Bug #2272981

I am copying this bug because: 

6.1z backport

This bug was initially created as a copy of Bug #2272979

I am copying this bug because: 

7.0z backport

Description of problem:

See: https://tracker.ceph.com/issues/64058
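
The bug summary gives the shape of the fix rather than the exact upstream patch: the dentry corruption check can raise false positives while the MDS snap client has not yet caught up with the snap server, so snap-related state should only be treated as authoritative once the snapclient reports itself synced. Below is a minimal standalone sketch of that guard; the type and member names (SnapClientModel, DentryModel, and so on) are illustrative stand-ins, not the actual Ceph MDS classes.

~~~
// Minimal standalone model of "ensure snapclient is synced before the
// corruption check". All names are illustrative; the real logic lives in
// CDentry::check_corruption() and the MDS SnapClient.
#include <cstdint>
#include <iostream>

struct SnapClientModel {
    bool synced = false;                 // becomes true once the snap cache has caught up
    bool is_synced() const { return synced; }
};

struct DentryModel {
    uint64_t first = 2;                  // snapid range start, as in "[2,head]"
    uint64_t last  = UINT64_MAX;         // stand-in for CEPH_NOSNAP ("head")
};

// Returns true if the dentry should be treated as newly corrupt.
bool check_corruption(const DentryModel& dn, const SnapClientModel& snapclient) {
    if (!snapclient.is_synced()) {
        // Snap metadata is not authoritative yet; validating the dentry's
        // snapid range against it would produce false positives, so skip
        // the snap-related part of the check for now.
        return false;
    }
    // Placeholder for the real validation against the (now synced) snap tables.
    return dn.first > dn.last;
}

int main() {
    SnapClientModel snapclient;          // not yet synced
    DentryModel dn;
    std::cout << std::boolalpha;
    std::cout << "corrupt (snapclient unsynced): " << check_corruption(dn, snapclient) << "\n";
    snapclient.synced = true;
    std::cout << "corrupt (snapclient synced):   " << check_corruption(dn, snapclient) << "\n";
}
~~~

The point of the guard is ordering: the strict validation only runs against snap tables that are known to be current, so a healthy dentry such as 0x1/volumes is no longer flagged merely because the snap cache is stale.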

Comment 3 Manny 2024-06-04 15:57:40 UTC
Crash signature:

~~~
2024-04-28T13:25:34.748+0000 7fedc6158700 -1 mds.4.cache.den(0x1 volumes) newly corrupt dentry to be committed: [dentry #0x1/volumes [2,head] rep@-2,-2.0 (dversion lock) v=0 ino=0x10000000000 state=0 | inodepin=1 0x55ff14692000]
2024-04-28T13:25:34.748+0000 7fedc6158700 -1 log_channel(cluster) log [ERR] : MDS abort because newly corrupt dentry to be committed: [dentry #0x1/volumes [2,head] rep@-2,-2.0 (dversion lock) v=0 ino=0x10000000000 state=0 | inodepin=1 0x55ff14692000]
2024-04-28T13:25:34.751+0000 7fedc6158700 -1 /builddir/build/BUILD/ceph-16.2.10/src/mds/CDentry.cc: In function 'bool CDentry::check_corruption(bool)' thread 7fedc6158700 time 2024-04-28T13:25:34.750059+0000
/builddir/build/BUILD/ceph-16.2.10/src/mds/CDentry.cc: 699: ceph_abort_msg("abort() called")

 ceph version 16.2.10-187.el8cp (5d6355e2bccd18b5c6457a34cb666d773f21823d) pacific (stable)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xe5) [0x7fedceda6c74]
 2: (CDentry::check_corruption(bool)+0x456) [0x55ff0dfe98a6]
 3: (EMetaBlob::add_dir_context(CDir*, int)+0x241) [0x55ff0e132931]
 4: (MDCache::create_subtree_map()+0x1acd) [0x55ff0defef0d]
 5: (MDLog::_journal_segment_subtree_map(MDSContext*)+0x126) [0x55ff0e0c35d6]
 6: (MDLog::_submit_entry(LogEvent*, MDSLogContextBase*)+0x358) [0x55ff0e0c3a28]
 7: (Server::journal_close_session(Session*, int, Context*)+0x78c) [0x55ff0dded24c]
 8: (Server::kill_session(Session*, Context*)+0x212) [0x55ff0dded9a2]
 9: (Server::apply_blocklist()+0x10d) [0x55ff0ddedc5d]
 10: (MDSRank::apply_blocklist(std::set<entity_addr_t, std::less<entity_addr_t>, std::allocator<entity_addr_t> > const&, unsigned int)+0x34) [0x55ff0dda9824]
 11: (MDSRankDispatcher::handle_osd_map()+0x122) [0x55ff0dda9b92]
 12: (MDSDaemon::handle_core_message(boost::intrusive_ptr<Message const> const&)+0x33b) [0x55ff0dd93b8b]
 13: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0xc3) [0x55ff0dd94473]
 14: (DispatchQueue::entry()+0x126a) [0x7fedcefef8da]
 15: (DispatchQueue::DispatchThread::entry()+0x11) [0x7fedcf0a2e21]
 16: /lib64/libpthread.so.0(+0x81ca) [0x7fedcdd851ca]
 17: clone()

2024-04-28T13:25:34.753+0000 7fedc6158700 -1 *** Caught signal (Aborted) **
 in thread 7fedc6158700 thread_name:ms_dispatch

 ceph version 16.2.10-187.el8cp (5d6355e2bccd18b5c6457a34cb666d773f21823d) pacific (stable)
 1: /lib64/libpthread.so.0(+0x12cf0) [0x7fedcdd8fcf0]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1b6) [0x7fedceda6d45]
 5: (CDentry::check_corruption(bool)+0x456) [0x55ff0dfe98a6]
 6: (EMetaBlob::add_dir_context(CDir*, int)+0x241) [0x55ff0e132931]
 7: (MDCache::create_subtree_map()+0x1acd) [0x55ff0defef0d]
 8: (MDLog::_journal_segment_subtree_map(MDSContext*)+0x126) [0x55ff0e0c35d6]
 9: (MDLog::_submit_entry(LogEvent*, MDSLogContextBase*)+0x358) [0x55ff0e0c3a28]
 10: (Server::journal_close_session(Session*, int, Context*)+0x78c) [0x55ff0dded24c]
 11: (Server::kill_session(Session*, Context*)+0x212) [0x55ff0dded9a2]
 12: (Server::apply_blocklist()+0x10d) [0x55ff0ddedc5d]
 13: (MDSRank::apply_blocklist(std::set<entity_addr_t, std::less<entity_addr_t>, std::allocator<entity_addr_t> > const&, unsigned int)+0x34) [0x55ff0dda9824]
 14: (MDSRankDispatcher::handle_osd_map()+0x122) [0x55ff0dda9b92]
 15: (MDSDaemon::handle_core_message(boost::intrusive_ptr<Message const> const&)+0x33b) [0x55ff0dd93b8b]
 16: (MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0xc3) [0x55ff0dd94473]
 17: (DispatchQueue::entry()+0x126a) [0x7fedcefef8da]
 18: (DispatchQueue::DispatchThread::entry()+0x11) [0x7fedcf0a2e21]
 19: /lib64/libpthread.so.0(+0x81ca) [0x7fedcdd851ca]
 20: clone()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
~~~
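
The backtrace shows why a false positive here is fatal: while journaling a new subtree map (in this case during blocklist-driven session teardown), EMetaBlob::add_dir_context() walks the ancestor dentries of the journaled directories, and CDentry::check_corruption() aborts the daemon rather than commit a dentry it considers newly corrupt. A rough standalone model of that commit-time abort follows; the function names in it are illustrative only, not the Ceph source.

~~~
// Rough model of the commit-time behaviour seen in the backtrace: journaling
// walks ancestor dentries and refuses to commit one flagged by the corruption
// check, aborting the MDS instead. Names are illustrative.
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

struct Dentry {
    std::string name;
    bool looks_corrupt;                  // result of the (possibly wrong) check
};

void abort_on_corrupt_commit(const Dentry& dn) {
    // Models the "log [ERR]" message followed by ceph_abort_msg("abort() called").
    std::cerr << "log [ERR] : MDS abort because newly corrupt dentry to be committed: "
              << dn.name << "\n";
    std::abort();
}

// Models EMetaBlob::add_dir_context(): every ancestor dentry of the journaled
// directory is checked before the log event is submitted.
void journal_subtree(const std::vector<Dentry>& ancestors) {
    for (const auto& dn : ancestors) {
        if (dn.looks_corrupt)
            abort_on_corrupt_commit(dn);
    }
    std::cout << "subtree map journaled\n";
}

int main() {
    // With the pre-fix behaviour, a stale snapclient could make a healthy
    // dentry such as "/volumes" look corrupt and bring the MDS down here.
    journal_subtree({Dentry{"/volumes", false}});
}
~~~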

See also the Knowledge Base solution (https://access.redhat.com/solutions/7073314)

BR
Manny

Comment 12 errata-xmlrpc 2024-08-28 17:58:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:5960

