Description of problem:

A single OSD crashes and restarts every few minutes. When this OSD is marked "out" and another OSD backfills to take its place, the new OSD also begins to crash and restart continuously.

Oct 8 11:00:55 str-yyz-02-01 ceph-osd: -4> 2015-10-08 11:00:55.862853 7f33c3e26700 2 osd.91 pg_epoch: 66350 pg[5.2d6( v 66302'153650 (48156'150649,66302'153650] local-les=66350 n=1553 ec=65 les/c 66350/66350 66349/66349/66349) [91,69,32] r=0 lpr=66349 crt=66302'153647 lcod 0'0 mlcod 0'0 active+clean+scrubbing] scrub osd.91 has 24 items
Oct 8 11:00:55 str-yyz-02-01 ceph-osd: -3> 2015-10-08 11:00:55.862877 7f33c3e26700 2 osd.91 pg_epoch: 66350 pg[5.2d6( v 66302'153650 (48156'150649,66302'153650] local-les=66350 n=1553 ec=65 les/c 66350/66350 66349/66349/66349) [91,69,32] r=0 lpr=66349 crt=66302'153647 lcod 0'0 mlcod 0'0 active+clean+scrubbing] scrub replica 32 has 24 items
Oct 8 11:00:55 str-yyz-02-01 ceph-osd: -2> 2015-10-08 11:00:55.862885 7f33c3e26700 2 osd.91 pg_epoch: 66350 pg[5.2d6( v 66302'153650 (48156'150649,66302'153650] local-les=66350 n=1553 ec=65 les/c 66350/66350 66349/66349/66349) [91,69,32] r=0 lpr=66349 crt=66302'153647 lcod 0'0 mlcod 0'0 active+clean+scrubbing] scrub replica 69 has 24 items
Oct 8 11:00:55 str-yyz-02-01 ceph-osd: -1> 2015-10-08 11:00:55.863074 7f33c3e26700 2 osd.91 pg_epoch: 66350 pg[5.2d6( v 66302'153650 (48156'150649,66302'153650] local-les=66350 n=1553 ec=65 les/c 66350/66350 66349/66349/66349) [91,69,32] r=0 lpr=66349 crt=66302'153647 lcod 0'0 mlcod 0'0 active+clean+scrubbing]
Oct 8 11:02:31 str-yyz-02-01 ceph-osd: 0> 2015-10-08 11:02:31.048794 7f5782775700 -1 osd/osd_types.cc: In function 'uint64_t SnapSet::get_clone_bytes(snapid_t) const' thread 7f5782775700 time 2015-10-08 11:02:31.047352
osd/osd_types.cc: 3543: FAILED assert(clone_size.count(clone))

 ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
 1: (SnapSet::get_clone_bytes(snapid_t) const+0xb6) [0x707b46]
 2: (ReplicatedPG::_scrub(ScrubMap&)+0x9e8) [0x7c0198]
 3: (PG::scrub_compare_maps()+0x5b6) [0x755306]
 4: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x1d9) [0x758999]
 5: (PG::scrub(ThreadPool::TPHandle&)+0x19a) [0x75b96a]
 6: (OSD::ScrubWQ::_process(PG*, ThreadPool::TPHandle&)+0x19) [0x657309]
 7: (ThreadPool::worker(ThreadPool::WorkThread*)+0xaf1) [0xa56dd1]
 8: (ThreadPool::WorkThread::entry()+0x10) [0xa57cc0]
 9: (()+0x8182) [0x7f579ce6e182]
 10: (clone()+0x6d) [0x7f579b5e0fbd]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

In the snippet above, a corrupted snapset was found during scrubbing. The scrubbing logic handles this case incorrectly, and the OSD crashes every time the scrubber encounters the corrupted snapset. This logic has already been rewritten upstream so the scrub no longer crashes, but the change has not been added to any release at the time of this writing. With the rewrite, scrub instead reports an unexpected clone in the ObjectStore; a tracker is open upstream.

Version-Release number of selected component (if applicable):
1.2.3

Additional info:
Spoke to dzafman about this; upstream tracker 12738 is open and the fix is in the works. This is a request to port this fix downstream.

KCS: https://access.redhat.com/node/1993983/
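To confirm that a flapping OSD is hitting this specific assertion rather than some other crash, the failed assert can be grepped out of the OSD's log along with the scrub activity leading up to it. This is only a sketch; the log path assumes osd.91 in the default log location:

  # Print the assert plus the most recent scrub context before it (the dump
  # above shows the dying thread was working on an active+clean+scrubbing PG).
  grep -B 50 'FAILED assert(clone_size.count(clone))' /var/log/ceph/ceph-osd.91.log \
      | grep -E 'FAILED assert|scrubbing'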
David, would you please provide the reproduction steps for QE to verify the fix here? To trigger the crash, it looks like you have to corrupt a snapset in a particular way and then issue a scrub command to the OSD?
Once the fix is available you can use ceph-objectstore-tool to test. This undocumented feature of the tool is part of the same code that fixes the problem:

1. Create an object with one or more snapshots.

2. Get the JSON for the head object, which has "snapid": -2:

   ceph-objectstore-tool --data-path XXXX --journal-path XXXX --op list name-of-object

3. Clear the clone_size map from the snapset:

   ceph-objectstore-tool --data-path XXXX --journal-path XXXX 'JSON' clear-snapset clone_size

To reproduce the crash without the fix, you can use get-xattr snapset on an object without any snapshots, then use set-xattr snapset to corrupt the head object of an object that does have snapshots:

   ceph-objectstore-tool --data-path XXXX --journal-path XXXX 'JSON-NOSNAPSOBJ' get-xattr snapset > saved.snapset
   ceph-objectstore-tool --data-path XXXX --journal-path XXXX 'JSON' set-xattr snapset saved.snapset
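For QE, a rough end-to-end sequence might look like the following. This is a sketch, not a verified procedure: it assumes a sysvinit-style install, osd.91 as the target, and an object named "testobj" that already has at least one snapshot; every path, id, and name is an example, and the JSON printed by --op list should be pasted in verbatim:

  # ceph-objectstore-tool works on an offline store, so stop the OSD first.
  ceph osd set noout                # keep data from being remapped meanwhile
  service ceph stop osd.91          # service name / init system varies by release

  DATA=/var/lib/ceph/osd/ceph-91
  JOURNAL=/var/lib/ceph/osd/ceph-91/journal

  # Print the object's entries; the head object is the one with "snapid": -2.
  ceph-objectstore-tool --data-path $DATA --journal-path $JOURNAL --op list testobj

  # Corrupt the snapset by dropping its clone_size map ('JSON' is the head
  # object's JSON from the list output above).
  ceph-objectstore-tool --data-path $DATA --journal-path $JOURNAL 'JSON' \
      clear-snapset clone_size

  service ceph start osd.91
  ceph osd unset noout

  # Scrub the PG holding the object. Without the fix the primary OSD hits the
  # assert and dies; with the fix the scrub reports the inconsistency instead.
  ceph pg scrub 5.2d6               # example PG id taken from the log above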
Fixed in infernalis, but the fix is part of a long series of scrub changes we'd prefer not to backport.
https://github.com/ceph/ceph/pull/7702 is the backport to Hammer that has passed my testing.
Hammer backport tracker: http://tracker.ceph.com/issues/14077
Fix is in 0.94.7.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1972.html