Bug 1325322

Summary: OSD hits suicide timeout, or goes down and comes back, when deleting large RBD images with features: striping, exclusive, object map
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: RBD
Assignee: Jason Dillaman <jdillama>
Status: CLOSED ERRATA
QA Contact: Manohar Murthy <mmurthy>
Severity: medium
Docs Contact: Erin Donnelly <edonnell>
Priority: medium
Version: 1.3.2
CC: bniver, ceph-eng-bugs, edonnell, flucifre, hnallurv, jdillama, kdreyer, tserlin, vumrao
Target Milestone: z2
Target Release: 3.0
Hardware: x86_64
OS: Linux
Fixed In Version: RHEL: ceph-12.2.4-1.el7cp Ubuntu: ceph_12.2.4-2redhat1
Doc Type: Bug Fix
Doc Text:
* Previously, an attempt to delete a large RBD image with the "object map" feature enabled could cause the OSD nodes to trigger the "suicide_timeout" and self-terminate. With this update, deleting large RBD images with "object map" no longer causes OSDs to crash.
Last Closed: 2018-04-26 17:38:39 UTC
Type: Bug
Bug Depends On: 1548067    
Bug Blocks: 1348597, 1372735, 1557269    

Description Vikhyat Umrao 2016-04-08 12:49:10 UTC
Description of problem:
OSD hits the suicide timeout, or sometimes the OSD goes down and comes back, when deleting large RBD images with features: striping, exclusive, object map.

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 1.3.2 
ceph version 0.94.5-9.el7cp (deef183a81111fa5e128ec88c90a32c9587c615d)

How reproducible:
Always, as per the customer.

Steps to Reproduce:
In a big cluster, try creating several 100 TB+ RBD images with the features striping, exclusive-lock, and object map. Deleting some of them (at least after writing data to them) should reproduce the behavior. The customer did not observe this on RBD images without the object map feature.

rbd image 'test':
        size 102400 TB in 26843545600 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.b66fc9238e1f29
        format: 2
        features: striping, exclusive, object map
        flags: object map invalid
        stripe unit: 512 kB
        stripe count: 8
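
As a sanity check on the figures in the `rbd info` output above, the reported size and object count are consistent with the reported order (a minimal sketch; the variable names are illustrative, not part of any Ceph API):

```python
# Cross-check the numbers from the `rbd info` output above.
order = 22                     # object size = 2**order bytes = 4 MiB ("4096 kB objects")
num_objects = 26843545600      # object count reported by `rbd info`

object_size = 2 ** order                   # 4194304 bytes per object
total_bytes = num_objects * object_size
total_tib = total_bytes // 2 ** 40         # convert to TiB
print(total_tib)                           # matches the "size 102400 TB" line
```

So the image really is 102400 TB (100 PB), which is why it stresses the object map as described in the next comment.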

Comment 2 Jason Dillaman 2016-04-08 12:57:14 UTC
An object map that is tracking 26843545600 objects will require >6GB of memory to store.

In the attached upstream ticket, we added a guard to prevent the use of object map on extremely large RBD images (>1PB) -- which tops out at 64MB of memory.  In the example above, that is a 100PB image.
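
Both figures can be reproduced with a quick back-of-the-envelope calculation, assuming the object map stores roughly 2 bits of state per object (an assumption about the internal representation, not taken from this report):

```python
BITS_PER_OBJECT = 2  # assumed per-object state size in the object map

def object_map_bytes(image_bytes, order=22):
    """Approximate object-map memory for an image of the given size,
    with 2**order-byte objects (order 22 = 4 MiB, the default)."""
    num_objects = image_bytes // 2 ** order
    return num_objects * BITS_PER_OBJECT // 8

# The 100 PB image from this report: ~6.25 GiB of object map.
print(object_map_bytes(102400 * 2 ** 40) / 2 ** 30)

# The upstream guard limit of 1 PiB: 64 MiB of object map.
print(object_map_bytes(2 ** 50) / 2 ** 20)
```

This matches the ">6GB" figure for the customer's image and the "64MB" ceiling for the largest image the guard still permits.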

Comment 9 Jason Dillaman 2016-04-13 01:20:42 UTC
Upstream PR: https://github.com/ceph/ceph/pull/8401

Comment 45 Ken Dreyer (Red Hat) 2018-02-22 18:20:10 UTC
Resolving with the v12.2.3 rebase.

Comment 56 errata-xmlrpc 2018-04-26 17:38:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1259