* Previously, an attempt to delete a large RBD image with the "object map" feature enabled could cause the OSD nodes to trigger the "suicide_timeout" and self-terminate. With this update, deleting large RBD images with "object map" no longer causes OSDs to crash.
Description of problem:
OSDs hit the suicide timeout, or sometimes an OSD goes down and comes back, when deleting large RBD images with the features striping, exclusive-lock, and object map enabled.
Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 1.3.2
ceph version 0.94.5-9.el7cp (deef183a81111fa5e128ec88c90a32c9587c615d)
How reproducible:
Always, according to the customer.
Steps to Reproduce:
In a large cluster, create several 100TB+ RBD images with the striping, exclusive-lock, and object-map features enabled. Deleting some of them (at least after writing data) should reproduce this behavior. The customer did not observe the problem on RBD images without object map.
rbd image 'test':
size 102400 TB in 26843545600 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.b66fc9238e1f29
format: 2
features: striping, exclusive, object map
flags: object map invalid
stripe unit: 512 kB
stripe count: 8
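The object count in the `rbd info` output above follows directly from the image size and the object order (order 22 means 2^22-byte, i.e. 4 MiB, objects). A minimal sketch, assuming binary units as reported by `rbd` (this is illustrative arithmetic, not Ceph code):

```python
# Reproduce the object count from the `rbd info` output above.
TB = 2**40                 # rbd reports sizes in binary units

size_bytes = 102400 * TB   # "size 102400 TB" (i.e. 100 PB)
order = 22                 # "order 22 (4096 kB objects)"
object_size = 2**order     # 4 MiB per backing object

num_objects = size_bytes // object_size
print(num_objects)         # 26843545600, matching "in 26843545600 objects"
```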
An object map tracking 26843545600 objects requires more than 6GB of memory to store.
In the attached upstream ticket, we added a guard that prevents the use of object map on extremely large RBD images (>1PB), which caps the object map at 64MB of memory. The image in the example above is 100PB.
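The memory figures follow from the object-map encoding, which tracks each object's state in 2 bits. A minimal sketch of the arithmetic (illustrative only, not Ceph code; it ignores any per-container overhead):

```python
# Estimate object-map memory: 2 bits of state per backing object.
BITS_PER_OBJECT = 2

def object_map_bytes(num_objects):
    """Approximate in-memory size of an object map, in bytes."""
    return num_objects * BITS_PER_OBJECT // 8

# The 100PB image from this report (26843545600 objects):
print(object_map_bytes(26843545600) / 2**30)   # 6.25 GiB -- the ">6GB" above

# A 1PB image with default 4 MiB objects -- the upstream guard's limit:
objects_1pb = 2**50 // 2**22                   # 268435456 objects
print(object_map_bytes(objects_1pb) // 2**20)  # 64 MiB -- the "64MB" cap
```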
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2018:1259