Description of problem:
If an RBD image is created using a separate data pool and snapshots are used, deleting a snapshot does not actually free the associated space in the data pool.

Version-Release number of selected component (if applicable):
12.2.1

How reproducible:
100%

Steps to Reproduce (see the command sketch below):
1. Create an image that uses a separate data pool.
2. rbd snap create <image>@<snap>
3. Run rbd bench-write against the image.
4. rbd snap rm <image>@<snap>

Actual results:
The cluster never releases the space associated with the deleted snapshot.

Expected results:
The cluster eventually releases the space associated with the deleted snapshot after it is trimmed.

Additional info:
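A minimal command sketch of the reproducer, assuming a replicated data pool; the pool, image, and snapshot names (rbdmeta, rbddata, img1, snap1) and the pg counts are illustrative and not taken from the original report:

    # Illustrative names only; any metadata pool / data pool pair will do.
    ceph osd pool create rbdmeta 64
    ceph osd pool application enable rbdmeta rbd
    ceph osd pool create rbddata 64                          # separate data pool (replicated here)
    ceph osd pool application enable rbddata rbd
    rbd create rbdmeta/img1 --size 2048 --data-pool rbddata  # image whose data objects live in rbddata
    rbd snap create rbdmeta/img1@snap1                       # take a snapshot
    rbd bench-write rbdmeta/img1                             # write I/O so new data lands in rbddata
    rbd snap rm rbdmeta/img1@snap1                           # delete the snapshot
    ceph df                                                  # rbddata usage is expected to shrink once snap trimming completes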
After the snapshot deletion, the space was reclaimed from the total pool usage. I followed the steps below (a command sketch is included after the output); please share any concerns or additional verification steps if needed.

1. Create a pool, an image, and a snapshot on it.
2. Write some data to the image (used the dd command).
3. Note the used space in the pool and the image.
4. Create a snapshot and write data again.
5. Note the total pool space used.
6. Delete the snapshot and check the pool's used space.

Eventually the pool's used space was freed up.

Version: ceph version 12.2.1-10.el7cp (5ba1c3fa606d7bf16f72756b0026f04a40297673) luminous (stable)

[root@aircobra ~]# rbd du -p p3
warning: fast-diff map is not enabled for finalvol. operation may be slow.
warning: fast-diff map is not enabled for testvol. operation may be slow.
NAME             PROVISIONED    USED
finalvol@snap1         2048M       0
finalvol@snap2         2048M  16384k
finalvol@snap3         2048M  16384k
finalvol               2048M  16384k
testvol                1024M  73728k
<TOTAL>                3072M    120M

[root@aircobra ~]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    26694G     26684G        9498M          0.03
POOLS:
    NAME                    ID     USED       %USED     MAX AVAIL     OBJECTS
    .rgw.root               1        1113         0         8448G           4
    default.rgw.control     2           0         0         8448G           8
    default.rgw.meta        3           0         0         8448G           0
    default.rgw.log         4           0         0         8448G         207
    rbd                     5        5931         0         8448G           9
    p1                      6       3866M      0.01         8448G        1383
    p2                      7           0         0         8448G           0
    p3                      8        100M         0         8448G          36
    p4                      9         158         0         8448G           1

[root@aircobra ~]# rbd snap rm p3/finalvol@snap3
Removing snap: 100% complete...done.

[root@aircobra ~]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    26694G     26684G        9498M          0.03
POOLS:
    NAME                    ID     USED       %USED     MAX AVAIL     OBJECTS
    .rgw.root               1        1113         0         8448G           4
    default.rgw.control     2           0         0         8448G           8
    default.rgw.meta        3           0         0         8448G           0
    default.rgw.log         4           0         0         8448G         207
    rbd                     5        5931         0         8448G           9
    p1                      6       3866M      0.01         8448G        1383
    p2                      7           0         0         8448G           0
    p3                      8        100M         0         8448G          36
    p4                      9         158         0         8448G           1

[root@aircobra ~]# rbd du -p p3
warning: fast-diff map is not enabled for finalvol. operation may be slow.
warning: fast-diff map is not enabled for testvol. operation may be slow.
NAME             PROVISIONED    USED
finalvol@snap1         2048M       0
finalvol@snap2         2048M  16384k
finalvol               2048M  16384k
testvol                1024M  73728k
<TOTAL>                3072M    104M

[root@aircobra ~]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    26694G     26685G        9403M          0.03
POOLS:
    NAME                    ID     USED       %USED     MAX AVAIL     OBJECTS
    .rgw.root               1        1113         0         8448G           4
    default.rgw.control     2           0         0         8448G           8
    default.rgw.meta        3           0         0         8448G           0
    default.rgw.log         4           0         0         8448G         207
    rbd                     5        5931         0         8448G           9
    p1                      6       3866M      0.01         8448G        1383
    p2                      7           0         0         8448G           0
    p3                      8      86696k         0         8448G          32
    p4                      9         158         0         8448G           1

[root@aircobra ~]#
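For reference, a minimal sketch of the verification flow above, assuming the image is mapped through krbd so dd can write to it; the pg count, dd sizes, and the snapshot name snap1 are illustrative and not taken verbatim from the test run:

    # Sketch only; names/sizes are illustrative.
    ceph osd pool create p3 64                               # pg_num of 64 is an arbitrary choice
    ceph osd pool application enable p3 rbd
    rbd create p3/finalvol --size 2048                       # 2 GiB image

    DEV=$(rbd map p3/finalvol)                               # map via krbd so dd can write to it
    dd if=/dev/urandom of="$DEV" bs=4M count=4 oflag=direct  # step 2: write initial data
    rbd du -p p3; ceph df                                    # step 3: record usage

    rbd snap create p3/finalvol@snap1                        # step 4: snapshot, then write again
    dd if=/dev/urandom of="$DEV" bs=4M count=4 seek=4 oflag=direct
    ceph df                                                  # step 5: usage with the snapshot held

    rbd snap rm p3/finalvol@snap1                            # step 6: delete the snapshot
    ceph df                                                  # usage should drop once snap trimming completes
    rbd unmap "$DEV"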
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387