Description of problem:
When delete requests are issued for volumes with snapshots (snapshots created with the `gluster snapshot create` command), Heketi goes ahead and deletes the volume entry from its db without successfully deleting the Gluster volume. Gluster volumes with snapshots cannot be deleted without first deleting the snapshots.

Version-Release number of selected component (if applicable):
heketi-1.0.2-1.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create volume: heketi-cli volume create -size=500 -replica=2
2. Create a snapshot of the volume: gluster snapshot create snap1 vol_d3613293878b0ad99436e8607509775d
3. Delete volume: heketi-cli volume delete d3613293878b0ad99436e8607509775d

Actual results:
Volume is deleted from the heketi db, but the volume still exists in the cluster (in stopped state):

Volume Name: vol_d3613293878b0ad99436e8607509775d
Type: Distributed-Replicate
Volume ID: d30d1538-4043-4367-82ed-be6f51b02701
Status: Stopped
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: rhshdp05.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_f49a18501d3d71f05e1442fb0105dd3b/brick_0be02b4b2642ad94a1cb605b26b14296/brick
Brick2: rhshdp15.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_be7077936e6a3809b3d258ddc631d324/brick_8a8944d1a3366060698c342b1b4e835a/brick
Brick3: rhshdp12.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_e499cfde2ec888f36a91888c9a74906d/brick_22144bbd52af750c38b3c06fdf67a531/brick
Brick4: rhshdp13.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_191770094f1f607e532c1a66aba58123/brick_deaa35d43fab07ea374b3cd2a3399905/brick
Brick5: rhshdp12.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_7141c14ba16bdf63cc396b35a7865709/brick_7919ae91c67661b335c640dd36a54071/brick
Brick6: rhshdp05.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_9093dce5e70e5eacb6954bfd88418559/brick_6dc0e4c636b71620a516b42f293652e7/brick
Brick7: rhshdp13.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_64d204c49de1ef9f74b564b4efb4bec4/brick_02e976d2bf1fa2b947a121c2edb37449/brick
Brick8: rhshdp04.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_eb0258b2f521e0a5534547e1395a624e/brick_7dfb4e6d1d0ef2f4b01c3bac7f3a7867/brick
Brick9: rhshdp13.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_44b4c5af543e58df9982dda8df65d786/brick_f4e49f6a583e99860ebbbf492e3ec4fd/brick
Brick10: rhshdp04.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_b000cc8193ce5fa624abc34aa1143774/brick_210deee5157d9406df565f9913e0eca5/brick
Brick11: rhshdp12.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_2747fe342a4de6a783bf5e850e0bfdeb/brick_a47af2245a5b5c772fd8709c28830ac4/brick
Brick12: rhshdp13.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_470e34d3e7e672a2f258aa43947052f2/brick_87c270227084765e4babfb25989f47c5/brick
Brick13: rhshdp06.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_ae0195d25893efdddda4561e88108600/brick_88c2430557c74be2f442e886cbb63f78/brick
Brick14: rhshdp15.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_9d03a7dbd6ac97225c0ead678d2334f0/brick_59b38322306bfca47abc544f8d7d5f2c/brick
Brick15: rhshdp06.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_48faff2afed779db24532ffc2e24108b/brick_602487997105ac9359662e983e3a29c5/brick
Brick16: rhshdp13.lab.eng.blr.redhat.com:/var/lib/heketi/mounts/vg_313ae3b43cdf8b7d5a2f4dce83e309bd/brick_fd9a694ff0687f483ea45af1b7cf13f3/brick
Options Reconfigured:
features.barrier: disable
performance.readdir-ahead: on

Expected results:
heketi volume delete should fail.

Additional info:
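Until a fix lands, the safe workaround is to check whether a volume still has snapshots before issuing the delete, e.g. by inspecting the output of `gluster snapshot list <volname>`. A minimal sketch in Go of such a check; the parsing helper and the "No snapshots present" message handling are assumptions about the CLI output format, not Heketi code:

```go
package main

import (
	"fmt"
	"strings"
)

// countSnapshots parses the stdout of `gluster snapshot list <volname>`.
// Assumption: one snapshot name per line, and a "No snapshots present"
// message when the volume has none.
func countSnapshots(out string) int {
	n := 0
	for _, line := range strings.Split(out, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.Contains(line, "No snapshots present") {
			continue
		}
		n++
	}
	return n
}

func main() {
	// Example output captured from a volume with two snapshots.
	out := "snap1\nsnap2\n"
	fmt.Printf("volume has %d snapshot(s); delete only when this is 0\n", countSnapshots(out))
}
```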
This is a known issue and will be fixed in release 2 of Heketi, as part of better handling of snapshots in Heketi.
Upstream bug created: https://github.com/heketi/heketi/issues/256
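Judging from the upstream issue, the fix presumably amounts to a pre-delete guard in Heketi's volume-delete path: reject the request while the Gluster volume still has snapshots, instead of dropping the db entry regardless. A hedged sketch of that logic; the function name and error wording are illustrative, not the actual Heketi code:

```go
package main

import "fmt"

// checkVolumeDeletable is an illustrative guard: it refuses deletion while
// the volume still has snapshots, so the db entry is only removed once
// Gluster can actually delete the volume.
func checkVolumeDeletable(volName string, snapCount int) error {
	if snapCount > 0 {
		return fmt.Errorf("unable to delete volume %s because it contains %d snapshots",
			volName, snapCount)
	}
	return nil
}

func main() {
	if err := checkVolumeDeletable("vol_e7d26963ac1d4ecaa02f76b2c13ae23a", 14); err != nil {
		fmt.Println("Error:", err)
	}
}
```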
Solved in 1.0.4. Please move to ON_QA.
[root@dhcp37-163 ~]# heketi-cli --version
heketi-cli 2.0.6

Deleting a volume which contains snapshots should throw an error.

EXPECTED result:
[root@dhcp37-163 ~]# heketi-cli volume delete e7d26963ac1d4ecaa02f76b2c13ae23a
Error: Unable to delete volume vol_e7d26963ac1d4ecaa02f76b2c13ae23a because it containes 14 snapshots

Hence marking it as verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-1498.html