Bug 1111603 - [SNAPSHOT]: Clear message is required when attempting to delete non-existing snap
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: rjoseph
QA Contact:
URL:
Whiteboard: SNAPSHOT
Depends On: 1111148
Blocks:
 
Reported: 2014-06-20 13:53 UTC by rjoseph
Modified: 2014-11-11 08:35 UTC (History)
6 users

Fixed In Version: glusterfs-3.6.0beta1
Clone Of: 1111148
Environment:
Last Closed: 2014-11-11 08:35:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description rjoseph 2014-06-20 13:53:01 UTC
+++ This bug was initially created as a clone of Bug #1111148 +++

Description of problem:
=======================

When we try to delete a non-existing snap, it should gracefully fail with the error "does not exist". Currently it fails as:

[root@inception ~]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: snap snap1 might not be in an usable state.
Snapshot command failed
[root@inception ~]# 


This is a regression; it used to fail gracefully. The message "might not be in an usable state" only makes sense when a delete is started for an existing snap and then fails for some reason, because in that case we really don't know how much of the back-end clean-up has happened.

But if we attempt to delete a snap which doesn't exist, it should fail as:

[root@inception ~]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: snap snap1 does not exist.
Snapshot command failed
[root@inception ~]# 
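The distinction above can be sketched as a tiny message-selection helper. This is a hypothetical illustration, not the actual glusterd code (the real fix was made via review 8137); the function `snap_delete_errmsg` and its `snap_found` flag are invented for this sketch. The idea is simply that the "usable state" wording should only be chosen when an existing snap's delete failed part-way:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: pick the CLI error suffix for
 * "gluster snapshot delete". If the snap was never found, report a
 * plain "does not exist". The "might not be in an usable state"
 * wording is reserved for the case where an existing snap's delete
 * failed part-way, since the extent of back-end clean-up is then
 * unknown. */
const char *
snap_delete_errmsg(int snap_found)
{
        if (!snap_found)
                return "does not exist.";
        return "might not be in an usable state.";
}
```

A caller would then print, e.g., `snapshot delete: failed: snap snap1 does not exist.` when the lookup fails, matching the expected output shown above.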


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.19-1.el6rhs.x86_64


How reproducible:
=================
1/1


Steps to Reproduce:
===================
1. gluster snapshot delete <non-existing-snap>

Actual results:
===============


[root@inception ~]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: snap snap1 might not be in an usable state.
Snapshot command failed
[root@inception ~]# 

Expected results:
==================

[root@inception ~]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: snap snap1 does not exist.
Snapshot command failed
[root@inception ~]#

--- Additional comment from  on 2014-06-20 07:18:38 EDT ---

Version : glusterfs 3.6.0.20 built on Jun 19 2014
=======

While snapshot creation is in progress on a volume, if snapshot delete is executed, then instead of failing with "Another transaction is in progress" it fails with "<snap_name> might not be in an usable state".

for i in {1..100}; do gluster snap create snap_1_$i vol0 ; done 
snapshot create: success: Snap snap_1_1 created successfully
snapshot create: success: Snap snap_1_2 created successfully
snapshot create: success: Snap snap_1_3 created successfully
snapshot create: success: Snap snap_1_4 created successfully
snapshot create: success: Snap snap_1_5 created successfully


gluster snapshot delete snap_1_2
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: snap snap_1_2 might not be in an usable state.
Snapshot command failed

------------Part of the Log-----------------------

 E [glusterd-locks.c:228:glusterd_acquire_multiple_locks_per_entity] 0-management: Failed to acquire lock for vol vol0 on behalf of 1edef405-58c2-4a81-b7a3-50925c6f8035. Reversing this transaction
[2014-06-20 10:57:04.097204] E [glusterd-locks.c:387:glusterd_mgmt_v3_lock_entity] 0-management: Failed to acquire all vol locks
[2014-06-20 10:57:04.097226] E [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_lock] 0-management: Unable to lock all vol
[2014-06-20 10:57:04.097241] E [glusterd-mgmt.c:401:glusterd_mgmt_v3_initiate_lockdown] 0-management: Failed to acquire mgmt_v3 locks on localhost
[2014-06-20 10:57:04.097258] E [glusterd-mgmt.c:1821:glusterd_mgmt_v3_initiate_snap_phases] 0-management: mgmt_v3 lockdown failed.
[2014-06-20 10:57:04.097464] E [glusterd-mgmt.c:1311:glusterd_mgmt_v3_post_validate] (-->/usr/lib64/glusterfs/3.6.0.20/xlator/mgmt/glusterd.so(glusterd_handle_snapshot_fn+0x5fb) [0x7f8a0dde440b] (-->/usr/lib64/glusterfs/3.6.0.20/xlator/mgmt/glusterd.so(glusterd_handle_snapshot_remove+0x258) [0x7f8a0ddd7e88] (-->/usr/lib64/glusterfs/3.6.0.20/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_initiate_snap_phases+0x152) [0x7f8a0ddebc32]))) 0-management: invalid argument: req_dict
[2014-06-20 10:57:04.097487] E [glusterd-mgmt.c:1944:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Post Validation Failed

Comment 1 Anand Avati 2014-06-20 13:55:04 UTC
REVIEW: http://review.gluster.org/8137 (glusterd/snapshot: cli error message corrected) posted (#1) for review on master by Rajesh Joseph (rjoseph)

Comment 2 Anand Avati 2014-06-23 05:56:36 UTC
REVIEW: http://review.gluster.org/8137 (glusterd/snapshot: cli error message corrected) posted (#2) for review on master by Rajesh Joseph (rjoseph)

Comment 3 Anand Avati 2014-06-23 10:37:02 UTC
COMMIT: http://review.gluster.org/8137 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit beeb30a4b777c5bbd6ebfd8f2074b99f30122e08
Author: Rajesh Joseph <rjoseph>
Date:   Fri Jun 20 18:04:33 2014 +0530

    glusterd/snapshot: cli error message corrected
    
    snapshot delete on failure used to give invalid error
    message.
    
    Change-Id: I65d6edf8004c9a1bb91f28fa987b2d1629134013
    BUG: 1111603
    Signed-off-by: Rajesh Joseph <rjoseph>
    Reviewed-on: http://review.gluster.org/8137
    Reviewed-by: Atin Mukherjee <amukherj>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Sachin Pandit <spandit>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>

Comment 4 Niels de Vos 2014-09-22 12:43:33 UTC
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether the release resolves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 5 Niels de Vos 2014-11-11 08:35:43 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

