Bug 1232430 - [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
Summary: [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
Depends On: 1232428
Blocks: 1232887
Reported: 2015-06-16 18:13 UTC by Avra Sengupta
Modified: 2018-10-10 13:19 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1232428
: 1232887 (view as bug list)
Last Closed: 2016-06-16 13:12:44 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:

Attachments
Glusterfs Log Files from Server 1 (297.27 KB, application/x-xz)
2015-07-17 07:49 UTC, Richard Neuboeck
Glusterfs Log Files from Server 2 (292.89 KB, application/x-xz)
2015-07-17 07:49 UTC, Richard Neuboeck

Comment 1 Anand Avati 2015-06-16 18:29:01 UTC
REVIEW: http://review.gluster.org/11262 (snapshot: Fix terminating slash in brick mount path) posted (#1) for review on master by Avra Sengupta (asengupt@redhat.com)

Comment 2 Anand Avati 2015-06-25 16:24:44 UTC
COMMIT: http://review.gluster.org/11262 committed in master by Rajesh Joseph (rjoseph@redhat.com) 
commit a51d4670ce663b957d91443d313c48b5f44254e3
Author: Avra Sengupta <asengupt@redhat.com>
Date:   Tue Jun 16 23:53:32 2015 +0530

    snapshot: Fix terminating slash in brick mount path
    glusterd_find_brick_mount_path() returns the mount path
    with a terminating '/' at the end of the string in
    cases where the brick dir is a dir in the lvm root dir.
    Ignoring the terminating '/' fixes the issue.
    Change-Id: Ie7e63d37d48e2e03d541ae0076b8f143b8c9112f
    BUG: 1232430
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11262
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
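
For readers not digging into the patch itself, the gist is a path-normalization problem: when the brick directory is a subdirectory of the LVM mount point, the mount path comes back with a terminating '/', and path handling that doesn't expect a trailing slash then misbehaves. A rough shell analogy of the idea (not the actual glusterd C code; the paths below are made up):

    mount_path="/srv/gluster/"      # as returned, with a terminating '/'
    brick_dir="/srv/gluster"

    # Comparing the raw strings fails because of the trailing slash:
    [ "$mount_path" = "$brick_dir" ] && echo match || echo mismatch            # mismatch

    # Ignoring the terminating '/' makes the paths compare equal:
    [ "${mount_path%/}" = "${brick_dir%/}" ] && echo match || echo mismatch    # match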

Comment 3 Richard Neuboeck 2015-07-17 07:48:02 UTC
I can confirm that the 'Snap might not be in an usable state' problem exists in glusterfs 3.7.2

Version: glusterfs-3.7.2-3.el7.x86_64 (gluster repo)
OS: CentOS 7.1 64bit

Steps to recreate:

# gluster snapshot create snap1 plexus description 'test snapshot'
snapshot create: success: Snap snap1_GMT-2015.07.16-11.16.03 created successfully

# gluster snapshot list

# gluster snapshot info
Snapshot                  : snap1_GMT-2015.07.16-11.16.03
Snap UUID                 : 6ddce064-2bd0-4770-9995-583147e1a35c
Description               : test snapshot
Created                   : 2015-07-16 11:16:03
Snap Volumes:

	Snap Volume Name          : e905ba76967f43efa0220c2283c87057
	Origin Volume name        : plexus
	Snaps taken for plexus      : 1
	Snaps available for plexus  : 255
	Status                    : Stopped

# gluster snapshot delete snap1_GMT-2015.07.16-11.16.03
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: Snapshot snap1_GMT-2015.07.16-11.16.03 might not be in an usable state.
Snapshot command failed

# gluster snapshot delete all
System contains 1 snapshot(s).
Do you still want to continue and delete them?  (y/n) y
snapshot delete: failed: Snapshot snap1_GMT-2015.07.16-11.16.03 might not be in an usable state.
Snapshot command failed

Setup on the machines I've tested this on:

- CentOS 7.1 minimal installation
- Thinly provisioned as follows:
# lvs --all
  LV                                 VG                Attr       LSize  Pool     Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  e905ba76967f43efa0220c2283c87057_0 storage_vg        Vwi-aotz-- 45.00t thinpool thindata 0.04                                   
  [lvol0_pmspare]                    storage_vg        ewi------- 10.00g                                                          
  thindata                           storage_vg        Vwi-aotz-- 45.00t thinpool          0.05                                   
  thinpool                           storage_vg        twi-aotz-- 49.95t                   0.05   0.43                            
  [thinpool_tdata]                   storage_vg        Twi-ao---- 49.95t                                                          
  [thinpool_tmeta]                   storage_vg        ewi-ao---- 10.00g

- Gluster setup for this test consists of two machines, one brick each. The brick is a (hardware) RAID 5 volume. Since I got a lot of NFS-related error messages and didn't use NFS in this case, 'nfs.disable' is on.

# gluster volume info
Volume Name: plexus
Type: Replicate
Volume ID: 105559c1-c6d9-4557-8488-2197ad86d92d
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: sphere-one:/srv/gluster/brick
Brick2: sphere-two:/srv/gluster/brick
Options Reconfigured:
features.barrier: disable
nfs.disable: on
performance.readdir-ahead: on
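
For anyone trying to reproduce this setup from scratch: GlusterFS snapshots require the bricks to live on thinly provisioned LVM volumes, as in the lvs output above. A minimal sketch of how such a layout could be created (the device name and sizes are illustrative, not taken from this report, and /srv/gluster as the mount point is assumed from the brick paths above):

    pvcreate /dev/sdb                                     # /dev/sdb is a placeholder device
    vgcreate storage_vg /dev/sdb
    lvcreate -L 50G -T storage_vg/thinpool                # thin pool (size illustrative)
    lvcreate -V 45G -T storage_vg/thinpool -n thindata    # thin volume backing the brick
    mkfs.xfs /dev/storage_vg/thindata
    mount /dev/storage_vg/thindata /srv/gluster           # brick dir is /srv/gluster/brick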

Attached are the logs from both nodes from /var/log/glusterfs.

Comment 4 Richard Neuboeck 2015-07-17 07:49:08 UTC
Created attachment 1053001
Glusterfs Log Files from Server 1

Comment 5 Richard Neuboeck 2015-07-17 07:49:25 UTC
Created attachment 1053002
Glusterfs Log Files from Server 2

Comment 6 Avra Sengupta 2015-07-17 09:39:36 UTC
The release you are using, 3.7.2, doesn't have the fix. The fix is present in the master branch and the release-3.7 branch of the gluster codebase, but it was sent a few days after this particular release was rolled out. Hence the bug (https://bugzilla.redhat.com/show_bug.cgi?id=1232430) is still in MODIFIED state. Once a new release (3.7.3) is rolled out (sometime towards the end of next week), it will contain this fix, and this particular bug will be moved to ON_QA.

Comment 7 Nagaprasad Sathyanarayana 2015-10-25 14:52:53 UTC
The fix for this BZ is already present in a GlusterFS release. You can find a clone of this BZ that was fixed in a GlusterFS release and closed. Hence closing this mainline BZ as well.

Comment 8 Niels de Vos 2016-06-16 13:12:44 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
