Bug 1159806 - [USS]: Trying to access deactivated snapshot in .snaps folder hangs and trying to list snapshots on .snaps folder also hangs
Summary: [USS]: Trying to access deactivated snapshot in .snaps folder hangs and trying to list snapshots on .snaps folder also hangs
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact: Rahul Hinduja
URL:
Whiteboard: USS
Depends On: 1166197 1175736
Blocks:
 
Reported: 2014-11-03 11:45 UTC by senaik
Modified: 2016-09-17 12:54 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-02 13:09:37 UTC
Embargoed:



Description senaik 2014-11-03 11:45:04 UTC
Description of problem:
=======================
After deactivating a snapshot, accessing that snapshot through the .snaps folder hangs, and subsequent attempts to list the snapshots in the .snaps folder also hang.


Version-Release number of selected component (if applicable):
============================================================
glusterfs 3.6.0.30 built on Oct 28 2014

How reproducible:
================
1/1

Steps to Reproduce:
==================
1. Create a dist-rep volume and start it

2. FUSE and NFS mount the volume and generate some I/O

3. Enable USS on the volume

4. Create some snapshots on the volume

5. cd to the .snaps folder and list the snapshots

[root@dhcp-0-97 .snaps]# ll
total 0
d---------. 0 root root 0 Jan  1  1970 snap1_vol1
d---------. 0 root root 0 Jan  1  1970 snap2_vol1

6. Deactivate one of the snapshots (snap1_vol1)

7. cd to the deactivated snapshot from the .snaps folder; it hangs

[root@dhcp-0-97 .snaps]# cd snap1_vol1

The same behavior is seen from the NFS mount as well.

Subsequently, trying to do an ls on the .snaps folder from different mount points also hangs
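
For reference, here is a minimal command-line sketch of the reproduction steps above. The hostnames, brick paths, mount points, and I/O command are illustrative placeholders and are not taken from the original report; only the volume name, snapshot names, and gluster subcommands follow the steps described.

# 1. Create a 2x2 distributed-replicate volume and start it
#    (hostnames and brick paths are placeholders)
gluster volume create vol1 replica 2 \
        host1:/rhs/brick2/b2 host2:/rhs/brick2/b2 \
        host3:/rhs/brick2/b2 host4:/rhs/brick2/b2
gluster volume start vol1

# 2. FUSE and NFS mount the volume and generate some I/O
mount -t glusterfs host1:/vol1 /mnt/fuse
mount -t nfs -o vers=3 host1:/vol1 /mnt/nfs
dd if=/dev/zero of=/mnt/fuse/file1 bs=1M count=100

# 3. Enable USS on the volume
gluster volume set vol1 features.uss on

# 4. Create some snapshots on the volume
gluster snapshot create snap1_vol1 vol1
gluster snapshot create snap2_vol1 vol1

# 5. List the snapshots from the .snaps folder on the mount
ls -l /mnt/fuse/.snaps

# 6. Deactivate one of the snapshots
gluster snapshot deactivate snap1_vol1

# 7. Accessing the deactivated snapshot from .snaps hangs
cd /mnt/fuse/.snaps/snap1_vol1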

Actual results:
==============
Accessing a deactivated snapshot from the .snaps folder hangs


Expected results:
================
Accessing a deactivated snapshot from the .snaps folder should fail with 'Transport endpoint is not connected'


Additional info:
================
[root@snapshot15 ~]# gluster v i vol1
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 55927dd6-20bf-48a7-83dc-f11a93543e96
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: snapshot13.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick2: snapshot14.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick3: snapshot15.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick4: snapshot16.lab.eng.blr.redhat.com:/rhs/brick2/b2
Options Reconfigured:
features.uss: on
features.barrier: disable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Comment 4 Sachin Pandit 2014-11-26 11:19:20 UTC
There are two aspects to this bug. First, we no longer display the deactivated snapshot in the snapshot world, but that does not answer the question raised in this bug.

The second aspect is the one mentioned in bug #1159173, comment #12:
<snippet>
The NFS client was looking for the snapshot in the wrong place and was not updating the subvolume once the proper path was resolved. Because of that, the error message was being logged recursively. The patch that resolves https://bugzilla.redhat.com/show_bug.cgi?id=1165704 also fixes this issue.
</snippet>

The issue mentioned above has been fixed. With that fix applied, I can no longer reproduce the issue mentioned in this bug. I'll try to execute a different scenario similar to the one mentioned in this bug and update here if I am able to reproduce the problem.

Comment 11 Shashank Raj 2016-02-02 13:09:37 UTC
The issue is not reproducible with the latest glusterfs-3.7.5-18 build. Accessing a deactivated snapshot fails with "cd: snap1: No such file or directory", ls under .snaps does not list the deactivated snapshot, and nothing hangs.

Verified for both glusterfs and nfs mounts.
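
As a minimal sketch of the verification flow described above (the mount point /mnt/fuse and the snapshot name snap1 are illustrative):

gluster snapshot deactivate snap1
cd /mnt/fuse/.snaps/snap1    # fails with "cd: snap1: No such file or directory" instead of hanging
ls /mnt/fuse/.snaps          # deactivated snapshot is not listed; the command returns without hanging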

Closing this bug as working with the latest release.

