Description of problem:
=======================
After deactivating a snapshot, accessing that snapshot in the .snaps folder hangs, and subsequent attempts to list snapshots in the .snaps folder also hang.

Version-Release number of selected component (if applicable):
============================================================
glusterfs 3.6.0.30 built on Oct 28 2014

How reproducible:
================
1/1

Steps to Reproduce:
==================
1. Create a dist-rep volume and start it
2. FUSE and NFS mount the volume and run some IO
3. Enable USS on the volume
4. Create some snapshots on the volume
5. cd to the .snaps folder and list the snapshots

[root@dhcp-0-97 .snaps]# ll
total 0
d---------. 0 root root 0 Jan  1  1970 snap1_vol1
d---------. 0 root root 0 Jan  1  1970 snap2_vol1

6. Deactivate one of the snapshots (snap1_vol1)
7. cd to the deactivated snapshot from the .snaps folder; it hangs

[root@dhcp-0-97 .snaps]# cd snap1_vol1

The same behavior is seen from the NFS mount as well. Further attempts to run ls on the .snaps folder from different mount points also hang.

Actual results:
==============
Accessing a deactivated snapshot from the .snaps folder hangs.

Expected results:
================
Accessing a deactivated snapshot from the .snaps folder should fail with 'Transport endpoint is not connected'.

Additional info:
================
[root@snapshot15 ~]# gluster v i vol1

Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 55927dd6-20bf-48a7-83dc-f11a93543e96
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: snapshot13.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick2: snapshot14.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick3: snapshot15.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick4: snapshot16.lab.eng.blr.redhat.com:/rhs/brick2/b2
Options Reconfigured:
features.uss: on
features.barrier: disable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
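The reproduction steps above can be sketched as a shell transcript. This is a non-runnable sketch, not a verbatim reproducer: the volume name, brick paths, and hostnames are taken from the Additional info section, while the mount points (/mnt/vol1, /mnt/vol1_nfs) and the exact mount server are assumptions, and snapshot deactivate may prompt for confirmation interactively.

```
# 1. Create a 2x2 distributed-replicate volume and start it
gluster volume create vol1 replica 2 \
    snapshot13.lab.eng.blr.redhat.com:/rhs/brick2/b2 \
    snapshot14.lab.eng.blr.redhat.com:/rhs/brick2/b2 \
    snapshot15.lab.eng.blr.redhat.com:/rhs/brick2/b2 \
    snapshot16.lab.eng.blr.redhat.com:/rhs/brick2/b2
gluster volume start vol1

# 2. FUSE and NFS mount the volume (mount points are hypothetical)
mount -t glusterfs snapshot13.lab.eng.blr.redhat.com:/vol1 /mnt/vol1
mount -t nfs -o vers=3 snapshot13.lab.eng.blr.redhat.com:/vol1 /mnt/vol1_nfs

# 3. Enable USS, which exposes snapshots under the virtual .snaps directory
gluster volume set vol1 features.uss on

# 4. Create some snapshots
gluster snapshot create snap1_vol1 vol1
gluster snapshot create snap2_vol1 vol1

# 5. List the snapshots from the mount
ls /mnt/vol1/.snaps

# 6. Deactivate one snapshot
gluster snapshot deactivate snap1_vol1

# 7. Accessing the deactivated snapshot from .snaps hangs (the bug)
cd /mnt/vol1/.snaps/snap1_vol1
```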
sosreports:
==========
http://rhsqe-repo.lab.eng.blr.redhat.com/bugs_necessary_info/snapshots/uss/1159806.tar
There are two aspects to this bug:

First, we no longer display the deactivated snapshot in the snapshot world, but that does not answer the question raised in this bug.

The second aspect is the one mentioned in bug #1159173, comment #12:

<snippet>
The NFS client was looking for the snapshot in the wrong place, and was not updating the subvolume once a proper path was resolved. Because of that, the error message was getting logged recursively.

The patch which resolves https://bugzilla.redhat.com/show_bug.cgi?id=1165704 also fixes this issue.
</snippet>

The issue mentioned above has been fixed. With that fix applied, I can no longer reproduce the issue described in this bug. I'll try to execute a different scenario similar to the one mentioned here and will update this bug if I am able to reproduce the problem.
The issue is not reproducible with the latest glusterfs-3.7.5-18 build. Accessing a deactivated snapshot now fails with "cd: snap1: No such file or directory", and ls under .snaps does not list the deactivated snapshot and never hangs. Verified on both glusterfs (FUSE) and NFS mounts. Closing this bug as working with the latest release.