Bug 1159302 - [USS]: Unable to go to .snaps directory from any directories other than root after enabling uss
Keywords:
Status: CLOSED DUPLICATE of bug 1160621
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Vijaikumar Mallikarjuna
QA Contact: senaik
URL:
Whiteboard: USS
Depends On: 1159283 1303595
Blocks: 1160678 1163416 1303865 1316096
 
Reported: 2014-10-31 12:38 UTC by senaik
Modified: 2016-09-17 13:05 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1163416 (view as bug list)
Environment:
Last Closed: 2015-03-31 11:04:22 UTC
Embargoed:



Description senaik 2014-10-31 12:38:54 UTC
Description of problem:
======================
After creating nested directories, it is not possible to navigate to the .snaps directory from any of the directories or sub-directories.


Version-Release number of selected component (if applicable):
============================================================
glusterfs-3.6.0.30-1.el6rhs.x86_64

How reproducible:
=================
always


Steps to Reproduce:
===================
1. Create a dist-rep volume and start it

2. Fuse and NFS mount the volume
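
Steps 1 and 2 do not record the exact commands; a minimal sketch, with hostnames and brick paths taken from the volume info shown below (the mount points and NFS options are assumptions), would be:

gluster volume create vol3 replica 2 \
    snapshot13.lab.eng.blr.redhat.com:/rhs/brick4/b4 \
    snapshot14.lab.eng.blr.redhat.com:/rhs/brick4/b4 \
    snapshot15.lab.eng.blr.redhat.com:/rhs/brick4/b4 \
    snapshot16.lab.eng.blr.redhat.com:/rhs/brick4/b4
gluster volume start vol3
mount -t glusterfs snapshot13.lab.eng.blr.redhat.com:/vol3 /mnt/vol3_fuse
mount -t nfs -o vers=3 snapshot13.lab.eng.blr.redhat.com:/vol3 /mnt/vol3_nfs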

3. From the fuse mount, create nested directories:
[root@dhcp-0-97 vol3_fuse]# mkdir -p a/b/c

From the nfs mount, create some more nested directories:
[root@dhcp-0-97 vol3_nfs]# mkdir -p a_nfs/b_nfs/c_nfs

4. Enable USS on the volume

[root@snapshot13 ~]# gluster v set vol3 features.uss enable
volume set: success
[root@snapshot13 ~]# gluster v i vol3
 
Volume Name: vol3
Type: Distributed-Replicate
Volume ID: e12fc6cc-c7df-4534-980a-0e21ad956ab4
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: snapshot13.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick2: snapshot14.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick3: snapshot15.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick4: snapshot16.lab.eng.blr.redhat.com:/rhs/brick4/b4
Options Reconfigured:
features.uss: enable
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable

5. From the fuse mount, cd to the .snaps directory

From the root of the mount it succeeds:
~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@dhcp-0-97 vol3_fuse]# cd .snaps
[root@dhcp-0-97 .snaps]#

cd to directory 'a' and sub-directories 'b' and 'c', then cd to the .snaps directory:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@dhcp-0-97 vol3_fuse]# ls
a  a_nfs
[root@dhcp-0-97 vol3_fuse]# cd a
[root@dhcp-0-97 a]# cd .snaps
bash: cd: .snaps: No such file or directory
[root@dhcp-0-97 a]# ls
b
[root@dhcp-0-97 a]# cd b
[root@dhcp-0-97 b]# cd .snaps
bash: cd: .snaps: No such file or directory
[root@dhcp-0-97 b]# cd c
[root@dhcp-0-97 c]# cd .snaps
bash: cd: .snaps: No such file or directory
[root@dhcp-0-97 c]# 

Similar output is seen from the NFS mount.

===========================================================================

However, after creating a snapshot, cd to .snaps succeeds from the fuse mount, but it still fails from the nfs mount.
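
The snapshot-create step itself is not shown in the transcript; going by the snapshot name listed below, it would have been something along the lines of:

gluster snapshot create vol3_snap1 vol3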

Fuse mount :
~~~~~~~~~~
[root@dhcp-0-97 vol3_fuse]# cd .snaps
[root@dhcp-0-97 .snaps]# cd ..
[root@dhcp-0-97 vol3_fuse]# cd a
[root@dhcp-0-97 a]# cd .snaps
[root@dhcp-0-97 .snaps]# ls
vol3_snap1
[root@dhcp-0-97 .snaps]# pwd
/mnt/vol3_fuse/a/.snaps
[root@dhcp-0-97 .snaps]# cd ..
[root@dhcp-0-97 a]# cd b/.snaps
[root@dhcp-0-97 .snaps]# cd ..
[root@dhcp-0-97 b]# cd c/.snaps

NFS mount :
~~~~~~~~~~
[root@dhcp-0-97 vol3_nfs]# ls
a  a_nfs
[root@dhcp-0-97 vol3_nfs]# cd a_nfs/
[root@dhcp-0-97 a_nfs]# cd .snaps
bash: cd: .snaps: No such file or directory
[root@dhcp-0-97 a_nfs]# ls
b_nfs
[root@dhcp-0-97 a_nfs]# cd b_nfs/
[root@dhcp-0-97 b_nfs]# cd .snaps
bash: cd: .snaps: No such file or directory
[root@dhcp-0-97 b_nfs]# ls
c_nfs
[root@dhcp-0-97 b_nfs]# cd c_nfs/
[root@dhcp-0-97 c_nfs]# cd .snaps
bash: cd: .snaps: No such file or directory


Actual results:
==============
Unable to cd to .snaps from any directory other than the root of the mount when nested directories are present.


Expected results:
=================
Users should be able to cd to .snaps from all directories and sub-directories after USS is enabled on the volume.

Additional info:

Comment 2 Rahul Hinduja 2014-10-31 12:57:42 UTC
As per the discussion with the designer, "For .snaps to be accessible from a directory, it has to be part of at least one of the snapshots taken. If a directory is not part of any snapshot, then you can't enter the snapshot world."

However, this bug still holds true, as even after creating a snapshot we cannot enter the snapshot world over the NFS protocol.

Comment 3 rjoseph 2014-11-03 04:49:50 UTC
(In reply to Rahul Hinduja from comment #2)
> As per the discussion with the designer, "For .snaps to be accessible from a
> directory, it has to be part of at least one of the snapshots taken. If a
> directory is not part of any snapshot, then you can't enter the snapshot
> world."

It would be great if the reason for this decision were also mentioned here.
This creates an inconsistent user experience across directories, i.e. the root directory behaves in one fashion and sub-directories in another.

> 
> However, this bug still holds true, as even after creating a snapshot we
> cannot enter the snapshot world over the NFS protocol.

I think this should be tracked as a separate bug and should not be clubbed with this one.

Comment 4 Raghavendra Bhat 2014-11-03 06:29:59 UTC
This behavior is as per the design.

Say /mnt/glusterfs is the mount point, the volume has some snapshots, and a directory dir has been newly created so that it is not part of any of the snapshots. Now, when cd .snaps is done inside dir, the following operations happen:

1) A lookup comes on the root of the filesystem first, which snapview-client redirects to the normal graph, and it succeeds.
2) A lookup comes on /dir, which snapview-client sends to the normal graph (because root is a real inode and dir is not the name of the entry point), and it succeeds.
3) Now a lookup comes on /dir/.snaps (i.e. the inode of dir with the name set to .snaps). snapview-client identifies that the parent inode is a real inode and the entry name is the name of the entry point, and redirects the lookup to the snap daemon (snapd).
4) In the snap daemon, protocol/server tries to resolve the component on which the lookup has come (i.e. the inode of /dir with the name set to ".snaps").
5) Since /dir was never looked up by snapd before, it tries to resolve the gfid of /dir by doing an explicit lookup on that gfid.
6) snapd now tries to find that gfid (i.e. /dir in this context) in the latest snapshot taken (because that is the best and latest information it has).
7) Since /dir is not part of any of the snapshots, snapd is not able to do a successful lookup on /dir, and thus the lookup fails.
8) Since the parent directory itself could not be resolved, the lookup of .snaps is also considered a failure, and failure is returned.


This is expected behavior as per the design. We can document that .snaps can be entered from a directory only if that directory is present in the snapshot world.
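
To make the routing easier to follow, here is a minimal, hypothetical bash sketch of the decision described in the steps above; the real snapview-client/snapd logic lives in C inside glusterfs, and these function names and flag values are purely illustrative:

route_lookup() {
    # $1: "real" if the parent inode is a real (non-virtual) inode
    # $2: the entry name being looked up
    if [ "$1" = real ] && [ "$2" = ".snaps" ]; then
        echo "snapd"          # step 3: redirect to the snap daemon
    else
        echo "normal-graph"   # steps 1-2: send to the normal graph
    fi
}

snapd_resolve_parent() {
    # $1: "yes" if the parent directory's gfid exists in at least one snapshot
    # Steps 4-8: parent resolution fails when the gfid is in no snapshot,
    # and the lookup of .snaps fails with it.
    if [ "$1" = yes ]; then
        echo "ok"
    else
        echo "No such file or directory"
    fi
}

# /dir was created after the last snapshot was taken:
if [ "$(route_lookup real .snaps)" = snapd ]; then
    snapd_resolve_parent no    # prints: No such file or directory
fi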

As for the NFS behavior, I have updated the reason in bug 1159283.

Comment 6 Vijaikumar Mallikarjuna 2014-11-12 16:46:39 UTC
There are two problems mentioned in the description.

Patch http://review.gluster.org/#/c/9106/ solves the second problem, where cd to .snaps works on FUSE but fails on NFS.

Comment 7 Vijaikumar Mallikarjuna 2014-12-18 10:15:36 UTC
Patch https://code.engineering.redhat.com/gerrit/#/c/37954/ fixes the problem.

Comment 9 Avra Sengupta 2015-03-31 11:04:22 UTC

*** This bug has been marked as a duplicate of bug 1160621 ***

