Description of problem:
=======================
Once USS is enabled, a newly created directory does not have a .snaps folder.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.30-1.el6rhs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create and start a volume
2. Mount the volume
3. Enable USS
4. From the root directory, cd to .snaps - this should succeed
5. Create a directory named a
6. cd to a
7. cd to .snaps

Actual results:
===============
It fails with "No such file or directory":

[root@wingo ~]# cd /mnt/vol0
[root@wingo vol0]# cd .snaps
[root@wingo .snaps]# cd ..
[root@wingo vol0]# cd a
[root@wingo a]# cd .snaps
bash: cd: .snaps: No such file or directory
[root@wingo a]#

Expected results:
=================
If USS is enabled, a newly created directory should have access to the .snaps folder.
This behavior is as per the design itself. Say /mnt/glusterfs is the mount point, the volume has some snapshots, and a directory dir is newly created, so it is not part of any of the snapshots. Now when "cd dir/.snaps" is done, the following operations happen:

1) A lookup first comes on the root of the filesystem, which snapview-client redirects to the normal graph; it succeeds.
2) A lookup comes on /dir, which snapview-client also sends to the normal graph (because root is a real inode and "dir" is not the name of the entry point); it succeeds.
3) A lookup comes on /dir/.snaps (i.e. the inode of dir with the name set to ".snaps"). Snapview-client identifies that the parent inode is a real inode and the entry name is the name of the entry point, and redirects the lookup to the snap daemon (snapd).
4) In snapd, protocol/server tries to resolve the component on which the lookup has come (i.e. the inode of /dir with the name set to ".snaps").
5) Since /dir was never looked up by snapd before, snapd tries to resolve the gfid of /dir by doing an explicit lookup on that gfid.
6) snapd tries to find that gfid (i.e. /dir in this context) in the latest snapshot taken, because that is the best and latest information it has.
7) Since /dir is not part of any of the snapshots, snapd is not able to do a successful lookup on /dir, and thus the lookup fails.
8) Since the parent directory itself could not be resolved, the lookup of .snaps is also considered a failure, and the failure is returned back.

This is expected behavior as per the design. We can document that .snaps can be entered from a directory only if that directory is present in the snapshot world.
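The resolution steps above can be condensed into a toy Python model. This is an illustrative sketch, not GlusterFS source: the Snapd class, its lookup method, and the gfid constants are all made up for this example. It shows only the core point of steps 5-7, that snapd resolves the parent by gfid against the latest snapshot, so a gfid created after all snapshots were taken cannot resolve.

```python
# Toy model of snapd's resolution logic (illustrative only, not gluster code).
class Snapd:
    """Knows only the gfids that were captured in snapshots."""
    def __init__(self, snapshots):
        # snapshots: list of sets of gfids, oldest first
        self.snapshots = snapshots

    def lookup(self, parent_gfid, name):
        # Steps 5-7: resolve the parent gfid against the latest snapshot,
        # the best and latest information snapd has.
        latest = self.snapshots[-1] if self.snapshots else set()
        if parent_gfid not in latest:
            # Step 8: parent unresolved, so the .snaps lookup fails too.
            return (False, "No such file or directory")
        return (True, name)

ROOT_GFID = "gfid-of-root"   # present when the snapshots were taken
DIR_GFID = "gfid-of-dir"     # /dir was created after the snapshots

snapd = Snapd(snapshots=[{ROOT_GFID}])

ok, _ = snapd.lookup(ROOT_GFID, ".snaps")   # from the mount root: succeeds
assert ok
ok, err = snapd.lookup(DIR_GFID, ".snaps")  # from /dir: parent not in any snapshot
assert not ok and err == "No such file or directory"
```

The model makes the design constraint explicit: success depends entirely on whether the parent directory's gfid exists in the snapshot world, not on the directory's name or its existence in the live volume.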
Another scenario where we face similar behavior:
================================================
- Create a 2x2 dist-rep volume and start it
- Fuse and NFS mount the volume
- Enable USS
- Create a directory (dir1)
- Take 2 snapshots of the volume
- cd to .snaps and access the snaps - snaps are listed and accessible
- Now delete dir1 and recreate it with the same name
- Now cd to .snaps - it fails with "No such file or directory"
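This scenario follows from the same gfid-based resolution: deleting and recreating dir1 produces a new inode with a new gfid, and the snapshots only contain the old one. A minimal sketch of that assumption (illustrative only; real gfids are assigned by glusterfs, here modeled with random UUIDs):

```python
import uuid

# gfids are assigned when the inode is created, so a recreated directory
# gets a fresh gfid even though its name is unchanged.
snapshot_gfids = set()

dir1_gfid = str(uuid.uuid4())   # original /dir1
snapshot_gfids.add(dir1_gfid)   # snap1 and snap2 capture this gfid

# delete dir1 and recreate it with the same name
old_gfid = dir1_gfid
dir1_gfid = str(uuid.uuid4())   # new inode, new gfid

# snapd resolves by gfid, not by name, so the recreated dir1 is unknown
# to every snapshot => "No such file or directory"
assert dir1_gfid != old_gfid
assert dir1_gfid not in snapshot_gfids
```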
The upstream patch http://review.gluster.org/9229 fixes this issue.
Patch https://code.engineering.redhat.com/gerrit/#/c/37954/ fixes the issue
Verification:

[root@dhcp46-47 .snaps]# cd /mnt/fuse/
[root@dhcp46-47 fuse]# cd .snaps
[root@dhcp46-47 .snaps]# cd ..
[root@dhcp46-47 fuse]# mkdir a
[root@dhcp46-47 fuse]# cd a
[root@dhcp46-47 a]# cd .snaps
[root@dhcp46-47 .snaps]# pwd
/mnt/fuse/a/.snaps
[root@dhcp46-47 fuse]# mkdir test/test1
[root@dhcp46-47 fuse]# mkdir test/test1/test2
[root@dhcp46-47 fuse]# mkdir test/test1/test2/test3
[root@dhcp46-47 fuse]# cd test/test1/test2/test3/
[root@dhcp46-47 test3]# cd .snaps
[root@dhcp46-47 .snaps]# ll
total 0
d---------. 0 root root 0 Jan  1  1970 snap1
d---------. 0 root root 0 Jan  1  1970 snap2
d---------. 0 root root 0 Jan  1  1970 snap3

Bug verified on build glusterfs-3.7.9-1.el7rhgs.x86_64
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1240