Bug 1159263 - [USS]: Newly created directories don't have a .snaps folder
Summary: [USS]: Newly created directories don't have a .snaps folder
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Manikandan
QA Contact: Anil Shah
URL:
Whiteboard: USS
Depends On: 1161531
Blocks: 1299184
 
Reported: 2014-10-31 10:33 UTC by Rahul Hinduja
Modified: 2016-09-17 13:00 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-23 04:53:32 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2016:1240 (normal, SHIPPED_LIVE): Red Hat Gluster Storage 3.1 Update 3, last updated 2016-06-23 08:51:28 UTC

Description Rahul Hinduja 2014-10-31 10:33:36 UTC
Description of problem:
=======================

Once USS is enabled, any directory created after that does not have a .snaps folder.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.30-1.el6rhs.x86_64


How reproducible:
=================
always


Steps to Reproduce:
===================
1. Create and start a volume
2. Mount the volume
3. Enable the USS
4. From the root of the mount, cd into .snaps; this should succeed
5. Create a directory named a
6. cd into a
7. cd into .snaps (see the shell sketch below)
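
The steps above, as a rough shell sketch; the volume name (vol0), host (server1) and brick path are illustrative assumptions, not taken from this report:

# create and start a volume (host, brick path and volume name are hypothetical)
gluster volume create vol0 server1:/bricks/brick1/vol0
gluster volume start vol0

# fuse-mount it and enable USS
mount -t glusterfs server1:/vol0 /mnt/vol0
gluster volume set vol0 features.uss enable

# .snaps is reachable from the root of the mount ...
cd /mnt/vol0
cd .snaps
cd /mnt/vol0

# ... but not from a directory created afterwards
mkdir a
cd a
cd .snaps          # bash: cd: .snaps: No such file or directory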

Actual results:
===============

It fails with "No such file or directory":

[root@wingo ~]# cd /mnt/vol0
[root@wingo vol0]# cd .snaps
[root@wingo .snaps]# cd ..
[root@wingo vol0]# cd a
[root@wingo a]# cd .snaps
bash: cd: .snaps: No such file or directory
[root@wingo a]# 


Expected results:
=================

If USS is enabled, a newly created directory should also provide access to the .snaps folder.

Comment 2 Raghavendra Bhat 2014-11-03 06:38:19 UTC
This behavior is as per the design.

Say /mnt/glusterfs is the mount point, the volume has some snapshots, and a directory dir has just been created, so it is not part of any of the snapshots. When cd .snaps is done from dir, the following operations happen:

1) A lookup comes on the root of the filesystem first, which snapview-client redirects to the normal graph, and it succeeds.
2) A lookup comes on /dir, which snapview-client sends to the normal graph (because root is a real inode and dir is not the name of the entry point), and it succeeds.
3) Now a lookup comes on /dir/.snaps (i.e. inode of dir and name set to .snaps). Snapview-client identifies that the parent inode is a real inode and the entry name is the name of the entry point, and redirects it to the snap daemon.
4) In the snap daemon, protocol/server tries to resolve the component on which the lookup has come (i.e. inode of /dir and name set to ".snaps").
5) Since /dir was not looked up by snapd before, it tries to resolve the gfid of /dir by doing an explicit lookup on that gfid.
6) snapd then tries to find that gfid (i.e. /dir in this context) in the latest snapshot taken (because that is the best and latest information it has).
7) Since /dir is not part of any of the snapshots, snapd will not be able to do a successful lookup on /dir, and thus the lookup fails.
8) Since the parent directory itself could not be resolved, the lookup of .snaps is also considered a failure, and the failure is returned back.


This is expected behavior as per the design. We can document that .snaps can be entered from a directory only if that directory is present in at least one snapshot.
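
A rough shell illustration of that constraint under the original design; the volume and snapshot names (vol0, snap_with_a) are hypothetical:

mkdir /mnt/vol0/a                          # a is not yet part of any snapshot
cd /mnt/vol0/a && cd .snaps                # fails, for the reasons in steps 5)-8) above

gluster snapshot create snap_with_a vol0   # capture a in a new snapshot
gluster snapshot activate snap_with_a      # only activated snapshots are served under .snaps

cd /mnt/vol0/a && cd .snaps && ls          # now succeeds; the new snapshot is listed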

Comment 3 senaik 2014-11-10 10:31:05 UTC
Another scenario where we face similar behavior (shell sketch after the steps below):
=======================================================================================

-Create a 2x2 dist-rep volume and start it

-Fuse and NFS mount the volume

-Enable USS  

-Create a directory (dir1) 

-Take 2 snapshots of the volume 

-Cd to .snaps and access the snaps - snaps are listed and are accessible

-Now delete dir1 and recreate it with the same name

-Now cd to .snaps - it fails with "No such file or directory"
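
Roughly, as shell commands; host names, brick paths, and the volume/snapshot names are illustrative placeholders:

# 2x2 distributed-replicate volume (placeholder hosts and bricks)
gluster volume create distrep replica 2 server1:/bricks/b1 server2:/bricks/b1 server1:/bricks/b2 server2:/bricks/b2
gluster volume start distrep

mount -t glusterfs server1:/distrep /mnt/fuse       # FUSE mount
mount -t nfs -o vers=3 server1:/distrep /mnt/nfs    # NFS mount

gluster volume set distrep features.uss enable

mkdir /mnt/fuse/dir1
gluster snapshot create snap1 distrep
gluster snapshot create snap2 distrep
gluster snapshot activate snap1                     # snapshots may need activation to appear under .snaps
gluster snapshot activate snap2

cd /mnt/fuse/dir1 && cd .snaps                      # works: dir1 exists in both snapshots
cd /mnt/fuse && rm -rf dir1 && mkdir dir1
cd dir1 && cd .snaps                                # fails: the recreated dir1 has a new gfid
                                                    # that is not present in any snapshot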

Comment 4 Vijaikumar Mallikarjuna 2014-12-03 09:50:02 UTC
Upstream patch http://review.gluster.org/9229 fixes this issue.

Comment 5 Vijaikumar Mallikarjuna 2014-12-18 10:13:18 UTC
Patch https://code.engineering.redhat.com/gerrit/#/c/37954/ fixes the issue

Comment 10 Anil Shah 2016-03-30 10:06:21 UTC
[root@dhcp46-47 .snaps]# cd /mnt/fuse/
[root@dhcp46-47 fuse]# cd .snaps
[root@dhcp46-47 .snaps]# cd ..
[root@dhcp46-47 fuse]# mkdir a
[root@dhcp46-47 fuse]# cd a
[root@dhcp46-47 a]# cd .snaps
[root@dhcp46-47 .snaps]# pwd
/mnt/fuse/a/.snaps


[root@dhcp46-47 fuse]# mkdir  test/test1
[root@dhcp46-47 fuse]# mkdir  test/test1/test2
[root@dhcp46-47 fuse]# mkdir  test/test1/test2/test3
[root@dhcp46-47 fuse]# cd test/test1/test2/test3/
[root@dhcp46-47 test3]# cd .snaps
[root@dhcp46-47 .snaps]# ll
total 0
d---------. 0 root root 0 Jan  1  1970 snap1
d---------. 0 root root 0 Jan  1  1970 snap2
d---------. 0 root root 0 Jan  1  1970 snap3


Bug verified on build glusterfs-3.7.9-1.el7rhgs.x86_64

Comment 13 errata-xmlrpc 2016-06-23 04:53:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

