Bug 1161531

Summary: [USS]: Unable to access .snaps after snapshot restore after directories were deleted and recreated
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: snapshot
Version: rhgs-3.0
Reporter: senaik
Assignee: Vijaikumar Mallikarjuna <vmallika>
QA Contact: senaik
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Docs Contact:
CC: rhinduja, rhs-bugs, rjoseph, smohan, storage-qa-internal, surs, vmallika
Keywords: ZStream
Target Milestone: ---
Target Release: RHGS 3.0.3
Hardware: Unspecified
OS: Unspecified
Whiteboard: USS
Fixed In Version: glusterfs-3.6.0.37-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1162498 (view as bug list)
Environment:
Last Closed: 2015-01-15 13:42:05 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1159263, 1162498, 1162694

Description senaik 2014-11-07 10:09:16 UTC
Description of problem:
======================
Unable to cd into .snaps after restoring a volume to a snapshot, in a case where directories had been deleted and recreated with the same names


Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.6.0.30

How reproducible:
================
always

Steps to Reproduce:
==================
1.Create a 2x2 dist-rep volume and start it

2.Fuse and NFS mount the volume 

3.Enable USS on the volume 

4.From the fuse mount, create fuse_dir1 and files under it (fuse1..5)
  From the NFS mount, create dir1 and files under it (nfs1..5)

5.Take a snapshot of the volume (snap1)

6.Create more files under fuse_dir1 (fuse6..10) and under dir1 (nfs6..10)

7.Take another snapshot of the volume (snap2)

8. cd to .snaps from the directories and list the snaps 

9.Delete directories fuse_dir1 and dir1, recreate them with the same names, and create some files under them (newfuse1..5 and newnfs1..5)

10.Take another snapshot of the volume (snap3)

11.Restore the volume to snap2 

12.Now cd to .snaps. It fails with the error below (a shell sketch of these steps follows the output):

[root@dhcp-0-97 dir1]# cd .snaps
bash: cd: .snaps: No such file or directory

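For reference, below is a rough shell sketch of the steps above. The volume name (testvol), hostnames (host1..host4) and brick/mount paths are hypothetical, bricks are assumed to be on thinly provisioned LVM as gluster snapshots require, and exact snapshot/USS syntax may differ slightly between builds:

# Create and start a 2x2 distributed-replicate volume (hypothetical hosts/bricks)
gluster volume create testvol replica 2 \
    host1:/rhs/brick1/b1 host2:/rhs/brick1/b2 \
    host3:/rhs/brick1/b3 host4:/rhs/brick1/b4
gluster volume start testvol

# FUSE and NFS mounts of the same volume
mount -t glusterfs host1:/testvol /mnt/fuse
mount -t nfs -o vers=3 host1:/testvol /mnt/nfs

# Enable USS so the virtual .snaps directory is exposed on the mounts
gluster volume set testvol features.uss enable

# Step 4: directories and initial files
mkdir /mnt/fuse/fuse_dir1 && touch /mnt/fuse/fuse_dir1/fuse{1..5}
mkdir /mnt/nfs/dir1       && touch /mnt/nfs/dir1/nfs{1..5}

# Steps 5-7: snapshot, add more files, snapshot again
# (if snapshots are not auto-activated on this build, run
#  'gluster snapshot activate <snapname>' after each create)
gluster snapshot create snap1 testvol
touch /mnt/fuse/fuse_dir1/fuse{6..10} /mnt/nfs/dir1/nfs{6..10}
gluster snapshot create snap2 testvol

# Step 8: list the snaps from inside the directories
ls /mnt/fuse/fuse_dir1/.snaps
ls /mnt/nfs/dir1/.snaps

# Step 9: delete and recreate the directories with the same names
rm -rf /mnt/fuse/fuse_dir1 /mnt/nfs/dir1
mkdir /mnt/fuse/fuse_dir1 /mnt/nfs/dir1
touch /mnt/fuse/fuse_dir1/newfuse{1..5} /mnt/nfs/dir1/newnfs{1..5}

# Steps 10-11: third snapshot, then restore to snap2
# (the volume must be stopped before a snapshot restore)
gluster snapshot create snap3 testvol
gluster volume stop testvol
gluster snapshot restore snap2
gluster volume start testvol

# Step 12: on affected builds this fails with "No such file or directory"
cd /mnt/fuse/fuse_dir1/.snaps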

Actual results:
===============
After restoring the volume to a snapshot that contained the deleted directories, attempting to access .snaps from those directories fails


Expected results:
================
After restoring the volume to a snapshot that contained the deleted directories, the user should be able to cd into .snaps

Additional info:

Comment 4 Vijaikumar Mallikarjuna 2014-11-10 12:29:55 UTC
There are two scenarios here:

Scenario-A)
1. Create a directory 'xyz'
2. Create a snapshot 'snap1'
3. Delete directory 'xyz'
4. Create a new directory with the same name 'xyz'
5. Now a cd to '/mnt/xyz/.snaps' will fail, because this xyz is not part of any snapshot.
   Even though a directory with the same name xyz exists in snapshot snap1, from the file-system point of view the two are different directories.
This is a valid scenario and can be documented.


Scenario-B)
1. Create a directory 'xyz'
2. Create snapshots 'snap1' and 'snap2'
3. Delete directory 'xyz'
4. Create another snapshot 'snap3'
5. Now restore the volume to 'snap2'
6. cd to '/mnt/xyz/.snaps' will fail.
   The directory xyz exists in the older snapshots snap1 and snap2, but the cd still fails: USS checks whether the directory exists in the latest snapshot, and xyz is not found in snap3.
   This is a bug and needs to be fixed.
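
A minimal sketch of Scenario-B on an already mounted, USS-enabled volume (the volume name testvol and the /mnt/fuse mount point are hypothetical; snapshot syntax may vary between builds):

# 1-2: create the directory and take two snapshots while it exists
mkdir /mnt/fuse/xyz
gluster snapshot create snap1 testvol
gluster snapshot create snap2 testvol

# 3-4: delete the directory and take a snapshot that does not contain it
rm -rf /mnt/fuse/xyz
gluster snapshot create snap3 testvol

# 5: restore to snap2 (the volume must be stopped for the restore)
gluster volume stop testvol
gluster snapshot restore snap2
gluster volume start testvol

# 6: fails on affected builds, because USS looks for xyz only in the
#    latest snapshot (snap3), where it does not exist
cd /mnt/fuse/xyz/.snaps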

Comment 5 Vijaikumar Mallikarjuna 2014-12-05 09:50:50 UTC
Patch submitted upstream: https://code.engineering.redhat.com/gerrit/37954

Comment 6 senaik 2014-12-15 10:01:31 UTC
Version: glusterfs 3.6.0.38
========

Retried the steps as mentioned in the 'Description'; unable to reproduce the issue.

Marking the bug as 'Verified'

Comment 8 errata-xmlrpc 2015-01-15 13:42:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0038.html