Bug 1092510

Summary: DHT: two directories have the same gfid on snapshot restore
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Rachana Patel <racpatel>
Component: distribute
Assignee: Raghavendra G <rgowdapp>
Status: CLOSED DUPLICATE
QA Contact: Matt Zywusko <mzywusko>
Severity: high
Docs Contact:
Priority: high
Version: rhgs-3.0
CC: asriram, mzywusko, nbalacha, rgowdapp, sankarshan, smohan
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: dht-gfid-dir, triaged, dht-snapshot
Fixed In Version:
Doc Type: Known Issue
Doc Text:
If you take a snapshot while a directory rename is in progress (that is, the rename has completed on the hashed subvolume but not yet on all subvolumes), then after a snapshot restore the directory that was being renamed has the same GFID for both its source and destination names. Two directories sharing a GFID is an inconsistency in DHT and can lead to undefined behaviour. This happens because DHT renames a directory first on the hashed subvolume and, only if that succeeds, on the remaining subvolumes. At that intermediate point, both the source and destination directories exist in the cluster with the same GFID: the destination on the hashed subvolume and the source on the remaining subvolumes. A parallel lookup (on either source or destination) at this time can create the directory on the subvolumes where it is missing: a source entry on the hashed subvolume and a destination entry on the remaining subvolumes. Hence there are two directory entries, source and destination, with the same GFID.
Story Points: ---
Clone Of:
: 1105082 (view as bug list)
Environment:
Last Closed: 2016-09-01 12:16:50 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1087818, 1105082, 1118779, 1118780, 1252244, 1324381

Description Rachana Patel 2014-04-29 13:15:09 UTC
Description of problem:
=======================
Rename a directory (in both cases: destination exists and destination does not exist), and take a snapshot while the rename has not yet completed on all subvolumes.

On snapshot restore, lookup heals both directories, and the source and destination directories end up with the same gfid.
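
To make the race concrete, here is a sketch of the on-brick state mid-rename (the brick paths mirror the /brick3/N layout shown below and are illustrative; assume "dest" hashes to brick3/1):

# Mid-rename state: hashed subvolume renamed, the others not yet.
#   /brick3/1/dest   <- rename applied, carries the directory's gfid
#   /brick3/2/src    <- not yet renamed, same gfid
#   /brick3/3/src    <- not yet renamed, same gfid
# A lookup on "dest" now heals dest onto brick3/2 and brick3/3, and a
# lookup on "src" re-creates src on brick3/1, so both names end up on
# every brick sharing one gfid.
ls -d /brick3/*/src /brick3/*/dest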


Version-Release number of selected component (if applicable):
=============================================================
3.5qa2-0.340.gitc193996.el6_5.x86_64


How reproducible:
================
always


Steps to Reproduce:
===================

Case 1: destination does not exist
1. Create a distributed volume, start it, and FUSE mount it.
2. Create a directory from the mount point.
3. Rename the directory from the mount point (mv src dest; destination does not exist), and take a snapshot of the volume while the directory has been renamed on one or more subvolumes but not on all of them.
4. Stop the volume and restore the snapshot.
5. Mount the volume again and send a lookup.
6. Verify the gfid of the source and destination directories on the backend (a scripted sketch of these steps follows the list).
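
A scripted sketch of these steps, assuming one server with thin-provisioned LVM bricks (gluster snapshots require them); the volume, server, mount, and snapshot names are illustrative, and landing the snapshot mid-rename in step 3 is timing-dependent:

gluster volume create testvol server1:/brick3/1 server1:/brick3/2 server1:/brick3/3
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol
mkdir /mnt/testvol/src                         # step 2
mv /mnt/testvol/src /mnt/testvol/dest &        # step 3: rename runs in the background
gluster snapshot create snap1 testvol          # must land after the hashed subvolume renames, before the rest
gluster volume stop testvol                    # step 4
gluster snapshot restore snap1
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol
ls /mnt/testvol                                # step 5: the lookup that triggers the heal
getfattr -d -m . -e hex /brick3/*/src /brick3/*/dest   # step 6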

Step 6 output:
[root@OVM5 ~]# getfattr -d -m . -e hex /brick3/*/dest
getfattr: Removing leading '/' from absolute path names
# file: brick3/1/dest
trusted.gfid=0xba51b0e324fc46198cf909727081d4d5
trusted.glusterfs.dht=0x0000000100000000aaaaaaaaffffffff

# file: brick3/2/dest
trusted.gfid=0xba51b0e324fc46198cf909727081d4d5
trusted.glusterfs.dht=0x00000001000000000000000055555554

# file: brick3/3/dest
trusted.gfid=0xba51b0e324fc46198cf909727081d4d5
trusted.glusterfs.dht=0x000000010000000055555555aaaaaaa9

[root@OVM5 ~]# getfattr -d -m . -e hex /brick3/*/src
getfattr: Removing leading '/' from absolute path names
# file: brick3/1/src
trusted.gfid=0xba51b0e324fc46198cf909727081d4d5
trusted.glusterfs.dht=0x0000000100000000aaaaaaaaffffffff

# file: brick3/2/src
trusted.gfid=0xba51b0e324fc46198cf909727081d4d5
trusted.glusterfs.dht=0x00000001000000000000000055555554

# file: brick3/3/src
trusted.gfid=0xba51b0e324fc46198cf909727081d4d5
trusted.glusterfs.dht=0x000000010000000055555555aaaaaaa9
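
Here trusted.glusterfs.dht is the per-brick layout (hash range) and is expected to repeat across the two names; the inconsistency is trusted.gfid, which must be unique per directory yet is identical for src and dest on every brick. A narrower check that dumps only the gfid xattr (same brick paths as above):

getfattr -n trusted.gfid -e hex /brick3/*/src /brick3/*/dest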


Case 2: destination exists
1. Create a distributed volume, start it, and FUSE mount it.
2. Create directories from the mount point.
3. Rename a directory from the mount point (mv src dest; destination exists), and take a snapshot of the volume while the directory has been renamed on one or more subvolumes but not on all of them.
4. Stop the volume and restore the snapshot.
5. Mount the volume again and send a lookup.
6. Verify the gfid of the source and destination directories on the backend (since the destination exists, mv moves src into dest, so the entries to compare are src and dest/src).

Step 6 output:
[root@OVM5 ~]# getfattr -d -m . -e hex /brick3/*/src
getfattr: Removing leading '/' from absolute path names
# file: brick3/1/src
trusted.gfid=0x2ceac0f928c94437b7b7b985739dc74e
trusted.glusterfs.dht=0x0000000100000000aaaaaaaaffffffff

# file: brick3/2/src
trusted.gfid=0x2ceac0f928c94437b7b7b985739dc74e
trusted.glusterfs.dht=0x00000001000000000000000055555554

# file: brick3/3/src
trusted.gfid=0x2ceac0f928c94437b7b7b985739dc74e
trusted.glusterfs.dht=0x000000010000000055555555aaaaaaa9

[root@OVM5 ~]# getfattr -d -m . -e hex /brick3/*/dest/src
getfattr: Removing leading '/' from absolute path names
# file: brick3/1/dest/src
trusted.gfid=0x2ceac0f928c94437b7b7b985739dc74e
trusted.glusterfs.dht=0x0000000100000000aaaaaaaaffffffff

# file: brick3/2/dest/src
trusted.gfid=0x2ceac0f928c94437b7b7b985739dc74e
trusted.glusterfs.dht=0x00000001000000000000000055555554

# file: brick3/3/dest/src
trusted.gfid=0x2ceac0f928c94437b7b7b985739dc74e
trusted.glusterfs.dht=0x000000010000000055555555aaaaaaa9



Actual results:
===============
Two directories have the same gfid.


Expected results:
=================
The gfid should be unique for every directory.

Comment 5 Shalaka 2014-07-01 10:47:54 UTC
Edited the doc text. Please confirm.

Comment 6 Raghavendra G 2014-07-10 06:54:16 UTC
Reviewed the doc text. It's fine.

Comment 12 Nithya Balachandran 2016-09-01 12:16:50 UTC

*** This bug has been marked as a duplicate of bug 1118780 ***