Bug 1408836 - [ganesha+ec]: Contents of original file are not seen when hardlink is created
Summary: [ganesha+ec]: Contents of original file are not seen when hardlink is created
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: disperse
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Pranith Kumar K
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1409730 1412916 1413057
Blocks: 1351528
 
Reported: 2016-12-27 15:20 UTC by Arthy Loganathan
Modified: 2017-03-23 06:00 UTC
CC List: 13 users

Fixed In Version: glusterfs-3.8.4-11
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1409730
Environment:
Last Closed: 2017-03-23 06:00:50 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Arthy Loganathan 2016-12-27 15:20:54 UTC
Description of problem:
Contents of original file are removed when hardlink is created


Version-Release number of selected component (if applicable):

nfs-ganesha-gluster-2.4.1-3.el7rhgs.x86_64
nfs-ganesha-2.4.1-3.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Create a ganesha cluster and a 2 x (4+2) EC volume.
2. Enable nfs-ganesha on the volume with mdcache settings.
3. Mount the volume.
4. Create a file and write contents to it.
5. Create hard link to that file.
6. Read the contents of the file (a rough command sketch of these steps is below).
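
The steps above, roughly as commands. This is only a sketch: host names, brick paths and the ganesha virtual IP are placeholders, the mdcache options are the ones later shown in comment 3, and the export is assumed to go through the RHGS ganesha.enable volume option.

# 1. Create and start a 2 x (4 + 2) dispersed volume (12 bricks across 6 nodes)
gluster volume create vol_ec disperse-data 4 redundancy 2 \
    server{1..6}:/bricks/brick6/br6 server{1..6}:/bricks/brick7/br7
gluster volume start vol_ec

# 2. Apply the mdcache settings and export the volume through nfs-ganesha
gluster volume set vol_ec features.cache-invalidation on
gluster volume set vol_ec features.cache-invalidation-timeout 600
gluster volume set vol_ec performance.stat-prefetch on
gluster volume set vol_ec performance.cache-invalidation on
gluster volume set vol_ec performance.md-cache-timeout 600
gluster volume set vol_ec ganesha.enable on

# 3. Mount the volume over NFSv4 (vers=4.0, as in the mount details below)
mount -t nfs -o vers=4.0 <ganesha-VIP>:/vol_ec /mnt/ec_test

# 4-6. Write a file, create a hard link, read the file back
echo "testfile" > /mnt/ec_test/test1
cat /mnt/ec_test/test1                           # prints "testfile"
ln /mnt/ec_test/test1 /mnt/ec_test/test1_hlink
cat /mnt/ec_test/test1                           # bug: empty output, size shows 0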

Actual results:
Contents of original file are not seen when hardlink is created

Expected results:
Contents of the original file should remain intact and readable after the hard link is created.

Additional info:

[root@dhcp47-49 ec_test]# echo "testfile" > test1
[root@dhcp47-49 ec_test]# cat test1
testfile
[root@dhcp47-49 ec_test]# ls -lhrtia test1
10548474259765385418 -rw-r--r--. 1 root root 9 Dec 27 20:38 test1
[root@dhcp47-49 ec_test]# ln test1 test1_hlink
[root@dhcp47-49 ec_test]# ls -lhrtia test1 test1_hlink
10548474259765385418 -rw-r--r--. 2 root root 0 Dec 27 20:38 test1_hlink
10548474259765385418 -rw-r--r--. 2 root root 0 Dec 27 20:38 test1
[root@dhcp47-49 ec_test]# cat test1
[root@dhcp47-49 ec_test]# cat test1_hlink
[root@dhcp47-49 ec_test]# 
[root@dhcp47-49 ec_test]# 
[root@dhcp47-49 ec_test]# cat test1
[root@dhcp47-49 ec_test]# cat test1_hlink
[root@dhcp47-49 ec_test]# cat test1
[root@dhcp47-49 ec_test]# cat test1

mount details:

10.70.44.93:/vol_ec on /mnt/ec_test type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.70.47.49,local_lock=none,addr=10.70.44.93)

sosreports and logs will be attached soon.

Comment 2 Arthy Loganathan 2016-12-27 15:24:13 UTC
[root@dhcp46-111 ~]# gluster vol status vol_ec
Status of volume: vol_ec
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick dhcp46-111.lab.eng.blr.redhat.com:/br
icks/brick6/br6                             49156     0          Y       7207 
Brick dhcp46-115.lab.eng.blr.redhat.com:/br
icks/brick6/br6                             49156     0          Y       16988
Brick dhcp46-139.lab.eng.blr.redhat.com:/br
icks/brick6/br6                             49156     0          Y       27827
Brick dhcp46-124.lab.eng.blr.redhat.com:/br
icks/brick6/br6                             49152     0          Y       15796
Brick dhcp46-131.lab.eng.blr.redhat.com:/br
icks/brick6/br6                             49154     0          Y       17281
Brick dhcp46-152.lab.eng.blr.redhat.com:/br
icks/brick6/br6                             49156     0          Y       12912
Brick dhcp46-111.lab.eng.blr.redhat.com:/br
icks/brick7/br7                             49157     0          Y       7227 
Brick dhcp46-115.lab.eng.blr.redhat.com:/br
icks/brick7/br7                             49157     0          Y       17008
Brick dhcp46-139.lab.eng.blr.redhat.com:/br
icks/brick7/br7                             49157     0          Y       27847
Brick dhcp46-124.lab.eng.blr.redhat.com:/br
icks/brick7/br7                             49153     0          Y       15816
Brick dhcp46-131.lab.eng.blr.redhat.com:/br
icks/brick7/br7                             49155     0          Y       17301
Brick dhcp46-152.lab.eng.blr.redhat.com:/br
icks/brick7/br7                             49157     0          Y       12932
Self-heal Daemon on localhost               N/A       N/A        Y       9298 
Self-heal Daemon on dhcp46-115.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       18989
Self-heal Daemon on dhcp46-124.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       18024
Self-heal Daemon on dhcp46-131.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       17437
Self-heal Daemon on dhcp46-139.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       29752
Self-heal Daemon on dhcp46-152.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       13070
 
Task Status of Volume vol_ec
------------------------------------------------------------------------------
There are no active volume tasks

Comment 3 Soumya Koduri 2016-12-28 10:34:20 UTC
[root@dhcp35-197 build]# gluster v status vol_disperse
Status of volume: vol_disperse
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.122.201:/tmp/disperse_brick1  49154     0          Y       3444 
Brick 192.168.122.201:/tmp/disperse_brick2  49155     0          Y       3464 
Brick 192.168.122.201:/tmp/disperse_brick3  49156     0          Y       3484 
Brick 192.168.122.201:/tmp/disperse_brick4  49157     0          Y       3504 
Brick 192.168.122.201:/tmp/disperse_brick5  49158     0          Y       3524 
Brick 192.168.122.201:/tmp/disperse_brick6  49159     0          Y       3544 
Brick 192.168.122.201:/tmp/disperse_brick7  49160     0          Y       3564 
Brick 192.168.122.201:/tmp/disperse_brick8  49161     0          Y       3584 
Brick 192.168.122.201:/tmp/disperse_brick9  49162     0          Y       3604 
Brick 192.168.122.201:/tmp/disperse_brick10 49163     0          Y       3624 
Brick 192.168.122.201:/tmp/disperse_brick11 49164     0          Y       3644 
Brick 192.168.122.201:/tmp/disperse_brick12 49165     0          Y       3664 
Self-heal Daemon on localhost               N/A       N/A        Y       3686 
 
Task Status of Volume vol_disperse
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-197 build]# gluster v info vol_disperse
 
Volume Name: vol_disperse
Type: Distributed-Disperse
Volume ID: d66d97a1-6bdb-476c-8c24-2c842f2bcb7a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 192.168.122.201:/tmp/disperse_brick1
Brick2: 192.168.122.201:/tmp/disperse_brick2
Brick3: 192.168.122.201:/tmp/disperse_brick3
Brick4: 192.168.122.201:/tmp/disperse_brick4
Brick5: 192.168.122.201:/tmp/disperse_brick5
Brick6: 192.168.122.201:/tmp/disperse_brick6
Brick7: 192.168.122.201:/tmp/disperse_brick7
Brick8: 192.168.122.201:/tmp/disperse_brick8
Brick9: 192.168.122.201:/tmp/disperse_brick9
Brick10: 192.168.122.201:/tmp/disperse_brick10
Brick11: 192.168.122.201:/tmp/disperse_brick11
Brick12: 192.168.122.201:/tmp/disperse_brick12
Options Reconfigured:
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
[root@dhcp35-197 build]# 


[root@dhcp35-197 ~]# mount -t glusterfs localhost:/vol_disperse /fuse-mnt
[root@dhcp35-197 ~]# cd /fuse-mnt
[root@dhcp35-197 fuse-mnt]# echo "hello" > test 
[root@dhcp35-197 fuse-mnt]# cat test
hello
[root@dhcp35-197 fuse-mnt]# ln test test_hlink
[root@dhcp35-197 fuse-mnt]# cat test
hello
[root@dhcp35-197 fuse-mnt]# cd
[root@dhcp35-197 ~]# umount  /fuse-mnt 
[root@dhcp35-197 ~]# 
[root@dhcp35-197 ~]# 
[root@dhcp35-197 ~]# umount  /fuse-mnt 
umount: /fuse-mnt: not mounted
[root@dhcp35-197 ~]# mount -t glusterfs localhost:/vol_disperse /fuse-mnt
[root@dhcp35-197 ~]# cd /fuse-mnt
[root@dhcp35-197 fuse-mnt]# cat test
hello
[root@dhcp35-197 fuse-mnt]# ln test test_hlink2
[root@dhcp35-197 fuse-mnt]# cat test
[root@dhcp35-197 fuse-mnt]# 
[root@dhcp35-197 fuse-mnt]# 

I observe a similar issue for an existing file on a fuse mount as well.

Comment 7 Pranith Kumar K 2017-01-03 06:39:48 UTC
@Atin, @Poornima, this is a bug in EC; sorry we didn't update the issue, my bad. When a hard link is created, EC performs the lookup on the new path instead of the old path as part of the link operation. That lookup fails, and EC then updates the file size to a wrong value. I need to have a discussion with Xavi and finalize the fix.
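
A minimal way to observe the wrong size bookkeeping described above, assuming direct access to a brick (the brick path is illustrative; trusted.ec.size is the xattr the disperse translator uses to track file size):

# From the client mount: after the ln, stat reports size 0 although data was written
stat -c 'size=%s links=%h' /mnt/ec_test/test1

# On a brick holding the file: the data fragment is still present, and the size
# tracked by disperse can be inspected via its xattr
ls -l /bricks/brick6/br6/test1
getfattr -n trusted.ec.size -e hex /bricks/brick6/br6/test1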

Comment 8 Pranith Kumar K 2017-01-04 08:12:34 UTC
Upstream patch: http://review.gluster.org/16320

Comment 9 Arthy Loganathan 2017-01-04 10:32:19 UTC
Due to this bug, the fs-sanity rpc test suite fails on EC volumes.

Comment 11 Nag Pavan Chilakam 2017-01-09 12:50:55 UTC
I have tested this on 3.8.4-11 and found the issue fixed; hence moving to Verified.
Tested on both fuse and ganesha protocols on a dist-ec volume with mdcache settings enabled.
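
A sketch of that verification pass (mount points and the ganesha VIP are placeholders); on the fixed build both names should show the full contents and a non-zero size:

mount -t glusterfs server1:/vol_ec /mnt/fuse
mount -t nfs -o vers=4.0 <ganesha-VIP>:/vol_ec /mnt/nfs
for m in /mnt/fuse /mnt/nfs; do
    echo "testfile" > $m/f1
    ln $m/f1 $m/f1_hlink
    cat $m/f1 $m/f1_hlink          # both should print "testfile"
    ls -l $m/f1 $m/f1_hlink        # link count 2, size 9 for both names
done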

Comment 12 Nag Pavan Chilakam 2017-01-09 14:37:33 UTC
While doing QA validation I hit BZ https://bugzilla.redhat.com/show_bug.cgi?id=1411352 - [mdcache]Rename of a file doesn't seem to send invalidation to the clients consistently and hence can end up with duplicate files.

Comment 13 Nag Pavan Chilakam 2017-01-23 14:06:14 UTC
Test case ID in Polarion: RHG3-11785 - BZ#1408836: Contents of original file are not seen when hardlink is created

Comment 15 errata-xmlrpc 2017-03-23 06:00:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

