Bug 1408836
Summary: [ganesha+ec]: Contents of original file are not seen when hardlink is created

Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Arthy Loganathan <aloganat>
Component: disperse
Assignee: Pranith Kumar K <pkarampu>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: rhgs-3.2
CC: amukherj, aspandey, dang, ffilz, jthottan, mbenjamin, pgurusid, pkarampu, rcyriac, rhinduja, rhs-bugs, skoduri, storage-qa-internal
Target Milestone: ---
Target Release: RHGS 3.2.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.8.4-11
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1409730 (view as bug list)
Environment:
Last Closed: 2017-03-23 06:00:50 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1409730, 1412916, 1413057
Bug Blocks: 1351528
Description (Arthy Loganathan, 2016-12-27 15:20:54 UTC)
```
[root@dhcp46-111 ~]# gluster vol status vol_ec
Status of volume: vol_ec
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick dhcp46-111.lab.eng.blr.redhat.com:/bricks/brick6/br6  49156     0          Y       7207
Brick dhcp46-115.lab.eng.blr.redhat.com:/bricks/brick6/br6  49156     0          Y       16988
Brick dhcp46-139.lab.eng.blr.redhat.com:/bricks/brick6/br6  49156     0          Y       27827
Brick dhcp46-124.lab.eng.blr.redhat.com:/bricks/brick6/br6  49152     0          Y       15796
Brick dhcp46-131.lab.eng.blr.redhat.com:/bricks/brick6/br6  49154     0          Y       17281
Brick dhcp46-152.lab.eng.blr.redhat.com:/bricks/brick6/br6  49156     0          Y       12912
Brick dhcp46-111.lab.eng.blr.redhat.com:/bricks/brick7/br7  49157     0          Y       7227
Brick dhcp46-115.lab.eng.blr.redhat.com:/bricks/brick7/br7  49157     0          Y       17008
Brick dhcp46-139.lab.eng.blr.redhat.com:/bricks/brick7/br7  49157     0          Y       27847
Brick dhcp46-124.lab.eng.blr.redhat.com:/bricks/brick7/br7  49153     0          Y       15816
Brick dhcp46-131.lab.eng.blr.redhat.com:/bricks/brick7/br7  49155     0          Y       17301
Brick dhcp46-152.lab.eng.blr.redhat.com:/bricks/brick7/br7  49157     0          Y       12932
Self-heal Daemon on localhost                               N/A       N/A        Y       9298
Self-heal Daemon on dhcp46-115.lab.eng.blr.redhat.com       N/A       N/A        Y       18989
Self-heal Daemon on dhcp46-124.lab.eng.blr.redhat.com       N/A       N/A        Y       18024
Self-heal Daemon on dhcp46-131.lab.eng.blr.redhat.com       N/A       N/A        Y       17437
Self-heal Daemon on dhcp46-139.lab.eng.blr.redhat.com       N/A       N/A        Y       29752
Self-heal Daemon on dhcp46-152.lab.eng.blr.redhat.com       N/A       N/A        Y       13070

Task Status of Volume vol_ec
------------------------------------------------------------------------------
There are no active volume tasks
```

```
[root@dhcp35-197 build]# gluster v status vol_disperse
Status of volume: vol_disperse
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.122.201:/tmp/disperse_brick1   49154     0          Y       3444
Brick 192.168.122.201:/tmp/disperse_brick2   49155     0          Y       3464
Brick 192.168.122.201:/tmp/disperse_brick3   49156     0          Y       3484
Brick 192.168.122.201:/tmp/disperse_brick4   49157     0          Y       3504
Brick 192.168.122.201:/tmp/disperse_brick5   49158     0          Y       3524
Brick 192.168.122.201:/tmp/disperse_brick6   49159     0          Y       3544
Brick 192.168.122.201:/tmp/disperse_brick7   49160     0          Y       3564
Brick 192.168.122.201:/tmp/disperse_brick8   49161     0          Y       3584
Brick 192.168.122.201:/tmp/disperse_brick9   49162     0          Y       3604
Brick 192.168.122.201:/tmp/disperse_brick10  49163     0          Y       3624
Brick 192.168.122.201:/tmp/disperse_brick11  49164     0          Y       3644
Brick 192.168.122.201:/tmp/disperse_brick12  49165     0          Y       3664
Self-heal Daemon on localhost                N/A       N/A        Y       3686

Task Status of Volume vol_disperse
------------------------------------------------------------------------------
There are no active volume tasks

[root@dhcp35-197 build]# gluster v info vol_disperse

Volume Name: vol_disperse
Type: Distributed-Disperse
Volume ID: d66d97a1-6bdb-476c-8c24-2c842f2bcb7a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 192.168.122.201:/tmp/disperse_brick1
Brick2: 192.168.122.201:/tmp/disperse_brick2
Brick3: 192.168.122.201:/tmp/disperse_brick3
Brick4: 192.168.122.201:/tmp/disperse_brick4
Brick5: 192.168.122.201:/tmp/disperse_brick5
Brick6: 192.168.122.201:/tmp/disperse_brick6
Brick7: 192.168.122.201:/tmp/disperse_brick7
Brick8: 192.168.122.201:/tmp/disperse_brick8
Brick9: 192.168.122.201:/tmp/disperse_brick9
Brick10: 192.168.122.201:/tmp/disperse_brick10
Brick11: 192.168.122.201:/tmp/disperse_brick11
Brick12: 192.168.122.201:/tmp/disperse_brick12
Options Reconfigured:
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
```

Reproduction on a FUSE mount (after a remount, the original file appears empty once a second hardlink is created):

```
[root@dhcp35-197 ~]# mount -t glusterfs localhost:/vol_disperse /fuse-mnt
[root@dhcp35-197 ~]# cd /fuse-mnt
[root@dhcp35-197 fuse-mnt]# echo "hello" > test
[root@dhcp35-197 fuse-mnt]# cat test
hello
[root@dhcp35-197 fuse-mnt]# ln test test_hlink
[root@dhcp35-197 fuse-mnt]# cat test
hello
[root@dhcp35-197 fuse-mnt]# cd
[root@dhcp35-197 ~]# umount /fuse-mnt
[root@dhcp35-197 ~]# umount /fuse-mnt
umount: /fuse-mnt: not mounted
[root@dhcp35-197 ~]# mount -t glusterfs localhost:/vol_disperse /fuse-mnt
[root@dhcp35-197 ~]# cd /fuse-mnt
[root@dhcp35-197 fuse-mnt]# cat test
hello
[root@dhcp35-197 fuse-mnt]# ln test test_hlink2
[root@dhcp35-197 fuse-mnt]# cat test
[root@dhcp35-197 fuse-mnt]#
```

I observe a similar issue for an existing file on the FUSE mount as well.

@Atin, @Poornima, this is a bug in EC; sorry we didn't update the issue, my bad. When a hard link is created, EC performs a lookup on the new path instead of the old path as part of the hardlink operation. That lookup fails, so EC updates the file size with a wrong value. I need to discuss this with Xavi and finalize the fix.

Upstream patch: http://review.gluster.org/16320

Due to this bug, the fs-sanity RPC test suite fails with an EC volume.
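The failure mode described in the comments (a lookup issued against the new link path fails, and the failed result clobbers the cached file size) can be illustrated with a toy metadata cache. This is a simplified sketch of the general pattern, not actual GlusterFS/EC code; the class and method names are invented for illustration.

```python
# Toy model of the hardlink bug: metadata is cached per path, and the
# hardlink operation refreshes the source file's metadata via a lookup.
# The buggy variant looks up the NEW path (which has no cached entry),
# so the failed lookup overwrites the size with 0 and "cat" on the
# original file appears empty, as in the transcript above.

class ToyMetaCache:
    def __init__(self):
        self.meta = {}  # path -> cached size in bytes

    def create(self, path, size):
        self.meta[path] = size

    def lookup(self, path):
        # Returns None to model a failed lookup (path not found).
        return self.meta.get(path)

    def hardlink_buggy(self, old, new):
        size = self.lookup(new)          # BUG: lookup on the new path fails
        self.meta[new] = self.meta[old]  # link gets the old entry
        self.meta[old] = size or 0       # failed lookup zeroes the size

    def hardlink_fixed(self, old, new):
        size = self.lookup(old)          # FIX: refresh from the old path
        self.meta[new] = size
        self.meta[old] = size


buggy = ToyMetaCache()
buggy.create("test", 6)                  # echo "hello" > test (6 bytes)
buggy.hardlink_buggy("test", "test_hlink2")
print(buggy.lookup("test"))              # cached size is now 0: file looks empty

fixed = ToyMetaCache()
fixed.create("test", 6)
fixed.hardlink_fixed("test", "test_hlink")
print(fixed.lookup("test"))              # cached size stays 6
```

The actual fix is in the upstream patch linked above; this model only shows why looking up the wrong path after a hardlink can corrupt cached size metadata.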
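For reference, the md-cache and cache-invalidation options listed under "Options Reconfigured" in the volume info above can be applied with `gluster volume set`. A minimal sketch, assuming the same volume name `vol_disperse` as in the transcript:

```shell
# Apply the md-cache settings shown in the volume info above.
# Values match the "Options Reconfigured" section of vol_disperse.
gluster volume set vol_disperse features.cache-invalidation on
gluster volume set vol_disperse features.cache-invalidation-timeout 600
gluster volume set vol_disperse performance.stat-prefetch on
gluster volume set vol_disperse performance.cache-invalidation on
gluster volume set vol_disperse performance.md-cache-timeout 600
```

This configuration is what the QA verification below refers to as "md-cache settings enabled".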
I have tested this on 3.8.4-11 and found the issue fixed, hence moving to Verified. Tested on both the FUSE and ganesha protocols on a dist-ec volume with md-cache settings enabled.

During QA validation I hit BZ https://bugzilla.redhat.com/show_bug.cgi?id=1411352 - [mdcache] Rename of a file doesn't seem to send invalidation to the clients consistently and hence can end up with duplicate files.

Testcase ID in Polarion: RHG3-11785 - BZ#1408836: Contents of original file are not seen when hardlink is created

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html