Bug 1320818 - Files which were accessible become inaccessible over time (music files)
Summary: Files which were accessible become inaccessible over time (music files)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Manikandan
QA Contact:
URL:
Whiteboard: tier-fuse-nfs-samba
Depends On: 1302355
Blocks: 1320887 1320892
 
Reported: 2016-03-24 04:01 UTC by Raghavendra Bhat
Modified: 2016-06-16 14:01 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.8rc2
Clone Of: 1302355
: 1320887 1320892 (view as bug list)
Environment:
Last Closed: 2016-06-16 14:01:30 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra Bhat 2016-03-24 04:01:12 UTC
On a dist-rep over EC tiered volume, I mounted the volume on my desktop using NFS and copied some mp3 files into it.
Then, using the VLC player, I tried to play the files in shuffle mode (about 30 songs).

Initially about 10 mp3 files played to completion, and then we started getting permission denied errors as below:



VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah (2013) ~320Kbps/04 - Banthi Poola Janaki [www.AtoZmp3.Net].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah%20%282013%29%20~320Kbps/04%20-%20Banthi%20Poola%20Janaki%20%5Bwww.AtoZmp3.Net%5D.mp3'. Check the log for details.
File reading failed:





gluster version: 3.7.5-17

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-01-27 10:20:41 EST ---

This bug is automatically being proposed for the current z-stream release of Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from nchilaka on 2016-01-28 00:20:54 EST ---

gluster v info:
[root@dhcp37-202 ~]# gluster v info nagvol
 
Volume Name: nagvol
Type: Tier
Volume ID: 5972ca44-130a-4543-8cc0-abf76a133a34
Status: Started
Number of Bricks: 36
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 6 x 2 = 12
Brick1: 10.70.37.120:/rhs/brick7/nagvol_hot
Brick2: 10.70.37.60:/rhs/brick7/nagvol_hot
Brick3: 10.70.37.69:/rhs/brick7/nagvol_hot
Brick4: 10.70.37.101:/rhs/brick7/nagvol_hot
Brick5: 10.70.35.163:/rhs/brick7/nagvol_hot
Brick6: 10.70.35.173:/rhs/brick7/nagvol_hot
Brick7: 10.70.35.232:/rhs/brick7/nagvol_hot
Brick8: 10.70.35.176:/rhs/brick7/nagvol_hot
Brick9: 10.70.35.222:/rhs/brick7/nagvol_hot
Brick10: 10.70.35.155:/rhs/brick7/nagvol_hot
Brick11: 10.70.37.195:/rhs/brick7/nagvol_hot
Brick12: 10.70.37.202:/rhs/brick7/nagvol_hot
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (8 + 4) = 24
Brick13: 10.70.37.202:/rhs/brick1/nagvol
Brick14: 10.70.37.195:/rhs/brick1/nagvol
Brick15: 10.70.35.155:/rhs/brick1/nagvol
Brick16: 10.70.35.222:/rhs/brick1/nagvol
Brick17: 10.70.35.108:/rhs/brick1/nagvol
Brick18: 10.70.35.44:/rhs/brick1/nagvol
Brick19: 10.70.35.89:/rhs/brick1/nagvol
Brick20: 10.70.35.231:/rhs/brick1/nagvol
Brick21: 10.70.35.176:/rhs/brick1/nagvol
Brick22: 10.70.35.232:/rhs/brick1/nagvol
Brick23: 10.70.35.173:/rhs/brick1/nagvol
Brick24: 10.70.35.163:/rhs/brick1/nagvol
Brick25: 10.70.37.101:/rhs/brick1/nagvol
Brick26: 10.70.37.69:/rhs/brick1/nagvol
Brick27: 10.70.37.60:/rhs/brick1/nagvol
Brick28: 10.70.37.120:/rhs/brick1/nagvol
Brick29: 10.70.37.202:/rhs/brick2/nagvol
Brick30: 10.70.37.195:/rhs/brick2/nagvol
Brick31: 10.70.35.155:/rhs/brick2/nagvol
Brick32: 10.70.35.222:/rhs/brick2/nagvol
Brick33: 10.70.35.108:/rhs/brick2/nagvol
Brick34: 10.70.35.44:/rhs/brick2/nagvol
Brick35: 10.70.35.89:/rhs/brick2/nagvol
Brick36: 10.70.35.231:/rhs/brick2/nagvol
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: off
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on



[root@dhcp37-202 ~]# gluster v status nagvol
Status of volume: nagvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.37.120:/rhs/brick7/nagvol_hot   49156     0          Y       32513
Brick 10.70.37.60:/rhs/brick7/nagvol_hot    49156     0          Y       4060 
Brick 10.70.37.69:/rhs/brick7/nagvol_hot    49156     0          Y       32442
Brick 10.70.37.101:/rhs/brick7/nagvol_hot   49156     0          Y       4199 
Brick 10.70.35.163:/rhs/brick7/nagvol_hot   49156     0          Y       617  
Brick 10.70.35.173:/rhs/brick7/nagvol_hot   49156     0          Y       32751
Brick 10.70.35.232:/rhs/brick7/nagvol_hot   49156     0          Y       32361
Brick 10.70.35.176:/rhs/brick7/nagvol_hot   49156     0          Y       32383
Brick 10.70.35.222:/rhs/brick7/nagvol_hot   49155     0          Y       22713
Brick 10.70.35.155:/rhs/brick7/nagvol_hot   49155     0          Y       22505
Brick 10.70.37.195:/rhs/brick7/nagvol_hot   49156     0          Y       25832
Brick 10.70.37.202:/rhs/brick7/nagvol_hot   49156     0          Y       26275
Cold Bricks:
Brick 10.70.37.202:/rhs/brick1/nagvol       49152     0          Y       16950
Brick 10.70.37.195:/rhs/brick1/nagvol       49152     0          Y       16702
Brick 10.70.35.155:/rhs/brick1/nagvol       49152     0          Y       13578
Brick 10.70.35.222:/rhs/brick1/nagvol       49152     0          Y       13546
Brick 10.70.35.108:/rhs/brick1/nagvol       49152     0          Y       4675 
Brick 10.70.35.44:/rhs/brick1/nagvol        49152     0          Y       12288
Brick 10.70.35.89:/rhs/brick1/nagvol        49152     0          Y       12261
Brick 10.70.35.231:/rhs/brick1/nagvol       49152     0          Y       22810
Brick 10.70.35.176:/rhs/brick1/nagvol       49152     0          Y       22781
Brick 10.70.35.232:/rhs/brick1/nagvol       49152     0          Y       22783
Brick 10.70.35.173:/rhs/brick1/nagvol       49152     0          Y       22795
Brick 10.70.35.163:/rhs/brick1/nagvol       49152     0          Y       22805
Brick 10.70.37.101:/rhs/brick1/nagvol       49152     0          Y       22847
Brick 10.70.37.69:/rhs/brick1/nagvol        49152     0          Y       22847
Brick 10.70.37.60:/rhs/brick1/nagvol        49152     0          Y       22895
Brick 10.70.37.120:/rhs/brick1/nagvol       49152     0          Y       22916
Brick 10.70.37.202:/rhs/brick2/nagvol       49153     0          Y       16969
Brick 10.70.37.195:/rhs/brick2/nagvol       49153     0          Y       16721
Brick 10.70.35.155:/rhs/brick2/nagvol       49153     0          Y       13597
Brick 10.70.35.222:/rhs/brick2/nagvol       49153     0          Y       13565
Brick 10.70.35.108:/rhs/brick2/nagvol       49153     0          Y       4694 
Brick 10.70.35.44:/rhs/brick2/nagvol        49153     0          Y       12307
Brick 10.70.35.89:/rhs/brick2/nagvol        49153     0          Y       12280
Brick 10.70.35.231:/rhs/brick2/nagvol       49153     0          Y       22829
NFS Server on localhost                     2049      0          Y       26295
Self-heal Daemon on localhost               N/A       N/A        Y       26303
Quota Daemon on localhost                   N/A       N/A        Y       26311
NFS Server on 10.70.37.101                  2049      0          Y       4219 
Self-heal Daemon on 10.70.37.101            N/A       N/A        Y       4227 
Quota Daemon on 10.70.37.101                N/A       N/A        Y       4235 
NFS Server on 10.70.37.69                   2049      0          Y       32462
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       32470
Quota Daemon on 10.70.37.69                 N/A       N/A        Y       32478
NFS Server on 10.70.37.195                  2049      0          Y       25852
Self-heal Daemon on 10.70.37.195            N/A       N/A        Y       25860
Quota Daemon on 10.70.37.195                N/A       N/A        Y       25868
NFS Server on 10.70.37.60                   2049      0          Y       4080 
Self-heal Daemon on 10.70.37.60             N/A       N/A        Y       4088 
Quota Daemon on 10.70.37.60                 N/A       N/A        Y       4096 
NFS Server on 10.70.37.120                  2049      0          Y       32533
Self-heal Daemon on 10.70.37.120            N/A       N/A        Y       32541
Quota Daemon on 10.70.37.120                N/A       N/A        Y       32549
NFS Server on 10.70.35.173                  2049      0          Y       303  
Self-heal Daemon on 10.70.35.173            N/A       N/A        Y       311  
Quota Daemon on 10.70.35.173                N/A       N/A        Y       319  
NFS Server on 10.70.35.232                  2049      0          Y       32381
Self-heal Daemon on 10.70.35.232            N/A       N/A        Y       32389
Quota Daemon on 10.70.35.232                N/A       N/A        Y       32397
NFS Server on 10.70.35.176                  2049      0          Y       32403
Self-heal Daemon on 10.70.35.176            N/A       N/A        Y       32411
Quota Daemon on 10.70.35.176                N/A       N/A        Y       32419
NFS Server on 10.70.35.231                  2049      0          Y       32446
Self-heal Daemon on 10.70.35.231            N/A       N/A        Y       32455
Quota Daemon on 10.70.35.231                N/A       N/A        Y       32463
NFS Server on 10.70.35.163                  2049      0          Y       637  
Self-heal Daemon on 10.70.35.163            N/A       N/A        Y       645  
Quota Daemon on 10.70.35.163                N/A       N/A        Y       653  
NFS Server on 10.70.35.222                  2049      0          Y       22733
Self-heal Daemon on 10.70.35.222            N/A       N/A        Y       22742
Quota Daemon on 10.70.35.222                N/A       N/A        Y       22750
NFS Server on 10.70.35.108                  2049      0          Y       13877
Self-heal Daemon on 10.70.35.108            N/A       N/A        Y       13885
Quota Daemon on 10.70.35.108                N/A       N/A        Y       13893
NFS Server on 10.70.35.155                  2049      0          Y       22525
Self-heal Daemon on 10.70.35.155            N/A       N/A        Y       22533
Quota Daemon on 10.70.35.155                N/A       N/A        Y       22541
NFS Server on 10.70.35.44                   2049      0          Y       21479
Self-heal Daemon on 10.70.35.44             N/A       N/A        Y       21487
Quota Daemon on 10.70.35.44                 N/A       N/A        Y       21495
NFS Server on 10.70.35.89                   2049      0          Y       20671
Self-heal Daemon on 10.70.35.89             N/A       N/A        Y       20679
Quota Daemon on 10.70.35.89                 N/A       N/A        Y       20687
 
Task Status of Volume nagvol
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : 0870550a-70ba-4cd1-98da-b456059bd6cc
Status               : in progress         
 






Later, I changed some performance settings as per Rafi's instructions to debug the issue:
[root@dhcp37-202 glusterfs]# gluster v info nagvol
 
Volume Name: nagvol
Type: Tier
Volume ID: 5972ca44-130a-4543-8cc0-abf76a133a34
Status: Started
Number of Bricks: 36
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 6 x 2 = 12
Brick1: 10.70.37.120:/rhs/brick7/nagvol_hot
Brick2: 10.70.37.60:/rhs/brick7/nagvol_hot
Brick3: 10.70.37.69:/rhs/brick7/nagvol_hot
Brick4: 10.70.37.101:/rhs/brick7/nagvol_hot
Brick5: 10.70.35.163:/rhs/brick7/nagvol_hot
Brick6: 10.70.35.173:/rhs/brick7/nagvol_hot
Brick7: 10.70.35.232:/rhs/brick7/nagvol_hot
Brick8: 10.70.35.176:/rhs/brick7/nagvol_hot
Brick9: 10.70.35.222:/rhs/brick7/nagvol_hot
Brick10: 10.70.35.155:/rhs/brick7/nagvol_hot
Brick11: 10.70.37.195:/rhs/brick7/nagvol_hot
Brick12: 10.70.37.202:/rhs/brick7/nagvol_hot
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (8 + 4) = 24
Brick13: 10.70.37.202:/rhs/brick1/nagvol
Brick14: 10.70.37.195:/rhs/brick1/nagvol
Brick15: 10.70.35.155:/rhs/brick1/nagvol
Brick16: 10.70.35.222:/rhs/brick1/nagvol
Brick17: 10.70.35.108:/rhs/brick1/nagvol
Brick18: 10.70.35.44:/rhs/brick1/nagvol
Brick19: 10.70.35.89:/rhs/brick1/nagvol
Brick20: 10.70.35.231:/rhs/brick1/nagvol
Brick21: 10.70.35.176:/rhs/brick1/nagvol
Brick22: 10.70.35.232:/rhs/brick1/nagvol
Brick23: 10.70.35.173:/rhs/brick1/nagvol
Brick24: 10.70.35.163:/rhs/brick1/nagvol
Brick25: 10.70.37.101:/rhs/brick1/nagvol
Brick26: 10.70.37.69:/rhs/brick1/nagvol
Brick27: 10.70.37.60:/rhs/brick1/nagvol
Brick28: 10.70.37.120:/rhs/brick1/nagvol
Brick29: 10.70.37.202:/rhs/brick2/nagvol
Brick30: 10.70.37.195:/rhs/brick2/nagvol
Brick31: 10.70.35.155:/rhs/brick2/nagvol
Brick32: 10.70.35.222:/rhs/brick2/nagvol
Brick33: 10.70.35.108:/rhs/brick2/nagvol
Brick34: 10.70.35.44:/rhs/brick2/nagvol
Brick35: 10.70.35.89:/rhs/brick2/nagvol
Brick36: 10.70.35.231:/rhs/brick2/nagvol
Options Reconfigured:
cluster.watermark-hi: 50
cluster.watermark-low: 30
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: off
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root@dhcp37-202 glusterfs]# gluster v status nagvol
Status of volume: nagvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.37.120:/rhs/brick7/nagvol_hot   49156     0          Y       32513
Brick 10.70.37.60:/rhs/brick7/nagvol_hot    49156     0          Y       4060 
Brick 10.70.37.69:/rhs/brick7/nagvol_hot    49156     0          Y       32442
Brick 10.70.37.101:/rhs/brick7/nagvol_hot   49156     0          Y       4199 
Brick 10.70.35.163:/rhs/brick7/nagvol_hot   49156     0          Y       617  
Brick 10.70.35.173:/rhs/brick7/nagvol_hot   49156     0          Y       32751
Brick 10.70.35.232:/rhs/brick7/nagvol_hot   49156     0          Y       32361
Brick 10.70.35.176:/rhs/brick7/nagvol_hot   49156     0          Y       32383
Brick 10.70.35.222:/rhs/brick7/nagvol_hot   49155     0          Y       22713
Brick 10.70.35.155:/rhs/brick7/nagvol_hot   49155     0          Y       22505
Brick 10.70.37.195:/rhs/brick7/nagvol_hot   49156     0          Y       25832
Brick 10.70.37.202:/rhs/brick7/nagvol_hot   49156     0          Y       26275
Cold Bricks:
Brick 10.70.37.202:/rhs/brick1/nagvol       49152     0          Y       16950
Brick 10.70.37.195:/rhs/brick1/nagvol       49152     0          Y       16702
Brick 10.70.35.155:/rhs/brick1/nagvol       49152     0          Y       13578
Brick 10.70.35.222:/rhs/brick1/nagvol       49152     0          Y       13546
Brick 10.70.35.108:/rhs/brick1/nagvol       49152     0          Y       4675 
Brick 10.70.35.44:/rhs/brick1/nagvol        49152     0          Y       12288
Brick 10.70.35.89:/rhs/brick1/nagvol        49152     0          Y       2668 
Brick 10.70.35.231:/rhs/brick1/nagvol       49152     0          Y       22810
Brick 10.70.35.176:/rhs/brick1/nagvol       49152     0          Y       22781
Brick 10.70.35.232:/rhs/brick1/nagvol       49152     0          Y       22783
Brick 10.70.35.173:/rhs/brick1/nagvol       49152     0          Y       22795
Brick 10.70.35.163:/rhs/brick1/nagvol       49152     0          Y       22805
Brick 10.70.37.101:/rhs/brick1/nagvol       49152     0          Y       22847
Brick 10.70.37.69:/rhs/brick1/nagvol        49152     0          Y       22847
Brick 10.70.37.60:/rhs/brick1/nagvol        49152     0          Y       22895
Brick 10.70.37.120:/rhs/brick1/nagvol       49152     0          Y       22916
Brick 10.70.37.202:/rhs/brick2/nagvol       49153     0          Y       16969
Brick 10.70.37.195:/rhs/brick2/nagvol       49153     0          Y       16721
Brick 10.70.35.155:/rhs/brick2/nagvol       49153     0          Y       13597
Brick 10.70.35.222:/rhs/brick2/nagvol       49153     0          Y       13565
Brick 10.70.35.108:/rhs/brick2/nagvol       49153     0          Y       4694 
Brick 10.70.35.44:/rhs/brick2/nagvol        49153     0          Y       12307
Brick 10.70.35.89:/rhs/brick2/nagvol        49153     0          Y       2683 
Brick 10.70.35.231:/rhs/brick2/nagvol       49153     0          Y       22829
NFS Server on localhost                     2049      0          Y       3356 
Self-heal Daemon on localhost               N/A       N/A        Y       3364 
Quota Daemon on localhost                   N/A       N/A        Y       3372 
NFS Server on 10.70.37.195                  2049      0          Y       2354 
Self-heal Daemon on 10.70.37.195            N/A       N/A        Y       2362 
Quota Daemon on 10.70.37.195                N/A       N/A        Y       2370 
NFS Server on 10.70.37.120                  2049      0          Y       9573 
Self-heal Daemon on 10.70.37.120            N/A       N/A        Y       9581 
Quota Daemon on 10.70.37.120                N/A       N/A        Y       9589 
NFS Server on 10.70.37.101                  2049      0          Y       13331
Self-heal Daemon on 10.70.37.101            N/A       N/A        Y       13339
Quota Daemon on 10.70.37.101                N/A       N/A        Y       13347
NFS Server on 10.70.37.60                   2049      0          Y       13071
Self-heal Daemon on 10.70.37.60             N/A       N/A        Y       13079
Quota Daemon on 10.70.37.60                 N/A       N/A        Y       13087
NFS Server on 10.70.37.69                   2049      0          Y       9368 
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       9376 
Quota Daemon on 10.70.37.69                 N/A       N/A        Y       9384 
NFS Server on 10.70.35.176                  2049      0          Y       9438 
Self-heal Daemon on 10.70.35.176            N/A       N/A        Y       9446 
Quota Daemon on 10.70.35.176                N/A       N/A        Y       9454 
NFS Server on 10.70.35.155                  2049      0          Y       31698
Self-heal Daemon on 10.70.35.155            N/A       N/A        Y       31706
Quota Daemon on 10.70.35.155                N/A       N/A        Y       31714
NFS Server on 10.70.35.232                  2049      0          Y       9301 
Self-heal Daemon on 10.70.35.232            N/A       N/A        Y       9309 
Quota Daemon on 10.70.35.232                N/A       N/A        Y       9317 
NFS Server on 10.70.35.163                  2049      0          Y       9935 
Self-heal Daemon on 10.70.35.163            N/A       N/A        Y       9943 
Quota Daemon on 10.70.35.163                N/A       N/A        Y       9951 
NFS Server on 10.70.35.89                   2049      0          Y       2483 
Self-heal Daemon on 10.70.35.89             N/A       N/A        Y       2560 
Quota Daemon on 10.70.35.89                 N/A       N/A        Y       2597 
NFS Server on 10.70.35.222                  2049      0          Y       32079
Self-heal Daemon on 10.70.35.222            N/A       N/A        Y       32087
Quota Daemon on 10.70.35.222                N/A       N/A        Y       32095
NFS Server on 10.70.35.173                  2049      0          Y       9724 
Self-heal Daemon on 10.70.35.173            N/A       N/A        Y       9732 
Quota Daemon on 10.70.35.173                N/A       N/A        Y       9740 
NFS Server on 10.70.35.231                  2049      0          Y       9171 
Self-heal Daemon on 10.70.35.231            N/A       N/A        Y       9179 
Quota Daemon on 10.70.35.231                N/A       N/A        Y       9187 
NFS Server on 10.70.35.44                   2049      0          Y       30600
Self-heal Daemon on 10.70.35.44             N/A       N/A        Y       30608
Quota Daemon on 10.70.35.44                 N/A       N/A        Y       30616
NFS Server on 10.70.35.108                  2049      0          Y       23000
Self-heal Daemon on 10.70.35.108            N/A       N/A        Y       23008
Quota Daemon on 10.70.35.108                N/A       N/A        Y       23016
 
Task Status of Volume nagvol
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : 0870550a-70ba-4cd1-98da-b456059bd6cc
Status               : in progress         
 
[root@dhcp37-202 glusterfs]# 
[root@dhcp37-202 glusterfs]# 




[root@dhcp37-202 ~]# rpm -qa|grep gluster
glusterfs-client-xlators-3.7.5-17.el7rhgs.x86_64
glusterfs-server-3.7.5-17.el7rhgs.x86_64
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-3.7.5-17.el7rhgs.x86_64
glusterfs-api-3.7.5-17.el7rhgs.x86_64
glusterfs-cli-3.7.5-17.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-17.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-17.el7rhgs.x86_64
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
python-gluster-3.7.5-16.el7rhgs.noarch
glusterfs-libs-3.7.5-17.el7rhgs.x86_64
glusterfs-fuse-3.7.5-17.el7rhgs.x86_64
glusterfs-rdma-3.7.5-17.el7rhgs.x86_64




--> Mounted the volume on 4 different clients (all NFS mounts): 3 were RHEL and 1 was the Fedora personal laptop, where I had seen the problem.
---> Total of 16 gluster nodes.





================= Errors thrown by the VLC application =================
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu (2013)~128Kbps/02 - Yaevaindho [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu%20(2013)~128Kbps/02%20-%20Yaevaindho%20%5Bwww.AtoZmp3.in%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/Laahiri Laahiri Laahiri Lo/OHOHO_CHILAKAMMA.mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/Laahiri%20Laahiri%20Laahiri%20Lo/OHOHO_CHILAKAMMA.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah (2013) ~320Kbps/04 - Banthi Poola Janaki [www.AtoZmp3.Net].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah%20%282013%29%20~320Kbps/04%20-%20Banthi%20Poola%20Janaki%20%5Bwww.AtoZmp3.Net%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu (2013)~128Kbps/03 - Lucky Lucky Rai [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu%20(2013)~128Kbps/03%20-%20Lucky%20Lucky%20Rai%20%5Bwww.AtoZmp3.in%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah (2013) ~320Kbps/02 - Diamond Girl [www.AtoZmp3.Net].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah%20%282013%29%20~320Kbps/02%20-%20Diamond%20Girl%20%5Bwww.AtoZmp3.Net%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu (2012) ~320Kbps/02.Jaga Jaga Jagadeka Veera [www.AtoZmp3.Net].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu%20(2012)%20~320Kbps/02.Jaga%20Jaga%20Jagadeka%20Veera%20%5Bwww.AtoZmp3.Net%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu (2012) ~320Kbps/01.Made For Each Other [www.AtoZmp3.Net].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu%20(2012)%20~320Kbps/01.Made%20For%20Each%20Other%20%5Bwww.AtoZmp3.Net%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu (2013) ~320 Kbps/06 - Pimple Dimple [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu%20(2013)%20~320%20Kbps/06%20-%20Pimple%20Dimple%20%5Bwww.AtoZmp3.in%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_sa rajkumar/Yavvana Veena.mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_sa%20rajkumar/Yavvana%20Veena.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu (2013)~128Kbps/04 - Padipoyaanila [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu%20(2013)~128Kbps/04%20-%20Padipoyaanila%20%5Bwww.AtoZmp3.in%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu (2013) ~320 Kbps/05 - Oye Oye [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu%20(2013)%20~320%20Kbps/05%20-%20Oye%20Oye%20%5Bwww.AtoZmp3.in%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu (2012) ~320Kbps/05.Kaatuka Kallu [www.AtoZmp3.Net].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu%20(2012)%20~320Kbps/05.Kaatuka%20Kallu%20%5Bwww.AtoZmp3.Net%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/DADDY/03 VANA VANA.MP3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/DADDY/03%20VANA%20VANA.MP3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/Deviputhrudu/03___OKATA_RENDA.MP3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/Deviputhrudu/03___OKATA_RENDA.MP3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_sa rajkumar/Panchadara_Chilaka-Anukunnana.mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_sa%20rajkumar/Panchadara_Chilaka-Anukunnana.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/DADDY/01 LKKI.MP3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/DADDY/01%20LKKI.MP3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/001_2015/Beeruva (2014) ~320Kbps/04 - Pisthol Bava [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/001_2015/Beeruva%20(2014)%20~320Kbps/04%20-%20Pisthol%20Bava%20%5Bwww.AtoZmp3.in%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/001_2015/Beeruva (2014) ~320Kbps/02 - Chinnadana Chinnadana [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/001_2015/Beeruva%20(2014)%20~320Kbps/02%20-%20Chinnadana%20Chinnadana%20%5Bwww.AtoZmp3.in%5D.mp3'. Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu (2013) ~320 Kbps/02 - Nee Jathaga  [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL 'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu%20(2013)%20~320%20Kbps/02%20-%20Nee%20Jathaga%20%20%5Bwww.AtoZmp3.in%5D.mp3'. Check the log for details.


============================================================================
How reproducible:
Reproduced it at least 3 times

--- Additional comment from nchilaka on 2016-01-28 00:37:44 EST ---

sosreports:
[nchilaka@rhsqe-repo nchilaka]$ /home/repo/sosreports/nchilaka/bug.1302355

[nchilaka@rhsqe-repo nchilaka]$ hostname
rhsqe-repo.lab.eng.blr.redhat.com

--- Additional comment from Nithya Balachandran on 2016-01-28 03:42:03 EST ---

Which NFS server did you use to mount the volume?

--- Additional comment from nchilaka on 2016-01-29 05:21:49 EST ---

I saw the problem with at least 2 different servers at different times.
10.70.37.202 and 10.70.37.120

--- Additional comment from Soumya Koduri on 2016-02-02 05:01:21 EST ---

Here is our analysis so far:

NFS clients send an ACCESS fop to get the permissions of any file. The GlusterFS server encapsulates the file permissions in the op_errno before sending the reply to the gluster-NFS server. In the packet trace collected, we have seen a few ACCESS fops with op_errno set to zero (that too for the root gfid/inode). That means the brick processes have been sending NULL permissions. We have seen this issue hit on the cold-tier bricks.


When we checked the brick processes, we found that the posix-acl xlator is at times checking against NULL perms. The posix-acl xlator (when no ACL is set) stores the permissions of the inodes in its 'ctx->perm' structure and uses those bits to decide the access permission for any user. This 'ctx->perm' is updated as part of posix_acl_ctx_update(), which gets called as part of a variety of fops (like lookup, stat, readdirp, etc.).
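
To make the effect concrete, below is a minimal standalone sketch (plain C, not the actual posix-acl source; the struct and function names are made up for illustration) of how a mode-bit check against a cached 'perm' value behaves when that value is the bare S_IFDIR (0x4000) seen in the gdb session below: with no permission bits cached, every requested access is refused.

/* Standalone sketch -- NOT GlusterFS code. Illustrates why a cached
 * permission value with no mode bits (e.g. just S_IFDIR, 0x4000) makes
 * every access check fail with "permission denied". */
#include <stdio.h>
#include <sys/stat.h>

struct fake_acl_ctx {
        unsigned int uid;
        unsigned int gid;
        mode_t       perm;      /* cached st_mode bits, filled from the iatt */
};

/* Classic owner/group/other test against the cached bits. */
static int
mode_permits (struct fake_acl_ctx *ctx, unsigned int uid,
              unsigned int gid, int want)
{
        mode_t bits;

        if (uid == ctx->uid)
                bits = (ctx->perm >> 6) & 07;   /* owner bits */
        else if (gid == ctx->gid)
                bits = (ctx->perm >> 3) & 07;   /* group bits */
        else
                bits = ctx->perm & 07;          /* other bits */

        return (bits & want) == (mode_t)want;
}

int
main (void)
{
        /* ctx updated from a readdirp entry whose d_stat was all zeros:
         * only the directory type bit survives, the perms are 000. */
        struct fake_acl_ctx stale = { .uid = 0, .gid = 0, .perm = S_IFDIR };

        /* A caller asking for read access (want = 4) is refused. */
        printf ("read access granted: %d\n",
                mode_permits (&stale, 1000, 1000, 4));
        return 0;
}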


gluster-nfs server:
(gdb)
1453                    gf_msg (this->name, GF_LOG_WARNING,
(gdb) c
Continuing.

Breakpoint 1, client3_3_access (frame=0x7f9885331570, this=0x7f9874031150, data=0x7f986bffe7a0)
    at client-rpc-fops.c:3520
3520    {
(gdb) p this->name
$10 = 0x7f9874030ae0 "nagvol-client-20"
(gdb)

volfile:
volume nagvol-client-20
    type protocol/client
    option send-gids true
    option password 492e0b7f-255f-469d-8b9c-6982079dbcd1
    option username 727545cc-cd4f-4d92-9fed-81763b6d3d29
    option transport-type tcp
    option remote-subvolume /rhs/brick2/nagvol
    option remote-host 10.70.35.108
    option ping-timeout 42
end-volume 


Brick process:

Breakpoint 3, posix_acl_ctx_update (inode=0x7f7b1a1b306c, this=this@entry=0x7f7b340106b0,
    buf=buf@entry=0x7f7acc4d9148) at posix-acl.c:734
734    {
(gdb) p inode>gfid
No symbol "gfid" in current context.
(gdb) p inode->gfid
$23 = '\000' <repeats 15 times>, "\001"
(gdb) n
743            ctx = posix_acl_ctx_get (inode, this);
(gdb)
744            if (!ctx) {
(gdb)
743            ctx = posix_acl_ctx_get (inode, this);
(gdb)
744            if (!ctx) {
(gdb)
749            LOCK(&inode->lock);
(gdb)
751                    ctx->uid   = buf->ia_uid;
(gdb)
753                    ctx->perm  = st_mode_from_ia (buf->ia_prot, buf->ia_type);
(gdb)
751                    ctx->uid   = buf->ia_uid;
(gdb)
752                    ctx->gid   = buf->ia_gid;
(gdb)
753                    ctx->perm  = st_mode_from_ia (buf->ia_prot, buf->ia_type);
(gdb)
753                    ctx->perm  = st_mode_from_ia (buf->ia_prot, buf->ia_type);
(gdb)
755            acl = ctx->acl_access;
(gdb) p/x ctx->perm
$24 = 0x4000
(gdb) p buf->ia_prot
$25 = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000',
    exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000',
    write = 0 '\000', exec = 0 '\000'}} 
(gdb) bt
#0  posix_acl_ctx_update (inode=0x7f7b1a1b306c, this=this@entry=0x7f7b340106b0, buf=buf@entry=0x7f7acc4d9148)
    at posix-acl.c:755
#1  0x00007f7b39932af5 in posix_acl_readdirp_cbk (frame=0x7f7b46619a14, cookie=<optimized out>,
    this=0x7f7b340106b0, op_ret=7, op_errno=0, entries=0x7f7ab008da50, xdata=0x0) at posix-acl.c:1625
#2  0x00007f7b39b46bef in br_stub_readdirp_cbk (frame=0x7f7b46627674, cookie=<optimized out>, this=0x7f7b3400f270,
    op_ret=7, op_errno=0, entries=0x7f7ab008da50, dict=0x0) at bit-rot-stub.c:2546
#3  0x00007f7b3b0af163 in posix_readdirp (frame=0x7f7b46618aa0, this=<optimized out>, fd=<optimized out>,
    size=<optimized out>, off=<optimized out>, dict=<optimized out>) at posix.c:6022
#4  0x00007f7b48b10535 in default_readdirp (frame=0x7f7b46618aa0, this=0x7f7b34009240, fd=0x7f7b2c005408, size=0,
    off=0, xdata=0x7f7b48ddd5b8) at defaults.c:2101
#5  0x00007f7b48b10535 in default_readdirp (frame=0x7f7b46618aa0, this=0x7f7b3400a880, fd=0x7f7b2c005408, size=0,
    off=0, xdata=0x7f7b48ddd5b8) at defaults.c:2101
#6  0x00007f7b48b10535 in default_readdirp (frame=0x7f7b46618aa0, this=0x7f7b3400d2e0, fd=0x7f7b2c005408, size=0,
    off=0, xdata=0x7f7b48ddd5b8) at defaults.c:2101
#7  0x00007f7b39b404db in br_stub_readdirp (frame=0x7f7b46627674, this=0x7f7b3400f270, fd=0x7f7b2c005408, size=0,
    offset=0, dict=0x7f7b48ddd5b8) at bit-rot-stub.c:2581
#8  0x00007f7b39930949 in posix_acl_readdirp (frame=0x7f7b46619a14, this=0x7f7b340106b0, fd=0x7f7b2c005408, size=0,
    offset=0, dict=0x7f7b48ddd5b8) at posix-acl.c:1674
#9  0x00007f7b39718817 in pl_readdirp (frame=0x7f7b4661b0ec, this=0x7f7b34011ac0, fd=0x7f7b2c005408, size=0,
    offset=0, xdata=0x7f7b48ddd5b8) at posix.c:2213
#10 0x00007f7b39506c85 in up_readdirp (frame=0x7f7b46631504, this=0x7f7b34012e60, fd=0x7f7b2c005408, size=0, off=0,
    dict=0x7f7b48ddd5b8) at upcall.c:1342
#11 0x00007f7b48b1e19d in default_readdirp_resume (frame=0x7f7b4662b4f0, this=0x7f7b340142d0, fd=0x7f7b2c005408,
    size=0, off=0, xdata=0x7f7b48ddd5b8) at defaults.c:1657
#12 0x00007f7b48b3b17d in call_resume (stub=0x7f7b46113684) at call-stub.c:2576
#13 0x00007f7b392f6363 in iot_worker (data=0x7f7b340546a0) at io-threads.c:215
#14 0x00007f7b47973dc5 in start_thread () from /lib64/libpthread.so.0
#15 0x00007f7b472ba21d in clone () from /lib64/libc.so.6
(gdb) f 1
#1  0x00007f7b39932af5 in posix_acl_readdirp_cbk (frame=0x7f7b46619a14, cookie=<optimized out>,
    this=0x7f7b340106b0, op_ret=7, op_errno=0, entries=0x7f7ab008da50, xdata=0x0) at posix-acl.c:1625
1625                    posix_acl_ctx_update (entry->inode, this, &entry->d_stat);
(gdb) p entry
$26 = (gf_dirent_t *) 0x7f7acc4d9120
(gdb) p entry->d_stat
$27 = {ia_ino = 0, ia_gfid = '\000' <repeats 15 times>, "\001", ia_dev = 0, ia_type = IA_IFDIR, ia_prot = {
    suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000',
      exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000',
      write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0,
  ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0,
  ia_ctime_nsec = 0} 


As we can see above, as part of readdirp we ended up with a root inode entry that has not been updated with the right stat (entry->d_stat). Looking at the code:

int
posix_make_ancestryfromgfid (xlator_t *this, char *path, int pathsize,
                             gf_dirent_t *head, int type, uuid_t gfid,
                             const size_t handle_size,
                             const char *priv_base_path, inode_table_t *itable,
                             inode_t **parent, dict_t *xdata, int32_t *op_errno)
{
        char        *linkname   = NULL; /* "../../<gfid[0]>/<gfid[1]/"
                                         "<gfidstr>/<NAME_MAX>" */ 

...........
...........
        if (__is_root_gfid (gfid)) {
                if (parent) {
                        if (*parent) {
                                inode_unref (*parent);
                        }

                        *parent = inode_ref (itable->root);
                }

                inode = itable->root;

                memset (&iabuf, 0, sizeof (iabuf));
                gf_uuid_copy (iabuf.ia_gfid, inode->gfid);
                iabuf.ia_type = inode->ia_type;

                ret = posix_make_ancestral_node (priv_base_path, path, pathsize,
                                                 head, "/", &iabuf, inode, type,
                                                 xdata);
                if (ret < 0)
                        *op_errno = ENOMEM;
                return ret;
        } 
.............

}

For the root inode entry, we do not seem to be fetching the stat (for other entries a 'posix_resolve()' call is made, which updates the stat). So we suspect this could have resulted in a root entry with NULL perms, which in turn got cached in the posix-acl xlator's 'ctx->perm', resulting in an EPERM error for the ACCESS fop. But it is not yet clear why this issue has not been hit until now.
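
For comparison, here is a small standalone C program (illustrative only, not the actual patch; "/" stands in for priv_base_path) showing the difference between the stat built from scratch for the root entry (type bit only, zero permission bits) and one filled from a real lstat(), which is in spirit what the fix posted upstream (review.gluster.org/13730, see the comments below) does for the root directory.

/* Standalone illustration -- not the actual patch from review 13730.
 * Shows the gap described above: building the root entry's stat from
 * scratch keeps only the file type, so the permission bits come out as
 * zero, while filling it from a real lstat() preserves them. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int
main (void)
{
        struct stat from_scratch;
        struct stat from_lstat;

        /* What posix_make_ancestryfromgfid() effectively did for the
         * root gfid: zero the buffer and set only the type. */
        memset (&from_scratch, 0, sizeof (from_scratch));
        from_scratch.st_mode = S_IFDIR;          /* type only, perms 000 */

        /* What the fix amounts to: take the real attributes of the
         * brick root ("/" used here as a stand-in for priv_base_path). */
        if (lstat ("/", &from_lstat) != 0) {
                perror ("lstat");
                return 1;
        }

        printf ("scratch mode: %06o (perm bits %03o)\n",
                (unsigned) from_scratch.st_mode,
                (unsigned) (from_scratch.st_mode & 0777));
        printf ("lstat   mode: %06o (perm bits %03o)\n",
                (unsigned) from_lstat.st_mode,
                (unsigned) (from_lstat.st_mode & 0777));
        return 0;
}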

Comment 1 Vijay Bellur 2016-03-24 04:04:58 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#5) for review on master by Raghavendra Bhat (raghavendra)

Comment 2 Vijay Bellur 2016-03-24 07:26:02 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#6) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 3 Vijaikumar Mallikarjuna 2016-03-24 09:27:52 UTC
upstream patch: http://review.gluster.org/#/c/13730/

Comment 4 Vijay Bellur 2016-03-24 14:05:08 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#7) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 5 Vijay Bellur 2016-03-24 14:17:58 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#8) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 6 Vijay Bellur 2016-03-25 02:11:07 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#9) for review on master by Vijay Bellur (vbellur)

Comment 7 Vijay Bellur 2016-03-25 09:44:57 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#12) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 8 Vijay Bellur 2016-03-25 09:56:20 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#13) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 9 Vijay Bellur 2016-03-25 13:44:27 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#14) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 10 Vijay Bellur 2016-03-25 16:55:05 UTC
REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt attributes for the root inode) posted (#15) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 11 Vijay Bellur 2016-03-26 16:33:56 UTC
COMMIT: http://review.gluster.org/13730 committed in master by Raghavendra G (rgowdapp) 
------
commit 06d50c1c00fe35c6bc2192a392b8a749984f3efc
Author: Raghavendra Bhat <raghavendra>
Date:   Mon Mar 14 15:10:17 2016 -0400

    storage/posix: send proper iatt attributes for the root inode
    
    * changes in posix to send proper iatt attributes for the root directory
      when ancestry is built. Before posix was filling only the gfid and the
      inode type in the iatt structure keeping rest of the fields zeros. This
      was cached by posix-acl and used to send EACCES when some fops came on
      that object if the uid of the caller is same as the uid of the object on
      the disk.
    
    * getting and setting inode_ctx in function 'posix_acl_ctx_get' is not atomic
      and can lead to memory leak when there are multiple looups for an
      inode at same time. This patch fix this problem
    
    * Linking an inode in posix_build_ancestry, can cause a race in
      posix_acl.
      When parent inode is linked in posix_build_ancestry, and before
      it reaches posix_acl_readdirp_cbk, create/lookup can
      come on a leaf-inode, as parent-inode-ctx not yet updated
      in posix_acl_readdirp_cbk, create/lookup can fail
      with EACCESS. So do the inode linking in the quota xlator
    
    Change-Id: I3101eefb65551cc4162c4ff2963be1b73deacd6d
    BUG: 1320818
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/13730
    Tested-by: Vijaikumar Mallikarjuna <vmallika>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
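
As a side note on the second point in the commit message above (the non-atomic get-and-set of the inode ctx in posix_acl_ctx_get): the problem and its fix look roughly like the following generic sketch (plain pthreads, not the GlusterFS inode-ctx API; names are made up). Two concurrent lookups can both see a missing context and both allocate one, leaking the loser; doing the whole get-or-create under the inode lock avoids that.

/* Generic sketch -- not GlusterFS code. */
#include <pthread.h>
#include <stdlib.h>

struct fake_inode {
        pthread_mutex_t lock;
        void           *ctx;    /* per-xlator context slot */
};

/* Racy: the NULL check happens outside the lock, so two threads can
 * both allocate, and one allocation is leaked or overwritten. */
static void *
ctx_get_racy (struct fake_inode *in)
{
        if (in->ctx == NULL) {
                void *fresh = calloc (1, 64);

                pthread_mutex_lock (&in->lock);
                in->ctx = fresh;        /* may clobber another thread's ctx */
                pthread_mutex_unlock (&in->lock);
        }
        return in->ctx;
}

/* Fixed: get-or-create is done atomically under the inode lock, so at
 * most one context is ever attached to the inode. */
static void *
ctx_get_atomic (struct fake_inode *in)
{
        pthread_mutex_lock (&in->lock);
        if (in->ctx == NULL)
                in->ctx = calloc (1, 64);
        pthread_mutex_unlock (&in->lock);
        return in->ctx;
}

int
main (void)
{
        struct fake_inode in = { .lock = PTHREAD_MUTEX_INITIALIZER, .ctx = NULL };

        /* Single-threaded demo calls; the race only matters with
         * concurrent lookups, but the locking shape is the point. */
        void *a = ctx_get_atomic (&in);
        void *b = ctx_get_racy (&in);

        free (in.ctx);
        return (a == b) ? 0 : 1;
}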

Comment 12 Vijay Bellur 2016-03-29 03:46:27 UTC
REVIEW: http://review.gluster.org/13837 (posix_acl: create inode ctx for root inode) posted (#1) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 13 Vijay Bellur 2016-03-29 13:11:16 UTC
REVIEW: http://review.gluster.org/13837 (server: send lookup on root inode when itable is created) posted (#2) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 14 Vijay Bellur 2016-03-29 14:19:51 UTC
REVIEW: http://review.gluster.org/13837 (server: send lookup on root inode when itable is created) posted (#3) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 15 Vijay Bellur 2016-03-29 15:10:44 UTC
REVIEW: http://review.gluster.org/13837 (server: send lookup on root inode when itable is created) posted (#4) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 16 Vijay Bellur 2016-03-30 01:32:21 UTC
REVIEW: http://review.gluster.org/13837 (server: send lookup on root inode when itable is created) posted (#5) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 17 Vijay Bellur 2016-03-30 04:24:05 UTC
REVIEW: http://review.gluster.org/13837 (server: send lookup on root inode when itable is created) posted (#6) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 18 Vijay Bellur 2016-03-30 19:39:48 UTC
COMMIT: http://review.gluster.org/13837 committed in master by Jeff Darcy (jdarcy) 
------
commit 773e660de0c45221b53cf2a489f28209145475db
Author: vmallika <vmallika>
Date:   Tue Mar 29 18:34:11 2016 +0530

    server: send lookup on root inode when itable is created
    
     * xlators like quota, marker, posix_acl can cause problems
       if inode-ctx are not created.
       sometimes these xlators may not get lookup on root inode
       with below cases
       1) client may not send lookup on root inode (like NSR leader)
       2) if the xlators on one of the bricks are not up,
          and client sending lookup during this time: brick
          can miss the lookup
       It is always better to make sure that there is one lookup
       on root. So send a first lookup when the inode table is created
    
     * When sending lookup on root, new inode is created, we need to
       use itable->root instead
    
    Change-Id: Iff2eeaa1a89795328833a7761789ef588f11218f
    BUG: 1320818
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/13837
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jdarcy>

Comment 19 Vijay Bellur 2016-03-31 12:37:21 UTC
REVIEW: http://review.gluster.org/13857 (marker: build_ancestry in marker) posted (#7) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 20 Vijay Bellur 2016-04-02 03:40:31 UTC
REVIEW: http://review.gluster.org/13892 (marker: optimize mq_update_dirty_inode_task) posted (#1) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 21 Vijay Bellur 2016-04-02 06:33:12 UTC
REVIEW: http://review.gluster.org/13894 (posix_acl: log for inode ctx NULL) posted (#1) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 22 Vijay Bellur 2016-04-02 08:07:47 UTC
REVIEW: http://review.gluster.org/13894 (posix_acl: log for inode ctx NULL) posted (#2) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 23 Vijay Bellur 2016-04-02 08:58:14 UTC
REVIEW: http://review.gluster.org/13894 (posix_acl: log for inode ctx NULL) posted (#3) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 24 Vijay Bellur 2016-04-02 12:06:04 UTC
REVIEW: http://review.gluster.org/13892 (marker: optimize mq_update_dirty_inode_task) posted (#2) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 25 Vijay Bellur 2016-04-02 12:08:22 UTC
REVIEW: http://review.gluster.org/13857 (marker: build_ancestry in marker) posted (#8) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 26 Vijay Bellur 2016-04-02 14:42:48 UTC
REVIEW: http://review.gluster.org/13894 (posix_acl: log for inode ctx NULL) posted (#4) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 27 Vijay Bellur 2016-04-02 14:46:11 UTC
REVIEW: http://review.gluster.org/13894 (posix_acl: skip acl_permits for special clients) posted (#5) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 28 Vijay Bellur 2016-04-02 14:51:38 UTC
REVIEW: http://review.gluster.org/13894 (posix_acl: skip acl_permits for special clients) posted (#6) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 29 Vijay Bellur 2016-04-02 15:44:49 UTC
REVIEW: http://review.gluster.org/13857 (marker: build_ancestry in marker) posted (#9) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 30 Vijay Bellur 2016-04-04 11:52:28 UTC
REVIEW: http://review.gluster.org/13902 (quota: script to test EACCES and quota usage with untar) posted (#1) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 31 Vijay Bellur 2016-04-04 12:40:16 UTC
REVIEW: http://review.gluster.org/13902 (quota: script to test EACCES and quota usage with untar) posted (#2) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 32 Vijay Bellur 2016-04-05 11:32:03 UTC
REVIEW: http://review.gluster.org/13894 (posix_acl: skip acl_permits for special clients) posted (#7) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 33 Vijay Bellur 2016-04-06 08:12:43 UTC
COMMIT: http://review.gluster.org/13857 committed in master by Raghavendra G (rgowdapp) 
------
commit 81955d8aaee8a2c7bf6370970926bc7b403a6efa
Author: vmallika <vmallika>
Date:   Wed Mar 30 20:16:32 2016 +0530

    marker: build_ancestry in marker
    
    * quota-enforcer doesn't execute build_ancestry in the below
      code path
        1) Special client (PID < 0)
        2) unlink
        3) rename within the same directory
        4) link within the same directory
    
        In these cases, marker accounting can fail as parent not found.
        We need to build_ancestry in marker if it doesn't find parent
        during update txn
    
    Change-Id: Idb7a2906500647baa6d183ba859b15e34769029c
    BUG: 1320818
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/13857
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.com>

Comment 34 Vijay Bellur 2016-04-06 08:14:11 UTC
REVIEW: http://review.gluster.org/13902 (quota: script to test EACCES and quota usage with untar) posted (#3) for review on master by Raghavendra G (rgowdapp)

Comment 35 Vijay Bellur 2016-04-06 08:56:16 UTC
REVIEW: http://review.gluster.org/13894 (posix_acl: skip acl_permits for special clients) posted (#8) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 36 Vijay Bellur 2016-04-06 12:08:36 UTC
COMMIT: http://review.gluster.org/13894 committed in master by Vijaikumar Mallikarjuna (vmallika) 
------
commit 1546572b7d46c1aee906608140c843160a529937
Author: vmallika <vmallika>
Date:   Sat Apr 2 12:02:22 2016 +0530

    posix_acl: skip acl_permits for special clients
    
    Change-Id: I3f478b7e4ecab517200f50eb09f65a634c029437
    BUG: 1320818
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/13894
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: jiffin tony Thottan <jthottan>
    CentOS-regression: Gluster Build System <jenkins.com>

Comment 37 Vijay Bellur 2016-04-06 12:23:23 UTC
COMMIT: http://review.gluster.org/13892 committed in master by Vijaikumar Mallikarjuna (vmallika) 
------
commit 34d1c81dc4c730eb0cd2b8fd756b8bffed655e9c
Author: vmallika <vmallika>
Date:   Sat Apr 2 08:57:00 2016 +0530

    marker: optimize mq_update_dirty_inode_task
    
    In function mq_update_dirty_inode_task we do readdirp
    on a dirty directory and for each entry we again do a
    lookup to fetch the contribution xattr.
    We can fetch this contribution as part of readdirp
    
    Change-Id: I766593c0dba793f1ab3b43625acce1c7d9af8d7f
    BUG: 1320818
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/13892
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Manikandan Selvaganesh <mselvaga>

Comment 39 Niels de Vos 2016-06-16 14:01:30 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

