Description of problem:

stat /rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160220.bz2
  File: `/rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160220.bz2'
  Size: 21429116        Blocks: 4608       IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 57987500041  Links: 2
Access: (0644/-rw-r--r--)  Uid: (700006530/sto_auto)   Gid: (700006530/sto_auto)
Access: 2016-02-20 00:02:33.000000000 +0000
Modify: 2016-03-01 16:52:29.340488231 +0000
Change: 2016-03-01 16:52:29.340488231 +0000

[root@fc-rhs-153158-005 ~]# getfattr -d -m. -ehex /rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160220.bz2
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160220.bz2
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.v001-unrep-client-3=0x000000170000000000000000
trusted.bit-rot.version=0x020000000000000056c203e60000944d
trusted.gfid=0x7b43e48f154748678c2efba696312a82
trusted.glusterfs.quota.156014fd-912a-43a2-a028-6eaf6e244619.contri=0x000000000146fc000000000000000001
trusted.pgfid.156014fd-912a-43a2-a028-6eaf6e244619=0x00000001

This is the stat and getfattr output of a file from the backend bricks. As the stat output below shows, the mtimes on the two replicas differ not by seconds but by days.

fc-rhs-153158-005.dc.gs.com
  File: `/rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160219.bz2'
  Size: 19232700        Blocks: 37568      IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 58139335702  Links: 2
Access: (0644/-rw-r--r--)  Uid: (700006530/sto_auto)   Gid: (700006530/sto_auto)
Access: 2016-02-19 00:02:58.000000000 +0000
Modify: 2016-02-19 00:01:44.000000000 +0000
Change: 2016-02-19 00:02:58.704982079 +0000

========================================

fc-rhs-153158-006.dc.gs.com
  File: `/rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160219.bz2'
  Size: 19232700        Blocks: 29184      IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 57983570484  Links: 2
Access: (0644/-rw-r--r--)  Uid: (700006530/sto_auto)   Gid: (700006530/sto_auto)
Access: 2016-02-19 00:02:58.000000000 +0000
Modify: 2016-03-10 18:07:43.108803679 +0000
Change: 2016-03-10 18:07:43.108803679 +0000

getfattr output:

[root@fc-rhs-153158-006 glusterfs]# getfattr -d -m . -e hex /rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160219.bz2
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160219.bz2
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.v001-unrep-client-3=0x000000130000000000000000
trusted.bit-rot.version=0xac2300000000000056c07f21000d507a
trusted.gfid=0x057849f191504bccb229da281e4da5fc
trusted.glusterfs.quota.156014fd-912a-43a2-a028-6eaf6e244619.contri=0x0000000000e400000000000000000001
trusted.pgfid.156014fd-912a-43a2-a028-6eaf6e244619=0x00000001

[root@fc-rhs-153158-005 tmp]# getfattr -d -m . -e hex /rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160219.bz2
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160219.bz2
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.v001-unrep-client-3=0x000000150000000000000000
trusted.bit-rot.version=0x020000000000000056c203e60000944d
trusted.gfid=0x057849f191504bccb229da281e4da5fc
trusted.glusterfs.quota.156014fd-912a-43a2-a028-6eaf6e244619.contri=0x00000000012578000000000000000001
trusted.pgfid.156014fd-912a-43a2-a028-6eaf6e244619=0x00000001
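To gather this state from both replicas in one pass, something like the following can be used. This is a minimal sketch: the ssh loop is illustrative, and the decoding assumes the standard AFR changelog layout of three big-endian 32-bit counters (pending data, metadata, and entry operations, in that order).

F=/rhs/bricksdb/v001-unrep/1002969812/atmos-logs/Americas/cdatp2621n06/access_log-20160219.bz2
for h in fc-rhs-153158-005.dc.gs.com fc-rhs-153158-006.dc.gs.com; do
    echo "== $h =="
    ssh "$h" "stat -c 'mtime: %y  size: %s' $F; getfattr --absolute-names -n trusted.afr.v001-unrep-client-3 -e hex $F"
done

# Decode a trusted.afr.* value (hex digits after the 0x prefix):
x=000000150000000000000000
echo "data=$((16#${x:0:8})) metadata=$((16#${x:8:8})) entry=$((16#${x:16:8}))"

For the 005 copy above this prints data=21, metadata=0, entry=0, i.e. 21 pending data operations recorded against the subvolume v001-unrep-client-3.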
Gluster volume info and gluster volume status:

gluster volume info

Volume Name: ctdblock
Type: Replicate
Volume ID: 97f1a24d-9603-4be3-9c15-026af1d18aa7
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: fc-rhs-153158-003.dc.gs.com:/rhs/bricksdb/ctdblock
Brick2: fc-rhs-153158-006.dc.gs.com:/rhs/bricksdb/ctdblock
Brick3: fc-rhs-153158-011.dc.gs.com:/rhs/bricksdb/ctdblock
Brick4: fc-rhs-153158-014.dc.gs.com:/rhs/bricksdb/ctdblock
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: firmwide
Type: Distributed-Replicate
Volume ID: 1c346593-ef86-415c-bbb0-95b4006e6b31
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: fc-rhs-153158-003.dc.gs.com:/rhs/bricksdb/firmwide
Brick2: fc-rhs-153158-004.dc.gs.com:/rhs/bricksdb/firmwide
Brick3: fc-rhs-153158-005.dc.gs.com:/rhs/bricksdb/firmwide
Brick4: fc-rhs-153158-006.dc.gs.com:/rhs/bricksdb/firmwide
Brick5: fc-rhs-153158-011.dc.gs.com:/rhs/bricksdb/firmwide
Brick6: fc-rhs-153158-012.dc.gs.com:/rhs/bricksdb/firmwide
Brick7: fc-rhs-153158-013.dc.gs.com:/rhs/bricksdb/firmwide
Brick8: fc-rhs-153158-014.dc.gs.com:/rhs/bricksdb/firmwide
Brick9: fc-rhs-153158-019.dc.gs.com:/rhs/bricksdb/firmwide
Brick10: fc-rhs-153158-020.dc.gs.com:/rhs/bricksdb/firmwide
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on
network.ping-timeout: 20
cluster.quorum-type: auto
nfs.export-volumes: on
nfs.export-dirs: on
nfs.addr-namelookup: on

Volume Name: management
Type: Distributed-Replicate
Volume ID: 773e880e-626a-4b95-ad68-9370631344fb
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: fc-rhs-153158-003.dc.gs.com:/rhs/bricksdb/management
Brick2: fc-rhs-153158-004.dc.gs.com:/rhs/bricksdb/management
Brick3: fc-rhs-153158-005.dc.gs.com:/rhs/bricksdb/management
Brick4: fc-rhs-153158-006.dc.gs.com:/rhs/bricksdb/management
Brick5: fc-rhs-153158-011.dc.gs.com:/rhs/bricksdb/management
Brick6: fc-rhs-153158-012.dc.gs.com:/rhs/bricksdb/management
Brick7: fc-rhs-153158-013.dc.gs.com:/rhs/bricksdb/management
Brick8: fc-rhs-153158-014.dc.gs.com:/rhs/bricksdb/management
Brick9: fc-rhs-153158-019.dc.gs.com:/rhs/bricksdb/management
Brick10: fc-rhs-153158-020.dc.gs.com:/rhs/bricksdb/management
Options Reconfigured:
network.ping-timeout: 20
cluster.quorum-type: auto
nfs.export-volumes: on
nfs.export-dirs: on
nfs.addr-namelookup: on

Volume Name: v001-unrep
Type: Distributed-Replicate
Volume ID: f2ada7f6-8e34-48f6-ab13-8d93e71ca1d9
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: fc-rhs-153158-003.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick2: fc-rhs-153158-004.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick3: fc-rhs-153158-005.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick4: fc-rhs-153158-006.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick5: fc-rhs-153158-011.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick6: fc-rhs-153158-012.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick7: fc-rhs-153158-013.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick8: fc-rhs-153158-014.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick9: fc-rhs-153158-019.dc.gs.com:/rhs/bricksdb/v001-unrep
Brick10: fc-rhs-153158-020.dc.gs.com:/rhs/bricksdb/v001-unrep
Options Reconfigured:
diagnostics.client-log-level: INFO
diagnostics.brick-log-level: INFO
performance.quick-read: on
performance.io-cache: on
nfs.addr-namelookup: on
cluster.quorum-type: auto
nfs.export-dirs: on
nfs.export-volumes: on
network.ping-timeout: 20
features.quota: on
features.barrier: off
features.inode-quota: on
features.quota-deem-statfs: on

______________________________________________________________________________

[root@fc-rhs-153158-006 ~]# gluster volume status

Status of volume: ctdblock
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick fc-rhs-153158-003.dc.gs.com:/rhs/bricksdb/ctdblock    49157     0          Y       27764
Brick fc-rhs-153158-006.dc.gs.com:/rhs/bricksdb/ctdblock    49153     0          Y       36031
Brick fc-rhs-153158-011.dc.gs.com:/rhs/bricksdb/ctdblock    49158     0          Y       36952
Brick fc-rhs-153158-014.dc.gs.com:/rhs/bricksdb/ctdblock    49154     0          Y       7167
NFS Server on localhost                                     2049      0          Y       10361
Self-heal Daemon on localhost                               N/A       N/A        Y       10375
NFS Server on fc-rhs-153158-014.dc.gs.com                   2049      0          Y       12631
Self-heal Daemon on fc-rhs-153158-014.dc.gs.com             N/A       N/A        Y       12643
NFS Server on fc-rhs-153158-004.dc.gs.com                   2049      0          Y       7883
Self-heal Daemon on fc-rhs-153158-004.dc.gs.com             N/A       N/A        Y       7895
NFS Server on fc-rhs-153158-011.dc.gs.com                   2049      0          Y       7107
Self-heal Daemon on fc-rhs-153158-011.dc.gs.com             N/A       N/A        Y       7119
NFS Server on fc-rhs-153158-003.dc.gs.com                   2049      0          Y       27478
Self-heal Daemon on fc-rhs-153158-003.dc.gs.com             N/A       N/A        Y       27492
NFS Server on fc-rhs-153158-012.dc.gs.com                   2049      0          Y       20902
Self-heal Daemon on fc-rhs-153158-012.dc.gs.com             N/A       N/A        Y       20915
NFS Server on fc-rhs-153158-013.dc.gs.com                   2049      0          Y       12192
Self-heal Daemon on fc-rhs-153158-013.dc.gs.com             N/A       N/A        Y       12210
NFS Server on fc-rhs-153158-005.dc.gs.com                   2049      0          Y       6566
Self-heal Daemon on fc-rhs-153158-005.dc.gs.com             N/A       N/A        Y       6664
NFS Server on fc-rhs-153158-019.dc.gs.com                   2049      0          Y       19962
Self-heal Daemon on fc-rhs-153158-019.dc.gs.com             N/A       N/A        Y       19974
NFS Server on fc-rhs-153158-020.dc.gs.com                   2049      0          Y       13641
Self-heal Daemon on fc-rhs-153158-020.dc.gs.com             N/A       N/A        Y       13653

Task Status of Volume ctdblock
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: firmwide
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick fc-rhs-153158-003.dc.gs.com:/rhs/bricksdb/firmwide    49159     0          Y       27459
Brick fc-rhs-153158-004.dc.gs.com:/rhs/bricksdb/firmwide    49154     0          Y       7864
Brick fc-rhs-153158-005.dc.gs.com:/rhs/bricksdb/firmwide    49154     0          Y       6542
Brick fc-rhs-153158-006.dc.gs.com:/rhs/bricksdb/firmwide    49155     0          Y       10342
Brick fc-rhs-153158-011.dc.gs.com:/rhs/bricksdb/firmwide    49157     0          Y       6998
Brick fc-rhs-153158-012.dc.gs.com:/rhs/bricksdb/firmwide    49155     0          Y       20879
Brick fc-rhs-153158-013.dc.gs.com:/rhs/bricksdb/firmwide    49152     0          Y       12169
Brick fc-rhs-153158-014.dc.gs.com:/rhs/bricksdb/firmwide    49156     0          Y       12523
Brick fc-rhs-153158-019.dc.gs.com:/rhs/bricksdb/firmwide    49152     0          Y       19938
Brick fc-rhs-153158-020.dc.gs.com:/rhs/bricksdb/firmwide    49156     0          Y       13621
NFS Server on localhost                                     2049      0          Y       10361
Self-heal Daemon on localhost                               N/A       N/A        Y       10375
Quota Daemon on localhost                                   N/A       N/A        Y       10383
NFS Server on fc-rhs-153158-014.dc.gs.com                   2049      0          Y       12631
Self-heal Daemon on fc-rhs-153158-014.dc.gs.com             N/A       N/A        Y       12643
Quota Daemon on fc-rhs-153158-014.dc.gs.com                 N/A       N/A        Y       12651
NFS Server on fc-rhs-153158-004.dc.gs.com                   2049      0          Y       7883
Self-heal Daemon on fc-rhs-153158-004.dc.gs.com             N/A       N/A        Y       7895
Quota Daemon on fc-rhs-153158-004.dc.gs.com                 N/A       N/A        Y       7903
NFS Server on fc-rhs-153158-003.dc.gs.com                   2049      0          Y       27478
Self-heal Daemon on fc-rhs-153158-003.dc.gs.com             N/A       N/A        Y       27492
Quota Daemon on fc-rhs-153158-003.dc.gs.com                 N/A       N/A        Y       27500
NFS Server on fc-rhs-153158-011.dc.gs.com                   2049      0          Y       7107
Self-heal Daemon on fc-rhs-153158-011.dc.gs.com             N/A       N/A        Y       7119
Quota Daemon on fc-rhs-153158-011.dc.gs.com                 N/A       N/A        Y       7127
NFS Server on fc-rhs-153158-012.dc.gs.com                   2049      0          Y       20902
Self-heal Daemon on fc-rhs-153158-012.dc.gs.com             N/A       N/A        Y       20915
Quota Daemon on fc-rhs-153158-012.dc.gs.com                 N/A       N/A        Y       20923
NFS Server on fc-rhs-153158-005.dc.gs.com                   2049      0          Y       6566
Self-heal Daemon on fc-rhs-153158-005.dc.gs.com             N/A       N/A        Y       6664
Quota Daemon on fc-rhs-153158-005.dc.gs.com                 N/A       N/A        Y       6672
NFS Server on fc-rhs-153158-013.dc.gs.com                   2049      0          Y       12192
Self-heal Daemon on fc-rhs-153158-013.dc.gs.com             N/A       N/A        Y       12210
Quota Daemon on fc-rhs-153158-013.dc.gs.com                 N/A       N/A        Y       12218
NFS Server on fc-rhs-153158-020.dc.gs.com                   2049      0          Y       13641
Self-heal Daemon on fc-rhs-153158-020.dc.gs.com             N/A       N/A        Y       13653
Quota Daemon on fc-rhs-153158-020.dc.gs.com                 N/A       N/A        Y       13661
NFS Server on fc-rhs-153158-019.dc.gs.com                   2049      0          Y       19962
Self-heal Daemon on fc-rhs-153158-019.dc.gs.com             N/A       N/A        Y       19974
Quota Daemon on fc-rhs-153158-019.dc.gs.com                 N/A       N/A        Y       19982

Task Status of Volume firmwide
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: management
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick fc-rhs-153158-003.dc.gs.com:/rhs/bricksdb/management  49158     0          Y       41837
Brick fc-rhs-153158-004.dc.gs.com:/rhs/bricksdb/management  49153     0          Y       48205
Brick fc-rhs-153158-005.dc.gs.com:/rhs/bricksdb/management  49153     0          Y       28942
Brick fc-rhs-153158-006.dc.gs.com:/rhs/bricksdb/management  49154     0          Y       46671
Brick fc-rhs-153158-011.dc.gs.com:/rhs/bricksdb/management  49159     0          Y       17442
Brick fc-rhs-153158-012.dc.gs.com:/rhs/bricksdb/management  49154     0          Y       16306
Brick fc-rhs-153158-013.dc.gs.com:/rhs/bricksdb/management  49154     0          Y       11909
Brick fc-rhs-153158-014.dc.gs.com:/rhs/bricksdb/management  49155     0          Y       14601
Brick fc-rhs-153158-019.dc.gs.com:/rhs/bricksdb/management  49155     0          Y       6873
Brick fc-rhs-153158-020.dc.gs.com:/rhs/bricksdb/management  49155     0          Y       3407
NFS Server on localhost                                     2049      0          Y       10361
Self-heal Daemon on localhost                               N/A       N/A        Y       10375
NFS Server on fc-rhs-153158-014.dc.gs.com                   2049      0          Y       12631
Self-heal Daemon on fc-rhs-153158-014.dc.gs.com             N/A       N/A        Y       12643
NFS Server on fc-rhs-153158-004.dc.gs.com                   2049      0          Y       7883
Self-heal Daemon on fc-rhs-153158-004.dc.gs.com             N/A       N/A        Y       7895
NFS Server on fc-rhs-153158-003.dc.gs.com                   2049      0          Y       27478
Self-heal Daemon on fc-rhs-153158-003.dc.gs.com             N/A       N/A        Y       27492
NFS Server on fc-rhs-153158-011.dc.gs.com                   2049      0          Y       7107
Self-heal Daemon on fc-rhs-153158-011.dc.gs.com             N/A       N/A        Y       7119
NFS Server on fc-rhs-153158-005.dc.gs.com                   2049      0          Y       6566
Self-heal Daemon on fc-rhs-153158-005.dc.gs.com             N/A       N/A        Y       6664
NFS Server on fc-rhs-153158-020.dc.gs.com                   2049      0          Y       13641
Self-heal Daemon on fc-rhs-153158-020.dc.gs.com             N/A       N/A        Y       13653
NFS Server on fc-rhs-153158-013.dc.gs.com                   2049      0          Y       12192
Self-heal Daemon on fc-rhs-153158-013.dc.gs.com             N/A       N/A        Y       12210
NFS Server on fc-rhs-153158-019.dc.gs.com                   2049      0          Y       19962
Self-heal Daemon on fc-rhs-153158-019.dc.gs.com             N/A       N/A        Y       19974
NFS Server on fc-rhs-153158-012.dc.gs.com                   2049      0          Y       20902
Self-heal Daemon on fc-rhs-153158-012.dc.gs.com             N/A       N/A        Y       20915

Task Status of Volume management
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: v001-unrep
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick fc-rhs-153158-003.dc.gs.com:/rhs/bricksdb/v001-unrep  49156     0          Y       19812
Brick fc-rhs-153158-004.dc.gs.com:/rhs/bricksdb/v001-unrep  49152     0          Y       25368
Brick fc-rhs-153158-005.dc.gs.com:/rhs/bricksdb/v001-unrep  49152     0          Y       28948
Brick fc-rhs-153158-006.dc.gs.com:/rhs/bricksdb/v001-unrep  49152     0          Y       27547
Brick fc-rhs-153158-011.dc.gs.com:/rhs/bricksdb/v001-unrep  49155     0          Y       5308
Brick fc-rhs-153158-012.dc.gs.com:/rhs/bricksdb/v001-unrep  49152     0          Y       47222
Brick fc-rhs-153158-013.dc.gs.com:/rhs/bricksdb/v001-unrep  49153     0          Y       29571
Brick fc-rhs-153158-014.dc.gs.com:/rhs/bricksdb/v001-unrep  49152     0          Y       47448
Brick fc-rhs-153158-019.dc.gs.com:/rhs/bricksdb/v001-unrep  49154     0          Y       39665
Brick fc-rhs-153158-020.dc.gs.com:/rhs/bricksdb/v001-unrep  49153     0          Y       3413
NFS Server on localhost                                     2049      0          Y       10361
Self-heal Daemon on localhost                               N/A       N/A        Y       10375
Quota Daemon on localhost                                   N/A       N/A        Y       10383
NFS Server on fc-rhs-153158-014.dc.gs.com                   2049      0          Y       12631
Self-heal Daemon on fc-rhs-153158-014.dc.gs.com             N/A       N/A        Y       12643
Quota Daemon on fc-rhs-153158-014.dc.gs.com                 N/A       N/A        Y       12651
NFS Server on fc-rhs-153158-004.dc.gs.com                   2049      0          Y       7883
Self-heal Daemon on fc-rhs-153158-004.dc.gs.com             N/A       N/A        Y       7895
Quota Daemon on fc-rhs-153158-004.dc.gs.com                 N/A       N/A        Y       7903
NFS Server on fc-rhs-153158-003.dc.gs.com                   2049      0          Y       27478
Self-heal Daemon on fc-rhs-153158-003.dc.gs.com             N/A       N/A        Y       27492
Quota Daemon on fc-rhs-153158-003.dc.gs.com                 N/A       N/A        Y       27500
NFS Server on fc-rhs-153158-011.dc.gs.com                   2049      0          Y       7107
Self-heal Daemon on fc-rhs-153158-011.dc.gs.com             N/A       N/A        Y       7119
Quota Daemon on fc-rhs-153158-011.dc.gs.com                 N/A       N/A        Y       7127
NFS Server on fc-rhs-153158-005.dc.gs.com                   2049      0          Y       6566
Self-heal Daemon on fc-rhs-153158-005.dc.gs.com             N/A       N/A        Y       6664
Quota Daemon on fc-rhs-153158-005.dc.gs.com                 N/A       N/A        Y       6672
NFS Server on fc-rhs-153158-020.dc.gs.com                   2049      0          Y       13641
Self-heal Daemon on fc-rhs-153158-020.dc.gs.com             N/A       N/A        Y       13653
Quota Daemon on fc-rhs-153158-020.dc.gs.com                 N/A       N/A        Y       13661
NFS Server on fc-rhs-153158-013.dc.gs.com                   2049      0          Y       12192
Self-heal Daemon on fc-rhs-153158-013.dc.gs.com             N/A       N/A        Y       12210
Quota Daemon on fc-rhs-153158-013.dc.gs.com                 N/A       N/A        Y       12218
NFS Server on fc-rhs-153158-019.dc.gs.com                   2049      0          Y       19962
Self-heal Daemon on fc-rhs-153158-019.dc.gs.com             N/A       N/A        Y       19974
Quota Daemon on fc-rhs-153158-019.dc.gs.com                 N/A       N/A        Y       19982
NFS Server on fc-rhs-153158-012.dc.gs.com                   2049      0          Y       20902
Self-heal Daemon on fc-rhs-153158-012.dc.gs.com             N/A       N/A        Y       20915
Quota Daemon on fc-rhs-153158-012.dc.gs.com                 N/A       N/A        Y       20923

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Can this be marked as Modified based on comment 2?
Upstream mainline: http://review.gluster.org/13782
Upstream 3.8: available through branching

The fix is also available in rhgs-3.2.0 as part of the rebase to GlusterFS 3.8.4.
QATP:
=====
1) Create a 2x2 volume and mount it over gNFS on two different clients (c1, c2).
2) Create a directory dir1 from c1.
3) Stat dir1 from both c1 and c2.
4) The stat is fetched and collated from all 4 bricks, so note down the stat info, check the stat of dir1 on the backend bricks as well, and try to correlate them.
5) Under dir1, create 10 files using touch and note down which file falls into which replica pair; assume file f9 falls on brick3/brick4. It can be that c1 fetches most of its stat info from, say, n3 and c2 from n4 (stat may even display the atime from n1 and the ctime from some other node, etc.).
6) Bring down brick4 and make modifications to file f9.
7) Stat dir1 again and note where the stat info comes from for both clients.
Expected behavior: the stat info must not fetch anything from the down bricks.
8) Bring the brick back online. The stat info must still not be fetched from the brick that was down. Retry by bringing down, one after another, each brick that hosts the directory (i.e., including its replica); stat must fetch from the right brick each time. (A command-level sketch of steps 6-8 follows the package list below.)

Moving to verified, as it works as expected now.

[root@dhcp35-37 ~]# rpm -qa|grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-server-3.8.4-3.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-3.el7rhgs.x86_64
glusterfs-api-3.8.4-3.el7rhgs.x86_64
glusterfs-libs-3.8.4-3.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-3.el7rhgs.x86_64
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-cli-3.8.4-3.el7rhgs.x86_64
python-gluster-3.8.4-3.el7rhgs.noarch
glusterfs-devel-3.8.4-3.el7rhgs.x86_64
glusterfs-events-3.8.4-3.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-3.el7rhgs.x86_64
glusterfs-fuse-3.8.4-3.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-3.el7rhgs.x86_64
glusterfs-rdma-3.8.4-3.el7rhgs.x86_64
gluster-nagios-addons-0.2.7-1.el7rhgs.x86_64
glusterfs-3.8.4-3.el7rhgs.x86_64
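For reference, steps 6-8 above map roughly onto the commands below. This is a sketch only: the volume name testvol, the mount points /mnt/c1, /mnt/c2, and /mnt/fuse, and the brick pid are hypothetical placeholders, not values from this setup.

# Step 5 aid (on a FUSE mount of the same volume, not over gNFS):
# the pathinfo virtual xattr shows which bricks actually hold f9.
getfattr -n trusted.glusterfs.pathinfo /mnt/fuse/dir1/f9

# Step 6: bring down one replica brick, then modify the file.
gluster volume status testvol        # note the Pid column of the brick to stop
kill -15 <brick4-pid>                # hypothetical pid; takes brick4 offline
echo extra >> /mnt/c1/dir1/f9

# Step 7: stat from both clients; times must come only from the up bricks.
stat /mnt/c1/dir1; stat /mnt/c2/dir1

# Step 8: restart the downed brick and re-check from both clients.
gluster volume start testvol force   # force-start brings offline bricks back
stat /mnt/c1/dir1; stat /mnt/c2/dir1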
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.