Bug 1330901 - dht must avoid fresh lookups when a single replica pair goes offline
Summary: dht must avoid fresh lookups when a single replica pair goes offline
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Bug Updates Notification Mailing List
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On: 1281230 1283972
Blocks: 1311817
 
Reported: 2016-04-27 09:31 UTC by Ravishankar N
Modified: 2016-06-23 05:20 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.7.9-3
Doc Type: Bug Fix
Doc Text:
Clone Of: 1283972
Environment:
Last Closed: 2016-06-23 05:20:00 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1240 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 Update 3 2016-06-23 08:51:28 UTC

Description Ravishankar N 2016-04-27 09:31:49 UTC
+++ This bug was initially created as a clone of Bug #1283972 +++

+++ This bug was initially created as a clone of Bug #1281230 +++

Description of problem:
Currently, even if a single replica pair goes down, fresh lookups are issued for all files and directories even though there is no layout change. DHT must avoid fresh lookups when bricks go down.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a 2x2 dist-rep volume, mount the volume, and create a few directories.
2. Bring one of the replica pairs down.
3. Perform lookups on the directories.

Actual results:
Fresh lookups on all the directories

Expected results:
Fresh lookups must be avoided; the xattrs should be read from the other replica pair.
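
To make the reported behaviour concrete, here is a minimal, self-contained C model (all identifiers are invented for illustration; this is not GlusterFS source). Any child-down notification is treated as a modification and drops the cached layout, so the next access triggers a fresh lookup even though the on-disk layout never changed:

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy model of DHT's per-directory layout cache. */
    struct dir_cache { const char *path; bool layout_valid; };

    /* Old behaviour: any brick/replica-pair outage is reported as a
     * "modified" event, so the cached layout is thrown away even though
     * the layout on disk did not change. */
    static void on_child_event(struct dir_cache *d)
    {
        d->layout_valid = false;
    }

    static void lookup(struct dir_cache *d)
    {
        if (!d->layout_valid) {
            printf("Calling fresh lookup for %s\n", d->path);
            d->layout_valid = true;   /* layout xattrs re-read from bricks */
        } else {
            printf("Revalidating %s from cache\n", d->path);
        }
    }

    int main(void)
    {
        struct dir_cache dir1 = { "/dir1", true };
        lookup(&dir1);          /* served from cache */
        on_child_event(&dir1);  /* a replica pair goes down */
        lookup(&dir1);          /* fresh lookup, although layout is unchanged */
        return 0;
    }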

--- Additional comment from Vijay Bellur on 2015-11-26 12:31:17 EST ---

REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#1) for review on release-3.7 by Sakshi Bansal

--- Additional comment from Vijay Bellur on 2016-03-06 23:47:57 EST ---

REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#2) for review on release-3.7 by Sakshi Bansal

--- Additional comment from Mike McCune on 2016-03-28 19:31:34 EDT ---

This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.

--- Additional comment from Vijay Bellur on 2016-04-07 02:33:14 EDT ---

REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#3) for review on release-3.7 by Sakshi Bansal

--- Additional comment from Vijay Bellur on 2016-04-27 03:52:28 EDT ---

COMMIT: http://review.gluster.org/12767 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit fa78b755e9c58328c1df4ef1bfeb752d47534a4a
Author: Sakshi Bansal <sabansal>
Date:   Thu Nov 12 12:28:53 2015 +0530

    afr: replica pair going offline does not require CHILD_MODIFIED event
    
    As part of the CHILD_MODIFIED event, DHT forgets the current layout and
    performs a fresh lookup. However, this is not required when a replica pair
    goes offline, since the xattrs can be read from the other replica pairs.
    Hence, a different event is now set to handle a replica pair going down.
    
    > Backport of http://review.gluster.org/#/c/12573/
    
    > Change-Id: I5ede2a6398e63f34f89f9d3c9bc30598974402e3
    > BUG: 1281230
    > Signed-off-by: Sakshi Bansal <sabansal>
    > Reviewed-on: http://review.gluster.org/12573
    > Reviewed-by: Ravishankar N <ravishankar>
    > Reviewed-by: Susant Palai <spalai>
    > Tested-by: NetBSD Build System <jenkins.org>
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Jeff Darcy <jdarcy>
    
    Change-Id: Ida30240d1ad8b8730af7ab50b129dfb05264fdf9
    BUG: 1283972
    Signed-off-by: Sakshi Bansal <sabansal>
    Reviewed-on: http://review.gluster.org/12767
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
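
In the same toy terms, a rough sketch of the shape of the fix (event names and helpers here are simplified stand-ins, not the actual identifiers; see the Gerrit reviews above for the real patch): AFR reports a replica pair going offline with a weaker event when surviving copies can still serve the xattrs, and DHT drops its cached layout only on events that imply a genuine modification:

    #include <stdio.h>
    #include <stdbool.h>

    /* Sketch only: event names are simplified stand-ins for the real
     * GlusterFS notify plumbing changed by reviews 12573/12767. */
    enum gf_event { EV_CHILD_MODIFIED, EV_REPLICA_PAIR_DOWN };

    struct dir_cache { const char *path; bool layout_valid; };

    /* AFR side: while at least one copy in the replica set is still up,
     * report the outage with the weaker event; xattrs remain readable. */
    static enum gf_event afr_event_for_child_down(int live_children)
    {
        return (live_children > 0) ? EV_REPLICA_PAIR_DOWN : EV_CHILD_MODIFIED;
    }

    /* DHT side: only a genuine modification invalidates the cached layout,
     * so a plain brick outage no longer forces fresh lookups. */
    static void dht_notify(struct dir_cache *d, enum gf_event ev)
    {
        if (ev == EV_CHILD_MODIFIED)
            d->layout_valid = false;
    }

    int main(void)
    {
        struct dir_cache dir1 = { "/dir1", true };

        /* One brick of the pair dies, but its partner is still up. */
        dht_notify(&dir1, afr_event_for_child_down(1));
        printf("layout still cached for %s: %s\n", dir1.path,
               dir1.layout_valid ? "yes" : "no");   /* prints "yes" */
        return 0;
    }

Run, the model keeps the layout cached across the simulated outage, which matches the "no new fresh lookup" behaviour verified in the QA logs below.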

Comment 6 Sweta Anandpara 2016-05-06 05:11:04 UTC
Tested and verified this on the build 3.7.9-3

Steps to reproduce/verify: 

1) Create a dist-rep volume and set 'diagnostics.client-log-level' to DEBUG.
2) Mount it over FUSE (or NFS) and create a directory, say dir1.
3) Check the client log for the message "Calling fresh lookup for /dir1".
4) Perform "ls dir1" and check again how many times the message "Calling fresh lookup for /dir1" appears; no new 'fresh lookup' message should be seen.
5) Bring down one replica pair by killing the brick processes.
6) Perform "ls dir1".
7) Repeat step 4 and verify that no new 'fresh lookup' message is seen in the logs.

Reproduced the issue on an older setup, and verified the fix on the newest build. The expected behaviour was seen: no 'fresh lookup' took place when the replica bricks were down.

Had a 4x2 volume and brought 4 replica bricks down one by one. Repeated the same steps on a replica-3 volume and brought down 2 of the replica bricks. No fresh lookups were seen in the logs. When the bricks were brought back up using 'gluster v start <volname> force', a single fresh lookup was seen for the accessed directory, as expected.

Moving this BZ to verified in 3.1.3. Detailed logs are pasted below.

[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v create dr2 replica 2 10.70.35.210:/bricks/brick1/dr2 10.70.35.85:/bricks/brick1/dr2 10.70.35.137:/bricks/brick1/dr2 10.70.35.13:/bricks/brick1/dr2 10.70.35.210:/bricks/brick2/dr2 10.70.35.85:/bricks/brick2/dr2 10.70.35.137:/bricks/brick2/dr2 10.70.35.13:/bricks/brick2/dr2 
volume create: dr2: success: please start the volume to access data
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v info dr2
 
Volume Name: dr2
Type: Distributed-Replicate
Volume ID: d01e36c3-03b7-4f0e-a7f3-090b12ec2528
Status: Created
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.35.210:/bricks/brick1/dr2
Brick2: 10.70.35.85:/bricks/brick1/dr2
Brick3: 10.70.35.137:/bricks/brick1/dr2
Brick4: 10.70.35.13:/bricks/brick1/dr2
Brick5: 10.70.35.210:/bricks/brick2/dr2
Brick6: 10.70.35.85:/bricks/brick2/dr2
Brick7: 10.70.35.137:/bricks/brick2/dr2
Brick8: 10.70.35.13:/bricks/brick2/dr2
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v start dr2
volume start: dr2: success
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v create dr3 replica 3  10.70.35.85:/bricks/brick2/dr3 10.70.35.137:/bricks/brick2/dr3 10.70.35.13:/bricks/brick2/dr3  10.70.35.85:/bricks/brick3/dr3 10.70.35.137:/bricks/brick3/dr3 10.70.35.13:/bricks/brick3/dr3 10.70.35.85:/bricks/brick0/dr3 10.70.35.137:/bricks/brick0/dr3 10.70.35.13:/bricks/brick0/dr3
volume create: dr3: failed: Staging failed on 10.70.35.137. Error: /bricks/brick2/dr3 is already part of a volume
Staging failed on 10.70.35.13. Error: /bricks/brick2/dr3 is already part of a volume
Staging failed on 10.70.35.85. Error: /bricks/brick2/dr3 is already part of a volume
[root@dhcp35-210 ~]# gluster v create dr3 replica 3  10.70.35.85:/bricks/brick2/dr3 10.70.35.137:/bricks/brick2/dr3 10.70.35.13:/bricks/brick2/dr3  10.70.35.85:/bricks/brick3/dr3 10.70.35.137:/bricks/brick3/dr3 10.70.35.13:/bricks/brick3/dr3 10.70.35.85:/bricks/brick0/dr3 10.70.35.137:/bricks/brick0/dr3 10.70.35.13:/bricks/brick0/dr3 force
volume create: dr3: success: please start the volume to access data
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v info dr3
 
Volume Name: dr3
Type: Distributed-Replicate
Volume ID: c1aef5dc-6743-4060-a03e-388b4961d3fa
Status: Created
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.35.85:/bricks/brick2/dr3
Brick2: 10.70.35.137:/bricks/brick2/dr3
Brick3: 10.70.35.13:/bricks/brick2/dr3
Brick4: 10.70.35.85:/bricks/brick3/dr3
Brick5: 10.70.35.137:/bricks/brick3/dr3
Brick6: 10.70.35.13:/bricks/brick3/dr3
Brick7: 10.70.35.85:/bricks/brick0/dr3
Brick8: 10.70.35.137:/bricks/brick0/dr3
Brick9: 10.70.35.13:/bricks/brick0/dr3
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v start dr3
volume start: dr3: success
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v get dr2 diagnostics.client-log-level
Option                                  Value                                   
------                                  -----                                   
diagnostics.client-log-level            INFO                                    
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v set dr2 diagnostics.client-log-level DEBUG 
volume set: success
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2       49162     0          Y       25900
Brick 10.70.35.85:/bricks/brick1/dr2        49154     0          Y       32357
Brick 10.70.35.137:/bricks/brick1/dr2       49154     0          Y       758  
Brick 10.70.35.13:/bricks/brick1/dr2        49154     0          Y       29775
Brick 10.70.35.210:/bricks/brick2/dr2       49163     0          Y       25919
Brick 10.70.35.85:/bricks/brick2/dr2        49155     0          Y       32376
Brick 10.70.35.137:/bricks/brick2/dr2       49155     0          Y       793  
Brick 10.70.35.13:/bricks/brick2/dr2        49155     0          Y       29794
NFS Server on localhost                     2049      0          Y       26058
Self-heal Daemon on localhost               N/A       N/A        Y       26066
NFS Server on 10.70.35.85                   2049      0          Y       32526
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       32534
NFS Server on 10.70.35.13                   2049      0          Y       29940
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       29948
NFS Server on 10.70.35.137                  2049      0          Y       1004 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1015 
 
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# kill -9 25900
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2       N/A       N/A        N       N/A  
Brick 10.70.35.85:/bricks/brick1/dr2        49154     0          Y       32357
Brick 10.70.35.137:/bricks/brick1/dr2       49154     0          Y       758  
Brick 10.70.35.13:/bricks/brick1/dr2        49154     0          Y       29775
Brick 10.70.35.210:/bricks/brick2/dr2       49163     0          Y       25919
Brick 10.70.35.85:/bricks/brick2/dr2        49155     0          Y       32376
Brick 10.70.35.137:/bricks/brick2/dr2       49155     0          Y       793  
Brick 10.70.35.13:/bricks/brick2/dr2        49155     0          Y       29794
NFS Server on localhost                     2049      0          Y       26058
Self-heal Daemon on localhost               N/A       N/A        Y       26066
NFS Server on 10.70.35.85                   2049      0          Y       32526
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       32534
NFS Server on 10.70.35.137                  2049      0          Y       1004 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1015 
NFS Server on 10.70.35.13                   2049      0          Y       29940
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       29948
 
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# kill -9 25919
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2       N/A       N/A        N       N/A  
Brick 10.70.35.85:/bricks/brick1/dr2        49154     0          Y       32357
Brick 10.70.35.137:/bricks/brick1/dr2       49154     0          Y       758  
Brick 10.70.35.13:/bricks/brick1/dr2        49154     0          Y       29775
Brick 10.70.35.210:/bricks/brick2/dr2       N/A       N/A        N       N/A  
Brick 10.70.35.85:/bricks/brick2/dr2        49155     0          Y       32376
Brick 10.70.35.137:/bricks/brick2/dr2       49155     0          Y       793  
Brick 10.70.35.13:/bricks/brick2/dr2        49155     0          Y       29794
NFS Server on localhost                     2049      0          Y       26058
Self-heal Daemon on localhost               N/A       N/A        Y       26066
NFS Server on 10.70.35.13                   2049      0          Y       29940
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       29948
NFS Server on 10.70.35.85                   2049      0          Y       32526
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       32534
NFS Server on 10.70.35.137                  2049      0          Y       1004 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1015 
 
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# gluster v start dr2 force
volume start: dr2: success
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2       49162     0          Y       26244
Brick 10.70.35.85:/bricks/brick1/dr2        49154     0          Y       32357
Brick 10.70.35.137:/bricks/brick1/dr2       49154     0          Y       1215 
Brick 10.70.35.13:/bricks/brick1/dr2        49154     0          Y       29775
Brick 10.70.35.210:/bricks/brick2/dr2       49163     0          Y       26263
Brick 10.70.35.85:/bricks/brick2/dr2        49155     0          Y       32376
Brick 10.70.35.137:/bricks/brick2/dr2       49155     0          Y       1234 
Brick 10.70.35.13:/bricks/brick2/dr2        49155     0          Y       29794
NFS Server on localhost                     2049      0          Y       26283
Self-heal Daemon on localhost               N/A       N/A        Y       26291
NFS Server on 10.70.35.85                   2049      0          Y       32672
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       32680
NFS Server on 10.70.35.13                   2049      0          Y       30073
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       30081
NFS Server on 10.70.35.137                  2049      0          Y       1256 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1264 
 
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# kill -9 26263
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2       49162     0          Y       26244
Brick 10.70.35.85:/bricks/brick1/dr2        49154     0          Y       32357
Brick 10.70.35.137:/bricks/brick1/dr2       49154     0          Y       1215 
Brick 10.70.35.13:/bricks/brick1/dr2        49154     0          Y       29775
Brick 10.70.35.210:/bricks/brick2/dr2       N/A       N/A        N       N/A  
Brick 10.70.35.85:/bricks/brick2/dr2        49155     0          Y       32376
Brick 10.70.35.137:/bricks/brick2/dr2       49155     0          Y       1234 
Brick 10.70.35.13:/bricks/brick2/dr2        49155     0          Y       29794
NFS Server on localhost                     2049      0          Y       26283
Self-heal Daemon on localhost               N/A       N/A        Y       26291
NFS Server on 10.70.35.85                   2049      0          Y       32672
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       32680
NFS Server on 10.70.35.137                  2049      0          Y       1256 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1264 
NFS Server on 10.70.35.13                   2049      0          Y       30073
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       30081
 
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# gluster v start dr2 force
volume start: dr2: success
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2       49162     0          Y       26244
Brick 10.70.35.85:/bricks/brick1/dr2        49154     0          Y       32357
Brick 10.70.35.137:/bricks/brick1/dr2       49154     0          Y       1215 
Brick 10.70.35.13:/bricks/brick1/dr2        49154     0          Y       29775
Brick 10.70.35.210:/bricks/brick2/dr2       49163     0          Y       26393
Brick 10.70.35.85:/bricks/brick2/dr2        49155     0          Y       32376
Brick 10.70.35.137:/bricks/brick2/dr2       49155     0          Y       1234 
Brick 10.70.35.13:/bricks/brick2/dr2        49155     0          Y       29794
NFS Server on localhost                     2049      0          Y       26413
Self-heal Daemon on localhost               N/A       N/A        Y       26421
NFS Server on 10.70.35.137                  2049      0          Y       1372 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1380 
NFS Server on 10.70.35.13                   2049      0          Y       30158
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       30167
NFS Server on 10.70.35.85                   2049      0          Y       32762
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       302  
 
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v REset dr2 diagnostics.client-log-level 
unrecognized word: REset (position 1)
[root@dhcp35-210 ~]# gluster v REreset dr2 diagnostics.client-log-level 
unrecognized word: REreset (position 1)
[root@dhcp35-210 ~]# gluster v reset dr2 diagnostics.client-log-level 
volume reset: success: reset volume successful
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v get dr2 diagnostics.client-log-level
Option                                  Value                                   
------                                  -----                                   
diagnostics.client-log-level            INFO                                    
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v set dr3 diagnostics.client-log-level DEBUG 
volume set: success
[root@dhcp35-210 ~]# gluster v status dr3
Status of volume: dr3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.85:/bricks/brick2/dr3        49156     0          Y       32468
Brick 10.70.35.137:/bricks/brick2/dr3       49156     0          Y       930  
Brick 10.70.35.13:/bricks/brick2/dr3        49156     0          Y       29882
Brick 10.70.35.85:/bricks/brick3/dr3        49157     0          Y       32487
Brick 10.70.35.137:/bricks/brick3/dr3       49157     0          Y       959  
Brick 10.70.35.13:/bricks/brick3/dr3        49157     0          Y       29901
Brick 10.70.35.85:/bricks/brick0/dr3        49158     0          Y       32506
Brick 10.70.35.137:/bricks/brick0/dr3       49158     0          Y       983  
Brick 10.70.35.13:/bricks/brick0/dr3        49158     0          Y       29920
NFS Server on localhost                     2049      0          Y       26413
Self-heal Daemon on localhost               N/A       N/A        Y       26421
NFS Server on 10.70.35.85                   2049      0          Y       32762
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       302  
NFS Server on 10.70.35.13                   2049      0          Y       30158
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       30167
NFS Server on 10.70.35.137                  2049      0          Y       1372 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1380 
 
Task Status of Volume dr3
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v status dr3
Status of volume: dr3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.85:/bricks/brick2/dr3        49156     0          Y       32468
Brick 10.70.35.137:/bricks/brick2/dr3       N/A       N/A        N       N/A  
Brick 10.70.35.13:/bricks/brick2/dr3        N/A       N/A        N       N/A  
Brick 10.70.35.85:/bricks/brick3/dr3        49157     0          Y       32487
Brick 10.70.35.137:/bricks/brick3/dr3       49157     0          Y       959  
Brick 10.70.35.13:/bricks/brick3/dr3        49157     0          Y       29901
Brick 10.70.35.85:/bricks/brick0/dr3        49158     0          Y       32506
Brick 10.70.35.137:/bricks/brick0/dr3       49158     0          Y       983  
Brick 10.70.35.13:/bricks/brick0/dr3        49158     0          Y       29920
NFS Server on localhost                     2049      0          Y       26413
Self-heal Daemon on localhost               N/A       N/A        Y       26421
NFS Server on 10.70.35.85                   2049      0          Y       32762
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       302  
NFS Server on 10.70.35.137                  2049      0          Y       1372 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1380 
NFS Server on 10.70.35.13                   2049      0          Y       30158
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       30167
 
Task Status of Volume dr3
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster  v status dr3
Status of volume: dr3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.85:/bricks/brick2/dr3        49156     0          Y       32468
Brick 10.70.35.137:/bricks/brick2/dr3       N/A       N/A        N       N/A  
Brick 10.70.35.13:/bricks/brick2/dr3        N/A       N/A        N       N/A  
Brick 10.70.35.85:/bricks/brick3/dr3        49157     0          Y       32487
Brick 10.70.35.137:/bricks/brick3/dr3       N/A       N/A        N       N/A  
Brick 10.70.35.13:/bricks/brick3/dr3        N/A       N/A        N       N/A  
Brick 10.70.35.85:/bricks/brick0/dr3        49158     0          Y       32506
Brick 10.70.35.137:/bricks/brick0/dr3       49158     0          Y       983  
Brick 10.70.35.13:/bricks/brick0/dr3        49158     0          Y       29920
NFS Server on localhost                     2049      0          Y       26413
Self-heal Daemon on localhost               N/A       N/A        Y       26421
NFS Server on 10.70.35.137                  2049      0          Y       1372 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1380 
NFS Server on 10.70.35.13                   2049      0          Y       30158
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       30167
NFS Server on 10.70.35.85                   2049      0          Y       32762
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       302  
 
Task Status of Volume dr3
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# 
[root@dhcp35-210 ~]# gluster v start dr3 force
volume start: dr3: success
[root@dhcp35-210 ~]# gluster  v status dr3
Status of volume: dr3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.85:/bricks/brick2/dr3        49156     0          Y       32468
Brick 10.70.35.137:/bricks/brick2/dr3       49156     0          Y       1551 
Brick 10.70.35.13:/bricks/brick2/dr3        49156     0          Y       30361
Brick 10.70.35.85:/bricks/brick3/dr3        49157     0          Y       32487
Brick 10.70.35.137:/bricks/brick3/dr3       49157     0          Y       1570 
Brick 10.70.35.13:/bricks/brick3/dr3        49157     0          Y       30380
Brick 10.70.35.85:/bricks/brick0/dr3        49158     0          Y       32506
Brick 10.70.35.137:/bricks/brick0/dr3       49158     0          Y       983  
Brick 10.70.35.13:/bricks/brick0/dr3        49158     0          Y       29920
NFS Server on localhost                     2049      0          Y       26657
Self-heal Daemon on localhost               N/A       N/A        Y       26665
NFS Server on 10.70.35.85                   2049      0          Y       468  
Self-heal Daemon on 10.70.35.85             N/A       N/A        Y       478  
NFS Server on 10.70.35.137                  2049      0          Y       1590 
Self-heal Daemon on 10.70.35.137            N/A       N/A        Y       1598 
NFS Server on 10.70.35.13                   2049      0          Y       30400
Self-heal Daemon on 10.70.35.13             N/A       N/A        Y       30408
 
Task Status of Volume dr3
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-210 ~]#

=============================================================================
                                  CLIENT LOGS
=============================================================================

[root@dhcp35-3 ~]# mkdir /mnt/dr2
[root@dhcp35-3 ~]# mount -t glusterfs 10.70.35.210:/dr2 /mnt/dr2
[root@dhcp35-3 ~]# 
[root@dhcp35-3 ~]# 
[root@dhcp35-3 ~]# cd /mnt/dr2
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# mkdir dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:46:46.045046] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:46:46.045046] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:46:46.045046] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# 
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log 
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:46:46.045046] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:49:09.425439] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# cd
[root@dhcp35-3 ~]# mkdir /mnt/dr3
[root@dhcp35-3 ~]# mount -t glusterfs 10.70.35.210:/dr3 /mnt/dr3
[root@dhcp35-3 ~]# cd /mnt/dr3
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# mkdir dire
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log 
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# ls dire
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log 
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# ls dire
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log 
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# 
[root@dhcp35-3 dr3]# ls dire
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log 
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]# ls dire
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log 
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:54:26.704374] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]#

Comment 8 errata-xmlrpc 2016-06-23 05:20:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

