Bug 1330901
| Summary: | dht must avoid fresh lookups when a single replica pair goes offline | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Ravishankar N <ravishankar> |
| Component: | distribute | Assignee: | Bug Updates Notification Mailing List <rhs-bugs> |
| Status: | CLOSED ERRATA | QA Contact: | Sweta Anandpara <sanandpa> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | asrivast, bugs, nbalacha, rhinduja, sabansal, sanandpa |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.9-3 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1283972 | Environment: | |
| Last Closed: | 2016-06-23 05:20:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1281230, 1283972 | | |
| Bug Blocks: | 1311817 | | |
Description
Ravishankar N
2016-04-27 09:31:49 UTC
Tested and verified this on the build glusterfs-3.7.9-3.
Steps to reproduce/verify:
1) Create a distributed-replicate volume and set 'diagnostics.client-log-level' to DEBUG.
2) Mount it over FUSE (or NFS) and create a directory, say dir1.
3) Check the client log for the message "Calling fresh lookup for /dir1".
4) Perform "ls dir1" and again check the number of times the message "Calling fresh lookup for /dir1" appears. No new 'fresh lookup' message should be seen.
5) Bring down one replica pair by killing the brick processes.
6) Perform "ls dir1".
7) Repeat step 4 and verify that no new 'fresh lookup' message is seen in the logs.
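The before/after comparison in steps 3, 4 and 7 amounts to counting occurrences of the DHT 'fresh lookup' message in the client log. A minimal sketch of that check (the helper name and the fabricated two-line log excerpt are illustrative, not taken from the actual test run):

```shell
# count_fresh_lookups LOGFILE DIR
# Hypothetical helper: prints how many times DHT logged a fresh lookup
# for the given directory in the given client log.
count_fresh_lookups() {
    # grep -c prints the number of matching lines (0 if none);
    # '|| true' keeps the function's exit status clean when there are no hits.
    grep -c "Calling fresh lookup for $2" "$1" || true
}

# Demo against a fabricated excerpt in the same format as the real log:
LOG=$(mktemp)
printf '%s\n' \
  '[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1' \
  '[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1' \
  > "$LOG"
count_fresh_lookups "$LOG" /dir1    # prints 2
```

Run the count once after creating the directory and once after each "ls dir1"; the number should stay constant while the replica pair is down.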
Reproduced the issue on an older setup, and verified the fix on the newest build. The expected behaviour was seen, with no 'fresh lookup' taking place while the replica bricks were down.
Had a 4x2 volume and brought 4 replica bricks down one by one. Repeated the same steps on a replica-3 volume and brought down 2 of the replica bricks. No fresh lookups were seen in the logs. When the bricks were brought back up using 'gluster v start <volname> force', a single fresh lookup was seen for the accessed directory, as expected.
Moving this BZ to verified for 3.1.3. Detailed logs are pasted below.
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v create dr2 replica 2 10.70.35.210:/bricks/brick1/dr2 10.70.35.85:/bricks/brick1/dr2 10.70.35.137:/bricks/brick1/dr2 10.70.35.13:/bricks/brick1/dr2 10.70.35.210:/bricks/brick2/dr2 10.70.35.85:/bricks/brick2/dr2 10.70.35.137:/bricks/brick2/dr2 10.70.35.13:/bricks/brick2/dr2
volume create: dr2: success: please start the volume to access data
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v info dr2
Volume Name: dr2
Type: Distributed-Replicate
Volume ID: d01e36c3-03b7-4f0e-a7f3-090b12ec2528
Status: Created
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.35.210:/bricks/brick1/dr2
Brick2: 10.70.35.85:/bricks/brick1/dr2
Brick3: 10.70.35.137:/bricks/brick1/dr2
Brick4: 10.70.35.13:/bricks/brick1/dr2
Brick5: 10.70.35.210:/bricks/brick2/dr2
Brick6: 10.70.35.85:/bricks/brick2/dr2
Brick7: 10.70.35.137:/bricks/brick2/dr2
Brick8: 10.70.35.13:/bricks/brick2/dr2
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v start dr2
volume start: dr2: success
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v create dr3 replica 3 10.70.35.85:/bricks/brick2/dr3 10.70.35.137:/bricks/brick2/dr3 10.70.35.13:/bricks/brick2/dr3 10.70.35.85:/bricks/brick3/dr3 10.70.35.137:/bricks/brick3/dr3 10.70.35.13:/bricks/brick3/dr3 10.70.35.85:/bricks/brick0/dr3 10.70.35.137:/bricks/brick0/dr3 10.70.35.13:/bricks/brick0/dr3
volume create: dr3: failed: Staging failed on 10.70.35.137. Error: /bricks/brick2/dr3 is already part of a volume
Staging failed on 10.70.35.13. Error: /bricks/brick2/dr3 is already part of a volume
Staging failed on 10.70.35.85. Error: /bricks/brick2/dr3 is already part of a volume
[root@dhcp35-210 ~]# gluster v create dr3 replica 3 10.70.35.85:/bricks/brick2/dr3 10.70.35.137:/bricks/brick2/dr3 10.70.35.13:/bricks/brick2/dr3 10.70.35.85:/bricks/brick3/dr3 10.70.35.137:/bricks/brick3/dr3 10.70.35.13:/bricks/brick3/dr3 10.70.35.85:/bricks/brick0/dr3 10.70.35.137:/bricks/brick0/dr3 10.70.35.13:/bricks/brick0/dr3 force
volume create: dr3: success: please start the volume to access data
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v info dr3
Volume Name: dr3
Type: Distributed-Replicate
Volume ID: c1aef5dc-6743-4060-a03e-388b4961d3fa
Status: Created
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.35.85:/bricks/brick2/dr3
Brick2: 10.70.35.137:/bricks/brick2/dr3
Brick3: 10.70.35.13:/bricks/brick2/dr3
Brick4: 10.70.35.85:/bricks/brick3/dr3
Brick5: 10.70.35.137:/bricks/brick3/dr3
Brick6: 10.70.35.13:/bricks/brick3/dr3
Brick7: 10.70.35.85:/bricks/brick0/dr3
Brick8: 10.70.35.137:/bricks/brick0/dr3
Brick9: 10.70.35.13:/bricks/brick0/dr3
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v start dr3
volume start: dr3: success
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v get dr2 diagnostics.client-log-level
Option Value
------ -----
diagnostics.client-log-level INFO
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v set dr2 diagnostics.client-log-level DEBUG
volume set: success
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2 49162 0 Y 25900
Brick 10.70.35.85:/bricks/brick1/dr2 49154 0 Y 32357
Brick 10.70.35.137:/bricks/brick1/dr2 49154 0 Y 758
Brick 10.70.35.13:/bricks/brick1/dr2 49154 0 Y 29775
Brick 10.70.35.210:/bricks/brick2/dr2 49163 0 Y 25919
Brick 10.70.35.85:/bricks/brick2/dr2 49155 0 Y 32376
Brick 10.70.35.137:/bricks/brick2/dr2 49155 0 Y 793
Brick 10.70.35.13:/bricks/brick2/dr2 49155 0 Y 29794
NFS Server on localhost 2049 0 Y 26058
Self-heal Daemon on localhost N/A N/A Y 26066
NFS Server on 10.70.35.85 2049 0 Y 32526
Self-heal Daemon on 10.70.35.85 N/A N/A Y 32534
NFS Server on 10.70.35.13 2049 0 Y 29940
Self-heal Daemon on 10.70.35.13 N/A N/A Y 29948
NFS Server on 10.70.35.137 2049 0 Y 1004
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1015
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# kill -9 25900
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2 N/A N/A N N/A
Brick 10.70.35.85:/bricks/brick1/dr2 49154 0 Y 32357
Brick 10.70.35.137:/bricks/brick1/dr2 49154 0 Y 758
Brick 10.70.35.13:/bricks/brick1/dr2 49154 0 Y 29775
Brick 10.70.35.210:/bricks/brick2/dr2 49163 0 Y 25919
Brick 10.70.35.85:/bricks/brick2/dr2 49155 0 Y 32376
Brick 10.70.35.137:/bricks/brick2/dr2 49155 0 Y 793
Brick 10.70.35.13:/bricks/brick2/dr2 49155 0 Y 29794
NFS Server on localhost 2049 0 Y 26058
Self-heal Daemon on localhost N/A N/A Y 26066
NFS Server on 10.70.35.85 2049 0 Y 32526
Self-heal Daemon on 10.70.35.85 N/A N/A Y 32534
NFS Server on 10.70.35.137 2049 0 Y 1004
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1015
NFS Server on 10.70.35.13 2049 0 Y 29940
Self-heal Daemon on 10.70.35.13 N/A N/A Y 29948
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# kill -9 25919
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2 N/A N/A N N/A
Brick 10.70.35.85:/bricks/brick1/dr2 49154 0 Y 32357
Brick 10.70.35.137:/bricks/brick1/dr2 49154 0 Y 758
Brick 10.70.35.13:/bricks/brick1/dr2 49154 0 Y 29775
Brick 10.70.35.210:/bricks/brick2/dr2 N/A N/A N N/A
Brick 10.70.35.85:/bricks/brick2/dr2 49155 0 Y 32376
Brick 10.70.35.137:/bricks/brick2/dr2 49155 0 Y 793
Brick 10.70.35.13:/bricks/brick2/dr2 49155 0 Y 29794
NFS Server on localhost 2049 0 Y 26058
Self-heal Daemon on localhost N/A N/A Y 26066
NFS Server on 10.70.35.13 2049 0 Y 29940
Self-heal Daemon on 10.70.35.13 N/A N/A Y 29948
NFS Server on 10.70.35.85 2049 0 Y 32526
Self-heal Daemon on 10.70.35.85 N/A N/A Y 32534
NFS Server on 10.70.35.137 2049 0 Y 1004
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1015
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]# gluster v start dr2 force
volume start: dr2: success
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2 49162 0 Y 26244
Brick 10.70.35.85:/bricks/brick1/dr2 49154 0 Y 32357
Brick 10.70.35.137:/bricks/brick1/dr2 49154 0 Y 1215
Brick 10.70.35.13:/bricks/brick1/dr2 49154 0 Y 29775
Brick 10.70.35.210:/bricks/brick2/dr2 49163 0 Y 26263
Brick 10.70.35.85:/bricks/brick2/dr2 49155 0 Y 32376
Brick 10.70.35.137:/bricks/brick2/dr2 49155 0 Y 1234
Brick 10.70.35.13:/bricks/brick2/dr2 49155 0 Y 29794
NFS Server on localhost 2049 0 Y 26283
Self-heal Daemon on localhost N/A N/A Y 26291
NFS Server on 10.70.35.85 2049 0 Y 32672
Self-heal Daemon on 10.70.35.85 N/A N/A Y 32680
NFS Server on 10.70.35.13 2049 0 Y 30073
Self-heal Daemon on 10.70.35.13 N/A N/A Y 30081
NFS Server on 10.70.35.137 2049 0 Y 1256
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1264
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# kill -9 26263
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2 49162 0 Y 26244
Brick 10.70.35.85:/bricks/brick1/dr2 49154 0 Y 32357
Brick 10.70.35.137:/bricks/brick1/dr2 49154 0 Y 1215
Brick 10.70.35.13:/bricks/brick1/dr2 49154 0 Y 29775
Brick 10.70.35.210:/bricks/brick2/dr2 N/A N/A N N/A
Brick 10.70.35.85:/bricks/brick2/dr2 49155 0 Y 32376
Brick 10.70.35.137:/bricks/brick2/dr2 49155 0 Y 1234
Brick 10.70.35.13:/bricks/brick2/dr2 49155 0 Y 29794
NFS Server on localhost 2049 0 Y 26283
Self-heal Daemon on localhost N/A N/A Y 26291
NFS Server on 10.70.35.85 2049 0 Y 32672
Self-heal Daemon on 10.70.35.85 N/A N/A Y 32680
NFS Server on 10.70.35.137 2049 0 Y 1256
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1264
NFS Server on 10.70.35.13 2049 0 Y 30073
Self-heal Daemon on 10.70.35.13 N/A N/A Y 30081
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]# gluster v start dr2 force
volume start: dr2: success
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v status dr2
Status of volume: dr2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.210:/bricks/brick1/dr2 49162 0 Y 26244
Brick 10.70.35.85:/bricks/brick1/dr2 49154 0 Y 32357
Brick 10.70.35.137:/bricks/brick1/dr2 49154 0 Y 1215
Brick 10.70.35.13:/bricks/brick1/dr2 49154 0 Y 29775
Brick 10.70.35.210:/bricks/brick2/dr2 49163 0 Y 26393
Brick 10.70.35.85:/bricks/brick2/dr2 49155 0 Y 32376
Brick 10.70.35.137:/bricks/brick2/dr2 49155 0 Y 1234
Brick 10.70.35.13:/bricks/brick2/dr2 49155 0 Y 29794
NFS Server on localhost 2049 0 Y 26413
Self-heal Daemon on localhost N/A N/A Y 26421
NFS Server on 10.70.35.137 2049 0 Y 1372
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1380
NFS Server on 10.70.35.13 2049 0 Y 30158
Self-heal Daemon on 10.70.35.13 N/A N/A Y 30167
NFS Server on 10.70.35.85 2049 0 Y 32762
Self-heal Daemon on 10.70.35.85 N/A N/A Y 302
Task Status of Volume dr2
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v REset dr2 diagnostics.client-log-level
unrecognized word: REset (position 1)
[root@dhcp35-210 ~]# gluster v REreset dr2 diagnostics.client-log-level
unrecognized word: REreset (position 1)
[root@dhcp35-210 ~]# gluster v reset dr2 diagnostics.client-log-level
volume reset: success: reset volume successful
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v get dr2 diagnostics.client-log-level
Option Value
------ -----
diagnostics.client-log-level INFO
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v set dr3 diagnostics.client-log-level DEBUG
volume set: success
[root@dhcp35-210 ~]# gluster v status dr3
Status of volume: dr3
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.85:/bricks/brick2/dr3 49156 0 Y 32468
Brick 10.70.35.137:/bricks/brick2/dr3 49156 0 Y 930
Brick 10.70.35.13:/bricks/brick2/dr3 49156 0 Y 29882
Brick 10.70.35.85:/bricks/brick3/dr3 49157 0 Y 32487
Brick 10.70.35.137:/bricks/brick3/dr3 49157 0 Y 959
Brick 10.70.35.13:/bricks/brick3/dr3 49157 0 Y 29901
Brick 10.70.35.85:/bricks/brick0/dr3 49158 0 Y 32506
Brick 10.70.35.137:/bricks/brick0/dr3 49158 0 Y 983
Brick 10.70.35.13:/bricks/brick0/dr3 49158 0 Y 29920
NFS Server on localhost 2049 0 Y 26413
Self-heal Daemon on localhost N/A N/A Y 26421
NFS Server on 10.70.35.85 2049 0 Y 32762
Self-heal Daemon on 10.70.35.85 N/A N/A Y 302
NFS Server on 10.70.35.13 2049 0 Y 30158
Self-heal Daemon on 10.70.35.13 N/A N/A Y 30167
NFS Server on 10.70.35.137 2049 0 Y 1372
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1380
Task Status of Volume dr3
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v status dr3
Status of volume: dr3
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.85:/bricks/brick2/dr3 49156 0 Y 32468
Brick 10.70.35.137:/bricks/brick2/dr3 N/A N/A N N/A
Brick 10.70.35.13:/bricks/brick2/dr3 N/A N/A N N/A
Brick 10.70.35.85:/bricks/brick3/dr3 49157 0 Y 32487
Brick 10.70.35.137:/bricks/brick3/dr3 49157 0 Y 959
Brick 10.70.35.13:/bricks/brick3/dr3 49157 0 Y 29901
Brick 10.70.35.85:/bricks/brick0/dr3 49158 0 Y 32506
Brick 10.70.35.137:/bricks/brick0/dr3 49158 0 Y 983
Brick 10.70.35.13:/bricks/brick0/dr3 49158 0 Y 29920
NFS Server on localhost 2049 0 Y 26413
Self-heal Daemon on localhost N/A N/A Y 26421
NFS Server on 10.70.35.85 2049 0 Y 32762
Self-heal Daemon on 10.70.35.85 N/A N/A Y 302
NFS Server on 10.70.35.137 2049 0 Y 1372
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1380
NFS Server on 10.70.35.13 2049 0 Y 30158
Self-heal Daemon on 10.70.35.13 N/A N/A Y 30167
Task Status of Volume dr3
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v status dr3
Status of volume: dr3
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.85:/bricks/brick2/dr3 49156 0 Y 32468
Brick 10.70.35.137:/bricks/brick2/dr3 N/A N/A N N/A
Brick 10.70.35.13:/bricks/brick2/dr3 N/A N/A N N/A
Brick 10.70.35.85:/bricks/brick3/dr3 49157 0 Y 32487
Brick 10.70.35.137:/bricks/brick3/dr3 N/A N/A N N/A
Brick 10.70.35.13:/bricks/brick3/dr3 N/A N/A N N/A
Brick 10.70.35.85:/bricks/brick0/dr3 49158 0 Y 32506
Brick 10.70.35.137:/bricks/brick0/dr3 49158 0 Y 983
Brick 10.70.35.13:/bricks/brick0/dr3 49158 0 Y 29920
NFS Server on localhost 2049 0 Y 26413
Self-heal Daemon on localhost N/A N/A Y 26421
NFS Server on 10.70.35.137 2049 0 Y 1372
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1380
NFS Server on 10.70.35.13 2049 0 Y 30158
Self-heal Daemon on 10.70.35.13 N/A N/A Y 30167
NFS Server on 10.70.35.85 2049 0 Y 32762
Self-heal Daemon on 10.70.35.85 N/A N/A Y 302
Task Status of Volume dr3
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]#
[root@dhcp35-210 ~]# gluster v start dr3 force
volume start: dr3: success
[root@dhcp35-210 ~]# gluster v status dr3
Status of volume: dr3
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.35.85:/bricks/brick2/dr3 49156 0 Y 32468
Brick 10.70.35.137:/bricks/brick2/dr3 49156 0 Y 1551
Brick 10.70.35.13:/bricks/brick2/dr3 49156 0 Y 30361
Brick 10.70.35.85:/bricks/brick3/dr3 49157 0 Y 32487
Brick 10.70.35.137:/bricks/brick3/dr3 49157 0 Y 1570
Brick 10.70.35.13:/bricks/brick3/dr3 49157 0 Y 30380
Brick 10.70.35.85:/bricks/brick0/dr3 49158 0 Y 32506
Brick 10.70.35.137:/bricks/brick0/dr3 49158 0 Y 983
Brick 10.70.35.13:/bricks/brick0/dr3 49158 0 Y 29920
NFS Server on localhost 2049 0 Y 26657
Self-heal Daemon on localhost N/A N/A Y 26665
NFS Server on 10.70.35.85 2049 0 Y 468
Self-heal Daemon on 10.70.35.85 N/A N/A Y 478
NFS Server on 10.70.35.137 2049 0 Y 1590
Self-heal Daemon on 10.70.35.137 N/A N/A Y 1598
NFS Server on 10.70.35.13 2049 0 Y 30400
Self-heal Daemon on 10.70.35.13 N/A N/A Y 30408
Task Status of Volume dr3
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-210 ~]#
=============================================================================
CLIENT LOGS
=============================================================================
[root@dhcp35-3 ~]# mkdir /mnt/dr2
[root@dhcp35-3 ~]# mount -t glusterfs 10.70.35.210:/dr2 /mnt/dr2
[root@dhcp35-3 ~]#
[root@dhcp35-3 ~]#
[root@dhcp35-3 ~]# cd /mnt/dr2
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]# mkdir dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:46:46.045046] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:46:46.045046] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:46:46.045046] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]# ls dir1
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]#
[root@dhcp35-3 dr2]# grep -R "Calling fresh lookup for /dir1" /var/log/glusterfs/mnt-dr2.log
[2016-05-06 04:44:08.334406] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.339016] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:44:08.343790] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:46:46.045046] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[2016-05-06 04:49:09.425439] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr2-dht: Calling fresh lookup for /dir1 on dr2-replicate-1
[root@dhcp35-3 dr2]# cd
[root@dhcp35-3 ~]# mkdir /mnt/dr3
[root@dhcp35-3 ~]# mount -t glusterfs 10.70.35.210:/dr3 /mnt/dr3
[root@dhcp35-3 ~]# cd /mnt/dr3
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]# mkdir dire
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]# ls dire
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]# ls dire
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]#
[root@dhcp35-3 dr3]# ls dire
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]# ls dire
[root@dhcp35-3 dr3]# grep -R "Calling fresh lookup for /dire" /var/log/glusterfs/mnt-dr3.log
[2016-05-06 04:50:41.798636] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.803414] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:50:41.808270] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[2016-05-06 04:54:26.704374] D [MSGID: 0] [dht-common.c:2471:dht_lookup] 0-dr3-dht: Calling fresh lookup for /dire on dr3-replicate-0
[root@dhcp35-3 dr3]#
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2016:1240