Bug 1336267

Summary: [scale]: Bricks not started after node reboot.
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: glusterd
Version: rhgs-3.1
Target Release: RHGS 3.2.0
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Byreddy <bsrirama>
Assignee: Atin Mukherjee <amukherj>
QA Contact: Byreddy <bsrirama>
CC: amukherj, pousley, pprakash, rcyriac, rhinduja, rhs-bugs, storage-qa-internal, vbellur
Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
When a node is rebooted, each brick has its identity verified as it resumes operation. Previously, this was done with address resolution. However, this meant that when a cluster had a large number of bricks, there were a very large number of address lookups, which created contention and sometimes meant that bricks failed to restart. This update changes the brick verification method to use a brick's UUID rather than its address, which reduces contention and ensures that all brick processes restart after a reboot.
Clones: 1352279 (view as bug list)
Last Closed: 2017-03-23 05:30:49 UTC
Type: Bug
Bug Depends On: 1352279, 1352817, 1352833
Bug Blocks: 1351522, 1351530

Description Byreddy 2016-05-16 04:07:58 UTC
Description of problem:
=======================
After rebooting nodes hosting the bricks of 400 volumes, some of the volume bricks failed to start.


Errors in glusterd logs:
=======================
[2016-05-13 08:16:04.924247] E [socket.c:2393:socket_connect_finish]
0-glusterfs: connection to 10.70.36.45:24007 failed (Connection timed out)
[2016-05-13 08:16:05.128728] E [glusterfsd-mgmt.c:1907:mgmt_rpc_notify]
0-glusterfsd-mgmt: failed to connect with remote-host:
rhs-client21.lab.eng.blr.redhat.com (Transport endpoint is not connected)
[2016-05-13 08:16:05.340730] I [glusterfsd-mgmt.c:1913:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-4.


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Have two RHGS nodes with 16 GB RAM each.
2. Create 400 1*2 volumes using both nodes and start all of them.
3. Reboot the nodes and check whether all volume bricks are running.

Actual results:
===============
Bricks not starting after node reboot.

Expected results:
=================
Bricks should start after the nodes are rebooted.


Additional info:

Comment 2 Atin Mukherjee 2016-05-16 04:26:52 UTC
It looks like GlusterD is not able to communicate with the bricks due to the lack of multi-threaded epoll support in GlusterD.

[2016-05-13 08:16:04.924247] E [socket.c:2393:socket_connect_finish]
0-glusterfs: connection to 10.70.36.45:24007 failed (Connection timed out)
[2016-05-13 08:16:05.128728] E [glusterfsd-mgmt.c:1907:mgmt_rpc_notify]
0-glusterfsd-mgmt: failed to connect with remote-host:
rhs-client21.lab.eng.blr.redhat.com (Transport endpoint is not connected)
[2016-05-13 08:16:05.340730] I [glusterfsd-mgmt.c:1913:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile

From the above log (especially the first message), it appears that a
brick process failed to connect to glusterd and the connection timed
out. This can happen when there is a lot of back pressure on the other
side. Since GlusterD is limited to single-threaded epoll, communication
with the brick processes happens over a single path; when glusterd
tried to start 400-odd brick processes, there were 400 RPC connections
to handle, which is why a few of the brick processes got to hear from
GlusterD and came up while the others did not.
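
(For illustration only: the following is a toy, standalone sketch of
the single-threaded epoll pattern, not GlusterD source. One thread
services every ready descriptor in turn, so a single slow handler
stalls all the other pending connections.)

/* toy_epoll.c -- standalone sketch, not GlusterD code.
 * Build: cc toy_epoll.c -o toy_epoll */
#include <stdio.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(24007), /* glusterd's port */
                                .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lfd, SOMAXCONN) < 0) {
        perror("bind/listen");
        return 1;
    }

    int efd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
    epoll_ctl(efd, EPOLL_CTL_ADD, lfd, &ev);

    struct epoll_event events[64];
    for (;;) {
        /* The ONE event thread wakes up here for every event... */
        int n = epoll_wait(efd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == lfd) {
                int cfd = accept(lfd, NULL, NULL);
                ev.events = EPOLLIN;
                ev.data.fd = cfd;
                epoll_ctl(efd, EPOLL_CTL_ADD, cfd, &ev);
            } else {
                /* ...and handles each ready connection serially; any
                 * slow handler here (e.g. one doing DNS lookups)
                 * stalls every other pending connection. */
                char buf[512];
                if (read(events[i].data.fd, buf, sizeof(buf)) <= 0)
                    close(events[i].data.fd);
            }
        }
    }
}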

Without a brick multiplexing feature in place, scaling the number of volumes is always going to be a challenge, with a different set of problems each time.

Moving it to 3.2.

Comment 4 Atin Mukherjee 2016-06-28 12:29:24 UTC
I tried enabling MT-epoll on a setup of 4 nodes and 400 volumes, with bricks spanning all the nodes. After rebooting a node, not all of the gluster brick processes came up, and the same error message was seen in a few brick log files. So, in a nutshell, MT-epoll is not going to solve this scalability issue. It is the big lock that is causing the threads to block and time out.

Comment 5 Atin Mukherjee 2016-07-03 10:30:47 UTC
Surprisingly, the big lock is not the culprit here. It is the pmap_signin from the brick processes that was consuming a lot of glusterd's bandwidth: a code walkthrough revealed that we were doing an unnecessary address resolution. Applying the fix solves the problem, and I could see that on reboot glusterd is able to bring up all the brick processes.
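
For illustration, a minimal sketch of the change -- the function names
and signatures below are hypothetical, not the actual glusterd symbols;
the real patch is linked in the next comment:

/* sketch.c -- illustrative only; all names are hypothetical.
 * Compile check: cc -c sketch.c */
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <uuid/uuid.h>   /* libuuid: uuid_compare() */

/* Old check (slow): decide whether a brick belongs to this node by
 * resolving both hostnames and comparing the returned addresses.
 * getaddrinfo() can block on DNS/NSS, and with ~400 bricks signing in
 * right after a reboot these lookups pile up inside glusterd. */
int same_host_by_resolution(const char *brick_host, const char *my_host)
{
    struct addrinfo *a = NULL, *b = NULL, *pa, *pb;
    int match = 0;

    if (getaddrinfo(brick_host, NULL, NULL, &a) != 0)
        return 0;
    if (getaddrinfo(my_host, NULL, NULL, &b) != 0) {
        freeaddrinfo(a);
        return 0;
    }
    for (pa = a; pa && !match; pa = pa->ai_next)
        for (pb = b; pb && !match; pb = pb->ai_next)
            if (pa->ai_addrlen == pb->ai_addrlen &&
                memcmp(pa->ai_addr, pb->ai_addr, pa->ai_addrlen) == 0)
                match = 1;
    freeaddrinfo(a);
    freeaddrinfo(b);
    return match;
}

/* New check (fast): every peer already carries a UUID, so a brick is
 * local iff the UUID stored for it equals this node's UUID. A plain
 * in-memory compare -- no resolver, no contention. */
int same_host_by_uuid(const uuid_t brick_owner_uuid, const uuid_t my_uuid)
{
    return uuid_compare(brick_owner_uuid, my_uuid) == 0;
}

The idea is that the signin path no longer touches the resolver at all,
so 400 simultaneous signins reduce to 400 cheap memory compares.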

Comment 6 Atin Mukherjee 2016-07-03 10:56:13 UTC
http://review.gluster.org/#/c/14849/ posted for review.

Comment 8 Atin Mukherjee 2016-09-17 14:37:15 UTC
Upstream mainline : http://review.gluster.org/14849
Upstream 3.8 : http://review.gluster.org/14860

And the fix is available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4.

Comment 13 Byreddy 2016-11-09 09:32:52 UTC
Verified this bug using the build glusterfs-3.8.4-3; the reported issue is no longer seen.

Created 500 1*2 volumes using two nodes with 32 GB RAM each, stopped and started the volumes (this worked well), and rebooted the nodes to check for the reported issue; all the volume bricks were up.

Moving to verified state.

Comment 15 Atin Mukherjee 2017-03-06 04:58:38 UTC
LGTM :)

Comment 17 errata-xmlrpc 2017-03-23 05:30:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html