The multiple disconnect events are due to repeated connect/disconnect cycles with glusterd (port 24007). rpc/clnt has a reconnect feature that keeps trying to reconnect to a disconnected brick, and the client's connection to a brick is a two-step process:
1. connect to glusterd, get the brick port, then disconnect
2. connect to the brick
In this case step 1 succeeds but step 2 never happens, because glusterd does not send back the brick port (the brick is dead). Nevertheless there is a chain of connect/disconnect (to glusterd) at the rpc layer, and these are valid steps as we need the reconnection logic. Previously, the subsequent disconnect events were prevented from reaching the parents of protocol/client: it remembered the last event sent, and if the current event was the same as the last event, it skipped the notification. Before the Halo replication feature (https://review.gluster.org/16177), last_sent_event for this test case would stay GF_EVENT_DISCONNECT, so the subsequent disconnects were not notified to the parent xlators. Halo replication introduced another event, GF_EVENT_CHILD_PING, which is notified to the parents of protocol/client whenever there is a successful ping response. In this case the successful ping response comes from glusterd and changes conf->last_sent_event to GF_EVENT_CHILD_PING, so the subsequent disconnect events are no longer skipped.
A patch that propagates GF_EVENT_CHILD_PING only after a successful handshake prevents these spurious CHILD_DOWN events from reaching afr. However, I am not sure whether this breaks Halo replication. I would request that the afr team members comment on the patch (I'll post it shortly).
upstream patch: https://review.gluster.org/#/c/glusterfs/+/22821/
Tested with the following steps:
1. Created a replica 3 volume and started it.
2. Installed the gdb and glusterfs-debuginfo packages on the node.
3. Entered a gdb session (gdb -p `pidof glusterfs`) and set a breakpoint (b __afr_handle_child_down_event).
4. Went to one of the nodes and killed a brick on it (kill -9 pid).
5. The following output was seen. There was only one event per disconnect, and the issue of events being generated every 3 seconds for a single disconnect was not observed.
(gdb) b __afr_handle_child_down_event
Breakpoint 1 at 0x7f675b62b940: file afr-common.c, line 5249.
[Switching to Thread 0x7f675cafe700 (LWP 16524)]
Breakpoint 1, __afr_handle_child_down_event (this=this@entry=0x7f675400ffe0, child_xlator=child_xlator@entry=0x7f675400d5c0, idx=idx@entry=2, child_latency_msec=child_latency_msec@entry=-1,
event=event@entry=0x7f675cafcc5c, call_psh=call_psh@entry=0x7f675cafcc68, up_child=up_child@entry=0x7f675cafcc6c) at afr-common.c:5249
Based on the above output, moving the bug to the verified state.
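For reference, the verification steps above can be sketched as shell commands. Hostnames and brick paths (n1..n3, /bricks/b1) are placeholders for the actual cluster layout, and the brick pid must be looked up on the node where it was killed:

```shell
# 1. Create and start a replica 3 volume (placeholder hosts/paths).
gluster volume create testvol replica 3 n1:/bricks/b1 n2:/bricks/b2 n3:/bricks/b3
gluster volume start testvol

# 2-3. On the client node, install debugging tools and attach gdb.
yum install -y gdb glusterfs-debuginfo
gdb -p "$(pidof glusterfs)"
# (gdb) b __afr_handle_child_down_event
# (gdb) c

# 4. On one of the server nodes, kill a brick process; find the pid
#    via `gluster volume status testvol`.
kill -9 <brick-pid>
```

With the fix applied, the breakpoint should fire once per brick kill rather than every 3 seconds.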