Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1408101

Summary: Fix potential socket_poller thread deadlock and resource leak
Product: [Community] GlusterFS
Component: rpc
Status: CLOSED DUPLICATE
Severity: high
Priority: medium
Version: mainline
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Reporter: Kaushal <kaushal>
Assignee: Raghavendra G <rgowdapp>
QA Contact:
Docs Contact:
CC: bugs, jahernan, rgowdapp
Keywords: Triaged
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1408104
Environment:
Last Closed: 2019-11-25 12:50:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1408104

Description Kaushal 2016-12-22 06:32:00 UTC
The fix for bug #1404181 [1] introduces a potential deadlock and a resource leak of the socket_poller thread.

A disconnect caused by a PARENT_DOWN event during a fuse graph switch can leave the socket_poller thread deadlocked. The deadlock doesn't affect the fuse client, as no new fops are sent on the old graph.

In addition to the above, the race in gfapi solved by [1] can also occur in other codepaths, and needs to be solved there as well.

Quoting Raghavendra G's comment from the review:
"""
- The race addressed by this patch (the race between socket_disconnect cleaning up resources in priv and socket_poller using those same resources, resulting in undefined behaviour: crash, corruption, etc.) can potentially happen irrespective of the codepath socket_disconnect is invoked from (glusterd, client_portmap_cbk, handling of PARENT_DOWN, changelog, etc.). Note the usage of the word "potential" here; I am not saying that this race happens in existing code. However, I would like this issue to be fixed for these potential cases too.
- If there are fops in progress at the time of a graph switch, sending the PARENT_DOWN event on the currently active (soon to be old) graph is deferred till all the fops are complete (though the new graph becomes active and new I/O is redirected to it). So, the PARENT_DOWN event can be sent after processing the last response (to a fop). This means PARENT_DOWN can be sent in the thread executing socket_poller itself. Since PARENT_DOWN triggers a disconnect, and disconnect waits for socket_poller to complete, we have a deadlock. Specifically, the deadlock is:

  socket_poller -> notify-msg-received -> fuse processes fop response -> fuse sends PARENT_DOWN -> rpc-clnt calls rpc_clnt_disable -> socket_disconnect -> wait till socket_poller completes before returning from socket_disconnect

Luckily we have a socket_poller thread for each transport, and the threads that deadlock belong to transports from older graphs on which no I/O is happening. So, at worst this will be a case of resource leakage (threads/sockets etc.) of the old graph.

"""


[1] https://review.gluster.org/16141

Comment 1 Amar Tumballi 2019-05-09 20:31:56 UTC
Is this still relevant?

Comment 2 Xavi Hernandez 2019-11-20 10:23:21 UTC
Does this problem still exist in current versions?

Otherwise I'll close the bug.

Comment 3 Raghavendra G 2019-11-25 12:50:52 UTC
[1] removes the socket_poller thread itself. So, these issues are no longer relevant.

[1] https://review.gluster.org/c/glusterfs/+/19308/

*** This bug has been marked as a duplicate of bug 1561332 ***