Bug 1509189

Summary: timer: Possible race condition between gf_timer_* routines
Product: [Community] GlusterFS
Reporter: Soumya Koduri <skoduri>
Component: core
Assignee: Soumya Koduri <skoduri>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: medium
Docs Contact:
Priority: unspecified
Version: mainline
CC: bugs, ndevos
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: All
OS: All
Whiteboard:
Fixed In Version: glusterfs-4.0.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1565590 1568373 1568374 (view as bug list)
Environment:
Last Closed: 2018-03-15 11:19:42 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1508817, 1565590, 1568373, 1568374

Description Soumya Koduri 2017-11-03 10:11:27 UTC
Description of problem:

As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1508817#c4, there is a chance of hitting a race between gf_timer_registry_destroy(), gf_timer_call_cancel() and gf_timer_proc(), leading to a use-after-free.

As explained by Dan, the flow is as below:
gf_timer_proc() is called, locks reg, and gets an event. It unlocks reg and calls the callback.

Now gf_timer_registry_destroy() is called; it removes reg from ctx and joins on gf_timer_proc().

Now gf_timer_call_cancel() is called on the event being processed. It cannot find reg (since it has been removed from ctx), so it frees the event.

Now the callback returns into gf_timer_proc(), which tries to free the event, but it has already been freed, resulting in a double free.
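
The interleaving can be illustrated with a minimal, self-contained C model (hypothetical names and deliberately simplified locking; this is not the actual libglusterfs/src/timer.c code):

#include <pthread.h>
#include <stdlib.h>

struct event {
        void (*callback)(void *data);
        void *data;
        struct event *next;
};

struct registry {
        pthread_mutex_t lock;
        struct event *active;               /* list of pending events */
};

static struct registry *global_reg;         /* cleared by registry_destroy() */

/* Timer thread: pops an event under the lock, then runs the callback
 * and frees the event only after the lock has been dropped. */
static void *timer_proc(void *arg)
{
        struct registry *reg = arg;
        struct event *ev;

        pthread_mutex_lock(&reg->lock);
        ev = reg->active;                   /* (1) take the next event */
        if (ev != NULL)
                reg->active = ev->next;
        pthread_mutex_unlock(&reg->lock);   /* lock released here ... */

        if (ev != NULL) {
                ev->callback(ev->data);     /* ... callback runs unlocked */
                free(ev);                   /* (4) second free: double free */
        }
        return NULL;
}

/* Cancel path before the fix: when the registry is already gone it
 * assumes nobody else owns the event and frees it. */
static void event_cancel(struct event *ev)
{
        struct registry *reg = global_reg;  /* (2) destroy() already removed reg */

        if (reg == NULL) {
                free(ev);                   /* (3) frees the event that */
                return;                     /*     timer_proc is still using */
        }

        /* normal path: unlink ev from reg->active under reg->lock and free it */
}

If event_cancel() runs in the window between (1) and (4), both it and timer_proc() end up freeing the same event.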



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Worker Ant 2017-11-03 10:44:11 UTC
REVIEW: https://review.gluster.org/18652 (timer: Fix possible race during cleanup) posted (#1) for review on master by soumya k

Comment 2 Worker Ant 2017-11-21 08:56:56 UTC
COMMIT: https://review.gluster.org/18652 committed in master by "soumya k" <skoduri> with a commit message- timer: Fix possible race during cleanup

As mentioned in bug 1509189, there is a possible race
between gf_timer_call_cancel(), gf_timer_proc() and
gf_timer_registry_destroy() leading to a use-after-free.

Problem:

1) gf_timer_proc() is called, locks reg, and gets an event.
It unlocks reg, and calls the callback.

2) Meanwhile gf_timer_registry_destroy() is called; it removes
reg from ctx and joins on gf_timer_proc().

3) gf_timer_call_cancel() is called on the event being
processed. It cannot find reg (since it has been removed from ctx),
so it frees the event.

4) The callback returns into gf_timer_proc(), which tries to free
the event, but it has already been freed, so we get a double free.

Solution:
The fix is to bail out in gf_timer_call_cancel() when the registry
is not found. The logic behind this is that gf_timer_call_cancel()
is only ever called on an existing event, which means there was a
valid registry when that event was created. The only reason we
cannot find that registry now is that it must have been set to
NULL when context cleanup started.
Since gf_timer_proc() takes care of releasing all the remaining
events active on that registry, it seems safe to bail out
in gf_timer_call_cancel().

Change-Id: Ia9b088533141c3bb335eff2fe06b52d1575bb34f
BUG: 1509189
Reported-by: Daniel Gryniewicz <dang>
Signed-off-by: Soumya Koduri <skoduri>
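
To illustrate, here is how the solution described above would look in the simplified model sketched in the description (again hypothetical names, not the actual timer.c change):

/* Cancel path after the fix: if the registry is gone, context cleanup
 * has started and the timer thread owns the remaining events, so bail
 * out instead of freeing the event. */
static int event_cancel_fixed(struct event *ev)
{
        struct registry *reg = global_reg;
        struct event **pp;
        int found = 0;

        if (reg == NULL) {
                /* The registry disappears only once context cleanup has
                 * started; the timer thread releases any remaining
                 * events, so do not touch 'ev' here. */
                return -1;
        }

        pthread_mutex_lock(&reg->lock);
        for (pp = &reg->active; *pp != NULL; pp = &(*pp)->next) {
                if (*pp == ev) {
                        *pp = ev->next;     /* unlink the event */
                        found = 1;
                        break;
                }
        }
        pthread_mutex_unlock(&reg->lock);

        if (found)
                free(ev);                   /* freed only if we still owned it */
        return found ? 0 : -1;
}

In this model, ownership of any still-pending events passes to the timer thread once cleanup starts, so the cancel path only frees an event it could actually unlink.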

Comment 3 Shyamsundar 2018-03-15 11:19:42 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/