Bug 1651165

Summary: Race in per-thread mem-pool when a thread is terminated
Product: [Community] GlusterFS
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Reporter: Xavi Hernandez <jahernan>
Assignee: Xavi Hernandez <jahernan>
QA Contact:
Docs Contact:
CC: bugs
Target Milestone: ---
Target Release: ---
Whiteboard:
Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-03-25 16:32:04 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Xavi Hernandez 2018-11-19 10:39:44 UTC
Description of problem:

There is a race when a thread terminates while a mem_put() of an object allocated by that thread executes concurrently.

It can cause memory corruption and/or a use-after-free.

The issue appears when the following sequence of events happens:

1. Thread T1 allocates a memory object O1 from its own private pool P1
2. T1 terminates and P1 is marked to be destroyed
3. The mem-sweeper thread is woken up and scans all private pools
4. It detects that P1 needs to be destroyed and starts releasing the
   objects from hot and cold lists.
5. Thread T2 releases O1
6. O1 is added to the hot list of P1

Steps 4 and 6 access the same list without holding any lock, so the list can become corrupted.

Version-Release number of selected component (if applicable): mainline


How reproducible:
Unknown. The issue was found by inspecting the code.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Worker Ant 2018-11-19 11:05:07 UTC
REVIEW: https://review.gluster.org/21583 (libglusterfs: fix memory corruption caused by per-thread mem pools) posted (#4) for review on master by Xavi Hernandez

Comment 2 Worker Ant 2018-11-26 04:24:52 UTC
REVIEW: https://review.gluster.org/21583 (libglusterfs: fix memory corruption caused by per-thread mem pools) posted (#5) for review on master by Amar Tumballi

Comment 3 Shyamsundar 2019-03-25 16:32:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/