Description of problem:

There is a race between a thread terminating and a concurrent mem_put() of an object allocated by that thread. It can cause memory corruption and/or a use-after-free. The issue appears when the following sequence of events happens:

1. Thread T1 allocates a memory object O1 from its own private pool P1.
2. T1 terminates and P1 is marked for destruction.
3. The mem-sweeper thread wakes up and scans all private pools.
4. It detects that P1 needs to be destroyed and starts releasing the objects from its hot and cold lists.
5. Thread T2 releases O1.
6. O1 is added to the hot list of P1.

Steps 4 and 6 access the same list without proper locking, so the list can become corrupted.

Version-Release number of selected component (if applicable): mainline

How reproducible: Unknown; the issue was found by inspecting the code, not by hitting it in practice.
REVIEW: https://review.gluster.org/21583 (libglusterfs: fix memory corruption caused by per-thread mem pools) posted (#4) for review on master by Xavi Hernandez
REVIEW: https://review.gluster.org/21583 (libglusterfs: fix memory corruption caused by per-thread mem pools) posted (#5) for review on master by Amar Tumballi
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/