Description of problem:
The following sequence of events leaks a context:
1. thread1: client_ctx_get returns NULL
2. thread2: client_ctx_set ctx1 ok
3. thread1: client_ctx_set ctx2 ok
thread1 uses ctx1, thread2 uses ctx2, and ctx1 will leak.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
REVIEW: https://review.gluster.org/17219 (libglusterfs: fix race condition in client_ctx_set) posted (#1) for review on master by Zhou Zhengping (johnzzpcrystal)
REVIEW: https://review.gluster.org/17219 (libglusterfs: fix race condition in client_ctx_set) posted (#2) for review on master by Zhou Zhengping (johnzzpcrystal)
REVIEW: https://review.gluster.org/17219 (libglusterfs: fix race condition in client_ctx_set) posted (#3) for review on master by Zhou Zhengping (johnzzpcrystal)
COMMIT: https://review.gluster.org/17219 committed in master by Raghavendra G (rgowdapp)
------
commit 333474e0d6efe1a2b3a9ecffc9bdff3e49325910
Author: Zhou Zhengping <johnzzpcrystal>
Date: Tue May 9 20:57:34 2017 +0800

    libglusterfs: fix race condition in client_ctx_set

    The race proceeds as follows:
    1. thread1 client_ctx_get returns NULL
    2. thread2 client_ctx_set ctx1 ok
    3. thread1 client_ctx_set ctx2 ok
    thread1 uses ctx1, thread2 uses ctx2, and ctx1 will leak

    Change-Id: I990b02905edd1b3179323ada56888f852d20f538
    BUG: 1449232
    Signed-off-by: Zhou Zhengping <johnzzpcrystal>
    Reviewed-on: https://review.gluster.org/17219
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/