REVIEW: http://review.gluster.org/15949 (cluster/dht: Fix memory corruption while accessing regex stored in private) posted (#1) for review on release-3.9 by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/15949 (cluster/dht: Fix memory corruption while accessing regex stored in private) posted (#2) for review on release-3.9 by Raghavendra G (rgowdapp)
COMMIT: http://review.gluster.org/15949 committed in release-3.9 by Raghavendra G (rgowdapp)
------
commit cc37d3475833efbc75570b91ea8eace073586df3
Author: Raghavendra G <rgowdapp>
Date:   Tue Nov 8 12:09:57 2016 +0530

    cluster/dht: Fix memory corruption while accessing regex stored in private

    If reconfigure is executed in parallel (or concurrently with dht_init),
    there are races that can corrupt memory. One such race is the
    modification of the regexes stored in conf (conf->rsync_regex_valid and
    conf->extra_regex_valid) through dht_init_regex. With change [1], the
    reconfigure codepath can run in parallel (with itself or with dht_init),
    so this fix is needed. A reconfigure can also race with any thread doing
    dht_layout_search, resulting in dht_layout_search accessing a regex
    freed up by reconfigure (as in bz 1399134).

    [1] http://review.gluster.org/15046

    >Change-Id: I039422a65374cf0ccbe0073441f0e8c442ebf830
    >BUG: 1399134
    >Signed-off-by: Raghavendra G <rgowdapp>
    >Reviewed-on: http://review.gluster.org/15945
    >Smoke: Gluster Build System <jenkins.org>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >Reviewed-by: N Balachandran <nbalacha>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Reviewed-by: Shyamsundar Ranganathan <srangana>

    Change-Id: I039422a65374cf0ccbe0073441f0e8c442ebf830
    BUG: 1399422
    Signed-off-by: Raghavendra G <rgowdapp>
    (cherry picked from commit 64451d0f25e7cc7aafc1b6589122648281e4310a)
    Reviewed-on: http://review.gluster.org/15949
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
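For readers unfamiliar with the race the commit message describes, here is a minimal sketch in C of the general pattern: one thread recompiling a shared regex (the reconfigure side) while other threads still match against it (the lookup side), with both sides serialized by the same lock so a reader never touches a regex that was just freed. This is not the actual patch, and the names (regex_lock, shared_regex, recompile_pattern, name_matches) are hypothetical; DHT keeps its regexes and lock inside its private conf structure.

    /* Illustrative sketch only; not the code from the commit above. */
    #include <pthread.h>
    #include <regex.h>
    #include <stdbool.h>

    static pthread_mutex_t regex_lock = PTHREAD_MUTEX_INITIALIZER;
    static regex_t         shared_regex;
    static bool            regex_valid = false;

    /* Hypothetical reconfigure path: frees and recompiles the regex
     * under the lock, so no reader can observe a freed regex_t. */
    int
    recompile_pattern (const char *pattern)
    {
            int ret = 0;

            pthread_mutex_lock (&regex_lock);
            {
                    if (regex_valid) {
                            regfree (&shared_regex);
                            regex_valid = false;
                    }
                    if (pattern &&
                        regcomp (&shared_regex, pattern,
                                 REG_EXTENDED | REG_NOSUB) == 0)
                            regex_valid = true;
                    else
                            ret = -1;
            }
            pthread_mutex_unlock (&regex_lock);

            return ret;
    }

    /* Hypothetical lookup path (analogous to dht_layout_search using
     * the stored regex): matching happens under the same lock. */
    bool
    name_matches (const char *name)
    {
            bool match = false;

            pthread_mutex_lock (&regex_lock);
            {
                    if (regex_valid)
                            match = (regexec (&shared_regex, name,
                                              0, NULL, 0) == 0);
            }
            pthread_mutex_unlock (&regex_lock);

            return match;
    }

Without the lock, recompile_pattern could call regfree() on shared_regex while name_matches is in the middle of regexec() on it, which is the kind of use-after-free the commit addresses.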
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.9.1, please open a new bug report. glusterfs-3.9.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/gluster-users/2017-January/029725.html [2] https://www.gluster.org/pipermail/gluster-users/