Description of problem:

In dht_migration_complete_check_task:

    ret = inode_ctx_reset1 (inode, this, &tmp_miginfo);
    if (tmp_miginfo) {
            GF_FREE (tmp_miginfo);
            goto out;
    }

However, another fop might still be using miginfo while we free it. The correct way to solve this is to manage the memory with a refcounting mechanism.

How reproducible:
Found through code review. It's a race that can happen while doing parallel operations on a file during migration. Since it's a race, it might not be reproducible consistently.
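For illustration, here is a minimal sketch of the refcounted scheme in plain C11 atomics. The struct and function names (dht_miginfo_t, miginfo_ref, miginfo_unref) are hypothetical, not the actual GlusterFS helpers; the point is that dht_migration_complete_check_task would drop its reference instead of calling GF_FREE directly, so a racing fop that still holds a ref keeps the memory alive until its own unref:

    /* Hypothetical sketch of the refcounting approach, not the actual
     * patch: migration info is freed only when the last user drops its
     * reference. */

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct dht_miginfo {
            atomic_int refcnt;   /* number of fops still using this info */
            /* ... migration state, e.g. the destination subvolume ... */
    } dht_miginfo_t;

    static dht_miginfo_t *
    miginfo_new (void)
    {
            dht_miginfo_t *mi = calloc (1, sizeof (*mi));
            if (mi)
                    atomic_init (&mi->refcnt, 1);  /* creator holds the first ref */
            return mi;
    }

    /* Each fop that starts using the migration info takes a reference... */
    static dht_miginfo_t *
    miginfo_ref (dht_miginfo_t *mi)
    {
            atomic_fetch_add (&mi->refcnt, 1);
            return mi;
    }

    /* ...and drops it when done; only the last unref frees the memory.
     * dht_migration_complete_check_task would call this in place of
     * GF_FREE (tmp_miginfo). */
    static void
    miginfo_unref (dht_miginfo_t *mi)
    {
            if (atomic_fetch_sub (&mi->refcnt, 1) == 1)
                    free (mi);
    }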
REVIEW: http://review.gluster.org/11418 (cluster/dht: use refcount to manage memory used to store migration information.) posted (#1) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/11418 (cluster/dht: use refcount to manage memory used to store migration information.) posted (#2) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/11418 (cluster/dht: use refcount to manage memory used to store migration information.) posted (#3) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/11418 (cluster/dht: use refcount to manage memory used to store migration information.) posted (#4) for review on master by Raghavendra G (rgowdapp)
COMMIT: http://review.gluster.org/11418 committed in master by Raghavendra G (rgowdapp)
------
commit 1701239a4ef34c1780e2aa9cbc2843626bf61e2f
Author: Raghavendra G <rgowdapp>
Date:   Fri Jun 26 11:53:11 2015 +0530

    cluster/dht: use refcount to manage memory used to store migration
    information.

    Without refcounting, we might free up memory while other fops are
    still accessing it.

    BUG: 1235927
    Change-Id: Ia4fa4a651cd6fe2394a0c20cef83c8d2cbc8750f
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: http://review.gluster.org/11418
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: N Balachandran <nbalacha>
    Tested-by: NetBSD Build System <jenkins.org>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user