REVIEW: http://review.gluster.org/16015 (dht/rename : In case of failure remove linkto file properly) posted (#1) for review on release-3.8 by jiffin tony Thottan (jthottan)
COMMIT: http://review.gluster.org/16015 committed in release-3.8 by Niels de Vos (ndevos)
------
commit aaf2a2897080754aceb42245e92b0269556d6f38
Author: Jiffin Tony Thottan <jthottan>
Date: Mon Nov 21 18:08:14 2016 +0530

dht/rename : In case of failure remove linkto file properly

Generally the linkto file is created as the root user. Consider the following case: a user tries to rename a file that they do not have permission to rename. The rename fails with EACCES, and when rename then tries to clean up the linkto file, that cleanup also fails.

The above issue happens when the rename/00.t test is executed on nfs-ganesha clients. Steps executed in the script:
* create a file "abc" as root
* rename the file "abc" to "xyz" as a non-root user; it fails with EACCES
* delete "abc"
* create a directory "abc" as root
* again try to rename "abc" to "xyz" as the non-root user; the test hangs here, which slowly leads to an OOM kill of the ganesha process

RCA put forward by Du for the OOM kill of ganesha:

Note that when we hit this bug, we have a scenario of a dentry being present as:
* a linkto file on one subvol
* a directory on the rest of the subvols

When a lookup happens on the dentry in such a scenario, the control flow goes into an infinite loop of:

dht_lookup_everywhere
dht_lookup_everywhere_cbk
dht_lookup_unlink_cbk
dht_lookup_everywhere_done
dht_lookup_directory (as local->dir_count > 0)
dht_lookup_dir_cbk (sets local->need_selfheal = 1 as the entry is a linkto file on one of the subvols)
dht_lookup_everywhere (as need_selfheal = 1)

This infinite loop can cause increased memory consumption because:
1) dht_lookup_directory assigns a new layout to local->layout unconditionally
2) Most of the functions in this loop do a STACK_WIND of various fops. This results in growth of the call stack (note that the call stack is destroyed only after the lookup response is received by fuse, which never happens in this case)

Thanks to Du for root-causing the OOM kill and to Sushant for suggesting the fix.

Upstream reference :
>Change-Id: I1e16bc14aa685542afbd21188426ecb61fd2689d
>BUG: 1397052
>Signed-off-by: Jiffin Tony Thottan <jthottan>
>Reviewed-on: http://review.gluster.org/15894
>NetBSD-regression: NetBSD Build System <jenkins.org>
>CentOS-regression: Gluster Build System <jenkins.org>
>Smoke: Gluster Build System <jenkins.org>
>Reviewed-by: Raghavendra G <rgowdapp>
>(cherry picked from commit 57d59f4be205ae0c7888758366dc0049bdcfe449)

Change-Id: I1e16bc14aa685542afbd21188426ecb61fd2689d
BUG: 1401029
Signed-off-by: Jiffin Tony Thottan <jthottan>
Reviewed-on: http://review.gluster.org/16015
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Niels de Vos <ndevos>
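To make the memory-growth mechanism in the RCA above easier to follow, the following is a minimal, standalone C sketch (not GlusterFS code; all structures and names here are hypothetical simplifications) of the two effects described: each pass through the lookup cycle allocates a fresh layout and stacks another call frame, and nothing is released because the lookup never completes.

    /* Simplified illustration of the unbounded-growth pattern; NOT the
     * actual DHT implementation. Names are made up for this sketch. */
    #include <stdio.h>
    #include <stdlib.h>

    struct layout {
        int subvol_count;
        struct layout *prev;        /* older layouts are never freed inside the loop */
    };

    struct call_frame {
        struct layout *layout;
        struct call_frame *parent;  /* frames unwind only when the lookup completes */
    };

    int main(void)
    {
        struct call_frame *frame = NULL;
        struct layout *layouts = NULL;
        int need_selfheal = 1;      /* stays 1: linkto file on one subvol,
                                       directory on the rest */
        long iterations = 0;

        while (need_selfheal && iterations < 5) {  /* capped here; unbounded in the bug */
            /* (1) dht_lookup_directory-like step: a new layout is assigned
             *     unconditionally, leaking the previous one */
            struct layout *l = calloc(1, sizeof(*l));
            l->prev = layouts;
            layouts = l;

            /* (2) STACK_WIND-like step: another call frame is added and
             *     never unwound, because the lookup response never reaches
             *     the top of the stack */
            struct call_frame *f = calloc(1, sizeof(*f));
            f->layout = l;
            f->parent = frame;
            frame = f;

            iterations++;
            printf("iteration %ld: %ld layouts and %ld frames outstanding\n",
                   iterations, iterations, iterations);
            /* need_selfheal would be set back to 1 each time the directory
             * lookup sees the linkto file, so the cycle repeats until the
             * process is OOM-killed */
        }
        return 0;
    }

Run long enough (the real loop has no cap), both allocations grow without bound, which matches the observed slow climb toward an OOM kill of the ganesha process.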
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.8, please open a new bug report. glusterfs-3.8.8 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2017-January/000064.html [2] https://www.gluster.org/pipermail/gluster-users/