Description of problem:
http://review.gluster.org/10342 introduced a cleanup thread for expired
client entries. When enabling the 'features.cache-invalidation' volume
option, the brick process starts to run in a busy loop. Obviously this is
not intentional, and a process occupying 100% of the cycles on a CPU or
core is not wanted.

Version-Release number of selected component (if applicable):
glusterfs 3.7.1

How reproducible:
always

Steps to Reproduce:
1. Enable the upcall xlator with:
   # gluster volume set $VOLNAME features.cache-invalidation on

Actual results:
The glusterfsd processes start to run with 100% CPU usage.

Expected results:
Minimal/no impact on the glusterfsd processes.

Additional info:
Needs backport of http://review.gluster.org/11198
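For anyone reading along: the failure mode is simply a worker loop with no
blocking call in it. Below is a minimal C sketch of the pattern and the
shape of the remedy; the names, interval, and structure are hypothetical
(this is illustrative only, not the actual upcall xlator code from
http://review.gluster.org/11198). Build with gcc -pthread.

#include <pthread.h>
#include <unistd.h>

#define REAP_INTERVAL 1   /* seconds between scans; illustrative only */

static volatile int fini;  /* set when the translator shuts down */

/* Placeholder for walking the client list and dropping expired entries. */
static void
reap_expired_clients (void)
{
}

static void *
reaper_thread (void *arg)
{
        (void) arg;

        while (!fini) {
                reap_expired_clients ();

                /* Without this sleep, the loop rescans the (mostly
                 * unchanged) client list as fast as possible and pins
                 * one core at 100%. Blocking between scans keeps the
                 * thread idle. Production code would rather use
                 * pthread_cond_timedwait() so that shutdown does not
                 * have to wait out the full interval. */
                sleep (REAP_INTERVAL);
        }

        return NULL;
}

int
main (void)
{
        pthread_t tid;

        pthread_create (&tid, NULL, reaper_thread, NULL);
        sleep (3);          /* let the reaper run a few scans */
        fini = 1;
        pthread_join (tid, NULL);

        return 0;
}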
REVIEW: http://review.gluster.org/11211 (upcall: prevent busy loop in reaper thread) posted (#1) for review on release-3.7 by Niels de Vos (ndevos)
COMMIT: http://review.gluster.org/11211 committed in release-3.7 by Niels de Vos (ndevos)
------
commit a6ce8584c63c6aabfc2a559b3d4bb946f7ca1a58
Author: Niels de Vos <ndevos>
Date:   Sun Jun 14 12:35:02 2015 +0200

    upcall: prevent busy loop in reaper thread

    http://review.gluster.org/10342 introduced a cleanup thread for expired
    client entries. When enabling the 'features.cache-invalidation' volume
    option, the brick process starts to run in a busy loop. Obviously this
    is not intentional, and a process occupying 100% of the cycles on a CPU
    or core is not wanted.

    Cherry picked from commit a367d4c6965e1f0da36f17ab6c5fdbd37925ebdd:
    > Change-Id: I453c612d72001f4d8bbecdd5ac07aaed75b43914
    > BUG: 1200267
    > Signed-off-by: Niels de Vos <ndevos>
    > Reviewed-on: http://review.gluster.org/11198
    > Reviewed-by: soumya k <skoduri>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle>
    > Tested-by: Gluster Build System <jenkins.com>

    Change-Id: I453c612d72001f4d8bbecdd5ac07aaed75b43914
    BUG: 1231516
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/11211
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: soumya k <skoduri>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user