Description of problem:
The S31ganesha-reset.sh hook script consumes 100% CPU and does not terminate on its own.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Execute the gluster vol reset <volname> <option> [force] command.
2. Check whether the hook script is still running and check its CPU usage.

Actual results:
The script that runs in the post phase of the reset command does not terminate and accounts for 100% CPU usage.

Expected results:
The script should do nothing at all when ganesha is not enabled, and it should terminate in less than a second.

Additional info:
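For context, gluster hook scripts parse their command-line arguments in a while/case loop around getopt output. The sketch below is a hypothetical reconstruction of the failure mode described above, not the exact contents of S31ganesha-reset.sh: if no case branch ever shifts past the remaining arguments or breaks out of the loop, the script spins at 100% CPU once the argument list is exhausted.

    #!/bin/bash
    # Hypothetical reconstruction of the bug pattern, for illustration only.
    PROGNAME="Sganesha-reset"
    OPTSPEC="volname:"
    VOL=

    ARGS=$(getopt -o '' --long "$OPTSPEC" -n "$PROGNAME" -- "$@")
    eval set -- "$ARGS"

    while true; do
        case $1 in
        --volname)
            shift
            VOL=$1
            ;;
        # BUG: no default branch that shifts and breaks; once the
        # arguments run out, $1 stays empty, matches nothing, and the
        # loop busy-spins forever.
        esac
    done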
REVIEW: http://review.gluster.org/8966 (Hooks : Infinite while loop introduced by another change.) posted (#1) for review on master by Meghana M (mmadhusu)
COMMIT: http://review.gluster.org/8966 committed in master by Vijay Bellur (vbellur)
------
commit a7a8a7507ca938b23d20a52931fa034cfaaa29f8
Author: Meghana Madhusudhan <mmadhusu>
Date: Tue Oct 21 19:50:29 2014 +0530

    Hooks : Infinite while loop introduced by another change.

    A change made to all the hook scripts introduced an infinite
    while loop in the script S31ganesha-reset.sh. It resulted in
    100% CPU usage by this script.

    Change-Id: If62d8f0e065c6e6511363b8b26eae433f59bc5c3
    BUG: 1155489
    Signed-off-by: Meghana Madhusudhan <mmadhusu>
    Reviewed-on: http://review.gluster.org/8966
    Reviewed-by: soumya k <skoduri>
    Reviewed-by: Raghavendra Talur <rtalur>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
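A terminating version of the same loop, again as a hedged sketch rather than the literal patch: every branch advances past the argument it consumed and a default branch breaks out, so parsing finishes in a bounded number of iterations. The early-exit check for ganesha is likewise a hypothetical illustration of the expected behaviour noted in the bug report; the exact option name and query mechanism depend on the gluster version.

    # Hypothetical sketch of a terminating version of the parse loop.
    while true; do
        case $1 in
        --volname)
            shift
            VOL=$1
            ;;
        *)
            shift
            break        # default branch: consume "--" and stop parsing
            ;;
        esac
        shift            # advance past the option value just handled
    done

    # Hypothetical early exit: do nothing unless ganesha is enabled
    # for this volume, so the post hook returns well under a second.
    if ! gluster volume info "$VOL" | grep -q "ganesha.enable: on"; then
        exit 0
    fi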
REVIEW: http://review.gluster.org/8973 (Hooks : Infinite while loop introduced by another change.) posted (#1) for review on release-3.6 by Meghana M (mmadhusu)
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user