+++ This bug was initially created as a clone of Bug #1204641 +++
+++ This bug was initially created as a clone of Bug #1204044 +++

Description of problem:
'service glusterd stop' stops only the management daemon and not all the gluster processes - which is how it was decided (as per bug 1152992 and the related doc bug 1184846). A separate script, 'stop-all-gluster-processes.sh', was created to achieve the intent of stopping all gluster processes. That script fails to work when there is more than one gsync process running.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create geo-rep session(s) between a master and a slave (any configuration).
2. After the session(s) are established and started, run
   ps aux | grep gluster | grep gsync | awk '{print $2}'
   to confirm that more than one gsync process is running (or any related process, for that matter).
3. Execute the script /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh; it should complete without errors.
4. Run the 'ps' command from step 2 again to verify that the gluster processes have stopped.

Actual results:
The script fails when more than one process is running, erroring out with 'too many arguments'.

Expected results:
The script should work regardless of how many processes are active at the time.

Additional info:

--- Additional comment from Kotresh HR on 2015-03-23 05:43:30 EDT ---

Patch sent: http://review.gluster.org/#/c/9970/

--- Additional comment from Anand Avati on 2015-04-01 02:59:27 EDT ---

REVIEW: http://review.gluster.org/9970 (extras: Fix stop-all-gluster-processes.sh script) posted (#5) for review on master by Kotresh HR (khiremat)

--- Additional comment from Anand Avati on 2015-04-06 13:44:45 EDT ---

COMMIT: http://review.gluster.org/9970 committed in master by Vijay Bellur (vbellur)

------

commit 85865daa1b7dd11badf9f5192e050e1998c76f8a
Author: Kotresh HR <khiremat>
Date:   Mon Mar 23 15:07:47 2015 +0530

    extras: Fix stop-all-gluster-processes.sh script

    The "test -n" command takes a single string as its argument. The
    command was failing with "Too many arguments" when multiple pids
    are got.

    Change-Id: Icc409082f492c72522168d5e203684f00f52cf1b
    BUG: 1204641
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/9970
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Vijay Bellur <vbellur>
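The root cause named in the commit above is ordinary shell word splitting. The sketch below is a minimal reduction of the failure mode, not the script itself; the variable name, the pipeline, and the kill logic are illustrative assumptions:

    #!/bin/sh
    # Collect gsync PIDs, whitespace-separated; the [g] trick keeps grep
    # from matching its own process. (Illustrative pipeline, not the
    # script's actual code.)
    pids=$(ps aux | grep '[g]sync' | awk '{print $2}')

    # Broken: unquoted, $pids expands into several words, so with two or
    # more PIDs "test -n" receives extra operands and aborts with
    # "test: too many arguments".
    if test -n $pids; then
        kill -TERM $pids    # unquoted on purpose: one argument per PID
    fi

    # Fixed: quoting passes the whole PID list as one string, which is
    # all "test -n" needs to check for non-emptiness.
    if test -n "$pids"; then
        kill -TERM $pids
    fi

The unquoted form is also wrong at the other edge: with an empty $pids it reduces to 'test -n', which is true (a one-operand test checks whether the string "-n" is non-empty), so quoting fixes both the multi-PID and the zero-PID cases.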
--- Additional comment from Niels de Vos on 2015-05-14 13:29:23 EDT ---

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

--- Additional comment from Anand Avati on 2015-05-27 03:05:26 EDT ---

REVIEW: http://review.gluster.org/10931 (scripts: Added script stop-all-gluster-processes.sh in rpm) posted (#1) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/11015 (scripts: Added script stop-all-gluster-processes.sh in rpm) posted (#1) for review on release-3.7 by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/11015 (scripts: Added script stop-all-gluster-processes.sh in rpm) posted (#2) for review on release-3.7 by Aravinda VK (avishwan)
COMMIT: http://review.gluster.org/11015 committed in release-3.7 by Vijay Bellur (vbellur)

------

commit 43c54f7e3615e657427cd69cf029e97eeac1411b
Author: Aravinda VK <avishwan>
Date:   Sun May 31 12:09:20 2015 +0530

    scripts: Added script stop-all-gluster-processes.sh in rpm

    This script was not included as part of rpm. Fixed now.

    BUG: 1225331
    Change-Id: I5e559b187253cc2f4f8ea7cf8ec56a32802e5ab2
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/10931
    Reviewed-by: Kotresh HR <khiremat>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/11015
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Venky Shankar <vshankar>
    Tested-by: NetBSD Build System <jenkins.org>
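For context, a script ships in an RPM only if the spec file installs it and lists it in a %files section. A minimal sketch of the kind of hunk such a packaging fix adds, using the script path quoted earlier in this bug; the subpackage name and the exact layout of glusterfs.spec are assumptions, not the actual change:

    # glusterfs.spec (sketch): ship the helper script with the server
    # subpackage so it lands in /usr/share/glusterfs/scripts/.
    %files server
    %{_datadir}/glusterfs/scripts/stop-all-gluster-processes.sh

After installing a fixed build, 'rpm -qf /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh' should report the owning package instead of "not owned by any package".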
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.1, please reopen this bug report.

glusterfs-3.7.1 has been announced on the Gluster Packaging mailinglist [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/1
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user