+++ This bug was initially created as a clone of Bug #1218055 +++

Description of problem:
Currently the "snap_scheduler disable" command returns the same return code for different types of failures, e.g. snap_scheduler init not executed, snap_scheduler already disabled, etc. When this command is executed through an external program like vdsm, it is better to depend on the return code than on messages to recognise the failure. So it is better to have different return codes for different errors.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Darshan on 2015-05-28 08:48:36 EDT ---

Can the script return success when disable is executed on an already disabled setup?

--- Additional comment from Anand Avati on 2015-05-29 08:53:20 EDT ---

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Return proper error code in case of failure) posted (#1) for review on master by Avra Sengupta (asengupt)

--- Additional comment from Anand Avati on 2015-05-29 09:18:13 EDT ---

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Return proper error code in case of failure) posted (#2) for review on master by Avra Sengupta (asengupt)

--- Additional comment from Anand Avati on 2015-05-29 15:42:51 EDT ---

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Return proper error code in case of failure) posted (#3) for review on master by Avra Sengupta (asengupt)

--- Additional comment from Anand Avati on 2015-05-30 06:40:17 EDT ---

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Return proper error code in case of failure) posted (#5) for review on master by Avra Sengupta (asengupt)

--- Additional comment from Anand Avati on 2015-06-01 03:03:22 EDT ---

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Return proper error code in case of failure) posted (#6) for review on master by Avra Sengupta (asengupt)

--- Additional comment from Anand Avati on 2015-06-03 02:49:59 EDT ---

COMMIT: http://review.gluster.org/11005 committed in master by Krishnan Parthasarathi (kparthas)
------
commit 9798a24febba9bbf28e97656b81b8a01a1325f68
Author: Avra Sengupta <asengupt>
Date:   Fri May 29 18:11:01 2015 +0530

    snapshot/scheduler: Return proper error code in case of failure

    ENUM                              RETCODE   ERROR
    ----------------------------------------------------------
    INTERNAL_ERROR                       2      Internal Error
    SHARED_STORAGE_DIR_DOESNT_EXIST      3      Shared Storage Dir does not exist
    SHARED_STORAGE_NOT_MOUNTED           4      Shared storage is not mounted
    ANOTHER_TRANSACTION_IN_PROGRESS      5      Another transaction is in progress
    INIT_FAILED                          6      Initialisation failed
    SCHEDULING_ALREADY_DISABLED          7      Scheduler is already disabled
    SCHEDULING_ALREADY_ENABLED           8      Scheduler is already enabled
    NODE_NOT_INITIALISED                 9      Node not initialised
    ANOTHER_SCHEDULER_ACTIVE            10      Another scheduler is active
    JOB_ALREADY_EXISTS                  11      Job already exists
    JOB_NOT_FOUND                       12      Job not found
    INVALID_JOBNAME                     13      Jobname is invalid
    INVALID_VOLNAME                     14      Volname is invalid
    INVALID_SCHEDULE                    15      Schedule is invalid
    INVALID_ARG                         16      Argument is invalid

    Change-Id: Ia1da166659099f4c951fcdb4d755529e41167b80
    BUG: 1218055
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/11005
    Reviewed-by: Aravinda VK <avishwan>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
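For illustration, below is a minimal caller-side sketch of how an external program such as vdsm could branch on these return codes instead of parsing output messages, which is the use case this bug asks for. The constant values mirror the table above; the command name snap_scheduler.py, the helper function, and the "treat already-disabled as success" policy are assumptions for the example (per Darshan's comment), not part of the patch.

import subprocess

# Return codes from the commit's table above (subset used here).
SCHEDULING_ALREADY_DISABLED = 7
NODE_NOT_INITIALISED = 9

def disable_snapshot_scheduling():
    # Invoke the scheduler CLI and inspect its exit status only.
    ret = subprocess.call(["snap_scheduler.py", "disable"])
    if ret == 0:
        return  # scheduling disabled successfully
    if ret == SCHEDULING_ALREADY_DISABLED:
        return  # already disabled: treated as success for idempotent callers
    if ret == NODE_NOT_INITIALISED:
        raise RuntimeError("run 'snap_scheduler.py init' on this node first")
    raise RuntimeError("snap_scheduler disable failed with code %d" % ret)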
REVIEW: http://review.gluster.org/11057 (snapshot/scheduler: Return proper error code in case of failure) posted (#1) for review on release-3.7 by Avra Sengupta (asengupt)
Fixed with http://review.gluster.org/#/c/11057/
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailinglist [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user