Bug 1218055
Summary: "Snap_scheduler disable" should have different return codes for different failures.
Product: [Community] GlusterFS
Component: snapshot
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Whiteboard: Scheduler
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Type: Bug
Reporter: Darshan <dnarayan>
Assignee: Avra Sengupta <asengupt>
CC: asengupt, barumuga, bugs, gluster-bugs
Keywords: Reopened, Triaged
Clones: 1223206, 1227615
Bug Blocks: 1186580, 1223206, 1227615
Last Closed: 2016-06-16 12:57:38 UTC
Description — Darshan, 2015-05-04 07:15:59 UTC
Can the script return success when "disable" is executed on an already disabled setup?

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Retuen proper error code in case of failure) posted (#1) for review on master by Avra Sengupta (asengupt)

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Retuen proper error code in case of failure) posted (#2) for review on master by Avra Sengupta (asengupt)

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Retuen proper error code in case of failure) posted (#3) for review on master by Avra Sengupta (asengupt)

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Return proper error code in case of failure) posted (#5) for review on master by Avra Sengupta (asengupt)

REVIEW: http://review.gluster.org/11005 (snapshot/scheduler: Return proper error code in case of failure) posted (#6) for review on master by Avra Sengupta (asengupt)

COMMIT: http://review.gluster.org/11005 committed in master by Krishnan Parthasarathi (kparthas)

------

commit 9798a24febba9bbf28e97656b81b8a01a1325f68
Author: Avra Sengupta <asengupt>
Date: Fri May 29 18:11:01 2015 +0530

    snapshot/scheduler: Return proper error code in case of failure

    ENUM                             RETCODE  ERROR
    ----------------------------------------------------------------
    INTERNAL_ERROR                   2        Internal Error
    SHARED_STORAGE_DIR_DOESNT_EXIST  3        Shared Storage Dir does not exist
    SHARED_STORAGE_NOT_MOUNTED       4        Shared storage is not mounted
    ANOTHER_TRANSACTION_IN_PROGRESS  5        Another transaction is in progress
    INIT_FAILED                      6        Initialisation failed
    SCHEDULING_ALREADY_DISABLED      7        Scheduler is already disabled
    SCHEDULING_ALREADY_ENABLED       8        Scheduler is already enabled
    NODE_NOT_INITIALISED             9        Node not initialised
    ANOTHER_SCHEDULER_ACTIVE         10       Another scheduler is active
    JOB_ALREADY_EXISTS               11       Job already exists
    JOB_NOT_FOUND                    12       Job not found
    INVALID_JOBNAME                  13       Jobname is invalid
    INVALID_VOLNAME                  14       Volname is invalid
    INVALID_SCHEDULE                 15       Schedule is invalid
    INVALID_ARG                      16       Argument is invalid
    Change-Id: Ia1da166659099f4c951fcdb4d755529e41167b80
    BUG: 1218055
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/11005
    Reviewed-by: Aravinda VK <avishwan>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>

The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed, hence closing this mainline BZ as well.

This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
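The distinct exit codes introduced by the fix let calling scripts branch on the failure mode instead of parsing output. A minimal sketch of such a caller, assuming the return-code table from the commit message above (the `classify_snap_sched_ret` helper is hypothetical, not part of the scheduler):

```shell
#!/bin/sh
# Hypothetical helper: map snap_scheduler.py exit statuses
# (per the RETCODE table in the fix) to human-readable messages.
classify_snap_sched_ret() {
    case "$1" in
        0)  echo "success" ;;
        2)  echo "internal error" ;;
        3)  echo "shared storage dir does not exist" ;;
        4)  echo "shared storage is not mounted" ;;
        7)  echo "scheduler is already disabled" ;;
        8)  echo "scheduler is already enabled" ;;
        *)  echo "failed with code $1" ;;
    esac
}

# In a real deployment a wrapper would run:
#   snap_scheduler.py disable; classify_snap_sched_ret $?
classify_snap_sched_ret 7   # prints "scheduler is already disabled"
```

A caller can now, for example, treat code 7 (already disabled) as benign while escalating code 2 (internal error), which is exactly what the single generic failure code previously made impossible.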