Bug 1223206 - "Snap_scheduler disable" should have different return codes for different failures.
Summary: "Snap_scheduler disable" should have different return codes for different fai...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Avra Sengupta
QA Contact: senaik
URL:
Whiteboard: Scheduler
Depends On: 1218055 1227615
Blocks: 1202842 1223636
 
Reported: 2015-05-20 06:10 UTC by Avra Sengupta
Modified: 2016-09-17 13:05 UTC
CC: 9 users

Fixed In Version: glusterfs-3.7.1-1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1218055
Environment:
Last Closed: 2015-07-29 04:43:55 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHSA-2015:1495
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 08:26:26 UTC

Description Avra Sengupta 2015-05-20 06:10:01 UTC
+++ This bug was initially created as a clone of Bug #1218055 +++

Description of problem:
Currently "snap_scheduler disable" command has same return code for different types of failures like; snap_scheduler init not executed, snap_scheduler already disabled.. etc. 
When this command is executed through an external program like vdsm, its better to depend on return code than messages to recognise the failures. So, its better to have different return codes for different errors. 
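
For illustration, a caller such as vdsm could then branch on the exit status alone instead of parsing error messages. A minimal Python sketch, assuming distinct non-zero codes are assigned by the fix (the handling shown is hypothetical):

import subprocess

# Invoke the disable command and inspect only its exit status.
ret = subprocess.call(["snap_scheduler.py", "disable"])
if ret == 0:
    print("snapshot scheduling disabled")
else:
    # With distinct return codes each failure can be handled
    # programmatically, without matching error strings.
    print("disable failed with return code %d" % ret)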

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Avra Sengupta 2015-06-04 11:13:51 UTC
Mainline Patch Url: http://review.gluster.org/#/c/11005/
Release 3.7 Url: http://review.gluster.org/#/c/11057/
RHGS 3.1 Dev Branch Url: https://code.engineering.redhat.com/gerrit/#/c/49908/

Comment 4 Avra Sengupta 2015-06-05 06:43:28 UTC
Fixed on the RHGS 3.1 branch with https://code.engineering.redhat.com/gerrit/50090

Comment 5 senaik 2015-07-03 10:59:17 UTC
Version : glusterfs-3.7.1-7.el6rhs.x86_64

Different failure scenarios with snap_scheduler disable:

1) Running snap_scheduler without initialising the scheduler:
snap_scheduler.py status
snap_scheduler: Please run 'snap_scheduler.py' init to initialise the snap scheduler for the local node.

snap_scheduler.py disable
snap_scheduler: Please run 'snap_scheduler.py' init to initialise the snap scheduler for the local node.

echo $?
9

2) snap_scheduler is already disabled, and we try to disable it again:
snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled

snap_scheduler.py disable
snap_scheduler: Failed to disable scheduling. Error: Snapshot scheduling is already disabled.

[root@inception post]# echo $?
7

3) Unmount or delete the shared storage and disable snap_scheduler:

[root@inception post]# snap_scheduler.py disable
snap_scheduler: Failed: Shared storage is not mounted at /var/run/gluster/shared_storage

[root@inception post]# echo $?
4

Delete the shared storage using "gluster v set all cluster.enable-shared-storage disable" and disable snap_scheduler:

snap_scheduler.py disable
snap_scheduler: Failed: Shared storage is not mounted at /var/run/gluster/shared_storage
[root@inception post]# echo $?
4

4) Kill all the bricks in the shared storage and disable snap_scheduler:

snap_scheduler.py disable
snap_scheduler: Failed: /var/run/gluster/shared_storage does not exist.

[root@inception post]# echo $?
3

Marking the bug 'Verified'
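
For external consumers such as vdsm, the observed codes can be mapped back to a failure reason. A minimal Python sketch, assuming the mapping seen in this verification run (glusterfs-3.7.1-7); it is illustrative, not a documented interface:

import subprocess

# Return codes as observed in the scenarios above.
FAILURE_REASONS = {
    3: "/var/run/gluster/shared_storage does not exist",
    4: "shared storage is not mounted at /var/run/gluster/shared_storage",
    7: "snapshot scheduling is already disabled",
    9: "snap scheduler has not been initialised on the local node",
}

ret = subprocess.call(["snap_scheduler.py", "disable"])
if ret != 0:
    reason = FAILURE_REASONS.get(ret, "unknown failure")
    print("snap_scheduler disable failed (code %d): %s" % (ret, reason))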

Comment 6 errata-xmlrpc 2015-07-29 04:43:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

