Bug 1240231 - Ovirt should update current_scheduler file, once gluster shared storage is disabled and enabled (meta volume deleted and created back again)
Summary: Ovirt should update current_scheduler file, once gluster shared storage is disabled and enabled (meta volume deleted and created back again)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Shubhendu Tripathi
QA Contact: Triveni Rao
URL:
Whiteboard:
Depends On:
Blocks: 1216951 1223636 1251815
 
Reported: 2015-07-06 09:48 UTC by Anil Shah
Modified: 2016-05-16 04:39 UTC (History)
CC List: 11 users

Fixed In Version: rhsc-3.1.1-63
Doc Type: Bug Fix
Doc Text:
Previously, if the gluster meta volume was deleted from the CLI and added back again, Red Hat Gluster Storage Console did not disable CLI-based volume snapshot scheduling again. With this fix, the gluster sync job in the console has been modified so that, even if the meta volume is deleted and created again, the console explicitly disables the CLI-based snapshot schedule.
Clone Of:
Environment:
Last Closed: 2015-10-05 09:22:19 UTC
Embargoed:


Attachments
rhsc1 (78.43 KB, image/png), 2015-08-21 09:06 UTC, Triveni Rao
rhsc2 (76.00 KB, image/png), 2015-08-21 09:07 UTC, Triveni Rao


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:1848 0 normal SHIPPED_LIVE Red Hat Gluster Storage Console 3.1 update 1 bug fixes 2015-10-05 13:19:50 UTC
oVirt gerrit 43302 0 master MERGED gluster: Enabled to disable CLI volume snapshot schedule again Never
oVirt gerrit 44301 0 ovirt-engine-3.5-gluster MERGED gluster: Enabled to disable CLI volume snapshot schedule again Never

Description Anil Shah 2015-07-06 09:48:59 UTC
Description of problem:

Once a snapshot has been scheduled using the UI, disabling and then enabling the shared storage (i.e. deleting and recreating the meta volume) should cause oVirt to update the current_scheduler file.
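
To make the expectation concrete, here is a minimal check, a sketch only: the file path and the snap_scheduler.py status command are taken from the verification output in comment 13, where current_scheduler reads "ovirt" and the CLI scheduler reports Disabled once the console owns scheduling again.

from pathlib import Path
import subprocess

# Marker file maintained under the shared storage mount (path from comment 13)
CURRENT_SCHEDULER = Path("/var/run/gluster/shared_storage/snaps/current_scheduler")

def check_scheduler_ownership():
    marker = CURRENT_SCHEDULER.read_text().strip()
    # snap_scheduler.py is the gluster CLI tool shown in the verification output
    status = subprocess.run(["snap_scheduler.py", "status"],
                            capture_output=True, text=True).stdout
    print("current_scheduler:", marker)   # expected: ovirt
    print(status.strip())                 # expected: "... status: Disabled"
    return marker == "ovirt" and "Disabled" in status

if __name__ == "__main__":
    check_scheduler_ownership()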

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Shubhendu Tripathi 2015-07-07 05:40:52 UTC
This applies if the meta volume gets deleted and created again at a later stage, at a point where volume snapshot scheduling is handled by oVirt and has been disabled from the CLI.

When the meta volume is added back to the oVirt DB, the flag should be set accordingly in current_scheduler in gluster.
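
A minimal sketch of that idea, assuming hypothetical helper names: the actual change is in the console's gluster sync job (see the linked gerrit patches), and disable_cli_snapshot_scheduling below stands in for whatever the engine invokes, which ultimately leaves current_scheduler containing "ovirt".

META_VOLUME = "gluster_shared_storage"

def on_volumes_synced(previously_known, discovered, disable_cli_snapshot_scheduling):
    """Hypothetical hook: re-disable CLI snapshot scheduling when the meta
    volume shows up again during a gluster sync pass."""
    newly_added = set(discovered) - set(previously_known)
    if META_VOLUME in newly_added:
        # The meta volume was deleted and recreated from the CLI, so the earlier
        # ownership marker is gone; take ownership again so the CLI-based
        # schedule stays disabled and oVirt remains the only scheduler.
        disable_cli_snapshot_scheduling()
    return newly_added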

Comment 3 monti lawrence 2015-07-23 15:48:23 UTC
The doc text is edited. Please sign off on it so it can be included in Known Issues.

Comment 4 Shubhendu Tripathi 2015-07-23 17:03:15 UTC
Edited the doc-text a bit. Looks fine now.

Comment 8 Shubhendu Tripathi 2015-08-13 05:43:49 UTC
Errata moved it to ON_QA

Comment 11 Triveni Rao 2015-08-21 09:06:35 UTC
Created attachment 1065508 [details]
rhsc1

Comment 12 Triveni Rao 2015-08-21 09:07:31 UTC
Created attachment 1065510 [details]
rhsc2

Comment 13 Triveni Rao 2015-08-21 09:08:03 UTC
This bug was verified and no issues were found.

The following steps were performed to verify this bug:

1. Fresh installation of RHSC.
2. Add hosts and create a volume.
3. Create a volume snapshot and schedule it from RHSC.
4. This should disable the CLI snapshot scheduler (status: Disabled).
5. Now delete the meta volume from the CLI.
6. Delete the mount path of the meta volume.
7. Recreate the meta volume from the CLI.
8. Mount it at the required path (it mounts automatically after sync).
9. From the UI, create a schedule for a volume.
10. Check the status of the scheduler in the CLI and also the scheduler file.
11. The scheduler should be Disabled and the file should be set to ovirt.


Output from the CLI of the node:

[root@casino-vm3 ~]# gluster v stop gluster_shared_storage
Stopping the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume stop: gluster_shared_storage: success
[root@casino-vm3 ~]# gluster v delete gluster_shared_storage
Deleting the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume delete: gluster_shared_storage: success
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# 

[root@casino-vm3 ~]# gluster v create gluster_shared_storage 10.70.35.77:/rhgs/brick1/v0 10.70.35.82:/rhgs/brick1/v0 force
volume create: gluster_shared_storage: success: please start the volume to access data
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# d
[root@casino-vm3 ~]# gluster v start gluster_shared_storage 
volume start: gluster_shared_storage: success
[root@casino-vm3 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_casinovm3-lv_root
                       18G  2.8G   14G  18% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/vda1             477M   36M  416M   8% /boot
/dev/mapper/vg--brick1-brick1
                       50G   33M   50G   1% /rhgs/brick1
/dev/mapper/vg--brick2-brick2
                       50G   33M   50G   1% /rhgs/brick2
/dev/mapper/vg--brick3-brick3
                       50G   33M   50G   1% /rhgs/brick3
/dev/mapper/vg--brick4-brick4
                       50G   33M   50G   1% /rhgs/brick4
/dev/mapper/vg--brick5-brick5
                       50G   35M   50G   1% /rhgs/brick5
dhcp35-82.lab.eng.blr.redhat.com:/gluster_shared_storage
                       99G   66M   99G   1% /var/run/gluster/shared_storage
/dev/mapper/vg--brick1-857f038386aa49d483ed244beafb9bd5_0
                       50G   33M   50G   1% /var/run/gluster/snaps/857f038386aa49d483ed244beafb9bd5/brick2
/dev/mapper/vg--brick2-857f038386aa49d483ed244beafb9bd5_1
                       50G   33M   50G   1% /var/run/gluster/snaps/857f038386aa49d483ed244beafb9bd5/brick4
/dev/mapper/vg--brick3-f4f345ff4ca54b6db849d01b0a75883f_0
                       50G   33M   50G   1% /var/run/gluster/snaps/f4f345ff4ca54b6db849d01b0a75883f/brick3
/dev/mapper/vg--brick4-f4f345ff4ca54b6db849d01b0a75883f_1
                       50G   33M   50G   1% /var/run/gluster/snaps/f4f345ff4ca54b6db849d01b0a75883f/brick4
/dev/mapper/vg--brick2-ab6058d55c3f47eb9c1d445205a6c2b9_1
                       50G   33M   50G   1% /var/run/gluster/snaps/ab6058d55c3f47eb9c1d445205a6c2b9/brick4
/dev/mapper/vg--brick1-ab6058d55c3f47eb9c1d445205a6c2b9_0
                       50G   33M   50G   1% /var/run/gluster/snaps/ab6058d55c3f47eb9c1d445205a6c2b9/brick2
/dev/mapper/vg--brick4-2d21fb2333d34e97b337541beaf858a7_1
                       50G   33M   50G   1% /var/run/gluster/snaps/2d21fb2333d34e97b337541beaf858a7/brick4
/dev/mapper/vg--brick3-2d21fb2333d34e97b337541beaf858a7_0
                       50G   33M   50G   1% /var/run/gluster/snaps/2d21fb2333d34e97b337541beaf858a7/brick3
[root@casino-vm3 ~]#




[root@casino-vm3 ~]# vi /var/run/gluster/shared_storage/snaps/current_scheduler 
[root@casino-vm3 ~]# cat /var/run/gluster/shared_storage/snaps/current_scheduler 
none[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# cat /var/run/gluster/shared_storage/snaps/current_scheduler 
ovirt
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled
[root@casino-vm3 ~]#

Comment 14 Bhavana 2015-09-22 06:08:42 UTC
Hi Shubhendu,

The doc text is updated. Please review it and share your technical review comments. If it looks OK, please sign off on it.

Comment 15 Shubhendu Tripathi 2015-09-22 06:10:16 UTC
Looks fine.

Comment 17 errata-xmlrpc 2015-10-05 09:22:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1848.html

