Bug 1240231 - Ovirt should update current_scheduler file, once gluster shared storage is disabled and enabled (meta volume deleted and created back again)
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.1
Assigned To: Shubhendu Tripathi
QA Contact: Triveni Rao
Keywords: ZStream
Depends On:
Blocks: 1216951 1223636 1251815
Reported: 2015-07-06 05:48 EDT by Anil Shah
Modified: 2016-05-16 00:39 EDT (History)
CC List: 11 users

See Also:
Fixed In Version: rhsc-3.1.1-63
Doc Type: Bug Fix
Doc Text:
Previously, if the gluster meta volume was deleted from the CLI and added back again, Red Hat Gluster Storage Console did not trigger disabling of CLI-based volume snapshot scheduling again. With this fix, the gluster sync job in the console is modified so that, even if the meta volume is deleted and recreated, the console explicitly disables the CLI-based snapshot schedule. (A minimal verification sketch follows the metadata fields below.)
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-05 05:22:19 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
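To illustrate the doc text above, a minimal verification sketch. Both commands appear verbatim in the QA transcript later in this bug; the mount point assumes the default shared-storage location.

cat /var/run/gluster/shared_storage/snaps/current_scheduler
# expected to read "ovirt" once the console owns snapshot scheduling again

snap_scheduler.py status
# expected: snap_scheduler: Snapshot scheduling status: Disabled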


Attachments
rhsc1 (78.43 KB, image/png)
2015-08-21 05:06 EDT, Triveni Rao
rhsc2 (76.00 KB, image/png)
2015-08-21 05:07 EDT, Triveni Rao


External Trackers
oVirt gerrit 43302 (master): MERGED - gluster: Enabled to disable CLI volume snapshot schedule again (Last Updated: Never)
oVirt gerrit 44301 (ovirt-engine-3.5-gluster): MERGED - gluster: Enabled to disable CLI volume snapshot schedule again (Last Updated: Never)
Red Hat Product Errata RHBA-2015:1848 (normal): SHIPPED_LIVE - Red Hat Gluster Storage Console 3.1 update 1 bug fixes (Last Updated: 2015-10-05 09:19:50 EDT)

Description Anil Shah 2015-07-06 05:48:59 EDT
Description of problem:

Once a snapshot has been scheduled using the UI, disabling and then re-enabling the shared storage (that is, deleting and recreating the meta volume) should make oVirt update the current_scheduler file.
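For reference, a minimal sketch of disabling and re-enabling shared storage from the gluster CLI; the volume-set option is the standard gluster shared-storage toggle, and the file path is the one named in this bug:

# disable shared storage (removes the gluster_shared_storage meta volume)
gluster volume set all cluster.enable-shared-storage disable

# enable shared storage again (recreates and mounts the meta volume)
gluster volume set all cluster.enable-shared-storage enable

# the file oVirt should update once the meta volume is back
cat /var/run/gluster/shared_storage/snaps/current_scheduler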

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 2 Shubhendu Tripathi 2015-07-07 01:40:52 EDT
This happens if the meta volume gets deleted and created again at a later stage, when volume snapshot scheduling is already handled by oVirt and has been disabled from the CLI.

When the meta volume is added back to the oVirt DB, the flag should be set accordingly in current_scheduler in gluster.
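A minimal, illustrative sketch of the check the engine's gluster sync job needs to make when the meta volume reappears (file path taken from this bug; the actual re-disable is performed inside the console, not by this snippet):

SCHED_FILE=/var/run/gluster/shared_storage/snaps/current_scheduler

# if the recreated meta volume no longer records oVirt as the scheduler owner,
# the console must explicitly disable CLI-based snapshot scheduling again
if [ "$(cat "$SCHED_FILE" 2>/dev/null)" != "ovirt" ]; then
    echo "meta volume was recreated: console should re-disable CLI snapshot scheduling"
fi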
Comment 3 monti lawrence 2015-07-23 11:48:23 EDT
Doc text is edited. Please sign off to be included in Known Issues.
Comment 4 Shubhendu Tripathi 2015-07-23 13:03:15 EDT
Edited the doc-text a bit. Looks fine now.
Comment 8 Shubhendu Tripathi 2015-08-13 01:43:49 EDT
The errata tool moved this bug to ON_QA.
Comment 11 Triveni Rao 2015-08-21 05:06:35 EDT
Created attachment 1065508 [details]
rhsc1
Comment 12 Triveni Rao 2015-08-21 05:07:31 EDT
Created attachment 1065510 [details]
rhsc2
Comment 13 Triveni Rao 2015-08-21 05:08:03 EDT
This bug was verified and no issues were found.

The following steps were performed to verify this bug:

1. Fresh installation of RHSC.
2. Add hosts and create a volume.
3. Create a volume snapshot and a schedule from RHSC.
4. This should disable the CLI snapshot scheduler (status: Disabled).
5. Delete the meta volume from the CLI.
6. Delete the mount path of the meta volume (sketched after this list; not captured in the CLI output below).
7. Recreate the meta volume from the CLI.
8. Mount it on the required path (it mounts automatically after the sync).
9. From the UI, create a schedule for a volume.
10. Check the scheduler status in the CLI and also the scheduler file.
11. The status should be Disabled and the scheduler file set to ovirt.
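A minimal sketch of step 6, which is not captured in the CLI output below; the mount point is the standard shared-storage path seen in this comment, and the exact cleanup may vary:

# unmount the old meta volume and remove the stale mount path
umount /var/run/gluster/shared_storage
rm -rf /var/run/gluster/shared_storage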


Output from the CLI of the node:

[root@casino-vm3 ~]# gluster v stop gluster_shared_storage
Stopping the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume stop: gluster_shared_storage: success
[root@casino-vm3 ~]# gluster v delete gluster_shared_storage
Deleting the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume delete: gluster_shared_storage: success
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# 

[root@casino-vm3 ~]# gluster v create gluster_shared_storage 10.70.35.77:/rhgs/brick1/v0 10.70.35.82:/rhgs/brick1/v0 force
volume create: gluster_shared_storage: success: please start the volume to access data
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# d
[root@casino-vm3 ~]# gluster v start gluster_shared_storage 
volume start: gluster_shared_storage: success
[root@casino-vm3 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_casinovm3-lv_root
                       18G  2.8G   14G  18% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/vda1             477M   36M  416M   8% /boot
/dev/mapper/vg--brick1-brick1
                       50G   33M   50G   1% /rhgs/brick1
/dev/mapper/vg--brick2-brick2
                       50G   33M   50G   1% /rhgs/brick2
/dev/mapper/vg--brick3-brick3
                       50G   33M   50G   1% /rhgs/brick3
/dev/mapper/vg--brick4-brick4
                       50G   33M   50G   1% /rhgs/brick4
/dev/mapper/vg--brick5-brick5
                       50G   35M   50G   1% /rhgs/brick5
dhcp35-82.lab.eng.blr.redhat.com:/gluster_shared_storage
                       99G   66M   99G   1% /var/run/gluster/shared_storage
/dev/mapper/vg--brick1-857f038386aa49d483ed244beafb9bd5_0
                       50G   33M   50G   1% /var/run/gluster/snaps/857f038386aa49d483ed244beafb9bd5/brick2
/dev/mapper/vg--brick2-857f038386aa49d483ed244beafb9bd5_1
                       50G   33M   50G   1% /var/run/gluster/snaps/857f038386aa49d483ed244beafb9bd5/brick4
/dev/mapper/vg--brick3-f4f345ff4ca54b6db849d01b0a75883f_0
                       50G   33M   50G   1% /var/run/gluster/snaps/f4f345ff4ca54b6db849d01b0a75883f/brick3
/dev/mapper/vg--brick4-f4f345ff4ca54b6db849d01b0a75883f_1
                       50G   33M   50G   1% /var/run/gluster/snaps/f4f345ff4ca54b6db849d01b0a75883f/brick4
/dev/mapper/vg--brick2-ab6058d55c3f47eb9c1d445205a6c2b9_1
                       50G   33M   50G   1% /var/run/gluster/snaps/ab6058d55c3f47eb9c1d445205a6c2b9/brick4
/dev/mapper/vg--brick1-ab6058d55c3f47eb9c1d445205a6c2b9_0
                       50G   33M   50G   1% /var/run/gluster/snaps/ab6058d55c3f47eb9c1d445205a6c2b9/brick2
/dev/mapper/vg--brick4-2d21fb2333d34e97b337541beaf858a7_1
                       50G   33M   50G   1% /var/run/gluster/snaps/2d21fb2333d34e97b337541beaf858a7/brick4
/dev/mapper/vg--brick3-2d21fb2333d34e97b337541beaf858a7_0
                       50G   33M   50G   1% /var/run/gluster/snaps/2d21fb2333d34e97b337541beaf858a7/brick3
[root@casino-vm3 ~]#


[root@casino-vm3 ~]# vi /var/run/gluster/shared_storage/snaps/current_scheduler 
[root@casino-vm3 ~]# cat /var/run/gluster/shared_storage/snaps/current_scheduler 
none[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# cat /var/run/gluster/shared_storage/snaps/current_scheduler 
ovirt
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled
[root@casino-vm3 ~]#
Comment 14 Bhavana 2015-09-22 02:08:42 EDT
Hi Shubhendu,

The doc text is updated. Please review it and share your technical review comments. If it looks OK, sign off on it.
Comment 15 Shubhendu Tripathi 2015-09-22 02:10:16 EDT
Looks fine.
Comment 17 errata-xmlrpc 2015-10-05 05:22:19 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1848.html
