Description of problem:
Once a snapshot has been scheduled for a volume through the UI, disabling and re-enabling the shared storage volume should cause oVirt to update the current_scheduler file in Gluster.
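For reference, the scheduler ownership flag can be checked from any node with the commands below (a minimal sketch; the mount point /var/run/gluster/shared_storage is the one seen in the verification transcript later in this bug and may differ on other setups):

# Show which scheduler currently owns snapshot scheduling ("ovirt" or "none")
cat /var/run/gluster/shared_storage/snaps/current_scheduler

# While oVirt owns the schedule, the CLI scheduler should report Disabled
snap_scheduler.py status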
If the meta volume is deleted and recreated at a later stage, while volume snapshot scheduling is handled by oVirt and the CLI scheduler is disabled, then when the meta volume is added back to the oVirt DB, the flag in current_scheduler in Gluster should be set accordingly.
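A minimal sketch of the recreation sequence this refers to (volume name, brick paths, and hosts are taken from the verification transcript further down; the expected flag value is the behaviour under test, not a guarantee):

# Recreate the meta volume from the CLI after it was deleted
gluster volume create gluster_shared_storage 10.70.35.77:/rhgs/brick1/v0 10.70.35.82:/rhgs/brick1/v0 force
gluster volume start gluster_shared_storage

# After oVirt re-adds the meta volume to its DB, the flag should read "ovirt" again
cat /var/run/gluster/shared_storage/snaps/current_scheduler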
The doc text is edited. Please sign off so it can be included in Known Issues.
Edited the doc-text a bit. Looks fine now.
Errata moved it to ON_QA
Created attachment 1065508 [details] rhsc1
Created attachment 1065510 [details] rhsc2
This bug is verified and no issues were found. The following steps were performed to verify it:
1. Fresh installation of RHSC.
2. Add hosts and create a volume.
3. Create a volume snapshot and a schedule from RHSC.
4. This disables the CLI snapshot scheduler (snap_scheduler.py status: Disabled).
5. Delete the meta volume from the CLI.
6. Delete the mount path of the meta volume.
7. Recreate the meta volume from the CLI.
8. Mount it at the required path (it is mounted automatically after sync).
9. From the UI, create a schedule for a volume.
10. Check the status of the scheduler in the CLI and also the scheduler file.
11. The CLI scheduler should be Disabled and current_scheduler should be set to "ovirt" (see the scripted check after the transcript below).

Output from the CLI of the node:

[root@casino-vm3 ~]# gluster v stop gluster_shared_storage
Stopping the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume stop: gluster_shared_storage: success
[root@casino-vm3 ~]# gluster v delete gluster_shared_storage
Deleting the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume delete: gluster_shared_storage: success
[root@casino-vm3 ~]# gluster v create gluster_shared_storage 10.70.35.77:/rhgs/brick1/v0 10.70.35.82:/rhgs/brick1/v0 force
volume create: gluster_shared_storage: success: please start the volume to access data
[root@casino-vm3 ~]# gluster v start gluster_shared_storage
volume start: gluster_shared_storage: success
[root@casino-vm3 ~]# df -h
Filesystem                                                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_casinovm3-lv_root                             18G  2.8G   14G  18% /
tmpfs                                                       3.9G     0  3.9G   0% /dev/shm
/dev/vda1                                                   477M   36M  416M   8% /boot
/dev/mapper/vg--brick1-brick1                                50G   33M   50G   1% /rhgs/brick1
/dev/mapper/vg--brick2-brick2                                50G   33M   50G   1% /rhgs/brick2
/dev/mapper/vg--brick3-brick3                                50G   33M   50G   1% /rhgs/brick3
/dev/mapper/vg--brick4-brick4                                50G   33M   50G   1% /rhgs/brick4
/dev/mapper/vg--brick5-brick5                                50G   35M   50G   1% /rhgs/brick5
dhcp35-82.lab.eng.blr.redhat.com:/gluster_shared_storage     99G   66M   99G   1% /var/run/gluster/shared_storage
/dev/mapper/vg--brick1-857f038386aa49d483ed244beafb9bd5_0    50G   33M   50G   1% /var/run/gluster/snaps/857f038386aa49d483ed244beafb9bd5/brick2
/dev/mapper/vg--brick2-857f038386aa49d483ed244beafb9bd5_1    50G   33M   50G   1% /var/run/gluster/snaps/857f038386aa49d483ed244beafb9bd5/brick4
/dev/mapper/vg--brick3-f4f345ff4ca54b6db849d01b0a75883f_0    50G   33M   50G   1% /var/run/gluster/snaps/f4f345ff4ca54b6db849d01b0a75883f/brick3
/dev/mapper/vg--brick4-f4f345ff4ca54b6db849d01b0a75883f_1    50G   33M   50G   1% /var/run/gluster/snaps/f4f345ff4ca54b6db849d01b0a75883f/brick4
/dev/mapper/vg--brick2-ab6058d55c3f47eb9c1d445205a6c2b9_1    50G   33M   50G   1% /var/run/gluster/snaps/ab6058d55c3f47eb9c1d445205a6c2b9/brick4
/dev/mapper/vg--brick1-ab6058d55c3f47eb9c1d445205a6c2b9_0    50G   33M   50G   1% /var/run/gluster/snaps/ab6058d55c3f47eb9c1d445205a6c2b9/brick2
/dev/mapper/vg--brick4-2d21fb2333d34e97b337541beaf858a7_1    50G   33M   50G   1% /var/run/gluster/snaps/2d21fb2333d34e97b337541beaf858a7/brick4
/dev/mapper/vg--brick3-2d21fb2333d34e97b337541beaf858a7_0    50G   33M   50G   1% /var/run/gluster/snaps/2d21fb2333d34e97b337541beaf858a7/brick3
[root@casino-vm3 ~]# vi /var/run/gluster/shared_storage/snaps/current_scheduler
[root@casino-vm3 ~]# cat /var/run/gluster/shared_storage/snaps/current_scheduler
none

(a snapshot schedule is then created for a volume from the UI, per step 9)

[root@casino-vm3 ~]# cat /var/run/gluster/shared_storage/snaps/current_scheduler
ovirt
[root@casino-vm3 ~]# snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled
[root@casino-vm3 ~]#
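Steps 10 and 11 above can be scripted for regression runs; a minimal sketch, assuming the same mount point as in the transcript:

#!/bin/sh
# Verify that oVirt owns the schedule and the CLI scheduler stays disabled
flag=$(cat /var/run/gluster/shared_storage/snaps/current_scheduler)
status=$(snap_scheduler.py status)
echo "current_scheduler: $flag"
echo "$status"
if [ "$flag" = "ovirt" ] && echo "$status" | grep -q "Disabled"; then
    echo "PASS"
else
    echo "FAIL"
fi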
Hi Shubhendu, the doc text is updated. Please review it and share your technical review comments. If it looks OK, sign off on it.
Looks fine.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2015-1848.html