Description of problem: Whenever a user triggers snapshot scheduling from the UI, it should disable the scheduling that is present in the CLI.

Version-Release number of selected component (if applicable): ovirt-engine-3.6.0-0.0.master.20150420232310.gite30f655.el6.noarch

Expected results: Snapshot scheduling from the UI and the CLI should not co-exist. When a user triggers a snapshot schedule from the UI, snapshot scheduling from the CLI should be disabled.
Target release should be placed once a package build is known to fix an issue. Since this bug is not in MODIFIED, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, postpone it to a later release. All bugs not postponed by the GA release will be automatically re-targeted to:
- 3.6.1 if severity >= high
- 4.0 if severity < high
oVirt 3.6.0 has been released on November 4th, re-targeting to 4.0 since this bug has been marked with severity < high
@Sahina, I remember this was merged. Please check whether it can be moved to MODIFIED.
This bug is flagged for 3.6, yet the milestone is for a 4.0 version; therefore the milestone has been reset. Please set the correct milestone or add the flag.
Bug tickets must have a target milestone set before they are fixed. Please set the correct milestone and move the bug back to its previous status once this is corrected.
This bug is not marked for z-stream, yet the milestone is for a z-stream version, therefore the milestone has been reset. Please set the correct milestone or add the z-stream flag.
3.6.1 is out, moving to 3.6.2.
Tested with RHEV 3.6.3.4; snapshot scheduling from the UI and the CLI seem to co-exist:
1. I was able to schedule a snapshot from the CLI.
2. At the same time, I was able to schedule a snapshot from the UI.
Hi Sahina,

Tested with RHGS-C; snapshot scheduling from the UI and the CLI do not co-exist.

~~~
# snap_scheduler.py init
snap_scheduler: Successfully inited snapshot scheduler for this node
# snap_scheduler.py enable
snap_scheduler: Snapshot scheduling is enabled
# snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Enabled
# cat /var/run/gluster/shared_storage/snaps/current_scheduler
cli
~~~

Now go to the web UI and schedule snapshots there --> Warning:

~~~
Gluster CLI based snapshot scheduling is enabled. It would be disabled once volume snapshots scheduled from UI.
~~~

After scheduling the snapshots, check the CLI:

~~~
# cat /var/run/gluster/shared_storage/snaps/current_scheduler
ovirt
# snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled
~~~

Try enabling it from the CLI:

~~~
# snap_scheduler.py enable
snap_scheduler: Failed to enable snapshot scheduling. Error: Another scheduler is active.
~~~

Now suppose you enable the schedule manually in the CLI. The engine does not try to disable it again, because RHSC maintains a flag recording that CLI scheduling was already disabled for the cluster. Because of this, if you try to create a new schedule again from RHSC, there is no way for it to disable the CLI schedule.

Also, my understanding is that once RHSC is used to manage a Gluster cluster, it takes ownership of snapshot scheduling for that cluster, and the user is not expected to revert to CLI scheduling later. This is why the feature is designed so that RHSC disables the CLI schedule only the first time, and maintains that state afterwards.
Shubhendu, can you please check if we're missing some patches from master in 3.6 branch (vdsm or engine) ?
Kasturi's comments are correct. Only the first time a snapshot schedule is created from oVirt engine does it disable the CLI based scheduling for the cluster. From then on, oVirt maintains a flag for the cluster recording that the CLI schedule is disabled, and only oVirt can schedule snapshots for that cluster. If somebody manually edits the configuration in the CLI, oVirt cannot detect this and re-disable it. However, if somebody tries to enable CLI based scheduling with the command "snap_scheduler.py enable", the back-end does not allow it.

@Sahina, I don't think we are missing any patches here.
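The ownership logic described above can be sketched as a small shell script. This is only an illustrative model, not the actual snap_scheduler.py or engine code: the try_enable helper is hypothetical, and a temp file stands in for the real /var/run/gluster/shared_storage/snaps/current_scheduler. Only the flag values ("cli", "ovirt", "none") and the error message come from this bug report.

```shell
#!/bin/sh
# Sketch: a scheduler may take ownership only if the flag file says "none"
# or already names that scheduler; otherwise enabling is refused.
FLAG_FILE=$(mktemp)

try_enable() {
    # $1 = scheduler requesting ownership ("cli" or "ovirt")
    current=$(cat "$FLAG_FILE")
    if [ "$current" != "none" ] && [ "$current" != "$1" ]; then
        echo "Failed to enable snapshot scheduling. Error: Another scheduler is active."
        return 1
    fi
    echo "$1" > "$FLAG_FILE"
    echo "Snapshot scheduling is enabled ($1)"
}

echo "none" > "$FLAG_FILE"
try_enable cli               # succeeds: no owner yet
try_enable ovirt             # refused: cli owns scheduling
echo "ovirt" > "$FLAG_FILE"  # the UI takes over (as the engine does the first time)
try_enable cli               # refused: ovirt owns scheduling
rm -f "$FLAG_FILE"
```

This also illustrates the gap Kasturi describes: the check only runs when a scheduler asks to be enabled, so an out-of-band edit of the flag file is never detected.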
Thanks Shubhendu. Sas, can you verify based on comments 16 & 17?
Bugs were moved prematurely to ON_QA since they didn't have a target release. Notice that only bugs with a set target release will move to ON_QA.
Has this been fixed in 3.6.5? I see patches only for master.
The patches in master were merged (June 2015) before the 3.6 branch was created, and have been available since the 3.6.0 builds. This bug needs to be verified as per Comment 17. Moving to ON_QA.
Tested with 3.6.5 & RHGS 3.1.2, by adding the RHGS node to a 3.5 cluster.

While setting up snapshot scheduling from the UI, the /var/run/gluster/shared_storage/snaps/current_scheduler file contains "none" instead of the expected value "ovirt".

I see the following warning message in engine.log:

<snip>
2016-04-18 06:01:59,530 WARN [org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (ajp-/127.0.0.1:8702-15) [57ad0ef7] The message key 'ScheduleGlusterVolumeSnapshot' is missing from 'bundles/ExecutionMessages'
</snip>
(In reply to SATHEESARAN from comment #22)
> While setting up snap scheduling from UI, the
> /var/run/gluster/shared_storage/snaps/current_scheduler file contains the
> "none", against the value of "ovirt"

Based on this, I am moving this bug to FailedQA.
The error message provided in Comment 22 is not related to this issue; it is a benign message concerning the display of error messages. Triveni, can you please check the test case being executed? I think this works in RHSC with the same patches.
I have checked with a recent version of RHGS-C (3.1.3) and don't see this issue. It works as expected.
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has been already released and bug is not ON_QA.
oVirt 4.0 beta has been released, moving to RC milestone.
This is not a commonly seen scenario - hence closing this. Please reopen if you see a need to fix this.