Description of problem:
We need control over RHEV-M engine guest placement to enforce our business rules. The expectation is that we will use the Scheduler API features introduced in 3.3 to do this, and ovirt-scheduler-proxy seems to be the immediate path. However, when the scheduler proxy is running and external scheduling is enabled in RHEV-M, RHEV-M does not expose the external scheduler's policy units or automatically place them into the default cluster policy (None). I don't see a way via rhevm-shell or the API to create new policies, or to modify the None policy, so that the policy units from the external scheduler can take effect. That has left me modifying the engine database directly to test this. Here is what I'm currently doing to enable testing of an external scheduler:

# On the RHEV Manager:
[root@lb0160 ~]# chkconfig ovirt-scheduler-proxy on
[root@lb0160 ~]# service ovirt-scheduler-proxy start
Starting oVirt Scheduler Proxy:                            [  OK  ]
[root@lb0160 ~]# rhevm-config -s ExternalSchedulerEnabled=true

* Placed a functional version of
  /usr/share/doc/ovirt-scheduler-proxy-0.1.3/plugins/examples/max_vm_filter.py
  at: /usr/share/ovirt-scheduler-proxy/plugins/max_vm_filter.py

# engine database change
# Set the maximum_vm_count property used by max_vm_filter.py in the test
# cluster's policy custom properties:
engine=> update vds_groups
         set cluster_policy_custom_properties = '{ "maximum_vm_count" : "2" }'
         where name = 'rhevc-west';

# engine database change
# Add the "max_vms" policy unit discovered by the engine from the external
# scheduler proxy script to the "None" cluster scheduling policy:
engine=> insert into cluster_policy_units
         (cluster_policy_id, policy_unit_id, filter_sequence, factor)
         select cp.id, pu.id, 0, 0
         from (select id from cluster_policies where name = 'None') as cp,
              (select id from policy_units where name = 'max_vms') as pu;

# restart the engine, because I'm not familiar with the data access layer
[root@lb0160 ~]# service ovirt-engine restart

Version-Release number of selected component
(if applicable): 3.3.0 GA

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

Working from the UI: in order to use an external module you will need to create a new cluster policy, which is a system-level activity. This is done using the 'Configure' link-button in the upper-right corner of the screen. Once you press it, you will see a dialog with a side-tab called 'Cluster Policies'; this is where the admin manages non-default cluster policies. Use the copy button on an existing policy. You should see an (EXT) indication for your filter in the right-hand list (disabled filters); use drag and drop to move it to the left side. Give the policy a name. Next, close the dialogs, go to the relevant cluster in the Clusters main tab, right-click, and choose Edit. In the Policy tab, you should be able to see your new policy.
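For reference, the core logic of a max-VM filter like the shipped max_vm_filter.py example can be sketched as below. This is only an illustration of the filtering logic: the actual plugin interface expected by ovirt-scheduler-proxy (class conventions, how accepted hosts are reported back to the proxy) is not reproduced here, and the `do_filter` signature and host dictionaries are assumptions for the sketch.

```python
# Illustrative sketch of a max-VM-count scheduling filter.
# The real ovirt-scheduler-proxy plugin API may differ; this shows
# only the filtering decision itself.

def do_filter(hosts, args_map):
    """Return the subset of hosts whose running-VM count is below the
    'maximum_vm_count' custom property (no limit if unset/invalid)."""
    try:
        max_vms = int(args_map.get("maximum_vm_count", -1))
    except (TypeError, ValueError):
        max_vms = -1
    if max_vms < 0:
        return hosts  # property unset or invalid: filter nothing
    return [h for h in hosts if h["vm_count"] < max_vms]

hosts = [
    {"id": "host-a", "vm_count": 1},
    {"id": "host-b", "vm_count": 2},
    {"id": "host-c", "vm_count": 3},
]
# With maximum_vm_count=2 (as set on the test cluster above), only
# hosts running fewer than 2 VMs remain eligible.
eligible = do_filter(hosts, {"maximum_vm_count": "2"})
```

With the cluster property set to "2" as in the database change above, only host-a would pass this filter.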
rhevm-shell is built on top of the SDK, which uses the REST API. So this feature is essentially about supporting scheduling policy management (including external module updates) in the REST API.
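Once the REST API supports it, a client could create a scheduling policy programmatically instead of editing the database. A minimal sketch of building the request body is below; the resource path `/api/schedulingpolicies` and the XML element names are assumptions based on the API's usual conventions, and should be checked against the RSDL of the actual engine version.

```python
import xml.etree.ElementTree as ET

def build_scheduling_policy_xml(name, description=""):
    """Build an XML body for creating a scheduling policy.
    Element names are assumed from RHEV-M REST conventions; verify
    against your engine's RSDL before use."""
    policy = ET.Element("scheduling_policy")
    ET.SubElement(policy, "name").text = name
    if description:
        ET.SubElement(policy, "description").text = description
    return ET.tostring(policy, encoding="unicode")

body = build_scheduling_policy_xml("max_vms_policy")
# A client would then POST this body to /api/schedulingpolicies
# (assumed endpoint) with Content-Type: application/xml.
```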
*** Bug 1108601 has been marked as a duplicate of this bug. ***
Verified on rhevm-3.5.0-0.13.beta.el6ev.noarch. Checked, via REST, the ability to:
- remove an external scheduler unit
- add an external scheduler unit to a user policy
- remove an external scheduler unit from a user policy
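The verification steps above amount to POST/DELETE calls on a policy's filters sub-collection. A hedged sketch of how a client might construct those requests follows; the base URL, path layout, and element names are all assumptions for illustration (consult the engine's RSDL for the real resource tree).

```python
import xml.etree.ElementTree as ET

API = "https://rhevm.example.com/api"  # hypothetical engine URL

def filters_url(policy_id, filter_id=None):
    """URL of a scheduling policy's filters sub-collection, or of one
    attached filter. The path layout is assumed, not confirmed."""
    url = "%s/schedulingpolicies/%s/filters" % (API, policy_id)
    return url if filter_id is None else "%s/%s" % (url, filter_id)

def filter_body(policy_unit_id):
    """XML body referencing a policy unit (e.g. the external 'max_vms'
    unit) to attach to a policy as a filter."""
    f = ET.Element("filter")
    ET.SubElement(f, "scheduling_policy_unit", {"id": policy_unit_id})
    return ET.tostring(f, encoding="unicode")

# POST filter_body(unit_id) to filters_url(policy_id) to add the unit;
# DELETE filters_url(policy_id, unit_id) to remove it.
```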
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-0158.html
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days