Bug 1014697 - Scheduling: allow overbooking resources
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Gilad Chaplik
QA Contact: Artyom
Whiteboard: sla
Depends On:
Blocks: 1019461 3.3rc1
Reported: 2013-10-02 10:46 EDT by Gilad Chaplik
Modified: 2016-02-10 15:14 EST (History)
CC List: 8 users
Fixed In Version: is28
Doc Type: Bug Fix
Type: Bug
oVirt Team: SLA


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 19272 None None None Never
oVirt gerrit 22174 None None None Never

Description Gilad Chaplik 2013-10-02 10:46:47 EDT
The scheduling process is synchronized to avoid overbooking a cluster's resources.
Add a per-cluster option to skip this synchronization of scheduling requests.

This allows concurrent scheduling requests for the same cluster and makes it
possible to handle a large number of scheduling requests but, as the name
indicates, it may overbook hosts with extra VMs. Depending on host capacity, the
VMs may fail to run, or may succeed while overloading the host (which should
then be handled by load balancing).

Note that this should be limited by a minimal configurable amount, so if there
are fewer than X pending requests they will still be synchronized.

By default, the overbooking setting is disabled and hidden; it can be made visible by setting a config value.
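The skip-lock mechanism described above can be sketched as follows. This is a hypothetical Python sketch, not the actual ovirt-engine code (the real logic lives in the Java `SchedulingManager`); the class and parameter names are illustrative only:

```python
import threading

class ClusterScheduler:
    """Hypothetical sketch of threshold-based lock skipping: scheduling
    requests normally serialize on a per-cluster lock, but once enough
    requests are pending, the lock is skipped and overbooking is allowed."""

    def __init__(self, allow_overbooking=False, overbooking_threshold=3):
        self._lock = threading.Lock()           # serializes scheduling
        self._pending = 0                       # requests currently in flight
        self._pending_guard = threading.Lock()  # protects the counter
        self.allow_overbooking = allow_overbooking
        self.overbooking_threshold = overbooking_threshold

    def schedule(self, vm, select_host):
        # Count this request as pending before deciding on the lock.
        with self._pending_guard:
            self._pending += 1
            skip_lock = (self.allow_overbooking
                         and self._pending > self.overbooking_threshold)
        try:
            if skip_lock:
                # Overbooking path: run host selection without serializing,
                # so a host may receive more VMs than its pending-resource
                # accounting would normally permit.
                return select_host(vm)
            # Normal path: one scheduling request at a time per cluster.
            with self._lock:
                return select_host(vm)
        finally:
            with self._pending_guard:
                self._pending -= 1
```

With fewer pending requests than the threshold, every call still serializes on the lock, which matches the "if there are fewer than X requests they will still be synchronized" note above.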
Comment 1 Artyom 2013-12-19 07:31:34 EST
Can you please provide information on how I can verify the bug?
Thanks.
Comment 2 Gilad Chaplik 2013-12-19 08:47:20 EST
(In reply to Artyom from comment #1)
> Can you please provide information on how I can verify the bug?
> Thanks.
Sure Artyom :)

Brief explanation:
Adding a cluster optimization to enable parallel VM scheduling
requests for a cluster (skipping the lock) when the number of
pending requests is greater than a configurable threshold.
By default this feature is hidden from the user (unless
config.SchedulerAllowOverBooking is set to true).

Steps to verify:
1) Enable the feature (config.SchedulerAllowOverBooking).
2) Set config.SchedulerOverBookingThreshold to a reasonable number (3?).
3) Create a time-consuming external filter, and restart the external scheduling proxy.
4) Restart the engine so that the config options and the new filter take effect.
5) Add that filter to the cluster policy.
6) Start several VMs (separately, or via REST) - starting several VMs from the UI serializes them, so the scheduling lock would never be skipped.
7) Once more than config.SchedulerOverBookingThreshold VMs are waiting to be scheduled, they will all be scheduled together.

Thanks, 
Gilad.
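The "time consuming external filter" from step 3 could look something like the sketch below. The method name and signature are assumptions about the ovirt-scheduler-proxy plugin interface, which may differ in your version; the only essential point is that the filter sleeps long enough for scheduling requests to pile up past the threshold:

```python
import time

class SlowFilter:
    """Hypothetical external scheduling filter whose only purpose is to be
    slow, so that concurrent scheduling requests queue up past
    SchedulerOverBookingThreshold. Treat do_filter's name and signature as
    assumptions about the ovirt-scheduler-proxy plugin API."""

    # Kept as a class attribute so tests (or impatient users) can shrink it.
    DELAY_SECONDS = 10

    def do_filter(self, hosts_ids, vm_id, args_map):
        # Burn time, then accept every candidate host unchanged.
        time.sleep(self.DELAY_SECONDS)
        return hosts_ids
```

While one request is stuck in this filter, further VM-start requests accumulate as pending; once more than the threshold are waiting, the engine skips the cluster lock and schedules them concurrently.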
Comment 3 Artyom 2013-12-23 05:14:10 EST
Verified on is28
The following message appears in the engine log:
2013-12-22 08:37:33,147 INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (pool-4-thread-46) [4169c47] scheduler: cluster (cl_33) lock is skipped (cluster is allowed to overbook)
After this, I can also see the VMs powering up together:
2013-12-23 12:09:16,303 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-82) VM test_pool-3 4a78f45c-4eca-4fed-94c9-bbb0c22904b6 moved from WaitForLaunch --> PoweringUp
2013-12-23 12:09:16,304 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-82) VM test_pool-1 be31d7b0-b0a0-42a8-9ec2-a84596617141 moved from WaitForLaunch --> PoweringUp
2013-12-23 12:09:16,304 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-82) VM test_pool-2 b3fa0a8b-740f-4b56-8c9f-5c4a9f167a4d moved from WaitForLaunch --> PoweringUp
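A quick way to confirm this from the log excerpts above is to count how many WaitForLaunch --> PoweringUp transitions land in the same second. This is a hypothetical helper, assuming only the engine.log line format shown above:

```python
import re
from collections import Counter

def concurrent_powerups(log_lines):
    """Count VMs that moved WaitForLaunch --> PoweringUp, grouped by
    timestamp truncated to the second, based on the engine.log format
    shown in the comment above."""
    pattern = re.compile(
        r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d+ .*"
        r"moved from WaitForLaunch --> PoweringUp")
    counts = Counter()
    for line in log_lines:
        m = pattern.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Applied to the three lines above, all three transitions share the timestamp 2013-12-23 12:09:16, i.e. the pool VMs were scheduled together rather than one at a time.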
Comment 4 Itamar Heim 2014-01-21 17:20:57 EST
Closing - RHEV 3.3 Released
Comment 5 Itamar Heim 2014-01-21 17:26:27 EST
Closing - RHEV 3.3 Released
