Bug 1145764 - [RFE] Add a "freeze" mode to hypervisors, to avoid starting or migrating new VMs to it.
Summary: [RFE] Add a "freeze" mode to hypervisors, to avoid starting or migrating new VMs to it.
Keywords:
Status: CLOSED DUPLICATE of bug 1438408
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 3.4.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.2.0
Assignee: Scott Herold
QA Contact: Shai Revivo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-09-23 17:06 UTC by Martin Tessun
Modified: 2019-10-10 09:24 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-19 12:26:01 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:
sherold: Triaged+



Description Martin Tessun 2014-09-23 17:06:16 UTC
3. What is the nature and description of the request?  
      Add a mode besides Maintenance that does not allow any new VMs to be started on or migrated to that host, but leaves all already running VMs on that hypervisor.

    4. Why does the customer need this? (List the business requirements here)  
      If the host has problems, planning a change on the customer's side might take some time. If more VMs get migrated to that host during this period, the change might eventually not be possible, as the VMs can no longer be migrated off the host (e.g. due to issues with live migration). For this kind of scenario, the hypervisor needs to be kept in a static state, with the set of VMs running on it unchanged.

    5. How would the customer like to achieve this? (List the functional requirements here)  
      Add an additional mode to the hypervisor besides Maintenance (e.g. Freeze Mode).

    6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.  
      If the host is in Freeze mode, no new VMs should be able to start on that host. Live-migrating VMs to that host should also be forbidden, and fail with an appropriate error message.

    7. Is there already an existing RFE upstream or in Red Hat Bugzilla?  
      Not that I am aware of.

    8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?  
      ASAP as usual.

    9. Is the sales team involved in this request and do they have any additional input?  
      No

    10. List any affected packages or components.  
      RHEV-M, and possibly vdsm.

    11. Would the customer be able to assist in testing this functionality if implemented?  
      Yes.

Comment 11 Doron Fediuck 2015-09-08 09:56:33 UTC
If you wanted to write your own local freeze policy, you should follow
these guidelines:

1. Take a look at http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy
in order to understand how to set up the external scheduler.

2. Once you have it in place, you can write a trivial filter that returns an empty list. Here's an example of a filter: http://www.ovirt.org/Filter_By_VM_Count

3. Now all you need to have in the do_filter function is (see the full plugin sketch below, after step 4):
accepted_host_ids = []
print accepted_host_ids

(Maybe add a log to explain why...)

4. The last step would be to create a new scheduling policy called 'freeze' and use the filter you wrote.
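
For reference, a minimal sketch of such a plugin might look like the following. It is modeled on the Filter_By_VM_Count example above, so the class layout, the do_filter(hosts_ids, vm_id, args_map) signature, the properties_validation attribute, and the Python 2 convention of printing the accepted host list to stdout are assumptions carried over from that example, not something specified by this RFE:

import sys

class freeze_host():
    '''Scheduling filter that accepts no hosts, so no new VM can be
    started on or migrated to a host governed by this policy.'''

    # this filter needs no custom properties
    properties_validation = ''

    def do_filter(self, hosts_ids, vm_id, args_map):
        # log the rejection so the reason is visible in the proxy log
        print >> sys.stderr, 'freeze filter: rejecting hosts %s for VM %s' % (hosts_ids, vm_id)

        # an empty accepted list means no host may receive the VM
        accepted_host_ids = []
        print accepted_host_ids

Once the external scheduler from step 1 is in place and this module is dropped into its plugin directory, the filter should become selectable in the engine, so it can be attached to the new 'freeze' scheduling policy from step 4.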

Comment 16 Moran Goldboim 2017-06-19 12:26:01 UTC

*** This bug has been marked as a duplicate of bug 1438408 ***

Comment 17 Martin Sivák 2017-06-20 08:56:51 UTC
This is not a duplicate. The bug you are referring to is about the whole cluster. But Martin here requests a permanent blacklist of a single host only.

Comment 18 Moran Goldboim 2017-07-02 14:42:56 UTC
(In reply to Martin Sivák from comment #17)
> This is not a duplicate. The bug you are referring to is about the whole
> cluster. But Martin here requests a permanent blacklist of a single host
> only.

Martin, doesn't the "preparing for maintenance" stage prevent starting/migrating new VMs to the host?

Comment 19 Oved Ourfali 2017-07-02 18:52:59 UTC
Preparing for maintenance is used as part of the maintenance flow. It is wrong to use it elsewhere.

Comment 20 Moran Goldboim 2017-07-02 19:09:50 UTC
(In reply to Oved Ourfali from comment #19)
> Preparing for maintenance is used as part of the maintenance flow. It is
> wrong to use it elsewhere.

I wasn't thinking about using it somewhere else. From a use-case perspective here:
- No new VMs get started in the cluster - should be solved with bug 1438408.
- Incoming migrations shouldn't happen to the host while it is in a problematic state (according to comment 1) - in these cases I would suggest putting such a host into maintenance (which is part of our host upgrade flow as well). Blocking incoming migrations during that stage seems to me like a logical thing to do. Does that make sense? Is this the way we support it today?

Comment 21 Oved Ourfali 2017-07-03 04:39:53 UTC
(In reply to Moran Goldboim from comment #20)
> (In reply to Oved Ourfali from comment #19)
> > Preparing for maintenance is used as part of the maintenance flow. It is
> > wrong to use it elsewhere.
> 
> I wasn't thinking about using it somewhere else. From a use-case perspective
> here:
> - No new VMs get started in the cluster - should be solved with bug 1438408.

From what I understand from reading this RFE, this is all that's required.

> - Incoming migrations shouldn't happen to the host while it is in a problematic
> state (according to comment 1) - in these cases I would suggest putting such a
> host into maintenance (which is part of our host upgrade flow as well).
> Blocking incoming migrations during that stage seems to me like a logical
> thing to do. Does that make sense? Is this the way we support it today?

This is something different.
Incoming migrations indeed don't happen on hosts that are in maintenance, but this RFE is about hosts that are Up and should not accept more VMs, which should be solved by a scheduling policy.

