Bug 1569107 - When an instance has pinned CPUs in a given NUMA zone, evacuation will try to schedule the same CPUs/NUMA zone on another compute
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: OSP DFG:Compute
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-18 15:43 UTC by David Hill
Modified: 2023-03-21 18:48 UTC
CC: 11 users

Fixed In Version:
Doc Type:
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-26 23:36:25 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker OSP-5042 - last updated 2022-08-16 09:51:08 UTC

Description David Hill 2018-04-18 15:43:42 UTC
Description of problem:
When an instance has pinned CPUs in a given NUMA zone, evacuation will try to schedule the same CPUs/NUMA zone on another compute node; this fails if the same CPUs/NUMA zone are already taken on the target. If those CPUs/NUMA zone are free, scheduling passes and the VM is actually rebuilt on the new compute. It looks like the NUMATopologyFilter tries to find the same topology on a different host.

This happens when hw:cpu_policy=dedicated is in use. We feel this is a bug.
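For illustration, here is a minimal Python sketch of the kind of per-NUMA-cell fit check the scheduler performs for pinned instances. All names and data shapes here are hypothetical simplifications, not Nova's actual NUMATopologyFilter code; the point is that with hw:cpu_policy=dedicated, pCPUs already pinned on the target host are unavailable, so scheduling fails when the matching cells are occupied:

def cell_can_fit(free_pcpus, needed):
    # A host NUMA cell fits an instance cell if it still has enough
    # free physical CPUs to pin to.
    return len(free_pcpus) >= needed

def host_fits(host_cells, instance_cells):
    # host_cells: {cell_id: set of free pCPU ids} on the target host
    # instance_cells: {cell_id: number of dedicated CPUs needed}
    available = dict(host_cells)
    for _, needed in instance_cells.items():
        fit = next((hid for hid, free in available.items()
                    if cell_can_fit(free, needed)), None)
        if fit is None:
            return False      # no host cell has enough free pCPUs
        del available[fit]    # each host cell is used at most once here
    return True

# Instance needs 4 dedicated CPUs in one NUMA cell.
# Cell 1 on the target host has 6 free pCPUs -> evacuation can be scheduled:
print(host_fits({0: {2, 3}, 1: {8, 9, 10, 11, 12, 13}}, {0: 4}))  # True
# If both target cells are mostly pinned already, scheduling fails:
print(host_fits({0: {2}, 1: {8, 9}}, {0: 4}))  # False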

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 10 Artom Lifshitz 2018-04-26 23:36:25 UTC
Based on our IRC conversations, I'm closing this as NOTABUG for now. It looks like the NUMA topology filter was correct after all in thinking that not enough CPUs were available. If this turns out not to be the case, by all means reopen this bug.

