Bug 1664702
| Field | Value |
| --- | --- |
| Summary | [OSP10] Oversubscription broken for instances with NUMA topologies |
| Product | Red Hat OpenStack |
| Reporter | Stephen Finucane <stephenfin> |
| Component | openstack-nova |
| Assignee | Stephen Finucane <stephenfin> |
| Status | CLOSED ERRATA |
| QA Contact | OSP DFG:Compute <osp-dfg-compute> |
| Severity | high |
| Priority | high |
| Version | 10.0 (Newton) |
| CC | dasmith, eglynn, jhakimra, kchamart, lyarwood, mbooth, mgeary, sbauza, sgordon, vromanso |
| Target Milestone | async |
| Keywords | Triaged, ZStream |
| Target Release | 10.0 (Newton) |
| Hardware | All |
| OS | All |
| Fixed In Version | openstack-nova-14.1.0-43.el7ost |
| Doc Type | Known Issue |
| Doc Text | Previously, due to an update that made memory allocation pagesize-aware, you could not oversubscribe memory for instances with NUMA topologies. With this update, memory oversubscription is disabled for all instances with a NUMA topology, including instances with implicit NUMA topologies, such as those created by hugepages or CPU pinning (illustrated by the flavor example after this table). |
| Clone Of | 1664701 |
| Bug Depends On | 1519540, 1664698, 1664701 |
| Last Closed | 2019-04-30 16:59:16 UTC |
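For readers unfamiliar with how the topologies mentioned in the Doc Text are requested, here is a minimal sketch of the relevant flavor extra specs. The flavor name `test.numa` is taken from the verification steps below; `hw:numa_nodes`, `hw:mem_page_size`, and `hw:cpu_policy` are standard nova flavor properties.

# Explicit NUMA topology: request one guest NUMA node
$ openstack flavor set test.numa --property hw:numa_nodes=1

# Implicit NUMA topology via hugepages: any explicit page-size request implies a NUMA topology
$ openstack flavor set test.numa --property hw:mem_page_size=large

# Implicit NUMA topology via CPU pinning: dedicated CPUs also imply a NUMA topology
$ openstack flavor set test.numa --property hw:cpu_policy=dedicated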
Description by Stephen Finucane, 2019-01-09 13:38:19 UTC
Verification steps:

# 2 compute nodes with ~6 GB of memory each
[stack@undercloud-0 ~]$ for i in 6 8; do ssh heat-admin@192.168.24.$i 'echo $(hostname) $(grep MemTotal /proc/meminfo)'; done
compute-1 MemTotal: 5944884 kB
compute-0 MemTotal: 5944892 kB

# Create a large flavor with hw:numa_nodes set
[stack@undercloud-0 ~]$ openstack flavor create --vcpu 2 --disk 0 --ram 4096 test.numa
[stack@undercloud-0 ~]$ openstack flavor set test.numa --property hw:numa_nodes=1

# Boot 2 instances with this flavor. This works because each instance lands on a separate compute node.
[stack@undercloud-0 ~]$ nova boot --poll --image cirros --flavor test.numa test1 --nic net-id=353d787b-7788-40b0-aaff-a0ab2325b64e
[stack@undercloud-0 ~]$ nova boot --poll --image cirros --flavor test.numa test2 --nic net-id=353d787b-7788-40b0-aaff-a0ab2325b64e

# Negative test: booting a third instance fails with a 'No valid host' error
[stack@undercloud-0 ~]$ nova boot --poll --image cirros --flavor test.numa test3 --nic net-id=353d787b-7788-40b0-aaff-a0ab2325b64e

# Set ram_allocation_ratio=2.0 in nova.conf on the compute node and confirm the value
[heat-admin@compute-1 ~]$ sudo grep ram_allocation_ratio /etc/nova/nova.conf
ram_allocation_ratio=2.0

# Boot a 4th instance; it boots successfully
[stack@undercloud-0 ~]$ nova boot --poll --image cirros --flavor test.numa test4 --nic net-id=353d787b-7788-40b0-aaff-a0ab2325b64e

[stack@undercloud-0 ~]$ nova list
+--------------------------------------+-------+--------+------------+-------------+------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks               |
+--------------------------------------+-------+--------+------------+-------------+------------------------+
| 4baccd63-0a8e-4288-97a0-b2b449d45a39 | test1 | ACTIVE | -          | Running     | private=192.168.100.9  |
| ff0a5dd2-a1b8-4937-a3e9-c8a45f5253dd | test2 | ACTIVE | -          | Running     | private=192.168.100.6  |
| 5bb3597c-a193-479a-9292-6d652b799a66 | test3 | ERROR  | -          | NOSTATE     |                        |
| 81ce205a-1a15-48f6-8055-3c1a39334602 | test4 | ACTIVE | -          | Running     | private=192.168.100.16 |
+--------------------------------------+-------+--------+------------+-------------+------------------------+

# Package version:
openstack-nova-common.noarch 1:14.1.0-44.el7ost @rhos-10.0-signed

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0923
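A back-of-the-envelope check of why test3 fails while test4 boots, assuming the scheduler caps usable RAM at MemTotal multiplied by ram_allocation_ratio and that the effective ratio was 1.0 before the edit (both are assumptions; neither is shown in the transcript):

# Per compute node, from the transcript: MemTotal = 5944884 kB, roughly 5.7 GiB
# RAM limit = MemTotal x ram_allocation_ratio
#   ratio 1.0: limit ~ 5.7 GiB  -> one 4096 MB instance fits, a second does not (test3: No valid host)
#   ratio 2.0: limit ~ 11.3 GiB -> a second 4096 MB instance fits on compute-1 (test4 goes ACTIVE)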
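One step the transcript does not show: nova-compute reads ram_allocation_ratio at startup, so after editing /etc/nova/nova.conf the service normally has to be restarted before the scheduler sees the new ratio. A sketch, using the standard service name on a RHEL 7 based OSP 10 compute node:

# Restart nova-compute so the edited ram_allocation_ratio takes effect
[heat-admin@compute-1 ~]$ sudo systemctl restart openstack-nova-compute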