+++ This bug was initially created as a clone of Bug #1332916 +++

Description of problem:
'hw:cpu_thread_policy=prefer' allocates vCPUs in pairs of sibling threads, as expected. With an odd number of vCPUs it allocates pairs plus one single thread, and that single thread should not be isolated. With 20 available threads it should therefore be possible to allocate 4 VMs of 5 vCPUs each, but booting the third VM fails with an error.

Version-Release number of selected component (if applicable):
Mitaka

How reproducible:
Highly

Steps to Reproduce:
See https://bugs.launchpad.net/nova/+bug/1578155

Actual results:
See https://bugs.launchpad.net/nova/+bug/1578155

Expected results:
See https://bugs.launchpad.net/nova/+bug/1578155

Additional info:
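For reference, such a flavor can be created along these lines (a sketch only; the name, ID and sizes follow the m1.pinned flavor shown in the verification below, which uses 3 vCPUs rather than the 5 from the Launchpad report):

# nova flavor-create m1.pinned 101 512 5 3
# nova flavor-key 101 set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer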
Verified as follows - with 10 available threads, was able to allocate 4 VMs of 3 vCPUs each.

******** VERSION ********

# yum list installed | grep openstack-nova
openstack-nova-api.noarch           1:13.1.2-9.el7ost   @rhelosp-9.0-puddle
openstack-nova-cert.noarch          1:13.1.2-9.el7ost   @rhelosp-9.0-puddle
openstack-nova-common.noarch        1:13.1.2-9.el7ost   @rhelosp-9.0-puddle
openstack-nova-compute.noarch       1:13.1.2-9.el7ost   @rhelosp-9.0-puddle
openstack-nova-conductor.noarch     1:13.1.2-9.el7ost   @rhelosp-9.0-puddle
openstack-nova-console.noarch       1:13.1.2-9.el7ost   @rhelosp-9.0-puddle
openstack-nova-novncproxy.noarch    1:13.1.2-9.el7ost   @rhelosp-9.0-puddle
openstack-nova-scheduler.noarch     1:13.1.2-9.el7ost   @rhelosp-9.0-puddle

********* LOGS *********

# nova flavor-show 101
+----------------------------+------------------------------------------------------------------+
| Property                   | Value                                                            |
+----------------------------+------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                            |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                |
| disk                       | 5                                                                |
| extra_specs                | {"hw:cpu_policy": "dedicated", "hw:cpu_thread_policy": "prefer"} |
| id                         | 101                                                              |
| name                       | m1.pinned                                                        |
| os-flavor-access:is_public | True                                                             |
| ram                        | 512                                                              |
| rxtx_factor                | 1.0                                                              |
| swap                       |                                                                  |
| vcpus                      | 3                                                                |
+----------------------------+------------------------------------------------------------------+

# numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11
node 0 size: 65478 MB
node 0 free: 52862 MB
node distances:
node   0
  0:  10

# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| 387f0a3a-391b-49d1-863d-0bd7c932d176 | vm1  | ACTIVE | -          | Running     | public=172.24.4.228 |
| 05366de9-9d1b-47b0-b62f-0d6c96ca8528 | vm2  | ACTIVE | -          | Running     | public=172.24.4.229 |
| 5b01621d-6134-4371-8dc4-c37c4bd0b59b | vm3  | ACTIVE | -          | Running     | public=172.24.4.230 |
| 5b0fc1f4-3275-443d-a10e-4ef79e247614 | vm4  | ACTIVE | -          | Running     | public=172.24.4.231 |
+--------------------------------------+------+--------+------------+-------------+---------------------+

# nova show vm1 | grep flavor
| flavor | m1.pinned (101) |

# nova show vm2 | grep flavor
| flavor | m1.pinned (101) |

# nova show vm3 | grep flavor
| flavor | m1.pinned (101) |

# nova show vm4 | grep flavor
| flavor | m1.pinned (101) |

# virsh vcpupin 3
VCPU: CPU Affinity
----------------------------------
   0: 10
   1: 4
   2: 11

# virsh vcpupin 4
VCPU: CPU Affinity
----------------------------------
   0: 9
   1: 3
   2: 0

# virsh vcpupin 5
VCPU: CPU Affinity
----------------------------------
   0: 1
   1: 7
   2: 8

# virsh vcpupin 6
VCPU: CPU Affinity
----------------------------------
   0: 5
   1: 6
   2: 2

# lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      1    1:1:1:0       yes
2   0    0      2    2:2:2:0       yes
3   0    0      3    3:3:3:0       yes
4   0    0      4    4:4:4:0       yes
5   0    0      5    5:5:5:0       yes
6   0    0      0    0:0:0:0       yes
7   0    0      1    1:1:1:0       yes
8   0    0      2    2:2:2:0       yes
9   0    0      3    3:3:3:0       yes
10  0    0      4    4:4:4:0       yes
11  0    0      5    5:5:5:0       yes
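Reading the pinning against the lscpu topology above: domain 3 (vm1), for example, got CPUs 10 and 4, which are sibling threads on core 4, plus CPU 11 on its own; since 'prefer' is only best-effort, that lone thread is not isolated and the remaining instances are simply packed onto whatever free threads are left instead of failing to boot. A sibling pair can be cross-checked on the host along these lines (illustrative command; output not captured during verification):

# cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list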
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-2992.html