Bug 1333457

Summary: 'hw:cpu_thread_policy=prefer' misbehaviour
Product: Red Hat OpenStack
Component: openstack-nova
Version: 9.0 (Mitaka)
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Stephen Gordon <sgordon>
Assignee: Vladik Romanovsky <vromanso>
QA Contact: Prasanth Anbalagan <panbalag>
CC: berrange, dasmith, eglynn, kchamart, mburns, nlevinki, panbalag, rnoriega, sbauza, sferdjao, sgordon, srevivo, vromanso
Target Milestone: async
Target Release: 9.0 (Mitaka)
Keywords: ZStream
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-nova-13.1.1-8.el7ost
Doc Type: Bug Fix
Type: Bug
Clone Of: 1332916
Last Closed: 2016-12-21 16:36:24 UTC
Bug Depends On: 1332916
Bug Blocks: 1277736

Description Stephen Gordon 2016-05-05 14:50:10 UTC
+++ This bug was initially created as a clone of Bug #1332916 +++

Description of problem:

With 'hw:cpu_thread_policy=prefer', vCPUs should be allocated in pairs of sibling threads where possible. For a flavor with an odd number of vCPUs, the pairs land on sibling threads and the leftover single vCPU should not isolate its sibling. With 20 available threads it should therefore be possible to boot 4 VMs of 5 vCPUs each; in practice, booting the third VM fails with an error.
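
A rough reproduction sketch of the scenario above (the flavor name, image and network are placeholders; the extra spec keys and the 5-vCPU/20-thread sizing are taken from the description):

# Sketch only: 5-vCPU flavor with dedicated pinning and the "prefer"
# thread policy, booted four times on a host exposing 20 threads.
nova flavor-create m1.prefer.test auto 2048 10 5
nova flavor-key m1.prefer.test set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
for i in 1 2 3 4; do
    nova boot --flavor m1.prefer.test --image <image> --nic net-id=<net-id> "vm$i"
done
# Expected: all four instances become ACTIVE, each pinned to two sibling
# pairs plus one leftover thread whose sibling stays usable.
# Observed: the third boot fails to schedule.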

Version-Release number of selected component (if applicable):

Mitaka

How reproducible:

Highly

Steps to Reproduce:

Check https://bugs.launchpad.net/nova/+bug/1578155

Actual results:


Check https://bugs.launchpad.net/nova/+bug/1578155

Expected results:


Check https://bugs.launchpad.net/nova/+bug/1578155

Additional info:

Comment 13 Prasanth Anbalagan 2016-12-15 19:13:01 UTC
Verified as follows: with 10 available threads, 4 VMs of 3 vCPUs each were allocated successfully.
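
For context on the sizing (a quick check, not part of the original verification): the compute host below exposes 12 hardware threads (6 cores with 2 threads each, per the lscpu output further down), so four 3-vCPU guests consume every thread, which is only possible when the odd vCPU of each VM shares a core instead of isolating its sibling.

# Illustrative topology check on the compute host:
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket)'
# CPU(s):                12
# Thread(s) per core:    2
# Core(s) per socket:    6
# 4 VMs x 3 vCPUs = 12 vCPUs, i.e. all 12 threads.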

********
VERSION
********

# yum list installed | grep openstack-nova
openstack-nova-api.noarch            1:13.1.2-9.el7ost       @rhelosp-9.0-puddle
openstack-nova-cert.noarch           1:13.1.2-9.el7ost       @rhelosp-9.0-puddle
openstack-nova-common.noarch         1:13.1.2-9.el7ost       @rhelosp-9.0-puddle
openstack-nova-compute.noarch        1:13.1.2-9.el7ost       @rhelosp-9.0-puddle
openstack-nova-conductor.noarch      1:13.1.2-9.el7ost       @rhelosp-9.0-puddle
openstack-nova-console.noarch        1:13.1.2-9.el7ost       @rhelosp-9.0-puddle
openstack-nova-novncproxy.noarch     1:13.1.2-9.el7ost       @rhelosp-9.0-puddle
openstack-nova-scheduler.noarch      1:13.1.2-9.el7ost       @rhelosp-9.0-puddle

*********
LOGS
*********
# nova flavor-show 101
+----------------------------+------------------------------------------------------------------+
| Property                   | Value                                                            |
+----------------------------+------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                            |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                |
| disk                       | 5                                                                |
| extra_specs                | {"hw:cpu_policy": "dedicated", "hw:cpu_thread_policy": "prefer"} |
| id                         | 101                                                              |
| name                       | m1.pinned                                                        |
| os-flavor-access:is_public | True                                                             |
| ram                        | 512                                                              |
| rxtx_factor                | 1.0                                                              |
| swap                       |                                                                  |
| vcpus                      | 3                                                                |
+----------------------------+------------------------------------------------------------------+

# numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11
node 0 size: 65478 MB
node 0 free: 52862 MB
node distances:
node   0 
  0:  10 

# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| 387f0a3a-391b-49d1-863d-0bd7c932d176 | vm1  | ACTIVE | -          | Running     | public=172.24.4.228 |
| 05366de9-9d1b-47b0-b62f-0d6c96ca8528 | vm2  | ACTIVE | -          | Running     | public=172.24.4.229 |
| 5b01621d-6134-4371-8dc4-c37c4bd0b59b | vm3  | ACTIVE | -          | Running     | public=172.24.4.230 |
| 5b0fc1f4-3275-443d-a10e-4ef79e247614 | vm4  | ACTIVE | -          | Running     | public=172.24.4.231 |
+--------------------------------------+------+--------+------------+-------------+---------------------+
# nova show vm1 | grep flavor
| flavor                               | m1.pinned (101)                                          |
# nova show vm2 | grep flavor
| flavor                               | m1.pinned (101)                                          |
# nova show vm3 | grep flavor
| flavor                               | m1.pinned (101)                                          |
# nova show vm4 | grep flavor
| flavor                               | m1.pinned (101)                                          |
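
The virsh calls below operate on libvirt domain IDs rather than nova instance names; a sketch of how that mapping can be looked up (the instance_name value shown is an example, not taken from these logs):

# Map a nova instance to its libvirt domain:
nova show vm1 | grep OS-EXT-SRV-ATTR:instance_name   # e.g. instance-0000000a
virsh list --all                                     # lists domain IDs and names
virsh domid instance-0000000a                        # numeric ID used by vcpupin below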


# virsh vcpupin 3
VCPU: CPU Affinity
----------------------------------
   0: 10
   1: 4
   2: 11

# virsh vcpupin 4
VCPU: CPU Affinity
----------------------------------
   0: 9
   1: 3
   2: 0

# virsh vcpupin 5
VCPU: CPU Affinity
----------------------------------
   0: 1
   1: 7
   2: 8

# virsh vcpupin 6
VCPU: CPU Affinity
----------------------------------
   0: 5
   1: 6
   2: 2

# lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      1    1:1:1:0       yes
2   0    0      2    2:2:2:0       yes
3   0    0      3    3:3:3:0       yes
4   0    0      4    4:4:4:0       yes
5   0    0      5    5:5:5:0       yes
6   0    0      0    0:0:0:0       yes
7   0    0      1    1:1:1:0       yes
8   0    0      2    2:2:2:0       yes
9   0    0      3    3:3:3:0       yes
10  0    0      4    4:4:4:0       yes
11  0    0      5    5:5:5:0       yes
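
To confirm that the pinning above pairs sibling threads, the thread siblings of any host CPU can also be read from sysfs (a check not shown in the original logs):

# Example: vm3's vCPUs 0 and 1 are pinned to host CPUs 10 and 4, which the
# lscpu output shows share core 4; sysfs reports the same sibling pair.
cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list   # expected: 4,10
cat /sys/devices/system/cpu/cpu10/topology/thread_siblings_list  # expected: 4,10
# vm3's remaining vCPU sits on CPU 11, whose sibling (CPU 5) is not isolated
# and is in fact used by another guest (vcpupin 6 above), as expected with
# the "prefer" policy.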

Comment 15 errata-xmlrpc 2016-12-21 16:36:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2992.html