Bug 1480062 - The virsh command: vcpupin displays the wrong information
Status: ASSIGNED
Type: Bug
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.4
Hardware: ppc64le Linux
Priority: medium   Severity: medium
Target Milestone: rc
Target Release: 7.6
Assigned To: Andrea Bolognani
QA Contact: Virtualization Bugs
Blocks: 1513404 1528344

Reported: 2017-08-09 23:56 EDT by junli
Modified: 2017-12-21 10:34 EST
CC: 10 users

External Trackers:
  IBM Linux Technology Center 158286 (last updated 2017-09-05 09:27 EDT)

Description junli 2017-08-09 23:56:17 EDT
Description of problem:
The virsh vcpupin command displays the wrong CPU affinity information.

Version-Release number of selected component (if applicable):
# rpm -q libvirt qemu-kvm-rhev
libvirt-3.2.0-14.virtcov.el7_4.2.ppc64le
qemu-kvm-rhev-2.9.0-16.el7_4.3.ppc64le

# uname -a
Linux ibm-p8-rhevm-14.rhts.eng.bos.redhat.com 3.10.0-693.el7.ppc64le #1 SMP Thu Jul 6 19:59:44 EDT 2017 ppc64le ppc64le ppc64le GNU/Linux

How reproducible:
100%

Steps to Reproduce:
1. Prepare a domain XML:

 <domain type='kvm' id='17'>
   <name>avocado-vt-vm1</name>
   <uuid>aafc3e8a-ce65-44c5-86ab-1d39bab26887</uuid>
   <memory unit='KiB'>1048576</memory>
   <currentMemory unit='KiB'>1048576</currentMemory>
   <vcpu placement='static' current='4'>8</vcpu>
   <resource>
     <partition>/machine</partition>
   </resource>
   <os>
     <type arch='ppc64le' machine='pseries-rhel7.4.0'>hvm</type>
     <boot dev='hd'/>
   </os>
   <clock offset='utc'/>
   <devices>
     <emulator>/usr/libexec/qemu-kvm</emulator>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2'/>
       <source
 file='/var/lib/avocado/data/avocado-vt/images/jeos-25-64.qcow2'/>
       <backingStore/>
       <target dev='vda' bus='virtio'/>
       <alias name='virtio-disk0'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
 function='0x0'/>
     </disk>
     <interface type='bridge'>
       <mac address='52:54:00:1f:3b:f9'/>
       <source bridge='virbr0'/>
       <target dev='vnet0'/>
       <model type='virtio'/>
       <alias name='net0'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
 function='0x0'/>
     </interface>
     <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
       <listen type='address' address='127.0.0.1'/>
     </graphics>
   </devices>
   <seclabel type='dynamic' model='selinux' relabel='yes'>
     <label>system_u:system_r:svirt_t:s0:c120,c334</label>
     <imagelabel>system_u:object_r:svirt_image_t:s0:c120,c334</imagelabel>
   </seclabel>
   <seclabel type='dynamic' model='dac' relabel='yes'>
     <label>+107:+107</label>
     <imagelabel>+107:+107</imagelabel>
   </seclabel>
 </domain>

2. Define this XML.
3. Start the guest.
4. Run "virsh vcpupin avocado-vt-vm1" (the equivalent commands are sketched below).
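
For reference, steps 2-4 map to these commands (a sketch; the XML filename
avocado-vt-vm1.xml is assumed here, it is not given in the report):

  # virsh define avocado-vt-vm1.xml
  # virsh start avocado-vt-vm1
  # virsh vcpupin avocado-vt-vm1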

 Actual results:

 VCPU: CPU Affinity
 ----------------------------------
    0: 0-79
    1: 0-79
    2: 0-79
    3: 0-79
    4: 0-79
    5: 0-79
    6: 0-79
    7: 0-79

Expected results:

 VCPU: CPU Affinity
 ----------------------------------
    0: 0,8,16,24,32,40,48,56,64,72
    1: 0,8,16,24,32,40,48,56,64,72
    2: 0,8,16,24,32,40,48,56,64,72
    3: 0,8,16,24,32,40,48,56,64,72
    4: 0,8,16,24,32,40,48,56,64,72
    5: 0,8,16,24,32,40,48,56,64,72
    6: 0,8,16,24,32,40,48,56,64,72
    7: 0,8,16,24,32,40,48,56,64,72

 Additional info:

# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                80
On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72
Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79
Thread(s) per core:    1
Core(s) per socket:    5
Socket(s):             2
NUMA node(s):          2
Model:                 2.1 (pvr 004b 0201)
Model name:            POWER8E (raw), altivec supported
CPU max MHz:           3690.0000
CPU min MHz:           2061.0000
Hypervisor vendor:     (null)
Virtualization type:   full
L1d cache:             64K
L1i cache:             32K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0,8,16,24,32
NUMA node1 CPU(s):     40,48,56,64,72
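
The online/offline split above can also be read directly from sysfs (generic
commands, not output captured from this host):

# cat /sys/devices/system/cpu/online     # expected to match the lscpu online list
# cat /sys/devices/system/cpu/offline    # the disabled secondary threads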
Comment 2 David Gibson 2017-08-10 20:38:01 EDT
This is on POWER8, IIUC. On POWER8 the secondary threads will be disabled on the host, so 0-79 is actually equivalent to 0,8,16,24,32,40,48,56,64,72.
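
One way to see that equivalence from a shell (a sketch using taskset from util-linux, not part of the original report):

  # taskset -pc 0-79 $$    # request all 80 CPUs for the current shell
  # taskset -pc $$         # the reported affinity list is expected to collapse
                           # to the online CPUs: 0,8,16,24,32,40,48,56,64,72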
Comment 3 junli 2017-08-10 20:59:04 EDT
(In reply to David Gibson from comment #2)
> This is on POWER8, IIUC. On POWER8 the secondary threads will be disabled on
> the host, so 0-79 is actually equivalent to 0,8,16,24,32,40,48,56,64,72.

Yes, but virsh vcpuinfo reports 0,8,16,24,32,40,48,56,64,72, as follows:

CPU Affinity:
y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------

Both of them report CPU affinity, so shouldn't they report the same result?
Comment 4 David Gibson 2017-08-10 23:45:25 EDT
Well, in a sense they are the same result, just formatted differently.  I don't know the details about the specific tools, but it's possible that one is simply listing the bound threads, whereas the other is simplifying that result to "all online threads" since the host online threads are not contiguous.
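
For what it's worth, the two displays can be checked to denote the same effective set (a sketch using standard coreutils, not from the original report):

  # seq 0 79 | sort > requested.txt                # expand "0-79"
  # echo 0,8,16,24,32,40,48,56,64,72 | tr ',' '\n' | sort > online.txt
  # comm -12 requested.txt online.txt | sort -n    # intersection == the online set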
Comment 5 junli 2017-08-11 01:27:10 EDT
(In reply to David Gibson from comment #4)
> Well, in a sense they are the same result, just formatted differently.  I
> don't know the details about the specific tools, but it's possible that one
> is simply listing the bound threads, whereas the other is simplifying that
> result to "all online threads" since the host online threads are not
> contiguous.

I know what you mean, but we actually can't use vcpupin to pin a vCPU to a disabled CPU. So why do we display it as if it could be used?
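
(Indeed, trying to pin to one of the disabled CPUs is rejected outright; a sketch, with the error modeled on the transcript in comment 6 below:)

  # virsh vcpupin avocado-vt-vm1 --vcpu 0 --cpulist 1
  error: Invalid value '1' for 'cpuset.cpus': Invalid argument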
Comment 6 Andrea Bolognani 2017-08-14 05:42:47 EDT
(In reply to junli from comment #5)
> (In reply to David Gibson from comment #4)
> > Well, in a sense they are the same result, just formatted differently.  I
> > don't know the details about the specific tools, but it's possible that one
> > is simply listing the bound threads, whereas the other is simplifying that
> > result to "all online threads" since the host online threads are not
> > contiguous.
> 
> I know what you mean, but we actually can't use vcpupin to pin a vCPU to a
> disabled CPU. So why do we display it as if it could be used?

I agree, the current behavior is fairly confusing:

  # virsh vcpupin guest --vcpu 0
  VCPU: CPU Affinity
  ----------------------------------
     0: 0-95

  # virsh vcpupin guest --vcpu 0 --cpulist 0-95
  error: Invalid value '0-95' for 'cpuset.cpus': Invalid argument

  # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d6\\x2dguest.scope/vcpu0/cpuset.cpus
  0,8,16,24,32,40,48,56,64,72,80,88

  # virsh vcpupin guest --vcpu 0 --cpulist 0,8,16,24,32,40,48,56,64,72,80,88

  # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d6\\x2dguest.scope/vcpu0/cpuset.cpus
  0,8,16,24,32,40,48,56,64,72,80,88

  # virsh vcpupin guest --vcpu 0
  VCPU: CPU Affinity
  ----------------------------------
     0: 0,8,16,24,32,40,48,56,64,72,80,88
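
(To inspect the effective mask for every vCPU at once, the same cgroup files can be globbed; a sketch assuming the layout shown above:)

  # grep . /sys/fs/cgroup/cpuset/machine.slice/machine-qemu*/vcpu*/cpuset.cpus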
Comment 9 David Gibson 2017-12-20 21:20:41 EST
Basically cosmetic, so deferring.
