Description of problem:
Boot the VM and check the virsh vcpupin output: it shows affinity with offline host CPUs as well.

Version-Release number of selected component (if applicable):
# virsh version
Compiled against library: libvirt 3.2.0
Using library: libvirt 3.2.0
Using API: QEMU 3.2.0
Running hypervisor: QEMU 2.8.50

libvirt compiled against commit a6d681485ff85e27859583a5c20e1630c5cf8352
Author: John Ferlan <jferlan>
Date:   Tue Mar 7 16:10:38 2017 -0500

qemu compiled against commit ebedf0f9cd46b617df331eecc857c379d574ac62
Author: Marek Vasut <marex>
Date:   Fri Mar 17 22:06:27 2017 +0100

How reproducible:
Always

Steps to Reproduce:
1. # virsh start vm1
Domain vm1 started

2. # lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                160
On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152
Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79,81-87,89-95,97-103,105-111,113-119,121-127,129-135,137-143,145-151,153-159
Thread(s) per core:    1
Core(s) per socket:    5
Socket(s):             4
NUMA node(s):          4
Model:                 2.1 (pvr 004b 0201)
Model name:            POWER8E (raw), altivec supported
L1d cache:             64K
L1i cache:             32K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0,8,16,24,32
NUMA node1 CPU(s):     40,48,56,64,72
NUMA node16 CPU(s):    80,88,96,104,112
NUMA node17 CPU(s):    120,128,136,144,152

3. # virsh vcpuinfo vm1
VCPU:           0
CPU:            48
State:          running
CPU time:       27.3s
CPU Affinity:   y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------   [OK]

# virsh vcpupin vm1
VCPU: CPU Affinity
----------------------------------
   0: 0-159   [NOK]

Actual results:
0: 0-159

Expected results:
0: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

Additional info:
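For completeness, the same discrepancy can be observed programmatically. Below is a minimal sketch using the libvirt Python bindings; the connection URI qemu:///system and the domain name vm1 are assumptions taken from the steps above.

import libvirt

conn = libvirt.open('qemu:///system')   # assumed URI
dom = conn.lookupByName('vm1')          # domain name from the report

# Host view: getCPUMap() returns (total CPUs, per-CPU online booleans,
# number of online CPUs).
total, host_map, n_online = conn.getCPUMap()
online = {i for i, up in enumerate(host_map) if up}
print('host online CPUs:', sorted(online))

# Guest view: vcpuPinInfo() returns one boolean-per-host-CPU tuple per vCPU.
for vcpu, pin in enumerate(dom.vcpuPinInfo()):
    pinned = {i for i, ok in enumerate(pin) if ok}
    # On the affected versions, 'pinned' also contains offline CPUs,
    # so this difference is non-empty.
    print('vCPU %d offline CPUs in affinity:' % vcpu, sorted(pinned - online))

conn.close()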
Since you did not configure any specific vcpu pinning, the vcpu threads are allowed to run on all the host CPUs from libvirt's point of view.

Returning the value you expected would indicate that there's a pinning configured, which is not true.
(In reply to Peter Krempa from comment #1)
> Since you did not configure any specific vcpu pinning, the vcpu threads are
> allowed to run on all the host CPUs from libvirt's point of view.
>
> Returning the value you expected would indicate that there's a pinning
> configured, which is not true.

I partially agree, but a vCPU thread cannot run on an offline CPU, so the initial value needs to reflect that. As it stands, vcpupin shows an invalid initial range to the user, whereas the vcpuinfo API output is as expected:

# virsh vcpupin vm1 0 1
error: Invalid value '1' for 'cpuset.cpus': Invalid argument   [OK]

But the initial affinity range of "0-159" contradicts exactly that.
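Since the kernel rejects offline CPUs at the cgroup level anyway, a management layer can pre-validate a pin request against the host map. A sketch under the same assumptions as above; check_and_pin() is a hypothetical helper, not a libvirt API:

import libvirt

def check_and_pin(dom, vcpu, cpus):
    # Hypothetical helper: refuse offline/absent CPUs up front instead of
    # letting the pin fail later with EINVAL from cpuset.cpus.
    total, host_map, _ = dom.connect().getCPUMap()
    bad = sorted(c for c in cpus if c >= total or not host_map[c])
    if bad:
        raise ValueError('CPUs %s are offline or absent on this host' % bad)
    # pinVcpu() takes a boolean-per-host-CPU tuple.
    dom.pinVcpu(vcpu, tuple(i in cpus for i in range(total)))

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm1')
check_and_pin(dom, 0, {8, 16})  # succeeds: both CPUs are online here
check_and_pin(dom, 0, {1})      # raises ValueError instead of EINVAL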
This seems to me to be yet another design-decision issue. Of course it can be 'fixed', but do we want it fixed?

Again, I am just trying to contribute some code.

Dan
(In reply to srwx4096 from comment #3)
> This seems to me to be yet another design-decision issue. Of course it can
> be 'fixed', but do we want it fixed?
>
> Again, I am just trying to contribute some code.
>
> Dan

Hi Dan,

I think this should be fixed because, as Viktor pointed out, CPU hotplug is very common on Linux running on z Systems and is also widely used by customers.

Reference: https://www.spinics.net/linux/fedora/libvir/msg140443.html

Hence, if a host CPU is offline but virsh vcpupin/emulatorpin shows it as available for pinning, this misleads users and other layers, which end up attempting an invalid pinning and failing.

-Nitesh
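To illustrate the hotplug point: the set a thread can actually be scheduled on is the intersection of the reported affinity and the currently online CPUs, which a consumer can compute itself as a workaround. A sketch, same URI and domain-name assumptions as above:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm1')

total, host_map, _ = conn.getCPUMap()
# emulatorPinInfo() returns a single boolean-per-host-CPU tuple for the
# emulator thread; mask it with the online map to get the usable set.
emu = dom.emulatorPinInfo()
usable = sorted(i for i in range(min(total, len(emu)))
                if emu[i] and host_map[i])
print('emulator thread can actually run on:', usable)

conn.close()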
Any update on this?
------- Comment From scheloh.com 2019-07-30 15:08 EDT-------
Patch posted upstream: https://www.redhat.com/archives/libvir-list/2019-July/msg00747.html
------- Comment From lagarcia.com 2020-04-22 10:42 EDT-------
It seems this one never made it upstream. Moving it back to the team backlog.
The best way to get a forgotten patch noticed is to rebase it onto current upstream, then repost it to the mailing list with --subject-prefix="libvirt PATCH v2".

Beyond that, libvirt is deprecating the use of Bugzilla for upstream bugs. In the future, all upstream bug tracking will be done using GitLab's issue tracker:

https://www.redhat.com/archives/libvir-list/2020-April/msg00782.html

Dan has been slowly going through the existing bugs in Bugzilla, closing them or creating new records in the GitLab tracker as appropriate.
I've just merged the patches upstream:

2020c6af8a conf, qemu: consider available CPUs in vcpupin/emulatorpin output
42036650c6 virhostcpu.c: introduce virHostCPUGetAvailableCPUsBitmap()
bc07020511 virhostcpu.c: refactor virHostCPUParseCountLinux()
9d31433483 virsh-domain.c: modernize cmdVcpuinfo()
a3a628f54c virsh-domain.c: modernize virshVcpuinfoInactive()
de6a40f01f virhostcpu.c: use g_autoptr in virHostCPUGetMap()
42bf2a7573 qemu_driver.c: use g_autoptr in qemuDomainGetEmulatorPinInfo()

v6.5.0-69-g2020c6af8a
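Conceptually (this is a sketch of the idea, not the actual C code in the commits above), the first commit makes the reported default affinity the intersection of the pin bitmap with the host's available-CPU bitmap:

def effective_affinity(pin_map, online_map):
    # Intersect a pin bitmap with the online-CPU bitmap. Both arguments
    # are sequences of booleans indexed by host CPU number.
    return [p and o for p, o in zip(pin_map, online_map)]

# With the ppc64le host from the original report (160 CPUs, every 8th
# online), an unpinned vCPU's all-ones default collapses to 0,8,...,152.
online = [i % 8 == 0 for i in range(160)]
default_pin = [True] * 160
cpus = [i for i, ok in enumerate(effective_affinity(default_pin, online)) if ok]
assert cpus == list(range(0, 160, 8))
print(','.join(map(str, cpus)))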
------- Comment From lagarcia.com 2020-08-05 07:08 EDT-------
Fedora Rawhide now has libvirt 6.6, which includes these patches. Could you please verify and close this bug if everything is OK, Satheesh?
------- Comment From satheera.com 2020-08-05 08:05 EDT-------
(In reply to comment #15)
> Fedora Rawhide now has libvirt 6.6, which includes these patches. Could you
> please verify and close this bug if everything is OK, Satheesh?

Sure, I will have it tested.

Regards,
-Satheesh
------- Comment From satheera.com 2020-08-07 08:11 EDT-------
Tested with the Fedora Rawhide libvirt build and confirmed that the issue is fixed; this bug can be closed.

# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0,8,16,24
Off-line CPU(s) list:  1-7,9-15,17-23,25-31
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Model:                 2.3 (pvr 004e 1203)
Model name:            POWER9 (architected), altivec supported
...

# virsh start f31
Domain f31 started

# virsh vcpupin f31
 VCPU   CPU Affinity
----------------------
 0      0,8,16,24
 1      0,8,16,24
 2      0,8,16,24
 3      0,8,16,24
 4      0,8,16,24
 5      0,8,16,24
 6      0,8,16,24
 7      0,8,16,24

After bringing all host CPU threads online:

# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    8
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Model:                 2.3 (pvr 004e 1203)
Model name:            POWER9 (architected), altivec supported
...

# virsh vcpupin f31
 VCPU   CPU Affinity
----------------------
 0      0-31
 1      0-31
 2      0-31
 3      0-31
 4      0-31
 5      0-31
 6      0-31
 7      0-31

# rpm -qa | grep libvirt
libvirt-bash-completion-6.6.0-1.fc33.ppc64le
libvirt-libs-6.6.0-1.fc33.ppc64le
libvirt-daemon-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-core-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-network-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-nwfilter-6.6.0-1.fc33.ppc64le
libvirt-daemon-config-nwfilter-6.6.0-1.fc33.ppc64le
libvirt-daemon-config-network-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-lxc-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-disk-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-gluster-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-iscsi-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-iscsi-direct-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-mpath-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-scsi-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-sheepdog-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-zfs-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-nodedev-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-qemu-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-secret-6.6.0-1.fc33.ppc64le
python3-libvirt-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-logical-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-interface-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-rbd-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-6.6.0-1.fc33.ppc64le
libvirt-client-6.6.0-1.fc33.ppc64le
libvirt-6.6.0-1.fc33.ppc64le
libvirt-daemon-kvm-6.6.0-1.fc33.ppc64le
libvirt-admin-6.6.0-1.fc33.ppc64le
libvirt-daemon-qemu-6.6.0-1.fc33.ppc64le

Regards,
-Satheesh
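For future regressions, the manual check above could be automated with a small script; a sketch, assuming the same f31 domain and qemu:///system URI:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('f31')  # domain name from the test above

total, host_map, _ = conn.getCPUMap()
online = {i for i, up in enumerate(host_map) if up}

# Without explicit pinning, every vCPU's reported affinity should now
# equal the host's online CPU set exactly.
for vcpu, pin in enumerate(dom.vcpuPinInfo()):
    pinned = {i for i, ok in enumerate(pin) if ok}
    assert pinned == online, 'vCPU %d: %s != %s' % (vcpu, pinned, online)
print('OK: vcpupin matches the online CPU list')

conn.close()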