Bug 1716908 - 'cannot set CPU affinity' error when starting guest
Summary: 'cannot set CPU affinity' error when starting guest
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: libvirt
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.1
Assignee: Andrea Bolognani
QA Contact: jiyan
URL:
Whiteboard:
Duplicates: 1724408
Depends On: 1716943
Blocks:
 
Reported: 2019-06-04 11:36 UTC by Andrea Bolognani
Modified: 2020-11-14 13:20 UTC
CC List: 4 users

Fixed In Version: libvirt-4.5.0-25.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-05 20:50:11 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHSA-2019:3345 (last updated 2019-11-05 20:50:50 UTC)

Description Andrea Bolognani 2019-06-04 11:36:34 UTC
This bug was initially created as a copy of Bug #1703661

I am copying this bug because: 

  RHEL 8.1 shouldn't contain bugs that have been fixed in RHEL 7.7.

Description of problem:
When numatune placement is set to auto, the guest cannot be started.

Version-Release number of selected component (if applicable):
# rpm -q libvirt qemu-kvm-rhev kernel
libvirt-4.5.0-15.virtcov.el7.ppc64le
qemu-kvm-rhev-2.12.0-27.el7.ppc64le
kernel-3.10.0-1034.el7.ppc64le

How reproducible:
100%

Steps to Reproduce:
1. Define a guest with (a fuller XML sketch follows these steps):
<numatune><memory mode="strict" placement="auto" /></numatune>
2. Try to start it
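
A minimal sketch of where this element sits in the domain XML (the domain name and the elided sections are placeholders for illustration, not taken from this report):

<domain type='kvm'>
  <name>test1</name>
  ...
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  ...
</domain>

With placement='auto', libvirt asks numad for an advisory nodeset when the guest starts, instead of using a fixed nodeset= attribute.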

Actual results:
# virsh start test1
error: Failed to start domain test1
error: invalid argument: Failed to parse bitmap ''

Expected results:
Domain test1 started

Additional info:
1. It looks like this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1233023
2. When I use the compose RHEL-7.7-20190403.0 (libvirt-4.5.0-11), the test passes. After updating libvirt to the latest build (4.5.0-15), the problem reproduces; however, after downgrading back to 4.5.0-11, it can still be reproduced.
3. When I use the compose RHEL-7.7-20190424.0 (libvirt-4.5.0-14), it can also be reproduced.

+++ Comment 19 of the original bug (Andrea Bolognani):

(In reply to Junxiang Li from comment #0)
> Steps to Reproduce:
> 1. To define a guest with:
> <numatune><memory mode="strict" placement="auto" /></numatune>
> 2. Try to start it
> 
> Actual results:
> # virsh start test1
> error: Failed to start domain test1
> error: invalid argument: Failed to parse bitmap ''

One thing that I apparently forgot to point out is that I never
managed to reproduce those exact symptoms: what I got instead was
along the lines of

  # virsh start guest
  error: Failed to start domain guest
  error: cannot set CPU affinity on process 40055: Invalid argument
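
For background: sched_setaffinity() fails with EINVAL when the requested mask contains no CPU that is currently online, so an unusable affinity mask computed from numad's answer surfaces as exactly this message. The same kernel error can be provoked directly with taskset (a sketch; the pid and exact wording will vary):

  # taskset -p 0 $$
  taskset: failed to set pid 12345's affinity: Invalid argument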

Comment 3 Junxiang Li 2019-06-28 02:25:33 UTC
*** Bug 1724408 has been marked as a duplicate of this bug. ***

Comment 4 jiyan 2019-07-03 09:04:58 UTC
Hi, I have tried many times to reproduce this issue on libvirt-4.5.0-23.module+el8.1.0+2983+b2ae9c0a.x86_64 using the approach in https://bugzilla.redhat.com/show_bug.cgi?id=1703661#c28,
but I could not reproduce it. Could you please check whether this issue still exists in this version? Thank you.


Version:
libvirt-4.5.0-23.module+el8.1.0+2983+b2ae9c0a.x86_64
qemu-kvm-2.12.0-80.module+el8.1.0+3572+48154135.x86_64
kernel-4.18.0-109.el8.x86_64

Steps:
# virsh domstate test
shut off

# virsh dumpxml test --inactive |grep "<vcpu" -A3
  <vcpu placement='static'>1</vcpu>
  <numatune>
    <memory mode='strict' nodeset='1'/>
  </numatune>

# cat /sys/devices/system/cpu/cpu1/online
0

# numactl --hard
available: 2 nodes (0-1)
node 0 cpus: 0 2 3 4 5
node 0 size: 15932 MB
node 0 free: 15436 MB
node 1 cpus: 6 7 8 9 10 11
node 1 size: 16101 MB
node 1 free: 15560 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

# virsh start test
Domain test started

# virsh dumpxml test |grep "<vcpu" -A3
  <vcpu placement='static'>1</vcpu>
  <numatune>
    <memory mode='strict' nodeset='1'/>
  </numatune>

VM starts successfully here.

Comment 6 Andrea Bolognani 2019-07-03 11:08:21 UTC
(In reply to jiyan from comment #4)
> Hi I am trying to reproduce this issue on
> libvirt-4.5.0-23.module+el8.1.0+2983+b2ae9c0a.x86_64 using the way in
> https://bugzilla.redhat.com/show_bug.cgi?id=1703661#c28 for many times.
> But I failed, could you please check whether this issue exists in this
> version? thank you.
> 
> Version:
> libvirt-4.5.0-23.module+el8.1.0+2983+b2ae9c0a.x86_64
> qemu-kvm-2.12.0-80.module+el8.1.0+3572+48154135.x86_64
> kernel-4.18.0-109.el8.x86_64

For RHEL 7, the issue was introduced in libvirt-4.5.0-13.el7 while
addressing Bug 1695434, and subsequently fixed in libvirt-4.5.0-20.el7
(tracked by Bug 1703661): any libvirt version in between those two can
be used to reproduce the incorrect behavior.

For RHEL 8, the corresponding trackers are Bug 1716943 and
Bug 1716908 respectively, both of which have been addressed in
libvirt-4.5.0-25.el8, so there's no libvirt version you can use to
reproduce the incorrect behavior.

Hope that cleared up the situation! I agree it's pretty confusing, and
in fact I had to do some digging of my own before I could confidently
write the above :)
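
For a quick check of whether a given installed build predates the fix, a plain version sort is enough (a sketch using the Fixed In Version boundary from this bug; adjust the boundary for RHEL 7):

  # printf '%s\n' "$(rpm -q --qf '%{VERSION}-%{RELEASE}\n' libvirt)" 4.5.0-25.el8 | sort -V

If the installed version is printed first (and differs from 4.5.0-25.el8), the build predates the fix.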

Comment 7 jiyan 2019-07-05 06:41:19 UTC
Verified this bug on libvirt-4.5.0-28.module+el8.1.0+3531+2918145b.x86_64

Version:
libvirt-4.5.0-28.module+el8.1.0+3531+2918145b.x86_64
qemu-kvm-2.12.0-80.module+el8.1.0+3572+48154135.x86_64
kernel-4.18.0-109.el8.x86_64

Steps:
# numactl --hard
available: 2 nodes (0-1)
node 0 cpus: 0 2 3 4 5
node 0 size: 15932 MB
node 0 free: 15241 MB
node 1 cpus: 6 7 8 9 10 11
node 1 size: 16101 MB
node 1 free: 15408 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

# virsh domstate test
shut off

# virsh dumpxml test --inactive |grep "<vcpu" -A3
  <vcpu placement='static'>1</vcpu>
  <numatune>
    <memory mode='strict' nodeset='1'/>
  </numatune>

# echo 0 > /sys/devices/system/cpu/cpu1/online 

# cat /sys/devices/system/cpu/cpu1/online 
0

# virsh start test
Domain test started

# virsh dumpxml test |grep "<vcpu" -A3
  <vcpu placement='static'>1</vcpu>
  <numatune>
    <memory mode='strict' nodeset='1'/>
  </numatune>

The test result is as expected; moving this bug to VERIFIED.
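
One caveat with these steps (a note, not part of the original verification): the echo 0 above leaves cpu1 offline after the test; it can be brought back online through the same sysfs file:

  # echo 1 > /sys/devices/system/cpu/cpu1/online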

Comment 9 errata-xmlrpc 2019-11-05 20:50:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3345

