Bug 1719558 - libvirtd crashes when starting a VM with an emulatorpin configuration
Summary: libvirtd crashes when starting a VM with an emulatorpin configuration
Keywords:
Status: CLOSED DUPLICATE of bug 1718172
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Libvirt Maintainers
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-12 06:12 UTC by Fangge Jin
Modified: 2019-06-12 07:47 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-12 07:47:17 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
libvirtd log (1.69 MB, text/plain)
2019-06-12 06:12 UTC, Fangge Jin
no flags | Details

Description Fangge Jin 2019-06-12 06:12:32 UTC
Created attachment 1579638 [details]
libvirtd log

Description of problem:
libvirtd crashes when starting a VM with an emulatorpin configuration.

Version-Release number of selected component:
libvirt-4.5.0-21.virtcov.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
0. The host has 4 CPUs online:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
...

1. Prepare a vm xml with emulatorpin configuration:
...
  <vcpu placement='static' cpuset='0-3' current='8'>16</vcpu>
  <cputune>
    <emulatorpin cpuset='3'/>
  </cputune>
...

2. Create vm:
# virsh create min-rep.xml
error: Disconnected from qemu:///system due to end of file
error: Failed to create domain from min-rep.xml
error: End of file while reading data: Input/output error

3. Check backtrace:
Thread 1 (Thread 0x7f76aae70700 (LWP 15977)):
#0  0x00007f76bcb1ad99 in virBitmapCopy (dst=0x0, src=0x7f7684038e90) at util/virbitmap.c:164
#1  0x00007f766c3505a2 in qemuProcessInitCpuAffinity (vm=vm@entry=0x7f7640188930) at qemu/qemu_process.c:2395
#2  0x00007f766c35ba3b in qemuProcessLaunch (conn=conn@entry=0x7f7684000a00, driver=driver@entry=0x7f7640100d80, vm=vm@entry=0x7f7640188930, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_START, incoming=incoming@entry=0x0,
    snapshot=snapshot@entry=0x0, vmop=vmop@entry=VIR_NETDEV_VPORT_PROFILE_OP_CREATE, flags=flags@entry=17) at qemu/qemu_process.c:6521
#3  0x00007f766c3635ef in qemuProcessStart (conn=conn@entry=0x7f7684000a00, driver=driver@entry=0x7f7640100d80, vm=0x7f7640188930, updatedCPU=updatedCPU@entry=0x0, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_START,
    migrateFrom=migrateFrom@entry=0x0, migrateFd=migrateFd@entry=-1, migratePath=migratePath@entry=0x0, snapshot=snapshot@entry=0x0, vmop=vmop@entry=VIR_NETDEV_VPORT_PROFILE_OP_CREATE, flags=17, flags@entry=1)
    at qemu/qemu_process.c:6806
#4  0x00007f766c3e1cd9 in qemuDomainCreateXML (conn=0x7f7684000a00,
    xml=0x7f7684001d50 "<domain type='kvm'>\n  <name>rhel7.6</name>\n  <uuid>df899f5c-db94-48b2-867a-e0c266b59b7a</uuid>\n  <genid>001b2039-ca77-4352-ab4a-433521eabf48</genid>\n  <title>A short description - rhel7.6 full xml - o"..., flags=0) at qemu/qemu_driver.c:1745
#5  0x00007f76bcdd91d5 in virDomainCreateXML (conn=0x7f7684000a00,
    xmlDesc=0x7f7684001d50 "<domain type='kvm'>\n  <name>rhel7.6</name>\n  <uuid>df899f5c-db94-48b2-867a-e0c266b59b7a</uuid>\n  <genid>001b2039-ca77-4352-ab4a-433521eabf48</genid>\n  <title>A short description - rhel7.6 full xml - o"..., flags=0) at libvirt-domain.c:176
#6  0x000055c7008f5d77 in remoteDispatchDomainCreateXML (ret=0x7f7684001d20, args=0x7f7684001c60, rerr=0x7f76aae6fbc0, msg=0x55c701931e60, client=0x55c701932620, server=0x55c701910e30)
    at remote/remote_daemon_dispatch_stubs.h:4575
#7  remoteDispatchDomainCreateXMLHelper (server=0x55c701910e30, client=0x55c701932620, msg=0x55c701931e60, rerr=0x7f76aae6fbc0, args=0x7f7684001c60, ret=0x7f7684001d20) at remote/remote_daemon_dispatch_stubs.h:4553
#8  0x00007f76bcce1215 in virNetServerProgramDispatchCall (msg=0x55c701931e60, client=0x55c701932620, server=0x55c701910e30, prog=0x55c70192f650) at rpc/virnetserverprogram.c:437
#9  virNetServerProgramDispatch (prog=0x55c70192f650, server=server@entry=0x55c701910e30, client=client@entry=0x55c701932620, msg=0x55c701931e60) at rpc/virnetserverprogram.c:304
#10 0x00007f76bcce9bea in virNetServerProcessMsg (srv=srv@entry=0x55c701910e30, client=0x55c701932620, prog=<optimized out>, msg=0x55c701931e60) at rpc/virnetserver.c:143
#11 0x00007f76bcce9f51 in virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x55c701910e30) at rpc/virnetserver.c:164
#12 0x00007f76bcbc7b6c in virThreadPoolWorker (opaque=opaque@entry=0x55c701904e00) at util/virthreadpool.c:167
#13 0x00007f76bcbc691a in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#14 0x00007f76b9ee1ea5 in start_thread (arg=0x7f76aae70700) at pthread_create.c:307
#15 0x00007f76b9c0a8cd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Actual results:
libvirtd crashed

Expected results:
The VM starts successfully.

Additional info:

Comment 3 Andrea Bolognani 2019-06-12 07:47:17 UTC

*** This bug has been marked as a duplicate of bug 1718172 ***

