Bug 1260846 - libvirtd crashes when defining a guest whose NUMA cell IDs are out of order
Summary: libvirtd crashes when defining a guest whose NUMA cell IDs are out of order
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-09-08 03:38 UTC by Luyao Huang
Modified: 2015-11-19 06:54 UTC
CC List: 6 users

Fixed In Version: libvirt-1.2.17-9.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 06:54:12 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2015:2202 (normal, SHIPPED_LIVE): libvirt bug fix and enhancement update, last updated 2015-11-19 08:17:58 UTC

Description Luyao Huang 2015-09-08 03:38:23 UTC
Description of problem:
libvirtd crashes when defining a guest whose NUMA cell IDs are out of order

Found by the Dice program:
(https://github.com/code-dice/dice)

Version-Release number of selected component (if applicable):
libvirt-1.2.17-8.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1.
# cat /tmp/test1.xml
<domain id="100" type="kvm">
  <name>virt-trinity-2</name>
  <uuid>eebf8A5bDfEaC45baABDCBDb099DA7Ab</uuid>
  <title>c</title>
  <description>19L0o</description>
  <cpu match="strict" mode="custom">
    <numa>
      <cell cpus="1" id="0" memAccess="private" memory="6044000" unit="b" />
      <cell cpus="3" id="2" memAccess="shared" memory="1" unit="m" />
      <cell cpus="4" id="1" memAccess="private" memory="890" unit="kib" />
    </numa>
  </cpu>
  <os>
    <type>hvm</type>
  </os>
  <memory dumpCore="on" unit="b">143464</memory>
  <maxMemory slots="8" unit="mb">8</maxMemory>
  <currentMemory unit="byte">142087</currentMemory>
  <vcpu current="2" placement="auto">19</vcpu>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
  </devices>
</domain>

2.
# virsh define /tmp/test1.xml
error: Failed to define domain from /tmp/test1.xml
error: End of file while reading data: Input/output error

3.

Actual results:
libvirtd crashes when defining a guest whose NUMA cell IDs are out of order

Expected results:
No crash; the domain is defined successfully.

Additional info:

Program received signal SIGSEGV, Segmentation fault.
virBitmapOverlaps (b1=0x7fc870001940, b2=0x0) at util/virbitmap.c:849
849	    if (b1->max_bit > b2->max_bit) {
(gdb) bt
#0  virBitmapOverlaps (b1=0x7fc870001940, b2=0x0) at util/virbitmap.c:849
#1  0x00007fc89a9de1cf in virDomainNumaDefCPUParseXML (def=0x7fc870001890, ctxt=ctxt@entry=0x7fc870003230) at conf/numa_conf.c:763
#2  0x00007fc89a9cfe1f in virDomainDefParseXML (xml=xml@entry=0x7fc8700030d0, root=root@entry=0x7fc870003430, ctxt=ctxt@entry=0x7fc870003230, caps=caps@entry=0x7fc8781fb060, xmlopt=xmlopt@entry=0x7fc878221a60, 
    flags=flags@entry=2) at conf/domain_conf.c:15051
#3  0x00007fc89a9d59d0 in virDomainDefParseNode (xml=xml@entry=0x7fc8700030d0, root=0x7fc870003430, caps=caps@entry=0x7fc8781fb060, xmlopt=xmlopt@entry=0x7fc878221a60, flags=flags@entry=2)
    at conf/domain_conf.c:16434
#4  0x00007fc89a9d5ae8 in virDomainDefParse (
    xmlStr=xmlStr@entry=0x7fc870000d20 "<domain id=\"100\" type=\"kvm\">\n      <name>virt-trinity-2</name>\n        <uuid>eebf8A5bDfEaC45baABDCBDb099DA7Ab</uuid>\n          <title>c</title>\n", ' ' <repeats 12 times>, "<description>19L0o</description>\n", ' ' <repeats 11 times>..., filename=filename@entry=0x0, caps=caps@entry=0x7fc8781fb060, xmlopt=0x7fc878221a60, flags=flags@entry=2) at conf/domain_conf.c:16381
#5  0x00007fc89a9d5b30 in virDomainDefParseString (
    xmlStr=xmlStr@entry=0x7fc870000d20 "<domain id=\"100\" type=\"kvm\">\n      <name>virt-trinity-2</name>\n        <uuid>eebf8A5bDfEaC45baABDCBDb099DA7Ab</uuid>\n          <title>c</title>\n", ' ' <repeats 12 times>, "<description>19L0o</description>\n", ' ' <repeats 11 times>..., caps=caps@entry=0x7fc8781fb060, xmlopt=<optimized out>, flags=flags@entry=2) at conf/domain_conf.c:16396
#6  0x00007fc881b2dccb in qemuDomainDefineXMLFlags (conn=0x7fc868000a00, 
    xml=0x7fc870000d20 "<domain id=\"100\" type=\"kvm\">\n      <name>virt-trinity-2</name>\n        <uuid>eebf8A5bDfEaC45baABDCBDb099DA7Ab</uuid>\n          <title>c</title>\n", ' ' <repeats 12 times>, "<description>19L0o</description>\n", ' ' <repeats 11 times>..., flags=<optimized out>) at qemu/qemu_driver.c:7506
#7  0x00007fc89aa3278e in virDomainDefineXML (conn=0x7fc868000a00, 
    xml=0x7fc870000d20 "<domain id=\"100\" type=\"kvm\">\n      <name>virt-trinity-2</name>\n        <uuid>eebf8A5bDfEaC45baABDCBDb099DA7Ab</uuid>\n          <title>c</title>\n", ' ' <repeats 12 times>, "<description>19L0o</description>\n", ' ' <repeats 11 times>...) at libvirt-domain.c:6416
#8  0x00007fc89b691fe8 in remoteDispatchDomainDefineXML (server=0x7fc89c14fe30, msg=0x7fc89c1677a0, ret=0x7fc870000cf0, args=0x7fc8700008c0, rerr=0x7fc88aac2c30, client=0x7fc89c1672b0) at remote_dispatch.h:3837
#9  remoteDispatchDomainDefineXMLHelper (server=0x7fc89c14fe30, client=0x7fc89c1672b0, msg=0x7fc89c1677a0, rerr=0x7fc88aac2c30, args=0x7fc8700008c0, ret=0x7fc870000cf0) at remote_dispatch.h:3815
#10 0x00007fc89aa9b022 in virNetServerProgramDispatchCall (msg=0x7fc89c1677a0, client=0x7fc89c1672b0, server=0x7fc89c14fe30, prog=0x7fc89c15ef70) at rpc/virnetserverprogram.c:437
#11 virNetServerProgramDispatch (prog=0x7fc89c15ef70, server=server@entry=0x7fc89c14fe30, client=0x7fc89c1672b0, msg=0x7fc89c1677a0) at rpc/virnetserverprogram.c:307
#12 0x00007fc89aa9629d in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7fc89c14fe30) at rpc/virnetserver.c:135
#13 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7fc89c14fe30) at rpc/virnetserver.c:156
#14 0x00007fc89a9913e5 in virThreadPoolWorker (opaque=opaque@entry=0x7fc89c14adb0) at util/virthreadpool.c:145
#15 0x00007fc89a990908 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#16 0x00007fc897fccdf5 in start_thread (arg=0x7fc88aac3700) at pthread_create.c:308
#17 0x00007fc897cf31ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
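
For context on the trace above: frame #1 is virDomainNumaDefCPUParseXML checking the newly parsed cell's cpumask against the cells parsed before it, and frame #0 shows virBitmapOverlaps receiving b2=0x0 and dereferencing it at util/virbitmap.c:849. A plausible reading (an assumption from the trace, not confirmed in this report) is that cells are stored in an array indexed by their id attribute, so with ids arriving as 0, 2, 1 the slot for cell 1 is still empty when cell 2 is checked, and a NULL cpumask reaches the unguarded comparison. The minimal sketch below shows that pattern, with the NULL guard that keeps this toy version from faulting; the struct and function names are simplified stand-ins, not libvirt's real virBitmap API.

#include <stdio.h>
#include <string.h>

/* Simplified stand-in for virBitmap; only the field the crash touches. */
struct bitmap {
    size_t max_bit;
};

/* One NUMA cell; cpumask stays NULL until that cell's XML has been parsed. */
struct numa_cell {
    struct bitmap *cpumask;
};

/* Sketch of the unguarded check at util/virbitmap.c:849: reading
 * b2->max_bit faults when b2 == NULL, matching frame #0 above. */
static int bitmap_overlaps(const struct bitmap *b1, const struct bitmap *b2)
{
    if (b1->max_bit > b2->max_bit)     /* faults here when b2 == NULL */
        return 0;                      /* real bit-by-bit test elided */
    return 0;
}

int main(void)
{
    /* Cell ids arrive as 0, 2, 1: slot 1 has not been parsed yet when
     * cell 2 is compared against all lower-numbered slots. */
    struct numa_cell cells[3];
    memset(cells, 0, sizeof(cells));

    struct bitmap b0 = { .max_bit = 8 };
    struct bitmap b2 = { .max_bit = 8 };
    cells[0].cpumask = &b0;
    cells[2].cpumask = &b2;            /* cells[1].cpumask is still NULL */

    for (int i = 0; i < 2; i++) {
        /* Without this NULL guard the call below would segfault just
         * like the trace above. */
        if (!cells[i].cpumask)
            continue;
        bitmap_overlaps(cells[2].cpumask, cells[i].cpumask);
    }
    return 0;
}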

Comment 4 hongming 2015-09-18 09:17:03 UTC
Verified as follows. The results are as expected. Moving the status to VERIFIED.

# rpm -q libvirt
libvirt-1.2.17-9.el7.x86_64

# cat test1.xml
 <domain type='kvm'>
   <name>QEMUGuest2</name>
   <memory unit='KiB'>328650</memory>
   <currentMemory unit='KiB'>328650</currentMemory>
   <vcpu placement='static'>16</vcpu>
   <os>
     <type arch='x86_64' machine='pc'>hvm</type>
     <boot dev='network'/>
   </os>
   <cpu>
     <topology sockets='2' cores='4' threads='2'/>
     <numa>
      <cell  id="0" cpus="1" memory="6044000" unit="b" />
      <cell  id="2" cpus="2" memory="1" unit="m" />
      <cell  id="1" cpus="0" memory="890" unit="kib" />
     </numa>
   </cpu>
   <clock offset='utc'/>
   <on_poweroff>destroy</on_poweroff>
   <on_reboot>restart</on_reboot>
   <on_crash>destroy</on_crash>
   <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
   </devices>
 </domain>

# virsh define test1.xml
Domain QEMUGuest2 defined from test1.xml

# virsh start QEMUGuest2
Domain QEMUGuest2 started

# virsh dumpxml QEMUGuest2|grep /cpu -B7
  <cpu>
    <topology sockets='2' cores='4' threads='2'/>
    <numa>
      <cell id='0' cpus='1' memory='6144' unit='KiB'/>
      <cell id='1' cpus='0' memory='1024' unit='KiB'/>
      <cell id='2' cpus='2' memory='1024' unit='KiB'/>
    </numa>
  </cpu>


# cat test.xml
 <domain type='kvm'>
   <name>QEMUGuest1</name>
   <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
   <memory unit='KiB'>328650</memory>
   <currentMemory unit='KiB'>328650</currentMemory>
   <vcpu placement='static'>16</vcpu>
   <os>
     <type arch='x86_64' machine='pc'>hvm</type>
     <boot dev='network'/>
   </os>
   <cpu>
     <topology sockets='2' cores='4' threads='2'/>
     <numa>
       <cell id='0' cpus='0-5' memory='109550' unit='KiB'/>
       <cell id='2' cpus='6-10' memory='109550' unit='KiB'/>
       <cell id='1' cpus='11-15' memory='109550' unit='KiB'/>
     </numa>
   </cpu>
   <clock offset='utc'/>
   <on_poweroff>destroy</on_poweroff>
   <on_reboot>restart</on_reboot>
   <on_crash>destroy</on_crash>
   <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
   </devices>
 </domain>

# virsh define test.xml
Domain QEMUGuest1 defined from test.xml


# virsh start QEMUGuest1
Domain QEMUGuest1 started

# virsh dumpxml QEMUGuest1|grep /cpu -B7
  <cpu>
    <topology sockets='2' cores='4' threads='2'/>
    <numa>
      <cell id='0' cpus='0-5' memory='109568' unit='KiB'/>
      <cell id='1' cpus='11-15' memory='109568' unit='KiB'/>
      <cell id='2' cpus='6-10' memory='109568' unit='KiB'/>
    </numa>
  </cpu>

# cat test2.xml|grep /cpu -B7
   <cpu>
     <topology sockets='2' cores='4' threads='2'/>
     <numa>
      <cell  id="0" cpus="5-10" memory="6044000" unit="b" />
      <cell  id="2" cpus="0-6" memory="1" unit="m" />
      <cell  id="1" cpus="11-15" memory="890" unit="kib" />
     </numa>
   </cpu>

# virsh define test2.xml
error: Failed to define domain from test2.xml
error: unsupported configuration: NUMA cells 2 and 0 have overlapping vCPU ids
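
For what it's worth, the rejection above is consistent with the ranges in test2.xml: cell 0 claims vCPUs 5-10 and cell 2 claims vCPUs 0-6, so the two cells share CPUs 5 and 6. A tiny stand-alone check (plain C, illustration only, not libvirt code):

#include <stdio.h>

int main(void)
{
    unsigned long cell0 = 0, cell2 = 0;

    for (int cpu = 5; cpu <= 10; cpu++)   /* cell 0: cpus="5-10" */
        cell0 |= 1UL << cpu;
    for (int cpu = 0; cpu <= 6; cpu++)    /* cell 2: cpus="0-6"  */
        cell2 |= 1UL << cpu;

    /* Prints 0x60, i.e. bits 5 and 6: the vCPUs claimed by both cells. */
    printf("overlap mask: 0x%lx\n", cell0 & cell2);
    return 0;
}

So the fixed build reports the overlap cleanly instead of crashing, while the merely out-of-order (but non-overlapping) test1.xml and test.xml above define and start without errors.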

# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2015-09-17 16:18:04 CST; 24h ago

Comment 6 errata-xmlrpc 2015-11-19 06:54:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

