Bug 1392031 - ivshmem-plain support in RHEL 7.3
Summary: ivshmem-plain support in RHEL 7.3
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Martin Kletzander
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1347049
Blocks:
 
Reported: 2016-11-04 15:42 UTC by Marcel Kolaja
Modified: 2016-12-06 17:11 UTC
CC List: 25 users

Fixed In Version: libvirt-2.0.0-10.el7_3.2
Doc Type: No Doc Update
Doc Text:
Clone Of: 1347049
Environment:
Last Closed: 2016-12-06 17:11:37 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID:    Red Hat Product Errata RHBA-2016:2885
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      libvirt bug fix update
Last Updated: 2016-12-06 22:01:23 UTC

Description Marcel Kolaja 2016-11-04 15:42:12 UTC
This bug has been copied from bug #1347049 and has been proposed
to be backported to 7.3 z-stream (EUS).

Comment 6 Luyao Huang 2016-11-09 05:53:56 UTC
Tested on libvirt-2.0.0-10.el7_3.1.x86_64 and qemu-kvm-rhev-2.6.0-27.el7.x86_64:

Test ivshmem-plain device:

1.
# virsh dumpxml r7
...
    <shmem name='my_shmem1'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </shmem>
...

2.
# virsh start r7
Domain r7 started

3. check qemu command line:

# ps aux|grep r7
...
-object memory-backend-file,id=shmmem-shmem0,mem-path=/dev/shm/my_shmem1,size=4194304 -device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0,bus=pci.0,addr=0xb
...
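
(Note: size=4194304 is the XML's <size unit='M'>4</size> expressed in bytes: 4 * 1024 * 1024 = 4194304.)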

4.
IN GUEST:

# lspci -vvv -s  00:0b.0
00:0b.0 RAM memory: Red Hat, Inc Inter-VM shared memory (rev 01)
	Subsystem: Red Hat, Inc QEMU Virtual Machine
	Physical Slot: 11
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Region 0: Memory at fc21a000 (32-bit, non-prefetchable) [size=256]
	Region 2: Memory at fe400000 (64-bit, prefetchable) [size=4M]

5. verify the ivshmem device:
IN GUEST:

#  ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:0b.0/resource0 16 | od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
0000020

# dd if=/dev/urandom bs=4 count=4 | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:0b.0/resource0 16
4+0 records in
4+0 records out
16 bytes (16 B) copied, 6.2824e-05 s, 255 kB/s

#  ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:0b.0/resource0 16 | od -t x2
0000000 827b 48a9 6c37 3319 0000 0000 0000 0000
0000020

#  ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:0b.0/resource2 16 | od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
0000020

# dd if=/dev/urandom of=data bs=1 count=128
128+0 records in
128+0 records out
128 bytes (128 B) copied, 0.00052689 s, 243 kB/s

# cat data | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:0b.0/resource2 128

# ./ivshmem-7.1-test  /sys/devices/pci0000\:00/0000\:00\:0b.0/resource2 128 | od -t x2
0000000 c18a 76c0 10cc 5712 c1cb 1f48 5a4b b259
0000020 2be8 e72f 1617 e70a 0dc0 7812 b2f9 ca27
0000040 d007 741b 30bc 8097 6909 14c3 ca76 8d50
0000060 7214 6997 8f90 1e57 3da1 4477 7d11 b9d8
0000100 5d21 488e badd 8876 5f28 b4dd 402a 5f65
0000120 c1f0 9110 7439 55eb 2485 675c fc5a 892c
0000140 4f06 7f1c cd3c bc2b fa23 ed51 98f3 be9b
0000160 7b9a f5d8 51b7 4353 bbdf 2af3 53f0 aa13
0000200

# cat data | od -t x2
0000000 c18a 76c0 10cc 5712 c1cb 1f48 5a4b b259
0000020 2be8 e72f 1617 e70a 0dc0 7812 b2f9 ca27
0000040 d007 741b 30bc 8097 6909 14c3 ca76 8d50
0000060 7214 6997 8f90 1e57 3da1 4477 7d11 b9d8
0000100 5d21 488e badd 8876 5f28 b4dd 402a 5f65
0000120 c1f0 9110 7439 55eb 2485 675c fc5a 892c
0000140 4f06 7f1c cd3c bc2b fa23 ed51 98f3 be9b
0000160 7b9a f5d8 51b7 4353 bbdf 2af3 53f0 aa13
0000200
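
(Side note: ivshmem-7.1-test is a local helper, not something shipped with libvirt or qemu. A minimal equivalent in C, assuming its job is simply to mmap the given file and copy SIZE bytes out to stdout, or from stdin into the mapping with -w, could look like this sketch; the name shmem-io is made up:)

/* shmem-io.c: hypothetical stand-in for ivshmem-7.1-test.
 * Usage: ./shmem-io [-w] PATH SIZE
 * PATH is a PCI resource file (e.g. .../resource2) or a /dev/shm object;
 * error handling is kept minimal for brevity. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int write_mode = argc > 3 && strcmp(argv[1], "-w") == 0;
    if (argc < 3) {
        fprintf(stderr, "usage: %s [-w] PATH SIZE\n", argv[0]);
        return 1;
    }
    const char *path = argv[write_mode ? 2 : 1];
    size_t size = strtoul(argv[write_mode ? 3 : 2], NULL, 0);

    int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* PCI resource files only support mmap, not read()/write() */
    unsigned char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    if (write_mode)
        fread(mem, 1, size, stdin);    /* stdin -> shared memory */
    else
        fwrite(mem, 1, size, stdout);  /* shared memory -> stdout */

    munmap(mem, size);
    close(fd);
    return 0;
}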

6. Attempt to read the BAR2 shared memory from the host; it should match the guest, but the result is not as expected:

# ./ivshmem-7.1-test my_shmem1 128 | od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000200



It looks like libvirt doesn't generate share=yes on the qemu command line. Checking the qemu source documentation:

- Just shared memory: -device ivshmem-plain,memdev=HMB,...

  This uses host memory backend HMB.  It should have option "share"
  set.
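
(A quick illustration of why the missing "share" option matters; this demo is not from the bug and the path is made up. share=yes makes qemu map the backing file MAP_SHARED; without it the mapping is effectively private, so writes stay in copy-on-write pages and never reach the /dev/shm object that the host or other guests read, which explains the all-zero read in step 6:)

/* share-demo.c: MAP_PRIVATE writes never reach the backing file,
 * MAP_SHARED writes do. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical backing object, like qemu's mem-path under /dev/shm */
    int fd = open("/dev/shm/demo_shmem", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, 4096) < 0) { perror("setup"); return 1; }

    char *priv = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    char *shar = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (priv == MAP_FAILED || shar == MAP_FAILED) { perror("mmap"); return 1; }

    char buf[32] = "";
    strcpy(priv, "hello");                /* copy-on-write: file untouched */
    pread(fd, buf, sizeof(buf) - 1, 0);
    printf("after MAP_PRIVATE write: '%s'\n", buf);   /* prints '' */

    strcpy(shar, "hello");                /* shared: visible through the file */
    pread(fd, buf, sizeof(buf) - 1, 0);
    printf("after MAP_SHARED write:  '%s'\n", buf);   /* prints 'hello' */

    unlink("/dev/shm/demo_shmem");
    return 0;
}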

After applying a patch to make libvirt generate share=yes on the qemu command line and retesting, it works as expected:

# ps aux|grep qemu
...
-object memory-backend-file,id=shmmem-shmem0,mem-path=/dev/shm/my_shmem1,size=4194304,share=yes -device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0,bus=pci.0,addr=0xb
...

# ./ivshmem-7.1-test my_shmem1 128 | od -t x2
0000000 c18a 76c0 10cc 5712 c1cb 1f48 5a4b b259
0000020 2be8 e72f 1617 e70a 0dc0 7812 b2f9 ca27
0000040 d007 741b 30bc 8097 6909 14c3 ca76 8d50
0000060 7214 6997 8f90 1e57 3da1 4477 7d11 b9d8
0000100 5d21 488e badd 8876 5f28 b4dd 402a 5f65
0000120 c1f0 9110 7439 55eb 2485 675c fc5a 892c
0000140 4f06 7f1c cd3c bc2b fa23 ed51 98f3 be9b
0000160 7b9a f5d8 51b7 4353 bbdf 2af3 53f0 aa13
0000200

Comment 7 Luyao Huang 2016-11-09 06:00:47 UTC
Hi Martin,

Could you please help check the problem in comment 6? It seems the memory-backend-file behind ivshmem-plain needs share=yes set for the device to really work. You can also check the verification steps in the qemu bug (bug 1333318 comment 22).

Thanks

Comment 8 Luyao Huang 2016-11-09 09:08:19 UTC
Also hit a libvirtd crash:

1. start a guest with ivshmem:

# virsh dumpxml r7
...
    <shmem name='my_shmem1'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
      <alias name='shmem0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </shmem>

...

2. attach an ivshmem device:

# cat ivshmem.xml; virsh attach-device r7 ivshmem.xml
    <shmem name='my_shmem1'>
<model type='ivshmem-plain'/>
      <size unit='M'>4</size>
    </shmem>

Device attached successfully

3. destroy guest:

# virsh destroy r7
error: Disconnected from qemu:///system due to I/O error
error: Failed to destroy domain r7
error: End of file while reading data: Input/output error


Backtrace:

Thread 10 (Thread 0x7f80fbd32700 (LWP 1484)):
#0  0x00007f810b97d4ff in __strlen_sse42 () from /lib64/libc.so.6
#1  0x00007f810e5c6bcc in virBufferEscapeString (buf=buf@entry=0x7f80fbd317e0, format=format@entry=0x7f810e7fbbee "<shmem name='%s'>\n", str=0x2cafe0020 <Address 0x2cafe0020 out of bounds>)
    at util/virbuffer.c:455
#2  0x00007f810e64e5ec in virDomainShmemDefFormat (buf=buf@entry=0x7f80fbd317e0, def=0x7f80e4002e30, flags=flags@entry=625) at conf/domain_conf.c:21883
#3  0x00007f810e66b6dd in virDomainDefFormatInternal (def=0x7f80e81f6fb0, caps=caps@entry=0x7f80e81e7570, flags=flags@entry=625, buf=buf@entry=0x7f80fbd317e0) at conf/domain_conf.c:23999
#4  0x00007f810e66f08f in virDomainObjFormat (xmlopt=0x7f80e81ee390, obj=obj@entry=0x7f80e81f8900, caps=0x7f80e81e7570, flags=flags@entry=625) at conf/domain_conf.c:24099
#5  0x00007f810e66f14c in virDomainSaveStatus (xmlopt=<optimized out>, statusDir=0x7f80e80efca0 "/var/run/libvirt/qemu", obj=obj@entry=0x7f80e81f8900, caps=<optimized out>) at conf/domain_conf.c:24303
#6  0x00007f80f1f67405 in qemuDomainObjSaveJob (driver=driver@entry=0x7f80e8186750, obj=obj@entry=0x7f80e81f8900) at qemu/qemu_domain.c:2818
#7  0x00007f80f1f683e3 in qemuDomainObjBeginJobInternal (driver=driver@entry=0x7f80e8186750, obj=obj@entry=0x7f80e81f8900, job=job@entry=QEMU_JOB_DESTROY, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_NONE)
    at qemu/qemu_domain.c:2995
#8  0x00007f80f1f6afab in qemuDomainObjBeginJob (driver=driver@entry=0x7f80e8186750, obj=obj@entry=0x7f80e81f8900, job=job@entry=QEMU_JOB_DESTROY) at qemu/qemu_domain.c:3070
#9  0x00007f80f1f8d3a6 in qemuProcessBeginStopJob (driver=driver@entry=0x7f80e8186750, vm=0x7f80e81f8900, job=job@entry=QEMU_JOB_DESTROY, forceKill=forceKill@entry=true) at qemu/qemu_process.c:5708
#10 0x00007f80f1fdfe92 in qemuDomainDestroyFlags (dom=0x7f80cc0009f0, flags=0) at qemu/qemu_driver.c:2219

#11 0x00007f810e6c4d3c in virDomainDestroy (domain=domain@entry=0x7f80cc0009f0) at libvirt-domain.c:479
#12 0x00007f810f354b8b in remoteDispatchDomainDestroy (server=0x7f8110c91a80, msg=0x7f8110c93850, args=<optimized out>, rerr=0x7f80fbd31c50, client=0x7f8110c938c0) at remote_dispatch.h:4509
#13 remoteDispatchDomainDestroyHelper (server=0x7f8110c91a80, client=0x7f8110c938c0, msg=0x7f8110c93850, rerr=0x7f80fbd31c50, args=<optimized out>, ret=0x7f80cc0008e0) at remote_dispatch.h:4485
#14 0x00007f810e743012 in virNetServerProgramDispatchCall (msg=0x7f8110c93850, client=0x7f8110c938c0, server=0x7f8110c91a80, prog=0x7f8110ca6de0) at rpc/virnetserverprogram.c:437
#15 virNetServerProgramDispatch (prog=0x7f8110ca6de0, server=server@entry=0x7f8110c91a80, client=0x7f8110c938c0, msg=0x7f8110c93850) at rpc/virnetserverprogram.c:307
#16 0x00007f810f365c6d in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7f8110c91a80) at rpc/virnetserver.c:148
#17 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7f8110c91a80) at rpc/virnetserver.c:169
#18 0x00007f810e62fd41 in virThreadPoolWorker (opaque=opaque@entry=0x7f8110c868e0) at util/virthreadpool.c:167
#19 0x00007f810e62f0c8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#20 0x00007f810bc11dc5 in start_thread () from /lib64/libpthread.so.0
#21 0x00007f810b93973d in clone () from /lib64/libc.so.6

Comment 9 Luyao Huang 2016-11-09 09:31:42 UTC
And another critical problem:

1. start a guest without an ivshmem-plain device:

# virsh start r7
Domain r7 started

2. 

# cat ivshmem.xml 
    <shmem name='my_shmem1'>
<model type='ivshmem-plain'/>
      <size unit='M'>4</size>
    </shmem>


# virsh attach-device r7 ivshmem.xml
Device attached successfully

3. Recheck the XML; libvirt has generated invalid XML:

# virsh dumpxml r7
...
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
      <model type='ivshmem-plain'/>           <-----no <shmem name='xxx'>
      <size unit='M'>4</size>
    </shmem>
    <memory model='dimm'>
...

Comment 10 Martin Kletzander 2016-11-09 16:54:09 UTC
Thanks for finding that out. The additional issue is fixed upstream with v2.4.0-46-gcca34e38fd32:

commit cca34e38fd32dbafa2c647f41a7dfb30d1e2e0a9
Author: Martin Kletzander <mkletzan>
Date:   Wed Nov 9 17:40:17 2016 +0100

    qemu: Fix double free when live-attaching shmem
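
(For context, a self-contained sketch of the bug class the commit subject describes; this is an assumed reconstruction, not the literal upstream diff. On a successful live attach the domain definition takes ownership of the shmem def, so the caller has to drop its pointer; otherwise the same def is freed twice, and later XML formatting reads freed memory, which would match the out-of-bounds name in the backtrace in comment 8:)

/* double-free-demo.c: ownership transfer on attach */
#include <stdlib.h>
#include <string.h>

struct shmem_def { char *name; };

static struct shmem_def *vm_def;     /* stands in for the domain's def list */

static int attach_shmem(struct shmem_def *def)
{
    vm_def = def;                    /* on success, the domain owns def */
    return 0;
}

int main(void)
{
    struct shmem_def *dev = calloc(1, sizeof(*dev));
    dev->name = strdup("my_shmem1");

    if (attach_shmem(dev) == 0)
        dev = NULL;                  /* the fix: caller forgets its pointer */

    if (dev) {                       /* caller's generic cleanup path */
        free(dev->name);             /* without the fix this would also run */
        free(dev);                   /* ...and double-free what vm_def owns */
    }

    free(vm_def->name);              /* the owner's legitimate teardown */
    free(vm_def);
    return 0;
}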

Comment 12 Luyao Huang 2016-11-10 10:58:10 UTC
Verified this bug with libvirt-2.0.0-10.el7_3.2.x86_64 and qemu-kvm-rhev-2.6.0-27.el7.x86_64:

Test ivshmem-plain device:

1.

# virsh dumpxml r7
...
    <shmem name='my_shmem1'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </shmem>

...

2.

# virsh start r7
Domain r7 started

3. check qemu command line:

# ps aux|grep qemu
...
-object memory-backend-file,id=shmmem-shmem0,mem-path=/dev/shm/my_shmem1,size=4194304,share=yes -device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0,bus=pci.0,addr=0x7
...

4. check device in guest:

# lspci -vvv -s  00:07.0
00:07.0 RAM memory: Red Hat, Inc Inter-VM shared memory (rev 01)
	Subsystem: Red Hat, Inc QEMU Virtual Machine
	Physical Slot: 7
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Region 0: Memory at fc219000 (32-bit, non-prefetchable) [size=256]
	Region 2: Memory at fe400000 (64-bit, prefetchable) [size=4M]

5. verify the ivshmem-plain device works:

In GUEST:

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:07.0/resource0 16 | od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
0000020

# dd if=/dev/urandom bs=4 count=4 | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:07.0/resource0 16
4+0 records in
4+0 records out
16 bytes (16 B) copiedread: Success
, 0.000111503 s, 143 kB/s

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:07.0/resource0 16 | od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
0000020

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:07.0/resource2 128 |od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000200

# cat data | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:07.0/resource2 128

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:07.0/resource2 128 |od -t x2
0000000 8608 a088 e418 0ed8 fb94 ab57 a8ff ed35
0000020 7f06 91ee 8863 efa5 59ad e3f2 4c6f 1939
0000040 edca 8000 76ae 6a68 5608 19b9 1151 f9c5
0000060 486d f1b6 9e48 424a ae62 cd95 5d1f 912d
0000100 6039 b536 2356 27a2 aac6 443c c12d 1ec3
0000120 f056 18b5 4f2a 1d7e 69bc 582b 28c1 d190
0000140 dea8 3be0 5cd7 b35a 247e b830 9bd0 a5f7
0000160 5312 1406 1adf 9b7f 1aa0 d1dd fc32 ab75
0000200

6. Check the shmem on the host:

# ./ivshmem-7.1-test my_shmem1 128 | od -t x2
0000000 8608 a088 e418 0ed8 fb94 ab57 a8ff ed35
0000020 7f06 91ee 8863 efa5 59ad e3f2 4c6f 1939
0000040 edca 8000 76ae 6a68 5608 19b9 1151 f9c5
0000060 486d f1b6 9e48 424a ae62 cd95 5d1f 912d
0000100 6039 b536 2356 27a2 aac6 443c c12d 1ec3
0000120 f056 18b5 4f2a 1d7e 69bc 582b 28c1 d190
0000140 dea8 3be0 5cd7 b35a 247e b830 9bd0 a5f7
0000160 5312 1406 1adf 9b7f 1aa0 d1dd fc32 ab75
0000200

7. attempt to write to BAR2 at an offset, then read it back:

# dd if=/dev/urandom of=data2 bs=1 count=64
64+0 records in
64+0 records out
64 bytes (64 B) copied, 0.000363592 s, 176 kB/s

# cat data2 | ./ivshmem-7.1-test -w -o 64 my_shmem1 64

# cat data2 | od -t x2
0000000 31ce 3939 f251 c488 aec2 4cbc 2a6b c333
0000020 2a55 c2c3 7a30 aada 0866 322c 8a1e 1ef0
0000040 8d6b a06d ad0f 78ad f223 65a3 f660 a5ce
0000060 09b9 3f2b db66 d477 3c5a dc12 d0ec 705e
0000100

# ./ivshmem-7.1-test my_shmem1 128 | od -t x2
0000000 8608 a088 e418 0ed8 fb94 ab57 a8ff ed35
0000020 7f06 91ee 8863 efa5 59ad e3f2 4c6f 1939
0000040 edca 8000 76ae 6a68 5608 19b9 1151 f9c5
0000060 486d f1b6 9e48 424a ae62 cd95 5d1f 912d
0000100 31ce 3939 f251 c488 aec2 4cbc 2a6b c333
0000120 2a55 c2c3 7a30 aada 0866 322c 8a1e 1ef0
0000140 8d6b a06d ad0f 78ad f223 65a3 f660 a5ce
0000160 09b9 3f2b db66 d477 3c5a dc12 d0ec 705e
0000200
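
(Note: od offsets are octal, so the data2 bytes start at 0000100 = byte 64, exactly the -o 64 offset used above; the first 64 bytes are unchanged from step 6.)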

8. In another guest using the same shmem object:

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:0b.0/resource2 128 |od -t x2
0000000 8608 a088 e418 0ed8 fb94 ab57 a8ff ed35
0000020 7f06 91ee 8863 efa5 59ad e3f2 4c6f 1939
0000040 edca 8000 76ae 6a68 5608 19b9 1151 f9c5
0000060 486d f1b6 9e48 424a ae62 cd95 5d1f 912d
0000100 31ce 3939 f251 c488 aec2 4cbc 2a6b c333
0000120 2a55 c2c3 7a30 aada 0866 322c 8a1e 1ef0
0000140 8d6b a06d ad0f 78ad f223 65a3 f660 a5ce
0000160 09b9 3f2b db66 d477 3c5a dc12 d0ec 705e
0000200

9. attach another ivshmem device:

# virsh attach-device r7 /root/ivshmem.xml
Device attached successfully

# cat /root/ivshmem.xml
    <shmem name='my_shmem3'>
<model type='ivshmem-plain'/>
      <size unit='M'>8</size>
    </shmem>

10. check guest xml:

# virsh dumpxml r7
    <shmem name='my_shmem3'>
      <model type='ivshmem-plain'/>
      <size unit='M'>8</size>
      <alias name='shmem1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
    </shmem>

11. Recheck the ivshmem-plain device with the same steps as in steps 1-8.

12. detach ivshmem device:

# virsh detach-device r7 /root/ivshmem.xml
Device detached successfully

13. recheck guest xml:


# virsh dumpxml r7

the device is no longer present

14. try to migrate guest:

# virsh managedsave r7
error: Failed to save domain r7 state
error: Requested operation is not valid: migration with shmem device is not supported

# virsh migrate r7 qemu+ssh://targethost/system --live
error: Requested operation is not valid: migration with shmem device is not supported

Comment 13 Luyao Huang 2016-11-10 11:01:30 UTC
Test the ivshmem and ivshmem-doorbell models (qemu does not support them):

# virsh dumpxml r7

    <shmem name='my_shmem1'>
      <model type='ivshmem'/>
      <size unit='M'>4</size>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </shmem>

# virsh start r7
error: Failed to start domain r7
error: internal error: qemu unexpectedly closed the monitor: 2016-11-10T11:00:04.116513Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 6 7 8 9
2016-11-10T11:00:04.116771Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
2016-11-10T11:00:04.190646Z qemu-kvm: -device ivshmem,id=shmem0,size=4m,shm=my_shmem1,bus=pci.0,addr=0x7: Parameter 'driver' expects pluggable device type

# virsh dumpxml r7

    <shmem name='my_shmem2'>
      <model type='ivshmem-doorbell'/>
      <server path='/tmp/ivshmem_socket'/>
      <msi ioeventfd='on'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </shmem>

# virsh start r7
error: Failed to start domain r7
error: internal error: qemu unexpectedly closed the monitor: 2016-11-10T11:01:05.264988Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 6 7 8 9
2016-11-10T11:01:05.265302Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
2016-11-10T11:01:05.333094Z qemu-kvm: -device ivshmem-doorbell,id=shmem0,chardev=charshmem0,ioeventfd=on,bus=pci.0,addr=0x7: Parameter 'driver' expects pluggable device type

Comment 15 errata-xmlrpc 2016-12-06 17:11:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2885.html

