Bug 1347049 - ivshmem-plain support in RHEL 7.3
Summary: ivshmem-plain support in RHEL 7.3
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Assignee: Martin Kletzander
QA Contact: Luyao Huang
Depends On: 1289417 1333318
Blocks: 1218603 1333282 1392031
 
Reported: 2016-06-15 22:00 UTC by Ademar Reis
Modified: 2017-08-01 23:51 UTC
CC: 24 users

Fixed In Version: libvirt-2.5.0-1.el7
Doc Type: Enhancement
Doc Text:
Clone Of: 1333318
: 1392031
Last Closed: 2017-08-01 17:09:12 UTC




Links:
Red Hat Product Errata RHEA-2017:1846 (normal, SHIPPED_LIVE): libvirt bug fix and enhancement update, last updated 2017-08-01 18:02:50 UTC

Comment 8 Markus Armbruster 2016-08-04 09:37:14 UTC
Context:

* We disabled ivshmem in RHEL-7.0 due to poor guest driver support, poor
  code quality, and no real business case (bug 787463).

* We enabled a *limited* variant of ivshmem in RHEL-7.1 for the memnic
  use case (bug 1104063).  This variant does not support interrupts or
  migration.

* We worked upstream to improve ivshmem over the 2.5 and 2.6
  development cycles.  The device was split into ivshmem-plain (no
  interrupt capability) and ivshmem-doorbell (interrupt capability).
  For the rationale, see upstream commit 5400c02.  Device ivshmem has
  been deprecated since then.

* In RHEL-7.3, we're replacing our downstream ivshmem variant with
  ivshmem-plain, with migration disabled (bug 1333318).  Device
  ivshmem no longer exists.

What happened in the test described in comment #7: libvirt tries to use
-device ivshmem, which qemu-kvm-rhev-2.6.0-18.el7.x86_64 doesn't have.
The error message is expected.

What upstream libvirt should do:

* When asked to provide an ivshmem device without interrupt
  capability, try ivshmem-plain, and if that doesn't exist, fall back
  to legacy ivshmem.

* When asked to provide an ivshmem device with interrupt capability,
  try ivshmem-doorbell, and if that doesn't exist, fall back to legacy
  ivshmem.

The same should work fine downstream.  Of course, any attempt to
provide an ivshmem device with interrupt capability will fail there,
as it always has.
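The fallback rules above amount to a small selection routine. A minimal sketch in Python (hypothetical names; upstream libvirt implements this in C against its QEMU capability data):

```python
def pick_ivshmem_device(supported, want_interrupts):
    """Pick a shared-memory device name, preferring the modern split
    devices and falling back to legacy ivshmem.

    supported: set of device names the QEMU binary advertises
    want_interrupts: True if the guest needs doorbell/interrupt support
    """
    preferred = "ivshmem-doorbell" if want_interrupts else "ivshmem-plain"
    if preferred in supported:
        return preferred
    if "ivshmem" in supported:  # legacy, deprecated device
        return "ivshmem"
    raise RuntimeError("no usable ivshmem device in this QEMU binary")
```

On RHEL-7.3 qemu-kvm-rhev only ivshmem-plain exists, so a request with interrupt capability finds neither ivshmem-doorbell nor legacy ivshmem and fails, as noted above.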

If it turns out that replacing ivshmem with ivshmem-plain creates
serious problems, we can retain our downstream variant of ivshmem.
This requires a non-trivial forward-port of downstream patches, and
adds ongoing maintenance cost.

Comment 9 Pei Zhang 2016-09-07 06:01:40 UTC
KVM QE has verified ivshmem-plain at the qemu-kvm layer; see https://bugzilla.redhat.com/show_bug.cgi?id=1333318#c22 for the testing details.

Best Regards,
-Pei

Comment 19 Martin Kletzander 2016-11-03 13:41:53 UTC
Fixed upstream, mainly by v2.4.0-9-g06524fd52c74:

commit 06524fd52c74a4fc672e9eec2b5a13d540e7ee06
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Wed Aug 10 11:15:22 2016 +0200

    qemu: Support newer ivshmem device variants

Comment 22 Luyao Huang 2017-06-01 02:59:16 UTC
Verified this bug with libvirt-3.2.0-7.el7.x86_64 and qemu-kvm-rhev-2.9.0-6.el7.x86_64:

1. prepare guest with ivshmem-plain:

    <shmem name='my_shmem0'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
      <alias name='shmem0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </shmem>


2. check qemu command line:

...
-object memory-backend-file,id=shmmem-shmem0,mem-path=/dev/shm/my_shmem0,size=4194304,share=yes -device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0,bus=pci.0,addr=0x3...
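The size=4194304 in that command line is just the XML <size unit='M'>4</size> scaled to bytes (libvirt's 'M' is a mebibyte). A minimal sketch of the conversion, with a hypothetical helper name:

```python
# Map libvirt size units to byte multipliers; 'M'/'MiB' are powers of
# two, which is why <size unit='M'>4</size> becomes size=4194304.
UNIT_BYTES = {"b": 1, "K": 2**10, "KiB": 2**10, "M": 2**20,
              "MiB": 2**20, "G": 2**30, "GiB": 2**30}

def shmem_size_bytes(value, unit):
    """Return the byte count QEMU expects for memory-backend-file size=."""
    return value * UNIT_BYTES[unit]

print(shmem_size_bytes(4, "M"))   # 4194304
```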

3. check device in guest:

# lspci -vvv -s  00:03.0
00:03.0 RAM memory: Red Hat, Inc Inter-VM shared memory (rev 01)
	Subsystem: Red Hat, Inc QEMU Virtual Machine
	Physical Slot: 3
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Region 0: Memory at fc056000 (32-bit, non-prefetchable) [size=256]
	Region 2: Memory at fe400000 (64-bit, prefetchable) [size=4M]
	Kernel modules: virtio_pci

4. verify the ivshmem-plain device works:

In GUEST:

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:03.0/resource0 16 | od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
0000020

# dd if=/dev/urandom bs=4 count=4 | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:03.0/resource0 16
4+0 records in
4+0 records out
16 bytes (16 B) copied, 8.1741e-05 s, 196 kB/s
read: Success

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:03.0/resource0 16 | od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
0000020

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:03.0/resource2 128 |od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000200

# dd if=/dev/urandom of=data bs=1 count=128
128+0 records in
128+0 records out
128 bytes (128 B) copied, 0.000612402 s, 209 kB/s

# cat data | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:03.0/resource2 128

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:03.0/resource2 128 |od -t x2
0000000 baaf 85d0 1599 ffa8 998a 6450 6364 c942
0000020 ffc1 11cc 916b 3076 7783 0f35 8ae7 19d2
0000040 4aa8 9457 2c08 a6f3 ab7a 60a1 6bd7 cbb1
0000060 0ed7 db97 88e8 311e 0539 ac21 cbb2 35f1
0000100 daf9 63df 0c0c a2f7 9ec3 80df 1929 843d
0000120 a875 b5f9 27d2 e5e0 8607 938b 53c4 571f
0000140 897f 6224 cce3 b336 db94 6640 64f2 ac00
0000160 4407 f0cf 7ccc b5f0 e220 a7e8 da8d 27ea
0000200

5. Check shmem in host:

# ./ivshmem-7.1-test my_shmem0 128 | od -t x2
0000000 baaf 85d0 1599 ffa8 998a 6450 6364 c942
0000020 ffc1 11cc 916b 3076 7783 0f35 8ae7 19d2
0000040 4aa8 9457 2c08 a6f3 ab7a 60a1 6bd7 cbb1
0000060 0ed7 db97 88e8 311e 0539 ac21 cbb2 35f1
0000100 daf9 63df 0c0c a2f7 9ec3 80df 1929 843d
0000120 a875 b5f9 27d2 e5e0 8607 938b 53c4 571f
0000140 897f 6224 cce3 b336 db94 6640 64f2 ac00
0000160 4407 f0cf 7ccc b5f0 e220 a7e8 da8d 27ea
0000200
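Steps 4 and 5 agree because the guest's BAR2 and the host's /dev/shm/my_shmem0 are two mappings of the same memory-backend-file pages (share=yes). The effect can be sketched with two mmap views of one backing file; this uses a temporary file and a single process purely for illustration:

```python
import mmap
import os
import tempfile

SIZE = 128

# Create a backing file, standing in for /dev/shm/my_shmem0.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, SIZE)

guest_view = mmap.mmap(fd, SIZE)        # stands in for the guest's BAR2
host_fd = os.open(path, os.O_RDWR)
host_view = mmap.mmap(host_fd, SIZE)    # stands in for the host-side mapping

written = os.urandom(16)
guest_view[:16] = written               # "guest" writes into shared memory
seen = bytes(host_view[:16])            # "host" reads the same pages

guest_view.close()
host_view.close()
os.close(host_fd)
os.close(fd)
os.unlink(path)

assert seen == written                  # both views observed identical bytes
```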

6. attach another ivshmem device:

# cat /root/ivshmem.xml
    <shmem name='my_shmem1'>
      <model type='ivshmem-plain'/>
      <size unit='M'>16</size>
    </shmem>

# virsh attach-device r7 /root/ivshmem.xml
Device attached successfully

7. check guest xml:

    <shmem name='my_shmem1'>
      <model type='ivshmem-plain'/>
      <size unit='M'>16</size>
      <alias name='shmem1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </shmem>

8. recheck the ivshmem-plain device with the same steps listed in steps 1-5.

9. detach ivshmem device:

# virsh detach-device r7 /root/ivshmem.xml
Device detached successfully

10. recheck guest xml:

# virsh dumpxml r7

The shmem device is no longer present.

11. try to migrate guest:

# virsh managedsave r7
error: Failed to save domain r7 state
error: Requested operation is not valid: migration with shmem device is not supported

# virsh migrate r7 qemu+ssh://target/system
error: Requested operation is not valid: migration with shmem device is not supported
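The managedsave and migrate refusals above come from the same restriction: any configured shmem device blocks migration-like operations. A toy sketch of that check (hypothetical function, not libvirt's actual code):

```python
def check_migration_allowed(device_models):
    """Refuse migration-like operations (migrate, managedsave) when any
    shmem device is configured, mirroring the errors shown above."""
    shmem_models = {"ivshmem", "ivshmem-plain", "ivshmem-doorbell"}
    for model in device_models:
        if model in shmem_models:
            raise ValueError(
                "Requested operation is not valid: "
                "migration with shmem device is not supported")

check_migration_allowed(["virtio-net"])   # fine, no shmem device present
```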

12. test with an old QEMU (without ivshmem-plain support):

# virsh start r7
error: Failed to start domain r7
error: unsupported configuration: shmem model 'ivshmem-plain' is not supported by this QEMU binary

Comment 23 errata-xmlrpc 2017-08-01 17:09:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846


