Bug 1333318 - ivshmem-plain support in RHEL 7.3
Summary: ivshmem-plain support in RHEL 7.3
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Markus Armbruster
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On: 1289417
Blocks: 1333282 1347049
 
Reported: 2016-05-05 09:29 UTC by Miroslav Rezanina
Modified: 2016-11-07 21:07 UTC
CC: 14 users

Fixed In Version: qemu-kvm-rhev-2.6.0-18.el7
Doc Type: Enhancement
Doc Text:
Clone Of:
: 1347049 (view as bug list)
Environment:
Last Closed: 2016-11-07 21:07:20 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2673 0 normal SHIPPED_LIVE qemu-kvm-rhev bug fix and enhancement update 2016-11-08 01:06:13 UTC

Description Miroslav Rezanina 2016-05-05 09:29:26 UTC
There are three ivshmem devices in QEMU 2.6: ivshmem, ivshmem-plain, and ivshmem-doorbell.

As there is no longer customer demand for this device, we have to decide whether to keep supporting these devices; note that we enabled ivshmem in RHEL 7.2.

Comment 1 Andrew Jones 2016-05-05 10:09:20 UTC
Reassigning to virt-maint. I'm not sure who should own this, but I'll be first to say "not it".

Comment 17 Miroslav Rezanina 2016-08-02 15:22:00 UTC
Fix included in qemu-kvm-rhev-2.6.0-18.el7

Comment 19 Pei Zhang 2016-08-12 07:25:30 UTC
Hi Markus,
How should ivshmem-plain be tested? Could you give some suggestions?

Thank you,
Pei

Comment 20 Markus Armbruster 2016-08-15 07:46:10 UTC
The minimal test is running tests/ivshmem-test.  The easiest way to do
that is "make check".

The minimal test uses a simulated guest.  A test using a real guest
running RHEL and an application that uses ivshmem would be useful.
Unfortunately, I can't tell you what application to use, or how to set
it up.  I guess we should test with memnic, because that's what made
us add ivshmem to RHEL in the first place (bug 1104063).  Perhaps the
customer we added it for (NEC) is willing to help by verifying the new
stuff still works for them.

However, I think this needs to be done for the complete host stack,
i.e. libvirt over qemu-kvm.  If you agree, let's add a suitable note
to the libvirt buddy bug 1347049.

Comment 21 Pei Zhang 2016-09-05 11:28:53 UTC
While verifying this bug, I hit a new issue:
Bug 1373154 - Guest fails boot up with ivshmem-plain and virtio-pci device

Comment 22 Pei Zhang 2016-09-07 05:54:20 UTC
(In reply to Markus Armbruster from comment #20)
> The minimal test is running tests/ivshmem-test.  The easiest way to do
> that is "make check".

Thanks Markus for suggestions, it's helpful. 

Verification:

Summary: This bug was verified in two parts.
(1) Part 1 runs the upstream ivshmem-test: PASS.
(2) Part 2 uses the ivshmem-7.1-test program, which checks shared-memory contents between host<->guest and guest<->guest: PASS.

Versions:
Host
3.10.0-503.el7.x86_64
qemu-kvm-rhev-2.6.0-22.el7.src.rpm (Part 1 testing)
qemu-kvm-rhev-2.6.0-18.el7.x86_64 (Part 2 testing; as noted in Comment 21, this part will be re-verified with the same steps after that bug is fixed.)

Guest:
3.10.0-503.el7.x86_64
Part 1:
Steps:
1. Rebuild qemu-kvm-rhev-2.6.0-22.el7.src.rpm and run ivshmem-test. PASS.
# export QTEST_QEMU_BINARY=/root/rpmbuild/BUILD/qemu-2.6.0/x86_64-softmmu/qemu-system-x86_64

# echo $QTEST_QEMU_BINARY
/root/rpmbuild/BUILD/qemu-2.6.0/x86_64-softmmu/qemu-system-x86_64

# /root/rpmbuild/BUILD/qemu-2.6.0/tests/ivshmem-test
/x86_64/ivshmem/single: OK
/x86_64/ivshmem/memdev: OK

> The minimal test uses a simulated guest.  A test using a real guest
> running RHEL and an application that uses ivshmem would be useful.
> Unfortunately, I can't tell you what application to use, or how to set
> it up.  I guess we should test with memnic, because that's what made
> us add ivshmem to RHEL in the first place (bug 1104063).  Perhaps the
> customer we added it for (NEC) is willing to help by verifying the new
> stuff still works for them.

Part 2:

Note:
1. Testing code ivshmem-7.1-test comes from https://bugzilla.redhat.com/show_bug.cgi?id=1104063#c20
2. The testing steps are almost the same as in https://bugzilla.redhat.com/show_bug.cgi?id=1104063#c54.
3. Testing with 1 host and 2 guests (guest1 and guest2).
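Before the step-by-step transcript, the mechanism under test can be sketched in plain Python. With memory-backend-file,...,share, QEMU maps the backing file MAP_SHARED, so writes through any one mapping (host or a guest's BAR2) are visible through every other mapping of the same file. This is a minimal host-side simulation, not the real test program; the temporary file is a stand-in for /dev/shm/shmem0, and 4 KiB stands in for the real 4G region:

```python
import mmap
import os
import tempfile

# Create a small backing file, standing in for /dev/shm/shmem0.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

# Two independent MAP_SHARED mappings of the same file, analogous to
# the host view and a guest's BAR2 view of the ivshmem-plain region.
view_a = mmap.mmap(fd, 4096)
view_b = mmap.mmap(fd, 4096)

view_a[:4] = b"ping"            # one side writes...
assert view_b[:4] == b"ping"    # ...the other side sees it immediately

view_b[:4] = b"pong"            # and the reverse direction works too
final = bytes(view_a[:4])

view_a.close()
view_b.close()
os.close(fd)
os.unlink(path)
print("shared-mapping semantics OK")
```

The guest<->guest case in steps 10~18 follows from the same property: both QEMU processes map the one backing file, so neither guest needs to talk to the other directly.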

Steps:
1. Boot guest1 with ivshmem-plain
# /usr/libexec/qemu-kvm -name rhel7.3 \
-cpu IvyBridge,check -m 4G \
-smp 4,sockets=2,cores=2,threads=1 \
-netdev tap,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:54:00:5c:77:61 \
-spice port=5902,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on \
-monitor stdio \
-drive file=/home/pezhang/rhel7.3-2.qcow2,format=qcow2,if=none,id=drive-virtio-blk0,werror=stop,rerror=stop \
-device virtio-blk-pci,drive=drive-virtio-blk0,id=virtio-blk0 \
-usbdevice tablet \
-object memory-backend-file,id=shmmem-shmem0,size=4G,mem-path=/dev/shm/shmem0,share \
-device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0

2. In guest1: check ivshmem
# lspci 
[...]
00:05.0 RAM memory: Red Hat, Inc Inter-VM shared memory (rev 01)

# lspci -vvv -s 00:05.0
00:05.0 RAM memory: Red Hat, Inc Inter-VM shared memory (rev 01)
	Subsystem: Red Hat, Inc QEMU Virtual Machine
	Physical Slot: 5
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Region 0: Memory at fc054000 (32-bit, non-prefetchable) [size=256]
	Region 2: Memory at 200000000 (64-bit, prefetchable) [size=4G]
	Kernel modules: virtio_pci

3. In guest1: attempt to read bar0 registers
# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource0 16 | od -t x2
0000000

4. In guest1: attempt to write bar0 registers, and read again
# dd if=/dev/urandom bs=4 count=4 | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:05.0/resource0 16
read: Success

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource0 16 | od -t x2
0000000

5. In guest1: attempt to read bar2 registers; it reads as all zeros the first time.
# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource2 128 |od -t x2
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000200
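The all-zero first read in step 5 is expected: the backing file was just created and extended with ftruncate, and newly allocated file pages read back as zeros. A stand-alone check of that property (a hypothetical temporary file, not the real PCI resource):

```python
import mmap
import os
import tempfile

# A freshly created file extended with ftruncate is zero-filled,
# just like the newly created /dev/shm/shmem0 backing the BAR2 region.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 128)

m = mmap.mmap(fd, 128)
first_read = bytes(m[:128])

m.close()
os.close(fd)
os.unlink(path)
print("fresh region reads as zeros:", first_read == b"\x00" * 128)
```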

6. In guest1: attempt to write bar2 registers, and read again. Data matches.
# dd if=/dev/urandom of=data bs=1 count=128
128+0 records in
128+0 records out
128 bytes (128 B) copied, 0.000910017 s, 141 kB/s

# cat data | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:05.0/resource2 128

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource2 128 | od -t x2
0000000 7c57 13cb f6dd 024b c705 8afa e770 ad52
0000020 108b e33b d0f6 3788 8390 2729 6711 8466
0000040 8849 7741 f8fd ab05 51b1 c7bf e92b 08d4
0000060 046e 4f1c 77f4 1562 4c02 3da3 d6a6 566f
0000100 f160 d5d5 9efb e4ea 714e 4265 9c9a 854f
0000120 9099 b4e9 cbb6 ea17 bbcb 5e7d f217 c208
0000140 255a 5bfc 3a8b f2a9 3c5b ff14 a919 b7f9
0000160 834f 4a8d ee65 3932 b408 77d9 e248 beac
0000200

# cat data | od -t x2
0000000 7c57 13cb f6dd 024b c705 8afa e770 ad52
0000020 108b e33b d0f6 3788 8390 2729 6711 8466
0000040 8849 7741 f8fd ab05 51b1 c7bf e92b 08d4
0000060 046e 4f1c 77f4 1562 4c02 3da3 d6a6 566f
0000100 f160 d5d5 9efb e4ea 714e 4265 9c9a 854f
0000120 9099 b4e9 cbb6 ea17 bbcb 5e7d f217 c208
0000140 255a 5bfc 3a8b f2a9 3c5b ff14 a919 b7f9
0000160 834f 4a8d ee65 3932 b408 77d9 e248 beac
0000200

7. In host: attempt to read bar2 registers, data matches with step6.
# ./ivshmem-7.1-test /shmem0 128 | od -t x2
0000000 7c57 13cb f6dd 024b c705 8afa e770 ad52
0000020 108b e33b d0f6 3788 8390 2729 6711 8466
0000040 8849 7741 f8fd ab05 51b1 c7bf e92b 08d4
0000060 046e 4f1c 77f4 1562 4c02 3da3 d6a6 566f
0000100 f160 d5d5 9efb e4ea 714e 4265 9c9a 854f
0000120 9099 b4e9 cbb6 ea17 bbcb 5e7d f217 c208
0000140 255a 5bfc 3a8b f2a9 3c5b ff14 a919 b7f9
0000160 834f 4a8d ee65 3932 b408 77d9 e248 beac
0000200

8. In host: attempt to write bar2 registers to an offset, and read again.
# dd if=/dev/urandom of=data2 bs=1 count=64
64+0 records in
64+0 records out
64 bytes (64 B) copied, 0.000515079 s, 124 kB/s

# cat data2 | ./ivshmem-7.1-test -w -o 64 /shmem0 64 

# cat data2 | od -t x2
0000000 704a 08e1 61de 17b6 55fd 0621 2259 d23e
0000020 8f27 d615 03fd 1ebc 9888 22f3 f38c d40c
0000040 ef6c 8691 50b3 e0e8 f836 e3dd ce73 dc3b
0000060 23e8 a23b 288b d641 8bad b985 b8fd ca25
0000100

We should see the first 64 bytes of step 6's data and then the data from this step starting at offset 64 here.
# ./ivshmem-7.1-test /shmem0 128 | od -t x2
0000000 7c57 13cb f6dd 024b c705 8afa e770 ad52
0000020 108b e33b d0f6 3788 8390 2729 6711 8466
0000040 8849 7741 f8fd ab05 51b1 c7bf e92b 08d4
0000060 046e 4f1c 77f4 1562 4c02 3da3 d6a6 566f
0000100 704a 08e1 61de 17b6 55fd 0621 2259 d23e
0000120 8f27 d615 03fd 1ebc 9888 22f3 f38c d40c
0000140 ef6c 8691 50b3 e0e8 f836 e3dd ce73 dc3b
0000160 23e8 a23b 288b d641 8bad b985 b8fd ca25
0000200
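Step 8's expectation can be expressed as a self-contained sketch: given the 128-byte pattern from step 6, overwriting 64 bytes at offset 64 (what `ivshmem-7.1-test -w -o 64` does, per its usage in this log) must leave the first 64 bytes intact and replace only the tail. Random data stands in for the od dumps above:

```python
import os

initial = os.urandom(128)        # steps 6/7: the full 128-byte pattern
patch = os.urandom(64)           # step 8: 64 new bytes written at offset 64

region = bytearray(initial)
region[64:128] = patch           # offset write into the shared region

# What step 8's final read should show:
assert region[:64] == initial[:64]   # first 64 bytes preserved
assert region[64:] == patch          # bytes from offset 64 replaced
print("offset write preserves the first 64 bytes")
```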

9. In guest1: attempt to read bar2 registers, data matches step8.
# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource2 128 | od -t x2
0000000 7c57 13cb f6dd 024b c705 8afa e770 ad52
0000020 108b e33b d0f6 3788 8390 2729 6711 8466
0000040 8849 7741 f8fd ab05 51b1 c7bf e92b 08d4
0000060 046e 4f1c 77f4 1562 4c02 3da3 d6a6 566f
0000100 704a 08e1 61de 17b6 55fd 0621 2259 d23e
0000120 8f27 d615 03fd 1ebc 9888 22f3 f38c d40c
0000140 ef6c 8691 50b3 e0e8 f836 e3dd ce73 dc3b
0000160 23e8 a23b 288b d641 8bad b985 b8fd ca25
0000200

(The following repeats the pattern of steps 1~9 with a second guest.)
10. Boot guest2 with ivshmem-plain and same shared memory object 'shmem0'
# /usr/libexec/qemu-kvm -name rhel7.3-2 \
-cpu IvyBridge,check -m 4G \
-smp 4,sockets=2,cores=2,threads=1 \
-netdev tap,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:54:00:5c:77:62 \
-spice port=5903,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on \
-monitor stdio \
-drive file=/home/pezhang/rhel7.3.qcow2,format=qcow2,if=none,id=drive-virtio-blk0,werror=stop,rerror=stop \
-device virtio-blk-pci,drive=drive-virtio-blk0,id=virtio-blk0 \
-object memory-backend-file,id=shmmem-shmem0,size=4G,mem-path=/dev/shm/shmem0,share \
-device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0

11. In guest2: check ivshmem
# lspci
[...]
00:05.0 RAM memory: Red Hat, Inc Inter-VM shared memory (rev 01)

# lspci -vvv -s 00:05.0
00:05.0 RAM memory: Red Hat, Inc Inter-VM shared memory (rev 01)
	Subsystem: Red Hat, Inc QEMU Virtual Machine
	Physical Slot: 5
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Region 0: Memory at fc054000 (32-bit, non-prefetchable) [size=256]
	Region 2: Memory at 200000000 (64-bit, prefetchable) [size=4G]
	Kernel modules: virtio_pci

12. In guest2: attempt to read bar0 registers
# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource0 16 | od -t x2
0000000

13. In guest2: attempt to write bar0 registers, and read again
# dd if=/dev/urandom bs=4 count=4 | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:05.0/resource0 16
read: Success

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource0 16 | od -t x2
0000000

14. In guest2: attempt to read bar2 registers, data matches with step8 and step9.
# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource2 128 | od -t x2
0000000 7c57 13cb f6dd 024b c705 8afa e770 ad52
0000020 108b e33b d0f6 3788 8390 2729 6711 8466
0000040 8849 7741 f8fd ab05 51b1 c7bf e92b 08d4
0000060 046e 4f1c 77f4 1562 4c02 3da3 d6a6 566f
0000100 704a 08e1 61de 17b6 55fd 0621 2259 d23e
0000120 8f27 d615 03fd 1ebc 9888 22f3 f38c d40c
0000140 ef6c 8691 50b3 e0e8 f836 e3dd ce73 dc3b
0000160 23e8 a23b 288b d641 8bad b985 b8fd ca25
0000200

15. In guest2: attempt to write bar2 registers, and read again. data matches.
# dd if=/dev/urandom of=data bs=1 count=128
128+0 records in
128+0 records out
128 bytes (128 B) copied, 0.000887611 s, 144 kB/s

# cat data | ./ivshmem-7.1-test -w /sys/devices/pci0000\:00/0000\:00\:05.0/resource2 128

# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource2 128 | od -t x2
0000000 fc37 fcfd f8c5 450d 2e41 4f42 81a3 7ea8
0000020 78a0 93eb 1555 5bf6 9e7c cca7 bb16 00c7
0000040 4ef1 7346 aead aee0 3d43 4847 8255 81fc
0000060 9f3f e702 5418 8ebd 23fa adf5 c0de 7536
0000100 1434 0517 09e8 fdd0 f2fe 06aa c4ed 4d07
0000120 c76e 4bcb a1b9 72d3 b646 be33 bc55 789d
0000140 a8b4 7fd3 02a9 43b7 d1e0 b6b3 85f4 13a6
0000160 1e36 42a7 0375 229c e61a 5787 9faf 9c71
0000200

# cat data | od -t x2
0000000 fc37 fcfd f8c5 450d 2e41 4f42 81a3 7ea8
0000020 78a0 93eb 1555 5bf6 9e7c cca7 bb16 00c7
0000040 4ef1 7346 aead aee0 3d43 4847 8255 81fc
0000060 9f3f e702 5418 8ebd 23fa adf5 c0de 7536
0000100 1434 0517 09e8 fdd0 f2fe 06aa c4ed 4d07
0000120 c76e 4bcb a1b9 72d3 b646 be33 bc55 789d
0000140 a8b4 7fd3 02a9 43b7 d1e0 b6b3 85f4 13a6
0000160 1e36 42a7 0375 229c e61a 5787 9faf 9c71
0000200

16. In host: attempt to read bar2 registers, data matches with step15.
# ./ivshmem-7.1-test /shmem0 128 | od -t x2
0000000 fc37 fcfd f8c5 450d 2e41 4f42 81a3 7ea8
0000020 78a0 93eb 1555 5bf6 9e7c cca7 bb16 00c7
0000040 4ef1 7346 aead aee0 3d43 4847 8255 81fc
0000060 9f3f e702 5418 8ebd 23fa adf5 c0de 7536
0000100 1434 0517 09e8 fdd0 f2fe 06aa c4ed 4d07
0000120 c76e 4bcb a1b9 72d3 b646 be33 bc55 789d
0000140 a8b4 7fd3 02a9 43b7 d1e0 b6b3 85f4 13a6
0000160 1e36 42a7 0375 229c e61a 5787 9faf 9c71
0000200

17. In host: attempt to write bar2 registers to an offset, and read again
# dd if=/dev/urandom of=data2 bs=1 count=64
64+0 records in
64+0 records out
64 bytes (64 B) copied, 0.000377737 s, 169 kB/s

# cat data2 | ./ivshmem-7.1-test -w -o 64 /shmem0 64

# cat data2 | od -t x2
0000000 9c52 929c efe3 cca0 43f1 e597 33ee 4790
0000020 9ef5 8c54 026f 32f8 9b27 4a4a 5ada a84c
0000040 1d4f 6425 b04e 6a8a 0f81 1f53 0d2f faeb
0000060 31f8 508b 0138 e37f 746c 723c bb03 45cd
0000100

We should see the first 64 bytes of step16's data and then the data from this step starting at offset 64 here.
# ./ivshmem-7.1-test /shmem0 128 | od -t x2
0000000 fc37 fcfd f8c5 450d 2e41 4f42 81a3 7ea8
0000020 78a0 93eb 1555 5bf6 9e7c cca7 bb16 00c7
0000040 4ef1 7346 aead aee0 3d43 4847 8255 81fc
0000060 9f3f e702 5418 8ebd 23fa adf5 c0de 7536
0000100 9c52 929c efe3 cca0 43f1 e597 33ee 4790
0000120 9ef5 8c54 026f 32f8 9b27 4a4a 5ada a84c
0000140 1d4f 6425 b04e 6a8a 0f81 1f53 0d2f faeb
0000160 31f8 508b 0138 e37f 746c 723c bb03 45cd
0000200

18. In guest2: attempt to read bar2 registers. data matches with step17.
# ./ivshmem-7.1-test /sys/devices/pci0000\:00/0000\:00\:05.0/resource2 128 | od -t x2
0000000 fc37 fcfd f8c5 450d 2e41 4f42 81a3 7ea8
0000020 78a0 93eb 1555 5bf6 9e7c cca7 bb16 00c7
0000040 4ef1 7346 aead aee0 3d43 4847 8255 81fc
0000060 9f3f e702 5418 8ebd 23fa adf5 c0de 7536
0000100 9c52 929c efe3 cca0 43f1 e597 33ee 4790
0000120 9ef5 8c54 026f 32f8 9b27 4a4a 5ada a84c
0000140 1d4f 6425 b04e 6a8a 0f81 1f53 0d2f faeb
0000160 31f8 508b 0138 e37f 746c 723c bb03 45cd
0000200

> However, I think this needs to be done for the complete host stack,
> i.e. libvirt over qemu-kvm.  If you agree, let's add a suitable note
> to the libvirt buddy bug 1347049.

OK. I will add this comment to libvirt buddy bug 1347049.

Comment 23 Pei Zhang 2016-09-07 05:58:56 UTC
Hi Markus,

QE would still like to confirm with you: are the steps in Comment 22 correct, and are they sufficient to verify this bug?

Thank you,
-Pei

Comment 24 Markus Armbruster 2016-09-07 07:18:46 UTC
Looks good to me.

Let's include this test in a test plan.

Thanks!

Comment 26 Pei Zhang 2016-09-16 03:50:43 UTC
(In reply to Markus Armbruster from comment #24)
> Looks good to me.
> 
> Let's include this test in a test plan.
> 
OK, this test case has been added to the test plan:
RHEL7-68094 - [NFV][ivshmem-plain] Memory sharing testing between guest1, guest2 and host.

(In reply to Pei Zhang from comment #22)
[...]
> Versions:
> Host
> 3.10.0-503.el7.x86_64
> qemu-kvm-rhev-2.6.0-22.el7.src.rpm(Part1 testing)
> qemu-kvm-rhev-2.6.0-18.el7.x86_64 (Part2 testing, as Comment 21, Will
> re-verifying this part using same steps after this bug is fixed.)
> 

As 'Bug 1373154 - Guest fails boot up with ivshmem-plain and virtio-pci device' has been fixed, I re-tested this bug with the latest versions, qemu-kvm-rhev-2.6.0-25.el7.x86_64 and seabios-1.9.1-5.el7.x86_64; everything works well.

Comment 28 errata-xmlrpc 2016-11-07 21:07:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2673.html

