Bug 1464132 - Booting/shutting down VM with vhost-user and IOMMU when backend does not support IOMMU will cause qemu error
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Maxime Coquelin
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks: 1473046
 
Reported: 2017-06-22 13:36 UTC by Pei Zhang
Modified: 2018-01-15 09:35 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-15 09:35:49 UTC
Target Upstream Version:
Embargoed:



Description Pei Zhang 2017-06-22 13:36:58 UTC
Description of problem:
When the guest is booted with vhost-user and vIOMMU but the backend does not support IOMMU, the qemu terminal repeatedly prints this error:
"qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend "

Rebooting the VM then causes a qemu segmentation fault.
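
For reference, the feature mask in the error decodes to a single virtio feature bit: 8589934592 is exactly 1 << 33, i.e. virtio feature bit 33 (VIRTIO_F_IOMMU_PLATFORM), which the vhost-user backend fails to offer. A quick shell check:

# echo $((1 << 33))
8589934592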


Version-Release number of selected component (if applicable):
3.10.0-685.el7.x86_64
qemu-kvm-rhev-2.9.0-12.el7.x86_64
libvirt-3.2.0-14.el7.x86_64
openvswitch-2.7.0-8.git20170530.el7fdb.x86_64
dpdk-17.05-2.el7fdb.x86_64


How reproducible:
100%


Steps to Reproduce:
1. Start ovs
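
(The exact ovs setup is not recorded in this report; a minimal sketch, assuming an OVS-DPDK userspace bridge with vhost-user client ports whose socket paths match the -chardev paths in step 2 — the bridge/port names "ovsbr0", "vhost-user0", and "vhost-user1" are illustrative:)

# ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
# ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser0.sock
# ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser1.sock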

2. Boot the VM with vhost-user and vIOMMU; the qemu terminal repeatedly prints the error shown below.
/usr/libexec/qemu-kvm \
-name guest=rhel7.4_nonrt \
-machine q35,kernel-irqchip=split \
-device intel-iommu,device-iotlb=on,intremap \
-cpu host \
-m 8192 \
-smp 6,sockets=1,cores=6,threads=1 \
-device pcie-root-port,id=root.1,slot=1 \
-device pcie-root-port,id=root.2,slot=2 \
-device pcie-root-port,id=root.3,slot=3 \
-object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages,share=yes,size=8589934592,host-nodes=1,policy=bind \
-numa node,nodeid=0,cpus=0-5,memdev=ram-node0 \
-drive file=/home/images_nfv-virt-rt-kvm/rhel7.4_nonrt.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=none,aio=threads \
-device virtio-blk-pci,scsi=off,bus=root.1,drive=drive-virtio-disk0,id=virtio-disk0 \
-chardev socket,id=charnet1,path=/tmp/vhostuser0.sock,server \
-netdev vhost-user,chardev=charnet1,id=hostnet1 \
-device virtio-net-pci,mq=on,netdev=hostnet1,id=net1,mac=88:66:da:5f:dd:12,bus=root.2,iommu_platform=on,ats=on \
-chardev socket,id=charnet2,path=/tmp/vhostuser1.sock,server \
-netdev vhost-user,chardev=charnet2,id=hostnet2 \
-device virtio-net-pci,mq=on,netdev=hostnet2,id=net2,mac=88:66:da:5f:dd:13,bus=root.3,iommu_platform=on,ats=on \
-monitor stdio \
-vnc :2 \
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=88:66:da:5f:dd:11

qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend
...

3. Reboot the VM; qemu core dumps:
...
vhost lacks feature mask 8589934592 for backend
qemu-kvm: failed to init vhost_net for queue 0
Segmentation fault
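
(To get a usable backtrace from the crash, one option is to run qemu-kvm under gdb with the same command line as step 2; a sketch:)

# gdb --args /usr/libexec/qemu-kvm -name guest=rhel7.4_nonrt ... (same options as step 2)
(gdb) run
... reboot the VM and wait for the SIGSEGV ...
(gdb) bt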



Actual results:
Booting the VM causes qemu to repeatedly print the error above, and rebooting the VM causes a qemu core dump.

Expected results:
When the backend does not support IOMMU, qemu should print a clear, friendly warning message, and it should not core dump when the VM is rebooted.


Additional info:
1. This issue was found while verifying bug [1].
[1]Bug 1451862 - IOMMU support in QEMU for Vhost-user backend 

2. As dpdk does not yet support IOMMU, the 7.5 flag is set.

Comment 1 Maxime Coquelin 2017-11-16 14:00:48 UTC
Hi Pei,

Could you please retry with dpdk v17.11-rc4 tag?

Testpmd's cmdline requires a new Vhost PMD parameter (iommu-support):
--vdev 'net_vhost0,iface=/tmp/vhost-user1,iommu-support=1'

Thanks,
Maxime

Comment 2 Pei Zhang 2017-11-20 09:59:38 UTC
(In reply to Maxime Coquelin from comment #1)
> Hi Pei,
> 
> Could you please retry with dpdk v17.11-rc4 tag?
> 
> Testpmd's cmdline requires a new Vhost PMD parameter (iommu-support):
> --vdev 'net_vhost0,iface=/tmp/vhost-user1,iommu-support=1'

Hi Maxime,

With dpdk-17.11, the qemu core dump issue is gone. Thanks.

Versions:
dpdk-17.11.tar.xz
qemu-kvm-rhev-2.10.0-6.el7.x86_64
3.10.0-784.el7.x86_64

testpmd command line:
# /root/dpdk-17.11/x86_64-native-linuxapp-gcc/build/app/test-pmd/testpmd \
-l 1,3,5 --socket-mem=1024,1024 -n 4 \
-d /root/dpdk-17.11/x86_64-native-linuxapp-gcc/lib/librte_pmd_vhost.so.2.1 \
--vdev 'net_vhost0,iface=/tmp/vhost-user1,iommu-support=1' -- \
--portmask=3 --disable-hw-vlan -i --rxq=1 --txq=1 \
--nb-cores=2 --forward-mode=io


Best Regards,
Pei



