Bug 1569982 - Guest in Server mode hangs on start in an OVS-dpdk VXLAN tunnelling topology with SELinux=Permissive
Summary: Guest in Server mode hangs on start in an OVS-dpdk VXLAN tunnelling topology with SELinux=Permissive
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Aaron Conole
QA Contact: Jean-Tsung Hsiao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-20 12:15 UTC by Jean-Tsung Hsiao
Modified: 2018-04-20 22:17 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-20 22:17:11 UTC
Target Upstream Version:


Attachments
/var/log/libvirt/qemu/mq-vhu-tunnel-server.log (11.32 KB, text/plain)
2018-04-20 15:05 UTC, Jean-Tsung Hsiao

Description Jean-Tsung Hsiao 2018-04-20 12:15:34 UTC
Description of problem: Guest in Server mode hangs on start in an OVS-dpdk VXLAN tunnelling topology with SELinux=Permissive


Version-Release number of selected component (if applicable):

[root@netqe5 vxlan-tunnel]# rpm -q openvswitch
openvswitch-2.9.0-15.el7fdp.x86_64
[root@netqe5 vxlan-tunnel]# uname -a
Linux netqe5.knqe.lab.eng.bos.redhat.com 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@netqe5 vxlan-tunnel]# getenforce
Permissive

[root@netqe5 vxlan-tunnel]# ps -elf | grep qemu
6 S qemu       6284      1  0  80   0 - 79069 skb_wa Apr19 ?        00:00:00 /usr/libexec/qemu-kvm -name guest=mq-vhu-tunnel-server,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-mq-vhu-tunnel-server/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX -m 8192 -realtime mlock=off -smp 5,sockets=5,cores=1,threads=1 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/4-mq-vhu-tunnel-server,share=yes,size=8589934592 -numa node,nodeid=0,cpus=0,memdev=ram-node0 -uuid 599be106-543c-4c3a-b65f-4e11f5bd8d79 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-4-mq-vhu-tunnel-server/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/home/images/mq_vhu_tunnel_server.img,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev socket,id=charnet0,path=/tmp/vhost0,server -netdev vhost-user,chardev=charnet0,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:32:0f:71,bus=pci.0,addr=0x3 -netdev tap,fd=26,id=hostnet1,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:77:96:95,bus=pci.0,addr=0xb -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-4-mq-vhu-tunnel-server/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on

How reproducible: Reproducible


Steps to Reproduce:
1. Configure and start an OVS-dpdk bridge in dpdkvhostuserclient mode (see the configuration sketch after these steps)
2. Configure the guest in Server mode
3. virsh start <guest>
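
For reference, a minimal sketch of steps 1 and 2, assuming the /tmp/vhost0 socket path visible in the qemu command line above; the bridge and interface names are illustrative, not taken from this report:

# Step 1: OVS side. A dpdkvhostuserclient port makes OVS connect to
# the socket that QEMU creates and listens on (guest in Server mode).
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost0

# Step 2: libvirt side. mode='server' makes QEMU create the socket:
#   <interface type='vhostuser'>
#     <source type='unix' path='/tmp/vhost0' mode='server'/>
#     <model type='virtio'/>
#   </interface>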

Actual results: The guest hangs in the paused state


Expected results: The guest should be in the running (active) state


Additional info:

Comment 2 Aaron Conole 2018-04-20 13:46:26 UTC
Why do you think this is an openvswitch issue?  Can you provide the logs from the qemu process and the ovs-vswitchd process?

Comment 3 Jean-Tsung Hsiao 2018-04-20 15:05:22 UTC
Created attachment 1424540 [details]
/var/log/libvirt/qemu/mq-vhu-tunnel-server.log

Comment 4 Jean-Tsung Hsiao 2018-04-20 15:11:19 UTC
Well, in a non-tunnelling environment, dpdkvhostuserclient mode has been working fine. This issue only happens in an OVS-dpdk tunnelling topology. That's why I started with OVS.

Nothing from ovs-vswitchd.log. Please see the qemu log attached in Comment #3 above.
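
For context, the tunnelling side of such a topology is typically built with a VXLAN port on the OVS-dpdk bridge, along these lines (bridge name and remote IP are illustrative, not taken from this report):

ovs-vsctl add-port ovsbr0 vxlan0 -- set Interface vxlan0 \
    type=vxlan options:remote_ip=172.31.1.2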

Comment 5 Aaron Conole 2018-04-20 18:09:51 UTC
"Nothing from ovs-vswitch.log" - what does this mean?

Comment 6 Jean-Tsung Hsiao 2018-04-20 20:34:08 UTC
(In reply to Aaron Conole from comment #5)
> "Nothing from ovs-vswitch.log" - what does this mean?

Sometimes these INFO messages show up:

2018-04-20T19:15:39.235Z|00111|netdev_linux|INFO|ioctl(SIOCGIFINDEX) on vxlan_sys_4789 device failed: No such device
2018-04-20T19:15:39.265Z|00112|netdev_linux|INFO|ioctl(SIOCGIFINDEX) on vxlan_sys_4789 device failed: No such device
2018-04-20T19:15:41.276Z|00113|netdev_linux|INFO|ioctl(SIOCGIFINDEX) on vxlan_sys_4789 device failed: No such device
2018-04-20T19:15:41.280Z|00114|netdev_linux|INFO|ioctl(SIOCGIFINDEX) on vxlan_sys_4789 device failed: No such device
2018-04-20T19:15:43.288Z|00115|netdev_linux|INFO|ioctl(SIOCGIFINDEX) on vxlan_sys_4789 device failed: No such device
2018-04-20T19:15:43.292Z|00116|netdev_linux|INFO|ioctl(SIOCGIFINDEX) on vxlan_sys_4789 device failed: No such device
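
These ioctl failures mean the kernel has no network device named vxlan_sys_4789 at that moment. A generic way to check whether the device exists on the host (not taken from this report):

ip link show vxlan_sys_4789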

Comment 7 Jean-Tsung Hsiao 2018-04-20 22:17:11 UTC
NOT A BUG
A wrong option key was used in the port configuration:
        Port "vhost0"
            Interface "vhost0"
                type: dpdkvhostuserclient
                options: {server-path="/tmp/vhost0"}
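
For reference, the option key for dpdkvhostuserclient interfaces is vhost-server-path, not server-path; with the same socket path, the corrected setting would be:

ovs-vsctl set Interface vhost0 options:vhost-server-path=/tmp/vhost0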

NOTE: The daemon log did not show a warning in this case.

