Bug 1447112
| Summary: | RHOS 11 DPDK vhost_sockets directory wrong | | |
| --- | --- | --- | --- |
| Product: | Red Hat OpenStack | Reporter: | Lon Hohberger <lhh> |
| Component: | puppet-tripleo | Assignee: | Karthik Sundaravel <ksundara> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | nlevinki <nlevinki> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 11.0 (Ocata) | CC: | aconole, ailan, amuller, apevec, astafeye, atelang, berrange, chrisw, ealcaniz, edannon, ekuris, fbaudin, fleitner, jjoyce, jschluet, ksundara, lhh, lvrabec, mbabushk, mburns, mgrepl, molasaga, mprivozn, nyechiel, oblaut, rhallise, rhos-maint, sclewis, skramaja, slinaber, srevivo, tvignaud, twilson, vchundur, yrachman |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | 11.0 (Ocata) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1431556 | | |
| : | 1496700 | Environment: | |
| Last Closed: | 2017-12-11 12:10:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Lon Hohberger
2017-05-01 19:02:48 UTC
Basically, a fix was proposed, but it was never finalized upstream. Dropping priority; this is solved with an updated first_boot.yaml in bug 1431556.

The upstream issue is fixed, and for OSP10 and OSP11 documentation is provided with the first-boot changes.

Hi, we are having an issue with one customer using OVS-DPDK on version OSP10Z4:

```
ealcaniz@ealcaniz systemd]$ grep vhue49fc191-70 *
systemctl_status_--all:Sep 25 11:37:15 cpt1-dpdk-totp.nfv.cselt.it ovs-vsctl[16983]: ovs|00001|db_ctl_base|ERR|no row "/vhue49fc191-70" in table Interface
systemctl_status_--all:Sep 25 11:37:15 cpt1-dpdk-totp.nfv.cselt.it ovs-vsctl[17001]: ovs|00001|db_ctl_base|ERR|no row "/vhue49fc191-70" in table Interface
systemctl_status_--all:Sep 25 11:37:15 cpt1-dpdk-totp.nfv.cselt.it libvirtd[2789]: 2017-09-25 09:37:15.824+0000: 2789: error : qemuProcessReportLogError:1862 : internal error: qemu unexpectedly closed the monitor: 2017-09-25T09:37:15.813841Z qemu-kvm: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhue49fc191-70: Failed to connect socket: Permission denied
systemctl_status_--all:Sep 25 11:37:15 cpt1-dpdk-totp.nfv.cselt.it ovs-vsctl[16972]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=120 -- --if-exists del-port vhue49fc191-70 -- add-port br-int vhue49fc191-70 -- set Interface vhue49fc191-70 external-ids:iface-id=e49fc191-70bd-4edf-b096-cef609c8b7d4 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:90:c4:62 external-ids:vm-uuid=8f1be557-ea9a-4d5e-b31c-93e06666ed08 type=dpdkvhostuser
systemctl_status_--all:Sep 25 11:37:15 cpt1-dpdk-totp.nfv.cselt.it ovs-vsctl[16975]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=120 -- set interface vhue49fc191-70 mtu_request=9000
systemctl_status_--all:Sep 25 11:37:16 cpt1-dpdk-totp.nfv.cselt.it ovs-vsctl[17031]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=120 -- --if-exists del-port br-int vhue49fc191-70
systemctl_status_--all:Sep 25 11:37:16 cpt1-dpdk-totp.nfv.cselt.it ovs-vsctl[17033]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=120 -- --if-exists del-port br-int vhue49fc191-70
systemctl_status_--all:Sep 25 11:37:15 cpt1-dpdk-totp.nfv.cselt.it ovs-vswitchd[1577]: VHOST_CONFIG: bind to /var/run/openvswitch/vhue49fc191-70
```

Instance log:

```
2017-09-25 09:37:15.660+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.3 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2017-08-22-08:54:01, x86-039.build.eng.bos.redhat.com), qemu version: 2.9.0(qemu-kvm-rhev-2.9.0-10.el7), hostname: cpt1-dpdk-totp.nfv.cselt.it
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=instance-000010a4,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-instance-000010a4/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client,ss=on,hypervisor=on,tsc_adjust=on,pdpe1gb=on,mpx=off,xsavec=off,xgetbv1=off -m 4096 -realtime mlock=off -smp 2,sockets=1,cores=1,threads=2 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/3-instance-000010a4,share=yes,size=4294967296,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 -uuid 8f1be557-ea9a-4d5e-b31c-93e06666ed08 -smbios 'type=1,manufacturer=Red Hat,product=OpenStack Compute,version=14.0.7-11.el7ost,serial=38873f97-1eae-4dc9-a74a-71b078bb59c9,uuid=8f1be557-ea9a-4d5e-b31c-93e06666ed08,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-instance-000010a4/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/8f1be557-ea9a-4d5e-b31c-93e06666ed08/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev socket,id=charnet0,path=/var/run/openvswitch/vhue49fc191-70 -netdev vhost-user,chardev=charnet0,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:90:c4:62,bus=pci.0,addr=0x3 -add-fd set=0,fd=28 -chardev file,id=charserial0,path=/dev/fdset/0,append=on -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.20.0.181:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
2017-09-25T09:37:15.813841Z qemu-kvm: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhue49fc191-70: Failed to connect socket: Permission denied
2017-09-25 09:37:15.824+0000: shutting down, reason=failed
```

**nova-compute.log**

```
2017-09-25 11:37:16.038 5296 ERROR nova.virt.libvirt.driver [req-dadd0f0e-93b9-4700-be40-d44dd62dfbfe df9893ff6fe042dd955337ea04279f0d dfd9ac55feec4b7795208bfa9415955d - - -] [instance: 8f1be557-ea9a-4d5e-b31c-93e06666ed08] Failed to start libvirt guest
2017-09-25 11:37:16.050 5296 INFO os_vif [req-dadd0f0e-93b9-4700-be40-d44dd62dfbfe df9893ff6fe042dd955337ea04279f0d dfd9ac55feec4b7795208bfa9415955d - - -] Successfully unplugged vif VIFVHostUser(active=False,address=fa:16:3e:90:c4:62,has_traffic_filtering=False,id=e49fc191-70bd-4edf-b096-cef609c8b7d4,mode='client',network=Network(3d557637-a9e5-40ab-849f-2e4dbd842a51),path='/var/run/openvswitch/vhue49fc191-70',plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name=<?>)
2017-09-25 11:37:16.058 5296 INFO nova.virt.libvirt.driver [req-dadd0f0e-93b9-4700-be40-d44dd62dfbfe df9893ff6fe042dd955337ea04279f0d dfd9ac55feec4b7795208bfa9415955d - - -] [instance: 8f1be557-ea9a-4d5e-b31c-93e06666ed08] Deleting instance files /var/lib/nova/instances/8f1be557-ea9a-4d5e-b31c-93e06666ed08_del
2017-09-25 11:37:16.059 5296 INFO nova.virt.libvirt.driver [req-dadd0f0e-93b9-4700-be40-d44dd62dfbfe df9893ff6fe042dd955337ea04279f0d dfd9ac55feec4b7795208bfa9415955d - - -] [instance: 8f1be557-ea9a-4d5e-b31c-93e06666ed08] Deletion of /var/lib/nova/instances/8f1be557-ea9a-4d5e-b31c-93e06666ed08_del complete
2017-09-25 11:37:16.206 5296 ERROR nova.compute.manager [req-dadd0f0e-93b9-4700-be40-d44dd62dfbfe df9893ff6fe042dd955337ea04279f0d dfd9ac55feec4b7795208bfa9415955d - - -] [instance: 8f1be557-ea9a-4d5e-b31c-93e06666ed08] Instance failed to spawn
.....
2017-09-25 11:37:16.206 5296 ERROR nova.compute.manager [instance: 8f1be557-ea9a-4d5e-b31c-93e06666ed08]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
2017-09-25 11:37:16.206 5296 ERROR nova.compute.manager [instance: 8f1be557-ea9a-4d5e-b31c-93e06666ed08]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2017-09-25 11:37:16.206 5296 ERROR nova.compute.manager [instance: 8f1be557-ea9a-4d5e-b31c-93e06666ed08] libvirtError: internal error: qemu unexpectedly closed the monitor: 2017-09-25T09:37:15.813841Z qemu-kvm: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhue49fc191-70: Failed to connect socket: Permission denied
2017-09-25 11:37:16.206 5296 ERROR nova.compute.manager [instance: 8f1be557-ea9a-4d5e-b31c-93e06666ed08]
```

1. Please check whether the file /usr/lib/systemd/system/ovs-vswitchd.service on the compute node has:

   ```
   RuntimeDirectoryMode=0775
   Group=qemu
   UMask=0002
   ```

2. Check whether the file /usr/share/openvswitch/scripts/ovs-ctl on the compute node has, in the function do_start_forwarding():

   ```
   umask 0002 && start_daemon "$OVS_VSWITCHD_PRIORITY" "$OVS_VSWITCHD_WRAPPER" "$@" ||
   ```

3. Please check whether `ovs-vsctl show` throws any errors on the compute node with DPDK.

4. Please add the SOS reports.

Please note that this BZ is reported against OSP11. OSP11 and later work with mode=server, while OSP10 works with mode=client, and the vhostuser socket directories differ in the two cases, so I think it is appropriate to raise a new BZ.

Moved to bz https://bugzilla.redhat.com/show_bug.cgi?id=1496700 for OSP10.
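The file checks above, plus a look at the socket directory permissions implied by the "Permission denied" error, can be sketched as a small POSIX shell script. This is only an illustration, not an official tool: `check_unit` and `check_sock_dir` are hypothetical helper names, and the default paths are the RHEL 7 locations mentioned in this report, which may differ on other deployments.

```shell
#!/bin/sh
# Sketch of the debugging checks above. Paths default to the RHEL 7 /
# OSP compute-node locations named in this bug; override via arguments.

UNIT=${1:-/usr/lib/systemd/system/ovs-vswitchd.service}
SOCK_DIR=${2:-/var/run/openvswitch}

# qemu (running as the qemu user) can only connect to the vhost-user
# socket if ovs-vswitchd creates it group-accessible, hence these three
# settings in the systemd unit.
check_unit() {
    for opt in 'RuntimeDirectoryMode=0775' 'Group=qemu' 'UMask=0002'; do
        grep -q "^$opt" "$1" || { echo "missing: $opt"; return 1; }
    done
    echo "unit ok"
}

# Print the octal mode and owning group of the socket directory; the
# mode should be 775 and the group one that qemu is a member of.
check_sock_dir() {
    stat -c '%a %G' "$1"
}

if [ -f "$UNIT" ]; then check_unit "$UNIT"; fi
if [ -d "$SOCK_DIR" ]; then check_sock_dir "$SOCK_DIR"; fi
```

Run it on the affected compute node; a "missing:" line points at the setting to fix before restarting ovs-vswitchd and retrying the instance boot.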