Red Hat Bugzilla – Bug 1295374
ovs-dpdk: neutron-openvswitch-agent fails to start when the VXLAN NIC is bound to a DPDK-compatible driver
Last modified: 2017-03-19 02:47:13 EDT
Created attachment 1111411 [details]
log & steps to reproduce

Description of problem:
On an OSP 8 setup with 1 controller & 1 compute, I set up ovs-dpdk on the compute node. The setup was deployed with packstack to work with a VXLAN tunnel. During the configuration I bound the tunnel interface to the "DPDK-compatible driver". After that, neutron-openvswitch-agent failed to start. Error in log:

2016-01-04 10:29:06.706 6785 INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 7.0.1
2016-01-04 10:29:06.717 6785 WARNING oslo_config.cfg [-] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".
2016-01-04 10:29:06.749 6785 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Tunneling can't be enabled with invalid local_ip '2.2.2.2'. IP couldn't be found on this host's interfaces.

Version-Release number of selected component (if applicable):
# rpm -qa |grep neutron
openstack-neutron-common-7.0.1-2.el7ost.noarch
openstack-neutron-7.0.1-2.el7ost.noarch
python-neutronclient-3.1.0-1.el7ost.noarch
python-neutron-7.0.1-2.el7ost.noarch
openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch

[root@puma10 ~]# rpm -qa |grep dpdk
openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
dpdk-2.1.0-5.el7.x86_64
dpdk-tools-2.1.0-5.el7.x86_64

[root@puma10 ~]# rpm -qa |grep openvswitch
openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
python-openvswitch-2.4.0-1.el7.noarch
openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch

[root@puma10 ~]# rpm -qa |grep packstack
openstack-packstack-puppet-7.0.0-0.8.dev1661.gaf13b7e.el7ost.noarch
openstack-packstack-7.0.0-0.8.dev1661.gaf13b7e.el7ost.noarch

How reproducible:
always

Steps to Reproduce:
1. See the attached file with all steps and logs
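The error above can be understood without neutron: the agent only looks for local_ip among kernel-visible interface addresses, and an address that lived on the NIC now bound to a DPDK driver is no longer visible there. A minimal illustrative sketch of that kind of check (not neutron's actual code; the sample `ip -o -4 addr` output and interface names are made up, only 2.2.2.2 comes from this report):

```shell
#!/bin/sh
# Illustrative: once the NIC is bound to a DPDK driver it disappears from
# the kernel's interface list, so its old IP can no longer be found.
# Sample `ip -o -4 addr` output; on a real host pipe the command itself.
sample='1: lo    inet 127.0.0.1/8 scope host lo
2: em1    inet 10.35.160.193/24 brd 10.35.160.255 scope global em1'

local_ip="2.2.2.2"
# Field 4 of `ip -o -4 addr` is the CIDR address; strip the prefix length.
if printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1 | grep -qx "$local_ip"; then
    echo "local_ip $local_ip found on a kernel interface"
else
    echo "local_ip $local_ip not found: tunneling cannot be enabled"
fi
```

With the sample above the check fails, which is exactly the situation the agent reports.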
Created attachment 1111412 [details]
setup
Furthermore, I booted 2 VMs; they didn't get an IP address, and even when I set static IPs on those instances there is no connectivity between them.
Eran: I don't think we have ever tried (or at least successfully reported) tunneling with neutron + ovs + dpdk. Can you try a setup using vlans and see if that fixes the issue? That way we can at least narrow down the problem.
Terry, in a VLAN environment I succeeded in booting 2 instances with DPDK ports and full connectivity.
I deployed a new OVS-DPDK setup. I manually added an IP address to br-tun after I bound the port to DPDK, and all services & agents are active. The issue is that there is no connectivity between the compute node and the controller. I boot a VM but it does not get an IP address.
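The manual step described here, as commands (a sketch; the 2.2.2.2/24 address comes from the original report, and it assumes br-tun is the bridge carrying the tunnel endpoint):

```shell
# Give br-tun's internal port the tunnel endpoint IP so the kernel can
# route/ARP on behalf of OVS userspace tunneling.
ip addr add 2.2.2.2/24 dev br-tun
ip link set br-tun up
# Sanity check: the peer endpoint should now route via br-tun.
ip route get 2.2.2.3
```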
Created attachment 1131247 [details]
vxlan
Hey, Eran. Yeah, that seems to be the same issue I am having on my test environment. According to http://openvswitch.org/support/config-cookbooks/userspace-tunneling/ userspace tunneling requires a bridge with an IP (br-tun in our case) so that OVS can use kernel routing/ARP. When I look at the ARP cache with `ovs-appctl tnl/arp/show` I do not see entries for the tunnel endpoints. The instructions mention setting them manually if they aren't there, but tnl/arp/set isn't available in OVS 2.4.0. I'm pretty much at the limit of my knowledge here (to the point where any progress I'd make on my own would be *very slow*). We really need someone from the OVS team to log into either your servers or mine to verify there isn't something we've missed, and to let us know whether this is a feature we should be able to get to work with OVS 2.4.0.
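For reference, the cookbook's cache inspection and the manual-seeding command that 2.4.0 lacks (bridge name from this setup; the tnl/arp/set syntax is from newer OVS releases and the IP/MAC pair here is purely illustrative):

```shell
# Inspect the tunnel-endpoint ARP cache OVS keeps for userspace tunnels.
ovs-appctl tnl/arp/show
# On newer OVS (>= 2.5) an entry can be seeded by hand, which 2.4.0
# cannot do:  ovs-appctl tnl/arp/set BRIDGE IP MAC
ovs-appctl tnl/arp/set br-tun 2.2.2.3 00:11:22:33:44:55
```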
Hi, Terry. Can I have a look at your system where this issue happens? Even if you can't use tnl/arp/set, OVS should send an ARP request and snoop on ARP responses. That might cause the first and maybe some following packets to be lost, but I suppose the ARP response should arrive before a DHCP retry is done. By the way, can you help me understand the setup better? Is br-tun the underlay bridge with no tunnels on it? In that case, is its datapath_type set to netdev, or is it using the kernel datapath? Tunnels with DPDK will only work if the underlay bridge also uses the netdev datapath (either using DPDK or tap/packet). Cascardo.
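The datapath_type requirement described here can be checked and set like this (bridge name from this setup):

```shell
# Userspace (DPDK) tunneling requires the underlay bridge to use the
# netdev datapath as well, not the kernel one.
ovs-vsctl get Bridge br-tun datapath_type
ovs-vsctl set Bridge br-tun datapath_type=netdev
```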
Hi, Terry. Sorry it took me some time to get back with some debugging.

After trying to replicate the same thing, I noticed similar outputs on my system with the same openvswitch version when using ofproto/trace to trace output to the tunnel. So, whenever we try to push packets out to the tunnel using the netdev datapath type, the first packet will be lost, because we try to send the ARP request; I have mentioned that already. The second time, even when using ofproto/trace, the output will show more details of getting out to the tunnels, because it finds the ARP entry in the cache. That, however, does not happen on your system.

When trying to find out why, I noticed that one of the compute nodes did not have the br-data0 bridge's datapath_type correctly set to netdev. I fixed that. However, I still can't see any packets coming in on br-data0 at compute2 when I ping it from compute1. To clarify, I am not using the tunnels; I am just trying to communicate from compute1 to compute2 using the DPDK port attached to the br-data0 bridge on both sides. That is, ping 192.168.2.3 doesn't work. I see the ARP request go out on compute1, but don't see it come in on compute2.

That begs the question: how are those two nodes connected? Specifically, how are the two DPDK ports on those nodes connected? Using 'ovs-vsctl get interface dpdk0 status', it looks like both nodes have DPDK working fine after I set the datapath_type of br-data0 on compute2 to netdev.

One other way I can try to debug this is not using DPDK for those PCI ports, see if they are connected fine, and find out if there is any problem with DPDK, then bring in the right guys for the problem. But that could also mean the other setup has similar problems, or a different problem we need to look at. For now, can you tell me how those two ports are connected, and check that everything is working as it should be?

Thanks. Cascardo.
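The tracing runs mentioned above look roughly like this (the flow fields and port number are illustrative; 192.168.2.3 is the peer address from the comment):

```shell
# Trace how an ARP request for the peer would be handled by br-data0.
# Run it twice: the first pass triggers OVS's own ARP request (and the
# packet is dropped), the second should show the resolved output path
# once the ARP cache has been populated by the snooped reply.
ovs-appctl ofproto/trace br-data0 'in_port=1,dl_type=0x0806,arp_tpa=192.168.2.3'
```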
Sorry, I missed the updated comment somehow. I tested the connectivity before switching over to DPDK. The interfaces are plugged directly into a switch. Without DPDK, assigning an IP on each interface directly, they communicated fine. Add DPDK, no communication. Feel free to log into the system and unbind interfaces/add IPs for testing. I am not using the machines for anything else.
Panu, Aaron, this smells like a DPDK issue. I will see if I can run some tests with testpmd and add the results. For now, I am copying you FYI. Cascardo.
FYI, yesterday I opened this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1325984 It looks like a regression. As you can see in comment#5 there, I succeeded in working in a VLAN environment.
Hi, Terry. I noticed your system is not using VFIO for the devices, but UIO. When loading testpmd, the DPDK PMD driver for ixgbe cannot get status for the device, so it ends up with a Link Down message. It's likely the same problem happens with openvswitch. In fact, running ovs-vsctl list Interface dpdk0, you can see the link_state is down. The use of VFIO is preferred and there are some limitations when using uio_pci_generic, like requiring that LSIs are available. This could also be a DPDK bug, but hard to tell right now. Ideally, we should try VFIO, but I am not sure your system will allow that, it doesn't seem to support an IOMMU. Now, have you ever had a setup on your system that worked with DPDK? Is it possible to revert to that? Eran, what about your system? What ovs-vsctl list Interface gives you? Thanks. Cascardo.
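To compare the two binding paths discussed here, something like the following (the bind script ships with dpdk-tools on these hosts, though its exact name can vary between DPDK packagings, and the PCI address is an assumption for illustration):

```shell
# Show which driver each NIC is currently bound to.
dpdk_nic_bind.py --status
# Preferred: VFIO, which requires a working IOMMU.
dpdk_nic_bind.py --bind=vfio-pci 0000:05:00.0
# Fallback used on these hosts (the case where link status reads down):
dpdk_nic_bind.py --bind=uio_pci_generic 0000:05:00.0
# After (re)starting OVS, the DPDK port should report link up:
ovs-vsctl get Interface dpdk0 link_state
```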
Cascardo: My system doesn't support vfio. I have had it working with uio in the past w/ vlan, as has Eran. But that was with an older rpm for openvswitch. The fact that vlan isn't working for him now either seems troubling. Though there are so many ways to screw up a dpdk install, who knows. Terry
Cascardo, as Terry wrote, my system does not support VFIO, and I succeeded with a VLAN setup when my system was configured with UIO. You can find output of ovs-vsctl list Interface in this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1325984
Hi, Eran. So bug#1325984 is now closed as not a bug; do I read correctly that there was some configuration problem? Does it work correctly with VLAN now? If yes, what about using that same setup, or the lessons learned from that bug, to test the VXLAN setup? I would appreciate that, as I can't reproduce the same DPDK failure right now on a machine in beaker. Even when using uio_pci_generic, testpmd tells me the link is up. Cascardo.
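For the record, the testpmd link check used here looks like this (the core mask and memory-channel count are illustrative and must match the host):

```shell
# Start testpmd interactively against the DPDK-bound ports.
testpmd -c 0x3 -n 4 -- -i
# Then, at the testpmd> prompt, check the link:
#   show port info 0      (look for "Link status: up")
```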
Hi,

After using the configuration provided by Flavio, we successfully booted an instance, but the instance doesn't get an IP address, like the issue we had on https://bugzilla.redhat.com/show_bug.cgi?id=1325984

These are the errors I get:

/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:44.170 2887 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor Interface name,ofport,external_ids --format=json]: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End of file)
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:44.171 2887 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor Interface name,ofport,external_ids --format=json] dies due to the error: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End of file)
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:44.844 2887 ERROR neutron.agent.linux.utils [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -]
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:44.846 2887 ERROR neutron.agent.common.ovs_lib [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Unable to execute ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']. Exception:
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:46.847 2887 ERROR neutron.agent.linux.utils [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -]
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:46.850 2887 ERROR neutron.agent.common.ovs_lib [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Unable to execute ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']. Exception:
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:48.845 2887 ERROR neutron.agent.linux.utils [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -]
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:48.846 2887 ERROR neutron.agent.common.ovs_lib [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Unable to execute ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']. Exception:
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:50.845 2887 ERROR neutron.agent.linux.utils [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -]
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:50.846 2887 ERROR neutron.agent.common.ovs_lib [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Unable to execute ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']. Exception:
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:53.381 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:54.897 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:56.894 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:41:58.894 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:42:00.893 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:42:02.896 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:42:04.894 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:42:06.893 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:42:08.894 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:42:10.894 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active
/var/log/neutron/openvswitch-agent.log:2016-05-01 10:42:12.894 2887 ERROR neutron.agent.linux.ovsdb_monitor [req-808d716a-cdec-4c05-bb11-23715c7d4ee2 - - - - -] Interface monitor is not active

compute:
[root@puma48 ~]# ovs-vsctl show
1451b387-628a-4a59-9b88-53e3fff6ff07
    Bridge br-int
        fail_mode: secure
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "vhub3fae435-b0"
            tag: 3
            Interface "vhub3fae435-b0"
                type: dpdkvhostuser
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "vxlan-0a23a0b7"
            Interface "vxlan-0a23a0b7"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.35.160.193", out_key=flow, remote_ip="10.35.160.183"}
    Bridge br-vlan
        Port br-vlan
            Interface br-vlan
                type: internal
        Port phy-br-vlan
            Interface phy-br-vlan
                type: patch
                options: {peer=int-br-vlan}
    ovs_version: "2.4.0"

[root@puma48 ~]# ll /var/run/openvswitch/
total 8
srwx------ 1 root qemu 0 May 1 10:56 br-int.mgmt
srwx------ 1 root qemu 0 May 1 10:56 br-int.snoop
srwx------ 1 root qemu 0 May 1 10:56 br-tun.mgmt
srwx------ 1 root qemu 0 May 1 10:56 br-tun.snoop
srwx------ 1 root qemu 0 May 1 10:56 br-vlan.mgmt
srwx------ 1 root qemu 0 May 1 10:56 br-vlan.snoop
srwx------ 1 root qemu 0 May 1 10:56 db.sock
srwx------ 1 root qemu 0 May 1 10:56 ovsdb-server.5724.ctl
-rw-r--r-- 1 root qemu 5 May 1 10:56 ovsdb-server.pid
srwx------ 1 root qemu 0 May 1 10:56 ovs-vswitchd.5744.ctl
-rw-rw-r-- 1 root qemu 5 May 1 10:56 ovs-vswitchd.pid
srwxrwxr-x 1 root qemu 0 May 1 11:17 vhub3fae435-b0

controller:
9428cb31-30aa-4846-8345-a7db0646f11a
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a23a0c1"
            Interface "vxlan-0a23a0c1"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.35.160.183", out_key=flow, remote_ip="10.35.160.193"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "enp5s0f1"
            Interface "enp5s0f1"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
        Port "tap4179420d-77"
            tag: 1
            Interface "tap4179420d-77"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-vlan
        Port br-vlan
            Interface br-vlan
                type: internal
        Port phy-br-vlan
            Interface phy-br-vlan
                type: patch
                options: {peer=int-br-vlan}
    ovs_version: "2.4.0"
Created attachment 1152749 [details]
sosreport

I'm attaching the sosreport. If necessary, I'll be glad to provide the servers for debugging; please contact me by mail or IRC for connection details. Thanks,
Hi, Eyal. I don't see how the errors in comment #20 are related to the problem in this bug report. Looking at the sosreport, it looks like ovs-vswitchd and ovsdb-server are running, and communication works fine after OVS is "restarted" after 10:56:28. I'm not sure why it's not running before that. Maybe it's crashing, but the sosreport does not contain openvswitch logs, and I can't find evidence in dmesg that it has crashed. Can you attach the openvswitch logs found at /var/log/openvswitch/? Thanks. Cascardo.
[root@puma48 ~]# ll /var/run/openvswitch/
total 8
srwx------ 1 root qemu 0 May 8 10:23 br-int.mgmt
srwx------ 1 root qemu 0 May 8 10:23 br-int.snoop
srwx------ 1 root qemu 0 May 8 10:23 br-tun.mgmt
srwx------ 1 root qemu 0 May 8 10:23 br-tun.snoop
srwx------ 1 root qemu 0 May 8 10:23 br-vlan.mgmt
srwx------ 1 root qemu 0 May 8 10:23 br-vlan.snoop
srwx------ 1 root qemu 0 May 8 10:23 db.sock
srwx------ 1 root qemu 0 May 8 10:23 ovsdb-server.99305.ctl
-rw-r--r-- 1 root qemu 6 May 8 10:23 ovsdb-server.pid
srwx------ 1 root qemu 0 May 8 10:23 ovs-vswitchd.99322.ctl
-rw-rw-r-- 1 root qemu 6 May 8 10:23 ovs-vswitchd.pid
srwxrwxr-x 1 root qemu 0 May 8 10:23 vhub43e910a-49
srwxrwxr-x 1 root qemu 0 May 8 10:23 vhuc1defa82-3c

[root@puma48 ~]# cat /var/log/openvswitch/ovsdb-server.log
2016-05-08T00:40:01.958Z|00006|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2016-05-08T07:14:41.388Z|00007|fatal_signal|WARN|terminating with signal 15 (Terminated)
2016-05-08T07:14:41.550Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2016-05-08T07:14:41.796Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.4.0
2016-05-08T07:14:51.802Z|00003|memory|INFO|2540 kB peak resident set size after 10.3 seconds
2016-05-08T07:14:51.802Z|00004|memory|INFO|cells:592 monitors:1 sessions:1
2016-05-08T07:22:21.184Z|00005|fatal_signal|WARN|terminating with signal 15 (Terminated)
2016-05-08T07:22:21.185Z|00002|daemon_unix(monitor)|INFO|pid 95683 died, killed (Terminated), exiting
2016-05-08T07:22:21.347Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2016-05-08T07:22:21.593Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.4.0
2016-05-08T07:22:31.599Z|00003|memory|INFO|2576 kB peak resident set size after 10.3 seconds
2016-05-08T07:22:31.599Z|00004|memory|INFO|cells:648 monitors:1 sessions:1
2016-05-08T07:23:19.266Z|00005|fatal_signal|WARN|terminating with signal 15 (Terminated)
2016-05-08T07:23:19.267Z|00002|daemon_unix(monitor)|INFO|pid 98818 died, killed (Terminated), exiting
2016-05-08T07:23:19.429Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2016-05-08T07:23:19.682Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.4.0
2016-05-08T07:23:29.437Z|00003|memory|INFO|2572 kB peak resident set size after 10.0 seconds
2016-05-08T07:23:29.437Z|00004|memory|INFO|cells:592 monitors:1 sessions:1

[root@puma48 ~]# cat /var/log/openvswitch/ovs-vswitchd.log
2016-05-08T00:40:01.963Z|00050|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2016-05-08T07:14:48.983Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2016-05-08T07:14:48.998Z|00003|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 0
2016-05-08T07:14:48.998Z|00004|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 1
2016-05-08T07:14:48.998Z|00005|ovs_numa|INFO|Discovered 2 NUMA nodes and 24 CPU cores
2016-05-08T07:14:48.998Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-05-08T07:14:49.002Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-05-08T07:14:49.011Z|00008|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation
2016-05-08T07:14:49.011Z|00009|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1
2016-05-08T07:14:49.011Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids
2016-05-08T07:14:49.086Z|00001|ofproto_dpif_upcall(handler5)|INFO|received packet on unassociated datapath port 0
2016-05-08T07:14:49.094Z|00011|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
2016-05-08T07:14:49.096Z|00012|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3
2016-05-08T07:14:49.096Z|00013|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports unique flow ids
2016-05-08T07:14:49.142Z|00014|bridge|INFO|bridge br-vlan: added interface br-vlan on port 65534
2016-05-08T07:14:49.142Z|00015|bridge|INFO|bridge br-vlan: added interface phy-br-vlan on port 2
2016-05-08T07:14:49.142Z|00016|bridge|INFO|bridge br-int: added interface int-br-vlan on port 1
2016-05-08T07:14:49.148Z|00017|bridge|INFO|bridge br-int: added interface br-int on port 65534
2016-05-08T07:14:49.149Z|00018|bridge|INFO|bridge br-int: added interface patch-tun on port 2
2016-05-08T07:14:49.149Z|00019|bridge|INFO|bridge br-tun: added interface patch-int on port 1
2016-05-08T07:14:49.149Z|00020|bridge|INFO|bridge br-tun: added interface vxlan-0a23a0b7 on port 2
2016-05-08T07:14:49.154Z|00021|bridge|INFO|bridge br-tun: added interface br-tun on port 65534
2016-05-08T07:14:49.306Z|00022|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:14:49.306Z|00023|bridge|INFO|bridge br-vlan: using datapath ID 0000e24008961b42
2016-05-08T07:14:49.306Z|00024|connmgr|INFO|br-vlan: added service controller "punix:/var/run/openvswitch/br-vlan.mgmt"
2016-05-08T07:14:49.342Z|00025|bridge|INFO|bridge br-int: using datapath ID 00004ad13f18a04d
2016-05-08T07:14:49.342Z|00026|connmgr|INFO|br-int: added service controller "punix:/var/run/openvswitch/br-int.mgmt"
2016-05-08T07:14:49.342Z|00027|bridge|INFO|bridge br-tun: using datapath ID 00003e47c685fc4f
2016-05-08T07:14:49.342Z|00028|connmgr|INFO|br-tun: added service controller "punix:/var/run/openvswitch/br-tun.mgmt"
2016-05-08T07:14:49.351Z|00029|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.0
2016-05-08T07:14:50.449Z|00030|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:14:50.460Z|00031|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:14:50.469Z|00032|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:14:50.479Z|00033|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:14:50.524Z|00034|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:14:50.533Z|00035|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:14:50.559Z|00036|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:14:50.587Z|00037|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:14:50.610Z|00038|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:14:50.619Z|00039|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:14:50.632Z|00040|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:14:50.646Z|00041|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:14:50.710Z|00042|bridge|INFO|bridge br-int: added interface patch-tun on port 3
2016-05-08T07:14:50.710Z|00043|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:14:50.748Z|00044|connmgr|INFO|br-tun<->unix: 9 flow_mods in the last 0 s (9 adds)
2016-05-08T07:14:50.757Z|00045|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:14:50.826Z|00046|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:14:50.950Z|00047|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:14:50.958Z|00048|ofp_util|INFO|normalization changed ofp_match, details:
2016-05-08T07:14:50.958Z|00049|ofp_util|INFO| pre: in_port=2,nw_proto=58,tp_src=136
2016-05-08T07:14:50.958Z|00050|ofp_util|INFO|post: in_port=2
2016-05-08T07:14:50.959Z|00051|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:14:50.967Z|00052|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:14:52.213Z|00053|memory|INFO|237580 kB peak resident set size after 10.4 seconds
2016-05-08T07:14:52.213Z|00054|memory|INFO|handlers:16 ports:8 revalidators:8 rules:29
2016-05-08T07:15:44.037Z|00055|bridge|WARN|could not open network device vhu719c6cc9-8d (No such device)
2016-05-08T07:15:44.037Z|00056|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:15:44.159Z|00057|dpdk|INFO|Socket /var/run/openvswitch/vhu719c6cc9-8d created for vhost-user port vhu719c6cc9-8d
2016-05-08T07:15:44.164Z|00058|dpif_netdev|INFO|Created 1 pmd threads on numa node 0
2016-05-08T07:15:44.164Z|00001|dpif_netdev(pmd40)|INFO|Core 0 processing port 'vhu719c6cc9-8d'
2016-05-08T07:15:44.164Z|00059|bridge|INFO|bridge br-int: added interface vhu719c6cc9-8d on port 4
2016-05-08T07:15:44.164Z|00060|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:15:44.164Z|00002|dpif_netdev(pmd40)|INFO|Core 0 processing port 'vhu719c6cc9-8d'
2016-05-08T07:15:44.740Z|00061|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 modifications)
2016-05-08T07:15:44.750Z|00062|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:15:44.779Z|00063|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:15:44.804Z|00064|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:15:44.814Z|00065|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:15:44.826Z|00066|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:15:44.837Z|00067|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:15:44.847Z|00068|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:15:44.858Z|00069|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:15:44.874Z|00070|bridge|WARN|could not open network device dpdk0 (No such device)
2016-05-08T07:15:59.310Z|00001|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhu719c6cc9-8d' (0) has been added
2016-05-08T07:17:10.751Z|00002|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhu719c6cc9-8d' (0) has been removed
2016-05-08T07:17:18.692Z|00003|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhu719c6cc9-8d' (0) has been added
2016-05-08T07:22:21.182Z|00003|daemon_unix(monitor)|INFO|pid 95699 died, killed (Terminated), exiting
2016-05-08T07:22:27.321Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2016-05-08T07:22:27.348Z|00003|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 0
2016-05-08T07:22:27.348Z|00004|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 1
2016-05-08T07:22:27.349Z|00005|ovs_numa|INFO|Discovered 2 NUMA nodes and 24 CPU cores
2016-05-08T07:22:27.349Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-05-08T07:22:27.349Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-05-08T07:22:27.361Z|00008|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation
2016-05-08T07:22:27.361Z|00009|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1
2016-05-08T07:22:27.362Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids
2016-05-08T07:22:27.444Z|00001|ofproto_dpif_upcall(handler5)|INFO|received packet on unassociated datapath port 0
2016-05-08T07:22:27.453Z|00011|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
2016-05-08T07:22:27.454Z|00012|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3
2016-05-08T07:22:27.454Z|00013|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports unique flow ids
2016-05-08T07:22:27.501Z|00014|bridge|INFO|bridge br-vlan: added interface br-vlan on port 65534
2016-05-08T07:22:27.501Z|00015|bridge|INFO|bridge br-vlan: added interface phy-br-vlan on port 2
2016-05-08T07:22:27.501Z|00016|bridge|INFO|bridge br-int: added interface int-br-vlan on port 1
2016-05-08T07:22:27.507Z|00017|bridge|INFO|bridge br-int: added interface br-int on port 65534
2016-05-08T07:22:27.507Z|00018|dpdk|INFO|Socket /var/run/openvswitch/vhu719c6cc9-8d created for vhost-user port vhu719c6cc9-8d
2016-05-08T07:22:27.664Z|00019|dpif_netdev|INFO|Created 1 pmd threads on numa node 0
2016-05-08T07:22:27.665Z|00001|dpif_netdev(pmd28)|INFO|Core 0 processing port 'vhu719c6cc9-8d'
2016-05-08T07:22:27.665Z|00020|bridge|INFO|bridge br-int: added interface vhu719c6cc9-8d on port 4
2016-05-08T07:22:27.665Z|00021|bridge|INFO|bridge br-int: added interface patch-tun on port 3
2016-05-08T07:22:27.665Z|00022|bridge|INFO|bridge br-tun: added interface patch-int on port 1
2016-05-08T07:22:27.666Z|00002|dpif_netdev(pmd28)|INFO|Core 0 processing port 'vhu719c6cc9-8d'
2016-05-08T07:22:27.666Z|00023|bridge|INFO|bridge br-tun: added interface vxlan-0a23a0b7 on port 2
2016-05-08T07:22:27.671Z|00024|bridge|INFO|bridge br-tun: added interface br-tun on port 65534
2016-05-08T07:22:27.830Z|00025|dpdk|INFO|Port 0: a0:36:9f:7f:28:ba
2016-05-08T07:22:28.085Z|00026|dpdk|INFO|Port 0: a0:36:9f:7f:28:ba
2016-05-08T07:22:28.086Z|00003|dpif_netdev(pmd28)|INFO|Core 0 processing port 'vhu719c6cc9-8d'
2016-05-08T07:22:28.086Z|00004|dpif_netdev(pmd28)|INFO|Core 0 processing port 'dpdk0'
2016-05-08T07:22:28.086Z|00027|bridge|INFO|bridge br-tun: added interface dpdk0 on port 3
2016-05-08T07:22:28.086Z|00028|bridge|INFO|bridge br-vlan: using datapath ID 0000e24008961b42
2016-05-08T07:22:28.086Z|00029|connmgr|INFO|br-vlan: added service controller "punix:/var/run/openvswitch/br-vlan.mgmt"
2016-05-08T07:22:28.121Z|00030|bridge|INFO|bridge br-int: using datapath ID 00004ad13f18a04d
2016-05-08T07:22:28.121Z|00031|connmgr|INFO|br-int: added service controller "punix:/var/run/openvswitch/br-int.mgmt"
2016-05-08T07:22:28.121Z|00032|bridge|INFO|bridge br-tun: using datapath ID 0000a0369f7f28ba
2016-05-08T07:22:28.122Z|00033|connmgr|INFO|br-tun: added service controller "punix:/var/run/openvswitch/br-tun.mgmt"
2016-05-08T07:22:28.125Z|00034|dpif_netdev|INFO|Created 1 pmd threads on numa node 0
2016-05-08T07:22:28.141Z|00035|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.0
2016-05-08T07:22:28.142Z|00001|dpif_netdev(pmd41)|INFO|Core 0 processing port 'vhu719c6cc9-8d'
2016-05-08T07:22:28.142Z|00002|dpif_netdev(pmd41)|INFO|Core 0 processing port 'dpdk0'
2016-05-08T07:22:28.523Z|00036|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:28.531Z|00037|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:28.540Z|00038|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:28.587Z|00039|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:22:28.595Z|00040|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:28.673Z|00041|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:28.681Z|00042|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:28.773Z|00043|bridge|INFO|bridge br-int: added interface patch-tun on port 2
2016-05-08T07:22:28.819Z|00044|connmgr|INFO|br-tun<->unix: 9 flow_mods in the last 0 s (9 adds)
2016-05-08T07:22:28.827Z|00045|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:28.895Z|00046|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:28.904Z|00047|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 modifications)
2016-05-08T07:22:29.029Z|00048|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:22:29.039Z|00049|ofp_util|INFO|normalization changed ofp_match, details:
2016-05-08T07:22:29.039Z|00050|ofp_util|INFO| pre: in_port=3,nw_proto=58,tp_src=136
2016-05-08T07:22:29.039Z|00051|ofp_util|INFO|post: in_port=3
2016-05-08T07:22:29.039Z|00052|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:22:29.048Z|00053|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:22:29.188Z|00054|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 modifications)
2016-05-08T07:22:29.197Z|00055|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:29.241Z|00056|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-05-08T07:22:29.251Z|00057|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:29.260Z|00058|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:29.269Z|00059|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:29.278Z|00060|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-05-08T07:22:31.780Z|00061|memory|INFO|247040 kB peak resident set size after 10.2 seconds
2016-05-08T07:22:31.780Z|00062|memory|INFO|handlers:16 ports:10 revalidators:8 rules:35 udpif keys:39 2016-05-08T07:22:36.735Z|00063|bridge|WARN|could not open network device vhub43e910a-49 (No such device) 2016-05-08T07:22:36.859Z|00064|dpdk|INFO|Socket /var/run/openvswitch/vhub43e910a-49 created for vhost-user port vhub43e910a-49 2016-05-08T07:22:36.859Z|00003|dpif_netdev(pmd41)|INFO|Core 0 processing port 'vhu719c6cc9-8d' 2016-05-08T07:22:36.859Z|00004|dpif_netdev(pmd41)|INFO|Core 0 processing port 'dpdk0' 2016-05-08T07:22:36.859Z|00005|dpif_netdev(pmd41)|INFO|Core 0 processing port 'vhub43e910a-49' 2016-05-08T07:22:36.859Z|00065|bridge|INFO|bridge br-int: added interface vhub43e910a-49 on port 5 2016-05-08T07:22:38.716Z|00066|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:22:38.724Z|00067|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:22:38.735Z|00068|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:22:38.744Z|00069|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:22:38.753Z|00070|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:22:38.762Z|00071|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:13.941Z|00006|dpif_netdev(pmd41)|INFO|Core 0 processing port 'dpdk0' 2016-05-08T07:23:13.941Z|00007|dpif_netdev(pmd41)|INFO|Core 0 processing port 'vhub43e910a-49' 2016-05-08T07:23:14.654Z|00072|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:14.662Z|00073|ofp_util|INFO|normalization changed ofp_match, details: 2016-05-08T07:23:14.662Z|00074|ofp_util|INFO| pre: in_port=4,nw_proto=58,tp_src=136 2016-05-08T07:23:14.662Z|00075|ofp_util|INFO|post: in_port=4 2016-05-08T07:23:14.663Z|00076|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:14.672Z|00077|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 
deletes) 2016-05-08T07:23:18.972Z|00001|fatal_signal(handler31)|WARN|terminating with signal 15 (Terminated) 2016-05-08T07:23:19.080Z|00003|daemon_unix(monitor)|INFO|pid 98834 died, killed (Terminated), exiting 2016-05-08T07:23:27.209Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log 2016-05-08T07:23:27.232Z|00003|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 0 2016-05-08T07:23:27.232Z|00004|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 1 2016-05-08T07:23:27.232Z|00005|ovs_numa|INFO|Discovered 2 NUMA nodes and 24 CPU cores 2016-05-08T07:23:27.232Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 2016-05-08T07:23:27.232Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2016-05-08T07:23:27.243Z|00008|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation 2016-05-08T07:23:27.243Z|00009|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1 2016-05-08T07:23:27.243Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids 2016-05-08T07:23:27.324Z|00001|ofproto_dpif_upcall(handler5)|INFO|received packet on unassociated datapath port 0 2016-05-08T07:23:27.332Z|00011|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation 2016-05-08T07:23:27.334Z|00012|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3 2016-05-08T07:23:27.334Z|00013|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports unique flow ids 2016-05-08T07:23:27.381Z|00014|bridge|INFO|bridge br-vlan: added interface br-vlan on port 65534 2016-05-08T07:23:27.382Z|00015|bridge|INFO|bridge br-vlan: added interface phy-br-vlan on port 2 2016-05-08T07:23:27.382Z|00016|bridge|INFO|bridge br-int: added interface int-br-vlan on port 1 2016-05-08T07:23:27.388Z|00017|bridge|INFO|bridge br-int: added interface br-int on port 65534 2016-05-08T07:23:27.388Z|00018|dpdk|INFO|Socket /var/run/openvswitch/vhub43e910a-49 created for vhost-user port vhub43e910a-49 
2016-05-08T07:23:27.545Z|00019|dpif_netdev|INFO|Created 1 pmd threads on numa node 0 2016-05-08T07:23:27.546Z|00001|dpif_netdev(pmd28)|INFO|Core 0 processing port 'vhub43e910a-49' 2016-05-08T07:23:27.546Z|00020|bridge|INFO|bridge br-int: added interface vhub43e910a-49 on port 5 2016-05-08T07:23:27.546Z|00021|bridge|INFO|bridge br-int: added interface patch-tun on port 2 2016-05-08T07:23:27.546Z|00002|dpif_netdev(pmd28)|INFO|Core 0 processing port 'vhub43e910a-49' 2016-05-08T07:23:27.546Z|00022|bridge|INFO|bridge br-tun: added interface patch-int on port 1 2016-05-08T07:23:27.546Z|00023|bridge|INFO|bridge br-tun: added interface vxlan-0a23a0b7 on port 2 2016-05-08T07:23:27.552Z|00024|bridge|INFO|bridge br-tun: added interface br-tun on port 65534 2016-05-08T07:23:27.711Z|00025|dpdk|INFO|Port 0: a0:36:9f:7f:28:ba 2016-05-08T07:23:27.967Z|00026|dpdk|INFO|Port 0: a0:36:9f:7f:28:ba 2016-05-08T07:23:27.967Z|00003|dpif_netdev(pmd28)|INFO|Core 0 processing port 'vhub43e910a-49' 2016-05-08T07:23:27.967Z|00004|dpif_netdev(pmd28)|INFO|Core 0 processing port 'dpdk0' 2016-05-08T07:23:27.967Z|00027|bridge|INFO|bridge br-tun: added interface dpdk0 on port 3 2016-05-08T07:23:27.967Z|00028|bridge|INFO|bridge br-vlan: using datapath ID 0000e24008961b42 2016-05-08T07:23:27.967Z|00029|connmgr|INFO|br-vlan: added service controller "punix:/var/run/openvswitch/br-vlan.mgmt" 2016-05-08T07:23:28.003Z|00030|bridge|INFO|bridge br-int: using datapath ID 00004ad13f18a04d 2016-05-08T07:23:28.003Z|00031|connmgr|INFO|br-int: added service controller "punix:/var/run/openvswitch/br-int.mgmt" 2016-05-08T07:23:28.003Z|00032|bridge|INFO|bridge br-tun: using datapath ID 0000a0369f7f28ba 2016-05-08T07:23:28.003Z|00033|connmgr|INFO|br-tun: added service controller "punix:/var/run/openvswitch/br-tun.mgmt" 2016-05-08T07:23:28.007Z|00034|dpif_netdev|INFO|Created 1 pmd threads on numa node 0 2016-05-08T07:23:28.012Z|00001|dpif_netdev(pmd41)|INFO|Core 0 processing port 'vhub43e910a-49' 
2016-05-08T07:23:28.012Z|00002|dpif_netdev(pmd41)|INFO|Core 0 processing port 'dpdk0' 2016-05-08T07:23:28.015Z|00035|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.0 2016-05-08T07:23:29.191Z|00036|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.199Z|00037|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.208Z|00038|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.254Z|00039|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:29.262Z|00040|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.342Z|00041|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.351Z|00042|connmgr|INFO|br-vlan<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.444Z|00043|bridge|INFO|bridge br-int: added interface patch-tun on port 3 2016-05-08T07:23:29.486Z|00044|connmgr|INFO|br-tun<->unix: 9 flow_mods in the last 0 s (9 adds) 2016-05-08T07:23:29.494Z|00045|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.556Z|00046|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.565Z|00047|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 modifications) 2016-05-08T07:23:29.689Z|00048|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:29.698Z|00049|ofp_util|INFO|normalization changed ofp_match, details: 2016-05-08T07:23:29.698Z|00050|ofp_util|INFO| pre: in_port=2,nw_proto=58,tp_src=136 2016-05-08T07:23:29.698Z|00051|ofp_util|INFO|post: in_port=2 2016-05-08T07:23:29.698Z|00052|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:29.706Z|00053|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:29.841Z|00054|memory|INFO|247120 kB peak resident set size after 10.1 seconds 
2016-05-08T07:23:29.842Z|00055|memory|INFO|handlers:16 ports:10 revalidators:8 rules:30 udpif keys:1 2016-05-08T07:23:29.843Z|00056|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 modifications) 2016-05-08T07:23:29.853Z|00057|connmgr|INFO|br-tun<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.898Z|00058|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:29.908Z|00059|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.917Z|00060|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.926Z|00061|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:29.934Z|00062|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:48.003Z|00063|bridge|WARN|could not open network device vhuc1defa82-3c (No such device) 2016-05-08T07:23:48.127Z|00064|dpdk|INFO|Socket /var/run/openvswitch/vhuc1defa82-3c created for vhost-user port vhuc1defa82-3c 2016-05-08T07:23:48.128Z|00003|dpif_netdev(pmd41)|INFO|Core 0 processing port 'vhub43e910a-49' 2016-05-08T07:23:48.128Z|00004|dpif_netdev(pmd41)|INFO|Core 0 processing port 'dpdk0' 2016-05-08T07:23:48.128Z|00005|dpif_netdev(pmd41)|INFO|Core 0 processing port 'vhuc1defa82-3c' 2016-05-08T07:23:48.128Z|00065|bridge|INFO|bridge br-int: added interface vhuc1defa82-3c on port 4 2016-05-08T07:23:49.396Z|00066|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:49.405Z|00067|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes) 2016-05-08T07:23:49.417Z|00068|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:49.426Z|00069|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:49.435Z|00070|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 2016-05-08T07:23:49.444Z|00071|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 adds) 
2016-05-08T07:23:58.738Z|00001|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhuc1defa82-3c' (0) has been added
2016-05-08T07:24:58.668Z|00002|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhuc1defa82-3c' (0) has been removed
2016-05-08T07:25:05.839Z|00003|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhuc1defa82-3c' (0) has been added

[root@puma48 ~]# ovs-vsctl show
1451b387-628a-4a59-9b88-53e3fff6ff07
    Bridge br-int
        fail_mode: secure
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
        Port "vhuc1defa82-3c"
            tag: 1
            Interface "vhuc1defa82-3c"
                type: dpdkvhostuser
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vhub43e910a-49"
            tag: 1
            Interface "vhub43e910a-49"
                type: dpdkvhostuser
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "vxlan-0a23a0b7"
            Interface "vxlan-0a23a0b7"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.35.160.193", out_key=flow, remote_ip="10.35.160.183"}
    Bridge br-vlan
        Port br-vlan
            Interface br-vlan
                type: internal
        Port phy-br-vlan
            Interface phy-br-vlan
                type: patch
                options: {peer=int-br-vlan}
    ovs_version: "2.4.0"

As you can see, the dpdkvhostuser port was created successfully, but I'm still having the same issue. Thanks,
Hi, Eyal. Can I get access to the system? Regards. Cascardo.
Hi Cascardo, I sent the details of the servers to you by mail. Thanks a lot, Eyal Dannon
Hi, Eyal. A quick look at the compute node and I can explain why this won't work. Your tunnel vxlan-0a23a0b7 has remote_ip 10.35.160.183, and the route to that address on the system goes through enp5s0f0, which is not an OVS bridge. You should have a setup similar to the one described in the userspace tunneling cookbook: http://openvswitch.org/support/config-cookbooks/userspace-tunneling/ That means the tunnel's remote IP must be routed through another OVS bridge that runs in userspace, that is, one with datapath_type netdev. One option would be to create another bridge, put dpdk0 into it, assign that bridge an address on a different subnet, and use that subnet for the tunnel. Regards. Cascardo.
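A minimal sketch of the layout described above, following the cookbook. The bridge name br-phy and the 172.16.0.0/24 subnet are assumptions for illustration, not values taken from this setup:

```shell
# Create a userspace (netdev) bridge and move the DPDK NIC into it.
ovs-vsctl add-br br-phy -- set bridge br-phy datapath_type=netdev
ovs-vsctl add-port br-phy dpdk0 -- set Interface dpdk0 type=dpdk

# Put the local tunnel endpoint address on the bridge and bring it up,
# so the route to the remote tunnel IP goes through a userspace bridge
# rather than a kernel interface like enp5s0f0.
ip addr add 172.16.0.1/24 dev br-phy
ip link set br-phy up
```

The vxlan port's local_ip and remote_ip would then sit in that subnet instead of the one reachable only via enp5s0f0.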
Hi, I tried to configure the hosts following your suggestion. On the compute node, the interface and the new vxlan0 tunnel are on the br-tun bridge, which is connected to br-int. On the controller node, I created vxlan0 and assigned it, together with the DPDK interface, to the br-tun bridge, which is also connected to br-int. The neutron configuration and the full OVS configuration are attached. I'm still having the same issue with the DHCP request. Any suggestions?
Created attachment 1164033 [details] Compute and controller new ovs configuration
As we discussed, the route to the tunnel must go through a different bridge, which was not the case when I checked the configuration. Do you have a new setup with those fixes? Thanks. Cascardo.
Hi, The setup isn't available right now; I'll try to configure it as soon as possible. Thanks,
I managed to get a vxlan-based setup working on my test machines, converted from the existing vlan-based setup. The only real difference from the vlan-based setup is that br-data0 (or whatever bridge the dpdk interface is on) is *not* added to openvswitch_agent.ini's bridge_mappings (therefore no patch port between it and br-int) and has the IP from the local_ip setting set on br-data0's internal port. This allows the use of the kernel for routing/arp. Then, set up vxlan in neutron just like you normally would. More specifically, this is what I did to convert my existing working vlan-based setup:

dpdk vlan to vxlan conversion
=============================

CONTROLLER NODE
---------------
ml2_conf.ini:

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan

ALL NODES
---------
openvswitch_agent.ini:

[ovs]
bridge_mappings =          # not the br-data0 bridge, anyway
tunnel_bridge = br-tun
tunnel_types = vxlan
local_ip = $br_data0_ip

# fix br-data0
ovs-vsctl del-port phy-br-data0
ovs-vsctl del-port int-br-data0
ip addr add $br_data0_ip/24 dev br-data0
ip link set br-data0 up
openstack-service restart neutron

EXAMPLE BOOT
------------
$ neutron net-create private
$ neutron subnet-create private 10.0.0.0/24 --name private_subnet
$ nova boot --flavor m1.nano-dpdk --image cirros --nic net-id=$(neutron net-show -c id -f value private) --num-instances 3 test
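If the same ini change has to be repeated on several nodes, the [ovs] fragment from the recipe above can be staged into a file for review before merging it into openvswitch_agent.ini. This is only a sketch; the example IP 10.0.0.10 is a placeholder for the node's br-data0 address, and the section layout simply mirrors the recipe above:

```shell
# Placeholder value: substitute the IP assigned to br-data0 on this node.
br_data0_ip=10.0.0.10

# Stage the [ovs] settings from the recipe into a temp file.
fragment=$(mktemp)
cat > "$fragment" <<EOF
[ovs]
bridge_mappings =
tunnel_bridge = br-tun
tunnel_types = vxlan
local_ip = ${br_data0_ip}
EOF

# Review before merging into openvswitch_agent.ini on the node.
cat "$fragment"
```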
Marking ON_QA since this was just a configuration issue: a normal DPDK-based setup for VLANs, but with the changes listed in Comment 33. Specifically, not setting up bridge_mappings for the DPDK bridge.
Update: We discovered a few issues with VxLAN on OSPd 10.

First, see Cascardo's mail: "The br-link had no flows installed. A single NORMAL flow is good enough. Though I am not sure if you want different flows installed, but with no flows, and the controller not allowing any traffic, it was impossible to establish the VXLAN tunnel."

Second, the PING test fails; see Dan Sneddon's suggestions: "It sounds like we will need some modifications to make this work, I think any of the following would work:
* Reorder operations so that the DPDK initialization happens before the ping test, perhaps by moving the setting of /etc/sysconfig/openvswitch and restarting of the openvswitch service to an earlier step in the deployment
* Modify the ping test so that it ignores interfaces on DPDK bridges
* Modify os-net-config so that it completes all the actions needed to make the interface pingable after os-net-config runs"

Karthik, can you please attach the bug numbers for these issues?
*** Bug 1394901 has been marked as a duplicate of this bug. ***
Hi Eyal, Can we please get BZ numbers for all the different problems, one BZ per problem? Without individual BZs we might not be able to fix them in a timely manner, as some problems might be in Neutron, some in DPDK, some in OVS, and some in RHEL. So we will need separate BZ numbers for all the different problems, please. Hope you understand. Thanks
Karthik, I suggest you raise a new bug to track the ping test failures and the proposed approaches. Eyal, with respect to adding the missing flows in OVS, I guess that would be taken care of by Neutron. If my understanding is not right, please let me know if anything is expected from our side. Thanks Vijay.
Absolutely right.
Eyal, can you show what flows were missing and from which bridge? How did dump-flows output of the bridge look like before and after?
Assaf, As Cascardo wrote, there were no flows over the bridge at all. He just added NORMAL flow and that was enough. Here's Cascardo's mail : "The br-link had no flows installed. A single NORMAL flow is good enough. Though I am not sure if you want different flows installed, but with no flows, and the controller not allowing any traffic, it was impossible to establish the VXLAN tunnel."
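For reference, checking for and installing the single NORMAL flow Cascardo describes can be done with ovs-ofctl on the affected node (br-link is the bridge named in this comment):

```shell
# An empty flow table here means the bridge forwards no traffic at all.
ovs-ofctl dump-flows br-link

# Install one NORMAL flow so the bridge behaves like a learning switch,
# which was enough for the VXLAN tunnel traffic to pass.
ovs-ofctl add-flow br-link "priority=0,actions=NORMAL"
```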
Hi, I have deployed OSPd + VxLAN with the attached templates (skipping the ping tests). After the deployment, please run the following as a workaround:

# ifup br-link
# systemctl restart neutron-openvswitch-agent

* Add the local IP address to the br-link bridge:
# ip addr add <local_IP/PREFIX> dev br-link

* Tag the br-link port with the VLAN ID used for the tenant network:
# ovs-vsctl set port br-link tag=<VLAN-ID>

Thanks,
Eyal, thanks for the info. We will look into why the VLAN port creation on the DPDK bridge fails and suggest the next steps.
Thanks, Terry, for your support during the late hours of your day. As we discussed, the VLAN tag has to be specified directly on the br-link bridge; the current mechanism of an additional VLAN interface on the bridge will not work for netdev bridges. I have modified the network config for the DPDK bridge as below, and I am able to deploy successfully. The guest VM is also getting its IP successfully after restarting the openvswitch, network and neutron-openvswitch-agent services on the compute node.

  - type: ovs_user_bridge
    name: br-link
    use_dhcp: false
    ovs_extra:
      - set port br-link tag=397
    addresses:
      - ip_netmask: {get_param: TenantIpSubnet}
    members:
      - type: ovs_dpdk_port
        name: dpdk0
        members:
          - type: interface
            name: nic4
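After applying the template, a quick way on the compute node to confirm the tag landed on the bridge port and to perform the service restarts mentioned in this comment (the expected tag value 397 comes from the template above):

```shell
# Verify the VLAN tag set via ovs_extra is present on the br-link port.
ovs-vsctl get port br-link tag

# Restarts that were needed before the guest picked up an IP.
systemctl restart openvswitch
systemctl restart network
systemctl restart neutron-openvswitch-agent
```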
We have tested and verified the changes. Right after the deployment, a restart of the "network" and "neutron-openvswitch-agent" services is needed. I'll raise a documentation bug for this issue. Thanks.
(In reply to Eyal Dannon from comment #54)
> We have tested and verified the changes,
> Right after the deployment, restart of "network" and
> "neutron-openvswitch-agent" services are needed.
> I'll raise a documentation bug regarding this issue.
> Thanks.

Also please document the network config template changes to fetch the VLAN ID from the parameter using str_replace:

  - type: ovs_user_bridge
    name: br-link
    use_dhcp: false
    ovs_extra:
      - str_replace:
          template: set port br-link tag=_VLAN_TAG_
          params:
            _VLAN_TAG_: {get_param: TenantNetworkVlanID}