Bug 1035070 - low receive bandwidth for instance when working with Openvswitch
Summary: low receive bandwidth for instance when working with Openvswitch
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: kernel
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.0
Assignee: Don Howard
QA Contact: Jean-Tsung Hsiao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-27 02:19 UTC by chen.li
Modified: 2013-11-28 04:39 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-11-28 04:29:22 UTC
Target Upstream Version:


Attachments

Description chen.li 2013-11-27 02:19:21 UTC
Description of problem:
We have an instance, booted via an OpenStack command, running in VLAN mode with the Open vSwitch plugin.

The receive bandwidth for the instance is less than 3 Gb/s even though we are using a 10 Gb/s NIC.


Version-Release number of selected component (if applicable):
kernel                  2.6.32-358.123.2.openstack.el6.x86_64
OpenStack    Havana     2013.2-1.el6
Openvswitch             1.11.0_8ce28d-1.el6ost
ixgbe                   3.9.15-k


How reproducible:

Happens every time.


Steps to Reproduce:
1. Install a compute node with 10 Gb/s NIC ports.
2. Install OpenStack, using Open vSwitch as the Neutron plugin, and configure Neutron in VLAN mode.
3. Create a network with VLAN tag 1.
4. Boot an instance on that network.
5. Start the iperf server in the instance: "iperf -s"
6. Prepare a test machine with a 10 Gb/s NIC and configure its NIC to use VLAN tag 1.
7. Run the iperf client on the test machine: "iperf -c $instance_ip"


Actual results:
Bandwidth is always less than 3 Gb/s.


Expected results:
It should reach close to 10 Gb/s.

Additional info:
If we update the kernel to the newest version, 3.12.0 (in which the default ixgbe version is 3.15.1), the bandwidth reaches 9.38 Gb/s.

Comment 2 Jean-Tsung Hsiao 2013-11-27 03:20:20 UTC
(In reply to chen.li from comment #0)

Hi,

We found this issue earlier and fixed it. Please use the following kernel:

358.123.4.openstack

Kernel 358.123.2 did not have the fix.

Comment 3 chen.li 2013-11-27 03:28:21 UTC
(In reply to Jean-Tsung Hsiao from comment #2)

Which repo should I use?

I'm using
http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6/

And 358.123.4.openstack is not there.

Comment 4 chen.li 2013-11-27 03:33:55 UTC
(In reply to Jean-Tsung Hsiao from comment #2)

Also, can you provide a little more information about the fix? Is there an earlier bug report? Which patch fixed it? I really want to know why this is happening and how it was fixed.

Thanks.
-chen

Comment 5 Jean-Tsung Hsiao 2013-11-27 04:05:08 UTC
(In reply to chen.li from comment #4)

Let me double check 358.123.2 myself. 

I'll respond to you tomorrow morning Boston time.

Comment 6 chen.li 2013-11-27 04:08:28 UTC
(In reply to Jean-Tsung Hsiao from comment #5)

Thanks for that.

But I just downloaded RPM packages from:

        http://www.vcomtech.net/linux/6/6Server/en/RHOS/RPMS/x86_64/kernel-2.6.32-358.123.4.openstack.el6.x86_64.rpm

and

        http://springdale.math.ias.edu/data/puias/OS/6/x86_64/kernel-firmware-2.6.32-358.123.4.openstack.el6.noarch.rpm

I did the following:
1. Installed these RPM packages on the compute node
2. nova stop $my_instance
3. Rebooted the compute node
4. nova start $my_instance

But the bandwidth is still low.

Thanks.
-chen

Comment 7 Jean-Tsung Hsiao 2013-11-27 04:29:25 UTC
(In reply to chen.li from comment #6)

This is what I got from netperf/TCP_STREAM/ixgbe under 358.123.2.openstack:

[root@host-172-16-41-2 jhsiao]# netperf -4 -H 172.16.41.3
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.41.3 () port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.00    8801.44   
[root@host-172-16-41-2 jhsiao]# netperf -4 -H 172.16.41.3
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.41.3 () port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.00    8985.17   
[root@host-172-16-41-2 jhsiao]# netperf -4 -H 172.16.41.3
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.41.3 () port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.00    8306.32

Comment 8 chen.li 2013-11-27 04:34:14 UTC
(In reply to Jean-Tsung Hsiao from comment #7)

Are you sure the instance's network is in VLAN mode?
I can get that kind of bandwidth in FLAT mode too.

What NIC are you using?

Comment 9 chen.li 2013-11-27 04:40:32 UTC
What I see when comparing VLAN and FLAT is that the packet sizes coming in from port 2 (int-br-eth4) differ:

VLAN 

ovs-dpctl dump-flows 

in_port(2),eth(src=00:1b:21:a1:19:dc,dst=fa:16:3e:42:77:9b),eth_type(0x8100),vlan(vid=2002,pcp=0),encap(eth_type(0x0800),ipv4(src=10.1.100.254,dst=10.1.100.4,proto=6,tos=0,ttl=64,frag=no),tcp(src=50851,dst=5001)), packets:6202443, bytes:9390495830, used:0.001s, flags:P., actions:pop_vlan,1 

(The packet comes in and the VLAN ID is stripped. Every packet is 1514 bytes.)

in_port(6),eth(src=00:1b:21:a1:19:dc,dst=fa:16:3e:42:77:9b),eth_type(0x8100),vlan(vid=2002,pcp=0),encap(eth_type(0x0800),ipv4(src=10.1.100.254,dst=10.1.100.4,proto=6,tos=0,ttl=64,frag=no),tcp(src=50851,dst=5001)), packets:414505, bytes:9008499162, used:0.000s, flags:P., actions:7 

(The packet comes in from the physical NIC port, and every packet is 21733 bytes.)


FLAT 

ovs-dpctl dump-flows

in_port(2),eth(src=00:1b:21:a1:19:dc,dst=fa:16:3e:75:92:d6),eth_type(0x0800),ipv4(src=191.101.0.254,dst=191.101.0.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=34071,dst=5001), packets:226109, bytes:5419916258, used:0.000s, flags:P., actions:5

(Every packet is 23970 bytes.)

in_port(6),eth(src=00:1b:21:a1:19:dc,dst=fa:16:3e:75:92:d6),eth_type(0x0800),ipv4(src=191.101.0.254,dst=191.101.0.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=34071,dst=5001), packets:226110, bytes:5419942388, used:0.000s, flags:P., actions:7 

(Every packet is 23970 bytes.)
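The per-packet sizes quoted in the parentheses above can be re-derived from the flow counters (bytes divided by packets); a quick sketch in Python using the counter values from the dumps (the labels are mine, just descriptive):

```python
# Re-derive the average packet sizes from the "ovs-dpctl dump-flows"
# counters shown above: avg size = bytes / packets.
flows = {
    "VLAN, port 2 (after pop_vlan)": (6202443, 9390495830),
    "VLAN, port 6 (physical NIC)":   (414505, 9008499162),
    "FLAT, port 2":                  (226109, 5419916258),
    "FLAT, port 6":                  (226110, 5419942388),
}

for name, (packets, nbytes) in flows.items():
    # Rounds to 1514, 21733, 23970 and 23970 respectively.
    print(f"{name}: {round(nbytes / packets)} bytes/packet")
```

This makes the asymmetry concrete: in VLAN mode the flow toward the instance is chopped into MTU-sized (1514-byte) frames, while in FLAT mode both sides carry large aggregated packets.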

Comment 10 Jean-Tsung Hsiao 2013-11-27 12:26:23 UTC
(In reply to chen.li from comment #8)

Yes, I am using VLAN mode:

[root@qe-dell-ovs3 ~(keystone_admin)]# neutron net-show net1
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 9980b533-0105-44ac-ae40-0866154ca826 |
| name                      | net1                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 10                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 7bd4eb81-bfe0-488a-a01f-8aeb768e8389 |
| tenant_id                 | f5101e09d1e24da4ad4432f357741eb4     |
+---------------------------+--------------------------------------+
[root@qe-dell-ovs3 ~(keystone_admin)]# nova list | grep net1
| 1a832cf3-f24c-4f49-9bf3-d8bf8ac46e7d | net1-1a832cf3-f24c-4f49-9bf3-d8bf8ac46e7d  | ACTIVE  | None       | Running     | net1=172.16.41.2  |
| 5a2f8f5b-c42d-4e40-af8d-84fc576be871 | net1-5a2f8f5b-c42d-4e40-af8d-84fc576be871  | ACTIVE  | None       | Running     | net1=172.16.41.3  |


04:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
        Subsystem: Intel Corporation Ethernet 10G 2P X540-t Adapter
        Flags: bus master, fast devsel, latency 0, IRQ 52
        Memory at d5000000 (64-bit, prefetchable) [size=2M]
        Memory at d55f8000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at da000000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [e0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number a0-36-9f-ff-ff-08-2b-f8
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1d0] Access Control Services
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe

Comment 11 Jean-Tsung Hsiao 2013-11-27 14:23:57 UTC
Below is iperf data between the same two instances:

[root@host-172-16-41-2 ~]# iperf -c 172.16.41.3 -i 1 -t 30
------------------------------------------------------------
Client connecting to 172.16.41.3, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 172.16.41.2 port 55343 connected with 172.16.41.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  1.08 GBytes  9.25 Gbits/sec
[  3]  1.0- 2.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3]  2.0- 3.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3]  3.0- 4.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3]  4.0- 5.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3]  5.0- 6.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3]  6.0- 7.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3]  7.0- 8.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3]  8.0- 9.0 sec  1.00 GBytes  8.62 Gbits/sec
[  3]  9.0-10.0 sec  1.03 GBytes  8.88 Gbits/sec
[  3] 10.0-11.0 sec  1.09 GBytes  9.32 Gbits/sec
[  3] 11.0-12.0 sec  1.07 GBytes  9.22 Gbits/sec
[  3] 12.0-13.0 sec  1.07 GBytes  9.17 Gbits/sec
[  3] 13.0-14.0 sec   988 MBytes  8.29 Gbits/sec
[  3] 14.0-15.0 sec  1.09 GBytes  9.34 Gbits/sec
[  3] 15.0-16.0 sec  1.09 GBytes  9.36 Gbits/sec
[  3] 16.0-17.0 sec  1.09 GBytes  9.35 Gbits/sec
[  3] 17.0-18.0 sec  1.00 GBytes  8.61 Gbits/sec
[  3] 18.0-19.0 sec  1.09 GBytes  9.33 Gbits/sec
[  3] 19.0-20.0 sec  1.04 GBytes  8.94 Gbits/sec
[  3] 20.0-21.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3] 21.0-22.0 sec  1.08 GBytes  9.30 Gbits/sec
[  3] 22.0-23.0 sec  1.08 GBytes  9.28 Gbits/sec
[  3] 23.0-24.0 sec  1.04 GBytes  8.91 Gbits/sec
[  3] 24.0-25.0 sec  1.09 GBytes  9.38 Gbits/sec
[  3] 25.0-26.0 sec  1.09 GBytes  9.39 Gbits/sec
[  3] 26.0-27.0 sec  1.00 GBytes  8.62 Gbits/sec
[  3] 27.0-28.0 sec  1.05 GBytes  9.00 Gbits/sec
[  3] 28.0-29.0 sec  1.04 GBytes  8.90 Gbits/sec
[  3] 29.0-30.0 sec  1.09 GBytes  9.40 Gbits/sec
[  3]  0.0-30.0 sec  32.0 GBytes  9.16 Gbits/sec
[root@host-172-16-41-2 ~]#
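As an aside on the units in the iperf output above: iperf reports transfer in binary gigabytes (2^30 bytes) but bandwidth in decimal gigabits (10^9 bits), so the 30-second summary line is internally consistent; a quick check:

```python
# The summary line "0.0-30.0 sec  32.0 GBytes  9.16 Gbits/sec":
# transfer is in binary GBytes, bandwidth in decimal Gbits.
transfer_gbytes = 32.0
seconds = 30.0
gbits_per_sec = transfer_gbytes * 2**30 * 8 / seconds / 1e9
print(round(gbits_per_sec, 2))  # 9.16, matching the reported value
```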

Comment 12 chen.li 2013-11-28 01:03:30 UTC
(In reply to Jean-Tsung Hsiao from comment #11)

Hmm... then there must be something wrong in my setup.
Do you have any suggestions for me?

Just to be sure: you're using the default ixgbe driver in the kernel, right?


Thanks.
-chen

Comment 13 Jean-Tsung Hsiao 2013-11-28 01:10:51 UTC
(In reply to chen.li from comment #12)
 
 

Make sure TSO is turned on in both VMs:

To check: ethtool -k eth0
To set:   ethtool -K eth0 tso on
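To script that check, the TSO line can be pulled out of the `ethtool -k` listing; a minimal sketch with sample output inlined so it runs anywhere (on a real VM you would pipe `ethtool -k eth0` instead):

```shell
# Extract the TSO state from ethtool -k style output. The sample data
# stands in for a real "ethtool -k eth0" call on the instance.
sample='tcp-segmentation-offload: on
generic-segmentation-offload: on
large-receive-offload: off [fixed]'

tso=$(printf '%s\n' "$sample" | awk -F': ' '$1 == "tcp-segmentation-offload" { print $2 }')
echo "tso=$tso"
```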


[root@qe-dell-ovs3 jhsiao]# modinfo ixgbe
filename:       /lib/modules/2.6.32-358.123.2.openstack.el6.x86_64/kernel/drivers/net/ixgbe/ixgbe.ko
version:        3.9.15-k

Comment 14 chen.li 2013-11-28 01:19:50 UTC
In my VM:

ethtool -k eth0

Features for eth0:
rx-checksumming: off [fixed]
tx-checksumming: on
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: on
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: on
        tx-tcp6-segmentation: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: on
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]



If it is caused by the virtual network port settings, how would that explain the difference between VLAN and FLAT mode?

Comment 15 Jean-Tsung Hsiao 2013-11-28 01:36:32 UTC
When I said "VM" I meant an OpenStack instance.

The ethtool output above looks like it came from a RHEL 7.0 system.

Sorry, I have never run under FLAT mode.

NOTE: My OpenStack/OVS test bed consists of two Dell PowerEdge R820 servers. One of them is configured as the control node (including networking and compute), and the other as an extra compute node.

Comment 16 chen.li 2013-11-28 02:50:56 UTC
(In reply to Jean-Tsung Hsiao from comment #15)

Could you check the output of "ovs-dpctl dump-flows" while the iperf test is running?

 
Thanks.
-chen

Comment 17 Jean-Tsung Hsiao 2013-11-28 03:04:14 UTC
[root@qe-dell-ovs1 log]# ovs-dpctl dump-flows
in_port(20),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=55363,dst=5001), packets:255191, bytes:15774746334, used:0.000s, flags:P., actions:push_vlan(vid=1,pcp=0),9
in_port(28),eth(src=d0:67:e5:a1:3d:3c,dst=01:80:c2:00:00:00), packets:161, bytes:9660, used:1.316s, actions:drop
in_port(28),eth(src=d0:67:e5:a1:3d:3c,dst=01:80:c2:00:00:0e),eth_type(0x88cc), packets:0, bytes:0, used:never, actions:drop
in_port(17),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=55363,dst=5001)), packets:255191, bytes:15774746334, used:0.000s, flags:P., actions:pop_vlan,push_vlan(vid=10,pcp=0),28
in_port(5),eth(src=fa:16:3e:0e:9d:dc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=3,pcp=0),encap(eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.55.4,op=1,sha=fa:16:3e:0e:9d:dc,tha=ff:ff:ff:ff:ff:ff)), packets:1, bytes:42, used:3.962s, actions:drop
skb_priority(0x6),in_port(24),eth(src=fa:16:3e:9b:71:3b,dst=fa:16:3e:3a:09:c5),eth_type(0x0800),ipv4(src=172.16.41.4,dst=172.16.41.2,proto=6,tos=0x10,ttl=64,frag=no),tcp(src=59399,dst=22), packets:22, bytes:1548, used:0.272s, flags:P., actions:20
in_port(19),eth(src=fa:16:3e:3a:09:c5,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.41.2,op=1,sha=fa:16:3e:3a:09:c5,tha=ff:ff:ff:ff:ff:ff)), packets:1, bytes:42, used:2.294s, actions:drop
in_port(28),eth(src=fa:16:3e:20:55:91,dst=fa:16:3e:3a:09:c5),eth_type(0x8100),vlan(vid=10,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.3,dst=172.16.41.2,proto=6,tos=0,ttl=64,frag=no),tcp(src=5001,dst=55363)), packets:657024, bytes:43363584, used:0.000s, flags:., actions:17
in_port(17),eth(src=fa:16:3e:3a:09:c5,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.41.2,op=1,sha=fa:16:3e:3a:09:c5,tha=ff:ff:ff:ff:ff:ff)), packets:1, bytes:42, used:2.294s, actions:pop_vlan,push_vlan(vid=10,pcp=0),2,28
in_port(22),eth(src=fa:16:3e:0e:9d:dc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.55.4,op=1,sha=fa:16:3e:0e:9d:dc,tha=ff:ff:ff:ff:ff:ff), packets:1, bytes:42, used:3.962s, actions:push_vlan(vid=3,pcp=0),4,18,pop_vlan,26,push_vlan(vid=3,pcp=0),9,6,7,set(tunnel(tun_id=0x37,src=10.1.0.101,dst=10.1.0.103,tos=0x0,ttl=64,flags(df,key))),pop_vlan,11
in_port(8),eth(src=fa:16:3e:3a:09:c5,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.41.2,op=1,sha=fa:16:3e:3a:09:c5,tha=ff:ff:ff:ff:ff:ff)), packets:1, bytes:42, used:2.294s, actions:drop
in_port(20),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:9b:71:3b),eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.4,proto=17,tos=0,ttl=64,frag=no),udp(src=68,dst=67), packets:0, bytes:0, used:never, actions:24
in_port(22),eth(src=fa:16:3e:0e:9d:dc,dst=fa:16:3e:2c:95:c3),eth_type(0x0800),ipv4(src=172.16.55.4,dst=172.16.55.3,proto=17,tos=0,ttl=64,frag=no),udp(src=68,dst=67), packets:0, bytes:0, used:never, actions:26
in_port(9),eth(src=fa:16:3e:20:55:91,dst=fa:16:3e:3a:09:c5),eth_type(0x8100),vlan(vid=10,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.3,dst=172.16.41.2,proto=6,tos=0,ttl=64,frag=no),tcp(src=5001,dst=55363)), packets:657024, bytes:43363584, used:0.000s, flags:., actions:pop_vlan,20
in_port(5),eth(src=fa:16:3e:3a:09:c5,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.41.2,op=1,sha=fa:16:3e:3a:09:c5,tha=ff:ff:ff:ff:ff:ff)), packets:1, bytes:42, used:2.294s, actions:drop
in_port(8),eth(src=fa:16:3e:0e:9d:dc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=3,pcp=0),encap(eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.55.4,op=1,sha=fa:16:3e:0e:9d:dc,tha=ff:ff:ff:ff:ff:ff)), packets:1, bytes:42, used:3.962s, actions:drop
in_port(20),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:9b:71:3b),eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.4,proto=6,tos=0x10,ttl=64,frag=no),tcp(src=22,dst=59399), packets:19, bytes:3254, used:0.272s, flags:P., actions:24
in_port(20),eth(src=fa:16:3e:3a:09:c5,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.41.2,op=1,sha=fa:16:3e:3a:09:c5,tha=ff:ff:ff:ff:ff:ff), packets:1, bytes:42, used:2.294s, actions:push_vlan(vid=1,pcp=0),4,18,9,6,7,pop_vlan,24
in_port(26),eth(src=fa:16:3e:2c:95:c3,dst=fa:16:3e:0e:9d:dc),eth_type(0x0800),ipv4(src=172.16.55.3,dst=172.16.55.4,proto=17,tos=0,ttl=64,frag=no),udp(src=67,dst=68), packets:0, bytes:0, used:never, actions:22
in_port(24),eth(src=fa:16:3e:9b:71:3b,dst=fa:16:3e:3a:09:c5),eth_type(0x0800),ipv4(src=172.16.41.4,dst=172.16.41.2,proto=17,tos=0,ttl=64,frag=no),udp(src=67,dst=68), packets:0, bytes:0, used:never, actions:20
in_port(19),eth(src=fa:16:3e:0e:9d:dc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=3,pcp=0),encap(eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.55.4,op=1,sha=fa:16:3e:0e:9d:dc,tha=ff:ff:ff:ff:ff:ff)), packets:1, bytes:42, used:3.962s, actions:drop
in_port(17),eth(src=fa:16:3e:0e:9d:dc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=3,pcp=0),encap(eth_type(0x0806),arp(sip=0.0.0.0,tip=172.16.55.4,op=1,sha=fa:16:3e:0e:9d:dc,tha=ff:ff:ff:ff:ff:ff)), packets:1, bytes:42, used:3.962s, actions:drop
[root@qe-dell-ovs1 log]#

Comment 18 chen.li 2013-11-28 03:28:57 UTC
The actions you have are not exactly the same as the ones on my system:

in_port(20),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=55363,dst=5001), packets:255191, bytes:15774746334, used:0.000s, flags:P., actions:push_vlan(vid=1,pcp=0),9

in_port(17),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=55363,dst=5001)), packets:255191, bytes:15774746334, used:0.000s, flags:P., actions:pop_vlan,push_vlan(vid=10,pcp=0),28


If my analysis is correct:

1. This can be checked with "ovs-dpctl show":
    port 17 => your physical NIC port
    port 28 => phy_${ovs_br_for_physical_NIC_port}
    port 20 => int_${ovs_br_for_physical_NIC_port}, on OVS bridge br-int
    port 9  => the instance-specific port, on OVS bridge br-int

2. When a packet comes in on your physical NIC port, your setup has an extra action: "actions:pop_vlan,push_vlan".

    Apparently the push_vlan did not really take effect, but the pop operation did, and that is why packets have eth_type 0x0800 when they reach port 20.
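The comparison above (spotting the extra pop_vlan/push_vlan on the ingress flow) can be done mechanically by parsing the `ovs-dpctl dump-flows` lines; a rough sketch, assuming the textual format shown in this report:

```python
import re

# Extract in_port, the packet/byte counters and the action list from
# one "ovs-dpctl dump-flows" line (format as printed in this report).
def parse_flow(line):
    port = int(re.search(r"in_port\((\d+)\)", line).group(1))
    packets = int(re.search(r"packets:(\d+)", line).group(1))
    nbytes = int(re.search(r"bytes:(\d+)", line).group(1))
    actions = re.search(r"actions:(\S+)", line).group(1)
    return port, packets, nbytes, actions

# Second flow line quoted above, abbreviated in the middle.
line = ("in_port(17),eth_type(0x8100),vlan(vid=1,pcp=0), "
        "packets:255191, bytes:15774746334, used:0.000s, flags:P., "
        "actions:pop_vlan,push_vlan(vid=10,pcp=0),28")
port, packets, nbytes, actions = parse_flow(line)
print(port, actions)  # 17 pop_vlan,push_vlan(vid=10,pcp=0),28
```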



What is the output of:

ovs-ofctl dump-flows br-int | grep "dl_vlan=${net_vlan_tag}"

Do you have any specific configurations for Open vSwitch?

Comment 19 Jean-Tsung Hsiao 2013-11-28 03:42:45 UTC
[root@qe-dell-ovs1 log]# ovs-dpctl show
system@ovs-system:
	lookups: hit:1270399060 missed:58239 lost:0
	flows: 2
	port 0: ovs-system (internal)
	port 1: br-ex (internal)
	port 2: br-link1 (internal)
	port 3: br-link3 (internal)
	port 4: int-br-link3
	port 5: phy-br-link3
	port 6: br-int (internal)
	port 7: int-br-link2
	port 8: phy-br-link2
	port 9: int-br-link1
	port 10: br-tun (internal)
	port 11: gre_system (gre: df_default=false, ttl=0)
	port 12: p6p2
	port 13: br-link4 (internal)
	port 14: p2p2
	port 15: br-link2 (internal)
	port 16: ovsbr0 (internal)
	port 17: phy-br-link1
	port 18: int-br-link4
	port 19: phy-br-link4
	port 20: qvo73c45009-87
	port 21: qvodb040dd3-e6
	port 22: qvo22ee04a5-82
	port 23: tap7770593b-77 (internal)
	port 24: tapa8095690-75 (internal)
	port 25: tap17a002ae-03 (internal)
	port 26: tap099744a3-72 (internal)
	port 27: tap95cff40a-81 (internal)
	port 28: p6p1

NOTE: p6p1 is the active ixgbe on the iperf client side.

No specific configurations for openvswitch!

Comment 20 chen.li 2013-11-28 03:50:04 UTC
Oh... sorry, my mistake.

You ran "ovs-dpctl dump-flows" on the iperf client side.

Could you run "ovs-dpctl dump-flows" on the iperf server side?

Thanks.
-chen

Comment 21 Jean-Tsung Hsiao 2013-11-28 03:52:47 UTC
[root@qe-dell-ovs3 yum.repos.d]# ovs-dpctl dump-flows
in_port(15),eth(src=fa:16:3e:20:55:91,dst=fa:16:3e:3a:09:c5),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.3,dst=172.16.41.2,proto=6,tos=0x10,ttl=64,frag=no),tcp(src=22,dst=58626)), packets:0, bytes:0, used:never, actions:pop_vlan,push_vlan(vid=10,pcp=0),21
in_port(15),eth(src=fa:16:3e:20:55:91,dst=fa:16:3e:3a:09:c5),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.3,dst=172.16.41.2,proto=6,tos=0,ttl=64,frag=no),tcp(src=5001,dst=55369)), packets:96415, bytes:6363390, used:0.000s, flags:., actions:pop_vlan,push_vlan(vid=10,pcp=0),21
in_port(21),eth(src=d0:67:e5:a1:3d:3c,dst=01:80:c2:00:00:00), packets:8981, bytes:538860, used:0.807s, actions:drop
in_port(21),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x8100),vlan(vid=10,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=55369,dst=5001)), packets:119391, bytes:2831388494, used:0.000s, flags:P., actions:15
in_port(18),eth(src=fa:16:3e:20:55:91,dst=fa:16:3e:3a:09:c5),eth_type(0x0800),ipv4(src=172.16.41.3,dst=172.16.41.2,proto=6,tos=0x10,ttl=64,frag=no),tcp(src=22,dst=58626), packets:0, bytes:0, used:never, actions:push_vlan(vid=1,pcp=0),14
in_port(21),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x8100),vlan(vid=10,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0x10,ttl=64,frag=no),tcp(src=58626,dst=22)), packets:0, bytes:0, used:never, actions:15
in_port(18),eth(src=fa:16:3e:20:55:91,dst=fa:16:3e:3a:09:c5),eth_type(0x0800),ipv4(src=172.16.41.3,dst=172.16.41.2,proto=6,tos=0,ttl=64,frag=no),tcp(src=5001,dst=55369), packets:96415, bytes:6363390, used:0.000s, flags:., actions:push_vlan(vid=1,pcp=0),14
in_port(14),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x8100),vlan(vid=10,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0x10,ttl=64,frag=no),tcp(src=58626,dst=22)), packets:0, bytes:0, used:never, actions:pop_vlan,18
in_port(14),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x8100),vlan(vid=10,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=55369,dst=5001)), packets:119391, bytes:2831388494, used:0.000s, flags:P., actions:pop_vlan,18

Comment 22 Jean-Tsung Hsiao 2013-11-28 03:54:23 UTC
[root@qe-dell-ovs3 yum.repos.d]# ovs-dpctl show
system@ovs-system:
	lookups: hit:3312544882 missed:44322 lost:0
	flows: 2
	port 0: ovs-system (internal)
	port 1: br-link1 (internal)
	port 2: br-int (internal)
	port 3: br-link3 (internal)
	port 4: p6p2
	port 5: br-link4 (internal)
	port 6: br-tun (internal)
	port 7: gre_system (gre: df_default=false, ttl=0)
	port 8: p2p2
	port 9: br-link2 (internal)
	port 10: int-br-link3
	port 11: phy-br-link3
	port 12: int-br-link2
	port 13: phy-br-link2
	port 14: int-br-link1
	port 15: phy-br-link1
	port 16: int-br-link4
	port 17: phy-br-link4
	port 18: qvo5354beb3-7f
	port 19: qvoa57b11dc-ee
	port 20: qvoc5c6f77e-0e
	port 21: p6p1

Comment 23 chen.li 2013-11-28 04:02:42 UTC
(In reply to Jean-Tsung Hsiao from comment #22)
> [root@qe-dell-ovs3 yum.repos.d]# ovs-dpctl show
> system@ovs-system:
> 	lookups: hit:3312544882 missed:44322 lost:0
> 	flows: 2
> 	port 0: ovs-system (internal)
> 	port 1: br-link1 (internal)
> 	port 2: br-int (internal)
> 	port 3: br-link3 (internal)
> 	port 4: p6p2
> 	port 5: br-link4 (internal)
> 	port 6: br-tun (internal)
> 	port 7: gre_system (gre: df_default=false, ttl=0)
> 	port 8: p2p2
> 	port 9: br-link2 (internal)
> 	port 10: int-br-link3
> 	port 11: phy-br-link3
> 	port 12: int-br-link2
> 	port 13: phy-br-link2
> 	port 14: int-br-link1
> 	port 15: phy-br-link1
> 	port 16: int-br-link4
> 	port 17: phy-br-link4
> 	port 18: qvo5354beb3-7f
> 	port 19: qvoa57b11dc-ee
> 	port 20: qvoc5c6f77e-0e
> 	port 21: p6p1




Thanks.

This just confirms my issue, but I still don't know why it is happening...


in_port(14),eth(src=fa:16:3e:3a:09:c5,dst=fa:16:3e:20:55:91),eth_type(0x8100),vlan(vid=10,pcp=0),encap(eth_type(0x0800),ipv4(src=172.16.41.2,dst=172.16.41.3,proto=6,tos=0,ttl=64,frag=no),tcp(src=55369,dst=5001)), packets:119391, bytes:2831388494, used:0.000s, flags:P., actions:pop_vlan,18

=> average packet size = 2831388494 bytes / 119391 packets ≈ 23715 bytes
=> in my setup, the packet size here is only 1500 bytes

Packet segmentation => high CPU% => low bandwidth

But why does segmentation happen??
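To make the arithmetic explicit, here is the packet-size calculation from the flow counters in the "ovs-dpctl dump-flows" output above (a quick sketch; the numbers are taken straight from that output):

```python
# Flow counters from the "ovs-dpctl dump-flows" output above
bytes_total = 2831388494
packets = 119391

avg_packet_size = bytes_total / packets
print(f"average packet size: {avg_packet_size:.0f} bytes")

# ~23715 bytes per packet is far above the 1500-byte MTU, which means
# TSO/GSO is aggregating segments on this (working) host. On the slow
# host the same flow shows ~1500-byte packets, i.e. no aggregation:
# every MTU-sized segment is handled individually, burning CPU.
```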

Comment 24 Jean-Tsung Hsiao 2013-11-28 04:08:29 UTC
NOTE:

We found the following issue and fixed it for 358.118.1. I verified it on 8/23/2013.

Description of problem: Openstack: 10G link netperf/TCP_STREAM throughput degraded to 1.9G between VMs across two hosts.

[...]

The root cause of this bug has been found: the veth driver was missing vlan_features, so when OpenStack used VLAN-tagged traffic, TSO wasn't enabled, causing the performance issue.
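One way to spot this condition from user space is to compare offload settings on the interfaces along the path with "ethtool -k"; if tcp-segmentation-offload shows "off" on a veth or VLAN interface, the host segments every packet in software. A minimal sketch (the interface name qvoXXXX is a placeholder; the parsing runs against a captured sample of "ethtool -k" output so the snippet is self-contained):

```shell
#!/bin/sh
# On a live host you would run something like:
#   ethtool -k qvoXXXX | grep -i segmentation
# Here we parse a captured sample of that output instead.
sample='tcp-segmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on'

# Extract the TSO state from the sample
tso=$(printf '%s\n' "$sample" | awk -F': ' '/^tcp-segmentation-offload/ {print $2}')
echo "TSO: $tso"

if [ "$tso" = "off" ]; then
    echo "TSO disabled: expect software segmentation and high CPU usage"
fi
```

Checking this on each hop of the Neutron path (tap device, qvb/qvo veth pair, physical NIC) narrows down where segmentation starts.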

Comment 25 chen.li 2013-11-28 04:21:15 UTC
(In reply to Jean-Tsung Hsiao from comment #24)
> NOTE:
> 
> We found the following issue and fixed it for 358.118.1. I verified it on
> 8/23/2013.
> 
> Description of problem: Openstack: 10G link netperf/TCP_STREAM throughput
> degraded to 1.9G between VMs across two hosts.
> 
> [...]
> 
> The root cause of this bug has been found: the veth driver was missing
> vlan_features, so when OpenStack used VLAN-tagged traffic, TSO wasn't
> enabled, causing the performance issue.


Hmm...

I just upgraded another compute node's kernel...
And it does work!!!

I don't know why the compute node I was always working with didn't work...
Anyway, thanks a lot!!!

Can you share the patch that fixed the issue?


Thanks.
-chen

Comment 26 Jean-Tsung Hsiao 2013-11-28 04:25:18 UTC
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 4bd89f1..b57b82c 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -310,6 +310,7 @@  static void veth_setup(struct net_device *dev)
 	dev->ethtool_ops = &veth_ethtool_ops;
 	dev->features |= NETIF_F_LLTX;
 	dev->features |= VETH_FEATURES;
+	dev->vlan_features = dev->features;
 	dev->destructor = veth_dev_free;
 }

Comment 27 chen.li 2013-11-28 04:29:22 UTC
(In reply to Jean-Tsung Hsiao from comment #26)
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 4bd89f1..b57b82c 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -310,6 +310,7 @@  static void veth_setup(struct net_device *dev)
>  	dev->ethtool_ops = &veth_ethtool_ops;
>  	dev->features |= NETIF_F_LLTX;
>  	dev->features |= VETH_FEATURES;
> +	dev->vlan_features = dev->features;
>  	dev->destructor = veth_dev_free;
>  }


May I ask a bit more about how you knew to look at the veth driver when you saw the issue?

Comment 28 Jean-Tsung Hsiao 2013-11-28 04:37:08 UTC
(In reply to chen.li from comment #27)
> (In reply to Jean-Tsung Hsiao from comment #26)
> > diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> > index 4bd89f1..b57b82c 100644
> > --- a/drivers/net/veth.c
> > +++ b/drivers/net/veth.c
> > @@ -310,6 +310,7 @@  static void veth_setup(struct net_device *dev)
> >  	dev->ethtool_ops = &veth_ethtool_ops;
> >  	dev->features |= NETIF_F_LLTX;
> >  	dev->features |= VETH_FEATURES;
> > +	dev->vlan_features = dev->features;
> >  	dev->destructor = veth_dev_free;
> >  }
> 
> 
> May I ask a bit more about how you knew to look at the veth driver when you
> saw the issue?

As a tester, I pointed out that this particular issue only happened in the VLAN case, as you saw in your situation. One of our developers then identified the root cause and fixed it.

Comment 29 chen.li 2013-11-28 04:39:37 UTC
(In reply to Jean-Tsung Hsiao from comment #28)
> 
> As a tester, I pointed out that this particular issue only happened in the
> VLAN case, as you saw in your situation. One of our developers then
> identified the root cause and fixed it.

OK.

Anyway, thanks a lot for your help.

