Bug 1303182 - OVS bridge on a vhostuser guest cannot flow traffic from eth0 to eth1 when the guest is configured with mq=4
Summary: OVS bridge on a vhostuser guest cannot flow traffic from eth0 to eth1 when the guest is configured with mq=4
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch-dpdk
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Flavio Leitner
QA Contact: Jean-Tsung Hsiao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-29 19:41 UTC by Jean-Tsung Hsiao
Modified: 2016-12-23 19:40 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-23 19:40:08 UTC
Target Upstream Version:
Embargoed:



Description Jean-Tsung Hsiao 2016-01-29 19:41:18 UTC
Description of problem: OVS bridge on a vhostuser guest cannot flow traffic from eth0 to eth1 when the guest is configured with mq=4

*** mq=4 ***
[root@localhost jhsiao]# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	4
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	4

[root@localhost jhsiao]# ethtool -l eth1
Channel parameters for eth1:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	4
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	4
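
*** The "Combined: 4" settings above were produced with ethtool -L (confirmed in comment 4); a minimal sketch of that step, assuming it was run inside the guest: ***

# Raise each NIC's queue count to match mq=4.
ethtool -L eth0 combined 4
ethtool -L eth1 combined 4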


*** guest's OVS config ***
[root@localhost jhsiao]# . ovs-config-loop-back-at-eth0-eth1.sh
891146f3-daa1-4e29-940d-58d12770b4ef
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "eth1"
            Interface "eth1"
    ovs_version: "2.4.0"
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=0.008s, table=0, n_packets=0, n_bytes=0, idle_age=0, in_port=1 actions=output:2
 cookie=0x0, duration=0.006s, table=0, n_packets=0, n_bytes=0, idle_age=0, in_port=2 actions=output:1

*** ovs-ofctl dump-ports ovsbr0 shows traffic flowing into eth0, ***
*** but no traffic flowing out of eth1 --- see right below ***
[root@localhost jhsiao]# ovs-ofctl dump-ports ovsbr0
OFPST_PORT reply (xid=0x2): 3 ports
  port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=0, bytes=0, drop=0, errs=0, coll=0
  port  1: rx pkts=442211757, bytes=26532705420, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=9, bytes=662, drop=0, errs=0, coll=0
  port  2: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=0, bytes=0, drop=0, errs=0, coll=0

*** sar -n DEV output also shows no traffic out of eth1 ***
01:44:23 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
01:44:25 PM    ovsbr0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:44:25 PM ovs-system      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:44:25 PM      eth0 2403832.50      0.00 140849.56      0.00      0.00      0.00      0.00
01:44:25 PM      eth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:44:25 PM      eth2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
 

Version-Release number of selected component (if applicable):
*** Host ***
[root@netqe5 dpdk-multiques]# rpm -qa | grep openvswitch
openvswitch-2.5.90-1.el7.x86_64
[root@netqe5 dpdk-multiques]# rpm -qa | grep dpdk
dpdk-2.2.0-1.el7.x86_64
dpdk-tools-2.2.0-1.el7.x86_64

[root@netqe5 dpdk-multiques]# rpm -qa | grep qemu
qemu-img-rhev-2.3.0-31.el7_2.1.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.1.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.x86_64
qemu-kvm-common-rhev-2.3.0-31.el7_2.1.x86_64
ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch

[root@netqe5 dpdk-multiques]# rpm -qa | grep libvirt
libvirt-daemon-driver-network-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.2.x86_64
libvirt-client-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.2.x86_64
libvirt-1.2.17-13.el7_2.2.x86_64

*** guest ***
openvswitch-2.4.0-1.el7.x86_64
kernel 3.10.0-229.el7.x86_64

How reproducible: reproducible


Steps to Reproduce:
1. Host OVS script for multiple queues vhostuser configuration
[root@netqe5 dpdk-multiques]# cat ovs_config_dpdk0_vhost0_vhost1_dpdk1.sh
ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xaa0000
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 \
    -- set interface dpdk0 type=dpdk ofport_request=10
ovs-vsctl add-port ovsbr0 dpdk1 \
    -- set interface dpdk1 type=dpdk ofport_request=11

ovs-vsctl add-port ovsbr0 vhost0 \
    -- set interface vhost0 type=dpdkvhostuser ofport_request=20
ovs-vsctl add-port ovsbr0 vhost1 \
    -- set interface vhost1 type=dpdkvhostuser ofport_request=21

chown qemu /var/run/openvswitch/vhost0
chown qemu /var/run/openvswitch/vhost1
ll /var/run/openvswitch/vhost*

ovs-ofctl del-flows ovsbr0
ovs-ofctl add-flow ovsbr0 in_port=10,actions=output:20
ovs-ofctl add-flow ovsbr0 in_port=21,actions=output:11
ovs-ofctl dump-flows ovsbr0
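
A note on the two other_config knobs above: pmd-cpu-mask=0xaa0000 sets bits 17, 19, 21, and 23, i.e. the four PMD cores that later show up as core_id 17/19/21/23 in the pmd-stats-show output (comment 7), and n-dpdk-rxqs=4 requests four RX queues per DPDK port, matching the guest's mq=4.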

2. Guest OVS script
[root@localhost jhsiao]# cat ovs-config-loop-back-at-eth0-eth1.sh
ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eth0
ovs-vsctl add-port ovsbr0 eth1
ovs-ofctl del-flows ovsbr0
ovs-ofctl add-flow ovsbr0 "in_port=1 actions=2"
ovs-ofctl add-flow ovsbr0 "in_port=2 actions=1"
ovs-vsctl show
ovs-ofctl dump-flows ovsbr0
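
To check that the loopback actually passes traffic, the same tools used elsewhere in this report apply; a minimal sketch, run inside the guest while the generator is sending:

# Per-interface rates: eth1 txpck/s should track eth0 rxpck/s if the flows work.
sar -n DEV 2 5
# Per-port OpenFlow counters on the guest bridge.
ovs-ofctl dump-ports ovsbr0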


Actual results:

On the guest, traffic goes into eth0 but does not flow out of eth1 as configured.

Expected results:

On the guest, traffic going into eth0 should flow out of eth1.

Additional info:

Comment 2 Jean-Tsung Hsiao 2016-02-04 16:22:47 UTC
Update Notes

* Note #1
After adding the following statement
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-txqs=4

the OVS traffic on the guest flows from eth0 to eth1 on the very first Xena test after a reboot, but the second test usually fails.

* Note #2

With 64-byte traffic from Xena using multiple_streams --- 4, 16, or 64 sequences --- fewer than 3 Mpps reached the eth0 port, and about 1.5 Mpps returned from the eth1 port.

See below for some test outputs --- Xena results and sar -n DEV outputs:


Xena multiple_streams with seqs=4

[jhsiao@jhsiao XenaScripts]$ python multiple_streams
INFO:root:XenaSocket: Connected
INFO:root:XenaManager: Logged succefully
INFO:root:XenaPort: 1/0 starting traffic
INFO:root:XenaPort: 1/0 stopping traffic
Average: 1507815.00 pps

Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
Average:       ovsbr0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    ovs-system      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         eth0 2878811.90      0.00 168680.38      0.00      0.00      0.00      0.00
Average:         eth1      0.00 1507869.50      0.00  88351.73      0.00      0.00      0.00
Average:         eth2      0.50      0.50      0.03      0.31      0.00      0.00      0.00
Average:           lo      6.40      6.40      0.53      0.53      0.00      0.00      0.00
[root@localhost jhsiao]#

Xena multiple_streams with seqs=16

[jhsiao@jhsiao XenaScripts]$ python multiple_streams
INFO:root:XenaSocket: Connected
INFO:root:XenaManager: Logged succefully
INFO:root:XenaPort: 1/0 starting traffic
INFO:root:XenaPort: 1/0 stopping traffic
Average: 1507633.00 pps
[jhsiao@jhsiao XenaScripts]$

Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
Average:       ovsbr0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    ovs-system      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         eth0 2878213.05      0.00 168645.30      0.00      0.00      0.00      0.00
Average:         eth1      0.00 1507631.70      0.00  88337.79      0.00      0.00      0.00
Average:         eth2      0.50      0.50      0.03      0.31      0.00      0.00      0.00
Average:           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00

Comment 3 Flavio Leitner 2016-02-15 16:31:23 UTC
Jean,

Looking at ``2. Guest OVS script´´ it seems you forgot to use ethtool -L combined to increase the number of queues inside the guest.  Could you confirm?

Also, there is no such parameter called ``n-dpdk-txqs´´.  The number of TX queues is the number of cores + 1.
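
(A worked check of that rule, as an illustration only: if "cores" here means the PMD cores pinned by pmd-cpu-mask=0xaa0000 --- bits 17, 19, 21, and 23, i.e. four cores --- then it would give 4 + 1 = 5 TX queues per port.)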

Basically if you forgot to increase the number of RX queues in the guest, OVS will push packets to disabled queues and the traffic is never received by the guest.  There is a patch under review in upstream to fix that behavior:
http://openvswitch.org/pipermail/dev/2016-February/066066.html

Please confirm if that is the case you are reporting.

Comment 4 Jean-Tsung Hsiao 2016-02-15 16:37:31 UTC
(In reply to Flavio Leitner from comment #3)
> Jean,
> 
> Looking at ``2. Guest OVS script´´ it seems you forgot to use ethtool -L
> combined to increase the number of queues inside the guest.  Could you
> confirm?

Please take a look at the description part. I did set eth0 and eth1 to "combined 4".

> 
> Also, there is no such parameter called ``n-dpdk-txqs´´.  The number of TX
> queues is the number of cores + 1.
> 
> Basically if you forgot to increase the number of RX queues in the guest,
> OVS will push packets to disabled queues and the traffic is never received
> by the guest.  There is a patch under review in upstream to fix that
> behavior:
> http://openvswitch.org/pipermail/dev/2016-February/066066.html
> 
> Please confirm if that is the case you are reporting.

Comment 5 Jean-Tsung Hsiao 2016-02-18 02:16:58 UTC
Updated openvswitch from openvswitch-2.5.90-1 to openvswitch-2.5.0-0.11430.git61c4e394.1.el7.centos.x86_64. But the same kind of issues still exist --- sometimes Xena got zero traffic back; sometimes it got partial traffic back.

Comment 6 Flavio Leitner 2016-02-23 16:30:26 UTC
Do you see multiple PMD threads processing packets?

Comment 7 Jean-Tsung Hsiao 2016-02-23 17:47:56 UTC
(In reply to Flavio Leitner from comment #6)
> Do you see multiple PMD threads processing packets?

 
[root@netqe-infra01 XenaScripts]# python multiple_streams
INFO:root:XenaSocket: Connected
INFO:root:XenaManager: Logged succefully
INFO:root:XenaPort: 1/0 starting traffic
INFO:root:XenaPort: 1/0 stopping traffic
Average: 1216622.00 pps


Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
Average:       ovsbr0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    ovs-system      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         eth0 4382668.00      0.00 256796.95      0.00      0.00      0.00      0.00
Average:         eth1      0.00 1093064.93      0.00  64046.77      0.00      0.00      0.00
Average:         eth2      0.80      0.30      0.04      0.20      0.00      0.00      0.00
Average:           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00

[root@netqe5 dpdk-multiques]# ovs-appctl dpif-netdev/pmd-stats-clear;sleep 10; ovs-appctl dpif-netdev/pmd-stats-show
main thread:
	emc hits:0
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:299871 (100.00%)
	processing cycles:0 (0.00%)
pmd thread numa_id 1 core_id 19:
	emc hits:21866600
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:14157192733 (53.63%)
	processing cycles:12242493037 (46.37%)
	avg cycles per packet: 1207.31 (26399685770/21866600)
	avg processing cycles per packet: 559.87 (12242493037/21866600)
pmd thread numa_id 1 core_id 17:
	emc hits:11662492
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:15575272016 (65.07%)
	processing cycles:8362733875 (34.93%)
	avg cycles per packet: 2052.56 (23938005891/11662492)
	avg processing cycles per packet: 717.06 (8362733875/11662492)
pmd thread numa_id 1 core_id 21:
	emc hits:11661848
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:447242144 (1.32%)
	processing cycles:33472495862 (98.68%)
	avg cycles per packet: 2908.61 (33919738006/11661848)
	avg processing cycles per packet: 2870.26 (33472495862/11661848)
pmd thread numa_id 1 core_id 23:
	emc hits:12390784
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:474821597 (1.40%)
	processing cycles:33440211145 (98.60%)
	avg cycles per packet: 2737.12 (33915032742/12390784)
	avg processing cycles per packet: 2698.80 (33440211145/12390784)
[root@netqe5 dpdk-multiques]#

Comment 8 Jean-Tsung Hsiao 2016-02-23 17:54:01 UTC
Another set of data

[root@netqe-infra01 XenaScripts]# python multiple_streams
INFO:root:XenaSocket: Connected
INFO:root:XenaManager: Logged succefully
INFO:root:XenaPort: 1/0 starting traffic
INFO:root:XenaPort: 1/0 stopping traffic
Average: 1262167.00 pps
[root@netqe-infra01 XenaScripts]# grep range multiple_streams
    m1_s1_p0.set_modifier_range(1, 1, 64)


Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
Average:       ovsbr0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    ovs-system      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         eth0 5429345.27      0.00 318125.70      0.00      0.00      0.00      0.00
Average:         eth1      0.00 1360600.33      0.00  79722.68      0.00      0.00      0.00
Average:         eth2      0.83      0.33      0.05      0.21      0.00      0.00      0.00
Average:           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00

[root@netqe5 dpdk-multiques]# ovs-appctl dpif-netdev/pmd-stats-clear;sleep 10; ovs-appctl dpif-netdev/pmd-stats-show
main thread:
	emc hits:0
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:287675 (100.00%)
	processing cycles:0 (0.00%)
pmd thread numa_id 1 core_id 19:
	emc hits:29796791
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:6192599692 (18.60%)
	processing cycles:27102726061 (81.40%)
	avg cycles per packet: 1117.41 (33295325753/29796791)
	avg processing cycles per packet: 909.59 (27102726061/29796791)
pmd thread numa_id 1 core_id 17:
	emc hits:15890504
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:863638655 (2.55%)
	processing cycles:33039673438 (97.45%)
	avg cycles per packet: 2133.56 (33903312093/15890504)
	avg processing cycles per packet: 2079.21 (33039673438/15890504)
pmd thread numa_id 1 core_id 21:
	emc hits:15891680
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:597911402 (1.76%)
	processing cycles:33326899237 (98.24%)
	avg cycles per packet: 2134.75 (33924810639/15891680)
	avg processing cycles per packet: 2097.13 (33326899237/15891680)
pmd thread numa_id 1 core_id 23:
	emc hits:16884852
	megaflow hits:0
	miss:0
	lost:0
	polling cycles:12371539986 (52.70%)
	processing cycles:11103618016 (47.30%)
	avg cycles per packet: 1390.31 (23475158002/16884852)
	avg processing cycles per packet: 657.61 (11103618016/16884852)

[root@netqe5 dpdk-multiques]# rpm -qi openvswitch
Name        : openvswitch
Version     : 2.5.0
Release     : 0.11443.git575ceed7.1.el7.centos
Architecture: x86_64
Install Date: Tue 23 Feb 2016 09:46:39 AM EST
Group       : Unspecified
Size        : 14260297
License     : ASL 2.0 and LGPLv2+ and SISSL
Signature   : RSA/SHA1, Mon 22 Feb 2016 04:58:15 AM EST, Key ID 32e9738e99b57f82
Source RPM  : openvswitch-2.5.0-0.11443.git575ceed7.1.el7.centos.src.rpm
Build Date  : Mon 22 Feb 2016 04:57:50 AM EST
Build Host  : copr-builder-81144835.novalocal
Relocations : (not relocatable)
Vendor      : Fedora Project COPR (pmatilai/dpdk)
URL         : http://openvswitch.org
Summary     : Open vSwitch daemon/database/utilities
Description :
Open vSwitch provides standard network bridging functions and
support for the OpenFlow protocol for remote per-flow control of
traffic.

Comment 9 Jean-Tsung Hsiao 2016-02-24 02:47:49 UTC
Xena sent out 14.9 Mpps, but only got 1.3 Mpps back. Below is a breakdown based on "ovs-ofctl dump-ports". The majority of the loss happened at dpdk0 ingress.

Xena to Host
port0 sent 14.9 Mpps
-> dpdk0(RX=5.6 Mpps, RX drops=9.3 Mpps)
-> vhost0(TX=5.5 Mpps, TX drops=0.1 Mpps)

Host to Vhostuser
-> eth0(RX=5.5 Mpps)
-> eth1(TX=1.3 Mpps)
NOTE: 4.2 Mpps go missing here, but the loss does not show up in "ovs-ofctl dump-ports".

Vhostuser to Host
-> vhost1(RX=1.3 Mpps)
-> dpdk1(TX=1.3 Mpps)
-> Xena port1

Comment 10 Jean-Tsung Hsiao 2016-03-08 15:11:11 UTC
Hi Flavio,

For the "PMD cores locked forever" issue, can you just use this BZ instead of creating another one?

Thanks!

Jean

Comment 11 Flavio Leitner 2016-03-08 17:07:43 UTC
Most probably. Could you add a brief summary of the issue here as well?

Comment 12 Jean-Tsung Hsiao 2016-03-09 16:33:17 UTC
For the past six weeks the throughput rate has been very unpredictable, as mentioned above. We have identified the following two issues to be addressed in this BZ:

* During our debugging, Flavio identified a possible root cause for the dynamic traffic behavior --- lock on lock. See his assertion below:

Feb 26 15:54:22 <fbl>   # grep 'DBG|TX on queue mapping' /var/log/openvswitch/ovs-vswitchd.log | awk '{ print $8
Feb 26 15:54:23 <fbl>    }' |  sort -u
Feb 26 15:54:23 <fbl>   qid=1
Feb 26 15:54:23 <fbl>   qid=2
Feb 26 15:54:24 <fbl>   qid=3
Feb 26 15:54:30 <fbl>   qid=0 never happened
Feb 26 15:55:06 <fbl>   this is either mq mapping issue or before that
Feb 26 15:59:24 <fbl>   EAL: Error disabling MSI-X interrupts for fd 21
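
The grep/awk pipeline above, re-joined onto one line for readability:
grep 'DBG|TX on queue mapping' /var/log/openvswitch/ovs-vswitchd.log | awk '{ print $8 }' | sort -u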

Feb 26 16:24:43 <fbl>   I think I have a possible root cause
Feb 26 16:24:57 <jhsiao>        great
Feb 26 16:24:58 <fbl>   when we change the number of queues, we also change the mapping which is used for locking
Feb 26 16:25:19 <fbl>   it seems we can lock on a lock, then queue changes and we unlock another lock
Feb 26 16:25:39 <fbl>   leaving the original one locked forever

* Another issue is that the guest could stop returning traffic to the host via the OVS/eth1 port. This could happen right away or after long-duration testing. The "ovs-ofctl dump-ports" data indicated no drops or errs at either the eth0 or eth1 port.

Comment 13 Flavio Leitner 2016-12-21 13:31:20 UTC
Jean,

Could you please try again with 2.5.0-22?

Comment 14 Jean-Tsung Hsiao 2016-12-22 06:55:24 UTC
(In reply to Flavio Leitner from comment #13)
> Jean,
> 
> Could you please try again with 2.5.0-22?

Ok, I'll give it a try.

Thanks!

Jean

Comment 15 Jean-Tsung Hsiao 2016-12-22 15:35:35 UTC
(In reply to Flavio Leitner from comment #13)
> Jean,
> 
> Could you please try again with 2.5.0-22?

Yes, the issue does not exist with 2.5.0-22.

Thanks

Jean

Comment 16 Flavio Leitner 2016-12-23 19:40:08 UTC
Thanks Jean!

