Bug 1451978 - Latest virtio driver (network) for Windows drops lots of packets
Summary: Latest virtio driver (network) for Windows drops lots of packets
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virtio-win
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: ybendito
QA Contact: Yu Wang
URL:
Whiteboard:
Depends On:
Blocks: 1471073 1473046
 
Reported: 2017-05-18 04:29 UTC by Marcus West
Modified: 2020-09-10 10:35 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, high packet loss in some cases occurred on Windows guests that use the virtio interface. This update fixes the underlying code, and the affected guests no longer experience the increased packet loss.
Clone Of:
Cloned To: 1471073
Environment:
Last Closed: 2018-04-10 06:28:08 UTC
Target Upstream Version:


Attachments (Terms of Use)
Driver 1 for test and investigation (based on 139) (5.74 MB, application/zip)
2017-06-25 15:02 UTC, ybendito


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:0657 None None None 2018-04-10 06:30:38 UTC

Description Marcus West 2017-05-18 04:29:40 UTC
## Description of problem:

Latest virtio driver (network) for Windows drops lots of packets

## Version-Release number of selected component (if applicable):

rhvh-4.1-0.20170417.0
rhevm-4.1.1.8-0.1.el7.noarch (Hosted Engine)
virtio-win-1.9.0-3.el7.noarch
VirtIO Ethernet Adapter 63.73.104.12600

Hypervisor Hardware info:
Vendor: Cisco Systems, Inc.
Version: B200M3.2.2.3.0.080820141339
Release Date: 08/08/2014
Manufacturer: Cisco Systems Inc
Product Name: UCSB-B200-M3
06:00.0 Ethernet controller [0200]: Cisco Systems Inc VIC Ethernet NIC [1137:0043] (rev a2)
	Subsystem: Cisco Systems Inc VIC 1240 MLOM Ethernet NIC [1137:0084]
0a:00.0 Ethernet controller [0200]: Cisco Systems Inc VIC Ethernet NIC [1137:0043] (rev a2)
	Subsystem: Cisco Systems Inc VIC 1240 MLOM Ethernet NIC [1137:0084]


## How reproducible:

Always

## Steps to Reproduce:
1. Install new RHV4.1 environment
2. Install Windows VM (Windows 2012 R2, and Windows 7 tested)
3. Install Red Hat VirtIO drivers (Ethernet)
4. Ping an internet address (8.8.8.8)
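
The loss rate in step 4 can be measured rather than eyeballed. A minimal sketch, assuming iputils-style ping output on a Linux box; the 8.8.8.8 target and 100-packet count are illustrative defaults, not part of the original report:

```shell
#!/bin/sh
# Send a burst of pings and report the loss percentage.
# TARGET and COUNT are illustrative; the report pings 8.8.8.8
# from inside the guest.
TARGET="${1:-8.8.8.8}"
COUNT="${2:-100}"

# iputils ping ends with a summary line such as:
#   100 packets transmitted, 90 received, 10% packet loss, time 99123ms
# Pull the percentage out of the "packet loss" field.
LOSS=$(ping -c "$COUNT" "$TARGET" | awk -F',' '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /packet loss/) { gsub(/[^0-9.]/, "", $i); print $i }
}')

echo "loss: ${LOSS}%"
# Anything noticeably above 0% reproduces the report; the customer
# saw ~0% after rolling back to build 62.72.104.11000.
```

Running the same command against the 62.72.104.11000 and 63.73.104.12600 driver builds gives directly comparable numbers.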

## Actual results:

Many timeout/transmit failed messages

## Expected results:

packet loss should be about 0%

## Additional info:

The customer has tested further and found the following workarounds:

- if they select the e1000 device type, the problem goes away
- if they downgrade to the drivers in the 4.0.6 release (build 62.72.104.11000), the problem goes away
- if they use a logical network other than 'ovirtmgmt', the problem goes away (virtio-net, latest drivers).  Note that ovirtmgmt is non-VLAN; the other networks are on a separate physical device and are VLAN-tagged

Comment 16 lijin 2017-05-31 04:08:07 UTC
Hi Marcus,

Any feedback from the customer?
Does the latest internal build fix their issue?

Comment 17 Marcus West 2017-05-31 06:21:27 UTC
No updates from the customer, I will check in with them.

Comment 18 Marcus West 2017-06-07 23:50:20 UTC
Hello,

I have feedback: the newer drivers are better, but there are still dropped packets.  Again, rolling back to the old previously-working drivers reduces packet loss to 0%.

Comment 23 Yu Wang 2017-06-08 09:01:59 UTC
Hi, 

On the QE side, we pinged hosts on both the same and a different subnet, and no timeouts occurred in either case.
However, our environment/switch may not be as complex as yours, since our pings always average 1 ms.

Thanks
Yu Wang

Comment 32 ybendito 2017-06-25 15:02:42 UTC
Created attachment 1291721 [details]
Driver 1 for test and investigation (based on 139)

Comment 33 ybendito 2017-06-25 15:12:41 UTC
We assume BADpcap-filter-applied.pcapng was captured at the same time as the driver logs (the timestamps in the pcap file and in the driver log are completely different). Pings sent in frames 1, 13, 15, 21, 25, and 27 (for example) were answered in ~50 ms; the responses are valid and correct ICMP packets (all checksums are also correct), but the next ping after each of these frames was sent not after 1 second but after 5 seconds. This means the respective ping responses were received but not delivered to the ping application, i.e. they were lost somewhere on the way. In the logs of the newer driver we see multiple cases where the driver reports a checksum error for some packets (non-TCP or UDP) received from the host, but there is no report of a real checksum verification on these packets. There is a small difference between 110 and 126 in the reporting of non-IP packets, which should not cause packet loss, but to be on the safe side I'll fix it in a custom build and add more diagnostics to try to recognize where the packets are lost.

The driver in comment #32 is for testing. If the same behavior (lost packets) still exists, please make the same recording as in comment #22 and, in parallel, run tcpdump on the host, specifying 'icmp' and '-i <tap name>', targeting the tap created for the VM.

Additional request: results of 'ethtool -k <tap device>'

Comment 39 ybendito 2017-07-04 14:51:14 UTC
Fixed in build virtio-win-prewhql-0.1-140
https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=570835

Comment 40 Peixiu Hou 2017-07-06 05:16:17 UTC
Reproduced this issue with virtio-win-prewhql-139; the result was as in comment#38, with at least 10% of pings timing out.

Verified this issue with virtio-win-prewhql-140: in the ping flood test, no timeouts occurred.

Steps as in comment#38.

Used version:
kernel-3.10.0-680.el7.x86_64
qemu-kvm-rhev-2.9.0-14.el7.x86_64
seabios-1.10.2-3.el7.x86_64

Comment 41 lijin 2017-07-11 06:12:28 UTC
Changing status to VERIFIED according to comment#40.

Comment 47 errata-xmlrpc 2018-04-10 06:28:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0657

