Red Hat Bugzilla – Bug 43801
ping summary computed wrongly
Last modified: 2015-03-04 20:09:09 EST
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)
Description of problem:
The routine that takes the received responses and reports the minimum, average,
and maximum times plus the (poorly documented) deviation appears to
mishandle "unusual" data.
It may have trouble when its reported units change from usec to msec or
msec to sec. Last week I observed a ping session whose reported average
time was lower than its minimum:
64 bytes from piglet.tbo.net (220.127.116.11): icmp_seq=403 ttl=127
--- piglet.tbo.net ping statistics ---
459156 packets transmitted, 439547 packets received, 4% packet loss
round-trip min/avg/max/mdev = 7.987/3.602/1406225.176/2123.138 ms
Note the very large (23 minute) maximum response time, that icmp_seq has
wrapped, and that the average is lower than the minimum.
Today I observed a badly delayed packet reported as having been received
34 minutes before it was sent.
--- 18.104.22.168 ping statistics ---
27660 packets transmitted, 1097 packets received, 96% packet loss
round-trip min/avg/max/mdev = -2070687.-17/3184.346/1110765.804/70882.629
Note the large negative value for the minimum, and that the number is
presented with TWO negative signs -- one on each side of the decimal point.
Steps to Reproduce:
1. Ping across a very flaky but mostly idle link with plenty of internal
buffering: piglet.tbo.net, home.tbo.net, jwdci.com, emphyrio.in-con.com,
2. Break the link in some way: restart an intermediate router, move one of
the radios out of range, spin an antenna, flood the system with RF noise,
or wait for any of the above to happen. This takes about a day.
3. Press INTR to terminate your ping session and run the stats routine.
Tricky... The problem is really how to handle these cases.
I don't necessarily want to change the internal measurement of time differences
to use 64-bit instead of 32-bit, so the only two sane things to do would be
a) Ignore overflows
b) Use some kind of MAXTIME for overflows and report all packets taking longer
as taking MAXTIME.
Both approaches won't reflect reality anymore, though, and will certainly lead to
If I have time I might really take a long look and see if I can switch internally
to 64-bit. No promises though ;)...
Read ya, Phil
OK, it was actually easier than I thought it would be. Fixed it in rawhide; it
should appear real soon now(tm).
Read ya, Phil
PS: You might still be able to trigger an overflow of mdev, but this is not
fixable at all. And after all, the most important things are min, max and avg,