Bug 61111 - tcpdump and ping granularity for > 40 ms rtt's
Summary: tcpdump and ping granularity for > 40 ms rtt's
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: tcpdump
Version: 7.2
Hardware: i386 Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Harald Hoyer
QA Contact:
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2002-03-13 20:00 UTC by Need Real Name
Modified: 2008-05-01 15:38 UTC (History)
1 user (show)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2002-03-25 11:31:38 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

Description Need Real Name 2002-03-13 20:00:40 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows 95)

Description of problem:
For tcpdump as well as ping, the timestamps resolve to +/-10
milliseconds after the first packet. Notice that after the first packet, each
subsequent RTT has the form X9.XXX: the second digit is always 9.
I tried recompiling tcpdump and libpcap. The ioctl timestamp call
appears to be the problem. With a newer 2.4.9 kernel it works.
64 bytes from 10.97.1.1: icmp_seq=0 ttl=252 time=53.655 msec
64 bytes from 10.97.1.1: icmp_seq=1 ttl=252 time=69.976 msec
64 bytes from 10.97.1.1: icmp_seq=2 ttl=252 time=89.980 msec
64 bytes from 10.97.1.1: icmp_seq=3 ttl=252 time=79.982 msec
64 bytes from 10.97.1.1: icmp_seq=4 ttl=252 time=59.982 msec
64 bytes from 10.97.1.1: icmp_seq=5 ttl=252 time=89.977 msec
64 bytes from 10.97.1.1: icmp_seq=6 ttl=252 time=59.980 msec
64 bytes from 10.97.1.1: icmp_seq=7 ttl=252 time=49.981 msec
64 bytes from 10.97.1.1: icmp_seq=8 ttl=252 time=49.982 msec
64 bytes from 10.97.1.1: icmp_seq=9 ttl=252 time=79.983 msec
64 bytes from 10.97.1.1: icmp_seq=10 ttl=252 time=69.975 msec
64 bytes from 10.97.1.1: icmp_seq=11 ttl=252 time=59.982 msec
64 bytes from 10.97.1.1: icmp_seq=12 ttl=252 time=59.981 msec


Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. ping 10.97.1.1 # problem occurs on sites with more than 40 ms RTT
2. tcpdump -i eth0 # data gets bunched up on 10 ms plateaus

Actual Results:  (same ping output as shown in the description above)


Expected Results:  RTTs should vary continuously rather than clustering on 10 ms steps

Additional info:
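The clustering in the ping output above can be made explicit by parsing the RTTs. A minimal Python sketch (the sample lines are copied from the report; the script itself is not part of the original bug):

```python
import re

# Sample ping output lines from the report above.
output = """\
64 bytes from 10.97.1.1: icmp_seq=7 ttl=252 time=49.981 msec
64 bytes from 10.97.1.1: icmp_seq=8 ttl=252 time=49.982 msec
64 bytes from 10.97.1.1: icmp_seq=9 ttl=252 time=79.983 msec
"""

# Extract the RTT in milliseconds from each "time=..." field.
rtts = [float(m.group(1)) for m in re.finditer(r"time=([\d.]+)", output)]

# Offset of each RTT within its 10 ms step: on an affected kernel
# every offset sits just below 10 ms (the X9.XXX pattern).
offsets = [rtt % 10.0 for rtt in rtts]
print(offsets)
```

Every offset lands around 9.98 ms, which is what you would expect if receive timestamps snap to a 10 ms clock tick while the send timestamps have finer resolution.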

Comment 1 Need Real Name 2002-03-22 23:28:14 UTC
This is a result of something done to the kernel.  Compiling a vanilla 2.4.7
kernel from kernel.org results in tcpdump timestamp resolution of 1 µs.
Compiling 2.4.7-10 with an identical configuration (as close as possible)
results in 10 ms resolution on tcpdump timestamps.

I suspect somebody has done something shaky to arch/i386/kernel/time.c



Comment 2 Arjan van de Ven 2002-03-25 11:16:42 UTC
What network driver is this?
(It works fine here.)
Also, does this happen with the errata kernel (2.4.9-31)?

Comment 3 Need Real Name 2002-03-26 18:16:28 UTC
Upgrading to the 2.4.9-31 kernel fixed the problem.
Thanks

Comment 4 Need Real Name 2002-04-03 23:37:03 UTC
Since you asked: I got the same behavior (10 ms granularity under 2.4.7-10)
with 3c59x (two machines), eepro100 (two machines), and lance (one machine).


