From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.9) Gecko/20020310

Description of problem:
The ping program supplied with RH 7.x and the one compiled from the netkit-base source produce different results. This became evident from the results returned by "ping -c 10 wcarchive.cdrom.com".

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Use RH's version of ping to check wcarchive.cdrom.com - high packet loss.
2. Use netkit's version of ping to check wcarchive.cdrom.com - "normal" loss.

Actual Results:
Swapping between the two programs produces different results. The difference is reproducible - as much as ping results are ever reproducible!

Expected Results:
Slight variation between the two programs, not the large differences returned.

Additional info:
I run a script with results at http://members.optushome.com.au/graybeard/status/status.html which for the last 4 days has been reporting high packet loss from wcarchive.cdrom.com. The last two graphs on that page display this; they are the duplicate daily and weekly graphs from when the wcarchive.cdrom.com ping loss was too high to record. Other than that they don't show much else :-)

The other status (ping) scripts running on the external (Optus cable) network were not showing the problem (2 *BSD boxes). I blamed the external node that my modem is attached to, but on double-checking my internal network I realized one machine had no problems. That machine is built from source (LFS); it uses netkit-base-0.17 for its ping. There are three RH 7.2 machines on my internal network and they all exhibit similar behaviour (see attachment).
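For the record, what the script (and my manual checks) boil down to is running the two binaries back to back against the same host, e.g.

    /bin/ping -c 10 wcarchive.cdrom.com
    /usr/local/bin/ping -c 10 wcarchive.cdrom.com

(the second path is only an example - it's wherever the netkit-base 0.17 build happens to be installed; the stock RH/iputils ping is /bin/ping). Run at roughly the same time against the same host, the RH ping reports high packet loss while the netkit one reports "normal" loss.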
Created attachment 53769
Ping results showing differences mentioned in bug report
Have you tried the Skipjack version on the RH 7.2 box? I'd be interested to know whether it is somehow connected with the glibc/kernel we ship (that is, if those are still the original ones from RH ;-).

Read ya, Phil
>Have you tried the Skipjack version on the RH 7.2 box?

Do you mean move the ping-RH7292 over to the Enigma box and run it there? If so, nope! The Skipjack ping was run from machine #3 as ping-RH7292 (ping utility, iputils-ss020124). Neither glibc nor the kernel were altered on any machine - stock standard, one and all. [I've got an unnamed fourth machine for playing with ;-)]

The Skipjack version did behave differently, though. From the attachment, the real time was displayed as 3m25.240s:

    real 3m25.240s

whereas ping maintained it was quicker at 194955ms:

    20 packets transmitted, 20 received, 0% loss, time 194955ms

At least Skipjack returned a comparable result to the netkit-base run, even though in real time it took longer to complete, unlike the other (Enigma) machines, where the packet loss just went through the roof. That Skipjack box, however, is no more - it's now running RH 7.3.

I quickly redid the test on machine #1 as listed on https://bugzilla.redhat.com/bugzilla/showattachment.cgi?attach_id=53769 but the problem doesn't exist at the moment; it appears to have disappeared as mysteriously as it appeared! It was only ever that one site (out of the six) and it had apparently worked okay for 6+ months prior to that hiccup.

I've reinstated the original Red Hat ping so that the ping-status script on machine #1 will use it rather than the netkit one. If (when?) it occurs again I'll let you know.

Cheers, Glenn
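PS: Doing the sums on those two figures: 194955 ms is about 3m15s, so ping's own accounting comes out roughly 10 seconds short of the 3m25.240s wall clock. The "real" figure came from wrapping the run in the shell's time builtin, i.e. something like

    time ping -c 20 wcarchive.cdrom.com

so the missing ~10 seconds would be whatever happens outside the packet loop - presumably the initial name lookup and the wait for the last reply.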
The reason I asked is that in Skipjack, i.e. 7.3 (Valhalla), I switched to a very new version of the iputils package, which has a nearly completely rewritten ping in it, so the result with the time is interesting - but no drops.

My gut feeling is that it has something to do with DNS. There is another iputils-related bug which reports something similar, and there the problem disappears if the hosts are entered in /etc/hosts instead of being looked up via DNS. Maybe you can give that a try when you experience the problem once more: just put the IP addresses in /etc/hosts (see the example below). Only for checking, that is. I just want to make sure it might be the same problem so I don't need to track two problems unnecessarily. :-)

Read ya, Phil
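P.S. Something like this is what I mean (the address is just a placeholder - use wcarchive.cdrom.com's real one, of course):

    192.0.2.1    wcarchive.cdrom.com

With that entry in /etc/hosts the name resolves locally and no DNS lookup is done during the run. If the drops disappear with the entry and come back without it, that would point at the same DNS problem as the other bug.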
I haven't been able to reproduce this problem in the latest releases, so I'm closing this bug as currentrelease.

Read ya, Phil