Bug 69220 - ntp not working
Status: CLOSED NOTABUG
Product: Red Hat Public Beta
Classification: Retired
Component: ntp
Version: limbo
Hardware: i386 Linux
Priority: medium   Severity: medium
Assigned To: Harald Hoyer
QA Contact: Brian Brock
Blocks: 67217
Reported: 2002-07-19 00:57 EDT by Kris Urquhart
Modified: 2007-04-18 12:44 EDT

Last Closed: 2002-07-26 14:05:31 EDT


Attachments
ntp.conf (2.74 KB, text/plain) - 2002-07-24 14:15 EDT, Kris Urquhart
Description Kris Urquhart 2002-07-19 00:57:53 EDT
Description of Problem:
ntp appears to do nothing in limbo (it worked fine before the upgrade from 7.3)

Version-Release number of selected component (if applicable):
ntp-4.1.1a-4

How Reproducible:
Very

Steps to Reproduce:
1. Manually sync time with master on local network
2. Run ntpd
3. Observe time drift with respect to the master (see the command sketch below)
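
A minimal sketch of these steps as commands, assuming "waldo" is the master named in /etc/ntp.conf (ntpdate and ntpdc ship with the ntp package):

# stop the daemon so ntpdate can bind the NTP port, then sync once by hand (step 1)
service ntpd stop
ntpdate waldo
# start the daemon (step 2)
service ntpd start
# repeat periodically and compare the reported offset, and the wall clocks,
# against the master (step 3)
ntpdc -c peers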

Additional Information:
I'm not sure if this is an ntp problem, or something wrong in the time
facilities that it relies on.  Any advice on how to further debug the root cause
would be appreciated.
Comment 1 Kris Urquhart 2002-07-20 23:54:51 EDT
After upgrading to kernel-smp-2.4.18-5.73 (from kernel-smp-2.4.18-5.58), ntp 
appears to be finally syncing with the master, but still at a rate much slower 
than I am used to (2 days to resolve less than 30 seconds).
Comment 2 Kris Urquhart 2002-07-21 02:41:59 EDT
Actually, the slave machine drifted right past the master (it is now 8 seconds 
ahead), so ntp does indeed seem to be dead.
Comment 3 Kris Urquhart 2002-07-22 10:58:33 EDT
After watching the times some more, the slave machine is just bouncing around 
the master, sometimes fast, sometimes slow.

There is mention on the limbo list of HZ going from 100 to 1000 in the kernel.  
A factor of 10 in the algorithms for syncing the clocks could cause what I am 
seeing.
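
A rough way to check the actual tick rate, assuming the usual /proc/interrupts layout where IRQ 0 is the timer, is to count timer interrupts over a fixed interval; the per-second count should be close to HZ:

# sum the per-CPU counts for IRQ 0, wait, and sum again
t1=$(awk '$1 == "0:" { for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) s += $i } END { print s }' /proc/interrupts)
sleep 10
t2=$(awk '$1 == "0:" { for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) s += $i } END { print s }' /proc/interrupts)
# timer interrupts per second over the interval; roughly 100 or 1000
echo "approximate HZ: $(( (t2 - t1) / 10 ))"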
Comment 4 Harald Hoyer 2002-07-22 11:55:40 EDT
what does:
# ntpdc
> peers

show?
Comment 5 Kris Urquhart 2002-07-22 12:31:37 EDT
# ntpdc
ntpdc> peers
     remote           local      st poll reach  delay   offset    disp
=======================================================================
=waldo           5.0.0.0         16 1024    0 0.00000  0.000000 0.00000
*LOCAL(0)        127.0.0.1       10   64  377 0.00000  0.000000 0.00095
Comment 6 Harald Hoyer 2002-07-22 12:59:28 EDT
seems to be the output from directly after ntpd starts... could you please wait
a few minutes before pasting "peers" again?
Comment 7 Kris Urquhart 2002-07-22 18:34:30 EDT
ntp was last started on July 19:
Jul 19 09:29:09 oscar ntpd[971]: ntpd 4.1.1a@1.791 Sun Jun 23 17:32:49 EDT 2002 (1)
Jul 19 09:29:09 oscar ntpd: ntpd startup succeeded
Jul 19 09:29:09 oscar ntpd[971]: precision = 6 usec
Jul 19 09:29:09 oscar ntpd[971]: kernel time discipline status 0040
Jul 19 09:29:09 oscar ntpd[971]: frequency initialized 67.536 from /etc/ntp/drift
Jul 19 09:32:26 oscar ntpd[971]: kernel time discipline status change 41

Current peers:
# ntpdc
ntpdc> peers
     remote           local      st poll reach  delay   offset    disp
=======================================================================
=waldo           5.0.0.0         16 1024    0 0.00000  0.000000 0.00000
*LOCAL(0)        127.0.0.1       10   64  377 0.00000  0.000000 0.00092

Comment 8 Harald Hoyer 2002-07-23 05:51:01 EDT
hmm, seems like it didn't connect to waldo at all... Can you attach your ntp.conf?

Proper output should look like this:

$ /usr/sbin/ntpdc
ntpdc> peers
     remote           local      st poll reach  delay   offset    disp
=======================================================================
*ns.keso.fi      172.16.2.162     2 1024  377 0.14603  0.007938 0.01488
=ns.redhat.de    172.16.2.162     3 1024  377 0.00102  0.062582 0.01483

Comment 9 Kris Urquhart 2002-07-24 14:15:21 EDT
Created attachment 66826 [details]
ntp.conf
Comment 10 Kris Urquhart 2002-07-24 14:18:39 EDT
I have verified that waldo is responding:
[root@oscar root]# tcpdump port 123
tcpdump: listening on eth0
11:09:39.467514 oscar.kurquhart.net.ntp > waldo.ntp:  v4 client strat 0 poll 6
prec -15 (DF) [tos 0x10]
11:09:39.467765 waldo.ntp > oscar.kurquhart.net.ntp:  v4 server strat 3 poll 6
prec -17 (DF) [tos 0x10]
11:10:44.501024 oscar.kurquhart.net.ntp > waldo.ntp:  v4 client strat 0 poll 6
prec -15 (DF) [tos 0x10]
11:10:44.501316 waldo.ntp > oscar.kurquhart.net.ntp:  v4 server strat 3 poll 6
prec -17 (DF) [tos 0x10]
11:11:50.535047 oscar.kurquhart.net.ntp > waldo.ntp:  v4 client strat 0 poll 6
prec -15 (DF) [tos 0x10]
11:11:50.535333 waldo.ntp > oscar.kurquhart.net.ntp:  v4 server strat 3 poll 6
prec -17 (DF) [tos 0x10]

Why is the local side of the waldo connection listed as 5.0.0.0?  That does not
seem reasonable to me - the local address is 192.168.0.3.  
Comment 11 Harald Hoyer 2002-07-25 06:00:19 EDT
multicastclient?
Comment 12 Kris Urquhart 2002-07-25 10:16:14 EDT
I don't think so, as multicastclient is explicitly commented out in ntp.conf. 
At any rate, wouldn't that be on the 224.0.1.1 network?

Just to be sure, here is the relevant data from netstat:
[root@oscar root]# netstat -a -u -p | grep ntp
udp        0      0 oscar.kurquhart.net:ntp *:*       17284/ntpd
udp        0      0 oscar.kurquhart.net:ntp *:*       17284/ntpd
udp        0      0 *:ntp                   *:*       17284/ntpd
Comment 13 Harald Hoyer 2002-07-26 08:59:43 EDT
please add to your ntp.conf and retry:

restrict waldo mask 255.255.255.255 nomodify notrap noquery
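
For context, a hypothetical excerpt (not the attached file) showing where such a line sits; the point is that restrict rules are also applied to packets coming back from hosts named on "server" lines, so the server's address has to be permitted:

# illustrative /etc/ntp.conf fragment, not the attached configuration
restrict default ignore            # a tight default like this also drops replies from servers
restrict 127.0.0.1
restrict waldo mask 255.255.255.255 nomodify notrap noquery   # let time packets from the master through
server waldo
driftfile /etc/ntp/drift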
Comment 14 Kris Urquhart 2002-07-26 14:05:26 EDT
Yep, that fixed it (actually I just removed all restrict lines, as this client 
is behind a firewall).  I had read the manual, but obviously not closely enough - 
I thought the "restrict" clauses did not affect "server" lines.  Thanks for 
your help and patience.
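
After either change (the per-server restrict line or dropping the restrict lines on a firewalled client), the earlier check is the quick way to confirm it took effect; within a few poll intervals the waldo entry should show a real stratum, a climbing reach value, and a sensible local address instead of 5.0.0.0, much like the sample output in comment 8:

# restart with the new configuration and watch the peer table settle
service ntpd restart
ntpdc -c peers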
