Bug 1290483 - ntpd failure to recover from loss of upstream Server in a DNS Pool
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: ntp
Version: 6.5
Hardware: All
OS: All
Priority: unspecified
Severity: low
Target Milestone: rc
Assigned To: Miroslav Lichvar
QA Contact: qe-baseos-daemons
Reported: 2015-12-10 11:13 EST by Ted Rule
Modified: 2016-08-17 04:22 EDT (History)
2 users

Doc Type: Bug Fix
Last Closed: 2016-08-17 04:22:48 EDT
Type: Bug

Attachments: None
Description Ted Rule 2015-12-10 11:13:55 EST
Description of problem:

By a quirk of how ntpd uses DNS as it starts up, a situation can occur where ntpd is apparently running OK, but is completely unable to achieve upstream sync with its parent NTP servers. Over time, this leads to the system clock slowly drifting out of sync with "reality".

As far as I can deduce, ntpd on startup takes each "server" entry in ntp.conf and performs a DNS lookup to obtain a set of IP addresses with which to populate the active dmpeers list.

However, once started, if a given IP address corresponds to a server which subsequently disappears from DNS under the original server name, such as when a server is removed from a public NTP server pool, ntpd will carry on listing the "broken" server in the dmpeers listing and carry on attempting to sync to it.

In the majority of cases, of course, there are multiple "server" entries listed in ntp.conf, and even if all of them reference public NTP server pools, at least one of them is likely to remain operational and keep ntpd synchronised; but I have seen rare cases where at least 3 randomly chosen pool servers have died.

Detecting this condition with monitoring is tricky as well, as the ntpd process is still apparently running OK, and the clock takes a potentially long time to drift away from reality.

A variant of the problem occurs when a machine boots up with temporarily broken DNS resolution. Under these circumstances, ntpd may end up starting with a completely empty dmpeers list, or one that only contains a 127.127.x.x local clock. I have seen similar effects when booting an OS X laptop with the AirPort turned off: ntpd fires up, but with no DNS to resolve the server name to an IP address, the process starts with an empty dmpeers list and sulks like that "forever".

The overall problem is that the running ntpd process has no direct way of determining the original DNS names used to populate the dmpeers listing, nor any means of replacing a dmpeers entry which has lost sync with a fresh working IP address.


An admittedly kludgy workaround to this might be to add something to cron.daily which restarts ntpd if:


ntpd has been running for at least 24 hours
 
AND

ntp.conf only contains NAMEs rather than ADDRESSes for the "server" entries

AND

ntp.conf contains more than one "server"

AND

ntpdc -c dmpeers localhost shows only one "sync'ed" server


The last rule would imply that ntpd was one server away from losing sync entirely. Obviously, variations on the above conditions governing when to restart ntpd could allow for trying to ensure at least 2 working peers, and so on.
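The four conditions above could be sketched as a small script for cron.daily. This is only an illustration of the idea, not a tested workaround: the 24-hour threshold, the use of "ps -o etimes=" for process age, and reading peer state from "ntpq -pn" (where a leading "*" or "+" tally code marks a peer ntpd considers usable) are all assumptions, not details from this report.

```shell
#!/bin/sh
# Illustrative cron.daily watchdog: restart ntpd when it has been up for a
# day, is configured with name-based servers, and is down to one usable peer.

NTP_CONF=${NTP_CONF:-/etc/ntp.conf}

# Count "server" entries that use a NAME rather than a dotted-quad ADDRESS.
count_name_servers() {
    awk '$1 == "server" && $2 !~ /^[0-9]+(\.[0-9]+){3}$/ { n++ }
         END { print n + 0 }' "$1"
}

# Count usable peers in "ntpq -pn" output read from stdin.
count_usable_peers() {
    grep -c '^[*+]'
}

main() {
    pid=$(pidof ntpd) || exit 0              # ntpd not running: nothing to do
    age=$(ps -o etimes= -p "$pid")
    [ "$age" -ge 86400 ] || exit 0           # running for at least 24 hours
    names=$(count_name_servers "$NTP_CONF")
    [ "$names" -ge 2 ] || exit 0             # more than one name-based server
    usable=$(ntpq -pn localhost | count_usable_peers)
    if [ "$usable" -le 1 ]; then             # one server away from losing sync
        service ntpd restart
    fi
}

if [ "${1:-}" = "--run" ]; then
    main
fi
```

The sketch parses ntpq -pn tally codes rather than ntpdc dmpeers output, simply because a leading "*" or "+" is easy to match; the same check could be written against dmpeers.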


Because ntpd's operation is very time-critical, I understand that it is unwise to have it perform any DNS operations directly; hence the suggestion for some sort of parent process/cron job which checks that ntpd is really happy.
Comment 2 Miroslav Lichvar 2015-12-15 09:59:18 EST
I think the problem you describe is actually solved in recent ntp versions (4.2.8). Servers specified with the pool command should be replaced with newly resolved addresses automatically when they are unreachable. The name resolving operation runs in a separate thread or process, so it doesn't block the main process.

The current RHEL6 ntp version (4.2.6) supports a pool command too, but the only difference to the server command is that it adds multiple sources. They are not replaced when unreachable.

Unfortunately, this was a major change in the code and I think it would be difficult to backport to 4.2.6. The suggested approach of running a script from cron that restarts ntpd when no sources are reachable could work, but I think it could also introduce new problems; e.g. when an admin configures ntpd at runtime via ntpq/ntpdc, those changes would be lost after the ntpd restart.
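For reference, the 4.2.8 behaviour described above is driven by the "pool" directive; a minimal ntp.conf fragment might look like the following (the pool hostname and driftfile path are illustrative, not taken from this report):

```
# ntp 4.2.8+: each "pool" name is re-resolved as needed, and servers that
# become unreachable are dropped and replaced with freshly resolved addresses.
pool 0.pool.ntp.org iburst
driftfile /var/lib/ntp/drift
```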
Comment 5 Miroslav Lichvar 2016-08-17 04:22:48 EDT
Red Hat Enterprise Linux version 6 is entering the Production 2 phase of its lifetime and this bug doesn't meet the criteria for it, i.e. only high severity issues will be fixed. Please see https://access.redhat.com/support/policy/updates/errata/ for further information.

In order to avoid the problem described in this bug report, it's recommended to switch to the chrony NTP implementation, which was included in RHEL 6.8. When a server becomes unreachable, chronyd will refresh the server's address and replace the source automatically.
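A minimal chrony.conf fragment matching that recommendation might look like the following (the server names and driftfile path are illustrative):

```
# chrony (RHEL 6.8+): when a server stops responding, chronyd re-resolves
# its name and replaces the source with a working address automatically.
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
```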
