Commits on Source (3)

21bd12f6 · Simplify and remove redundancies, update URL to SSL
Paul Theodoropoulos authored Sep 18, 2018; Eric S. Raymond committed Jan 21, 2019
Significant rewrite to clarify diff between ref spec and ntpsec.

e1208d69 · Some suggested edits.
Paul Theodoropoulos authored Jan 12, 2019; Eric S. Raymond committed Jan 21, 2019

4799ae27 · Simplify 'server pool' to merely 'pool'. Fix a grammar error.
Paul Theodoropoulos authored Jan 12, 2019; Eric S. Raymond committed Jan 21, 2019
docs/discover.adoc @ 4799ae27
@@ -24,34 +24,34 @@ include::includes/hand.adoc[]
 
 [[modes]]
 == Introduction ==
 
-This page describes the automatic server discovery schemes provided in
-NTPv4. There are three automatic server discovery schemes: broadcast,
-manycast, and server pool; which are described on this page. The
-broadcast scheme utilizes the ubiquitous broadcast or one-to-many
-paradigm native to IPv4 and IPv6. The manycast scheme is similar
-to specifying to broadcast, but the servers listen on a specific
-address known to the client. The server pool scheme uses DNS to resolve
-addresses of multiple volunteer servers scattered throughout the world.
-
-All three schemes work in much the same way and might be described as
-_grab-n'-prune._ Through one means or another they grab a number of
-associations either directly or indirectly from the configuration file,
-order them from best to worst according to the NTP mitigation
-algorithms, and prune the surplus associations.
+The NTPv4 *reference specification* supports three automatic server
+discovery schemes: broadcast, manycast, and server pool. However, NTPsec
+only supports the pool mechanism. For details on the reference
+specification, please see https://tools.ietf.org/html/rfc5905[RFC 5905].
+
+The pool scheme expands a single DNS name into multiple peer entries.
+This is intended for, but not limited to, the
+https://www.ntppool.org[NTP Pool Project], a worldwide set of servers
+volunteered for public use. The mechanism might be described as
+_grab-n'-prune._ Through one means or another, a number of associations
+are "grabbed" either directly or indirectly from the configuration file,
+and they are ordered from best to worst according to the NTP mitigation
+algorithms, and surplus associations are pruned.
 
 [[assoc]]
 == Association Management ==
 
-All schemes use an iterated process to discover new preemptable client
+Pool discovery uses an iterated process to discover new preemptable client
 associations as long as the total number of client associations is less
 than the +maxclock+ option of the +tos+ command. The +maxclock+ default
 is 10, but it should be changed in typical configuration to some lower
 number, usually two greater than the +minclock+ option of the same
 command.
 
-All schemes use a stratum filter to select just those servers with
-stratum considered useful. This can avoid large numbers of clients
-ganging up on a small number of low-stratum servers and avoid servers
+Pool discovery uses a stratum filter to select just those servers with
+strata considered useful. This can avoid large numbers of clients
+ganging up on a small number of low-stratum servers and avoids servers
 below or above specified stratum levels. By default, servers of all
 strata are acceptable; however, the +tos+ command can be used to
 restrict the acceptable range from the +floor+ option, inclusive, to the
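The pool discovery and +tos+ options discussed in this hunk can be exercised with a short configuration. The following is an illustrative sketch, not part of the patch: +pool+, +iburst+, and the +tos+ options +minclock+, +maxclock+, +floor+, and +ceiling+ are real ntpd/NTPsec configuration keywords, but the specific values shown are assumptions chosen for the example.

```
# Illustrative ntp.conf sketch (values are examples, not recommendations).

# One pool directive expands via DNS into multiple server associations.
pool pool.ntp.org iburst

# Keep discovering preemptable associations until maxclock is reached;
# the text above suggests maxclock about two greater than minclock.
tos minclock 4 maxclock 6

# Stratum filter: accept only servers from stratum 1 through 6, inclusive.
tos floor 1 ceiling 6
```

With this in place, associations beyond +maxclock+ that fail the selection and clustering algorithms become candidates for pruning, as described in the following hunk.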
@@ -61,12 +61,9 @@ supplied using the methods described on the
 link:authentic.html[Authentication Support] page.
 
 The pruning process uses a set of unreach counters, one for each
-association created by the configuration or discovery processes. At each
+association created by the configuration or discovery process. At each
 poll interval, the counter is increased by one. If an acceptable packet
-arrives for a persistent (configured) or ephemeral (broadcast)
-association, the counter is set to zero. If an acceptable packet arrives
-for a preemptable (manycast, pool) association and survives the
-selection and clustering algorithms, the counter is set to zero. If the
+arrives for an association, the counter is set to zero. If the
 counter reaches an arbitrary threshold of 10, the association
 becomes a candidate for pruning.
@@ -78,17 +75,17 @@ maximum. The pruning algorithm design avoids needless discovery/prune
 cycles for associations that wander in and out of the survivor list, but
 otherwise have similar characteristics.
 
-Following is a summary of each scheme. Note that reference to option
-applies to the commands described on the link:confopt.html[Configuration
-Options] page. See that page for applicability and defaults.
+Following is a description of the pool scheme. Note that
+reference to options applies to the commands described on the
+link:confopt.html[Configuration Options] page. See that page
+for applicability and defaults.
 
 [[pool]]
-== Server Pool Scheme ==
+== Pool Scheme ==
 
 The idea of targeting servers on a random basis to distribute and
 balance the load is not a new one; however, the
-http://www.pool.ntp.org/en/use.html[NTP Pool Project] puts
+https://www.pool.ntp.org/en/use.html[NTP Pool Project] puts
 this on steroids. At present, several thousand operators around the
 globe have volunteered their servers for public access. In general,
 NTP is a lightweight service and servers used for other purposes don't
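For the Pool Scheme section above, a typical client configuration using the NTP Pool Project's public DNS zones might look like the following sketch. The zone names are the project's published zones; the choice of the numbered zones versus a country or continent zone is a deployment decision, not something this patch specifies.

```
# Hypothetical client setup using the public pool zones; country and
# continent zones (e.g. 0.us.pool.ntp.org) also exist.
pool 0.pool.ntp.org iburst
pool 1.pool.ntp.org iburst
```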