Network Working Group David L. Mills
Request for Comments: 1305 University of Delaware
Obsoletes RFC-1119, RFC-1059, RFC-958 March 1992
Network Time Protocol (Version 3)
Specification, Implementation and Analysis
Note: This document consists of an approximate rendering in ASCII of the
PostScript document of the same name. It is provided for convenience and
for use in searches, etc. However, most tables, figures, equations and
captions have not been rendered and the pagination and section headings
are not available.
Abstract
This document describes the Network Time Protocol (NTP), specifies its
formal structure and summarizes information useful for its
implementation. NTP provides the mechanisms to synchronize time and
coordinate time distribution in a large, diverse internet operating at
rates from mundane to lightwave. It uses a returnable-time design in
which a distributed subnet of time servers operating in a self-
organizing, hierarchical-master-slave configuration synchronizes local
clocks within the subnet and to national time standards via wire or
radio. The servers can also redistribute reference time via local
routing algorithms and time daemons.
Status of this Memo
This RFC specifies an IAB standards track protocol for the Internet
community and requests discussion and suggestions for improvements.
Please refer to the current edition of the "IAB Official Protocol
Standards" for the standardization state and status of this
protocol. Distribution of this memo is unlimited.
Keywords: network clock synchronization, standard time distribution,
fault-tolerant architecture, maximum-likelihood estimation, disciplined
oscillator, internet protocol, high-speed networks, formal
specification.
Preface
This document describes Version 3 of the Network Time Protocol (NTP). It
supersedes Version 2 of the protocol described in RFC-1119 dated
September 1989. However, it neither changes the protocol in any
significant way nor obsoletes previous versions or existing
implementations. The main motivation for the new version is to refine
the analysis and implementation models for new applications at much
higher network speeds to the gigabit-per-second regime and to provide
for the enhanced stability, accuracy and precision required at such
speeds. In particular, the sources of time and frequency errors have
been rigorously examined and error bounds established in order to
improve performance, provide a model for correctness assertions and
indicate timekeeping quality to the user. The revision also incorporates
two new optional features, (1) an algorithm to combine the offsets of a
number of peer time servers in order to enhance accuracy and (2)
improved local-clock algorithms which allow the poll intervals on all
synchronization paths to be substantially increased in order to reduce
network overhead. An overview of the changes, which are described in
detail in Appendix D, follows:
1.
In Version 3 the local-clock algorithm has been overhauled to improve
stability and accuracy. Appendix G presents a detailed mathematical
model and design example which has been refined with the aid of
feedback-control analysis and extensive simulation using data collected
over ordinary Internet paths. Section 5 of RFC-1119 on the NTP local
clock has been completely rewritten to describe the new algorithm. Since
the new algorithm can result in message rates far below the old ones, it
is highly recommended that it be used in new implementations. Note
that use of the new algorithm does not affect interoperability with
previous versions or existing implementations.
2.
In Version 3 a new algorithm to combine the offsets of a number of peer
time servers is presented in Appendix F. This algorithm is modelled on
those used by national standards laboratories to combine the weighted
offsets from a number of standard clocks to construct a synthetic
laboratory timescale more accurate than that of any clock separately. It
can be used in an NTP implementation to improve accuracy and stability
and reduce errors due to asymmetric paths in the Internet. The new
algorithm has been simulated using data collected over ordinary Internet
paths and, along with the new local-clock algorithm, implemented and
tested in the Fuzzball time servers now running in the Internet. Note
that use of the new algorithm does not affect interoperability with
previous versions or existing implementations.
3.
Several inconsistencies and minor errors in previous versions have been
corrected in Version 3. The description of the procedures has been
rewritten in pseudo-code augmented by English commentary for clarity and
to avoid ambiguity. Appendix I has been added to illustrate C-language
implementations of the various filtering and selection algorithms
suggested for NTP. Additional information is included in Section 5 and
in Appendix E, which includes the tutorial material formerly included in
Section 2 of RFC-1119, as well as much new material clarifying the
interpretation of timescales and leap seconds.
4.
Minor changes have been made in the Version-3 local-clock algorithms to
avoid problems observed when leap seconds are introduced in the UTC
timescale and also to support an auxiliary precision oscillator, such as
a cesium clock or timing receiver, as a precision timebase. In addition,
changes were made to some procedures described in Section 3 and in the
clock-filter and clock-selection procedures described in Section 4.
While these changes were made to correct minor bugs found as the result
of experience and are recommended for new implementations, they do not
affect interoperability with previous versions or existing
implementations in other than minor ways (at least until the next leap
second).
5.
In Version 3 changes were made to the way delay, offset and dispersion
are defined, calculated and processed in order to reliably bound the
errors inherent in the time-transfer procedures. In particular, the
error accumulations were moved from the delay computation to the
dispersion computation and both included in the clock filter and
selection procedures. The clock-selection procedure was modified to
remove the first of the two sorting/discarding steps and replace it with an
algorithm first proposed by Marzullo and later incorporated in the
Digital Time Service. These changes do not significantly affect the
ordinary operation of or compatibility with various versions of NTP, but
they do provide the basis for formal statements of correctness as
described in Appendix H.
Table of Contents
1. Introduction 1
1.1. Related Technology 2
2. System Architecture 4
2.1. Implementation Model 6
2.2. Network Configurations 7
3. Network Time Protocol 8
3.1. Data Formats 8
3.2. State Variables and Parameters 9
3.2.1. Common Variables 9
3.2.2. System Variables 12
3.2.3. Peer Variables 12
3.2.4. Packet Variables 14
3.2.5. Clock-Filter Variables 14
3.2.6. Authentication Variables 15
3.2.7. Parameters 15
3.3. Modes of Operation 17
3.4. Event Processing 19
3.4.1. Notation Conventions 19
3.4.2. Transmit Procedure 20
3.4.3. Receive Procedure 22
3.4.4. Packet Procedure 24
3.4.5. Clock-Update Procedure 27
3.4.6. Primary-Clock Procedure 28
3.4.7. Initialization Procedures 28
3.4.7.1. Initialization Procedure 29
3.4.7.2. Initialization-Instantiation Procedure 29
3.4.7.3. Receive-Instantiation Procedure 30
3.4.7.4. Primary Clock-Instantiation Procedure 31
3.4.8. Clear Procedure 31
3.4.9. Poll-Update Procedure 32
3.5. Synchronization Distance Procedure 32
3.6. Access Control Issues 33
4. Filtering and Selection Algorithms 34
4.1. Clock-Filter Procedure 35
4.2. Clock-Selection Procedure 36
4.2.1. Intersection Algorithm 36
4.2.2. Clustering Algorithm 38
5. Local Clocks 40
5.1. Fuzzball Implementation 41
5.2. Gradual Phase Adjustments 42
5.3. Step Phase Adjustments 43
5.4. Implementation Issues 44
6. Acknowledgments 45
7. References 46
A. Appendix A. NTP Data Format - Version 3 50
B. Appendix B. NTP Control Messages 53
B.1. NTP Control Message Format 54
B.2. Status Words 56
B.2.1. System Status Word 56
B.2.2. Peer Status Word 57
B.2.3. Clock Status Word 58
B.2.4. Error Status Word 58
B.3. Commands 59
C. Appendix C. Authentication Issues 61
C.1. NTP Authentication Mechanism 62
C.2. NTP Authentication Procedures 63
C.2.1. Encrypt Procedure 63
C.2.2. Decrypt Procedure 64
C.2.3. Control-Message Procedures 65
D. Appendix D. Differences from Previous Versions 66
E. Appendix E. The NTP Timescale and its Chronometry 70
E.1. Introduction 70
E.2. Primary Frequency and Time Standards 70
E.3. Time and Frequency Dissemination 72
E.4. Calendar Systems 74
E.5. The Modified Julian Day System 75
E.6. Determination of Frequency 76
E.7. Determination of Time and Leap Seconds 76
E.8. The NTP Timescale and Reckoning with UTC 78
F. Appendix F. The NTP Clock-Combining Algorithm 80
F.1. Introduction 80
F.2. Determining Time and Frequency 80
F.3. Clock Modelling 81
F.4. Development of a Composite Timescale 81
F.5. Application to NTP 84
F.6. Clock-Combining Procedure 84
G. Appendix G. Computer Clock Modelling and Analysis 86
G.1. Computer Clock Models 86
G.1.1. The Fuzzball Clock Model 88
G.1.2. The Unix Clock Model 89
G.2. Mathematical Model of the NTP Logical Clock 91
G.3. Parameter Management 93
G.4. Adjusting VCO Gain (alpha) 94
G.5. Adjusting PLL Bandwidth (tau) 94
G.6. The NTP Clock Model 95
H. Appendix H. Analysis of Errors and Correctness Principles 98
H.1. Introduction 98
H.2. Timestamp Errors 98
H.3. Measurement Errors 100
H.4. Network Errors 101
H.5. Inherited Errors 102
H.6. Correctness Principles 104
I. Appendix I. Selected C-Language Program Listings 107
I.1. Common Definitions and Variables 107
I.2. Clock-Filter Algorithm 108
I.3. Interval Intersection Algorithm 109
I.4. Clock-Selection Algorithm 110
I.5. Clock-Combining Procedure 111
I.6. Subroutine to Compute Synchronization Distance 112
List of Figures
Figure 1. Implementation Model 6
Figure 2. Calculating Delay and Offset 25
Figure 3. Clock Registers 39
Figure 4. NTP Message Header 50
Figure 5. NTP Control Message Header 54
Figure 6. Status Word Formats 55
Figure 7. Authenticator Format 63
Figure 8. Comparison of UTC and NTP Timescales at Leap 79
Figure 9. Network Time Protocol 80
Figure 10. Hardware Clock Models 86
Figure 11. Clock Adjustment Process 90
Figure 12. NTP Phase-Lock Loop (PLL) Model 91
Figure 13. Timing Intervals 96
Figure 14. Measuring Delay and Offset 100
Figure 15. Error Accumulations 103
Figure 16. Confidence Intervals and Intersections 105
List of Tables
Table 1. System Variables 12
Table 2. Peer Variables 13
Table 3. Packet Variables 14
Table 4. Parameters 16
Table 5. Modes and Actions 22
Table 6. Clock Parameters 40
Table 7. Characteristics of Standard Oscillators 71
Table 8. Table of Leap-Second Insertions 77
Table 9. Notation Used in PLL Analysis 91
Table 10. PLL Parameters 91
Table 11. Notation Used in PLL Analysis 95
Table 12. Notation Used in Error Analysis 98
Introduction
This document constitutes a formal specification of the Network Time
Protocol (NTP) Version 3, which is used to synchronize timekeeping among
a set of distributed time servers and clients. It defines the
architectures, algorithms, entities and protocols used by NTP and is
intended primarily for implementors. A companion document [MIL91a]
summarizes the requirements, analytical models, algorithmic analysis and
performance under typical Internet conditions. Another document [MIL91b]
describes the NTP timescale and its relationship to other standard
timescales now in use. NTP was first described in RFC-958 [MIL85c], but
has since evolved in significant ways, culminating in the most recent
NTP Version 2 described in RFC-1119 [MIL89]. It is built on the Internet
Protocol (IP) [DAR81a] and User Datagram Protocol (UDP) [POS80], which
provide a connectionless transport mechanism; however, it is readily
adaptable to other protocol suites. NTP is evolved from the Time
Protocol [POS83b] and the ICMP Timestamp message [DAR81b], but is
specifically designed to maintain accuracy and robustness, even when
used over typical Internet paths involving multiple gateways, highly
dispersive delays and unreliable nets.
The service environment consists of the implementation model and service
model described in Section 2. The implementation model is based on a
multiple-process operating system architecture, although other
architectures could be used as well. The service model is based on a
returnable-time design which depends only on measured clock offsets, but
does not require reliable message delivery. The synchronization subnet
uses a self-organizing, hierarchical-master-slave configuration, with
synchronization paths determined by a minimum-weight spanning tree.
While multiple masters (primary servers) may exist, there is no
requirement for an election protocol.
NTP itself is described in Section 3. It provides the protocol
mechanisms to synchronize time in principle to precisions in the order
of nanoseconds while preserving a non-ambiguous date well into the next
century. The protocol includes provisions to specify the characteristics
and estimate the error of the local clock and the time server to which
it may be synchronized. It also includes provisions for operation with a
number of mutually suspicious, hierarchically distributed primary
reference sources such as radio-synchronized clocks.
Section 4 describes algorithms useful for deglitching and smoothing
clock-offset samples collected on a continuous basis. These algorithms
evolved from those suggested in [MIL85a], were refined as the results of
experiments described in [MIL85b] and further evolved under typical
operating conditions over the last three years. In addition, as the
result of experience in operating multiple-server subnets including
radio clocks at several sites in the U.S. and with clients in the U.S.
and Europe, reliable algorithms for selecting good clocks from a
population possibly including broken ones have been developed [DEC89],
[MIL91a] and are described in Section 4.
The accuracies achievable by NTP depend strongly on the precision of the
local-clock hardware and stringent control of device and process
latencies. Provisions must be included to adjust the software logical-
clock time and frequency in response to corrections produced by NTP.
Section 5 describes a local-clock design evolved from the Fuzzball
implementation described in [MIL83b] and [MIL88b]. This design includes
offset-slewing, frequency compensation and deglitching mechanisms
capable of accuracies in the order of a millisecond, even after extended
periods when synchronization to primary reference sources has been lost.
Details specific to NTP packet formats used with the Internet Protocol
(IP) and User Datagram Protocol (UDP) are presented in Appendix A, while
details of a suggested auxiliary NTP Control Message, which may be used
when comprehensive network-monitoring facilities are not available, are
presented in Appendix B. Appendix C contains specification and
implementation details of an optional authentication mechanism which can
be used to control access and prevent unauthorized data modification,
while Appendix D contains a listing of differences between Version 3 of
NTP and previous versions. Appendix E expands on issues involved with
precision timescales and calendar dating peculiar to computer networks
and NTP. Appendix F describes an optional algorithm to improve accuracy
by combining the time offsets of a number of clocks. Appendix G presents
a detailed mathematical model and analysis of the NTP local-clock
algorithms. Appendix H analyzes the sources and propagation of errors
and presents correctness principles relating to the time-transfer
service. Appendix I illustrates C-language code segments for the clock-
filter, clock-selection and related algorithms described in Section 4.
Related Technology
Other mechanisms have been specified in the Internet protocol suite to
record and transmit the time at which an event takes place, including
the Daytime protocol [POS83a], Time Protocol [POS83b], ICMP Timestamp
message [DAR81b] and IP Timestamp option [SU81]. Experimental results on
measured clock offsets and roundtrip delays in the Internet are
discussed in [MIL83a], [MIL85b], [COL88] and [MIL88a]. Other
synchronization algorithms are discussed in [LAM78], [GUS84], [HAL84],
[LUN84], [LAM85], [MAR85], [MIL85a], [MIL85b], [MIL85c], [GUS85b],
[SCH86], [TRI86], [RIC88], [MIL88a], [DEC89] and [MIL91a], while
protocols based on them are described in [MIL81a], [MIL81b], [MIL83b],
[GUS85a], [MIL85c], [TRI86], [MIL88a], [DEC89] and [MIL91a]. NTP uses
techniques evolved from them and both linear-systems and agreement
methodologies. Linear methods for digital telephone network
synchronization are summarized in [LIN80], while agreement methods for
clock synchronization are summarized in [LAM85].
The Digital Time Service (DTS) [DEC89] has many of the same service
objectives as NTP. The DTS design places heavy emphasis on configuration
management and correctness principles when operated in a managed LAN or
LAN-cluster environment, while NTP places heavy emphasis on the accuracy
and stability of the service operated in an unmanaged, global-internet
environment. In DTS a synchronization subnet consists of clerks,
servers, couriers and time providers. With respect to the NTP
nomenclature, a time provider is a primary reference source, a courier
is a secondary server intended to import time from one or more distant
primary servers for local redistribution and a server is intended to
provide time for possibly many end nodes or clerks. Unlike NTP, DTS does
not need or use mode or stratum information in clock selection and does
not include provisions to filter timing noise, select the most accurate
from a set of presumed correct clocks or compensate for inherent
frequency errors.
In fact, the latest revisions in NTP have adopted certain features of
DTS in order to support correctness principles. These include mechanisms
to bound the maximum errors inherent in the time-transfer procedures and
the use of a provably correct (subject to stated assumptions) mechanism
to reject inappropriate peers in the clock-selection procedures. These
features are described in Section 4 and Appendix H of this document.
The Fuzzball routing protocol [MIL83b], sometimes called Hellospeak,
incorporates time synchronization directly into the routing-protocol
design. One or more processes synchronize to an external reference
source, such as a radio clock or NTP daemon, and the routing algorithm
constructs a minimum-weight spanning tree rooted on these processes. The
clock offsets are then distributed along the arcs of the spanning tree
to all processes in the system and the various process clocks corrected
using the procedure described in Section 5 of this document. While it
can be seen that the design of Hellospeak strongly influenced the design
of NTP, Hellospeak itself is not an Internet protocol and is unsuited
for use outside its local-net environment.
The Unix 4.3bsd time daemon timed [GUS85a] uses a single master-time
daemon to measure offsets of a number of slave hosts and send periodic
corrections to them. In this model the master is determined using an
election algorithm [GUS85b] designed to avoid situations where either no
master is elected or more than one master is elected. The election
process requires a broadcast capability, which is not a ubiquitous
feature of the Internet. While this model has been extended to support
hierarchical configurations in which a slave on one network serves as a
master on the other [TRI86], the model requires handcrafted
configuration tables in order to establish the hierarchy and avoid
loops. In addition to the burdensome, but presumably infrequent,
overheads of the election process, the offset measurement/correction
process requires twice as many messages as NTP per update.
A scheme with features similar to NTP is described in [KOP87]. This
scheme is intended for multi-server LANs where each of a set of possibly
many time servers determines its local-time offset relative to each of
the other servers in the set using periodic timestamped messages, then
determines the local-clock correction using the Fault-Tolerant Average
(FTA) algorithm of [LUN84]. The FTA algorithm, which is useful where up
to k servers may be faulty, sorts the offsets, discards the k highest
and lowest ones and averages the rest. The scheme, as described in
[SCH86], is most suitable to LAN environments which support broadcast
and would result in unacceptable overhead in an internet environment. In
addition, for reasons given in Section 4 of this paper, the statistical
properties of the FTA algorithm are not likely to be optimal in an
internet environment with highly dispersive delays.
A good deal of research has gone into the issue of maintaining accurate
time in a community where some clocks cannot be trusted. A truechimer is
a clock that maintains timekeeping accuracy to a previously published
(and trusted) standard, while a falseticker is a clock that does not.
Determining whether a particular clock is a truechimer or falseticker is
an interesting abstract problem which can be attacked using agreement
methods summarized in [LAM85] and [SRI87].
A convergence function operates upon the offsets between the clocks in a
system to increase the accuracy by reducing or eliminating errors caused
by falsetickers. There are two classes of convergence functions, those
involving interactive-convergence algorithms and those involving
interactive-consistency algorithms. Interactive-convergence algorithms
use statistical clustering techniques such as the fault-tolerant average
algorithm of [HAL84], the CNV algorithm of [LUN84], the majority-subset
algorithm of [MIL85a], the non-Byzantine algorithm of [RIC88], the
egocentric algorithm of [SCH86], the intersection algorithm of [MAR85]
and [DEC89] and the algorithms in Section 4 of this document.
Interactive-consistency algorithms are designed to detect faulty clock
processes which might indicate grossly inconsistent offsets in
successive readings or to different readers. These algorithms use an
agreement protocol involving successive rounds of readings, possibly
relayed and possibly augmented by digital signatures. Examples include
the fireworks algorithm of [HAL84] and the optimum algorithm of [SRI87].
However, these algorithms require large numbers of messages, especially
when large numbers of clocks are involved, and are designed to detect
faults that have rarely been found in the Internet experience. For these
reasons they are not considered further in this document.
In practice it is not possible to determine the truechimers from the
falsetickers on other than a statistical basis, especially with
hierarchical configurations and a statistically noisy Internet. While it
is possible to bound the maximum errors in the time-transfer procedures,
assuming sufficiently generous tolerances are adopted for the hardware
components, this generally results in rather poor accuracies and
stabilities. The approach taken in the NTP design and its predecessors
involves mutually coupled oscillators and maximum-likelihood estimation
and clock-selection procedures, together with a design that allows
provable assertions on error bounds to be made relative to stated
assumptions on the correctness of the primary reference sources. From
the analytical point of view, the system of distributed NTP peers
operates as a set of coupled phase-locked oscillators, with the update
algorithm functioning as a phase detector and the local clock as a
disciplined oscillator, but with deterministic error bounds calculated
at each step in the time-transfer process.
The particular choice of offset measurement and computation procedure
described in Section 3 is a variant of the returnable-time system used
in some digital telephone networks [LIN80]. The clock filter and
selection algorithms are designed so that the clock synchronization
subnet self-organizes into a hierarchical-master-slave configuration
[MIT80]. With respect to timekeeping accuracy and stability, the
similarity of NTP to digital telephone systems is not accidental, since
systems like this have been studied extensively [LIN80], [BRA80]. What
makes the NTP model unique is the adaptive configuration, polling,
filtering, selection and correctness mechanisms which tailor the
dynamics of the system to fit the ubiquitous Internet environment.
System Architecture
In the NTP model a number of primary reference sources, synchronized by
wire or radio to national standards, are connected to widely accessible
resources, such as backbone gateways, and operated as primary time
servers. The purpose of NTP is to convey timekeeping information from
these servers to other time servers via the Internet and also to cross-
check clocks and mitigate errors due to equipment or propagation
failures. Some number of local-net hosts or gateways, acting as
secondary time servers, run NTP with one or more of the primary servers.
In order to reduce the protocol overhead, the secondary servers
distribute time via NTP to the remaining local-net hosts. In the
interest of reliability, selected hosts can be equipped with less
accurate but less expensive radio clocks and used for backup in case of
failure of the primary and/or secondary servers or communication paths
between them.
Throughout this document a standard nomenclature has been adopted: the
stability of a clock is how well it can maintain a constant frequency,
the accuracy is how well its frequency and time compare with national
standards and the precision is how precisely these quantities can be
maintained within a particular timekeeping system. Unless indicated
otherwise, the offset of two clocks is the time difference between them,
while the skew is the frequency difference (first derivative of offset
with time) between them. Real clocks exhibit some variation in skew
(second derivative of offset with time), which is called drift; however,
in this version of the specification the drift is assumed zero.
NTP is designed to produce three products: clock offset, roundtrip delay
and dispersion, all of which are relative to a selected reference clock.
Clock offset represents the amount to adjust the local clock to bring it
into correspondence with the reference clock. Roundtrip delay provides
the capability to launch a message to arrive at the reference clock at a
specified time. Dispersion represents the maximum error of the local
clock relative to the reference clock. Since most host time servers will
synchronize via another peer time server, there are two components in
each of these three products, those determined by the peer relative to
the primary reference source of standard time and those measured by the
host relative to the peer. Each of these components are maintained
separately in the protocol in order to facilitate error control and
management of the subnet itself. They provide not only precision
measurements of offset and delay, but also definitive maximum error
bounds, so that the user interface can determine not only the time, but
the quality of the time as well.
There is no provision for peer discovery or virtual-circuit management
in NTP. Data integrity is provided by the IP and UDP checksums. No flow-
control or retransmission facilities are provided or necessary.
Duplicate detection is inherent in the processing algorithms. The
service can operate in a symmetric mode, in which servers and clients
are indistinguishable, yet maintain a small amount of state information,
or in client/server mode, in which servers need maintain no state other
than that contained in the client request. A lightweight association-
management capability, including dynamic reachability and variable poll-
rate mechanisms, is included only to manage the state information and
reduce resource requirements. Since only a single NTP message format is
used, the protocol is easily implemented and can be used in a variety of
solicited or unsolicited polling mechanisms.
It should be recognized that clock synchronization requires by its
nature long periods and multiple comparisons in order to maintain
accurate timekeeping. While only a few measurements are usually adequate
to reliably determine local time to within a second or so, periods of
many hours and dozens of measurements are required to resolve oscillator
skew and maintain local time to the order of a millisecond. Thus, the
accuracy achieved is directly dependent on the time taken to achieve it.
Fortunately, the frequency of measurements can be quite low and almost
always non-intrusive to normal net operations.
Implementation Model
In what may be the most common client/server model a client sends an NTP
message to one or more servers and processes the replies as received.
The server interchanges addresses and ports, overwrites certain fields
in the message, recalculates the checksum and returns the message
immediately. Information included in the NTP message allows the client
to determine the server time with respect to local time and adjust the
local clock accordingly. In addition, the message includes information
to calculate the expected timekeeping accuracy and reliability, as well
as select the best from possibly several servers.
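As an illustration of this exchange (a non-normative sketch in C; the
timestamp names t1 through t4 and the floating-point representation are
conveniences of this example, while the normative fixed-point
computation appears in the packet procedure of Section 3.4.4), the
client can derive the clock offset and roundtrip delay from the four
timestamps of a single request/reply transaction:

    /* Non-normative sketch: clock offset and roundtrip delay from one
     * client/server exchange, with the four timestamps expressed in
     * seconds as doubles.
     *   t1 = client transmit, t2 = server receive,
     *   t3 = server transmit, t4 = client receive
     */
    typedef struct {
        double offset;          /* amount to adjust the local clock */
        double delay;           /* roundtrip delay to the server */
    } ntp_sample;

    static ntp_sample
    compute_sample(double t1, double t2, double t3, double t4)
    {
        ntp_sample s;
        s.offset = ((t2 - t1) + (t3 - t4)) / 2.0;
        s.delay  = (t4 - t1) - (t3 - t2);
        return s;
    }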
While the client/server model may suffice for use on local nets
involving a public server and perhaps many workstation clients, the full
generality of NTP requires distributed participation of a number of
client/servers or peers arranged in a dynamically reconfigurable,
hierarchically distributed configuration. It also requires sophisticated
algorithms for association management, data manipulation and local-clock
control. Throughout the remainder of this document the term host refers
to an instantiation of the protocol on a local processor, while the term
peer refers to the instantiation of the protocol on a remote processor
connected by a network path.
Figure 1 shows an implementation model for a host including
three processes sharing a partitioned data base, with a partition
dedicated to each peer, and interconnected by a message-passing system.
The transmit process, driven by independent timers for each peer,
collects information in the data base and sends NTP messages to the
peers. Each message contains the local timestamp when the message is
sent, together with previously received timestamps and other information
necessary to determine the hierarchy and manage the association. The
message transmission rate is determined by the accuracy required of the
local clock, as well as the accuracies of its peers.
The receive process receives NTP messages and perhaps messages in other
protocols, as well as information from directly connected radio clocks.
When an NTP message is received, the offset between the peer clock and
the local clock is computed and incorporated into the data base along
with other information useful for error determination and peer
selection. A filtering algorithm described in Section 4 improves the
accuracy by discarding inferior data.
The update procedure is initiated upon receipt of a message and at other
times. It processes the offset data from each peer and selects the best
one using the algorithms of Section 4. This may involve many
observations of a few peers or a few observations of many peers,
depending on the accuracies required.
The local-clock process operates upon the offset data produced by the
update procedure and adjusts the phase and frequency of the local clock
using the mechanisms described in Section 5. This may result in either a
step-change or a gradual phase adjustment of the local clock to reduce
the offset to zero. The local clock provides a stable source of time
information to other users of the system and for subsequent reference by
NTP itself.
Network Configurations
The synchronization subnet is a connected network of primary and
secondary time servers, clients and interconnecting transmission paths.
A primary time server is directly synchronized to a primary reference
source, usually a radio clock. A secondary time server derives
synchronization, possibly via other secondary servers, from a primary
server over network paths possibly shared with other services. Under
normal circumstances it is intended that the synchronization subnet of
primary and secondary servers assumes a hierarchical-master-slave
configuration with the primary servers at the root and secondary servers
of decreasing accuracy at successive levels toward the leaves.
Following conventions established by the telephone industry [BEL86], the
accuracy of each server is defined by a number called the stratum, with
the topmost level (primary servers) assigned as one and each level
downwards (secondary servers) in the hierarchy assigned as one greater
than the preceding level. With current technology and available radio
clocks, single-sample accuracies in the order of a millisecond can be
achieved at the network interface of a primary server. Accuracies of
this order require special care in the design and implementation of the
operating system and the local-clock mechanism, such as described in
Section 5.
As the stratum increases from one, the single-sample accuracies
achievable will degrade depending on the network paths and local-clock
stabilities. In order to avoid the tedious calculations [BRA80]
necessary to estimate errors in each specific configuration, it is
useful to assume the mean measurement errors accumulate approximately in
proportion to the measured delay and dispersion relative to the root of
the synchronization subnet. Appendix H contains an analysis of errors,
including a derivation of maximum error as a function of delay and
dispersion, where the latter quantity depends on the precision of the
timekeeping system, frequency tolerance of the local clock and various
residuals. Assuming the primary servers are synchronized to standard
time within known accuracies, this provides a reliable, deterministic
specification on timekeeping accuracies throughout the synchronization
subnet.
Again drawing from the experience of the telephone industry, which
learned such lessons at considerable cost [ABA89], the synchronization
subnet topology should be organized to produce the highest accuracy, but
must never be allowed to form a loop. An additional factor is that each
increment in stratum involves a potentially unreliable time server which
introduces additional measurement errors. The selection algorithm used
in NTP uses a variant of the Bellman-Ford distributed routing algorithm
[37] to compute the minimum-weight spanning trees rooted on the primary
servers. The distance metric used by the algorithm consists of the
(scaled) stratum plus the synchronization distance, which itself
consists of the dispersion plus one-half the absolute delay. Thus, the
synchronization path will always take the minimum number of servers to
the root, with ties resolved on the basis of maximum error.
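The following fragment (a non-normative sketch in C; the constant
STRATUM_SCALE is hypothetical and the normative computation appears in
Section 3.5 and Section 4) illustrates how such a distance metric can
be formed:

    /* Non-normative sketch of the selection metric described above:
     * the (scaled) stratum plus the synchronization distance, where
     * the distance is the dispersion plus one-half the absolute delay.
     * STRATUM_SCALE is a hypothetical weight per stratum level.
     */
    #include <math.h>

    #define STRATUM_SCALE 16.0

    static double
    selection_metric(int stratum, double dispersion, double delay)
    {
        double distance = dispersion + fabs(delay) / 2.0;
        return (double)stratum * STRATUM_SCALE + distance;
    }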
As a result of this design, the subnet reconfigures automatically in a
hierarchical-master-slave configuration to produce the most accurate and
reliable time, even when one or more primary or secondary servers or the
network paths between them fail. This includes the case where all normal
primary servers (e.g., highly accurate WWVB radio clock operating at the
lowest synchronization distances) on a possibly partitioned subnet fail,
but one or more backup primary servers (e.g., less accurate WWV radio
clock operating at higher synchronization distances) continue operation.
However, should all primary servers throughout the subnet fail, the
remaining secondary servers will synchronize among themselves while
distances ratchet upwards to a preselected maximum "infinity"
due to the well-known properties of the Bellman-Ford algorithm. Upon
reaching the maximum on all paths, a server will drop off the subnet and
free-run using its last determined time and frequency. Since these
computations are expected to be very precise, especially in frequency,
even extended outage periods can result in timekeeping errors not
greater than a few milliseconds per day with appropriately stabilized
oscillators (see Section 5).
In the case of multiple primary servers, the spanning-tree computation
will usually select the server at minimum synchronization distance.
However, when these servers are at approximately the same distance, the
computation may result in random selections among them as the result of
normal dispersive delays. Ordinarily, this does not degrade accuracy as
long as any discrepancy between the primary servers is small compared to
the synchronization distance. If not, the filter and selection
algorithms will select the best of the available servers and cast out
outlyers as intended.
Network Time Protocol
This section consists of a formal definition of the Network Time
Protocol, including its data formats, entities, state variables, events
and event-processing procedures. The specification is based on the
implementation model illustrated in Figure 1, but it is not intended
that this model is the only one upon which a specification can be based.
In particular, the specification is intended to illustrate and clarify
the intrinsic operations of NTP, as well as to serve as a foundation for
a more rigorous, comprehensive and verifiable specification.
Data Formats
All mathematical operations expressed or implied herein are in two's-
complement, fixed-point arithmetic. Data are specified as integer or
fixed-point quantities, with bits numbered in big-endian fashion from
zero starting at the left, or high-order, position. Since various
implementations may scale externally derived quantities for internal
use, neither the precision nor decimal-point placement for fixed-point
quantities is specified. Unless specified otherwise, all quantities are
unsigned and may occupy the full field width with an implied zero
preceding bit zero. Hardware and software packages designed to work with
signed quantities will thus yield surprising results when the most
significant (sign) bit is set. It is suggested that externally derived,
unsigned fixed-point quantities such as timestamps be shifted right one
bit for internal use, since the precision represented by the full field
width is seldom justified.
Since NTP timestamps are cherished data and, in fact, represent the main
product of the protocol, a special timestamp format has been
established. NTP timestamps are represented as a 64-bit unsigned fixed-
point number, in seconds relative to 0h on 1 January 1900. The integer
part is in the first 32 bits and the fraction part in the last 32 bits.
This format allows convenient multiple-precision arithmetic and
conversion to Time Protocol representation (seconds), but does
complicate the conversion to ICMP Timestamp message representation
(milliseconds). The precision of this representation is about 200
picoseconds, which should be adequate for even the most exotic
requirements.
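As an illustration of this format (a non-normative sketch in C; the
structure and field names are conveniences of this example, not part of
the specification), a 64-bit timestamp can be held and converted to
floating-point seconds as follows:

    /* Non-normative sketch of the 64-bit NTP timestamp: a 32-bit
     * seconds field relative to 0h on 1 January 1900, followed by a
     * 32-bit fraction in units of 2^-32 s.
     */
    #include <stdint.h>

    struct ntp_timestamp {
        uint32_t seconds;       /* integer part */
        uint32_t fraction;      /* fraction part */
    };

    static double
    ntp_to_seconds(struct ntp_timestamp t)
    {
        return (double)t.seconds + (double)t.fraction / 4294967296.0;
    }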
Timestamps are determined by copying the current value of the local
clock to a timestamp when some significant event, such as the arrival of
a message, occurs. In order to maintain the highest accuracy, it is
important that this be done as close to the hardware or software driver
associated with the event as possible. In particular, departure
timestamps should be redetermined for each link-level retransmission. In
some cases a particular timestamp may not be available, such as when the
host is rebooted or the protocol first starts up. In these cases the 64-
bit field is set to zero, indicating the value is invalid or undefined.
Note that since some time in 1968 the most significant bit (bit 0 of the
integer part) has been set and that the 64-bit field will overflow some
time in 2036. Should NTP be in use in 2036, some external means will be
necessary to qualify time relative to 1900 and time relative to 2036
(and other multiples of 136 years). Timestamped data requiring such
qualification will be so precious that appropriate means should be
readily available. There will exist a 200-picosecond interval,
henceforth ignored, every 136 years when the 64-bit field will be zero
and thus considered invalid.
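To make the 1900 epoch concrete (a non-normative sketch in C; the
constant 2,208,988,800 is the number of seconds from the NTP epoch of
0h 1 January 1900 to the POSIX epoch of 0h 1 January 1970, and the
handling of the 2036 rollover is deliberately left to external means as
noted above), a host keeping POSIX time could form the integer part of
a timestamp as follows:

    /* Non-normative sketch: forming the integer part of an NTP
     * timestamp from POSIX time.  The result wraps modulo 2^32 in
     * 2036, after which the era must be qualified externally.
     */
    #include <stdint.h>
    #include <time.h>

    #define JAN_1970 2208988800UL   /* seconds from 1900 to 1970 */

    static uint32_t
    ntp_seconds_now(void)
    {
        return (uint32_t)((unsigned long)time(NULL) + JAN_1970);
    }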
State Variables and Parameters
Following is a summary of the various state variables and parameters
used by the protocol. They are separated into classes of system
variables, which relate to the operating system environment and local-
clock mechanism; peer variables, which represent the state of the
protocol machine specific to each peer; packet variables, which
represent the contents of the NTP message; and parameters, which
represent fixed configuration constants for all implementations of the
current version. For each class the description of the variable is
followed by its name and the procedure or value which controls it. Note
that variables are in lower case, while parameters are in upper case.
Additional details on formats and use are presented in later sections
and Appendices.
Common Variables
The following variables are common to two or more of the system, peer
and packet classes. Additional variables are specific to the optional
authentication mechanism as described in Appendix C. When necessary to
distinguish between common variables of the same name, the variable
identifier will be used.
Peer Address (peer.peeraddr, pkt.peeraddr), Peer Port (peer.peerport,
pkt.peerport): These are the 32-bit Internet address and 16-bit port
number of the peer.
Host Address (peer.hostaddr, pkt.hostaddr), Host Port (peer.hostport,
pkt.hostport): These are the 32-bit Internet address and 16-bit port
number of the host. They are included among the state variables to
support multi-homing.
Leap Indicator (sys.leap, peer.leap, pkt.leap): This is a two-bit code
warning of an impending leap second to be inserted in the NTP timescale.
The bits are set before 23:59 on the day of insertion and reset after
00:00 on the following day. This causes the number of seconds (rollover
interval) in the day of insertion to be increased or decreased by one.
In the case of primary servers the bits are set by operator
intervention, while in the case of secondary servers the bits are set by
the protocol. The two bits, bit 0 and bit 1, respectively, are coded as
follows:
  00      no warning
  01      last minute has 61 seconds
  10      last minute has 59 seconds
  11      alarm condition (clock not synchronized)
In all except the alarm condition (binary 11), NTP itself does nothing with
these bits, except pass them on to the time-conversion routines that are
not part of NTP. The alarm condition occurs when, for whatever reason,
the local clock is not synchronized, such as when first coming up or
after an extended period when no primary reference source is available.
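For reference (a non-normative sketch in C; the enumerator names are
conveniences of this example), the two-bit codes above might be
represented as:

    /* Non-normative sketch of the two-bit leap-indicator codes. */
    enum leap_indicator {
        LEAP_NO_WARNING  = 0,   /* 00: no warning */
        LEAP_ADD_SECOND  = 1,   /* 01: last minute has 61 seconds */
        LEAP_DEL_SECOND  = 2,   /* 10: last minute has 59 seconds */
        LEAP_NOT_IN_SYNC = 3    /* 11: alarm condition */
    };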
Mode (peer.mode, pkt.mode): This is an integer indicating the
association mode, with values coded as follows:
  0       unspecified
  1       symmetric active
  2       symmetric passive
  3       client
  4       server
  5       broadcast
  6       reserved for NTP control messages
  7       reserved for private use
Stratum (sys.stratum, peer.stratum, pkt.stratum): This is an integer
indicating the stratum of the local clock, with values defined as
follows:
  0       unspecified
  1       primary reference (e.g., calibrated atomic clock, radio clock)
  2-255   secondary reference (via NTP)
For comparison purposes a value of zero is considered greater than any
other value. Note that the maximum value of the integer encoded as a
packet variable is limited by the parameter NTP.MAXSTRATUM.
Poll Interval (sys.poll, peer.hostpoll, peer.peerpoll, pkt.poll): This
is a signed integer indicating the minimum interval between transmitted
messages, in seconds as a power of two. For instance, a value of six
indicates a minimum interval of 64 seconds.
Precision (sys.precision, peer.precision, pkt.precision): This is a
signed integer indicating the precision of the various clocks, in
seconds to the nearest power of two. The value must be rounded to the
next larger power of two; for instance, a 50-Hz (20 ms) or 60-Hz (16.67
ms) power-frequency clock would be assigned the value -5 (31.25 ms),
while a 1000-Hz (1 ms) crystal-controlled clock would be assigned the
value -9 (1.95 ms).
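As an illustration of this rounding rule (a non-normative sketch in C;
the function shown is not part of the protocol), the precision exponent
can be computed as the smallest power of two not less than the clock
tick interval:

    /* Non-normative sketch: derive the precision exponent from the
     * tick interval, rounded to the next larger power of two.  A
     * 20-ms tick yields -5 (31.25 ms); a 1-ms tick yields -9 (1.95 ms).
     */
    #include <math.h>

    static int
    precision_exponent(double tick_seconds)
    {
        return (int)ceil(log2(tick_seconds));
    }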