comp.lang.ada
* how to analyze clock drift
@ 2014-11-18 22:12 Emanuel Berg
  2014-11-19  1:41 ` tmoran
                   ` (2 more replies)
  0 siblings, 3 replies; 42+ messages in thread
From: Emanuel Berg @ 2014-11-18 22:12 UTC (permalink / raw)


I have a long list of samples of clock readings. It
"should" be periodic, with a 1 ms period, but of
course it isn't. The data is in nanoseconds.

Does anyone know how I can apply some math/stat
method to find out the average drift, whether the
drift averages out, or just about any conclusion that
can be drawn from the material?

Also, do you know of any rule of thumb for how many
readings I will need to make it "good science"? I will
of course state how many readings were used in the
examination, but what would you say is a good
benchmark where patterns will (almost certainly) be
visible?

If you know of some tool, that would be great; if you
know of some formula, I'll just write a shell function.
All such things are appreciated, as is any knowledge
you want to share.

The file with the readings looks like this:

85033101286784
85033108461718
85033109544537
85033110621490
85033111714366
85033112794112
85033113871903
85033114934049
85033116009605
85033117089909
85033118169656
85033119256945
85033120336411
85033121409174

and so on.

Hope to hear from you Ada real-time experts :)

-- 
underground experts united

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-18 22:12 how to analyze clock drift Emanuel Berg
@ 2014-11-19  1:41 ` tmoran
  2014-11-19  2:10   ` Emanuel Berg
  2014-11-19 13:08   ` Brian Drummond
  2014-11-19  2:10 ` Simon Clubley
  2014-11-19  2:28 ` Dennis Lee Bieber
  2 siblings, 2 replies; 42+ messages in thread
From: tmoran @ 2014-11-19  1:41 UTC (permalink / raw)


Not exactly an Ada question, but...
The differences between successive values make a sawtooth pattern. Any
idea why?

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19  1:41 ` tmoran
@ 2014-11-19  2:10   ` Emanuel Berg
  2014-11-19 10:30     ` Jacob Sparre Andersen
  2014-11-19 13:08   ` Brian Drummond
  1 sibling, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-19  2:10 UTC (permalink / raw)


tmoran@acm.org writes:

> Not exactly an Ada question, but... The differences
> between successive values make a sawtooth pattern.
> Any idea why?

None.

Here is the file with some 500+ readings:

    http://user.it.uu.se/~embe8573/hs-linux/src/tick_times.log

This should really be fed some old-school batch
number-crunching algorithm.
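
As a starting point, a minimal C++ sketch (the file
name is hypothetical) that prints the successive
differences tmoran refers to:

    #include <cstdint>
    #include <fstream>
    #include <iostream>

    int main()
    {
        std::ifstream in("tick_times.log"); // one reading per line
        std::int64_t prev, cur;
        if (in >> prev)
            while (in >> cur) {
                std::cout << cur - prev << '\n'; // difference in ns
                prev = cur;
            }
    }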

-- 
underground experts united


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-18 22:12 how to analyze clock drift Emanuel Berg
  2014-11-19  1:41 ` tmoran
@ 2014-11-19  2:10 ` Simon Clubley
  2014-11-19  2:37   ` Emanuel Berg
  2014-11-19  2:28 ` Dennis Lee Bieber
  2 siblings, 1 reply; 42+ messages in thread
From: Simon Clubley @ 2014-11-19  2:10 UTC (permalink / raw)


On 2014-11-18, Emanuel Berg <embe8573@student.uu.se> wrote:
> I have a long list of samples of clock readings. It
> "should" be periodic, with a 1 ms period, but of
> course it isn't. The data is in nanoseconds.
>

How are you gathering the data ?

Is the data being stored in a hardware register as the result of some
trigger until it can be collected by your program or is your collector
directly gathering the clock reading ?

IOW, how do you know that the same amount of _actual_ real-world time
has elapsed between the clock samples ?

What's the hardware and software environment ? Is your code running
on bare metal ?

If you are gathering the data directly, instead of via some hardware
buffer, are any instruction/data caches enabled on your processor ?

> Does anyone know how I can apply some math/stat
> method to find out the average drift, whether the
> drift averages out, or just about any conclusion that
> can be drawn from the material?
>

Before you get that far, you need to make sure you are gathering
what you think you are gathering.

Simon.

-- 
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-18 22:12 how to analyze clock drift Emanuel Berg
  2014-11-19  1:41 ` tmoran
  2014-11-19  2:10 ` Simon Clubley
@ 2014-11-19  2:28 ` Dennis Lee Bieber
  2014-11-19  2:44   ` tmoran
  2 siblings, 1 reply; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-19  2:28 UTC (permalink / raw)


On Tue, 18 Nov 2014 23:12:04 +0100, Emanuel Berg <embe8573@student.uu.se>
declaimed the following:

>I have a long list of samples of clock readings. It
>"should" be periodic, with a 1 ms period, but of
>course it isn't. The data is in nanoseconds.
>

	By what criteria? That is, how are the readings being produced? An
interrupt handler with an external millisecond clock source? A
multi-tasking system with variable load, round-robin scheduling?

>Does anyone know how I can apply some math/stat
>method to find out the average drift, whether the
>drift averages out, or just about any conclusion that
>can be drawn from the material?
>

	Feed the data to "R" and play around with various commands? (Maybe the
ones that run cyclic differences... compare deltas of each pair, each
third, etc.)
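
	For instance, a small helper (C++ here rather than R; the name and
types are illustrative) computing such k-step differences:

    #include <cstddef>
    #include <vector>

    // d[i] = x[i + k] - x[i]: deltas of each pair (k = 1),
    // each third (k = 2), and so on.
    std::vector<long long> diffs(const std::vector<long long>& x,
                                 std::size_t k)
    {
        std::vector<long long> d;
        for (std::size_t i = 0; i + k < x.size(); ++i)
            d.push_back(x[i + k] - x[i]);
        return d;
    }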


>Also, do you know of any rule of thumb for how many
>readings I will need to make it "good science"? I will
>of course state how many readings were used in the
>examination, but what would you say is a good
>benchmark where patterns will (almost certainly) be
>visible?
>
	Other than "as many as you can get"... I've never understood how polls
work out their "margin of error".

>If you know of some tool, that would be great; if you

	As mentioned - the R system is just made for statistical operations.

>The file with the readings looks like this:

	What defines "truth" -- I see a significant delta between the first
two, but small change between 2&3...

>
>85033101286784
>85033108461718
>85033109544537
>85033110621490
>85033111714366
>85033112794112
>85033113871903
>85033114934049
>85033116009605
>85033117089909
>85033118169656
>85033119256945
>85033120336411
>85033121409174
>
>and so on.
>
>Hope to hear from you Ada real-time experts :)

	Just some games with Excel... Column 2 is the difference between each
original value and the one before it. Subsequent columns are differences
skipping the appropriate number of items. The bottom row is the mean of each
column, ignoring the extreme (the first difference of the column).

85033101286784
85033108461718  7174934
85033109544537  1082819  -6092115
85033110621490  1076953  -5866  -6097981
85033111714366  1092876  15923  10057  -6082058
85033112794112  1079746  -13130  2793  -3073  -6095188
85033113871903  1077791  -1955  -15085  838  -5028  -6097143
85033114934049  1062146  -15645  -17600  -30730  -14807  -20673  -6112788
85033116009605  1075556  13410  -2235  -4190  -17320  -1397  -7263  -6099378
85033117089909  1080304  4748  18158  2513  558  -12572  3351  -2515  -6094630
85033118169656  1079747  -557  4191  17601  1956  1  -13129  2794  -3072
85033119256945  1087289  7542  6985  11733  25143  9498  7543  -5587  10336
85033120336411  1079466  -7823  -281  -838  3910  17320  1675  -280  -13410
85033121409174  1072763  -6703  -14526  -6984  -7541  -2793  10617  -5028  -6983

(mean)          1078955  -914  -754  -1459  -1641  -1517  466  -2123  -3282
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19  2:10 ` Simon Clubley
@ 2014-11-19  2:37   ` Emanuel Berg
  0 siblings, 0 replies; 42+ messages in thread
From: Emanuel Berg @ 2014-11-19  2:37 UTC (permalink / raw)


Simon Clubley
<clubley@remove_me.eisner.decus.org-Earth.UFP> writes:

>> I have a long list of samples of clock readings. It
>> "should" be periodic, with a 1 ms period, but of
>> course it isn't. The data is in nanoseconds.
>>
>
> How are you gathering the data?

There is a C++ program. It uses the sleep_until
function to control the "1 ms" period. Then time is
dumped using other C++ methods.

No, this isn't the correct time. That is the whole
idea. It isn't correct, but in what ways is it
incorrect - does it drift? does it average out? does
it drift evenly, i.e. is the drift itself constant?
etc.

I want to answer such questions (and others), if
possible with math/stat methods and a large set of
data. But I'm not a star in science - I do tool
programming - so I thought you could help me, I can do
the programming, and it'll be interesting... :)

Here is everything - including docs, even a man page:

    http://user.it.uu.se/~embe8573/hs-linux/

-- 
underground experts united


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19  2:28 ` Dennis Lee Bieber
@ 2014-11-19  2:44   ` tmoran
  2014-11-19  2:51     ` Emanuel Berg
  0 siblings, 1 reply; 42+ messages in thread
From: tmoran @ 2014-11-19  2:44 UTC (permalink / raw)


>>Does anyone know how I can apply some math/stat
>>method to find out the average drift, whether the
>>drift averages out, or just about any conclusion that
>>can be drawn from the material?
>>
>
>    Feed the data to "R" and play around with various commands?

To quote Richard Hamming: "The purpose of computing is insight, not
numbers".  What is it that you want to know here?  How do you hope
to make use of what you learn?


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19  2:44   ` tmoran
@ 2014-11-19  2:51     ` Emanuel Berg
  2014-11-19  9:01       ` Dmitry A. Kazakov
  0 siblings, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-19  2:51 UTC (permalink / raw)


tmoran@acm.org writes:

> To quote Richard Hamming "The purpose of computing
> is insight, not numbers". What is it that you want
> to know here? How do you hope to make use of what
> you learn?

It is a Master's project in CS.

My teacher said those C++ timers aren't rock-solid.

Can I use them anyway? I asked.

Yes, but output the tick times, and analyze just how
"not rock-solid" they are, he said. (Pseudo quotes.)

That's what I'm trying to do.

-- 
underground experts united

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19  2:51     ` Emanuel Berg
@ 2014-11-19  9:01       ` Dmitry A. Kazakov
  2014-11-19 22:12         ` Emanuel Berg
  0 siblings, 1 reply; 42+ messages in thread
From: Dmitry A. Kazakov @ 2014-11-19  9:01 UTC (permalink / raw)


On Wed, 19 Nov 2014 03:51:47 +0100, Emanuel Berg wrote:

> tmoran@acm.org writes:
> 
>> To quote Richard Hamming "The purpose of computing
>> is insight, not numbers". What is it that you want
>> to know here? How do you hope to make use of what
>> you learn?
> 
> It is a Master's project in CS.
> 
> My teacher said those C++ timers aren't rock-solid.

You certainly can use OS services from C++, instead of them.

> Can I use them anyway? I asked.
> 
> Yes, but output the tick times, and analyze just how
> "not rock-solid" they are, he said. (Pseudo quotes.)

You need a reference clock to measure drift and jitter. If you use sleep,
there is a trend you cannot estimate (that is why Ada has delay and delay
until). So you would not know which part of the drift is due to the clock
deviation (that is typically somewhere around 5 µs/s) and which part is due
to the sleep's systematic error.

> That's what I'm trying to do.

Basically you need to identify the model

   Tc = Tr + a * Tr + e

Tc is C++ clock. Tr is the reference clock. Both start at 0 (the epoch). a
is the constant deviation. e is a random error (jitter). Both a and e are
contaminated by the systematic sleep error.

This is a simple linear regression, which you can find in almost any
statistical package or application. You also can implement it yourself.

The regression will give you the value of a and the mean of e. Having a you
can estimate the dispersion of e. It is to be expected that e is not really
normally distributed, so you could try to verify other distribution
hypotheses (statistical packages have tools for that). However, I would
consider that a waste of time.
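
If you do implement it yourself, a minimal sketch (ordinary least squares
in plain C++; the paired clock readings are assumed inputs):

    #include <cstddef>
    #include <vector>

    // Least-squares fit of Tc = slope * Tr + intercept;
    // in the model above, slope = 1 + a.
    struct Fit { double slope, intercept; };

    Fit linear_fit(const std::vector<double>& tr,
                   const std::vector<double>& tc)
    {
        const std::size_t n = tr.size();
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (std::size_t i = 0; i < n; ++i) {
            sx  += tr[i];
            sy  += tc[i];
            sxx += tr[i] * tr[i];
            sxy += tr[i] * tc[i];
        }
        const double slope =
            (n * sxy - sx * sy) / (n * sxx - sx * sx);
        return { slope, (sy - slope * sx) / n };
    }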

Regarding being rock-solid, it is probably the jitter that was meant. In
your case
jitter also includes errors induced by OS services, e.g. by scheduling of
the process and the task, by timer interrupt frequency when the clock is
driven from there etc. All this contributes to the mean and the dispersion
of e.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19  2:10   ` Emanuel Berg
@ 2014-11-19 10:30     ` Jacob Sparre Andersen
  2014-11-19 22:15       ` Emanuel Berg
  2014-11-20  1:10       ` Emanuel Berg
  0 siblings, 2 replies; 42+ messages in thread
From: Jacob Sparre Andersen @ 2014-11-19 10:30 UTC (permalink / raw)


Emanuel Berg wrote:

>     http://user.it.uu.se/~embe8573/hs-linux/src/tick_times.log

Try to subtract a linear fit and plot the result.  There may be a
short-term systematic pattern, but over longer time, it looks like you
have a slowly, randomly drifting function.

Greetings,

Jacob
-- 
"If it's a mess, hide it..." -- J-P. Rosen

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19  1:41 ` tmoran
  2014-11-19  2:10   ` Emanuel Berg
@ 2014-11-19 13:08   ` Brian Drummond
  1 sibling, 0 replies; 42+ messages in thread
From: Brian Drummond @ 2014-11-19 13:08 UTC (permalink / raw)


On Wed, 19 Nov 2014 01:41:19 +0000, tmoran wrote:

> Not exactly an Ada question, but...
> The differences between successive values make a sawtooth pattern. Any
> idea why?

If this is on a networked machine, perhaps an NTP daemon is periodically 
resetting one of the clocks being compared.

I've had trouble with that in the past. Experiment would be to turn that 
off (or disconnect networking for a while) and see if there's a steady 
monotonic drift instead...

- Brian


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19  9:01       ` Dmitry A. Kazakov
@ 2014-11-19 22:12         ` Emanuel Berg
  2014-11-20  9:42           ` Dmitry A. Kazakov
  0 siblings, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-19 22:12 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
writes:

> You certainly can use OS services from C++, instead
> of them.

What I use is the <chrono> stuff, e.g.:

    std::chrono::system_clock::now()
    std::this_thread::sleep_until(re_sched_time)

> You need a reference clock [big cut]

Thank you for that post, it is a bit too advanced for
me, so I'll print it and read it again and probably be
back with some questions.

I got a mail from my teacher - it sounds like a lot of
what you said (?):

    You could [...] calculate with the offsets from
    the desired values [...]

    o0 = t1 - t0 - DESIRED_TICK
    o1 = t2 - t1 - DESIRED_TICK
    o2 = t3 - t2 - DESIRED_TICK
    ...

    where DESIRED_TICK is the tick lengths you were
    aiming for. [...]

    From these points you can easily calculate the
    average, minimum and maximum values and their
    standard deviation. The average will be a measure
    of the drift, the min/max the worst-case
    (observed) behaviors and the standard deviation a
    measure of the stability.

    > Also, you mention the short trace. How long a
    > trace is needed (like a rule-of-thumb) to cover
    > all or most patterns?

    It's difficult to say. As a rule-of-thumb I
    suppose one can say that when the values you are
    calculating don't change significantly with longer
    traces, the trace is long enough. But by the
    nature of the problem you can never know, for
    example, if a larger maximum value would be seen
    if the trace was just a little longer.
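
For the record, a minimal sketch of that calculation (plain C++; the
vector of tick times and the helper name are just for illustration):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Stats { double mean, variance, stddev;
                   std::int64_t min, max; };

    // o_i = t_{i+1} - t_i - desired_tick, then the stats over all o_i.
    Stats offset_stats(const std::vector<std::int64_t>& t,
                       std::int64_t desired_tick) // all in nanoseconds
    {
        std::vector<std::int64_t> o;
        for (std::size_t i = 0; i + 1 < t.size(); ++i)
            o.push_back(t[i + 1] - t[i] - desired_tick);
        double mean = 0;
        for (auto v : o) mean += v;
        mean /= o.size();
        double var = 0;
        for (auto v : o) var += (v - mean) * (v - mean);
        var /= o.size();
        return { mean, var, std::sqrt(var),
                 *std::min_element(o.begin(), o.end()),
                 *std::max_element(o.begin(), o.end()) };
    }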

-- 
underground experts united


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19 10:30     ` Jacob Sparre Andersen
@ 2014-11-19 22:15       ` Emanuel Berg
  2014-11-20 16:27         ` Stephen Leake
  2014-11-20  1:10       ` Emanuel Berg
  1 sibling, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-19 22:15 UTC (permalink / raw)


Jacob Sparre Andersen <jacob@jacob-sparre.dk> writes:

> Try to subtract a linear fit and plot the result.
> There may be a short-term systematic pattern, but
> over longer time, it looks like you have a slowly,
> randomly drifting function.

You mean, I should make a figure (graph) and then
learn from it by inspecting it?

By "subtract a linear fit", do you mean I should
broadly visualize this as a linear function by some
smoothing-out filter even though the data of course
isn't a straight line?

-- 
underground experts united


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19 10:30     ` Jacob Sparre Andersen
  2014-11-19 22:15       ` Emanuel Berg
@ 2014-11-20  1:10       ` Emanuel Berg
  2014-11-20 14:11         ` Dennis Lee Bieber
  1 sibling, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-20  1:10 UTC (permalink / raw)


I have written a program [1] to get the data items
that have been mentioned in different posts in this
thread and in mails.

Does it make sense? If so, what do those digits tell
you?

For these clock ticks (in nanoseconds):

    85033108461718
    85033109544537
    85033110621490
    85033111714366
    85033112794112
    85033113871903
    85033114934049
    85033116009605
    85033117089909
    85033118169656
    85033119256945
    85033120336411
    ...

The output is:

    readings: 543
    mean: 1076366.000000
    variance: 14127140.000000
    standard deviation: 3758.608785
    min: 1062145
    max: 1096507

    1082818
    1076952
    1092875
    1079745
    1077790
    1062145
    1075555
    1080303
    1079746
    1087288
    1079465
    1072762
    ...

[1] http://user.it.uu.se/~embe8573/tick/

-- 
underground experts united


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19 22:12         ` Emanuel Berg
@ 2014-11-20  9:42           ` Dmitry A. Kazakov
  2014-11-20 20:41             ` Emanuel Berg
  0 siblings, 1 reply; 42+ messages in thread
From: Dmitry A. Kazakov @ 2014-11-20  9:42 UTC (permalink / raw)


On Wed, 19 Nov 2014 23:12:41 +0100, Emanuel Berg wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
> writes:
> 
>> You certainly can use OS services from C++, instead
>> of them.
> 
> What I use is the <chrono> stuff, e.g.:
> 
>     std::chrono::system_clock::now()
>     std::this_thread::sleep_until(re_sched_time)

I don't use C++'s libraries, so I cannot tell what the thing actually does.
Usually you would use simple tests to check timer functions, e.g. by waiting
0.01ms (a value much lower than the minimal waitable duration greater than
zero). You do that in a sequence and print the real-time clock (the
reference clock) differences between consecutive calls. That will give you a
first impression of the accuracy of the waitable timer. Depending on the OS
and system settings it can be anywhere from 10ms under Windows to 0.1ms
under VxWorks.
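
A minimal version of such a test with C++ <chrono> (steady_clock as the
reference clock; the 10µs wait and the iteration count are arbitrary):

    #include <chrono>
    #include <iostream>
    #include <thread>

    int main()
    {
        using namespace std::chrono;
        auto prev = steady_clock::now();
        for (int i = 0; i < 20; ++i) {
            std::this_thread::sleep_for(microseconds(10)); // "0.01ms"
            auto now = steady_clock::now();
            std::cout << duration_cast<nanoseconds>(now - prev).count()
                      << '\n';                     // actual elapsed time
            prev = now;
        }
    }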

>> You need a reference clock [big cut]
> 
> Thank you for that post, it is a bit too advanced
> for me,

Come on, linear regression is simple:

http://en.wikipedia.org/wiki/Linear_regression

You can do it in Excel:

http://www.clemson.edu/ces/phoenix/tutorials/excel/regression.html

> so I'll print it and read it again and probably be
> back with some questions.
> 
> I got a mail from my teacher - it sounds like a lot of
> what you said (?):
> 
>     You could [...] calculate with the offsets from
>     the desired values [...]
> 
>     o0 = t1 - t0 - DESIRED_TICK
>     o1 = t2 - t1 - DESIRED_TICK
>     o2 = t3 - t2 - DESIRED_TICK
>     ...
> 
>     where DESIRED_TICK is the tick lengths you were
>     aiming for. [...]

Yes, that is when you already know the expected time. I assumed that the
reference clock and the clock used in wait are different clocks, deviating
at constant speed.

Modern computers have many time sources, with many clocks derived from
them, which is why I assumed so.

When you *know* that the time source is the same, THEN you know that the
deviation a=0, which looks like what your teacher assumed. If a/=0 then you
first estimate a, e.g. using regression, and it will appear in the formula
calculating the differences.

To summarize, you calculate

   D = Tset - Tis

Where Tset is the expected time after waiting and Tis the actual measured
time.

Then you calculate the mean and dispersion of D. The mean must be zero,
i.e. the measured intervals must average DESIRED_TICK. A difference
indicates a systematic error, including the error in the estimated drift,
e.g. when the drift is not really 0. Dispersion (standard deviation)
characterizes jitter.

>     From these points you can easily calculate the
>     average, minimum and maximum values and their
>     standard deviation. The average will be a measure
>     of the drift, the min/max the worst-case
>     (observed) behaviors and the standard deviation a
>     measure of the stability.

Minimum and maximum can be used to verify how far the jitter wanders from
the Normal distribution. Jitter is never distributed quite normally. From
the normal distribution you can get the probabilities of P(D>Max) and
P(D<Min) and compare them with actual values and what the three-sigma rule
says (see below).

>     > Also, you mention the short trace. How long a
>     > trace is needed (like a rule-of-thumb) to cover
>     > all or most patterns?
> 
>     It's difficult to say. As a rule-of-thumb I
>     suppose one can say that when the values you are
>     calculating don't change significantly with longer
>     traces, the trace is long enough. But by the
>     nature of the problem you can never know, for
>     example, if a larger maximum value would be seen
>     if the trace was just a little longer.

For a normal distribution there is so-called three-sigma rule which you
could use to estimate the sample set size:

http://en.wikipedia.org/wiki/68-95-99.7_rule

Of course, jitter is not distributed normally, but it is a good starting
point.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-20  1:10       ` Emanuel Berg
@ 2014-11-20 14:11         ` Dennis Lee Bieber
  0 siblings, 0 replies; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-20 14:11 UTC (permalink / raw)


On Thu, 20 Nov 2014 02:10:53 +0100, Emanuel Berg <embe8573@student.uu.se>
declaimed the following:

>I have written a program [1] to get the data items
>that have been mentioned in different posts in this
>thread and in mails.
>
	Well -- it's not Ada...

>Does it make sense? If so, what do those digits tell
>you?
>

	Nothing without knowledge of the collection method.

	If you aren't using a separate, regulated, clock signal to trigger the
data collection you won't be able to determine clock drift.

	That is, something in the form of (pseudo-code) ...

	t0 = time.now() + msec * 2   -- base time, two periods from now
	t1 = msec                    -- offset of the next tick
	loop
		delay until t0 + t1      -- wait for the next period boundary
		ticks = clock()          -- read the same clock that timed the delay
		write(ticks)             -- log the reading
		t1 = t1 + msec           -- advance to the following tick
	end loop

... is using the same clock for timing as you are trying to analyze...
Doesn't matter how much it drifts -- it is counting based upon some ticks
per second value. The only thing the collected numbers can give you is the
overhead, in clock ticks, from when the delay until "wakes up" to when the
clock() reads the actual clock. That delay can include OS overhead, process
scheduling (just because the delay expired doesn't mean this task
immediately gets CPU time -- there may be other higher priority tasks that
run before it; delay until only promises not to wake up BEFORE the
specified time).

	To determine clock /drift/ you need an external stable signal at some
known frequency, and either a CPU intensive busy-wait (and kick up your
priority so the OS doesn't get in the way <G>); or a relatively high
priority interrupt attached...

	loop
		// wait for external clock to go high
		while xclk.pin() = LOW loop null end loop
		// capture system clock
		ticks = clock()
		write(ticks)
		// wait for external clock to go back low
		while xclk.pin() = HIGH loop null end loop
	end loop
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-19 22:15       ` Emanuel Berg
@ 2014-11-20 16:27         ` Stephen Leake
  0 siblings, 0 replies; 42+ messages in thread
From: Stephen Leake @ 2014-11-20 16:27 UTC (permalink / raw)


Emanuel Berg <embe8573@student.uu.se> writes:

> Jacob Sparre Andersen <jacob@jacob-sparre.dk> writes:
>
>> Try to subtract a linear fit and plot the result.
>> There may be a short-term systematic pattern, but
>> over longer time, it looks like you have a slowly,
>> randomly drifting function.
>
> You mean, I should make a figure (graph) and then
> learn from it by inspecting it?
>
> By "subtract a linear fit", do you mean I should
> broadly visualize this as a linear function by some
> smoothing-out filter even though the data of course
> isn't a straight line?

Just use a standard "linear fit" algorithm
(http://en.wikipedia.org/wiki/Linear_regression,
http://stephe-leake.org/ada/sal.html sal-math_double-linear_fit.ads) to
fit the data to y_fit = mx + b, then plot y - y_fit.

That will show the short term sawtooth, plus any longer term drift.

The short term sawtooth is probably due to the way the software is using
a hardware clock; the clock period doesn't quite divide the desired
software period.
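
With m and b from such a fit, dumping the residuals for plotting could
look like this sketch (names are illustrative):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Print x and the residual y - y_fit, one pair per line,
    // ready for a plotting tool.
    void dump_residuals(const std::vector<double>& x,
                        const std::vector<double>& y,
                        double m, double b)
    {
        for (std::size_t i = 0; i < x.size(); ++i)
            std::printf("%g %g\n", x[i], y[i] - (m * x[i] + b));
    }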

-- 
-- Stephe


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-20  9:42           ` Dmitry A. Kazakov
@ 2014-11-20 20:41             ` Emanuel Berg
  2014-11-20 21:27               ` Dmitry A. Kazakov
  0 siblings, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-20 20:41 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
writes:

> Yes, that is when you already know the expected
> time. I assumed that the reference clock and the
> clock used in wait are different clocks, deviating
> at constant speed.

Yes, I should have told you. The data I posted
yesterday from the Lisp program - the first trace is
the measured tick times in nanoseconds, and the
intended tick is *1 ms*.

The outputs are produced by doing the suggested:

    offset = time2 - time1 - DESIRED_TICK

(I wonder if I made a mistake here - I assume I should
put DESIRED_TICK in nanos as well? - and I don't
remember doing that.)

Anyway, if this method is good, can I draw any
conclusions from the data? For example, how big a mean
would be considered a big drift, how big a deviation
an uneven drift, and so on?

Here is the original data again:

For these clock ticks (in nanoseconds):

    85033108461718
    85033109544537
    85033110621490
    85033111714366
    85033112794112
    85033113871903
    85033114934049
    85033116009605
    85033117089909
    85033118169656
    85033119256945
    85033120336411
    ...

The output is:

    readings: 543
    mean: 1076366.000000
    variance: 14127140.000000
    standard deviation: 3758.608785
    min: 1062145
    max: 1096507

    1082818
    1076952
    1092875
    1079745
    1077790
    1062145
    1075555
    1080303
    1079746
    1087288
    1079465
    1072762
    ...

-- 
underground experts united


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-20 20:41             ` Emanuel Berg
@ 2014-11-20 21:27               ` Dmitry A. Kazakov
  2014-11-20 21:54                 ` Emanuel Berg
  0 siblings, 1 reply; 42+ messages in thread
From: Dmitry A. Kazakov @ 2014-11-20 21:27 UTC (permalink / raw)


On Thu, 20 Nov 2014 21:41:41 +0100, Emanuel Berg wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
> writes:
> 
>> Yes, that is when you already know the expected
>> time. I assumed that the reference clock and the
>> clock used in wait are different clocks, deviating
>> at constant speed.
> 
> Yes, I should have told you.

Then, as others already pointed out, it is NOT clock drift. For clocks to
drift you need two independent time sources.

> The data I posted
> yesterday from the Lisp program - the first trace is
> the measured tick times in nanoseconds, and the
> intended tick is *1 ms*.

You should check the clock resolution. Though the clock precision might be
1ns, that does not mean the resolution is. Depending on the time source it
could be some multiple of the front-side bus frequency (a few nanoseconds)
or 1s in the case of the BIOS clock. Clock resolution is measured by reading
the clock in a sequence of calls and comparing the results. Clocks derived
from real-time counters like the TSC will give a new value on each call.
Clocks derived from poor-quality sources will return a sequence of identical
values and then jump to the next value. This jump is roughly the clock
resolution. Note that clocks with lesser resolution might be more accurate
than higher-resolution clocks. It is not that simple.
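
A minimal resolution test of that kind with C++ <chrono> (for
illustration only):

    #include <chrono>
    #include <iostream>

    int main()
    {
        using namespace std::chrono;
        auto prev = system_clock::now();
        int printed = 0;
        while (printed < 20) {       // show the first 20 jumps
            auto now = system_clock::now();
            if (now != prev) {       // the clock value changed
                std::cout << duration_cast<nanoseconds>(now - prev).count()
                          << '\n';
                prev = now;
                ++printed;
            }
        }
    }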

Regarding 1ms, it is not always possible to wait for 1ms. E.g. under
Windows XP you cannot wait shorter than for 10ms, unless you change system
settings. If you see a saw pattern it is an indicator that 1ms is too short
for your OS. And this has nothing to do with the clock quality. It is the
quality of the waitable timer, which is only remotely related to the clock.

> The outputs are produced by doing the suggested:
> 
>     offset = time2 - time1 - DESIRED_TICK
> 
> (I wonder if I made a mistake here - I assume I should
> put DESIRED_TICK in nanos as well? - and I don't
> remember doing that.)

Of course everything must be in the same time unit.

> Anyway, if this method is good, can I draw any
> conclusions from the data? For example, how big a mean
> would be considered a big drift,

There is no drift, so long the time source is same.

> how big a deviation an uneven drift, and so on?

You should eliminate a possibility of systematic errors first. See above.

What you actually have measured is certainly not the clock drift or jitter.
You cannot measure them without an independent time source (another clock).
Clock drift and jitter are relative terms.

> Here is the original data again:
> 
> For these clock ticks (in nanoseconds):
> 
>     85033108461718
>     85033109544537
>     85033110621490
>     85033111714366
>     85033112794112
>     85033113871903
>     85033114934049
>     85033116009605
>     85033117089909
>     85033118169656
>     85033119256945
>     85033120336411
>     ...
> 
> The output is:
> 
>     readings: 543
>     mean: 1076366.000000
>     variance: 14127140.000000
>     standard deviation: 3758.608785
>     min: 1062145
>     max: 1096507
> 
>     1082818
>     1076952
>     1092875
>     1079745
>     1077790
>     1062145
>     1075555
>     1080303
>     1079746
>     1087288
>     1079465
>     1072762
>     ...

This is not bad for a waitable timer. Which is what you actually measured.

BTW, typically such timer measurements are made once without load (e.g. you
set the test program to the highest priority) and once under stress load
(e.g. you run a CPU-consuming background process or a heavy-duty I/O
process). For timers it is important to work reliably under time-sharing
load.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-20 21:27               ` Dmitry A. Kazakov
@ 2014-11-20 21:54                 ` Emanuel Berg
  2014-11-20 21:57                   ` Emanuel Berg
  2014-11-21  2:27                   ` Dennis Lee Bieber
  0 siblings, 2 replies; 42+ messages in thread
From: Emanuel Berg @ 2014-11-20 21:54 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
writes:

> E.g. under Windows XP you cannot wait shorter than
> for 10ms, unless you change system settings. If you
> see a saw pattern it is an indicator that 1ms is too
> short for your OS.

I use Debian:

    Linux debian 3.17.1 #9 SMP Fri Nov 7 23:05:01 CET
    2014 x86_64 GNU/Linux

On this CPU:

    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                2
    On-line CPU(s) list:   0,1
    Thread(s) per core:    1
    Core(s) per socket:    2
    Socket(s):             1
    NUMA node(s):          1
    Vendor ID:             AuthenticAMD
    CPU family:            15
    Model:                 35
    Model name:            AMD Athlon(tm) 64 X2 Dual Core Processor 3800+
    Stepping:              2
    CPU MHz:               1000.000
    CPU max MHz:           2000.0000
    CPU min MHz:           1000.0000
    BogoMIPS:              1989.92
    L1d cache:             64K
    L1i cache:             64K
    L2 cache:              512K
    NUMA node0 CPU(s):     0,1

>>> Yes, that is when you already know the expected
>>> time. I assumed that the reference clock and the
>>> clock used in wait are different clocks, deviating
>>> at constant speed.
>>  Yes, I should have told you.
>
> Then, as others already pointed out, it is NOT clock
> drift. For clocks to drift you need two independent
> time sources.

No, I mean the deviation isn't constant. The period
is *supposed* to be constant, but it isn't - that's
the whole thing - and to what degree and with what
characteristics? That's what I want to examine.

The program is a hierarchical scheduler. It has a
global period which you can specify (in ms). For the
trace I showed you, the period is 1 ms, and the
readings are in nanos. The trace is the measured,
actual times.

Here is how it works:

1. I specify the global period to 1 ms.

2. In the C++ program, at every tick (supposedly every
   1 ms) I log the actual time. I do both those things
   - interrupt at every tick, and output the actual
   time - with the C++ <chrono> stuff (see the sketch
   after this list).

3. I execute the C++ program, and get the trace, which
   I get the stats from with the Lisp program.

4. Now I want to understand the stats (what they
   express, because I understand how to compute them
   assuming the Lisp program is correct).
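
For concreteness, a minimal sketch of the logging loop in steps 1-3 (not
the actual program; the period constant and iteration count are just for
illustration):

    #include <chrono>
    #include <iostream>
    #include <thread>

    int main()
    {
        using namespace std::chrono;
        const auto period = milliseconds(1);     // the global period
        auto next = system_clock::now() + period;
        for (int i = 0; i < 543; ++i) {
            std::this_thread::sleep_until(next); // wait for the tick
            auto now = system_clock::now();      // log the actual time
            std::cout << duration_cast<nanoseconds>(
                             now.time_since_epoch()).count() << '\n';
            next += period;
        }
    }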

> Of course everything must be in the same time unit.
> ...
> This is not bad ... for a waitable timer. Which is
> what you actually measured.

That data isn't correct - I forgot to convert the
desired tick from millis to nanos. I'll fix that and
post the correct data here in a minute.

-- 
underground experts united

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-20 21:54                 ` Emanuel Berg
@ 2014-11-20 21:57                   ` Emanuel Berg
  2014-11-21  2:27                   ` Dennis Lee Bieber
  1 sibling, 0 replies; 42+ messages in thread
From: Emanuel Berg @ 2014-11-20 21:57 UTC (permalink / raw)


Emanuel Berg <embe8573@student.uu.se> writes:

> That data isn't correct - I forgot to convert the
> desired tick from millis to nanos. I'll fix that and
> post the correct data here in a minute.

Here is the correct data:

readings: 543
mean: 76367.000000
variance: 14127140.000000
standard deviation: 3758.608785
min: 62146
max: 96508

82819
76953
92876
79746
77791
62146
75556
80304
79747
...

-- 
underground experts united

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-20 21:54                 ` Emanuel Berg
  2014-11-20 21:57                   ` Emanuel Berg
@ 2014-11-21  2:27                   ` Dennis Lee Bieber
  2014-11-21  3:02                     ` Emanuel Berg
  1 sibling, 1 reply; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-21  2:27 UTC (permalink / raw)


On Thu, 20 Nov 2014 22:54:13 +0100, Emanuel Berg <embe8573@student.uu.se>
declaimed the following:

>
>No, I mean the deviation isn't constant. The period
>is *supposed* to be constant, but it isn't - that's
>the whole thing - and to what degree and with what
>characteristics? That's what I want to examine.
>

	Without a separate clock, it is not /drift/... It is OS latency in the
process of reading the clock after the delay expires... And that latency is
going to vary by how many other processes are running at the same or higher
priority.

	Delay expires, task is put at the end of its priority queue, and you
only get to /read/ the clock value when the OS gives your task its quantum
of time.

>Here is how it works:
>
>1. I specify the global period to 1 ms.
>
>2. In the C++ program, at every tick (supposedly every
>   1 ms) I log the actual time. I do both those things

	Define "actual time" -- if that actual time is coming from the same
clock, the clock drift is masked. The clock says, say, every 1000 "ticks"
is 1microsec... So, after 1000000 "ticks" your millisec delay expires.
Without an external timebase (frequency standard) it doesn't matter if your
processor counts 1000000 ticks in one millisec or 10 seconds -- if the
system has been programmed to assume 1000 ticks is a microsecond, that is
what it will increment.

>3. I execute the C++ program, and get the trace, which
>   I get the stats from with the Lisp program.
>

	And what do C++ and Lisp have to do with Ada (since I'm reading this in
comp.lang.ada)?

>4. Now I want to understand the stats (what they
>   express, because I understand how to compute them
>   assuming the Lisp program is correct).
>

	Isn't that basically your assignment?

	To interpret the numbers you first have to define what you are
measuring.

	If there is a "sawtooth", running an autocorrelation function over the
data might indicate the period. Then it becomes a task to explain such a
period -- perhaps the OS resolution is some fraction of your desired value,
and what you are seeing is an accumulating error until the next multiple...
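
	A plain autocorrelation for that purpose could look like this sketch
(C++; the offsets are assumed to be in a vector, and the lag of the first
strong peak would suggest the sawtooth period):

    #include <cstddef>
    #include <vector>

    // Autocorrelation of x at the given lag; 1 at lag 0,
    // with peaks near multiples of any periodic component.
    double autocorr(const std::vector<double>& x, std::size_t lag)
    {
        double mean = 0;
        for (double v : x) mean += v;
        mean /= x.size();
        double num = 0, den = 0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            den += (x[i] - mean) * (x[i] - mean);
            if (i + lag < x.size())
                num += (x[i] - mean) * (x[i + lag] - mean);
        }
        return num / den;
    }
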
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-21  2:27                   ` Dennis Lee Bieber
@ 2014-11-21  3:02                     ` Emanuel Berg
  2014-11-21 16:49                       ` Dennis Lee Bieber
  0 siblings, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-21  3:02 UTC (permalink / raw)


Dennis Lee Bieber <wlfraed@ix.netcom.com> writes:

> Without a separate clock, it is not /drift/... It is
> OS latency in the process of reading the clock after
> the delay expires... And that latency is going to
> vary by how many other processes are running at the
> same or higher priority.
>
> Delay expires, task is put at the end of its
> priority queue, and you only get to /read/ the clock
> value when the OS gives your task its quantum of
> time.

I am aware there are many factors behind why this is
as it is, including the OS, the configuration and
nature of other processes running in parallel,
scheduling and priorities, and so on almost
indefinitely. However, I'm not analyzing those factors
here; I'm only analyzing how the output data differs
from the ideal, perfect time, which is defined (here)
as a tick every millisecond. The "drift" mentioned is
the drift from that perfect time.

>> Here is how it works:
>>
>> 1. I specify the global period to 1 ms.
>>
>> 2. In the C++ program, at every tick (supposedly
>>    every 1 ms) I log the actual time. I do both those
>>    things
>
> Define "actual time" -- if that actual time is
> coming from the same clock, the clock drift is
> masked.

"Actual time" is defined, in the usage above, as: the
time in nanoseconds that the C++ outputs every global
tick, which is programmed to occur every millisecond
but doesn't keep to that, in ways that are the target
of the inquiry.

>> 4. Now I want to understand the stats (what they
>> express, because I understand how to compute them
>> assuming the Lisp program is correct).
>
> Isn't that basically your assignment?

Well. I have a 39-page report at this point. This is
a short section, but the report is made up of short
sections, for sure.

> To interpret the numbers you first have to define
> what you are measuring.

I'm measuring how the logged times differs from an
imaginary perfect log that would look:

1000000
2000000
3000000
...

> If there is a "sawtooth", running an autocorrelation
> function over the data might indicate the period.
> Then it becomes a task to explain such a period --
> perhaps the OS resolution is some fraction of your
> desired value, and what you are seeing is an
> accumulating error until the next multiple...

No, I'm not going to do that. I'm only going to show
the data and what the data means. What is lacking is
some sort of conclusion, because the stats don't
really tell you anything if you don't know what is
normal - is it bad or good, and how would that show
(i.e., could that be computed as well - perhaps as a
function of all values...), stuff like that.

Here is what I've written so far. Feel free to suggest
improvements. I'll paste the LaTeX source, I think you
can read it just the same:

\subsection{the C++ and Linux timers}

To uphold an even rate of periodic scheduling, the C++
library function \texttt{std::this\_thread::sleep\_until} is
used. However, on

\url{http://en.cppreference.com/w/cpp/thread/sleep_until}

they mention that \texttt{sleep\_until} may actually ``block
for longer [than what has been specified] due to
scheduling or resource contention delays''.

To get a grip on how jittery the global tick is, \texttt{-l}
(or equivalently \texttt{--log}) can be used to have the
hierarchical scheduler output the current time (in
nanoseconds), at every tick, again, according to the
\texttt{<chrono>} library functions.

Here is a sample output for a period of one
millisecond:

\begin{verbatim}

    85033684319264
    85033685397613
    85033686471213
    85033687542299
    85033688624839
    85033689696763
    85033690770643
    85033691846478
    85033692929297
    ...

\end{verbatim}

The following method is then used to calculate the
offsets from the intended tick times:

\begin{verbatim}

  offset_0 = time_1 - time_0 - DESIRED_TICK
  offset_1 = time_2 - time_1 - DESIRED_TICK
  ...

\end{verbatim}

This produced, for this example run:

\begin{verbatim}

  82819
  76953
  92876
  79746
  77791
  62146
  75556
  80304
  79747
  ...

\end{verbatim}

Last, for all 543 offsets acquired in this example
computation, the following statistical data were
acquired:

\begin{verbatim}

    readings:            543
    mean:                76367.000000
    variance:            14127140.000000
    standard deviation:  3758.608785
    min:                 62146
    max:                 96508

\end{verbatim}

The mean value is a measure of the size of the drift.
The minimum and maximum values are the worst-case
behaviors for this particular run: they are the
smallest and biggest distances observed off the
intended tick time. The standard deviation is a
measure of the stability of the drift.

-- 
underground experts united


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-21  3:02                     ` Emanuel Berg
@ 2014-11-21 16:49                       ` Dennis Lee Bieber
  2014-11-21 21:06                         ` Emanuel Berg
  2014-11-21 21:15                         ` Emanuel Berg
  0 siblings, 2 replies; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-21 16:49 UTC (permalink / raw)


On Fri, 21 Nov 2014 04:02:34 +0100, Emanuel Berg <embe8573@student.uu.se>
declaimed the following:

>
>I am aware there are many factors behind why this is
>as it is, including the OS, the configuration and
>nature of other processes running in parallel,
>scheduling and priorities, and so on almost
>indefinitely. However, I'm not analyzing those factors
>here; I'm only analyzing how the output data differs
>from the ideal, perfect time, which is defined (here)
>as a tick every millisecond. The "drift" mentioned is
>the drift from that perfect time.
>
	It is not drift as most of the industry would define it.

>
>"Actual time" is defined, in the usage above, as: the
>time in nanoseconds that the C++ outputs every global
>tick, which is programmed to occur every millisecond
>but doesn't keep to that, in ways that are the target
>of the inquiry.
>

	Nothing in your test, as described, can be used to confirm that
"target" statement, as nothing in your description is measuring the rate of
the clock itself.

	If your system suddenly changed the clock speed without making changes
in the ticks-per-second value, your program would still report "1ms" but
those would take place at 2ms real intervals.

	To measure actual clock variation you MUST have an independent
time-base for comparison.


>
>I'm measuring how the logged times differs from an
>imaginary perfect log that would look:
>
>1000000
>2000000
>3000000
>...
>
	<snip>

>No, I'm not going to do that. I'm only going to show
>the data and what the data means. What is lacking is
>some sort of conclusion, because the stats don't
>really tell you anything if you don't know what is
>normal - is it bad or good, and how would that show
>(i.e., could that be computed as well - perhaps as a
>function of all values...), stuff like that.
>

	Which still comes down to "what are you measuring". You are NOT
measuring real clock "drift" since you do not have reliable time base
standard with which to determining the system reported times.

	The ONLY conclusion /I/ could draw from this experiment is that you are
measuring the OS LATENCY between the expiration of a delay statement and
the reading of the clock counter. If you know the system ticks-per-second
of the clock I could put the conclusion on firmer footing (eg: the latency
between delay statement expiration to reading of the clock value is NNN
clock ticks +/- mmm ticks){since you don't have an external measurement on
your clock, times reported are all relative to clock ticks passing; a slow
clock still reports the same number of ticks as a fast clock}

	Try it: change your delay from a nominal 1msec to, say, 50msec. Rerun
the experiment. I'd expect your differences to still be very similar to the
numbers you obtained at 1msec -- i.e. the differences are in the latency
between delay and read operations, independent of the timer rate itself.

	Actually... Also try using a delay of 0.0 -- that should be recording
only the latency between a delay statement that immediately returns and the
reading of the clock value. If the clock is slow enough, you'll likely get
a delta of 0s with the occasional 1 when the clock updates.


	While this
(http://msdn.microsoft.com/en-us/library/windows/hardware/jj602805%28v=vs.85%29.aspx)
is discussing Windows, this paragraph may be of interest:

"""
The system time is updated on every tick of the system clock, and is
accurate only to the latest tick. If the caller specifies an absolute
expiration time, the expiration of the timer is detected during processing
of the first system clock tick that occurs after the specified time. Thus,
the timer can expire as much as one system clock period later than the
specified absolute expiration time. If a timer interval, or relative
expiration time, is instead specified, the expiration can occur up to a
period earlier than or a period later than the specified time, depending on
where exactly the start and end times of this interval fall between system
clock ticks. Regardless of whether an absolute or a relative time is
specified, the timer expiration might not be detected until even later if
interrupt processing for the system clock is delayed by interrupt
processing for other devices.
"""
>Here is what I've written so far. Feel free to suggest
>improvements. I'll paste the LaTeX source, I think you
>can read it just the same:
>
>\subsection{the C++ and Linux timers}
>
>To uphold an even rate of periodic scheduling, the C++
>library function \texttt{std::this\_thread::sleep\_until} is
>used. However, on
>
>\url{http://en.cppreference.com/w/cpp/thread/sleep_until}
>
>they mention that \texttt{sleep\_until} may actually ``block
>for longer [than what has been specified] due to
>scheduling or resource contention delays''.
>
>To get a grip on how jittery the global tick is, \texttt{-l}
>(or equivalently \texttt{--log}) can be used to have the
>hierarchical scheduler output the current time (in
>nanoseconds), at every tick, again, according to the
>\texttt{<chrono>} library functions.
>
>Here is a sample output for a period of one
>millisecond:
>
>\begin{verbatim}
>
>    85033684319264
>    85033685397613
>    85033686471213
>    85033687542299
>    85033688624839
>    85033689696763
>    85033690770643
>    85033691846478
>    85033692929297
>    ...
>
>\end{verbatim}
>
>The following method is then used to calculate the
>offsets from the intended tick times:
>
>\begin{verbatim}
>
>  offset_0 = time_1 - time_0 - DESIRED_TICK
>  offset_1 = time_2 - time_1 - DESIRED_TICK
>  ...
>
>\end{verbatim}
>
>This produced, for this example run:
>
>\begin{verbatim}
>
>  82819
>  76953
>  92876
>  79746
>  77791
>  62146
>  75556
>  80304
>  79747
>  ...
>
>\end{verbatim}
>
>Last, for all 543 offsets acquired in this example
>computation, the following statistical data were
>acquired:
>
>\begin{verbatim}
>
>    readings:            543
>    mean:                76367.000000
>    variance:            14127140.000000
>    standard deviation:  3758.608785
>    min:                 62146
>    max:                 96508
>
>\end{verbatim}
>
>The mean value is a measure of the size of the drift.
>The minimum and maximum values are the worst-case
>behaviors for this particular run: they are the
>smallest and biggest distances observed off the
>intended tick time. The standard deviation is a
>measure of the stability of the drift.
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-21 16:49                       ` Dennis Lee Bieber
@ 2014-11-21 21:06                         ` Emanuel Berg
  2014-11-22 18:18                           ` Dennis Lee Bieber
  2014-11-21 21:15                         ` Emanuel Berg
  1 sibling, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-21 21:06 UTC (permalink / raw)


Dennis Lee Bieber <wlfraed@ix.netcom.com> writes:

>> I am aware there are many factors behind why this
>> is as it is, including the OS, the configuration and
>> nature of other processes running in parallel,
>> scheduling and priorities, and so on almost
>> indefinitely. However, I'm not analyzing those
>> factors here; I'm only analyzing how the output data
>> differs from the ideal, perfect time, which is
>> defined (here) as a tick every millisecond. The
>> "drift" mentioned is the drift from that perfect
>> time.
>>
> It is not drift as most of the industry would
> define it.

OK. I'll CC this to my teacher (including the full
quotes of your message) and see what he says.

Intuitively, I think it is clear: time is perfect but
the clock isn't - it is a close but never perfect
estimate of time. And there will always be some drift
away (otherwise you could just reset the clock if it
were "correctly" off by some constant margin), so with
time the clock will get even worse (i.e., drift
further away from time).

>> "Actual time" is defined, in the usage above, as:
>> the time in nanoseconds that the C++ outputs every
>> global tick, which is programmed to occur every
>> millisecond but doesn't keep to that, in ways that
>> are the target of the inquiry.
>
> Nothing in your test, as described, can be used to
> confirm that "target" statement, as nothing in your
> description is measuring the rate of the clock
> itself.
>
> If your system suddenly changed the clock speed
> without making changes in the ticks-per-second
> value, your program would still report "1ms" but
> those would take place at 2ms real intervals.
>
> To measure actual clock variation you MUST have an
> independent time-base for comparison.

Are you saying:

1. we know that sleep_until isn't perfect - it can
   sleep longer than x, for an argument x, as the docs
   say

2. I am attempting to log the error of sleep_until
   with now() but that can't be done as now() uses the
   same technology and thus can likewise be incorrect

*Or*, are you saying:

we can't trust now() *at all* because it invokes an OS
routine which can give it whatever data?

If you are saying the second, I think it is assumed
that the system time is an adequate measure of
physical (perfect) time. So the ideal time I describe
above is assumed to be what is output by now(), and
that is assumed to be an adequately correct reading of
how long the delay (sleep_until) actually sleeps, i.e.
its drift from the desired perfect periodicity.
(Again, I'm CC-ing this to my teacher, hopefully he'll
mail me and I'll get back on this, anyway this is how
I always assumed the situation so this discussion is
really great if it helps to remove a
misunderstanding.)

> Which still comes down to "what are you measuring".
> You are NOT measuring real clock "drift" since you do
> not have reliable time base standard with which to
> determining the system reported times.

Yes, now I understand what you mean. I think it is
agreed that the system time is good enough for the
purposes of this project. This isn't about examining
the lack of quality of the <chrono> stuff in general,
it is about specifically logging the "over-delay" of
sleep_until, which now() is deemed capable of.

> The ONLY conclusion /I/ could draw from this
> experiment is that you are measuring the OS LATENCY
> between the expiration of a delay statement and the
> reading of the clock counter. If you know the system
> ticks-per-second of the clock I could put the
> conclusion on firmer footing (eg: the latency
> between delay statement expiration to reading of the
> clock value is NNN clock ticks +/- mmm ticks){since
> you don't have an external measurement on your
> clock, times reported are all relative to clock
> ticks passing; a slow clock still reports the same
> number of ticks as a fast clock}
>
> Try it: change your delay from a nominal 1msec to,
> say, 50msec. Rerun the experiment. I'd expect your
> differences to still be very similar to the numbers
> you obtained at 1msec -- i.e. the differences are in
> the latency between delay and read operations,
> independent of the timer rate itself.
>
> Actually... Also try using a delay of 0.0 -- that
> should be recording only the latency between a delay
> statement that immediately returns and the reading
> of the clock value. If the clock is slow enough,
> you'll likely get a delta of 0s with the occasional
> 1 when the clock updates.
>
> While this
> (http://msdn.microsoft.com/en-us/library/windows/hardware/jj602805%28v=vs.85%29.aspx)
> is discussing Windows, this paragraph may be of
> interest:
>
> """ The system time is updated on every tick of the
> system clock, and is accurate only to the latest
> tick. If the caller specifies an absolute expiration
> time, the expiration of the timer is detected during
> processing of the first system clock tick that
> occurs after the specified time. Thus, the timer can
> expire as much as one system clock period later than
> the specified absolute expiration time. If a timer
> interval, or relative expiration time, is instead
> specified, the expiration can occur up to a period
> earlier than or a period later than the specified
> time, depending on where exactly the start and end
> times of this interval fall between system clock
> ticks. Regardless of whether an absolute or a
> relative time is specified, the timer expiration
> might not be detected until even later if interrupt
> processing for the system clock is delayed by
> interrupt processing for other devices. """

-- 
underground experts united

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-21 16:49                       ` Dennis Lee Bieber
  2014-11-21 21:06                         ` Emanuel Berg
@ 2014-11-21 21:15                         ` Emanuel Berg
  2014-11-21 22:31                           ` Emanuel Berg
  1 sibling, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-21 21:15 UTC (permalink / raw)


Dennis Lee Bieber <wlfraed@ix.netcom.com> writes:

> To measure actual clock variation you MUST have an
> independent time-base for comparison.

I'm starting to think this is a good idea. I thought
the problem was sleep_until specifically, not all the
C++ <chrono> stuff, in which case using <chrono> to
diagnose itself of course can't be done.

So then, where do I get a better clock?

My Linux gets its time from this command, I think (it
has been a while since I mucked around with that stuff)

    ntpdate pool.ntp.org

Is that considered reliable enough that it makes
sense to diagnose sleep_until with it instead of with
C++'s now()?

-- 
underground experts united

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-21 21:15                         ` Emanuel Berg
@ 2014-11-21 22:31                           ` Emanuel Berg
  0 siblings, 0 replies; 42+ messages in thread
From: Emanuel Berg @ 2014-11-21 22:31 UTC (permalink / raw)


Emanuel Berg <embe8573@student.uu.se> writes:

> Is that considered reliable enough that it makes
> sense to diagnose sleep_until with it instead of
> with C++'s now()?

On the other hand, isn't that from where now() gets
its data?

I have checked the following docs:

    http://en.cppreference.com/w/cpp/chrono/system_clock/now
    http://en.cppreference.com/w/cpp/thread/sleep_until

For sleep_until, they say:

    The clock tied to sleep_time is used, which means
    that adjustments of the clock are taken into
    account. Thus, the duration of the block might,
    but might not, be less or more than sleep_time -
    Clock::now() at the time of the call, depending on
    the direction of the adjustment. The function also
    may block for longer than until after sleep_time
    has been reached due to scheduling or resource
    contention delays.

But for now(), they don't say a word about it being
unreliable, so it should be good enough to diagnose
sleep_until.
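
(As an aside: the Ada analogue of this measurement -- a
minimal sketch, my code, not from the thread -- uses
the monotonic Ada.Real_Time clock both to set the
wake-up time and to read the overshoot, so clock
adjustments cannot distort either side:)

-=-=-=-=-
-- Sketch: log the over-delay of "delay until" with the
-- monotonic Ada.Real_Time clock.
with Ada.Text_IO;   use Ada.Text_IO;
with Ada.Real_Time; use Ada.Real_Time;

procedure Overshoot is
   Period : constant Time_Span := Milliseconds (1);
   Wake   : Time := Clock + Period;
begin
   for I in 1 .. 10 loop
      delay until Wake;   -- analogue of sleep_until
      -- Clock - Wake is the over-delay for this round
      Put_Line
        (Duration'Image (To_Duration (Clock - Wake)));
      Wake := Wake + Period;   -- keep the nominal grid
   end loop;
end Overshoot;
-=-=-=-=-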

-- 
underground experts united

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-21 21:06                         ` Emanuel Berg
@ 2014-11-22 18:18                           ` Dennis Lee Bieber
  2014-11-23 20:15                             ` Emanuel Berg
  0 siblings, 1 reply; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-22 18:18 UTC (permalink / raw)


On Fri, 21 Nov 2014 22:06:03 +0100, Emanuel Berg <embe8573@student.uu.se>
declaimed the following:

>
>Are you saying:
>
>1. we know that sleep_until isn't perfect - it can
>   sleep longer than x, for an argument x, as the docs
>   say
>
>2. I am attempting to log the error of sleep_until
>   with now() but that can't be done as now() uses the
>   same technology and thus can likewise be incorrect
>
>*Or*, are you saying:
>
>we can't trust now() *at all* because it invokes an OS
>routine which can give it whatever data?
>
	All three apply. You can not use the system clock to measure errors in
the system clock.

	Sleep_Until may be off by one system tick (which itself may be a rather
large quantity -- 15.6mSec is common in Windows OS, though I believe
privileged code can change that); Sleep_Until and Now are both using the
same clock, so if the clock ran slow (meaning the sleep runs longer than
nominal), Now will report the slow time value -- not the real time that is
longer; and Now may be affected by any background updates of the system
time (NTP updates, for example).

	At the least, you need to know the granularity at which "now" is
updated, as that is likely to also be the granularity at which sleep_until
triggers.
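
	(A quick way to estimate that granularity empirically -- my sketch,
not part of the original experiment -- is to spin on the clock and record
the smallest step it ever takes:)

-=-=-=-=-
-- Sketch: estimate the real-time clock granularity by busy-reading
-- the clock and reporting the smallest observed increment.
with Ada.Text_IO;   use Ada.Text_IO;
with Ada.Real_Time; use Ada.Real_Time;

procedure Granularity is
   Prev     : Time := Clock;
   Next     : Time;
   Smallest : Time_Span := Time_Span_Last;
begin
   for I in 1 .. 1_000 loop
      loop
         Next := Clock;
         exit when Next /= Prev;   -- spin until the value changes
      end loop;
      if Next - Prev < Smallest then
         Smallest := Next - Prev;
      end if;
      Prev := Next;
   end loop;
   Put_Line ("Smallest observed clock step: "
             & Duration'Image (To_Duration (Smallest)));
end Granularity;
-=-=-=-=-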

	System time is determined by some counter (of ticks) divided by some
constant representing the expected ticks per second (TPS). Regardless of
how the clock is varying, it will report that one second has passed when
TPS ticks have been counted. (Worse, the OS ticks are, themselves, based
upon some number of hardware clock cycles OR some external clock interrupt
-- given that my current systems have a "turbo boost" in which all but one
core is halted while the running core gets clocked much faster, having the
system tick based on the CPU clock frequency would be hairy)

	In order to determine clock drift, you must measure how many ticks
really took place relative to an external time-base (which is presumed to
be "truth" -- eg; a 1 pulse-per-second output from a cesium time-base
[atomic clock]). You would read the system clock value each time the
external time base pulses some interrupt line (or you use a tight polling
loop). You also need to know the system's defined value for TPS.
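
	(To make the arithmetic concrete -- my example, not from the thread:
if the external time-base delivers one pulse per second and the system
clock advances by S = 3600.18 s over N = 3600 pulses, the drift is
(S - N) / N = +50 ppm, or about +4.3 seconds per day.)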

(apologies for another M$ reference -- worse, .NET specific, and the .NET
runtime may be doing some other translations from the hardware and OS clock
ticks. -- on Windows you could expect to see the clock value update in
jumps of 156000 ticks -- unless it internally calls some other counter to
get finer resolution)

http://msdn.microsoft.com/en-us/library/system.timespan.tickspersecond%28v=vs.110%29.aspx

	That 15.6mSec would also affect the jitter in your measurements as you
could start a delay at either just before an update or just after the
update -- that means 15mSec of potential variance.

	Borrowing from the Python documentation:
"""
time.clock() 
On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition of the
meaning of “processor time”, depends on that of the C function of the same
name, but in any case, this is the function to use for benchmarking Python
or timing algorithms.

On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the Win32
function QueryPerformanceCounter(). The resolution is typically better than
one microsecond.
"""
"""
time.sleep(secs) 
Suspend execution for the given number of seconds. The argument may be a
floating point number to indicate a more precise sleep time. The actual
suspension time may be less than that requested because any caught signal
will terminate the sleep() following execution of that signal’s catching
routine. Also, the suspension time may be longer than requested by an
arbitrary amount because of the scheduling of other activity in the system.
"""
"""
time.time() 
Return the time in seconds since the epoch as a floating point number. Note
that even though the time is always returned as a floating point number,
not all systems provide time with a better precision than 1 second. While
this function normally returns non-decreasing values, it can return a lower
value than a previous call if the system clock has been set back between
the two calls.
"""

	In Python, to do really fine sleeps requires using a busy loop as the
system sleep routine is too coarse (especially in Windows) -- and it makes
delay-until tricky to implement as time.clock() is not wall-clock related.

	
	Note that the Python clock() call, on Windows systems, is not using the
system clock, but rather the finer resolution performance counters -- but
the sleep() call is implied to be using the system clock [Python doesn't
implement a sleep_until()].
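
	(The busy-loop technique is language-neutral; an Ada rendering of
it -- my sketch, not from the Python docs -- looks like this:)

-=-=-=-=-
-- Sketch: busy-wait until a target time, for when the OS sleep
-- granularity is too coarse. Burns a full core while it spins.
with Ada.Real_Time; use Ada.Real_Time;

procedure Busy_Wait_Demo is
   procedure Busy_Wait_Until (T : Time) is
   begin
      while Clock < T loop
         null;   -- no OS sleep involved; pure polling
      end loop;
   end Busy_Wait_Until;
begin
   Busy_Wait_Until (Clock + Microseconds (10));
end Busy_Wait_Demo;
-=-=-=-=-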

>If you are saying the second, I think it is assumed
>that the system time is an adequate measure of
>physical (perfect) time. So the ideal time I describe
>above is assumed to be what is outputted by now, and
>that is assumed to be an adequately correct reading of
>how much the delay (sleep_until) actually sleeps, i.e.
>its drift from the desired perfect periodicity.

	Let me try to put this into a more physical example... An old fashioned
alarm clock...

	Set the alarm for, say, 6AM (this is the Delay_Until). Now, when the
alarm goes off, you read the clock face to determine when it went off...
The clock will show 6AM -- even if the clock is losing one minute per hour.
After two days, the clock still shows the alarm going off at 6AM, but it is
going off 48 minutes away from a standard clock.

	What your numbers are showing, to the best of my interpretation, is the
equivalent of you going somewhere else in your house and waiting for the
delay to expire (the alarm goes off), and then running through the house to
get to the room with the clock and reading the face and using /that/ as the
time it went off (6:02 if you had to navigate a few floors) ("that" is the
latency from when the OS detected the delay expired and rescheduled the
task through to when the read operation captured the clock value).

-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-22 18:18                           ` Dennis Lee Bieber
@ 2014-11-23 20:15                             ` Emanuel Berg
  2014-11-24  1:15                               ` Dennis Lee Bieber
  0 siblings, 1 reply; 42+ messages in thread
From: Emanuel Berg @ 2014-11-23 20:15 UTC (permalink / raw)


Dennis Lee Bieber <wlfraed@ix.netcom.com> writes:

>> Are you saying:
>>
>> 1. we know that sleep_until isn't perfect - it can
>> sleep longer than x, for an argument x, as the docs
>> say
>>
>> 2. I am attempting to log the error of sleep_until
>> with now() but that can't be done as now() uses the
>> same technology and thus can likewise be incorrect
>>
>> *Or*, are you saying:
>>
>> we can't trust now() *at all* because it invokes an
>> OS routine which can give it whatever data?
>>
> All three apply. You can not use the system clock to
> measure errors in the system clock.
>
> Sleep_Until may be off by one system tick (which
> itself may be a rather large quantity -- 15.6mSec is
> common in Windows OS, though I believe privileged
> code can change that); Sleep_Until and Now are both
> using the same clock, so if the clock ran slow
> (meaning the sleep runs longer than nominal), Now
> will report the slow time value -- not the real time
> that is longer; and Now may be affected by any
> background updates of the system time (NTP updates,
> for example).
>
> At the least, you need to know the granularity at
> which "now" is updated, as that is likely to also be
> the granularity at which sleep_until triggers.
>
> System time is determined by some counter (of ticks)
> divided by some constant representing the expected
> ticks per second (TPS). Regardless of how the clock
> is varying, it will report that one second has
> passed when TPS ticks have been counted. (Worse, the
> OS ticks are, themselves, based upon some number of
> hardware clock cycles OR some external clock
> interrupt -- given that my current systems have a
> "turbo boost" in which all but one core is halted
> while the running core gets clocked much faster,
> having the system tick based on the CPU clock
> frequency would be hairy)
>
> In order to determine clock drift, you must measure
> how many ticks really took place relative to an
> external time-base (which is presumed to be "truth"
> -- eg; a 1 pulse-per-second output from a cesium
> time-base [atomic clock]). You would read the system
> clock value each time the external time base pulses
> some interrupt line (or you use a tight polling
> loop). You also need to know the system's defined
> value for TPS.
>
> (apologies for another M$ reference -- worse, .NET
> specific, and the .NET runtime may be doing some
> other translations from the hardware and OS clock
> ticks. -- on Windows you could expect to see the
> clock value update in jumps of 156000 ticks --
> unless it internally calls some other counter to get
> finer resolution)
>
> http://msdn.microsoft.com/en-us/library/system.timespan.tickspersecond%28v=vs.110%29.aspx
>
> That 15.6mSec would also affect the jitter in your
> measurements as you could start a delay at either
> just before an update or just after the update --
> that means 15mSec of potential variance.
>
> Borrowing from the Python documentation: """
> time.clock() On Unix, return the current processor
> time as a floating point number expressed in
> seconds. The precision, and in fact the very
> definition of the meaning of “processor time”,
> depends on that of the C function of the same name,
> but in any case, this is the function to use for
> benchmarking Python or timing algorithms.
>
> On Windows, this function returns wall-clock seconds
> elapsed since the first call to this function, as a
> floating point number, based on the Win32 function
> QueryPerformanceCounter(). The resolution is
> typically better than one microsecond. """ """
> time.sleep(secs) Suspend execution for the given
> number of seconds. The argument may be a floating
> point number to indicate a more precise sleep time.
> The actual suspension time may be less than that
> requested because any caught signal will terminate
> the sleep() following execution of that signal’s
> catching routine. Also, the suspension time may be
> longer than requested by an arbitrary amount because
> of the scheduling of other activity in the system.
> """ """ time.time() Return the time in seconds since
> the epoch as a floating point number. Note that even
> though the time is always returned as a floating
> point number, not all systems provide time with a
> better precision than 1 second. While this function
> normally returns non-decreasing values, it can
> return a lower value than a previous call if the
> system clock has been set back between the two
> calls. """
>
> In Python, to do really fine sleeps requires using a
> busy loop as the system sleep routine is too coarse
> (especially in Windows) -- and it makes delay-until
> tricky to implement as time.clock() is not
> wall-clock related.
>
> Note that the Python clock() call, on Windows
> systems, is not using the system clock, but rather
> the finer resolution performance counters -- but the
> sleep() call is implied to be using the system clock
> [Python doesn't implement a sleep_until()].
>
>> If you are saying the second, I think it is assumed
>> that the system time is an adequate measure of
>> physical (perfect) time. So the ideal time I
>> describe above is assumed to be what is outputted
>> by now, and that is assumed to be an adequately
>> correct reading of how much the delay (sleep_until)
>> actually sleeps, i.e. its drift from the desired
>> perfect periodicity.
>
> Let me try to put this into a more physical
> example... An old fashioned alarm clock...
>
> Set the alarm for, say, 6AM (this is the
> Delay_Until). Now, when the alarm goes off, you read
> the clock face to determine when it went off... The
> clock will show 6AM -- even if the clock is losing
> one minute per hour. After two days, the clock still
> shows the alarm going off at 6AM, but it is going
> off 48 minutes away from a standard clock.
>
> What your numbers are showing, to the best of my
> interpretation, is the equivalent of you going
> somewhere else in your house and waiting for the
> delay to expire (the alarm goes off), and then
> running through the house to get to the room with
> the clock and reading the face and using /that/ as
> the time it went off (6:02 if you had to navigate a
> few floors) ("that" is the latency from when the OS
> detected the delay expired and rescheduled the task
> through to when the read operation captured the
> clock value).

That's a lot to digest :) But I understand the last
example, and that is how I thought about it all along,
so I'll keep the section in the report as it stands.
But I should add another paragraph explaining "it is
complicated", or at least change the terminology
before the examination (mine), so people won't think I
am doing the kind of clock analysis you describe (with
an external atomic clock) and be all confused about it
(them, or them thinking me, or both). I never intended
to do anything like that, and that's where the
confusion started. So if I leave it as it is, people
will likely be confused the exact same way once more.
So instead of using the terms drift and jitter - or
can I keep them? - I should say, in your words:

> the latency from when the OS detected the delay
> expired and rescheduled the task through to when the
> read operation captured the clock value

I can explain that, and then add like, "If you were to
employ this system in a critical setting, it would be
necessary to have a much more reliable clock to
trigger the interrupts. Although you can have widely
diverging results due to many factors, to illustrate
the lack of periodicity, and to some degree how that
behaves and fluctuates [I'm referring to the stats
here], run the program with `-l' and study the
outputs..." (I'm not going to put it exactly like that,
but you get the idea.) Is that better?

-- 
underground experts united

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-23 20:15                             ` Emanuel Berg
@ 2014-11-24  1:15                               ` Dennis Lee Bieber
  2014-11-24  1:34                                 ` Emanuel Berg
  2014-11-24  8:44                                 ` Dmitry A. Kazakov
  0 siblings, 2 replies; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-24  1:15 UTC (permalink / raw)


On Sun, 23 Nov 2014 21:15:05 +0100, Emanuel Berg <embe8573@student.uu.se>
declaimed the following:

>I can explain that, and then add like, "If you were to
>employ this system in a critical setting, it would be
>necessary to have a much more reliable clock to
>trigger the interrupts. Although you can have widely
>diverging results due to many factors, to illustrate
>the lack of periodicity, and to some degree how that
>behaves and fluctuates [I'm referring to the stats
>here], run the program with `-l' and study the
>outputs..." (I'm not going to put it exactly like that,
>but you get the idea.) Is that better?

	Well -- but you haven't /tested/ the reliability of the clock itself;
only of the latency in responding to it...

	And just to put this back into an Ada context, I did spend some time
hacking (and I confess to the folks just how badly hacked this is... I've
not used these packages before, and getting stuff from one package's
internal form into something I could manipulate for stats took some
klutzing around) my attempt at collecting some data.

-=-=-=-=-=-
-- Simple experiment of Timer/Clock operation

with Text_IO; use Text_IO;
with Ada.Real_Time;
with Ada.Numerics.Generic_Elementary_Functions;

procedure Timer is

   package Rt renames Ada.Real_Time;

   type Statistic is digits 15;
   package M is new Ada.Numerics.Generic_Elementary_Functions (Statistic);


   procedure Runner (Ivl : Rt.Time_Span) is
      Num_Samples : constant Integer := 500;

      Samples     : array (1 .. Num_Samples) of Statistic;

      -- statistics stuff
      Sum         : Statistic := 0.0;
      Sum_Var     : Statistic := 0.0;
      Mean        : Statistic;
      Min         : Statistic;
      Max         : Statistic;
      Variance    : Statistic;

      Start       : Rt.Time;
      Stop        : Rt.Time;

      Log_File : File_Type;

      use type Rt.Time_Span;

   begin
      Put_Line ("Generating data for Time span: "
		& Duration'Image (Rt.To_Duration (Ivl)));

      Create (File => Log_File,
	      Mode => Out_File,
	      Name => "T"
	      & Duration'Image (Rt.To_Duration (Ivl))
	      & ".log");

      New_Line (Log_File);
      Put_Line (Log_File, "Data for Rt.Time span: "
		& Duration'Image (Rt.To_Duration (Ivl)));
      New_Line (Log_File);

      -- Just a bit of Rt.Time waster before starting actual loop
      -- probably not needed as I'm capturing the Rt.Clock before
      -- and after the delay statement
      delay until Rt.Clock + Ivl + Ivl;

      for I in 1 .. Num_Samples loop

	 -- capture the Rt.Clock at the start of the Rt.Time delay
	 Start := Rt.Clock;

	 -- delay until the captured Rt.Clock plus one Rt.Time-span interval
	 delay until Start + Ivl;

	 -- capture the Rt.Clock after the delay expired
	 Stop := Rt.Clock;

	 -- record the difference between stop and start Rt.Clock values
	 -- less the expected interval;
	 Put_Line (Log_File, Duration'Image (
	    Rt.To_Duration (Stop - Start - Ivl)));
	 Samples (I) := Statistic (Rt.To_Duration (Stop - Start - Ivl));

      end loop;

      -- compute statistics
      Min := Samples (1);
      Max := Samples (1);

      for I in 1 .. Num_Samples loop
	 Sum := Sum + Samples (I);
	 if Samples (I) > Max then
	    Max := Samples (I);
	 end if;
	 if Samples (I) < Min then
	    Min := Samples (I);
	 end if;
      end loop;

      Mean := Sum / Statistic (Num_Samples);

      for I in 1 .. Num_Samples loop
	 Sum_Var := Sum_Var + (Samples (I) - Mean) * (Samples (I) - Mean);
      end loop;
      Variance := Sum_Var / Statistic (Num_Samples - 1);


      Put_Line ("Statistics");
      New_Line;

      Put_Line ("Max:       " & Statistic'Image (Max));
      Put_Line ("Min:       " & Statistic'Image (Min));
      Put_Line ("Mean:      " & Statistic'Image (Mean));
--        Put_Line ("Variance:  " & Statistic'Image (Variance));
      Put_Line ("Std. Dev.: " & Statistic'Image (M.Sqrt (Variance)));

      New_Line(5);

   end Runner;

begin

   Put_Line ("Time Span Unit is " &
	       Duration'Image (Rt.To_Duration (Rt.Time_Span_Unit)));
   New_Line;

   Runner (Rt.Nanoseconds (1));
   Runner (Rt.Nanoseconds (10));
   Runner (Rt.Nanoseconds (100));
   Runner (Rt.Microseconds (1));
   Runner (Rt.Microseconds (10));
   Runner (Rt.Microseconds (100));
   Runner (Rt.Milliseconds (1));
   Runner (Rt.Milliseconds (10));
   Runner (Rt.Milliseconds (100));

end Timer;
-=-=-=-=-
Time Span Unit is  0.000000001

Generating data for Time span:  0.000000001
Statistics

Max:        8.45100000000000E-06
Min:        3.00000000000000E-07
Mean:       8.97321999999994E-07
Std. Dev.:  4.72299434498552E-07





Generating data for Time span:  0.000000010
Statistics

Max:        1.80100000000000E-06
Min:        2.91000000000000E-07
Mean:       8.82286000000004E-07
Std. Dev.:  1.24592289815071E-07





Generating data for Time span:  0.000000100
Statistics

Max:        1.40900000000000E-06
Min:        2.01000000000000E-07
Mean:       7.67528000000000E-07
Std. Dev.:  1.59364913224758E-07





Generating data for Time span:  0.000001000
Statistics

Max:        8.12000000000000E-07
Min:        5.09000000000000E-07
Mean:       7.43505999999995E-07
Std. Dev.:  1.25971025885818E-07





Generating data for Time span:  0.000010000
Statistics

Max:        9.31900000000000E-06
Min:        2.63000000000000E-07
Mean:       6.92286000000001E-07
Std. Dev.:  9.61985163666378E-07





Generating data for Time span:  0.000100000
Statistics

Max:        1.59120000000000E-05
Min:        2.15000000000000E-07
Mean:       7.22622000000002E-07
Std. Dev.:  1.10564358388459E-06





Generating data for Time span:  0.001000000
Statistics

Max:        5.10477000000000E-04
Min:        3.43000000000000E-07
Mean:       2.97962600000000E-06
Std. Dev.:  3.19996443996105E-05





Generating data for Time span:  0.010000000
Statistics

Max:        5.31683000000000E-04
Min:        4.19000000000000E-07
Mean:       5.65434000000001E-06
Std. Dev.:  4.44572946856412E-05





Generating data for Time span:  0.100000000
Statistics

Max:        5.01349000000000E-04
Min:        5.74000000000000E-07
Mean:       3.99113399999998E-06
Std. Dev.:  3.06653091148658E-05

-=-=-=-=-

	One thing not seen in the above is that under Windows, there are
uncontrollable events that will throw a data point out to the extreme...
Look at the millisecond data (0.001). The max latency was 5.1E-4, while the
mean was 2.9E-6. In a run of 500 samples, only 2 or 3 data points jumped to
that high value. That's a sign of the OS doing some house-keeping and
blocking the program from responding. But to see it requires plotting the
data -- once seen, one can attempt to explain that data point. Excluding
those data points will bring the mean down a small amount, but will reduce
the standard deviation significantly.

	Also note that for intervals less than 1microsecond, the latency swamps
the delay. Even for 1microsecond, the latency is 0.7microseconds (the
numbers shown above are AFTER the expected delay value has been subtracted
from the stop-start clock times, leaving only the latency from delay
expiration to the read).
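
	(To make the outlier point concrete, here is a sketch -- my addition,
with made-up data -- of trimming points more than a few standard
deviations from the untrimmed mean before recomputing:)

-=-=-=-=-
-- Sketch: recompute the mean after excluding samples more than
-- K standard deviations from the untrimmed mean. K = 2.0 here;
-- with hundreds of samples 3.0 is the conventional choice.
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Numerics.Elementary_Functions;
use  Ada.Numerics.Elementary_Functions;

procedure Trim_Outliers is
   Samples : constant array (Positive range <>) of Float :=
     (0.9e-6, 1.1e-6, 0.8e-6, 1.0e-6, 0.7e-6,
      1.2e-6, 0.9e-6, 1.0e-6, 1.1e-6, 510.0e-6);  -- fake data
   K        : constant Float := 2.0;
   Sum      : Float := 0.0;
   Var      : Float := 0.0;
   Trim_Sum : Float := 0.0;
   Kept     : Natural := 0;
   Mean, Sd : Float;
begin
   for S of Samples loop
      Sum := Sum + S;
   end loop;
   Mean := Sum / Float (Samples'Length);

   for S of Samples loop
      Var := Var + (S - Mean) ** 2;
   end loop;
   Sd := Sqrt (Var / Float (Samples'Length - 1));

   for S of Samples loop
      if abs (S - Mean) <= K * Sd then   -- keep only the bulk
         Trim_Sum := Trim_Sum + S;
         Kept     := Kept + 1;
      end if;
   end loop;

   Put_Line ("Untrimmed mean:" & Float'Image (Mean));
   Put_Line ("Trimmed mean:  " & Float'Image (Trim_Sum / Float (Kept)));
end Trim_Outliers;
-=-=-=-=-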

	
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24  1:15                               ` Dennis Lee Bieber
@ 2014-11-24  1:34                                 ` Emanuel Berg
  2014-11-24  9:22                                   ` Jacob Sparre Andersen
  2014-11-24 17:30                                   ` Dennis Lee Bieber
  2014-11-24  8:44                                 ` Dmitry A. Kazakov
  1 sibling, 2 replies; 42+ messages in thread
From: Emanuel Berg @ 2014-11-24  1:34 UTC (permalink / raw)


Dennis Lee Bieber <wlfraed@ix.netcom.com> writes:

> And just to put this back into an Ada context

There is no doubt that Ada is cooler than C++, but Ada
was perhaps too cool, so people didn't master it in
large enough numbers; instead C++ conquered, if not
the world, then almost, at least in the 90s. Correct
interpretation?

What I've learned is that Ada was developed by the US
military (the DoD) because all branches of the US
military were using different programming languages,
virtually reinventing the wheel all the time. This
time around, it wouldn't be that way, so there was a
competition to get a better language so everyone would
do it once and in compatible ways. Only Ada was too
good, so people didn't feel confident using it and
stuck to their old-fashioned languages where at least
they still felt like number-one programmers. Now Ada
can be found in real-time programming, trains and
other vehicles, and stuff like that.

The only Ada I did was what they told me to do at the
university. Lots of technologies I found by myself,
but I didn't find Ada, and actually I would be
surprised if I ever get to do it again.

Anyway, here is one thing we did. Probably very
schoolboyish to you professional Adaites (?)
["professional" as in either your wallet or your
heart, or both, optionally], but it may be interesting
for you to see how Ada looks in the university world,
to students just getting a general computer education:

http://user.it.uu.se/~embe8573/part4/

-- 
underground experts united


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24  1:15                               ` Dennis Lee Bieber
  2014-11-24  1:34                                 ` Emanuel Berg
@ 2014-11-24  8:44                                 ` Dmitry A. Kazakov
  2014-11-24 17:24                                   ` Dennis Lee Bieber
  1 sibling, 1 reply; 42+ messages in thread
From: Dmitry A. Kazakov @ 2014-11-24  8:44 UTC (permalink / raw)


On Sun, 23 Nov 2014 20:15:36 -0500, Dennis Lee Bieber wrote:

[...]
> 	One thing not seen in the above, is that under Windows, there are
> uncontrollable events that will throw a data point out to the extreme...
> Look at the millisecond data (0.001). The max latency was 5.1E-4, while the
> mean was 2.9E-6. In a run of 500 samples, only 2 or 3 data points jumped to
> that high value. That's a sign of the OS doing some house-keeping and
> blocking the program from responding. But to see it requires plotting the
> data -- once seen, one can attempt to explain that data point. Excluding
> those data points will bring the mean down a small amount, but will reduce
> the standard deviation significantly.

A few notes:

1. It is impossible under Windows (Win32) to wait (non-busily) for a time
period shorter than 1ms. This is the highest possible resolution (but not
accuracy) of all waitable services.

Furthermore, depending on the Windows version, you should probably change
the minimum timer resolution. The function for this is timeBeginPeriod from
winmm.dll. 

http://msdn.microsoft.com/en-us/library/windows/desktop/dd757624%28v=vs.85%29.aspx

I don't remember if Win32Ada includes it, but it is no problem to call it
without bindings (see the sketch after these notes).

2. If there is an assumption that some other processes intervene, you could
change the priority of the task. E.g.

http://msdn.microsoft.com/en-us/library/windows/desktop/ms686277%28v=vs.85%29.aspx

Setting THREAD_PRIORITY_TIME_CRITICAL blocks practically everything (except
for drivers).

However, I doubt that background processes or services are the problem
when waiting for 10 or 5 ms.

The rule of thumb for Windows is that anything below 5ms is unreliable.
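
(A minimal sketch of calling timeBeginPeriod without bindings, as
mentioned above -- my code, assuming GNAT on Windows and linking
against winmm:)

-=-=-=-=-
-- Sketch: request 1 ms timer resolution via winmm's timeBeginPeriod,
-- imported directly (no Win32Ada needed). Pair with timeEndPeriod.
with Ada.Text_IO; use Ada.Text_IO;
with Interfaces.C;

procedure Timer_Resolution is
   use type Interfaces.C.unsigned;
   function timeBeginPeriod (uPeriod : Interfaces.C.unsigned)
     return Interfaces.C.unsigned;
   pragma Import (Stdcall, timeBeginPeriod, "timeBeginPeriod");
   pragma Linker_Options ("-lwinmm");
begin
   if timeBeginPeriod (1) = 0 then   -- 0 = TIMERR_NOERROR
      Put_Line ("Timer resolution set to 1 ms");
   else
      Put_Line ("timeBeginPeriod failed");
   end if;
   --  ... timing-sensitive work here; call timeEndPeriod (1) after ...
end Timer_Resolution;
-=-=-=-=-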

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24  1:34                                 ` Emanuel Berg
@ 2014-11-24  9:22                                   ` Jacob Sparre Andersen
  2014-11-24 17:30                                   ` Dennis Lee Bieber
  1 sibling, 0 replies; 42+ messages in thread
From: Jacob Sparre Andersen @ 2014-11-24  9:22 UTC (permalink / raw)


Emanuel Berg wrote:

> http://user.it.uu.se/~embe8573/part4/

Are you aware of the problem with your "Random_Integer" function?  (Has
the exercise been graded?)

Greetings,

Jacob
-- 
»You have to blow things up to get anything useful.«
                                  -- Archchancellor Ridcully

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24  8:44                                 ` Dmitry A. Kazakov
@ 2014-11-24 17:24                                   ` Dennis Lee Bieber
  2014-11-24 18:28                                     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-24 17:24 UTC (permalink / raw)


On Mon, 24 Nov 2014 09:44:39 +0100, "Dmitry A. Kazakov"
<mailbox@dmitry-kazakov.de> declaimed the following:

>Setting THREAD_PRIORITY_TIME_CRITICAL blocks practically everything (except
>for drivers).
>
>However, I doubt that background processes or services are the problem
>when waiting for 10 or 5 ms.
>
	Many moons ago I had a task using a GFE W98 (!) laptop [needed to be
W9x to allow direct access to the parallel port]. Wasn't a timer situation
-- it was more a busy wait since the main timing control was obtained from
an external (1KHz as I recall) clock to a signal pin on the parallel port.
The task required writing 6-bits on each clock transition (I forget if
L->H, or H->L) -- the 6-bits representing three RS-422-style balanced data
lines... 

	Even with the priority values (as I recall, there were two that needed
to be set, a process class priority and a priority within the class) at
maximum, Windows still tended to trash the data transfer every 250mSec or
so with some overhead operation.

	Fortunately, that laptop never did go operational (the overall
assignment was to use the laptop as a mini-command formatter to transfer
/red/ GPS decryption keys to a GPS receiver in the testing lab without
causing the entire lab to go limited-access classified -- instead the
laptop would be locked in a safe when not loading keys). By the time the
key loading was really needed, the CONOPS changed to /black/ keys which
could go through the normal lab equipment.

	If given that assignment today -- I'd recommend dropping the parallel
port requirement and using something like an Arduino (a BASIC Stamp 2p has
enough memory and pins to support it, just not the speed)... Let the laptop
transfer the command string to the Arduino, which would then handle the
clock synch and output. Definitely easier to communicate with <G>
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24  1:34                                 ` Emanuel Berg
  2014-11-24  9:22                                   ` Jacob Sparre Andersen
@ 2014-11-24 17:30                                   ` Dennis Lee Bieber
  1 sibling, 0 replies; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-24 17:30 UTC (permalink / raw)


On Mon, 24 Nov 2014 02:34:41 +0100, Emanuel Berg <embe8573@student.uu.se>
declaimed the following:

>Dennis Lee Bieber <wlfraed@ix.netcom.com> writes:
>
>> And just to put this back into an Ada context
>
>There is no doubt that Ada is cooler than C++, but Ada

	Thing is -- I've been on this discussion for a whole week -- in a
newsgroup named comp.lang.ada -- and haven't seen anything Ada-related. I
don't know if you've been posting to multiple groups or not; if you have,
the others aren't making it through the headers to here.

	I actually did try to follow the function calls you were citing --
loading Visual Studio Express -- but I don't think M$ has implemented all
of them (as I recall, M$ had no plans to even provide C99 standard
compliance, and getting C++2011 compliance in a 2012 VS Express would be a
miracle)
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24 17:24                                   ` Dennis Lee Bieber
@ 2014-11-24 18:28                                     ` Dmitry A. Kazakov
  2014-11-24 20:30                                       ` brbarkstrom
  0 siblings, 1 reply; 42+ messages in thread
From: Dmitry A. Kazakov @ 2014-11-24 18:28 UTC (permalink / raw)


On Mon, 24 Nov 2014 12:24:15 -0500, Dennis Lee Bieber wrote:

> On Mon, 24 Nov 2014 09:44:39 +0100, "Dmitry A. Kazakov"
> <mailbox@dmitry-kazakov.de> declaimed the following:
> 
>>Setting THREAD_PRIORITY_TIME_CRITICAL blocks practically everything (except
>>for drivers).
>>
>>However, I doubt that background processes or services are the problem
>>when waiting for 10 or 5 ms.
>>
> 	Many moons ago I had a task using a GFE W98 (!) laptop [needed to be
> W9x to allow direct access to the parallel port].

Windows 98 cannot be compared with NT.

> Wasn't a timer situation
> -- it was more a busy wait since the main timing control was obtained from
> an external (1KHz as I recall) clock to a signal pin on the parallel port.
> The task required writing 6-bits on each clock transition (I forget if
> L->H, or H->L) -- the 6-bits representing three RS-422-style balanced data
> lines... 

You should have written a proper driver for this with interrupts and
deferred I/O.

BTW, TIME_CRITICAL blocks even deferred I/O processing, if I remember
correctly. Windows internals are not as bad as they are painted.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24 18:28                                     ` Dmitry A. Kazakov
@ 2014-11-24 20:30                                       ` brbarkstrom
  2014-11-24 21:03                                         ` Dmitry A. Kazakov
  0 siblings, 1 reply; 42+ messages in thread
From: brbarkstrom @ 2014-11-24 20:30 UTC (permalink / raw)


On Monday, November 24, 2014 1:28:59 PM UTC-5, Dmitry A. Kazakov wrote:
> On Mon, 24 Nov 2014 12:24:15 -0500, Dennis Lee Bieber wrote:
> 
> > On Mon, 24 Nov 2014 09:44:39 +0100, "Dmitry A. Kazakov"
> > declaimed the following:
> > 
> >>Setting THREAD_PRIORITY_TIME_CRITICAL blocks practically everything (except
> >>for drivers).
> >>
> >>However, I doubt that background processes or services are the problem
> >>when waiting for 10 or 5 ms.
> >>
> > 	Many moons ago I had a task using a GFE W98 (!) laptop [needed to be
> > W9x to allow direct access to the parallel port].
> 
> Windows 98 cannot be compared with NT.
> 
> > Wasn't a timer situation
> > -- it was more a busy wait since the main timing control was obtained from
> > an external (1KHz as I recall) clock to a signal pin on the parallel port.
> > The task required writing 6-bits on each clock transition (I forget if
> > L->H, or H->L) -- the 6-bits representing three RS-422-style balanced data
> > lines... 
> 
> You should have written a proper driver for this with interrupts and
> deferred I/O.
> 
> BTW, TIME_CRITICAL blocks even deferred I/O processing, if I remember
> correctly. Windows internals are not as bad as they are painted.
> 
> -- 
> Regards,
> Dmitry A. Kazakov


If you want to get time standard information, you can start with the
very short background piece from Wikipedia:

http://en.wikipedia.org/wiki/Standard_time_and_frequency_signal_service

The US Government Agency that maintains time standards is a division within
the National Institute of Standards and Technology (NIST).  They broadcast
time signals from WWV.  The Web page that provides the entry to this
information is

http://www.nist.gov/pml/div688/grp40/wwv.cfm

The following page gives suggestions on how to access time signals from
a computer connected to the Internet:

http://www.nist.gov/pml/div688/grp40/its.cfm

The following pdf has information on computer time-keeping, although I'm
not sure what its date is.  Even so, this may be useful reading.

http://tf.nist.gov/service/pdf/computertime.pdf

A standard reference on time and related astronomical matters is

Seidelmann, P. K., 2006: Explanatory Supplement to the Astronomical
Almanac: Completely Revised and Rewritten, University Science Books,
Sausalito, CA.

If you want to look into algorithms with a modern (Bayesian) flavor,
you might look at

Pole, A., West, M., and Harrison, J., 1994: Applied Bayesian Forecasting
and Time Series Analysis, Chapman & Hall/CRC, Boca Raton, FL

This is pretty readable, even though it may seem a bit old.  There's
an interesting (and probably useful) piece of DOS software that might
be fun to update if you can get copyright permission and have some
spare time.  If your taste runs to much fancier math, there's a whole
special interest group of the Society for Industrial and Applied Mathematics
(SIAM) that's devoted to Uncertainty Quantification.

Sorry to be a bit late in responding.

Bruce B.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24 20:30                                       ` brbarkstrom
@ 2014-11-24 21:03                                         ` Dmitry A. Kazakov
  2014-11-24 21:34                                           ` brbarkstrom
  2014-11-25 14:04                                           ` brbarkstrom
  0 siblings, 2 replies; 42+ messages in thread
From: Dmitry A. Kazakov @ 2014-11-24 21:03 UTC (permalink / raw)


On Mon, 24 Nov 2014 12:30:53 -0800 (PST), brbarkstrom@gmail.com wrote:

> If you want to get time standard information, you can start with the
> very short background piece from Wikipedia:
> 
> http://en.wikipedia.org/wiki/Standard_time_and_frequency_signal_service

The earth radius is Re = 6_371_000 m. Pi * Re / c, where c is the speed of
light, 299_792_458 m/s, gives a rough estimate of the delay the signal
takes traveling from the US to the EU, ignoring refraction, interference,
amplifiers, encoders/decoders etc. This is a catastrophic 67 ms. It could
be improved by statistical processing to, maybe, 10 ms or so. Now compare
that with the resolution of a typical real-time clock, which is >3 ns!

Add here the times required to sample the signal and to pass it through
the system layers, and you will understand how poor the thing is for any
time measurement (except maybe for measuring continental drift (:-)).

Fortunately, neither global time signals nor NTP is needed for time
measurements, clock drift included.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24 21:03                                         ` Dmitry A. Kazakov
@ 2014-11-24 21:34                                           ` brbarkstrom
  2014-11-25 14:04                                           ` brbarkstrom
  1 sibling, 0 replies; 42+ messages in thread
From: brbarkstrom @ 2014-11-24 21:34 UTC (permalink / raw)


On Monday, November 24, 2014 4:03:25 PM UTC-5, Dmitry A. Kazakov wrote:
> On Mon, 24 Nov 2014 12:30:53 -0800 (PST), brbarkstrom wrote:
> 
> > If you want to get time standard information, you can start with the
> > very short background piece from Wikipedia:
> > 
> > http://en.wikipedia.org/wiki/Standard_time_and_frequency_signal_service
> 
> The earth radius is Re = 6_371_000 m. Pi * Re / c, where c is the speed of
> light, 299_792_458 m/s, gives a rough estimate of the delay the signal
> takes traveling from the US to the EU, ignoring refraction, interference,
> amplifiers, encoders/decoders etc. This is a catastrophic 67 ms. It could
> be improved by statistical processing to, maybe, 10 ms or so. Now compare
> that with the resolution of a typical real-time clock, which is >3 ns!
> 
> Add here the times required to sample the signal and to pass it through
> the system layers, and you will understand how poor the thing is for any
> time measurement (except maybe for measuring continental drift (:-)).
> 
> Fortunately, neither global time signals nor NTP is needed for time
> measurements, clock drift included.
> 
> -- 
> Regards,
> Dmitry A. Kazakov
> http://www.dmitry-kazakov.de

Usually we don't have to worry about things travelling at 7 km/sec (typical
Low Earth Orbit satellite ground-track speeds) or 7 m/ms.  Maybe that's
of some comfort.  Of course if Google has to map stop lights at 1 cm resolution,
maybe they need ns time resolution so they can stop their driverless car
for a stop sign. (:))-

Bruce B.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-24 21:03                                         ` Dmitry A. Kazakov
  2014-11-24 21:34                                           ` brbarkstrom
@ 2014-11-25 14:04                                           ` brbarkstrom
  2014-11-25 18:16                                             ` Dennis Lee Bieber
  1 sibling, 1 reply; 42+ messages in thread
From: brbarkstrom @ 2014-11-25 14:04 UTC (permalink / raw)


On Monday, November 24, 2014 4:03:25 PM UTC-5, Dmitry A. Kazakov wrote:
> On Mon, 24 Nov 2014 12:30:53 -0800 (PST), brbarkstrom wrote:
> 
> > If you want to get time standard information, you can start with the
> > very short background piece from Wikipedia:
> > 
> > http://en.wikipedia.org/wiki/Standard_time_and_frequency_signal_service
> 
> The earth radius is Re = 6_371_000 m. Pi * Re / c, where c is the speed of
> light, 299_792_458 m/s, gives a rough estimate of the delay the signal
> takes traveling from the US to the EU, ignoring refraction, interference,
> amplifiers, encoders/decoders etc. This is a catastrophic 67 ms. It could
> be improved by statistical processing to, maybe, 10 ms or so. Now compare
> that with the resolution of a typical real-time clock, which is >3 ns!
> 
> Add here the times required to sample the signal and to pass it through
> the system layers, and you will understand how poor the thing is for any
> time measurement (except maybe for measuring continental drift (:-)).
> 
> Fortunately, neither global time signals nor NTP is needed for time
> measurements, clock drift included.
> 
> -- 
> Regards,
> Dmitry A. Kazakov

The third reference in my previous post mentions software NIST provides
that gives time signals from NIST in several formats.  One format is the
Network Time Protocol (RFC-1305), where "The NIST servers listen for an
NTP request on port 123, and respond by sending a udp/ip data packet in
the NTP format. The data packet includes a 64-bit timestamp containing
the time in UTC seconds since January 1, 1900 with a resolution of
200 ps."  I think that should probably be sufficient for the 3 ns
accuracy desired in determining clock drift.

I suspect that it might also be possible to get time from a gps-equipped
smartphone.  Since the GPS satellites maintain atomic time and are carefully
cross-checking with ground stations, they are probably a potential source
of data for this problem.

More exotic solutions might be uncovered from a bit of further research.
For example, the astronomers doing Very Long Baseline Interferometry need
to do remote time synchronization of high accuracy.  I don't know the
methods they use, but maybe they have something that could be turned into
a useful tool.

Finally, after thinking about your response a bit, I think the average time
delay between a WWV station in the US and a receiver in the EU would be
fairly constant -- except for variations due to changes in the index of
refraction and reflected signals bounced off the ionosphere.  The constant
part of the delay becomes an offset in a linear regression data reduction.
Of course, this is a minor quibble with your comment.

Bruce B.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-25 14:04                                           ` brbarkstrom
@ 2014-11-25 18:16                                             ` Dennis Lee Bieber
  2014-11-25 20:50                                               ` brbarkstrom
  0 siblings, 1 reply; 42+ messages in thread
From: Dennis Lee Bieber @ 2014-11-25 18:16 UTC (permalink / raw)


On Tue, 25 Nov 2014 06:04:28 -0800 (PST), brbarkstrom@gmail.com declaimed
the following:

>
>The third reference in my previous post mentions software NIST provides
>that gives time signals from NIST in several formats.  One format is the
>Network Time Protocol (RFC-1305), where "The NIST servers listen for an
>NTP request on port 123, and respond by sending a udp/ip data packet in
>the NTP format. The data packet includes a 64-bit timestamp containing
>the time in UTC seconds since January 1, 1900 with a resolution of
>200 ps."  I think that should probably be sufficient for the 3 ns
>accuracy desired in determining clock drift.
>
	Note that part of the NTP protocol (or receiving computer
implementations) also incorporates lots of stuff to determine correction
factors for the receiving computer and the latencies in the network.

	As a result, it is not as precise as you may want it to be.

http://en.wikipedia.org/wiki/Network_Time_Protocol
"""
NTP is intended to synchronize all participating computers to within a few
milliseconds of Coordinated Universal Time 
"""
and
"""
NTP can usually maintain time to within tens of milliseconds over the
public Internet, and can achieve better than one millisecond accuracy in
local area networks under ideal conditions.
"""

	Note that: milliseconds

NTP is used to synchronize wall-clock time between computers by bouncing
packets between them, but does not provide a fixed/stable clock signal
itself.
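
(For reference -- my addition, summarizing the NTP specification, not
this thread: with client send/receive times t0 and t3, and server
receive/send times t1 and t2, the client computes

    offset = ((t1 - t0) + (t2 - t3)) / 2
    delay  = (t3 - t0) - (t2 - t1)

The offset formula assumes the outbound and return paths are symmetric;
any asymmetry goes straight into the offset error, which is one reason
the public-Internet figure is milliseconds rather than microseconds.)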


	
>I suspect that it might also be possible to get time from a gps-equipped
>smartphone.  Since the GPS satellites maintain atomic time and are carefully
>cross-checking with ground stations, they are probably a potential source
>of data for this problem.
>
	Yes, but ...

	The time at the receiver is adjusted by fitting the delays from the
satellites... One reason you need four satellites for a decent fix... you
have to fit a local time along with distances from the satellites to
determine location. They are a source for standard time these days...
Consumer GPS and phones likely have a simple quartz clock for normal
time-keeping that gets updated whenever a GPS fix is performed.

	For a computer lab, a standard time base is something like this
http://www.arbiter.com/catalog/product/model-1084a-b-c-gps-satellite-precision-time-clock-%2840ns%29.php
but again it is only meant to synchronize the wall clock time of disparate
computers [the equivalent of setting your watch while listening to a time
announcement on a radio]. You have to move up to
http://www.arbiter.com/catalog/product/model-1083b.php to get standardized
frequency outputs which can be used for clock differencing.

>More exotic solutions might be uncovered from a bit of further research.
>For example, the astronomers doing Very Long Baseline Interferometry need
>to do remote time synchronization of high accuracy.  I don't know the
>methods they use, but maybe they have something that could be turned into
>a useful tool.
>
	Probably boxes like the above 1083b or 1084a -- depending upon whether
they need a civil time-stamp or a reference frequency (you'd need the
latter to calibrate a receiver, for example -- and wouldn't be using NTP
with its latencies; rather you'd be using a distribution amplifier and lots
of carefully measured coax so that the signal gets delivered to all end
points at the same time).

>Finally, after thinking about your response a bit, I think the average time
>delay between a WWV station in the US and a receiver in the EU would be
>fairly constant -- except for variations due to changes in the index of
>refraction and reflected signals bounced off the ionosphere.  The constant
>part of the delay becomes an offset in a linear regression data reduction.
>Of course, this is a minor quibble with your comment.
>
	If you are in the EU, you shouldn't be using WWVB (WWV is an AM
voice/tick signal [though there is a BCD subcarrier]; WWVB is a digital
signal for automated setting of clocks). The UK has MSF on the same 60kHz
as WWVB, and Japan has JJY. {I suspect those are the stations my watch
handles as all three are on the same frequency and just need to decode the
strongest signal -- the Citizen Skyhawk will identify which US/EU/JP was
used on the last synchronization}.

http://en.wikipedia.org/wiki/Time_from_NPL
http://en.wikipedia.org/wiki/WWVB


-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: how to analyze clock drift
  2014-11-25 18:16                                             ` Dennis Lee Bieber
@ 2014-11-25 20:50                                               ` brbarkstrom
  0 siblings, 0 replies; 42+ messages in thread
From: brbarkstrom @ 2014-11-25 20:50 UTC (permalink / raw)


On Tuesday, November 25, 2014 1:15:56 PM UTC-5, Dennis Lee Bieber wrote:
> On Tue, 25 Nov 2014 06:04:28 -0800 (PST), brbarkstrom declaimed
> the following:
> 
> >
> >The third reference in my previous post mentions software NIST provides
> >that gives time signals from NIST in several formats.  One format is the
> >Network Time Protocol (RFC-1305), where "The NIST servers listen for an
> >NTP request on port 123, and respond by sending a udp/ip data packet in
> >the NTP format. The data packet includes a 64-bit timestamp containing
> >the time in UTC seconds since January 1, 1900 with a resolution of
> >200 ps."  I think that should probably be sufficient for the 3 ns
> >accuracy desired in determining clock drift.
> >
> 	Note that part of the NTP protocol (or receiving computer
> implementations) also incorporates lots of stuff to determine correction
> factors for the receiving computer and the latencies in the network.
> 
> 	As a result, it is not as precise as you may want it to be.
> 
> http://en.wikipedia.org/wiki/Network_Time_Protocol
> """
> NTP is intended to synchronize all participating computers to within a few
> milliseconds of Coordinated Universal Time 
> """
> and
> """
> NTP can usually maintain time to within tens of milliseconds over the
> public Internet, and can achieve better than one millisecond accuracy in
> local area networks under ideal conditions.
> """
> 
> 	Note that: milliseconds
> 
> NTP is used to synchronize wall-clock time between computers by bouncing
> packets between them, but does not provide a fixed/stable clock signal
> itself.
> 
> 
> 	
> >I suspect that it might also be possible to get time from a gps-equipped
> >smartphone.  Since the GPS satellites maintain atomic time and are carefully
> >cross-checking with ground stations, they are probably a potential source
> >of data for this problem.
> >
> 	Yes, but ...
> 
> 	The time at the receiver is adjusted by fitting the delays from the
> satellites... One reason you need four satellites for a decent fix... you
> have to fit a local time along with distances from the satellites to
> determine location. They are a source for standard time these days...
> Consumer GPS and phones likely have a simple quartz clock for normal
> time-keeping that gets updated whenever a GPS fix is performed.
> 
> 	For a computer lab, a standard time base is something like this
> http://www.arbiter.com/catalog/product/model-1084a-b-c-gps-satellite-precision-time-clock-%2840ns%29.php
> but again it is only meant to synchronize the wall clock time of disparate
> computers [the equivalent of setting your watch while listening to a time
> announcement on a radio]. You have to move up to
> http://www.arbiter.com/catalog/product/model-1083b.php to get standardized
> frequency outputs which can be used for clock differencing.
> 
> >More exotic solutions might be uncovered from a bit of further research.
> >For example, the astronomers doing Very Long Baseline Interferometry need
> >to do remote time synchronization of high accuracy.  I don't know the
> >methods they use, but maybe they have something that could be turned into
> >a useful tool.
> >
> 	Probably boxes like the above 1083b or 1084a -- depending upon whether
> they need a civil time-stamp or a reference frequency (you'd need the
> latter to calibrate a receiver, for example -- and wouldn't be using NTP
> with its latencies; rather you'd be using a distribution amplifier and lots
> of carefully measured coax so that the signal gets delivered to all end
> points at the same time).
> 
> >Finally, after thinking about your response a bit, I think the average time
> >delay between a WWV station in the US and a receiver in the EU would be
> >fairly constant -- except for variations due to changes in the index of
> >refraction and reflected signals bounced off the ionosphere.  The constant
> >part of the delay becomes an offset in a linear regression data reduction.
> >Of course, this is a minor quibble with your comment.
> >
> 	If you are in the EU, you shouldn't be using WWVB (WWV is an AM
> voice/tick signal [though there is a BCD subcarrier]; WWVB is a digital
> signal for automated setting of clocks). The UK has MSF on the same 60kHz
> as WWVB, and Japan has JJY. {I suspect those are the stations my watch
> handles as all three are on the same frequency and just need to decode the
> strongest signal -- the Citizen Skyhawk will identify which US/EU/JP was
> used on the last synchronization}.
> 
> http://en.wikipedia.org/wiki/Time_from_NPL
> http://en.wikipedia.org/wiki/WWVB
> 
> 
> -- 
> 	Wulfraed                 Dennis Lee Bieber         AF6VN

Thanks for the information and the cautions.  Hopefully
these items will be helpful to the original poster of the
problem.

Bruce B.


^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2014-11-25 20:50 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-11-18 22:12 how to analyze clock drift Emanuel Berg
2014-11-19  1:41 ` tmoran
2014-11-19  2:10   ` Emanuel Berg
2014-11-19 10:30     ` Jacob Sparre Andersen
2014-11-19 22:15       ` Emanuel Berg
2014-11-20 16:27         ` Stephen Leake
2014-11-20  1:10       ` Emanuel Berg
2014-11-20 14:11         ` Dennis Lee Bieber
2014-11-19 13:08   ` Brian Drummond
2014-11-19  2:10 ` Simon Clubley
2014-11-19  2:37   ` Emanuel Berg
2014-11-19  2:28 ` Dennis Lee Bieber
2014-11-19  2:44   ` tmoran
2014-11-19  2:51     ` Emanuel Berg
2014-11-19  9:01       ` Dmitry A. Kazakov
2014-11-19 22:12         ` Emanuel Berg
2014-11-20  9:42           ` Dmitry A. Kazakov
2014-11-20 20:41             ` Emanuel Berg
2014-11-20 21:27               ` Dmitry A. Kazakov
2014-11-20 21:54                 ` Emanuel Berg
2014-11-20 21:57                   ` Emanuel Berg
2014-11-21  2:27                   ` Dennis Lee Bieber
2014-11-21  3:02                     ` Emanuel Berg
2014-11-21 16:49                       ` Dennis Lee Bieber
2014-11-21 21:06                         ` Emanuel Berg
2014-11-22 18:18                           ` Dennis Lee Bieber
2014-11-23 20:15                             ` Emanuel Berg
2014-11-24  1:15                               ` Dennis Lee Bieber
2014-11-24  1:34                                 ` Emanuel Berg
2014-11-24  9:22                                   ` Jacob Sparre Andersen
2014-11-24 17:30                                   ` Dennis Lee Bieber
2014-11-24  8:44                                 ` Dmitry A. Kazakov
2014-11-24 17:24                                   ` Dennis Lee Bieber
2014-11-24 18:28                                     ` Dmitry A. Kazakov
2014-11-24 20:30                                       ` brbarkstrom
2014-11-24 21:03                                         ` Dmitry A. Kazakov
2014-11-24 21:34                                           ` brbarkstrom
2014-11-25 14:04                                           ` brbarkstrom
2014-11-25 18:16                                             ` Dennis Lee Bieber
2014-11-25 20:50                                               ` brbarkstrom
2014-11-21 21:15                         ` Emanuel Berg
2014-11-21 22:31                           ` Emanuel Berg

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox