comp.lang.ada
* silly ravenscar question
@ 2015-02-24  9:07 jan.de.kruyf
  2015-02-24 10:29 ` Dmitry A. Kazakov
                   ` (4 more replies)
  0 siblings, 5 replies; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-24  9:07 UTC (permalink / raw)


Hallo,

I have a dumb question to which I can't find the answer:

Doing a variable-length linked list on GNAT for ARM, I found I can do 

object_access := new object;

but I cannot free this object. I understand that dynamic memory allocation is something bad (tm), under certain circumstances. 
But since 'new' seems to be allowed, does it work properly at all times?
I mean I could link any freed objects into a free list for later reuse. That does not cost much, probably less than allocating by way of 'new'.

But this would be a kind of dynamic memory allocation that settles out at the maximum amount ever needed. Would that still be 'Ravenscar-pure'?
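To make the idea concrete, something like this (a rough sketch only, type names invented):

   --  Freed nodes go onto a free list and are reused before 'new' is
   --  touched again, so the heap only ever grows to the peak demand.
   type Node;
   type Node_Access is access Node;
   type Node is record
      Next    : Node_Access;
      Payload : Integer;   --  whatever the list actually carries
   end record;

   Free_List : Node_Access := null;

   function Allocate return Node_Access is
      N : Node_Access;
   begin
      if Free_List /= null then
         N         := Free_List;      --  reuse a previously released node
         Free_List := N.Next;
      else
         N := new Node;               --  only happens until the peak is reached
      end if;
      N.Next := null;
      return N;
   end Allocate;

   procedure Release (N : in out Node_Access) is
   begin
      N.Next    := Free_List;         --  no Unchecked_Deallocation needed
      Free_List := N;
      N         := null;
   end Release;

(In a real program with several tasks this would of course have to sit behind a protected object.)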

--------
2nd silly question:
What did the gods do to libre.adacore.com?

For a while now it has been very rare that I get access, and when I do, it does not handle a download request properly. Is it just my internet connection, or is it systemic? Libre2 does work, by the way, but for lots of things it links back to libre or adacore.

-----------
I am busy doing a very light Ethernet fieldbus for a few Olimex stm34 boards and a PC,
since I did not see how I could get any speed out of openPowerlink, and EtherCAT needs more involved hardware (although it is very fast).

Is there any interest in the community? I am willing to put it under GPL.


Cheers, and thanks for your attention.

jan.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24  9:07 silly ravenscar question jan.de.kruyf
@ 2015-02-24 10:29 ` Dmitry A. Kazakov
  2015-02-24 11:11   ` jan.de.kruyf
  2015-02-24 11:02 ` Jacob Sparre Andersen
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2015-02-24 10:29 UTC (permalink / raw)


On Tue, 24 Feb 2015 01:07:31 -0800 (PST), jan.de.kruyf@gmail.com wrote:

> I am busy doing a very light ethernet fieldbus for a few Olimex stm34 boards and a pc.

You mean the boards will be a sort of terminal. And the master will be?
Which cycle times are you talking about: 1ms, 0.1ms, 0.01ms?

> and EtherCAT needs more involved hardware (although it is very fast)

EtherCAT does not need special hardware. If you mean Beckhoff's piggyback,
you don't have to use it. You can do all it does with a plain board and raw
Ethernet. Admittedly it would be a lot of work and EtherCAT slave protocol
(all upper levels) is a horrific mess.

> Is there any interest in the community? I am willing to put it under GPL.

If that would not hinder selling the terminals to the end customers without
exposing the sources...

All existing fieldbuses are extremely poorly designed, in my view. But I
doubt it is a job for one man.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24  9:07 silly ravenscar question jan.de.kruyf
  2015-02-24 10:29 ` Dmitry A. Kazakov
@ 2015-02-24 11:02 ` Jacob Sparre Andersen
  2015-02-24 11:23   ` jan.de.kruyf
  2015-02-24 11:22 ` slos
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 39+ messages in thread
From: Jacob Sparre Andersen @ 2015-02-24 11:02 UTC (permalink / raw)


jan.de.kruyf@gmail.com writes:

> object_access := new object;
>
> but I can not free this object.

Look for "Unchecked_Deallocation" in the LRM.

Greetings,

Jacob
-- 
"... but I don't think even Tucker can do scheduling with no cost."
                                                  -- Randy Brukardt


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 10:29 ` Dmitry A. Kazakov
@ 2015-02-24 11:11   ` jan.de.kruyf
  2015-02-24 13:38     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-24 11:11 UTC (permalink / raw)


On Tuesday, February 24, 2015 at 12:29:02 PM UTC+2, Dmitry A. Kazakov wrote:
> On Tue, 24 Feb 2015 01:07:31 -0800 (PST), jan.de.kruyf wrote:
> 
> > I am busy doing a very light ethernet fieldbus for a few Olimex stm34 boards and a pc.
> 
> You mean the boards will be sort of terminals. And the master will be?
> Which cycles you are talking about, 1ms, 0.1ms, 0.01ms?

A PC; I haven't dug into that part yet. I was going to use the Proview SCADA/PLC with Powerlink, but there is a sizable custom section in my code, so it is 'to be decided' at the moment.
I am aiming for a 1 msec cycle time, but if it comes out at 2 I will also be happy. Remember the STM runs at only 168 MHz. 


> 
> EtherCAT does not need special hardware. If you mean Beckhoff's piggyback,
> you don't have to use it. You can do all it does with a plain board and raw
> Ethernet. Admittedly it would be a lot of work and EtherCAT slave protocol
> (all upper levels) is a horrific mess.
> 
Yes, I know, but then the timing advantage is virtually gone. And you are very right about the mess. I sorted out a Beckhoff installation with comms issues a few years back. I was on the phone to Germany daily, but I got sent around in circles, from this supplier to that one to that one . . .
So in the end -I- sorted it. Help from any of them was well below standard. 

> > Is there any interest in the community? I am willing to put it under GPL.
> 
> If that would not hinder selling the terminals to the end customers without
> exposing the sources...

You raise an interesting point. Do I give things away then for everybody to get rich with? . . . 

> 
> All existing fieldbuses are extremely poor designed, in my view. But I
> doubt it would be a work for one man.

[Concepts which have proved useful in ordering things easily attain such authority over us that we forget their earthly origin and accept them as unalterable givens.]

from: http://en.wikiquote.org/wiki/Albert_Einstein

So it took me a week of sweating to get to the depth of that truth, but now it looks rather doable.

So, did you work at all with the GNAT Ravenscar package for STM, Dmitry?


cheers.

j.



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24  9:07 silly ravenscar question jan.de.kruyf
  2015-02-24 10:29 ` Dmitry A. Kazakov
  2015-02-24 11:02 ` Jacob Sparre Andersen
@ 2015-02-24 11:22 ` slos
  2015-02-24 12:16   ` jan.de.kruyf
  2015-02-24 11:24 ` J-P. Rosen
  2015-02-24 13:58 ` Simon Wright
  4 siblings, 1 reply; 39+ messages in thread
From: slos @ 2015-02-24 11:22 UTC (permalink / raw)


On Tuesday, February 24, 2015 at 10:07:32 AM UTC+1, jan.de...@gmail.com wrote:
> Hallo,
> 
> I have a dumb question for which I don't find the answer:
> 
> Doing a variable length linked list on Gnat for Arm I found I can do 
> 
> object_access := new object;
> 
> but I can not free this object. I follow that dynamic memory allocation is something bad (tm), under certain circumstances. 
> But now since 'new' seems to be allowed, does it work properly at all times?
> I mean I could link any freed objects into a free list for later use. That does not cost much, probably less than allocating by way of 'new'.
> 
> But this way would be a type of dynamic memory allocation that then settles out at the maximum needed amount. Would that be 'Ravenscar-Pure' then?
> 
> --------
> 2nd silly question:
> What did the gods do to libre.adacore.com?
> 
> For a while now it is very rare that I get access, and when I do it does not handle a download request properly. Is it just my internet feed? or is it systemic? Libre2 does work by the way, but for lots of things it links to libre or adacore.
> 
> -----------
> I am busy doing a very light ethernet fieldbus for a few Olimex stm34 boards and a pc.
> Since I did not see I could get any speed out of openPowerlink and EtherCAT needs more involved hardware (although it is very fast)
> 
> Is there any interest in the community? I am willing to put it under GPL.
> 
> 
> Cheers, and thanks for your attention.
> 
> jan.

Hello,

Could that be an inspiration?
http://marte.unican.es/projects.htm#rtep

What about this one?
https://xenomai.org/rtnet/

Best Regards,
Stéphane
http://slo-ist.fr/ada4autom

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 11:02 ` Jacob Sparre Andersen
@ 2015-02-24 11:23   ` jan.de.kruyf
  2015-02-24 13:43     ` Bob Duff
  2015-02-24 15:30     ` Brad Moore
  0 siblings, 2 replies; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-24 11:23 UTC (permalink / raw)



> 
> > object_access := new object;
> >
> > but I can not free this object.
> 
> Look for "Unchecked_Deallocation" in the LRM.

but there is no "NO_Unchecked_Deallocation" in the Ravenscar profile (D.13.1)

And yes, thanks Sparre, for pointing me to the LRM; it's open on my desk but I had not looked there yet (blushes).


Cheers,

j.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24  9:07 silly ravenscar question jan.de.kruyf
                   ` (2 preceding siblings ...)
  2015-02-24 11:22 ` slos
@ 2015-02-24 11:24 ` J-P. Rosen
  2015-02-24 12:10   ` jan.de.kruyf
  2015-02-24 13:58 ` Simon Wright
  4 siblings, 1 reply; 39+ messages in thread
From: J-P. Rosen @ 2015-02-24 11:24 UTC (permalink / raw)


On 24/02/2015 10:07, jan.de.kruyf@gmail.com wrote:
> Doing a variable length linked list on Gnat for Arm I found I can do
> 
> object_access := new object;
> 
> but I can not free this object. I follow that dynamic memory
> allocation is something bad (tm), under certain circumstances.

1) Ravenscar is purely about multi-tasking, and says nothing about the
sequential aspects of the language. Allocators (aka "new") are allowed
(but may be disallowed in safety critical contexts by other rules).

2) Did you look into Ada.Unchecked_Deallocation? It is a generic that
you instantiate to get the equivalent of "free".
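For instance (a sketch with made-up names; the generic formals are Object and Name):

   with Ada.Unchecked_Deallocation;

   procedure Demo is
      type Object is record
         Value : Integer := 0;
      end record;
      type Object_Access is access Object;

      procedure Free is
        new Ada.Unchecked_Deallocation (Object => Object,
                                        Name   => Object_Access);

      Ptr : Object_Access := new Object;
   begin
      Free (Ptr);   --  releases the storage and sets Ptr to null
   end Demo;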
-- 
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
Tel: +33 1 45 29 21 52, Fax: +33 1 45 29 25 00
http://www.adalog.fr

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 11:24 ` J-P. Rosen
@ 2015-02-24 12:10   ` jan.de.kruyf
  0 siblings, 0 replies; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-24 12:10 UTC (permalink / raw)



> 1) Ravenscar is purely about multi-tasking, and says nothing about the
> sequential aspects of the language. Allocators (aka "new") are allowed
> (but may be disallowed in safety critical contexts by other rules).
> 
> 2) Did you look into Ada.Unchecked_Deallocation? It is a generic that
> you instantiate to get the equivalent of "free".
> 

Well . . . , what brought me to this question is that in GNAT for ARM

this works:
     Job_Entry : Job_Entry_P_Type := new Job_Entry_Type;

but this combination does not:

   procedure Free is
      new Ada.Unchecked_Deallocation (Job_Entry_Type, Job_Entry_P_Type);
   .
   .
   Free (Job_Entry);

it complains about some '__gnat' routine that's missing.

-----------------

By the way, Mr. Rosen, do you still have the PDF of the original HOOD book available? I have meant to ask you for some time already. At the moment my diagram drawing is very understandable to me, but to anyone else it is peanut butter on hot toast. 
I would think that with a little bit of persuasion UMLet could make HOOD diagrams quite beautifully.

Thanks,

j.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 11:22 ` slos
@ 2015-02-24 12:16   ` jan.de.kruyf
  0 siblings, 0 replies; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-24 12:16 UTC (permalink / raw)


On Tuesday, February 24, 2015 at 1:22:43 PM UTC+2, slos wrote:

> 
> Hello,
> 
> Could that be an inspiration ?
> http://marte.unican.es/projects.htm#rtep
> 
> What about this one ?
> https://xenomai.org/rtnet/
> 

Thanks for pointing them out. But let's just see where we get by stripping away all the layers of ingenuity, because I definitely do not need them; they are only in the way of doing the job.

By the way, I am NOT saying I know better, I am just curious. And many years of experience have taught me that this approach does work.

Cheers,

j.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 11:11   ` jan.de.kruyf
@ 2015-02-24 13:38     ` Dmitry A. Kazakov
  2015-02-25  8:48       ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2015-02-24 13:38 UTC (permalink / raw)


On Tue, 24 Feb 2015 03:11:45 -0800 (PST), jan.de.kruyf@gmail.com wrote:

> On Tuesday, February 24, 2015 at 12:29:02 PM UTC+2, Dmitry A. Kazakov wrote:
>> On Tue, 24 Feb 2015 01:07:31 -0800 (PST), jan.de.kruyf wrote:
>> 
> I am aiming for 1 msec cycletime, but if it comes out at 2 I will also be
> happy. Remember STM runs only at 168 Mhz. 

It is OK for most applications, but might not be OK for the transport
layer. It depends on the protocol architecture. If you have it synchronous,
chained, and triggered the way EtherCAT is, then the latencies of the terminals
tend to accumulate and you need much tighter cycles to arrive at 1ms of
overall performance. If it is asynchronous, that solves a lot of issues,
but it becomes non-real-time.

>> EtherCAT does not need special hardware. If you mean Beckhoff's piggyback,
>> you don't have to use it. You can do all it does with a plain board and raw
>> Ethernet. Admittedly it would be a lot of work and EtherCAT slave protocol
>> (all upper levels) is a horrific mess.
>> 
> yes I know, but then the timing advantage is virtually gone. And you are
> very right about the mess.

It is a mess regardless. If I ran the circus I would throw away everything
above the PDU level.

> I sorted a Beckhoff installation with comms
> issues a few years back. I  was on the phone to Germany daily, but I got
> send around in circles. From this supplier to that one to that one. . . .
> So in the end -I- sorted it. Help from any of them was well below standard. 

Exactly the same here.

I have a strong suspicion that they do not really know how it works either.
It seems and feels as if different parts were designed by different people
who have all since gone...

>>> Is there any interest in the community? I am willing to put it under GPL.
>> 
>> If that would not hinder selling the terminals to the end customers without
>> exposing the sources...
> 
> You raise an interesting point. Do I give things away then for everybody
> to get rich with? . . . 

It depends on the target customers. Integrators of automation systems have
no interest in source code. But generally, I don't think one could sell
code anyway. You could sell a bundle hardware+software+service, with the
software part ignored on the balance sheet.

I understood your question as asking whether somebody would be interested in an open
platform for designing terminals. Yes, there are some terminals we could
develop and sell to our customers - special signal generators, incremental
decoders, oversampling ADCs (Beckhoff's have a systematic design flaw).

This one

   http://www.secureplugandwork.de/servlet/is/10291/

could make use of such a platform. Presently it is a BeagleBone.

> So, did you work at all with the gnat ravenscar package for STM, Dimitry?

No. Our system deploys full Ada 2005 (e.g. on a BeagleBone). I cannot
imagine an EtherCAT master or the middleware data distribution layer in
Ravenscar. It is beyond my feeble imagination...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 11:23   ` jan.de.kruyf
@ 2015-02-24 13:43     ` Bob Duff
  2015-02-25  9:07       ` jan.de.kruyf
  2015-02-24 15:30     ` Brad Moore
  1 sibling, 1 reply; 39+ messages in thread
From: Bob Duff @ 2015-02-24 13:43 UTC (permalink / raw)


jan.de.kruyf@gmail.com writes:

> but there is no "NO_Unchecked_Deallocation" in the Ravenscar profile (D.13.1)

?

There is no section D.13.1 in the RM.  Ravenscar is defined in D.13,
and there is nothing about No_Unchecked_Deallocation there.

- Bob

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24  9:07 silly ravenscar question jan.de.kruyf
                   ` (3 preceding siblings ...)
  2015-02-24 11:24 ` J-P. Rosen
@ 2015-02-24 13:58 ` Simon Wright
  4 siblings, 0 replies; 39+ messages in thread
From: Simon Wright @ 2015-02-24 13:58 UTC (permalink / raw)


jan.de.kruyf@gmail.com writes:

> Doing a variable length linked list on Gnat for Arm I found I can do 
>
> object_access := new object;
>
> but I can not free this object. I follow that dynamic memory
> allocation is something bad (tm), under certain circumstances.  But
> now since 'new' seems to be allowed, does it work properly at all
> times?  I mean I could link any freed objects into a free list for
> later use. That does not cost much, probably less than allocating by
> way of 'new'.

GNAT allocates & frees memory via System.Memory. The AdaCore Ravenscar
profile's version says

--  This is a simplified version of this package, for use with a configurable
--  run-time library that does not provide Ada tasking. It does not provide
--  any deallocation routine.

The spec doesn't contain a Free (the body contains a null version).

I haven't worked on such a system, but I understand that systems that
require restricted runtimes often either forbid dynamic allocation
altogether or insist that all allocations take place during startup.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 11:23   ` jan.de.kruyf
  2015-02-24 13:43     ` Bob Duff
@ 2015-02-24 15:30     ` Brad Moore
  2015-02-24 16:52       ` Simon Wright
  1 sibling, 1 reply; 39+ messages in thread
From: Brad Moore @ 2015-02-24 15:30 UTC (permalink / raw)


On 15-02-24 04:23 AM, jan.de.kruyf@gmail.com wrote:
>
>>
>>> object_access := new object;
>>>
>>> but I can not free this object.
>>
>> Look for "Unchecked_Deallocation" in the LRM.
>
> but there is no "NO_Unchecked_Deallocation" in the Ravenscar profile (D.13.1)

No_Unchecked_Deallocation is not associated with the Ravenscar profile. 
It is also obsolescent. If you want that restriction, you should use

No_Dependence => Ada.Unchecked_Deallocation in new code.

You might be thinking of the restriction No_Implicit_Heap_Allocations, 
which is part of the Ravenscar profile. That restricts the runtime from 
making allocations from the heap, but it does not restrict application 
code from using the heap to allocate objects.
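
For example, as configuration pragmas (e.g. in gnat.adc):

   pragma Profile (Ravenscar);
   pragma Restrictions (No_Dependence => Ada.Unchecked_Deallocation);
   --  and only if you really want to forbid allocators as well:
   pragma Restrictions (No_Allocators);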

Brad

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 15:30     ` Brad Moore
@ 2015-02-24 16:52       ` Simon Wright
  2015-02-25  3:01         ` Dennis Lee Bieber
  0 siblings, 1 reply; 39+ messages in thread
From: Simon Wright @ 2015-02-24 16:52 UTC (permalink / raw)


Brad Moore <brad.moore@shaw.ca> writes:

> You might be thinking of the restriction, No_Implicit_Heap_Allocations
> which is part of the Ravenscar Profile. That restricts the runtime
> from making allocations from the heap, but it does not restrict
> application code from using the heap to allocate objecs.

From a hobbyist point of view, the Ravenscar approach to _tasking_ makes
a lot of sense, because full Ada tasking is very hairy (the restricted
tasking in GNAT's Ravenscar is hairy enough).

But some of the other restrictions, like this one, are less appropriate
outside the critical area.

OK, so far the only user-visible effect I see is in packages like
Interfaces.C.Strings, where it's obvious to you and me that
New_Char_Array and New_String are going to allocate memory, but
No_Implicit_Heap_Allocations prevents 'new' (I used FreeRTOS's memory
management routines instead), and in Containers, where you're restricted
to the Bounded versions (no great problem there).
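
For example, a bounded vector keeps its storage inside the object itself, so
nothing touches the heap (a sketch):

   with Ada.Containers.Bounded_Vectors;

   procedure Bounded_Demo is
      package Int_Vectors is
        new Ada.Containers.Bounded_Vectors (Index_Type   => Positive,
                                            Element_Type => Integer);
      Readings : Int_Vectors.Vector (Capacity => 32);  --  storage reserved here
   begin
      Readings.Append (42);                            --  no allocation involved
   end Bounded_Demo;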

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 16:52       ` Simon Wright
@ 2015-02-25  3:01         ` Dennis Lee Bieber
  0 siblings, 0 replies; 39+ messages in thread
From: Dennis Lee Bieber @ 2015-02-25  3:01 UTC (permalink / raw)


On Tue, 24 Feb 2015 16:52:02 +0000, Simon Wright <simon@pushface.org>
declaimed the following:

>
>From a hobbyist point of view, the Ravenscar approach to _tasking_ makes
>a lot of sense, because full Ada tasking is very hairy (the restricted
>tasking in GNAT's Ravenscar is hairy enough).
>

	My brain must work differently... "From a hobbyist point of view" I
find Ravenscar to be "hairy"... As a hobbyist, I'm not usually thinking of
hard real-time response of an avionics system (NO dynamic memory
allocations after system initialization, for example). Trying to manage the
restrictions on entries of Ravenscar is something I can't easily conceive
of. Heck -- the non-Ravenscar restriction of no-blocking-calls inside a
protected object is a bit of a problem for me.

	But then, I also seem to be on the "threading is easier than async
dispatching" side in the Python group.

-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 13:38     ` Dmitry A. Kazakov
@ 2015-02-25  8:48       ` jan.de.kruyf
  2015-02-25 10:46         ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-25  8:48 UTC (permalink / raw)


On Tuesday, February 24, 2015 at 3:38:17 PM UTC+2, Dmitry A. Kazakov wrote:

> 
> It is OK for most applications, but might be not OK for the transport
> layer. It depends on the protocol architecture. If you have it synchronous,
> chained, triggered in a way EtherCAT is, then latencies of the terminals
> tend to accumulate and you need much tighter cycles to come at 1ms of
> overall performance. If it is asynchronous, that solves a lot of issues,
> but becomes non-real-time.
> 

Let me mess around for a bit before I shoot off my mouth . . .

> It is a mess regardless. If I ran the circus I would throw everything above
> the PDU level.
> 

It's a code museum, something like Windows or Linux from i386 until today: nothing can be thrown away, but everybody had clever ideas along the way.

> 
> I understood your question as if somebody would be interested in an open
> platform to design terminals. 

I was just in a sharing mood, since it helps to find the bugs and get more ideas. The customer pays; otherwise there is no specific business plan as yet.

> Yes, there are some of terminals we could
> develop and sell to our customers - special signal generators, incremental
> decoders, oversampling ADC (Beckhoff's have systematic design flaw).
> 
> This one
> 
>    http://www.secureplugandwork.de/servlet/is/10291/
> 
> could take use of such a platform. Presently it is BeagleBone.
> 

I read through it; I would be very interested in any of that. But I am a prophet of a different age: "strip it to the bone and see what we really need", otherwise no realtime stuff will ever be built.
So if you could bear with all my questioning of accepted wisdom, then yes.

As a bit of background: many years ago I did a multi-threaded system on one of the early PIC controllers. There was the soft-stepping of 2 stepper motors, the display and keyboard (in software of course), path calculation, calculating the s-curve for accel/decel and what not. All in assembly; I did the hardware and the software.
All on a 16 MHz clock.

There have been many more interesting and challenging projects since, of course; it is just that back then I also did a reasonable-size project alone, and well beyond the accepted wisdom of the day. I remember that already then you saw those big EPROMs in embedded boxes full of bloatware.


> No. Our system deploys full Ada 2005 (e.g. on a BeagleBone). I cannot
> imagine EtherCAT master or the middleware data distribution layer in
> Ravenscar. It is beyond my feeble imagination...
> 

I think I have a plan for my immediate problem, but if you could let me see some spec with data volumes and timing constraints, I would be in a better position to comment.
What I would say, though, is that Ada has quite a few rather expensive constructs (timewise) that can be done more simply with little overhead. When stepping through a new piece of code I have a habit of keeping the assembly window open. It is quite an education.

cheers

jan.




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-24 13:43     ` Bob Duff
@ 2015-02-25  9:07       ` jan.de.kruyf
  2015-02-25 17:50         ` Simon Wright
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-25  9:07 UTC (permalink / raw)


On Tuesday, February 24, 2015 at 3:43:18 PM UTC+2, Bob Duff wrote:

> 
> > but there is no "NO_Unchecked_Deallocation" in the Ravenscar profile (D.13.1)
> 
> ?
> 
> There is no section D.13.1 in the RM.  Ravenscar is defined in D.13,
> and there is nothing about No_Unchecked_Deallocation there.
> 
> - Bob

my silliness. I opened RM2005 by accident . . .

thanks for the tip.

j.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-25  8:48       ` jan.de.kruyf
@ 2015-02-25 10:46         ` Dmitry A. Kazakov
  2015-02-25 17:35           ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2015-02-25 10:46 UTC (permalink / raw)


On Wed, 25 Feb 2015 00:48:42 -0800 (PST), jan.de.kruyf@gmail.com wrote:

> On Tuesday, February 24, 2015 at 3:38:17 PM UTC+2, Dmitry A. Kazakov wrote:

>> No. Our system deploys full Ada 2005 (e.g. on a BeagleBone). I cannot
>> imagine EtherCAT master or the middleware data distribution layer in
>> Ravenscar. It is beyond my feeble imagination...
> 
> I think I have a plan for my immediate problem, but if you could let me
> see some spec with data volumes and timing restraints, I would be in a
> better position to comment.

The high end is 8 analogue channels at 100µs over the network, plus hundreds of
lower-speed channels. Data volume is not a big problem. E.g. we can sample
8 channels at 100µs and distribute them over the network without data loss,
doing 10ms oversampling.

> What I would say though is that Ada has quite a few rather expensive
> constructs (timewise) that can be done simpler with little overhead. When
> stepping through a new piece of code I have a habit of keeping the
> assembly window open. It is quite an education.

Well, if you have a very small application you could do that. But stepping
through a TCP or EtherCAT stack is barely possible. So basically we just rely
on statistical values.

The optimization of Ada code we do frequently is flattening smart pointers:
we keep a plain access type together with the controlled smart pointer in
critical parts. I suppose that maintaining the finalization lists of controlled
objects takes much time as well. Probably the S in T'Class test could be
expensive too. And we try to minimize the number of individual protected
actions as much as possible, which may not be an issue for a single-core
board with non-preemptive scheduling.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-25 10:46         ` Dmitry A. Kazakov
@ 2015-02-25 17:35           ` jan.de.kruyf
  2015-02-25 17:55             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-25 17:35 UTC (permalink / raw)


On Wednesday, February 25, 2015 at 12:45:53 PM UTC+2, Dmitry A. Kazakov wrote:

> The high end is 8 analogue channels 100盜 over the network + hundreds of
> lower speed channels. Data volumes is not a big problem. E.g. we can sample
> 8 100盜 channels and distribute them over the network without data loss,
> doing 10ms oversampling.
> 

What is '100盜 ' ?  '0xc79b9c' was nowhere to be found.
What do you mean by 10 msec oversampling? Average in the terminal and send only every 10 ms?

In any case, to focus the thinking, I did some quick sums:
A packet with a payload of 50 bytes should, according to theory, be able to do the roundtrip (100 m cable, 1 switchbox, no packet contention) in 25 usecs through an stm34 board, with time to spare. That includes DMA and the offloading and onloading of data.
So that makes 40 packets per msec. Leave some space for sync frames, ARP frames and data loss, and 20 roundtrip packets/msec should be doable, provided the jitter is strictly controlled. 
An individual terminal has all the time in the world to handle the data between comm frames, but I bet that, depending on what you want out of the system, the PC might feel the strain at that rate (and the RT scheme in the PC must be up to scratch).
So the bottleneck, according to me, is in the PC in most cases.
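
For what it is worth, the wire-time part of that sum, rough numbers only:

   --  Back-of-envelope: time on the wire for one 100 Mbit/s Ethernet
   --  frame carrying Payload data bytes (no switch, no software).
   function Frame_Time_Us (Payload : Positive) return Float is
      Header_And_FCS : constant := 18;         --  MAC header + CRC
      Min_Frame      : constant := 64;
      Preamble_IFG   : constant := 8 + 12;     --  preamble + inter-frame gap
      Octets         : constant Positive :=
        Positive'Max (Min_Frame, Payload + Header_And_FCS) + Preamble_IFG;
   begin
      return Float (Octets) * 8.0 / 100.0;     --  100 Mbit/s = 100 bits per usec
   end Frame_Time_Us;
   --  Frame_Time_Us (50) is about 7 usec each way; two directions plus a
   --  switch hop of, say, 6 to 15 usec lands in the 20-25 usec ballpark above.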

In any case, according to your specs we would need some kind of multiplexing scheme. I did not need that, since the project is too small; but it is good in this case to do a little bit of thinking up front, before too much code is written.
 


> . . . which may be not an issue for single-core
> board with non-preemptive scheduling.

That's how we did it before Noah's flood, and boy, did it run fast.

cheers,

j.

 

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-25  9:07       ` jan.de.kruyf
@ 2015-02-25 17:50         ` Simon Wright
  2015-02-26  7:35           ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Simon Wright @ 2015-02-25 17:50 UTC (permalink / raw)


jan.de.kruyf@gmail.com writes:

> On Tuesday, February 24, 2015 at 3:43:18 PM UTC+2, Bob Duff wrote:
>
>> 
>> > but there is no "NO_Unchecked_Deallocation" in the Ravenscar
>> profile (D.13.1)
>> 
>> ?
>> 
>> There is no section D.13.1 in the RM.  Ravenscar is defined in D.13,
>> and there is nothing about No_Unchecked_Deallocation there.
>> 
>> - Bob
>
> my silliness. I opened RM2005 by accident . . .

In fairness, you did say that there is *no* "No_Unchecked_Deallocation"!

> thanks for the tip.
>
> j.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-25 17:35           ` jan.de.kruyf
@ 2015-02-25 17:55             ` Dmitry A. Kazakov
  2015-02-26  8:48               ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2015-02-25 17:55 UTC (permalink / raw)


On Wed, 25 Feb 2015 09:35:26 -0800 (PST), jan.de.kruyf@gmail.com wrote:

> On Wednesday, February 25, 2015 at 12:45:53 PM UTC+2, Dmitry A. Kazakov wrote:
> 
>> The high end is 8 analogue channels 100盜 over the network + hundreds of
>> lower speed channels. Data volumes is not a big problem. E.g. we can sample
>> 8 100盜 channels and distribute them over the network without data loss,
>> doing 10ms oversampling.
> 
> What is '100盜 ' ?  '0xc79b9c' was nowhere to be found.

micro

> what do you mean to say by 10msec oversampling?

That means storing each batch of 100 values into the buffer and getting the
wave-form every 10ms.

Normally, high-speed channels are not used for control; that means we
don't need to react every 0.1ms, but we must catch each value, stamp it and
pass it further, e.g. to the data logger or to the software oscilloscope.
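
Schematically (a sketch only, not our actual code):

   type Sample_Block is array (1 .. 100) of Integer;   --  sample type assumed

   protected Oversampler is
      procedure Push (S : Integer);            --  called every 100 us by the sampler
      entry Take (Block : out Sample_Block);   --  called every 10 ms by the publisher
   private
      Buffer : Sample_Block;
      Index  : Natural := 0;
      Ready  : Boolean := False;
   end Oversampler;

   protected body Oversampler is
      procedure Push (S : Integer) is
      begin
         Index          := Index + 1;
         Buffer (Index) := S;
         if Index = Buffer'Last then
            Index := 0;
            Ready := True;                     --  a full 10 ms block is available
         end if;
      end Push;

      entry Take (Block : out Sample_Block) when Ready is
      begin
         Block := Buffer;
         Ready := False;
      end Take;
   end Oversampler;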

> in any case to focus the thinking, I did some quick sums:
> A packet with a payload of 50 bytes, according to the theory should be
> able to do the roundtrip (100 m cable, 1 switchbox, no packet contention)
> in 25 usecs through an stm34 board, with time to spare. that includes DMA
> and offloading and onloading of data.

> So that makes 40 packets per msec. Leave some space for syncframes, arp
> frames and data loss, then 20 roundtrip packets /msec should be doable,
> provided the jitter is strictly controlled. 
> An individual terminal has all the time in the world to handle the data
> between commframes, but I bet that, depending on what you want out of the
> system, the pc might feel the strain at that rate. (and the rt scheme must
> be up to scratch in the pc)

We managed 0.2ms with 4 channels without oversampling, transported over XCP
under VxWorks. XCP is a UDP-based protocol. The data were coalesced into
one frame.

> So the bottleneck according to me is in the pc  in most cases.

This is true.

> In any case so according to your specs we would need some kind of
> multiplexing scheme.

This is one case where you trade latency for throughput. In other cases
latencies are more important, e.g. for control and for time
synchronization. The latter is the weakest spot of all existing field
buses. Most of them do not have any, and others have garbage clock
synchronization, like EtherCAT's distributed clock.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-25 17:50         ` Simon Wright
@ 2015-02-26  7:35           ` jan.de.kruyf
  2015-02-26 14:57             ` Simon Wright
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-26  7:35 UTC (permalink / raw)


On Wednesday, February 25, 2015 at 7:50:17 PM UTC+2, Simon Wright wrote:

> 
> In fairness, you did say that there is *no* "No_Unchecked_Deallocation"!
> 

That's what I love about programming in Ada. It's full of sh*t, but ultimately it _is_ fair. (; 

Was it you who had read the GNAT spec for the STM34 cross-compiler? About memory allocation and all that? Because I did not find it.

I had a first look at the ZFP package. Do you know what would be missing from the language there?
Because the interrupt latencies in the Ravenscar implementation are NOT pleasant: AdaCore does long and involved things before we get to the handler.

cheers,
j.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-25 17:55             ` Dmitry A. Kazakov
@ 2015-02-26  8:48               ` jan.de.kruyf
  2015-02-26  9:47                 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-26  8:48 UTC (permalink / raw)


On Wednesday, February 25, 2015 at 7:55:59 PM UTC+2, Dmitry A. Kazakov wrote:


> 
> Normally, high-speed channels are not used for control, it means that we
> don't need to react each 0.1ms, but we must catch each value, stamp it and
> pass further, e.g. to the data logger or to the software oscilloscope.
> 
> > in any case to focus the thinking, I did some quick sums:
> > A packet with a payload of 50 bytes, according to the theory should be
> > able to do the roundtrip (100 m cable, 1 switchbox, no packet contention)
> > in 25 usecs through an stm34 board, with time to spare. that includes DMA
> > and offloading and onloading of data.
> 
> > So that makes 40 packets per msec. Leave some space for syncframes, arp
> > frames and data loss, then 20 roundtrip packets /msec should be doable,
> > provided the jitter is strictly controlled. 
> > An individual terminal has all the time in the world to handle the data
> > between commframes, but I bet that, depending on what you want out of the
> > system, the pc might feel the strain at that rate. (and the rt scheme must
> > be up to scratch in the pc)
> 
> We managed 0.2ms with 4 channels without oversampling, transported over XCP
> under VxWorks. XCP is a UDP-based protocol. The data were coalesced into
> one frame.
> 

What processor, what clock speed, what net topology?
But it is probably as good as it gets with prefab software. 
openPowerlink was reported to have a latency of up to 450 usecs from Sync to first response between 2 VIA Nehemiah processors running at 1GHz with Linux-RT. That is the main reason I dropped it. Of course modern boards are better, but I also have to deal with an STM running at 168MHz.


> 
> This is one case when you trade latencies for throughout. In other cases
> latencies are more important, e.g. for control and for time
> synchronization. The latter is the weakest spot of all existing field
> buses. Most of them do not have any, others have a garbage clock
> synchronization, like EtherCAT's distributed clock is.
> 

The STM implements the hardware for the Precision Time Protocol, IEEE 1588 PTP.
So it's up to me to implement the software properly. It promises precision in the sub-usec range. Personally I think it's quite doable, provided the latency from packet arrival through the software has little or no jitter.
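
The sums behind it are just the usual Sync / Delay_Req exchange; given the four timestamps (here as Ada.Real_Time.Time values, symmetric path assumed) that is all there is to it:

   --  T1 : Sync leaves the master        T2 : Sync arrives at the slave
   --  T3 : Delay_Req leaves the slave    T4 : Delay_Req arrives at the master
   Offset     : constant Time_Span := ((T2 - T1) - (T4 - T3)) / 2;
   Path_Delay : constant Time_Span := ((T2 - T1) + (T4 - T3)) / 2;
   --  The slave then corrects its clock (or its timestamps) by Offset.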

Otherwise I do not see great problems getting your data collection issue sorted out. But we might forgo CAN and the like until we really need it (i.e. at the interface with the data-digesting software).

The numbers I gave you are definitely squeezable; that was just a budget.
The PC side is another kettle of fish though. There is a software stack around that promises things like a 10 usec cycle time on RTAI, but there is no indication of the packet size or the net topology. 
But then there is still the digesting of the data. Do you plan to store a sample in memory for some time? Any hard-drive activity will most likely bug the cycle response, unless you have a dual-port plugin that does nothing but comms
and temp storage, until it can get DMA'ed to the main board.

OK, time for some real work.
cheers,

j.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-26  8:48               ` jan.de.kruyf
@ 2015-02-26  9:47                 ` Dmitry A. Kazakov
  2015-02-26 12:07                   ` jan.de.kruyf
  2015-02-26 19:09                   ` jan.de.kruyf
  0 siblings, 2 replies; 39+ messages in thread
From: Dmitry A. Kazakov @ 2015-02-26  9:47 UTC (permalink / raw)


On Thu, 26 Feb 2015 00:48:01 -0800 (PST), jan.de.kruyf@gmail.com wrote:

> On Wednesday, February 25, 2015 at 7:55:59 PM UTC+2, Dmitry A. Kazakov wrote:
> 
>> We managed 0.2ms with 4 channels without oversampling, transported over XCP
>> under VxWorks. XCP is a UDP-based protocol. The data were coalesced into
>> one frame.
> 
> what processor, what clockspeed, what net topology?

An i7, I don't remember the frequency; some average CPU and motherboard. BTW,
a high-end motherboard may actually turn out slower in terms of latencies. As
for the network topology, it was an industrial-grade one: a cross-over cable
(:-)).

[ Switches we measured imposed latencies somewhere between 6 and 15
microseconds. ]

> But it is probably as good as it gets with prefab software. 
> openPowerlink was reported to have a latency up to 450usecs Sync to 1st.

I have never worked with it, but from the short description I read, it looks like
they try to mimic the CANopen nonsense and have a master/slave architecture. Not
good.

We are using a proprietary TCP/IP based protocol (LabNet) for distribution
between the nodes.

It is a design question about the fieldbus network topology, in particular
switched vs. chained nodes (as in EtherCAT). The real-time crowd will try to
convince us that the only way is chained nodes, but I honestly believe that
normal networks will eventually win.

>> This is one case when you trade latencies for throughout. In other cases
>> latencies are more important, e.g. for control and for time
>> synchronization. The latter is the weakest spot of all existing field
>> buses. Most of them do not have any, others have a garbage clock
>> synchronization, like EtherCAT's distributed clock is.
> 
> STM implements the hardware for Precision time protocol IEEE1588 PTP.
> So its up to me to implement the software properly. It promises precision
> in the sub usec range. Personally I think its quite doable, provided the
> latency from packet arrival through software has little or no jitter.

OK. However, the issue is not so much to reduce latencies in time
synchronization as to actually use it to compensate clock drift (not
necessarily by tuning the clocks), in order to allow the subscriber to get
data stamped by the *subscriber's* clock. The problem with existing protocols
is that they do it by the *publisher's* clock.

0.005ms accuracy of clock-shift estimation is achievable using a normal
network stack. The best ADC we ever had had a 0.040ms conversion time. So it
need not be hardware clock synchronization, but that is nice to have, of course.

> Otherwise I do not see great problems getting your data collection issue
> sorted out.

Yes. But you know, once you give customers 8 10kHz channels they start
looking forward to 16 channels... (:-))

> The numbers I gave you are definitely squeezable, that was just a budget.
> The pc side is another kettle of fish though. There is a software stack
> around that promises things like 10usec cycletime on RTAI but there is no
> indication of the packet size or the net topology.

Maybe, but we must keep the big picture. It does not make sense to make sacrifices
just to get latencies 50% shorter. Latencies are not the issue most of the
time, though customers will always tell you otherwise. They lie. The real
problem is having it configurable, scalable, usable.

> But then there is still the digesting of the data. Do you plan to store a
> sample in memory for some time?

Usually data are stored away to be analyzed later. Typically it is a
data-logger which continuously writes a log file. Another subscriber to the
data is the HMI, which may render some curves. Yet another is a health
monitoring subsystem etc.

> any hardrive activity will most likely bug
> the cycle response,

Not really. The architecture is usually such that the controlling unit is
physically separate from the logger, or the HMI. E.g. consider an EtherCAT
installation. There is a controller (an industrial PC) running Linux or
VxWorks with an EtherCAT master. It does all real-time work. It also
samples the high-speed channels and publishes them over the middleware. The
data logger is another PC which subscribes to it.

But there would be no problem using a hard drive, because the control
cycles are 1-10ms. At least in the application area I am working in, there
are no processes requiring control under 1ms. Any normal PC can do that.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-26  9:47                 ` Dmitry A. Kazakov
@ 2015-02-26 12:07                   ` jan.de.kruyf
  2015-02-26 19:09                   ` jan.de.kruyf
  1 sibling, 0 replies; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-26 12:07 UTC (permalink / raw)


Here is something for your poster wall (I will digest your writing later):


"In case of discrepancy, you must ignore what they ask for
and give what they need, ignore what they would like and
tell them what they don't want to hear but need to know."
-- E.W. Dijkstra






^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-26  7:35           ` jan.de.kruyf
@ 2015-02-26 14:57             ` Simon Wright
  2015-02-26 19:36               ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Simon Wright @ 2015-02-26 14:57 UTC (permalink / raw)


jan.de.kruyf@gmail.com writes:

> On Wednesday, February 25, 2015 at 7:50:17 PM UTC+2, Simon Wright wrote:

> Was it you that had read the Gnat spec for the STM34 x-compiler? About
> memory allocation and that? cause I did not find it.

In the gnat-gpl-2014-arm-elf-linux-bin.tar.gz look at
lib/gnat/arm-eabi/ravenscar-sfp-stm32f4/adainclude/s-memory.ads (as I've
said elsewhere, what an odd place to find it!)

> I had a first look at the zfp package. Do you know what would be
> missing in the language there?

My equivalent has

   pragma Restrictions (No_Allocators);
   pragma Restrictions (No_Delay);
   pragma Restrictions (No_Dispatch);
   pragma Restrictions (No_Enumeration_Maps);
   pragma Restrictions (No_Exception_Propagation);
   pragma Restrictions (No_Finalization);
   pragma Restrictions (No_Implicit_Dynamic_Code);
   pragma Restrictions (No_Protected_Types);
   pragma Restrictions (No_Recursion);
   pragma Restrictions (No_Secondary_Stack);
   pragma Restrictions (No_Tasking);

> Because the interrupt latencies in the Ravenscar implementation are
> NOT pleasant. Adacore has long and involved things before we get to
> the handler.

My version has the Cortex handler registered as an invocation of my
dummy_handler macro:

   /* Pointer to the interrupt handler wrapper created by Ada; the
      'object' is the actual PO. */
   typedef void (*handler_wrapper)(void *object);

   /* Array, to be indexed from Ada as Interrupt_ID (0 .. 90), of handler
      wrappers. The index values also match IRQn_Type, in
      $CUBE/Drivers/CMSIS/Device/ST/STM32F4xx/Include/stm32f429xx.h.

      Called from the weak IRQ handlers defined below if not null.

      The Ada side will register handlers here; see
      System.Interupts.Install_Restricted_Handlers. */
   handler_wrapper _gnat_interrupt_handlers[91] = {0, };

   /* Parallel array containing the actual parameter to be passed to the
      handler wrapper.

      Implemented as parallel arrays rather than array of structs to be
      sure that interrupt_handlers[] is initialized. */
   void * _gnat_interrupt_handler_parameters[91] = {0, };

   #define dummy_handler(name, offset)                     \
   __attribute__((weak)) void name()                       \
   {                                                       \
     if (_gnat_interrupt_handlers[offset]) {               \
       _gnat_interrupt_handlers[offset]                    \
         (_gnat_interrupt_handler_parameters[offset]);     \
     } else {                                              \
       while (1) {};                                       \
     }                                                     \
   }

and the Ada handler (as reported by -gnatdg, and the same as for the
AdaCore RTS) is

   procedure buttons__button__handlerP (_object : in out
     buttons__buttonTV) is
   begin
      %push_constraint_error_label ()
      %push_program_error_label ()
      %push_storage_error_label ()
      $system__tasking__protected_objects__single_entry__lock_entry (
        _object._object'unchecked_access);
      buttons__button__handlerN (_object);
      $system__tasking__protected_objects__single_entry__service_entry
        (_object._object'unchecked_access);
      %pop_constraint_error_label
      %pop_program_error_label
      %pop_storage_error_label
      return;
   end buttons__button__handlerP;

I don't know how the % lines translate to code; for me, the Lock_Entry
call will find that we're in an ISR and do nothing; the HandlerN is the
code you wrote; and Service_Entry will check to see whether (the) entry
has been released and execute its code as well if so.
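
For reference, the source that produces such a handler is just an ordinary
protected object with Attach_Handler; a sketch (the interrupt name is
assumed, use whatever your runtime calls the button IRQ):

   with Ada.Interrupts.Names;
   with System;

   package Buttons is
      protected Button
        with Interrupt_Priority => System.Interrupt_Priority'Last
      is
         entry Wait_For_Press;
      private
         procedure Handler
           with Attach_Handler => Ada.Interrupts.Names.EXTI0_Interrupt;
         Pressed : Boolean := False;
      end Button;
   end Buttons;

   package body Buttons is
      protected body Button is
         entry Wait_For_Press when Pressed is
         begin
            Pressed := False;
         end Wait_For_Press;

         procedure Handler is                --  the HandlerN body above
         begin
            Pressed := True;                 --  releases the entry
         end Handler;
      end Button;
   end Buttons;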

HTH!


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-26  9:47                 ` Dmitry A. Kazakov
  2015-02-26 12:07                   ` jan.de.kruyf
@ 2015-02-26 19:09                   ` jan.de.kruyf
  2015-02-27  8:58                     ` Dmitry A. Kazakov
  1 sibling, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-26 19:09 UTC (permalink / raw)


On Thursday, February 26, 2015 at 11:47:31 AM UTC+2, Dmitry A. Kazakov wrote:


> 
> [ Switches we measured imposed latencies somewhere between 6 and 15
> microseconds. ]
> 

I found some HP boxes with a guaranteed latency and a spy connector,
all at a reasonable price.

> 
> I never worked with it, but from the short description I read, looks like
> they try to mimic CANOpen-nonsense and have master/slave architecture. Not
> good.

Don't. It makes your eyes go squint when you try to read the code.

> 
> We are using a proprietary TCP/IP based protocol (LabNet) for distribution
> between the nodes.

It is not -fast-.
Bare Ethernet frames are better, I would think. Just repeat the poll/response cycle when no good data arrives. And 3 tries is death (or whatever).

> 
> It is a design question about the field bus network topology, in particular
> switch vs. chained nodes (like in EtherCAT). Real-time crowd will try to
> convince us that the only way is chained nodes, but I honestly believe that
> normal networks will eventually win.

How do you do IEEE 1588? This way around the loop, or that way?
The whole scheme becomes dependent on the jitter tolerance of the PC, because the PC is part of the delay loop. Nah!
Whereas if one of the outstations takes on the syncer role, the PC can be as floppy as it likes; we don't care. It will only show up in the inter-poll gaps.
But the terminals all run in perfect step and have one sense of time.

So yes, I am agnostic as far as the software writing goes. The changes are not very big. And yes, more traffic is possible, because there is less cruft. But it involves non-standard hardware with 2 PHYs or a 3-way PHY.
(The STM will deal graciously with either; that is not the point.) And the timing will be less accurate. 
Further, when 1 station is down it is a major upheaval; if you look carefully at an EtherCAT MAC/PHY you will know why. And that structure is not readily emulated.
Whereas in a star one can unplug a box, which will then be declared dead by the software, until the time that a new box announces itself.

The PC does get offloaded a bit: less cruft in the frames to throw away. But there are many ways to kill a mockingbird.
I just realized to my horror that an STM terminal connected through -- just wait for it -- a parallel card handles the data load easily and acts like a beautiful buffer, and the system will for sure have less jitter, so it can handle more outstations. Crazy, hey. (Don't try to sell this to the RT guys.) But then again a microcontroller connected via a PCI interface on a plugin card at E400.- will please everybody mightily, and it might do the same job.


> OK. However, it is not much an issue to reduce latencies in time
> synchronization as to actually to use it to compensate clock drift (not
> necessarily by tuning the clocks) in order to allow the subscriber to get
> data stamped by the *subscriber's* clock. The problem with actual protocols
> is that they do it by the *publishe's* clock.

Your wishes have been fulfilled: there is drift correction incorporated in the hardware, controlled via software from IEEE 1588.

>
> The best ADC we ever had had 0.040ms conversion time.

Those were wonderful ADCs then, but now how about this:
 ADC conversion rate with 12-bit resolution is up to:
     2.4 Msample/s in single ADC mode,
     4.5 Msample/s in dual interleaved ADC mode,
     7.2 Msample/s in triple interleaved ADC mode.

24 channels split over 3 ADCs (that is where the 'triple interleaved' comes in).

Nice, hey? We might do a bit of oversampling now to get the accuracy right.

> 
> Yes. But you know, once you give customers 8 10kHz channels they start
> looking forward 16 channels... (:-))

It's there, just don't sell it all at once. My uncle would say: "Remember the Philips shaver. They knew full well they were eventually going to make a 3-headed one." (Back in 1950 or so, when they still made them with 1 head.)

> 
> Maybe, but we must keep a big picture. It does not make sense to sacrifice
> for making latencies 50% shorter. Latencies are not the issue most of the
> time, though customers will always tell you otherwise. They lie. The real
> problem is having it 

> configurable, scalable, usable.

Tell me more . . .
(And remember old Miesel, Dijkstra. I don't know if you have read any of his notes, all written in beautiful longhand, and all about computer science.
But customer relations was not one of his fortes.)

> 
> Not really. The architecture is usually such that the controlling unit is
> physically separate from the logger, or the HMI. E.g. consider an EtherCAT
> installation. There is a controller (an industrial PC) running Linux or
> VxWorks with an EtherCAT master. It does all real-time work. It also
> samples the high-speed channels and publishes them over the middleware. The
> data logger is another PC which subscribes to it.
> 

OK, so we don't need a parallel port; heh, I was sweating already . . .


cheers,

j.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-26 14:57             ` Simon Wright
@ 2015-02-26 19:36               ` jan.de.kruyf
  2015-02-27  8:45                 ` Simon Wright
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-26 19:36 UTC (permalink / raw)


On Thursday, February 26, 2015 at 4:57:13 PM UTC+2, Simon Wright wrote:

> said elsewhere, what an odd place to find it!)

Indeed! Someone does not use Doxygen. How unsettling. . . .

Thanks for your long post. I will look through it later. There is a lot to be learned still.

Would it be unsettling if I asked why there is no system.ads in the zfp collection?

especially the compiler switches at the end.

Cheers,

j.


> 
> > I had a first look at the zfp package. Do you know what would be
> > missing in the language there?
> 
> My equivalent has
> 
>    pragma Restrictions (No_Allocators);
>    pragma Restrictions (No_Delay);
>    pragma Restrictions (No_Dispatch);
>    pragma Restrictions (No_Enumeration_Maps);
>    pragma Restrictions (No_Exception_Propagation);
>    pragma Restrictions (No_Finalization);
>    pragma Restrictions (No_Implicit_Dynamic_Code);
>    pragma Restrictions (No_Protected_Types);
>    pragma Restrictions (No_Recursion);
>    pragma Restrictions (No_Secondary_Stack);
>    pragma Restrictions (No_Tasking);
> 
> > Because the interrupt latencies in the Ravenscar implementation are
> > NOT pleasant. Adacore has long and involved things before we get to
> > the handler.
> 
> My version has the Cortex handler registered as an invocation of my
> dummy_handler macro:
> 
>    /* Pointer to the interrupt handler wrapper created by Ada; the
>       'object' is the actual PO. */
>    typedef void (*handler_wrapper)(void *object);
> 
>    /* Array, to be indexed from Ada as Interrupt_ID (0 .. 90), of handler
>       wrappers. The index values also match IRQn_Type, in
>       $CUBE/Drivers/CMSIS/Device/ST/STM32F4xx/Include/stm32f429xx.h.
> 
>       Called from the weak IRQ handlers defined below if not null.
> 
>       The Ada side will register handlers here; see
>       System.Interupts.Install_Restricted_Handlers. */
>    handler_wrapper _gnat_interrupt_handlers[91] = {0, };
> 
>    /* Parallel array containing the actual parameter to be passed to the
>       handler wrapper.
> 
>       Implemented as parallel arrays rather than array of structs to be
>       sure that interrupt_handlers[] is initialized. */
>    void * _gnat_interrupt_handler_parameters[91] = {0, };
> 
>    #define dummy_handler(name, offset)                     \
>    __attribute__((weak)) void name()                       \
>    {                                                       \
>      if (_gnat_interrupt_handlers[offset]) {               \
>        _gnat_interrupt_handlers[offset]                    \
>          (_gnat_interrupt_handler_parameters[offset]);     \
>      } else {                                              \
>        while (1) {};                                       \
>      }                                                     \
>    }
> 
> and the Ada handler (as reported by -gnatdg, and the same as for the
> AdaCore RTS) is
> 
>    procedure buttons__button__handlerP (_object : in out
>      buttons__buttonTV) is
>    begin
>       %push_constraint_error_label ()
>       %push_program_error_label ()
>       %push_storage_error_label ()
>       $system__tasking__protected_objects__single_entry__lock_entry (
>         _object._object'unchecked_access);
>       buttons__button__handlerN (_object);
>       $system__tasking__protected_objects__single_entry__service_entry
>         (_object._object'unchecked_access);
>       %pop_constraint_error_label
>       %pop_program_error_label
>       %pop_storage_error_label
>       return;
>    end buttons__button__handlerP;
> 
> I don't know how the % lines translate to code; for me, the Lock_Entry
> call will find that we're in an ISR and do nothing; the HandlerN is the
> code you wrote; and Service_Entry will check to see whether (the) entry
> has been released and execute its code as well if so.
> 
> HTH!

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-26 19:36               ` jan.de.kruyf
@ 2015-02-27  8:45                 ` Simon Wright
  2015-02-27  9:59                   ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Simon Wright @ 2015-02-27  8:45 UTC (permalink / raw)


jan.de.kruyf@gmail.com writes:

> Would it be unsettling if I asked why there is no system.ads in the
> zfp collection?

There is a system.ads at
gnat-gpl-2014-arm-elf-linux-bin/lib/gcc/arm-eabi/4.7.4/rts-zfp/adainclude/.

The zfp package appears to be a construction kit with missing parts and
no instructions.

> especially the compiler switches at the end.

Not sure what you mean here? The top of the system.ads above includes

   pragma Restrictions (No_Exception_Propagation);
   pragma Restrictions (No_Exception_Registration);
   pragma Restrictions (No_Implicit_Dynamic_Code);
   pragma Restrictions (No_Finalization);
   pragma Restrictions (No_Tasking);
   pragma Discard_Names;

but I don't know what else is missing (no System.Memory, for a start).

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-26 19:09                   ` jan.de.kruyf
@ 2015-02-27  8:58                     ` Dmitry A. Kazakov
  2015-02-28 19:57                       ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2015-02-27  8:58 UTC (permalink / raw)


On Thu, 26 Feb 2015 11:09:02 -0800 (PST), jan.de.kruyf@gmail.com wrote:

> On Thursday, February 26, 2015 at 11:47:31 AM UTC+2, Dmitry A. Kazakov wrote:
> 
>> We are using a proprietary TCP/IP based protocol (LabNet) for distribution
>> between the nodes.
> 
> is not -fast-.
> Bare E-frames are better, I would think. Just repeat the poll/response
> cycle when no good data arrives. And 3 tries is death. (or whatever)

But TCP/IP is more flexible. Furthermore, 1Gbaud Ethernet is fast enough to
outperform anything we could do at the nodes. It is not the bottleneck, so
far. E.g. practically all field bus couplers are 100Kbaud. You will easily
beat them with TCP/IP + NO_DELAY by going 1GBaud.
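
(For reference, a minimal GNAT.Sockets sketch of a client that sets NO_DELAY; the address and port are placeholders, and it assumes a hosted GNAT runtime rather than the bare board:)

   --  Minimal client that switches off Nagle (TCP NO_DELAY).
   --  Address and port below are placeholders.
   with GNAT.Sockets; use GNAT.Sockets;

   procedure No_Delay_Client is
      Client : Socket_Type;
      Server : Sock_Addr_Type;
   begin
      Create_Socket (Client);
      Server.Addr := Inet_Addr ("192.168.0.10");
      Server.Port := 5555;
      Connect_Socket (Client, Server);

      --  Small poll/response frames should go out immediately.
      Set_Socket_Option
        (Client, IP_Protocol_For_TCP_Level,
         (Name => No_Delay, Enabled => True));

      --  ... exchange frames here ...
      Close_Socket (Client);
   end No_Delay_Client;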

>> It is a design question about the field bus network topology, in particular
>> switch vs. chained nodes (like in EtherCAT). Real-time crowd will try to
>> convince us that the only way is chained nodes, but I honestly believe that
>> normal networks will eventually win.
> 
> How do you do IEEE1588 ? this way around the loop or that way?

We don't. Time synchronization is integrated into the middleware protocol
we are using.

> The whole scheme becomes dependent on the jitter tolerance of the pc
> because the pc is part of the delay loop.

We collect statistics of round-trip times. They are filtered for artifacts and a weighted average of the good samples is taken over a floating window (for the cases when the network performance fluctuates). So jitter is eliminated.
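
(In outline, something like this sketch; the window size, rejection threshold and weights are invented for illustration, not the actual parameters:)

   --  Floating window over round-trip times with artifact rejection and a
   --  recency-weighted average.  All constants are illustrative only.
   package Round_Trip_Filter is
      subtype Microseconds is Long_Float;
      procedure Add_Sample (RTT : Microseconds);
      function  Smoothed   return Microseconds;
   end Round_Trip_Filter;

   package body Round_Trip_Filter is

      Window_Size : constant := 16;
      Window      : array (1 .. Window_Size) of Microseconds :=
                      (others => 0.0);
      Next        : Positive := 1;    --  slot to overwrite next
      Count       : Natural  := 0;    --  samples stored so far

      function Smoothed return Microseconds is
         Sum, Weights : Microseconds := 0.0;
         W            : Microseconds := 1.0;
         Slot         : Integer      := Next - 1;   --  newest sample
      begin
         for I in 1 .. Count loop
            if Slot < 1 then
               Slot := Window_Size;
            end if;
            Sum     := Sum + W * Window (Slot);
            Weights := Weights + W;
            W       := W * 0.8;       --  older samples weigh less
            Slot    := Slot - 1;
         end loop;
         return (if Weights > 0.0 then Sum / Weights else 0.0);
      end Smoothed;

      procedure Add_Sample (RTT : Microseconds) is
         Avg : constant Microseconds := Smoothed;
      begin
         --  Reject obvious artifacts once the window has warmed up.
         if Count = Window_Size and then RTT > 3.0 * Avg then
            return;
         end if;
         Window (Next) := RTT;
         Next  := (Next mod Window_Size) + 1;
         Count := Natural'Min (Count + 1, Window_Size);
      end Add_Sample;

   end Round_Trip_Filter;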

> while if one of the outstations takes the syncer role then the pc can be
> as floppy as it likes, we dont care.

But you still have to translate the clock of that terminal into the PC
clock. The scheme of a dedicated clock master does not work.

> It will only show up in the inter-poll gaps.
> But the terminals all run in perfect step and have one sense of time.

Right, but the problem is that in most cases the terminals run some
asynchronous tasks which must be time stamped. For this they would use
internal counters and you have the problem of translating these counters
into the master clock and then into the PC clock. This is what does not
work in EtherCAT.

Consider as an example an analogue input. You should be able to latch the
clock at the end of a conversion and convert it into the PC clock. This
would be "free run" mode.

As another example take a synchronous case with triggered AD conversions in
multiple terminals. This does not work with EtherCAT either. It only looks
synchronous, because you need to distribute the trigger signal to all
terminals. Since the signal's frame arrives at the terminals with different
latencies you must have a time stamp (of some near future) in it. So, in
fact, this is also asynchronous.
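
(In Ada terms, the receiving end of such a future-stamped trigger might look like the sketch below; representing the master timestamp directly as Ada.Real_Time.Time and the conversion-start hook are simplifications for illustration:)

   --  Fire a conversion at a trigger instant that was stamped in master
   --  time; the offset translates it into the terminal's local clock.
   with Ada.Real_Time; use Ada.Real_Time;

   procedure Fire_At (Trigger_In_Master_Time : Time;
                      Local_Minus_Master     : Time_Span) is
      Local_Deadline : constant Time :=
        Trigger_In_Master_Time + Local_Minus_Master;
   begin
      delay until Local_Deadline;  --  all terminals reach this instant together
      null;  --  Start_Conversion would go here (hypothetical hardware call)
   end Fire_At;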

> Further, when 1 station is down it is a major upheaval; if you look
> carefully at an EtherCAT MAC/PHY you will know why. And that structure
> is not readily emulated.

Yes, they have a mode where the cable goes back to the master. But there is
no way to use the distributed clock in that mode, so we don't use it anyway.

> While in a star one can unplug a box, which then will be declared dead by
> the software, until the time that a new box announces itself.

Yes, though to keep it balanced, one still could argue that the switch is
the new weak point then, and that wiring is more complicated. Nevertheless
star topology will win as it did when going from 10BNC to twisted pair.
 
>> OK. However, it is not so much an issue to reduce latencies in time
>> synchronization as to actually use it to compensate clock drift (not
>> necessarily by tuning the clocks) in order to allow the subscriber to get
>> data stamped by the *subscriber's* clock. The problem with actual protocols
>> is that they do it by the *publisher's* clock.
> 
> Your wishes have been fulfilled, there is drift correction incorporated in
> the hardware, that is controlled via software from IEEE1588.

If it consistently manipulates all time sources on the board, then OK. From
our experience, it is better to leave clocks alone (and have no problem
with negative adjustments) and only translate time stamps.

>> The best ADC we ever had had 0.040ms conversion time.
> 
> Those were wonderful ADCs then. But now how about this:
>  ADC conversion rate with 12 bit resolution is up to:
>      2.4 M.sample/s in single ADC mode,
>      4.5 M.sample/s in dual interleaved ADC mode,
>      7.2 M.sample/s in triple interleaved ADC mode.

M = mega? Wow!

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-27  8:45                 ` Simon Wright
@ 2015-02-27  9:59                   ` jan.de.kruyf
  2015-02-28  9:57                     ` Simon Wright
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-27  9:59 UTC (permalink / raw)


On Friday, February 27, 2015 at 10:45:39 AM UTC+2, Simon Wright wrote:

> 
> There is a system.ads at
> gnat-gpl-2014-arm-elf-linux-bin/lib/gcc/arm-eabi/4.7.4/rts-zfp/adainclude/.

Phew, I did find that also; maybe some more reading of the manual is in order.

> 
> The zfp package appears to be a construction kit with missing parts and
> no instructions.
> 
> > especially the compiler switches at the end.
> 
> Not sure what you mean here? The top of the system.ads above includes
> 

In the private section:
      -- System Implementation Parameters

> but I don't know what else is missing (no System.Memory, for a start).

and no 'Image routines. But all that is easily fixed.

Perhaps the zfp tar file was meant as an extension to the zfp-rts (for those who understand adacore-ish and cuneiform script)

Thanks a lot for your help. It is highly appreciated. I think I am on my way now to understanding the whole system in terms of a highly sophisticated and fast microprocessor that can be made to sing. Clever abstractions are needed sometimes, but they often get in the way of the need for speed.

cheers,

j.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-27  9:59                   ` jan.de.kruyf
@ 2015-02-28  9:57                     ` Simon Wright
  2015-02-28 19:08                       ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Simon Wright @ 2015-02-28  9:57 UTC (permalink / raw)


jan.de.kruyf@gmail.com writes:

> On Friday, February 27, 2015 at 10:45:39 AM UTC+2, Simon Wright wrote:

>> > especially the compiler switches at the end.
>> 
>> Not sure what you mean here? The top of the system.ads above includes
>> 
>
> In the private section:
>       -- System Implementation Parameters

I see what you mean. Because I needed to avoid copying the AdaCore code, I
worked from the GNU/Linux ARMEL version (I think this is what's used in
the RPi).

I now need to get back and review the differences between the AdaCore
version and mine (for example, I have Duration_32_Bits True; I think
that at that moment I was stuck on Time_Span being 32 bits .. but now I
have it at 64 bits ..)

> Thanks a lot for your help.

Glad to be able to.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-28  9:57                     ` Simon Wright
@ 2015-02-28 19:08                       ` jan.de.kruyf
  2015-02-28 20:23                         ` Simon Wright
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-28 19:08 UTC (permalink / raw)


On Saturday, February 28, 2015 at 11:57:14 AM UTC+2, Simon Wright wrote:

> I now need to get back and review the differences between the AdaCore
> version and mine (for example, I have Duration_32_Bits True; I think
> that at that moment I was stuck on Time_Span being 32 bits .. but now I
> have it at 64 bits ..)

32 bits? That won't do. For time synching you need 64 bits. . .

In any case, I thought lemme quickly (note the word 'quickly') port the little RS-232 terminal program that I did to zfp. I want to use the system time-tick timer (for measuring), which is in the board support packages, so I plant what I need in some subdirectory and start dropping what I don't need, compiling, and catching bugs. A great learning experience.

So now I have a question you might have seen the answer to, on one of your travels:

Where exactly does the system drop from supervisor mode into user mode?
and also where is the silly routine to go back into supervisor mode?
I thought I combed through the Ravenscar init section properly, but no luck.
I know the theory about it. But it would be great to see an example.

And I did not spot any system setup in the zfp collection! (That is the worst).


May Oberon live forever (I even wrote a single pass compiler for a mini version long ago, for some other embedded project).

cheers,

j.


> 
> > Thanks a lot for your help.
> 
> Glad to be able to.



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-27  8:58                     ` Dmitry A. Kazakov
@ 2015-02-28 19:57                       ` jan.de.kruyf
  2015-03-01  9:27                         ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-02-28 19:57 UTC (permalink / raw)


On Friday, February 27, 2015 at 10:58:30 AM UTC+2, Dmitry A. Kazakov wrote:

> But TCP/IP is more flexible. Furthermore, 1Gbaud Ethernet is fast enough to
> outperform anything we could do at the nodes. It is not the bottleneck, so
> far. E.g. practically all field bus couplers are 100Kbaud. You will easily
> beat them with TCP/IP + NO_DELAY by going 1GBaud.

Yes, on bigger systems, but on cheap and not-so-nasty embedded stuff we might still have 100M Ethernet for a while. So I still see the need to limit my verbosity in the software (and in the frame size). Otherwise I will be back at the bandwidth you quoted from the measurements you did.

> 
> Right, but the problem is that in most cases the terminals run some
> asynchronous tasks which must be time stamped. For this they would use
> internal counters and you have the problem of translation these counters
> into the master clock and then into the PC clock. This is what does not
> work in EtherCAT.
> 
> Consider as an example an analogue input. You should be able to latch the
> clock at the end of a conversion and convert it into the PC clock. This
> would be "free run" mode.
> 
> As another example take a synchronous case with triggered AD conversions in
> multiple terminals. This does not work with EtherCAT either. It only looks
> synchronous, because you need to distribute the trigger signal to all
> terminals. Since the signal's frame arrives at the terminals with different
> latencies you must have time stamp (of some near future) in it. So, in
> fact, this is also asynchronous.
> 

I seem to smell some contradiction in your reasoning, but never mind that.
(I am a bit tired.)
On my side of things I need perfectly synchronized slaves for fast motion control. The hard real time is less important, so that is how I got to my last opinion.
Then, if I am ever in a position where I have to split the work over 2 PCs (and that might be closer than I wish), I might just be very pleased when both slave groups have the same sense of time.
On your side you need to be able to stamp your measurements with hard real time, I understand. Or at least deduce the hard real time in some way.

So I did some more research on the IEEE protocol; it looks as if it can handle it without a great investment in design time, etc.
There are 2 good Linux stacks around; together with a hardware-timestamping network card it would work well to sync the whole caboodle to hard real time. And in case the master drops, one of the other boxes in the network becomes master. There can be some voting process every now and again.

>
> Yes, though to keep it balanced, one still could argue that the switch is
> the new weak point then, and that wiring is more complicated. Nevertheless
> star topology will win as it did when going from 10BNC to twisted pair.

Probably double the wiring cost. That is a great killer with the junior project managers. (My son is one, so I hear these noises on and off.)
I do not feel that a switch is a weak point. Only if you try to get away with office-quality RJ45 without bolting down the cables right next to the switch, as you ought to do in any case, for grounding purposes.

> 
> If it consistently manipulates all time sources on the board, then OK. From
> our experience, it is better to leave clocks alone (and have no problem
> with negative adjustments) and only translate time stamps.

Before I got deeply into this fieldbus thing I set up a small model on the laptop to demonstrate some motor-driver idea. I did have a jitter-averaging design in there that worked quite well. It was basically the same as what is in the IEEE circuitry in the STM controller: it works as a PI controller on the drift correction, except that my algorithm locked a lot faster than basic PI.

But that I did on the basis of a sync pulse every scan period, like Powerlink.
EtherCAT does not have that, I believe.
In any case that still only gives network time, which then must be translated for data gathering.

So with IEEE 1588 you slowly walk past all stations once every 2 seconds or even slower and measure the cable delay, and add or subtract 1 bit to/from the drift register as needed; that is, as soon as the coarse time correction has been determined.
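
For what it is worth, the bare bones of that PI idea look something like this; the gains and the Apply_Rate_Correction hook are made up for illustration (the real thing would rewrite the PTP addend/drift register in the Ethernet MAC):

   --  PI loop acting on the measured offset (master minus local) and
   --  nudging the local clock rate.  Gains and hook are placeholders.
   package Drift_PI is
      procedure Update (Offset_Ns : Long_Float);
   end Drift_PI;

   package body Drift_PI is

      Kp : constant := 0.1;      --  proportional gain (illustrative)
      Ki : constant := 0.01;     --  integral gain (illustrative)

      Integral : Long_Float := 0.0;

      --  Hypothetical hook; in practice this would rewrite the PTP
      --  addend/drift register of the MAC.
      procedure Apply_Rate_Correction (Parts_Per_Billion : Long_Float) is null;

      procedure Update (Offset_Ns : Long_Float) is
      begin
         Integral := Integral + Offset_Ns;
         Apply_Rate_Correction (Kp * Offset_Ns + Ki * Integral);
      end Update;

   end Drift_PI;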


> M = mega? Wow!

Yes, but in my experience you might have to stop the processor clock for a bit while you measure. Especially on cheap 2 layer boards.

cheers, and thanks for your time. I highly appreciate your feedback.

j.



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-28 19:08                       ` jan.de.kruyf
@ 2015-02-28 20:23                         ` Simon Wright
  2015-03-03  8:52                           ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Simon Wright @ 2015-02-28 20:23 UTC (permalink / raw)


jan.de.kruyf@gmail.com writes:

> 32 bits? That won't do. For time synching you need 64 bits. . .

Yes, I suppose so, but I've not yet needed to convert Duration to
Time_Span or vice versa .. will post bug report to myself.

> Where exactly does the system drop from supervisor mode into user mode?
> and also where is the silly routine to go back into supervisor mode?

I think it's in s-bbcppr.adb, Pend_SV_Handler.

I may well be wrong, but doesn't the Cortex switch to supervisor mode
automatically when an interrupt occurs?


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-28 19:57                       ` jan.de.kruyf
@ 2015-03-01  9:27                         ` Dmitry A. Kazakov
  2015-03-03  8:42                           ` jan.de.kruyf
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2015-03-01  9:27 UTC (permalink / raw)


On Sat, 28 Feb 2015 11:57:00 -0800 (PST), jan.de.kruyf@gmail.com wrote:

> On Friday, February 27, 2015 at 10:58:30 AM UTC+2, Dmitry A. Kazakov wrote:
> 
>> Right, but the problem is that in most cases the terminals run some
>> asynchronous tasks which must be time stamped. For this they would use
>> internal counters and you have the problem of translating these counters
>> into the master clock and then into the PC clock. This is what does not
>> work in EtherCAT.
>> 
>> Consider as an example an analogue input. You should be able to latch the
>> clock at the end of a conversion and convert it into the PC clock. This
>> would be "free run" mode.
>> 
>> As another example take a synchronous case with triggered AD conversions in
>> multiple terminals. This does not work with EtherCAT either. It only looks
>> synchronous, because you need to distribute the trigger signal to all
>> terminals. Since the signal's frame arrives at the terminals with different
>> latencies you must have a time stamp (of some near future) in it. So, in
>> fact, this is also asynchronous.
> 
> I seem to smell some contradiction in your reasoning, but never mind that.
> (I am a bit tired.)
> On my side of things I need perfectly synchronized slaves for fast motion
> control. The hard real time is less important, so that is how I got to my
> last opinion.
> Then, if I am ever in a position where I have to split the work over 2 PCs
> (and that might be closer than I wish), I might just be very pleased
> when both slave groups have the same sense of time.
> On your side you need to be able to stamp your measurements with hard real
> time, I understand. Or at least deduce the hard real time in some way.

But the point is you cannot have it well synchronized over Ethernet without
time stamping. Synchronized slaves is the second scenario I described,
which applies even more for 100BASE. At its core it is triggering synchronous
actions by a *local* timer derived from incoming protocol packets/frames
rather than directly from the frames.

And from the software perspective, the idea that you could enforce
real-time/synchronicity per real-time protocol is flawed. The more complex the
software running on top of the protocol, the fewer chances you have to keep
synchronicity up to the application level, because you need time to
propagate the signal through levels of hardware and software.

> But that I did on the basis of a sync pulse every scan period, like
> Powerlink. EtherCAT does not have that, I believe.
> In any case that still only gives network time, which then must be
> translated for data gathering.
> 
> So with IEEE 1588 you slowly walk past all stations once every 2 seconds or
> even slower and measure the cable delay, and add or subtract 1 bit to/from
> the drift register as needed; that is, as soon as the coarse time
> correction has been determined.

When it is only cable delay without in-terminal(s) delay, it is useless.
And you still need to adjust terminal time sources or translate their
readings to and from master clock.
 
-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-03-01  9:27                         ` Dmitry A. Kazakov
@ 2015-03-03  8:42                           ` jan.de.kruyf
  2015-03-03 10:57                             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: jan.de.kruyf @ 2015-03-03  8:42 UTC (permalink / raw)


On Sunday, March 1, 2015 at 11:28:01 AM UTC+2, Dmitry A. Kazakov wrote:

>
> But the point is you cannot have it well synchronized over Ethernet without
> time stamping. Synchronized slaves is the second scenario I described,
> which applies even more for 100BASE. At its core it is triggering synchronous
> actions by a *local* timer derived from incoming protocol packets/frames
> rather than directly from the frames.

Yes, so IEEE 1588 lays down a packet exchange for synching that measures the cable delay and gives the real time for each packet at sending. (It is done by hardware timestamping at exit, in the NIC or PHY, and timestamping at entry.)
And both parties know about the exit time on the other side; it is sent in a second message, since it can only be read once the message is gone.
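
To spell out the arithmetic (the standard delay request-response sums, with made-up but consistent numbers; T1/T4 are master-clock stamps, T2/T3 slave-clock stamps, and the path is assumed symmetric):

   --  Classic IEEE 1588 delay request/response arithmetic.
   --  T1: master sends Sync, T2: slave receives Sync,
   --  T3: slave sends Delay_Req, T4: master receives Delay_Req.
   with Ada.Text_IO; use Ada.Text_IO;

   procedure PTP_Offset_Demo is
      type Nanoseconds is range -2**62 .. 2**62 - 1;

      T1 : constant Nanoseconds := 1_000_000;   --  placeholder values
      T2 : constant Nanoseconds := 1_000_900;
      T3 : constant Nanoseconds := 1_050_000;
      T4 : constant Nanoseconds := 1_049_300;

      Offset     : constant Nanoseconds := ((T2 - T1) - (T4 - T3)) / 2;
      Path_Delay : constant Nanoseconds := ((T2 - T1) + (T4 - T3)) / 2;
   begin
      --  Here: the slave is 800 ns ahead of the master, cable delay 100 ns.
      Put_Line ("offset =" & Nanoseconds'Image (Offset) & " ns");
      Put_Line ("delay  =" & Nanoseconds'Image (Path_Delay) & " ns");
   end PTP_Offset_Demo;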

> 
> And from the software perspective, the idea that you could enforce
> real-time/synchronicity per real-time protocol is flawed. The more complex the
> software running on top of the protocol, the fewer chances you have to keep
> synchronicity up to the application level, because you need time to
> propagate the signal through levels of hardware and software.

Yes again. So in my humble opinion the sums for synching and the return-packet-sending must be done as the incoming packet leaves the DMA channel. Preferably even without any interrupt on a microcontroller, because before you know it some clever new feature has increased the latency and, worse, made it variable.
In any case, let me study a bit more. The Linux ptpd stack people got accuracies of about 15 usecs between 2 boxes with kernel timestamping only. With hardware timestamping it will go well below 1 usec.
So that is the difference between software and hardware _in the terminals_,
but there is also the variable delay in a network switch. For that there is also some clever trick, but I am not sure I need that at the moment.
It certainly is important as the network layout grows.

> And you still need to adjust terminal time sources or translate their
> readings to and from master clock.
>  

Yes, people have experimented with synching it with NTP (from some outside real-time server). I did not delve into that at all yet.

So to sum up:
To me there are broadly 3 alternatives

1. a sync pulse broadcast every scan period, which triggers the data reading etc. This might then be refined with some averaging circuitry to create a stable local clock to do the actual triggering.
Then for each station the master must keep track of the offset from the master.

2. Synchronize a whole subnet to itself. This is very attractive since with a good algorithm the individual boards will track one another quite perfectly and dead terminals give no real problem. (If you can imagine a flock of birds making a turn: each bird only sees what the nearest neighbors do and reacts accordingly)
against: 
The master still needs to keep track of the offset from real time for that net.
(So there will be only 1 offset per subnet) And there is possibly more network traffic involved, since all exchanges between neighbors must happen over 1 net.

3. IEEE1588:
Time is imposed top-down. It works very accurately if implemented properly in a small net. Everybody knows real time.
Against:
As the net grows, jitter and inaccuracies get bigger. Switchboxes preferably need to be IEEE1588-enabled in bigger networks.
And as with all top-down structures, collapses will be quite spectacular.

So this is what I have dreamed up so far.
Please comment as you see fit.

Cheers,

j.



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-02-28 20:23                         ` Simon Wright
@ 2015-03-03  8:52                           ` jan.de.kruyf
  0 siblings, 0 replies; 39+ messages in thread
From: jan.de.kruyf @ 2015-03-03  8:52 UTC (permalink / raw)


On Saturday, February 28, 2015 at 10:23:20 PM UTC+2, Simon Wright wrote:

> I think it's in s-bbcppr.adb, Pend_SV_Handler.

Yep, I did see that now, but the structure is not enabled; it just traps.
So I sidestepped the issue with the system tick timer, and started up the ptpd timer in the Ethernet block and used that.

But see my question about libgnat.a in zfp elsewhere.

I seem to feel pressured into learning how to build a runtime. BUT I will resist till the bitter end. . . (which is probably just around the corner!)

cheers,

j.



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: silly ravenscar question
  2015-03-03  8:42                           ` jan.de.kruyf
@ 2015-03-03 10:57                             ` Dmitry A. Kazakov
  0 siblings, 0 replies; 39+ messages in thread
From: Dmitry A. Kazakov @ 2015-03-03 10:57 UTC (permalink / raw)


On Tue, 3 Mar 2015 00:42:51 -0800 (PST), jan.de.kruyf@gmail.com wrote:

> On Sunday, March 1, 2015 at 11:28:01 AM UTC+2, Dmitry A. Kazakov wrote:
> 
>> And from the software perspective, the idea that you could enforce
>> real-time/synchronicity per real-time protocol is flawed. The more complex the
>> software running on top of the protocol, the fewer chances you have to keep
>> synchronicity up to the application level, because you need time to
>> propagate the signal through levels of hardware and software.
> 
> Yes again. So in my humble opinion the sums for synching and the
> return-packet-sending must be done as the incoming packet leaves the DMA
> channel.

It is not the issue. You cannot hard sync anything complex enough to be
useful.

> So to sum up:
> To me there are broadly 3 alternatives
> 
> 1. a sync pulse broadcast every scan period, which triggers the data
> reading etc. This might then be refined with some averaging circuitry to
> create a stable local clock to do the actual triggering.
> Then for each station the master must keep track of the offset from the master.
> 
> 2. Synchronize a whole subnet to itself. This is very attractive since
> with a good algorithm the individual boards will track one another quite
> perfectly and dead terminals give no real problem. (If you can imagine a
> flock of birds making a turn: each bird only sees what the nearest
> neighbors do and reacts accordingly)
> against: 
> The master still needs to keep track of the offset from real time for that
> net.
> (So there will be only 1 offset per subnet) And there is possibly more
> network traffic involved, since all exchanges between neighbors must
> happen over 1 net.
> 
> 3. IEEE1588:
> Time is imposed top-down. It works very accurately if implemented properly
> in a small net. Everybody knows real time.
> Against:
> As the net grows, jitter and inaccuracies get bigger. Switchboxes
> preferably need to be IEEE1588-enabled in bigger networks.
> And as with all top-down structures, collapses will be quite spectacular.
>
> So this is what I have dreamed up so far.
> Please comment as you see fit.

There is no reason not to support IEEE1588 where it is available, and to use
other means to estimate clock differences where it is not. It is not
a big problem to use both in the same network, and even between the same nodes,
weighting the result according to QoS.

The protocol should not synchronize or trigger anything. The latter is an
application-level issue. The former is impossible for the reasons described
above and is not required anyway.

If you can schedule actions ahead of time and know the clock differences,
that is the best real time possible, since you will never get latencies
shorter than the board can switch contexts anyway.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2015-03-03 10:57 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-02-24  9:07 silly ravenscar question jan.de.kruyf
2015-02-24 10:29 ` Dmitry A. Kazakov
2015-02-24 11:11   ` jan.de.kruyf
2015-02-24 13:38     ` Dmitry A. Kazakov
2015-02-25  8:48       ` jan.de.kruyf
2015-02-25 10:46         ` Dmitry A. Kazakov
2015-02-25 17:35           ` jan.de.kruyf
2015-02-25 17:55             ` Dmitry A. Kazakov
2015-02-26  8:48               ` jan.de.kruyf
2015-02-26  9:47                 ` Dmitry A. Kazakov
2015-02-26 12:07                   ` jan.de.kruyf
2015-02-26 19:09                   ` jan.de.kruyf
2015-02-27  8:58                     ` Dmitry A. Kazakov
2015-02-28 19:57                       ` jan.de.kruyf
2015-03-01  9:27                         ` Dmitry A. Kazakov
2015-03-03  8:42                           ` jan.de.kruyf
2015-03-03 10:57                             ` Dmitry A. Kazakov
2015-02-24 11:02 ` Jacob Sparre Andersen
2015-02-24 11:23   ` jan.de.kruyf
2015-02-24 13:43     ` Bob Duff
2015-02-25  9:07       ` jan.de.kruyf
2015-02-25 17:50         ` Simon Wright
2015-02-26  7:35           ` jan.de.kruyf
2015-02-26 14:57             ` Simon Wright
2015-02-26 19:36               ` jan.de.kruyf
2015-02-27  8:45                 ` Simon Wright
2015-02-27  9:59                   ` jan.de.kruyf
2015-02-28  9:57                     ` Simon Wright
2015-02-28 19:08                       ` jan.de.kruyf
2015-02-28 20:23                         ` Simon Wright
2015-03-03  8:52                           ` jan.de.kruyf
2015-02-24 15:30     ` Brad Moore
2015-02-24 16:52       ` Simon Wright
2015-02-25  3:01         ` Dennis Lee Bieber
2015-02-24 11:22 ` slos
2015-02-24 12:16   ` jan.de.kruyf
2015-02-24 11:24 ` J-P. Rosen
2015-02-24 12:10   ` jan.de.kruyf
2015-02-24 13:58 ` Simon Wright

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox