comp.lang.ada
* Getting started with bare-board development
@ 2016-11-11 22:19 Adam Jensen
  2016-11-11 22:43 ` Maciej Sobczak
                   ` (4 more replies)
  0 siblings, 5 replies; 27+ messages in thread
From: Adam Jensen @ 2016-11-11 22:19 UTC (permalink / raw)


Hi, I've recently begun to take a serious look at Ada 2012 and
SPARK 2014, and at using GNAT for the development of real-time software
in embedded systems. What is a good way to get started? I am currently
reading an ebook edition of "Concurrent and Real-Time Programming in
Ada"[1] and
I've recently ordered a paper copy of "Analysable Real-Time Systems:
Programmed in Ada"[2]; I also have the LRM and I come from a VHDL
background.

[1]:
https://www.amazon.com/Concurrent-Real-Time-Programming-Alan-Burns/dp/0521866979/
[2]:
https://www.amazon.com/Analysable-Real-Time-Systems-Programmed-Ada/dp/1530265509/

So I guess my question has more to do with sorting out the tool-chain
and a methodology than anything else. For example, would it be essential
(or especially convenient) to have a hardware development kit or is it
common to develop this kind of software using an emulator of some kind?
Adacore has an example for the STM32F4-Discovery[3], and more elaborate
documentation is available for the Nucleo[4], but both of those kits
seem to have very limited memory. How much can be done with that?

[3]:
http://docs.adacore.com/gnat_ugx-docs/html/gnat_ugx/gnat_ugx/arm-elf_topics_and_tutorial.html
[4]: http://www.inspirel.com/articles/Ada_On_Cortex.html

Also, do ARM processors make sense for safety critical systems? If not,
would it make more sense to target a different platform from the beginning?





* Re: Getting started with bare-board development
  2016-11-11 22:19 Getting started with bare-board development Adam Jensen
@ 2016-11-11 22:43 ` Maciej Sobczak
  2016-11-12  9:45 ` G.B.
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 27+ messages in thread
From: Maciej Sobczak @ 2016-11-11 22:43 UTC (permalink / raw)



> Hi, I've recently began to have a serious look at Ada-2012 and
> Spark-2014, and using GNAT for the development of real-time software in
> embedded systems. What is a good way to get started?

In addition to the material that you have found already, you might also have a look at this:

http://www.inspirel.com/articles/Ada_On_Cortex.html

This tutorial presents complete examples on several popular boards, and focuses on zero-runtime, bare-board development.

> Also, do ARM processors make sense for safety critical systems? If not,
> would it make more sense to target a different platform from the beginning?

ARM processors are becoming more and more visible, and there will surely be safety-critical projects using them in the future. Some of these processors are specifically designed to target such systems, or are evaluated and assessed by third-party certification companies to provide evidence of their safety properties.

-- 
Maciej Sobczak * http://www.inspirel.com



* Re: Getting started with bare-board development
  2016-11-11 22:19 Getting started with bare-board development Adam Jensen
  2016-11-11 22:43 ` Maciej Sobczak
@ 2016-11-12  9:45 ` G.B.
  2016-11-12 16:14   ` Adam Jensen
  2016-11-12 20:59 ` Brian Drummond
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 27+ messages in thread
From: G.B. @ 2016-11-12  9:45 UTC (permalink / raw)


On 11.11.16 23:19, Adam Jensen wrote:
> For example, would it be essential
> (or especially convenient) to have a hardware development kit or is it
> common to develop this kind of software using an emulator of some kind?

Given your background in VHDL, this is perhaps familiar,
but I'll mention it anyway, collected from here and there:

- would timers be more real on hardware? Certainly more realistic.

- would you miss the physical experience and inspirational
   power (if any) of the real thing?

- does the simulator support sensors and actuators of the
   kind you would like to operate?

Also, if you move to a Ravenscar model of Ada, then chances
are that your RTS will be smaller. So, less memory will be
needed.  At some point, I'd want to try my programs on hardware.

-- 
"HOTDOGS ARE NOT BOOKMARKS"
Springfield Elementary teaching staff



* Re: Getting started with bare-board development
  2016-11-12  9:45 ` G.B.
@ 2016-11-12 16:14   ` Adam Jensen
  2016-11-12 19:15     ` artium
                       ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Adam Jensen @ 2016-11-12 16:14 UTC (permalink / raw)


On 11/12/2016 04:45 AM, G.B. wrote:
> On 11.11.16 23:19, Adam Jensen wrote:
>> For example, would it be essential
>> (or especially convenient) to have a hardware development kit or is it
>> common to develop this kind of software using an emulator of some kind?
> 
> Given your background in VHDL, this is perhaps familiar,
> but I'll mention it anyway, collected from here and there:
> 
> - would timers be more real on hardware? Certainly more realistic.
> 
> - would you miss the physical experience and inspirational
>   power (if any) of the real thing?
> 
> - does the simulator support sensors and actuators of the
>   kind you would like to operate?
> 
> Also, if you move to a Ravenscar model of Ada, then chances
> are that your RTS will be smaller. So, less memory will be
> needed.  At some point, I'd want to try my programs on hardware.

Hi,

Thanks for your answer. I can easily imagine that following one or both
of those tutorials[1] would be an informative and rewarding experience,
but is the process outlined in those tutorials representative of a
typical design cycle for real-time, safety-critical, bare-board software
development?  For example, I suppose one could design logic for an FPGA
without the use of a simulator by synthesizing the logic, loading the
result into the hardware, and then verifying the design by monitoring
various test-points during operation. From my perspective, that approach
seems rather backwards. I am looking for a more sensible methodology for
embedded software engineering :)

[1]: Web links to a couple tutorials were posted earlier in this thread.

The model-based engineering approach in hardware design (digital logic,
especially) enables almost omniscient visibility into a model's behavior
through simulation. In ASIC design, there is very high confidence in the
design long before the [very expensive] fabrication of prototypes. A
"first-pass success" was common, where an implementation would go from
prototype to production without any design changes (a "re-spin").

How is it done in embedded software engineering? (Links and/or
references are very welcome)!

I also have many other questions, like:

* How does one develop and verify a Board Support Package (device
drivers, bootloader, etc.)?

* Do the various typical embedded platform profiles (e.g., Ravenscar)
require any Run-Time System implementation or extension?

* Is the BSP and RTS the kind of software that might/should be
implemented in Spark?

I am trying to get a realistic view of the most successful techniques
that are used by professional engineers to build high-integrity and
safety-critical real-time embedded software systems. Again, any links,
references, discussion, etc. will be very appreciated!




* Re: Getting started with bare-board development
  2016-11-12 16:14   ` Adam Jensen
@ 2016-11-12 19:15     ` artium
  2016-11-12 21:37       ` Adam Jensen
  2016-11-13  4:01     ` Jeffrey R. Carter
  2016-11-14 18:17     ` Simon Wright
  2 siblings, 1 reply; 27+ messages in thread
From: artium @ 2016-11-12 19:15 UTC (permalink / raw)


On Saturday, November 12, 2016 at 6:14:49 PM UTC+2, Adam Jensen wrote:
> On 11/12/2016 04:45 AM, G.B. wrote:
> > On 11.11.16 23:19, Adam Jensen wrote:
> >> For example, would it be essential
> >> (or especially convenient) to have a hardware development kit or is it
> >> common to develop this kind of software using an emulator of some kind?
> > 
> > Given your background in VHDL, this is perhaps familiar,
> > but I'll mention it anyway, collected from here and there:
> > 
> > - would timers be more real on hardware? Certainly more realistic.
> > 
> > - would you miss the physical experience and inspirational
> >   power (if any) of the real thing?
> > 
> > - does the simulator support sensors and actuators of the
> >   kind you would like to operate?
> > 
> > Also, if you move to a Ravenscar model of Ada, then chances
> > are that your RTS will be smaller. So, less memory will be
> > needed.  At some point, I'd want to try my programs on hardware.
> 
> Hi,
> 
> Thanks for your answer. I can easily imagine that following one or both
> of those tutorials[1] would be an informative an rewarding experience
> but is the process outlined in those tutorials representative of a
> typical design cycle for real-time, safety-critical, bare-board software
> development?  For example, I suppose one could design logic for an FPGA
> without the use of a simulator by synthesizing the logic, loading the
> result into the hardware, then verifying the design by monitoring
> various test-points during operation. From my perspective, that approach
> seems rather backwards. I am looking for a more sensible methodology for
> embedded software engineering :)
> 
> [1]: Web links to a couple tutorials were posted earlier in this thread.
> 
> The model-based engineering approach in hardware design (digital logic,
> especially) enables almost omniscient visibility into a model's behavior
> through simulation. In ASIC design, there is very high confidence in the
> design long before the [very expensive] fabrication of prototypes. A
> "first pass success" was common where an implementation would go from
> prototype to production without any design changes ("a spin").
> 
> How is it done in embedded software engineering? (Links and/or
> references are very welcome)!
> 
> I also have many other questions, like:
> 
> * How does one develop and verify a Board Support Package (device
> drivers, bootloader, etc.)?
> 
> * Do the various typical embedded platform profiles (e.g., Ravenscar)
> require any Run-Time System implementation or extension?
> 
> * Is the BSP and RTS the kind of software that might/should be
> implemented in Spark?
> 
> I am trying to get a realistic view of the most successful techniques
> that are used by professional engineers to build high-integrity and
> safety-critical real-time embedded software systems. Again, any links,
> references, discussion, etc. will be very appreciated!

Using a hardware emulator is not cost efficient.

If you are developing for a microcontroller (e.g. BSP, HAL, drivers), you usually begin with an evaluation board that best resembles the final design, and move to the custom hardware when it is ready (the custom hardware will typically be derived from the evaluation board). This is a "board for each developer" approach[1].

If you are developing high-level code for expensive hardware, you usually encapsulate the application part and compile it for your PC architecture[2]. The environment is simulated using models of the real hardware pieces.

For example, if you are developing a mission computer for an aircraft, with this approach you will need to write a simulation of the Inertial Navigation System, but you will not need to simulate the Ethernet chip that allows communication with that system. You simulate communication using sockets or shared memory, simulate flash with file operations, etc.

[1] For example, Texas Instruments stopped supporting simulators in their flagship IDE (http://processors.wiki.ti.com/index.php/CCSv6_Changes#Simulation)
[2] That is where using Ada helps a lot: it allows moving between hardware targets with relative ease.
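
To make the last point concrete, here is a rough sketch (all names invented, nothing from a real project) of how "simulate flash with file operations" can look in Ada: the spec is what the application sees, and only the body differs between the PC and the target.

   --  flash.ads : what the application sees, on the PC and on the target
   package Flash is
      Page_Size : constant := 256;
      type Byte is mod 2 ** 8;
      type Page is array (1 .. Page_Size) of Byte;
      type Page_Number is range 1 .. 1024;
      procedure Read  (Index : Page_Number; Data : out Page);
      procedure Write (Index : Page_Number; Data : Page);
   end Flash;

   --  flash.adb : SIMULATION body, backing the "flash" with a plain file.
   --  The body used on the target would drive the real flash controller.
   with Ada.Direct_IO;
   package body Flash is
      package Page_IO is new Ada.Direct_IO (Page);
      Store : Page_IO.File_Type;

      procedure Read (Index : Page_Number; Data : out Page) is
      begin
         Page_IO.Read (Store, Data, Page_IO.Positive_Count (Index));
      end Read;

      procedure Write (Index : Page_Number; Data : Page) is
      begin
         Page_IO.Write (Store, Data, Page_IO.Positive_Count (Index));
      end Write;
   begin
      --  The backing file stands in for the flash array; reading a page
      --  that was never written raises End_Error in this crude sketch.
      Page_IO.Create (Store, Page_IO.Inout_File, "flash.img");
   end Flash;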












* Re: Getting started with bare-board development
  2016-11-11 22:19 Getting started with bare-board development Adam Jensen
  2016-11-11 22:43 ` Maciej Sobczak
  2016-11-12  9:45 ` G.B.
@ 2016-11-12 20:59 ` Brian Drummond
  2016-11-15  1:14 ` antispam
  2016-11-15 19:34 ` Robert Eachus
  4 siblings, 0 replies; 27+ messages in thread
From: Brian Drummond @ 2016-11-12 20:59 UTC (permalink / raw)


On Fri, 11 Nov 2016 17:19:33 -0500, Adam Jensen wrote:


> 
> Also, do ARM processors make sense for safety critical systems? If not,
> would it make more sense to target a different platform from the
> beginning?

Just to address this point: TI make two ARM processor families under
the Hercules name, aimed at safety-critical applications.

They consist of dual ARM cores in lockstep with comparators (actually
one core operates 2 cycles behind the other, so that a glitch will
affect different cycles on each), so that any difference between the
CPUs' behaviour will trap.

(What the trap handler does is up to your application, I presume.)

One family is aimed at automotive, the other at industrial, with slight
differences in temperature range, on-chip peripherals, etc.

There's a low-cost LaunchPad devboard for them.

-- Brian


* Re: Getting started with bare-board development
  2016-11-12 19:15     ` artium
@ 2016-11-12 21:37       ` Adam Jensen
  0 siblings, 0 replies; 27+ messages in thread
From: Adam Jensen @ 2016-11-12 21:37 UTC (permalink / raw)


Hi! Thanks for weighing in. A faint, fuzzy vision of the situation is
beginning to coalesce.

On 11/12/2016 02:15 PM, artium@nihamkin.com wrote:
> Using a hardware emulator is not cost efficient. 

By "hardware emulator", in this context, is it safe to assume that you
mean something like a hosted hypervisor that performs hardware
virtualization (e.g., QEMU[1])?

[1]: http://wiki.qemu.org/

One scenario using that approach might be:

A) GNAT can generate ARM-ELF for the Cortex-A9 processor series and
Run-Time support is available for the Xilinx Zynq-7000[2].

[2]:
http://docs.adacore.com/gnat_ugx-docs/html/gnat_ugx/gnat_ugx/arm-elf_topics_and_tutorial.html

B) QEMU can emulate the Xilinx Cortex A9-based Zynq SoC including models
for many of its peripherals[3].

[3]: https://en.wikipedia.org/wiki/Qemu#ARM

On the surface, that seems like it could potentially be a viable
approach. More practically, that's a lot of software to validate and
models to verify; it seems like there are a lot of opportunities for
things to go wrong. Mostly, the software (and hardware) seems a bit
hokey (and tawdry).

Is that what you mean by "not cost efficient"?

> If you are developing for an microcontroller (eg BSP, HAL, drivers etc), you usually begin with an evaluation board that resembles the final design the best, and move to the custom hardware when it is ready (which will be derived from the evaluation board). This is a "board for each developer" approach[1].

Forgive my naivety but is it generally expected/required by software
developers that there will be some kind of "On-Chip Debug" or
"In-Circuit Emulation" capability in the hardware?

If the hardware has integrated instrumentation such that the behavior of
the software can be analyzed without any probe effect, I could see how
that might be simpler than the simulation/virtualization approach. On
the other hand, the software is being validated on hardware that is
running in a special, atypical (debug) mode of operation.

Is this the typical method/approach for high-assurance systems design?

> If you are developing high level code for expensive hardware, you usually encapsulate the applicative part and compile it for your PC architecture[2]. The environment will be simulated using models of real hardware pieces. 

Does that hold true for real-time software?

> For example you are developing a mission computer for an aircraft, using this approach you will need to write a simulation of the Inertial Navigation System, but you will not need to write a simulation of the ethernet chip that allows communication with said systems. You simulate communication using sockets or shared memory, simulate flash with a file operations etc.
> 
> [1] For example, Texas Instruments stopped supporting simulators in their flagship IDE (http://processors.wiki.ti.com/index.php/CCSv6_Changes#Simulation)
> [2] That is where using Ada helps a lot. It allows moving between hardwares with relative ease. 

Having a deeply ingrained hardware designer's mindset, I've put together
a tool-chain (and an implied methodology) for a fairly narrow class of
real-time embedded system. Maybe some readers of the news group can
comment on the costs and/or benefits of this approach.

* Freely available chip design software (Microsemi Libero SoC): Synopsys
Synplify for synthesis; ModelSim for simulation.

http://www.microsemi.com/products/fpga-soc/design-resources/design-software/libero-soc

* A flash-based FPGA (available in radiation tolerant versions) and a
development kit ($600!) with processor accessories/peripherals.
(Realistically, I would probably shop around for a board that better
fits this application or just invest the time to design one).

www.microsemi.com/products/fpga-soc/fpga/proasic3l
http://www.microsemi.com/products/fpga-soc/radtolerant-fpgas/rt-proasic3
http://www.microsemi.com/products/fpga-soc/design-resources/dev-kits/proasic3/cortex-m1-enabled-proasic3l-development-kit#overview

* The LEON3 processor - a synthesisable VHDL model of a 32-bit processor
compliant with the SPARC V8 architecture. (Also available in a
fault-tolerant version).

http://www.gaisler.com/index.php/products/processors/leon3
http://gaisler.com/index.php/products/processors/leon3ft

* The GRLIB IP Library - an integrated set of reusable IP cores,
designed for system-on-chip (SOC) development.

http://www.gaisler.com/index.php/products/ipcores/soclibrary

* GRSIM is a simulation framework for LEON3/GRLIB SOC devices.

http://www.gaisler.com/index.php/products/simulators/grsim

* GNAT Pro for LEON3

http://www.adacore.com/gnatpro/embedded/erc32

This last one is a bit confusing. I guess the Libre edition of GNAT
doesn't include support for the LEON3. I'm not sure whether it is the
LEON3 BSP, the RTS, or the ELF target that is missing/excluded from the
FOSS version of AdaCore GNAT. (If anyone has any information or insight
into this, I would really like to hear about it.) On the other hand, it
might be super cool to write LEON3/GRLIB drivers and run-time support as
open-source SPARK.

So is this Lovecraftian tool-chain over-the-top, just completely
demented, or fairly representative of high-assurance, real-time,
embedded software engineering?



* Re: Getting started with bare-board development
  2016-11-12 16:14   ` Adam Jensen
  2016-11-12 19:15     ` artium
@ 2016-11-13  4:01     ` Jeffrey R. Carter
  2016-11-13 20:03       ` Adam Jensen
  2016-11-14 18:17     ` Simon Wright
  2 siblings, 1 reply; 27+ messages in thread
From: Jeffrey R. Carter @ 2016-11-13  4:01 UTC (permalink / raw)


On 11/12/2016 09:14 AM, Adam Jensen wrote:
>
> How is it done in embedded software engineering? (Links and/or
> references are very welcome)!

Typically embedded S/W has to interface to various H/W devices (sensors and 
actuators). Frequently such S/W is designed around the capabilities and features 
of the intended H/W. This is not a good idea. When the intended H/W changes (as 
it does frequently on the projects I've been involved in) the entire design has 
to be revised.

What I have done when designing such S/W is to 1st design the core S/W without 
regard to the capabilities and features of the intended H/W. I create the 
simplest and clearest design, and this identifies the kind of information the 
S/W needs to obtain and the kind of external actions it needs to take.

Next, for each piece of external information the S/W needs to obtain, I write a 
pkg spec for a S/W-leaning interface. This keeps the S/W simple and clear by 
providing just the kind of I/F it needs.

Then, for each intended H/W device, I write a pkg spec for a H/W-leaning 
interface. This reflects the capabilities and features of the device.

Then I write bodies for each of the S/W I/F pkgs that use the H/W I/F pkgs.

Now comes the fun part. I write an environment pkg that simulates reality, and 
write simulation bodies for the H/W I/F pkgs that read or modify that simulated 
reality. The body can do things to make its behavior realistic; for example, 
if a sensor is noisy, the body would add noise to the real value.

This lets you play with your S/W and see if it behaves reasonably.

When it's time to run the S/W on the real system, you eliminate the environment 
pkg and replace the H/W I/F bodies with ones that actually I/F with the H/W. 
Note that the only differences between the simulated and actual systems are 
those bodies.
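
Just as a bare-bones sketch of that layering (invented names; the real H/W 
I/F body would read the device's registers instead of the environment pkg):

   --  S/W-leaning I/F: just the kind of information the core S/W needs.
   package Speed_IF is
      type Speed_MPS is delta 0.01 range 0.0 .. 250.0;
      function Current_Speed return Speed_MPS;
   end Speed_IF;

   --  H/W-leaning I/F: reflects the (hypothetical) device, a 12-bit ADC.
   package Speed_Sensor_HW is
      type Raw_Reading is range 0 .. 2 ** 12 - 1;
      function Read return Raw_Reading;
   end Speed_Sensor_HW;

   --  Simulated reality, used only by the simulation bodies.
   package Environment is
      True_Speed : Float := 0.0;  --  m/s; a test driver sets this
   end Environment;

   --  S/W I/F body: the single place that converts between the 2 views.
   --  This body is the same in the simulated and the real system.
   with Speed_Sensor_HW;
   package body Speed_IF is
      MPS_Per_Count : constant := 0.05;
      function Current_Speed return Speed_MPS is
      begin
         return Speed_MPS (Float (Speed_Sensor_HW.Read) * MPS_Per_Count);
      end Current_Speed;
   end Speed_IF;

   --  SIMULATION body for the H/W I/F: derives a raw reading from the
   --  environment and adds a crude noise term. Only this unit is
   --  replaced when the S/W moves to the real system.
   with Environment;
   package body Speed_Sensor_HW is
      function Read return Raw_Reading is
         Noise : constant Float := 0.5;
      begin
         return Raw_Reading (Environment.True_Speed / 0.05 + Noise);
      end Read;
   end Speed_Sensor_HW;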

This approach has a number of benefits:

* Changing a device only affects a S/W I/F body and a H/W I/F pkg.

* Often the simplest and clearest design for the core S/W wants to access 
information or take action in a way the intended H/W doesn't support. The S/W 
I/F pkg provides a single place to convert between the 2 views, keeping the core 
S/W uncoupled. For example, in the ubiquitous cruise-control problem, the best 
approach for the core S/W might be for it to decide when it obtains the car's 
speed, but a common design for the speed sensor is something that generates an 
interrupt every time something rotates a certain amount.

* While there is usually a 1:1 correspondence between S/W and H/W I/F pkgs, 
there need not be. I've seen sensors that returned multiple, unrelated values. 
The design had multiple S/W I/F pkgs interacting with a single H/W I/F pkg.

* I've worked on projects where the whole point was to create a simulation to 
see if the approach is viable, with no idea what the H/W devices would be like 
in a real system. By using this approach, when it was decided to go ahead with a 
real system, only the H/W I/F pkgs and the S/W I/F bodies had to be rewritten.

When I present such a design, coders usually start whining about "efficiency". 
In my decades of experience, such a design has never been responsible for a 
system not meeting its timing requirements.

-- 
Jeff Carter
"Many times we're given rhymes that are quite unsingable."
Monty Python and the Holy Grail
57



* Re: Getting started with bare-board development
  2016-11-13  4:01     ` Jeffrey R. Carter
@ 2016-11-13 20:03       ` Adam Jensen
  2016-11-13 21:04         ` Jeffrey R. Carter
  0 siblings, 1 reply; 27+ messages in thread
From: Adam Jensen @ 2016-11-13 20:03 UTC (permalink / raw)


On 11/12/2016 11:01 PM, Jeffrey R. Carter wrote:
[snip]
> Then, for each intended H/W device, I write a pkg spec for a H/W-leaning
> interface. This reflects the capabilities and features of the device.
[snip]
> When it's time to run the S/W on the real system, you eliminate the
> environment pkg and replace the H/W I/F bodies with ones that actually
> I/F with the H/W. Note that the only differences between the simulated
> and actual systems are those bodies.

When writing device drivers, how do you mock the memory map of the
target hardware?

In the mocked the hardware, how is timing controlled?

When extending and mapping run-time support to the mocked hardware, how
does that fit into the run-time system for the native platform (your
workstation)?



* Re: Getting started with bare-board development
  2016-11-13 20:03       ` Adam Jensen
@ 2016-11-13 21:04         ` Jeffrey R. Carter
  2016-11-13 22:00           ` Adam Jensen
  0 siblings, 1 reply; 27+ messages in thread
From: Jeffrey R. Carter @ 2016-11-13 21:04 UTC (permalink / raw)


On 11/13/2016 01:03 PM, Adam Jensen wrote:
>
> When writing device drivers, how do you mock the memory map of the
> target hardware?
>
> In the mocked the hardware, how is timing controlled?
>
> When extending and mapping run-time support to the mocked hardware, how
> does that fit into the run-time system for the native platform (your
> workstation)?

You seem to be thinking at too low a level. There isn't any "mocked H/W", only 
mocked behavior. The H/W simulation bodies give the information or have the 
effect expected of the devices given the state of the reality modeled in the 
environment pkg, but they need have no similarity to the real bodies, and 
usually don't. The device may be memory mapped, but there's no reason for the 
simulation to be. If accessing the device takes appreciable time, that's 
usually simulated using a delay. There's usually no reason to limit these 
parts of the S/W to the constraints of the target run time.
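
For instance (sketch only, made-up device), a simulation body for a slow 
configuration memory might be no more than:

   --  H/W-leaning spec for a (hypothetical) serial EEPROM.
   package Config_Memory_HW is
      type Byte_Address is range 0 .. 255;
      type Byte is mod 2 ** 8;
      function Read_Byte (Address : Byte_Address) return Byte;
   end Config_Memory_HW;

   --  SIMULATION body: models only the observable behavior -- the value
   --  returned and the access latency. The real body would talk to the
   --  part over I2C; nothing here is memory mapped.
   package body Config_Memory_HW is
      Image : array (Byte_Address) of Byte := (others => 16#FF#);

      function Read_Byte (Address : Byte_Address) return Byte is
      begin
         delay 0.005;  --  stand-in for the few ms the real device needs
         return Image (Address);
      end Read_Byte;
   end Config_Memory_HW;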

-- 
Jeff Carter
"Beyond 100,000 lines of code you
should probably be coding in Ada."
P. J. Plauger
26


* Re: Getting started with bare-board development
  2016-11-13 21:04         ` Jeffrey R. Carter
@ 2016-11-13 22:00           ` Adam Jensen
  2016-11-14  8:11             ` Paul Rubin
                               ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Adam Jensen @ 2016-11-13 22:00 UTC (permalink / raw)


On 11/13/2016 04:04 PM, Jeffrey R. Carter wrote:
> You seem to be thinking at too low a level. There isn't any "mocked
> H/W", only mocked behavior. The H/W simulation bodies give the
> information or have the effect expected of the devices given the state
> of the reality modeled in the environment pkg, but they need have no
> similarity to the real bodies, and usually don't. The device may be
> memory mapped, but there's no reason for the simulation to be. If access
> the device takes appreciable time, that's usually simulated using a
> delay. There's usually no reason to limit these parts of the S/W to the
> constraints of the target run time.

I suppose software developers might be accustomed to ignoring time, the
Turing machine/model-of-computation having no explicit representation of
time. But you are correct, I very much retain the perspective of an
electrical engineer and I most definitely think about the machine as
something that exists in time.

Don't the Real-Time Annex-related parts of the run-time support system
expect timing information from the hardware? (I am almost entirely
guessing about this; I haven't yet finished reading the basic
introductory materials on real-time programming.)

It would probably help a lot to see a very basic little ("Hello,
Real-Time World") example of [your development approach to] real-time
software with a mocked hardware interface that can be executed directly
on a workstation. I suppose the hardware could be as simple as a clock
and maybe a counter or two. Maybe there could be some interrupts and two
or three tasks that do something very simple. And maybe all of this
could take place under the Ravenscar profile. Would that be a lot of
effort to write and post?



* Re: Getting started with bare-board development
  2016-11-13 22:00           ` Adam Jensen
@ 2016-11-14  8:11             ` Paul Rubin
  2016-11-14 23:03               ` Adam Jensen
  2016-11-14  9:04             ` Dmitry A. Kazakov
  2016-11-15  0:06             ` Jeffrey R. Carter
  2 siblings, 1 reply; 27+ messages in thread
From: Paul Rubin @ 2016-11-14  8:11 UTC (permalink / raw)


Adam Jensen <hanzer@riseup.net> writes:
> It would probably help a lot to see a very basic little ("Hello,
> Real-Time World") example of [your development approach to] real-time
> software with a mocked hardware interface that can be executed directly
> on a workstation. I suppose the hardware could be as simple as a clock
> and maybe a counter or two.

If it were me, I'd set up the test harness with some kind of event queue
that would allow scheduling i/o completions, interrupts etc. to happen
at specified timestamps in the future.  Then when you run the test, only
the time ticks where something happens would get executed, without the
idle ticks causing any delays in the test.  The idea is you want your
tests to be fast even if the system being simulated is much slower.
That lets you run the tests more frequently, which is a good thing.
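
Very roughly, and only as a sketch with made-up names, the core of such a
harness could look like this in Ada:

   --  Simulated-time event queue: events are kept ordered by timestamp,
   --  and running the test just pops the next event and jumps the clock
   --  forward, so idle time costs nothing.
   with Ada.Containers.Ordered_Maps;
   package Sim_Clock is
      type Sim_Time is delta 0.000_001 range 0.0 .. 1.0E9;  --  seconds
      type Event is access procedure;  --  i/o completion, interrupt, ...

      procedure Schedule (At_Time : Sim_Time; Action : Event);
      procedure Run_Until (Stop : Sim_Time);
      function Now return Sim_Time;
   end Sim_Clock;

   package body Sim_Clock is
      --  One event per timestamp in this sketch; a real harness would
      --  allow several (e.g. a list per tick).
      package Queues is new Ada.Containers.Ordered_Maps (Sim_Time, Event);
      Pending : Queues.Map;
      Current : Sim_Time := 0.0;

      procedure Schedule (At_Time : Sim_Time; Action : Event) is
      begin
         Pending.Insert (At_Time, Action);
      end Schedule;

      procedure Run_Until (Stop : Sim_Time) is
      begin
         while not Pending.Is_Empty and then Pending.First_Key <= Stop loop
            Current := Pending.First_Key;  --  jump, don't wait
            Pending.First_Element.all;     --  fire the event
            Pending.Delete_First;
         end loop;
         Current := Stop;
      end Run_Until;

      function Now return Sim_Time is (Current);
   end Sim_Clock;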

> Maybe there could be some interrupts and two or three tasks that do
> something very simple. And maybe all of this could take place under
> the Ravenscar profile. Would that be a lot of effort to write and
> post?

To me that sounds like a lot of work for the purpose of a news post.
You might look at some existing test framework (maybe not in Ada) and
some programs that use it.



* Re: Getting started with bare-board development
  2016-11-13 22:00           ` Adam Jensen
  2016-11-14  8:11             ` Paul Rubin
@ 2016-11-14  9:04             ` Dmitry A. Kazakov
  2016-11-14 23:35               ` Adam Jensen
  2016-11-15  0:06             ` Jeffrey R. Carter
  2 siblings, 1 reply; 27+ messages in thread
From: Dmitry A. Kazakov @ 2016-11-14  9:04 UTC (permalink / raw)


On 13/11/2016 23:00, Adam Jensen wrote:

> It would probably help a lot to see a very basic little ("Hello,
> Real-Time World") example of [your development approach to] real-time
> software with a mocked hardware interface that can be executed directly
> on a workstation.  I suppose the hardware could be as simple as a clock
> and maybe a counter or two. Maybe there could be some interrupts and two
> or three tasks that do something very simple. And maybe all of this
> could take place under the Ravenscar profile. Would that be a lot of
> effort to write and post?

I think you are confusing things a bit. If you have the computing 
hardware mocked, you are doing simulation and the time is simulation 
time. If the peripheral hardware is real or partially real, it is 
hardware-in-the-loop simulation (HIL). HIL is usually real-time, because 
the real hardware runs in real time. What people are saying is that HIL 
is a much more cost-efficient development platform than some embedded 
board. Furthermore, Ada is ideal for HIL because Ada software is 
portable. So you can develop almost everything on the PC and test almost 
everything in the loop. Then, if some hardware (other than the board 
itself) is too expensive or difficult to use, it can be simulated 
(mocked) in turn. This is especially important when you want to test 
catastrophic or improbable scenarios.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Getting started with bare-board development
  2016-11-12 16:14   ` Adam Jensen
  2016-11-12 19:15     ` artium
  2016-11-13  4:01     ` Jeffrey R. Carter
@ 2016-11-14 18:17     ` Simon Wright
  2016-11-14 22:52       ` Adam Jensen
  2 siblings, 1 reply; 27+ messages in thread
From: Simon Wright @ 2016-11-14 18:17 UTC (permalink / raw)


Adam Jensen <hanzer@riseup.net> writes:

> * How does one develop and verify a Board Support Package (device
> drivers, bootloader, etc.)?

The Cortex-M4 boards developed for e.g. PixRacer[1], based on the
STM32F427, support DFU[2] and JTAG.

Starting from AdaCore's STM32F429 offering, only a very few packages
need to be modified for the BSP: setting up the board's clocks to use a
24 MHz crystal rather than 8 MHz, and terminal i/o via UART7 rather than
USART1.

[1] https://pixhawk.org/modules/pixracer
[2] https://en.wikipedia.org/wiki/USB#Device_Firmware_Upgrade

> * Do the various typical embedded platform profiles (e.g., Ravenscar)
> require any Run-Time System implementation or extension?

Yes, indeed! You can see AdaCore's implementations at [3].

[3] https://github.com/AdaCore/embedded-runtimes

> * Is the BSP and RTS the kind of software that might/should be
> implemented in Spark?

AdaCore have certainly added pre- and post-conditions on a couple of the
tasking RTS components. My feeling is that it would be quite hard to
retrofit SPARK to their RTS. That feeling may be coloured by my attempts
to use SPARK to prove exception freedom for device drivers - but things
like volatility, pointers and time would be much better addressed in a
context that had a budget for training and support.



* Re: Getting started with bare-board development
  2016-11-14 18:17     ` Simon Wright
@ 2016-11-14 22:52       ` Adam Jensen
  0 siblings, 0 replies; 27+ messages in thread
From: Adam Jensen @ 2016-11-14 22:52 UTC (permalink / raw)


Hi Simon, thanks for your input and the links.

On 11/14/2016 01:17 PM, Simon Wright wrote:
[snip]
> volatility, pointers and time would be much better addressed in a
> context that had budget for training and support.

Hey, that's a fun idea. Maybe rather than a budget for a few people to
toil at something that will have a short life, how about a more
sustainable system? First-pass desiderata for such a system:

* A method for harnessing human effort (users of the system) to
generate, maintain, and refine documentation such as:

* LRM, Rationale (writing style needs to be edited), tutorials, tech
notes, reference material, question/answer system (like Stack Exchange),
etc.

* A training system that generates meaningful programmer certifications.

* An integrated contractor system for software development projects.

* Various forms of integrated information/software review and
certification to maintain system integrity and qualify system output.



* Re: Getting started with bare-board development
  2016-11-14  8:11             ` Paul Rubin
@ 2016-11-14 23:03               ` Adam Jensen
  0 siblings, 0 replies; 27+ messages in thread
From: Adam Jensen @ 2016-11-14 23:03 UTC (permalink / raw)


On 11/14/2016 03:11 AM, Paul Rubin wrote:
> If it were me, I'd set up the test harness with some kind of event queue
> that would allow scheduling i/o completions, interrupts etc. to happen
> at specified timestamps in the future.  Then when you run the test, only
> the time ticks where something happens would get executed, without the
> idle ticks causing any delays in the test.  The idea is you want your
> tests to be fast even if the system being simulated is much slower.
> That lets you run the tests more frequently, which is a good thing.

Hi Paul, thanks for the hints. I'll probably need to continue my myopic
crawl through the LRM and the [(tragically) poorly written] Ada
programming textbooks before I can understand the implications and
subtleties of what you're suggesting. I think the consensus is that I
need to develop more information and understanding.



* Re: Getting started with bare-board development
  2016-11-14  9:04             ` Dmitry A. Kazakov
@ 2016-11-14 23:35               ` Adam Jensen
  2016-11-15  8:38                 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 27+ messages in thread
From: Adam Jensen @ 2016-11-14 23:35 UTC (permalink / raw)


On 11/14/2016 04:04 AM, Dmitry A. Kazakov wrote:
> I think you are confusing things a bit. If you have computing hardware
> mocked you are doing simulation and the time is simulation time. If the
> peripheral hardware is real or partially real it is hardware-in-the-loop
> simulation (HIL). HIL is usually real-time because. What people are
> saying is that HIL is much more cost efficient developing platform than
> some embedded board. Furthermore Ada is ideal for HIL because Ada
> software is portable. So you can develop almost everything on the PC and
> test almost everything in the loop. Then if some hardware (except the
> board itself) is too expensive or difficult to use, it can be simulated
> (mocked) in turn. Which is especially important when you want to test
> some catastrophic or improbable scenarios.

Yeah, I can imagine this stack (basic):
 workstation's hardware - operating system - run-time system - program

And this stack:
 embedded hardware - embedded run-time system - RT-program

And this stack:
 RT-program
 embedded run-time system
 emulator (Qemu)
 operating system
 workstation

But this setup is confusing:
 RT-program
 embedded target run-time system
 simulated hardware
 native run-time system
 operating system
 workstation




* Re: Getting started with bare-board development
  2016-11-13 22:00           ` Adam Jensen
  2016-11-14  8:11             ` Paul Rubin
  2016-11-14  9:04             ` Dmitry A. Kazakov
@ 2016-11-15  0:06             ` Jeffrey R. Carter
  2 siblings, 0 replies; 27+ messages in thread
From: Jeffrey R. Carter @ 2016-11-15  0:06 UTC (permalink / raw)


On 11/13/2016 03:00 PM, Adam Jensen wrote:
>
> It would probably help a lot to see a very basic little ("Hello,
> Real-Time World") example of [your development approach to] real-time
> software with a mocked hardware interface that can be executed directly
> on a workstation. I suppose the hardware could be as simple as a clock
> and maybe a counter or two. Maybe there could be some interrupts and two
> or three tasks that do something very simple. And maybe all of this
> could take place under the Ravenscar profile. Would that be a lot of
> effort to write and post?

What you're asking for sounds like a small project in its own right; hardly 
suitable for a quick response on c.l.a. I'll see if I can come up with 
something, but don't hold your breath.

-- 
Jeff Carter
"Damn it, Jim, I'm an actor, not a doctor."
124


* Re: Getting started with bare-board development
  2016-11-11 22:19 Getting started with bare-board development Adam Jensen
                   ` (2 preceding siblings ...)
  2016-11-12 20:59 ` Brian Drummond
@ 2016-11-15  1:14 ` antispam
  2016-11-15  4:20   ` Adam Jensen
  2016-11-15 19:34 ` Robert Eachus
  4 siblings, 1 reply; 27+ messages in thread
From: antispam @ 2016-11-15  1:14 UTC (permalink / raw)


Adam Jensen <hanzer@riseup.net> wrote:
> Hi, I've recently began to have a serious look at Ada-2012 and
> Spark-2014, and using GNAT for the development of real-time software in
> embedded systems.
<snip>
> Adacore has an example for the STM32F4-Discovery[3], and more elaborate
> documentation is available for the Nucleo[4], but both of those kits
> seem to have very limited memory. How much can be done with that?

Just to address this point: real-time embedded systems are frequently
done with single-chip microcontrollers.  A microcontroller contains
combinatorial logic (processor core, digital peripherals), SRAM,
flash and analog parts.  Having all this on a single chip brings
substantial advantages.  But the drawback is that the various parts
have conflicting manufacturing requirements, so no part is as good
as a "pure" chip could be.  In particular, microcontrollers at the
lower end may have as little as 32 bytes of RAM, and at the high end
they rarely go into the megabyte range.  Basically, if you need more
RAM you need to go to a multi-chip design with a separate memory
chip.  Then you can easily get, say, 256 MB, but the latency of
external memory is much larger than that of internal SRAM.  So while
you may have plenty of external memory, a program using it will run
slower.  Worse, high latency means that it is hard to give assurances
of real-time behaviour.

The STM32F4-Discovery contains a relatively large microcontroller.
The Nucleo boards come in several versions containing both
middle-sized and large microcontrollers.

Note that flash is typically much larger than RAM, so in fact you
can have quite a lot of functionality inside a single-chip
microcontroller.  When talking about critical systems, I would say
that modern microcontrollers give you quite a lot of space where
bugs can hide.  In other words, to limit the effort spent on
validating code you may wish to limit the size of your system so
that in effect it fits in a small device.

-- 
                              Waldek Hebisch


* Re: Getting started with bare-board development
  2016-11-15  1:14 ` antispam
@ 2016-11-15  4:20   ` Adam Jensen
  2016-11-19 22:46     ` antispam
  0 siblings, 1 reply; 27+ messages in thread
From: Adam Jensen @ 2016-11-15  4:20 UTC (permalink / raw)


On 11/14/2016 08:14 PM, antispam@math.uni.wroc.pl wrote:
> Just to address this point: real time embedded systems are frequently
> done with single chip microcontrollers.  Microcontroller contains
> combinatorial logic (processor core, digital peripherials),

https://en.wikipedia.org/wiki/Combinational_logic
https://en.wikipedia.org/wiki/Combinatory_logic

> SRAM, flash and analog parts.  Having all this on single chip
> brings substantial advantages.  But the drawback is that
> various parts have conflicting manufacturing requirements.
> So no part is as good as "pure" chip could be.  In particular,

I'm inclined towards an approach that involves an FPGA such that signal
and control processing could take place in custom logic. Given that, it
might make sense [for me and my applications] to use a soft processor
core like the LEON3. With this approach, the peripheral components,
being implemented in the FPGA, could be selected and tuned specifically
for the application.

> microcontroller at lower end may have as little as 32 bytes
> of RAM and at high end rarely go into megabyte range.
> Basically, if you need more RAM you need to go into multi-chip
> design with separate memory chip.  Then you can easily get
> say 256 MB, but latency of external memory is much larger
> than internal SRAM.  So while you may have plenty of
> external memory program using it will run slower.  Worse,
> high latency means that it is hard to give assurance
> of real time behaviour.

Slow doesn't mean less deterministic, right?

> STM32F4-Discovery contains relatively large microcontoller.
> Nucleo boards have several versions containg both middle
> sized and large microcontollers.
> 
> Note that flash is typically much larger than RAM, so
> in fact you can have quite a lot functionalty inside
> a single chip microcontroller.  When talking about
> critical system I would say that modern microcontollers
> give you quite a lot of space where bugs can hide.
> In other words, to limit effort spent on validating
> code you may wish to limit size of your system so
> that in effect it fits in small device.

Do you have any estimates and/or examples of how much flash
and RAM are required/used for various run-time profiles and
programs of varying complexity?




* Re: Getting started with bare-board development
  2016-11-14 23:35               ` Adam Jensen
@ 2016-11-15  8:38                 ` Dmitry A. Kazakov
  2016-11-15  9:58                   ` Niklas Holsti
  2016-11-15 17:32                   ` Adam Jensen
  0 siblings, 2 replies; 27+ messages in thread
From: Dmitry A. Kazakov @ 2016-11-15  8:38 UTC (permalink / raw)


On 15/11/2016 00:35, Adam Jensen wrote:
> On 11/14/2016 04:04 AM, Dmitry A. Kazakov wrote:
>> I think you are confusing things a bit. If you have computing hardware
>> mocked you are doing simulation and the time is simulation time. If the
>> peripheral hardware is real or partially real it is hardware-in-the-loop
>> simulation (HIL). HIL is usually real-time because. What people are
>> saying is that HIL is much more cost efficient developing platform than
>> some embedded board. Furthermore Ada is ideal for HIL because Ada
>> software is portable. So you can develop almost everything on the PC and
>> test almost everything in the loop. Then if some hardware (except the
>> board itself) is too expensive or difficult to use, it can be simulated
>> (mocked) in turn. Which is especially important when you want to test
>> some catastrophic or improbable scenarios.
>
> Yeah, I can imagine this stack (basic):
>  workstation's hardware - operating system - run-time system - program
>
> And this stack:
>  embedded hardware - embedded run-time system - RT-program
>
> And this stack:
>  RT-program
>  embedded run-time system
>  emulator (Qemu)
>  operating system
>  workstation
>
> But this setup is confusing:
>  RT-program
>  embedded target run-time system
>  simulated hardware
>  native run-time system
>  operating system
>  workstation

Typical development process stages, at least in my area, are like this:

1. Workstation    Simulation time
    Application
    [HAL]
    Mock actuators/sensors

2. Workstation    Hardware-in-the-loop, real-time
    Application
    [HAL]
    Real/Mock actuators/sensors

3. Embedded       Target platform
    Application
    [HAL]
    Real/Mock actuators/sensors

Most of the development is done in #1 or #2. Most of the testing is done 
in #2. #3 is limited to final integration tests.

QEMU et al. are not used, because it makes no sense to emulate the 
computational hardware when you have Ada, unless you are an OS 
developer. As long as the application is really an application, you 
don't need that sort of emulator.

Whatever OS/platform-dependent parts require testing under an emulator, 
they are quite minuscule or non-existent if an OS is used, which is also 
the reason why bare-board targets should be avoided where possible.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Getting started with bare-board development
  2016-11-15  8:38                 ` Dmitry A. Kazakov
@ 2016-11-15  9:58                   ` Niklas Holsti
  2016-11-15 17:32                   ` Adam Jensen
  1 sibling, 0 replies; 27+ messages in thread
From: Niklas Holsti @ 2016-11-15  9:58 UTC (permalink / raw)


On 16-11-15 10:38 , Dmitry A. Kazakov wrote:
>
> Typical developing process stages, at least in my area, is like this
>
> 1. Workstation    Simulation time
>    Application
>    [HAL]
>    Mock actuators/sensors
>
> 2. Workstation    Hardware-in-the-loop, real-time
>    Application
>    [HAL]
>    Real/Mock actuators/sensors
>
> 3. Embedded       Target platform
>    Application
>    [HAL]
>    Real/Mock actuators/sensors
>
> Most of developing is done in #1 or #2. Most of testing in #2. #3 is
> limited to final integration tests.

In my domain (subcontractor for embedded SW in spacecraft) we typically 
use only one of the stages 1 and 2, not both. But otherwise our work is 
very similar.

> QEMU et al is not used, because it makes no sense to emulate
> computational hardware when you have Ada, unless you are an OS
> developer.

Or unless you worry about compiler bugs being different in the native 
and cross compilers, or about platform-dependencies introduced by 
mistake in the Ada application. Endian-dependency is easily introduced 
by mistake if the application does a lot of communication with HW. Our 
targets are usually big-endian SPARCs, but workstations are 
little-endian PCs.
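
As an illustration (sketch only, invented names), unpacking a 16-bit field
from a raw buffer byte by byte is endian-neutral, whereas overlaying a
record on the buffer via Unchecked_Conversion is not:

   with Interfaces; use Interfaces;

   package Telemetry_Unpack is
      type Byte_Array is array (Positive range <>) of Unsigned_8;
      --  Extract a big-endian 16-bit word starting at Buffer (First).
      function Get_U16_BE
        (Buffer : Byte_Array; First : Positive) return Unsigned_16;
   end Telemetry_Unpack;

   package body Telemetry_Unpack is
      function Get_U16_BE
        (Buffer : Byte_Array; First : Positive) return Unsigned_16 is
      begin
         --  Explicit byte arithmetic: the same result on little-endian
         --  PCs and big-endian SPARC targets.
         return Shift_Left (Unsigned_16 (Buffer (First)), 8)
                or Unsigned_16 (Buffer (First + 1));
      end Get_U16_BE;
   end Telemetry_Unpack;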

Typically we do the final unit-testing runs both on workstations and on 
a target emulator, to settle such worries.

> Whatever OS/platform-dependent parts requiring test under an emulator,
> they are quite minuscule or non-existent if an OS is used.

We rarely (well, never) use an OS in embedded systems. Ravenscar or 
bare-board (zero runtime) is the norm for us. There is, though, a trend 
to have application-independent but domain-specific "execution platform" 
SW components, which are like domain-specific OSs and can support various 
applications in this domain.

-- 
Niklas Holsti
Tidorum Ltd, working for SSF
niklas holsti tidorum fi
       .      @       .



* Re: Getting started with bare-board development
  2016-11-15  8:38                 ` Dmitry A. Kazakov
  2016-11-15  9:58                   ` Niklas Holsti
@ 2016-11-15 17:32                   ` Adam Jensen
  2016-11-16  9:30                     ` Dmitry A. Kazakov
  1 sibling, 1 reply; 27+ messages in thread
From: Adam Jensen @ 2016-11-15 17:32 UTC (permalink / raw)


On 11/15/2016 03:38 AM, Dmitry A. Kazakov wrote:
> Typical developing process stages, at least in my area, is like this
> 
> 1. Workstation    Simulation time
>    Application
>    [HAL]
>    Mock actuators/sensors
> 
> 2. Workstation    Hardware-in-the-loop, real-time
>    Application
>    [HAL]
>    Real/Mock actuators/sensors
> 
> 3. Embedded       Target platform
>    Application
>    [HAL]
>    Real/Mock actuators/sensors
> 
> Most of developing is done in #1 or #2. Most of testing in #2. #3 is
> limited to final integration tests.

I think I am beginning to understand this perspective. (Thanks for your
patience).

> QEMU et al is not used, because it makes no sense to emulate
> computational hardware when you have Ada, unless you are an OS
> developer. So long the application is really an application you don't
> need that sort of emulators.

"unless you are an OS developer" might be a key issue here. I have been
thinking very much about device driver and run time kernel development
for custom hardware. Ideally, what I am looking for (or trying to sort
out) is a development methodology and tool-chain that fits into and
extends the hardware development process.

It still seems to me that the ability to compartmentalize the
emulated/simulated/HIL environment from the workstation's environment
would be helpful, if not essential, at various stages of development and
verification.

Does this make sense or is my view still somewhat askew?

> Whatever OS/platform-dependent parts requiring test under an emulator,
> they are quite minuscule or non-existent if an OS is used. Which is also
> the reason why bare-board targets should be avoided where possible.

I can appreciate how it might be desirable for the workstation and the
embedded target to provide the same OS/RTS environmental abstractions
(for a software application developer's convenience), but the class of
embedded software that I have in mind probably needs to have deep
integration with the hardware, and the hardware definitely will have
very deep traction with reality.



* Re: Getting started with bare-board development
  2016-11-11 22:19 Getting started with bare-board development Adam Jensen
                   ` (3 preceding siblings ...)
  2016-11-15  1:14 ` antispam
@ 2016-11-15 19:34 ` Robert Eachus
  2016-11-15 22:07   ` Adam Jensen
  4 siblings, 1 reply; 27+ messages in thread
From: Robert Eachus @ 2016-11-15 19:34 UTC (permalink / raw)


On Friday, November 11, 2016 at 5:19:35 PM UTC-5, Adam Jensen wrote:
> Hi, I've recently began to have a serious look at Ada-2012 and
> Spark-2014, and using GNAT for the development of real-time software in
> embedded systems. What is a good way to get started?
... 
> Also, do ARM processors make sense for safety critical systems? If not,
> would it make more sense to target a different platform from the beginning?

Just a couple of reminders that you may want to put on your screen saver, or a plaque above your desk:

Make it run, then make it right, then make it fast [Kent Beck]

Premature optimization is the root of all evil [Donald Knuth]

Moving from hardware to software, or mixing them together, it is sometimes hard to remember this key to software development. With Ada you are best off pulling any interface or (software) algorithm into a package.  Sometimes a package you create this way will be implemented by interfacing to a library package, from the Ada RM, the device manufacturer, or a code repository.  Let the compiler do the hard work of eliminating unused code and call-throughs, and of inlining short procedures and functions.  You will be amazed at the small ratio between lines of source code (SLOC) and the size in bytes of the resulting hardware module.  (Don't forget to strip out the debugging support, enumeration literals, etc., before measuring. ;-)



* Re: Getting started with bare-board development
  2016-11-15 19:34 ` Robert Eachus
@ 2016-11-15 22:07   ` Adam Jensen
  0 siblings, 0 replies; 27+ messages in thread
From: Adam Jensen @ 2016-11-15 22:07 UTC (permalink / raw)


On 11/15/2016 02:34 PM, Robert Eachus wrote:
> Just a couple of reminders that you may want to put on your screen saver, or a plaque above your desk:
> 
> Make it run, then make it right, then make it fast [Kent Beck]
> 
> Premature optimization is the root of all evil [Donald Knuth]
> 
> Moving from hardware to software, or mixing them together, it is sometimes hard to remember this key to software development. With Ada you are best off pulling any interface or (software) algorithm into a package.  Sometimes a package you create this way will be implemented by interfacing to a library package, from the Ada RM, the device manufacturer, or a code repository.  Let the compiler do the hard work of eliminating unused code, call throughs, and inlining a short procedures and functions. You will be amazed at the small ratio between lines of source code (SLOC) and the size in bytes of the resulting hardware module.  (Don't forget to strip out the debugging support, enumeration literals, etc., before measuring. ;-)

I am well steeped in the Tao of Unix[1..4], but I don't think I would
apply a bottom-up approach to high assurance system development if I
didn't have to [5]; and for the system I have in mind, I don't have to
[6] <smirk>.

[1]: http://recycle.lbl.gov/~ldoolitt/unix.tao.txt
[2]: https://gist.github.com/wmayner/d3a0ebf059982abbe3ad
[3]: http://catb.org/~esr/writings/taoup/
[4]: http://huffman.sourceforge.net/tao/tao-of-programming.html
[5]: http://recklessabandonlabs.com/img/logo.png
[6]: https://youtu.be/gQxke0REFMA



* Re: Getting started with bare-board development
  2016-11-15 17:32                   ` Adam Jensen
@ 2016-11-16  9:30                     ` Dmitry A. Kazakov
  0 siblings, 0 replies; 27+ messages in thread
From: Dmitry A. Kazakov @ 2016-11-16  9:30 UTC (permalink / raw)


On 2016-11-15 18:32, Adam Jensen wrote:

> It still seems to me that the ability to compartmentalize the
> emulated/simulated/HIL environment from the workstation's environment
> would be helpful, if not essential, at various stages of development and
> verification.
>
> Does this make sense or is my view still somewhat askew?

Of course it does. We use GNAT project files for this with scenario 
variables controlling the choice of the tool-chain (e.g. cross-compiler) 
and source code directories (to handle target dependencies). It works 
pretty well.
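
A stripped-down sketch of such a project file (names, paths and runtime 
string invented; attribute details depend on the tool-chain version):

   project Controller is

      type Platform_Type is ("native", "leon3");
      Platform : Platform_Type := external ("PLATFORM", "native");

      --  Common code plus one directory of target-specific units
      for Source_Dirs use ("src", "src-" & Platform);
      for Object_Dir use "obj-" & Platform;
      for Main use ("controller_main.adb");

      case Platform is
         when "native" =>
            null;  --  host compiler, default runtime
         when "leon3" =>
            for Target use "sparc-elf";              --  cross tool-chain
            for Runtime ("Ada") use "ravenscar-sfp"; --  embedded runtime
      end case;

   end Controller;

Building for the board is then just a matter of gprbuild -XPLATFORM=leon3.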

>> Whatever OS/platform-dependent parts requiring test under an emulator,
>> they are quite minuscule or non-existent if an OS is used. Which is also
>> the reason why bare-board targets should be avoided where possible.
>
> I can appreciate how it might be desirable for the workstation and the
> embedded target to provide the same OS/RTS environmental abstractions
> (for a software application developer's convenience), but the class of
> embedded software that I have in mind probably needs to have deep
> integration with the hardware, and the hardware definitely will have
> very deep traction with reality.

AdaCore did a good job abstracting basic hardware interfaces missing 
from the standard library: I mean sockets, basic OS I/O, COM ports.

Most other things are either:

1. hardware-specific. These are a part of the development itself, so we 
would likely have to mock/simulate them anyway.

2. domain-specific. These are abstracted by other means, by a middleware 
in my case. So in the end they don't hinder switching from the 
workstation to the target and back.

In short, it is not as horrific or complicated as it appears.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Getting started with bare-board development
  2016-11-15  4:20   ` Adam Jensen
@ 2016-11-19 22:46     ` antispam
  0 siblings, 0 replies; 27+ messages in thread
From: antispam @ 2016-11-19 22:46 UTC (permalink / raw)


Adam Jensen <hanzer@riseup.net> wrote:
> On 11/14/2016 08:14 PM, antispam@math.uni.wroc.pl wrote:
> > Just to address this point: real time embedded systems are frequently
> > done with single chip microcontrollers.  Microcontroller contains
> > combinatorial logic (processor core, digital peripherials),
> 
> https://en.wikipedia.org/wiki/Combinational_logic
> https://en.wikipedia.org/wiki/Combinatory_logic
> 
> > SRAM, flash and analog parts.  Having all this on single chip
> > brings substantial advantages.  But the drawback is that
> > various parts have conflicting manufacturing requirements.
> > So no part is as good as "pure" chip could be.  In particular,
> 
> I'm inclined towards an approach that involves an FPGA such that signal
> and control processing could take place in custom logic. Given that, it
> might make sense [for me and my applications] to use a soft processor
> core like the LEON3. With this approach, the peripheral components,
> being implemented in the FPGA, could be selected and tuned specifically
> for the application.

Yes, that is a valid approach.  But there are also limitations.
IIUC FPGAs have no analog parts.  Frequently (always???) the flash
is external.  Prices seem to be high.  And soft processors
seem to have rather unimpressive parameters -- they are good
if you have a modest need for processor power and have some
free space in the FPGA that you need anyway for other reasons.  But
if all functions could be done by an existing hard processor,
then a soft processor just adds cost.

> > microcontroller at lower end may have as little as 32 bytes
> > of RAM and at high end rarely go into megabyte range.
> > Basically, if you need more RAM you need to go into multi-chip
> > design with separate memory chip.  Then you can easily get
> > say 256 MB, but latency of external memory is much larger
> > than internal SRAM.  So while you may have plenty of
> > external memory program using it will run slower.  Worse,
> > high latency means that it is hard to give assurance
> > of real time behaviour.
> 
> Slow doesn't mean less deterministic, right?

Well, the smallest and slowest MCUs tend to be the most
deterministic ones.  But common DRAM timing means that you
need a processor with a clock significantly faster than the
DRAM latency.  So usually DRAM is used together with caches,
and determinism is lost: mostly access is quite fast
but sometimes it gets much slower.

> > STM32F4-Discovery contains relatively large microcontoller.
> > Nucleo boards have several versions containg both middle
> > sized and large microcontollers.
> > 
> > Note that flash is typically much larger than RAM, so
> > in fact you can have quite a lot functionalty inside
> > a single chip microcontroller.  When talking about
> > critical system I would say that modern microcontollers
> > give you quite a lot of space where bugs can hide.
> > In other words, to limit effort spent on validating
> > code you may wish to limit size of your system so
> > that in effect it fits in small device.
> 
> Do you have any estimates and/or examples of how much flash
> and RAM are required/used for various run-time profiles and
> programs of varying complexity?

ATM I have examples in C.  On an STM Cortex-M0 (STM32F030) a
"do nothing" bare-bones program is about 392 bytes of flash and
uses a single word (4 bytes) of stack (and no RAM otherwise) --
that basically measures the fixed overhead paid by all programs.
A blinking-LED program uses 552 bytes of flash (and no RAM beside
the stack) when using the default clock frequency, 1028 bytes of
flash and 8 bytes of RAM when using a library to configure the
processor clock to 48 MHz, and 684 bytes of flash when the clock
is configured via port access.  With an extra 100 bytes of flash
one gets various simple gadgets that flash LEDs in response to
button presses.  One needs about 1000 bytes of flash to handle a
few-digit LED display using a funky serial protocol for
communication (+ 1000 bytes of common overhead/setup).  About
400 bytes for interrupt-driven serial transmission.  About 1000
bytes to set up and use the ADC.  About 650 bytes to drive a
monochrome graphic LCD via a polling SPI interface, about 900 for
the same using a DMA channel (that uses about 149 bytes of RAM,
including an 84-byte buffer for the DMA transfer).  PetitFS (the
FAT filesystem used on SD cards) is about 1500 bytes of flash.
RAM usage of PetitFS is normally dominated by two 512-byte sector
buffers.  TinyOS (tasks with a preemptive scheduler, queues and
semaphores -- IIUC this is more or less what Ravenscar offers) is
slightly less than 8 kB of flash.  The libmaple library (an
Arduino-style library which contains drivers for most devices in
the STM32F103) is about 20 kB.  Several devices (like a fast
battery charger with modes for various kinds of batteries and a
lot of user settings) use the STM32F103C8, which is a 72 MHz
Cortex-M3 with 20 kB of RAM and 64 kB of flash.  IIRC, to send
and receive an Ethernet packet via an external Ethernet chip one
needs about 4 kB of code.  To implement a "full" TCP/IP stack one
needs about 80 kB of code (and probably at least 40 kB of RAM for
buffering).  If you want to add a fancy GUI to an embedded system
you may need a copy of the screen in RAM, which will take
something between tens of kilobytes and megabytes.  If you are
building a sampling oscilloscope, then you need a sample buffer --
while 10-20 kB may work, you can offer extra features with larger
buffers (say hundreds of megabytes or more).

Much depends on the problem and the strategy used for the
solution.  For example, for simple messaging a 16-byte serial
buffer may be enough.  OTOH, to transfer large amounts of data
concurrently with higher-priority tasks one may want bigger
buffers, say 2 kB.  If you have 8 serial ports and allocate a
2 kB buffer for each serial port (which a naive serial library
may do) you get 16 kB, which may be a large portion of the
available RAM (such a thing happened on an MCU with 32 kB of RAM
and 8 serial ports).  On PCs there is a tendency to use very
large buffers to minimize the cost of context switches.  MCUs
tend to have smaller contexts and fast context switches, so
buffers may be smaller.  Also, control messages are frequently
small.

-- 
                              Waldek Hebisch


Thread overview: 27+ messages
2016-11-11 22:19 Getting started with bare-board development Adam Jensen
2016-11-11 22:43 ` Maciej Sobczak
2016-11-12  9:45 ` G.B.
2016-11-12 16:14   ` Adam Jensen
2016-11-12 19:15     ` artium
2016-11-12 21:37       ` Adam Jensen
2016-11-13  4:01     ` Jeffrey R. Carter
2016-11-13 20:03       ` Adam Jensen
2016-11-13 21:04         ` Jeffrey R. Carter
2016-11-13 22:00           ` Adam Jensen
2016-11-14  8:11             ` Paul Rubin
2016-11-14 23:03               ` Adam Jensen
2016-11-14  9:04             ` Dmitry A. Kazakov
2016-11-14 23:35               ` Adam Jensen
2016-11-15  8:38                 ` Dmitry A. Kazakov
2016-11-15  9:58                   ` Niklas Holsti
2016-11-15 17:32                   ` Adam Jensen
2016-11-16  9:30                     ` Dmitry A. Kazakov
2016-11-15  0:06             ` Jeffrey R. Carter
2016-11-14 18:17     ` Simon Wright
2016-11-14 22:52       ` Adam Jensen
2016-11-12 20:59 ` Brian Drummond
2016-11-15  1:14 ` antispam
2016-11-15  4:20   ` Adam Jensen
2016-11-19 22:46     ` antispam
2016-11-15 19:34 ` Robert Eachus
2016-11-15 22:07   ` Adam Jensen
