comp.lang.ada
* Rational Edge Design Papers
@ 2001-01-18  1:50 Al Christians
  2001-01-18 18:26 ` Marc A. Criley
  0 siblings, 1 reply; 8+ messages in thread
From: Al Christians @ 2001-01-18  1:50 UTC (permalink / raw)


See:

http://www.therationaledge.com/content/jan_01/f_craftsci_kb.html


I've read this paper without getting a really good idea of how one would
follow the method to construct larger programs out of the simple
components described.  

I'm thinking that to implement a complicated process, one needs a way to 
hierarchically combine or connect the data flow controllers that control 
simple processes.  Are the data flow controllers allowed to control each
other?  If so, is a strict hierarchy the best way to combine them?  Is
it ok for me to have lots of single-instance buffers that I call 
I/O servers and use to pass data from one data flow controller to
another?

The typical application that I would like to imagine is the simple
100-screen vertical-market business app.  The author of the 
Rational piece seems to hint that concurrency is the way to do it,
but I'm wary of that. 

TIA for any discussion.


Al



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Rational Edge Design Papers
  2001-01-18  1:50 Rational Edge Design Papers Al Christians
@ 2001-01-18 18:26 ` Marc A. Criley
  2001-01-18 19:21   ` Al Christians
  0 siblings, 1 reply; 8+ messages in thread
From: Marc A. Criley @ 2001-01-18 18:26 UTC (permalink / raw)


Al Christians wrote:
> 
> See:
> 
> http://www.therationaledge.com/content/jan_01/f_craftsci_kb.html
> 
> I've read this paper without getting a really good idea of how one would
> follow the method to construct larger programs out of the simple
> components described.
> 

As I read the article I observed that its suggested approach fit well
with the software architecture of a project I previously worked on for
several years.

That project was the rearchitected Engagement Planning and Control (EPC)
subsystem of the Advanced Tomahawk Weapon Control System (ATWCS). 
(ATWCS is not the Tomahawk flight software, it's the on-ship launch
planning facility.)

In the article Buhrer identifies four types of design elements.  I won't
repeat his description (see the article), but I will describe how they
fit with the EPC software architecture.

o Data Entity - Each of the different types of ATWCS planning data were
tightly encapsulated into highly cohesive data collections, usually
implemented as Ada records.  Within the system there was always a
"master" instance of each particular data type, which was considered
Truth.
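
A minimal sketch of what such a data entity might look like -- all
names here are invented for illustration, not actual ATWCS code:

```ada
--  A tightly encapsulated data entity: the record type is private,
--  and the package body holds the single "master" instance that is
--  considered Truth.  All names are hypothetical.
package Plan_Data is

   type Plan is private;

   procedure Set_Target (P : in out Plan; Target : Positive);
   function  Target_Of  (P : Plan) return Positive;

   --  Read access to the one master instance of this data type.
   function Master return Plan;

private

   type Plan is record
      Target : Positive := 1;
   end record;

end Plan_Data;

package body Plan_Data is

   The_Master : Plan;  --  the single instance considered Truth

   procedure Set_Target (P : in out Plan; Target : Positive) is
   begin
      P.Target := Target;
   end Set_Target;

   function Target_Of (P : Plan) return Positive is
   begin
      return P.Target;
   end Target_Of;

   function Master return Plan is
   begin
      return The_Master;
   end Master;

end Plan_Data;
```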

o I/O Server (external interface) - EPC interacted with externals in two
ways:  First, with peer subsystems (such as the launch hardware
controller) the interaction was via a formally controlled API, with
dedicated software monitoring the APIs and through which all external
communication was funneled--both incoming and outgoing.  Second, for
subordinate subsystems, such as the operator's GUI interface,
interaction simply occurred through sockets, but again, with dedicated
software controlling all traffic passing through those sockets.  The
idea was to have single points of control for all external interfaces,
to simplify the architecture and design, ensure a consistent approach to
external interfaces, and to aid debugging.

o Transformation server - These were the straightforward internal state
managers, route planners, etc. that provided the planning functionality.

o Data Flow Manager - A pair of Ada tasks provided this functionality. 
One of the tasks managed all updates to planning data based on the
information coming in to the system, initiated "transformations", and
saw to it their outputs were passed on.  The other task drove the
planning timeline, initiating activities at the scheduled times.  One
aspect Buhrer notes is that a Data Flow Manager's "activity" may
sometimes be "waiting for input".  The EPC software architecture was
designed such that its tasks spent the vast majority of their time
blocked while waiting for input.  The net result of this was that the
system was quiescent most of the time, consuming only a few percentage
points of CPU, despite the existence of _literally_ hundreds of
simultaneously active tasks.
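
A bare-bones sketch of such a task -- Planning_Update and Dispatch are
invented placeholders for a data type and a transformation declared
elsewhere, and the task would sit in some enclosing declarative part:

```ada
--  A data flow manager that spends most of its time blocked on an
--  accept statement, consuming no CPU until input arrives.
task Flow_Manager is
   entry Deliver (Item : in Planning_Update);
end Flow_Manager;

task body Flow_Manager is
   Next : Planning_Update;
begin
   loop
      --  Quiescent here until an I/O server delivers input.
      accept Deliver (Item : in Planning_Update) do
         Next := Item;
      end Deliver;
      --  Initiate the appropriate transformation and pass on
      --  its output.
      Dispatch (Next);
   end loop;
end Flow_Manager;
```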

Marc A. Criley
Senior Staff Engineer
Quadrus Corporation
www.quadruscorp.com




* Re: Rational Edge Design Papers
  2001-01-18 18:26 ` Marc A. Criley
@ 2001-01-18 19:21   ` Al Christians
  2001-01-19 14:25     ` Marc A. Criley
  0 siblings, 1 reply; 8+ messages in thread
From: Al Christians @ 2001-01-18 19:21 UTC (permalink / raw)


Marc,

Thanks very much.

Is it right that in the style of architecture (morphology?) used in
your system, the Data Flow Managers could be as large and complex as
needed (you say you only had two), and that's not a problem?

The paper says "Any structured programming language is suitable for
implementing a data flow manager."  Does that mean also structured
design with hierarchical decomposition?  If I do that, then I would 
summarize the paper as recommending:

1. use structured design.

2. Whenever you see a transformation, I/O, or encapsulatable data
entity, split that out into a separate design element.

Am I missing something here?

This is structured-on-top, O-O-on-bottom programming, right?  The O-O
evangelists would not like that, I think, since O-O systems are not
supposed to have a top. 

Keeping this on-topic for CLA, can anyone offer gratuitous explanations
of what Ada features best support this morphology?


Al



"Marc A. Criley" wrote:
> 
> Al Christians wrote:
> >
> > See:
> >
> > http://www.therationaledge.com/content/jan_01/f_craftsci_kb.html
> >
> > I've read this paper without getting a real good idea how one would
> > follow the method to construct larger progams out of the simple
> > components described.
> >
> 
> As I read the article I observed that its suggested approach fit well
> with the software architecture of a project I previously worked on for
> several years.
> 
> That project was the rearchitected Engagement Planning and Control (EPC)
> subsystem of the Advanced Tomahawk Weapon Control System (ATWCS).
> (ATWCS is not the Tomahawk flight software, it's the on-ship launch
> planning facility.)
> 
> In the article Buhrer identifies four types of design elements.  I won't
> repeat his description (see the article), but I will describe how they
> fit with the EPC software architecture.
> 
> o Data Entity - Each of the different types of ATWCS planning data were
> tightly encapsulated into highly cohesive data collections, usually
> implemented as Ada records.  Within the system there was always a
> "master" instance of each particular data type that was always
> considered Truth.
> 
> o I/O Server (external interface) - EPC interacted with externals in two
> ways:  First, with peer subsystems (such as the launch hardware
> controller) the interaction was via a formally controlled API, with
> dedicated software monitoring the APIs and through which all external
> communication was funneled--both incoming and outgoing.  Second, for
> subordinate subsystems, such as the operator's GUI interface,
> interaction simply occurred through sockets, but again, with dedicated
> software controlling all traffic passing through those sockets.  The
> idea was to have single points of control for all external interfaces,
> to simpify the architecture and design, ensure a consistent approach to
> external interfaces, and to aid debugging.
> 
> o Transformation server - These were the straighforward internal state
> managers, route planners, etc. that provided the planning functionality.
> 
> o Data Flow Manager - A pair of Ada tasks provided this functionality.
> One of the tasks managed all updates to planning data based on the
> information coming in to the system, initiated "transformations", and
> saw to it their outputs were passed on.  The other task drove the
> planning timeline, initiating activities at the scheduled times.  One
> aspect Buhrer notes is that a Data Flow Manager's "activity" may
> sometimes be "waiting for input".  The EPC software architecture was
> designed such that its tasks spent the vast majority of their time
> blocked while waiting for input.  The net result of this was that the
> system was quiescent most of the time, consuming only a few percentage
> points of CPU, despite the existence of _literally_ hundreds of
> simultaneously active tasks.
> 
> Marc A. Criley
> Senior Staff Engineer
> Quadrus Corporation
> www.quadruscorp.com




* Re: Rational Edge Design Papers
  2001-01-18 19:21   ` Al Christians
@ 2001-01-19 14:25     ` Marc A. Criley
  2001-01-20  3:26       ` Al Christians
  0 siblings, 1 reply; 8+ messages in thread
From: Marc A. Criley @ 2001-01-19 14:25 UTC (permalink / raw)


Al Christians wrote:
> 
> Marc,
> 
> Thanks very much.
> 
> Is it right that in the style of architecture (morphology?) used in
> your system, the Data Flow Managers could be as large and complex as
> needed (you say you only had two), and that's not a problem?

A driving goal of the architecture was to make the Data Flow Managers
(tasks) _not_ be large and complex.  The "transformation servers", i.e.,
algorithms, were where the bulk of the processing was done.  The tasks
orchestrated the data flow through the system.  That is, data arrived
via an I/O server, the Data Flow Manager determined what kind of data it
was, and then invoked the appropriate transformation.  Each
planning-data entity had its two dedicated flow managers, which dealt
solely with managing the processing for that Tomahawk engagement.

These transformations were typically quick, so the task blocked while
the processing was being done.  Some, though, required significantly
more time (routing a Tomahawk around sea-borne friends and foes,
islands, etc. could be complex).  In those cases, the primary Data Flow
Manager interacted with a subordinate one, which then drove the
appropriate processing, and the primary manager went back to waiting for
input.

(Note that ATWCS EPC does not perfectly conform to Buhrer's concept, but
it turns out his concepts provide a useful framework with which to
describe the architecture of that subsystem.)

> 
> The paper says "Any structured programming language is suitable for
> implementing a data flow manager."  Does that mean also structured
> design with hierarchical decomposition?  If I do that, then I would
> summarize the paper as recommending:
> 
> 1. use structured design.
> 
> 2. Whenever you see a transformation, I/O, or encapsulatable data
> entity, split that out into a separate design element.
> 
> Am I missing something here?

That appears to be what he is saying.  Partition the functionality into
those design entities, then tie them together with Data Flow Managers.

> 
> This is structured-on-top, O-O-on-bottom programming, right?  The O-O
> evangelists would not like that, I think, since O-O systems are not
> supposed to have a top.

I think that's not a bad way to characterize Buhrer's approach.

A problem with OO sometimes cited by its critics is that it results in
an explosion of objects, and doesn't address well how all these objects
are supposed to be coordinated, i.e., there's "no top".

A few years ago Link Flight Simulation (a company not known for software
innovation) in Binghamton, NY, tackled this very issue when they started
the move towards OOD.  They came up with an approach (and I'm not saying
they were the first to come up with it, but I have seen it reinvented a
couple times since then) where you developed your classes and objects,
and then created a top-level data/control flow diagram to tie all the
objects together.

(The notation they used for this top-level flow diagram was devised and
published by William Bennett, with the notation subsequently being known
within the company as "Bennett Notation".  There's a handful of us
ex-Linkers around the industry who know and continue to swear by that
notation as the neatest thing since sliced bread :-)

This approach neatly addressed the OO coordination problem, and since
Buhrer is about the third incarnation of it that I've now seen, there
must be something to it!

> 
> Keeping this on-topic for CLA, can anyone offer gratuitous explanations
> of what Ada features best support this morphology?

Data Entity : records and/or tagged types, embedded in packages
I/O servers : procedural constructs...data comes in, data goes out...
Transformers: procedural constructs...Inputs -> magic -> Outputs
Flow Manager: tasks if multiple flows ought to be occurring
               simultaneously, otherwise a plain loop would
               likely suffice: wait..process..wait..process...etc.
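
For the single-flow case, the plain loop inside the flow manager's
main procedure might look like this fragment -- IO_Server, Transform,
and Message are invented placeholders for an I/O server, a
transformation server, and a data entity declared elsewhere:

```ada
--  Single-flow manager as a plain loop: wait..process..wait..process.
loop
   declare
      Msg : constant Message := IO_Server.Next_Input;  --  blocks here
      Out_Msg : constant Message := Transform (Msg);   --  the "magic"
   begin
      IO_Server.Send (Out_Msg);  --  data goes out
   end;
end loop;
```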

Marc






* Re: Rational Edge Design Papers
  2001-01-19 14:25     ` Marc A. Criley
@ 2001-01-20  3:26       ` Al Christians
  2001-01-28  0:08         ` Koni Buhrer
       [not found]         ` <3A7362F5.11E74D20@rational.com>
  0 siblings, 2 replies; 8+ messages in thread
From: Al Christians @ 2001-01-20  3:26 UTC (permalink / raw)


"Marc A. Criley" wrote:
>  ... Lots of interesting stuff
> 

Thanks very much again.

I, too, like where this is going, except that in the business software
world that I've inhabited, the requirement for purely functional data
transformations with no state could easily produce some monstrosity.
This seems a common problem: state-changing I/O and functional logic
don't mix well.  It would be very ugly in a business program to have
all the I/O at the top.  I guess I would change or interpret the rules
to allow the data entities to include database handles and let the
primitive operations of the data entities do some database operations.

Thoughts welcome again.


Al




* Re: Rational Edge Design Papers
  2001-01-20  3:26       ` Al Christians
@ 2001-01-28  0:08         ` Koni Buhrer
       [not found]         ` <3A7362F5.11E74D20@rational.com>
  1 sibling, 0 replies; 8+ messages in thread
From: Koni Buhrer @ 2001-01-28  0:08 UTC (permalink / raw)
  To: comp.lang.ada@ada.eu.org



Al Christians wrote:
> 
> "Marc A. Criley" wrote:
> >  ... Lots of interesting stuff
> >
> 
> Thanks very much again.
> 
> I, too, like where this is going, ... except that in the business
> software world that I've inhabited,  the requirement for purely
> functional data transformations with no state could easily produce some
> monstrosity.

Well, you are right, but I am not saying data transformations can't have
state.
All I'm saying is that any data transformation state must be passed in
and out as the transformation server operation is invoked.
The rule ensures that transformation servers are deterministic and their
execution behavior predictable.
The data flow managers "own" the data transformation state.
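
For instance, a transformation server operation might be declared like
this (all names invented for illustration):

```ada
--  The transformation state is owned by the caller (the data flow
--  manager) and passed through explicitly, so the operation itself
--  remains deterministic and its execution behavior predictable.
procedure Plan_Route
  (State  : in out Route_State;      --  owned by the data flow manager
   Input  : in     Position_Report;  --  data coming in
   Output :    out Route_Segment);   --  transformed result
```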

>  This seems a common problem,  state-changing I/O and
> functional logic don't mix well.   It would be very ugly in a business
> program to have all the I/O at the top.  I guess I would change or
> interpret the rules to allow the  data entities to include database
> handles and let the primitive operations of the data entities do some
> database operations.

Yes, I understand your concern.
In the article I mention in a footnote that "files" (and databases are
just the same) are the exception to the rule that data entities must not
have embedded external interfaces.
Let me elaborate.

There are two ways in which we can look at a file (or a database):
A file can be viewed as an external interface from which we can read
records and to which we can write records.
Or a file can be viewed as a data object with somewhat funny and
convoluted rules on how its components are accessed.

If we consider the file to be a data object, it makes a fine data
entity.
A transformation server can read from it or write to it just like it can
access the public components of any other data entity.
Reading from a file or writing to a file thus becomes a data
transformation: the data is transformed from its file format to some
internal format, or from an internal format to the file format.
And the transformation server can read from or write to a file data
entity anywhere within its guts without intervention of data flow
managers and I/O servers.
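
As a trivial illustration (mine, not from the article), a
transformation that views a text file as a data entity and transforms
its contents into an internal value:

```ada
with Ada.Text_IO;

--  Reading the file *is* the data transformation here: from the file
--  format to an internal format (a line count).
function Line_Count (File_Name : String) return Natural is
   use Ada.Text_IO;
   F     : File_Type;
   Count : Natural := 0;
begin
   Open (F, In_File, File_Name);
   while not End_Of_File (F) loop
      Skip_Line (F);
      Count := Count + 1;
   end loop;
   Close (F);
   return Count;
end Line_Count;
```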

Treating files and databases as data entities is very common.
I myself have written numerous programs doing just that.
And I believe that's ok, as long as the program is essentially
sequential and essentially free of real-time constraints.
If however a software system contains substantial concurrency or is
subject to substantial real-time constraints, data entities with hidden
external interfaces are very problematic.

External interfaces typically cause an execution thread to be delayed
(hardware latency), to synchronize with other threads (mutual
exclusion), and to block (data availability).
Sure, these issues are well known and a software design can deal with
them - usually.
What makes data entities with hidden external interfaces so dreadful is
the secrecy and subtlety by which they can introduce these problems into
a design.

Please consider:
To determine whether a software architecture is viable (for example
whether it can meet timing requirements), a software developer usually
looks at the high-level, architectural design.
At that level the details of data transformation operations and data
objects are rarely exposed.
It is therefore very easy for the software architect to overlook the
delay, synchronization, or blocking issues that are introduced by a data
entity with hidden external interfaces.

And even if the software architect is aware of the data entity, it may
be difficult for him to gauge the impact the hidden external interface
has on overall system performance.
After all, we are looking at a data entity that can be used in any
number of places in any number of ways.
And even if the software architect takes all those uses and interactions
into account - during detailed design somebody may use the data entity
(and its hidden external interface) in a new and unexpected way, thus
wreaking havoc on the system.

This is poison for a real time software project!

And it can be worse, because software developers - or should I say
humans? - have the tendency to defer difficult problems they don't know
how to deal with.
So it is very enticing for a software architect to hide a
hard-to-deal-with external interface inside a data entity to solve the
problem later, during detailed design.
That's dangerous, because the data entity might introduce delay,
synchronization, and blocking issues that affect overall system
viability.

A software project should deal with the hard issues - and concurrency is
definitely one of them - early during system design.
One of the benefits of the universal design pattern is that it exposes
the architecturally complex issues at the highest level.
The tedious but architecturally simple bulk of the software, however, is
encapsulated in data entities, transformation servers, and I/O servers.

> 
> Thoughts welcome again.
> 
> Al

Appreciate your comment.

Koni





* Re: Rational Edge Design Papers
       [not found]         ` <3A7362F5.11E74D20@rational.com>
@ 2001-01-28 23:39           ` Hans-Olof Danielsson
  2001-02-06  5:45             ` Koni Buhrer
  0 siblings, 1 reply; 8+ messages in thread
From: Hans-Olof Danielsson @ 2001-01-28 23:39 UTC (permalink / raw)
  To: comp.lang.ada

"Koni Buhrer" <koni@Rational.Com> wrote:

> ....
> One of the benefits of the universal design pattern is that it exposes
> the architecturally complex issues at the highest level.
> ....

> Appreciate your comment.

One comment is that a software architecture should be analysable with
respect to qualities such as performance, dependability, security, and
modifiability/interoperability, making it possible to determine the
attribute/value pairs of an architecture and find out its suitability
for its purpose.

And a question: how, and how well, does an architecture designed with
the universal design pattern support analysis of those qualities?

Hans-Olof








* Re: Rational Edge Design Papers
  2001-01-28 23:39           ` Hans-Olof Danielsson
@ 2001-02-06  5:45             ` Koni Buhrer
  0 siblings, 0 replies; 8+ messages in thread
From: Koni Buhrer @ 2001-02-06  5:45 UTC (permalink / raw)
  To: comp.lang.ada@ada.eu.org



Hans-Olof Danielsson wrote:
> 
> "Koni Buhrer" <koni@Rational.Com> wrote:
> 
> > ....
> > One of the benefits of the universal design pattern is that it exposes
> > the architecturally complex issues at the highest level.
> > ....
> 
> > Appreciate your comment.
> 
> One comment is that a software architecture should be analysable with
> respect to qualities such as performance, dependability, security, and
> modifiability/interoperability, making it possible to determine the
> attribute/value pairs of an architecture and find out its suitability
> for its purpose.
> 
> And a question: how, and how well, does an architecture designed with
> the universal design pattern support analysis of those qualities?
> 
> Hans-Olof

The universal design pattern has a slightly different goal.

In theory, using the universal design pattern should GUARANTEE that a
software design has a (reasonably) high level of quality across all
quality factors.
Much like applying a residential construction code guarantees that a
house is safe.

Of course, that does not guarantee that a quality (say performance) is
in fact high enough for a specific purpose.
Neither does the universal design pattern guarantee that a software
system meets any other specific requirements.

However - and this is what I was referring to in the initial quotation
above - the universal design pattern makes software designs highly
transparent.
Transparency is after all a quality.
And therefore, I believe the universal design pattern generally does
support analysis of quality factors.

Koni




