* Re: GC in Ada
@ 1986-04-04 3:41 Rick Conn
0 siblings, 0 replies; 49+ messages in thread
From: Rick Conn @ 1986-04-04 3:41 UTC (permalink / raw)
The basic goal of GC is to reclaim space that is no longer needed for
one purpose and allow said space to be used for something else. For
an application program, this can be done by the opsys, which I believe
is the focus of your message, but it can also be done by the application
itself. In both cases, the desired effect, viz the utilization of less
space by the application, is achieved. This is why I feel that the GARBAGE
component is viable ... and it offers a solution now, rather than later.
Rick
-------
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
@ 2007-01-24 11:06 gautier_niouzes
2007-01-24 19:25 ` tmoran
0 siblings, 1 reply; 49+ messages in thread
From: gautier_niouzes @ 2007-01-24 11:06 UTC (permalink / raw)
Jeffrey R. Carter:
> > Turbo Pascal and other alternatives were already in place and
> > much cheaper than Ada. A few brave souls tried to compete
> > with products such as RR Software's Janus Ada and Meridian's
> > AdaVantage, but the full environment (e.g., integrated editors,
> > debuggers, etc.) were not in place they were for Turbo Pascal.
> There's always the question of why, given TP's widespread popularity, C
> became more popular.
It has to do with the deep unportability of Pascal and (consequence)
the fragmentation of Pascal into incompatible dialects. At the time you
had Amiga's, Atari's, Mac's; you had MS Windows coming to replace DOS,
so a DOS-oriented, Pascal dialect had little chance against C, except
for a short time.
TP was an extremely fast compiler producing unoptimized code (except
some trivial XOR AX,AX's), but with the CPU's frequencies quickly up
around 1990, the interest was more targeted to profit from this speed
in the compiled code and less to have a couple millions more of LoC
compiled per second.
...
> Windows 95 was the 1st widely used OS with support for tasking. Ada (95)
> was the only widely available language with support for tasking at the
> time. We probably lost a good opportunity to gain more acceptance of Ada
> by not including a standard windowing library and promoting Ada as the
> best language for taking advantage of Win95's features.
Mmmh I think it was a good idea *not* to include a standard windowing
library: then now Ada would be stuck with an outdated standard
windowing library. There was also another problem then: the lack of a
good but cheap or free compiler.
Don't be so pessimistic, Ada's quality only appear with time - and of
course with the effort of brave souls.
If you say "I'm a smart software engineer, Ada is for me and not for
you", you won't help Ada.
If you make good, visible, useful open-source software with Ada, you
will help.
______________________________________________________________
Gautier -- http://www.mysunrise.ch/users/gdm/index.htm
Ada programming -- http://www.mysunrise.ch/users/gdm/gsoft.htm
NB: For a direct answer, e-mail address on the Web site!
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-24 11:06 How come Ada isn't more popular? gautier_niouzes
@ 2007-01-24 19:25 ` tmoran
2007-01-25 4:46 ` Gautier
0 siblings, 1 reply; 49+ messages in thread
From: tmoran @ 2007-01-24 19:25 UTC (permalink / raw)
> > Windows 95 was the 1st widely used OS with support for tasking. Ada (95)
> > was the only widely available language with support for tasking at the
> > time. We probably lost a good opportunity to gain more acceptance of Ada
> > by not including a standard windowing library and promoting Ada as the
> > best language for taking advantage of Win95's features.
>
> Mmmh I think it was a good idea *not* to include a standard windowing
> library: then now Ada would be stuck with an outdated standard
> windowing library. There was also another problem then: the lack of a
> good but cheap or free compiler.
Actually some of us did try to make a standard Windows library and
promote Ada as the best language for Windows programming (see "CLAW, a
High Level, Portable, Ada 95 Binding for Microsoft Windows" TriAda 1997).
As a matter of fact, it seems everybody who wanted access to the Windows
API designed a standard windowing library - eg GWindows et al.
Also, there was the Janus Ada compiler which was pretty cheap at
something like $100, and the early versions of the free Gnat compiler were
coming out at that time.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-24 19:25 ` tmoran
@ 2007-01-25 4:46 ` Gautier
2007-01-25 9:29 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Gautier @ 2007-01-25 4:46 UTC (permalink / raw)
tmoran@acm.org wrote:
>>> Windows 95 was the 1st widely used OS with support for tasking. Ada (95)
>>> was the only widely available language with support for tasking at the
>>> time. We probably lost a good opportunity to gain more acceptance of Ada
>>> by not including a standard windowing library and promoting Ada as the
>>> best language for taking advantage of Win95's features.
>> Mmmh I think it was a good idea *not* to include a standard windowing
>> library: then now Ada would be stuck with an outdated standard
>> windowing library. There was also another problem then: the lack of a
>> good but cheap or free compiler.
>
> Actually some of us did try to make a standard Windows library and
> promote Ada as the best language for Windows programming (see "CLAW, a
> High Level, Portable, Ada 95 Binding for Microsoft Windows" TriAda 1997).
> As a matter of fact, it seems everybody who wanted access to the Windows
> API designed a standard windowing library - eg GWindows et al.
The point is that neither CLAW nor GWindows were included in the Ada standard,
and it is a good thing. And promoting Ada for Windows programming in an Ada
conference is good, but that won't make that language a lot more popular: you
have to make promotion outside the insider circle...
> Also, there was the Janus Ada compiler which was pretty cheap at
> something like $100, and the early versions of the free Gnat compiler were
> coming out at that time.
I'm afraid you read a bit too quickly: I discussed about finding a (_good_)
and (cheap or free) compiler in 1995. GNAT needed a few years to become really
good, IHMO.
______________________________________________________________
Gautier -- http://www.mysunrise.ch/users/gdm/index.htm
Ada programming -- http://www.mysunrise.ch/users/gdm/gsoft.htm
NB: For a direct answer, e-mail address on the Web site!
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-25 4:46 ` Gautier
@ 2007-01-25 9:29 ` Markus E Leypold
2007-01-27 16:59 ` Stephen Leake
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-01-25 9:29 UTC (permalink / raw)
Gautier <gautier@fakeaddress.nil> writes:
>> Also, there was the Janus Ada compiler which was pretty cheap at
>> something like $100, and the early versions of the free Gnat compiler were
>> coming out at that time.
>
> I'm afraid you read a bit too quickly: I discussed about finding a
> (_good_) and (cheap or free) compiler in 1995. GNAT needed a few years
> to become really good, IHMO.
Given, that I recently found in 3.15p (which is not the newest one, I know):
- Really bad bugs handling Read and Write of discrimated records,
- Race conditions in the runtime system when trying to catch
interrupts,
- No way to catch the console break under windows with the interupts
mechanism (i.e. a design error in my opinion),
I wonder wether GNAT was good even in, say 2002 (or whatever was
3.15p's release date).
And all those problems are really expensive to circumvent (partly
because the runtime system insists on fiddling with the signal
handlers).
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-25 9:29 ` Markus E Leypold
@ 2007-01-27 16:59 ` Stephen Leake
2007-01-27 20:40 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Stephen Leake @ 2007-01-27 16:59 UTC (permalink / raw)
Markus E Leypold <development-2006-8ecbb5cc8aREMOVETHIS@ANDTHATm-e-leypold.de> writes:
> Given, that I recently found in 3.15p (which is not the newest one, I know):
>
>
> - Really bad bugs handling Read and Write of discrimated records,
>
> - Race conditions in the runtime system when trying to catch
> interrupts,
>
> - No way to catch the console break under windows with the interupts
> mechanism (i.e. a design error in my opinion),
>
>
> I wonder wether GNAT was good even in, say 2002 (or whatever was
> 3.15p's release date).
It was good, in my opinion; at that time, I was far more productive
writting real applications using GNAT 3.15p than Borland C++.
> And all those problems are really expensive to circumvent (partly
> because the runtime system insists on fiddling with the signal
> handlers).
But to be fair, you have to say how easy the solution was in Ada, vs
the solution to the same problem in some other language.
What is the equivalent of Discriminated_Record'Write in C? The concept
doesn't even exist!
--
-- Stephe
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-27 16:59 ` Stephen Leake
@ 2007-01-27 20:40 ` Markus E Leypold
2007-01-29 8:56 ` Maciej Sobczak
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-01-27 20:40 UTC (permalink / raw)
Stephen Leake <stephen_leake@stephe-leake.org> writes:
> Markus E Leypold <development-2006-8ecbb5cc8aREMOVETHIS@ANDTHATm-e-leypold.de> writes:
>
>> Given, that I recently found in 3.15p (which is not the newest one, I know):
>>
>>
>> - Really bad bugs handling Read and Write of discrimated records,
>>
>> - Race conditions in the runtime system when trying to catch
>> interrupts,
>>
>> - No way to catch the console break under windows with the interupts
>> mechanism (i.e. a design error in my opinion),
>>
>>
>> I wonder wether GNAT was good even in, say 2002 (or whatever was
>> 3.15p's release date).
>
> It was good, in my opinion; at that time, I was far more productive
> writting real applications using GNAT 3.15p than Borland C++.
>
>> And all those problems are really expensive to circumvent (partly
>> because the runtime system insists on fiddling with the signal
>> handlers).
>
> But to be fair, you have to say how easy the solution was in Ada, vs
> the solution to the same problem in some other language.
Unfortunately that's not been the case that Ada was cheaper here:
- Read/Write Bug (see below).
- Races with interrupts / catching break -- well, catching INT or
TERM in C or console breaks is really dead easy. With GNAT you've
first to remove all interrupt handling, then re-attach the
interrupts and introduce a race at the begin of the program (which
makes the program freeze -- aborting properly at that point would
be OK, but a freeze is not). And catching the windows console break
in C just takes a C procedure call. Since in the GNAT RTS it's not
mapped to an interrupt (design error, IMHO), you'll have to write a
C stub and do some magic there (see, C again, only more effort).
> What is the equivalent of Discriminated_Record'Write in C? The concept
> doesn't even exist!
Partly the circumvention was not to use discriminated records together
with controlled types, meaning instead of having a string in an
discrimated part, the string was set to empty when it would have been
hidden by the discriminant. I could have done that in C. The other
part was to write 'Write myself and do the right thing. I could also
have done the same in C.
And C has union types. Of course they are not type safe (as C always
is), but the same effect can be achieved when writing a proper
handling procedure that depend on some discriminating field.
Look I do not want to promote C (far from it). And I'm not talking
about embedded programming or aerospace, but end user PC
applications. So in this not uncommon case / market my hypothesis is
(has been now for some time) that the ecological niche for Ada has
been closed by the development of languages and tools during the last
20 years:
- Some low level stuff is still better done in C, since the
operating systems and their interface libraries are written in C
and often fit better to that language (I've the impression that
there are some problems with the GNAT tasking runtime, since it
introduce a new layer above the host operating system's
philosophy).
- Other languages are also type safe and have become really
fast. They have a more "expressive" and powerful type system (I'm
basically talking about the Hindley-Milner type system here and
OCaml or SML, not about Java, mind you. If I look at the new Java
generics (there's a paper by Wadler about it) and compare the new
subtyping polymorphism there with the Hindley-Milner type system,
I just begin to see how impossible it is to write really useful
container libraries without polymorphism of that kind. No, Ada
doesn't have it and C doesn't have it. And the absence of useful
generics and the absolute impossibility to get those with
preprocessing in C is in my eyes a much more important argument
against C as an application programming language than the buffer
overflow problems. And yes, that applies to Java up to 1.4 also).
- Other languages have garbage collection.
- Even C has acquired a number of new tools (splint, valgrind, C
Cured) that make reasonable reliable programming (we are not
talking about autopilots and not about rockets here, at least I
don't!) in C much more feasible.
- The vendor situation ...
Overall I'm haunted by the impression that C + some high level
language of your choice with a proper FFI makes more sense for day to
day development of (a) complete PC applications, (b) Web software and
(c) small tools (say: "scripting").
I'm not alone with that, AFAIK. But note: I don't say this to
disparage Ada. Ada, to me, is some kind of super-Pascal. But for
historical reasons that has not played out (between Turbo Pascal being
traditional on Micros and having the community Ada didn't have at that
time and the Ada compilers coming just a bit too late), and now the
situation has changed. The time of the Pascals is over. Their "mind
share" has been swallowed by Python, Ocaml, Java, dependent on to
which paradigm of programming those people adhere.
If you look upon the number of books sold by O'Reilly, the big
contender is not C (and not C++ anymore). They are Java (basically for
people with a C++ mind that have seen the light and don't want to do
manual memory management any more), Python, Pearl and Ruby (no pointers, no
types, but GC) and of course C#.
The trend I see, is that GC is a must, clumsy pointer handling is out
and types are an option for those that can understand them.
I admit I'm not completely sure how all that fits together. But the
perception that it is C vs. Ada is IMHO wrong. That particular fight
was -- in a sense -- already lost when Turbo Pascal and Modula lost
out against C. (Examining that part of history should answer the OPs
question). Perhaps the answer in general is, that unreliability
doesn't matter so much in most software (MS Office i.e. has become
pretty stable despite being written in C AFAIK), since it is not
embedded or not in aerospace.
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-27 20:40 ` Markus E Leypold
@ 2007-01-29 8:56 ` Maciej Sobczak
2007-01-29 14:21 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-01-29 8:56 UTC (permalink / raw)
Markus E Leypold wrote:
> They are Java (basically for
> people with a C++ mind that have seen the light and don't want to do
> manual memory management any more
Sorry, but that's a misconception - I don't remember when I was the last
time I was messing with manual memory management in a regular C++ code.
I estimate that in my current programming I call delete (or free or
whatever) once in 5-10 kLOC.
Is Java going to save me from this *nightmare*? Wow, I'm impressed.
> The trend I see, is that GC is a must, clumsy pointer handling is out
> and types are an option for those that can understand them.
Indeed, looks like everybody is going in that direction.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-29 8:56 ` Maciej Sobczak
@ 2007-01-29 14:21 ` Markus E Leypold
2007-01-31 9:23 ` Maciej Sobczak
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-01-29 14:21 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Markus E Leypold wrote:
>
> > They are Java (basically for
>> people with a C++ mind that have seen the light and don't want to do
>> manual memory management any more
>
> Sorry, but that's a misconception - I don't remember when I was the
> last time I was messing with manual memory management in a regular C++
> code. I estimate that in my current programming I call delete (or free
> or whatever) once in 5-10 kLOC.
OK. I'll have to reconsider this statement. I usually couldn't trim
down 'automatic' allocation to that extent, but that might have been
my application area. What I remember though, is the difficulty to
recover from exceptions in the presence of automatic (scope bound)
memory management. (I hope I'm making sense here, else I'd really have
to go back to my C++ mind and notes and try to retrieve the right
vocabulary and reasoning and -- well -- I don't really want to have a
C++ discussion in c.l.a. :-). If we must, let's shift that to personal
mail or to another group ...).
> Is Java going to save me from this *nightmare*? Wow, I'm impressed.
Good for you, if it is not a nightmare. But from what I remember in
the 1997s to 1998s (that was when there still were problems with STLs,
exceptions and the string libraries in C++ and when there was no
standard and Java was new), that this was one of the motivations that
people shifted to Java (either from C++ or from C). The other
motivation was the "portable GUI" which, I think, mostly disappointed
the expectations.
Of course I might be wrong. This is just teh impression I got "from
the trenches" and I might be missing a mor global point of view. It
perhaps does not apply today where C++ and the understanding of C++
has matured a bit (there is even an embedded subset of C++ which will
annoy folks here no end :-).
>> The trend I see, is that GC is a must, clumsy pointer handling is out
>> and types are an option for those that can understand them.
>
> Indeed, looks like everybody is going in that direction.
And certainly. Why should advances in hardware only buy more spiffy
GUIs and not something to ease the everyday pain for the everyday
software developer :-).
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-29 14:21 ` Markus E Leypold
@ 2007-01-31 9:23 ` Maciej Sobczak
2007-01-31 10:24 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-01-31 9:23 UTC (permalink / raw)
Markus E Leypold wrote:
>> Sorry, but that's a misconception - I don't remember when I was the
>> last time I was messing with manual memory management in a regular C++
>> code. I estimate that in my current programming I call delete (or free
>> or whatever) once in 5-10 kLOC.
>
> OK. I'll have to reconsider this statement. I usually couldn't trim
> down 'automatic' allocation to that extent, but that might have been
> my application area. What I remember though, is the difficulty to
> recover from exceptions in the presence of automatic (scope bound)
> memory management. (I hope I'm making sense here, else I'd really have
> to go back to my C++ mind and notes and try to retrieve the right
> vocabulary and reasoning and -- well -- I don't really want to have a
> C++ discussion in c.l.a. :-).
Why not having C++ discussions on the list where people claim that C++
sucks? :-)
> If we must, let's shift that to personal
> mail or to another group ...).
Yes, please feel free to contact me with regard to the above (follow the
links in my signature).
> But from what I remember in
> the 1997s to 1998s
Most programming languages were terrible at that time, that's true.
> (that was when there still were problems with STLs,
> exceptions and the string libraries in C++ and when there was no
> standard and Java was new), that this was one of the motivations that
> people shifted to Java (either from C++ or from C).
Yes, I know that. And I will keep stating that this motivation resulted
from common misconceptions, further amplified by Java marketing.
I've even heard that Java is better, because it has a String class and
there is no need to use char* as in C++ (!). FUD can buy a lot.
> The other
> motivation was the "portable GUI"
Yes.
> which, I think, mostly disappointed
> the expectations.
Still, GUI that sucks was better than no standard GUI at all for lost of
people. Both C++ and Ada are in the same camp in this aspect. I'm not
claiming that these languages should have standard GUI, but not having
it definitely scared many.
> Of course I might be wrong. This is just teh impression I got "from
> the trenches" and I might be missing a mor global point of view. It
> perhaps does not apply today where C++
Well, there is still no standard GUI for C++, but the choice with
non-standard ones is quite impressive:
http://www.free-soft.org/guitool/
> and the understanding of C++
> has matured a bit
Yes. Sadly, too late for those who already changed their mind.
> (there is even an embedded subset of C++ which will
> annoy folks here no end :-).
I think that at the end of the day the embedded C++ will disappear from
the market as the "full" C++ gets wider compiler support on embedded
platforms. There will simply be no motivation for using subsets.
Subsetting C++ would be beneficial in the sense similar to Ravenscar or
by extracting some core and using it with formal methods (sort of
"SPARK++"), but I doubt it will ever happen.
>>> The trend I see, is that GC is a must, clumsy pointer handling is out
>>> and types are an option for those that can understand them.
>> Indeed, looks like everybody is going in that direction.
>
> And certainly. Why should advances in hardware only buy more spiffy
> GUIs and not something to ease the everyday pain for the everyday
> software developer :-).
The interesting thing is that memory management is *said* to be painful.
C++ and Ada are similar in this regard - the majority of the regular
code don't need manual memory management (local objects!) or can have it
encapsulated (containers!), so there are no problems that would need to
be solved. Reference-oriented languages have completely different ratio
of "new per kLOC" so GC is not a feature there, it's a must. But then
the question is not whether GC is better, but whether reference-oriented
languages are better than value-oriented ones. Many people get seduced
by GC before they even start asking such questions.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-31 9:23 ` Maciej Sobczak
@ 2007-01-31 10:24 ` Markus E Leypold
2007-02-02 8:42 ` Maciej Sobczak
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-01-31 10:24 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Markus E Leypold wrote:
>
>>> Sorry, but that's a misconception - I don't remember when I was the
>>> last time I was messing with manual memory management in a regular C++
>>> code. I estimate that in my current programming I call delete (or free
>>> or whatever) once in 5-10 kLOC.
>> OK. I'll have to reconsider this statement. I usually couldn't trim
>> down 'automatic' allocation to that extent, but that might have been
>> my application area. What I remember though, is the difficulty to
>> recover from exceptions in the presence of automatic (scope bound)
>> memory management. (I hope I'm making sense here, else I'd really have
>> to go back to my C++ mind and notes and try to retrieve the right
>> vocabulary and reasoning and -- well -- I don't really want to have a
>> C++ discussion in c.l.a. :-).
>
> Why not having C++ discussions on the list where people claim that C++
> sucks? :-)
Because of the level of detail? Because I haven't really swapped in my
C++-personality yet, I'm wary to do it and fear I'll slip up in public
claiming the wrong things about C++? :-)
Jokes aside, I feel a bit off topic with this and the fervour in this
thread seems to have diminished somewhat anyway ...
>
>> But from what I remember in
>> the 1997s to 1998s
>
> Most programming languages were terrible at that time, that's true.
Not Ada 95 ... :-). Actually I think it was that time when history
branched and Ada missed to become main stream, but that is not a
historians claim only a personal impression.
>
>> (that was when there still were problems with STLs,
>> exceptions and the string libraries in C++ and when there was no
>> standard and Java was new), that this was one of the motivations that
>> people shifted to Java (either from C++ or from C).
>
> Yes, I know that. And I will keep stating that this motivation
> resulted from common misconceptions, further amplified by Java
> marketing.
Oh yes, misconceptions perhaps. But I'v only been talking about
peoples motivations (which say a lot about their perceived problems).
> I've even heard that Java is better, because it has a String class and
> there is no need to use char* as in C++ (!). FUD can buy a lot.
As far as that goes I have seen people getting tripped up really bad
by string::c_str(). And I think you need it, if you don't program pure
C++, which at that time nobody did. The implementation I saw there
might have been strange/faulty though -- s.c_str() returned a pointer
into some internal data structure of s which promptly changed when s
was modified. The only "safe" way to use it was strdup(s.c_str()) and
that is not threadsafe as anybody can see. I see "the need to use
char* in C++" rumour as the result of people having been burned
by similarly quirks at that time.
>
>> The other
>> motivation was the "portable GUI"
>
> Yes.
>
>> which, I think, mostly disappointed
>> the expectations.
>
> Still, GUI that sucks was better than no standard GUI at all for lost
> of people.
"portable" not "standard". There were always standard GUIs on Unix and
on Win32, only they where not exchangeable.
> Both C++ and Ada are in the same camp in this aspect. I'm
> not claiming that these languages should have standard GUI, but not
> having it definitely scared many.
Let's say it the other way round: Java certainly attracted a number of
people because of the "standard GUI". And since I firmly believe that
a GUI does NOT belong into a language standard but rather in a
separate standard for a runtime environment, I wonder a bit which kind
of people those were.
>> Of course I might be wrong. This is just teh impression I got "from
>> the trenches" and I might be missing a mor global point of view. It
>> perhaps does not apply today where C++
>
> Well, there is still no standard GUI for C++, but the choice with
> non-standard ones is quite impressive:
I haven't been talking about the standard GUI here, but about the
memory managment issues I hinted at earlier. I see, I've been jumping
a bit here.
>
> http://www.free-soft.org/guitool/
>
>> and the understanding of C++
>> has matured a bit
>
> Yes. Sadly, too late for those who already changed their mind.
>
>> (there is even an embedded subset of C++ which will
>> annoy folks here no end :-).
> I think that at the end of the day the embedded C++ will disappear
> from the market as the "full" C++ gets wider compiler support on
> embedded platforms.
That is no question of compiler support, as I understand it, but of
verifiability and safety. A bit like Ravenscar, but -- of course --
not as highly integer (ahem ... :-).
> There will simply be no motivation for using subsets.
But there will -- see above.
> Subsetting C++ would be beneficial in the sense similar to Ravenscar
> or by extracting some core and using it with formal methods (sort of
> "SPARK++"), but I doubt it will ever happen.
It already did (and perhaps died)
http://en.wikipedia.org/wiki/Embedded_C++
>>>> The trend I see, is that GC is a must, clumsy pointer handling is out
>>>> and types are an option for those that can understand them.
>>> Indeed, looks like everybody is going in that direction.
>> And certainly. Why should advances in hardware only buy more spiffy
>> GUIs and not something to ease the everyday pain for the everyday
>> software developer :-).
>
> The interesting thing is that memory management is *said* to be
> painful. C++ and Ada are similar in this regard - the majority of the
> regular code don't need manual memory management (local objects!) or
> can have it encapsulated (containers!), so there are no problems that
> would need to be solved.
I disagree. The only-downward-closures style of C++ and Ada, which
allows only to mimic "upward closures" by using classes, heavily
influences the way the programmer thinks. Higher level abstractions
(as in functional languages) would require full closure -- and since
this means that memory life time cannot bound to scope any more, this
would be the point where manual memory management becomes painful.
Furthermore I've been convinced that manual memory management hinders
modularity.
> Reference-oriented languages have completely
> different ratio of "new per kLOC" so GC is not a feature there, it's a
> must.
I wonder, if it is really possible to do OO without being
reference-oriented. I somewhat doubt it.
> But then the question is not whether GC is better, but whether
> reference-oriented languages are better than value-oriented ones. Many
> people get seduced by GC before they even start asking such questions.
Value-oriented in my world would be functional -- languages which all
heavily rely on GC.
I also admit being one of the seduced, but that is not surprising
since my main focus is not in embedded programming and in everything
else it's sheer folly not to have GC. The arguments against GC often
read like arguments against virtual memory, against high level
languages as opposed to assembler, against filesystems (yes there was
a time when some people thought that the application would best do
allocation of disc cylinders itself since it knows its access patterns
better than the FS).
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-01-31 10:24 ` Markus E Leypold
@ 2007-02-02 8:42 ` Maciej Sobczak
2007-02-02 13:57 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-02 8:42 UTC (permalink / raw)
Markus E Leypold wrote:
>>> But from what I remember in
>>> the 1997s to 1998s
>> Most programming languages were terrible at that time, that's true.
>
> Not Ada 95 ... :-).
Ada 95 *is* terrible. It doesn't have containers nor unbounded strings
and it cannot even return limited types from functions. Yuck! ;-)
> Oh yes, misconceptions perhaps. But I'v only been talking about
> peoples motivations (which say a lot about their perceived problems).
Everybody has full rights to use misconceptions as a basis for
perception and motivation. That's a natural limitation of human brain.
>> I've even heard that Java is better, because it has a String class and
>> there is no need to use char* as in C++ (!). FUD can buy a lot.
>
> As far as that goes I have seen people getting tripped up really bad
> by string::c_str(). And I think you need it, if you don't program pure
> C++, which at that time nobody did.
Good point. It is true that people get tripped when interfacing with old
C code. What about interfacing with old C code *from Java*? Are there
less opportunities for getting tripped, or what?
Another misconception.
Interfacing to old C is tricky from Ada as well.
(Strangely, in "pure" C++ you have to use .c_str() even when opening a
file stream, because fstream constructor does not understand string -
that's real oops, but actually I've never seen anybody getting tripped
here.)
> s.c_str() returned a pointer
> into some internal data structure of s which promptly changed when s
> was modified.
Yes.
> The only "safe" way to use it was strdup(s.c_str())
No, the only "safe" way to use it is making an immutable copy of the string:
string someString = "Hello";
const string safeCopy(someString);
some_old_C_function(safeCopy.c_str());
// modify someString here without influencing anything
// ...
> and
> that is not threadsafe as anybody can see.
Why? There is nothing about threads here.
> I see "the need to use
> char* in C++" rumour as the result of people having been burned
> by similarly quirks at that time.
Yes, I understand it. Still, most of the rumour comes from misconceptions.
>> I think that at the end of the day the embedded C++ will disappear
>> from the market as the "full" C++ gets wider compiler support on
>> embedded platforms.
>
> That is no question of compiler support, as I understand it, but of
> verifiability and safety. A bit like Ravenscar, but -- of course --
> not as highly integer (ahem ... :-).
I agree with it, but restrictions for embedded C++ are not catering for
the same objectives as those of Ravenscar. For example: EC++ does not
have templates. Nor even namespaces (:-|). Does it have anything to do
with schedulability or necessary runtime support? No. Result - some
people got frustrated and invented this:
http://www.iar.com/p7371/p7371_eng.php
Funny?
That's why I believe that Embedded C++ will die.
>> Subsetting C++ would be beneficial in the sense similar to Ravenscar
>> or by extracting some core and using it with formal methods (sort of
>> "SPARK++"), but I doubt it will ever happen.
>
> It already did (and perhaps died)
>
> http://en.wikipedia.org/wiki/Embedded_C++
Exactly. It will die, because it's just a subset. If it was a subset
extended with annotations ("SPARK++") or with anything else, the
situation would be different, because it would provide new possibilities
instead of only limiting them.
>> The interesting thing is that memory management is *said* to be
>> painful.
> I disagree. The only-downward-closures style of C++ and Ada, which
> allows only to mimic "upward closures" by using classes, heavily
> influences the way the programmer thinks. Higher level abstractions
> (as in functional languages) would require full closure -- and since
> this means that memory life time cannot bound to scope any more, this
> would be the point where manual memory management becomes painful.
You can have it by refcounting function frames (and preserving some
determinism of destructors). GC is not needed for full closures, as far
as I perceive it (with all my misconceptions behind ;-) ).
On the other hand, GC is convincing with some lock-free algorithms.
Now, *this* is a tough subject for Ada community, right? ;-)
> Furthermore I've been convinced that manual memory management hinders
> modularity.
Whereas I say that I don't care about manual memory management in my
programs. You can have modularity without GC.
(And if I think about all these funny GC-related effects like
resurrection of objects in Java, then I'm not sure what kind of
modularity you are referring to. ;-) )
>> Reference-oriented languages have completely
>> different ratio of "new per kLOC" so GC is not a feature there, it's a
>> must.
>
> I wonder, if it is really possible to do OO without being
> reference-oriented. I somewhat doubt it.
Why? OO is about encapsulation and polymorphism, these don't need
references everywhere.
>> But then the question is not whether GC is better, but whether
>> reference-oriented languages are better than value-oriented ones. Many
>> people get seduced by GC before they even start asking such questions.
>
> Value-oriented in my world would be functional -- languages which all
> heavily rely on GC.
What about maintainability and reasoning?
> I also admit being one of the seduced, but that is not surprising
> since my main focus is not in embedded programming and in everything
> else it's sheer folly not to have GC.
I disagree. I'm not seduced.
> The arguments against GC often
> read like arguments against virtual memory, against high level
> languages as opposed to assembler, against filesystems (yes there was
> a time when some people thought that the application would best do
> allocation of disc cylinders itself since it knows its access patterns
> better than the FS).
Valid points. Still, Blue Gene/L uses real addressing scheme in each
node and more advanced database servers use raw disk access bypassing
the features provided by FS. Guess why.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-02-02 8:42 ` Maciej Sobczak
@ 2007-02-02 13:57 ` Markus E Leypold
2007-02-05 9:59 ` Maciej Sobczak
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-02-02 13:57 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Markus E Leypold wrote:
>
>>>> But from what I remember in
>>>> the 1997s to 1998s
>>> Most programming languages were terrible at that time, that's true.
>> Not Ada 95 ... :-).
>
> Ada 95 *is* terrible. It doesn't have containers nor unbounded strings
> and it cannot even return limited types from functions. Yuck! ;-)
Can't it? Wasn't there a trick with renaming somewhere? Like
A : Limited_Type renames some_function(...);
I seem to remember something like this. Might be mistaken: I usually
end up to eliminate limited types in my programs against my will,
since they play bad with unlimited base classes (like found in
GtkAda).
>> Oh yes, misconceptions perhaps. But I'v only been talking about
>> peoples motivations (which say a lot about their perceived problems).
>
> Everybody has full rights to use misconceptions as a basis for
> perception and motivation. That's a natural limitation of human brain.
Exactly. And we talked about "Why isn't Ada more popular?". This is
basically about how Ada is perceived, not about "technical truth",
i.e. is Ada really as it is perceived.
>>> I've even heard that Java is better, because it has a String class and
>>> there is no need to use char* as in C++ (!). FUD can buy a lot.
>> As far as that goes I have seen people getting tripped up really bad
>> by string::c_str(). And I think you need it, if you don't program pure
>> C++, which at that time nobody did.
>
> Good point. It is true that people get tripped when interfacing with
> old C code.
> What about interfacing with old C code *from Java*? Are
They don't. And that is the interesting things. The people transitioning
C->Java where a totally different class from those doing (almost at the
same time, not exactly, but almost) the C->C++ transition.
The C++ adopters were often (not necessarily) always motivated by
being able to integrate their existing code base and take their
C-knowhow with them. They felt that they didn't get more removed from
the system, but still had all the interfaces and libraries
available. In a sense this was a no-brainer for the applications
folks. They thought they got OO for free (no downside). Of course the
downside was that old hand C programmers don't necessarily make good
C++ programmers or good OO developers.
The Java adopters on the other side knew that they were giving up
their legacy code (if they had any) but on the up side were rewarded
with a much more complete standard runtime library. Adopting Java
removed you a further step from your host system, so old know-how was
only partly applicable if at all (yes, there is JNI, but it wasn't
commonly used). So Java was adopted by people who (a) thought the win
worth the price they had to pay (basically start from the beginning),
(b) people who realized that their existing code base was crap anyway
:-) and (c) newcomers (companies or students who didn't have any
specific know-how yet and decided to get that know-how in Java.
The last point in my eyes accounts for relatively high numbers of
clueless look-what-we-have-newly-invented (old win in new bottles)
newbies in the Java sector who gave Java a bad name. I still have the
reflex to groan deeply when I hear the words: "We now we are trying to
re-implement this in Java". It's probably unjustified these days but
some years ago that sentence to me was the typical hallmark of a
buzzword chasing newbie who thought that by choosing his favourite
language (probably only his 1st or 2nd one), all problems would
magically go away.
(Sorry, no proofs or sources here, folks. I think the recent history of
programming language development and adoption would bear some more
research. You might apply at my e-mail address to sponsor this
research :-)
When I talk about all those transitions, I see, that there was no
C->Ada transition, at least no mass movement. So we come back to the
C->initial question: Why not?
I think some of the posts here have already given answers to that:
Historical reasons.
Those transitions would have had to happen around 1995-2000 which in
my eyes was a period where people were looking for new languages (GUI
development in C and all this became rather unfeasible at the
time). But a process of bringing the candidate languages into the
public awareness would have to have started earlier. Was the Ada 95
standard just a tiny bit too late (it is understandable that Ada 83
was not a serious contender for this, people were looking for OO
really urgently)? Or was it the vendor situation? GCC has had C++ for
some time, but did GNAT come too late?
I think this is very much the case of being "at the right place at the
right time" -- when people were looking for ways out of their pain,
Java and C++ were (more or less) ready to at least promise
salvation. I wonder if Ada was ready ... -- at least it wasn't in
public discussion then.
> What about interfacing with old C code *from Java*? Are
> there less opportunities for getting tripped, or what?
As I said: It's less part of the overall migration strategy usually
associated with a transition to Java.
> Another misconception. Interfacing to old C is tricky from Ada as
> well.
Yes :-). I never denied that. But then you're less tempted to mix Ada
and C freely, as you're in C++/C. So in Ada (and in Java and in every
other language with a useful foreign function call interface) you get
a clear interface (in the original as in the programming sense of the
word) to C. In C++ the temptation / opportunity to get messed up is
much greater.
> (Strangely, in "pure" C++ you have to use .c_str() even when opening a
> file stream, because fstream constructor does not understand string -
> that's real oops, but actually I've never seen anybody getting tripped
> here.)
If you just do f(s.c_str()) and f _is_ properly behaved, that is, only
reads from the pointer or does a strdup(), everything is fine, but, I
note, not thread safe. I wouldn't exclude the possibility that the
resulting race condition is hidden in a nice number of C++ programs
out there.
>> s.c_str() returned a pointer
>> into some internal data structure of s which promptly changed when s
>> was modified.
>
> Yes.
>
>> The only "safe" way to use it was strdup(s.c_str())
>
> No, the only "safe" way to use it is making an immutable copy of the string:
>
> string someString = "Hello";
> const string safeCopy(someString);
> some_old_C_function(safeCopy.c_str());
Brr. Yes, that's another way to solve this problem.
>
> // modify someString here without influencing anything
> // ...
>
> > and
>> that is not threadsafe as anybody can see.
> Why? There is nothing about threads here.
Your solution is thread safe, if the strings package is (which it
wasn't in the past). My "solution" isn't since, if any other thread
holds a reference to the string in question and modifies it between
c_str() and strdup() we're not working only with suddenly modified
data (which shouldn't happen), but with pointers to invalid memory.
That means: The race has the potential not to be just a race, but to
break type safety! Which is an interaction between the presence of
threads and the semantics of a program which is just so bad bad bad.
>> I see "the need to use
>> char* in C++" rumour as the result of people having been burned
>> by similarly quirks at that time.
>
> Yes, I understand it. Still, most of the rumour comes from misconceptions.
I think this is not about, that you/someone _can_ handle C++
safely. It's about how probable that is to happen without having to
study up on arcane knowledge, by just doing the next best "reasonable"
thing. And that is the area where C++ will trip up, yes, especially
the newcomer.
>>> I think that at the end of the day the embedded C++ will disappear
>>> from the market as the "full" C++ gets wider compiler support on
>>> embedded platforms.
>> That is no question of compiler support, as I understand it, but of
>> verifiability and safety. A bit like Ravenscar, but -- of course --
>> not as highly integer (ahem ... :-).
>
> I agree with it, but restrictions for embedded C++ are not catering
> for the same objectives as those of Ravenscar. For example: EC++ does
> not have templates. Nor even namespaces (:-|). Does it have anything
> to do with schedulability or necessary runtime support? No. Result -
But perhaps with trying to attach at least a feeble resemblance of
semantics to the remaining language and avoid -- heuristically -- the
most common handling errors (namespaces + overloading + "last
identifier defined wins" make a nice mess in C++).
> some people got frustrated and invented this:
>
> http://www.iar.com/p7371/p7371_eng.php
Obviously another market: Minus verifiability (well, of a sort), plus
the ability to compile to really small targets. Useful.
>
> Funny?
>
> That's why I believe that Embedded C++ will die.
That might be.
>>> Subsetting C++ would be beneficial in the sense similar to Ravenscar
>>> or by extracting some core and using it with formal methods (sort of
>>> "SPARK++"), but I doubt it will ever happen.
>> It already did (and perhaps died)
>> http://en.wikipedia.org/wiki/Embedded_C++
> Exactly. It will die, because it's just a subset. If it was a subset
> extended with annotations ("SPARK++") or with anything else, the
> situation would be different, because it would provide new
> possibilities instead of only limiting them.
There is some truth in that.
>
>>> The interesting thing is that memory management is *said* to be
>>> painful.
>
>> I disagree. The only-downward-closures style of C++ and Ada, which
>> allows only to mimic "upward closures" by using classes, heavily
>> influences the way the programmer thinks. Higher level abstractions
>> (as in functional languages) would require full closure -- and since
>> this means that memory life time cannot bound to scope any more, this
>> would be the point where manual memory management becomes painful.
> You can have it by refcounting function frames (and preserving some
> determinism of destructors). GC is not needed for full closures, as
> far as I perceive it (with all my misconceptions behind ;-) ).
Yes, one could do it like that. Ref-counting is rumoured to be
inefficient, but if you don't have too many closure that might just
work.
> On the other hand, GC is convincing with some lock-free algorithms.
> Now, *this* is a tough subject for Ada community, right? ;-)
:-).
>
>> Furthermore I've been convinced that manual memory management hinders
>> modularity.
> Whereas I say that I don't care about manual memory management in my
> programs. You can have modularity without GC.
Certainly. But you can have more with GC. George Bauhaus recently
refered to the "A Critique of Standard ML" by Andrew W. Appel.
http://www.cs.princeton.edu/research/techreps/TR-364-92
I re-read that paper cursorily and noticed that there a some nice
points about the desirability of GC in there (approx. 1 page) I
suggest you read that: It says it better than I could say it, that w/o
GC the responsibility for freeing/disposing of allocated storage is
always/often a difficult question in general.
People who don't have GC often say that they can do anything with
manual memory management. I humbly suggest that might be, because they
already think about their solutions in terms compatible with manual
memory management. Which means, they are missing the perception of
those opportunities where GC would buy a vastly simpler architecture /
solution / whatever.
> (And if I think about all these funny GC-related effects like
> resurrection of objects in Java, then I'm not sure what kind of
> modularity you are referring to. ;-) )
Resurrection? You're talking about finalization in Java? Well -- the
way this is designed it's just a perversion.
>>> Reference-oriented languages have completely
>>> different ratio of "new per kLOC" so GC is not a feature there, it's a
>>> must.
>> I wonder, if it is really possible to do OO without being
>> reference-oriented. I somewhat doubt it.
> Why? OO is about encapsulation and polymorphism, these don't need
> references everywhere.
Yes, but -- you want to keep, say, a list of Shape(s). Those can be
Triangle(s), Circle(s) etc, which are all derived from class
Shape. How do you store this list? An array of Shape'Class is out of
question because of the different allocation requirements for the
descendants of Shape(s).
>>> But then the question is not whether GC is better, but whether
>>> reference-oriented languages are better than value-oriented ones. Many
>>> people get seduced by GC before they even start asking such questions.
>> Value-oriented in my world would be functional -- languages which all
>> heavily rely on GC.
> What about maintainability and reasoning?
What about it? It's easy with value-oriented languages (i.e. languages
that just produce new values from old ones in a non-destructive
fashion). Functional languages do this therefore reasoning is a well
developed art there. But the representations of all those values
(trees, lists, ...) (a) rely heavily on representation sharing and (b)
use references because of that. They need and use GC.
>> I also admit being one of the seduced, but that is not surprising
>> since my main focus is not in embedded programming and in everything
>> else it's sheer folly not to have GC.
>
> I disagree. I'm not seduced.
So stay pure. :-)
Reminds me a bit to the folks in youth who didn't want to use high
level languages like (gasp) C or Pascal, and insisted on Assembler,
because they (a) wanted to be efficient all the time, (b) didn't trust
the compiler.
I've decided, if I want to deliver any interesting functionality to
the end user, my resources (developer time) are limited, I have to
leave everything I can to automation (i.e. compilers, garbage
collectors, even libraries), to be able to reach my lofty goals.
>> The arguments against GC often read like arguments against virtual
>> memory, against high level languages as opposed to assembler,
>> against file systems (yes there was a time when some people thought
>> that the application would best do allocation of disc cylinders
>> itself since it knows its access patterns better than the FS).
> Valid points. Still,
> Blue Gene/L uses real addressing scheme in each
> node and more advanced database servers use raw disk access bypassing
> the features provided by FS. Guess why.
Yes. Certainly. The point is to know when to optimise, not to do it
always. Like I said elsewhere: I advocate the use of a type safe,
garbage collected language more in the functional sector, probably,
together with a good foreign function call interface and a real low
level language for interfacing and, perhaps, hot spot
optimisation.
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-02-02 13:57 ` Markus E Leypold
@ 2007-02-05 9:59 ` Maciej Sobczak
2007-02-05 13:43 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-05 9:59 UTC (permalink / raw)
Markus E Leypold wrote:
[I agree with what you say on historical perspective on language
transitions and the probabilistic effects that languages have on
newbies, so this part was cut.]
> If you just do f(s.c_str()) and f _is_ properly behaved, that is, only
> reads from the pointer or does a strdup(), everything is fine, but, I
> note, not thread safe. I wouldn't exclude the possibility that the
> resulting race condition is hidden in a nice number of C++ programs
> out there.
If you have a race condition because of some thread is modifying a
string object *while* some other thread is using it, then you have a
heavy design problem. This is absolutely not related to interfacing with C.
> Your solution is thread safe, if the strings package is (which it
> wasn't in the past).
Strings package cannot make it any better, because the granularity of
thread-safety results from the program logic, not from the package
interface. String is too low-level (it's a general utility) to be
thread-safe in any useful sense. That's why: a) it should not be
thread-safe on its own, b) you still have a design problem.
Interestingly, Ada doesn't make it any better. Neithe does Java. You
always need to coordinate threads/tasks/whatever on some higher
conceptual level than primitive string operations.
[about closures]
>> You can have it by refcounting function frames (and preserving some
>> determinism of destructors). GC is not needed for full closures, as
>> far as I perceive it (with all my misconceptions behind ;-) ).
>
> Yes, one could do it like that. Ref-counting is rumoured to be
> inefficient
Which relates to cascading destructors, not to function frames.
> but if you don't have too many closure that might just
> work.
If you have too many closures, then well, you have too many closures. :-)
We've been talking not only about performance, but also about
readability and maintenance. ;-)
>>> Furthermore I've been convinced that manual memory management hinders
>>> modularity.
>
>> Whereas I say that I don't care about manual memory management in my
>> programs. You can have modularity without GC.
>
> Certainly. But you can have more with GC.
In a strictly technical sense of the word, yes. But then there comes a
question about possible loses in other areas, like program structure or
clarity.
Being able to just drop things on the floor is a nice feature when
considered in isolation, but not necessarily compatible with other
objectives that must be met at the same time.
> People who don't have GC often say that they can do anything with
> manual memory management.
And I say that this is misconception. I don't have/use GC and I don't
bother with *manual* memory management neither. That's the point. In Ada
this point is spelled [Limited_]Controlled (it's a complete mess, but
that's not the fault of the concept) and in C++ it's spelled automatic
storage duration.
Today manual memory management is a low-level thingy that you don't have
to care about, unless you *really* want to (and then it's really good
that you can get it). And as I've already pointed out, in my regular
programming manual memory management is a rarity.
On the other hand, most languages with GC get it wrong by relying *only*
on GC, everywhere, whereas it is useful (if at all) only for memory. The
problem is that few programs rely on only memory and in a typical case
there are lots of resources that are not memory oriented and they have
to be managed, somehow. When GC is a shiny center of the language, those
other kinds of resources suffer from not having appropriate support. In
practical terms, you don't have manual management of memory, but you
have *instead* manual management of *everything else* and the result is
either code bloat or more bugs (or both, typically).
Languages like Ada or C++ provide more general solution, which is
conceptually not related to any kind of resource and can be therefore
applied to every one. The result is clean, short and uniform code, which
is even immune to extensions in the implementation of any class. Think
about adding a non-memory resource to a class that was up to now only
memory oriented - if it requires any modification on the client side,
like adding tons of finally blocks and calls to
close/dispose/dismiss/etc. methods *everywhere*, then in such a language
the term "encapsulation" is a joke.
An ideal solution seems to be a mix of both (GC and automatic objects),
but I think that the industry needs a few generations of failed attempts
to get this mix right. We're not yet there.
>> OO is about encapsulation and polymorphism, these don't need
>> references everywhere.
>
> Yes, but -- you want to keep, say, a list of Shape(s). Those can be
> Triangle(s), Circle(s) etc, which are all derived from class
> Shape. How do you store this list? An array of Shape'Class is out of
> question because of the different allocation requirements for the
> descendants of Shape(s).
Why should I bother?
Note also that I didn't say that references/pointers should be dropped.
I say that you don't need them everywhere. That's a difference.
> I've decided, if I want to deliver any interesting functionality to
> the end user, my resources (developer time) are limited, I have to
> leave everything I can to automation (i.e. compilers, garbage
> collectors, even libraries), to be able to reach my lofty goals.
I also leave everything I can to automation. It's spelled
[Limited_]Controlled in Ada and automatic storage duration in C++.
I cannot imagine reaching my lofty goals otherwise. ;-)
> The point is to know when to optimise, not to do it
> always.
I didn't even mention the word "optimization". I'm taling about structure.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-02-05 9:59 ` Maciej Sobczak
@ 2007-02-05 13:43 ` Markus E Leypold
2007-02-06 9:15 ` Maciej Sobczak
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-02-05 13:43 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Markus E Leypold wrote:
>
> [I agree with what you say on historical perspective on language
> transitions and the probabilistic effects that languages have on
> newbies, so this part was cut.]
>
>> If you just do f(s.c_str()) and f _is_ properly behaved, that is, only
>> reads from the pointer or does a strdup(), everything is fine, but, I
>> note, not thread safe. I wouldn't exclude the possibility that the
>> resulting race condition is hidden in a nice number of C++ programs
>> out there.
>
> If you have a race condition because of some thread is modifying a
> string object *while* some other thread is using it, then you have a
> heavy design problem. This is absolutely not related to interfacing
> with C.
Yes, I realize that. Still, providing a pointer to a the inner state
of an object, that only stays valid until I touch the object for the
next time is not only not thread safe (which I realized is not the
problem) but it's not type safe: A error in programming leads to
erronous execution, i.e. reading and writing invalid memory. That is
worse.
I think there was talk once about a thread safe string library, but at
the moment I fail to see how that relates to the problem in question.
>
>> Your solution is thread safe, if the strings package is (which it
>> wasn't in the past).
>
> Strings package cannot make it any better, because the granularity of
> thread-safety results from the program logic, not from the package
> interface. String is too low-level (it's a general utility) to be
> thread-safe in any useful sense. That's why: a) it should not be
> thread-safe on its own, b) you still have a design problem.
Yes. I realize that. Don't know what made me write that :-).
> Interestingly, Ada doesn't make it any better. Neithe does Java. You
> always need to coordinate threads/tasks/whatever on some higher
> conceptual level than primitive string operations.
So forgive me. Let's ditch the thread safety aspect and instead:
Giving pointers to internal state of objects violates (a)
encapsulation (it fixes a specific implementation) and (b) is not type
safe. I'm sure we can hang c_str() on account of this charge alone and
can drop the thread-unsafety allegation.
> [about closures]
>
>>> You can have it by refcounting function frames (and preserving some
>>> determinism of destructors). GC is not needed for full closures, as
>>> far as I perceive it (with all my misconceptions behind ;-) ).
>> Yes, one could do it like that. Ref-counting is rumoured to be
>> inefficient
>
> Which relates to cascading destructors, not to function frames.
My impression was it relates to both. Especially since both are
interlocked: In a world with closure objects (from OO) and variables
can refer to closures (function frames) and vice versa).
>> but if you don't have too many closure that might just
>> work.
>
> If you have too many closures, then well, you have too many closures. :-)
Yes :-). Only in a ref counted implementation even too many might not
be enough.
> We've been talking not only about performance, but also about
> readability and maintenance. ;-)
Of this thread? :-)
>
>>>> Furthermore I've been convinced that manual memory management hinders
>>>> modularity.
>>
>>> Whereas I say that I don't care about manual memory management in my
>>> programs. You can have modularity without GC.
>> Certainly. But you can have more with GC.
>
> In a strictly technical sense of the word, yes. But then there comes a
> question about possible loses in other areas, like program structure
> or clarity.
I think the absence of manual memory management code actually furthers
clarity.
>
> Being able to just drop things on the floor is a nice feature when
> considered in isolation, but not necessarily compatible with other
> objectives that must be met at the same time.
Which?
>> People who don't have GC often say that they can do anything with
>> manual memory management.
>
> And I say that this is misconception. I don't have/use GC and I don't
> bother with *manual* memory management neither. That's the point. In
> Ada this point is spelled [Limited_]Controlled (it's a complete mess,
> but that's not the fault of the concept) and in C++ it's spelled
> automatic storage duration.
My impression was that Ada Controlled storage is actually quite a
clean concept compared to C++ storage duration.
But both tie allocation to program scope, synchronous with a stack. I
insist that is not always desirable: It rules out some architecture,
especially those where OO abounds.
The problem with Controlled, BTW, is that it seems to interact with
the rest of the language in such a way that GNAT didn't get it right
even after ~10 years of development. Perhaps difficult w/o a formal
semantics.
> Today manual memory management is a low-level thingy that you don't
> have to care about, unless you *really* want to (and then it's really
> good that you can get it). And as I've already pointed out, in my
> regular programming manual memory management is a rarity.
> On the other hand, most languages with GC get it wrong by relying
> *only* on GC, everywhere, whereas it is useful (if at all) only for
> memory.
I've heard that complaint repeatedly, but still do not understand it.
> The problem is that few programs rely on only memory and in a
> typical case there are lots of resources that are not memory oriented
> and they have to be managed, somehow.
> When GC is a shiny center of the
> language, those other kinds of resources suffer from not having
> appropriate support. In practical terms, you don't have manual
> management of memory, but you have *instead* manual management of
> *everything else* and the result is either code bloat or more bugs (or
> both, typically).
Now, now. Having GC doesn't preclude you from managing ressources
unrelated to memory in a manual fashion. Apart from that languages
with GC often provide nice tricks to tie external ressources to their
memory proxy and ditch them when the memory proxy is unreachable
(i.e. the programm definitely won't use the external ressource any
longer). Examples: IO channels (only sometimes useful), temporary
files, files locks, shared memory allocations. Even if you manage
ressources manually, GC still limits the impact of leaks. And BTW - in
fcuntional langauges you can do more against ressource leaks, sicne
you can "wrap" functions:
(with_file "output" (with_file "out_put" copy_data))
It's not always done, but a useful micro pattern.
> Languages like Ada or C++ provide more general solution, which is
> conceptually not related to any kind of resource and can be
> therefore applied to every one.
Since you're solving a problem here, which I deny that it exists, I
can't follow you here. But I notice, that
"Languages like C provide a more general solution (with regard to
accessing memory), which is conceptually not related to any kind of
fixed type system and can therefore implement any type and data model"
would become a valid argument if I agreed with you. It's the
generality we are getting rid of during the evolution of programming
languages. Assembler is the "most general" solution, but we are
getting structured programming, typesystems amd finally garbage
collection.
> The result is clean, short and uniform code,
>which is even immune to extensions in the implementation of any
>class. Think about adding a non-memory resource to a class that was
>up to now only memory oriented - if it requires any modification on
>the client side, like adding tons of finally blocks and calls to
>close/dispose/dismiss/etc. methods *everywhere*, then in such a
>language the term "encapsulation" is a joke.
Well, you think Ada here. In an FP I write (usually) something like:
with_lock "/var/foo/some.lck" (fun () -> do_something1 (); do_something2 param; ...).
The fact that Ada and C++ don't have curried functions and cannot
construct unnamed functions or procedures is really limiting in this
case and probably causal to your misconception that it would be
necessary to add tons of exceaption handling at the client side.
And BTW: In Ada I would encapsulate the ressource in a Controlled
object (a ressource proxy or handle) and get the same effect (tying it
to a scope). Indeed I have already done so, to make a program which
uses quite a number of locks, to remove locks when it terminated or
crashes. Works nicely.
> An ideal solution seems to be a mix of both (GC and automatic
> objects), but I think that the industry needs a few generations of
> failed attempts to get this mix right. We're not yet there.
>>> OO is about encapsulation and polymorphism, these don't need
>>> references everywhere.
>> Yes, but -- you want to keep, say, a list of Shape(s). Those can be
>> Triangle(s), Circle(s) etc, which are all derived from class
>> Shape. How do you store this list? An array of Shape'Class is out of
>> question because of the different allocation requirements for the
>> descendants of Shape(s).
> Why should I bother?
>
> Note also that I didn't say that references/pointers should be
> dropped. I say that you don't need them everywhere. That's a
> difference.
OK, so you need them _almost_ everywhere :-). I take your point.
>> I've decided, if I want to deliver any interesting functionality to
>> the end user, my resources (developer time) are limited, I have to
>> leave everything I can to automation (i.e. compilers, garbage
>> collectors, even libraries), to be able to reach my lofty goals.
>
> I also leave everything I can to automation. It's spelled
> [Limited_]Controlled in Ada and automatic storage duration in C++.
> I cannot imagine reaching my lofty goals otherwise. ;-)
Good. 'Controlled' buys you a lot in Ada, but there are 2 problems
(a) AFAIS (that is still my hypothesis, binding storage to scope is
not alway possible (esp. when doing GUIs and MVC and this
like). I cannot prove but from what I experienced I rather
convinced of it.
(b) AFAIR there are restrictions on _where_ I can define controlled
types. AFAIR that was a PITA.
>> The point is to know when to optimise, not to do it
>> always.
> I didn't even mention the word "optimization". I'm taling about structure.
OK. But how does a program become less structured by removing the
manual memory management? The GC is not magically transforming the
program into spaghetti code ...
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-02-05 13:43 ` Markus E Leypold
@ 2007-02-06 9:15 ` Maciej Sobczak
2007-02-06 11:45 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-06 9:15 UTC (permalink / raw)
Markus E Leypold wrote:
> Let's ditch the thread safety aspect and instead:
> Giving pointers to internal state of objects violates (a)
> encapsulation (it fixes a specific implementation) and (b) is not type
> safe.
That's right. This is a result of the fact that C-style strings are not
encapsulated at all and interfacing to them means stepping down to the
common ground.
>> We've been talking not only about performance, but also about
>> readability and maintenance. ;-)
>
> Of this thread? :-)
:-)
>>>>> Furthermore I've been convinced that manual memory management hinders
>>>>> modularity.
>>>> Whereas I say that I don't care about manual memory management in my
>>>> programs. You can have modularity without GC.
>>> Certainly. But you can have more with GC.
>> In a strictly technical sense of the word, yes. But then there comes a
>> question about possible loses in other areas, like program structure
>> or clarity.
>
> I think the absence of manual memory management code actually furthers
> clarity.
I believe so. And I stress again - GC is not the only solution for
manual memory management.
>> Being able to just drop things on the floor is a nice feature when
>> considered in isolation, but not necessarily compatible with other
>> objectives that must be met at the same time.
>
> Which?
Determinism in both timing and resource consumption?
>>> People who don't have GC often say that they can do anything with
>>> manual memory management.
>> And I say that this is misconception. I don't have/use GC and I don't
>> bother with *manual* memory management neither. That's the point. In
>> Ada this point is spelled [Limited_]Controlled (it's a complete mess,
>> but that's not the fault of the concept) and in C++ it's spelled
>> automatic storage duration.
>
> My impression was that Ada Controlled storage is actually quite a
> clean concept compared to C++ storage duration.
Clean? It adds tag to the type, which then becomes a controlling type in
every primitive operation. I got bitten by this recently.
Adding a destructor to C++ class never has any side effects like this.
Apart from this, the bare existence of *two* base types Controlled and
Limited_Controlled means that the concepts of controlled and limited are
not really orthogonal in the sense that adding one of these
meta-properties affects the interface that is "shared" by the other aspect.
It's a mess. Actually, it prevents me from thinking clearly about what I
want to achieve.
> But both tie allocation to program scope, synchronous with a stack. I
> insist that is not always desirable: It rules out some architecture,
> especially those where OO abounds.
What architecture?
> The problem with Controlled, BTW, is that it seems to interact with
> the rest of the language in such a way that GNAT didn't get it right
> even after ~10 years of development. Perhaps difficult w/o a formal
> semantics.
You see.
>> On the other hand, most languages with GC get it wrong by relying
>> *only* on GC, everywhere, whereas it is useful (if at all) only for
>> memory.
> Now, now. Having GC doesn't preclude you from managing ressources
> unrelated to memory in a manual fashion.
Of course. No, thank you. I prefer a language which enables me to use
the same logic for all resources, so I *don't have to* manage *anything*
manually.
In other words, it's very nice that GC doesn't preclude me from doing
some stuff manually, but that's not enough.
> Apart from that languages
> with GC often provide nice tricks to tie external ressources to their
> memory proxy and ditch them when the memory proxy is unreachable
These "nice tricks" are not so nice. Most of all, they provide no
guarantee whatsoever, even that they will be invoked at all.
A friend of mine spent long evenings recently hunting for database
connection leaks in a big Java application. That's telling something.
> And BTW - in
> fcuntional langauges you can do more against ressource leaks, sicne
> you can "wrap" functions:
>
> (with_file "output" (with_file "out_put" copy_data))
>
> It's not always done, but a useful micro pattern.
Yes, it basically emulates something that is just natural in those
languages that provide scope-based lifetime out of the box.
>> Languages like Ada or C++ provide more general solution, which is
>> conceptually not related to any kind of resource and can be
>> therefore applied to every one.
>
> Since you're solving a problem here, which I deny that it exists
You might wish to tell this to my friend - the one hunting database
connection leaks. :-)
> But I notice, that
>
> "Languages like C provide a more general solution (with regard to
> accessing memory), which is conceptually not related to any kind of
> fixed type system and can therefore implement any type and data model"
>
> would become a valid argument if I agreed with you.
Except that it's not the point I'm making.
> In an FP I write (usually) something like:
>
> with_lock "/var/foo/some.lck" (fun () -> do_something1 (); do_something2 param; ...).
>
> The fact that Ada and C++ don't have curried functions and cannot
> construct unnamed functions or procedures is really limiting in this
> case and probably causal to your misconception that it would be
> necessary to add tons of exceaption handling at the client side.
Tons of exception handling (and not only - every way to leave a scope
needs to be guarded, not only by exception) are necessary in those
languages that rely on GC without providing the above possibility at the
same time. The other possibility is to rely on scoped lifetime in the
first place, where neither GC nor the above tricks are necessary to
achieve proper cleanup.
> And BTW: In Ada I would encapsulate the ressource in a Controlled
> object (a ressource proxy or handle) and get the same effect (tying it
> to a scope).
Yes.
> Indeed I have already done so, to make a program which
> uses quite a number of locks, to remove locks when it terminated or
> crashes. Works nicely.
Of course. That's my point.
(except, maybe, the crashing part, when likely there is nobody to handle
the cleanup)
>> Note also that I didn't say that references/pointers should be
>> dropped. I say that you don't need them everywhere. That's a
>> difference.
>
> OK, so you need them _almost_ everywhere :-). I take your point.
No, you don't. I agree for references/pointers in polymorphic
collections. That's not even close to "almost everywhere" for me, but
your application domain may differ.
> 'Controlled' buys you a lot in Ada, but there are 2 problems
>
> (a) AFAIS (that is still my hypothesis, binding storage to scope is
> not alway possible (esp. when doing GUIs and MVC and this
> like). I cannot prove but from what I experienced I rather
> convinced of it.
I don't follow this.
> (b) AFAIR there are restrictions on _where_ I can define controlled
> types. AFAIR that was a PITA.
That's a mess. I'm sorry to repeat that.
> But how does a program become less structured by removing the
> manual memory management? The GC is not magically transforming the
> program into spaghetti code ...
You get spathetti once you start adding finalizers - the spaghetti is
then formed in both time (when something is invoked) and space (where
the code is).
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-02-06 9:15 ` Maciej Sobczak
@ 2007-02-06 11:45 ` Markus E Leypold
2007-02-06 14:16 ` Maciej Sobczak
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-02-06 11:45 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
>> I think the absence of manual memory management code actually
>> furthers
>> clarity.
>
> I believe so. And I stress again - GC is not the only solution for
> manual memory management.
OK. I accept that for the moment. I'm just not convinced you can do
everything you need with scope bound memory and if you introduce
manual MM again, you'll be back to confusing manual memory
management. I say this just to restate my point of view clearly once
again -- not that I think that either of us can prove his position at
the moment.
>>> Being able to just drop things on the floor is a nice feature when
>>> considered in isolation, but not necessarily compatible with other
>>> objectives that must be met at the same time.
>> Which?
>
> Determinism in both timing and resource consumption?
Which brings me back to what I said repeatedly to other people:
(1) That this determinism is very often not a requirement (outside of
embdedded programming)
(2) The determinism is shot anyway by using a heap, not by using
GC. Even better: GC can introduce detereminism in space
consumption by compacting the heap, which naive heaps with manual
MM don't do because of fragementation.
(3) What is often needed are upper limits not determinism and thos
upper limits can be guaranteed with GC or with an appropriate
collector.
>>>> People who don't have GC often say that they can do anything with
>>>> manual memory management.
>>> And I say that this is misconception.
Good. :-)
>>> Ada this point is spelled [Limited_]Controlled (it's a complete mess,
>>> but that's not the fault of the concept) and in C++ it's spelled
>>> automatic storage duration.
>> My impression was that Ada Controlled storage is actually quite a
>> clean concept compared to C++ storage duration.
> Clean? It adds tag to the type, which then becomes a controlling type
> in every primitive operation.
> I got bitten by this recently. Adding a destructor to C++ class
> never has any side effects like this.
I understand. But the Ada OO way is peculiar, but not unmanagable.
> Apart from this, the bare existence of *two* base types Controlled and
> Limited_Controlled means that the concepts of controlled and limited
> are not really orthogonal in the sense that adding one of these
> meta-properties affects the interface that is "shared" by the other
> aspect.
Still. Being able to add a Finalize means you need to have a tagged
type. I see no alternative.
> It's a mess. Actually, it prevents me from thinking clearly about what
> I want to achieve.
Wow.
>
>> But both tie allocation to program scope, synchronous with a stack. I
>> insist that is not always desirable: It rules out some architecture,
>> especially those where OO abounds.
>
> What architecture?
I already say in another post: That is difficult to show with a toy
system. It only shows in larger systems where you really can't / don't
want to say in any give subsystem module how long a certain peice of
data lives. So none of those can be burdened with deallocating it.
>> The problem with Controlled, BTW, is that it seems to interact with
>> the rest of the language in such a way that GNAT didn't get it right
>> even after ~10 years of development. Perhaps difficult w/o a formal
>> semantics.
> You see.
Yes, I see. But GNAT is also a political problem (see the role of
AdaCore, formerly ACT), so (public) GNAT not getting things right
might well not indicate a problem with reading the Ada standard, but
in the release politics for public version. My hint: There is no
incentive to release a high quality public version GNAT.
>>> On the other hand, most languages with GC get it wrong by relying
>>> *only* on GC, everywhere, whereas it is useful (if at all) only for
>>> memory.
>
>> Now, now. Having GC doesn't preclude you from managing ressources
>> unrelated to memory in a manual fashion.
>
> Of course. No, thank you. I prefer a language which enables me to use
> the same logic for all resources, so I *don't have to* manage
> *anything* manually.
Which as you said yourself, is difficult to do. And memory is
ressource used most frequently, whereas the temptation to e.g. drop
file descriptors is much less.
> In other words, it's very nice that GC doesn't preclude me from doing
> some stuff manually, but that's not enough.
I'm appalled: You don't want GC, but no, it doesn't do enough for you?
Of yourse YMMV. but when I have it, it works really well for me.
>> Apart from that languages
>> with GC often provide nice tricks to tie external ressources to their
>> memory proxy and ditch them when the memory proxy is unreachable
>
> These "nice tricks" are not so nice. Most of all, they provide no
> guarantee whatsoever, even that they will be invoked at all.
That's not quite true. Those tricks are building blocks to implement
ressources that are automatically finalized when becoming
unreachable. But it's up to the library author to write a complete
implementation.
> A friend of mine spent long evenings recently hunting for database
> connection leaks in a big Java application. That's telling something.
Well -- so he was naive and should have handled / understood that part
of the system better. A friend of mine spent half a month with finding
problems with manual allocation/deallocation and sneaking heap
corruption. Does that prove anything? I don't think so.
>> And BTW - in
>> fcuntional langauges you can do more against ressource leaks, sicne
>> you can "wrap" functions:
>> (with_file "output" (with_file "out_put" copy_data))
>> It's not always done, but a useful micro pattern.
> Yes, it basically emulates something that is just natural in those
> languages that provide scope-based lifetime out of the box.
This is no emulation, but how FP does "scope based". Without the
necessity to add exception handling at the client side or without
having to introduce tagged types / classes. Isn't THAT nice? :-)
>
>>> Languages like Ada or C++ provide more general solution, which is
>>> conceptually not related to any kind of resource and can be
>>> therefore applied to every one.
>> Since you're solving a problem here, which I deny that it exists
> You might wish to tell this to my friend - the one hunting database
> connection leaks. :-)
Yes, I'll hold that up. Your friend got bitten by believing in a
mechanism where he shouldn't while I deny the the necessity to manage
other ressources by GC for the general case. It's a nice trick
sometimes, but one doesn't need it.
>> But I notice, that
>> "Languages like C provide a more general solution (with regard to
>> accessing memory), which is conceptually not related to any kind of
>> fixed type system and can therefore implement any type and data model"
>> would become a valid argument if I agreed with you.
>
> Except that it's not the point I'm making.
No, but the structure of the argument is basically the same. The
analogy should help to show why it is (IMHO) invalid.
>
>> In an FP I write (usually) something like:
>> with_lock "/var/foo/some.lck" (fun () -> do_something1 ();
>> do_something2 param; ...).
>> The fact that Ada and C++ don't have curried functions and cannot
>> construct unnamed functions or procedures is really limiting in this
>> case and probably causal to your misconception that it would be
>> necessary to add tons of exceaption handling at the client side.
> Tons of exception handling (and not only - every way to leave a scope
> needs to be guarded, not only by exception) are necessary in those
> languages that rely on GC without providing the above possibility at
> the same time.
No. I've done the same in Ada w/o controlled objects, but using a
generic procedure.
procedure mark_data_records is new process_cache_with_lock( Operation => mark_record, ... );
begin
mark_data_records(...);
end;
The client side has no burden with exceaption handling.
> The other possibility is to rely on scoped lifetime in
> the first place, where neither GC nor the above tricks are necessary
> to achieve proper cleanup.
Always assumed that works as a general apporach. Personally I cherish
the additional freedom I get from GC.
>> And BTW: In Ada I would encapsulate the ressource in a Controlled
>> object (a ressource proxy or handle) and get the same effect (tying it
>> to a scope).
>
> Yes.
>
>> Indeed I have already done so, to make a program which
>> uses quite a number of locks, to remove locks when it terminated or
>> crashes. Works nicely.
>
> Of course. That's my point.
> (except, maybe, the crashing part, when likely there is nobody to
> handle the cleanup)
By "crashes" I mean uncaught exceptions propagating back to the main
procedure. In those cases Finalize() runs.
I've BTW, done the same on OCaml (library for automatically
deallocating locks), so I don't see how GC prevents me from doing so.
>>> Note also that I didn't say that references/pointers should be
>>> dropped. I say that you don't need them everywhere. That's a
>>> difference.
>> OK, so you need them _almost_ everywhere :-). I take your point.
> No, you don't. I agree for references/pointers in polymorphic
> collections. That's not even close to "almost everywhere" for me, but
> your application domain may differ.
Yes. it does, abviously. You might not be aware, but code destined for
mere consumers (as opposed to embedded code and code destined as tools
for other developers) has a large amount of GUI code in it.
>> 'Controlled' buys you a lot in Ada, but there are 2 problems
>> (a) AFAIS (that is still my hypothesis, binding storage to scope is
>> not alway possible (esp. when doing GUIs and MVC and this
>> like). I cannot prove but from what I experienced I rather
>> convinced of it.
>
> I don't follow this.
>
>> (b) AFAIR there are restrictions on _where_ I can define controlled
>> types. AFAIR that was a PITA.
>
> That's a mess. I'm sorry to repeat that.
Yes. But does C++ do it better? The Ada restrictions AFAIK come from
the necessity of separate linking and compilation (you must be able to
relink w/o looking at the body) and C++ treats that against the
ability to add finalizers everyhwere.
>> But how does a program become less structured by removing the
>> manual memory management? The GC is not magically transforming the
>> program into spaghetti code ...
> You get spathetti once you start adding finalizers - the spaghetti is
> then formed in both time (when something is invoked) and space (where
> the code is).
No. Neither GC nore finalizers make code incomprehensible. I can only
assert it again, since are not discussing proofs here ore specific
examples.
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-02-06 11:45 ` Markus E Leypold
@ 2007-02-06 14:16 ` Maciej Sobczak
2007-02-06 15:44 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-06 14:16 UTC (permalink / raw)
Markus E Leypold wrote:
>> And I stress again - GC is not the only solution for
>> manual memory management.
>
> OK. I accept that for the moment. I'm just not convinced you can do
> everything you need with scope bound memory
Sure. But for the sake of discussion completeness, you might wish to
throw an example of a situation where scoped lifetime will not make it.
>> Determinism in both timing and resource consumption?
>
> Which brings me back to what I said repeatedly to other people:
>
> (1) That this determinism is very often not a requirement (outside of
> embdedded programming)
Java programmer wrote a loop where he opened database cursors, released
in the cursor finalizer. All was working like a charm, unless put into
production, when in one case the loop had to spin many more times than
he ever cared to test. GC did not clean up the abandoned cursor objects
fast enough and the number of unnecessarily opened cursors hit the
server limit. That was the end of this application.
The fix was easy: write explicit close/dispose/dismiss/whatever at the
end of the loop, so that effectively there was never more than one open
cursor. In fact, this was *manual* resource management.
The above would be avoided altogether with scoped lifetime.
You are right that determinism is very often not a requirement. It is
just the life that very often shows that the initial requirements were
not complete.
> (2) The determinism is shot anyway by using a heap, not by using
> GC. Even better: GC can introduce detereminism in space
> consumption by compacting the heap, which naive heaps with manual
> MM don't do because of fragementation.
There is nothing particular in scoped lifetime that would prohibit
compacting heaps and there is nothing particular in GC that guarantees
it. It's just the statistics based on popular implementations, not a rule.
I can perfectly imagine compacting heaps managed by scoped lifetime.
> (3) What is often needed are upper limits not determinism and thos
> upper limits can be guaranteed with GC or with an appropriate
> collector.
This refers to memory consumption only, whereas I clearly stated
deterministic *time* as a second (first, actually) goal.
>>> My impression was that Ada Controlled storage is actually quite a
>>> clean concept compared to C++ storage duration.
>
>> Clean? It adds tag to the type, which then becomes a controlling type
>> in every primitive operation.
>
>> I got bitten by this recently. Adding a destructor to C++ class
>> never has any side effects like this.
>
> I understand. But the Ada OO way is peculiar, but not unmanagable.
OK, I accept the word "peculiar". I only oppose "quite a clean concept"
in your previous post. :-)
>> Apart from this, the bare existence of *two* base types Controlled and
>> Limited_Controlled means that the concepts of controlled and limited
>> are not really orthogonal in the sense that adding one of these
>> meta-properties affects the interface that is "shared" by the other
>> aspect.
>
> Still. Being able to add a Finalize means you need to have a tagged
> type. I see no alternative.
You might want to take a look at C++.
>>> But both tie allocation to program scope, synchronous with a stack. I
>>> insist that is not always desirable: It rules out some architecture,
>>> especially those where OO abounds.
>> What architecture?
>
> I already say in another post: That is difficult to show with a toy
> system. It only shows in larger systems where you really can't / don't
> want to say in any give subsystem module how long a certain peice of
> data lives. So none of those can be burdened with deallocating it.
OK. What about refcounting with smart pointers?
>>> The problem with Controlled, BTW, is that it seems to interact with
>>> the rest of the language in such a way that GNAT didn't get it right
>>> even after ~10 years of development. Perhaps difficult w/o a formal
>>> semantics.
>
>> You see.
>
> Yes, I see. But GNAT is also a political problem (see the role of
> AdaCore, formerly ACT), so (public) GNAT not getting things right
> might well not indicate a problem with reading the Ada standard, but
> in the release politics for public version. My hint: There is no
> incentive to release a high quality public version GNAT.
I get the message. Clear enough.
>> In other words, it's very nice that GC doesn't preclude me from doing
>> some stuff manually, but that's not enough.
>
> I'm appalled: You don't want GC, but no, it doesn't do enough for you?
Exactly. It's not enough, because it doesn't solve the problem of
resource management in a general way.
> Of yourse YMMV. but when I have it, it works really well for me.
I acknowledge that there might be some applications which are strictly
memory-oriented. They are just not the ones I usually write.
>>> Apart from that languages
>>> with GC often provide nice tricks to tie external ressources to their
>>> memory proxy and ditch them when the memory proxy is unreachable
>> These "nice tricks" are not so nice. Most of all, they provide no
>> guarantee whatsoever, even that they will be invoked at all.
>
> That's not quite true. Those tricks are building blocks to implement
> ressources that are automatically finalized when becoming
> unreachable. But it's up to the library author to write a complete
> implementation.
I don't understand. If the is no guarantee that the finalizer will be
*ever* called, then what kind of building block it is?
>> A friend of mine spent long evenings recently hunting for database
>> connection leaks in a big Java application. That's telling something.
>
> Well -- so he was naive and should have handled / understood that part
> of the system better.
Sure. In other words, be prepared that with GC you have to
handle/understand some parts of the sytem better.
> A friend of mine spent half a month with finding
> problems with manual allocation/deallocation and sneaking heap
> corruption. Does that prove anything? I don't think so.
It does prove that your friend did not benefit from the language that
provides scoped lifetime.
>>> And BTW - in
>>> fcuntional langauges you can do more against ressource leaks, sicne
>>> you can "wrap" functions:
>>> (with_file "output" (with_file "out_put" copy_data))
>>> It's not always done, but a useful micro pattern.
>
>> Yes, it basically emulates something that is just natural in those
>> languages that provide scope-based lifetime out of the box.
>
> This is no emulation, but how FP does "scope based". Without the
> necessity to add exception handling at the client side or without
> having to introduce tagged types / classes. Isn't THAT nice? :-)
Same thing with scoped lifetime, as implemented in C++. No need for
exception handling (unless handling is actually meaninful), nor for
changes in the interface. That's nice, I agree.
The difference is that in languages with scoped lifetime the lifetime
management is a property of the type (and so applies to all instances),
whereas the "FP-trick" above is a property of the use-side. Which one is
more robust and less prone to bugs?
BTW - please show me an example involving 10 objects of different kinds. :-)
>>> But I notice, that
>>> "Languages like C provide a more general solution (with regard to
>>> accessing memory), which is conceptually not related to any kind of
>>> fixed type system and can therefore implement any type and data model"
>>> would become a valid argument if I agreed with you.
>> Except that it's not the point I'm making.
>
> No, but the structure of the argument is basically the same. The
> analogy should help to show why it is (IMHO) invalid.
Ok, but please elaborate on the above first, so I'm sure that it relates
to my point.
>> Tons of exception handling (and not only - every way to leave a scope
>> needs to be guarded, not only by exception) are necessary in those
>> languages that rely on GC without providing the above possibility at
>> the same time.
>
> No. I've done the same in Ada w/o controlled objects, but using a
> generic procedure.
>
> procedure mark_data_records is new process_cache_with_lock( Operation => mark_record, ... );
>
> begin
> mark_data_records(...);
> end;
>
> The client side has no burden with exceaption handling.
Could you explain this example a bit?
>> I agree for references/pointers in polymorphic
>> collections. That's not even close to "almost everywhere" for me, but
>> your application domain may differ.
>
> Yes. it does, abviously. You might not be aware, but code destined for
> mere consumers (as opposed to embedded code and code destined as tools
> for other developers) has a large amount of GUI code in it.
Yes.
>>> (b) AFAIR there are restrictions on _where_ I can define controlled
>>> types. AFAIR that was a PITA.
>> That's a mess. I'm sorry to repeat that.
>
> Yes. But does C++ do it better? The Ada restrictions AFAIK come from
> the necessity of separate linking and compilation (you must be able to
> relink w/o looking at the body) and C++ treats that against the
> ability to add finalizers everyhwere.
I don't understand. Adding a finalizer/destructor to the type that
didn't have it before means changes in both specs and the body.
Relinking is not enough.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-02-06 14:16 ` Maciej Sobczak
@ 2007-02-06 15:44 ` Markus E Leypold
2007-02-07 8:55 ` Maciej Sobczak
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-02-06 15:44 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Markus E Leypold wrote:
>
>>> And I stress again - GC is not the only solution for
>>> manual memory management.
>> OK. I accept that for the moment. I'm just not convinced you can do
>> everything you need with scope bound memory
>
> Sure. But for the sake of discussion completeness, you might wish to
> throw an example of a situation where scoped lifetime will not make it.
Model-View-Controller in GUIs. Especially trying to adapt that to GTKAda.
Sorry for being so short on this, but detailing this example would be
very long and after all perhaps not very convincing. At every local
vie of the situation one could argue that it would just be possible to
... whatever. But in the whole it is really hard to build a reusable
toolkit this way without reference counting or GC. Certainly it is
impossible (I'm convinced) with scope bound live time. Unfortunately
failure (or at least tremendous difficulties) to build something in s
specific fashion w/o a (semi-) formal proof or at least the
possibility to strip that down to a minimal example is not very
convincing, since you could always assume that more research would
have found a solution. (In my case I was happy to build a specialized
solution and note for later reference the susspicion that controlled
objects would be needed for a general one and scope bound life
wouldn't suffice).
I always intended to look a bit deeper into this issue but until now
other things were more important.
>>> Determinism in both timing and resource consumption?
>> Which brings me back to what I said repeatedly to other people: (1)
>> That this determinism is very often not a requirement (outside of
>> embdedded programming)
>
> Java programmer wrote a loop where he opened database cursors,
> released in the cursor finalizer. All was working like a charm, unless
> put into production, when in one case the loop had to spin many more
> times than he ever cared to test.
I'm not surprised. GC'ing ressources that are bounded doesn't spare
you knowing about the way GC works. My suggestion would have been to
either close the cursor explicitely (since I know about the problem)
or wrap the production of a new cursor in a module/class which also
looks at the number of already opened cursors and collects before
reaching certain limits (in effect introducing a new, additional,
threshold for GC).
> GC did not clean up the abandoned
> cursor objects fast enough and the number of unnecessarily opened
> cursors hit the server limit. That was the end of this application.
:-)
>
> The fix was easy: write explicit close/dispose/dismiss/whatever at the
> end of the loop, so that effectively there was never more than one
> open cursor. In fact, this was *manual* resource management.
Yes. As I said: GC can be made into an instrument to manage other
ressources, but it has to be done right. Sometimes you're just better
of assisting this mechanism by manually disposing of external
ressources at the right places. You're approach "I want it all, and if
I can't have both (memory management AND general ressource collection)
I don't want neither" is somewhat counterproductive.
But you might well continue to believe in your policy here. I,
personally, find that it brings me a big step nearer to salvation if I
can have GC, even if I do only manual MM with it. After all: I don't
have this much other (external) ressources to care about and if I do,
it pays to have a careful look at their structure and then wrap some
abstraction around them.
> The above would be avoided altogether with scoped lifetime.
In this case yes. See -- I do not deny the advantages of 'scoped
lifetime'. It is a useful pattern, I've myself given some examples in
my last answer. But your approach is, since somebody had problems
misusing GC in a specific case in which scoped lifetime would have
worked fine, that therefore GC is useless and scoped lifetime
rules. Personally I prefer to have both approaches at hand, they are
complementary, but I certainly wouldn't want to miss GC in some
languages.
As far as the usability of GC goes, it even helps with controlled
objects: Controlled objects might be highly structured and the usual
(i.e. Ada) apporach is, that you hide the details of building and
later deallocating the structure under the hodd of the abstraction
barrier. Fine. That works. But with GC I don't even have to write a
tear-it-down procedure for everything a scoped object allocates under
the hood. I just make sure to close (e.g.) the filehandle and let the
rest to the GC.
> You are right that determinism is very often not a requirement. It is
> just the life that very often shows that the initial requirements were
> not complete.
>
>> (2) The determinism is shot anyway by using a heap, not by using
>> GC. Even better: GC can introduce detereminism in space
>> consumption by compacting the heap, which naive heaps with manual
>> MM don't do because of fragementation.
>
> There is nothing particular in scoped lifetime that would prohibit
> compacting heaps and there is nothing particular in GC that guarantees
No. But without having 3/4ths of a GC anyway compaction is pretty
pointless.
> it. It's just the statistics based on popular implementations, not a
> rule.
Sorry, that is nonsense. There are garbage collectors that are
designed to be compacting. They are moving objects around. This is
absolutely deterministic and not statistical. Whereas manual
allocation and deallocation as in Ada or C will fragment the heap and
you have NO guarantee (only statistics) about the ratio of allocated
(used) memory and presently unusable hole. None. Hows that about
reliability if you can't give space guarantees even if you know about
the memory your algorithms need, since unfortunately you cannot
perdict the exact sequence of allocations?
> I can perfectly imagine compacting heaps managed by scoped lifetime.
Yes you can do that. Since you're following pointers than and reqrite
them you might as well go the whole way and deollaocate unusable
memory while you're at it.
>
>> (3) What is often needed are upper limits not determinism and thos
>> upper limits can be guaranteed with GC or with an appropriate
>> collector.
> This refers to memory consumption only, whereas I clearly stated
> deterministic *time* as a second (first, actually) goal.
This refers to both, there are real time compatible GC
algorithms. Development didn't stop in the last 20 years.
>>>> My impression was that Ada Controlled storage is actually quite a
>>>> clean concept compared to C++ storage duration.
>>
>>> Clean? It adds tag to the type, which then becomes a controlling type
>>> in every primitive operation.
>>
>>> I got bitten by this recently. Adding a destructor to C++ class
>>> never has any side effects like this.
>> I understand. But the Ada OO way is peculiar, but not unmanagable.
> OK, I accept the word "peculiar". I only oppose "quite a clean
> concept" in your previous post. :-)
Tha Ada OO way. 'Controlled' is just the logical consequence and on
top of tagged types quite clean.
>>> Apart from this, the bare existence of *two* base types Controlled and
>>> Limited_Controlled means that the concepts of controlled and limited
>>> are not really orthogonal in the sense that adding one of these
>>> meta-properties affects the interface that is "shared" by the other
>>> aspect.
>> Still. Being able to add a Finalize means you need to have a tagged
>> type. I see no alternative.
>
> You might want to take a look at C++.
I know C++ rather well. :-)
>>>> But both tie allocation to program scope, synchronous with a stack. I
>>>> insist that is not always desirable: It rules out some architecture,
>>>> especially those where OO abounds.
>>> What architecture?
>> I already say in another post: That is difficult to show with a toy
>> system. It only shows in larger systems where you really can't / don't
>> want to say in any give subsystem module how long a certain peice of
>> data lives. So none of those can be burdened with deallocating it.
> OK. What about refcounting with smart pointers?
(1) It ties lifetime to multiple scopes (instead of one), (2) its not
efficient, (3) It stille doesn't work for the general case, since
there is still a place where you have to decide that you don't need
that pointer copy any more, which is unscoped.
>>>> The problem with Controlled, BTW, is that it seems to interact with
>>>> the rest of the language in such a way that GNAT didn't get it right
>>>> even after ~10 years of development. Perhaps difficult w/o a formal
>>>> semantics.
>>
>>> You see.
>> Yes, I see. But GNAT is also a political problem (see the role of
>> AdaCore, formerly ACT), so (public) GNAT not getting things right
>> might well not indicate a problem with reading the Ada standard, but
>> in the release politics for public version. My hint: There is no
>> incentive to release a high quality public version GNAT.
>
> I get the message. Clear enough.
:-) Good. Whereas A. mightily profits by all the improvements in the
GCC backend, which IMHO was their prime motivation to support and
actively push reintegration into the GCC tree (they would have been
stuck with a GCC 2.8 based compiler else). Their is another theory
that they did it all out of the goodness of their hearts, but I don't
subscribe to that.
>>> In other words, it's very nice that GC doesn't preclude me from doing
>>> some stuff manually, but that's not enough.
>> I'm appalled: You don't want GC, but no, it doesn't do enough for
>> you?
> Exactly. It's not enough, because it doesn't solve the problem of
> resource management in a general way.
Poor misguided friend. :-)
>> Of yourse YMMV. but when I have it, it works really well for me.
>
> I acknowledge that there might be some applications which are strictly
> memory-oriented. They are just not the ones I usually write.
It also works for apps that are not "memory-oriented". I think you're
missing that e.g. filehandles are really simpler and differently
structured ressource from memory. A filehandle does not contain
references to memory or other filehandle. Memory does. That vastly
simplifies the problem of managing file handles indeed so much that
I'm convinced that you don't need buitlin support for this.
>>>> Apart from that languages
>>>> with GC often provide nice tricks to tie external ressources to their
>>>> memory proxy and ditch them when the memory proxy is unreachable
>>> These "nice tricks" are not so nice. Most of all, they provide no
>>> guarantee whatsoever, even that they will be invoked at all.
>> That's not quite true. Those tricks are building blocks to implement
>> ressources that are automatically finalized when becoming
>> unreachable. But it's up to the library author to write a complete
>> implementation.
>
> I don't understand. If the is no guarantee that the finalizer will be
> *ever* called, then what kind of building block it is?
>
>>> A friend of mine spent long evenings recently hunting for database
>>> connection leaks in a big Java application. That's telling something.
>> Well -- so he was naive and should have handled / understood that
>> part
>> of the system better.
>
> Sure. In other words, be prepared that with GC you have to
> handle/understand some parts of the sytem better.
So?
>
>> A friend of mine spent half a month with finding
>> problems with manual allocation/deallocation and sneaking heap
>> corruption. Does that prove anything? I don't think so.
> It does prove that your friend did not benefit from the language that
> provides scoped lifetime.
In that case, yes. But since there is a new-Operator in Ada, leaking
would have been the same problem.
>>>> And BTW - in
>>>> fcuntional langauges you can do more against ressource leaks, sicne
>>>> you can "wrap" functions:
>>>> (with_file "output" (with_file "out_put" copy_data))
>>>> It's not always done, but a useful micro pattern.
>>
>>> Yes, it basically emulates something that is just natural in those
>>> languages that provide scope-based lifetime out of the box.
>> This is no emulation, but how FP does "scope based". Without the
>> necessity to add exception handling at the client side or without
>> having to introduce tagged types / classes. Isn't THAT nice? :-)
>
> Same thing with scoped lifetime, as implemented in C++. No need for
> exception handling (unless handling is actually meaninful), nor for
> changes in the interface. That's nice, I agree.
> The difference is that in languages with scoped lifetime the lifetime
> management is a property of the type (and so applies to all
> instances), whereas the "FP-trick" above is a property of the
> use-side. Which one is more robust and less prone to bugs?
This is, forgive me, nonsense. I might want to use a file handle in a
scoped way here and in a free floating way there. It still afile
handel. And no -- the FP way is not "more prone to bugs" and as with
George Bauhaus I simply refuse this kind of discussion (FUD and
ContraFUD).
> BTW - please show me an example involving 10 objects of different kinds. :-)
All at the same time? Well -- bad programming. You don't do everything
at the same time in FP (and in Ada ...) and I hardly ever have
functions involving 10 parameters.
>>>> But I notice, that
>>>> "Languages like C provide a more general solution (with regard to
>>>> accessing memory), which is conceptually not related to any kind of
>>>> fixed type system and can therefore implement any type and data model"
>>>> would become a valid argument if I agreed with you.
>>> Except that it's not the point I'm making.
>> No, but the structure of the argument is basically the same. The
>> analogy should help to show why it is (IMHO) invalid.
>
> Ok, but please elaborate on the above first, so I'm sure that it
> relates to my point.
You refuse mor automation and abstraction on the pretext of generality
and better control. That exactly is the point that has been made
against: Compiled languages, structured programming, type systems,
modularization, OO, etc -- name any advance you want, it has been
opposed with arguments of exactly that kind. What is missing from them
is, though, some kind of argument that the "loss of control" or "the
loss of generaity" actually is bad, or better: Does cost more than it
pays for. Your argument, I admit, might be permissible, but it needs
more groundwork.
>>> Tons of exception handling (and not only - every way to leave a scope
>>> needs to be guarded, not only by exception) are necessary in those
>>> languages that rely on GC without providing the above possibility at
>>> the same time.
>> No. I've done the same in Ada w/o controlled objects, but using a
>> generic procedure.
>> procedure mark_data_records is new process_cache_with_lock(
>> Operation => mark_record, ... );
>> begin
>> mark_data_records(...);
>> end;
>> The client side has no burden with exceaption handling.
>
> Could you explain this example a bit?
Later. Don't hesitate to ask again. I'll just cut+paste the complete
code too, but it takes some time (which I don't have now).
>>> I agree for references/pointers in polymorphic
>>> collections. That's not even close to "almost everywhere" for me, but
>>> your application domain may differ.
>> Yes. it does, abviously. You might not be aware, but code destined
>> for
>> mere consumers (as opposed to embedded code and code destined as tools
>> for other developers) has a large amount of GUI code in it.
>
> Yes.
>
>>>> (b) AFAIR there are restrictions on _where_ I can define controlled
>>>> types. AFAIR that was a PITA.
>>> That's a mess. I'm sorry to repeat that.
>> Yes. But does C++ do it better? The Ada restrictions AFAIK come from
>> the necessity of separate linking and compilation (you must be able to
>> relink w/o looking at the body) and C++ treats that against the
>> ability to add finalizers everyhwere.
> I don't understand. Adding a finalizer/destructor to the type that
> didn't have it before means changes in both specs and the
> body. Relinking is not enough.
I thought you talked about the restriction where a Controlled type can
be defined.
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: How come Ada isn't more popular?
2007-02-06 15:44 ` Markus E Leypold
@ 2007-02-07 8:55 ` Maciej Sobczak
2007-02-07 9:30 ` GC in Ada Martin Krischik
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-07 8:55 UTC (permalink / raw)
Markus E Leypold wrote:
>> Sure. But for the sake of discussion completeness, you might wish to
>> throw an example of a situation where scoped lifetime will not make it.
>
> Model-View-Controller in GUIs. Especially trying to adapt that to GTKAda.
I will not repeat Dmitry's arguments here.
>> Java programmer wrote a loop where he opened database cursors,
[...]
> I'm not surprised. GC'ing ressources that are bounded doesn't spare
> you knowing about the way GC works.
Exactly. That's why I say that the solution is incomplete. If you have
to think about the mechanics of some solution, then that solution is not
entirely/properly automated.
> You're approach "I want it all, and if
> I can't have both (memory management AND general ressource collection)
> I don't want neither" is somewhat counterproductive.
I find it counterproductive to apply different management strategies
with regard to *implementation details* of different types.
I prefer solutions which enable me to hide implementation details from
clients (sotware engineering?). If clients have to treat different types
differently just because their implementation details differ, then it
means that these implementation details leak out in the form of distinct
handling methods. I want to treat String and DatabaseCursor in the same
way - that's the prerequisite for being productive for me.
> But you might well continue to believe in your policy here.
Thanks. :-)
> I,
> personally, find that it brings me a big step nearer to salvation if I
> can have GC, even if I do only manual MM with it. After all: I don't
> have this much other (external) ressources to care about and if I do,
> it pays to have a careful look at their structure and then wrap some
> abstraction around them.
OK, I understand it. We just agree that GC is a valid solution for
*some* class of computing problems.
So why people claim that GC-oriented languages are general-purpose?
> But your approach is, since somebody had problems
> misusing GC in a specific case
No, that's not the point. The point is that languages which are build
around GC tend to drop the proper support for other types of resources
altogether. It's not the particular programmer who misused GC in a
specific case - it's language designers who closed themselves in the GC
cage and cranked a language that fails to provide good support for wider
class of problems.
As I've already said, the ideal would be to have both GC and scoped
lifetime. The problem is that there is no reliable industry experience
with such a mix, unless we treat Boehm's GC as one.
> As far as the usability of GC goes, it even helps with controlled
> objects
[...]
> I just make sure to close (e.g.) the filehandle and let the
> rest to the GC.
Of course. But then it's up to the designer of the type to decide how to
treat each component of that type - it should be implementation detail.
This decision should not be put on the shoulders of the final user,
which is now the case in mainstream GC-oriented languages. This is what
is broken.
>> There is nothing particular in scoped lifetime that would prohibit
>> compacting heaps and there is nothing particular in GC that guarantees
>
> No. But without having 3/4ths of a GC anyway compaction is pretty
> pointless.
Why? If the goal of compaction is to avoid fragmentation, then what is
pointless in having compacted heaps managed by scoped lifetime?
>> it. It's just the statistics based on popular implementations, not a
>> rule.
>
> Sorry, that is nonsense. There are garbage collectors that are
> designed to be compacting.
So what? This is exactly the statistics I'm talking about, that does not
prove that GC guarantees compacting or that the lack of GC prevents it.
> They are moving objects around. This is
> absolutely deterministic and not statistical.
By statistics I mean the number of language implementations on the
market that choose to use compacting GC vs. the number of languages that
use non-compacting heaps. :-)
> Whereas manual
> allocation and deallocation as in Ada or C will fragment the heap and
> you have NO guarantee (only statistics) about the ratio of allocated
> (used) memory and presently unusable hole.
If that bothers you, then use non-fragmenting allocators.
> Hows that about
> reliability if you can't give space guarantees even if you know about
> the memory your algorithms need, since unfortunately you cannot
> perdict the exact sequence of allocations?
I use non-fragmenting allocator and I get my guarantees.
>> I can perfectly imagine compacting heaps managed by scoped lifetime.
>
> Yes you can do that. Since you're following pointers than and reqrite
> them you might as well go the whole way and deollaocate unusable
> memory while you're at it.
Yes. Note that scoped lifetime does not preclude GC on some lower level.
Scoped lifetime provides a hook for deterministic "good bye" action -
there is nothing more to it. Even if that "good bye" action calls
free/delete/whatever on some memory block, there is nothing that forces
the runtime to return the given block of memory right back to the
operating system. Actually, none of the self-respecting allocators do
this systematically - instead they keep the memory around for a while in
anticipation of future allocations. I have nothing against GC at this
level, really (and I've seen such implementations - in fact, a fully
standard-compliant implementation of the C language could provide
*empty* free function and GC underneath; and fully conformant C++
implementation could just call destructors as a result of delete and
leave the raw memory to GC).
What I'm against is a GC "paradigm" that prevents me from having
deterministic "good bye" hooks for scoped lifetime. The problem is that
most GC-oriented languages I'm aware of do have this "issue".
In other words, for me GC is acceptable as an implementation detail of
the dynamic memory allocator. I don't care *how* the allocator deals
with memory that I free in the same sense that I don't care *how* the
operating system deals with files that I remove from the filesystem.
What I care about are hooks and scoped lifetime is an obvious answer for
this.
>>> (3) What is often needed are upper limits not determinism and thos
>>> upper limits can be guaranteed with GC or with an appropriate
>>> collector.
>
>> This refers to memory consumption only, whereas I clearly stated
>> deterministic *time* as a second (first, actually) goal.
>
> This refers to both, there are real time compatible GC
> algorithms.
I'm interested in what is their target audience. I would expect any
decent RT system to *refrain* from using dynamic memory except in the
initialization phase (so that the "mission phase" is performed with
constant set of objects), in which case RT GC would be just an answer to
the question that nobody asked.
Experts might wish to correct me and elaborate on this.
>> OK. What about refcounting with smart pointers?
>
> (1) It ties lifetime to multiple scopes (instead of one)
With GC tracing pointers you have the same, just the tracing is hidden.
> (2) its not
> efficient
Why?
> (3) It stille doesn't work for the general case
Neither does GC, as seen in examples. :-)
>> I acknowledge that there might be some applications which are strictly
>> memory-oriented. They are just not the ones I usually write.
>
> It also works for apps that are not "memory-oriented". I think you're
> missing that e.g. filehandles are really simpler and differently
> structured ressource from memory. A filehandle does not contain
> references to memory or other filehandle. Memory does. That vastly
> simplifies the problem of managing file handles indeed so much that
> I'm convinced that you don't need buitlin support for this.
Somehow this idea didn't work for database cursors, as already described.
>> Sure. In other words, be prepared that with GC you have to
>> handle/understand some parts of the sytem better.
>
> So?
So the implementation details of *some* types leak out in the sense that
they force me to understand their internal mechanics. I don't want to.
I want to say this:
declare
Sql : Session := Open_Session("some parameters");
Str : String := "Hello";
begin
-- ...
end;
instead of this:
declare
Sql : Session := Open_Session("some parameters");
Str : String := "Hello";
begin
-- ...
-- damn, I have to do *something* with *some* stuff here
end;
[about FP]
>> The difference is that in languages with scoped lifetime the lifetime
>> management is a property of the type (and so applies to all
>> instances), whereas the "FP-trick" above is a property of the
>> use-side. Which one is more robust and less prone to bugs?
>
> This is, forgive me, nonsense. I might want to use a file handle in a
> scoped way here and in a free floating way there.
What about readability and maintainability of such code?
> And no -- the FP way is not "more prone to bugs"
Unless you use a handle in a free floating way and find later that in
production your code was called in a loop causing handles to pile up?
I have the practical example (already described) that this way of
thinking can lead to failures. The programmer wanted to use a database
cursor in a free floating way. That was fine. Later his code was used in
a loop. Ah, yes - his code was used in a loop written by another
programmer, so his judgement about whether it's OK to use anything in a
free floating way was misguided from the very beginning.
> and as with
> George Bauhaus I simply refuse this kind of discussion (FUD and
> ContraFUD).
OK. We will just stay unconvinced. :-)
>> BTW - please show me an example involving 10 objects of different kinds. :-)
>
> All at the same time?
Yes.
> Well -- bad programming.
I knew you would answer this. :-)
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* GC in Ada
2007-02-07 8:55 ` Maciej Sobczak
@ 2007-02-07 9:30 ` Martin Krischik
2007-02-07 11:08 ` Markus E Leypold
2007-02-07 11:15 ` Maciej Sobczak
0 siblings, 2 replies; 49+ messages in thread
From: Martin Krischik @ 2007-02-07 9:30 UTC (permalink / raw)
Maciej Sobczak schrieb:
> Yes. Note that scoped lifetime does not preclude GC on some lower level.
> Scoped lifetime provides a hook for deterministic "good bye" action -
> there is nothing more to it. Even if that "good bye" action calls
> free/delete/whatever on some memory block, there is nothing that forces
> the runtime to return the given block of memory right back to the
> operating system. Actually, none of the self-respecting allocators do
> this systematically - instead they keep the memory around for a while in
> anticipation of future allocations.
I believe in most systems memory is never returned.
If I understood Unix file management right only memory at the end of the
heap can be returned. Without compaction a no go.
And on Windows I know that only the full block allocated with MemAlloc
can be returned. Blocks are always page sized (multiple of 4kb). A smart
Memory manager might reserve a full block for large allocations but all
those tiny 20 byte allocations will never be returned to the OS.
> What I'm against is a GC "paradigm" that prevents me from having
> deterministic "good bye" hooks for scoped lifetime. The problem is that
> most GC-oriented languages I'm aware of do have this "issue".
But isn't that exactly what "Unchecked_Deallocation" and "pragma
Controlled" is all about? Has Ada - by your rationale - not got GC right?
Martin
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 9:30 ` GC in Ada Martin Krischik
@ 2007-02-07 11:08 ` Markus E Leypold
2007-02-07 11:15 ` Maciej Sobczak
1 sibling, 0 replies; 49+ messages in thread
From: Markus E Leypold @ 2007-02-07 11:08 UTC (permalink / raw)
Martin Krischik <krischik@users.sourceforge.net> writes:
> Maciej Sobczak schrieb:
>
>> Yes. Note that scoped lifetime does not preclude GC on some lower level.
>> Scoped lifetime provides a hook for deterministic "good bye" action
>> -
>> there is nothing more to it. Even if that "good bye" action calls
>> free/delete/whatever on some memory block, there is nothing that
>> forces the runtime to return the given block of memory right back to
>> the operating system. Actually, none of the self-respecting
>> allocators do this systematically - instead they keep the memory
>> around for a while in anticipation of future allocations.
>
> I believe in most systems memory is never returned.
>
> If I understood Unix file management right only memory at the end of
> the heap can be returned. Without compaction a no go.
You're partly right. It depends on the heap implementation. If done
using sbrk() you're right. If done using mmap() you'd have --
theoretically -- the possibility wo return sufficiently large holes to
the OS.
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 9:30 ` GC in Ada Martin Krischik
2007-02-07 11:08 ` Markus E Leypold
@ 2007-02-07 11:15 ` Maciej Sobczak
2007-02-07 11:53 ` Martin Krischik
2007-02-07 12:19 ` Markus E Leypold
1 sibling, 2 replies; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-07 11:15 UTC (permalink / raw)
Martin Krischik wrote:
>> What I'm against is a GC "paradigm" that prevents me from having
>> deterministic "good bye" hooks for scoped lifetime. The problem is
>> that most GC-oriented languages I'm aware of do have this "issue".
>
> But isn't that exactly what "Unchecked_Deallocation" and "pragma
> Controlled" is all about? Has Ada - by your rationale - not got GC right?
By my rationale Ada and C++ got it perfectly right ([Limited_]Controlled
mess aside).
The only difference between them in this regard is that Ada explicitly
allows GC on the low level without requiring it (so that implementations
can ignore the whole idea) and that C++ is traditionally silent about
the concept altogether (so that implementations can provide it). ;-)
(Note that GC will likely be formalized in the upcoming C++ standard.)
My criticism is targeted at those languages which bring GC to the top
level obstructing the visible part of the object model.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 11:15 ` Maciej Sobczak
@ 2007-02-07 11:53 ` Martin Krischik
2007-02-07 12:22 ` Markus E Leypold
` (2 more replies)
2007-02-07 12:19 ` Markus E Leypold
1 sibling, 3 replies; 49+ messages in thread
From: Martin Krischik @ 2007-02-07 11:53 UTC (permalink / raw)
Maciej Sobczak schrieb:
> Martin Krischik wrote:
>
>>> What I'm against is a GC "paradigm" that prevents me from having
>>> deterministic "good bye" hooks for scoped lifetime. The problem is
>>> that most GC-oriented languages I'm aware of do have this "issue".
>>
>> But isn't that exactly what "Unchecked_Deallocation" and "pragma
>> Controlled" is all about? Has Ada - by your rationale - not got GC right?
>
> By my rationale Ada and C++ got it perfectly right ([Limited_]Controlled
> mess aside).
>
> The only difference between them in this regard is that Ada explicitly
> allows GC on the low level without requiring it (so that implementations
> can ignore the whole idea) and that C++ is traditionally silent about
> the concept altogether (so that implementations can provide it). ;-)
Only that C++ does not have pragma Controlled to switch the collector
off. And Unchecked_Deallocation should deallocate even when a collector
is present.
> (Note that GC will likely be formalized in the upcoming C++ standard.)
Which could solve the above.
> My criticism is targeted at those languages which bring GC to the top
> level obstructing the visible part of the object model.
On my Weblogic course I could not stop shaking my head about all the
problems which brings the "all is pointer" concept of Java.
Martin
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 11:53 ` Martin Krischik
@ 2007-02-07 12:22 ` Markus E Leypold
2007-02-08 7:26 ` Martin Krischik
2007-02-08 7:48 ` Maciej Sobczak
2007-02-08 18:38 ` Dmitry A. Kazakov
2 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-02-07 12:22 UTC (permalink / raw)
Martin Krischik <krischik@users.sourceforge.net> writes:
> Maciej Sobczak schrieb:
>> Martin Krischik wrote:
>>
>>>> What I'm against is a GC "paradigm" that prevents me from having
>>>> deterministic "good bye" hooks for scoped lifetime. The problem is
>>>> that most GC-oriented languages I'm aware of do have this "issue".
>>>
>>> But isn't that exactly what "Unchecked_Deallocation" and "pragma
>>> Controlled" is all about? Has Ada - by your rationale - not got GC
>>> right?
>> By my rationale Ada and C++ got it perfectly right
>> ([Limited_]Controlled mess aside).
>> The only difference between them in this regard is that Ada
>> explicitly allows GC on the low level without requiring it (so that
>> implementations can ignore the whole idea) and that C++ is
>> traditionally silent about the concept altogether (so that
>> implementations can provide it). ;-)
>
> Only that C++ does not have pragma Controlled to switch the collector
> off. And Unchecked_Deallocation should deallocate even when a
> collector is present.
>
>> (Note that GC will likely be formalized in the upcoming C++ standard.)
>
> Which could solve the above.
>
>> My criticism is targeted at those languages which bring GC to the
>> top level obstructing the visible part of the object model.
>
> On my Weblogic course I could not stop shaking my head about all the
> problems which brings the "all is pointer" concept of Java.
Fortunately you can ignore this "all is pointer" by exposing only a
read-only interface to the client and leaving the rest to the GC. That
feels exactly like passing records around with in, out and in/out. I
don't see the problem. Not to doubt your experience on this, but just
because I'm curious: Can you provide a hint or example what the
problems are?
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 12:22 ` Markus E Leypold
@ 2007-02-08 7:26 ` Martin Krischik
2007-02-08 9:33 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Martin Krischik @ 2007-02-08 7:26 UTC (permalink / raw)
Markus E Leypold schrieb:
> Martin Krischik <krischik@users.sourceforge.net> writes:
>> Maciej Sobczak schrieb:
>> On my Weblogic course I could not stop shaking my head about all the
>> problems which brings the "all is pointer" concept of Java.
> Fortunately you can ignore this "all is pointer" by exposing only a
> read-only interface to the client and leaving the rest to the GC
Only Java has no const keyword. It is supposed to get one but how long
until it is actually used?
> That
> feels exactly like passing records around with in, out and in/out. I
> don't see the problem. Not to doubt your experience on this, but just
> because I'm curious: Can you provide a hint or example what the
> problems are?
I can give you the solution, which is not used all that widely because
of the performance impact:
class X
{
Date date = new Date;
Date
get_Date ()
{
return new Date (date);
}
void
set_Date (Date new_Date)
{
date = new Date (new_Date);
}
}
Got it? If get_Date would just return date it would return a modifiable
pointer - which could be used and - well - modified almost everywhere.
As said the solution above is not used all that often and so the
interesting part in the weblogic course was that the hard core java
programmers had to learn that for remote call object those object might
be copied behind the scenes and not passes by (non const) reference. And
so to those objects modifications might be lost!
This was the moment where a hole horror scenario unfolded to me:
programmers which actually modified objects returned by a geta function
instead of using the appropriate seta function!
Martin
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 7:26 ` Martin Krischik
@ 2007-02-08 9:33 ` Markus E Leypold
2007-02-09 13:37 ` Martin Krischik
2007-02-09 13:47 ` Georg Bauhaus
0 siblings, 2 replies; 49+ messages in thread
From: Markus E Leypold @ 2007-02-08 9:33 UTC (permalink / raw)
Martin Krischik <krischik@users.sourceforge.net> writes:
> Markus E Leypold schrieb:
>
>> Martin Krischik <krischik@users.sourceforge.net> writes:
>
>>> Maciej Sobczak schrieb:
>
>>> On my Weblogic course I could not stop shaking my head about all the
>>> problems which brings the "all is pointer" concept of Java.
>
>> Fortunately you can ignore this "all is pointer" by exposing only a
>> read-only interface to the client and leaving the rest to the GC
> Only Java has no const keyword. It is supposed to get one but how long
> until it is actually used?
Excuse me, but ... "a read only interface" means:
(a) make all field of objects private
(b) allow access only my methods
(c) provide only Get_*-methods, no Set_*-methods
(a+b) is actually standard good practice in software engineering,
since it allows to hide representation and allows for maintaining
cached attributes/data (which direct access to fields won't allow).
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 9:33 ` Markus E Leypold
@ 2007-02-09 13:37 ` Martin Krischik
2007-02-09 13:47 ` Georg Bauhaus
1 sibling, 0 replies; 49+ messages in thread
From: Martin Krischik @ 2007-02-09 13:37 UTC (permalink / raw)
Markus E Leypold schrieb:
>> Only Java has no const keyword. It is supposed to get one but how long
>> until it is actually used?
>
> Excuse me, but ... "a read only interface" means:
>
> (a) make all field of objects private
> (b) allow access only my methods
> (c) provide only Get_*-methods, no Set_*-methods
you mean as in:
java.lang.Date
get_Date ()
{
return this.date;
}
True enough, with a Get_*-method like this you never need a Set_*-method
;-).
Martin
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 9:33 ` Markus E Leypold
2007-02-09 13:37 ` Martin Krischik
@ 2007-02-09 13:47 ` Georg Bauhaus
2007-02-09 15:29 ` Maciej Sobczak
1 sibling, 1 reply; 49+ messages in thread
From: Georg Bauhaus @ 2007-02-09 13:47 UTC (permalink / raw)
On Thu, 2007-02-08 at 10:33 +0100, Markus E Leypold wrote:
>
> Martin Krischik <krischik@users.sourceforge.net> writes:
> > Only Java has no const keyword. It is supposed to get one but how long
> > until it is actually used?
>
>
> Excuse me, but ... "a read only interface" means:
>
> (a) make all field of objects private
> (b) allow access only my methods
What Eiffel does by design.
> (c) provide only Get_*-methods, no Set_*-methods
This lets programmers design type interfaces and wrappers so that
objects are effectively read-only.
Wouldn't it be nice to just export a constant view where
needed? Like C++'s const& or Ada's access-to-constant?
Or to have C++ const view and Ada in mode parameters
that extend to the referred object?
procedure a is
type J is
record
x: Integer;
end record;
procedure nope(this: in J; new_x: Integer) is
begin
this.x := new_x; -- compile time error
end nope;
begin
nope(42);
end a;
struct J
{
int x;
};
int main()
{
J wj;
const J rj = wj;
wj.x = 42;
rj.x = 42; // compile time error
return 0;
}
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 11:53 ` Martin Krischik
2007-02-07 12:22 ` Markus E Leypold
@ 2007-02-08 7:48 ` Maciej Sobczak
2007-02-08 8:20 ` Martin Krischik
` (2 more replies)
2007-02-08 18:38 ` Dmitry A. Kazakov
2 siblings, 3 replies; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-08 7:48 UTC (permalink / raw)
Martin Krischik wrote:
> And Unchecked_Deallocation should deallocate even when a collector
> is present.
I don't understand. There is no legal way for the program to verify that
anything was indeed deallocated, so it doesn't make much sense to say
that this behaviour is required.
As far as I understand it, Unchecked_Deallocation is allowed to do
nothing. That wouldn't be a very competitive language implementation,
but AARM does not require it either. :-)
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 7:48 ` Maciej Sobczak
@ 2007-02-08 8:20 ` Martin Krischik
2007-02-08 8:43 ` Markus E Leypold
2007-02-08 18:24 ` Jeffrey R. Carter
2 siblings, 0 replies; 49+ messages in thread
From: Martin Krischik @ 2007-02-08 8:20 UTC (permalink / raw)
Maciej Sobczak schrieb:
> Martin Krischik wrote:
>> And Unchecked_Deallocation should deallocate even when a collector is
>> present.
> I don't understand. There is no legal way for the program to verify that
> anything was indeed deallocated, so it doesn't make much sense to say
> that this behaviour is required.
should /= must
> As far as I understand it, Unchecked_Deallocation is allowed to do
> nothing. That wouldn't be a very competitive language implementation,
> but AARM does not require it either. :-)
Just read it up [1] and Unchecked_Deallocation may not be a no-op - you
forgot finalization. Apart from that:
---------------------
Implementation Advice
For a standard storage pool, Free should actually reclaim the storage.
---------------------
Again "Should /= must" and - you are right - there are no references as
to when reclaim should happen.
Martin
[1] http://www.adaic.com/standards/05rm/html/RM-13-11-2.html
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 7:48 ` Maciej Sobczak
2007-02-08 8:20 ` Martin Krischik
@ 2007-02-08 8:43 ` Markus E Leypold
2007-02-09 14:20 ` Maciej Sobczak
2007-02-08 18:24 ` Jeffrey R. Carter
2 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-02-08 8:43 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Martin Krischik wrote:
>
>> And Unchecked_Deallocation should deallocate even when a collector
>> is present.
>
> I don't understand. There is no legal way for the program to verify
> that anything was indeed deallocated, so it doesn't make much sense to
> say that this behaviour is required.
Oh yes. Deallocating immeditately and deallocating later makes a
difference in time and space behaviour -- which IS measurable outside
the program (BTW: Exactly what you've been harping upon in you
opposition against GC :-)) )
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 8:43 ` Markus E Leypold
@ 2007-02-09 14:20 ` Maciej Sobczak
2007-02-09 16:23 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-09 14:20 UTC (permalink / raw)
Markus E Leypold wrote:
>>> And Unchecked_Deallocation should deallocate even when a collector
>>> is present.
>> I don't understand. There is no legal way for the program to verify
>> that anything was indeed deallocated, so it doesn't make much sense to
>> say that this behaviour is required.
>
> Oh yes. Deallocating immeditately and deallocating later makes a
> difference in time and space behaviour -- which IS measurable outside
> the program
With a small issues that this possibility is not formalized by the
language standard (please read carefully my sentence above: "there is no
*legal* way *for the program*").
And that is why it is *not* measurable, because there is no sensible way
to define at which level of memory management it should be measured.
As was already pointed out in this thread, with some operating systems
memory reclamation might not be meaningful at all unless the whole
program is terminated.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 14:20 ` Maciej Sobczak
@ 2007-02-09 16:23 ` Markus E Leypold
2007-02-12 8:52 ` Maciej Sobczak
0 siblings, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-02-09 16:23 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
>> Oh yes. Deallocating immeditately and deallocating later makes a
>> difference in time and space behaviour -- which IS measurable outside
>> the program
>
> With a small issues that this possibility is not formalized by the
> language standard (please read carefully my sentence above: "there is
> no *legal* way *for the program*").
>
> And that is why it is *not* measurable, because there is no sensible
> way to define at which level of memory management it should be
> measured.
You said:
>>> I don't understand. There is no legal way for the program to verify
>>> that anything was indeed deallocated, so it doesn't make much sense to
>>> say that this behaviour is required.
This is a 'non sequitur', since it makes sense to say the behaviour is
required to fix certain real time properties. Regardless of wether it
can be detected in the program (and it could, by observing the wall
clock).
> As was already pointed out in this thread, with some operating systems
> memory reclamation might not be meaningful at all unless the whole
> program is terminated.
I don't even ask to be shown such an operating system...
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 16:23 ` Markus E Leypold
@ 2007-02-12 8:52 ` Maciej Sobczak
2007-02-12 12:56 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-12 8:52 UTC (permalink / raw)
Markus E Leypold wrote:
>>>> I don't understand. There is no legal way for the program to verify
>>>> that anything was indeed deallocated, so it doesn't make much sense to
>>>> say that this behaviour is required.
>
> This is a 'non sequitur', since it makes sense to say the behaviour is
> required to fix certain real time properties. Regardless of wether it
> can be detected in the program (and it could, by observing the wall
> clock).
Observing the wall clock does not help much in a language where even
null; can raise exceptions. Standard does not even guarantee that any
given sequence of instructions will give consistent timings when run twice.
Definitely, observice the wall clock is of no use to verify memory
deallocation, since deallocation might have positive effect on the
timing as well as negative or none at all.
>> As was already pointed out in this thread, with some operating systems
>> memory reclamation might not be meaningful at all unless the whole
>> program is terminated.
>
> I don't even ask to be shown such an operating system...
On systems with virtual memory, deallocations that don't span at least
one full page (that condition can be met after joining with other free
blocks, though) will certainly not deallocate anything to the operating
system.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-12 8:52 ` Maciej Sobczak
@ 2007-02-12 12:56 ` Markus E Leypold
0 siblings, 0 replies; 49+ messages in thread
From: Markus E Leypold @ 2007-02-12 12:56 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Markus E Leypold wrote:
>
>>>>> I don't understand. There is no legal way for the program to verify
>>>>> that anything was indeed deallocated, so it doesn't make much sense to
>>>>> say that this behaviour is required.
>> This is a 'non sequitur', since it makes sense to say the behaviour
>> is
>> required to fix certain real time properties. Regardless of wether it
>> can be detected in the program (and it could, by observing the wall
>> clock).
>
> Observing the wall clock does not help much in a language where even
> null; can raise exceptions. Standard does not even guarantee that any
> given sequence of instructions will give consistent timings when run
> twice.
>
> Definitely, observice the wall clock is of no use to verify memory
> deallocation, since deallocation might have positive effect on the
> timing as well as negative or none at all.
If you don't deallocate memory "really" in Unchecked_Deallocation then
you aither run out of memory sooner or later (verifiable and it makes
sense to require that this doesn't occur) or you have a garbage
collector and intermittend garbage collection runs which are visible
in the real time behaviour. So it makes sense to say the behaviour
(real deallocation to return memory to the free list) is required and
checkable, regardless of the question wether _the program_ can verify this.
And as I said, from "no legal way for the program to verify" to
"doesn't make much sense to say that this behaviour is required" is a
non sequitur since behaviour might entail aspects that cannot only be
described in the production of data.
>>> As was already pointed out in this thread, with some operating systems
>>> memory reclamation might not be meaningful at all unless the whole
>>> program is terminated.
>> I don't even ask to be shown such an operating system...
>
> On systems with virtual memory, deallocations that don't span at least
> one full page (that condition can be met after joining with other free
> blocks, though) will certainly not deallocate anything to the
> operating system.
Excuse me: I misread "memory reclamation might not be meaningful at
all unless the whole program is terminated" as: No memory is ever
deallocated. Which is still the only meaningful reading of your
sentence since deallocation to the internal free block list is still
meaningful and not a no-op.
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 7:48 ` Maciej Sobczak
2007-02-08 8:20 ` Martin Krischik
2007-02-08 8:43 ` Markus E Leypold
@ 2007-02-08 18:24 ` Jeffrey R. Carter
2007-02-09 8:57 ` Jean-Pierre Rosen
2 siblings, 1 reply; 49+ messages in thread
From: Jeffrey R. Carter @ 2007-02-08 18:24 UTC (permalink / raw)
Maciej Sobczak wrote:
>
> As far as I understand it, Unchecked_Deallocation is allowed to do
> nothing. That wouldn't be a very competitive language implementation,
> but AARM does not require it either. :-)
Unchecked_Deallocation has to set its parameter to null.
The Rolm-Data General compiler, the 1st validated Ada-83 compiler, did
only that. The argument was that every program had a 4 GB virtual memory
space, so there was no need to actually reclaim memory.
In reality, I think it was skipped to save time so they could have the
1st validated compiler.
--
Jeff Carter
"What I wouldn't give for a large sock with horse manure in it."
Annie Hall
42
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 18:24 ` Jeffrey R. Carter
@ 2007-02-09 8:57 ` Jean-Pierre Rosen
2007-02-09 12:57 ` Robert A Duff
2007-02-09 18:35 ` Jeffrey R. Carter
0 siblings, 2 replies; 49+ messages in thread
From: Jean-Pierre Rosen @ 2007-02-09 8:57 UTC (permalink / raw)
Jeffrey R. Carter a �crit :
> The Rolm-Data General compiler, the 1st validated Ada-83 compiler
Although DG claimed that, their certificate shows #2.
#1 was Ada-ED, from NYU, as everybody should know.
--
---------------------------------------------------------
J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 8:57 ` Jean-Pierre Rosen
@ 2007-02-09 12:57 ` Robert A Duff
2007-02-09 14:44 ` Jean-Pierre Rosen
2007-02-09 18:35 ` Jeffrey R. Carter
1 sibling, 1 reply; 49+ messages in thread
From: Robert A Duff @ 2007-02-09 12:57 UTC (permalink / raw)
Jean-Pierre Rosen <rosen@adalog.fr> writes:
> Jeffrey R. Carter a �crit :
>
>> The Rolm-Data General compiler, the 1st validated Ada-83 compiler
> Although DG claimed that, their certificate shows #2.
> #1 was Ada-ED, from NYU, as everybody should know.
I think it's fair to say that Ada-ED was the first validated
implementation of Ada 83, and that the Rolm-Data General implementation
was the first validated Ada compiler for Ada 83.
- Bob
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 12:57 ` Robert A Duff
@ 2007-02-09 14:44 ` Jean-Pierre Rosen
2007-02-10 13:38 ` Robert A Duff
0 siblings, 1 reply; 49+ messages in thread
From: Jean-Pierre Rosen @ 2007-02-09 14:44 UTC (permalink / raw)
Robert A Duff a �crit :
> Jean-Pierre Rosen <rosen@adalog.fr> writes:
> I think it's fair to say that Ada-ED was the first validated
> implementation of Ada 83, and that the Rolm-Data General implementation
> was the first validated Ada compiler for Ada 83.
>
I fail to see why Ada-ED does not deserve the name "compiler". I may
admit that Rolm was the first /industrial/ compiler, i.e. usable for
real programs...
--
---------------------------------------------------------
J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 14:44 ` Jean-Pierre Rosen
@ 2007-02-10 13:38 ` Robert A Duff
2007-02-12 8:47 ` Jean-Pierre Rosen
0 siblings, 1 reply; 49+ messages in thread
From: Robert A Duff @ 2007-02-10 13:38 UTC (permalink / raw)
Jean-Pierre Rosen <rosen@adalog.fr> writes:
> Robert A Duff a �crit :
>> Jean-Pierre Rosen <rosen@adalog.fr> writes:
>> I think it's fair to say that Ada-ED was the first validated
>> implementation of Ada 83, and that the Rolm-Data General implementation
>> was the first validated Ada compiler for Ada 83.
>>
> I fail to see why Ada-ED does not deserve the name "compiler".
Because the Ada-ED implementation was highly interpretive.
>...I may
> admit that Rolm was the first /industrial/ compiler, i.e. usable for
> real programs...
- Bob
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-10 13:38 ` Robert A Duff
@ 2007-02-12 8:47 ` Jean-Pierre Rosen
2007-02-12 15:31 ` Jeffrey R. Carter
0 siblings, 1 reply; 49+ messages in thread
From: Jean-Pierre Rosen @ 2007-02-12 8:47 UTC (permalink / raw)
Robert A Duff a �crit :
>> I fail to see why Ada-ED does not deserve the name "compiler".
>
> Because the Ada-ED implementation was highly interpretive.
>
So what? You still have to transform source text into another
representation, and do numerous checks at the same time. That's what I
call "compiling".
--
---------------------------------------------------------
J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-12 8:47 ` Jean-Pierre Rosen
@ 2007-02-12 15:31 ` Jeffrey R. Carter
0 siblings, 0 replies; 49+ messages in thread
From: Jeffrey R. Carter @ 2007-02-12 15:31 UTC (permalink / raw)
Jean-Pierre Rosen wrote:
>
> So what? You still have to transform source text into another
> representation, and do numerous checks at the same time. That's what I
> call "compiling".
Then your definition differs from mine, Duff's, and numerous other people's.
--
Jeff Carter
"So if I understand 'The Matrix Reloaded' correctly, the Matrix is
basically a Microsoft operating system--it runs for a while and
then crashes and reboots. By design, no less. Neo is just a
memory leak that's too hard to fix, so they left him in ... The
users don't complain because they're packed in slush and kept
sedated."
Marin D. Condic
65
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 8:57 ` Jean-Pierre Rosen
2007-02-09 12:57 ` Robert A Duff
@ 2007-02-09 18:35 ` Jeffrey R. Carter
2007-02-10 19:01 ` Martin Krischik
2007-02-11 15:22 ` Pascal Obry
1 sibling, 2 replies; 49+ messages in thread
From: Jeffrey R. Carter @ 2007-02-09 18:35 UTC (permalink / raw)
Jean-Pierre Rosen wrote:
>
> Although DG claimed that, their certificate shows #2.
> #1 was Ada-ED, from NYU, as everybody should know.
Ada-ED was an interpreter, not a compiler, as everybody should know.
--
Jeff Carter
"Sheriff murdered, crops burned, stores looted,
people stampeded, and cattle raped."
Blazing Saddles
35
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 18:35 ` Jeffrey R. Carter
@ 2007-02-10 19:01 ` Martin Krischik
2007-02-11 15:22 ` Pascal Obry
1 sibling, 0 replies; 49+ messages in thread
From: Martin Krischik @ 2007-02-10 19:01 UTC (permalink / raw)
Jeffrey R. Carter wrote:
> Jean-Pierre Rosen wrote:
>>
>> Although DG claimed that, their certificate shows #2.
>> #1 was Ada-ED, from NYU, as everybody should know.
>
> Ada-ED was an interpreter, not a compiler, as everybody should know.
Ada interpreter, cool.
Martin
--
mailto://krischik@users.sourceforge.net
Ada programming at: http://ada.krischik.com
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 18:35 ` Jeffrey R. Carter
2007-02-10 19:01 ` Martin Krischik
@ 2007-02-11 15:22 ` Pascal Obry
2007-02-11 20:30 ` Jeffrey R. Carter
1 sibling, 1 reply; 49+ messages in thread
From: Pascal Obry @ 2007-02-11 15:22 UTC (permalink / raw)
To: Jeffrey R. Carter
Jeffrey R. Carter a �crit :
> Jean-Pierre Rosen wrote:
>>
>> Although DG claimed that, their certificate shows #2.
>> #1 was Ada-ED, from NYU, as everybody should know.
>
> Ada-ED was an interpreter, not a compiler, as everybody should know.
Well one could argue that Ada-ED was a compiler to a virtual machine.
Pascal.
--
--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-11 15:22 ` Pascal Obry
@ 2007-02-11 20:30 ` Jeffrey R. Carter
2007-02-13 18:47 ` Pascal Obry
0 siblings, 1 reply; 49+ messages in thread
From: Jeffrey R. Carter @ 2007-02-11 20:30 UTC (permalink / raw)
Pascal Obry wrote:
>
> Well one could argue that Ada-ED was a compiler to a virtual machine.
I never used it, but my understanding is that that isn't an accurate
description.
--
Jeff Carter
"Many times we're given rhymes that are quite unsingable."
Monty Python and the Holy Grail
57
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-11 20:30 ` Jeffrey R. Carter
@ 2007-02-13 18:47 ` Pascal Obry
2007-02-13 23:08 ` Jeffrey R. Carter
2007-02-14 11:10 ` Jean-Pierre Rosen
0 siblings, 2 replies; 49+ messages in thread
From: Pascal Obry @ 2007-02-13 18:47 UTC (permalink / raw)
To: Jeffrey R. Carter
Jeffrey R. Carter a �crit :
> Pascal Obry wrote:
>>
>> Well one could argue that Ada-ED was a compiler to a virtual machine.
>
> I never used it, but my understanding is that that isn't an accurate
> description.
I've used it... log time ago and IIRC it was using a kind of p-code a-la
Pascal. This is just a kind of virtual machine to me...
Pascal.
--
--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-13 18:47 ` Pascal Obry
@ 2007-02-13 23:08 ` Jeffrey R. Carter
2007-02-14 11:13 ` Jean-Pierre Rosen
2007-02-14 19:47 ` Robert A Duff
2007-02-14 11:10 ` Jean-Pierre Rosen
1 sibling, 2 replies; 49+ messages in thread
From: Jeffrey R. Carter @ 2007-02-13 23:08 UTC (permalink / raw)
Pascal Obry wrote:
>
> I've used it... log time ago and IIRC it was using a kind of p-code a-la
> Pascal. This is just a kind of virtual machine to me...
I guess it's a matter of semantics. If the program reads the source and
performs the actions required by it, I consider it an interpreter. What
activities it uses internally to perform that interpretation are an
elephant.
If the program converts the source into another form which can later be
executed by appropriate HW or an emulator, then I call it a compiler.
UCSD Pascal had a Pascal-to-P-code compiler. The P-code could be
executed later, on a P-code machine or, more commonly, through an
emulator. (I'm not aware of there ever being a P-code machine, which is
why it's often referred to as a virtual machine. But there's no reason
there couldn't have been one.) Java works similarly.
My understanding is that Ada-Ed was in the former category.
--
Jeff Carter
"C++ is like giving an AK-47 to a monk, shooting him
full of crack and letting him loose in a mall and
expecting him to balance your checking account
'when he has the time.'"
Drew Olbrich
52
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-13 23:08 ` Jeffrey R. Carter
@ 2007-02-14 11:13 ` Jean-Pierre Rosen
2007-02-14 16:29 ` Jeffrey R. Carter
2007-02-14 19:47 ` Robert A Duff
1 sibling, 1 reply; 49+ messages in thread
From: Jean-Pierre Rosen @ 2007-02-14 11:13 UTC (permalink / raw)
Jeffrey R. Carter a �crit :
> The P-code could be
> executed later, on a P-code machine or, more commonly, through an
> emulator. (I'm not aware of there ever being a P-code machine, which is
> why it's often referred to as a virtual machine. But there's no reason
> there couldn't have been one.)
There was one, by Western-Digital:
http://en.wikipedia.org/wiki/Pascal_MicroEngine
--
---------------------------------------------------------
J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-13 23:08 ` Jeffrey R. Carter
2007-02-14 11:13 ` Jean-Pierre Rosen
@ 2007-02-14 19:47 ` Robert A Duff
1 sibling, 0 replies; 49+ messages in thread
From: Robert A Duff @ 2007-02-14 19:47 UTC (permalink / raw)
"Jeffrey R. Carter" <jrcarter@acm.org> writes:
> If the program converts the source into another form which can later be
> executed by appropriate HW or an emulator, then I call it a
> compiler.
Well, Ada requires certain errors to be detected at compile time,
so it is impossible to write a _pure_ interpreter for Ada -- something
like a Unix shell, which executes the first line before even looking
at the second line. An Ada implementation has to look at the entire
program before running it.
But still, the part of Ada-Ed that runs the SETL structures or the bytes
or whatever, is an interpreter. I claim Ada-Ed is a hybrid
implementation of Ada -- neither pure interpreter nor pure compiler.
- Bob
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-13 18:47 ` Pascal Obry
2007-02-13 23:08 ` Jeffrey R. Carter
@ 2007-02-14 11:10 ` Jean-Pierre Rosen
2007-02-14 16:29 ` Jeffrey R. Carter
1 sibling, 1 reply; 49+ messages in thread
From: Jean-Pierre Rosen @ 2007-02-14 11:10 UTC (permalink / raw)
Pascal Obry a �crit :
> Jeffrey R. Carter a �crit :
>> Pascal Obry wrote:
>>> Well one could argue that Ada-ED was a compiler to a virtual machine.
>> I never used it, but my understanding is that that isn't an accurate
>> description.
>
> I've used it... log time ago and IIRC it was using a kind of p-code a-la
> Pascal. This is just a kind of virtual machine to me...
>
Since we are in historical mode...
The first version of Ada-ED was interpreting directly SETL structures.
The second one generated code for a virtual machine.
If you are really interested, the description of the virtual machine can
be found in the PhD thesis of P. Kruchten and J-P. Rosen ;-). Only
available on paper, though (this was in 1985).
--
---------------------------------------------------------
J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-14 11:10 ` Jean-Pierre Rosen
@ 2007-02-14 16:29 ` Jeffrey R. Carter
2007-02-15 8:39 ` Jean-Pierre Rosen
0 siblings, 1 reply; 49+ messages in thread
From: Jeffrey R. Carter @ 2007-02-14 16:29 UTC (permalink / raw)
Jean-Pierre Rosen wrote:
>
> The first version of Ada-ED was interpreting directly SETL structures.
> The second one generated code for a virtual machine.
I guess the question, then, is which version received certificate #1?
--
Jeff Carter
"I blow my nose on you."
Monty Python & the Holy Grail
03
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-14 16:29 ` Jeffrey R. Carter
@ 2007-02-15 8:39 ` Jean-Pierre Rosen
2007-02-15 17:14 ` Jeffrey R. Carter
0 siblings, 1 reply; 49+ messages in thread
From: Jean-Pierre Rosen @ 2007-02-15 8:39 UTC (permalink / raw)
Jeffrey R. Carter a �crit :
> Jean-Pierre Rosen wrote:
>>
>> The first version of Ada-ED was interpreting directly SETL structures.
>> The second one generated code for a virtual machine.
>
> I guess the question, then, is which version received certificate #1?
>
It was the full SETL version. I don't argue that it was interpreted. I
just mean that a "compiler" is any tool that processes a programming
language (which term would you use instead?).
The difference between HW code generation or "interpretation" is a very
thin implementation detail. As mentionned before, is P-Code
interpretation? What if P-Code is cast in hardware? And isn't a "true"
processor something that interprets its own code with micro-instructions?
--
---------------------------------------------------------
J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-15 8:39 ` Jean-Pierre Rosen
@ 2007-02-15 17:14 ` Jeffrey R. Carter
0 siblings, 0 replies; 49+ messages in thread
From: Jeffrey R. Carter @ 2007-02-15 17:14 UTC (permalink / raw)
Jean-Pierre Rosen wrote:
>>
> It was the full SETL version. I don't argue that it was interpreted. I
> just mean that a "compiler" is any tool that processes a programming
> language (which term would you use instead?).
That's a pretty broad definition. It covers pretty printers and
AdaSubst, for example.
I don't have a term for any tool that processes a programming language.
--
Jeff Carter
"Go and boil your bottoms."
Monty Python & the Holy Grail
01
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 11:53 ` Martin Krischik
2007-02-07 12:22 ` Markus E Leypold
2007-02-08 7:48 ` Maciej Sobczak
@ 2007-02-08 18:38 ` Dmitry A. Kazakov
2007-02-09 7:58 ` Maciej Sobczak
2007-02-09 10:07 ` Martin Krischik
2 siblings, 2 replies; 49+ messages in thread
From: Dmitry A. Kazakov @ 2007-02-08 18:38 UTC (permalink / raw)
On Wed, 07 Feb 2007 12:53:27 +0100, Martin Krischik wrote:
> On my Weblogic course I could not stop shaking my head about all the
> problems which brings the "all is pointer" concept of Java.
Which is the same sort of rubbish as "all is object."
Clearly it is impossible to have all types referents. I don't mean here
small fundamental types like Boolean etc, but the pointers themselves. To
incorporate then one needs to introduce pointers-to-pointers, then
pointers-to-pointers-to-pointers ad infinitum, which is impossible. The
"concept" leaks.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 18:38 ` Dmitry A. Kazakov
@ 2007-02-09 7:58 ` Maciej Sobczak
2007-02-09 10:07 ` Martin Krischik
1 sibling, 0 replies; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-09 7:58 UTC (permalink / raw)
Dmitry A. Kazakov wrote:
>> On my Weblogic course I could not stop shaking my head about all the
>> problems which brings the "all is pointer" concept of Java.
>
> Which is the same sort of rubbish as "all is object."
>
> Clearly it is impossible to have all types referents. I don't mean here
> small fundamental types like Boolean etc, but the pointers themselves. To
> incorporate then one needs to introduce pointers-to-pointers, then
> pointers-to-pointers-to-pointers ad infinitum, which is impossible. The
> "concept" leaks.
Not only it leaks, but combined with other defficiencies often leads to
"consciusness split".
Consider: int, Integer, IntHolder.
That gives three types for the same logical domain.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 18:38 ` Dmitry A. Kazakov
2007-02-09 7:58 ` Maciej Sobczak
@ 2007-02-09 10:07 ` Martin Krischik
2007-02-09 14:10 ` Dmitry A. Kazakov
1 sibling, 1 reply; 49+ messages in thread
From: Martin Krischik @ 2007-02-09 10:07 UTC (permalink / raw)
Dmitry A. Kazakov schrieb:
> On Wed, 07 Feb 2007 12:53:27 +0100, Martin Krischik wrote:
>
>> On my Weblogic course I could not stop shaking my head about all the
>> problems which brings the "all is pointer" concept of Java.
>
> Which is the same sort of rubbish as "all is object."
>
> Clearly it is impossible to have all types referents. I don't mean here
> small fundamental types like Boolean etc, but the pointers themselves. To
> incorporate then one needs to introduce pointers-to-pointers, then
> pointers-to-pointers-to-pointers ad infinitum, which is impossible. The
> "concept" leaks.
Well, really it is "all objects and arrays handled by reference". But
still a silly concept.
Martin
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-09 10:07 ` Martin Krischik
@ 2007-02-09 14:10 ` Dmitry A. Kazakov
0 siblings, 0 replies; 49+ messages in thread
From: Dmitry A. Kazakov @ 2007-02-09 14:10 UTC (permalink / raw)
On Fri, 09 Feb 2007 11:07:44 +0100, Martin Krischik wrote:
> Dmitry A. Kazakov schrieb:
>> On Wed, 07 Feb 2007 12:53:27 +0100, Martin Krischik wrote:
>>
>>> On my Weblogic course I could not stop shaking my head about all the
>>> problems which brings the "all is pointer" concept of Java.
>>
>> Which is the same sort of rubbish as "all is object."
>>
>> Clearly it is impossible to have all types referents. I don't mean here
>> small fundamental types like Boolean etc, but the pointers themselves. To
>> incorporate then one needs to introduce pointers-to-pointers, then
>> pointers-to-pointers-to-pointers ad infinitum, which is impossible. The
>> "concept" leaks.
>
> Well, really it is "all objects and arrays handled by reference". But
> still a silly concept.
But what is so special in Boolean which makes it non-object? Such
irregularities are always language design faults.
[ As for Ada there should be Boolean'Class. ]
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 11:15 ` Maciej Sobczak
2007-02-07 11:53 ` Martin Krischik
@ 2007-02-07 12:19 ` Markus E Leypold
2007-02-08 7:54 ` Maciej Sobczak
1 sibling, 1 reply; 49+ messages in thread
From: Markus E Leypold @ 2007-02-07 12:19 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Martin Krischik wrote:
>
>>> What I'm against is a GC "paradigm" that prevents me from having
>>> deterministic "good bye" hooks for scoped lifetime. The problem is
>>> that most GC-oriented languages I'm aware of do have this "issue".
>> But isn't that exactly what "Unchecked_Deallocation" and "pragma
>> Controlled" is all about? Has Ada - by your rationale - not got GC
>> right?
>
> By my rationale Ada and C++ got it perfectly right
> ([Limited_]Controlled mess aside).
>
> The only difference between them in this regard is that Ada explicitly
> allows GC on the low level without requiring it (so that
> implementations can ignore the whole idea) and that C++ is
> traditionally silent about the concept altogether (so that
> implementations can provide it). ;-)
>
> (Note that GC will likely be formalized in the upcoming C++ standard.)
>
> My criticism is targeted at those languages which bring GC to the top
> level obstructing the visible part of the object model.
You mean like Smalltalk -- a language which carries the label OO
wrongly, because it's GC obstructs the object model.
Hm.
Actually I think, real OO (as opposed to tagged types) need GC and
unbounded life time of objects. That's indeed all OO is about.
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-07 12:19 ` Markus E Leypold
@ 2007-02-08 7:54 ` Maciej Sobczak
2007-02-08 9:49 ` Markus E Leypold
0 siblings, 1 reply; 49+ messages in thread
From: Maciej Sobczak @ 2007-02-08 7:54 UTC (permalink / raw)
Markus E Leypold wrote:
> Actually I think, real OO (as opposed to tagged types) need GC and
> unbounded life time of objects. That's indeed all OO is about.
That is quite novel definition of OO. Any references?
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
2007-02-08 7:54 ` Maciej Sobczak
@ 2007-02-08 9:49 ` Markus E Leypold
0 siblings, 0 replies; 49+ messages in thread
From: Markus E Leypold @ 2007-02-08 9:49 UTC (permalink / raw)
Maciej Sobczak <no.spam@no.spam.com> writes:
> Markus E Leypold wrote:
>> Actually I think, real OO (as opposed to tagged types) need GC and
>> unbounded life time of objects. That's indeed all OO is about.
> That is quite novel definition of OO. Any references?
Ian Graham: "Object-Oriented Methods: Principles & Practice".
Footnotes and general comments there make that quite clear I
think. (But I can't quote chapter and verse here, since I DON'T intend
to spent the rest of the morning with researching literature). I
remeber that this was not the first source which noted that a delete
operator is quite contrary to the spirit of stateful OOP (as opposed
to the functional OO models proposed by Abadi and others (Cardelli?).
And then of course there is me, myself, as a source :-). The reasoning
fo the assertion above is basically that OO is about having
(projected) views of a total system. In no view you can say that an
object leaves the view -- you can only say that it becomes unimportant
for the view (goes into the kernel of the projection). And that is so
for all views -- whose sum only gives the complete system. Then (when
viewing the whole system) you can decide where to delete. Object
deletion is a global property/function. Rooting it in any subsystem
will destroy modularity in the OO sense (which is different from
modularity in the modula2 or Ada-sense, which is derived from the
ideas of structured programming).
But note that I'm not really interested in discussing these
propositions. Either you profit from it or you don't -- I don't think
that it is easy to see what I mean and it was hard for me to get that
insights at the beginning (since there is really lots of bad
literature on OO (mostly the ad-hoc approach) and few usable formal or
semi-formal approaches). Discussing those on a serious level would take
lots of time. So I have just to assert my guru status here for lack of
more time :-).
Regards -- Markus
^ permalink raw reply [flat|nested] 49+ messages in thread
* GC in Ada
@ 1986-04-02 17:50 Stavros Macrakis
0 siblings, 0 replies; 49+ messages in thread
From: Stavros Macrakis @ 1986-04-02 17:50 UTC (permalink / raw)
`Garbage collection' (GC) has been used in more than one sense in
this discussion. Several contributors equate GC with `storage
reclamation'. This is misleading. The essence of GC is finding free
space by determining what access objects remain in use.
GC is a particular type of automatic storage reclamation -- reference
counting, e.g., is another. And explicit deallocation or deallocation
of stack frames at scope exit time is simply not automatic storage
reclamation. Neither is deallocating an entire collection at the exit
of the scope declaring the type. All of these are useful techniques,
but they are not GC. GC has different advantages and disadvantages
than each of these other methods.
It is not possible to write a garbage collector within Ada (unless, of
course, you have hooks into the implementation). It is not even
possible to define a private type which reclaims its own space within
Ada. The essential problem is that there is no way of determining the
``immediately accessible nodes'' in Knuth's ("Art") terminology, i.e.
the variables containing access values. It would be possible to
define a limited type that used GC or ref counting if there were a
finalization operation (see Schwarz&Melliar-Smith, ``The Finalization
Operation for Abstract Types'' ICSE/5, p273).
For a discussion of issues relating to GC implementation in an
Ada-like environment, see Susan Owicki, ``Making the World Safe for
GC'' in POPL/8, p77.
The term `Garbage collection' has had a precise meaning for over
twenty years. Here are some citations:
J.McCarthy et al, "Lisp 1.5 Programmer's Manual" (2nd ed, 1965) p105:
garbage collector: The routine in LISP which identifies all active
list structure by tracing it from fixed base cells and marking it,
and then collects all unneeded cells (garbage) into a free-storage
list so that these words can be used again.
Knuth, "Art" (1st ed, 1968), sec 2.5B p438:
...a policy of simply doing nothing until space runs out, then
searching for all the areas currently in use and fashioning a
new AVAIL list.
Aho&Ullman, "Principles of Compiler Design" (1st ed, 1977) p44:
A garbage collection scans all records, determining which are in
use, and making an <available space list> of those which may be
reused.
Note that "garbage collection" has been used in many systems other than
Lisp with the same meaning: Snobol, ECL, MIT Teco (!), ....
-s
ICSE = Intl. Conf. on Software Engineering
POPL = Principles of Programming Languages (Conf.)
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: GC in Ada
@ 1986-04-02 3:02 Rick Conn
0 siblings, 0 replies; 49+ messages in thread
From: Rick Conn @ 1986-04-02 3:02 UTC (permalink / raw)
There is a simple garbage collector written in Ada in the
repository in PD:<ADA.COMPONENTS>GARBAGE*.*; data:
%4 Garbage_Collection
Author : Doug Bryan
: Computer Systems Lab
: Stanford University
Machine/System Compiled/Run on :
Data General MV/10000 running the Ada Development Environment 2.2
Keywords : MEMORY, GARBAGE, GARBAGE COLLECTION
Abstract :
This is a generic garbage collector. It simply maintains an internal
linked list of items which have been freed then reuses these items when more
are needed.
-------
^ permalink raw reply [flat|nested] 49+ messages in thread
* GC in Ada
@ 1986-04-01 22:00 Stavros Macrakis
0 siblings, 0 replies; 49+ messages in thread
From: Stavros Macrakis @ 1986-04-01 22:00 UTC (permalink / raw)
A few weeks ago, there was a query about garbage collection in Ada.
I've assembled a few notes on the subject.
Several Ada systems appear to have garbage collectors (GC) in
development environments (e.g. Rational, Symbolics). As far as I have
been able to ascertain, no production Ada environment provides garbage
collection in the target computer.
The usual argument is that embedded applications shouldn't GC, because
GC is inefficient and episodic. This is spurious for several reasons:
1. Prototypes and early development versions may prefer to sacrifice
efficiency for simplicity and speed of programming.
2. Many Ada applications will in fact not be embedded.
3. If you have GC, it is very easy to add a capability to detect and
report undeallocated objects and dangling pointers.
4. Embedded applications may even want GC. Traditional episodic GC
will do for some; others will need real-time GC algorithms.
5. GC may not be as expensive as is thought, especially if currently
known advanced implementation techniques are used.
6. Given Ada's typing, scoping, and tasking, much processing can
continue even during episodic GC.
However, implementing a GC for Ada is not entirely trivial. Although
Ada's semantics certainly allow GC, some compiler-writers may have
chosen runtime models which make it impractical or impossible. Not
only must pointers in memory be traceable, but transient `hazards'
must be avoided. The conditions are even more stringent for
relocating/compacting GC, of course.
The only current compilers I know the internals of, the Intermetrics
Byron Ada compiler family, have their runtime model carefully designed
to in fact make GC practical. However, no customer so far has
specified GC. I'm sure Intermetrics would be happy to sell a GC
option as soon as the market demand became sufficient.
-s
Stavros Macrakis
Harvard Univ. and Intermetrics, Inc.
^ permalink raw reply [flat|nested] 49+ messages in thread
end of thread, other threads:[~2007-02-15 17:14 UTC | newest]
Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1986-04-04 3:41 GC in Ada Rick Conn
-- strict thread matches above, loose matches on Subject: below --
2007-01-24 11:06 How come Ada isn't more popular? gautier_niouzes
2007-01-24 19:25 ` tmoran
2007-01-25 4:46 ` Gautier
2007-01-25 9:29 ` Markus E Leypold
2007-01-27 16:59 ` Stephen Leake
2007-01-27 20:40 ` Markus E Leypold
2007-01-29 8:56 ` Maciej Sobczak
2007-01-29 14:21 ` Markus E Leypold
2007-01-31 9:23 ` Maciej Sobczak
2007-01-31 10:24 ` Markus E Leypold
2007-02-02 8:42 ` Maciej Sobczak
2007-02-02 13:57 ` Markus E Leypold
2007-02-05 9:59 ` Maciej Sobczak
2007-02-05 13:43 ` Markus E Leypold
2007-02-06 9:15 ` Maciej Sobczak
2007-02-06 11:45 ` Markus E Leypold
2007-02-06 14:16 ` Maciej Sobczak
2007-02-06 15:44 ` Markus E Leypold
2007-02-07 8:55 ` Maciej Sobczak
2007-02-07 9:30 ` GC in Ada Martin Krischik
2007-02-07 11:08 ` Markus E Leypold
2007-02-07 11:15 ` Maciej Sobczak
2007-02-07 11:53 ` Martin Krischik
2007-02-07 12:22 ` Markus E Leypold
2007-02-08 7:26 ` Martin Krischik
2007-02-08 9:33 ` Markus E Leypold
2007-02-09 13:37 ` Martin Krischik
2007-02-09 13:47 ` Georg Bauhaus
2007-02-09 15:29 ` Maciej Sobczak
2007-02-09 20:52 ` Georg Bauhaus
2007-02-08 7:48 ` Maciej Sobczak
2007-02-08 8:20 ` Martin Krischik
2007-02-08 8:43 ` Markus E Leypold
2007-02-09 14:20 ` Maciej Sobczak
2007-02-09 16:23 ` Markus E Leypold
2007-02-12 8:52 ` Maciej Sobczak
2007-02-12 12:56 ` Markus E Leypold
2007-02-08 18:24 ` Jeffrey R. Carter
2007-02-09 8:57 ` Jean-Pierre Rosen
2007-02-09 12:57 ` Robert A Duff
2007-02-09 14:44 ` Jean-Pierre Rosen
2007-02-10 13:38 ` Robert A Duff
2007-02-12 8:47 ` Jean-Pierre Rosen
2007-02-12 15:31 ` Jeffrey R. Carter
2007-02-09 18:35 ` Jeffrey R. Carter
2007-02-10 19:01 ` Martin Krischik
2007-02-11 15:22 ` Pascal Obry
2007-02-11 20:30 ` Jeffrey R. Carter
2007-02-13 18:47 ` Pascal Obry
2007-02-13 23:08 ` Jeffrey R. Carter
2007-02-14 11:13 ` Jean-Pierre Rosen
2007-02-14 16:29 ` Jeffrey R. Carter
2007-02-14 19:47 ` Robert A Duff
2007-02-14 11:10 ` Jean-Pierre Rosen
2007-02-14 16:29 ` Jeffrey R. Carter
2007-02-15 8:39 ` Jean-Pierre Rosen
2007-02-15 17:14 ` Jeffrey R. Carter
2007-02-08 18:38 ` Dmitry A. Kazakov
2007-02-09 7:58 ` Maciej Sobczak
2007-02-09 10:07 ` Martin Krischik
2007-02-09 14:10 ` Dmitry A. Kazakov
2007-02-07 12:19 ` Markus E Leypold
2007-02-08 7:54 ` Maciej Sobczak
2007-02-08 9:49 ` Markus E Leypold
1986-04-02 17:50 Stavros Macrakis
1986-04-02 3:02 Rick Conn
1986-04-01 22:00 Stavros Macrakis
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox