* U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
@ 2003-10-02 18:02 amado.alves
2003-10-03  0:05 ` U : Unbounded_String := " Alexander Kopilovitch
2003-10-03 9:00 ` U : Unbounded_String := " Preben Randhol
0 siblings, 2 replies; 44+ messages in thread
From: amado.alves @ 2003-10-02 18:02 UTC (permalink / raw)
To: comp.lang.ada
User-defined implicit conversion would solve this problem. Pragma approach:
function To_Unbounded (S : String) return Unbounded_String;
pragma Implicit_Conversion (To_Unbounded);
U : Unbounded_String := "bla bla bla";
However it is too late for Ada 2005 now :-(
/*
Alexander Kopilovitch wrote:
> Robert I. Eachus wrote:
>
> > ...
> > So I never understand all those complaints about explicitly
> > converting to and from Bounded_String (or Unbounded_String).
>
> (Let's speak about Unbounded_Strings, for clarity).
>
> There are 2 main reasons for those complaints:
>
> 1) types Unbounded_String and String appear unrelated, while they are
> conceptually related for all natural purposes. (If you want
> specifically a fixed-size array of characters which is not assumed to
> be a textual line in the application then say "array of Character",
> and not "String").
>
> 2) there are no literals in Ada for Unbounded_Strings, which contradicts
> the natural concept of a textual line of varying
> (unlimited) size.
>
> [From all my experience in (non-corporate) application programming I
> must say that while this seemingly tiny and unimportant point remains
> unresolved, Ada will never be more popular among programmers than she
> is now. And if this point is resolved in some satisfactory way,
> then Ada's popularity will increase (well, certainly not by a magnitude,
> but quite noticeably) almost immediately. Well, I'm not sure
> that it is a good goal to make Ada more popular; therefore I
> can admit that preserving the status quo here may actually be a
> proper decision.]
*/
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-02 18:02 U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) amado.alves
@ 2003-10-03  0:05 ` Alexander Kopilovitch
  2003-10-03 20:46   ` Dmitry A. Kazakov
  2003-10-03  9:00 ` U : Unbounded_String := " Preben Randhol
  1 sibling, 1 reply; 44+ messages in thread
From: Alexander Kopilovitch @ 2003-10-03 0:05 UTC (permalink / raw)

amado.alves wrote:

> User-defined implicit conversion would solve this problem. Pragma
> approach:
>
>   function To_Unbounded (S : String) return Unbounded_String;
>   pragma Implicit_Conversion (To_Unbounded);
>   U : Unbounded_String := "bla bla bla";

Implicit conversions lead to problems of their own (this is well known
from C++ experience). I'm sure it would be a serious mistake to add
implicit conversions to the language for purposes of this rank only.
Generally, I think implicit conversions are significantly less compatible
with the Ada spirit than with the C++ spirit; in C++ implicit conversions
probably do more good than bad, while in Ada they would probably do more
bad than good.

> However it is too late for Ada 2005 now :-(

Well, I think that for unclear and disputable issues (like this one) we
should not pay attention to the deadlines -:) . If we find a good
solution that pleases both users and compiler vendors, then the ARG will
probably be happy to include it in some Technical Corrigendum -:) .

Alexander Kopilovitch    aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
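[Editor's note: the "C++ experience" alluded to above can be sketched in a few lines. This is a hypothetical illustration, not code from the thread; the names `Text` and `width` are invented. A non-`explicit` constructor makes any `const char*` or even any `int` silently convertible, so a call that was almost certainly a typo still compiles.]

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical wrapper type with non-explicit (converting) constructors,
// mimicking what pragma Implicit_Conversion would permit in Ada.
struct Text {
    std::string value;
    Text(const char* s) : value(s) {}   // implicit from a C string
    Text(int n) : value(n, '?') {}      // implicit from an int: n '?' marks (!)
};

// The caller presumably meant to pass a string, but an integer
// converts just as silently.
std::size_t width(const Text& t) { return t.value.size(); }
```

Here `width(7)` compiles and returns 7, a bug that an explicit conversion requirement would have caught at compile time. Marking both constructors `explicit` is the usual C++ remedy.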
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-03  0:05 ` Alexander Kopilovitch
@ 2003-10-03 20:46 ` Dmitry A. Kazakov
  0 siblings, 0 replies; 44+ messages in thread
From: Dmitry A. Kazakov @ 2003-10-03 20:46 UTC (permalink / raw)

Alexander Kopilovitch wrote:

> amado.alves wrote:
>
>> User-defined implicit conversion would solve this problem. Pragma
>> approach:
>>
>>   function To_Unbounded (S : String) return Unbounded_String;
>>   pragma Implicit_Conversion (To_Unbounded);
>>   U : Unbounded_String := "bla bla bla";
>
> Implicit conversions lead to problems of their own (this is well known
> from C++ experience). I'm sure it would be a serious mistake to add
> implicit conversions to the language for purposes of this rank only.
> Generally, I think implicit conversions are significantly less
> compatible with the Ada spirit than with the C++ spirit; in C++ implicit
> conversions probably do more good than bad, while in Ada they would
> probably do more bad than good.

No, they are always bad. Conversions should be allowed between related
types only. So instead of making arbitrary short-cuts between the types,
one has to organize them in an appropriate way. For instance, all string
types should have a common ancestor. And importantly, having an ancestor
should imply neither taggedness nor implementation inheritance.

--
Regards,
Dmitry A. Kazakov
www.dmitry-kazakov.de

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-02 18:02 U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) amado.alves
  2003-10-03  0:05 ` Alexander Kopilovitch
@ 2003-10-03  9:00 ` Preben Randhol
  2003-10-03 11:17   ` Jeff C,
  1 sibling, 1 reply; 44+ messages in thread
From: Preben Randhol @ 2003-10-03 9:00 UTC (permalink / raw)

On 2003-10-02, amado.alves <amado.alves@netcabo.pt> wrote:
> User-defined implicit conversion would solve this problem. Pragma approach:
>
>   function To_Unbounded (S : String) return Unbounded_String;
>   pragma Implicit_Conversion (To_Unbounded);
>   U : Unbounded_String := "bla bla bla";

NONONONONONONO

Implicit conversion is the mother of all bugs. It is a nightmare in
FORTRAN 77, where you have to add a declaration just to turn it off.

Preben

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-03  9:00 ` U : Unbounded_String := " Preben Randhol
@ 2003-10-03 11:17 ` Jeff C,
  2003-10-04  2:49   ` Robert I. Eachus
  0 siblings, 1 reply; 44+ messages in thread
From: Jeff C, @ 2003-10-03 11:17 UTC (permalink / raw)

"Preben Randhol" <randhol+valid_for_reply_from_news@pvv.org> wrote in message
news:slrnbnqekm.j6.randhol+valid_for_reply_from_news@kiuk0152.chembio.ntnu.no...
> On 2003-10-02, amado.alves <amado.alves@netcabo.pt> wrote:
> > User-defined implicit conversion would solve this problem. Pragma approach:
> >
> >   function To_Unbounded (S : String) return Unbounded_String;
> >   pragma Implicit_Conversion (To_Unbounded);
> >   U : Unbounded_String := "bla bla bla";
>
> NONONONONONONO
>
> Implicit conversion is the mother of all bugs. It is a nightmare in
> FORTRAN 77, where you have to add a declaration just to turn it off.
>
> Preben

While I somewhat agree with your basic premise here, we already have
implicit conversion in Ada for numeric literals. E.g., I can write

   type My_Integer is range 0 .. 10;
   A : My_Integer := 10;

I don't have to write

   A : My_Integer := To_My_Integer (10);

So I agree that fixing the string problems via an all-powerful pragma
might be a bad idea. Establishing the idea of a Universal String might not
be a bad idea. Although in reality, if we just standardize on "+" doing
the conversion and put it in the standard, I think the extra syntax
overhead is pretty minimal.

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-03 11:17 ` Jeff C,
@ 2003-10-04  2:49 ` Robert I. Eachus
  2003-10-06 23:57   ` Alexandre E. Kopilovitch
  0 siblings, 1 reply; 44+ messages in thread
From: Robert I. Eachus @ 2003-10-04 2:49 UTC (permalink / raw)

Jeff C, wrote:

> "Preben Randhol" <randhol+valid_for_reply_from_news@pvv.org> wrote in message
> news:slrnbnqekm.j6.randhol+valid_for_reply_from_news@kiuk0152.chembio.ntnu.no...
>
>> On 2003-10-02, amado.alves <amado.alves@netcabo.pt> wrote:
>>
>>> User-defined implicit conversion would solve this problem. Pragma
>>> approach:
>>>
>>>   function To_Unbounded (S : String) return Unbounded_String;
>>>   pragma Implicit_Conversion (To_Unbounded);
>>>   U : Unbounded_String := "bla bla bla";
>>
>> NONONONONONONO
>>
>> Implicit conversion is the mother of all bugs. It is a nightmare in
>> FORTRAN 77, where you have to add a declaration just to turn it off.

Jeff said:

> While I somewhat agree with your basic premise here, we already have
> implicit conversion in Ada for numeric literals.
>
> E.g., I can write
>
>    type My_Integer is range 0 .. 10;
>    A : My_Integer := 10;
>
> I don't have to write
>
>    A : My_Integer := To_My_Integer (10);
>
> So I agree that fixing the string problems via an all-powerful pragma
> might be a bad idea. Establishing the idea of a Universal String might
> not be a bad idea. Although in reality, if we just standardize on "+"
> doing the conversion and put it in the standard, I think the extra
> syntax overhead is pretty minimal.

All I can say is that this is language parochialism at its worst. Not Ada
parochialism, but ASCII parochialism. In Ada, ASCII and Latin-1 have no
particular magic status. If you want to have a character type Cyrillic
and a Cyrillic_String, Greek and Greek_String, or for that matter
Japanese_Character and Japanese_String, go right ahead. (Although we
could have a fascinating discussion of which mapping to use for the
Japanese alphabet.)
As a trivial example, since I want something that all your terminals can
reproduce:

   type Roman_Digit is ('I', 'V', 'X', 'L', 'C', 'D', 'M');
   type Roman is array (Natural range <>) of Roman_Digit;

   Year : constant Roman := "MMIII";

If you look in the Ada Reference Manual at sections 3.5.2 Character Types
and 3.6.3 String Types, you will find all this. Now look at that
declaration of Year above. There is no IMPLICIT conversion from String to
Roman; there is an implicit conversion from a string_literal to a string
type. If you said instead:

   Bad_Year : constant Roman := "BAD"; -- Constraint_Error raised.

Now do you understand why you need the conversions from String to
Unbounded_String to be explicit? There could be mappings involved, and
you definitely don't want two implicit conversions, because the
intermediate type could be just about anything.

This is also why, as I said, we were careful to limit the overloadings of
"&" and assignment. If the (Latin-1 based) types were too overloaded,
then creating and using other character and string types would be
extremely painful.

Note that most Ada programming, even in non-English speaking countries,
is done by people who speak English. But systems written in Ada support
end-users who use lots of different languages. Some Ada programs are even
translators from one language to another. So support for non-ASCII (and
even non-ISO 8859) languages was a major requirement for Ada 9X.

--
Robert I. Eachus

"Quality is the Buddha. Quality is scientific reality. Quality is the
goal of Art. It remains to work these concepts into a practical,
down-to-earth context, and for this there is nothing more practical or
down-to-earth than what I have been talking about all along...the repair
of an old motorcycle." -- from Zen and the Art of Motorcycle Maintenance
by Robert Pirsig

^ permalink raw reply [flat|nested] 44+ messages in thread
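[Editor's note: the discipline Eachus argues for, requiring the programmer to spell out the String-to-Unbounded_String conversion, has a rough C++ analogue in the `explicit` specifier. This is an illustrative sketch, not code from the thread; the names `Unbounded` and `length` are invented.]

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical analogue of Ada's explicit conversion requirement:
// the `explicit` keyword forbids silent conversion, so the caller
// must write the conversion out, much like To_Unbounded_String.
struct Unbounded {
    std::string value;
    explicit Unbounded(const std::string& s) : value(s) {}
};

std::size_t length(const Unbounded& u) { return u.value.size(); }

// length("oops") would NOT compile; the caller must write
// length(Unbounded("oops")), making the conversion visible.
```

The cost is one extra constructor call at each use site; the benefit is that a reviewer can see every point where a raw string becomes the wrapped type.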
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-04  2:49 ` Robert I. Eachus
@ 2003-10-06 23:57 ` Alexandre E. Kopilovitch
  2003-10-07  8:51   ` Dmitry A. Kazakov
  2003-10-08 23:18   ` Robert I. Eachus
  0 siblings, 2 replies; 44+ messages in thread
From: Alexandre E. Kopilovitch @ 2003-10-06 23:57 UTC (permalink / raw)
To: comp.lang.ada

Robert I. Eachus wrote:

>> ... Establishing the idea of a Universal String might not be a bad
>> idea. Although in reality, if we just standardize on "+" doing the
>> conversion and put it in the standard, I think the extra syntax
>> overhead is pretty minimal.
>
> All I can say is that this is language parochialism at its worst. Not
> Ada parochialism, but ASCII parochialism.

That does not seem a proper attribution. In issues like this, when you
insist on totally equal status, you are actually promoting not diversity,
but entropy. (It also seems that you have a mostly remote view of other
"parishes", and not enough information about the real-life problems with
national encodings in those other "parishes".)

> In Ada, ASCII and Latin-1 have no particular magic status. If you want
> to have a character type Cyrillic and a Cyrillic_String, Greek and
> Greek_String, or for that matter Japanese_Character and
> Japanese_String, go right ahead.

I can't see how this is related to the issue of the relation between
Strings and Unbounded_Strings, and to the implicit conversions between
them. Those implicit conversions should not change actual memory contents
in any way, and they are completely unrelated to any encodings.

BTW, when you mentioned Cyrillic_String you made me smile grimly. Do you
know that there are 3 live Cyrillic encodings? Do you know that, for
example, in Windows, the final effect of your Cyrillic encoding depends
not only upon the encoding, but upon the Regional Settings also? And
there are plenty of more subtle issues which may easily hurt you when you
deal with a Cyrillic encoding.
So, don't fancy that your Cyrillic_String will be of much help,
especially if you want to develop a robust product for actual field use.

> If you look in the Ada Reference Manual at sections 3.5.2 Character
> Types and 3.6.3 String Types, you will find all this.

I looked there, and I reread what Cohen's book says about all that stuff
- character sets, maps, and translation - and found nothing relevant to
the issue of implicit conversion between Strings and Unbounded_Strings.

> Now do you understand why you need the conversions from String to
> Unbounded_String to be explicit?

No, I haven't even the slightest idea.

> There could be mappings involved,

How can they be involved? From all I can see, there are no implicit
mappings (that is, implicit calls of the Translate subroutine) in the Ada
95 standard.

> and you definitely don't want two implicit conversions, because the
> intermediate type could be just about anything.

That could be bad indeed if mappings were involved, but they aren't.

I think that at this stage I should show my sketch proposition (which I
sent to Ada-Comment about a year ago). Here it is:

---------------------------------------------------------------------------
Generalizing the Strings/Unbounded_Strings issue, I would propose a new
notion of an "enveloped" private type. That is, a private type Y may be
declared as an envelope (new keyword) of some base type X:

   type Y is private envelope of X;

The enveloping type (Y above) is required to have 2 private primitive
operations:

   function Strip (Source : Y) return X;

and

   function Upgrade (Source : X) return Y;

which must be exact inverses of each other:

   Strip (Upgrade (V)) = V   and   Upgrade (Strip (W)) = W

and their implementation is severely restricted so that the compiler can
verify and guarantee these identities.
Then, a variable of the enveloped type may be immediately initialized
with a value of the enveloping type and vice versa, in all cases of
initialization, which include:

1) declaration with initialization

   V : X := R; -- where R is either a variable or constant of type Y
               -- or a function returning a result of type Y
   W : Y := S; -- where S is either a variable or constant of type X
               -- or a function returning a result of type X

2) argument for an "in" parameter of a subroutine call

   function F (A : in X; B : in Y) ...
   procedure P (A : in X; B : in Y) ...
   V : X;
   W : Y;
   ...
   ... := F (W, V);
   P (W, V);

3) argument for an "out" parameter of a procedure call

   procedure P1 (U : out X) ...
   procedure P2 (U : in out X) ...
   W : Y;
   ...
   P1 (W);
   P2 (W);

In all these cases the compiler provides implicit conversions between
types X and Y using the private operations Strip and Upgrade of Y.

Further, there may be several different envelopes for the same base type:

   type Y is private envelope of X;
   type Z is private envelope of X;

and one of those envelopes may be immediately used for an initialization
of a variable or parameter of another envelope type (as in the previous
case above). For example:

   procedure P (W : out Y) ...
   T : Z;
   ...
   P (T);

The compiler provides implicit conversions between types Y and Z using
the compositions Z.Upgrade (Y.Strip (...)) and Y.Upgrade (Z.Strip (...)).

I believe that the notion of an enveloped type may be considered (to some
extent) as the opposite of the notion of a subtype.
---------------------------------------------------------------------------

As I said recently, I can't work out this proposition to a precise and
complete form - I'm not a language lawyer. But I don't see why it may be
fundamentally wrong, and still nobody has told me that.

Some of today's comments on that 1-year-old text:

1) the subroutine names "Strip" and "Upgrade" should be seen as
"denotations", and To_X, To_Y (To_String and To_Unbounded_String in the
case of Strings) may actually be used instead.
2) although this proposition is about implicit conversions, the latter
are essentially identities, and what is most important, they are
applicable to explicitly related types only.

Alexander Kopilovitch    aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-06 23:57 ` Alexandre E. Kopilovitch
@ 2003-10-07  8:51 ` Dmitry A. Kazakov
  2003-10-08 19:12   ` Alexandre E. Kopilovitch
  2003-10-08 23:18   ` Robert I. Eachus
  1 sibling, 1 reply; 44+ messages in thread
From: Dmitry A. Kazakov @ 2003-10-07 8:51 UTC (permalink / raw)

On Tue, 7 Oct 2003 03:57:32 +0400, "Alexandre E. Kopilovitch"
<aek@vib.usr.pu.ru> wrote:

> Generalizing the Strings/Unbounded_Strings issue, I would propose a new
> notion of an "enveloped" private type. That is, a private type Y may be
> declared as an envelope (new keyword) of some base type X:
>
>    type Y is private envelope of X;
>
> [...]

Looks similar to my proposal for defining subtypes. Why call "envelope"
something which is a subtype? (:-)) It should be something like:

   type Y is private subtype X;

Note that for the sake of genericity one needs more than two conversions.
There should be four:

X_From_Y, X_To_Y - this pair is used when a Y gets substituted for an X,
i.e. when Y *inherits* a subprogram from X. One of these conversions
might be absent. Then Y becomes an in-subtype or an out-subtype of X:

   type Y is private in subtype X;
   -- Only in-subroutines are inherited, so only X_From_Y has to be
   -- defined

Y_From_X, Y_To_X - this pair is used when an X gets substituted for a Y,
i.e. when Y exports something to X. This is a way to create supertypes:

   type Y is private in out supertype X;

If all four conversions are present, both types become fully
interchangeable, which is actually what is required in the case of String
vs. Unbounded_String. So a definition of Unbounded_String could be:

   type Unbounded_String is
      private in out subtype String,
      in out supertype String;

   procedure Append
     (Source   : in out Unbounded_String;
      New_Item : in Unbounded_String);
   -- No need to define a variant with New_Item of String,
   -- because it will be automatically exported to String.

private

   type Unbounded_String is new Ada.Finalization.Controlled with record ...
Note also that there is no need to require X_From_Y (X_To_Y (X)) = X.
That is an implementation detail. The standard subtype does not ensure
that either: it raises Constraint_Error when a conversion is not
possible. For example, with Append exported to String:

   X : String (1 .. 80);
   Append (X, "something"); -- Constraint_Error

Here X was converted to Unbounded_String, then Append was called, and
then the result failed to convert back to String.

---
Regards,
Dmitry Kazakov
www.dmitry-kazakov.de

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-07  8:51 ` Dmitry A. Kazakov
@ 2003-10-08 19:12 ` Alexandre E. Kopilovitch
  2003-10-09  8:42   ` Dmitry A. Kazakov
  0 siblings, 1 reply; 44+ messages in thread
From: Alexandre E. Kopilovitch @ 2003-10-08 19:12 UTC (permalink / raw)
To: comp.lang.ada

Dmitry A. Kazakov wrote:

>>    type Y is private envelope of X;
>>
> [...]
>
> Looks similar to my proposal for defining subtypes.

I think so, yes; there is much in common, although there is also a
difference: your proposal is significantly broader. It is certainly
heavier as it pertains to possible consequences, and I have no immediate
opinion on whether it will be easier to find sound applications for your
proposal than for my narrower one.

> Why call "envelope" something which is a subtype? (:-))

"Subtype" in Ada implies a possibility of some kind of restriction
imposed on the base type, while the word "envelope" implies some kind of
extension (of functionality or applicability). You see, for language
terms I strongly prefer the application/user view to the compilation
theorist's view, even for advanced entities/constructions.

> It should be something like:
>
>    type Y is private subtype X;
>
> Note that for the sake of genericity one needs more than two
> conversions. There should be four:
>
> X_From_Y, X_To_Y - this pair is used when a Y gets substituted for an
> X, i.e. when Y *inherits* a subprogram from X. One of these conversions
> might be absent. Then Y becomes an in-subtype or an out-subtype of X:
>
>    type Y is private in subtype X;
>    -- Only in-subroutines are inherited, so only X_From_Y has to be
>    -- defined
>
> Y_From_X, Y_To_X - this pair is used when an X gets substituted for a
> Y, i.e. when Y exports something to X. This is a way to create
> supertypes:
>
>    type Y is private in out supertype X;

Well, it seems that your proposal uses basic diagrams while mine is
restricted to isomorphic representations.
> If all four conversions are present, both types become fully
> interchangeable, which is actually what is required in the case of
> String vs. Unbounded_String.

I don't see why two conversions (which provide an isomorphic
representation) aren't enough for String vs. Unbounded_String... if we
have no intention to extend the current functionality (except for making
implicit conversions possible).

Alexander Kopilovitch    aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-08 19:12 ` Alexandre E. Kopilovitch
@ 2003-10-09  8:42 ` Dmitry A. Kazakov
  2003-10-10 20:58   ` Alexander Kopilovitch
  0 siblings, 1 reply; 44+ messages in thread
From: Dmitry A. Kazakov @ 2003-10-09 8:42 UTC (permalink / raw)

On Wed, 8 Oct 2003 23:12:35 +0400 (MSD), "Alexandre E. Kopilovitch"
<aek@vib.usr.pu.ru> wrote:

> Dmitry A. Kazakov wrote:
>
>>>    type Y is private envelope of X;
>>>
>> [...]
>>
>> Looks similar to my proposal for defining subtypes.
>
> I think so, yes; there is much in common, although there is also a
> difference: your proposal is significantly broader. It is certainly
> heavier as it pertains to possible consequences, and I have no
> immediate opinion on whether it will be easier to find sound
> applications for your proposal than for my narrower one.

Yes. I wish Ada's ADT were fully reviewed. Yet I believe that it can
remain compatible with Ada 83.

>> Why call "envelope" something which is a subtype? (:-))
>
> "Subtype" in Ada implies a possibility of some kind of restriction
> imposed on the base type, while the word "envelope" implies some kind
> of extension (of functionality or applicability). You see, for language
> terms I strongly prefer the application/user view to the compilation
> theorist's view, even for advanced entities/constructions.

This is the crucial point. Subtype should imply nothing but
substitutability, in the sense that you can pass objects of the subtype
where the base type is expected. The rest is implementation details. A
subtype can be made either by a specialization (like constraining) or a
generalization (like type extension) or by providing a completely new
implementation (interface inheritance from a non-abstract type). But all
these ways are no more than implementation details which could be moved
to the private part.
>> It should be something like:
>>
>>    type Y is private subtype X;
>>
>> Note that for the sake of genericity one needs more than two
>> conversions. There should be four:
>>
>> X_From_Y, X_To_Y - this pair is used when a Y gets substituted for an
>> X, i.e. when Y *inherits* a subprogram from X. One of these
>> conversions might be absent. Then Y becomes an in-subtype or an
>> out-subtype of X:
>>
>>    type Y is private in subtype X;
>>    -- Only in-subroutines are inherited, so only X_From_Y has to be
>>    -- defined
>>
>> Y_From_X, Y_To_X - this pair is used when an X gets substituted for a
>> Y, i.e. when Y exports something to X. This is a way to create
>> supertypes:
>>
>>    type Y is private in out supertype X;
>
> Well, it seems that your proposal uses basic diagrams while mine is
> restricted to isomorphic representations.

You are poisoned by LSP! (:-))

>> If all four conversions are present, both types become fully
>> interchangeable, which is actually what is required in the case of
>> String vs. Unbounded_String.
>
> I don't see why two conversions (which provide an isomorphic
> representation) aren't enough for String vs. Unbounded_String... if we
> have no intention to extend the current functionality (except for
> making implicit conversions possible).

Because I do not want isomorphic representations. I want a fundamentally
new, universal concept of subtyping which would cover all known cases.

For instance, it would work for tagged extensible types. If the subtype
just extends the base (contains its representation), then the pair of
conversions Base_*_Derived are just view conversions, because Derived has
an instance of Base. The other pair Derived_*_Base should create and
destroy a completely new Derived object. Presently the second pair is
outlawed in Ada, so we have only view conversions from Derived to Base
(forth and back). This limits the use of tagged types.
For example, they cannot be directly used to implement multiple
representations; I mean the cases when Derived has to be an equivalent
[sub- and supertype] of Base, and not just a subtype.

---
Regards,
Dmitry Kazakov
www.dmitry-kazakov.de

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-09  8:42 ` Dmitry A. Kazakov
@ 2003-10-10 20:58 ` Alexander Kopilovitch
  2003-10-13  8:35   ` Dmitry A. Kazakov
  0 siblings, 1 reply; 44+ messages in thread
From: Alexander Kopilovitch @ 2003-10-10 20:58 UTC (permalink / raw)

Dmitry A. Kazakov wrote:

>> your proposal is significantly broader; it is certainly heavier as it
>> pertains to possible consequences, and I have no immediate opinion on
>> whether it will be easier to find sound applications for your proposal
>> than for my narrower one.
>
> Yes. I wish Ada's ADT were fully reviewed. Yet I believe that it can
> remain compatible with Ada 83.

Well, I don't think that Ada 95's ADT is inferior to the ADT system of
any other programming language... But I'd like to note here that in Ada
95 you should consider ADT alongside packages, because structuring in Ada
95 relies upon both of these systems in their interdependent combination.

>>> Why call "envelope" something which is a subtype? (:-))
>>
>> "Subtype" in Ada implies a possibility of some kind of restriction
>> imposed on the base type, while the word "envelope" implies some kind
>> of extension (of functionality or applicability). You see, for
>> language terms I strongly prefer the application/user view to the
>> compilation theorist's view, even for advanced entities/constructions.
>
> This is the crucial point. Subtype should imply nothing but
> substitutability, in the sense that you can pass objects of the subtype
> where the base type is expected. The rest is implementation details. A
> subtype can be made either by a specialization (like constraining) or a
> generalization (like type extension) or by providing a completely new
> implementation (interface inheritance from a non-abstract type). But
> all these ways are no more than implementation details which could be
> moved to the private part.
I think that then it should not be called "subtype"; there should be
another, more precise name. Further, such a construct will probably have
far-reaching consequences (being combined with other, already existing
features). I don't think that such a big leap should be made into the
dark, without much appeal, simply following a vague analogy. Some good
justification must be provided in advance, as we aren't in C++. That
justification may be either theoretical or practical.

For a theoretical one, I'd prefer (well, actually I have long dreamed of
this) that somebody would give a grant to Grothendieck (while he is
alive) for exploring and reviewing structuring systems in programming in
general and object systems in particular; most probably this way we'd
acquire a superior theoretical foundation, which we will be unable to
reach otherwise for a decade or two.

For a practical one, you (yes, you, as you expressly wish for this
construct -:) should provide an Ada-specific example where this construct
is natural for some problem space. You may notice that this "requirement"
of mine somehow differs from the standard one -:) -- it is needed not for
justification of the effort, but rather as guidance in dealing with the
subtleties and consequences.

>> Well, it seems that your proposal uses basic diagrams while mine is
>> restricted to isomorphic representations.
>
> You are poisoned by LSP! (:-))

I can't deny it because I don't know what it means (I'm not a native
English speaker who can easily differentiate POW = Prisoner of War from
POW = Prince of Wales). So, what is that LSP? Late Super Power? Large
Sentence Propagator? Lightweight Streamline Processing? -:)

> I want a fundamentally new, universal concept of subtyping which would
> cover all known cases.
>
> For instance, it would work for tagged extensible types.
> If the subtype just extends the base (contains its representation),
> then the pair of conversions Base_*_Derived are just view conversions,
> because Derived has an instance of Base. The other pair Derived_*_Base
> should create and destroy a completely new Derived object. Presently
> the second pair is outlawed in Ada, so we have only view conversions
> from Derived to Base (forth and back). This limits the use of tagged
> types. For example, they cannot be directly used to implement multiple
> representations; I mean the cases when Derived has to be an equivalent
> [sub- and supertype] of Base, and not just a subtype.

Well, I must confess I can't understand all this - probably because I
never learned Computer Science -;) . I have learned some mathematics,
including functional analysis and algebraic topology, so I can understand
abstract algebraic and categorical models; I have some experience with
real programming applications, so I can understand structuring of a
problem space; but I become essentially lost in a "professional" mixture
of Computer Science and Software Engineering - I simply cannot catch the
true meaning of the words.

Alexander Kopilovitch    aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-10 20:58 ` Alexander Kopilovitch
@ 2003-10-13  8:35 ` Dmitry A. Kazakov
  2003-10-13 21:43   ` Alexandre E. Kopilovitch
  0 siblings, 1 reply; 44+ messages in thread
From: Dmitry A. Kazakov @ 2003-10-13 8:35 UTC (permalink / raw)

On 10 Oct 2003 13:58:29 -0700, aek@vib.usr.pu.ru (Alexander Kopilovitch)
wrote:

> Dmitry A. Kazakov wrote:
>
>>> your proposal is significantly broader; it is certainly heavier as it
>>> pertains to possible consequences, and I have no immediate opinion on
>>> whether it will be easier to find sound applications for your
>>> proposal than for my narrower one.
>>
>> Yes. I wish Ada's ADT were fully reviewed. Yet I believe that it can
>> remain compatible with Ada 83.
>
> Well, I don't think that Ada 95's ADT is inferior to the ADT system of
> any other programming language...

Why should we compare? And frankly, there is no language with a good ADT
at present.

> But I'd like to note here that in Ada 95 you should consider ADT
> alongside packages, because structuring in Ada 95 relies upon both of
> these systems in their interdependent combination.

It is a common misconception. I see where it comes from: C++ classes are
usually compared with Ada packages. Why should we slavishly follow C++,
which has *incurable* problems with ADT?

>>>> Why call "envelope" something which is a subtype? (:-))
>>>
>>> "Subtype" in Ada implies a possibility of some kind of restriction
>>> imposed on the base type, while the word "envelope" implies some kind
>>> of extension (of functionality or applicability). You see, for
>>> language terms I strongly prefer the application/user view to the
>>> compilation theorist's view, even for advanced
>>> entities/constructions.
>>
>> This is the crucial point. Subtype should imply nothing but
>> substitutability, in the sense that you can pass objects of the
>> subtype where the base type is expected. The rest is implementation
>> details.
A subtype can be made either by a specialization (like >> constraining) or a generalization (like type extension) or by >> providing a completely new implementation (interface inheritance from >> a non-abstract type). But all these ways are no more than >> implementation details which could be moved to the private part. > >I think that then it should not be called "subtype", there should be another, Well, there is "subclass", but "class" is already in use, Ada's class is a type closure. But I see nothing wrong in calling it sub- and supertype. >more precise name. Further, such a construct probably will have far-reaching >consequences (being combined with other, already existing features). I don't >think that such a big leap should be made into the dark, and without much >appeal, simply following a vague analogy. Some good justification must be >provided in advance, as we aren't in C++. That justification may be either >theoretical or practical. I think that this construct will have very little influence on the existing ones. However it will make it possible to express the existing ones in new terms. For instance, both Ada's "tagged types" and "subtypes" could then be viewed as abbreviations. >For a practical one, you (yes, you, as you expressly wish this construct -:) >should provide an Ada-specific example where this construct is natural for some >problem space. You may notice that this "requirement" of mine somewhat differs from >the standard one -:) -- this is needed not for justification of efforts, but >rather for guidance in dealing with subtleties and consequences. > >> >Well, it seems that your proposal uses basic diagrams while mine is restricted >> >to isomorphic representations. >> >> You are poisoned by LSP! (:-)) > >I can't deny it because I don't know what it means (I'm not a native English >speaker who can easily differentiate POW = Prisoner of War from POW = Prince >of Wales). So, what is that LSP? Late Super Power? Large Sentence Propagator? 
>Lightweight Streamline Processing? -:) LSP = Liskov Substitution Principle. The idea is that a derived thing is a subtype (LSP-subtype) if and only if all its instances are in all contexts substitutable where the base is expected. Under LSP, substitutability is understood in terms of the program semantics (behaviour). For this reason "subtype" cannot become a language term. So people have invented "subclass", which is a language term then. And of course, "subclass" /= "subtype". Then it is obvious that an absolute LSP requires isomorphic value sets, which makes it useless. Mathematically LSP is sort of: Y is a subtype of X if any statement including X remains true if Y is substituted for X (and under all quantifiers). So square is not an LSP-subtype of rectangle; constant Integer is not of Integer; String (1..80) is not of String etc. >> I want a >> fundamentally new, universal concept of subtyping which would cover >> all known cases. >> >> For instance, it would work for tagged extensible types. If the >> subtype just extends the base (contains its representation), then the >> pair of conversions Base_*_Derived are just view conversions, because >> Derived has an instance of Base. Another pair Derived_*_Base should >> create and destroy a completely new Derived object. Presently the >> second pair is outlawed in Ada, so we have only view conversions from >> Derived to Base (back and forth). This limits the use of tagged types. For >> example, they cannot be directly used to implement multiple >> representations, I mean the cases when Derived has to be an equivalent >> [sub- and supertype] of Base, and not just a subtype. > >Well, I must confess I can't understand all this - probably because I never >learned Computer Science -;) . 
I have learned some mathematics, including >functional analysis and algebraic topology - so I can understand abstract >algebraic and categorical models; I have some experience with real programming >applications - so I can understand structuring of a problem space; but I become >essentially lost in a "professional" mixture of Computer Science and Software >Engineering - I simply cannot catch true meaning of words. It is quite simple. When you write: type Y is new X with ...; then each Y contains an instance of X. This is programming by extension. Because each Y has X, to substitute an Y for X you need not create any new object if: 1. X is passed by reference 2. X in Y is aliased So to call Foo (Object : in out X); on some Y you can calculate a reference to X in Y (X_From_Y conversion); call Foo; drop the reference (X_To_Y conversion). So this pair is just view conversions. [ See now, why in Ada tagged types are by-reference ones? ] Now, imagine, that you have defined Bar (Object : in out Y); and want to export it to X (which is illegal in Ada), so that Bar could be called with an X. For this you have to provide true conversions. Y_From_X and Y_To_X which would create a new object Y: You create a new Y from X; you pass it to Bar; you store X-part of Y back into the argument. That's it. --- Regards, Dmitry Kazakov www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 44+ messages in thread
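Kazakov's Foo/Bar scenario above can be sketched in legal Ada 95. The names X, Y, Foo and Bar follow the post; the explicit extension aggregate and conversion stand in for the Y_From_X/Y_To_X pair that the language does not supply, so take this as an illustrative sketch rather than the proposal itself:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure View_Conversion_Sketch is

   package Types is
      type X is tagged record
         Value : Integer := 0;
      end record;
      procedure Foo (Object : in out X);

      type Y is new X with record
         Extra : Integer := 0;
      end record;
      procedure Bar (Object : in out Y);
   end Types;

   package body Types is
      procedure Foo (Object : in out X) is
      begin
         Object.Value := Object.Value + 1;
      end Foo;

      procedure Bar (Object : in out Y) is
      begin
         Object.Value := Object.Value + Object.Extra;
      end Bar;
   end Types;

   use Types;

   A : X;
   B : Y;
begin
   --  Substituting a Y where an X is expected needs no new object:
   --  Foo operates on the X view of B (a view conversion).
   Foo (X (B));

   --  The reverse direction is the one Ada forbids as an implicit
   --  mechanism: to call Bar on an X we must build a whole new Y
   --  by hand and copy the X-part back afterwards.
   declare
      Temp : Y := (A with Extra => 41);  --  "Y_From_X": extension aggregate
   begin
      Bar (Temp);
      A := X (Temp);                     --  "Y_To_X": store the X-part back
   end;

   Put_Line ("A.Value =" & Integer'Image (A.Value));
end View_Conversion_Sketch;
```

The call Foo (X (B)) is the cheap direction: a view conversion, no new object. The declare block is the expensive direction: a complete Y must be created and destroyed, which is exactly the pair of "true conversions" the post describes.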
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-13 8:35 ` Dmitry A. Kazakov @ 2003-10-13 21:43 ` Alexandre E. Kopilovitch 2003-10-14 8:09 ` Dmitry A. Kazakov 0 siblings, 1 reply; 44+ messages in thread From: Alexandre E. Kopilovitch @ 2003-10-13 21:43 UTC (permalink / raw) To: comp.lang.ada Dmitry A. Kazakov wrote: > >> ... I wish Ada's ADT be fully reviewed. Yet, I believe, that it can > >> remain compatible with Ada 83. > > > >Well, I don't think that Ada 95 ADT is inferior relative to ADT system in any > >other programming language... > > Why should we compare? Well, what should we do otherwise? -;) Yes, we can compare not against other programming languages, but against perceived needs or against sound theoretical models (which one do you prefer?), but we still should compare. > And frankly, there is no language with a good ADT, presently. Well, I don't know ALL languages... for example (as it pertains to ADT) I don't know Eiffel, I don't know the recent Norwegian branch (Abel, Beta), I don't know the ML descendants (which some algebraically-oriented teams are developing), to name a few. I recall that ADTs in the Limbo language were somewhat original, at least in that they were called exactly ADT, and in fact there was a 3-level object system, in which ADT was the 2nd level. > >But I'd like to note here that in Ada 95 you > >should consider ADT alongside with packages, because structuring in Ada 95 > >relies upon both these systems in their interdependent combination. > > It is a common misconception. Really? I am slightly surprised - not only with your opinion that this is a misconception, but also with the claim that it is "common". > I see where it comes from. C++ classes > are usually compared with Ada packages. I did not imply that, and I really don't think so. 
In Ada, hierarchies of packages are clearly separated from hierarchies of types, while in C++ we have single kind of hierarchy - of classes (which has multiple inheritance in some compensation). > >> >"subtype" in Ada implies a possibility of some kind of restriction imposed on > >> >the base type, while the word "envelope" implies some kind of extension (of > >> >functionality or applicability). You see, for a language terms I strongly > >> >prefer application/user view to a compilation theory's view, even for advanced > >> >entities/constructions. > >> > >> This is the crucial point. Subtype should imply nothing, but > >> substitutability, in the sense that you can pass objects of the > >> subtype where the base type is expected. The rest is implementation > >> details. A subtype can be made either by a specialization (like > >> constraining) or a generalization (like type extension) or by > >> providing a completely new implementation (interface inheritance from > >> a non-abstract type). But all these ways are no more than > >> implementation details which could be moved to the private part. > > > >I think that then it should not be called "subtype", there should be another, > > Well, there is "subclass", but "class" is already in use, Ada's class > is a type closure. But I see nothing wrong in calling it sub- and > supertype. I disagree with use of sub- prefix for something that is not a proper restriction (in some respect) relative to the base. I always found usage of the word "subtype" in Ada very appropriate; at the same time I found informal use of "subclass" term in C++ and some other languages quite corresponding to their eclectic and over-pragmatic character. > >more precise name. Further, such a construct probably will have far fetched > >consequences (being combined with other, already existing features). I don't > >think that such a big leap should be made into the dark, and without much > >appeal, simply following a vague analogy. 
Some good justification must be > >provided in advance, as we aren't in C++. That justification may be either > >theoretical or practical. > > I think that this construct will have very little influence on the > existing ones. Well, you can change the landscape completely without touching usable lands - just breaking the borders between them. > However it will make it possible to express the existing ones in > new terms. For instance, both Ada's "tagged types" and "subtypes" > could then be viewed as abbreviations. Don't you think that this is quite a serious claim? Don't you think that such "groundbreaking" views deserve complete and consistent presentation - not within a newsgroup dialogue, but in the form of an article? Whether these views of yours are right or wrong, it is practically impossible to analyze them (or to decide that there is nothing to analyze -;) , until they are presented in the form of a set of refutable statements. For example, I can't conclude from your present words whether you are trying to add some dose of SmallTalk-like flexibility to Ada's "type machine". > LSP = Liskov Substitution Principle. The idea is that a derived > thing is a subtype (LSP-subtype) if and only if all its instances are > in all contexts substitutable where the base is expected. Under LSP, > substitutability is understood in terms of the program semantics (behaviour). Well, let it be LSP, although I find this not much different from the notion of "particular case" in mathematics. > For this reason "subtype" cannot become a language term. You mean that it is a meta-language term, right? > So people > have invented "subclass", which is a language term then. And of > course, "subclass" /= "subtype". Then it is obvious that an absolute > LSP requires isomorphic value sets, which makes it useless. > Mathematically LSP is sort of: Y is a subtype of X if any statement > including X remains true if Y is substituted for X (and under all > quantifiers). 
So square is not a LSP-subtype of rectangle; constant > Integer is not of Integer; String (1..80) is not of String etc. I can't agree with all that: not "any statement", far from that, but only statements expressed in terms of operations defined for that type, and moreover, only statements that are valid for all values of the type (vs. statements that are valid for some individual values only). So square IS a particular case of rectangle: all statements that are valid for ALL rectangles are automatically valid for all squares. > >> I want a > >> fundamentally new, universal concept of subtyping which would cover > >> all known cases. Do you mean that this concept is completely original, or you have some references to somehow similar concepts? > > When you write: > > > > type Y is new X with ...; > > > > then each Y contains an instance of X. This is programming by > > extension. Because each Y has X, to substitute an Y for X you need not > > create any new object if: > > > > 1. X is passed by reference > > 2. X in Y is aliased > > > > So to call Foo (Object : in out X); on some Y you can calculate a > > reference to X in Y (X_From_Y conversion); call Foo; drop the > > reference (X_To_Y conversion). So this pair is just view conversions. > > [ See now, why in Ada tagged types are by-reference ones? ] > > > > Now, imagine, that you have defined Bar (Object : in out Y); and want > > to export it to X (which is illegal in Ada), so that Bar could be > > called with an X. For this you have to provide true conversions. > > Y_From_X and Y_To_X which would create a new object Y: > > > > You create a new Y from X; you pass it to Bar; you store X-part of Y > > back into the argument. Well, I think I understand now. Perhaps some languages will accept this construct, but as for Ada, I'm pretty sure that Ada will reject it (Ada does not like tools/features that are both powerful and without a bunch of restraints). 
Ada may accept it informally, as a design pattern, but will never accept it as an intrinsic feature, which (after all) is permitted to act implicitly. Alexander Kopilovitch aek@vib.usr.pu.ru Saint-Petersburg Russia ^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-13 21:43 ` Alexandre E. Kopilovitch @ 2003-10-14 8:09 ` Dmitry A. Kazakov 2003-10-16 9:39 ` Alexandre E. Kopilovitch 0 siblings, 1 reply; 44+ messages in thread From: Dmitry A. Kazakov @ 2003-10-14 8:09 UTC (permalink / raw) On Tue, 14 Oct 2003 01:43:46 +0400 (MSD), "Alexandre E. Kopilovitch" <aek@vib.usr.pu.ru> wrote: >Dmitry A. Kazakov wrote: > >> >> ... I wish Ada's ADT be fully reviewed. Yet, I believe, that it can >> >> remain compatibile to Ada 83. >> > >> >Well, I don't think that Ada 95 ADT is inferior relative to ADT system in any >> >other programming language... >> >> Why should we compare? > >Well, what should we do otherwise? -;) Yes, we can compare not against other >programming languages, but against perceived needs or against sound theoretical >models (which one you prefer?), but we still should compare. > >> And frankly, there is no language with a good ADT, presently. > >Well, I don't know ALL languages... for example (as it pertains to ADT) I >don't know Eiffel, I don't know recent Norwegian branch (Abel, Beta), I don't >know ML descendants (which some algebraically-oriented teams are developing), >to name a few. I recall that ADT in Limbo language were somehow original, >at least in that they were called exactly ADT, and in fact there was 3-level >object system, in which ADT was 2nd level. Well, probably, there is a ready to use fusion reactor hidden somewhere. Who knows... (:-)) >> >But I'd like to note here that in Ada 95 you >> >should consider ADT alongside with packages, because structuring in Ada 95 >> >relies upon both these system in their interdependent combination. >> >> It is a common misconception. > >Really? I am slightly surprised - not only with your opinion that this is a >misconception, but also with the claim that it is "common". Packages are pretty orthogonal to types. And it is good so. 
BTW all cases in Ada where it is not so (like requirements to do something at the library level) are rather boring. >> I see where it comes from. C++ classes >> are usually compared with Ada packages. > >I did not imply that, and I really don't think so. In Ada, hierarchies of >packages are clearly separated from hierarchies of types, while in C++ we have >single kind of hierarchy - of classes (which has multiple inheritance in some >compensation). Yes, this is why Ada packages have little to do with ADT. >> >> >"subtype" in Ada implies a possibility of some kind of restriction imposed on >> >> >the base type, while the word "envelope" implies some kind of extension (of >> >> >functionality or applicability). You see, for a language terms I strongly >> >> >prefer application/user view to a compilation theory's view, even for advanced >> >> >entities/constructions. >> >> >> >> This is the crucial point. Subtype should imply nothing, but >> >> substitutability, in the sense that you can pass objects of the >> >> subtype where the base type is expected. The rest is implementation >> >> details. A subtype can be made either by a specialization (like >> >> constraining) or a generalization (like type extension) or by >> >> providing a completely new implementation (interface inheritance from >> >> a non-abstract type). But all these ways are no more than >> >> implementation details which could be moved to the private part. >> > >> >I think that then it should not be called "subtype", there should be another, >> >> Well, there is "subclass", but "class" is already in use, Ada's class >> is a type closure. But I see nothing wrong in calling it sub- and >> supertype. > >I disagree with use of sub- prefix for something that is not a proper restriction >(in some respect) relative to the base. There are clear restrictions: A, a subtype of B (A <: B), means that A is substitutable for B = there is a conversion forth and back to B. B, a supertype of A (B :> A) means A <: B. 
> I always found usage of the word >"subtype" in Ada very appropriate; at the same time I found informal use of >the "subclass" term in C++ and some other languages quite corresponding to their >eclectic and over-pragmatic character. Frankly, I see no difference. Ada 83 had "type", so its constrained descendant became "subtype". C++ had "class", so "subclass" was invented. >> >more precise name. Further, such a construct probably will have far-reaching >> >consequences (being combined with other, already existing features). I don't >> >think that such a big leap should be made into the dark, and without much >> >appeal, simply following a vague analogy. Some good justification must be >> >provided in advance, as we aren't in C++. That justification may be either >> >theoretical or practical. >> >> I think that this construct will have very little influence on the >> existing ones. > >Well, you can change the landscape completely without touching usable lands - >just breaking the borders between them. That is what I want. >> However it will make it possible to express the existing ones in >> new terms. For instance, both Ada's "tagged types" and "subtypes" >> could then be viewed as abbreviations. > >Don't you think that this is quite a serious claim? I do. > Don't you think that such >"groundbreaking" views deserve complete and consistent presentation - not >within a newsgroup dialogue, but in the form of an article? Yes, and more than that. Probably one needs an experimental version of the language. However the idea is pretty simple. Actually it is just separating implementation and interface. If you follow it consistently, you will define a type as a set of public operations applied to the values of a private representation. If you then try to define what a derived type could be, the only way you could do it is in terms of conversions. If you do that, you will see that, in fact, all known cases of derived types fall under this model. 
>Whether these views of yours are right or wrong, it is practically impossible to analyze them (or to decide that there is nothing to analyze -;) , until they are presented in the form of a set of refutable statements. See above. It is not just one article, but several. Do you have a grant for me? (:-)) >For example, I can't conclude from your present words whether you are trying >to add some dose of SmallTalk-like flexibility to Ada's "type machine". No. I want to evolve Ada's ADT. I want *less* "flexibility", less built-in types, less generics, less access discriminants, less pointers, less kludges of any kind. It is a pity that each new change adds new pragmas, attributes and other hard-wired things to Ada. It is time to make the language simpler. >> LSP = Liskov Substitution Principle. The idea is that a derived >> thing is a subtype (LSP-subtype) if and only if all its instances are >> in all contexts substitutable where the base is expected. Under LSP, >> substitutability is understood in terms of the program semantics (behaviour). > >Well, let it be LSP, although I find this not much different from the notion >of "particular case" in mathematics. > >> For this reason "subtype" cannot become a language term. > >You mean that it is a meta-language term, right? Yes. It becomes an OOA/D term and disappears in the swamp. >> So people >> have invented "subclass", which is a language term then. And of >> course, "subclass" /= "subtype". Then it is obvious that an absolute >> LSP requires isomorphic value sets, which makes it useless. >> Mathematically LSP is sort of: Y is a subtype of X if any statement >> including X remains true if Y is substituted for X (and under all >> quantifiers). 
> >I can't agree with all that: not "any statement", far from that, but only >statements expressed in terms of operations defined for that type, and moreover, >only statements that are valid for all values of the type (vs. statements that >are valid for some individual values only). So square IS a particular case of >rectangle: all statements that are valid for ALL rectangles are automatically >valid for all squares. This is again a common misconception. People with a mathematical background just cannot imagine what LSP goes after. It is not about rectangles, it is about types, i.e. about sets of rectangles in general. Now consider this statement: for all X, Y there is a <rectangle> with height=X and width=Y For a square to be an LSP-subtype of rectangle means that the above will remain true if you change <rectangle> to <square>! >> >> I want a >> >> fundamentally new, universal concept of subtyping which would cover >> >> all known cases. > >Do you mean that this concept is completely original, or you have some >references to somehow similar concepts? The idea is too simple to call it a concept. As for implementations, I know no language which allows this. >> > When you write: >> > >> > type Y is new X with ...; >> > >> > then each Y contains an instance of X. This is programming by >> > extension. Because each Y has X, to substitute an Y for X you need not >> > create any new object if: >> > >> > 1. X is passed by reference >> > 2. X in Y is aliased >> > >> > So to call Foo (Object : in out X); on some Y you can calculate a >> > reference to X in Y (X_From_Y conversion); call Foo; drop the >> > reference (X_To_Y conversion). So this pair is just view conversions. >> > [ See now, why in Ada tagged types are by-reference ones? ] >> > >> > Now, imagine, that you have defined Bar (Object : in out Y); and want >> > to export it to X (which is illegal in Ada), so that Bar could be >> > called with an X. For this you have to provide true conversions. 
>> > Y_From_X and Y_To_X which would create a new object Y: >> > >> > You create a new Y from X; you pass it to Bar; you store X-part of Y >> > back into the argument. > >Well, I think I understand now. Perhaps some languages will accept this construct, >but as for Ada, I'm pretty sure that Ada will reject it (Ada does not like >tools/features that are both powerful and without a bunch of restraints). >Ada may accept it informally, as a design pattern, but will never accept it >as an intrinsic feature, which (after all) is permitted to act implicitly. Do you mean tagged types (:-)), or supertypes or something else? --- Regards, Dmitry Kazakov www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 44+ messages in thread
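Kazakov's earlier remark that String (1..80) is not an LSP-subtype of String can be shown in a few lines of Ada. This is a minimal sketch; the names String_80, Length_Of and Lsp_Sketch are invented, and a compiler may warn statically about the assignment that is written to fail:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Lsp_Sketch is
   subtype String_80 is String (1 .. 80);

   --  Any operation stated for String also accepts a String_80 ...
   function Length_Of (S : String) return Natural is
   begin
      return S'Length;
   end Length_Of;

   Line : String_80 := (others => ' ');
begin
   Put_Line (Natural'Image (Length_Of (Line)));  --  substitution works here

   --  ... but the statement "any String value can be assigned to a
   --  String variable" does not survive the substitution: the length
   --  check of the constrained subtype fails at run time.
   begin
      Line := "short";
   exception
      when Constraint_Error =>
         Put_Line ("Constraint_Error: String_80 is not substitutable here");
   end;
end Lsp_Sketch;
```

This is the distinction the thread keeps circling: the constrained subtype is freely usable where a String is merely read, yet not every true statement about String remains true for String (1..80).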
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-14 8:09 ` Dmitry A. Kazakov @ 2003-10-16 9:39 ` Alexandre E. Kopilovitch 2003-10-18 10:57 ` Dmitry A. Kazakov 0 siblings, 1 reply; 44+ messages in thread From: Alexandre E. Kopilovitch @ 2003-10-16 9:39 UTC (permalink / raw) To: comp.lang.ada Dmitry A. Kazakov wrote: > >I disagree with use of sub- prefix for something that is not a proper restriction > >(in some respect) relative to the base. > > There are clear restrictions: > > A, a subtype of B (A <: B), means that A is substitutable for B = > there is a conversion forth and back to B. > > B, a supertype of A (B :> A) means A <: B. I meant that "subtype" should be in some sense smaller than the base type, or at least should be somehow subordinate to the base type. But for the relations you presented it is unclear which kind of subordination they imply. Can you put it all in more rigorous terms? > >Well, you can change the landscape completely without touching usable lands - > >just breaking the borders between them. > > That is what I want. So, as you want to change the landscape completely (or at least very substantially), you should have some general image of that new landscape beforehand, right? And you should have some grounds to assert that the new landscape is viable, that is, sufficiently stable and usable. Or do you just want to make changes and then see the consequences as they emerge? > > Don't you think that such > >"groundbreaking" views deserve complete and consistent presentation - not > >within a newsgroup dialogue, but in the form of an article? > > Yes, and more than that. Probably one needs an experimental version of > the language. This made me pretty sceptical. Well, just one more experimental language, created just for illustration of some concept(s) or construct(s) (in the best case) or for pleasing the author's ambitions (in the worst case). Nobody knows and nobody cares. 
Well, if you know a more substantial purpose for that language then tell it - it is too hard to guess what it may be. > However the idea is pretty simple. Actually it is just separating > implementation and interface. If you follow it consistently, you will > define a type as a set of public operations applied to the values of > a private representation. If you then try to define what a derived > type could be, the only way you could do it is in terms of > conversions. If you do that, you will see that, in fact, all known cases > of derived types fall under this model. A simple idea does not imply simple consequences. And the consequences, and the methods to deal with them, are a bitter field to be explored. Well, I vaguely remember that there were metaclasses... quite a popular thing N years ago. I think that there was something in common with what you propose. Perhaps even in a more developed form. But I may be mistaken here, I remember nothing concrete about those metaclasses... except that a flavor of them was present, for example, in IBM's SOM. > >For example, I can't conclude from your present words whether you are trying > >to add some dose of SmallTalk-like flexibility to Ada's "type machine". > > No. I want to evolve Ada's ADT. I want *less* "flexibility", less > built-in types, less generics, less access discriminants, less > pointers, less kludges of any kind. > > It is a pity that each new change adds new pragmas, attributes and > other hard-wired things to Ada. It is time to make the language > simpler. But the complexity of a language must somehow correspond to the complexity of the informational body for which it is used. And typical Ada applications so far are relatively close to the real world, which certainly has a complex informational body, as we view it scientifically/technically. Therefore you can't just make the language simpler - before that you should provide some new view of the real world which decreases the complexity of the informational body. 
Well, like Copernicus did. -:) > >> Mathematically LSP is sort of: Y is a subtype of X if any statement > >> including X remains true if Y is substituted for X (and under all > >> quantifiers). So square is not an LSP-subtype of rectangle; constant > >> Integer is not of Integer; String (1..80) is not of String etc. > > > >I can't agree with all that: not "any statement", far from that, but only > >statements expressed in terms of operations defined for that type, and moreover, > >only statements that are valid for all values of the type (vs. statements that > >are valid for some individual values only). So square IS a particular case of > >rectangle: all statements that are valid for ALL rectangles are automatically > >valid for all squares. > > This is again a common misconception. People with a mathematical > background just cannot imagine what LSP goes after. Well, I must tell you that there are people that have more than one background, and mathematics can be just one of them -:) Yes, this phenomenon is not common, but nevertheless it happens sometimes. > It is not about > rectangles, it is about types, i.e. about sets of rectangles in > general. Now consider this statement: > > for all X, Y there is a <rectangle> with height=X and width=Y > > For a square to be an LSP-subtype of rectangle means that the above > will remain true if you change <rectangle> to <square>! Incorrect, though. Your example statement is not about all rectangles... it isn't sufficient just to use the words "for all" somewhere in the statement -;) > >> >> I want a > >> >> fundamentally new, universal concept of subtyping which would cover > >> >> all known cases. > > > >Do you mean that this concept is completely original, or you have some > >references to somehow similar concepts? > > The idea is too simple to call it a concept. But it may be just a starting, undeveloped idea; and it may appear that it has unavoidable consequences, which lead to substantial extensions etc., etc. 
> As for implementations, I know no language which allows this. Isn't it strange? A simple idea, allegedly useful, but still unused after 50+ years of programming. Alexander Kopilovitch aek@vib.usr.pu.ru Saint-Petersburg Russia ^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-16 9:39 ` Alexandre E. Kopilovitch @ 2003-10-18 10:57 ` Dmitry A. Kazakov 0 siblings, 0 replies; 44+ messages in thread From: Dmitry A. Kazakov @ 2003-10-18 10:57 UTC (permalink / raw) Alexandre E. Kopilovitch wrote: > Dmitry A. Kazakov wrote: > >> >I disagree with use of sub- prefix for something that is not a proper >> >restriction (in some respect) relative to the base. >> >> There are clear restrictions: >> >> A, a subtype of B (A <: B), means that A is substitutable for B = >> there is a conversion forth and back to B. >> >> B, a supertype of A (B :> A) means A <: B. > > I meant that "subtype" should be in some sense smaller than the base type, > or at least should be somehow subordinate to the base type. But for the > relations you presented it is unclear which kind of subordination they > imply. Can you put it all in more rigorous terms? The above is far more rigorous than "to be smaller in some sense". Smaller in which sense? Remember the fundamental principle - we are talking not about values, but about mappings between them. If you tried to define that "smaller", you would inevitably come to my definition, though putting an additional constraint that in any case a forth-conversion should always be possible. So looking at _legal_ Ada you will discover: type A is new B with null record; -- is a subtype, A is the same as B type A is new B with record ...; -- not a subtype, A is not smaller type A is new B with private; -- maybe a subtype, should look in private Do you want such a "rigorous" definition? I don't. This is the way LSP goes. So the notion of LSP subtype depends on the program semantics and thus becomes almost useless. > Well, I must tell you that there are people that have more than one > background, > and mathematics can be just one of them -:) Yes, this phenomenon is not > common, but nevertheless it happens sometimes. 
> >> It is not about >> rectangles, it is about types, i.e. about sets of rectangles in >> general. Now consider this statement: >> >> for all X, Y there is a <rectangle> with height=X and width=Y >> >> For a square to be an LSP-subtype of rectangle means that the above >> will remain true if you change <rectangle> to <square>! > > Incorrect, though. Your example statement is not about all rectangles... > it isn't sufficient just to use the words "for all" somewhere in the > statement -;) The above was just a formal equivalent of: type Rectangle is tagged ...; function Create (Height, Width : Float) return Rectangle; type Square is new Rectangle with null record; function Create (Height, Width : Float) return Square; Now you have overridden Create, but you cannot implement it! So LSP claims that Square is not a subtype of Rectangle. [ There are many ways to mend it to some extent. ] But the lesson to learn is LSP-subtype /= subset, and to the amazement of anybody aware of geometry, consequently, Square is not a Rectangle [ when you are considering a program ]. Nice, isn't it? >> >> >> I want a >> >> >> fundamentally new, universal concept of subtyping which would cover >> >> >> all known cases. >> > >> >Do you mean that this concept is completely original, or you have some >> >references to somehow similar concepts? >> >> The idea is too simple to call it a concept. > > But it may be just a starting, undeveloped idea; and it may appear that it > has unavoidable consequences, which lead to substantial extensions etc., > etc. > >> As for implementations, I know no language which allows this. > > Isn't it strange? A simple idea, allegedly useful, but still unused after 50+ > years of programming. Huh, how about the Ten Commandments? Even simpler ideas and a much longer period of time! -- Regards, Dmitry A. Kazakov www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 44+ messages in thread
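The Rectangle/Square fragment above can be completed into compilable Ada. The field names Height and Width are assumptions, as is the package name Shapes; the body of the overriding Create makes the dilemma concrete: for Height /= Width no Square can honour the rectangle contract, so the overriding can only cheat or fail:

```ada
package Shapes is
   type Rectangle is tagged record
      Height, Width : Float;
   end record;
   function Create (Height, Width : Float) return Rectangle;

   type Square is new Rectangle with null record;
   --  The inherited Create returns Square, so it must be overridden:
   function Create (Height, Width : Float) return Square;
end Shapes;

package body Shapes is
   function Create (Height, Width : Float) return Rectangle is
   begin
      return (Height => Height, Width => Width);
   end Create;

   function Create (Height, Width : Float) return Square is
   begin
      --  The dilemma: no Square exists for Height /= Width.  We can
      --  only reject such calls (or silently ignore a parameter), so
      --  the "for all X, Y" statement is lost for Square.
      if Height /= Width then
         raise Constraint_Error;
      end if;
      return (Rectangle'(Height => Height, Width => Width)
              with null record);
   end Create;
end Shapes;
```

This is the precise sense in which the post says Create cannot be implemented: the profile is inherited and must be overridden, but no body of it can satisfy the rectangle's contract.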
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-06 23:57 ` Alexandre E. Kopilovitch 2003-10-07 8:51 ` Dmitry A. Kazakov @ 2003-10-08 23:18 ` Robert I. Eachus 2003-10-09 21:35 ` Alexandre E. Kopilovitch 1 sibling, 1 reply; 44+ messages in thread From: Robert I. Eachus @ 2003-10-08 23:18 UTC (permalink / raw) Alexandre E. Kopilovitch wrote: > BTW, when you mentioned Cyrillic_String you made me smile grimly. Do you > know that there are three live Cyrillic encodings? Do you know that, for example, > in Windows, the final effect of your Cyrillic encoding depends not only upon > the encoding, but upon the Regional Settings also? And there are plenty of more subtle > issues, which may easily hurt you when you deal with a Cyrillic encoding. So, > don't fancy that your Cyrillic_String will be of much help, especially if you > want to develop a robust product for actual field use. You are just thinking Russian; there are even more Cyrillic character bindings for other Cyrillic languages. When it comes to multiple representations for one language, Japanese is by far the worst! But if you don't see it, try this. In Ada, I can DEFINE a Cyrillic_String type and bind it to one of the variants, add other string types for other variants, then provide for conversions between them. The fact that almost all conversions are explicit makes all this possible. Let me add three types and show you the problem:

type Unbounded_Cyrillic is new Ada.Strings.Unbounded.Unbounded_String;
-- to make sure you don't get confused. Yeah, I know, in real life
-- you should make the derivation private, and provide Cyrillic_String
-- versions of some of the operations in Ada.Strings.Unbounded. Take
-- all that as given.
type Georgian_String is (...);
type Unbounded_Georgian is new Ada.Strings.Unbounded.Unbounded_String;
-- same as above. 
In Ada as it is now, I can say:

Some_String: Unbounded_Cyrillic := To_Unbounded("Македонии");
Other_String: Unbounded_Georgian := To_Unbounded("Македонии");

In each case, there is an implicit conversion from the string_literal "Македонии" to the proper string type, then that type is converted to the proper unbounded type. But if you add additional implicit conversions into the mix, it all falls apart:

Some_String: Unbounded_Cyrillic := "Македонии";

I hope you don't expect the compiler to guess which set of implicit conversions to apply! I am certainly not going to try to list all the possibilities, but for example, there is: "Македонии" to String to Unbounded_String to Cyrillic_String. And yes, in this case, the first conversion would raise Constraint_Error. But I could choose some other example where all the characters were in both (Latin1) String and Cyrillic_String. But I don't have to: "Македонии" to Georgian_String to Unbounded_Georgian to Unbounded_String to Cyrillic_String. Once you introduce new implicit conversions, the compiler is going to have to assume that they may occur anywhere. If the overloading rules result in only one possible match, great. But you will find that right now Ada has about as many implicit conversions as it can have without creating lots of ambiguous situations. And yes, there are situations in Ada currently where you have to qualify expressions to avoid ambiguity. The most useful balance point is where everything can be done, and you don't have to qualify expressions too often. Oh, since I am trying to be fair here, there is one additional implicit conversion that I would love to figure out how to add to the language. (Well, I know how to add it, I just don't think I'll ever get enough interest to make it happen.) That would be to add some pragmas that allowed character, string, or numeric literals for private types. The conversion directly from a character literal to Unbounded_Cyrillic wouldn't break anything. 
It also wouldn't help if you had a Cyrillic_String variable to put in an Unbounded_Cyrillic object. -- Robert I. Eachus "Quality is the Buddha. Quality is scientific reality. Quality is the goal of Art. It remains to work these concepts into a practical, down-to-earth context, and for this there is nothing more practical or down-to-earth than what I have been talking about all along...the repair of an old motorcycle." -- from Zen and the Art of Motorcycle Maintenance by Robert Pirsig ^ permalink raw reply [flat|nested] 44+ messages in thread
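Eachus's point about explicit conversions can be sketched in legal Ada 95. The type names follow his post; "abc" merely stands in for the Cyrillic literal, and the library-level package is an assumption forced by the language (Unbounded_String is privately controlled, so it can only be derived at library level):

```ada
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

package Encodings is
   type Unbounded_Cyrillic is new Unbounded_String;
   type Unbounded_Georgian is new Unbounded_String;

   --  Each derivation implicitly inherits its own copy of
   --  To_Unbounded_String (with the result type changed), so each
   --  declaration below has exactly one applicable conversion and
   --  resolves on the expected type alone -- no path guessing:
   Some_String  : constant Unbounded_Cyrillic := To_Unbounded_String ("abc");
   Other_String : constant Unbounded_Georgian := To_Unbounded_String ("abc");
end Encodings;
```

Because every conversion is named explicitly at the call site, adding a third or fourth encoding type never creates the chained-conversion ambiguity the message describes.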
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-08 23:18 ` Robert I. Eachus @ 2003-10-09 21:35 ` Alexandre E. Kopilovitch 2003-10-10 18:10 ` Robert I. Eachus 0 siblings, 1 reply; 44+ messages in thread From: Alexandre E. Kopilovitch @ 2003-10-09 21:35 UTC (permalink / raw) To: comp.lang.ada Robert I. Eachus" wrote: > > BTW, when you mentioned Cyrillic_String you made me smiling grimly. Do you > > know that there are 3 alive Cyrillic encodings? Do you know that, for example, > > in Windows, the final effect of your Cyrillic encoding depends not only upon > > encoding, but upon Regional Settings also? And there are plenty of more subtle > > issues, which may easily hurt you when you deal with a Cyrillic encoding. So, > > don't fancy that your Cyrillic_String will be of much help, especially if you > > want to develop a robust product for actual field use. > > You are just thinking Russian, Well, if you add Ukrainian, Bulgarian and Serbian, not mentioning Belarussian and a bunch of pseudo-Cyrillic languages from Abkhaz to Kazakh (and I can't even get what is happening with Tatar now: I heard recently that there is even a legal case in Constitutional court about that - Tatars want Latin-based alphabet for their language, but federal authorities insist on Cyrillic one only... if I understood all that properly), the situation probably will not become better -;) > there are even more Cyrillic character bindings for other Cyrillic languages. I see very little potential use for them, particularly in Ada world... other than mining raw intelligence data from newspapers, emails and websites -;) . I can't imagine that those nations will use Ada for their accounting purposes... or even for desktop publishing and for computer games. > When it comes to multiple > representations for one language Japanese is by far the worst! I'm not sure, though. 
Yes, Japanese is quite impressive in this regard; I have seen that in raw reality (my daughter, being a linguist, had some correspondence by e-mail with several Japanese girls, and I was called in for decoding and encoding those emails - well, it took some time and effort). But that is on the surface. When you go deep into real application problems, the situation may change: I know well that there are subtle and unpleasant problems with Russian encodings, and I know nothing about Japanese at that level. > But if > you don't see it, try this. In Ada, I can DEFINE a Cyrillic_String type > and bind it to one of the variants, and add other string types for other > variants, then provide for conversions between them. The fact that > almost all conversions are explicit makes all this possible. Let me add > three types and show you the problem:
>
> type Unbounded_Cyrillic is new Ada.Strings.Unbounded.Unbounded_String;
> -- to make sure you don't get confused. Yeah, I know, in real life
> -- you should make the derivation private, and provide Cyrillic_String
> -- versions of some of the operations in Ada.Strings.Unbounded. Take
> -- all that as given.
> type Georgian_String is (...);
> type Unbounded_Georgian is new Ada.Strings.Unbounded.Unbounded_String;
> -- same as above.
>
> In Ada as it is now, I can say:
>
> Some_String: Unbounded_Cyrillic := To_Unbounded("Македонии");
> Other_String: Unbounded_Georgian := To_Unbounded("Македонии");
>
> In each case, there is an implicit conversion from the string_literal > "Македонии" to the proper string type, then that type is converted to > the proper unbounded type. But if you add additional implicit > conversions into the mix, it all falls apart: Oh, it seems that I see (at last!) what you mean: you assume that conversions between encodings should be implicit! But this is far from desirable in real applications! 
> Some_String: Unbounded_Cyrillic := "Македонии"; > > I hope you don't expect the compiler to guess which set of implicit > conversions to apply! I am certainly not going to try to list all the > possibilities, but for example, there is: "Македонии" to String to > Unbounded_String to Cyrillic_String. And yes, in this case, the first > conversion would raise Constraint_Error. But I could choose some other > example where all the characters were in both (Latin1) String and > Cyrillic_String. But I don't have to: "Македонии" to Georgian_String to > Unbounded_Georgian to Unbounded_String to Cyrillic_String. > > Once you introduce new implicit conversions, the compiler is going to > have to assume that they may occur anywhere. If the overloading rules > result in only one possible match, great. But you will find that right > now Ada has about as many implicit conversions as it can have without > creating lots of ambiguous situations. And yes, there are situations in > Ada currently where you have to qualify expressions to avoid ambiguity. > The most useful balance point is where everything can be done, and you > don't have to qualify expressions too often. I think that now I understand the difference between our views on the issue. I understand perfectly that there should not be two competing kinds of implicit conversions (one between encodings and another between String and Unbounded_String). So we have to choose between them. You assumed that implicit conversions between encodings are more natural and more desirable than implicit conversions between String and Unbounded_String. My firm opinion is exactly the opposite: conversions between encodings should be explicit as a rule, and they all must be done within the "frontier" layer of the application; so, I'm quite sure that while such implicit conversions between encodings may be justified in Visual Basic and sometimes in C++, they are entirely undesirable for Ada (as a standard feature). 
At the same time I see implicit conversions between String and Unbounded_String as very natural and desirable for real applications. I don't know the reasons for your assumption and preference... all I can say is that my preference is certainly influenced by substantial experience with strings in real applications, which often involved dealing with various encodings (although Ada was not among the languages - there were Fortran IV/77, COBOL 66, several assemblers, PL/1, C/C++, Pascal/Delphi) > Oh, since I am trying to be fair here, there is one additional implicit > conversion that I would love to figure out how to add to the language. > (Well, I know how to add it, I just don't think I'll ever get enough > interest to make it happen.) That would be to add some pragmas that > allowed character, string, or numeric literals to private types. The > conversion directly from a character literal to Unbounded_Cyrillic > wouldn't break anything. It also wouldn't help if you had a > Cyrillic_String variable to put in an Unbounded_Cyrillic object. I am not sure that I understand properly what you meant here, but anyway, I can repeat that literals are very significant, and making it possible to have (non-trivial) literals for private types would be a very good thing. For strings (I mean Unbounded_Strings) this is especially important. It is the primary need; full-scale implicit conversions between Strings and Unbounded_Strings are also desirable, but the case of literals is certainly the most important. Alexander Kopilovitch aek@vib.usr.pu.ru Saint-Petersburg Russia ^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-09 21:35 ` Alexandre E. Kopilovitch @ 2003-10-10 18:10 ` Robert I. Eachus 2003-10-11 19:43 ` Alexandre E. Kopilovitch 0 siblings, 1 reply; 44+ messages in thread From: Robert I. Eachus @ 2003-10-10 18:10 UTC (permalink / raw) Alexandre E. Kopilovitch wrote: > I'm not sure, though. Yes, Japanese is quite impressive in this regard, > I have seen that in raw reality (my daughter, being a linguist, had some > correspondence by e-mail with several Japanese girls, and I was called in for > decoding and encoding those emails - well, it took some time and effort). > But that is on the surface. When you go deep into real application problems, > the situation may change: I know well that there are subtle and unpleasant > problems with Russian encodings, and I know nothing about Japanese at that > level. Have you played the Minesweeper game that comes with Windows in expert mode? Working with Japanese text is like that when you have to deal with encodings. They have three alphabets, and almost all words can be written using more than one. But the RIGHT one to use often depends on context. > Oh, it seems that I see (at last!) what you mean: you assume that conversions > between encodings should be implicit! But this is far from desirable in real > applications! > I understand perfectly that there should not be two competing kinds of implicit > conversions (one between encodings and another between String and Unbounded_String). > So we have to choose between them. > > You assumed that implicit conversions between encodings are more natural and > more desirable than implicit conversions between String and Unbounded_String. Agreed, but you miss the problem. Right now you can implicitly convert between string literals and ANY string type. You want to add implicit conversions between String and Unbounded_String, and presumably between Wide_String and Wide_Unbounded_String, and so on. 
Now when you go to do explicit conversions between string types, the presence of two potential implicit conversions makes some simple-seeming conversions hard or impossible to write unambiguously. If you want to go and overload the (existing) explicit conversions on unary "+", everything works just fine. The cases that would be ambiguous with two implicit conversions around are disambiguated by the presence and location of the "+" operators. And as I said, I think adding implicit conversion of string literals to and from Unbounded_String would work. But that isn't the case you want. And of course, adding both that and implicit conversions from String to Unbounded_String would be a disaster. The rules in ARM 4.6, Type Conversions and 8.6, The Context of Overload Resolution are pretty complex just to make the language SEEM simple to use. ;-) -- Robert I. Eachus "Quality is the Buddha. Quality is scientific reality. Quality is the goal of Art. It remains to work these concepts into a practical, down-to-earth context, and for this there is nothing more practical or down-to-earth than what I have been talking about all along...the repair of an old motorcycle." -- from Zen and the Art of Motorcycle Maintenance by Robert Pirsig ^ permalink raw reply [flat|nested] 44+ messages in thread
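The unary "+" idiom described above is legal Ada 95 exactly as proposed; the renamings can be written by hand today. A self-contained sketch (the procedure name is invented):

```ada
with Ada.Text_IO;
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

procedure Plus_Demo is
   --  The two renamings under discussion, declared locally:
   function "+" (Source : in String) return Unbounded_String
     renames To_Unbounded_String;
   function "+" (Source : in Unbounded_String) return String
     renames To_String;

   --  One implicit conversion (literal to String) plus one explicit
   --  "+" (String to Unbounded_String): never ambiguous.
   U_S : constant Unbounded_String := +"literal";
begin
   Ada.Text_IO.Put_Line (+U_S);  --  prints: literal
end Plus_Demo;
```

The "+" carries exactly the disambiguating information the overload resolver needs: its presence and position pin down which direction the conversion goes, which is why the two-implicit-conversions ambiguity cannot arise.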
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-10 18:10 ` Robert I. Eachus @ 2003-10-11 19:43 ` Alexandre E. Kopilovitch 2003-10-12 5:03 ` Robert I. Eachus 0 siblings, 1 reply; 44+ messages in thread From: Alexandre E. Kopilovitch @ 2003-10-11 19:43 UTC (permalink / raw) To: comp.lang.ada Robert I. Eachus wrote: > Have you played the Minesweeper game that comes with Windows in expert > mode? Well, I almost never played computer games... there was a single exception about a decade ago, when I played SimCity for half an hour. But I developed several computer games for a living, so I am not completely out of contact with that world -;) . I have just glanced at this Minesweeper game, trying to catch what you meant. > Working with Japanese text is like that when you have to deal > with encodings. They have three alphabets, and almost all words can be > written using more than one. But the RIGHT one to use often depends on > context. I consulted my home linguist on this matter, and she confirmed that there are exactly three alphabets in Japanese: Hiragana, Katakana and a set of hieroglyphs. Then she went into the usage details, but unfortunately she knows nothing about computer encodings. Anyway, she explicitly rejected the possibility of context-dependent recognition of Japanese characters in writing or in print. So, I still don't understand those specific problems you mentioned: do you mean that not only do the encodings for these three alphabets overlap, but also that no special "switching" characters are used? It seems highly unlikely to me - I can hardly believe that the Japanese really use some non-trivial context-dependent encoding scheme instead of simple switching of alphabets (by special characters) or using non-overlapped encodings for these alphabets. And excuse me, but I will not believe in that until a native Japanese programmer confirms it... and explains the reasons, however briefly. 
(Those problems with Japanese encodings that I met in emails were of an entirely different kind; they had nothing in common with context-dependent encodings.) > > Oh, it seems that I see (at last!) what you mean: you assume that conversions > > between encodings should be implicit! But this is far from desirable in real > > applications! > > > > I understand perfectly that there should not be two competing kinds of implicit > > conversions (one between encodings and another between String and Unbounded_String). > > So we have to choose between them. > > > You assumed that implicit conversions between encodings are more natural and > more desirable than implicit conversions between String and Unbounded_String. > > Agreed, but you miss the problem. Right now you can implicitly convert > between string literals and ANY string type. Do you mean that, for example, an initialization S : String := "literal"; includes an implicit conversion? That is, a string literal itself belongs to some type other than String? And I didn't catch what you mean by "ANY string type" - surely that "ANY" can't include the Unbounded_String type, as in U_S : Unbounded_String := "literal"; -- illegal (but I want it to be legal) > You want to add implicit > conversions between String and Unbounded_String, and presumably between > Wide_String and Wide_Unbounded_String, and so on. Yes. But I'm ready to accept some restrictions... and the minimum which I want is implicit conversions from string literals to Unbounded_String and Bounded_String. Additionally, it would be very good to permit implicit conversions from non-literal strings to other string types (I mean String to Unbounded_String and vice versa etc.) in initializations. Regarding full-scale implicit conversions (in assignments and expressions), I'm not sure... perhaps the associated problems outweigh the gains there indeed, and in fact the gains in those cases are more doubtful. 
> Now when you go to do explicit conversions between string types, the > presence of two potential implicit conversions makes some simple-seeming > conversions hard or impossible to write unambiguously. If you want to > go and overload the (existing) explicit conversions on unary "+" > everything works just fine. The cases that would be ambiguous with two > implicit conversions around are disambiguated by the presence and > location of the "+" operators. I think I understand this problem of ambiguity. For example, in

S : String := ...
U_S : Unbounded_String := ...
...
U_S := Translate(S, ...);

is going to be ambiguous if we have Translate both from String to String and from Unbounded_String to Unbounded_String -- there appears the diagram:

            conversion
   S ----------------> Temp_1
   |                     |
   |                     |
Translate            Translate
   |                     |
   v      conversion     v
 Temp_2 ---------------> U_S

with two possible paths - via Temp_1:Unbounded_String or via Temp_2:String . But what I want to stress is that this diagram is always commutative, that is, all possible paths from the source to the destination will always lead to the same result. The crucial point is that this may be rigorously proven, so we can safely pick either path. I understand that the value of statements about rigorous proofs is generally doubtful as it pertains to real programming, and in particular, to compilers. But fortunately in Ada we have a professional team specializing in this area - I mean the SPARK people, and I hope that they may provide sufficient support for this method of disambiguation, if needed. > And as I said, I think adding implicit conversion of string literals to > and from Unbounded_String would work. But that isn't the case you want. Actually this is exactly the minimum which I want, as I just explained above. > And of course, adding both that and implicit conversions from String to > Unbounded_String would be a disaster. I'm ready to admit that in the general case, which includes assignments and expressions. 
In fact, the requirement of explicit conversions in assignments and expressions not only brings some inconvenience, but also acts as a kind of warning which sometimes may be useful, so there is much less desire for implicit conversions in those cases. Probably there is only one common case where those implicit conversions (in an expression) are definitely desired:

S1 : String := ...
S2 : String := ...
U_S : Unbounded_String := ...
...
Text_IO.Put(S1 & U_S & S2);

> The rules in ARM 4.6, Type Conversions and 8.6 The Context of Overload > Resolution are pretty complex just to make the language SEEM simple to > use. ;-) I believe that (I can't say that I *understand* that, though -;) . Alexander Kopilovitch aek@vib.usr.pu.ru Saint-Petersburg Russia ^ permalink raw reply [flat|nested] 44+ messages in thread
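For what it is worth, the concatenation case in the last example largely works in Ada 95 already: Ada.Strings.Unbounded predefines "&" for every String/Unbounded_String pairing, so the mixed expression resolves without any new implicit conversions; only the final Put still needs an explicit To_String. A sketch with invented values:

```ada
with Ada.Text_IO;
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

procedure Concat_Demo is
   S1  : constant String := "Hello, ";
   S2  : constant String := "!";
   U_S : constant Unbounded_String := To_Unbounded_String ("world");
begin
   --  "&" (String, Unbounded_String) and "&" (Unbounded_String, String)
   --  are predefined in Ada.Strings.Unbounded, so S1 & U_S & S2 is legal
   --  as written and yields an Unbounded_String; only Text_IO.Put_Line
   --  still wants a plain String.
   Ada.Text_IO.Put_Line (To_String (S1 & U_S & S2));  --  Hello, world!
end Concat_Demo;
```

So the genuinely missing piece is not mixed concatenation but the literal-initialization case (`U_S : Unbounded_String := "literal";`) that the whole thread revolves around.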
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-11 19:43 ` Alexandre E. Kopilovitch @ 2003-10-12 5:03 ` Robert I. Eachus 2003-10-13 9:07 ` Dmitry A. Kazakov ` (2 more replies) 0 siblings, 3 replies; 44+ messages in thread From: Robert I. Eachus @ 2003-10-12 5:03 UTC (permalink / raw) Alexandre E. Kopilovitch wrote: > I consulted my home linguist on this matter, and she confirmed that there are > exactly three alphabets in Japanese: Hiragana, Katakana and a set of hieroglyphs. > Then she went into the usage details, but unfortunately she knows nothing about > computer encodings. Anyway, she explicitly rejected the possibility of > context-dependent recognition of Japanese characters in writing or in print. No, the problem is not that the meanings of the characters are context dependent, although in several cases there are over a hundred Kanji that match a single Katakana or Hiragana syllable. (Katakana and Hiragana are phonetic, and there are many more Kanji than phonetically different syllables.) But that is not the problem. In a single text you may see the same word spelled in all three alphabets, or in a mixture of, say, Hiragana and Kanji. The alphabet chosen to write the word adds or confirms contextual information. The Hiragana alphabet was originally designed for use by women, and is therefore often used to add a feminine implication to a word. The same goes for Katakana and foreign words or things. So you can spell "trousers" in three different alphabets to mean men's pants, women's slacks, and blue jeans. And of course, using the "wrong" spelling in some contexts can be an intentional nasty insult. That is the Minesweeper problem. >>Agreed, but you miss the problem. Right now you can implicitly convert >>between string literals and ANY string type. > > > Do you mean that, for example, an initialization > > S : String := "literal"; > > includes implicit conversion? 
That is, a string literal itself belongs to some > type other than String? That is correct. > > And I didn't catch what you mean by "ANY string type" - surely that "ANY" > can't include Unbounded_String type, as in In Ada a character type is any enumeration type where one or more of the enumeration values is a character literal. A string type is a one-dimensional array of characters. (The index type need not be Integer, or even an integer type.) > > U_S : Unbounded_String := "literal"; -- illegal (but I want it to be legal) > String literals are a universal type that can be implicitly converted to any string type. As I said it would be possible to make this case legal by making Unbounded_String (and presumably similar types) string types. But that would work against what you really want, since now, if you also allow

Foo: String := "foo";
O_S : Unbounded_String := Foo;       -- implicit conversion
U_S : Unbounded_String := "literal"; -- Can't work now.
                          ^ ambiguous could be:
function ""(L: string_literal) return Unbounded_String;
or
function ""(L: string_literal) return String; followed by
function ""(L: String) return Unbounded_String;

But as I said, if you overload unary "+" with the conversion from String to Unbounded_String (and probably vice-versa), then everything works. You write U_S : Unbounded_String := +"literal"; and it all works, you get one implicit conversion (to String) and one explicit conversion (from String to Unbounded_String). How many years of those little plus signs do I need to match all the verbiage we have exchanged on this subject? Now if you want to recommend that in Ada 200X, package Ada.Strings.Unbounded include:

function "+" (Source : in String) return Unbounded_String
  renames To_Unbounded_String;
function "+" (Source : in Unbounded_String) return String
  renames To_String;

I will certainly support that. 
I don't really know why they were left out of Ada.Strings.Unbounded while these were included:

function To_Unbounded_String (Length : in Natural) return Unbounded_String;

function "*" (Left : in Natural; Right : in String)
  return Unbounded_String;

function "*" (Left : in Natural; Right : in Unbounded_String)
  return Unbounded_String;

Although that does mean you can write our canonical example as:

U_S : Unbounded_String := 1 * "literal";

> I think I understand this problem of ambiguity. For example, in
>
> S : String := ...
> U_S : Unbounded_String := ...
> ...
> U_S := Translate(S, ...);
>
> is going to be ambiguous if we have Translate both from String to String and
> from Unbounded_String to Unbounded_String -- there appears the diagram:
>
>             conversion
>    S ----------------> Temp_1
>    |                     |
>    |                     |
> Translate            Translate
>    |                     |
>    v      conversion     v
>  Temp_2 ---------------> U_S
>
> with two possible paths - via Temp_1:Unbounded_String or via Temp_2:String . > But what I want to stress is that this diagram is always commutative, that is, > all possible paths from the source to the destination will always lead to the > same result. The crucial point is that this may be rigorously proven, so we > can safely pick either path. Would be nice if it were true, but remember that you can, if you feel like it, override one of the Translate functions with a different meaning. -- Robert I. Eachus "Quality is the Buddha. Quality is scientific reality. Quality is the goal of Art. It remains to work these concepts into a practical, down-to-earth context, and for this there is nothing more practical or down-to-earth than what I have been talking about all along...the repair of an old motorcycle." -- from Zen and the Art of Motorcycle Maintenance by Robert Pirsig ^ permalink raw reply [flat|nested] 44+ messages in thread
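The closing "1 *" trick compiles with the declarations already in Ada.Strings.Unbounded; a sketch (the procedure name is invented):

```ada
with Ada.Text_IO;
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

procedure Star_Demo is
   --  "*" (Left : Natural; Right : String) return Unbounded_String is
   --  predefined, and the literal resolves to String as the right
   --  operand, so this is legal Ada 95 as written:
   U_S : constant Unbounded_String := 1 * "literal";
begin
   Ada.Text_IO.Put_Line (To_String (U_S));  --  prints: literal
end Star_Demo;
```

It is, of course, exactly as much of a conversion marker as the proposed unary "+"; the "1 *" merely spells the marker with characters the package already exports.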
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-12 5:03 ` Robert I. Eachus @ 2003-10-13 9:07 ` Dmitry A. Kazakov 2003-10-13 14:36 ` Alexandre E. Kopilovitch 2003-10-17 20:26 ` Randy Brukardt 2 siblings, 0 replies; 44+ messages in thread From: Dmitry A. Kazakov @ 2003-10-13 9:07 UTC (permalink / raw) On Sun, 12 Oct 2003 05:03:09 GMT, "Robert I. Eachus" <rieachus@comcast.net> wrote: >String literals are a universal type that can be implicitly converted to >any string type. As I said it would be possible to make this case legal >by making Unbounded_String (and presumably similar types) string types. > But that would work against what you really want, since now, if you >also allow > > Foo: String := "foo"; > O_S : Unbounded_String := Foo; --implicit conversion > U_S : Unbounded_String := "literal"; -- Can't work now. > ^ ambiguous could be: > function ""(L: string_literal) return Unbounded_String; >or > function ""(L: string_literal) return String; followed by > function ""(L: String) return Unbounded_String; and function ""(L: string_literal) return String; followed by function ""(L: String) return Unbounded_String; followed by function ""(L: Unbounded_String) return String; followed by function ""(L: String) return Unbounded_String; followed by function ""(L: Unbounded_String) return String; followed by ... Clearly domination rules a la C++ are required to disambiguate type conversions. Moreover one will probably need some pragmas to prioritize conversions. For instance, to ensure that for temp results the compiler would prefer String over Unbounded_String. It is not easy, though nobody claimed that it is! (:-)) --- Regards, Dmitry Kazakov www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) 2003-10-12 5:03 ` Robert I. Eachus 2003-10-13 9:07 ` Dmitry A. Kazakov @ 2003-10-13 14:36 ` Alexandre E. Kopilovitch 2003-10-13 19:46 ` Robert I. Eachus 2003-10-17 20:26 ` Randy Brukardt 2 siblings, 1 reply; 44+ messages in thread From: Alexandre E. Kopilovitch @ 2003-10-13 14:36 UTC (permalink / raw) To: comp.lang.ada Robert I. Eachus wrote: > In a single text you may see > the same word spelled in all three alphabets, or in a mixture of say > Hiragana and Kanji. The alphabet chosen to write the word adds or > confirms contextual information. The Hiragana alphabet was originally > designed for use by women, and is therefore often used to add a feminine > implication to a word. The same goes for Katakana and foreign words or > things. So you can spell "trousers" in three different alphabets to > mean men's pants, women's slacks, and blue jeans. I consulted my home linguist once more, and this time the discussion became slightly heated: although she confirmed the genesis of Hiragana and Katakana, she had big trouble understanding your final sentence in the paragraph. After some discussion it appeared that she had two disagreements with your thesis and example: first, she insisted that the thesis is far from generally applicable, and actually may be true for borderline cases only; second, that your particular example isn't good, at least for the current state of Japanese: if you want to write "trousers", you may write just that, not specializing a particular kind (the word taken from French, naturally using Katakana), and what is most important, all those words are different by themselves, without regard to the particular notation. > And of course, using the "wrong" spelling in some contexts can be an > intentional nasty insult. That is the Minesweeper problem. Well, this is somewhat true for perhaps every language. 
For example, there is a classical "mine" for a foreigner trying to speak Russian: just shift the stress in the word from the second syllable to the first one, and you instantly convert "to write" into "to urinate" (or, adding a short prefix, "to describe" into "to urinate on [something]"); and note that stress is almost never shown in regular Russian written or printed texts. > > > > U_S : Unbounded_String := "literal"; -- illegal (but I want it to be legal) > > > String literals are a universal type that can be implicitly converted to > any string type. As I said it would be possible to make this case legal > by making Unbounded_String (and presumably similar types) string types. So the minimum which I want can be achieved (without much effort, if I understand you properly). > But that would work against what you really want, since now, if you > also allow
>
> Foo: String := "foo";
> O_S : Unbounded_String := Foo;       -- implicit conversion
> U_S : Unbounded_String := "literal"; -- Can't work now.
>                           ^ ambiguous could be:
> function ""(L: string_literal) return Unbounded_String;
> or
> function ""(L: string_literal) return String; followed by
> function ""(L: String) return Unbounded_String;
Implicit conversions for literals are the most important case, both practically and ideologically. So, if the choice is between "implicit conversions for literals only" and "no implicit conversions at all, as it is now" then I definitely choose the first option. > But as I said, if you overload unary "+" with the conversion from String > to Unbounded_String (and probably vice-versa), then everything works. > You write > U_S : Unbounded_String := +"literal"; > and it all works, you get one implicit conversion (to String) and one > explicit conversion (from String to Unbounded_String). > > How many years of those little plus signs do I need to match all the > verbiage we have exchanged on this subject? 
These little plus signs constantly remind a programmer that String and
Unbounded_String are different types, which is often an inadequate view.
(And it is almost always an inadequate view for literals.)

> Now if you want to recommend that in Ada 200X, package
> Ada.Strings.Unbounded include:
>
>    function "+" (Source : in String) return Unbounded_String
>       renames To_Unbounded_String;
>    function "+" (Source : in Unbounded_String) return String
>       renames To_String;

I can tolerate the first of them, but I definitely dislike the second one -
"+" here is certainly a bad name (application programmers, unlike compiler
writers, will not associate this "+" with "additional conversion"). I recall
that there was a discussion in Ada-Comment on this issue (I think in 2002)
and the name "@" was proposed (perhaps by Robert Dewar, but I may be mistaken
in that) for those conversions, or for some broader purpose, I don't remember
exactly. I think that if both of the above conversions are to have the same
name then "@" is much better than "+" for them.

> I will certainly support that. I don't really know why they were left
> out of Ada.Strings.Unbounded while
>
>    function To_Unbounded_String (Length : in Natural)
>       return Unbounded_String;
>
>    function "*" (Left  : in Natural;
>                  Right : in String)
>       return Unbounded_String;
>
>    function "*" (Left  : in Natural;
>                  Right : in Unbounded_String)
>       return Unbounded_String;
>
> Although that does mean you can write our canonical example as:
>
>    U_S : Unbounded_String := 1 * "literal";

Yes, sometimes it is even slightly better than "+". Perhaps it was precisely
because of the presence of "1 *" that "+" was left out.

> > I think I understand this problem of ambiguity. For example, in
> >
> >    S   : String := ...
> >    U_S : Unbounded_String := ...
> >    ...
> > U_S := Translate(S, ...);
> >
> > is going to be ambiguous if we have Translate both from String to String
> > and from Unbounded_String to Unbounded_String -- there appears the
> > diagram:
> >
> >              conversion
> >     S -----------------> Temp_1
> >     |                      |
> >     |                      |
> >     | Translate            | Translate
> >     |                      |
> >     v       conversion     v
> >  Temp_2 ---------------> U_S
> >
> > with two possible paths - via Temp_1 : Unbounded_String or via
> > Temp_2 : String. But what I want to stress is that this diagram is
> > always commutative, that is, all possible paths from the source to the
> > destination will always lead to the same result. The crucial point is
> > that this may be rigorously proven, so we can safely pick either path.
>
> Would be nice if it were true, but remember that you can, if you feel like
> it, override one of the Translate functions with a different meaning.

Actually no, because one can override it for a derived type only, but there
is no relation for a derived type that can guarantee commutativity of the
diagram. In other words, a type derived by extension ("with" for tagged
types) does not inherit relations of this kind. Therefore, if you derive some
type, say, Decorated_Unbounded_String from Unbounded_String, there will be no
implicit conversion between String and Decorated_Unbounded_String, unless you
re-establish the appropriate relation between them, with all the associated
verification.

Alexander Kopilovitch      aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-13 14:36 ` Alexandre E. Kopilovitch
@ 2003-10-13 19:46 ` Robert I. Eachus
  2003-10-14  1:35 ` Jeffrey Carter
  2003-10-14 17:11 ` Alexandre E. Kopilovitch
  0 siblings, 2 replies; 44+ messages in thread
From: Robert I. Eachus @ 2003-10-13 19:46 UTC (permalink / raw)

Alexandre E. Kopilovitch wrote:

> I consulted my home linguist once more, and this time the discussion
> became slightly heated: although she confirmed the genesis of Hiragana
> and Katakana, she had great trouble understanding your final sentence in
> the paragraph. After some discussion it emerged that she had two
> disagreements with your thesis and example: first, she insisted that the
> thesis is far from generally applicable, and may actually be true only
> for borderline cases; second, that your particular example isn't good, at
> least for the current state of Japanese: if you want to write "trousers",
> you may write just that, without specializing a particular kind (the word
> is taken from French, naturally using Katakana), and, most importantly,
> all those words are different by themselves, regardless of the particular
> notation.

Shrug; she is taking the point of view that a native Japanese speaker would:
that you just know these things. Today the Katakana version is never wrong,
but thirty or forty years ago it might get you in trouble speaking with/of a
man, and definitely when speaking of what a woman was wearing--unless she
was wearing blue jeans, and I guess then that was scandal enough! ;-)

But speaking as an external observer who has watched as the Japanese
language adapted to new technology and phrases, there are no rules that can
be reliably used in advance to determine which spelling or spellings will
prevail. As your friend says, the feminine implication of Hiragana and the
foreign connotation of Katakana provide some guidance, but they are
definitely not hard and fast rules.
> Well, this is true, in one way or another, for perhaps every language.
> For example, there is a classical "mine" for a foreigner trying to speak
> Russian: just shift the stress in the word from the second syllable to
> the first one, and you instantly convert "to write" into "to urinate"
> (or, adding a short prefix, "to describe" into "to urinate on
> [something]"); and note that stress is almost never shown in regular
> Russian written or printed texts.

Of course; my point was that most of the unexploded bombs in Japanese tend
to be in written Japanese rather than in the spoken language.

> Implicit conversions for literals are the most important case, both
> practically and ideologically. So, if the choice is between "implicit
> conversions for literals only" and "no implicit conversions at all, as it
> is now", then I definitely choose the first option.

Go ahead and advocate it; I certainly won't vote against it. The tricky part
would be proposing the language notation for binding a set of literals to a
private type.

> I recall that there was a discussion in Ada-Comment on this issue (I
> think in 2002) and the name "@" was proposed (perhaps by Robert Dewar,
> but I may be mistaken in that) for those conversions, or for some broader
> purpose, I don't remember exactly. I think that if both of the above
> conversions are to have the same name then "@" is much better than "+"
> for them.

Yes, another proposal in this area that has never made it into the language.
It would not be hard for implementors to define a few "extra" unary and
binary operators that programmers could define for specific types.

> Yes, sometimes it is even slightly better than "+". Perhaps it was
> precisely because of the presence of "1 *" that "+" was left out.

I have never used the "1 *" operation, preferring to define/overload unary
"+". But I have occasionally used, say, 80 * " " for initial values.
> Actually no, because one can override it for a derived type only, but
> there is no relation for a derived type that can guarantee commutativity
> of the diagram. In other words, a type derived by extension ("with" for
> tagged types) does not inherit relations of this kind. Therefore, if you
> derive some type, say, Decorated_Unbounded_String from Unbounded_String,
> there will be no implicit conversion between String and
> Decorated_Unbounded_String, unless you re-establish the appropriate
> relation between them, with all the associated verification.

No, the operations are defined. In some tagged-type cases they become
abstract and you have to override them, but that doesn't mean they aren't
there.

--
Robert I. Eachus

"Quality is the Buddha. Quality is scientific reality. Quality is the goal
of Art. It remains to work these concepts into a practical, down-to-earth
context, and for this there is nothing more practical or down-to-earth than
what I have been talking about all along...the repair of an old motorcycle."
-- from Zen and the Art of Motorcycle Maintenance by Robert Pirsig

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-13 19:46 ` Robert I. Eachus
@ 2003-10-14  1:35 ` Jeffrey Carter
  2003-10-14 17:11 ` Alexandre E. Kopilovitch
  1 sibling, 0 replies; 44+ messages in thread
From: Jeffrey Carter @ 2003-10-14 1:35 UTC (permalink / raw)

Robert I. Eachus wrote:
> Alexandre E. Kopilovitch wrote:
>> I recall that there was a discussion in Ada-Comment on this issue (I
>> think in 2002) and the name "@" was proposed (perhaps by Robert Dewar,
>> but I may be mistaken in that) for those conversions, or for some
>> broader purpose, I don't remember exactly. I think that if both of the
>> above conversions are to have the same name then "@" is much better than
>> "+" for them.
>
> Yes, another proposal in this area that has never made it into the
> language. It would not be hard for implementors to define a few "extra"
> unary and binary operators that programmers could define for specific
> types.

There was a proposal to use the "pillow" character for this purpose that
wasn't adopted. This was good, because the pillow became the Euro symbol.

Personally, if we're going to have such an operator, I like the idea of "\".
This already has the interpretation that what follows should be interpreted
differently in many UNIX programs and in C, and it's on most people's
keyboards. It might also trick C people into thinking Ada looks familiar.

--
Jeff Carter
"When danger reared its ugly head, he bravely turned his tail and fled."
Monty Python and the Holy Grail
60

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-13 19:46 ` Robert I. Eachus
  2003-10-14  1:35 ` Jeffrey Carter
@ 2003-10-14 17:11 ` Alexandre E. Kopilovitch
  2003-10-14 20:26 ` Mark A. Biggar
  1 sibling, 1 reply; 44+ messages in thread
From: Alexandre E. Kopilovitch @ 2003-10-14 17:11 UTC (permalink / raw)
To: comp.lang.ada

Robert I. Eachus wrote:

> > Implicit conversions for literals are the most important case, both
> > practically and ideologically. So, if the choice is between "implicit
> > conversions for literals only" and "no implicit conversions at all, as
> > it is now", then I definitely choose the first option.
>
> Go ahead and advocate it,

I suppose you mean proposing that in Ada-Comment, right? (I don't know of
any other place for that, as the comp.lang.ada stage has already passed.)

> The tricky part would be proposing the language notation for binding a
> set of literals to a private type.

Thank you for this hint (I really needed it, and the word "binding" there
seemed somehow important to me). What do you think about a straightforward
solution (for that language notation) which relies upon a new attribute,
say, Literal_Conversion? I mean that in the Ada.Strings.Unbounded package we
could include:

   for Unbounded_String'Literal_Conversion use To_Unbounded_String;

(I understand that we could say the same thing using a new pragma
Literal_Conversion:

   pragma Literal_Conversion (Unbounded_String, To_Unbounded_String);

but I think that an attribute is somehow better here.)

Then there is a problem for which I need advice: should this new feature
(implicit conversion of literals to private types) be restricted to the
types defined in the language standard and types added by an implementation?
Or should it be made available to all user-defined types without any
restrictions? I tend to think there should be some restrictions, but right
now I have neither good criteria for that nor an appropriate mechanism for
enforcement.
Also, I am in doubt about overriding Literal_Conversion for derived types -
whether this opportunity is desirable or not... and how that overriding (for
attributes) may be controlled (attributes are such a mystical thing; I never
saw any general explanation of the nature of, and rules for, attributes -
neither in the ARM, nor in Barnes's book, nor in Cohen's book).

Alexander Kopilovitch      aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-14 17:11 ` Alexandre E. Kopilovitch
@ 2003-10-14 20:26 ` Mark A. Biggar
  2003-10-14 20:58 ` Robert I. Eachus
  2003-10-15 16:59 ` Alexandre E. Kopilovitch
  0 siblings, 2 replies; 44+ messages in thread
From: Mark A. Biggar @ 2003-10-14 20:26 UTC (permalink / raw)

Alexandre E. Kopilovitch wrote:
> Robert I. Eachus wrote:
>
>>> Implicit conversions for literals are the most important case, both
>>> practically and ideologically. So, if the choice is between "implicit
>>> conversions for literals only" and "no implicit conversions at all, as
>>> it is now", then I definitely choose the first option.
>>
>> Go ahead and advocate it,
>
> I suppose you mean proposing that in Ada-Comment, right? (I don't know of
> any other place for that, as the comp.lang.ada stage has already passed.)
>
>> The tricky part would be proposing the language notation for binding a
>> set of literals to a private type.
>
> Thank you for this hint (I really needed it, and the word "binding" there
> seemed somehow important to me). What do you think about a straightforward
> solution (for that language notation) which relies upon a new attribute,
> say, Literal_Conversion? I mean that in the Ada.Strings.Unbounded package
> we could include:
>
>    for Unbounded_String'Literal_Conversion use To_Unbounded_String;
>
> (I understand that we could say the same thing using a new pragma
> Literal_Conversion:
>
>    pragma Literal_Conversion (Unbounded_String, To_Unbounded_String);
>
> but I think that an attribute is somehow better here.)
>
> Then there is a problem for which I need advice: should this new feature
> (implicit conversion of literals to private types) be restricted to the
> types defined in the language standard and types added by an
> implementation? Or should it be made available to all user-defined types
> without any restrictions? I tend to think there should be some
> restrictions, but right now I have neither good criteria for that nor an
> appropriate mechanism for enforcement. Also, I am in doubt about
> overriding Literal_Conversion for derived types - whether this
> opportunity is desirable or not... and how that overriding (for
> attributes) may be controlled (attributes are such a mystical thing; I
> never saw any general explanation of the nature of, and rules for,
> attributes - neither in the ARM, nor in Barnes's book, nor in Cohen's
> book).

With either of the above suggestions you would probably also need to specify
the type of literal allowed. This would allow multiple different types of
literals to be used.

--
mark@biggar.org
mark.a.biggar@comcast.net

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-14 20:26 ` Mark A. Biggar
@ 2003-10-14 20:58 ` Robert I. Eachus
  2003-10-15 16:59 ` Alexandre E. Kopilovitch
  1 sibling, 0 replies; 44+ messages in thread
From: Robert I. Eachus @ 2003-10-14 20:58 UTC (permalink / raw)

Mark A. Biggar wrote:

> With either of the above suggestions you would probably also need to
> specify the type of literal allowed. This would allow multiple different
> types of literals to be used.

Yep, there are types where you want to add numeric literals, and other types
where you want character or string literals. But I think Alexander
Kopilovitch does have a good idea on how to do it. Bind to a named function
that returns the type, and allow that function to raise Constraint_Error.
There may be cases, like an arbitrary-precision numeric type, where you
CAN'T support all possible literals but you can try. This of course argues
for a numeric literal conversion function that takes a string and is
actually run at compile time. But that is a detail.

--
Robert I. Eachus

"Quality is the Buddha. Quality is scientific reality. Quality is the goal
of Art. It remains to work these concepts into a practical, down-to-earth
context, and for this there is nothing more practical or down-to-earth than
what I have been talking about all along...the repair of an old motorcycle."
-- from Zen and the Art of Motorcycle Maintenance by Robert Pirsig

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-14 20:26 ` Mark A. Biggar
  2003-10-14 20:58 ` Robert I. Eachus
@ 2003-10-15 16:59 ` Alexandre E. Kopilovitch
  2003-10-15 20:38 ` (see below)
  2003-10-16  8:01 ` Dmitry A. Kazakov
  1 sibling, 2 replies; 44+ messages in thread
From: Alexandre E. Kopilovitch @ 2003-10-15 16:59 UTC (permalink / raw)
To: comp.lang.ada

Mark A. Biggar wrote:

> >    for Unbounded_String'Literal_Conversion use To_Unbounded_String;
> >
> > (I understand that we could say the same thing using a new pragma
> > Literal_Conversion:
> >
> >    pragma Literal_Conversion (Unbounded_String, To_Unbounded_String);
> >
> > but I think that an attribute is somehow better here.)
>
> With either of the above suggestions you would probably also need to
> specify the type of literal allowed. This would allow multiple different
> types of literals to be used.

I don't know whether it makes sense to have multiple types of literals for
some private type. Are there appropriate examples? But anyway, I see no
problems with that in the proposed notation: you just have to provide
appropriate conversion functions with the same name (similarly to usual
overloading):

   type Flex is private;
   for Flex'Literal_Conversion use To_Flex;

   function To_Flex (Source : String) return Flex;
   function To_Flex (Source : Integer) return Flex;

The compiler will use one of those conversion functions depending on the
type of the literal it encounters. And I don't think any other way of
conveying this information to the compiler could be better.

My concern is quite the opposite: I'd like to have a means of blocking such
overloadings (thus denying the opportunity for multiple literal types for a
given type), including those that may emerge when a new type is derived;
I don't know whether it is possible to control that overloading, and it
worries me slightly. Although I don't count all that multiplicity of literal
types as too significant a matter.
Alexander Kopilovitch      aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-15 16:59 ` Alexandre E. Kopilovitch
@ 2003-10-15 20:38 ` (see below)
  2003-10-16  0:31 ` Alexandre E. Kopilovitch
  0 siblings, 1 reply; 44+ messages in thread
From: (see below) @ 2003-10-15 20:38 UTC (permalink / raw)

On 15/10/03 17:59, in article
mailman.91.1066237341.25614.comp.lang.ada@ada-france.org,
"Alexandre E. Kopilovitch" <aek@vib.usr.pu.ru> wrote:

>    type Flex is private;
>    for Flex'Literal_Conversion use To_Flex;
...
> My concern is quite the opposite: I'd like to have a means of blocking
> such overloadings (thus denying the opportunity for multiple literal
> types for a given type),

   for Flex'Literal_Conversion use <>;  -- ??

> including those that may emerge when a new type is derived;

   for Flex'Class'Literal_Conversion use <>;  -- ??

--
Bill

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-15 20:38 ` (see below)
@ 2003-10-16  0:31 ` Alexandre E. Kopilovitch
  2003-10-16  2:30 ` (see below)
  0 siblings, 1 reply; 44+ messages in thread
From: Alexandre E. Kopilovitch @ 2003-10-16 0:31 UTC (permalink / raw)
To: comp.lang.ada

"(see below)" <yaldnifb@blueyonder.co.uk> wrote:

> >    type Flex is private;
> >    for Flex'Literal_Conversion use To_Flex;
> ...
> > My concern is quite the opposite: I'd like to have a means of blocking
> > such overloadings (thus denying the opportunity for multiple literal
> > types for a given type),
>
>    for Flex'Literal_Conversion use <>;  -- ??

Well, I don't see this form as informative, but I must confess that here I'm
guilty myself, because this part of my concern was probably a sort of
self-inflicted FUD. But

> > including those that may emerge when a new type is derived;
>
>    for Flex'Class'Literal_Conversion use <>;  -- ??

this part of the concern is real, and the idea about 'Class seems
interesting and perhaps good enough. I think that the angle brackets are
actually not needed here; we can simply write:

   for Flex'Class'Literal_Conversion use To_Flex;

and this will mean that for all types belonging to Flex'Class two
restrictions are in effect:

   1) To_Flex functions declared in other packages cannot be used for
      conversions of literals;
   2) the Literal_Conversion attribute cannot be (re)defined in other
      packages.

So, there will be a clear difference between

   for Flex'Literal_Conversion use To_Flex;

and

   for Flex'Class'Literal_Conversion use To_Flex;

Alexander Kopilovitch      aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-16  0:31 ` Alexandre E. Kopilovitch
@ 2003-10-16  2:30 ` (see below)
  2003-10-16 13:54 ` Alexandre E. Kopilovitch
  0 siblings, 1 reply; 44+ messages in thread
From: (see below) @ 2003-10-16 2:30 UTC (permalink / raw)

On 16/10/03 01:31, in article
mailman.93.1066263890.25614.comp.lang.ada@ada-france.org,
"Alexandre E. Kopilovitch" <aek@vib.usr.pu.ru> wrote:

> this part of the concern is real, and the idea about 'Class seems
> interesting and perhaps good enough. I think that the angle brackets are
> actually not needed here; we can simply write:
>
>    for Flex'Class'Literal_Conversion use To_Flex;

I was not clear enough. I intended the "<>" to mean that only the predefined
literal conversion was to be used. Perhaps that is not necessary?

--
Bill

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-16  2:30 ` (see below)
@ 2003-10-16 13:54 ` Alexandre E. Kopilovitch
  2003-10-16 14:11 ` (see below)
  0 siblings, 1 reply; 44+ messages in thread
From: Alexandre E. Kopilovitch @ 2003-10-16 13:54 UTC (permalink / raw)
To: comp.lang.ada

"(see below)" <yaldnifb@blueyonder.co.uk> wrote:

> > this part of the concern is real, and the idea about 'Class seems
> > interesting and perhaps good enough. I think that the angle brackets
> > are actually not needed here; we can simply write:
> >
> >    for Flex'Class'Literal_Conversion use To_Flex;
>
> I was not clear enough. I intended the "<>" to mean that only the
> predefined literal conversion was to be used. Perhaps that is not
> necessary?

It is a desirable option; I just didn't catch it myself. But I doubt that
"<>" is good here: is there a place in Ada where "<>" carries the sense of
"predefined"? Actually, for that case we may write:

   for Flex'Class'Literal_Conversion use Flex'Literal_Conversion;

It will certainly carry the intended sense, but it is quite long and
somewhat indirect, thus looking like one more idiom. So, I'd prefer your
suggestion if "<>" is already associated with "predefined", but if not (and
if there is no other conventional symbol for "predefined") then I'd stick to
the latter (long) notation.

Alexander Kopilovitch      aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-16 13:54 ` Alexandre E. Kopilovitch
@ 2003-10-16 14:11 ` (see below)
  0 siblings, 0 replies; 44+ messages in thread
From: (see below) @ 2003-10-16 14:11 UTC (permalink / raw)

On 16/10/03 14:54, in article
mailman.99.1066312586.25614.comp.lang.ada@ada-france.org,
"Alexandre E. Kopilovitch" <aek@vib.usr.pu.ru> wrote:

> > I was not clear enough. I intended the "<>" to mean that only the
> > predefined literal conversion was to be used. Perhaps that is not
> > necessary?
>
> It is a desirable option; I just didn't catch it myself. But I doubt that
> "<>" is good here: is there a place in Ada where "<>" carries the sense
> of "predefined"?

Kind of. Sort of. 8-)  When you specify generic formal functions over a
generic formal type, e.g.:

   generic
      type Thing is ...;
      function "+" (L, R : in Thing) return Thing is <>;
   ...

meaning that the primitive "+" for Thing is the default.

> Actually, for that case we may write:
>
>    for Flex'Class'Literal_Conversion use Flex'Literal_Conversion;
>
> It will certainly carry the intended sense, but it is quite long and
> somewhat indirect, thus looking like one more idiom. So, I'd prefer your
> suggestion if "<>" is already associated with "predefined", but if not
> (and if there is no other conventional symbol for "predefined") then I'd
> stick to the latter (long) notation.

Yours is better, because more explicit, I think.

--
Bill

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-15 16:59 ` Alexandre E. Kopilovitch
  2003-10-15 20:38 ` (see below)
@ 2003-10-16  8:01 ` Dmitry A. Kazakov
  1 sibling, 0 replies; 44+ messages in thread
From: Dmitry A. Kazakov @ 2003-10-16 8:01 UTC (permalink / raw)

On Wed, 15 Oct 2003 20:59:00 +0400 (MSD), "Alexandre E. Kopilovitch"
<aek@vib.usr.pu.ru> wrote:

> My concern is quite the opposite: I'd like to have a means of blocking
> such overloadings (thus denying the opportunity for multiple literal
> types for a given type), including those that may emerge when a new type
> is derived; I don't know whether it is possible to control that
> overloading, and it worries me slightly.

You can't. Formally, a literal of type T and a parameterless function
returning T are indistinguishable. So if you derive, you inherit all of
them. However, if these functions were considered primitive operations (and
so a subject of overriding, not overloading) AND disallowing were allowed,
then one could do something with that.

---
Regards,
Dmitry Kazakov
www.dmitry-kazakov.de

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-12  5:03 ` Robert I. Eachus
  2003-10-13  9:07 ` Dmitry A. Kazakov
  2003-10-13 14:36 ` Alexandre E. Kopilovitch
@ 2003-10-17 20:26 ` Randy Brukardt
  2003-10-17 21:39 ` Alexandre E. Kopilovitch
  2003-10-17 23:03 ` Robert I. Eachus
  2 siblings, 2 replies; 44+ messages in thread
From: Randy Brukardt @ 2003-10-17 20:26 UTC (permalink / raw)

"Robert I. Eachus" <rieachus@comcast.net> wrote in message
news:3F88E067.30209@comcast.net...
> Now if you want to recommend that in Ada 200X, package
> Ada.Strings.Unbounded include:
>
>    function "+" (Source : in String) return Unbounded_String
>       renames To_Unbounded_String;
>    function "+" (Source : in Unbounded_String) return String
>       renames To_String;
>
> I will certainly support that. I don't really know why they were left
> out of Ada.Strings.Unbounded while

They were briefly in AI-301's improvements to Ada.Strings, but enough people
think that they're ugly that they were taken out.

Earlier, Robert said:

> And as I said, I think adding implicit conversion of string literals to
> and from Unbounded_String would work.

No, actually it wouldn't. It would make a lot of existing code ambiguous.

   A : Unbounded_String;
   B : Unbounded_String := A & "something";

Since the string literal could have either type String or Unbounded_String,
and "&" can have operands of either type, the expression would become
ambiguous.

It's virtually impossible to "improve" the current Ada.Strings.Unbounded.
We'd have to create a whole new package for that, and that would be a tough
sell for the standard. My preference would be a package in which String
occurs only in the To_xxx routines, along with a way to use string literals
directly as Unbounded_Strings. (Non-literal conversions from String should
be explicit.) That would be a much more consistent abstraction than
Ada.Strings.Unbounded. But we're pretty much stuck with
Ada.Strings.Unbounded as it is.

			Randy.

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-17 20:26 ` Randy Brukardt
@ 2003-10-17 21:39 ` Alexandre E. Kopilovitch
  2003-10-17 23:03 ` Robert I. Eachus
  1 sibling, 0 replies; 44+ messages in thread
From: Alexandre E. Kopilovitch @ 2003-10-17 21:39 UTC (permalink / raw)
To: comp.lang.ada

Randy Brukardt wrote:

> > And as I said, I think adding implicit conversion of string literals to
> > and from Unbounded_String would work.
>
> No, actually it wouldn't. It would make a lot of existing code ambiguous.
>
>    A : Unbounded_String;
>    B : Unbounded_String := A & "something";
>
> Since the string literal could have either type String or
> Unbounded_String, and "&" can have operands of either type, the
> expression would become ambiguous.

If we accept the basic commutative-diagram technique, at least *for
predefined packages*, this problem disappears.

Alexander Kopilovitch      aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-17 20:26 ` Randy Brukardt
  2003-10-17 21:39 ` Alexandre E. Kopilovitch
@ 2003-10-17 23:03 ` Robert I. Eachus
  2003-10-23 21:11 ` Alexandre E. Kopilovitch
  1 sibling, 1 reply; 44+ messages in thread
From: Robert I. Eachus @ 2003-10-17 23:03 UTC (permalink / raw)

Randy Brukardt wrote:
> "Robert I. Eachus" <rieachus@comcast.net> wrote in message
> news:3F88E067.30209@comcast.net...
>
>> Now if you want to recommend that in Ada 200X, package
>> Ada.Strings.Unbounded include:
>>
>>    function "+" (Source : in String) return Unbounded_String
>>       renames To_Unbounded_String;
>>    function "+" (Source : in Unbounded_String) return String
>>       renames To_String;
>>
>> I will certainly support that. I don't really know why they were left
>> out of Ada.Strings.Unbounded while
>
> They were briefly in AI-301's improvements to Ada.Strings, but enough
> people think that they're ugly that they were taken out.

   Foo : Unbounded_String := +"some_string";  -- is ugly?

Was anything offered as a non-ugly alternative? And Russ still thinks there
is some point to arguing for adding += to Ada? Lots of luck, Russ, but don't
be surprised if many of us feel that you are Don Quixote tilting at
windmills.

> No, actually it wouldn't. It would make a lot of existing code ambiguous.
>
>    A : Unbounded_String;
>    B : Unbounded_String := A & "something";
>
> Since the string literal could have either type String or
> Unbounded_String, and "&" can have operands of either type, the
> expression would become ambiguous.

Ouch! I forgot about that case when explaining why the non-literal
conversions would be ambiguous. Having worked on this, though, you can
handle it as you say: by a completely new package with different
overloadings of "&"--which won't happen.

--
Robert I. Eachus

"Quality is the Buddha. Quality is scientific reality. Quality is the goal
of Art. It remains to work these concepts into a practical, down-to-earth
context, and for this there is nothing more practical or down-to-earth than
what I have been talking about all along...the repair of an old motorcycle."
-- from Zen and the Art of Motorcycle Maintenance by Robert Pirsig

^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-17 23:03 ` Robert I. Eachus
@ 2003-10-23 21:11 ` Alexandre E. Kopilovitch
  0 siblings, 0 replies; 44+ messages in thread
From: Alexandre E. Kopilovitch @ 2003-10-23 21:11 UTC (permalink / raw)
To: comp.lang.ada

Robert I. Eachus wrote:
> Randy Brukardt wrote:
> > ...
> > No, actually it wouldn't. It would make a lot of existing code
> > ambiguous.
> >
> >    A : Unbounded_String;
> >    B : Unbounded_String := A & "something";
> >
> > Since the string literal could have either type String or
> > Unbounded_String, and "&" can have operands of either type, the
> > expression would become ambiguous.
>
> Ouch! I forgot about that case when explaining why the non-literal
> conversions would be ambiguous. Having worked on this, though, you can
> handle it as you say: by a completely new package with different
> overloadings of "&"--which won't happen.

Actually, there is no need for a completely new package with different
overloadings of "&" - just one little new pragma seems sufficient:

   pragma Non_Literal_Argument (subroutine-name, parameter-name);

stating (immediately following the function's spec) that the actual argument
for this parameter cannot be a literal. That is, the whole spec for that "&"
would be:

   function "&" (Left  : in Unbounded_String;
                 Right : in String)
      return Unbounded_String;
   pragma Non_Literal_Argument ("&", Right);

and that unhappy ambiguity disappears forever.

Alexander Kopilovitch      aek@vib.usr.pu.ru
Saint-Petersburg
Russia

^ permalink raw reply [flat|nested] 44+ messages in thread
* RE: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
@ 2003-10-03 12:00 amado.alves
  2003-10-03 15:54 ` Mark A. Biggar
  2003-10-03 20:41 ` Dmitry A. Kazakov
  0 siblings, 2 replies; 44+ messages in thread
From: amado.alves @ 2003-10-03 12:00 UTC (permalink / raw)
To: comp.lang.ada

"...we already have implicit conversion in Ada for numeric literals." (Jeff)

I know, and that was the 'precedent' for my proposal. And I'm familiar
with the 'nightmare' of generalised implicit conversion (in C). But
this could be tamed in Ada by defining the effect scope of pragma
Implicit_Conversion to be the immediately enclosing block.

Or a family of such pragmas for fine control of the effect:

   Implicit_Conversion_Down_From_Here
   Implicit_Conversion_Up_To_The_Next_Enclosing_Block
   Implicit_Conversion_All_Over

However I am not totally comfortable with *pragmas* for this class of
effect. Is there precedent?

^ permalink raw reply	[flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-03 12:00 amado.alves
@ 2003-10-03 15:54 ` Mark A. Biggar
  2003-10-03 20:41 ` Dmitry A. Kazakov
  1 sibling, 0 replies; 44+ messages in thread
From: Mark A. Biggar @ 2003-10-03 15:54 UTC (permalink / raw)

amado.alves wrote:
> "...we already have implicit conversion in Ada for numeric literals." (Jeff)
>
> I know, and that was the 'precedent' for my proposal. And I'm familiar
> with the 'nightmare' of generalised implicit conversion (in C). But
> this could be tamed in Ada by defining the effect scope of pragma
> Implicit_Conversion to be the immediately enclosing block.
>
> Or a family of such pragmas for fine control of the effect:
>
>    Implicit_Conversion_Down_From_Here
>    Implicit_Conversion_Up_To_The_Next_Enclosing_Block
>    Implicit_Conversion_All_Over
>
> However I am not totally comfortable with *pragmas* for this class of
> effect. Is there precedent?

The ARG would probably reject these pragmas out of hand. There are
currently no pragmas that take a syntactically illegal program and make
it legal, which is what the above do (actually pragma Import is an
exception to this rule, but it fills in missing syntax rather than
changing illegal to legal). It's alright to make legal things illegal
(e.g., pragma Restrictions) but not the other way. See LRM 2.8(16).

--
mark@biggar.org
mark.a.biggar@comcast.net

^ permalink raw reply	[flat|nested] 44+ messages in thread
* RE: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-03 12:00 amado.alves
  2003-10-03 15:54 ` Mark A. Biggar
@ 2003-10-03 20:41 ` Dmitry A. Kazakov
  1 sibling, 0 replies; 44+ messages in thread
From: Dmitry A. Kazakov @ 2003-10-03 20:41 UTC (permalink / raw)

amado.alves wrote:
> "...we already have implicit conversion in Ada for numeric literals."
> (Jeff)
>
> I know, and that was the 'precedent' for my proposal. And I'm familiar
> with the 'nightmare' of generalised implicit conversion (in C). But
> this could be tamed in Ada by defining the effect scope of pragma
> Implicit_Conversion to be the immediately enclosing block.
>
> Or a family of such pragmas for fine control of the effect:
>
>    Implicit_Conversion_Down_From_Here
>    Implicit_Conversion_Up_To_The_Next_Enclosing_Block
>    Implicit_Conversion_All_Over
>
> However I am not totally comfortable with *pragmas* for this class of
> effect. Is there precedent?

It would be awful. The only way *any* conversion may appear is a
definition of a derived type:

   type B is new A with ...; -- This inherits and defines a [view] conversion

Similarly there should be a way to *not* inherit an implementation of
A, but provide all necessary conversions instead.

--
Regards,
Dmitry A. Kazakov
www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 44+ messages in thread
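[Editor's note: Dmitry's point — that derivation is the one place where Ada already creates conversions, and even then only explicit ones — can be illustrated with a small sketch. The type names are purely illustrative.]

```ada
procedure Derived_Demo is
   type Meters is new Float;   --  derivation from Float creates the
   type Feet   is new Float;   --  explicit conversions Meters(..), Feet(..)

   M : Meters := 10.0;
   F : Feet;
begin
   --  The derived-type conversions are legal, but only when written
   --  explicitly: "F := M;" would be rejected at compile time.
   F := Feet (Float (M) * 3.2808);
end Derived_Demo;
```

This is the design tension in the thread: Ada's type system makes every conversion visible at the call site, which is exactly what an Implicit_Conversion pragma would undo.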
* RE: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
@ 2003-10-03 16:12 amado.alves
  2003-10-04 12:16 ` Preben Randhol
  0 siblings, 1 reply; 44+ messages in thread
From: amado.alves @ 2003-10-03 16:12 UTC (permalink / raw)
To: comp.lang.ada

"It's alright to make legal things illegal (E.g., pragma restrictions)
but not the other way." (Mark)

Then make the implicit conversion legal, and supply the restriction
No_Implicit_Conversion ;-)

^ permalink raw reply	[flat|nested] 44+ messages in thread
* Re: U : Unbounded_String := "bla bla bla"; (was: Is the Writing...)
  2003-10-03 16:12 amado.alves
@ 2003-10-04 12:16 ` Preben Randhol
  0 siblings, 0 replies; 44+ messages in thread
From: Preben Randhol @ 2003-10-04 12:16 UTC (permalink / raw)

On 2003-10-03, amado.alves <amado.alves@netcabo.pt> wrote:
> "It's alright to make legal things illegal (E.g., pragma restrictions)
> but not the other way." (Mark)
>
> Then make the implicit conversion legal, and supply the restriction
> No_Implicit_Conversion ;-)

Why not use C instead?

Preben

^ permalink raw reply	[flat|nested] 44+ messages in thread
end of thread, other threads:[~2003-10-23 21:11 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-10-02 18:02 U : Unbounded_String := "bla bla bla"; (was: Is the Writing...) amado.alves
2003-10-03  0:05 ` U : Unbounded String : " Alexander Kopilovitch
2003-10-03 20:46 ` Dmitry A. Kazakov
2003-10-03  9:00 ` U : Unbounded_String := " Preben Randhol
2003-10-03 11:17 ` Jeff C,
2003-10-04  2:49 ` Robert I. Eachus
2003-10-06 23:57 ` Alexandre E. Kopilovitch
2003-10-07  8:51 ` Dmitry A. Kazakov
2003-10-08 19:12 ` Alexandre E. Kopilovitch
2003-10-09  8:42 ` Dmitry A. Kazakov
2003-10-10 20:58 ` Alexander Kopilovitch
2003-10-13  8:35 ` Dmitry A. Kazakov
2003-10-13 21:43 ` Alexandre E. Kopilovitch
2003-10-14  8:09 ` Dmitry A. Kazakov
2003-10-16  9:39 ` Alexandre E. Kopilovitch
2003-10-18 10:57 ` Dmitry A. Kazakov
2003-10-08 23:18 ` Robert I. Eachus
2003-10-09 21:35 ` Alexandre E. Kopilovitch
2003-10-10 18:10 ` Robert I. Eachus
2003-10-11 19:43 ` Alexandre E. Kopilovitch
2003-10-12  5:03 ` Robert I. Eachus
2003-10-13  9:07 ` Dmitry A. Kazakov
2003-10-13 14:36 ` Alexandre E. Kopilovitch
2003-10-13 19:46 ` Robert I. Eachus
2003-10-14  1:35 ` Jeffrey Carter
2003-10-14 17:11 ` Alexandre E. Kopilovitch
2003-10-14 20:26 ` Mark A. Biggar
2003-10-14 20:58 ` Robert I. Eachus
2003-10-15 16:59 ` Alexandre E. Kopilovitch
2003-10-15 20:38 ` (see below)
2003-10-16  0:31 ` Alexandre E. Kopilovitch
2003-10-16  2:30 ` (see below)
2003-10-16 13:54 ` Alexandre E. Kopilovitch
2003-10-16 14:11 ` (see below)
2003-10-16  8:01 ` Dmitry A. Kazakov
2003-10-17 20:26 ` Randy Brukardt
2003-10-17 21:39 ` Alexandre E. Kopilovitch
2003-10-17 23:03 ` Robert I. Eachus
2003-10-23 21:11 ` Alexandre E. Kopilovitch
-- strict thread matches above, loose matches on Subject: below --
2003-10-03 12:00 amado.alves
2003-10-03 15:54 ` Mark A. Biggar
2003-10-03 20:41 ` Dmitry A. Kazakov
2003-10-03 16:12 amado.alves
2003-10-04 12:16 ` Preben Randhol
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox