comp.lang.ada
* Re: Why use C++?
       [not found]       ` <1fd0cc9b-859d-428e-b68a-11e34de84225@gz10g2000vbb.googlegroups.com>
@ 2011-08-10 19:05         ` Niklas Holsti
  2011-08-10 22:37           ` Randy Brukardt
  0 siblings, 1 reply; 87+ messages in thread
From: Niklas Holsti @ 2011-08-10 19:05 UTC (permalink / raw)


Stuart Redmann wrote:
> Stuart Redmann wrote
>>> Just one example: We use a lot of COM components in our company. If I
>>> want to use them under C++, I have to
>>> (A) Make sure that I initialize the COM run-time properly,
>>> (B) #import the type libraries,
>>> (C) generate mapping classes from the type libraries so that I can use
>>> the interfaces like I can do under VB.
> 
> Paul wrote:
>> Yes the right tool for these type of applications is usually not C++. If
>> your project was to create the COM object  then C++ would probably be the
>> language of choice.
> 
> It is ;-)
> 
> Sometimes I wish that there was a C++ interpreter as well. That way I
> would have to remember only one set of run-time library calls instead of two.
> However, the industry prefers to re-invent the nth virtual machine and
> the xth GUI toolkit. And of course, none of the popular programming
> languages (not even Ada95) supports proper non-wrapping unsigned
> integer types (and I have to use unsigned types for image
> processing).

In what way does Ada (95, or the current standard 2005) not support your 
needs for unsigned types?

If you dislike the wrapping behaviour of the Ada "modular" unsigned 
types, what is wrong with defining your own unsigned integer type in 
Ada, as in

    type My_Number is range 0 .. 53621;

or whatever range you need. This gives you non-wrapping arithmetic with 
run-time checks for overflow, underflow, and range.
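For readers coming from C++, the same behavior has to be built by hand; a
rough sketch (the class below is hypothetical, not from any library -- in
Ada the compiler emits the equivalent checks automatically):

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical C++ analogue of the Ada declaration
//   type My_Number is range 0 .. 53621;
// Out-of-range results raise an exception instead of wrapping.
class MyNumber {
public:
    static constexpr std::int64_t First = 0;
    static constexpr std::int64_t Last  = 53621;

    explicit MyNumber(std::int64_t x) : v(x) {
        if (x < First || x > Last)
            throw std::range_error("My_Number out of range");
    }
    std::int64_t value() const { return v; }

    // Each operation re-checks the range, like Ada's run-time checks.
    MyNumber operator+(MyNumber rhs) const { return MyNumber(v + rhs.v); }
    MyNumber operator-(MyNumber rhs) const { return MyNumber(v - rhs.v); }

private:
    std::int64_t v;  // wide enough that the checks themselves cannot overflow
};
```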

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-10 19:05         ` Why use C++? Niklas Holsti
@ 2011-08-10 22:37           ` Randy Brukardt
  2011-08-10 22:49             ` Ludovic Brenta
  2011-08-11  7:54             ` Dmitry A. Kazakov
  0 siblings, 2 replies; 87+ messages in thread
From: Randy Brukardt @ 2011-08-10 22:37 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:9ag33sFmuaU1@mid.individual.net...
...
> In what way does Ada (95, or the current standard 2005) not support your 
> needs for unsigned types?

I'm not the OP, but...

> If you dislike the wrapping behaviour of the Ada "modular" unsigned types, 
> what is wrong with defining your own unsigned integer type in Ada, as in
>
>    type My_Number is range 0 .. 53621;

This does not work on the largest unsigned integer type (mod 2**32 in 
Janus/Ada). Ada 95 requires

    type My_Number is range 0 .. 2**32-1;

to be rejected, as the upper bound exceeds Max_Int. (This is technically a 
"signed_integer_type_definition", which shows the cause of the problem.)

The Janus/Ada intermediate code can generate overflow checks for the largest 
integer type, but I've failed to find any reasonable way to model such a 
type in Ada (assuming we want a real integer type, with numeric literals). 
The closest is the "non-standard integer type" of 3.5.4(26) -- but why 
should such a basic need be "non-standard"? And having it "non-standard" 
means that you can't use type declarations like the above to get it. Yuck.

There are uses for wrapping types, but they are far less likely than wanting 
overflow detection. The default should be to catch errors, not turn them 
into different ones.
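The difference can be sketched in C++, where a full-range checked unsigned
add also has to be written by hand (the function name is made up for
illustration):

```cpp
#include <cstdint>
#include <limits>
#include <stdexcept>

// A full-range (0 .. 2**32-1) unsigned add that reports overflow as an
// error instead of silently wrapping it into a different, wrong value.
std::uint32_t checked_add(std::uint32_t a, std::uint32_t b) {
    if (a > std::numeric_limits<std::uint32_t>::max() - b)
        throw std::overflow_error("32-bit unsigned overflow");
    return a + b;  // reached only when the result fits
}
```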

Too late to fix, unfortunately.

                                                      Randy.





> or whatever range you need. This gives you non-wrapping arithmetic with 
> run-time checks for overflow, underflow, and range.
>
> -- 
> Niklas Holsti
> Tidorum Ltd
> niklas holsti tidorum fi
>       .      @       . 






* Re: Why use C++?
  2011-08-10 22:37           ` Randy Brukardt
@ 2011-08-10 22:49             ` Ludovic Brenta
  2011-08-12  4:54               ` Randy Brukardt
  2011-08-11  7:54             ` Dmitry A. Kazakov
  1 sibling, 1 reply; 87+ messages in thread
From: Ludovic Brenta @ 2011-08-10 22:49 UTC (permalink / raw)


"Randy Brukardt" writes on comp.lang.ada:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
> news:9ag33sFmuaU1@mid.individual.net...
> ...
>> In what way does Ada (95, or the current standard 2005) not support your 
>> needs for unsigned types?
>
> I'm not the OP, but...
>
>> If you dislike the wrapping behaviour of the Ada "modular" unsigned types, 
>> what is wrong with defining your own unsigned integer type in Ada, as in
>>
>>    type My_Number is range 0 .. 53621;
>
> This does not work on the largest unsigned integer type (mod 2**32 in 
> Janus/Ada).

Presuming Max_Int = 2**31 - 1, I'd say range 0 .. 53621 is
perfectly legal.  It would only be illegal on a 16-bit machine.

> Ada 95 requires
>
>     type My_Number is range 0 .. 2**32-1;
>
> to be rejected, as the upper bound exceeds Max_Int. (This is
> technically a "signed_integer_type_definition", which shows the cause
> of the problem.)

But this simply stems from the requirement that Ada be efficiently
implementable on this target hardware, which has 32-bit integers.

-- 
Ludovic Brenta.




* Re: Why use C++?
  2011-08-10 22:37           ` Randy Brukardt
  2011-08-10 22:49             ` Ludovic Brenta
@ 2011-08-11  7:54             ` Dmitry A. Kazakov
  2011-08-11  8:20               ` Jed
                                 ` (2 more replies)
  1 sibling, 3 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-11  7:54 UTC (permalink / raw)


On Wed, 10 Aug 2011 17:37:28 -0500, Randy Brukardt wrote:

> There are uses for wrapping types, but they are far less likely than wanting 
> overflow detection. The default should be to catch errors, not turn them 
> into different ones.

The OP mentioned image processing; the behavior frequently needed there is
saturated integer arithmetic, which is neither ranged nor modular.

As for modular types, wrapping is the mathematically correct behavior; it
is not an error.

You just cannot provide every possible arithmetic at the language level.
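For instance, the usual 8-bit pixel arithmetic clamps at the ends of the
range instead of wrapping or raising; a minimal sketch:

```cpp
#include <algorithm>
#include <cstdint>

// Saturating 8-bit addition, as used in image processing: results
// clamp at 255 rather than wrapping (modular) or raising (ranged).
std::uint8_t sat_add(std::uint8_t a, std::uint8_t b) {
    unsigned sum = unsigned(a) + unsigned(b);  // widen; cannot overflow
    return static_cast<std::uint8_t>(std::min(sum, 255u));
}
```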

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Why use C++?
  2011-08-11  7:54             ` Dmitry A. Kazakov
@ 2011-08-11  8:20               ` Jed
  2011-08-11  9:13                 ` Dmitry A. Kazakov
  2011-08-12 11:48                 ` Stuart Redmann
  2011-08-12  5:02               ` Randy Brukardt
  2011-08-18 13:39               ` Louisa
  2 siblings, 2 replies; 87+ messages in thread
From: Jed @ 2011-08-11  8:20 UTC (permalink / raw)



"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1d8wyhvpcmpkd.ggiui9vebmtl.dlg@40tude.net...
> On Wed, 10 Aug 2011 17:37:28 -0500, Randy Brukardt wrote:
>
>> There are uses for wrapping types, but they are far less likely than
>> wanting overflow detection. The default should be to catch errors, not
>> turn them into different ones.
>
> The OP mentioned image processing; the behavior frequently needed there
> is saturated integer arithmetic, which is neither ranged nor modular.
>
> As for modular types, wrapping is the mathematically correct behavior;
> it is not an error.
>
> You just cannot provide every possible arithmetic at the language
> level.
>

What do you think the practical level of limitation is? Were you thinking 
"beyond integers" with your statement? What kinds of integer types would 
you like to see built-in to a hypothetical ideal language? 






* Re: Why use C++?
  2011-08-11  8:20               ` Jed
@ 2011-08-11  9:13                 ` Dmitry A. Kazakov
  2011-08-11 10:57                   ` Jed
  2011-08-12 11:48                 ` Stuart Redmann
  1 sibling, 1 reply; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-11  9:13 UTC (permalink / raw)


On Thu, 11 Aug 2011 03:20:55 -0500, Jed wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1d8wyhvpcmpkd.ggiui9vebmtl.dlg@40tude.net...
>> On Wed, 10 Aug 2011 17:37:28 -0500, Randy Brukardt wrote:
>>
>>> There are uses for wrapping types, but they are far less likely than wanting
>>> overflow detection. The default should be to catch errors, not turn them
>>> into different ones.
>>
>> The OP mentioned image processing; the behavior frequently needed there is
>> saturated integer arithmetic, which is neither ranged nor modular.
>>
>> As for modular types, wrapping is the mathematically correct behavior;
>> it is not an error.
>>
>> You just cannot provide every possible arithmetic at the language 
>> level.
>>
> 
> What do you think the practical level of limitation is?

Types that cannot be constructed by the operations of the type algebra 
provided by the language. The richer the algebra, the fewer built-in types 
are needed.

> Were you thinking "beyond integers" with your statement?

There is nothing special about integer types, except that for the type 
algebra you need ordinal numerals (i.e. at least one integer type).

> What kinds of integer types would 
> you like to see built-in to a hypothetical ideal language?

As few as possible. Using predefined integer types makes a design more 
fragile. E.g. it is bad when somebody uses Ada's Integer, or C++ int (OK, 
in C++ there is no other option), instead of introducing a type reflecting 
the application semantics rather than the decision of some compiler vendor 
motivated by other concerns.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Why use C++?
  2011-08-11  9:13                 ` Dmitry A. Kazakov
@ 2011-08-11 10:57                   ` Jed
  2011-08-11 11:43                     ` Georg Bauhaus
                                       ` (2 more replies)
  0 siblings, 3 replies; 87+ messages in thread
From: Jed @ 2011-08-11 10:57 UTC (permalink / raw)



"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:150vz10ihvb5a.1lysmewa1muz4$.dlg@40tude.net...
> On Thu, 11 Aug 2011 03:20:55 -0500, Jed wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:1d8wyhvpcmpkd.ggiui9vebmtl.dlg@40tude.net...
>>> On Wed, 10 Aug 2011 17:37:28 -0500, Randy Brukardt wrote:
>>>
>>>> There are uses for wrapping types, but they are far less likely than
>>>> wanting overflow detection. The default should be to catch errors,
>>>> not turn them into different ones.
>>>
>>> The OP mentioned image processing; the behavior frequently needed
>>> there is saturated integer arithmetic, which is neither ranged nor
>>> modular.
>>>
>>> As for modular types, wrapping is the mathematically correct behavior;
>>> it is not an error.
>>>
>>> You just cannot provide every possible arithmetic at the language
>>> level.
>>>
>>
>> What do you think the practical level of limitation is?
>
> Types that cannot be constructed by the operations of the type algebra
> provided by the language. The richer the algebra, the fewer built-in
> types are needed.

Will you give an example to clarify please?

>
>> Were you thinking "beyond integers" with your statement?
>
> There is nothing special about integer types, except that for the type
> algebra you need ordinal numerals (i.e. at least one integer type).

I know a little bit (enough to "be dangerous") of Intel assembly, and it 
looks like C/C++ integer types are a direct reflection of what is at that 
level. Even the fact that by default, integer literals are signed. So in 
that respect, I think they are special. They are chosen for efficiency as 
a direct reflection of hardware (I'm not saying all hardware is the same, 
mind you).

>
>> What kinds of integer types would
>> you like to see built-in to a hypothetical ideal language?
>
> As few as possible. Using predefined integer types makes a design more
> fragile. E.g. it is bad when somebody uses Ada's Integer, or C++ int
> (OK, in C++ there is no other option), instead of introducing a type
> reflecting the application semantics rather than the decision of some
> compiler vendor motivated by other concerns.
>

You have to build those types though based upon the built-in ones, yes? 
If so, aren't modular, wrapping and overflow-checked equally good for 
something and all worthy of being in a language? Of course there is 
signed, unsigned and the different bit widths as candidates also. And are 
not those built-ins good to use "raw" in many cases? Are you suggesting 
that a language have NO types? There is assembly language for that (and 
the instruction set pretty much dictates what types you have to work with 
at that level). 






* Re: Why use C++?
  2011-08-11 10:57                   ` Jed
@ 2011-08-11 11:43                     ` Georg Bauhaus
  2011-08-12  5:07                       ` Jed
  2011-08-11 13:11                     ` Nomen Nescio
  2011-08-11 15:09                     ` Dmitry A. Kazakov
  2 siblings, 1 reply; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-11 11:43 UTC (permalink / raw)


On 11.08.11 12:57, Jed wrote:

> You have to build those types though based upon the built-in ones, yes? 
> If so, aren't modular, wrapping and overflow-checked equally good for 
> something and all worthy of being in a language? Of course there is 
> signed, unsigned and the different bit widths as candidates also. And are 
> not those built-ins good to use "raw" in many cases? Are you suggesting 
> that a language have NO types? There is assembly language for that (and 
> the instruction set pretty much dictates what types you have to work with 
> at that level). 

I like means to construct discrete types, for example, from
items expressible in the language without explicitly referring
to some built-in named type.  Let literals 0 and 15 be known to the
compiler.  Let the programmer say,
  "I want a ranking system, and I want the type for that system
to have values {0, ..., 15}. Plus I need the following operations,
..., but not these ..., and my "-"/'Pred function should be saturating
(if there should be operator overloading)."

In this request, does the programmer need to refer to int, or uint16_t,
or Ada's Natural, or anything for that?
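A rough C++ rendering of that request (names invented for illustration; the
point is that no built-in integer type appears in the interface):

```cpp
// Hypothetical ranking type with values {0, ..., 15}: only the requested
// operations are exposed, and the predecessor saturates at 0.
class Rank {
public:
    explicit Rank(int x) : v(x < 0 ? 0 : (x > 15 ? 15 : x)) {}
    int value() const { return v; }
    Rank pred() const { return Rank(v == 0 ? 0 : v - 1); }  // saturating "-"
    // No operator+, operator*, ...: they were not part of the contract.
private:
    int v;  // invariant: 0 <= v <= 15
};
```

Whether the constructor clamps (as here) or raises is itself a design
choice the programmer would state in the request.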

- georg




* Re: Why use C++?
  2011-08-11 10:57                   ` Jed
  2011-08-11 11:43                     ` Georg Bauhaus
@ 2011-08-11 13:11                     ` Nomen Nescio
  2011-08-11 15:11                       ` Paul
  2011-08-12  5:15                       ` Jed
  2011-08-11 15:09                     ` Dmitry A. Kazakov
  2 siblings, 2 replies; 87+ messages in thread
From: Nomen Nescio @ 2011-08-11 13:11 UTC (permalink / raw)


"Jed" <jehdiah@orbitway.net> wrote:

> I know a little bit (enough to "be dangerous") of Intel assembly, and it 
> looks like C/C++ integer types are a direct reflection of what is at that 
> level. Even the fact that by default, integer literals are signed. So in 
> that respect, I think they are special. They are chosen for efficiency as 
> a direct reflection of hardware (I'm not saying all hardware is the same, 
> mind you).

What?

Of course there are unsigned integers, even in the twisted world of Intel.
What's more, C/C++ integer types are not a direct reflection of anything
except the language spec; otherwise the code would be completely
non-portable.

> You have to build those types though based upon the built-in ones, yes? 
> If so, aren't modular, wrapping and overflow-checked equally good for 
> something and all worthy of being in a language? Of course there is 
> signed, unsigned and the different bit widths as candidates also. And are 
> not those built-ins good to use "raw" in many cases? Are you suggesting 
> that a language have NO types? There is assembly language for that (and 
> the instruction set pretty much dictates what types you have to work with 
> at that level). 

It's not true that assembly language has no types, especially on certain
systems and with certain assemblers. The types in assembly language *do*
usually reflect the native types of the underlying machine very closely,
obviously.





* Re: Why use C++?
  2011-08-11 10:57                   ` Jed
  2011-08-11 11:43                     ` Georg Bauhaus
  2011-08-11 13:11                     ` Nomen Nescio
@ 2011-08-11 15:09                     ` Dmitry A. Kazakov
  2011-08-12  5:03                       ` Jed
  2 siblings, 1 reply; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-11 15:09 UTC (permalink / raw)


On Thu, 11 Aug 2011 05:57:32 -0500, Jed wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:150vz10ihvb5a.1lysmewa1muz4$.dlg@40tude.net...

>> Types that cannot be constructed by the operations of the type algebra
>> provided by the language. The richer the algebra, the fewer built-in
>> types are needed.
> 
> Will you give an example to clarify please?

Ada does not have predefined modular types in the package Standard, because
any modular type can be constructed using the "type T is mod N" construct.
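The construct is not limited to powers of two. A transliteration into C++
(the template below is written here for illustration) shows what "type T is
mod 5" gives you:

```cpp
// C++ transliteration of Ada's "type T is mod N": all arithmetic is
// performed modulo N, and N need not be a power of two.
template <unsigned N>
struct Mod {
    unsigned v;
    explicit Mod(unsigned x) : v(x % N) {}
    Mod operator+(Mod rhs) const { return Mod(v + rhs.v); }
    Mod operator-(Mod rhs) const { return Mod(v + N - rhs.v); }  // no underflow
};
```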

>>> What kinds of integer types would
>>> you like to see built-in to a hypothetical ideal language?
>>
>> As few as possible. Using predefined integer types makes a design more
>> fragile. E.g. it is bad when somebody uses Ada's Integer, or C++ int (OK,
>> in C++ there is no other option), instead of introducing a type reflecting
>> the application semantics rather than the decision of some compiler vendor
>> motivated by other concerns.
> 
> You have to build those types though based upon the built-in ones, yes?

That depends. In Ada integer types are constructed from ranges. In order to
specify the range you need literals. The type of a literal
(Universal_Integer) is not the type of the result. So technically the type
is not built on Universal_Integer.
 
> If so, aren't modular, wrapping and overflow-checked equally good for 
> something and all worthy of being in a language? Of course there is 
> signed, unsigned and the different bit widths as candidates also.

Packed decimal, non-symmetric around zero, saturating, non-continuous, big
number, non-complement encoded, integers with ideal values (like NaN etc)
and so on, and so forth. You cannot have them built-in.

> And are not those built-ins good to use "raw" in many cases?

Always bad. The language shall distinguish the interface, the contract the
type fulfills, and the implementation of that type. "raw" turns this
picture upside down. First comes an implementation, and then, frequently
never, comes consideration of whether this implementation is right for the
needs.

> Are you suggesting that a language have NO types?

I suggest a minimal set of given types and as wide as possible a set of
constructed types. The point is that it is the application domain
requirements that drive the design, and the choice of types in particular.

> There is assembly language for that (and 
> the instruction set pretty much dictates what types you have to work with 
> at that level).

There is no language without types. That includes assembly languages.
Weak/paraconsistent typing does not mean no typing. In each context you
have a type associated with any object involved in an operation on that
object.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Why use C++?
  2011-08-11 13:11                     ` Nomen Nescio
@ 2011-08-11 15:11                       ` Paul
  2011-08-12  5:15                       ` Jed
  1 sibling, 0 replies; 87+ messages in thread
From: Paul @ 2011-08-11 15:11 UTC (permalink / raw)



"Nomen Nescio" <nobody@dizum.com> wrote in message 
news:4209d782502610f6dbc08933d358b6d6@dizum.com...
> "Jed" <jehdiah@orbitway.net> wrote:
>
>> I know a little bit (enough to "be dangerous") of Intel assembly, and it
>> looks like C/C++ integer types are a direct reflection of what is at that
>> level. Even the fact that by default, integer literals are signed. So in
>> that respect, I think they are special. They are chosen for efficiency as
>> a direct reflection of hardware (I'm not saying all hardware is the same,
>> mind you).
>
> What?
>
> Of course there are unsigned integers, even in the twisted world of Intel.
> What's more, C/C++ integer types are not a direct reflection of anything
> except the language spec; otherwise the code would be completely
> non-portable.
>
>> You have to build those types though based upon the built-in ones, yes?
>> If so, aren't modular, wrapping and overflow-checked equally good for
>> something and all worthy of being in a language? Of course there is
>> signed, unsigned and the different bit widths as candidates also. And are
>> not those built-ins good to use "raw" in many cases? Are you suggesting
>> that a language have NO types? There is assembly language for that (and
>> the instruction set pretty much dictates what types you have to work with
>> at that level).
>
> It's not true assembly language has no types, especially with certain
> systems and certain assemblers. The type in assembly language *does*
> usually reflect the native types of the underlying machine very closely,
> obviously.
>
I think what Jed means is that there are no types at the asm level, just 
sizes such as Byte, DWord, QWord, etc. These scalars are neither signed nor 
unsigned, but there are operations that process the data assuming a sign.
Of course there are different types if you consider scalars, real numbers, 
pointers and UDTs as different types.






* Re: Why use C++?
  2011-08-10 22:49             ` Ludovic Brenta
@ 2011-08-12  4:54               ` Randy Brukardt
  0 siblings, 0 replies; 87+ messages in thread
From: Randy Brukardt @ 2011-08-12  4:54 UTC (permalink / raw)


"Ludovic Brenta" <ludovic@ludovic-brenta.org> wrote in message 
news:87pqkc6fwj.fsf@ludovic-brenta.org...
> "Randy Brukardt" writes on comp.lang.ada:
...
>> Ada 95 requires
>>
>>     type My_Number is range 0 .. 2**32-1;
>>
>> to be rejected, as the upper bound exceeds Max_Int. (This is
>> technically a "signed_integer_type_definition", which shows the cause
>> of the problem.)
>
> But this simply stems from the requirement that Ada be efficiently
> implementable on this target hardware, which has 32-bit integers.

Huh? Unsigned 32-bit integers are directly supported by the machine hardware 
(on every real 32-bit processor I've ever looked at). The problem is that 
Ada requires all integers to be signed or wrap-around, which is silly.

                                         Randy.






* Re: Why use C++?
  2011-08-11  7:54             ` Dmitry A. Kazakov
  2011-08-11  8:20               ` Jed
@ 2011-08-12  5:02               ` Randy Brukardt
  2011-08-12  5:16                 ` Robert Wessel
                                   ` (2 more replies)
  2011-08-18 13:39               ` Louisa
  2 siblings, 3 replies; 87+ messages in thread
From: Randy Brukardt @ 2011-08-12  5:02 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1d8wyhvpcmpkd.ggiui9vebmtl.dlg@40tude.net...
> On Wed, 10 Aug 2011 17:37:28 -0500, Randy Brukardt wrote:
>
>> There are uses for wrapping types, but they are far less likely than
>> wanting overflow detection. The default should be to catch errors, not
>> turn them into different ones.
>
> The OP mentioned image processing; the behavior frequently needed there is
> saturated integer arithmetic, which is neither ranged nor modular.

I'm not familiar with any hardware on which saturated integer arithmetic is 
provided. If it were, I would expect direct language support for it (which 
need not require new kinds of types).

> As for modular types, wrapping is the mathematically correct behavior; it
> is not an error.

Right, but using modular types as a stand-in for unsigned integers doesn't 
really work.

Syntactically, Ada ought to have a way to declare an overflow-checked 
integer type without any restriction to signed/unsigned representations.

Modular types are something altogether different (and in all honesty, rare 
enough that direct language support is of dubious value -- most of us 
supported adding them to Ada 95 simply because it was the only way to get 
any support for the largest unsigned integer type).

> You just cannot provide every possible arithmetic at the language level.

No, you should have a single general integer type (with no reflection on 
"bits" or "signs"). The rest should be modeled as libraries. (I think we 
actually agree on this -- must be something in the water today. ;-) Ada has 
screwed this up (but not as badly as most languages).

                                    Randy.






* Re: Why use C++?
  2011-08-11 15:09                     ` Dmitry A. Kazakov
@ 2011-08-12  5:03                       ` Jed
  2011-08-12  8:32                         ` Georg Bauhaus
  2011-08-12  9:21                         ` Dmitry A. Kazakov
  0 siblings, 2 replies; 87+ messages in thread
From: Jed @ 2011-08-12  5:03 UTC (permalink / raw)



"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1q4c610mmuxn7$.1k6s78wa0r8fj.dlg@40tude.net...
> On Thu, 11 Aug 2011 05:57:32 -0500, Jed wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:150vz10ihvb5a.1lysmewa1muz4$.dlg@40tude.net...
>
>>> Types that cannot be constructed by the operations of the type
>>> algebra provided by the language. The richer the algebra, the fewer
>>> built-in types are needed.
>>
>> Will you give an example to clarify please?
>
> Ada does not have predefined modular types in the package Standard,
> because any modular type can be constructed using the "type T is mod N"
> construct.

OK, I plainly see what you meant now even without your example. It's 
amazing what a good night's sleep will do. Last night, I was keying in on 
the word "algebra" too much and it blinded me to the overall meaning.

I'm not sure I'd want that level of flexibility in a language. I think it 
may be too much effort for too little benefit. It seems it may be doing 
something in software that could be supported in hardware but isn't (?).
Maybe too much of a "purist" approach. I'd need more info about it, and to 
be convinced it is worth it, before I'd put that on my shortlist of 
compelling features for a language to have. Is it worth investigating in 
Ada? Does Ada, in your opinion, "do it right/better"?

>
>>>> What kinds of integer types would
>>>> you like to see built-in to a hypothetical ideal language?
>>>
>>> As few as possible. Using predefined integer types makes a design
>>> more fragile. E.g. it is bad when somebody uses Ada's Integer, or C++
>>> int (OK, in C++ there is no other option), instead of introducing a
>>> type reflecting the application semantics rather than the decision of
>>> some compiler vendor motivated by other concerns.
>>
>> You have to build those types though based upon the built-in ones, 
>> yes?
>
> That depends. In Ada integer types are constructed from ranges.

Oh, I thought ranges in Ada were "subtype ranges", hence being based upon 
the built-ins.

>  In order to
> specify the range you need literals. The type of a literal
> (Universal_Integer) is not the type of the result. So technically the 
> type
> is not built on Universal_Integer.

But the "subrange" is meant as "subrange of some existing type", yes? If 
so, what existing type?

>
>> If so, aren't modular, wrapping and overflow-checked equally good for
>> something and all worthy of being in a language? Of course there is
>> signed, unsigned and the different bit widths as candidates also.
>
> Packed decimal, non-symmetric around zero, saturating, non-continuous, 
> big
> number, non-complement encoded, integers with ideal values (like NaN 
> etc)
> and so on, and so forth. You cannot have them built-in.

Why not? What do you understand as "built-in"? Intel x86 has BCDs, so it 
would be easy to have that as a type in a language. Do you require 
"built-in" to mean "supported directly by the hardware"?

>
>> And are not those built-ins good to use "raw" in many cases?
>
> Always bad.

Pretty strong opinion there.

> The language shall distinguish the interface, the contract the
> type fulfills, and the implementation of that type. "raw" turns this
> picture upside down. First comes an implementation and then, frequently
> never, consideration whether this implementation is right for the 
> needs.

You meant "application" instead of "language", right? So you would prefer 
to have a special "loop counter integer" instead of using a general 
unsigned integer?

>
>> Are you suggesting that a language have NO types?
>
> I suggest a minimal possible set of given types and as wide as possible 
> set
> of types constructed.

And do you reject wrapping an existing integer type to get the desired 
semantics?

> The point is that it is the application domain
> requirements to drive the design, and the choice of types in 
> particular.
>

Can that be achieved? At what degree of complexity? Can it/should it be 
hardware-supported? Is it compromised if implemented in software?

>> There is assembly language for that (and
>> the instruction set pretty much dictates what types you have to work 
>> with
>> at that level).
>
> There is no language without types. That includes assembly languages.

"Typeless" then (quotes usually omitted for the meaning is generally 
understood).

> Weak/paraconsistent typing does not mean no typing. In each context you
> have a type associated with any object involved in an operation on that
> object.

From an HLL-designer POV, saying assembly language is "typeless" is fair, 
and it is usually stated that way. One can pick nits at that if they want to.






* Re: Why use C++?
  2011-08-11 11:43                     ` Georg Bauhaus
@ 2011-08-12  5:07                       ` Jed
  0 siblings, 0 replies; 87+ messages in thread
From: Jed @ 2011-08-12  5:07 UTC (permalink / raw)



"Georg Bauhaus" <rm.dash-bauhaus@futureapps.de> wrote in message 
news:4e43c072$0$6625$9b4e6d93@newsspool2.arcor-online.net...
> On 11.08.11 12:57, Jed wrote:
>
>> You have to build those types though based upon the built-in ones, 
>> yes?
>> If so, aren't modular, wrapping and overflow-checked equally good for
>> something and all worthy of being in a language? Of course there is
>> signed, unsigned and the different bit widths as candidates also. And 
>> are
>> not those built-ins good to use "raw" in many cases? Are you 
>> suggesting
>> that a language have NO types? There is assembly language for that 
>> (and
>> the instruction set pretty much dictates what types you have to work 
>> with
>> at that level).
>
> I like means to construct discrete types, for example, from
> items expressible in the language without explicitly referring
> to some built-in named type.  Let literals 0 and 15 be known to the
> compiler.  Let the programmer say,
>  "I want a ranking system, and I want the type for that system
> to have values {0, ..., 15}. Plus I need the following operations,
> ..., but not these ..., and my "-"/'Pred function should be saturating
> (if there should be operator overloading)."
>
> In this request, does the programmer need to refer to int, or uint16_t,
> or Ada's Natural, or anything for that?
>

I understand that desire. I consider that a 
higher-than-close-to-the-machine level of programming, for it's just 
abstracting some underlying thing. I wonder if that is what Dmitry wants: 
a higher level of abstraction. 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-11 13:11                     ` Nomen Nescio
  2011-08-11 15:11                       ` Paul
@ 2011-08-12  5:15                       ` Jed
  2011-08-12 21:39                         ` Fritz Wuehler
  1 sibling, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-12  5:15 UTC (permalink / raw)



"Nomen Nescio" <nobody@dizum.com> wrote in message 
news:4209d782502610f6dbc08933d358b6d6@dizum.com...
> "Jed" <jehdiah@orbitway.net> wrote:
>
>> I know a little bit (enough to "be dangerous") of Intel assembly, and 
>> it
>> looks like C/C++ integer types are a direct reflection of what is at 
>> that
>> level. Even the fact that by default, integer literals are signed. So 
>> in
>> that respect, I think they are special. They are chosen for efficiency 
>> as
>> a direct reflection of hardware (I'm not saying all hardware is the 
>> same,
>> mind you).
>
> What?

What what?

>
> Of course there are unsigned integers even in the twisted world of 
> Intel.
> What's more C/C++ integer types are not a direct reflection of anything
> except the language spec otherwise the code would be completely not
> portable.

I didn't say it was. I said it probably arose from that. I'll bet integer 
literals are signed in most assembly languages (going out on a limb with 
this, because I really don't know). Hence, a no-brainer translation of 
HLL code to assembly (recognizing, of course, that compilers are free to 
generate machine code directly, rather than generating assembly).

>
>> You have to build those types though based upon the built-in ones, 
>> yes?
>> If so, aren't modular, wrapping and overflow-checked equally good for
>> something and all worthy of being in a language? Of course there is
>> signed, unsigned and the different bit widths as candidates also. And 
>> are
>> not those built-ins good to use "raw" in many cases? Are you 
>> suggesting
>> that a language have NO types? There is assembly language for that 
>> (and
>> the instruction set pretty much dictates what types you have to work 
>> with
>> at that level).
>
> It's not true assembly language has no types,

It's generally considered "typeless" from the POV of HLL programmers. It 
has relative meaning. No need to be pedantic about it.

> especially with certain
> systems and certain assemblers. The type in assembly language *does*
> usually reflect the native types of the underlying machine very 
> closely,
> obviously.

And I'll bet, more often than not, C/C++ built-in types reflect that 
also. It would be "silly" to specify a language around what is uncommon.





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:02               ` Randy Brukardt
@ 2011-08-12  5:16                 ` Robert Wessel
  2011-08-12 16:39                   ` Adam Beneschan
  2011-08-12  5:24                 ` Jed
  2011-08-12  9:40                 ` Dmitry A. Kazakov
  2 siblings, 1 reply; 87+ messages in thread
From: Robert Wessel @ 2011-08-12  5:16 UTC (permalink / raw)


On Fri, 12 Aug 2011 00:02:55 -0500, "Randy Brukardt"
<randy@rrsoftware.com> wrote:

>"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
>news:1d8wyhvpcmpkd.ggiui9vebmtl.dlg@40tude.net...
>> On Wed, 10 Aug 2011 17:37:28 -0500, Randy Brukardt wrote:
>>
>>> There are uses for wrapping types, but they are far less likely than 
>>> wanting
>>> overflow detection. The default should be to catch errors, not turn them
>>> into different ones.
>>
>> The OP mentioned image processing, the behavior frequently needed there is
>> saturated integer arithmetic, which is neither ranged nor modular.
>
>I'm not familiar with any hardware on which saturated integer arithmetic is 
>provided. If it was, I would expect direct language support for it (which 
>need not require new kinds of types).


Many DSPs, graphics coprocessors and the like have saturating integer
arithmetic, usually in addition to conventional arithmetic.  x86 has
saturating integer vector instructions as well, dating back to MMX
(PADDUSW, for example).



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:02               ` Randy Brukardt
  2011-08-12  5:16                 ` Robert Wessel
@ 2011-08-12  5:24                 ` Jed
  2011-08-12  6:51                   ` Paavo Helde
  2011-08-12 15:50                   ` Fritz Wuehler
  2011-08-12  9:40                 ` Dmitry A. Kazakov
  2 siblings, 2 replies; 87+ messages in thread
From: Jed @ 2011-08-12  5:24 UTC (permalink / raw)



"Randy Brukardt" <randy@rrsoftware.com> wrote in message 
news:j22c61$5lo$1@munin.nbi.dk...

> Modular types are something altogether different (and in all honesty, 
> rare enough that direct language support is of dubious value -- most of 
> us supported adding them to Ada 95 simply because it was the only way 
> to get any support for the largest unsigned integer type).
>

Isn't the wrapping behavior just a consequence of wanting to get a 
representation in which signed and unsigned integers can be easily 
converted to each other? I.e., "they" didn't sit down and say, "let's 
implement unsigned integers with wrapping behavior". 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:24                 ` Jed
@ 2011-08-12  6:51                   ` Paavo Helde
  2011-08-12  7:41                     ` Georg Bauhaus
  2011-08-12 15:50                   ` Fritz Wuehler
  1 sibling, 1 reply; 87+ messages in thread
From: Paavo Helde @ 2011-08-12  6:51 UTC (permalink / raw)


"Jed" <jehdiah@orbitway.net> wrote in news:j22ddq$7pq$1@dont-email.me:

> 
> "Randy Brukardt" <randy@rrsoftware.com> wrote in message 
> news:j22c61$5lo$1@munin.nbi.dk...
> 
>> Modular types are something altogether different (and in all honesty, 
>> rare enough that direct language support is of dubious value -- most of 
>> us supported adding them to Ada 95 simply because it was the only way 
>> to get any support for the largest unsigned integer type).
>>
> 
> Isn't the wrapping behavior just a consequence of wanting to get a 
> representation in which signed and unsigned integers can be easily 
> converted to each other? I.e., "they" didn't sit down and say, "let's 
> implement unsigned integers with wrapping behavior". 

Indeed, the 2's complement representation allows the same hardware 
instruction to be used for signed and unsigned addition (ditto for 
subtraction). However, this means that the hardware will set the 
overflow flag in exactly the same fashion, so it would be equally easy 
to detect the overflow.

I guess that they were instead thinking about intermediate results. As 
the wrap-around point (zero) is very close to the common usage range of 
unsigned integers, it may well happen that in a complex arithmetic 
expression a temporary intermediate result falls below zero whereas 
the final result is fine again. This would in effect mean that the 
addition and subtraction operations lose associativity, which would 
cause some headaches: the programmer would always have to use parens 
to indicate the relative order of addition and subtraction, e.g. 
a+(b-c), and take care that no intermediate result overflowed. As 
the common C/C++ implementations do not bother to check overflow even 
for the signed types, this would just mean a lot more formally 
undefined behavior that seems to work as expected on a lot of 
hardware. So, instead of introducing a lot of UB, they chose to make 
things like a+b-c legal by the simple trick of declaring the unsigned 
types wrap-around.

Yes, Ada could have been wiser; I'm sorry to hear this is not the case.

Cheers
Paavo




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  6:51                   ` Paavo Helde
@ 2011-08-12  7:41                     ` Georg Bauhaus
  0 siblings, 0 replies; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-12  7:41 UTC (permalink / raw)


On 12.08.11 08:51, Paavo Helde wrote:

> I guess that they were instead thinking about intermediate results. As
> the wrap-around point (zero) is very close to the common usage range of
> unsigned integers, it may well happen that in a complex arithmetic
> expression a temporary intermediate result may fall below zero whereas
> the final result will be OK again. This would in effect mean that the
> addition-subtraction operations would lose the associativity property,
> which would cause some headaches. E.g. the programmer should always use
> parens for indicating the relative order of addition and subtraction,
> e.g. a+(b-c) and take care that no intermediate result would overflow. As
> the common C/C++ implementations do not bother to check overflow even for
> the signed types, this would just mean that there would be a lot more of
> formally undefined behavior which seems to work as expected on a lot of
> hardware. So, instead of introducing a lot of UB they chose to make
> things like a+b-c legal by a simple trick of declaring the unsigned types
> wrap-around.
>
> Yes, Ada could have been wiser, I'm sorry to hear this is not the case.

Sort of; Ada does have a number of definitions around overflow and
when overflow matters, including in intermediate results.

      1. with System;
      2. procedure ovfl is
      3.   type I is range 1 .. System.MAX_INT;
      4.   r, a, b, c: I;
      5. begin
      6.   a := 1;
      7.   b := I'Last;
      8.   c := 2;
      9.   r := a + b - c;  -- overflow
                  |
         >>> warning: value not in range of type "I" defined at line 3
         >>> warning: "Constraint_Error" will be raised at run time

     10.   r := a + (b - c);  -- o.K.
     11.   r := (a + b) - c;  -- overflow
                   |
         >>> warning: value not in range of type "I" defined at line 3
         >>> warning: "Constraint_Error" will be raised at run time

     12. end ovfl;


$ ./ovfl

raised CONSTRAINT_ERROR : ovfl.adb:9 overflow check failed
$




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:03                       ` Jed
@ 2011-08-12  8:32                         ` Georg Bauhaus
  2011-08-12 13:15                           ` Hyman Rosen
  2011-08-12 15:14                           ` Jed
  2011-08-12  9:21                         ` Dmitry A. Kazakov
  1 sibling, 2 replies; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-12  8:32 UTC (permalink / raw)


On 12.08.11 07:03, Jed wrote:
> "Dmitry A. Kazakov"<mailbox@dmitry-kazakov.de>  wrote in message

>> Ada does not have predefined modular types in the package Standard,
>> because
>> any modular type can be constructed using "type T is mod N" construct.
>
> OK, I plainly see what you meant now even without your example. It's
> amazing what a good night's sleep will do. Last night, I was keying-in on
> the word "algebra" too much and it blinded me to the overall meaning.
>
> I'm not sure I'd want that level of flexibility in a language. I think it
> may be too much effort for too little benefit. It seems it may be doing
> something in software that could be supported in hardware but isn't (?).
> Maybe too much of a "purist" approach. I'd need more info about it and be
> convinced it is worth it before I'd put that on my shortlist of
> compelling features for a language to have.

Perhaps it is informative that beginners in embedded systems programming
consistently produce much better results (statistics collected
and tested over some 6 years) when they have an expressive and
likely more intuitive type system for fundamental types.  My guess
is that the absence of representation riddles is a boon.  Put yourself
in the position of a beginner who is woken up at 5 p.m. They would not
recall the rules regarding enum vs int, or the collection of traditional
idioms for declaring arrays in C, or the auto-truncation of numeric
values to fit in the space of the LHS; and likely not the more modern
classes addressing hardware things, either.
   Then, will understanding the type declarations

    type Intensity is range 0 .. 255;
    for Intensity'Size use 8;
    type RGB is (Red, Green, Blue);
    type Pixel is array (RGB) of Intensity;

require much besides what is written?   Compare

    #define RGB_LIMIT 3

    typedef unsigned char Intensity;
    enum RGB {Red, Green, Blue};
    typedef Intensity Pixel[RGB_LIMIT];

and

   switch (1) {
   case 0: break;
   case CHAR_BIT == 8: break;
   }

somewhere.  If I am a beginner, what is the number of technicalities
I must know in order to understand the second way of declaring things?
(I'll leave out that the declare-like-used rule in typedef is
confusing to many. Maybe that's because the typedef-ed name isn't in some
fixed position in the typedef.)



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:03                       ` Jed
  2011-08-12  8:32                         ` Georg Bauhaus
@ 2011-08-12  9:21                         ` Dmitry A. Kazakov
  2011-08-12 13:26                           ` Jed
  1 sibling, 1 reply; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-12  9:21 UTC (permalink / raw)


On Fri, 12 Aug 2011 00:03:24 -0500, Jed wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1q4c610mmuxn7$.1k6s78wa0r8fj.dlg@40tude.net...
>> On Thu, 11 Aug 2011 05:57:32 -0500, Jed wrote:
>>
>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>>> news:150vz10ihvb5a.1lysmewa1muz4$.dlg@40tude.net...
>>
>>>> Types that cannot be constructed by the operations of the types algebra
>>>> provided by the language. Richer the algebra is, less built-in types needed.
>>>
>>> Will you give an example to clarify please?
>>
>> Ada does not have predefined modular types in the package Standard, because
>> any modular type can be constructed using "type T is mod N" construct.
> 
> OK, I plainly see what you meant now even without your example. It's 
> amazing what a good night's sleep will do. Last night, I was keying-in on 
> the word "algebra" too much and it blinded me to the overall meaning.
> 
> I'm not sure I'd want that level of flexibility in a language.

I think you got me wrong. It is about the ability to state contracts
rather than about flexibility. If any type is needed in your design, you
should be able to describe the behavior of that type [by the language
means!] in a way the program reader would understand without looking into
the implementation or the compiler/machine documentation.

(Types algebra refers to the set of [meta]operations used to create new
types)

>> That depends. In Ada integer types are constructed from ranges.
> 
> Oh, I thought ranges in Ada were "subtype ranges", hence being based upon 
> the built-ins.

Both:

   type Tiny is range -100..100;  -- A new type
   subtype Nano is Tiny range 0..2; -- A subtype
 
>>  In order to
>> specify the range you need literals. The type of a literal
>> (Universal_Integer) is not the type of the result. So technically the 
>> type is not built on Universal_Integer.
> 
> But the "subrange" is meant as "subrange of some existing type", yes? If 
> so, what existing type?

Mathematically there is no need to have a supertype containing the ranged
one. (The way compiler would implement that type is uninteresting, so long
we are considering only the type interface).

>>> If so, aren't modular, wrapping and overflow-checked equally good for
>>> something and all worthy of being in a language? Of course there is
>>> signed, unsigned and the different bit widths as candidates also.
>>
>> Packed decimal, non-symmetric around zero, saturating, non-continuous, big
>> number, non-complement encoded, integers with ideal values (like NaN etc)
>> and so on, and so forth. You cannot have them built-in.
> 
> Why not? What do you understand as "built-in"? Intel x86 has BCDs, so 
> would be easy to have that as a type in a language. Do you require 
> "built-in" to mean "supported directly by the hardware"?

Built-in = predefined type, e.g. int, char, double.

>> The language shall distinguish the interface, the contract the
>> type fulfills, and the implementation of that type. "raw" turns this
>> picture upside down. First comes an implementation and then, frequently
>> never, consideration whether this implementation is right for the 
>> needs.
> 
> You meant "application" instead of "language", right? So you would prefer 
> to have a special "loop counter integer" instead of using a general 
> unsigned integer?

Sure. In fact in Ada it is much in this way:

   for Index in A'Range loop

I don't care about the type of Index, it is just the type suitable to index
the array A.

>>> Are you suggesting that a language have NO types?
>>
>> I suggest a minimal possible set of given types and as wide as possible set
>> of types constructed.
> 
> And do you reject wrapping an existing integer types to get desired 
> semantics?

Not at all.

>> The point is that it is the application domain
>> requirements to drive the design, and the choice of types in 
>> particular.
> 
> Can that be achieved? At what degree of complexity?

We probably are close to the edge.

> Can it/should it be hardware-supported?

No. Hardware becomes less and less relevant as the software complexity
increases.

> From a HLL-designer POV, saying assembly language is "typeless", is fair, 
> and usually stated that way. One can pick nits at that if they want to.

Yes, but "typeless" actually means broken, implicit types, rather than no
types at all.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:02               ` Randy Brukardt
  2011-08-12  5:16                 ` Robert Wessel
  2011-08-12  5:24                 ` Jed
@ 2011-08-12  9:40                 ` Dmitry A. Kazakov
  2011-08-12  9:45                   ` Ludovic Brenta
  2011-08-13  8:08                   ` Stephen Leake
  2 siblings, 2 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-12  9:40 UTC (permalink / raw)


On Fri, 12 Aug 2011 00:02:55 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1d8wyhvpcmpkd.ggiui9vebmtl.dlg@40tude.net...

>> As for modular types, wrapping is the mathematically correct behavior, it
>> is not an error.
> 
> Right, but using modular types as a stand-in for unsigned integers doesn't 
> really work.

I never felt much need for unsigned integers, except when communicating
with hardware. But then the behavior upon overflow is either irrelevant
or better off wrapping.

> Syntactically, Ada ought to have a way to declare an overflow-checked 
> integer type without any restriction to signed/unsigned representations.

Maybe, however that should not be limited to unsigned integers. If you, as
an influential ARG member, are going to change something (:-)), then,
please, consider more general shifted integer types:

   type Shifted is range 100..200;
   for Shifted'First use 0; -- The first value is 100.

(Of course there should be better syntax here)

>> You just cannot provide every possible arithmetic at the language level.
> 
> No, you should have a single general integer type (with no reflection on 
> "bits" or "signs"). The rest should be modeled as libraries. (I think we 
> actually agree on this -- must be something in the water today. ;-)

As a result of so-called "global warming" we have had no summer for the
third year in a row. Much, much cold water... (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  9:40                 ` Dmitry A. Kazakov
@ 2011-08-12  9:45                   ` Ludovic Brenta
  2011-08-12 10:48                     ` Georg Bauhaus
  2011-08-13  8:08                   ` Stephen Leake
  1 sibling, 1 reply; 87+ messages in thread
From: Ludovic Brenta @ 2011-08-12  9:45 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> As the result of so called "global warming" we have no summer third
> year in a row. Much, much cold water... (:-))

The summer occurred in May and June this year in Europe :)

-- 
Ludovic Brenta.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  9:45                   ` Ludovic Brenta
@ 2011-08-12 10:48                     ` Georg Bauhaus
  2011-08-12 15:56                       ` Ludovic Brenta
  0 siblings, 1 reply; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-12 10:48 UTC (permalink / raw)


On 12.08.11 11:45, Ludovic Brenta wrote:
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>> As the result of so called "global warming" we have no summer third
>> year in a row. Much, much cold water... (:-))
> 
> The summer occurred in May and June this year in Europe :)

We need to get more flexible representation clauses for enumeration
types from climate aware compilers.  Also, multiply and statically
dispatching aspects if an Ada program needs to be portable across
planets.
  Obviously,

   type Season is (Spring, Summer, Fall, Winter)
     with (Summer => Summer (Epoch => 21 * Century,
                             Planet => Earth)),
           ...);

Redefinitions of order related operators and attributes
would be automatic.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-11  8:20               ` Jed
  2011-08-11  9:13                 ` Dmitry A. Kazakov
@ 2011-08-12 11:48                 ` Stuart Redmann
  2011-08-12 13:12                   ` Vinzent Hoefler
  1 sibling, 1 reply; 87+ messages in thread
From: Stuart Redmann @ 2011-08-12 11:48 UTC (permalink / raw)


On 10 Aug, Randy Brukardt wrote:
> >> There are uses for wrapping types, but they are far less likely
> >> than wanting overflow detection. The default should be to catch
> >> errors, not turn them into different ones.


Dmitry A. Kazakov wrote
> > The OP mentioned image processing, the behavior frequently needed
> > there is saturated integer arithmetic, which is neither ranged
> > nor modular.
> >
> > As for modular types, wrapping is the mathematically correct
> > behavior, it is not an error.
> >
> >
> > You just cannot provide every possible arithmetic at the language
> > level.


On 11 Aug., "Jed" wrote:
> What do you think the practical level of limitation is? Were you thinking
> "beyond integers" with your statement? What kinds of integer types would
> you like to see built-in to a hypothetical ideal language?


Well, I found the following snippet on
http://www.adaic.org/resources/add_content/standards/05rat/html/Rat-1-3-5.html:

<citation (hopefully covered by fair use ;-)>
Ada 95 introduced modular types which are of course unsigned integers.
However it has in certain cases proved very difficult to get unsigned
integers and signed integers to work together. This is a trivial
matter in fragile languages such as C but in Ada the type model has
proved obstructive. The basic problem is converting a value of a
signed type which happens to be negative to an unsigned type. Thus
suppose we want to add a signed offset to an unsigned address value,
we might have
type Offset_Type is range -(2**31) .. 2**31-1;
type Address_Type is mod 2**32;
Offset: Offset_Type;
Address: Address_Type;
We cannot just add Offset to Address because they are of different
types. If we convert the Offset to the address type then we might get
Constraint_Error and so on. The solution in Ada 2005 is to use a new
functional attribute S'Mod which applies to any modular subtype S and
converts a universal integer value to the modular type using the
corresponding mathematical mod operation. So we can now write
Address := Address + Address_Type'Mod(Offset);
</citation>

This is clearly cumbersome: we have to use a wrapping int for
Address_Type because Ada does not provide a non-wrapping unsigned
integer of the kind the underlying hardware supports (at least, this is
not guaranteed by the language spec).

Ideally, we should be able to do this:
type IncrediblyLargeType is range -1e100 .. +1e100;
and Ada provides some large number representation that is suitable for
this range (Chinese remainder theorem). Why should I worry about
performance? Remember that premature optimization is the root of all
evil. If it takes half a second for multiplication of such types, then
so be it. However, if it turns out that performance matters, I should
be able to add attributes to the type so that Ada can choose a
hardware supported representation.

I think that we could also drop the wrapping ints altogether (I consider
them rather a kludge). If someone wants wrapping semantics, he could
still write:
type MyType is range 0 .. 100;
MyVar : MyType;
MyVar := (MyVar * MyVar) mod 100;

Be honest, how often do you really _want_ a wrapping int?

Regards,
Stuart



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 11:48                 ` Stuart Redmann
@ 2011-08-12 13:12                   ` Vinzent Hoefler
  2011-08-12 15:50                     ` Stuart Redmann
  0 siblings, 1 reply; 87+ messages in thread
From: Vinzent Hoefler @ 2011-08-12 13:12 UTC (permalink / raw)


Stuart Redmann wrote:

> I think that we could also drop the wrapping ints at all (I consider
> them rather a kludge). If someone want wrapping semantics, he could
> still write:
> type MyType is range 0 .. 100;
> MyVar : MyType;
> MyVar = (MyVar * MyVar) mod 100;
>
> Be honest, how often do you really _want_ a wrapping int?

Thinking indices instead of pointers, quite often, actually.


Vinzent.

-- 
f u cn rd ths, u cn gt a gd jb n cmptr prgrmmng.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  8:32                         ` Georg Bauhaus
@ 2011-08-12 13:15                           ` Hyman Rosen
  2011-08-12 22:09                             ` Randy Brukardt
  2011-08-12 15:14                           ` Jed
  1 sibling, 1 reply; 87+ messages in thread
From: Hyman Rosen @ 2011-08-12 13:15 UTC (permalink / raw)


On 8/12/2011 4:32 AM, Georg Bauhaus wrote:
> #define RGB_LIMIT 3
>
> typedef unsigned char Intensity;
> enum RGB {Red, Green, Blue};
> typedef Intensity Pixel[RGB_LIMIT];

I might do this instead as

     typedef unsigned char Intensity;
     enum RGB_Index { Red, Green, Blue, N_RGB_Index };
     typedef Intensity Pixel[N_RGB_Index];

which avoids the need for a macro and ties the number of
intensities in the array directly to its enumerated index.

> switch (1) { case 0: break; case CHAR_BIT == 8: break; }

Oh, that's cute! It's going right into my bag of tricks :-)
You don't need the breaks, though:

     switch (1) { case 0: case CHAR_BIT == 8: ; }

And with C++, you don't need executable code:

// Probe template
template <bool tf> struct probe { };
template <> struct probe<true> { typedef int t; };

// test for 8-bit char
typedef probe<CHAR_BIT == 8>::t is_char_8_bit;

> If I am a beginner, what is the number of technicalities I
> must know in order to understand the second way of declaring
> things?

A few; of course, when I was a complete Ada beginner (not that
I'm much more than that now) I was always thrown by the use of
apostrophes as attribute designators. When I see

     for Intensity'Size use 8;

it makes me want to go off searching for where the string ends
and trying to figure out what's going on. I also don't know that
a beginner finds the notion of an enumerated type declared as an
array index to be immediately obvious.

> (I'll leave out that the declare-like-used rule in typedef is
> confusing to many. Maybe that's because the typedef-ed name isn't in some
> fixed position in the typedef.)




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  9:21                         ` Dmitry A. Kazakov
@ 2011-08-12 13:26                           ` Jed
  2011-08-12 14:30                             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-12 13:26 UTC (permalink / raw)



"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1vn800hbyx8k4$.1lsveclj56197$.dlg@40tude.net...
> On Fri, 12 Aug 2011 00:03:24 -0500, Jed wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:1q4c610mmuxn7$.1k6s78wa0r8fj.dlg@40tude.net...
>>> On Thu, 11 Aug 2011 05:57:32 -0500, Jed wrote:
>>>
>>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>>>> news:150vz10ihvb5a.1lysmewa1muz4$.dlg@40tude.net...
>>>
>>>>> Types that cannot be constructed by the operations of the types 
>>>>> algebra
>>>>> provided by the language. Richer the algebra is, less built-in 
>>>>> types needed.
>>>>
>>>> Will you give an example to clarify please?
>>>
>>> Ada does not have predefined modular types in the package Standard, 
>>> because
>>> any modular type can be constructed using "type T is mod N" 
>>> construct.
>>
>> OK, I plainly see what you meant now even without your example. It's
>> amazing what a good night's sleep will do. Last night, I was keying-in 
>> on
>> the word "algebra" too much and it blinded me to the overall meaning.
>>
>> I'm not sure I'd want that level of flexibility in a language.
>
> I think you get me wrong. It is rather about an ability to state the
> contracts than about flexibility. If any type is needed in your design, 
> you
> should be able to describe the behavior of that type [by the language
> means!] in a way the program reader would understand without looking 
> into
> the implementation or compiler/machine documentation.

In another post in this thread, I wondered if you just want a higher 
level of abstraction, and then noted that I consider that at a higher 
level than "close to the machine" like C and C++ are. So, is that what 
you want? And do you want that in exclusion of well-defined primitive 
integers?

>>> That depends. In Ada integer types are constructed from ranges.
>>
>> Oh, I thought ranges in Ada were "subtype ranges", hence being based 
>> upon
>> the built-ins.
>
> Both:
>
>   type Tiny is range -100..100;  -- A new type
>   subtype Nano is Tiny range 0..2; -- A subtype
>

What underlies those "new" types though? Aren't they just "syntactic" 
sugar over some well-defined primitives? Wouldn't they have to be? I.e., 
at some point, the hardware has to be interfaced.

>>>  In order to
>>> specify the range you need literals. The type of a literal
>>> (Universal_Integer) is not the type of the result. So technically the
>>> type is not built on Universal_Integer.
>>
>> But the "subrange" is meant as "subrange of some existing type", yes? 
>> If
>> so, what existing type?
>
> Mathematically there is no need to have a supertype containing the 
> ranged
> one.

I'm only concerned about pragmatics (implementation).

> (The way compiler would implement that type is uninteresting,

Ha! To me that is KEY.

> so long we are considering only the type interface).

Abstraction. High level.

>
>>>> If so, aren't modular, wrapping and overflow-checked equally good 
>>>> for
>>>> something and all worthy of being in a language? Of course there is
>>>> signed, unsigned and the different bit widths as candidates also.
>>>
>>> Packed decimal, non-symmetric around zero, saturating, 
>>> non-continuous, big
>>> number, non-complement encoded, integers with ideal values (like NaN 
>>> etc)
>>> and so on, and so forth. You cannot have them built-in.
>>
>> Why not? What do you understand as "built-in"? Intel x86 has BCDs, so
>> would be easy to have that as a type in a language. Do you require
>> "built-in" to mean "supported directly by the hardware"?
>
> Built-in = predefined type, e.g. int, char, double.

OK, good. That's what I thought you meant but just making sure we are on 
the same page.

>
>>> The language shall distinguish the interface, the contract the
>>> type fulfills, and the implementation of that type. "raw" turns this
>>> picture upside down. First comes an implementation and then, 
>>> frequently
>>> never, consideration whether this implementation is right for the
>>> needs.
>>
>> You meant "application" instead of "language", right? So you would 
>> prefer
>> to have a special "loop counter integer" instead of using a general
>> unsigned integer?
>
> Sure. In fact in Ada it is much in this way:
>
>   for Index in A'Range loop
>
> I don't care about the type of Index, it is just the type suitable to 
> index
> the array A.

In loop control, I wouldn't care either, but given that it's a compiler 
thing, it wouldn't have to be a special type at all: the compiler can 
guarantee that the standard primitive type is used appropriately, since 
it is the one generating the code for the loop.

I care in data structs though what the "physical" type is.

>
>>>> Are you suggesting that a language have NO types?
>>>
>>> I suggest a minimal possible set of given types and as wide as 
>>> possible set
>>> of types constructed.
>>
>> And do you reject wrapping an existing integer types to get desired
>> semantics?
>
> Not at all.

But you do think it is inadequate, clearly.

>
>>> The point is that it is the application domain
>>> requirements to drive the design, and the choice of types in
>>> particular.
>>
>> Can that be achieved? At what degree of complexity?
>
> We probably are close to the edge.

Meaning "highly complex"?

>
>> Can it/should it be hardware-supported?
>
> No. Hardware becomes less and less relevant as the software complexity
> increases.

OK, so another layer of abstraction is what you want. The syntax of, say, 
Ada's ranged types, for example. So your call is just for more syntactic 
sugar then, yes?

>
>> From a HLL-designer POV, saying assembly language is "typeless", is 
>> fair,
>> and usually stated that way. One can pick nits at that if they want 
>> to.
>
> Yes, but "typeless" actually means broken, implicit types, rather than 
> no
> types at all.
>





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 13:26                           ` Jed
@ 2011-08-12 14:30                             ` Dmitry A. Kazakov
  2011-08-12 19:06                               ` Jed
  0 siblings, 1 reply; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-12 14:30 UTC (permalink / raw)


On Fri, 12 Aug 2011 08:26:26 -0500, Jed wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1vn800hbyx8k4$.1lsveclj56197$.dlg@40tude.net...

>> I think you get me wrong. It is rather about an ability to state the
>> contracts than about flexibility. If any type is needed in your design, you
>> should be able to describe the behavior of that type [by the language
>> means!] in a way the program reader would understand without looking into
>> the implementation or compiler/machine documentation.
> 
> In another post in this thread, I wondered if you just want a higher 
> level of abstraction, and then noted that I consider that at a higher 
> level than "close to the machine" like C and C++ are. So, is that what 
> you want?

I want the type semantics specified. C/C++ int is neither lower level nor
closer to the machine; it is just ill-defined. That is the concern, not its
relation to the machine, about which I (as a programmer) just do not care.

>>>> That depends. In Ada integer types are constructed from ranges.
>>>
>>> Oh, I thought ranges in Ada were "subtype ranges", hence being based 
>>> upon
>>> the built-ins.
>>
>> Both:
>>
>>   type Tiny is range -100..100;  -- A new type
>>   subtype Nano is Tiny range 0..2; -- A subtype
> 
> What underlies those "new" types though?

That is up to the compiler. 

> Aren't they just "syntactic" sugar over some well-defined primitives?

What are "well-defined primitives"?

> Wouldn't they have to be? I.e., 
> at some point, the hardware has to be interfaced.

Yes. Consider that the target is a pile of empty beer cans maintained by
a robot. I presume that somehow the position of the cans or maybe their
colors in that pile must reflect values of the type. Why should I (as a
programmer) worry about that?
 
>>>>  In order to
>>>> specify the range you need literals. The type of a literal
>>>> (Universal_Integer) is not the type of the result. So technically the
>>>> type is not built on Universal_Integer.
>>>
>>> But the "subrange" is meant as "subrange of some existing type", yes? 
>>> If so, what existing type?
>>
>> Mathematically there is no need to have a supertype containing the 
>> ranged one.
> 
> I'm only concerned about pragmatics (implementation).

It is simple to implement such a type based on an integer machine type when
values of the language type correspond to the values of the machine type
(injection). It is less simple but quite doable to implement this type on an
array of machine values. The algorithms are well known and well studied. I
see no problem with that.

>> (The way compiler would implement that type is uninteresting,
> 
> Ha! To me that is KEY.

This is not a language design question. It is a matter of compiler design
targeting given machine.

>>>> The language shall distinguish the interface, the contract the
>>>> type fulfills, and the implementation of that type. "raw" turns this
>>>> picture upside down. First comes an implementation and then, frequently
>>>> never, consideration whether this implementation is right for the needs.
>>>
>>> You meant "application" instead of "language", right? So you would prefer
>>> to have a special "loop counter integer" instead of using a general
>>> unsigned integer?
>>
>> Sure. In fact in Ada it is much in this way:
>>
>>   for Index in A'Range loop
>>
>> I don't care about the type of Index, it is just the type suitable to index
>> the array A.
> 
> In loop control, I wouldn't care either, but given that it's a compiler 
> thing, it wouldn't have to be a special type at all: the compiler can 
> guarantee that the standard primitive type is used appropriately, since 
> it is the one generating the code for the loop.

This is an implementation detail which shall not be exposed. Because Index
is a subtype of the array index the compiler can omit any range checks.
Contracts are useful for both the programmer and the compiler.

> I care in data structs though what the "physical" type is.

Why should you? Again, considering design by contract, and attempt to
reveal the implementation behind the contract is a design error. You shall
not rely on anything except the contract, that is a fundamental principle.

>>>>> Are you suggesting that a language have NO types?
>>>>
>>>> I suggest a minimal possible set of given types and as wide as 
>>>> possible set of types constructed.
>>>
>>> And do you reject wrapping an existing integer types to get desired
>>> semantics?
>>
>> Not at all.
> 
> But you do think it is inadequate, clearly.

No. Inadequate would be to expose such integer types in the contracts.
Implementations based on existing types are possible, but you would need
much language support (e.g. reflection) in order to ensure that the
implementation fulfills the contract. As an example, consider an
implementation of

   type I is range -200_000_000..100_000; -- This is the contract, which
      -- includes the range and the behavior of +,-,*/,**,mod,rem, overflow
      -- checks etc

on top of C's int.

>>>> The point is that it is the application domain
>>>> requirements to drive the design, and the choice of types in
>>>> particular.
>>>
>>> Can that be achieved? At what degree of complexity?
>>
>> We probably are close to the edge.
> 
> Meaning "highly complex"?

Complex enough to make the ways programs are designed now unsustainable.

>>> Can it/should it be hardware-supported?
>>
>> No. Hardware becomes less and less relevant as the software complexity
>> increases.
> 
> OK, so another layer of abstraction is what you want. The syntax of, say, 
> Ada's ranged types, for example. So your call is just for more syntactic 
> sugar then, yes?

Rather more support to contract driven design and static analysis.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  8:32                         ` Georg Bauhaus
  2011-08-12 13:15                           ` Hyman Rosen
@ 2011-08-12 15:14                           ` Jed
  2011-08-12 17:20                             ` Georg Bauhaus
  1 sibling, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-12 15:14 UTC (permalink / raw)



"Georg Bauhaus" <rm.dash-bauhaus@futureapps.de> wrote in message 
news:4e44e50a$0$7619$9b4e6d93@newsspool1.arcor-online.net...
> On 12.08.11 07:03, Jed wrote:
>> "Dmitry A. Kazakov"<mailbox@dmitry-kazakov.de>  wrote in message
>
>>> Ada does not have predefined modular types in the package Standard,
>>> because
>>> any modular type can be constructed using "type T is mod N" 
>>> construct.
>>
>> OK, I plainly see what you meant now even without your example. It's
>> amazing what a good night's sleep will do. Last night, I was keying-in 
>> on
>> the word "algebra" too much and it blinded me to the overall meaning.
>>
>> I'm not sure I'd want that level of flexibility in a language. I think 
>> it
>> may be too much effort for too little benefit. It seems it may be 
>> doing
>> something in software that could be supported in hardware but isn't 
>> (?).
>> Maybe too much of a "purist" approach. I'd need more info about it and 
>> be
>> convinced it is worth it before I'd put that on my shortlist of
>> compelling features for a language to have.
>
> Perhaps it is info if beginners in embedded systems programming
> consistently produce much better results (statistics collected
> and tested over some 6 years) when they have an expressive and
> likely more intuitive type system for fundamental types.

While that is definitely a good characteristic, I can't evaluate it 
without knowing what effort it takes to get it. Certainly something like 
ranged types is easily doable. I'm not sure of the scope of what is being 
called for by those calling for it (e.g., Dmitry); that's the problem. 
Ranges are one thing; what are the others?

> My guess
> is that the absence of representation riddles is a boon.

I would say that abstraction is nice sometimes, but if you don't have any 
guarantee of the representation, then that is necessarily a higher-level 
language than C and C++ are (i.e., higher than "close to the metal").

> Put yourself
> in the position of a beginner who is woken up at 5p.m. They would not
> recall the rules regarding enum vs int, or the collection of 
> traditional
> idioms for declaring arrays in C, autotruncation of numeric values
> to fit in the space of LHS; and likely neither the more modern
> classes addressing hardware things.

Some of those things are decidedly deficient, yes, but that is not to say 
that abstracting-away the representation is always good. Indeed, do that 
and you have a 4GL rather than a 3GL.

>   Then, will understanding the type declarations
>
>    type Intensity is range 0 .. 255;
>    for Intensity'Size use 8;

Oh, but I thought you desired to abstract-away the representation. What's 
that "Size use 8", then, all about?

>    type RGB is (Red, Green, Blue);
>    type Pixel is array (RGB) of Intensity;
>
> require much besides what is written?   Compare
>
>    #define RGB_LIMIT 3
>
>    typedef unsigned char Intensity;
>    enum RGB {Red, Green, Blue};
>    typedef Intensity Pixel[RGB_LIMIT];
>
> and
>
>   switch (1) {
>   case 0: break;
>   case CHAR_BIT == 8: break;
>   }
>
> somewhere.  If I am a beginner, what is the number of technicalities
> I must know in order to understand the second way of declaring things?
> (I'll leave out that the declare-like-used rule in typedef is
> confusing many. Maybe that's because the typedef-ed name isn't in some
> fixed position in the typedef.) 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:24                 ` Jed
  2011-08-12  6:51                   ` Paavo Helde
@ 2011-08-12 15:50                   ` Fritz Wuehler
  2011-08-12 19:59                     ` Jed
  2011-08-13  8:06                     ` Stephen Leake
  1 sibling, 2 replies; 87+ messages in thread
From: Fritz Wuehler @ 2011-08-12 15:50 UTC (permalink / raw)


"Jed" <jehdiah@orbitway.net> wrote:

> 
> "Randy Brukardt" <randy@rrsoftware.com> wrote in message 
> news:j22c61$5lo$1@munin.nbi.dk...
> 
> > Modular types are something altogether different (and in all honesty, 
> > rare enough that direct language support is of dubious value -- most of 
> > us supported adding them to Ada 95 simply because it was the only way 
> > to get any support for the largest unsigned integer type).
> >
> 
> Isn't the wrapping behavior just a consequence of wanting to get a 
> representation in which signed and unsigned integers can be easily 
> converted to each other? I.e., "they" didn't sit down and say, "let's 
> implement unsigned integers with wrapping behavior". 

More likely it's a consequence of doing nothing, because the natural
behavior of the hardware is that unsigned integers wrap. Unless there was a
specific language feature designed to prevent this, that is how it will work
on most platforms.




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 13:12                   ` Vinzent Hoefler
@ 2011-08-12 15:50                     ` Stuart Redmann
  2011-08-12 17:02                       ` Bill Findlay
  2011-08-15 12:59                       ` Vinzent Hoefler
  0 siblings, 2 replies; 87+ messages in thread
From: Stuart Redmann @ 2011-08-12 15:50 UTC (permalink / raw)


Stuart Redmann wrote:
[snip]
> > Be honest, how often do you really _want_ a wrapping int?


On 12 Aug. Vinzent Hoefler wrote:
> Thinking indices instead of pointers, quite often, actually.

Could you please provide some example? Seriously, I'm really
interested in the usage of wrapping types since in the last ten years
I have not felt the need for wrapping ints at all (well, maybe two or
three times).

>
> Vinzent.
>
> --
> f u cn rd ths, u cn gt a gd jb n cmptr prgrmmng.

LOL, I like your signature.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 10:48                     ` Georg Bauhaus
@ 2011-08-12 15:56                       ` Ludovic Brenta
  0 siblings, 0 replies; 87+ messages in thread
From: Ludovic Brenta @ 2011-08-12 15:56 UTC (permalink / raw)


Georg Bauhaus wrote on comp.lang.ada:
> On 12.08.11 11:45, Ludovic Brenta wrote:
>
>> "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> writes:
>>> As the result of so called "global warming" we have no summer third
>>> year in a row. Much, much cold water... (:-))
>
>> The summer occurred in May and June this year in Europe :)
>
> We need to get more flexible representation clauses for enumeration
> types from climate aware compilers.  Also, multiply and statically
> dispatching aspects if an Ada program needs to be portable across
> planets.

And their satellites, obviously.  Things could become pretty messy in
binary star systems, too.  Think of a satellite on a retrograde orbit
(like Triton) around a heavy planet (like Neptune) that is in forward
rotation around the center of gravity of a binary star system...

>   Obviously,
>
>    type Season is (Spring, Summer, Fall, Winter)
>      with (Summer => Summer (Epoch => 21 * Century,
>                              Planet => Earth)),
>            ...);
>
> Redefinitions of order related operators and attributes
> would be automatic.

Don't forget the hemispheres and that seasons depend on latitude.  For
example, between -15 and +15 degrees on Earth there is only one
season; between 15 and 35 degrees, two seasons (dry and wet); between
35 and 72 degrees, four; above 72, two again (day and night).

Also, on some planets (e.g. Mars), the four seasons have different
lengths due to the eccentricity of the orbit.

We need an AI for that...

--
Ludovic Brenta.
The clients strategize efficient, forward-looking, trigger events.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:16                 ` Robert Wessel
@ 2011-08-12 16:39                   ` Adam Beneschan
  0 siblings, 0 replies; 87+ messages in thread
From: Adam Beneschan @ 2011-08-12 16:39 UTC (permalink / raw)


On Aug 11, 10:16 pm, Robert Wessel <robertwess...@yahoo.com> wrote:
> On Fri, 12 Aug 2011 00:02:55 -0500, "Randy Brukardt"
>
>
>
>
>
> <ra...@rrsoftware.com> wrote:
> >"Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote in message
> >news:1d8wyhvpcmpkd.ggiui9vebmtl.dlg@40tude.net...
> >> On Wed, 10 Aug 2011 17:37:28 -0500, Randy Brukardt wrote:
>
> >>> There are uses for wrapping types, but they are far less likely than
> >>> wanting
> >>> overflow detection. The default should be to catch errors, not turn them
> >>> into different ones.
>
> >> The OP mentioned image processing, the behavior frequently needed there is
> >> saturated integer arithmetic, which is neither ranged nor modular.
>
> >I'm not familiar with any hardware on which saturated integer arithmetic is
> >provided. If it was, I would expect direct language support for it (which
> >need not require new kinds of types).
>
> Many DSPs, graphics coprocessors and the like have saturating integer
> arithmetic, usually in addition to conventional arithmetic.

TI c6x series, for example (that's the one I'm most familiar with).

                                -- Adam



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 15:50                     ` Stuart Redmann
@ 2011-08-12 17:02                       ` Bill Findlay
  2011-08-15 12:59                       ` Vinzent Hoefler
  1 sibling, 0 replies; 87+ messages in thread
From: Bill Findlay @ 2011-08-12 17:02 UTC (permalink / raw)


On 12/08/2011 16:50, in article
e9987381-4f7b-4d25-9c9b-429fd4da74f6@k15g2000yqd.googlegroups.com, "Stuart
Redmann" <DerTopper@web.de> wrote:

> Stuart Redmann wrote:
> [snip]
>>> Be honest, how often do you really _want_ a wrapping int?
> 
> On 12 Aug. Vinzent Hoefler wrote:
>> Thinking indices instead of pointers, quite often, actually.
> 
> Could you please provide some example? Seriously, I'm really
> interested in the usage of wrapping types since in the last ten years
> I have not felt the need for wrapping ints at all (well, maybe two or
> three times).

In my current project, the following are all defined in one unit:

   type nest_depth is mod 19;
   type sjns_depth is mod 17;

   type word is mod 2**48;
   type field_of_16_bits is mod 2**16;
   type syllable is mod 2**8;
   type halfword is mod 2**24;
   type symbol is mod 2**6;
   type symbol_number is mod 8;

   type microseconds is mod 2**64;
   type priority is mod 2**2;
   type one_bit is mod 2;
   type context is mod 2**2;
   type INS_kind is mod 2**2;
   type order_counter is mod 2**64;

For most of these, the wrapping behaviour is exactly what is needed
to model computer hardware (they are from a KDF9 emulator).

nest_depth and sjns_depth, in particular, are the index types of cyclic
buffers.

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 15:14                           ` Jed
@ 2011-08-12 17:20                             ` Georg Bauhaus
  2011-08-12 19:51                               ` Jed
  0 siblings, 1 reply; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-12 17:20 UTC (permalink / raw)


On 12.08.11 17:14, Jed wrote:

>>   Then, will understanding the type declarations
>>
>>    type Intensity is range 0 .. 255;
>>    for Intensity'Size use 8;
> 
> Oh, but I thought you desired to abstract-away the representation. What's 
> that "Size use 8", then, all about?


The general idea is that you start from the problem
without giving thought to representation.  Typically,
without loss of efficiency.  Compiler makers know a fair
bit about good choice of representation.  This means:
Describe precisely what matters to your program, on
any machine, without reference to any representation.
Then, and only then, and only if necessary, throw in your
knowledge of representation.

Considering the RGB array example, you'd have to know that
the imaginary display hardware works better (or at all) if
Intensity values are 8 bit, and components aligned on 8 bit
boundaries, say.  Or that this must be so because a driver
program expects data shaped this way. Typically, though,
data representation is something a compiler should know
better, but of course nothing should be in the way of
a hardware savvy programmer.  Directing compilers can be
beneficial.  Still, this kind of direction should not
steer the structure of a solution.

When have you last thought about the size of the components
of std::string, or of a vtable?  Why, then, is there a
pressing need to think about the sizes of int types?

A few programming issues typically arise from the choice of
type.  The key word here is "typical", as in "real world":

1. Change of platform.

A programmer uses language defined type int (or Integer)
all over the place, assuming that objects of the type will
have 32-bit words.  Later, the program should be ported
to a 16-bit platform.  Ouch!  May need to review the entire
program.

The C solution is to at least become aware of the powers of
typedef + discipline + conventions.  Add assertions, compile
time assertions already mentioned, or templates like Hyman
Rosen's. Try isolating and checking the consequences of type
choice. In any case, this seems like a fair bit of mechanism,
and of organizational procedure, that sits outside
the language.

An Ada solution is to avoid Integer where possible
and declare the types you need in the most natural way,
which is by specifying the range of values you need.
Full stop.  (Well, mostly, but it is a lot easier
to achieve the same goal.)

Any language with a natural way of defining types from abstract
problem domain knowledge has the same advantage in that the
definition doesn't refer to types like int or Integer, but
to the non-changing abstract solution.

2. Changing representation.

When I want to change a type's objects' representation, I must
pick a different type in C.  In Ada (or ...), I'll leave the type
declaration as is, and ask the compiler to represent it differently.
The rest of the program's logic is not affected, as the type's
value set stays the same.

The C way slightly reverses thinking about the program,
then. You must start from the set of fundamental types
and from the set of values they happen to have in some
implementation. Not from the values you need.  You cannot
declare a type that includes precisely the values required
by your solution.  You can apply your knowledge of
the implementation when choosing a type that best
represents your "real" type.  Using C++, I imagine you
can produce clever template computations to this effect.
I'm only guessing, though.  All of this, however, is not as
simple as simply stating that your type's value set should
be such-and-such.

As a simpler example, consider percentages as whole numbers
between 0 and 100.

An exception is when the program is about hardware.
Side note: it is a requirement that an Ada implementation
make all "hardware types" that it supports available to the
programmer.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 14:30                             ` Dmitry A. Kazakov
@ 2011-08-12 19:06                               ` Jed
  2011-08-12 20:05                                 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-12 19:06 UTC (permalink / raw)



"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:fwnlcp4mgj03$.1tjeqjtos00o8$.dlg@40tude.net...
> On Fri, 12 Aug 2011 08:26:26 -0500, Jed wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:1vn800hbyx8k4$.1lsveclj56197$.dlg@40tude.net...
>
>>> I think you get me wrong. It is rather about an ability to state the
>>> contracts than about flexibility. If any type is needed in your 
>>> design, you
>>> should be able to describe the behavior of that type [by the language
>>> means!] in a way the program reader would understand without looking 
>>> into
>>> the implementation or compiler/machine documentation.
>>
>> In another post in this thread, I wondered if you just want a higher
>> level of abstraction, and then noted that I consider that at a higher
>> level than "close to the machine" like C and C++ are. So, is that what
>> you want?
>
> I want the type semantics specified. C/C++ int is neither lower level 
> nor closer to the machine; it is just ill-defined. That is the concern, 
> not its relation to the machine, about which I (as a programmer) just 
> do not care.

What more definition do you want? Size guarantee? Other than that, what 
is ambiguous (platform-specific) about C++ int?

>
>>>>> That depends. In Ada integer types are constructed from ranges.
>>>>
>>>> Oh, I thought ranges in Ada were "subtype ranges", hence being based
>>>> upon
>>>> the built-ins.
>>>
>>> Both:
>>>
>>>   type Tiny is range -100..100;  -- A new type
>>>   subtype Nano is Tiny range 0..2; -- A subtype
>>
>> What underlies those "new" types though?
>
> That is up to the compiler.

Do you care about the size of the type ever?

>
>> Aren't they just "syntactic" sugar over some well-defined primitives?
>
> What are "well-defined primitives"?

The built-in things that are closest to the machine, with consistent 
representation across machines that can be relied on, that other things 
are built on top of. (That's a stab at it, anywho).

>
>> Wouldn't they have to be? I.e.,
>> at some point, the hardware has to be interfaced.
>
> Yes. Consider that the target is a pile of empty beer cans maintained 
> by a robot. I presume that somehow the position of the cans or maybe 
> their colors in that pile must reflect values of the type. Why should I 
> (as a programmer) worry about that?

Because you know it isn't beer cans (call it, say, "a simplifying 
assumption") and you may want to (probably will want to) send some of 
those integers to another machine across the internet or to disk.

>
>>>>>  In order to
>>>>> specify the range you need literals. The type of a literal
>>>>> (Universal_Integer) is not the type of the result. So technically 
>>>>> the
>>>>> type is not built on Universal_Integer.
>>>>
>>>> But the "subrange" is meant as "subrange of some existing type", 
>>>> yes?
>>>> If so, what existing type?
>>>
>>> Mathematically there is no need to have a supertype containing the
>>> ranged one.
>>
>> I'm only concerned about pragmatics (implementation).
>
> It is simple to implement such a type based on an integer machine type 
> when
> values of the language type correspond to the values of the machine 
> type
> (injection). It is less simple but quite doable to implement this type 
> on an
> array of machine values. The algorithms are well known and well 
> studied. I
> see no problem with that.

Just another layer of abstraction. So it's as simple as that (another 
layer of abstraction), yes?

>
>>> (The way compiler would implement that type is uninteresting,
>>
>> Ha! To me that is KEY.
>
> This is not a language design question. It is a matter of compiler 
> design
> targeting given machine.

Oh? I've been having a language design discussion (primarily). You've 
been having a compiler implementation discussion?

>
>>>>> The language shall distinguish the interface, the contract the
>>>>> type fulfills, and the implementation of that type. "raw" turns 
>>>>> this
>>>>> picture upside down. First comes an implementation and then, 
>>>>> frequently
>>>>> never, consideration whether this implementation is right for the 
>>>>> needs.
>>>>
>>>> You meant "application" instead of "language", right? So you would 
>>>> prefer
>>>> to have a special "loop counter integer" instead of using a general
>>>> unsigned integer?
>>>
>>> Sure. In fact in Ada it is much in this way:
>>>
>>>   for Index in A'Range loop
>>>
>>> I don't care about the type of Index, it is just the type suitable to 
>>> index
>>> the array A.
>>
>> In loop control, I wouldn't care either, but given that it's a 
>> compiler thing, it wouldn't have to be a special type at all: the 
>> compiler can guarantee that the standard primitive type is used 
>> appropriately, since it is the one generating the code for the loop.
>
> This is an implementation detail which shall not be exposed.

Sure, it doesn't matter in the case of the loop variable.

>  Because Index
> is a subtype of the array index the compiler can omit any range checks.
> Contracts are useful for both the programmer and the compiler.
>
>> I care in data structs though what the "physical" type is.
>
> Why should you? Again, considering design by contract, and attempt to
> reveal the implementation behind the contract is a design error.
> You shall
> not rely on anything except the contract, that is a fundamental 
> principle.

While that paradigm does have a place, that place is not "everywhere". 
That would be throwing the baby out with the bath water.

The comments being made in this thread are really suggesting 4GL. While 
that is fine, there is a place for 3GLs and hybrids. One size fits all, 
really doesn't.

>
>>>>>> Are you suggesting that a language have NO types?
>>>>>
>>>>> I suggest a minimal possible set of given types and as wide as
>>>>> possible set of types constructed.
>>>>
>>>> And do you reject wrapping an existing integer types to get desired
>>>> semantics?
>>>
>>> Not at all.
>>
>> But you do think it is inadequate, clearly.
>
> No.

Well then, if wrapping is OK, what else is needed and why?

> Inadequate would be to expose such integer types in the contracts.

What contracts?

> Implementations based on existing types are possible, but you would 
> need
> much language support (e.g. reflection) in order to ensure that the
> implementation fulfills the contract. As an example, consider an
> implementation of
>
>   type I is range -200_000_000..100_000; -- This is the contract, which
>      -- includes the range and the behavior of +,-,*/,**,mod,rem, 
> overflow
>      -- checks etc
>
> on top of C's int.
>

Oh, THAT "contract". (I reserve that term for function calls, just to 
avoid overloading it, but I actually prefer "specification" as I 
associate "contract" with Eiffel).

I don't know what you are suggesting the ideal implementation would be 
of the ranged type above.

>>>>> The point is that it is the application domain
>>>>> requirements to drive the design, and the choice of types in
>>>>> particular.
>>>>
>>>> Can that be achieved? At what degree of complexity?
>>>
>>> We probably are close to the edge.
>>
>> Meaning "highly complex"?
>
> Complex enough to make the ways programs are designed now 
> unsustainable.

Explain please.

>
>>>> Can it/should it be hardware-supported?
>>>
>>> No. Hardware becomes less and less relevant as the software 
>>> complexity
>>> increases.
>>
>> OK, so another layer of abstraction is what you want. The syntax of, 
>> say,
>> Ada's ranged types, for example. So your call is just for more 
>> syntactic
>> sugar then, yes?
>
> Rather more support to contract driven design and static analysis.

That doesn't appear to be a lot of complexity to introduce into a 
compiler. It seems like common sense. So, what am I missing here?

The new enums in C++11 are a step in the right direction, yes? Maybe in 
the next iteration we'll get ranges.
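For what it's worth, C++11's scoped enums do already let you pin the 
underlying type, though, unlike an Ada ranged type, nothing checks that 
stored values stay in range:

```cpp
#include <cstdint>
#include <type_traits>

// C++11 scoped enumeration with a fixed underlying type: the
// representation (one byte) is guaranteed, but assignments are not
// range-checked the way an Ada "type ... is range ..." would be.
enum class Intensity : std::uint8_t { Min = 0, Max = 255 };

static_assert(sizeof(Intensity) == 1, "Intensity occupies one byte");
static_assert(std::is_same<std::underlying_type<Intensity>::type,
                           std::uint8_t>::value,
              "underlying type is uint8_t");
```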





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 17:20                             ` Georg Bauhaus
@ 2011-08-12 19:51                               ` Jed
  2011-08-12 21:22                                 ` Ludovic Brenta
                                                   ` (3 more replies)
  0 siblings, 4 replies; 87+ messages in thread
From: Jed @ 2011-08-12 19:51 UTC (permalink / raw)



"Georg Bauhaus" <rm.dash-bauhaus@futureapps.de> wrote in message 
news:4e4560e7$0$6546$9b4e6d93@newsspool4.arcor-online.net...
> On 12.08.11 17:14, Jed wrote:
>
>>>   Then, will understanding the type declarations
>>>
>>>    type Intensity is range 0 .. 255;
>>>    for Intensity'Size use 8;
>>
>> Oh, but I thought you desired to abstract-away the representation. 
>> What's
>> that "Size use 8", then, all about?
>
>
> The general idea is that you start from the problem
> without giving thought to representation.  Typically,
> without loss of efficiency.  Compiler makers know a fair
> bit about good choice of representation.  This means:
> Describe precisely what matters to your program, on
> any machine, without reference to any representation.
> Then, and only then, and only if necessary, throw in your
> knowledge of representation.

That doesn't work when you do IO. Unless, of course, you are suggesting 
that IO should be abstracted away. Which is fine, if you want that, but 
it necessarily moves one into 4GL-land.

>
> Considering the RGB array example, you'd have to know that
> the imaginary display hardware works better (or at all) if
> Intensity values are 8 bit, and components aligned on 8 bit
> boundaries, say.  Or that this must be so because a driver
> program expects data shaped this way. Typically, though,
> data representation is something a compiler should know
> better, but of course nothing should be in the way of
> a hardware savvy programmer.  Directing compilers can be
> beneficial.  Still, this kind of direction should not
> steer the structure of a solution.

So the benefit, then, is just not having to learn something? An electric 
food slicer so that the knives can be put in a locked box?

>
> When have you last thought about the size of the components
> of std::string, or of a vtable?  Why, then, is there a
> pressing need to think about the sizes of int types?

You know that that was a very bad comparison. Completely invalid.

>
> A few programming issues typically arising from choice of
> type.  The key word here is "typical", as in "real world":
>
> 1. Change of platform.
>
> A programmer uses the language-defined type int (or Integer)
> all over the place, assuming that objects of the type will
> have 32-bit words.  Later, the program should be ported
> to a 16-bit platform.  Ouch!  May need to review the entire
> program.

Such is a language that must maintain backward compatibility. Easy 
solution: don't use int (and learn the language you are programming in). 
The above is a bug. Sure, it was caused by the failings of the language, 
but one is supposed to know those "gotchas". C++ is not for "quick and 
dirty" programming tasks unless one is highly skilled in the usage of the 
language, knowing the ins and outs. Yes, it's easy to imagine something 
better, but that isn't C++, and the comments being made in this thread (I 
keep saying this) seem to want to move from 3GL territory to 4GL.
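The "don't use int" advice has concrete library support: since C99/C++11, 
the fixed-width typedefs make the port-to-16-bits problem from the quoted 
example a non-issue, e.g.:

```cpp
#include <climits>
#include <cstdint>

// Fixed-width and minimum-width integer types from <cstdint>: int32_t
// is exactly 32 bits (two's complement) on every platform that provides
// it, so porting to a 16-bit target cannot silently change its range.
std::int32_t pixel_count = 2000000;       // exactly 32 bits
std::int_least16_t sensor_value = 313;    // at least 16 bits

static_assert(sizeof(std::int32_t) * CHAR_BIT == 32,
              "int32_t is exactly 32 bits wide");
```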

>
> The C solution is to at least become aware of the powers of
> typedef + discipline + conventions.  Add assertions, the
> compile-time assertions already mentioned, or templates like Hyman
> Rosen's. Try isolating and checking the consequences of type
> choice. In any case, this seems like a fair bit of mechanism,
> and procedures in a programming organization that is outside
> the language.
>
> An Ada solution is to avoid Integer where possible
> and declare the types you need in the most natural way,
> which is by specifying the range of values you need.
> Full stop.  (Well, mostly, but it is a lot easier
> to achieve the same goal.)

I like the semantics also (not the syntax though). I still don't have a 
feel for the scope of the desired (your) semantics and the compiler 
implementation effort required, though. So far, I'm clear on "ranged 
integers + size attribute".
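The "typedef + discipline + assertions" approach Georg describes in C 
terms might look like this in C++ (the Percent typedef and helper are 
hypothetical, purely to illustrate the convention):

```cpp
#include <cassert>

// The convention-based C/C++ approach: a typedef documents intent, a
// compile-time assertion pins a representation property, and a helper
// checks the range at run time. Nothing stops a caller from bypassing
// the helper -- that is the "discipline" part.
typedef int Percent;  // intended range: 0 .. 100, by convention only
static_assert(sizeof(Percent) >= 2, "Percent needs at least 16 bits");

Percent make_percent(int x) {
    assert(x >= 0 && x <= 100);  // enforced only where the helper is used
    return x;
}
```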

>
> Any language with a natural way of defining types from abstract
> problem domain knowledge has the same advantage in that the
> definition doesn't refer to types like int or Integer, but
> to the non-changing abstract solution.

Still, 80% solutions are good sometimes too. That "last 10%" can 
exponentially increase <something> (something = complexity or $ or 
time...). So, scope first, then gap analysis (and other analyses) to 
determine if it is worthwhile. Ranged integers? Sure, why not. Something 
more than that, well, the whole scope needs to be presented.

>
> 2. Changing representation.
>
> When I want to change a type's objects' representation, I must
> pick a different type in C.  In Ada (or ...), I'll leave the type
> declaration as is, and ask the compiler to represent it differently.
> The rest of the program's logic is not affected, as the type's
> value set stays the same.

So, a control panel on top of the low-level things. (4GL vs. 3GL).

>
> The C way slightly reverses thinking about the program,
> then. You must start from the set of fundamental types
> and from the set of values they happen to have in some
> implementation. Not from the values you need.  You cannot
> declare a type that includes precisely the values required
> by your solution.

I think that may be an incremental improvement from the application 
development standpoint. If it was exclusively that, it could become 
tedious. From the compiler implementation standpoint, maybe not worth 
doing too many things beyond, say, ranged integers. Again, what is the 
scope of your proposal? Just saying "a semantically rich programming 
language..." is quite nebulous. I'd like to see a list or something. 1. 
Ranged integers. 2. Scoped strong enums (got that already). 3. ???

> You can apply your knowledge of
> the implementation when choosing a type that best
> represents your "real" type.  Using C++, I imagine you
> can produce clever template computations to this effect.
> I'm only guessing, though.  All of this, however, is not as
> simple as simply stating that your type's value set should
> be such-and-such.
>
> As a simpler example, consider percentages as whole numbers
> between 0 and 100.
>
> An exception is when the program is about hardware.
> Side note: it is a requirement that an Ada implementation
> make all "hardware types" that it supports available to the
> programmer.

Ah ha! ;) In conjunction with all you've said so far, I'll bet you will 
say that IO should likewise be abstracted away too. That's fine, but 
now, is that a programming language, or an environment? How far on that 
track do you go before you start calling the language "Windows"? 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 15:50                   ` Fritz Wuehler
@ 2011-08-12 19:59                     ` Jed
  2011-08-13  8:06                     ` Stephen Leake
  1 sibling, 0 replies; 87+ messages in thread
From: Jed @ 2011-08-12 19:59 UTC (permalink / raw)



"Fritz Wuehler" <fritz@spamexpire-201108.rodent.frell.theremailer.net> 
wrote in message 
news:e42b54adb274b7447820bb6982baf6f5@msgid.frell.theremailer.net...
> "Jed" <jehdiah@orbitway.net> wrote:
>
>>
>> "Randy Brukardt" <randy@rrsoftware.com> wrote in message
>> news:j22c61$5lo$1@munin.nbi.dk...
>>
>> > Modular types are something altogether different (and in all 
>> > honesty,
>> > rare enough that direct language support is of dubious value -- most 
>> > of
>> > us supported adding them to Ada 95 simply because it was the only 
>> > way
>> > to get any support for the largest unsigned integer type).
>> >
>>
>> Isn't the wrapping behavior just a consequence of wanting to get a
>> representation in which signed and unsigned integers can be easily
>> converted to each other? I.e., "they" didn't sit down and say, "let's
>> implement unsigned integers with wrapping behavior".
>
> More likely it's a consequence of doing nothing, because the natural
> behavior of the hardware is that unsigned integers wrap. Unless there 
> was a
> specific language feature designed to prevent this, that is how it will 
> work
> on most platforms.
>

Ah ha! Confirmation comes. I hypothesized similar, further up in the 
thread. 
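The "natural behavior of the hardware" Fritz describes is directly 
visible in C and C++, where unsigned arithmetic is defined to wrap modulo 
2^N:

```cpp
#include <cstdint>

// Unsigned overflow is well-defined in C/C++: results are reduced
// modulo 2^N, mirroring what the hardware does for free. "Doing
// nothing" in the language therefore yields wrapping semantics.
std::uint8_t wrap_demo() {
    std::uint8_t x = 255;
    ++x;  // wraps from 255 to 0 (mod 256)
    return x;
}
```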





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 19:06                               ` Jed
@ 2011-08-12 20:05                                 ` Dmitry A. Kazakov
  2011-08-13  7:53                                   ` Jed
  0 siblings, 1 reply; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-12 20:05 UTC (permalink / raw)


On Fri, 12 Aug 2011 14:06:55 -0500, Jed wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:fwnlcp4mgj03$.1tjeqjtos00o8$.dlg@40tude.net...

>> I want the type semantics specified. C/C++ int is neither lower level nor
>> closer to the machine; it is just ill-defined. The concern is this, not its
>> relation to the machine, about which I (as a programmer) just do not care.
> 
> What more definition do you want?

More than what? I see nothing.

> Other than that, what 
> is ambiguous (platform-specific) about C++ int?

That is one of the meanings of ill-defined. 

>>>>>> That depends. In Ada integer types are constructed from ranges.
>>>>>
>>>>> Oh, I thought ranges in Ada were "subtype ranges", hence being based
>>>>> upon
>>>>> the built-ins.
>>>>
>>>> Both:
>>>>
>>>>   type Tiny is range -100..100;  -- A new type
>>>>   subtype Nano is Tiny range 0..2; -- A subtype
>>>
>>> What underlies those "new" types though?
>>
>> That is up to the compiler.
> 
> Do you care about the size of the type ever?

You mean the size of the representation, taking into account its memory
alignment? I don't care, except for the cases where I have to communicate
with the hardware or implement a communication protocol. Though I have to
do this quite often, I am not a typical case; not so many people are doing
embedded/protocol stuff these days. Anyway, that is well under 1% for me.

BTW, if you aim at this application domain, note endianness and encoding
stuff. You have very little chance that int is the type the protocol or
hardware uses. In fact, you need here even more language support, because
the type semantics is to be defined far more rigorously. You need to take
the description of the protocol and rewrite it in the language terms. Even
Ada's representation clauses cannot do this, e.g. describe the integers
used in an analogue input terminal (as an integral type).

>>> Aren't they just "syntactic" sugar over some well-defined primitives?
>>
>> What are "well-defined primitives"?
> 
> The built-in things that are closest to the machine, with consistent 
> representation across machines that can be relied on, that other things 
> are built on top of. (That's a stab at it, anywho).

How could these contradictory requirements be fulfilled by something
well-defined? In fact, the vagueness of such "primitives" is a consequence
of that contradiction. There are no such primitives, in principle.

>>> Wouldn't they have to be? I.e.,
>>> at some point, the hardware has to be interfaced.
>>
>> Yes. Consider that the target is a pile of empty beer cans maintained by
>> a robot. I presume that somehow the position of the cans or maybe their
>> colors in that pile must reflect values of the type. Why should I (as a
>> programmer) worry about that?
> 
> Because you know it isn't beer cans (call it, say, "a simplifying 
> assumption") and you may want to (probably will want to) send some of 
> those integers to another machine across the internet or to disk.

In each such case I will have to use a type different from the machine
type. This is just another argument why types need to be precisely
specified. I addressed this issue above.

>>>> Mathematically there is no need to have a supertype containing the
>>>> ranged one.
>>>
>>> I'm only concerned about pragmatics (implementation).
>>
>> It is simple to implement such a type based on an integer machine type when
>> values of the language type correspond to the values of the machine type
>> (injection). It is less simple but quite doable to implement this type on an
>> array of machine values. The algorithms are well known and well studied. I
>> see no problem with that.
> 
> Just another layer of abstraction. So it's as simple as that (another 
> layer of abstraction), yes?

Yes. What, in your opinion, is a machine word? Just another layer of
abstraction above the states of the p-n junctions of some transistors. Do
you care?

>>>> (The way compiler would implement that type is uninteresting,
>>>
>>> Ha! To me that is KEY.
>>
>> This is not a language design question. It is a matter of compiler design
>> targeting given machine.
> 
> Oh? I've been having a language design discussion (primarily).

Then that cannot by any means be a key issue.

>> Why should you? Again, considering design by contract, and attempt to
>> reveal the implementation behind the contract is a design error. You shall
>> not rely on anything except the contract, that is a fundamental principle.
> 
> While that paradigm does have a place, that place is not "everywhere". 

Where?

> The comments being made in this thread are really suggesting 4GL. While 
> that is fine, there is a place for 3GLs and hybrids. One size fits all, 
> really doesn't.

Maybe, but in software engineering it certainly does.

> Well then, if wrapping is OK, what else is needed and why?

Wrapping is an implementation, the question is WHAT does this
implementation actually implement?

>> Inadequate would be to expose such integer types in the contracts.
> 
> What contracts?

>> Implementations based on existing types are possible, but you would need
>> much language support (e.g. reflection) in order to ensure that the
>> implementation fulfills the contract. As an example, consider an
>> implementation of
>>
>>   type I is range -200_000_000..100_000; -- This is the contract, which
>>      -- includes the range and the behavior of +, -, *, /, **, mod, rem,
>>      -- overflow checks etc.
>>
>> on top of C's int.
> 
> Oh, THAT "contract". (I reserve that term for function calls, just to 
> avoid overloading it, but I actually prefer "specification" as I 
> associate "contract" with Eiffel).
> 
> I don't know what you are suggesting the ideal implementation would be 
> of the ranged type above.

There are no ideal implementations, only ones fulfilling functional and
non-functional requirements.

>>>>>> The point is that it is the application domain
>>>>>> requirements to drive the design, and the choice of types in
>>>>>> particular.
>>>>>
>>>>> Can that be achieved? At what degree of complexity?
>>>>
>>>> We probably are close to the edge.
>>>
>>> Meaning "highly complex"?
>>
>> Complex enough to make the ways programs are designed now 
>> unsustainable.
> 
> Explain please.

If the bug rate is not reduced, there will not be enough human
resources to keep software development economically feasible. And this is
not counting future losses of human life in car accidents caused by
software faults, etc.

>>>>> Can it/should it be hardware-supported?
>>>>
>>>> No. Hardware becomes less and less relevant as the software 
>>>> complexity increases.
>>>
>>> OK, so another layer of abstraction is what you want. The syntax of, say,
>>> Ada's ranged types, for example. So your call is just for more syntactic
>>> sugar then, yes?
>>
>> Rather more support to contract driven design and static analysis.
> 
> That doesn't appear to be a lot of complexity to introduce into a 
> compiler. It seems like common sense. So, what am I missing here?

Contract specifications. "int" is not a contract.

> The new enums in C++11 are a step in the right direction, yes? Maybe in 
> the next iteration we'll get ranges.

Let's see.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 19:51                               ` Jed
@ 2011-08-12 21:22                                 ` Ludovic Brenta
  2011-08-14  7:00                                   ` Jed
  2011-08-13  9:37                                 ` Georg Bauhaus
                                                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 87+ messages in thread
From: Ludovic Brenta @ 2011-08-12 21:22 UTC (permalink / raw)


Jed writes:
>> When have you last thought about the size of the components
>> of std::string, or of a vtable?  Why, then, is there a
>> pressing need to think about the sizes of int types?
>
> You know that that was a very bad comparison. Completely invalid.

On the contrary, I think this comparison is perfectly valid.  The
language should distinguish between integers (i.e. numbers) and various
kinds of machine registers.  If you think you need to know the number of
bits in an int, this indicates you are dealing with low-level stuff and
therefore need something much more precise than "int": something that
specifies not only the size in bits but also bit-endianness, byte-endianness,
alignment, permissible operations and, if arithmetic is possible at all,
overflow semantics.  C conveniently ignores all these things and leaves
them unspecified.  With Ada, the programmer can specify most of those
aspects (except, sadly, for byte-endianness).

If you confuse "size of an int" with "range of permissible values for an
int", then you ignore the possibility that the range might not correspond
neatly to any size in bits, e.g.

type Fridge_Temperature is range 250 .. 313; -- kelvins

Most of the time, you do *not* need to know the size of an int; instead
you need to know the size of a hardware register, field in a network
protocol, etc and these are, quite often, *not* numbers.
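A small C++ sketch of that last point (the helper names here are invented 
for this example): a 16-bit field in a network protocol is pinned down by 
writing the bytes explicitly, which specifies both size and byte order in 
a way a bare int never does:

```cpp
#include <cstdint>

// Serialize/deserialize a 16-bit field in network byte order
// (big-endian) regardless of host endianness. The explicit shifts fix
// exactly which bits go where on the wire.
void put_be16(std::uint8_t* buf, std::uint16_t value) {
    buf[0] = static_cast<std::uint8_t>(value >> 8);
    buf[1] = static_cast<std::uint8_t>(value & 0xFF);
}

std::uint16_t get_be16(const std::uint8_t* buf) {
    return static_cast<std::uint16_t>((buf[0] << 8) | buf[1]);
}
```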

-- 
Ludovic Brenta.
The Chief Legal Officer proactively enables our potentials. 



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12  5:15                       ` Jed
@ 2011-08-12 21:39                         ` Fritz Wuehler
  2011-08-14  6:52                           ` Jed
  0 siblings, 1 reply; 87+ messages in thread
From: Fritz Wuehler @ 2011-08-12 21:39 UTC (permalink / raw)


"Jed" <jehdiah@orbitway.net> wrote:

> I didn't say it was. I said it probably arose from that. I'll bet integer 
> literals are signed in most assembly languages (going out on a limb with 
> this, because I really don't know).

Absolutely not. Assembly language is about talking to the machine directly.
If you can't manipulate native types then your assembler isn't an assembler.

> Hence, a no-brainer translation of  HLL code to assembly (recognizing, of
> course, that compilers are free to generate machine code directly, rather
> than generating assembly).

Most (vast majority) of the compilers on IBM platforms generate object
code. It sounds like you think a goal of HLL design is to be easy to
implement. That's wrong. The goal of HLL design is to make it easy to solve
problems in the problem domain.

> It's generally considered "typeless" from the POV of HLL programmers. It 
> has relative meaning. No need to be pedantic about it.

It wasn't pedantic, it was a simple comment from an assembly coder. Assembly
is typed or it wouldn't be useful. However there's no type enforcement at
the assembler level (usually, although at the hardware level there is.)
That is not the same as not being typed, although to an HLL programmer it may
not be obvious that those two things are not one and the same.

> > especially with certain
> > systems and certain assemblers. The type in assembly language *does*
> > usually reflect the native types of the underlying machine very 
> > closely,
> > obviously.
> 
> And I'll bet, more often than not, C/C++ built-in types reflect that 
> also. It would be "silly" to specify a language to what is uncommon.

You have the tail wagging the dog. The C/C++ built-in types don't reflect
anything but the language design and they don't change their meaning from
platform to platform or from implementation to implementation. That's why
they're types. Incidentally they often map to native types on most
architectures which is why they were created in the first place. But now
that they have been created the people using C/C++ have to live with those
types or build abstract data types on top of them.

> It would be "silly" to specify a language to what is uncommon.

That's exactly the point of HLL. It must provide useful abstractions for
the problem domain. The underlying implementation should not matter at all.





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 13:15                           ` Hyman Rosen
@ 2011-08-12 22:09                             ` Randy Brukardt
  0 siblings, 0 replies; 87+ messages in thread
From: Randy Brukardt @ 2011-08-12 22:09 UTC (permalink / raw)


"Hyman Rosen" <hyrosen@mail.com> wrote in message 
news:4e45276d$0$13269$a8266bb1@newsreader.readnews.com...
...
> A few; of course, when I was a complete Ada beginner (not that
> I'm much more than that now) I was always thrown by the use of
> apostrophes as attribute designators. When I see
>
>     for Intensity'Size use 8;
>
> it makes me want to go off searching for where the string ends
> and trying to figure out what's going on. I also don't know that
> a beginner finds the notion of an enumerated type declared as an
> array index to be immediately obvious.

In Ada 2012, you could write this as part of the type declaration (much 
preferable, IMHO):

   type Intensity is range 0 .. 255
       with Size => 8;

and then you wouldn't have to guess what the apostrophe means. You'd still 
have to guess what "Size" means (bits or bytes? It's fairly obvious here, 
but not always).
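For comparison, the closest C++ gets to "Size => 8" is a bit-field, where 
the width is attached to the member rather than the type, and where the 
bits-or-bytes ambiguity does not arise because bit-fields always count 
bits (a hypothetical example):

```cpp
#include <cstdint>

// A C++ bit-field pins a member's width to exactly 8 *bits* at the
// point of use. Unlike Ada's Size aspect, the width is a property of
// the member, not of the type being declared.
struct Pixel {
    std::uint16_t intensity : 8;  // stored in exactly 8 bits
    std::uint16_t flags     : 8;
};
```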

                                     Randy.






^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 20:05                                 ` Dmitry A. Kazakov
@ 2011-08-13  7:53                                   ` Jed
  2011-08-13  9:15                                     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-13  7:53 UTC (permalink / raw)



"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:1gu6ni1yb54k3$.4nbvfqqndl8m$.dlg@40tude.net...
> On Fri, 12 Aug 2011 14:06:55 -0500, Jed wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:fwnlcp4mgj03$.1tjeqjtos00o8$.dlg@40tude.net...
>
>>> I want the type semantics specified. C/C++ int is neither lower level
>>> nor closer to the machine; it is just ill-defined. The concern is this,
>>> not its
>>> relation to the machine, of which I (as a programmer) just do not
>>> care.
>>
>> What more definition do you want?
>
> More than what? I see nothing.

The semantics of C++ types are specified.


>> Size guarantee?
>> Other than that, what
>> is ambiguous (platform-specific) about C++ int?
>
> That is one of the meanings of ill-defined.

Don't use it. Consider it deprecated. Some baggage stays around for
compatibility even when new constructs are introduced.

>
>>>>>>> That depends. In Ada integer types are constructed from ranges.
>>>>>>
>>>>>> Oh, I thought ranges in Ada were "subtype ranges", hence being
>>>>>> based
>>>>>> upon
>>>>>> the built-ins.
>>>>>
>>>>> Both:
>>>>>
>>>>>   type Tiny is range -100..100;  -- A new type
>>>>>   subtype Nano is Tiny range 0..2; -- A subtype
>>>>
>>>> What underlies those "new" types though?
>>>
>>> That is up to the compiler.
>>
>> Do you care about the size of the type ever?
>
> You mean the size of the representation taking into account its memory
> alignment? I don't care except for the cases I have to communicate the
> hardware or to implement a communication protocol. Though I have to do
> this
> quite often, so I cannot represent a typical case, not so many people
> are
> doing embedded/protocol stuff these days. Anyway, that is well under 1%
> for
> me.

Interesting.

>
> BTW, if you aim at this application domain, note endianness and
> encoding
> stuff. You have very little chance that int is the type the protocol or
> hardware uses.

Picking on int? You don't have to use int. You probably
should never use it. It's virtually useless (maybe not just "virtually"
either). Why would you use int instead of another integer type? You have
a bunch of others to pick from.

> In fact, you need here even more language support, because
> the type semantics is to be defined far more rigorously. You need to
> take
> the description of the protocol and rewrite it in the language terms.
> Even
> Ada's representation clauses cannot do this, e.g. describe the integers
> used in an analogue input terminal (as an integral type).

So much for dumping knowledge of the representation then.

>
>>>> Aren't they just "syntactic" sugar over some well-defined
>>>> primitives?
>>>
>>> What are "well-defined primitives"?
>>
>> The built-in things that are closest to the machine, with consistent
>> representation across machines that can be relied on, that other
>> things
>> are built on top of. (That's a stab at it, anywho).
>
> How could these contradictory requirements be fulfilled by something
> well-defined? In fact, the vagueness of such "primitives" is a
> consequence of that contradiction. There are no such primitives, in
> principle.

Yeah, I went too far. I should have left out "across machines".

>
>>>> Wouldn't they have to be? I.e.,
>>>> at some point, the hardware has to be interfaced.
>>>
>>> Yes. Consider that the target is a pile of empty beer cans
>>> maintained by
>>> a robot. I presume that somehow the position of the cans or maybe
>>> their
>>> colors in that pile must reflect values of the type. Why should I (as
>>> a
>>> programmer) worry about that?
>>
>> Because you know it isn't beer cans (call it, say, "a simplifying
>> assumption") and you may want to (probably will want to) send some of
>> those integers to another machine across the internet or to disk.
>
> In each such case I will have to use a type different from the machine
> type. This is just another argument why types need to be precisely
> specified. I addressed this issue above.

But they are precisely specified on any given platform. I think you are 
seeking to abstract that away into the implementation. So you want more 
than just "precisely specified" (for it is already so), you want it 
hidden away.

>
>>>>> Mathematically there is no need to have a supertype containing the
>>>>> ranged one.
>>>>
>>>> I'm only concerned about pragmatics (implementation).
>>>
>>> It is simple to implement such a type based on an integer machine
>>> type when
>>> values of the language type correspond to the values of the machine
>>> type
>>> (injection). It is less simple but quite doable to implement this
>>> type on an
>>> array of machine values. The algorithms are well known and well
>>> studied. I
>>> see no problem with that.
>>
>> Just another layer of abstraction. So it's as simple as that (another
>> layer of abstraction), yes?
>
> Yes. What, in your opinion, is a machine word?

Either 32 bits or 64 bits, depending on whether I'm on Win32 or Win64 
(little-endian and 2's complement is a given). ;)

> Just another layer of
> abstraction above states of the p-n junctions of some transistors. Do
> you
> care?

At some level, I care, yes. At that level, no. I'm not against your 
proposal, I just don't know the scope of it. I have a feeling that going 
to the level you are calling for is what I would call "an exercise" 
instead of practical engineering. Just a feeling though, mind you. Tell 
me this, how close is Ada (and which one) to your ideal? All I know about 
Ada is what I've read about it. Maybe I should write some programs in it.

>
>>>>> (The way compiler would implement that type is uninteresting,
>>>>
>>>> Ha! To me that is KEY.
>>>
>>> This is not a language design question. It is a matter of compiler
>>> design
>>> targeting given machine.
>>
>> Oh? I've been having a language design discussion (primarily).
>
> Then that cannot by any means be a key issue.
>

That's opinion, not fact.

>>> Why should you? Again, considering design by contract, and attempt to
>>> reveal the implementation behind the contract is a design error. You
>>> shall
>>> not rely on anything except the contract, that is a fundamental
>>> principle.
>>
>> While that paradigm does have a place, that place is not "everywhere".
>
> Where?
>

The formality of "design by contract" as used in developing 
functions/methods. Sure, one can easily move to the more general meaning 
of "contract" and use it when talking just about anything, but that's to 
dilute the waters that are "design by contract".

>> The comments being made in this thread are really suggesting 4GL.
>> While
>> that is fine, there is a place for 3GLs and hybrids. One size fits
>> all,
>> really doesn't.
>
> Maybe, but in software engineering it certainly does.

Your strongly-believed, but wrong, opinion noted.

>
>> Well then, if wrapping is OK, what else is needed and why?
>
> Wrapping is an implementation, the question is WHAT does this
> implementation actually implement?

I was just asking why you find that inadequate such that something 
(largely more complex?) is needed.

>
>>> Inadequate would be to expose such integer types in the contracts.
>>
>> What contracts?
>
>>> Implementations based on existing types are possible, but you would
>>> need
>>> much language support (e.g. reflection) in order to ensure that the
>>> implementation fulfills the contract. As an example, consider an
>>> implementation of
>>>
>>>   type I is range -200_000_000..100_000; -- This is the contract,
>>> which
>>>      -- includes the range and the behavior of +, -, *, /, **, mod, rem,
>>>      -- overflow checks etc.
>>>
>>> on top of C's int.
>>
>> Oh, THAT "contract". (I reserve that term for function calls, just to
>> avoid overloading it, but I actually prefer "specification" as I
>> associate "contract" with Eiffel).
>>
>> I don't know what you are suggesting the ideal implementation would
>> be of the ranged type above.
>
> There are no ideal implementations, only ones fulfilling functional
> and non-functional requirements.
>
>>>>>>> The point is that it is the application domain
>>>>>>> requirements to drive the design, and the choice of types in
>>>>>>> particular.
>>>>>>
>>>>>> Can that be achieved? At what degree of complexity?
>>>>>
>>>>> We probably are close to the edge.
>>>>
>>>> Meaning "highly complex"?
>>>
>>> Complex enough to make the ways programs are designed now
>>> unsustainable.
>>
>> Explain please.
>
> If the bug rate is not reduced, there will not be enough human
> resources to keep software development economically feasible. And this
> is
> not considering future losses of human lives in car accidents caused by
> software faults etc.

Oh, you were thinking something else when I wrote: "Can that be achieved? 
At what degree of complexity?". I asked if creating your ideal language 
(this "definitively specified higher level" language) is feasible and if 
so, what does that do to the complexity of the implementation (compiler)? 
Is a comparison of Ada and C++ pretty much that answer?

>
>>>>>> Can it/should it be hardware-supported?
>>>>>
>>>>> No. Hardware becomes less and less relevant as the software
>>>>> complexity increases.
>>>>
>>>> OK, so another layer of abstraction is what you want. The syntax of,
>>>> say,
>>>> Ada's ranged types, for example. So your call is just for more
>>>> syntactic
>>>> sugar then, yes?
>>>
>>> Rather more support to contract driven design and static analysis.
>>
>> That doesn't appear to be a lot of complexity to introduce into a
>> compiler. It seems like common sense. So, what am I missing here?
>
> Contract specifications. "int" is not a contract.

Does Ada meet your desired requirements?

>
>> The new enums in C++11 are a step in the right direction, yes? Maybe
>> in
>> the next iteration we'll get ranges.
>
> Let's see.
>

And not hold our breaths! 
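For concreteness, the kind of contract quoted earlier in this message (a
range such as -200_000_000 .. 100_000 plus checked arithmetic) can be
sketched on top of C++ as a user-defined type. This is only an
illustration; the class name and design are mine, not something either
poster proposed:

```cpp
#include <stdexcept>

// Hypothetical sketch of a range contract carried by the type itself:
// every construction (and hence every arithmetic result) is checked
// against the declared bounds, instead of silently wrapping.
template <long long Lo, long long Hi>
class Ranged {
    long long v_;
public:
    explicit Ranged(long long v) : v_(v) {
        if (v < Lo || v > Hi)
            throw std::range_error("value outside contract range");
    }
    long long value() const { return v_; }
    Ranged operator+(Ranged rhs) const {
        return Ranged(v_ + rhs.v_);  // re-checks the range on each result
    }
};
```

With this sketch, `Ranged<-200000000, 100000> a(99999), b(2); a + b`
raises an exception rather than producing an out-of-range value, which
is roughly the behavior an Ada ranged type gives by default.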





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 15:50                   ` Fritz Wuehler
  2011-08-12 19:59                     ` Jed
@ 2011-08-13  8:06                     ` Stephen Leake
  1 sibling, 0 replies; 87+ messages in thread
From: Stephen Leake @ 2011-08-13  8:06 UTC (permalink / raw)


Fritz Wuehler <fritz@spamexpire-201108.rodent.frell.theremailer.net>
writes:

> "Jed" <jehdiah@orbitway.net> wrote:
>
>> 
>> "Randy Brukardt" <randy@rrsoftware.com> wrote in message 
>> news:j22c61$5lo$1@munin.nbi.dk...
>> 
>> > Modular types are something altogether different (and in all honesty, 
>> > rare enough that direct language support is of dubious value -- most of 
>> > us supported adding them to Ada 95 simply because it was the only way 
>> > to get any support for the largest unsigned integer type).
>> >
>> 
>> Isn't the wrapping behavior just a consequence of wanting to get a 
>> representation in which signed and unsigned integers can be easily 
>> converted to each other? I.e., "they" didn't sit down and say, "let's 
>> implement unsigned integers with wrapping behavior". 
>
> More likely it's a consequence of doing nothing, because the natural
> behavior of the hardware is that unsigned integers wrap. 

The natural behavior of the hardware is that signed integers wrap, as
well.
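For reference, the asymmetry in C and C++ is that unsigned wraparound is
defined behavior (arithmetic modulo 2^N) while signed overflow is
undefined, even on hardware that wraps both. A minimal sketch (the
function name is mine, purely for illustration):

```cpp
#include <cstdint>

// Unsigned arithmetic wraps modulo 2^N by definition in C and C++,
// so 255 + 1 yields 0 for an 8-bit unsigned type.
// The analogous signed increment (int8_t s = 127; ++s;) is undefined
// behavior in C++, even though the hardware would typically wrap it.
std::uint8_t next_count(std::uint8_t c) {
    return static_cast<std::uint8_t>(c + 1u);
}
```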

-- 
-- Stephe




* Re: Why use C++?
  2011-08-12  9:40                 ` Dmitry A. Kazakov
  2011-08-12  9:45                   ` Ludovic Brenta
@ 2011-08-13  8:08                   ` Stephen Leake
  1 sibling, 0 replies; 87+ messages in thread
From: Stephen Leake @ 2011-08-13  8:08 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> On Fri, 12 Aug 2011 00:02:55 -0500, Randy Brukardt wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
>> news:1d8wyhvpcmpkd.ggiui9vebmtl.dlg@40tude.net...
>
>>> As for modular types, wrapping is the mathematically correct behavior, it
>>> is not an error.
>> 
>> Right, but using modular types as a stand-in for unsigned integers doesn't 
>> really work.
>
> I never felt much need in unsigned integers, 

I use them all the time for event counters, that often can overflow in a
long-running system, so the wrapping behavior is important.

I agree it would be nice to declare other integer semantics and still
have literals.

-- 
-- Stephe




* Re: Why use C++?
  2011-08-13  7:53                                   ` Jed
@ 2011-08-13  9:15                                     ` Dmitry A. Kazakov
  2011-08-13  9:29                                       ` Ian Collins
                                                         ` (2 more replies)
  0 siblings, 3 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-13  9:15 UTC (permalink / raw)


On Sat, 13 Aug 2011 02:53:32 -0500, Jed wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
> news:1gu6ni1yb54k3$.4nbvfqqndl8m$.dlg@40tude.net...
>> On Fri, 12 Aug 2011 14:06:55 -0500, Jed wrote:
>>
>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>>> news:fwnlcp4mgj03$.1tjeqjtos00o8$.dlg@40tude.net...
>>
>>>> I want the type semantics specified. C/C++ int is neither lower level nor
>>>> closer to the machine it is just ill-defined. The concern is this, not its
>>>> relation to the machine, of which I (as a programmer) just do not
>>>> care.
>>>
>>> What more definition do you want?
>>
>> More than what? I see nothing.
> 
> The semantics of C++ types are specified.

By the word "int"?

>>> Size guarantee?
>>> Other than that, what
>>> is ambiguous (platform-specific) about C++ int?
>>
>> That is one of the meanings of ill-defined.
> 
> Don't use it.

Which was the point.

>> BTW, if you aim at this application domain, note endianness and encoding
>> stuff. You have very little chance that int is the type the protocol or
>> hardware uses.
> 
> Picking on int?

int, long, unsigned represent an example of wrong language design. The
point was that the semantics of an [integer] must be derived from the
application domain.

>> In each such case I will have to use a type different from the machine
>> type. This is just another argument why types need to be precisely
>> specified. I addressed this issue above.
> 
> But they are precisely specified on any given platform.

1. They are not. In order to see the specification of int, you have to read
the compiler documentation. My point was that the program code does not
specify the properties of int. The answer that it is defined somewhere,
somehow, is not an argument at all; anything works in this or that way.

2. I don't care about the platform, I care about what the program is
supposed to do. The type's properties shall be defined by the application
domain.

>> Just another layer of
>> abstraction above states of the p-n junctions of some transistors. Do
>> you care?
> 
> At some level, I care, yes. At that level, no. I'm not against your 
> proposal, I just don't know the scope of it. I have a feeling that going 
> to the level you are calling for is what would say is "an exercise" 
> instead of practical engineering. Just a feeling though, mind you. Tell 
> me this, how close is Ada (and which one) to your ideal? All I know about 
> Ada is what I've read about it. Maybe I should write some programs in it.

Ada is close, but not close enough. It is light years ahead of C++
concerning numeric types, so the problems with Ada might be difficult to
understand for a C programmer. Ada puts too much magic into numeric types.
I would prefer less. Ada lacks numeric interfaces abstracted away. Ada
lacks mapping of classes of numeric types onto proper types. [There are
classes, but no objects.] Exceptions are not contracted. Too many things
are required to be static. Other things are dynamic that should have
been static, etc.

>>>>>> (The way compiler would implement that type is uninteresting,
>>>>>
>>>>> Ha! To me that is KEY.
>>>>
>>>> This is not a language design question. It is a matter of compiler
>>>> design targeting given machine.
>>>
>>> Oh? I've been having a language design discussion (primarily).
>>
>> Then that cannot by any means be a key issue.
> 
> That's opinion, not fact.

It is a conclusion from the fact that we were discussing language design
issues rather than code generation problems.

>>>> Why should you? Again, considering design by contract, and attempt to
>>>> reveal the implementation behind the contract is a design error. You shall
>>>> not rely on anything except the contract, that is a fundamental
>>>> principle.
>>>
>>> While that paradigm does have a place, that place is not "everywhere".
>>
>> Where?
> 
> The formality of "design by contract" as used in developing 
> functions/methods.

That is a procedural view of software development [a very outdated one,
IMO].

But design by contract is a mere engineering principle; it applies
absolutely everywhere in engineering. If you disagree, please show
examples.

[I don't mean here Meyer's notion of DbC, which is largely wrong, IMO]

>>> Well then, if wrapping is OK, what else is needed and why?
>>
>> Wrapping is an implementation, the question is WHAT does this
>> implementation actually implement?
> 
> I was just asking why you find that inadequate such that something 
> (largely more complex?) is needed.

Hmm, then I didn't understand your question.

> Oh, you were thinking something else when I wrote: "Can that be achieved? 
> At what degree of complexity?". I asked if creating your ideal language 
> (this "definitively specified higher level" language) is feasible and if 
> so, what does that do to the complexity of the implementation (compiler)?

I think it should make the compiler simpler. Implementation of the RTL is a
different issue.

> Is a comparison of Ada and C++ pretty much that answer?

As a matter of fact, since I partially did both, to compile Ada is way
simpler than to compile C++.

>> Contract specifications. "int" is not a contract.
> 
> Does Ada meet your desired requirements?

Ada is the closest existing approximation.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Why use C++?
  2011-08-13  9:15                                     ` Dmitry A. Kazakov
@ 2011-08-13  9:29                                       ` Ian Collins
  2011-08-13  9:52                                         ` Dmitry A. Kazakov
  2011-08-14  4:29                                       ` Jed
  2011-08-16  8:18                                       ` Nick Keighley
  2 siblings, 1 reply; 87+ messages in thread
From: Ian Collins @ 2011-08-13  9:29 UTC (permalink / raw)


On 08/13/11 09:15 PM, Dmitry A. Kazakov wrote:
> On Sat, 13 Aug 2011 02:53:32 -0500, Jed wrote:
>> "Dmitry A. Kazakov"<mailbox@dmitry-kazakov.de>  wrote:
>
>>> BTW, if you aim at this application domain, note endianness and encoding
>>> stuff. You have very little chance that int is the type the protocol or
>>> hardware uses.
>>
>> Picking on int?
>
> int, long, unsinged represent an example of wrong language design. The
> point was that the semantics of an [integer] must be derived from the
> application domain.

They are an example of a specific language design decision, not a wrong 
one.  It's pretty rare for the application domain to be concerned about 
the semantics of an integer beyond those specified by the language.  In 
those rare cases where it does, the language provides a means of doing 
so through user-defined types.

>>> In each such case I will have to use a type different from the machine
>>> type. This is just another argument why types need to be precisely
>>> specified. I addressed this issue above.
>>
>> But they are precisely specified on any given platform.
>
> 1. They are not. In order to see the specification of int, you have to read
> the compiler documentation. My point was that the program code does not
> specify the properties of int. The answer that it is defined somewhere
> somehow is not an argument at all, anything works in this or that way.

If you want to get the best performance out of a given platform (which 
many C and C++ programs do), you have to program to the characteristics 
of the platform.

> 2. I don't care about the platform, I care about what the program is
> supposed to do. The type's properties shall be defined by the application
> domain.

Can you give an example of where that would be an issue not solved by 
user-defined types?

-- 
Ian Collins




* Re: Why use C++?
  2011-08-12 19:51                               ` Jed
  2011-08-12 21:22                                 ` Ludovic Brenta
@ 2011-08-13  9:37                                 ` Georg Bauhaus
  2011-08-14  5:22                                   ` Jed
  2011-08-13 10:27                                 ` Georg Bauhaus
  2011-08-13 11:02                                 ` Georg Bauhaus
  3 siblings, 1 reply; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-13  9:37 UTC (permalink / raw)


On 12.08.11 21:51, Jed wrote:

>> The general idea is that you start from the problem
>> without giving thought to representation.  Typically,
>> without loss of efficiency.  Compiler makers know a fair
>> bit about good choice of representation.  This means:
>> Describe precisely what matters to your program, on
>> any machine, without reference to any representation.
>> Then, and only then, and only if necessary, throw in your
>> knowledge of representation.
>
> That doesn't work when you do IO.


Specifying representation when needed, and separate  from
the type declaration, works perfectly well for I/O.
For I/O in particular!

Short summary:

Declaring a type for computation does *not* require specification
of representation.  However, should it become necessary to
specify a representation of objects of a type, the programmer
should be able to do so, but *not* by forcing the representation
into every existing type.


(Incidentally, bit-oriented, abstracting types (bit fields
in C, std::bitset in C++, bit arrays in D, or packed arrays of
Booleans in a host of Pascalish languages) don't make a
language 4GL.)

Examples of how omitting (implied) representation from type
declaration works well with I/O.

1. (I/O As-is) Suppose I/O means writing to a particular address.
There is an output port to write to.  It has CHAR_BIT bits,
and CHAR_BIT == 8.  You want to write an object of type long
which happens to consists of four octets.  One approach is to
write a function that shifts bits into a char object and then
pushes this object out the port.

Using the fundamental type system of C++, you need some size
computations, at compile time, or by the programmer, so that
the program will perform the right number of shifts, for a
given implementation.  With C++ and long, long[], etc, there
are idioms. They typically use the sizeof operator, and division.

The very same technique is available for objects declared of a
type 1e0 .. 1e5 that does not mention any other type's name!
The programmer may leave it to the compiler to choose a
representation for objects of type 1e0 .. 1e5 that is
best on the target hardware for computation.  For I/O, the
programmer again uses the type system and inquires of the type
how many octets there are in the type.  But!  He does not have
to name a type for the "computational objects", such as
declaring them to be "long", he just lists the values that
should be stored in objects of the type, and exactly these, no
more, no less.

In pseudo code,

    type T is exactly 1e0 .. 1e5.
    T obj.   // representation left to the compiler

    // compute, compute, compute, ...  then,

    for k in [0, octet_size(T)) do
      x <- shift_octet_from (obj@k),
      write_octet(x, port_address)
    done.

Note the declaration of T does not mention the name of any
existing type.
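The pseudo code above translates fairly directly into C++. This sketch
is my own rendering, not Georg's: the helper name, the low-octet-first
order, and returning an array in place of the actual port write are all
assumptions made for the example.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Shift an object out one octet at a time, using only sizeof to learn
// the representation's width, exactly as the pseudo code's
// "for k in [0, octet_size(T))" loop does. A real driver would write
// each octet to the port address instead of collecting them.
template <typename T, std::size_t N = sizeof(T)>
std::array<std::uint8_t, N> to_octets(T value) {
    std::array<std::uint8_t, N> out{};
    for (std::size_t k = 0; k < N; ++k) {
        out[k] = static_cast<std::uint8_t>(value & 0xFF);  // low octet first
        value = static_cast<T>(value >> 8);
    }
    return out;
}
```

The point carries over: the loop never names a concrete width; it asks
the type system (`sizeof`) how many octets the chosen representation has.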

In case range types are boring, here is another bare metal type:

    type T is exactly 1e0 .. 1e5 if T % 2 == 0.

Very low level, fairly easy to compute, nothing like 4GL,
and no mention of representation.


2. (Conversion) Suppose I/O means writing a number of structured
objects to a stream.  A program uses many structured objects
in its inner loops, requiring high speed representations. For I/O,
though, a different representation would be better, or is required.

For the computation, simply declare the structure and let the
compiler choose an efficient layout (type sizes, alignment).

    type R is record      struct R {
      x, y, z : S;         S x, y, z;
      t : TS;              TS t;
    end record;          };

In an I/O module, though, specify the layout as needed for I/O.
Doing so will establish order and size of components. It all
happens in the type system, the compiler does it, and the programmer
does not have to write conversion functions:

    -- ... I/O interface ...
private
    -- users of public type R need not be affected

    type R_for_IO is new R;  -- same logical data as R

    for R_for_IO use record
       --  nothing at storage unit 0, padding

       Z at 1 range 0 .. 7;
       Y at 2 range 0 .. 7;
       X at 3 range 0 .. 7;

       -- 2 storage units padding, again

       T at 6 range 2 .. 13;
    end record;


All that now needs to be done when writing objects to a stream
is to apply a type conversion to the object to be output.
Just an ordinary type conversion.  Since the functions of the
I/O interface deal with all this, the type conversion is never
seen by users of "computational type" R.






* Re: Why use C++?
  2011-08-13  9:29                                       ` Ian Collins
@ 2011-08-13  9:52                                         ` Dmitry A. Kazakov
  2011-08-13 11:10                                           ` Ian Collins
                                                             ` (2 more replies)
  0 siblings, 3 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-13  9:52 UTC (permalink / raw)


On Sat, 13 Aug 2011 21:29:05 +1200, Ian Collins wrote:

> It's pretty rare for the application domain to be concerned about 
> the semantics of an integer beyond those specified by the language.

If buffer overflows and code quality are not a concern, then yes.

>>>> In each such case I will have to use a type different from the machine
>>>> type. This is just another argument why types need to be precisely
>>>> specified. I addressed this issue above.
>>>
>>> But they are precisely specified on any given platform.
>>
>> 1. They are not. In order to see the specification of int, you have to read
>> the compiler documentation. My point was that the program code does not
>> specify the properties of int. The answer that it is defined somewhere
>> somehow is not an argument at all, anything works in this or that way.
> 
> If you want to get the best performance out of a given platform (which 
> many C and C++ programs do), you have to program to the characteristics 
> of the platform.

First of all, performance is not a functional requirement. If functional
requirements are not met, non-functional ones are irrelevant. Thus the
question stands: what does int implement? Where in the program code is any
specification of the *functional* requirements which the implementation
chosen to be int is supposed to fulfill [in the best etc possible way,
which in its turn requires validation]? How do I know if int is OK, not
unsigned, not long, not short and so on?

Secondly, how do you know that int gives you the best performance across
all possible implementations fulfilling the functional requirements? By
choosing int you prohibit the compiler from selecting any other machine
type which also does the things you want. The point is, specify what you
want and let the compiler implement it in the best possible way. Choosing
an implementation is an optimization issue; you can aim at memory use, at
time, at best lock-free sharing, and so on. Fixing a machine type makes
your program potentially less efficient. The more precisely you specify
the semantics of the required type, the more knowledge the compiler has
to generate efficient code.
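The idea that the range should drive the representation can be
approximated even in C++ with compile-time type selection. A hedged
sketch (the alias name is mine, and a real design would also handle
signed ranges and a lower bound):

```cpp
#include <cstdint>
#include <type_traits>

// Pick the narrowest unsigned standard type covering 0 .. Max at
// compile time. This is a crude stand-in for what an Ada compiler does
// implicitly when it sees "type T is range 0 .. Max".
template <unsigned long long Max>
using UIntFor =
    std::conditional_t<Max <= 0xFFULL,       std::uint8_t,
    std::conditional_t<Max <= 0xFFFFULL,     std::uint16_t,
    std::conditional_t<Max <= 0xFFFFFFFFULL, std::uint32_t,
                                             std::uint64_t>>>;
```

So `UIntFor<53621>` selects a 16-bit type, while `UIntFor<100000>`
selects a 32-bit one; the source states the range, and the
representation falls out of it.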

>> 2. I don't care about the platform, I care about what the program is
>> supposed to do. The type's properties shall be defined by the application
>> domain.
> 
> Can you give an example of where that would be an issues not solved by 
> user defined types?

BTW, the point was: don't use built-in numeric types.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Why use C++?
  2011-08-12 19:51                               ` Jed
  2011-08-12 21:22                                 ` Ludovic Brenta
  2011-08-13  9:37                                 ` Georg Bauhaus
@ 2011-08-13 10:27                                 ` Georg Bauhaus
  2011-08-14  5:35                                   ` Jed
  2011-08-13 11:02                                 ` Georg Bauhaus
  3 siblings, 1 reply; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-13 10:27 UTC (permalink / raw)


On 12.08.11 21:51, Jed wrote:

> So the benefit, then, just not having to learn something?


"Oh no, not again."


(1) If you need to know X in order to write program P, you
     should have learned X.

(2) If you do not need to know X for writing P, having
     learned X neither helps nor hurts. (e.g. X ::= float)

(3) If a language L1 requires X for P, you must learn X.

(4) If a language L2 doesn't require X for P, you may
     learn X, but need not.

(5) If P produced by L1 equals P produced by L2, then
     we don't need X for P.  QED

(There may be other reasons for needing both L1 and L2,
but not X for P.)

When a language L2 exists, its existence demonstrates that
we can drop features of L1 and still get P, exactly.

If X has shown that it can hurt, this may be an incentive
to drop X for a better replacement.




* Re: Why use C++?
  2011-08-12 19:51                               ` Jed
                                                   ` (2 preceding siblings ...)
  2011-08-13 10:27                                 ` Georg Bauhaus
@ 2011-08-13 11:02                                 ` Georg Bauhaus
  2011-08-14  5:56                                   ` Jed
  3 siblings, 1 reply; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-13 11:02 UTC (permalink / raw)


On 12.08.11 21:51, Jed wrote:

> I like the semantics also (not the syntax though). I still don't have a
> feel for the scope of the desired (your) semantics and the compiler
> implementation effort required though. So far, I'm clear on "ranged
> integers + size attribute".

As others have argued, and as has come out of the study
mentioned earlier, the language should allow specifying types
by referring to those properties of the solution that
are descriptive of the envisioned type's properties.  That's
rarely int alone, but rather what the programmer has in mind.

Sorry, I can't present to you a complete and consistent summary
of the world's programmers' ideas about whole number types in
their programs, other than in the negative: they are rarely covered
by int, as is exemplified in this thread.

A positive addition: we already have alternative ways of
declaring types and they lead to exactly as efficient
programs.

These types are an interesting challenge, anyway.  A few compiler
writers have done some homework and decided that cyclic types (beyond
wrapping ones such as unsigned int) would be nightmarish to
implement.  At least efficiently, IIUC.

But do they have to implement all kinds of ideas, or should
a language just offer building blocks for the type system?
For example, offer a set membership operation, one that cannot
just be tacked onto operator=, but such that the compiler
can infer things systematically, guided by standard rules
whenever it meets =.

Is there a way, for example, to define a type system
that keeps HALT out of operator=?


> So, a control panel on top of the low-level things. (4GL vs. 3GL).

int refers only to what the C++ standard has to say about
minimal properties of int.  Simply being more specific about what
values I have in mind is far, far away from a 4GL language.
Even WRT GL time.  The idea is at least as old as 3GL, and
has been implemented long ago.
The difference in concept between int and integer types flowing
from a programmer's specification need not be huge:
writing "int" grants permissions to the implementation, and so does
"n1 .. n2" etc. The difference is that a C++ compiler cannot help but
conflate the value set the programmer had in mind with the value set
that happens to be that of int in a given implementation.





* Re: Why use C++?
  2011-08-13  9:52                                         ` Dmitry A. Kazakov
@ 2011-08-13 11:10                                           ` Ian Collins
  2011-08-13 11:46                                             ` Georg Bauhaus
                                                               ` (2 more replies)
  2011-08-14  4:35                                           ` Jed
  2011-08-14  4:49                                           ` Jed
  2 siblings, 3 replies; 87+ messages in thread
From: Ian Collins @ 2011-08-13 11:10 UTC (permalink / raw)


On 08/13/11 09:52 PM, Dmitry A. Kazakov wrote:
> On Sat, 13 Aug 2011 21:29:05 +1200, Ian Collins wrote:
>
>> It's pretty rare for the application domain to be concerned about
>> the semantics of an integer beyond those specified by the language.
>
> If buffer overflows and code quality is not a concern, then yes.

Most variables have limited scope and a well defined range.  So if I use 
an int in a for loop over a small set of data, I'm not writing poor 
quality code.

>>>>> In each such case I will have to use a type different from the machine
>>>>> type. This is just another argument why types need to be precisely
>>>>> specified. I addressed this issue above.
>>>>
>>>> But they are precisely specified on any given platform.
>>>
>>> 1. They are not. In order to see the specification of int, you have to read
>>> the compiler documentation. My point was that the program code does not
>>> specify the properties of int. The answer that it is defined somewhere
>>> somehow is not an argument at all, anything works in this or that way.
>>
>> If you want to get the best performance out of a given platform (which
>> many C and C++ programs do), you have to program to the characteristics
>> of the platform.
>
> First of all, performance is not a functional requirement.

It very often is the first one for most of the embedded systems I write!

> If functional
> requirements are not met, non-functional ones are irrelevant.

The functional requirements are irrelevant if they can not be met in the 
required time.

> Thus the
> question stands: what does int implement? Where in the program code is any
> specification of the *functional* requirements which the implementation
> chosen to be int is supposed to fulfill [in the best etc possible way,
> which in its turn requires validation]? How do I know if int is OK, not
> unsigned, not long, not short and so on?

In general, you can't.  Where it matters, you have to measure.

> Secondly, how do you know that int gives you the best performance across
> all possible implementations fulfilling the functional requirements? By
> choosing int you prohibit the compiler to select any other machine type,
> which also does the things you want. The point is, specify what you want
> and let the compiler to implement it in a best possible way. Choosing an
> implementation is an optimization issue, you can aim at memory use, at
> time, at best lock-free sharing and so on. Fixing a machine type makes your
> program potentially less efficient. More precisely you specify the
> semantics of required type, more knowledge the compiler has to generate
> efficient code.

I do see your point, and for C or C++ the best we can do, knowing a 
variable is signed or unsigned with a given range, is to use one of the 
C fast_xx types.  Far from ideal, I know.

Yes, I agree that being able to specify a type without reference to an 
underlying fundamental type is a powerful feature C and C++ lack.
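For what it's worth, the fast_xx aliases mentioned above come from
`<cstdint>` (C's `<stdint.h>`): each names the implementation's
preferred, fastest type of at least the stated width. A small
illustration (the function is mine, purely for demonstration):

```cpp
#include <cstdint>

// uint_fast32_t is guaranteed at least 32 bits, but the implementation
// may pick a wider type if that is faster on the target (it is often
// 64 bits on x86-64). The source states only a minimum width.
std::uint_fast32_t count_events(std::uint_fast32_t start, int n) {
    for (int i = 0; i < n; ++i) ++start;
    return start;
}
```

This is still a one-way specification (minimum width only), not a full
range contract, which is the gap being discussed in this subthread.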

-- 
Ian Collins




* Re: Why use C++?
  2011-08-13 11:10                                           ` Ian Collins
@ 2011-08-13 11:46                                             ` Georg Bauhaus
  2011-08-13 20:30                                               ` Ian Collins
  2011-08-13 11:54                                             ` Brian Drummond
  2011-08-14  4:54                                             ` Jed
  2 siblings, 1 reply; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-13 11:46 UTC (permalink / raw)


On 13.08.11 13:10, Ian Collins wrote:
> On 08/13/11 09:52 PM, Dmitry A. Kazakov wrote:
>> On Sat, 13 Aug 2011 21:29:05 +1200, Ian Collins wrote:
>>
>>> It's pretty rare for the application domain to be concerned about
>>> the semantics of an integer beyond those specified by the language.
>>
>> If buffer overflows and code quality is not a concern, then yes.
>
> Most variables have limited scope and a well defined range. So if I use an int in a for loop over a small set of data, I'm not writing poor quality code.

Even if language design is not to be judged by what you or Dmitry,
or I, if I may, write in any given specific circumstance, it
is still true that every other week an overflow drills
a CVE into the internet.


> Yes I agree being able to specify a type without reference to an underlying fundamental type is a powerful feature C and C++ lack.

Could you say why? I speculate that Jed would like to hear,
too.





* Re: Why use C++?
  2011-08-13 11:10                                           ` Ian Collins
  2011-08-13 11:46                                             ` Georg Bauhaus
@ 2011-08-13 11:54                                             ` Brian Drummond
  2011-08-13 13:12                                               ` Simon Wright
  2011-08-14  4:54                                             ` Jed
  2 siblings, 1 reply; 87+ messages in thread
From: Brian Drummond @ 2011-08-13 11:54 UTC (permalink / raw)


On Sat, 13 Aug 2011 23:10:48 +1200, Ian Collins wrote:

> On 08/13/11 09:52 PM, Dmitry A. Kazakov wrote:
>> On Sat, 13 Aug 2011 21:29:05 +1200, Ian Collins wrote:
>>
>>> It's pretty rare for the application domain to be concerned about the
>>> semantics of an integer beyond those specified by the language.
>>
>> If buffer overflows and code quality is not a concern, then yes.
> 
> Most variables have limited scope and a well defined range.  So if I use
> an int in a for loop over a small set of data, I'm not writing poor
> quality code.

Yes you are (in this hypothetical example), if you care about either 
reliability or efficiency.

By specifying "int" you are prohibiting the compiler from both:
(a) using a smaller and possibly faster type that is adequate to cover 
the range of data.
and (b) using a larger type if "int" does not actually cover the range.

Specify user-defined types "range 1 .. 100" or "range 32760 .. 32769" or 
"range lower .. upper" and let the compiler choose the optimal base type 
for the current target. (Imagine using "int" here on a 16-bit platform.)

Incidentally, the values of "lower" and "upper" need not be determined 
until runtime. The compiler can infer the best base type from whatever 
constraints it finds on "lower" and "upper".

So why get in its way?

- Brian




* Re: Why use C++?
  2011-08-13 11:54                                             ` Brian Drummond
@ 2011-08-13 13:12                                               ` Simon Wright
  2011-08-14 11:01                                                 ` Brian Drummond
  0 siblings, 1 reply; 87+ messages in thread
From: Simon Wright @ 2011-08-13 13:12 UTC (permalink / raw)


Brian Drummond <brian@shapes.demon.co.uk> writes:

> Incidentally, the values of "lower" and "upper" need not be determined
> until runtime. The compiler can infer the best base type from whatever
> constraints it finds on "lower" and "upper".

I don't think that can be true? OK, the range can be postponed until
runtime, but the base type of which the actual bounds are instances must
be known (talking about a compiler here, not an interpreter).




* Re: Why use C++?
  2011-08-13 11:46                                             ` Georg Bauhaus
@ 2011-08-13 20:30                                               ` Ian Collins
  0 siblings, 0 replies; 87+ messages in thread
From: Ian Collins @ 2011-08-13 20:30 UTC (permalink / raw)


On 08/13/11 11:46 PM, Georg Bauhaus wrote:
> On 13.08.11 13:10, Ian Collins wrote:
>
>> Yes I agree being able to specify a type without reference to an underlying fundamental type is a powerful feature C and C++ lack.
>
> Could you say why? I speculate that Jed would like to hear,
> too.

The answer was in the paragraph I was responding to....

-- 
Ian Collins



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13  9:15                                     ` Dmitry A. Kazakov
  2011-08-13  9:29                                       ` Ian Collins
@ 2011-08-14  4:29                                       ` Jed
  2011-08-14  7:29                                         ` Dmitry A. Kazakov
  2011-08-16  8:18                                       ` Nick Keighley
  2 siblings, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-14  4:29 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Sat, 13 Aug 2011 02:53:32 -0500, Jed wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:1gu6ni1yb54k3$.4nbvfqqndl8m$.dlg@40tude.net...
>>> On Fri, 12 Aug 2011 14:06:55 -0500, Jed wrote:
>>>
>>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>>>> news:fwnlcp4mgj03$.1tjeqjtos00o8$.dlg@40tude.net...
>>>
>>>>> I want the type semantics specified. C/C++ int is neither lower
>>>>> level nor closer to the machine it is just ill-defined. The
>>>>> concern is this, not its relation to the machine, of which I (as
>>>>> a programmer) just do not
>>>>> care.
>>>>
>>>> What more definition do you want?
>>>
>>> More than what? I see nothing.
>>
>> The semantics of C++ types are specified.
>
> By the word "int"?

By "specified", I mean "you know what to expect" (that is, not that you 
have a guarantee).

>
>>>> Size guarantee?
>>>> Other than that, what
>>>> is ambiguous (platform-specific) about C++ int?
>>>
>>> That is one of the meanings of ill-defined.
>>
>> Don't use it.
>
> Which was the point.

Consider it ancillary, for there are other integer types. Singling out 
int is a strawman, for while it is a failure, it is not the only avenue 
in the language.

>
>>> BTW, if you aim at this application domain, note endianness and
>>> encoding stuff. You have very little chance that int is the type
>>> the protocol or hardware uses.
>>
>> Picking on int?
>
> int, long, unsigned represent an example of wrong language design.

I agree (did you mean in C++, or in general?). I have a feeling you meant 
in general. In that case, again, I'm not sure that having some of those 
monikers isn't the best compromise.  You know, the 80% rule.

>  The
> point was that the semantics of an [integer] must be derived from the
> application domain.

Maybe it's good enough in a lot of instances. Say, a memory address. 
32-bit unsigned address works nicely (on a 32-bit machine). Ah, but you 
are going to say that that is just convenient coincidence. You could be 
right. I'm not sure at this point. Again, purest approach vs. practical 
compromise is the issue.

>
>>> In each such case I will have to use a type different from the
>>> machine type. This is just another argument why types need to be
>>> precisely specified. I addressed this issue above.
>>
>> But they are precisely specified on any given platform.
>
> 1. They are not. In order to see the specification of int, you have
> to read the compiler documentation.

"platform" = hardware + OS (if there is one) + compiler.
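The two positions can be seen side by side in a few lines of C++ (a sketch for illustration): the standard only guarantees a minimum range for `int`, so its exact width is a platform property, whereas the `<cstdint>` types state their width in the program text itself.

```cpp
#include <cassert>
#include <climits>
#include <cstdint>

// int: the standard guarantees at least 16 bits; the exact width is
// documented by the platform, not by the program.
static_assert(sizeof(int) * CHAR_BIT >= 16, "minimum guaranteed by the standard");

// int32_t: the width is part of the type's name and definition.
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32, "exact by definition");

// Platform-dependent: whatever this target's int happens to be.
int int_bits() { return static_cast<int>(sizeof(int) * CHAR_BIT); }
```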

> My point was that the program
> code does not specify the properties of int.

And I'm saying, maybe ("maybe" is a key word here) that is just a purist 
approach and not better than practical compromise.

> The answer that it is
> defined somewhere somehow is not an argument at all, anything works
> in this or that way.

Who's arguing? First of all, I wouldn't use int, but if I did, I would 
know what "int" was. Everything does not require cross-platform 
portability.

>
> 2. I don't care about the platform, I care about what the program is
> supposed to do.

Another way to skin the cat: write to a virtual machine. Java. .Net. Your 
own.

> The type's properties shall be defined by the application domain.

If you say so, but that is maybe too much effort for too little gain. 
TBD.

>
>>> Just another layer of
>>> abstraction above states of the p-n junctions of some transistors.
>>> Do you care?
>>
>> At some level, I care, yes. At that level, no. I'm not against your
>> proposal, I just don't know the scope of it. I have a feeling that
>> going to the level you are calling for is what you would say is "an
>> exercise" instead of practical engineering. Just a feeling though,
>> mind you. Tell me this, how close is Ada (and which one) to your
>> ideal? All I know about Ada is what I've read about it. Maybe I
>> should write some programs in it.
>
> Ada is close, but not close enough. It is light years ahead of C++
> concerning numeric types, so the problems with Ada might be difficult
> to understand for a C programmer. Ada puts too much magic into
> numeric types. I would prefer less. Ada lacks numeric interfaces
> abstracted away. Ada lacks mapping of classes of numeric types onto
> proper types. [There are classes, but no objects] Exceptions are not
> contracted. Too many things required static. Other things are dynamic
> while should have been static, etc.

I guess I'll just have to wait and try this other language when you are 
done implementing it. ;)

>
>>>>>>> (The way compiler would implement that type is uninteresting,
>>>>>>
>>>>>> Ha! To me that is KEY.
>>>>>
>>>>> This is not a language design question. It is a matter of compiler
>>>>> design targeting given machine.
>>>>
>>>> Oh? I've been having a language design discussion (primarily).
>>>
>>> Then that cannot by any means be a key issue.
>>
>> That's opinion, not fact.
>
> It is a conclusion from the fact that we were discussing language
> design issues rather than code generation problems.

Language design includes foresight into the implementation. Else it's 
just pondering some theoretical ideal. Academic research vs. practical 
engineering.

>
>>>>> Why should you? Again, considering design by contract, and
>>>>> attempt to reveal the implementation behind the contract is a
>>>>> design error. You shall not rely on anything except the contract,
>>>>> that is a fundamental principle.
>>>>
>>>> While that paradigm does have a place, that place is not
>>>> "everywhere".
>>>
>>> Where?
>>
>> The formality of "design by contract" as used in developing
>> functions/methods.
>
> That is a procedural view on software developing [a very outdated one,
> IMO].

No it is not. Classes have methods. (I'm surprised you even said that).

>
> But design by contract is a mere engineering principle,

And a bit more. I always took it as a paradigm associated with a specific 
product (Eiffel). As a matter of fact, I felt kind of "irked" when it was 
presented as "something new" for before that, it was engineering 
technique. I mean, "it's like.. the function/method specification is a 
"contract"... blah, blah", well, duh! But, I digress.

> it applies
> absolutely everywhere in engineering. If you disagree, please, show
> examples.

That's what I meant by saying you can choose to use the terminology in a 
general way, to which I "retort", "well, duh!". Statements of the 
obvious, so what?

>
> [I don't mean here Meyer's notion of DbC, which is largely wrong, IMO]

When I see "Design by Contract", my mind automagically puts a "tm" after 
it. Not that I'm knocking anything.

>
>>>> Well then, if wrapping is OK, what else is needed and why?
>>>
>>> Wrapping is an implementation, the question is WHAT does this
>>> implementation actually implement?
>>
>> I was just asking why you find that inadequate such that something
>> (largely more complex?) is needed.
>
> Hmm, then I didn't understand your question.

The whole time, I've been trying to get a handle on this "nebulous" (to 
me) ideal language of yours and how it differs from "the low hanging 
fruit" of traditional language design and implementation. I furthered 
saying that I have a feeling you are seeking some "purist" ideal or 
something for comp sci labs in academia. That's about it.

>
>> Oh, you were thinking something else when I wrote: "Can that be
>> achieved? At what degree of complexity?". I asked if creating your
>> ideal language (this "definitively specified higher level" language)
>> is feasible and if so, what does that do to the complexity of the
>> implementation (compiler)?
>
> I think it should make the compiler simpler.

Do explain please. I can appreciate that in the abstract things look 
rosey, but the devil is always in the details. One can design the perfect 
bridge, but the problem is that landscapes are never perfect.

> Implementation of the RTL is a different issue.

>
>> Is a comparison of Ada and C++ pretty much that answer?
>
> As a matter of fact, since I partially did both, to compile Ada is way
> simpler than to compile C++.

That's not saying much of anything. Compare it to C.

>
>>> Contract specifications. "int" is not a contract.
>>
>> Does Ada meet your desired requirements?
>
> Ada is the closest existing approximation.

I wonder if there is more to discover in using it than just reading about 
it. I think it has this "module as class-like thing" that seems so 
archaic, so I would fight it at every step of coding, so maybe I'll stick 
to just reading more about it. 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13  9:52                                         ` Dmitry A. Kazakov
  2011-08-13 11:10                                           ` Ian Collins
@ 2011-08-14  4:35                                           ` Jed
  2011-08-14  6:46                                             ` Dmitry A. Kazakov
  2011-08-14  4:49                                           ` Jed
  2 siblings, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-14  4:35 UTC (permalink / raw)


Dmitry A. Kazakov wrote:

> First of all, performance is not a functional requirement.

What planet are you from? Of course it is a functional requirement. I.e., 
where performance matters and is thus specified. Nuff said. Not worthy of 
a tangent thread of discussion. But not meant to curb any thoughts, and 
yes, this statement of yours does not need any surrounding context, for 
you made the direct statement.





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13  9:52                                         ` Dmitry A. Kazakov
  2011-08-13 11:10                                           ` Ian Collins
  2011-08-14  4:35                                           ` Jed
@ 2011-08-14  4:49                                           ` Jed
  2011-08-14  6:51                                             ` Dmitry A. Kazakov
  2 siblings, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-14  4:49 UTC (permalink / raw)


Dmitry A. Kazakov wrote:

> Choosing an implementation is an optimization
> issue,

Hmm. The truth of this is "easily" determined (by academics, not by me!). 
Starting in on the theory, the "register" keyword is a good starting 
point for thought. Yes, it decidedly does not belong in a language, for 
it is for a compiler to worry about. OTOH, (the other extreme), total 
abstraction from the machine is <something> because ... ah ha! I know 
why. :) OK, figured it out. "Nevermind".

> you can aim at memory use, at time, at best lock-free sharing
> and so on. Fixing a machine type makes your program potentially less
> efficient. More precisely you specify the semantics of required type,
> more knowledge the compiler has to generate efficient code.
>
>>> 2. I don't care about the platform, I care about what the program is
>>> supposed to do. The type's properties shall be defined by the
>>> application domain.
>>
>> Can you give an example of where that would be an issues not solved
>> by user defined types?
>
> BTW, the point was: don't use built-in numeric types.

OK, but after the hardware guys all conform to a standard. Till then, it 
ain't happenin. 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13 11:10                                           ` Ian Collins
  2011-08-13 11:46                                             ` Georg Bauhaus
  2011-08-13 11:54                                             ` Brian Drummond
@ 2011-08-14  4:54                                             ` Jed
  2 siblings, 0 replies; 87+ messages in thread
From: Jed @ 2011-08-14  4:54 UTC (permalink / raw)


Ian Collins wrote:

> Yes I agree being able to specify a type without reference to an
> underlying fundamental type is a powerful feature C and C++ lack.

It's not a language thing. You would need to trust the hardware guys to 
do that. I'm all for it, as long as it is a homeless person, and not 
someone who has had 50 instead of 15 minutes of fame already, while 
someone is worried about the onset of winter. 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13  9:37                                 ` Georg Bauhaus
@ 2011-08-14  5:22                                   ` Jed
  0 siblings, 0 replies; 87+ messages in thread
From: Jed @ 2011-08-14  5:22 UTC (permalink / raw)


Georg Bauhaus wrote:
> On 12.08.11 21:51, Jed wrote:
>
>>> The general idea is that you start from the problem
>>> without giving thought to representation.  Typically,
>>> without loss of efficiency.  Compiler makers know a fair
>>> bit about good choice of representation.  This means:
>>> Describe precisely what matters to your program, on
>>> any machine, without reference to any representation.
>>> Then, and only then, and only if necessary, throw in your
>>> knowledge of representation.
>>
>> That doesn't work when you do IO.
>
>
> Specifying representation when needed, and separate  from
> the type declaration, works perfectly well for I/O.
> For I/O in particular!

Maybe true, but maybe I'd agree, and maybe I'd "bail out" at 
the last step before boarding the airplane, which of course I could maybe 
never ever do, for being so afraid of being in the sky under someone(s) 
else's control... I mean, like Rain Man.

>
> Short summary:
>
> Declaring a type for computation does *not* require specification
> of representation.

That goes without saying. But maybe I LIKE it that way.

> However, should it become necessary

It's not a question of necessity.

> to
> specify a representation of objects of a type, the programmer
> should be able to do so, but *not* by forcing the representation
> into every existing type.

Maybe, but maybe that is too "pie in the sky" academia/wishful-thinking, 
etc. Who are the beneficiaries of such and it is a benefit at all?

>
>
> (Incidentally, bit-oriented, abstracting types (bit fields
> in C, std::bitset in C++, bit arrays in D, or packed arrays of
> Booleans in a host of Pascalish languages) don't make a
> language 4GL.)

I know what a 4GL is, do you? ;)

>
> Examples of how omitting (implied) representation from type
> declaration works well with I/O.
>
> 1. (I/O As-is) Suppose I/O means writing to a particular address.
> There is an output port to write to.  It has CHAR_BIT bits,
> and CHAR_BIT == 8.  You want to write an object of type long
> which happens to consists of four octets.  One approach is to
> write a function that shifts bits into a char object and then
> pushes this object out the port.
>
> Using the fundamental type system of C++, you need some size
> computations, at compile time, or by the programmer, so that
> the program will perform the right number of shifts, for a
> given implementation.  With C++ and long, long[], etc, there
> are idioms. They typically use the sizeof operator, and division.
>
> The very same technique is available for objects declared of a
> type 1e0 .. 1e5 that does not mention any other type's name!
> The programmer may leave it to the compiler to choose a
> representation for objects of type 1e0 .. 1e5 that is
> best on the target hardware for computation.  For I/O, the
> programmer again uses the type system and inquires of the type
> how many octets there are in the type.  But!  He does not have
> to name a type for the "computational objects", such as
> declaring them to be "long", he just lists the values that
> should be stored in objects of the type, and exactly these, no
> more, no less.

Pfft. There is no "int", "long", "short"... but there is int32, uint8, 
etc. Don't bother me with past mistakes. Given that, then move on. All 
else is banter/irrelevant (to progress).

>
> In pseudo code,
>
>    type T is exactly 1e0 .. 1e5.
>    T obj.   // representation left to the compiler
>
>    // compute, compute, compute, ...  then,
>
>    for k in [0, octet_size(T)) do
>      x <- shift_octet_from (obj@k),
>      write_octet(x, port_address)
>    done.

Ech, I have to skip past example code mostly. Until the higher level is 
worked out. "Design in code" is a fantasy (for the most part, it is just 
wailing on the guitar, which is, of course, masturbation).
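For readers who do want the detail: Georg's pseudo code renders into a short C++ sketch. Here `write_octet` and `PORT_ADDRESS` are hypothetical stand-ins for the real port interface; this version just records the bytes so the behaviour can be observed.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in for the hardware port: a real implementation would poke a
// memory-mapped register; this one records what was written.
std::vector<std::uint8_t> port_log;
constexpr std::uintptr_t PORT_ADDRESS = 0x1000;  // assumed port location

void write_octet(std::uint8_t x, std::uintptr_t /*port*/) { port_log.push_back(x); }

// Shift one octet at a time out of obj, lowest first, mirroring the
// pseudo code's shift_octet_from / write_octet loop.
template <typename T>
void write_to_port(T obj) {
    for (std::size_t k = 0; k < sizeof obj; ++k) {
        write_octet(static_cast<std::uint8_t>(obj & 0xFF), PORT_ADDRESS);
        obj >>= 8;
    }
}
```

Note that the loop count comes from `sizeof obj`, i.e. from the type system, just as Georg's `octet_size(T)` does; the caller never names the representation.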

>
> 2. (Conversion) Suppose I/O means writing a number of structured
> objects to a stream.

Suppose? Suppose this: there is no "stream". :P
 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13 10:27                                 ` Georg Bauhaus
@ 2011-08-14  5:35                                   ` Jed
  2011-08-14 20:13                                     ` Georg Bauhaus
  2011-08-15 11:38                                     ` Georg Bauhaus
  0 siblings, 2 replies; 87+ messages in thread
From: Jed @ 2011-08-14  5:35 UTC (permalink / raw)


Georg Bauhaus wrote:
> On 12.08.11 21:51, Jed wrote:
>
>> So the benefit, then, just not having to learn something?
>
>
> "Oh no, not again."

What.

>
>
> (1) If you need to know X in order to write program P, you
>     should have learned X.

If you chose the path to implement program P, then it behooves you to 
understand the tool you choose to do that with. Else you may end up with 
less than five digits on your right hand (assuming you are right-handed).

>
> (2) If you do not need to know X for writing P, having
>     learned X neither helps nor hurts. (e.g. X ::= float)

If one is a "soul-searching geriatric", or a "penis-bound adolescent", 
your theory "holds" (probably not). So who died and left you boss?

>
> (3) If a language L1 requires X for P, you must learn X.

The consequences of choice.

>
> (4) If a language L2 doesn't require X for P, you may
>     learn X, but need not.

But maybe one would want to. And maybe one would choose not to tell you 
why! :P

>
> (5) If P produced by L1 equals P produced by L2, then
>     we don't need X for P.  QED

Isn't that why the USA "economy" is bust? Don't make me say "snakeoil". 
(And for your info, Mr. High on His Horse, I have a college degree, not a 
QED (so don't even start this shit that you are smarter than me) :P ).

[snippage]





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13 11:02                                 ` Georg Bauhaus
@ 2011-08-14  5:56                                   ` Jed
  0 siblings, 0 replies; 87+ messages in thread
From: Jed @ 2011-08-14  5:56 UTC (permalink / raw)


Georg Bauhaus wrote:
> On 12.08.11 21:51, Jed wrote:
>
>> I like the semantic also (not the syntax though). I still don't have
>> a feel for the scope of the desired (your) semantics and the compiler
>> implementation effort requited though. So far, I'm clear on "ranged
>> integers + size attribute".
>
> As others have argued, and as has come out of the study
> mentioned earlier,

(attempt at "appeal to authority" noted)

> the language should

"should" doesn't fly on Wikipedia.

> Sorry, I can't present to you a complete and consistent summary
> of the world's programmers' ideas about whole number types in
> their programs, other than in the negative: they are rarely covered
> by int, as is exemplified in this thread.

Do "call me" when your thoughts are accepted on Wikipedia.

>
> A positive addition:

Oh, you were being "negative"?  I didn't notice!

> we

Who is "we"? Trying to include me, unsuspecting, may be a warring action. 
So just don't do that anymore.

> already have alternative ways of
> declaring types and they lead to exactly as efficient
> programs.

It's called reality. Deal with it (better than your dad did, and of course 
your mom does not matter). I didn't say accept it.

>
> These types are an interesting challenge, anyway.

Your types are known, and not interesting.

> A few compiler
> writers

There is no such thing.

> have done some homework and decided that cyclic types (beyond
> wrapping ones such as unsigned int) would be nightmarish to
> implement.  At least efficiently, IIUC.
>
> But do they have to implement all kind of ideas, or should
> a language just offer building blocks for the type system?

Dmitry "knows" that. Ask him.

>> So, a control panel on top of the low-level things. (4GL vs. 3GL).
>
> int refers only to what the C++ standard has to say about
> minimal properties of int.  Simply being more specific about what
> values I have in mind is far, far away from a 4GL language.

I assure you I know abstraction. You can take your "vote", and shove it 
up your ass. Cry me a river. Reap what you sow.







^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-14  4:35                                           ` Jed
@ 2011-08-14  6:46                                             ` Dmitry A. Kazakov
  0 siblings, 0 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-14  6:46 UTC (permalink / raw)


On Sat, 13 Aug 2011 23:35:58 -0500, Jed wrote:

> Dmitry A. Kazakov wrote:
> 
>> First of all, performance is not a functional requirement.
> 
> What planet are you from?

It happened so, that 50% of what I am doing is designing embedded and
real-time systems.

> Of course it is a functional requirement. I.e, 
> where performance matters and is thus specified.

Where performance (timing) really matters at the level of severity making
it functional, it is specified in the form of real-time constraints. Then
your case looks much worse, fatal rather, sorry.

You have to show that an activity X is completed before the deadline T;
that must follow from the types selected and the overall design. This means
that the timing of the operations shall be contracted (which still does not
guarantee you anything, but is necessary for any form of static analysis). And
you have no contracts at all!
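Dmitry's deadline requirement can be sketched as a run-time check in C++ (an illustration only; a hard real-time system would prove the bound statically from timing contracts rather than measure it):

```cpp
#include <cassert>
#include <chrono>

// Run activity X and report whether it completed before the deadline T.
// A dynamic check like this can detect a miss, but cannot guarantee
// the deadline the way a static timing contract would.
template <typename Activity>
bool completes_within(Activity x, std::chrono::steady_clock::duration deadline) {
    const auto start = std::chrono::steady_clock::now();
    x();
    return std::chrono::steady_clock::now() - start <= deadline;
}
```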

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-14  4:49                                           ` Jed
@ 2011-08-14  6:51                                             ` Dmitry A. Kazakov
  0 siblings, 0 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-14  6:51 UTC (permalink / raw)


On Sat, 13 Aug 2011 23:49:00 -0500, Jed wrote:

> Dmitry A. Kazakov wrote:
> 
>> Choosing an implementation is an optimization
>> issue,
> 
> Hmm. The truth of this is "easily" determined (by academics, not by me!). 
> Starting in on the theory, the "register" keyword is a good starting 
> point for thought.

In the early 60s, maybe. Nowadays it should be "register", "L1 cache", "L2
cache", "L3 cache" etc.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 21:39                         ` Fritz Wuehler
@ 2011-08-14  6:52                           ` Jed
  2011-08-14  8:13                             ` Nomen Nescio
  0 siblings, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-14  6:52 UTC (permalink / raw)


Fritz Wuehler wrote:
> "Jed" <jehdiah@orbitway.net> wrote:
>
>> I didn't say it was. I said it probably arose from that. I'll bet
>> integer literals are signed in most assembly languages (going out on
>> a limb with this, because I really don't know).
>
> Absolutely not.

Liar.

> Assembly language is about talking to the machine
> directly. If you can't manipulate native types then your assembler
> isn't an assembler.
>

Pedantic "discussion" is annoying to me, so curb it.

>> Hence, a no-brainer translation of  HLL code to assembly
>> (recognizing, of course, that compilers are free to generate machine
>> code directly, rather than generating assembly).
>
> Most (vast majority) of the compilers on IBM platforms generate object
> code.

Their loss! ;)

> It sounds like you think a goal of HLL design is to be easy to
> implement.

The goal? What "goal" are you on about?

> That's wrong.

What is "right" and what is "wrong". (This is great, because you know 
this and everyone else wants to know, right? Do tell, what is WRONG, and 
right.)

>The goal of HLL design

I didn't know it was a <something>.

>  is to make it easy to
> solve problems in the problem domain.

The pageant queen saying "world peace". You can't hang dude. She is 
pretty and has value, while you have none.

>
>> It's generally considered "typeless" from the POV of HLL
>> programmers. It has relative meaning. No need to be pedantic about
>> it.
>
> It wasn't pedantic,

I said it was, so it is. Don't F with me.

> it was a simple comment from an assembly coder.

Bah.

> Assembly is typed or it wouldn't be useful.

Strawman. Context matters.

> However there's no type
> enforcement at the assembler level (usually, although at the hardware
> level there is.)

masturbation.

> That is no the same as not being typed, although to
> an HLL programmer it may not be obvious that those two things are not
> one and the same.
>
>>> especially with certain
>>> systems and certain assemblers. The type in assembly language *does*
>>> usually reflect the native types of the underlying machine very
>>> closely,
>>> obviously.
>>
>> And I'll bet, more often than not, C/C++ built-in types reflect that
>> also. It would be "silly" to specify a language to what is uncommon.
>
> You have the tail wagging the dog.

Says the adolescent with dick in hand.

> The C/C++ built-in types don't
> reflect anything but the language design

Your inquisitiveness noted. But what about all those people you 
killed?

>
>> It would be "silly" to specify a language to what is uncommon.
>
> That's exactly the point of HLL. It must provide useful abstractions
> for the problem domain. The underlying implementation should not
> matter at all.

You have no platform. You are a killer. Men kill. Women don't. 





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 21:22                                 ` Ludovic Brenta
@ 2011-08-14  7:00                                   ` Jed
  2011-08-16 13:06                                     ` Ludovic Brenta
  0 siblings, 1 reply; 87+ messages in thread
From: Jed @ 2011-08-14  7:00 UTC (permalink / raw)


Ludovic Brenta wrote:
> Jed writes:
>>> When have you last thought about the size of the components
>>> of std::string, of of a vtable?  Why, then, is there a
>>> pressing need to think about the sizes of int types?
>>
>> You know that that was a very bad comparison. Completely invalid.
>
> On the contrary I think this comparison is perfectly valid. ]

Who cares what you think?





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-14  4:29                                       ` Jed
@ 2011-08-14  7:29                                         ` Dmitry A. Kazakov
  0 siblings, 0 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-14  7:29 UTC (permalink / raw)


On Sat, 13 Aug 2011 23:29:12 -0500, Jed wrote:

> Dmitry A. Kazakov wrote:
>> On Sat, 13 Aug 2011 02:53:32 -0500, Jed wrote:
>>
>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>>> news:1gu6ni1yb54k3$.4nbvfqqndl8m$.dlg@40tude.net...
>>>> On Fri, 12 Aug 2011 14:06:55 -0500, Jed wrote:
>>>>
>>>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>>>>> news:fwnlcp4mgj03$.1tjeqjtos00o8$.dlg@40tude.net...
>>>>
>>>>>> I want the type semantics specified. C/C++ int is neither lower
>>>>>> level nor closer to the machine it is just ill-defined. The
>>>>>> concern is this, not its relation to the machine, of which I (as
>>>>>> a programmer) just do not
>>>>>> care.
>>>>>
>>>>> What more definition do you want?
>>>>
>>>> More than what? I see nothing.
>>>
>>> The semantics of C++ types are specified.
>>
>> By the word "int"?
> 
> By "specified", I mean "you know what to expect" (being, you have not a 
> guarantee).

No, "what to expect" follows from the specification. E.g. "the can opener
operates in the temperature range -10..+40 °C". This is a specification.
"The can opener may operate at some temperature" is not.

>> int, long, unsigned represent an example of wrong language design.
> 
> I agree (did you mean in C++, or in general?).

In general. It is

1. untyped approach. Not because of implicit conversions, which are bad of
course. The main reason is that the programmer should be encouraged to
choose different integer types for semantically different things, even if
numerically these things could be implemented by the same type.

2. Lack of contracts, machine/language- rather than application domain
driven choice of types.
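Point 1 can be sketched with a strong-typedef wrapper in C++ (a hypothetical minimal example): two quantities share the same machine representation, yet the compiler rejects accidental mixing because their types differ.

```cpp
#include <cassert>

// Give semantically different quantities distinct types, even though
// both are represented by the same machine integer underneath.
template <typename Tag>
struct Quantity {
    int value;
    explicit Quantity(int v) : value(v) {}
    Quantity operator+(Quantity other) const { return Quantity(value + other.value); }
};

struct ApplesTag {};
struct OrangesTag {};
using Apples  = Quantity<ApplesTag>;
using Oranges = Quantity<OrangesTag>;

// Apples(2) + Oranges(3) would not compile: the types differ.
```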

>>  The
>> point was that the semantics of an [integer] must be derived from the
>> application domain.
> 
> Maybe it's good enough in a lot of instances. Say, a memory address. 
> 32-bit unsigned address works nicely (on a 32-bit machine).

Address is not an integer type. Consider segmented architecture as an
example.

>> My point was that the program
>> code does not specify the properties of int.
> 
> And I'm saying, maybe ("maybe" is a key word here) that is just a purist 
> approach and not better than practical compromise.

A compromise between what and what?

>> 2. I don't care about the platform, I care about what the program is
>> supposed to do.
> 
> Another way to skin the cat: write to a virtual machine. Java. .Net.

No, it is an attempt to constrain to one platform. VM, when used, is *the*
hardware. As history shows, all such attempts were in vain.

>>>>>> Why should you? Again, considering design by contract, and
>>>>>> attempt to reveal the implementation behind the contract is a
>>>>>> design error. You shall not rely on anything except the contract,
>>>>>> that is a fundamental principle.
>>>>>
>>>>> While that paradigm does have a place, that place is not
>>>>> "everywhere".
>>>>
>>>> Where?
>>>
>>> The formality of "design by contract" as used in developing
>>> functions/methods.
>>
>> That is a procedural view on software developing [a very outdated one,
>> IMO].
> 
> No it is not. Classes have methods. (I'm surprised you even said that).

Yes, they do. Yet an ADT (a class is a type) is seen as something more than
just a collection of subprograms. It is not just the methods (and other
operations) that are subjects of contracts.

> I furthered 
> saying that I have a feeling you are seeking some "purist" ideal or 
> something for comp sci labs in academia.

Not at all. I am looking for a language for practical projects we are doing
(automation, embedded, networking, RT systems). Using a bad language is an
economic disadvantage.

>>> Oh, you were thinking something else when I wrote: "Can that be
>>> achieved? At what degree of complexity?". I asked if creating your
>>> ideal language (this "definitively specified higher level" language)
>>> is feasible and if so, what does that do to the complexity of the
>>> implementation (compiler)?
>>
>> I think it should make the compiler simpler.
> 
> Do explain please. I can appreciate that in the abstract things look 
> rosey, but the devil is always in the details. One can design the perfect 
> bridge, but the problem is that landscapes are never perfect.

It is simpler to perform semantic analysis if you have a unified type
system. Each special case multiplies complexity.

>>> Is a comparison of Ada and C++ pretty much that answer?
>>
>> As a matter of fact, since I partially did both, to compile Ada is way
>> simpler than to compile C++.
> 
> That's not saying much of anything. Compare it to C.

K&R C or ANSI C? Up to semantic analysis, Ada 95 is much simpler than C;
after that, C should be simpler.

> I think it has this "module as class-like thing" that seems so 
> archaic,

Hmm, actually it is C++ that conflates types (classes) with modules. This
error is shared by many OOPLs. In Ada a module (a package) can provide any
number of types.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-14  6:52                           ` Jed
@ 2011-08-14  8:13                             ` Nomen Nescio
  0 siblings, 0 replies; 87+ messages in thread
From: Nomen Nescio @ 2011-08-14  8:13 UTC (permalink / raw)


"Jed" <jehdiah@orbitway.net> wrote:

> Fritz Wuehler wrote:
> > "Jed" <jehdiah@orbitway.net> wrote:
> >
> >> I didn't say it was. I said it probably arose from that. I'll bet
> >> integer literals are signed in most assembly languages (going out on
> >> a limb with this, because I really don't know).
> >
> > Absolutely not.
> 
> Liar.

Dumb jerkoff.

> > Assembly language is about talking to the machine
> > directly. If you can't manipulate native types then your assembler
> > isn't an assembler.
> >
> 
> Pedantic "discussion" is annoying to me, so curb it.

I really don't care what's annoying to you or not. This is usenet, if you
don't like it, go fuck yourself. Nobody will miss you.

> >> Hence, a no-brainer translation of  HLL code to assembly
> >> (recognizing, of course, that compilers are free to generate machine
> >> code directly, rather than generating assembly).
> >
> > Most (vast majority) of the compilers on IBM platforms generate object
> > code.
> 
> Their loss! ;)

Wrong again, shit-for-brains. They've been doing it before your mom had the
ten or so guys who could claim they were your father.


> Says the adolescent with dick in hand.

Better than being you, the guy sucking somebody else's adolescent dick.












^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13 13:12                                               ` Simon Wright
@ 2011-08-14 11:01                                                 ` Brian Drummond
  0 siblings, 0 replies; 87+ messages in thread
From: Brian Drummond @ 2011-08-14 11:01 UTC (permalink / raw)


On Sat, 13 Aug 2011 14:12:51 +0100, Simon Wright wrote:

> Brian Drummond <brian@shapes.demon.co.uk> writes:
> 
>> Incidentally, the values of "lower" and "upper" need not be determined
>> until runtime. The compiler can infer the best base type from whatever
>> constraints it finds on "lower" and "upper".
> 
> I don't think that can be true? OK, the range can be postponed until
> runtime, but the base type of which the actual bounds are instances must
> be known (talking about a compiler here, not an interpreter).

Yes, that's what I meant by the constraints on the bounds ... the base 
type of the range must be wide enough to accommodate both bounds. I see 
no need to specify it; that's the compiler's job.

- Brian




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-14  5:35                                   ` Jed
@ 2011-08-14 20:13                                     ` Georg Bauhaus
  2011-08-15 11:38                                     ` Georg Bauhaus
  1 sibling, 0 replies; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-14 20:13 UTC (permalink / raw)


"Jed" <jehdiah@orbitway.net> wrote:

>> (4) If a language L2 doesn't require X for P, you may
>>     learn X, but need not.
> 
> But maybe one would want to.

Sure. I think that a type system designed on behalf
of others will profit when its designers look at the
results of existing type systems in the hands of real
people.

If there were evidence that 80% of CVEs are caused
by use of some fundamental types, would you suggest
keeping those types and, I don't know, fire the guys
who wrote the programs, telling them they shouldn't
be programming?



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-14  5:35                                   ` Jed
  2011-08-14 20:13                                     ` Georg Bauhaus
@ 2011-08-15 11:38                                     ` Georg Bauhaus
  1 sibling, 0 replies; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-15 11:38 UTC (permalink / raw)


On 14.08.11 07:35, Jed wrote:

>> (4) If a language L2 doesn't require X for P, you may
>>     learn X, but need not.
> 
> But maybe one would want to.

Sure. I think that a type system designed on behalf
of others will profit when its designers look at the
results of existing type systems in the hands of real
people.

If there were evidence that 80% of CVEs are caused
by use of some fundamental types, would you suggest
keeping those types and, I don't know, fire the guys
who wrote the programs, telling them they shouldn't
be programming?




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-12 15:50                     ` Stuart Redmann
  2011-08-12 17:02                       ` Bill Findlay
@ 2011-08-15 12:59                       ` Vinzent Hoefler
  1 sibling, 0 replies; 87+ messages in thread
From: Vinzent Hoefler @ 2011-08-15 12:59 UTC (permalink / raw)


Stuart Redmann wrote:

> Stuart Redmann wrote:
> [snip]
>> > Be honest, how often do you really _want_ a wrapping int?
>
> On 12 Aug. Vinzent Hoefler wrote:
>> Thinking indices instead of pointers, quite often, actually.
>
> Could you please provide some example? Seriously, I'm really
> interested in the usage of wrapping types since in the last ten years
> I have not felt the need for wrapping ints at all (well, maybe two or
> three times).

Indexing a double-buffer would be the simplest example. Switching
between the two halves can then be expressed by a simple increment instead
of conditional statements or explicit masking of the upper bits.

And when extending the data structure to a ring-buffer, masking only works
for ring-buffers whose size is a power of two. So using a modular type
can even provide a slight performance gain in that special case while the
code still works in the general case.


Vinzent.

-- 
f u cn rd ths, u cn gt a gd jb n cmptr prgrmmng.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-13  9:15                                     ` Dmitry A. Kazakov
  2011-08-13  9:29                                       ` Ian Collins
  2011-08-14  4:29                                       ` Jed
@ 2011-08-16  8:18                                       ` Nick Keighley
  2011-08-16  8:47                                         ` Dmitry A. Kazakov
  2 siblings, 1 reply; 87+ messages in thread
From: Nick Keighley @ 2011-08-16  8:18 UTC (permalink / raw)


On Aug 13, 10:15 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Sat, 13 Aug 2011 02:53:32 -0500, Jed wrote:
> > "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote in message
> >news:1gu6ni1yb54k3$.4nbvfqqndl8m$.dlg@40tude.net...
> >> On Fri, 12 Aug 2011 14:06:55 -0500, Jed wrote:
> >>> "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote in message
> >>>news:fwnlcp4mgj03$.1tjeqjtos00o8$.dlg@40tude.net...


> >>>> I want the type semantics specified. C/C++ int is neither lower level nor
> >>>> closer to the machine it is just ill-defined. The concern is this, not its
> >>>> relation to the machine, of which I (as a programmer) just do not
> >>>> care.
>
> >>> What more definition do you want?
>
> >> More than what? I see nothing.
>
> > The semantics of C++ types are specified.
>
> By the word "int"?

an integer having a range at least large enough to hold the values
-32767..32767
(I may be off a bit on the exact values, but it translates to 16 bits or
larger)

<snip>

> 2. I don't care about the platform, I care about what the program is
> supposed to do. The type's properties shall be defined by the application
> domain.

whilst this degree of abstraction is often good, the ability to fiddle
with representation is also sometimes useful. And if we want our
programs to terminate before the sun goes cold, knowing about
representation helps.

I accept that many languages don't reveal representation details to the
degree that C (and C++) does.

<snip>




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16  8:18                                       ` Nick Keighley
@ 2011-08-16  8:47                                         ` Dmitry A. Kazakov
  2011-08-16  9:52                                           ` Nick Keighley
  2011-08-16 10:23                                           ` Georg Bauhaus
  0 siblings, 2 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-16  8:47 UTC (permalink / raw)


On Tue, 16 Aug 2011 01:18:31 -0700 (PDT), Nick Keighley wrote:

>> 2. I don't care about the platform, I care about what the program is
>> supposed to do. The type's properties shall be defined by the application
>> domain.
> 
> whilst this degree of abstraction is often good the ability to fiddle
> with representaion is also sometimes useful.

Never. It is a strong claim, but it holds. The cases in which you believe
you have to handle the type's layout are those where the *application*
domain is the machine hardware itself. Such rare cases exist, but
they are no exception to what I said.

> And if we want our
> programs to terminate before the sun goes cold knowing about
> representaion is sometimes useful.

A premature optimization does not guarantee you anything about performance;
in fact, the opposite. Unless your claim is that a deliberate use of a wrong
representation (e.g. shorter than required) might result in better
performance, due to malfunction.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16  8:47                                         ` Dmitry A. Kazakov
@ 2011-08-16  9:52                                           ` Nick Keighley
  2011-08-16 10:39                                             ` Dmitry A. Kazakov
  2011-08-16 10:23                                           ` Georg Bauhaus
  1 sibling, 1 reply; 87+ messages in thread
From: Nick Keighley @ 2011-08-16  9:52 UTC (permalink / raw)


On Aug 16, 9:47 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Tue, 16 Aug 2011 01:18:31 -0700 (PDT), Nick Keighley wrote:

> >> 2. I don't care about the platform, I care about what the program is
> >> supposed to do. The type's properties shall be defined by the application
> >> domain.
>
> > whilst this degree of abstraction is often good the ability to fiddle
> > with representaion is also sometimes useful.
>
> Never. It is a strong claim, but it holds. The cases of which you believe
> you have to handle the type's layout, are those where the *application*
> domain is the machine hardware itself. There exist such rare cases, but
> they represent no exception to what I said.

this reminds me of the maths result that a quadratic equation always has
two roots (solutions). Those cases where there seems to be only one root
are actually two roots, both with the same value.

You attempt to win the argument by definition engineering.


> > And if we want our
> > programs to terminate before the sun goes cold knowing about
> > representaion is sometimes useful.
>
> A premature optimization does not guaranty you anything about performance,
> in fact the opposite. Unless your claim is that a deliberate use of a wrong
> representation (e.g. shorter than required) might result in a better
> performance, due to malfunction.

I'm simply arguing that in the real world performance sometimes
matters. This may involve getting down and dirty with the
representation and other (usually) implementation details.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16  8:47                                         ` Dmitry A. Kazakov
  2011-08-16  9:52                                           ` Nick Keighley
@ 2011-08-16 10:23                                           ` Georg Bauhaus
  2011-08-16 10:58                                             ` Dmitry A. Kazakov
  1 sibling, 1 reply; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-16 10:23 UTC (permalink / raw)


On 16.08.11 10:47, Dmitry A. Kazakov wrote:

> A premature optimization does not guaranty you anything about performance,
> in fact the opposite. Unless your claim is that a deliberate use of a wrong
> representation (e.g. shorter than required) might result in a better
> performance, due to malfunction.

Isn't a program closer to optimal in some formal sense when

- the type stays as declared, i.e., without the representation affecting
  values ("percent" = {0..100}) and operations (+, -, ...) in the
  abstract, but

- eventually, one set of machine operations reflects the program's
  purpose better than another (customers preferring the faster/smoother/
  more stable/uninterrupted performance)?

IOW, both programs will perform the same system functions, but
one does more "efficiently" than the other.  And sells better.
Choosing a different type may change the program's meaning.
Is changing only the representation of types, not the types,
necessarily premature optimization?

Compilers, or their writers, try to achieve good representations.
Why shouldn't programmers at least try the same when the program
has matured and can be polished without hurting any of the system
functions?  As long as this attempt doesn't evolve into general hubris?
And most importantly, as long as the type of values does not change,
as it must standard-wise, for example, in C++ when s/int/long/, say?

In this sense, I think any kind of representation specification is
somewhat like a pragma that says optimize space or optimize time.
Anything wrong with pragmatic hints?

Imagine redefining "int" to be something like

__internal.hpp:

template <long lo, long hi, class Predicate = Id>
class Int
{
    __int<lo, hi> value;
public:
    ...
};

where __int<...> chooses a compile-time checked representation.




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16  9:52                                           ` Nick Keighley
@ 2011-08-16 10:39                                             ` Dmitry A. Kazakov
  0 siblings, 0 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-16 10:39 UTC (permalink / raw)


On Tue, 16 Aug 2011 02:52:27 -0700 (PDT), Nick Keighley wrote:

> On Aug 16, 9:47 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Tue, 16 Aug 2011 01:18:31 -0700 (PDT), Nick Keighley wrote:
> 
>>>> 2. I don't care about the platform, I care about what the program is
>>>> supposed to do. The type's properties shall be defined by the application
>>>> domain.
>>
>>> whilst this degree of abstraction is often good the ability to fiddle
>>> with representaion is also sometimes useful.
>>
>> Never. It is a strong claim, but it holds. The cases of which you believe
>> you have to handle the type's layout, are those where the *application*
>> domain is the machine hardware itself. There exist such rare cases, but
>> they represent no exception to what I said.
> 
> this reminds me of the maths result that a quadratic equation always has
> two roots (solutions). Those cases where there seems to be only one root
> are actually two roots, both with the same value.
> 
> You attempt to win the argument by definition engineering.

Do you have other definitions?

>>> And if we want our
>>> programs to terminate before the sun goes cold knowing about
>>> representaion is sometimes useful.
>>
>> A premature optimization does not guarantee you anything about performance;
>> in fact, the opposite. Unless your claim is that a deliberate use of a wrong
>> representation (e.g. shorter than required) might result in better
>> performance, due to malfunction.
> 
> I'm simply arguing that in the real world performance sometimes
> matters.

And the conclusion is that the required semantics need not be implemented
if that might degrade performance?

Aren't you trying to redefine the word "required"?

> This may involve getting down and dirty with the
> representation and other (usually) implementation details.

Aren't type specification and implementation (of that specification)
different things?

"Down and dirty" means "does wrong things", or something else?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16 10:23                                           ` Georg Bauhaus
@ 2011-08-16 10:58                                             ` Dmitry A. Kazakov
  2011-08-16 11:44                                               ` Georg Bauhaus
  2011-08-16 14:51                                               ` Bill Findlay
  0 siblings, 2 replies; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-16 10:58 UTC (permalink / raw)


On Tue, 16 Aug 2011 12:23:48 +0200, Georg Bauhaus wrote:

> Is changing only the representation of types, not the types,
> necessarily premature optimization?

The answer depends on the time the change is made. "Premature" in its
original meaning (Knuth) means: during design, before functional
requirements are met.
 
> Compilers, or their writers, try to achieve good representations.
> Why shouldn't programmers at least try the same when the program
> has matured and can be polished without hurting any of the system
> functions?

Because this requires heavy machinery: static analysis/coverage tests to
prove consistency and profiling of the most frequent use cases in order to
show performance gains, while the actual gain is usually marginal when
integral types are involved. It is beyond the capabilities of an average
programmer to estimate the effect of a representation change for a modern
machine with its caches, pipelines etc.

> In this sense, I think any kind of representation specification is
> somewhat like a pragma that says optimize space or optimize time.
> Anything wrong with pragmatic hints?

Compiler hints (switches) should not appear in the program code.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16 10:58                                             ` Dmitry A. Kazakov
@ 2011-08-16 11:44                                               ` Georg Bauhaus
  2011-08-16 14:51                                               ` Bill Findlay
  1 sibling, 0 replies; 87+ messages in thread
From: Georg Bauhaus @ 2011-08-16 11:44 UTC (permalink / raw)


On 16.08.11 12:58, Dmitry A. Kazakov wrote:

> Compiler hints (switches) should not appear in the program code.

They won't have to appear in program code if there are

file.hpp
file.hrp  // representations_opt
file.cpp
file.crp  // representations_opt



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-14  7:00                                   ` Jed
@ 2011-08-16 13:06                                     ` Ludovic Brenta
  0 siblings, 0 replies; 87+ messages in thread
From: Ludovic Brenta @ 2011-08-16 13:06 UTC (permalink / raw)


Jed wrote:
> Ludovic Brenta wrote:
>> Jed writes:
>>>> When have you last thought about the size of the components
>>>> of std::string, of of a vtable?  Why, then, is there a
>>>> pressing need to think about the sizes of int types?
>
>>> You know that that was a very bad comparison. Completely invalid.
>
>> On the contrary I think this comparison is perfectly valid.
>
> Who cares what you think?

Don't insult me (or anyone) and make a fool of yourself in front of
everyone, please.  Your time would be better spent reading what I
wrote.

--
Ludovic Brenta.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16 10:58                                             ` Dmitry A. Kazakov
  2011-08-16 11:44                                               ` Georg Bauhaus
@ 2011-08-16 14:51                                               ` Bill Findlay
  2011-08-16 19:13                                                 ` Dmitry A. Kazakov
  1 sibling, 1 reply; 87+ messages in thread
From: Bill Findlay @ 2011-08-16 14:51 UTC (permalink / raw)


On 16/08/2011 11:58, in article 17sfxzivd6ba0.1lpjrmelcfuoa$.dlg@40tude.net,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Tue, 16 Aug 2011 12:23:48 +0200, Georg Bauhaus wrote:
> 
>> Is changing only the representation of types, not the types,
>> necessarily premature optimization?
> 
> The answer depends on the time the change is made. "Premature" in its
> original meaning (Knuth) means: during design, before functional
> requirements are met.

There are important cases in which the representation of types is an
essential part of the functional requirement. For example, writing a
bit-accurate emulator for an existing computer architecture; or processing
data generated by a legacy non-Ada system.

These cases are not the most common, but they are also not the ignorable
trivia you claim them to be.

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;




^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16 14:51                                               ` Bill Findlay
@ 2011-08-16 19:13                                                 ` Dmitry A. Kazakov
  2011-08-16 19:23                                                   ` Bill Findlay
  0 siblings, 1 reply; 87+ messages in thread
From: Dmitry A. Kazakov @ 2011-08-16 19:13 UTC (permalink / raw)


On Tue, 16 Aug 2011 15:51:29 +0100, Bill Findlay wrote:

> On 16/08/2011 11:58, in article 17sfxzivd6ba0.1lpjrmelcfuoa$.dlg@40tude.net,
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:
> 
>> On Tue, 16 Aug 2011 12:23:48 +0200, Georg Bauhaus wrote:
>> 
>>> Is changing only the representation of types, not the types,
>>> necessarily premature optimization?
>> 
>> The answer depends on the time the change is made. "Premature" in its
>> original meaning (Knuth) means: during design, before functional
>> requirements are met.
> 
> There are important cases in which the representation of types is an
> essential part of the functional requirement.

I addressed this in my earlier post.

> For example, writing a
> bit-accurate emulator for an existing computer architecture;

Actually not. You can represent an array of bits in any desired way. The
order of bits and their actual layout in memory (if observable) are
unrelated:

   type Word is array (1..16) of Boolean;
   type Word is mod 2**16;
   type Word is range 0..2**16-1;
   type Word is new Wide_Character;

You are free to choose any of them and yet remain "bit-accurate". Bit
accuracy is defined by the implementation of Get_Bit, Get_Byte etc
operations.

> or processing data generated by a legacy non-Ada system.

You need only one type for an octet or other atomic type used as the I/O
unit. Note that this type is predefined by the I/O library, so normally
there is no need to define it yourself. You cannot use anything more
complex than that because there is no guarantee about the I/O semantics of
composite types.

> These cases are not the most common, but they are also not the ignorable
> trivia you claim them to be.

They are. The second case is communication to alien hardware/software
protocols, it is my daily job. You don't need any representation clauses
there.

It is amazing how many people think otherwise. It seems that the conflation
of encoding issues with type representations is very deeply rooted.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-16 19:13                                                 ` Dmitry A. Kazakov
@ 2011-08-16 19:23                                                   ` Bill Findlay
  0 siblings, 0 replies; 87+ messages in thread
From: Bill Findlay @ 2011-08-16 19:23 UTC (permalink / raw)


On 16/08/2011 20:13, in article 1qwybpbhjv11s.6orqbnwyy15z.dlg@40tude.net,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Tue, 16 Aug 2011 15:51:29 +0100, Bill Findlay wrote:
> 
> 
>> These cases are not the most common, but they are also not the ignorable
>> trivia you claim them to be.
> 
> They are. The second case is communication to alien hardware/software
> protocols, it is my daily job. You don't need any representation clauses
> there.
> 
> It is amazing how many people think otherwise. It seems that conflation of
> the encoding issues with type representations is very deep rooted.

We seem to have a different conceptualization of "need".

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;





^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: Why use C++?
  2011-08-11  7:54             ` Dmitry A. Kazakov
  2011-08-11  8:20               ` Jed
  2011-08-12  5:02               ` Randy Brukardt
@ 2011-08-18 13:39               ` Louisa
  2 siblings, 0 replies; 87+ messages in thread
From: Louisa @ 2011-08-18 13:39 UTC (permalink / raw)


On Aug 11, 5:54 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Wed, 10 Aug 2011 17:37:28 -0500, Randy Brukardt wrote:
> > There are uses for wrapping types, but they are far less likely than wanting
> > overflow detection. The default should be to catch errors, not turn them
> > into different ones.
>
> The OP mentioned image processing; the behavior frequently needed there is
> saturated integer arithmetic, which is neither ranged nor modular.
>
> As for modular types, wrapping is the mathematically correct behavior, it
> is not an error.
>
> You just cannot provide every possible arithmetic at the language level.

Signed and unsigned arithmetic are provided in PL/I.
Thus, 0:2**32-1 is available for unsigned binary arithmetic.
So is 0:2**64-1.



^ permalink raw reply	[flat|nested] 87+ messages in thread

end of thread, other threads:[~2011-08-18 13:39 UTC | newest]

Thread overview: 87+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <fb9f787c-af06-427a-82b6-b0684e8dcbc5@s2g2000vby.googlegroups.com>
     [not found] ` <j1kaj8$dge$1@adenine.netfront.net>
     [not found]   ` <1e292299-2cbe-4443-86f3-b19b8af50fff@c29g2000yqd.googlegroups.com>
     [not found]     ` <j1tha5$11q5$1@adenine.netfront.net>
     [not found]       ` <1fd0cc9b-859d-428e-b68a-11e34de84225@gz10g2000vbb.googlegroups.com>
2011-08-10 19:05         ` Why use C++? Niklas Holsti
2011-08-10 22:37           ` Randy Brukardt
2011-08-10 22:49             ` Ludovic Brenta
2011-08-12  4:54               ` Randy Brukardt
2011-08-11  7:54             ` Dmitry A. Kazakov
2011-08-11  8:20               ` Jed
2011-08-11  9:13                 ` Dmitry A. Kazakov
2011-08-11 10:57                   ` Jed
2011-08-11 11:43                     ` Georg Bauhaus
2011-08-12  5:07                       ` Jed
2011-08-11 13:11                     ` Nomen Nescio
2011-08-11 15:11                       ` Paul
2011-08-12  5:15                       ` Jed
2011-08-12 21:39                         ` Fritz Wuehler
2011-08-14  6:52                           ` Jed
2011-08-14  8:13                             ` Nomen Nescio
2011-08-11 15:09                     ` Dmitry A. Kazakov
2011-08-12  5:03                       ` Jed
2011-08-12  8:32                         ` Georg Bauhaus
2011-08-12 13:15                           ` Hyman Rosen
2011-08-12 22:09                             ` Randy Brukardt
2011-08-12 15:14                           ` Jed
2011-08-12 17:20                             ` Georg Bauhaus
2011-08-12 19:51                               ` Jed
2011-08-12 21:22                                 ` Ludovic Brenta
2011-08-14  7:00                                   ` Jed
2011-08-16 13:06                                     ` Ludovic Brenta
2011-08-13  9:37                                 ` Georg Bauhaus
2011-08-14  5:22                                   ` Jed
2011-08-13 10:27                                 ` Georg Bauhaus
2011-08-14  5:35                                   ` Jed
2011-08-14 20:13                                     ` Georg Bauhaus
2011-08-15 11:38                                     ` Georg Bauhaus
2011-08-13 11:02                                 ` Georg Bauhaus
2011-08-14  5:56                                   ` Jed
2011-08-12  9:21                         ` Dmitry A. Kazakov
2011-08-12 13:26                           ` Jed
2011-08-12 14:30                             ` Dmitry A. Kazakov
2011-08-12 19:06                               ` Jed
2011-08-12 20:05                                 ` Dmitry A. Kazakov
2011-08-13  7:53                                   ` Jed
2011-08-13  9:15                                     ` Dmitry A. Kazakov
2011-08-13  9:29                                       ` Ian Collins
2011-08-13  9:52                                         ` Dmitry A. Kazakov
2011-08-13 11:10                                           ` Ian Collins
2011-08-13 11:46                                             ` Georg Bauhaus
2011-08-13 20:30                                               ` Ian Collins
2011-08-13 11:54                                             ` Brian Drummond
2011-08-13 13:12                                               ` Simon Wright
2011-08-14 11:01                                                 ` Brian Drummond
2011-08-14  4:54                                             ` Jed
2011-08-14  4:35                                           ` Jed
2011-08-14  6:46                                             ` Dmitry A. Kazakov
2011-08-14  4:49                                           ` Jed
2011-08-14  6:51                                             ` Dmitry A. Kazakov
2011-08-14  4:29                                       ` Jed
2011-08-14  7:29                                         ` Dmitry A. Kazakov
2011-08-16  8:18                                       ` Nick Keighley
2011-08-16  8:47                                         ` Dmitry A. Kazakov
2011-08-16  9:52                                           ` Nick Keighley
2011-08-16 10:39                                             ` Dmitry A. Kazakov
2011-08-16 10:23                                           ` Georg Bauhaus
2011-08-16 10:58                                             ` Dmitry A. Kazakov
2011-08-16 11:44                                               ` Georg Bauhaus
2011-08-16 14:51                                               ` Bill Findlay
2011-08-16 19:13                                                 ` Dmitry A. Kazakov
2011-08-16 19:23                                                   ` Bill Findlay
2011-08-12 11:48                 ` Stuart Redmann
2011-08-12 13:12                   ` Vinzent Hoefler
2011-08-12 15:50                     ` Stuart Redmann
2011-08-12 17:02                       ` Bill Findlay
2011-08-15 12:59                       ` Vinzent Hoefler
2011-08-12  5:02               ` Randy Brukardt
2011-08-12  5:16                 ` Robert Wessel
2011-08-12 16:39                   ` Adam Beneschan
2011-08-12  5:24                 ` Jed
2011-08-12  6:51                   ` Paavo Helde
2011-08-12  7:41                     ` Georg Bauhaus
2011-08-12 15:50                   ` Fritz Wuehler
2011-08-12 19:59                     ` Jed
2011-08-13  8:06                     ` Stephen Leake
2011-08-12  9:40                 ` Dmitry A. Kazakov
2011-08-12  9:45                   ` Ludovic Brenta
2011-08-12 10:48                     ` Georg Bauhaus
2011-08-12 15:56                       ` Ludovic Brenta
2011-08-13  8:08                   ` Stephen Leake
2011-08-18 13:39               ` Louisa

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox