comp.lang.ada
* ChatGPT
@ 2023-03-30 21:49 Anatoly Chernyshev
  2023-03-30 22:32 ` ChatGPT Jerry
  2023-03-30 23:00 ` ChatGPT Jeffrey R.Carter
  0 siblings, 2 replies; 14+ messages in thread
From: Anatoly Chernyshev @ 2023-03-30 21:49 UTC (permalink / raw)


Just had an exercise with ChatGPT for Ada.

Here is what my request was: "Produce Ada code for solving a quadratic equation."

And here's the output from ChatGPT (quite flawless, if one forgives the missing math library in the context clause):
----------------------------
with Ada.Text_IO; use Ada.Text_IO;

procedure Quadratic_Equation is
   A, B, C, D : Float;
   X1, X2 : Float;
begin
   -- read coefficients A, B, and C from the user
   Put("Enter coefficient A: ");
   Get(A);
   
   Put("Enter coefficient B: ");
   Get(B);
   
   Put("Enter coefficient C: ");
   Get(C);

   -- calculate the discriminant
   D := B*B - 4.0*A*C;

   -- check if the discriminant is negative
   if D < 0.0 then
      Put_Line("No real roots exist.");
   else
      -- calculate the roots
      X1 := (-B + Sqrt(D)) / (2.0*A);
      X2 := (-B - Sqrt(D)) / (2.0*A);
      
      -- output the roots
      Put_Line("Root 1 = " & Float'Image(X1));
      Put_Line("Root 2 = " & Float'Image(X2));
   end if;
end Quadratic_Equation;

and an extra remark:

Note that this implementation assumes that the discriminant (B^2 - 4AC) is positive or zero. If the discriminant is negative, then no real roots exist.
-----------------------------------------------
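
For the record, making it compile takes only two more context clauses: Sqrt lives in Ada.Numerics.Elementary_Functions, and Get for Float comes from Ada.Float_Text_IO. A version with just those added (otherwise unchanged) should compile with any reasonably recent Ada compiler:

```ada
with Ada.Text_IO;                       use Ada.Text_IO;
with Ada.Float_Text_IO;                 use Ada.Float_Text_IO;
with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

procedure Quadratic_Equation is
   A, B, C, D : Float;
   X1, X2 : Float;
begin
   -- read coefficients A, B, and C from the user
   Put ("Enter coefficient A: ");
   Get (A);
   Put ("Enter coefficient B: ");
   Get (B);
   Put ("Enter coefficient C: ");
   Get (C);

   -- calculate the discriminant
   D := B*B - 4.0*A*C;

   -- check if the discriminant is negative
   if D < 0.0 then
      Put_Line ("No real roots exist.");
   else
      -- calculate and print the real roots
      X1 := (-B + Sqrt (D)) / (2.0*A);
      X2 := (-B - Sqrt (D)) / (2.0*A);
      Put_Line ("Root 1 = " & Float'Image (X1));
      Put_Line ("Root 2 = " & Float'Image (X2));
   end if;
end Quadratic_Equation;
```

(The degenerate case A = 0.0 is still unhandled, as in the original.)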

Before crying wolf: my impression is that ChatGPT might be useful for drafting simple pieces of code (say, homework). Very soon, though, one arrives at a situation where the effort to explain the task to ChatGPT equals the effort of writing the code yourself (I shall call it a limiting Kolmogorov complexity).

What do you think?



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ChatGPT
  2023-03-30 21:49 ChatGPT Anatoly Chernyshev
@ 2023-03-30 22:32 ` Jerry
  2023-04-01 12:10   ` ChatGPT Hou Van Boere
  2023-03-30 23:00 ` ChatGPT Jeffrey R.Carter
  1 sibling, 1 reply; 14+ messages in thread
From: Jerry @ 2023-03-30 22:32 UTC (permalink / raw)


On Thursday, March 30, 2023 at 2:49:45 PM UTC-7, Anatoly Chernyshev wrote:
> What do you think?

ChatGPT demands my phone number which it will not get. Has anyone found a workaround?
Jerry


* Re: ChatGPT
  2023-03-30 21:49 ChatGPT Anatoly Chernyshev
  2023-03-30 22:32 ` ChatGPT Jerry
@ 2023-03-30 23:00 ` Jeffrey R.Carter
  2023-03-31  6:54   ` ChatGPT Dmitry A. Kazakov
  1 sibling, 1 reply; 14+ messages in thread
From: Jeffrey R.Carter @ 2023-03-30 23:00 UTC (permalink / raw)


On 2023-03-30 23:49, Anatoly Chernyshev wrote:
> 
> What do you think?

No doubt there are a large number of such programs in the training data.
If it had simply regurgitated one of those, at least the program would
have compiled. That it couldn't even manage that much is not impressive.

-- 
Jeff Carter
"Fundamental improvements in performance
are most often made by algorithm changes,
not by tuning."
Elements of Programming Style
201


* Re: ChatGPT
  2023-03-30 23:00 ` ChatGPT Jeffrey R.Carter
@ 2023-03-31  6:54   ` Dmitry A. Kazakov
  2023-03-31 11:04     ` ChatGPT magardner2010
  2023-03-31 21:44     ` ChatGPT Anatoly Chernyshev
  0 siblings, 2 replies; 14+ messages in thread
From: Dmitry A. Kazakov @ 2023-03-31  6:54 UTC (permalink / raw)


On 2023-03-31 01:00, Jeffrey R.Carter wrote:
> On 2023-03-30 23:49, Anatoly Chernyshev wrote:
>>
>> What do you think?
> 
> No doubt there are a large number of such programs in the training data.
> If it had simply regurgitated one of those, at least the program would
> have compiled. That it couldn't even manage that much is not impressive.

Right. It would be fun to add qualifiers to the request, e.g. "in
extended precision", "taking arguments from user input", etc. Parroting
works up to some limit.

What I find interesting is that the whole swindle highlights that some
human activities considered creative are in fact not. BTW, it repeats
the much earlier dethroning of chess playing as indicative of human
intellect. The machine easily beats us at chess.

Actually, it is the abilities to sort out garbage and pick up vegetables 
which make us intelligent! (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: ChatGPT
  2023-03-31  6:54   ` ChatGPT Dmitry A. Kazakov
@ 2023-03-31 11:04     ` magardner2010
  2023-03-31 21:44     ` ChatGPT Anatoly Chernyshev
  1 sibling, 0 replies; 14+ messages in thread
From: magardner2010 @ 2023-03-31 11:04 UTC (permalink / raw)


On 31/03/2023 09:54, Dmitry A. Kazakov wrote:
> On 2023-03-31 01:00, Jeffrey R.Carter wrote:
>> On 2023-03-30 23:49, Anatoly Chernyshev wrote:
>>>
>>> What do you think?
>>
>> No doubt there are a large number of such programs in the training
>> data. If it had simply regurgitated one of those, at least the program
>> would have compiled. That it couldn't even manage that much is not
>> impressive.
> 
> Right. It would be fun to add qualifiers to the request, e.g. "in
> extended precision", "taking arguments from user input", etc. Parroting
> works up to some limit.
> 

I have been trying to get ChatGPT to help me with a larger Ada project
on and off for the past few months. Not only does it have no idea which
functions do and do not exist in a given library, it sometimes doesn't
even understand how private types or tasks work. I will admit those are
not necessarily super common, but given how easy Ada makes it to use
them, I can honestly say that Ada is the only language in which I am
comfortable writing multithreaded programs that require more
sophistication than a simple fork() call.

So, yeah. ChatGPT knows the syntax of Ada, and it is familiar with 
Ada.Text_IO's more common functions. It knows about the existence of 
GNAT.Sockets and AdaSockets, but it has no idea on how to use them.

I am quite confident that is pretty much it as far as ChatGPT and Ada
go.


* Re: ChatGPT
  2023-03-31  6:54   ` ChatGPT Dmitry A. Kazakov
  2023-03-31 11:04     ` ChatGPT magardner2010
@ 2023-03-31 21:44     ` Anatoly Chernyshev
  2023-04-01  7:39       ` ChatGPT Dmitry A. Kazakov
  1 sibling, 1 reply; 14+ messages in thread
From: Anatoly Chernyshev @ 2023-03-31 21:44 UTC (permalink / raw)


Data science people swear it's just a matter of the size of the training set used...
I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, m3) are completely beyond it as of now.


* Re: ChatGPT
  2023-03-31 21:44     ` ChatGPT Anatoly Chernyshev
@ 2023-04-01  7:39       ` Dmitry A. Kazakov
  2023-04-07  1:51         ` ChatGPT Ken Burtch
  0 siblings, 1 reply; 14+ messages in thread
From: Dmitry A. Kazakov @ 2023-04-01  7:39 UTC (permalink / raw)


On 2023-03-31 23:44, Anatoly Chernyshev wrote:
> Data science people swear it's just a matter of the size of the training set used...

They lie. In machine learning overtraining is as much a problem as 
undertraining. The simplest example from mathematics is polynomial 
interpolation becoming unstable with higher orders.
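
As an illustration, interpolating Runge's classic example
f(x) = 1/(1 + 25x**2) on equidistant nodes shows the effect: the
maximum interpolation error grows as the order goes up instead of
shrinking. A rough sketch (the orders and the sampling grid are
arbitrary choices):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Runge_Demo is

   --  Runge's function, the classic victim of equidistant interpolation
   function F (X : Float) return Float is
   begin
      return 1.0 / (1.0 + 25.0 * X * X);
   end F;

   --  Value at X of the Lagrange interpolant of F on N + 1
   --  equidistant nodes in [-1.0, 1.0]
   function Interp (N : Positive; X : Float) return Float is
      Nodes : array (0 .. N) of Float;
      Sum   : Float := 0.0;
      L     : Float;
   begin
      for J in Nodes'Range loop
         Nodes (J) := -1.0 + 2.0 * Float (J) / Float (N);
      end loop;
      for J in Nodes'Range loop
         L := 1.0;
         for K in Nodes'Range loop
            if K /= J then
               L := L * (X - Nodes (K)) / (Nodes (J) - Nodes (K));
            end if;
         end loop;
         Sum := Sum + F (Nodes (J)) * L;
      end loop;
      return Sum;
   end Interp;

   Orders : constant array (1 .. 3) of Positive := (5, 10, 20);

begin
   for I in Orders'Range loop
      declare
         N       : constant Positive := Orders (I);
         Max_Err : Float := 0.0;
         X, Err  : Float;
      begin
         --  sample the interpolation error on a fine grid
         for S in 0 .. 200 loop
            X   := -1.0 + Float (S) / 100.0;
            Err := abs (Interp (N, X) - F (X));
            if Err > Max_Err then
               Max_Err := Err;
            end if;
         end loop;
         Put_Line ("Order" & Positive'Image (N)
                   & ": max error" & Float'Image (Max_Err));
      end;
   end loop;
end Runge_Demo;
```

With Chebyshev-spaced nodes the same experiment converges; the
instability is a property of the equidistant node placement, not of the
function.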

And this does not even touch on contradictory samples requiring
retraining, time-constrained samples, etc.

> I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, m3) are completely beyond it as of now.

One must remember that ChatGPT is nothing but ELIZA on steroids.

    https://en.wikipedia.org/wiki/ELIZA

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: ChatGPT
  2023-03-30 22:32 ` ChatGPT Jerry
@ 2023-04-01 12:10   ` Hou Van Boere
  0 siblings, 0 replies; 14+ messages in thread
From: Hou Van Boere @ 2023-04-01 12:10 UTC (permalink / raw)


I have the same concerns. There are websites that offer temporary phone numbers, but they never work. Get a burner phone with a pay-as-you-go plan that does not auto-renew. Maybe you already have a retired phone. A SIM card will be $10 and a month's worth of talk time perhaps another $10. Wish it was easier :(


* Re: ChatGPT
  2023-04-01  7:39       ` ChatGPT Dmitry A. Kazakov
@ 2023-04-07  1:51         ` Ken Burtch
  2024-10-16 22:22           ` ChatGPT Nioclásán Caileán de Ghlostéir
  0 siblings, 1 reply; 14+ messages in thread
From: Ken Burtch @ 2023-04-07  1:51 UTC (permalink / raw)


On Saturday, April 1, 2023 at 3:39:51 AM UTC-4, Dmitry A. Kazakov wrote:
> On 2023-03-31 23:44, Anatoly Chernyshev wrote:
> > Data science people swear it's just a matter of the size of the training set used...
> They lie. In machine learning overtraining is as much a problem as
> undertraining. [...]
> One must remember that ChatGPT is nothing but ELIZA on steroids.

For what it's worth on the subject of the chatbot, "Produce Ada code for solving a quadratic equation." is a terrible choice for a test of ChatGPT, as one is asking whether it can do a Google search. To test its abilities, you have to pick a challenge that cannot be solved with a Google search.

My short assessment of ChatGPT, with the history of chatbots, is available in my February blog post. I gave it a simple programming problem and it failed 3 times out of 4. That's not surprising, as I've learned since February that the chatbot doesn't actually understand programming: it uses examples off the Internet and tries to predict what you might have typed based on keyword patterns. It is an imitation of an imitation, smoke and mirrors. This is why Vint Cerf denounced it. You can read my thoughts on my blog:

https://www.pegasoft.ca/coder/coder_february_2023.html

Ken Burtch


* Re: ChatGPT
  2023-04-07  1:51         ` ChatGPT Ken Burtch
@ 2024-10-16 22:22           ` Nioclásán Caileán de Ghlostéir
  2024-10-20  0:32             ` ChatGPT Kevin Chadwick
  0 siblings, 1 reply; 14+ messages in thread
From: Nioclásán Caileán de Ghlostéir @ 2024-10-16 22:22 UTC (permalink / raw)



On Thu, 6 Apr 2023, Ken Burtch wrote:
"For what it's worth on the subject of the chatbot, "Produce Ada code
for solving a quadratic equation." is a terrible choice for a test of
ChatGPT as one is asking if it can do a Google search. [...] It is an
imitation of an imitation, smoke and mirrors. [...]

Ken Burtch"


As an experiment I asked WWW.Perplexity.AI "Why is ChatGPT bad at Ada?" so 
Perplexity alleged:
"Answer
ChatGPT's performance with the Ada programming language can be perceived 
as inadequate due to several factors related to both the language's 
characteristics and the limitations of the AI model itself.
Factors Contributing to ChatGPT's Performance Issues with Ada
1. Complexity of Ada:
Ada is known for its strong emphasis on safety and correctness, which 
includes features like runtime checks and strong typing. While these 
features are beneficial for developing reliable software, they can also 
make the language more complex and verbose. This complexity may lead to 
misunderstandings or oversights when ChatGPT attempts to generate or 
analyze Ada code, especially for more intricate tasks [1][2].
2. Performance Overhead:
The safety features inherent in Ada, such as range checking and bounds 
checking, introduce additional overhead that can affect performance. This 
overhead is crucial for safety-critical applications but may not be 
well-handled by ChatGPT when generating code or discussing performance 
optimizations. The model may struggle to provide accurate assessments or 
optimizations for Ada code compared to languages like C or C++, which are 
often perceived as more straightforward in terms of low-level 
optimizations [1].
3. Limitations of AI Understanding:
ChatGPT operates based on patterns in the data it has been trained on. As 
a result, it may not fully grasp the nuances of niche programming 
languages like Ada, particularly when users ask complex questions or 
request sophisticated code examples. This limitation can lead to 
incorrect or overly simplistic responses when users seek detailed help 
with Ada programming [2][3].
4. User Expectations and Experience:
Users have reported that while ChatGPT can handle basic queries related 
to Ada, it often fails with more complex tasks. This inconsistency can 
lead to frustration, as users expect a level of expertise that the model 
may not be able to provide due to its training limitations and the 
inherent complexities of the Ada language [2][5].
5. Hallucination and Inaccuracy:
ChatGPT has a tendency to "hallucinate" or generate incorrect 
information, particularly in specialized fields where it lacks extensive 
training data. This issue is exacerbated in niche programming languages 
like Ada, where accurate information may not be as prevalent in its 
training corpus. Consequently, users might receive misleading advice or 
code snippets that do not function as intended [3][4].
In summary, ChatGPT's challenges with Ada stem from the language's 
complexity, performance overhead from safety features, limitations in AI 
understanding, user expectations, and a propensity for inaccuracies. 
These factors collectively contribute to a perception that ChatGPT is not 
well-suited for generating or analyzing Ada code effectively."


* Re: ChatGPT
  2024-10-16 22:22           ` ChatGPT Nioclásán Caileán de Ghlostéir
@ 2024-10-20  0:32             ` Kevin Chadwick
  2024-10-20 16:21               ` ChatGPT Nioclásán Caileán de Ghlostéir
  2025-01-18 18:26               ` ChatGPT Niocláiſín Cóilín de Ġloſtéir
  0 siblings, 2 replies; 14+ messages in thread
From: Kevin Chadwick @ 2024-10-20  0:32 UTC (permalink / raw)


>2. Performance Overhead:
>The safety features inherent in Ada, such as range checking and bounds 
>checking, introduce additional overhead that can affect performance. This 
>overhead is crucial for safety-critical applications but may not be 
>well-handled by ChatGPT when generating code

ChatGPT is even completely wrong about why it's wrong!


* Re: ChatGPT
  2024-10-20  0:32             ` ChatGPT Kevin Chadwick
@ 2024-10-20 16:21               ` Nioclásán Caileán de Ghlostéir
  2025-01-18 18:26               ` ChatGPT Niocláiſín Cóilín de Ġloſtéir
  1 sibling, 0 replies; 14+ messages in thread
From: Nioclásán Caileán de Ghlostéir @ 2024-10-20 16:21 UTC (permalink / raw)


Hi,

This Automatic-Idiocy criticism about ChatGPT is not by ChatGPT . . . it 
is by ChatGPT rival WWW.Perplexity.AI


* Re: ChatGPT
  2024-10-20  0:32             ` ChatGPT Kevin Chadwick
  2024-10-20 16:21               ` ChatGPT Nioclásán Caileán de Ghlostéir
@ 2025-01-18 18:26               ` Niocláiſín Cóilín de Ġloſtéir
  2025-02-01  9:04                 ` ChatGPT Niocláiſín Cóilín de Ġloſtéir
  1 sibling, 1 reply; 14+ messages in thread
From: Niocláiſín Cóilín de Ġloſtéir @ 2025-01-18 18:26 UTC (permalink / raw)


Happy New Year!

On Sun, 20 Oct 2024, Kevin Chadwick wrote:
"[. . . A.I.] is even completely wrong about why it's wrong!"

E.g.
"I fear that before long, the WWW will be hopelessly polluted by
AI-generated content. "Facts" will become irrelevant."
says Chuck Guzis today via
HTTPS://classicCmp.org/mailman3/hyperkitty/list/cctalk@classiccmp.org/thread/NY6PGN2YRILSNPJGDZDKNDC4VUWG5UF3


* Re: ChatGPT
  2025-01-18 18:26               ` ChatGPT Niocláiſín Cóilín de Ġloſtéir
@ 2025-02-01  9:04                 ` Niocláiſín Cóilín de Ġloſtéir
  0 siblings, 0 replies; 14+ messages in thread
From: Niocláiſín Cóilín de Ġloſtéir @ 2025-02-01  9:04 UTC (permalink / raw)



"AI is Creating a Generation of Illiterate Programmers
Jan 24, 2025
[. . .]
Hi, I’m Namanyay [Goel] — a professional developer since the age of 
fourteen, now building AI tools to enhance human potential."
says
HTTPS://Nmn.Gl/blog/ai-illiterate-programmers

