From: "Dmitry A. Kazakov" <>
Subject: Re: ChatGPT
Date: Sat, 1 Apr 2023 09:39:49 +0200	[thread overview]
Message-ID: <u08n44$1qrr0$> (raw)
In-Reply-To: <>

On 2023-03-31 23:44, Anatoly Chernyshev wrote:
> Data science people swear it's just a matter of the size of training set used...

They lie. In machine learning, overtraining (overfitting) is as much a 
problem as undertraining. The simplest example from mathematics is 
polynomial interpolation becoming unstable at higher orders.
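[The instability referred to here is the classic Runge phenomenon: interpolating on equally spaced nodes, the error can grow as the polynomial degree rises. A minimal NumPy sketch, with the test function and node counts chosen purely for illustration:]

```python
import numpy as np

def runge(x):
    # Runge's function, the standard example where high-degree
    # polynomial interpolation on equispaced nodes diverges
    return 1.0 / (1.0 + 25.0 * x**2)

def max_interp_error(degree):
    # Interpolate runge() at degree+1 equispaced nodes on [-1, 1],
    # then measure the worst-case error on a fine evaluation grid
    nodes = np.linspace(-1.0, 1.0, degree + 1)
    coeffs = np.polyfit(nodes, runge(nodes), degree)
    xs = np.linspace(-1.0, 1.0, 1001)
    return np.max(np.abs(np.polyval(coeffs, xs) - runge(xs)))

for d in (4, 8, 12, 16):
    # the maximum error grows with the degree instead of shrinking
    print(d, max_interp_error(d))
```

[More data (nodes) with a higher-capacity model makes the fit worse, not better, which is the point being made about training-set size.]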

And this does not even touch contradictory samples requiring retraining, 
time-constrained samples, etc.

> I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, m3) are completely off-limits as of now.

One must remember that ChatGPT is nothing but ELIZA on steroids.

Dmitry A. Kazakov


Thread overview: 9+ messages
2023-03-30 21:49 ChatGPT Anatoly Chernyshev
2023-03-30 22:32 ` ChatGPT Jerry
2023-04-01 12:10   ` ChatGPT Hou Van Boere
2023-03-30 23:00 ` ChatGPT Jeffrey R.Carter
2023-03-31  6:54   ` ChatGPT Dmitry A. Kazakov
2023-03-31 11:04     ` ChatGPT magardner2010
2023-03-31 21:44     ` ChatGPT Anatoly Chernyshev
2023-04-01  7:39       ` Dmitry A. Kazakov [this message]
2023-04-07  1:51         ` ChatGPT Ken Burtch
