From: "Dmitry A. Kazakov"
Newsgroups: comp.lang.ada
Subject: Re: requiring Ada(2020?) in self-driving autonomous automobiles
Date: Mon, 26 Nov 2018 18:31:48 +0100

On 2018-11-26 12:44, Marius Amado-Alves wrote:
>> Humans can explain things, forget things, do erratic things from time
>> to time. This is how it works with us, based on our worldview, morals,
>> ethics, instincts, etc. How would it work with an NN? Well, most
>> likely it will not. (Dmitry A. Kazakov)
>
> Theoretically it is possible with deep networks of the right
> architecture.

How do you know that? This theoretical knowledge requires "human
intelligence completeness", i.e. *knowing* that the class of problems
solvable by the network includes the level of human intelligence used
in driving cars. We know basically nothing about human intelligence,
not even whether it belongs to the class of problems solvable by an
FSM, or by a Turing machine, with or without attached incomputable
elements.

> The problem is that this right architecture is very hard to find. It
> requires a "deep" understanding of the theory, and the very few
> people at this level, like Ng and Hinton, are now "directors" of
> something and do not create architectures any more.

Well, that is a secondary problem. Usually, if a problem is solvable --
an engineering problem, we would say -- then that pretty much defines
the architecture. And conversely, if the architecture becomes "rocket
science", the chances are high that somebody is fooling someone.

> That is now done in a typewriting-monkey way by the new
> machine-learning practitioners, turning knobs by instinct at best.
> So, yeah, it will not happen soon.

Yes, that is the problem with the "evolving intelligence" / "genetic
algorithms" / "swarm intelligence" etc. approaches.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
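
P.S. For concreteness, a minimal sketch of the "turning knobs by
instinct" practice discussed above: random search over hyper-parameters
with no theory behind any of the choices. It is purely illustrative --
the knob names, their ranges, and the dummy scoring function are all
hypothetical stand-ins, not any particular toolkit's API.

    import random

    # Hypothetical "knobs" and their ranges; a real practitioner would
    # have dozens of these, chosen just as arbitrarily.
    KNOBS = {
        "learning_rate": (1e-5, 1e-1),
        "layers":        (2, 16),
        "width":         (32, 1024),
    }

    def sample_config():
        # Pick every knob at random -- instinct, not theory.
        lo, hi = KNOBS["learning_rate"]
        return {
            "learning_rate": random.uniform(lo, hi),
            "layers":        random.randint(*KNOBS["layers"]),
            "width":         random.randint(*KNOBS["width"]),
        }

    def train_and_score(config):
        # Stand-in for an expensive training run.  This dummy merely
        # prefers moderate learning rates and fewer layers, so that
        # the loop has something observable to optimize.
        return (-abs(config["learning_rate"] - 0.01)
                - 0.001 * config["layers"])

    best, best_score = None, float("-inf")
    for _ in range(100):            # one hundred typewriting monkeys
        config = sample_config()
        score = train_and_score(config)
        if score > best_score:
            best, best_score = config, score

    print(best, best_score)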