comp.lang.ada
* requiring Ada(2020?) in self-driving autonomous automobiles
@ 2018-03-28 13:26 Dan'l Miller
  2018-03-28 14:24 ` Dan'l Miller
  0 siblings, 1 reply; 10+ messages in thread
From: Dan'l Miller @ 2018-03-28 13:26 UTC (permalink / raw)


Perhaps this is best a new top-level posting instead of buried as a reply on the “Ada-oriented GUI” top-level posting.

On Tuesday, March 27, 2018 at 7:04:46 PM UTC-5, Randy Brukardt wrote: 
> "Dan'l Miller" wrote in message 
> news:0e59a988-ed21-4e45-a2ed-7a51995dbe6c@googlegroups.com... 
> ... 
> > ... it is up to the consumer to read the label on a product to see whether 
> > the product has been designed and manufactured in accordance with 
> > various voluntary regimes of best-practices in industry-standards. 
> 
> That's surely helpful for the lady that was run over crossing the street. 
> 
> Self-driving cars have safety requirements near those of avionics software. 
> But the actual requirements on such software is far from the same. I'm 
> afraid it will take a bunch of lawsuits to get the automakers in line, and 
> quite possibly not even that. (I hear they actively want to avoid making 
> software that actually is known to work.) 
> 
>                         Randy. 

My point is that self-driving-automobile companies that, say, adopt a DO-178B/C level of rigor (and implement in Ada with a proper RTOS) will have vastly more de facto safe harbor in a lawsuit from that lady's estate, the attorney general, or the FTC/NTSB.  Conversely, companies that cannot demonstrate adherence to regimes of rigor (or demonstrate adherence to a relatively worthless regime of rigor) are wide open to lawsuits incoming from multiple trajectories, precisely aimed at their deep pockets. 

Randy, if you want to achieve your goal of Ada2020 saving lives in self-driving vehicles, then you (personally and as all of ARG in the plural) need to hitch your Ada2020 wagon to IATF 16949 or other automotive quality-management best-practices to bring DO-178B/C or equivalent requirements to the automotive industry.  Ada2020 can have the most perfect solution to provably-correct tasking, but if its adoption is neither required nor strongly motivated by a quality-management regime, then no one will even know of your achievement. 

There exists an expedient backdoor to forcibly ramming Ada and DO-178B/C-esque requirements on the self-driving automotive industry (and/or the larger drive-by-wire automotive industry, e.g., the Toyota debacle).  That backdoor is the automotive property-casualty insurance industry.  If no insurer will affordably insure self-driven automobiles that lack, say, DO-178B/C compliance (and better yet the promised Ada2020 provable-correctness in tasking), then Ada will win the safety-critical-vehicles war and many of the major battles in that war.  Trusting that Ada2020's forthcoming awesome goodness will magically appear in automobiles by passive osmosis would be a recipe for being a coulda-woulda-shoulda footnote in the history books.  Statutory laws tend to follow whatever the automotive property-casualty insurance industry's lobbyists demand in Washington DC and in state legislatures. 

(Here I am assuming that your Ada2020 goals come to fruition practically.  I hope that they do, even though I am skeptical.)

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-03-28 13:26 requiring Ada(2020?) in self-driving autonomous automobiles Dan'l Miller
@ 2018-03-28 14:24 ` Dan'l Miller
  2018-03-28 14:26   ` Dan'l Miller
  0 siblings, 1 reply; 10+ messages in thread
From: Dan'l Miller @ 2018-03-28 14:24 UTC (permalink / raw)


https://quality-one.com/iatf-16949
Ada(2020?) and perhaps, say, DO-178B/C would need to be made a de facto minimum-best-practices requirement in the IATF 16949 community in the drive-by-wire and autonomous automotive industry.

http://www.fcanorthamerica.com/company/leadership/Pages/Management.aspx
Scott R. Garberding is the chief quality-management officer at Fiat-Chrysler America.

https://www.gm.com/company/leadership/corporate-officers.html
Mark Reuss is the chief quality-management officer at General Motors.

https://media.ford.com/content/fordmedia/fna/us/en/people/linda-cash.html
Linda Cash is the chief quality-management officer at Ford.

http://ir.tesla.com/management.cfm
It is unclear who acts as the chief quality-management officer at Tesla.  Likely, it is either J.B. Straubel or Elon Musk himself.

https://forums.tesla.com/forum/forums/quality-assurance-and-quality-management
http://www.iatfglobaloversight.org/oem-requirements/customer-specific-requirements
It is unclear whether Tesla views itself as adhering to IATF 16949 (which fits my earlier characterization of the USA's regulatory system of technology as voluntary membership in what are effectively guilds).

For the Ubers & Googles of the world, there appears to be little evidence of their own voluntary adherence to any rigorous quality-management regime, such as IATF 16949.  Although in Uber's case, it is conceivable that Uber is participating in IATF 16949 via Geely-subsidiary Volvo's activity in IATF 16949, depending on how “base vehicle” is defined to include or exclude the autonomous self-driving hardware & software on those vehicles:
http://electronics360.globalspec.com/article/7180/volvo-and-uber-partner-for-self-driving-cars

https://www2.deloitte.com/insights/us/en/focus/future-of-mobility/mobility-ecosystem-future-of-auto-insurance.html?id=us:2ps:3bi:confidence:eng:cons:::na:vvsboamD:1077703202:76622248597681:bb:Future_of_Mobility:Auto_Insurance_BMM:nb&msclkid=bc7f9b7e6a5b15731e6bc1c5f569f859
A summary of the future of property-casualty automotive insurance as self-driving autonomous vehicles (and ride-sharing fleets) proliferate.  Who pays for automotive insurance in the future, and does the manufacturer of the autonomous-vehicle control system cause drastically different premium rates?  (It should; the Ada & DO-178B/C communities would need to drive that point home.) 

https://globalnews.ca/news/3270429/self-driving-cars-insurance-liability
Insurance companies can very much be taught about the deep design flaws in hackery and in nonrigorous quality-management regimes.  The true driver of a wrecked autonomous self-driving automobile is the manufacturer of the self-driving control system's software & hardware.  When the wreck is the self-driving software/hardware's fault, that autonomous-control-system manufacturer had better have deep pockets, something resembling the malpractice/bonded insurance commonplace in the medical & licensed-professional-engineer industries.  The most expedient way to eliminate cavalier yahoo hackery in the software for self-driving autonomous automobiles is to drive its purveyors out of business financially, via excessive liability burden and other excessive fees tied to their cavalier yahoo hackery.  Even better if this message can be publicized to the general public via, say, the insurance industry.

These are the beginnings of points of contact for the Ada ARG to reach out to and influence the North American and European automobile manufacturers (even the ones without much self-driving autonomous-vehicle footprint) to crack down on the cavalier yahoos in their industry.  Likewise, these are the beginnings of points of contact for the Ada ARG to reach out to the property-casualty automotive insurance industry, to educate them on which technological regimes are safe self-drivers and which are analogous to a drunk driver, so that premiums are cheap or punitive depending on the technology & quality-management regime inside the self-driving autonomous control system's software & hardware.  Punitive insurance premiums will quickly drive cavalier yahoo software out of the marketplace, toward the strongly-preferred-by-cheaper-premiums Ada-based and DO-178B/C-based self-driving autonomous control systems.

Indeed, by bringing in IATF 16949 luminaries (and perhaps even property-casualty automotive insurance luminaries) in now during the Ada2020 process, Ada2020 can be crafted to do precisely what Randy seeks:  be the crowning achievement for total-quality-management in self-driving autonomous vehicles.  (Or the ARG can work heads down on Ada2020's provably-correct tasking without connecting the dots with the quality-management and insurance-industry people, and then Ada2020 can appear as a coulda-woulda-shoulda footnote in some automotive-industry history book 30 years from now, somewhat the way MULTICS is now in the history of operating-system-feature advances.)


* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-03-28 14:24 ` Dan'l Miller
@ 2018-03-28 14:26   ` Dan'l Miller
  2018-03-28 21:38     ` Bojan Bozovic
  0 siblings, 1 reply; 10+ messages in thread
From: Dan'l Miller @ 2018-03-28 14:26 UTC (permalink / raw)


On Wednesday, March 28, 2018 at 9:24:02 AM UTC-5, Dan'l Miller wrote:
> https://quality-one.com/iatf-16949
> Ada(2020?)

Grrrrrr.  I see on some newsgroup browsers that Ada(2020?) on the next line (!) somehow got included in the URL.

https://quality-one.com/iatf-16949


* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-03-28 14:26   ` Dan'l Miller
@ 2018-03-28 21:38     ` Bojan Bozovic
  2018-03-29  3:56       ` Dan'l Miller
  0 siblings, 1 reply; 10+ messages in thread
From: Bojan Bozovic @ 2018-03-28 21:38 UTC (permalink / raw)


Programming (or software development, if you like those words better) methodology needs to be changed, and verifying software against a specification, together with testing, must be viewed as indispensable, not some arcane procedure reserved for those who code aircraft, spacecraft, and missile software.  Anything that makes writing correct programs easier is welcome and needed; self-driving cars are just one aspect of the problem.
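To make "verifying software against specification" concrete, here is a minimal, hedged sketch using Ada 2012 contract aspects; the procedure name, the speed-limiter function, and the 0 .. 300 km/h range are all invented for illustration, not taken from any real vehicle codebase:

```ada
pragma Assertion_Policy (Check);

--  Illustrative sketch only: a specification expressed as a
--  postcondition, checked at runtime (or provable with SPARK).
procedure Check_Speed is
   type KPH is range 0 .. 300;  --  out-of-range speeds are rejected outright

   --  The Post aspect *is* the specification: the result never
   --  exceeds the limit.  The body is checked against it.
   function Limited_Speed (Requested, Limit : KPH) return KPH
     with Post => Limited_Speed'Result <= Limit
   is
   begin
      return KPH'Min (Requested, Limit);
   end Limited_Speed;
begin
   pragma Assert (Limited_Speed (250, 130) = 130);
   pragma Assert (Limited_Speed (100, 130) = 100);
end Check_Speed;
```

With GNAT, something like `gnatmake -gnata check_speed.adb` should build it with assertion checking enabled; a SPARK toolchain could attempt to discharge the postcondition statically instead.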


* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-03-28 21:38     ` Bojan Bozovic
@ 2018-03-29  3:56       ` Dan'l Miller
  2018-03-29  7:21         ` Bojan Bozovic
  2018-04-02 21:06         ` Robert I. Eachus
  0 siblings, 2 replies; 10+ messages in thread
From: Dan'l Miller @ 2018-03-29  3:56 UTC (permalink / raw)


On Wednesday, March 28, 2018 at 4:38:42 PM UTC-5, Bojan Bozovic wrote:
> Programming (or software development, if you like those words better) methodology needs to be changed,
> and verifying software against specification

  So the specification for self-driving cars is in good shape, defining exactly what the expectations in the problem-space are to emulate human-quality driving?  No, it is not, because in general such specifications fall into the category of:
0) Use the reference implementations of realtime software provided by the sensor-hardware manufacturers.
1) Have a neural network.
2) Train the neural network on dry roads on a sunny day without road construction.
3) Once the neural network drives no worse than a nervous teenager, add one hazard at a time to further train the neural network.
4) Then a miracle occurs.
5) The neural network knows how to have the reflexes of the spinal cord & brain stem.  The neural network knows how to have the reptilian brain for tracking object identity and movement.  The neural network knows how to have the mammalian brain for social order.  The neural network knows how to have a human level of inductive & deductive logical & spatial reasoning about why spontaneous events are happening and how the other guy will likely respond.  And it knows how to coordinate all those layers of brain processing amid all the varieties of road hazards concurrently:  a pedestrian walks out onto the roadway during a snow storm at night when the road is glazed with ice, with a tractor-trailer semi truck tailgating behind and a disabled vehicle on the shoulder up ahead.

What we would recognize as software in these self-driving autonomous vehicles is primarily in the drive-by-wire category of operating the realtime control of the LIDAR and other sensors.  The failure of self-driving autonomous vehicle software is far more likely in the vicinity of the neural-network steps of the requirements specification above.  Fidelity of transliteration of the requirements specification into neural nets on one hand and realtime software control of sensor hardware on the other hand probably doesn't even make the top 10 root causes of wrecks of self-driving automobiles.  The neural network and its undesired ability to learn the wrong lesson during training is far more likely the root cause of wrecks & mishaps & maiming & deaths.

> together with testing must be viewed as indispensable,

Remember:  what you think of as testing (either the verification of the fidelity of transliteration of the requirements, or the validation of whether the requirements specification was wise in the first place), they think of as training the neural net.  What you think of as a sacred methodical process, the neural network treats merely as more opportunities for trial-and-error training.

> not some arcane procedure reserved for those that code aircraft, spacecraft and missile software.

Yeah right.  Vehicles with hardware sensors in the air have an entirely different set of •realtime•-software characteristics than vehicles with hardware sensors on the ground.  Both real and time differ when in the air and when on the ground.  Fly-by-wire is entirely different physics & control-theory processing than drive-by-wire.  Yeah right.

> Anything that makes writing correct programs easier

Define easier with respect to emulating a human being's conscious brain atop a mammalian brain atop a reptilian brain atop a brain stem atop a spinal cord in front of a steering wheel and gas pedal and brake pedal (and clutch pedal and gear shift in the case of self-driving autonomous tractor-trailer semis/lorries).  

Even independent of this difficult emulation, I think Python is generally perceived to be easier for programmers than, say, Ada.  That is, easier in every metric other than assuring correctness (and perhaps readability).  What kind of ease do you wish to optimize?  (Probably the wrong kind.)
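As one minimal, hedged illustration of that correctness trade-off (every type and name below is invented for the example), Ada's distinct numeric types reject the unit-mixing at compile time that Python would silently accept:

```ada
--  Illustrative sketch only: distinct numeric types for distinct
--  physical units.  None of these names come from a real codebase.
procedure Units_Demo is
   type Metres            is new Float;
   type Seconds           is new Float;
   type Metres_Per_Second is new Float;

   D : constant Metres  := 100.0;
   T : constant Seconds := 8.0;

   --  V : constant Metres_Per_Second := D / T;
   --  ^ rejected at compile time: no "/" mixing Metres and Seconds
   --    exists until the programmer defines one explicitly.

   function "/" (Left : Metres; Right : Seconds) return Metres_Per_Second is
     (Metres_Per_Second (Float (Left) / Float (Right)));

   V : constant Metres_Per_Second := D / T;
begin
   pragma Assert (V = 12.5);  --  100.0 / 8.0 is exact in binary floating point
end Units_Demo;
```

In Python, `100.0 / 8.0` type-checks no matter which units the operands were meant to carry; here the compiler forces the unit relationship to be stated once, explicitly.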

> is welcome and needed, self-driving cars are just one aspect of the problem.

Randy/ARG has identified self-driving autonomous automobiles as a likely killer* app of Ada2020, so that is why it is a very important topic to discuss, despite your attempt to deflect away from it.

* so that self-driving autonomous automobiles are no longer killers.  Oh the irony.


* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-03-29  3:56       ` Dan'l Miller
@ 2018-03-29  7:21         ` Bojan Bozovic
  2018-04-02 21:06         ` Robert I. Eachus
  1 sibling, 0 replies; 10+ messages in thread
From: Bojan Bozovic @ 2018-03-29  7:21 UTC (permalink / raw)


On Thursday, March 29, 2018 at 5:56:22 AM UTC+2, Dan'l Miller wrote:
> On Wednesday, March 28, 2018 at 4:38:42 PM UTC-5, Bojan Bozovic wrote:
> > Programming (or software development, if you like those words better) methodology needs to be changed,
> > and verifying software against specification
> [...]

Now, if self-driving cars are built from a neural network doing trial-and-error learning, a sound logical proof is needed that such a methodology is really not going to endanger the other participants in traffic. As such a proof doesn't exist (at least to my knowledge), self-driving cars are a hazard that can't be allowed on the streets. I didn't say that real-time systems in aircraft have safety needs that, for example, robots used in car manufacturing or medical devices don't; yet many people, even those who should know better, don't realize that such systems run something like VxWorks and Ada rather than some form of Linux. "There is always one more bug" is a mantra that is repeated over and over again, and it needn't be so. Also, if we were to compare Python and Ada, even for a beginner Ada wins for ease of programming. Documentation is free and readily available, as are tutorials and books on the subject; Python has nothing like the Ada LRM/ALRM online. And Ada 83 code still compiles, 35 years later, on an Ada 2012 compiler, while Python isn't even compatible between its 2.x and 3.x versions. I wasn't trying to deflect the topic away from self-driving automobiles; you accuse me wrongly. But I don't see how the problem really differs from an autopilot in an aircraft that is certified to DO-178B/C. At least nobody has tried to put a primitive neural network into one of those and justify it by saying it performs better than a drunk pilot.
And no, constructing software to be correct, rather than debugging it into correctness, isn't my idea but one from C.A.R. "Tony" Hoare and Edsger W. Dijkstra; I'm just repeating it. It predates Ada.
Mr. Dan'l Miller, you started out too defensive, as if I had somehow insulted you or Mr. Randy Brukardt. If it sounded so, it wasn't my intention. If neural networks are indeed the future, the onus is on those who implement those systems to prove that they pose no risk. One way or another it will come to pass.


* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-03-29  3:56       ` Dan'l Miller
  2018-03-29  7:21         ` Bojan Bozovic
@ 2018-04-02 21:06         ` Robert I. Eachus
  2018-04-03  8:58           ` Dmitry A. Kazakov
  1 sibling, 1 reply; 10+ messages in thread
From: Robert I. Eachus @ 2018-04-02 21:06 UTC (permalink / raw)


On 3/28/2018 11:56 PM, Dan'l Miller wrote:
> On Wednesday, March 28, 2018 at 4:38:42 PM UTC-5, Bojan Bozovic wrote:
>> Programming (or software development, if you like those words better) methodology needs to be changed,
>> and verifying software against specification
> 
>    So the specification for self-driving cars is in good shape, defining exactly what the expectations in the problem-space are to emulate human-quality driving?  No, it is not, because in general such specifications fall into the category of:
> 0) Use the reference implementations of realtime software provided by the sensor-hardware manufacturers.
> 1) Have a neural network.
> 2) Train the neural network on dry roads on a sunny day without road construction.
> 3) Once the neural network drives no worse than a nervous teenager, add one hazard at a time to further train the neural network.
> 4) Then a miracle occurs.

My mother pointed out to me that if you see body damage on a car near 
you in traffic, you should expect that driver to do the same thing 
again.  Over fifty years of driving, that advice probably helped me 
avoid several fender benders.

My father taught us to pay attention to the drivers around you on long 
trips, and to choose to be surrounded by good drivers, not bad drivers. 
When my sister Lola was driving from Massachusetts to Pennsylvania for 
another sister's wedding, her steering failed catastrophically on the NJ 
Turnpike in the rain, at rush hour, in the leftmost lane.  Her car hit 
the guard rail, bounced, and ended up on the grass between the highway 
and a service area.

She was explaining what happened to me on the way to the rehearsal.  She 
said, "I have no idea how the car on my right avoided me."

I said, "Yes you do: your father..."

"Oh, right!  I had picked him out about ten miles earlier."

Now what I want to know is how that level of training is going to get 
into a self-driving car.  The necessary data could be picked up from the 
records of other drivers.  Maybe Deep Learning can figure it out.  But I 
won't hold my breath.



* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-04-02 21:06         ` Robert I. Eachus
@ 2018-04-03  8:58           ` Dmitry A. Kazakov
  2018-11-26 11:44             ` Marius Amado-Alves
  0 siblings, 1 reply; 10+ messages in thread
From: Dmitry A. Kazakov @ 2018-04-03  8:58 UTC (permalink / raw)


On 02/04/2018 23:06, Robert I. Eachus wrote:

> Now what I want to know is how that level of training is going to get 
> into a self-driving car.  The necessary data could be picked up from the 
> records of other drivers.  Maybe Deep Learning can figure it out.  But I 
> won't hold my breath.

These are among the major machine-learning problems:

- Knowledge extraction
- Reinforcement learning

Humans can explain things, forget things, do erratic things from time 
to time.  This is how it works with us, based on our worldview, morals, 
ethics, instincts, etc.  How would it work with an NN?  Well, most 
likely it will not.  But it is possible that the solution will be to 
replace the original problem with something else.  Cars were designed 
around the human driver.  If you cannot replace the driver, replace the 
car, the road, etc.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-04-03  8:58           ` Dmitry A. Kazakov
@ 2018-11-26 11:44             ` Marius Amado-Alves
  2018-11-26 17:31               ` Dmitry A. Kazakov
  0 siblings, 1 reply; 10+ messages in thread
From: Marius Amado-Alves @ 2018-11-26 11:44 UTC (permalink / raw)


> Humans can explain things, forget things, do erratic things from time 
> to time.  This is how it works with us, based on our worldview, morals, 
> ethics, instincts, etc.  How would it work with an NN?  Well, most 
> likely it will not. (Dmitry A. Kazakov)

Theoretically it is possible with deep networks of the right architecture.  The problem is that this right architecture is very hard to find and requires a "deep" understanding of the theory, and the very few people at that level, like Ng and Hinton, are now "directors" of something and don't create architectures any more.  That is now done in typewriting-monkey fashion by the new machine-learning practitioners, turning knobs by instinct at best.  So, yeah, it will not happen soon.



* Re: requiring Ada(2020?) in self-driving autonomous automobiles
  2018-11-26 11:44             ` Marius Amado-Alves
@ 2018-11-26 17:31               ` Dmitry A. Kazakov
  0 siblings, 0 replies; 10+ messages in thread
From: Dmitry A. Kazakov @ 2018-11-26 17:31 UTC (permalink / raw)


On 2018-11-26 12:44, Marius Amado-Alves wrote:
>> Humans can explain things, forget things, do erratic things from time
>> to time.  This is how it works with us, based on our worldview, morals,
>> ethics, instincts, etc.  How would it work with an NN?  Well, most likely
>> it will not. (Dmitry A. Kazakov)
> 
> Theoretically it is possible with deep networks of the right architecture.

How do you know that?  This theoretical knowledge requires 
"human-intelligence completeness", i.e. *knowing* that the class of 
problems solvable by the network includes the human-intelligence level 
used in driving cars.  We know basically nothing about human 
intelligence, not even whether it belongs to the class of FSM-solvable 
problems, or of Turing-machine-solvable problems with or without 
attached incomputable elements.

> Problem is this right architecture is very hard to find, requires a "deep" understanding of the theory, and the very few people at this level like Ng, Hinton, are now "directors" of something, don't create architectures any more.

Well, that is a secondary problem.  Though usually if a problem is 
solvable, an engineering problem we would say, then that pretty much 
defines the architecture.  And conversely, if the architecture becomes 
"rocket science", the chances are high that somebody is fooling someone.

> That is now done in typewriting-monkey fashion by the new machine-learning practitioners, turning knobs by instinct at best. So, yeah, it will not happen soon.

Yes, that is the problem with "evolving intelligence" / "genetic 
algorithms" / "swarm intelligence" etc approaches.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


end of thread, other threads:[~2018-11-26 17:31 UTC | newest]

Thread overview: 10+ messages
2018-03-28 13:26 requiring Ada(2020?) in self-driving autonomous automobiles Dan'l Miller
2018-03-28 14:24 ` Dan'l Miller
2018-03-28 14:26   ` Dan'l Miller
2018-03-28 21:38     ` Bojan Bozovic
2018-03-29  3:56       ` Dan'l Miller
2018-03-29  7:21         ` Bojan Bozovic
2018-04-02 21:06         ` Robert I. Eachus
2018-04-03  8:58           ` Dmitry A. Kazakov
2018-11-26 11:44             ` Marius Amado-Alves
2018-11-26 17:31               ` Dmitry A. Kazakov
