B1. Self-Improving AI

Self-improving AI is a meme that has been circulating since the 1980s. Current proponents of the idea include Boström and Omohundro. My own summary goes something like this:

If we get any kind of AGI going, no matter how slow or buggy it is, we can give it access to its own source code, let it analyze that code, clean it up, fix the bugs, and then rewrite the code to be as good as it can make it. We then start up this slightly smarter AGI and repeat the process until the AGIs become superintelligent.
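
The argument is easy to write down as a loop. Here is a toy sketch in Python; the ToyAGI class and its quality score are placeholders invented purely for illustration, and the "improvement" is simulated, since no real code analysis or rewriting is involved:

    # Toy sketch of the recursive self-improvement argument. ToyAGI and its
    # "quality" score are made up for illustration only; each pass is simply
    # assumed to yield a slightly better AGI, which is exactly the assumption
    # the argument rests on.
    class ToyAGI:
        def __init__(self, quality: float):
            self.quality = quality  # stand-in for "how good the code is"

        def is_superintelligent(self) -> bool:
            return self.quality >= 100.0  # arbitrary threshold for the thought experiment

        def rewrite_own_code(self) -> "ToyAGI":
            # Analyze, clean up, fix bugs, rewrite: modeled as a fixed gain.
            return ToyAGI(self.quality * 1.1)

    def self_improvement_loop(agi: ToyAGI) -> ToyAGI:
        while not agi.is_superintelligent():
            agi = agi.rewrite_own_code()  # restart the slightly smarter AGI
        return agi

    print(self_improvement_loop(ToyAGI(1.0)).quality)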

On the surface, this is irrefutable. We already have examples of systems improving themselves: we can buy a cheap 3D printer and then quite cheaply print out parts for a much better 3D printer. Or we can make computer chips that go into computers that design better computer chips.

Not to mention the evolution of all species in nature.

I look at it from an Epistemologist's point of view and say "That's a Hardline Reductionist idea that should not have made it out of the 20th century".

The idea, at its inception, imagined an AGI as something written by teams of human programmers using software development tools and mathematical equations.

But I think the closest we could come to this outcome is code that is perfect, with humans and machines alike agreeing that there are no more improvements to be made. And the resulting AGIs would still not be superintelligent.

The most likely outcome is that we all realize the folly of this argument and won't even try.

It's not about the code.

The number of lines of code in AI-related projects has been declining rapidly:

2004: Cyc, 6 million FOPC/CycL propositions
2012: 34,000 lines of Python/CUDA, Krizhevsky et al. for ImageNet
2013: 1,571 lines of Lua to play Atari games
2017: 196 lines of Keras to implement Deep Dream
2018: under 100 lines of Keras for research-paper-level results

And all of these (except Cyc, included as the most famous example of a 20th century Reductionist AI system) demonstrate new levels of power in Machine Learning.
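
To make the trend concrete, here is a minimal Keras sketch, a standard MNIST digit classifier rather than the code of any project listed above, showing that a working image classifier now fits in roughly twenty lines:

    # Minimal Keras example: a small convolutional classifier for MNIST digits.
    # Standard textbook material, shown only to illustrate how little code a
    # working Machine Learning model needs today.
    from tensorflow import keras
    from tensorflow.keras import layers

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0
    x_test = x_test[..., None].astype("float32") / 255.0

    model = keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))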

The limits to intelligence are not in the code. In fact, they are not even technological.

The limit of intelligence is the complexity of the world. Omniscience is unavailable. The main purpose of intelligence is to guess, to jump to conclusions on scant evidence, and to do it well, based on a large set of historical patterns of problems and their solutions or events and their consequences.

Because scant evidence is all we will ever have. We don't even know what goes on behind our backs.

And because all intelligence is guessing, I have repeatedly claimed that "All Intelligences Are Fallible".

We are already making machines that are better than humans at some aspects of guessing. Protein Folding and playing Go are examples of this.

And these machines will get bigger and better at what they do and will be superhuman in various ways and in many problem domains, simply based on a larger capacity to hold, look up, or search useful patterns.

The code doing that can be hand-optimized to the point where any improvement an AI could make to it would be insignificant. My own code in the inner loop for Understanding any language on the planet (once it has learned it, in "inference mode") is about 90 lines of Java. We can expect at best minor improvements in efficiency and speed.
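
To give a feel for what such an inner loop can look like, here is a purely hypothetical Python sketch, not the 90 lines of Java mentioned above and not any real system, of a generic pattern-lookup routine in "inference mode": match incoming tokens against already-learned patterns, longest match first.

    # Purely hypothetical sketch, not the Java inner loop described above.
    # It only illustrates the general shape of pattern lookup in "inference
    # mode": greedily match the longest known token pattern at each position.
    from typing import Dict, List, Tuple

    def understand(tokens: List[str],
                   learned_patterns: Dict[Tuple[str, ...], str]) -> List[str]:
        interpretations = []
        i = 0
        while i < len(tokens):
            match_len, meaning = 1, "<unknown>"
            # Try the longest window first; fall back to shorter ones.
            for length in range(len(tokens) - i, 0, -1):
                candidate = tuple(tokens[i:i + length])
                if candidate in learned_patterns:
                    match_len, meaning = length, learned_patterns[candidate]
                    break
            interpretations.append(meaning)
            i += match_len
        return interpretations

    # Example: one learned pattern, one unknown word.
    print(understand(["good", "morning", "everyone"],
                     {("good", "morning"): "GREETING"}))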

It comes down to the corpus. In my domain, NLU, simple tests can be scored at 100% after a few minutes of learning on a laptop. Continued learning for days and weeks would provide a larger sample of vocabulary-in-appropriate-contexts, which would mainly correct misunderstandings in corner cases. But these corpora still fall short, by several orders of magnitude, of the gathered life experience of a human at age 25.

In an ML setting, the main limit on intelligence is corpus size.

Future Artificial Intelligences will be nothing like what AGI fans have been fear-mongering about. Those are 20th century Reductionist AI ideas, and their proponents are blind to the most fundamental basics of epistemology. Reductionist GOFAI has been demonstrated to be inferior, in its own domains, to even semi-trivial Machine Learning methods.

We need AGL, not AGI.

Machines learning to code

As of this writing, there are a handful of available code-writing systems based on ML technology that has learned from large quantities of open source code, for example GitHub Copilot, OpenAI Codex, and Amazon CodeWhisperer.

They have not yet surpassed human programmers.

But it's not about writing code either. AIs writing code is about as silly as AI magazine covers with pictures of robots typing. :-D

In the future, if we want a computer to do something, we will have a conversation (speaking and listening) with the computer. The conversation will be at the level of discussing a problem with a competent co-worker or professional.

It may spontaneously ask clarifying questions. I call this "Contiguously Rolling Topic Mixed Initiative Dialog"; others talk of these bots as "Dialog Agents". But this will go beyond Siri or Alexa. And when the computer Understands exactly what you want done, it just does it. Why would Reductionist-style programming be a necessary step?

Yes, there will still be lots of places where we want to use code. But whether that code is written by humans or AIs will make much less of a difference than we might expect based on today's use of computers.