Monday, October 8, 2007

Strong assumptions and Instinct


I am a strong AI proponent, meaning that I believe that, given sufficient time, we will succeed in building a thinking machine whose behavior extends in non-trivial ways beyond the capabilities it was programmed with.

And I have pretty much always been a strong AI proponent. I remember very well wandering irrigation canals in Southern New Mexico on my way home from high school and thinking about this topic. My foster dad and mom had an HP3000 minicomputer in our house that they were using for applications development after a try at timesharing was displaced by the rise of the personal computer. It sat right next to the Altair that was used for burning EPROMs back then. Among the first applications of that HP was the game of animal guessing, where the program conducts a twenty-questions-style interrogation of the user to guess the animal they are thinking about. Does it have hair? Does it have claws? Etc. If your animal is not in the program’s database, it gets added to the decision tree, growing the learned response set as time progresses.
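The learning mechanism of that game can be sketched in a few lines. This is a hypothetical reconstruction, not the original HP3000 program: the knowledge base is a binary decision tree whose internal nodes hold yes/no questions and whose leaves hold animal names, and a wrong guess grows the tree by splicing in a new distinguishing question supplied by the player.

```python
# Minimal sketch of the "Animal" game's decision tree (hypothetical
# reconstruction; the names Node, guess, and learn are my own).

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text  # a question (internal node) or an animal name (leaf)
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None


def guess(node, answers):
    """Walk the tree using a dict of {question: bool}; return the leaf reached."""
    while not node.is_leaf():
        node = node.yes if answers.get(node.text, False) else node.no
    return node


def learn(leaf, actual_animal, new_question, answer_for_actual):
    """Replace a wrong leaf with a question node distinguishing the two animals."""
    old = Node(leaf.text)          # the animal the program guessed wrongly
    new = Node(actual_animal)      # the animal the player was thinking of
    leaf.text = new_question       # the leaf becomes an internal question node
    if answer_for_actual:
        leaf.yes, leaf.no = new, old
    else:
        leaf.yes, leaf.no = old, new


# Seed tree: one question, two animals.
root = Node("Does it have hair?", yes=Node("dog"), no=Node("snake"))

# The player thinks of a cat; the program guesses "dog" and is corrected,
# learning a new distinguishing question in the process.
wrong = guess(root, {"Does it have hair?": True})
learn(wrong, "cat", "Does it have claws?", True)
```

After the correction, the tree answers "cat" for a hairy, clawed animal while still answering "dog" and "snake" as before. The point the post makes next is exactly that this kind of growth, however charming, is fully circumscribed by its programmed mechanism.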

So I was walking home, thinking about Animal, and convinced that programmed intelligent response and learning were not what we mean by the notion of artificial intelligence. We mean something more, something that demonstrates behavioral plasticity, that specifically overcomes the limitations inherent in the programmed capabilities imbued in the system, something that shows us novel behavior.

I came up with the idea of the “instinct” kernel while walking along that day, based on the assumption that understanding the natural phenomenon of intelligence would, inevitably, lead to the natural-like capabilities that I was certain were essential for intelligence. This instinct kernel somehow encompassed the natural history that undergirds intelligence but also served as an essentially unknowable core, expressing the metaphorical notion that intelligence is difficult and unprogrammable.

So here we are 25 years later, and we have made remarkable progress in automatic speech recognition, machine translation, Roomba vacuums, and search. Yet the level of autonomous action akin to that ultimate goal still seems out of reach, despite the achievements in chess or in efficient stock picking. What are we to make of this outcome? Does 25 years of effort mean that the goal is unachievable? Of course not. The question turns on a fundamental philosophical assumption: either there is something metaphysically unique about mind that can’t be simulated, or strong AI will ultimately be untangled in much the same way that thermodynamics was untangled from demons gating atoms from vessel to vessel.

Anything less robs us of our humanity by declaring there are limitations that are insurmountable. I can’t believe that, and there is no evidence that says we should believe that.
