Monday, December 17, 2007

Valleys and Zombies

The Uncanny Valley.

It's not the name of somewhere near Monterey, California, with Steinbeck wincing in the distance. No, it is a notion closely related to Dennett's intentional stance. The uncanny valley was proposed by Japanese robotics theorist Masahiro Mori in the early 70s. It is not a psychologically motivated theory, per se, but rather a raw speculation that as the level of humanlike behavior in animatronics of various forms improves, there is a point at which we become creeped out by the similarity to real people.

For moving robotic simulacra, the weirdness is that they seem like zombies. Or, with a simulated arm, it looks too much like some Frankensteinian body part animated by lightning and stinking of embalming fluid. The reaction is revulsion.

Recent Robert Zemeckis films like The Polar Express and Beowulf have provoked similar reactions among some reviewers, who felt that seeing Angelina Jolie or Tom Hanks as lifelike muppet versions of themselves, moving around unnaturally, evoked a creep factor replete with skin-crawling sensations. The lesser-known simulated actors and actresses did not provoke the same reaction, however, because no uncanny resemblance was at work in the viewer's mind.

I suspect that our very productive human recognition circuits are at work here. I note a symmetrical oddity from my own childhood. Watching PBS and owning the Time-Life science book series, I saw many medical programs that showed open-chest or open-limb surgeries, and was intrigued by the anatomy of it all. But one evening's Nova episode was about wide-awake brain surgery. For some reason, the exposure of the brain and, very critically, the notion that the brain is somehow connected to our sense of self, personality and thoughts, struck me as extremely disturbing. I certainly knew that the brain is responsible for thought and emotion. I even knew the basic physiology and had a small understanding of modularization. But observing the connection between volitional self and brain in real time creeped me out to the extent that I remember that show to this day.

I see that experience of revulsion as projecting backwards over the uncanny valley, in a way: it exposed the mechanical underpinnings of the intentional aspects of human identity. We are not just creative, volitional tours-de-force riding an existential wave of thought; we sit just across a gentle dip, filled with zombies and robots, from nothing at all.

Monday, December 10, 2007

Davies and Materialism

Paul Davies took a solid beating and responded in turn at The Edge.

Part of his response was to the effect that he didn't mean all of science in his original discussion, but only the areas of cosmology and theoretical physics that are his bailiwick. He exempts evolutionary biologists with that stroke. His opinions on cognitive science or neuropsychology would likely be similar (I bring this up because I see a few "grand problems" for science beyond the origin of the universe and a final theory of physics; I regard the problem of mind and abiogenesis as similarly important).

But part of his discussion continued to hammer at the notion that uniformity and understandability of natural law was somehow intrinsically related to monotheism. It’s a rarefied argument that kind of bootstraps itself on the fact that Newton, Kepler and Galileo saw the hand of God in the correspondence of their mathematical abstractions to physical observations. There is an odd hint about his proclivities, though, in Davies’ mention of Lee Smolin’s evolutionary selection of universes, where other metaphorical narratives have informed the physical theory; a similar parallel exists in the use of computer metaphors in cognitive science, of course, or in ecological theories of perception. There are only a few basic algorithms available to try to explain unexplained phenomena: stochastic selection with replication (evolution), deterministic interaction (Newtonian dynamics), quantized interactive behaviors (quantum mechanics), thermodynamic uniformization, cybernetic control and feedback, computation and, yes, pure irrationality or theology.

I did some digging on some of Davies' arguments, passing back through the Wigner paper and the follow-on by Hamming, "The Unreasonable Effectiveness of Mathematics," which builds out the notion that there are essentially irrational drivers (aesthetics, play, mysticism) that push forward mathematics, and that the results in turn drive scientific theory. All of this effort in some ways parallels or rediscovers the ongoing work during the same time period concerning aspects of irrationality in the philosophy of science (Kuhn, Feyerabend, etc.). The scientific method is not an acidic, scalding, sacred pursuit devoid of irrational influences, nor are individual scientists free of personal faith in their capabilities or the possibilities of their theories; but proclaiming the entire enterprise strongly influenced by a monotheistic worldview is a strange preoccupation. Indeed, the only constants across the varied scientific pursuits are that any metaphysics is material in nature, because no other explanation has provided any hint of validation or added to the task at hand (thus any metaphysics is itself tentative), and that mathematics is a useful tool because it simply expresses relationships between objects that are not irrational but that vary in sometimes complex, non-arbitrary ways.

Friday, December 7, 2007

Dimensional Folding and Coherence

I was reading Terence Tao's blog on mathematics earlier today, enjoying some measure of understanding since his recent posts and lectures focus on combinatorics. In addition to the subject matter, though, I was interested in the way he is using his blog to communicate complex ideas. The method is unusual in that it is less formal than a book presentation, less holographic than a journal article for professional publication, more technical than an article in a popular science magazine, and yet not as sketchy as just throwing up a series of PowerPoint slides. And of course there is interaction with readers as well.

There is some rather interesting work in cognitive psychology and psycholinguistics on the relationship between writing styles and reader uptake. Specifically, the construction-integration model by Walter Kintsch and others tries to tease out how information learning is modulated by the learner's pre-existing cognitive model. There is a bit of parallelism with "constructivism" in educational circles that postulates that learning is a process of building, tearing down and re-engineering cognitive frameworks over time, requiring each student to be uniquely understood as bringing pre-existing knowledge systems to the classroom.

In construction-integration theory, an odd fact has been observed: if a text has many linking words bridging concepts from paragraph to paragraph, readers with limited understanding of a field get up to speed more fluidly than with a high-level text not written with that expectation in mind. In turn, readers advanced in the subject matter actually learn faster when the text is sparser and they bridge the gaps with their pre-existing mental models.

We can even measure this notion of textual coherence or cohesion using some pretty mathematics. If we count all the shared terms from one paragraph to the next in a text and then try to eliminate the noisy outliers, we can get a good estimate of the relative cohesion between paragraphs. These outliers arise due to lexical ambiguity or because many terms are less semantically significant than they are syntactically valuable.
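That counting step is simple enough to sketch directly. The paragraphs and tokenizer below are invented for illustration; raw overlap counts deliberately include stopwords like "the" and "and", which are exactly the noisy outliers the de-noising step is meant to handle.

```python
# A minimal sketch of inter-paragraph cohesion as shared-term counts.
# The example paragraphs are made up, not drawn from any real text.
paras = [
    "the brain controls thought and emotion",
    "thought and emotion arise in the brain",
    "robots move in uncanny ways",
]

def cohesion(a, b):
    """Count the terms shared between two adjacent paragraphs."""
    return len(set(a.lower().split()) & set(b.lower().split()))

# One score per adjacent paragraph pair.
scores = [cohesion(paras[i], paras[i + 1]) for i in range(len(paras) - 1)]
```

Here the first pair shares five terms and the second pair only one, matching the intuition that the first two paragraphs cohere while the third changes topic.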

A singular value decomposition (SVD) is one way to do the de-noising. In essence, the SVD turns a large matrix of counts into a product of three matrices, one of which carries the "singular values" along its diagonal. We can then order those values by magnitude, eliminate the small ones, and reconstitute a version of the original matrix. In effect, we are asking which of the original counts contribute little or nothing to the matrix, and eliminating those less influential terms.
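A back-of-the-envelope version of that truncation, using numpy; the paragraph-by-term counts here are invented purely for illustration:

```python
import numpy as np

# Made-up paragraph-by-term count matrix (rows: paragraphs, columns: terms).
counts = np.array([
    [3.0, 1.0, 0.0, 2.0],
    [2.0, 2.0, 1.0, 0.0],
    [0.0, 1.0, 3.0, 1.0],
])

# Factor into three matrices: counts = U @ diag(s) @ Vt, with the
# singular values s returned sorted from largest to smallest.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)

# Keep only the k largest singular values and reconstitute the matrix.
k = 2
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```

The reconstituted matrix has the same shape as the original but at most rank k, so the influence of the weakest patterns in the counts has been squeezed out.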

There are some other useful applications of this same principle (broadly called "Latent Semantic Analysis" or LSA). For instance, we can automatically discover terms that are related to one another even though they may not co-occur in texts. The reduction and reconstitution approach, when applied to the contexts in which the terms occur, will tend to "fold" together contexts that are similar, exposing the contextual similarity of terms. This has applications in information retrieval, automatic essay grading and even machine translation. For the latter, if we take "parallel" texts (texts that are translations of one another by human translators), we can fold them all into the same reduced subspace and get semantically-similar terms usefully joined together.
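To make the "folding" concrete, here is a toy sketch in the LSA spirit. The terms and counts are invented so that "car" and "automobile" never share a context directly but both co-occur with "engine", while "banana" sits off in its own context; after rank reduction their term vectors tell the story.

```python
import numpy as np

# Invented term-by-context count matrix (rows: terms, columns: contexts).
terms = ["car", "automobile", "engine", "banana"]
X = np.array([
    [2.0, 0.0, 1.0, 0.0],   # car
    [0.0, 2.0, 1.0, 0.0],   # automobile
    [1.0, 1.0, 2.0, 0.0],   # engine
    [0.0, 0.0, 0.0, 3.0],   # banana
])

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Fold the terms into a k-dimensional subspace: similar contexts collapse
# onto shared dimensions, giving related terms nearly parallel vectors.
k = 2
term_vecs = U[:, :k] * s[:k]

def cosine(a, b):
    """Cosine similarity between two term vectors in the reduced space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In this reduced space the cosine between "car" and "automobile" comes out near 1 while the cosine between "car" and "banana" stays near 0, even though "car" and "automobile" never co-occur in any context.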

Terence Tao's presentation is clearly aimed at grad students, advanced undergraduates and other mathematics professionals, so his language tends to be fairly non-cohering (not, I note, that I think him incoherent!), and much of the background is left out or connected via Wikipedia links. The links are a nice addition, helpful to those of us not active in the field, and a technique that provides a little more textual cohesion without unduly bothering the expert.