Monday, December 17, 2007

Valleys and Zombies

The Uncanny Valley.


It's not the name of somewhere near Monterey, California, with Steinbeck wincing in the distance. No, it is a notion closely related to Dennett's intentional stance. The uncanny valley was proposed by the Japanese robotics theorist Masahiro Mori in the early 70s. It is not a psychologically motivated theory, per se, but rather a raw speculation that as animatronics of various forms become more humanlike, there comes a point at which we are creeped out by the similarity to real people.

For moving robotic simulacra, the weirdness is that they seem like zombies. Or, with a simulated arm, it looks too much like some Frankensteinian body part animated by lightning and stinking of embalming fluid. The reaction is revulsion.

Recent Robert Zemeckis films like The Polar Express and Beowulf have provoked similar reactions among some reviewers, who felt that seeing Angelina Jolie or Tom Hanks as lifelike muppet versions of themselves, moving around unnaturally, evoked a creep factor replete with skin-crawling sensations. The lesser-known simulated actors and actresses did not provoke the same reaction, however, because there was no uncanny resemblance at work in the viewer’s mind.

I suspect that our very productive human recognition circuits are at work here. I note a symmetrical oddity from my own childhood. Watching PBS and owning the Time-Life science book series, I saw many medical programs that showed open-chest or open-limb surgeries, and was intrigued by the anatomy of it all. But one evening's Nova was about wide-awake brain surgery. For some reason, the exposure of the brain and, very critically, the notion that the brain is somehow connected to our sense of self, personality and thoughts, struck me as extremely disturbing. I certainly knew that the brain is responsible for thought and emotion. I even knew the basic physiology and had a small understanding of modularization. But observing the connection between volitional self and brain in real time creeped me out to the extent that I remember that show to this day.

I see that experience of revulsion as projecting backwards over the uncanny valley, in a way, exposing the mechanical underpinnings of the intentional aspects of human identity. We are not just creative, volitional tours-de-force riding an existential wave of thought; we sit just across a gentle dip, filled with zombies and robots, from nothing at all.


Monday, December 10, 2007

Davies and Materialism

Paul Davies took a solid beating and responded in turn at The Edge.

Part of his response was to the effect that he didn’t mean all of science in his original discussion, but only the area of cosmology and theoretical physics that is his bailiwick. He exempts evolutionary biologists with that stroke. His opinions on cognitive science or neuropsychology would likely be similar (I bring this up because I see a few “grand problems” for science beyond the origin of the universe and a final theory of physics; the problem of mind and abiogenesis strike me as similarly important).

But part of his discussion continued to hammer at the notion that uniformity and understandability of natural law was somehow intrinsically related to monotheism. It’s a rarefied argument that kind of bootstraps itself on the fact that Newton, Kepler and Galileo saw the hand of God in the correspondence of their mathematical abstractions to physical observations. There is an odd hint about his proclivities, though, in Davies’ mention of Lee Smolin’s evolutionary selection of universes, where other metaphorical narratives have informed the physical theory; a similar parallel exists in the use of computer metaphors in cognitive science, of course, or in ecological theories of perception. There are only a few basic algorithms available to try to explain unexplained phenomena: stochastic selection with replication (evolution), deterministic interaction (Newtonian dynamics), quantized interactive behaviors (quantum mechanics), thermodynamic uniformization, cybernetic control and feedback, computation and, yes, pure irrationality or theology.

I did some digging on some of Davies’ arguments, passing back through the Wigner paper and the follow-on by Hamming, “The Unreasonable Effectiveness of Mathematics,” which builds out the notion that there are essentially irrational drivers (aesthetics, play, mysticism) that push mathematics forward and that the results in turn drive scientific theory. All of this effort in some ways parallels or rediscovers the work during the same period concerning aspects of irrationality in the philosophy of science (Kuhn, Feyerabend, etc.). The scientific method is not an acidic, scalding, and sacred pursuit devoid of irrational influences, nor are individual scientists devoid of personal faiths about their capabilities or the possibilities of their theories, but proclaiming the entire enterprise as strongly influenced by a monotheistic worldview is a strange preoccupation. Indeed, the only constant across the varied scientific pursuits is that any metaphysics is material in nature, because no other explanation has provided any hint of validation or added to the task at hand (thus any metaphysics is itself tentative), and that mathematics is a useful tool because it is simply a way of expressing relationships between objects that vary in sometimes complex but non-arbitrary ways.


Friday, December 7, 2007

Dimensional Folding and Coherence

I was reading Terence Tao's blog on mathematics earlier today, enjoying some measure of understanding since his recent posts and lectures focus on combinatorics. In addition to the subject matter, though, I was interested in the way he is using his blog to communicate complex ideas. The method is rather unique in that it is less formal than a book presentation, less holographic than a journal article for professional publication, more technical than an article in a popular science magazine, and yet not as sketchy as just throwing up a series of PowerPoint slides. And of course there is interaction with readers, as well.

There is some rather interesting work in cognitive psychology and psycholinguistics on the relationship between writing styles and reader uptake. Specifically, the construction-integration model by Walter Kintsch and others tries to tease out how information learning is modulated by the learner's pre-existing cognitive model. There is a bit of parallelism with "constructivism" in educational circles that postulates that learning is a process of building, tearing down and re-engineering cognitive frameworks over time, requiring each student to be uniquely understood as bringing pre-existing knowledge systems to the classroom.

In construction-integration theory, an odd fact has been observed: if a text has lots of linking words bridging concepts from paragraph to paragraph, those with limited understanding of a field can get up to speed with greater fluidity than if the text is high-level and not written with that expectation in mind. In turn, those who are advanced in the subject matter actually learn faster when the text is more sparse and the learner bridges the gaps with their pre-existing mental model.

We can even measure this notion of textual coherence or cohesion using some pretty mathematics. If we count all the shared terms from one paragraph to the next in a text and then try to eliminate the noisy outliers, we can get a good estimate of the relative cohesion between paragraphs. These outliers arise due to lexical ambiguity or because many terms are less semantically significant than they are syntactically valuable.
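
A minimal sketch of that counting step in Python, before any de-noising; the toy paragraphs and the stopword list are placeholders, not a real preprocessing pipeline:

    # A toy version of the raw counting step: shared terms between adjacent
    # paragraphs as a crude cohesion signal.
    STOPWORDS = {"the", "a", "of", "and", "is", "in", "are"}

    def tokens(paragraph):
        return {w.lower().strip(".,") for w in paragraph.split()} - STOPWORDS

    def cohesion(paragraphs):
        return [len(tokens(a) & tokens(b))
                for a, b in zip(paragraphs, paragraphs[1:])]

    paragraphs = ["The valley model predicts revulsion.",
                  "Revulsion in the model peaks near realism.",
                  "Taro and cassava are root crops."]
    print(cohesion(paragraphs))  # [2, 0]: a cohesive pair, then a break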

A singular value decomposition (SVD) is one way to do the de-noising. In essence, the SVD is an operation that changes a large matrix of counts into a product of three matrices, one of which contains "singular values" along the matrix diagonal. We can then order those values by magnitude, eliminate the small ones, and reconstitute a version of the original matrix. By doing this, we are in effect asking which of the original counts contribute little or nothing to the original matrix and eliminating those less influential terms.
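
Here is what the truncation looks like with numpy; the count matrix is invented for illustration:

    import numpy as np

    # A toy paragraph-by-term count matrix (rows: paragraphs, columns: terms).
    counts = np.array([[3, 1, 0, 2],
                       [2, 1, 0, 3],
                       [0, 0, 4, 1]], dtype=float)

    # Decompose into U * diag(s) * Vt, keep only the k largest singular
    # values, and reconstitute a de-noised version of the matrix.
    U, s, Vt = np.linalg.svd(counts, full_matrices=False)
    k = 2
    denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    print(np.round(denoised, 2))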

There are some other useful applications of this same principle (broadly called "Latent Semantic Analysis" or LSA). For instance, we can automatically discover terms that are related to one another even though they may not co-occur in texts. The reduction and reconstitution approach, when applied to the contexts in which the terms occur, will tend to "fold" together contexts that are similar, exposing the contextual similarity of terms. This has applications in information retrieval, automatic essay grading and even machine translation. For the latter, if we take "parallel" texts (texts that are translations of one another by human translators), we can fold them all into the same reduced subspace and get semantically-similar terms usefully joined together.
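
And a sketch of the term-similarity side, assuming a term-by-context count matrix; cosine similarity between rows of the reduced representation stands in for the "folding" of contexts:

    import numpy as np

    # Term-by-context counts: "taro" and "cassava" never co-occur, but both
    # share contexts with "root", so the reduction pulls them together.
    terms = ["taro", "cassava", "root", "valley"]
    X = np.array([[2, 0, 1, 0],
                  [0, 2, 1, 0],
                  [1, 1, 2, 0],
                  [0, 0, 0, 3]], dtype=float)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    reduced = U[:, :2] * s[:2]            # each term as a 2-d vector

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(terms[0], terms[1], round(cos(reduced[0], reduced[1]), 2))  # high
    print(terms[0], terms[3], round(cos(reduced[0], reduced[3]), 2))  # near zero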

Terence Tao's presentation is clearly aimed at grad students, advanced undergrads and other mathematics professionals, so his language tends to be fairly non-cohering (not, I note, that I think him incoherent!), and much of the background is left out or is connected via Wikipedia links. The links are a nice addition, helpful to those of us not active in the field, and a technique that provides a little more textual cohesion without unduly bothering the expert.

Saturday, November 24, 2007

Multiverses and Regress

Paul Davies and Dinesh D’Souza have much in common. Both summon Newton’s faith, which supported his belief in an ordered universe, as a means of explaining how modern science is dependent on faith. D’Souza goes much further than Davies, however, in denying that any culture other than Christian Europe could have developed science and reason as a central cultural pillar upon which liberalism and America ultimately emerged.

D’Souza’s flaw is primarily in his historical dismissal of the achievements of other cultures, like Islam, in the development of math and science. In his presentations and articles he tends to quote a single Islamic philosopher who believed that Allah could be as irrational as he wanted to be, as if that led all Islamic thinkers to dismiss as impossible any understanding of the universe beyond the ideas of the Koran. Of course we know that was not the case, and hence we get algebra, algorithms, and star names like Aldebaran from the Islamic world.

Ptolemy, Pythagoras, Democritus and Epicurus would be similarly shocked by D'Souza's claims.

Davies’ error is in his assumption that science must be a closed intellectual system, thus submitting the entire explanatory framework to an argument of infinite regress. If we see symmetry in forces, there must be an explanation for that symmetry. Then we need an explanation beyond that symmetry. If we assume that the existing laws are effective explanations, he declares that we have invoked faith that has the same essential character as the faith of the religious.

This, of course, dismisses the entire project of inference and abduction—the contingency of liberal rationalism—using a logical positivist conception of theory. We do not, as Davies declares, claim scientifically that there may be no explanation for the ordered nature of physical law. We simply hold that there is no clear theoretical construct and supporting evidence to provide such an explanation. We keep looking and constantly ask ourselves: are there any exceptions to relativistic conceptions of gravitation? We look for anomalous disconfirmation because it leads to conceptual revision. We tentatively hold forth multiple universes and ask whether there is any way to confirm or refute the idea, or whether the idea has consequences.

That, to my mind, is nothing like religious faith, and is only loosely allied with the grand ordered Deism of Newton.


Friday, November 23, 2007

Math, Science and Popular Culture

A digital video recorder (DVR) destroyed my evenings. I write that, though, with some guilty pleasure. I really was not much of a television viewer until I upgraded our technology stack, including an HD receiver with DVR pumping glorious time-shifted detail through an LCD television with a surround-sound system.

That was just over a year ago and my evening productivity has suffered. I spent probably six months just exploring features, programming a universal remote and capturing programming. Then I settled in to watch some specific programs in addition to movies and documentaries.

There are four network programs that I want to mention because of their unexpected cultural importance: the CSI franchise, House, Numb3rs and Criminal Minds. In each case, science or mathematics plays an essential role. In each case, the background material is actually well-researched, although the outcomes are almost always ridiculously neat in order to fit the format.

Numb3rs is the show closest to my background in terms of using algorithms and mathematics to solve problems. In a recent show, in fact, one mathematician used a classification and regression tree (CART) algorithm to do something. I’ve used CART before. Some of the other topics in social network analysis and covering algorithms also ring vaguely true, though they are distorted through a lens of excited elaboration that gets tiresome over time.
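
For the curious, fitting a CART is only a few lines in a modern toolkit; this sketch uses scikit-learn with entirely made-up "case" features, nothing from the show:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical case features: [has_alibi, near_scene, prior_record]
    X = [[1, 0, 0], [0, 1, 1], [0, 1, 0], [1, 0, 1]]
    y = ["cleared", "suspect", "suspect", "cleared"]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["has_alibi", "near_scene", "prior_record"]))
    print(tree.predict([[0, 1, 1]]))   # -> ['suspect']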

Northeastern University keeps track of some of the Numb3rs mathematics, here.

But more important, I think, is the cultural message that intelligent people with highly developed skill sets are heroes. Even Gregory House is a hero of sorts in his coldly analytical pursuit of truth, his anti-theism and dedication to correct diagnoses. I contrast this kind of programming with 90210, Dallas or other cultural phenomena that seemed to cater to baser ideas of wealth, power and privilege. Is the brain on the rise?

I have to admit that I am getting some kind of network television fatigue here after a year with the DVR and shows are stacking up on the hard drive. I may have to go back to a stricter media diet, but hope the science and math keeps a place on the television menu.

Wednesday, November 14, 2007

Folksonomics and Conceptual Metadata

I necessarily think about information design as a part of my professional duties. I also try to keep frosty on novel ways that information might be presented. So I naturally have been revisiting these notions in some avocational research I have been delving into concerning health care reform.

Health care reform is, of course, a deeply political topic that breaks down along several ideological and interest dimensions. In supporting the claims of all sides, basic research is mined and often cherry-picked to build a case. And my point in this entry is not to make an ideological or political claim but to describe how that information is discovered, used and reused.

Now I was quite a novice on the topic of health care economics and reform ideas when I began researching the topic. I certainly had personal negative and positive experiences over the years. I also had heard the reviews and blowback over Michael Moore’s Sicko (though have not yet seen the film). But, beyond that, I had no real understanding of who the players were, what the research suggested, or what the counterclaims were.

My understanding was built from a range of sources, most of which were simply not accessible to casual researchers like me even a decade ago. Instead, you had to be a Beltway insider who subscribed to think tank newsletters and research publications. But now I can download and read Commonwealth Fund reports, Cato news briefs, and a host of other resources and become a moderately well-informed amateur researcher. I can access huge swaths of blogs and commentary, reflecting different perspectives. I can even organize the information by collecting it together and labeling it for easy recovery later.

What I can’t easily do yet is answer specific questions that have not already been answered in some publication, but that emerge out of the collected information. For example, after I read in a Commonwealth Fund report how the German medical system did not have the kinds of rationing and waits for access that we associate with certain aspects of the British and Canadian systems, I wanted to know the details of the German system. Was it a single-payer or nationalized health service? Perhaps a hybrid? What was the role of doctors and information technology? It took quite a lot of searching to finally be able to answer those questions, ultimately using a Siemens Medical Technology précis and market analysis of the German system.

Could approaches like structured metadata via Semantic Web technologies assist me in these tasks? Perhaps, but that seems to require that propositional information above the level of named entity extraction be accurately indexed. For the first question, I would need documents labeled with “Structure of the German Medical System” or the equivalent, rather than the many nuanced and varied ways that we write. Moreover, the proposition needs to express the timeliness of the resource, a problem I frequently encounter when trying to fix problems with my Linux computers. How timely is a given piece of information?

We can’t, however, expect individuals to be able to code that metadata in any consistent way, though I believe there is a folksonomic method that can lend a hand: a community of users can gradually improve the metadata structure and content in much the same way that Wikipedia is gradually improved. The Wikipedia model has also shown that quality can be maintained—with fits and starts—by a community of users and some policies in place.

Friday, November 2, 2007

Leftward and Magical Thinking

What if all of the beauty of the arts could be boiled down to a tendency to turn left? I’m not kidding, and I’m not making a political statement. I’m talking about the real thing: physical orientation.

So where to start?

There is a remarkable literature on the relationship between magical thinking and schizophrenic behavior. Not surprisingly, schizotypic ideation (delusional psychotic thoughts) correlates with magical thinking indexes. So, if you believe in mystical connections throughout the universe, you might also believe that forces are communicating with you.

But the connections are more interesting still. If you ask people to grade the relationship between different words on a 1-5 scale, with 1 indicating the words are unrelated and 5 meaning they are nearly identical, people who rate highly on magical thinking indexes tend to judge unrelated words as more related than people with low index ratings do.

The theory is as follows: semantic memory is tied to right-brain functioning and dopamine activity is abnormally high in both schizotypic and magical thinking people in the right hemisphere. Specifically, spreading activation is enhanced by dopamine. We slur ideas together when we have high levels of dopamine. We believe things are mystically connected and, at the extreme, we even get semantic relationships gated into perceptual memory and turned into hallucinations. But at lower levels of activation, we get the insanity of artistry.

And it is the right hemisphere action that results in the tendency to turn left because the right hemisphere controls left motor functions. So those with schizophrenia, those who damage their brains with amphetamines, and those with magical thinking tendencies will veer a bit left when asked to mark the center of a piece of paper. At the extreme, Parkinson’s patients with extreme dopamine issues will wander in leftward arcs. So will rats.

Leftward, always leftward. The artists all lingered to the left while finding patterns where the rest of us found only randomness. It is the source of gurus and mystics, artists and poets, and, perhaps, NASCAR drivers.

Sunday, October 28, 2007

Mind and Connectivity

David Brooks in The New York Times proclaims the outsourcing of his mind into GPS devices, Wikipedia and cell phones. Technology is changing the way we think and remember things. It is making us both mentally lazier and, as I’ve suggested before, more accurate.

I lamented this recently when my Casio atomic solar watch died due to the gumming up of the backlight button from too much swimming, riding and sweating over the giant, clunky, but remarkably functional watch. I had to start remembering the date and the day of the week. I had to stop using my preset alarms to remind me of the differences between Monday early-release days and other days for my son’s school. I had to expect to be inaccurate with my analog backup watch.

And then there was another day in September when I was out of town and having lunch with friends. The topic of tapioca pudding came up and none of us could recall the origin of tapioca. Out came the iPhone and we quickly resolved the question, fixing my partially inaccurate recollection from my Peace Corps time in Fiji that tapioca was related to dalo (taro). In fact, tapioca comes from another root crop called cassava.

So these technologies may sometimes be reducing our cognitive commitments to certain information (the day and date), but they are also allowing us to reduce ambiguity in answering questions of a factual nature, and in a manner that exceeds what our natural mental capacities allow.

But what is missing for me is information convergence, where the knowledge contained in your DVR or GPS path memory is instantly accessible, exportable, importable, and available for all your needs. My car links via Bluetooth to my phone, but has separate voice memory for dialing. I can download new tunes through iTunes but my car can’t automatically grab Car Talk off NPR through the satellite radio and store it for later listening. Nor can I export my DVR content easily to my laptop for watching while at the park.

But the right technologies are available for the next step forward. The Semantic Web provides a beginning to achieve this step by providing a standard for self-describing information. Extensible Markup Language (XML) is one basis for the Semantic Web: an information representation language that is similar to HTML but not designed merely for web page presentation. The problem is that a representation language is useless unless every technology agrees on how to represent different information resources. Semantic Web standards provide a meta-description for XML data that makes it possible to code information in meaningful ways, where “meaningful” is defined by the capacity to share information between systems, converting information into knowledge in some sense.
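
A sketch of what self-describing information looks like in practice, using the rdflib Python library and an invented example.org vocabulary:

    from rdflib import Graph, Literal, Namespace, URIRef

    # An invented vocabulary for illustration; real systems would share a schema.
    EX = Namespace("http://example.org/media/")

    g = Graph()
    episode = URIRef("http://example.org/media/the-war-ep1")
    g.add((episode, EX.title, Literal("The War, Episode 1")))
    g.add((episode, EX.source, Literal("PBS")))
    g.add((episode, EX.storedOn, Literal("dvr")))

    # Because the triples are self-describing, another system can query them
    # without knowing anything about the DVR's internal format.
    for subject, _, _ in g.triples((None, EX.source, Literal("PBS"))):
        print(subject)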

Someday soon, my Ofamind.com approximation of Vannevar Bush’s Memory Extender may be able to talk to my DVR and help me find quotes from PBS’s The War, and my GPS will help me retrace the path of the California gold rush, as exported by Wikipedia. Someday soon.

Monday, October 22, 2007

Funding and Alternatives

My weekend ended with marvelous news: a new funding round for my startup effort! The final terms are still under negotiation, but it should serve as an excellent next step for critical phases of R&D towards a large-scale, online knowledge management technology.

Meanwhile, my 9-year-old has been spending a remarkable amount of time on YouTube...

Friday, October 19, 2007

Mere Reason and Education

I commented recently on the problematic issue of education and indoctrination of the young in religious matters on Recursivity. My position has always been that extremism in religious viewpoints (and note the emphasis) must be primarily related to early religious indoctrination that essentially forbids nuance and careful evaluation of facts, opinions and ideas. I can just barely imagine that an adult exposed to a rich panoply of ideas and perspectives can come to hold extremist views, and this relates to my concern over issues like school voucher programs that contribute to religious schools. In fact, when confronted with an extremist, my folk psychology would immediately ask how it is that a person raised in a nurturing and unbiased environment could possibly have transitioned to a perspective of extremism. In effect, I would want to know who harmed them or what crystallizing moment of injustice caused their change.

I’ve argued previously that it is precisely the exposure to the humanity of others via television that has occurred post-WWII that has changed the way we regard war, aggression and the universality of human rights. Racism, cruelty and mass civilian deaths in wartime are no longer acceptable because we now see others as human, instantaneously, via satellite, and with full translations. Why not accord them the dignity of humanity, spare them collective punishment, and avoid torture when we would want the same for ourselves? Our morals and ethics have improved because of secular education and reason combined with extrinsic factors like technology and social dialog, not through some new form of faith that took hold during that period.

The standard response and challenge to me is to apply a standard of intellectual arbitrariness to the topic and claim that any perspective is still a perspective, and therefore I am as guilty as the religious. There is a curious symmetry with the postmodernist critique of science and reason, here, in the claim that there is no standard for judging the merits of ideas except through a subjective narrative. And my narrative is no better than anyone else's.

But, by this standard, I see faith and reason being leveled to the same standard as intellectual mechanisms, and that, I think, robs the faithful of their most powerful way of regarding faith: that they have special knowledge that is transcendental to mere reason. Then evolutionary arguments are interesting but irrelevant because they are “mere reason” and the schools are no threat whatsoever in the matter of ideas.

This will make little difference to people like Tim LaHaye, author of the Left Behind series, who writes in The Atlantic this month:

“Until we break the secular educational monopoly that currently expels God, Judeo-Christian moral values and personal accountability from the halls of learning, we will continue to see academic performance decline and the costs of education increase, to the great detriment of millions of young lives.”

His article comes directly after Sam Harris’ musings titled “God Drunk Society” in a collection of short subjects on “The American Idea” by many august writers. LaHaye even slurs together some dubious claims about socialism in early America to justify his claims. Actually, California schools teach a complex set of values that seem to transcend and encompass LaHaye’s desires quite nicely:

"Each teacher shall endeavor to impress upon the minds of the pupils the principles of morality, truth, justice, patriotism, and a true comprehension of the rights, duties, and dignity of American citizenship, and the meaning of equality and human dignity, including the promotion of harmonious relations, kindness toward domestic pets and the humane treatment of living creatures, to teach them to avoid idleness, profanity, and falsehood, and to instruct them in manners and morals and the principles of a free government. (b) Each teacher is also encouraged to create and foster an environment that encourages pupils to realize their full potential and that is free from discriminatory attitudes, practices, events, or activities, in order to prevent acts of hate violence, as defined in subdivision (e) of Section 233."

And this is merely the result of very modern reason.


Monday, October 15, 2007

Discovery and Multiple Explanations

My startup, Ofamind, has both a “classification engine” and a “discovery engine” as part of the core technology. A discovery engine is an algorithmic system that tries to show you new content based on what you have been looking at in the past. For Ofamind, the discoveries are currently over scientific newsfeeds, scientific papers and patents. The role of the classification engine is to make it easier to add content to your interest collections (“views” in Ofamind) as you browse the web by automatically suggesting how to add the new web content (via a Firefox extension).

Both are based on a combination of content and linkages between documents. For content, the system goes a step further than current methods by using extracted people, places and organizations to improve the quality of the matches, as well as leveraging aspects of document structure. Ongoing work (prior to the full public release of the system) is trying to further improve the disambiguation of “named entities” to make it possible to answer useful research questions about topics and the researchers who are involved in those topics.
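
The general flavor of that combination, though emphatically not Ofamind's actual implementation, can be sketched as plain term overlap blended with a boost for shared named entities; every name and weight here is invented:

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def match_score(doc1, doc2, entity_weight=0.5):
        # Each doc is {"terms": set of words, "entities": set of named entities}.
        term_sim = jaccard(doc1["terms"], doc2["terms"])
        entity_sim = jaccard(doc1["entities"], doc2["entities"])
        return (1 - entity_weight) * term_sim + entity_weight * entity_sim

    a = {"terms": {"knowledge", "discovery", "engine"}, "entities": {"Ofamind"}}
    b = {"terms": {"discovery", "engine", "patents"}, "entities": {"Ofamind"}}
    print(match_score(a, b))   # terms half-shared, entities identical -> 0.75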

I was therefore intrigued when I learned about the Netflix Prize from 3QuarksDaily. The Netflix Prize is offering a US$1M purse to any group that can do a better job of predicting people’s film ratings than Netflix’s current system, Cinematch. The prize term runs through 2011 and I am seriously considering giving it a run.

Reading through the work by the leading group (at around 8% improvement over Cinematch so far), the approaches seem rather ho-hum at first glance: look at the ratings of people who have seen movies similar to my viewing choices, then use their other ratings to suggest new movies to me. Then we get into the fine details, and two main themes develop. First, there is the problem of high levels of variability for some interest areas versus others. In other words, the landscape of choices is not very smooth. Different movie genres, directors’ outputs and actor choices may all influence small pools of choices made by individuals who otherwise share my interests. So smoothing methods are introduced that try to capture latent variables or trends in the data that can reduce the distortions of outliers and improve the overall system performance.
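
A toy version of the latent-variable idea, using a truncated SVD on an invented ratings matrix; the serious entries handle the mostly-missing ratings with iterative fitting rather than treating them as zeros:

    import numpy as np

    # Rows are viewers, columns are films; 0 marks "unrated" in this toy.
    R = np.array([[5, 4, 0, 1],
                  [4, 5, 1, 0],
                  [1, 0, 5, 4],
                  [0, 1, 4, 5]], dtype=float)

    # Keep two latent factors; the reconstruction smooths over the gaps.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    k = 2
    R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    print(np.round(R_hat, 1))   # the zeros are replaced by plausible ratings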

But the other methodology that several groups have started looking at is based on combining the decisions of several different approaches. Indeed, one blogger from Columbia University noted that this was similar to Epicurus’ Principle of Multiple Explanations. It is also widely used in classification algorithms like AdaBoost in the hope of overcoming the problem of overfitting to training data, which means creating a decision process that is too finely tuned to any special oddities lurking in the training data.
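
In its simplest form, the combination is just a weighted average of the individual predictors' outputs; the weights below are placeholders for what a team would fit on held-out data:

    def blend(predictions, weights):
        # predictions: per-model predicted ratings for one (viewer, film) pair.
        total = sum(weights)
        return sum(p * w for p, w in zip(predictions, weights)) / total

    # Three hypothetical models score the same unseen film for one viewer.
    print(blend([3.8, 4.2, 3.5], weights=[0.5, 0.3, 0.2]))   # -> 3.86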

One area that is not currently exploited, though, is the direct use of movie metadata (directors, actors, release date, genre) in the models. It can be argued that some (if not most) of that metadata and its influence is encapsulated in the choices made by people, thus producing an expert analysis in their ranking strategy. But I think there may be some significant value in a hybrid approach that looks at when the metadata connections make better predictions than the crowds. And that kind of hybrid approach is precisely what I am working on with Ofamind.

Tuesday, October 9, 2007

Swearing and Power

Steven Pinker lassos up a whole rodeo of bad words and swearing at The New Republic. His arguments largely concern the connotative nature of swear words for creating an affective reaction in the reader or hearer and are a paring-down of the broad treatment in The Stuff of Thought.

There is an oddity, though, in his treatment of the word "fuck" from an evolutionary psychology perspective. The problem of parasitism has been put forward as a driver for the evolution of sex because sexual recombination is a good diversity pump for immunological competence in the face of a rapidly changing threat environment. So it is natural that Pinker would invoke this theory of sex to try to get a handle on why sexual terms might carry taboo weight down through history. But the argument falls somewhat flat in the most common way "fuck" is used: "fuck you." He rightly points out that there is a symmetry with "damn you" but with the unmodern religious nature of "damning" replaced by the up-to-date invocation of taboo sexual terminology. "Eat shit" has about the same level of nastiness as "fuck you" but is more clearly tied to the problem of parasitism.

Yet, the idea that our connotations are tied to the icky and emotionally fraught aspects of sex (he even invokes parental investment theory at one point) strikes me as less likely than seeing the word as exactly the kind of power word that the 70s feminists saw in "cunt." "Fuck you" is a power phrase that is tied to non-voluntary sex and rape. Rape (and the protection of females from rape) must have had a profound role in the environment of evolutionary adaptedness that backgrounds our psychological makeup. The fact that "fuck you" was most often used between men until recently (I suspect) bears this out. It ungrammatically asserts that the receiver of the phrase is somehow going to receive the profane act in a passive role. It demeans them with a conveniently short and phonologically plosive imprecation.

True, the term "fuck" has common currency these days but still resonates with a more animalistic reference to sex when used in that fashion: "What have those two been up to? They were fucking" as compared with "They were having sex" (clinical) or "They were making love" (euphemistic and 70s-ish).

So do we linguistically sort and invent terms over time to find optimal curses that carry the right level of phonological, semantic and pragmatic properties? Absofuckinglutely.

Monday, October 8, 2007

Strong Assumptions and Instinct


I am a strong AI proponent, meaning that I believe that, given sufficient time, we will succeed in building a thinking machine that thinks in a non-trivial way beyond the capabilities it was programmed with.

And I have pretty much always been a strong AI proponent. I remember very well wandering irrigation canals in Southern New Mexico on my way home from high school and thinking about this topic. My foster dad and mom had an HP3000 minicomputer in our house that they were using for applications development after a try at timesharing was displaced by the rise of the personal computer. It sat right next to the Altair that was used for burning EPROMs back then. Among the first applications of that HP was the game of animal guessing, where a twenty-questions-style interrogation of the user tries to guess the animal they are thinking about. Does it have hair? Does it have claws? Etc. If your animal is not in the program’s database, it gets added to the decision tree, growing the learned response set as time progresses.
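
The whole learning trick fits in a few lines. A sketch of the Animal scheme in Python (not the original HP code, of course): the tree grows one yes/no question for every animal it fails to guess.

    # A sketch of Animal's growing decision tree. Each node is either a leaf
    # (an animal name) or a (question, yes_subtree, no_subtree) tuple.
    def play(node):
        if isinstance(node, str):                       # leaf: hazard a guess
            if input(f"Is it a {node}? (y/n) ") == "y":
                print("Got it!")
                return node
            animal = input("I give up. What was it? ")
            question = input(f"Give a question that is 'yes' for a {animal} and 'no' for a {node}: ")
            return (question, animal, node)             # grow: new animal on the yes branch
        question, yes, no = node
        if input(question + " (y/n) ") == "y":
            return (question, play(yes), no)
        return (question, yes, play(no))

    tree = "dog"
    while True:                                         # interrupt to quit
        tree = play(tree)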

So I am walking home, thinking about Animal, convinced that programmed intelligent response and learning are not what we mean by the notion of artificial intelligence. We mean something more, something that demonstrates behavioral plasticity, that specifically overcomes the limitations inherent in the programmed capabilities imbued in the system, something that shows us novel behavior.

I came up with the idea of the “instinct” kernel while walking along that day, based on the assumption that understanding the natural phenomena of intelligence would, inevitably, lead to the natural-like capabilities that I was certain were essential for intelligence. This instinct kernel somehow encompassed that natural history that girds intelligence but also served as an essentially unknowable core that expressed the metaphorical notion that intelligence is difficult and unprogrammable.

So here we are 25 years later and we have made remarkable progress in automatic speech recognition, machine translation, Roomba vacuums, and search. Yet the level of autonomous action akin to that ultimate goal still seems out of reach, despite the achievements in chess or in efficient stock picking. What are we to make of this outcome? Does 25 years of effort mean that the goal is unachievable? Of course not. The goal remains because it is tied directly to a fundamental philosophical choice: either there is something metaphysically unique about mind that can’t be simulated, or strong AI will ultimately be untangled in much the same way that thermodynamics was untangled from demons gating atoms from vessel to vessel.

Anything less robs us of our humanity by declaring there are limits that can never be overcome. I can’t believe that, and there is no evidence that says we should believe it.


Thursday, October 4, 2007

Wireless and Metaphors

I fixed two wireless problems today, both of which were obscure metaphorical problems in a way, since I could only visualize, not directly observe, the radio waves pulsing through the non-ether. I mention the issue of metaphor because I picked up Steven Pinker's new book, The Stuff of Thought, which deals in part with the topic of metaphor. Well, since I am only in the introductory section of the book, that is the initial takeaway at any rate.

So my problems with wireless were odd enough, even without the rhetorical wind-up of the problem of metaphor. First, my key fobs were recalled by my car manufacturer because they could be reprogrammed by close proximity to cell phones. There is a certain irony to this in that the car also bridges through Bluetooth to provide voice-recognition dialing and general operations. So I could start the car remotely and use the phone, but if I brought the phone and starter device together too closely, the result would be that I could neither start the car nor use the phone.

Finally, though, the manufacturer fixed the problem and it only took about an hour at a local dealership while my wife and I got Indian down the street.

The second problem was weirder, though. My 802.11g router started to fail randomly and oddly, and it started around the same time as when I added an HP wireless printer to the zoo of technology. The symptoms were bizarre and manifested themselves randomly with packet dropping from my wireless camera, different laptops and the aforementioned printer. Since one laptop was initially affected more than the others, I started by trying to solve the problem with an external USB wireless adapter. No luck. So, after backtracking to an old 802.11b router and seeing some improvements across the board (and after 10 hours' work reconfiguring everything), I replaced the router with a new Linksys model today.

There were moments in there where I speculated that my neighbor's old 2.4GHz wireless phones were interfering with the system, but I couldn't shuffle channels enough to make any difference. There were moments when I thought the printer had to be to blame, since the problems first started around the same time that I got the printer. But no, that was irrational, and ultimately the packet failures didn't match any conceivable model of digital system failure in which a chip failure results in total non-functionality. Cosmic rays?

If only I could have seen (maybe with false coloration of UV/IR/2.4GHz rays) what was happening. If only I could have had diagnostics beyond traceroute and ping that would tell me not just that packets were dropped, but who dropped them. If only I had something more than a metaphor (wireless networking is like networking but slightly different) to work from.

Even tonight, following resolution of the recall, I diligently separated my phone from my key fob. The metaphor of fearful interactions was still lingering.

Monday, October 1, 2007

Neoepicureanism and Joplin

Sometime in late high school, sometime after I had been ejected from the partier crowd for being just not cool enough, and sometime after I decided that my role-playing gaming and interests in ideas were good things despite their incompatibility with even the stable and geeky cliques, I proclaimed the philosophy of "neoepicureanism" and held that "we never do anything we don't want to do." It served as a Socratic seed for discussions (sometimes under the influence) with friends concerning parents, tribulations, the role of fear in human action, and my own libertarian leanings.

No one does anything except by choice, even if that choice is under duress. A youthfully simplistic principle, but one that could defuse anger and hostility and transform discussions into positive appreciations of ambitions, goals, and baser pleasures, as well.

So here I have Veronica Gventsadze's "Atomism and Gassendi's Conception of the Human Soul" in front of me describing the Epicurean atomic swerve that was used in opposition to the purely deterministic atomism of Democritus. It's a long way out from high school and my interests in philosophy took on a terrifically sober and analytic form through college and then largely folded into scientific practice with the arrival at evolutionary epistemology and algorithmic information theory. Still, revisiting Epicureanism strikes me as remarkable in its monism, in the conception of the gods as prime movers detached from interaction with the corporeal world, and in the atomistic justification for free will and an ethics derived from reciprocity.

Gassendi was a 17th-century thinker who expanded on Epicureanism, reintroducing it to the West on the cusp of the Scientific Revolution. He resurrected the core ideas while enhancing the psychological descriptions, suggesting how we create mental simulacra and how those simulacra influence the creation of new ideas.

I wonder, though, since I had not read any Epicurus back in high school, and certainly hadn't read any Gassendi, whether the real source of my theory was "freedom is just another word for nothing left to lose?"

Sunday, September 23, 2007

Halo and Intentionality

Halo 3 is just around the corner and I bit the bullet and upgraded the Xbox to an Xbox 360. I also preordered the game, and my son and I started going back through Halo and Halo 2 to refresh our memory of the storyline (I know, I know, I'm technically a bad dad in that those games are recommended for mature audiences only...)

That aside, though, my son has been more interested in the backstory of those games than in almost anything else (and he reads extensively). Partly, I attribute that interest to the vagueness of the story exposition in the games, leaving much to imaginatively fill in. But his interest also arises because of the dual embedded themes of the Covenant and the Flood in the game. He sees parallels between the religious war of the Covenant and current events, but his interest in the Flood keeps popping up with an odd intensity.

For those who are not Halo fans, the Flood is a parasitic organism that constructs Frankensteinian golems out of body parts. The relation to the alien Covenant is that the Halo devices (ringworlds) were created by an ancient civilization to periodically destroy all life and cleanse the universe of the Flood, but the Covenant believe that activating them will send them on the Great Journey, paralleling the nihilism of extreme jihadists, in a way.

The Flood, as a parasite, caused my son initial consternation but has also brought about some amazing discussions, including one today during lunch. My wife and I were explaining how viruses were not intelligent but were creatures that simply survive and do not really have a purposeful origin. They are not really malevolent, either, and perhaps the Flood are similar. Malevolence is an attribute that we ascribe to intentionality and parasites are not intentional.

We emphasized a spectrum of lifeforms and described how, for instance, ants have individual nervous systems and use chemical messages to communicate threats and food sources, and how, unlike viruses, the workers do not derive their individual purpose from reproduction alone, since only the queen reproduces, but from supporting the queen in reproducing.

Finding the language to explain this to a nine-year-old was surprisingly difficult, I found. The very idea of non-intentionality combined with intrinsic purpose is remarkably outside our language and intuitive framework of explanation. Children think stuff happens either because it is a mindless, natural phenomenon or because it is the result of intentional action. Cats and dogs are intentional. Sunlight and waves are not. Getting into the middle world of what Ernst Mayr called the teleonomic (appearing purposeful due to an adaptive algorithm) is surprisingly non-conforming to everyday ideas.

Still, my son was intrigued by the idea that children think so oddly about the world. During our discussion, a sea plane taxied by on the water outside and disrupted the conversation with sheer coolness. We wandered the docks outside the restaurant afterwards and got to peer in the cockpit. Strapped to the pilot's yoke was an aftermarket GPS unit, helping him fly with precise intentionality.

Wednesday, September 19, 2007

Words Gone Wild

A friend recently sent me along a quirky little poem that took some time to understand:

A fabulous verb is "to pronk"
The antelopes jump when you honk.
Its synonym, "stot"
Was made up by some Scot.
I think that he must have been dronk.

And its arrival was close behind Word of the Day's minority definition of tattoo:

Tattoo is an alteration of earlier taptoo, from Dutch taptoe, "a tap(house)-shut," from tap, "faucet" + toe, "shut" -- meaning, essentially, that the tavern is about to shut.

Now, here I found a treasure for etymologically disposed prospectors searching for strikes in the literary loam:

OED Appeals List

Could it really be that "poo" only became feces in 1981? And "wife-beater (T-shirt)" only 1993?

Thursday, September 13, 2007

Affect and The Natural History of Morality

A must read from Edge on affect, morality, charity and the New Atheism.

Wednesday, September 12, 2007

Strangeness and Bias


I always come back to the problem of how and why people believe strange things. I am certain that I have believed strange things before, and likely will again, but feel that I have also developed a certain level of critical detachment that helps me to hold ideas contingently. I like to think that I am always "on guard" for the support structure of an argument. So it both worries and interests me when I read the political blogs and my local paper, when I keep hearing the framing efforts of the President to portray the situation in Iraq as entwined with terrorism, and when I hear kids educated in our public schools talking about almost anything.

But really I believe very many strange things, as you will see.

Norbert Schwarz and compatriots at Michigan work extensively on the issue of how our minds process ideas. I first encountered a discussion of Schwarz' work over the weekend as we were driving to lunch and On the Media was on Sirius NPR Talk. I tracked down one of the papers:

Metacognitive Experiences and the Intricacies of Setting People Straight: Implications for Debiasing and Public Information Campaigns

Now, "metacognitive experiences" initially struck me as strange and New Agey. In the paper and broader literature, however, the phrase refers to mental experiences that accompany or affect cognitive processing. Calling it an "experience" seems at odds with most cognitive psychology, but I think the term was chosen because it the mental facilities are still being identified and detangled. Much of this work builds on the kinds of bias studies like Kahnemann and Tversky and others during the heyday of cognitive psychology, but carries it forward to link it to social psychology.

Anyway, the gist of the paper is that "rational" information processing is altered and biased by these metacognitive experiences. One example is an experiment with a facts-versus-myths flyer created by the CDC to combat common myths about flu vaccines. It turns out that over time the myths get incorporated as facts at an alarming rate, essentially reinforcing the myths rather than debunking them. It only takes a few minutes for this to happen! We are prone to treat whatever we easily recall as factual, it seems.

So we stop off after lunch at REI to get some replacement socks for hiking and general winter applications. As I wander around, though, I notice something intriguingly strange about my own biases. North Face and Marmot are superior to Columbia or Mountain Hardwear in my thinking, it seems. But not because of any rational or experiential facts, really; purely because of a combination of the exclusivity of the brands (Columbia is more of a commodity brand) and because of a name bias that Mountain Hardwear is just a dumb name. No advertising polluted my thinking, really, because I don't read outdoor magazines. And while I have seen other people with North Face jackets, I can't recall seeing anyone with a Marmot jacket, so my bias is not based on associating the products with exemplars per se.

I just believe strange things.

Thursday, September 6, 2007

Compression and Focus

I just happened on Greg Chaitin's 2006 Scientific American paper, The Limits of Reason, which the author was kind enough to publish on his website. I first encountered Chaitin in another SciAm article while in graduate school in the early 90s, and I found the nascent topic of algorithmic probability so beautiful and profound that I dragged my boss several hundred miles to see Chaitin talk at the University of New Mexico. He kept dramatically grabbing his bald pate during the talk and gradually coated his head with colors from the whiteboard markers he was using. It was immensely distracting but I still got the gist of his arguments.

The odd thing is that I somewhat discovered the same idea of algorithmic complexity as an undergrad, but only in a very limited and intuitive form. I was in a Philosophy of Language class and was worrying over parsing and halting problems, partly informed by all the Frege and Wittgenstein we had been discussing, and partly driven by Gödel, Escher, Bach, the popular text on AI at the time. For my final report, I speculated that an evolutionary algorithm might be able to partially solve the halting problem by guessing when to halt from the productions of the machine and past experiences with other machines. There would be a probability of success at halting, at the very least, I speculated, and the evolutionary algorithm was a general solution.

Anyway, I ended up having to drop by my prof's office to discuss the paper and I recall descending into a discussion about the teleological language we use to discuss evolutionary processes. His face twisted up when I mentioned GEB and we ended with me querying him over whether he didn't like the book because it was popular. I got an A despite myself, I think.

Then, years later, I encounter Chaitin-Kolmogorov complexity and am blown away by the ideas. I trace back into the application to inference and machine learning and discover Ray Solomonoff's work on the topic, published in a series of technical reports at a small research firm called Zator.

Continuing this journey, I encounter Elliott Sober's philosophical treatments that include discussion of Occam's Razor, AIC, BIC and other ways of negotiating inference, which leads me to Minimum Description Length by Jorma Rissanen. Soon thereafter I publish a paper on applications of MDL to semantic analysis, showing how compact coding of data streams into trees has similar properties to methods like Latent Semantic Analysis and provides a general way to explain human grammar learning. My boss at the time coins "compression is truth," paralleling Chaitin's "comprehension is compression."
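
The MDL intuition can be shown with a toy two-part code, where the total cost is the bits needed to state the model plus the bits to encode the data given the model; the parameter costs below are placeholders, and this is not the paper's method:

    import math

    def data_bits(data, probs):
        # Bits to encode the data, symbol by symbol, under a given model.
        return sum(-math.log2(probs[symbol]) for symbol in data)

    data = "aaaabbc"
    # Model 1: uniform over the alphabet, trivially cheap to state.
    uniform = {"a": 1/3, "b": 1/3, "c": 1/3}
    # Model 2: fitted frequencies, which cost extra bits to state.
    fitted = {"a": 4/7, "b": 2/7, "c": 1/7}
    model_bits = {"uniform": 0.0, "fitted": 8.0}   # placeholder parameter costs

    for name, model in [("uniform", uniform), ("fitted", fitted)]:
        total = model_bits[name] + data_bits(data, model)
        print(name, round(total, 1))
    # MDL picks whichever total is smaller: the best model is the best compressor.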

And here, now, is Chaitin again, adding new spices to this melange as he continues with his life's work. I envy his focus.

Monday, September 3, 2007

Effective Procedures and Transcendentalism


Newsweek has two articles joined together by the thesis that we are reaching a point where computational intelligence built on statistical methods outperforms human decision making and intuition. Effective medicine is the shared topic between the two articles and, in effective medicine circles, the term "algorithm" is used to describe treatment cycles. This takes us back to the classic expert system for blood disease diagnosis, Mycin, which was a glorified decision tree of sorts for searching the differential diagnosis space based on patient symptoms.

I got a chance to read E.O. Wilson's Consilience on my trip this week, and the Newsweek articles resonated deeply with the core of Wilson's strongest claim: that empirical materialism is the only effective way forward to tackle the remaining gaps in the sciences, in ethics, in the humanities, and even in the arts. Because if optimal decision making can be automated by what cognitive scientists call an effective procedure and achieve 95% success (for example) in a given application, that level of success is more than likely better than the agreement level between practitioners relying on their own intuitions and experience.

The claim that intuition is somehow unintelligible is essentially a transcendental claim and transcendental claims arise because of ambiguity or skepticism about the validity of other knowledge procedures. I personally attacked this problem in a 1998 paper in which I used evolutionary algorithms to create automatic art forms that were partly random. The randomness was constrained by an abstract complexity metric based on an analysis of the connectedness in the space of production grammars that the evolutionary algorithm searched through. I was essentially creating an effective procedure that mimicked evolutionary epistemology and had randomness and creativity of a certain sort mixed in (though admittedly without the experiential aspect of human art).
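
For flavor, here is a generic evolutionary loop of that kind, selecting candidates toward a target score under a stand-in complexity metric; it is emphatically not the 1998 paper's metric or grammar space:

    import random

    def complexity(candidate):
        # Stand-in metric: the number of distinct adjacent symbol pairs.
        return len({candidate[i:i + 2] for i in range(len(candidate) - 1)})

    def mutate(candidate, alphabet="abcd"):
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(alphabet) + candidate[i + 1:]

    def evolve(target=8, size=20, length=12, generations=200):
        population = ["".join(random.choice("abcd") for _ in range(length))
                      for _ in range(size)]
        for _ in range(generations):
            # Fitness is closeness to the target complexity, not human judgment.
            population.sort(key=lambda c: abs(complexity(c) - target))
            survivors = population[:size // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(size - len(survivors))]
        population.sort(key=lambda c: abs(complexity(c) - target))
        return population[0]

    print(evolve())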

In addition to reducing errors, one of the outcomes of effective medicine is that the scope for doctors to be swayed by pharmaceutical incentives is gradually being whittled down, because the algorithms constrain treatment options to certain formularies and procedures. Similar efforts have been used to predict the quality of wines based on environmental monitoring, and the predictions outperform oenophiles.

So we begin to see the rolling back of transcendental justifications and claims, first in intuitions and then, as Wilson suggests, moving into the realm of the humanities freshly scoured by the scathing wit of the postmodernists.

Wednesday, August 29, 2007

Free Will and Neuropeptides


I'm stuck in a mental vortex because I have to fly out-of-town today. My running joke is that I have to be careful in the airport restrooms to avoid getting bothered by creepy Republican senators.

I can't start any serious work, nor can I do nothing at all. So, am I a victim of circumstances concerning my state of mind? I wish I could absolve myself of responsibility and somehow push the issue back to the set of externals and maybe my genes, but that would be a bit rash.

I ask that rhetorically, however, because it is an issue that arises in an article in The Psychiatric Times called Hume's Fork and Psychiatry's Explanations: Determinism and the Dimensions of Freedom. In the article, the author posits the discovery of a neuropeptide called "assaultin" that is coded by the gene, BAD2U. When assaultin is injected into the cerebrospinal fluid of human subjects, 65% become dangerous and attack others. Now the 35% don't become assaultive because they use some kind of impulse control mantra that can also be taught to the others.

The author uses this framework to build a contingency-based theory of free will and promises to develop a legal framework in subsequent articles. Overall, he suggests a continuum of responsibility that needs to be reflected in the relative strength of punishments and treatments that should be applied to people.

Tuesday, August 21, 2007

Totalism and Liberalism


Mark Lilla's exceptional piece, The Politics of God, in The New York Times paints a historical analysis that has Hobbes front and center in refashioning the Will of God as a political force into a belief that fear is the driver of men's wills and that alleviating fear can bring about peace. Lilla carries forward through Locke and coins the "Great Separation" to describe the forceps that pried apart theocratic impulses and political philosophy, echoing Jefferson's Wall of Separation that would come to America.

Strangely, he says of the American experiment that "It's a miracle" that our institutions have held fast against tides of cultural opposition that have desired to refashion liberal, secular democracy with messianic drivers. But I don't think so. There were several unique starting conditions that were essential to American success. There was the lack of existing institutions in the New World, combined with the diverse religious character of the early immigrants themselves. This washed over into a unique opportunity to create governance completely anew and in a way that trusted no one and no higher authority. And the preservation of the system during the initial 90 years derived from a shared belief in the value of the institutions themselves, arising from Northern European sensibilities about order, crashing mightily only during the Civil War but surviving and thriving by dint of Lincoln's victory.

Overlooked, too, is the impact of geography: America was simply too far from allies and enemies alike for any state to exert much influence on its development of an independent strain of morality that verbally holds fast to religious principles but in action subjugates them to secular law.

It's interesting that Lilla begins with a discussion of Mahmoud Ahmadinejad's letter to Bush, wherein the Iranian president claims that liberalism and democracy have failed; the failure, apparently, is that they do not provide the kind of totalism he thinks is essential to human existence, with a unification of God's will with that of man. To me, there is no effective answer to that except Bertrand Russell's notion that contingency is the essential aspect of the liberal mind, along with the reflexive desire to build ever stronger walls between the liberal and the illiberal.

Wednesday, August 15, 2007

Moore and Semantic Skepticism


Strange, I found the Paglia essay interpenetrating all of my thoughts over the past few days, dredging up language swarms from old Derrida and Feyerabend essays, and dipping over into my work on disambiguation and ontology. See, linguistics-wise, I was once an empiricist with an almost palpable antagonism toward the value of knowledge resources like ontologies for solving specific problems. I would reach first for a statistical model trained on the contexts of word occurrences, expecting that words can only be known by the company they keep.

Even the notion that the Semantic Web could achieve any level of crispness in assigning metadata to online content seemed doubtful, since it appeared inherently impossible for content authors to assign metadata consistently. The position is postmodern relativism, if you will, derived from the same kinds of semantic and pragmatic arguments that have been used to deconstruct machine learning: do I translate this as "terrorist" or "freedom fighter"? Well, what is your frame of reference? What is your meta-narrative?

A radical position is the folksonomy view: that folks themselves are the best determiners of how to tag metadata. In this view, they use whatever tags seem appropriate based on their own intuitions about the content. But does this get us around the Bono issue, below? Unlikely. It seems better suited to purely abstract and controversial concepts like "terrorist" or "justice".

So I think we need a gradation of semantic forms that range from relatively simple propositions about identity up through propositions about meaning and intent. The latter are purely Wittgensteinian word games, with agreement and disagreement strewn across the symbol space, but the former have lower average rates of disagreement over referential attachment.

This parallels the notion of post-postmodernism in a way, by accepting fluidity and chaotic symbol/signifier interactions but still anticipating a useful and uncontroversial basis for facts. G.E. Moore would raise his hand in salute.

Tuesday, August 7, 2007

Paglia and Reconstructionism


A few things I've read lately:

Camille Paglia's Religion and the Arts in America from Arion (cross-pollinated from 3QuarksDaily)

Pioch et al.'s A Link and Group Analysis Toolkit (LGAT) for Intelligence Analysis (scary, huh?)

Malcolm Gladwell's The Moral Hazard Myth from The New Yorker

Wikipedia on the Casimir Effect

Paglia's speech at Colorado College was especially interesting. Her central thesis is that the role of religion in the arts in America has been sidelined into pure identity and narrow partisan politics, which only reinforce antagonism from the Conservative Right. The end result has been the strangling of arts funding from government sources, though she points out that no one in the avant-garde should accept government funding anyway. In some ways she echoes Allan Bloom in decrying "sterile and now fading poststructuralism and postmodernism." Like Bloom, she sees a vacuum created by the deconstruction of traditional notions of values and aesthetic criteria. Like Bloom, she longs for something more powerful, more lively.

But her answer is, in part, to reinvigorate the arts through a re-examination of the spiritual roots that underlay so much traditional art, from spiritual hymns to rock to rap. In that, I think she misses one of the crowning achievements of our civilization even while she points out how technology is the most current creation of "American genius." Is it the failure of the arts and humanities to embrace materialism, science and technology as a central facet of modern life that leaves us in this condition of limitations and craven ennui?

Even while I read about the Casimir effect and try to imagine some of the most abstract and beautiful ideas ever conceived (that the vacuum itself is pervaded by energetic influences and zero-point energy), Paglia thinks polyphonic differences between Calvinist and Lutheran hymns are a source of inspiration. Even while I imagine the subtle mathematics of group dynamic evolution using sophisticated achievements in graph theory, Paglia ponders the political implications of Madonna images festooned with elephant crap.

Why aren't rationality and all that it has achieved the greatest source of artistic inspiration in modernity? These are not sterile thoughts at all, but stunning achievements that have changed human existence more than all the stained glass in all of history.


Monday, August 6, 2007

Swarms and Social Cybernetics


David Sloan Wilson, in his spectacular Darwin's Cathedral, does an in-depth analysis of Korean-American Christian Churches in the Houston area. Newly arriving immigrants, some with only a few hundred dollars in their pockets, use the church as a transitional community asset that supports them through jobs, business development, loans and other benefits. Many second generation children complain that their parents have only the church and other church members as their community, even after twenty or thirty years.

Wilson's analysis also points to some of the relatively simple mechanisms used to keep church members actively involved. Every Sunday, for instance, a flier is placed in a mailbox assigned to each member. After the service, the church staff contacts any parishioners who failed to pick up their flier, giving the church a clear attendance record.

Wilson never uses the term "cybernetic" to describe the pushes and pulls needed to keep a community actively engaged, especially communities that expect tithes and human capital, but that was the term that kept popping up as I read through his slim manifesto. I visualized a swarm of points in space orbiting each other in close formation. Occasionally a point would break away and start to orbit into another group, only to be pulled back to the original center of gravity by attractive forces (incentives) combined with shame forces (disincentives). The steam governor at work in sociology. The more tightly knit the group, or the more extreme its ideas, the stronger those attractive forces.
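That picture is easy enough to make literal. Here is a toy one-dimensional sketch, with arbitrary force constants, of members drifting, being pulled back by incentives, and yanked back harder by shame once they stray past a tolerance:

```python
import random

def step(members, pull=0.15, drift=0.4, shame=0.6, tolerance=2.0):
    """One tick: individual drift, a steady incentive pull toward the
    group's center of gravity, and a stronger shame force past a tolerance."""
    center = sum(members) / len(members)
    updated = []
    for x in members:
        x += random.uniform(-drift, drift)   # individual wandering
        x -= pull * (x - center)             # incentives: gentle pull back
        if abs(x - center) > tolerance:      # visible defection...
            x -= shame * (x - center)        # ...triggers the shame force
        updated.append(x)
    return updated

members = [random.uniform(-1.0, 1.0) for _ in range(30)]
for _ in range(200):
    members = step(members)
# The swarm's spread stays bounded: cohesion wins over drift.
print(f"spread after 200 ticks: {max(members) - min(members):.2f}")
```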

A New York Times article shared similar thoughts. In it, a Harvard Law professor had gone through a Conservative Jewish yeshiva and went on to do great things. When he went back for the wedding of an old friend from school, the subsequent wedding photos did not show him. He had been literally erased from the photos. The reason: he had been accompanied by an Asian American girlfriend. The motivation was to remove the record of his failure to abide by the expected rules, thereby both shaming him and eliminating any temptation for other young men who might see the photo and start thinking outside the Hassidic box, so to speak. Defeating free thinking and Hellenism prevented assimilation once. Defeating Asian chicks is a comparatively minor self-correction.

But could such qualitative social forces as shame and a sense of belonging be given a quantitative reality that helps describe the rate of change of social and religious groups over time? We might use group membership counts, together with correlations among members' subjective opinions about other members' attitudes, as a proxy for the cohesion mechanisms or memes in the group. Wilson does a bit of this when he reviews a survey of the orthodoxy of different religious groups as gauged by a random sample of religious scholars.
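As a crude illustration of such a proxy, suppose, hypothetically, that we survey each member's perception of every other member's commitment to group norms. Low variance in how a given member is perceived indicates shared perception, which we can roll up into a cohesion index:

```python
import statistics

# Hypothetical survey: ratings[i][j] = member i's perception of member j's
# commitment to group norms, on a 1-5 scale (made-up numbers).
ratings = [
    [5, 4, 4, 2],
    [4, 5, 5, 1],
    [4, 4, 5, 2],
    [3, 3, 4, 5],
]

def cohesion(ratings):
    """Proxy for cohesion: how strongly raters agree about each member.
    Low variance in the columns = shared perception = tight group."""
    n = len(ratings)
    column_variances = [
        statistics.pvariance([ratings[i][j] for i in range(n)])
        for j in range(n)
    ]
    return 1.0 / (1.0 + statistics.mean(column_variances))

print(f"cohesion index: {cohesion(ratings):.2f}")  # in (0, 1]; higher = tighter
```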

Sunday, July 29, 2007

Semantics and Sonny Bono


"Bono and the tree became one"

That sentence has been an object of scrutiny for me over the past several weeks. It is short enough and the meaning seems fairly easy to digest: Sonny Bono died in a skiing accident. It might have shown up in a blog back when the event transpired, or in casual conversation around the same time.

So what is so fascinating about it? It is the range of semantic tools that are needed to resolve Bono to Sonny Bono and not to U2's Bono or any of the thousands of other Bonos that likely exist. First, we need background knowledge that Sonny Bono died in a skiing accident. Next we need either the specific knowledge that a tree was involved or the inference that skiing accidents sometimes involve trees. Finally, we need a choice preference that rates notable people as more likely to be the object of the discussion than everyday folk.

We could still be wrong, of course. The statement might be about Frank Bono, a guy from down the street who likes to commune with nature. It might be, but for a statement in isolation the notability preference serves a de facto role as a disambiguator.

How, then, can we design technology to assign the correct referent to occurrences like Bono in the text above? We have several choices, and the choices overlap to varying degrees. We could, for instance, collect together all of the contexts that contain the term Bono (with or without Sonny), label them as to their referent, and try to infer statistical models that use the term context to partition our choices. This could be as simple as using a feature vector of counts of terms that co-occur with Bono and then looking at the vector distance between a new context vector (formed from the sentence above) and the existing assignments.
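A minimal bag-of-words version of that scheme might look like the following sketch, where the labeled contexts are invented for illustration:

```python
from collections import Counter
import math

# Invented labeled contexts for each candidate referent.
LABELED = {
    "Sonny Bono": ["bono skiing accident tree congressman cher"],
    "Bono (U2)":  ["bono u2 concert dublin edge album debt relief"],
}

def vectorize(text):
    """Feature vector: counts of co-occurring terms."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def disambiguate(sentence):
    """Pick the referent whose labeled contexts look most like the sentence."""
    v = vectorize(sentence)
    centroids = {name: vectorize(" ".join(ctxs)) for name, ctxs in LABELED.items()}
    return max(centroids, key=lambda name: cosine(v, centroids[name]))

print(disambiguate("Bono and the tree became one"))  # -> Sonny Bono
```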

We could also try to create a model that recreates our selection preferences and the skiing <-> tree relationship and does some matching combined with some inferencing to try to identify the correct referent. That is fairly tricky to do over the vast sea of possible names, but is easy enough for a single one, like Bono.

All of these approaches have been tried, as well as interesting hybridizations of them. For instance, one can express the notability preference as a probability weighting based on web search mentions, while adding in the distance between different concepts in a tree-based ontology, exploiting human-created semantic knowledge to assist in the process. In practice, fairly simple statistics do pretty well over large sets of names (just choose the most likely assignment every time), but they don't really capture the kinds of semantic processing that we believe we undertake in our own "folk psychologies" as described above.
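And a sketch of one such hybridization, where the mention counts, the toy ontology, the concept associations and the blending weight are all invented for illustration:

```python
import math

# Everything here is hypothetical: mention counts, ontology, associations.
MENTIONS = {"Sonny Bono": 2_000_000, "Bono (U2)": 9_000_000, "Frank Bono": 40}
ASSOCIATED = {"Sonny Bono": ["skiing"], "Bono (U2)": ["music"],
              "Frank Bono": ["nature"]}
PARENT = {  # toy is-a ontology, child -> parent
    "skiing": "winter_sport", "winter_sport": "outdoor_activity",
    "tree": "plant", "plant": "nature", "nature": "outdoor_activity",
    "music": "art", "art": "activity",
    "outdoor_activity": "activity", "activity": "entity",
}

def path_to_root(node):
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path

def tree_distance(a, b):
    """Edge hops through the lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = next(x for x in pa if x in pb)
    return pa.index(common) + pb.index(common)

def score(candidate, context_concepts, alpha=0.3):
    """Blend a log-mention notability prior with ontology closeness."""
    prior = math.log(MENTIONS[candidate] + 1)
    dist = sum(tree_distance(a, c)
               for a in ASSOCIATED[candidate] for c in context_concepts)
    return alpha * prior - (1 - alpha) * dist

context = ["tree"]  # concept pulled from "Bono and the tree became one"
# The prior keeps obscure Frank from winning on semantic closeness alone,
# while skiing<->tree closeness overcomes U2's larger prior.
print(max(MENTIONS, key=lambda c: score(c, context)))  # -> Sonny Bono
```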

Still, I see the limited success of knowledge resources as an opportunity rather than a source of discouragement. We definitely have not exhausted the well.

Saturday, July 21, 2007

Fiber Optics and Amateur Access

An elderly woman in Sweden recently had a 40 Gb/s fiber-optic pipe installed to her home. She hardly uses the web but can now download a feature-length film in 2 seconds. Over lunch today, I was lamenting the eventual death of satellite and cable TV once we all have fiber to the house with those kinds of bandwidths. It came up because my neighbor dropped by for a drink the other night and ended up staying until 1 AM, sucking down my gin and complaining about the 9th Circuit Court of Appeals. He called home at some point and apparently interrupted his wife's enjoyment of The Closer while also missing dinner.
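The arithmetic holds up, assuming a high-definition feature film of roughly 10 gigabytes:

```python
link_gbps = 40    # the Swedish link: 40 gigabits per second
film_gb = 10      # assumed size of an HD feature film, in gigabytes
print(film_gb * 8 / link_gbps, "seconds")  # 8 bits per byte -> 2.0 seconds
```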

It was the fact that she missed her TV show that struck me. I don't have that problem. I didn't watch much TV beyond the news, Frontline or some random late-night sitcom until recently when we upgraded everything. Now I have a DVR and actually watch some programs (including The Closer) but only because I can comfortably time-shift and pause TV as I see fit, and all in HD where available.

But what happens with full on-demand TV? The satellites will not be de-orbited for some time, since they will continue to serve remote areas, but eventually they will go away. Even the notion of networks and channels will dry up over time. Channels are a delivery mechanism for content, useful only as branding labels in an on-demand universe. Studios can equally well disintermediate their content and swing deals directly with advertising clearinghouses. This is already happening somewhat in the online video space, but bandwidth and quality issues remain a stumbling block until that fiber-optic pipe arrives.

Now, suddenly, without the channels to filter content choices down to a few hundred options (sheepishly, I have a few hundred channels; George Chadwick's Aphrodite is playing via SIRIUS Symphony Hall through Dish Network right now, blurring the lines between mediums), we will instead start using other mechanisms to make content choices. There will be individual critics' lists, popularity-driven recommendation engines and, most importantly, content cross-advertising to try to attract eyeballs. The amateur will mix with the pro as the technological and artistic means for producing amazing content become increasingly inexpensive.

Friday, July 13, 2007

Framing and Dissonance

Finally, and with little fanfare, I closed on my final report for my most recent grant this evening. The champagne rests in the fridge for the moment. The last two weeks have been, well, consuming. Now I rush headlong toward a second phase.

While writing and experimenting, I was occasionally drawn into blog and editorial discussions, some of which were mildly amusing. I even learned some new things, though not directly from the blogs, I'm afraid. Specifically, the topic of semantic "framing" came up during a cross-Wikipedia excursion in pursuit of a recollection about Newspeak driven by Christopher Hitchens' discussion of cognitive tyranny in a variety of forums. As a biographer of Jefferson and Orwell, Hitchens is uniquely qualified to address the problem of tyranny and fascism.

Semantic framing is the use of distinctive metaphorical terminology designed to draw a sharp distinction against alternatives. It is the opposite of nuance, in a way, and relies on positioning issues as risky (when opposed) or beneficial (when supported). Interestingly, framing effects on economic decision making appear to be weaker in some people than others, with the distinguishing mental characteristic related to emotionalism (revealed as increased amygdala activity in fMRI studies).

But the question that arose for me was whether we have an innate property that resists framing (and that, when we have it, drives us toward more analytical tasks and higher education levels; yes, based on my own supposition that higher education corresponds to greater cognitive moderation), or whether resistance is itself a learned response that moderates one's emotional reaction to arguments and information, corresponding to the "liberal" aspects of higher education?