Sunday, October 28, 2007

Mind and Connectivity

David Brooks, in The New York Times, proclaims the outsourcing of his mind to GPS devices, Wikipedia and cell phones. Technology is changing the way we think and remember things. It is making us both mentally lazier and, as I’ve suggested before, more accurate.

I lamented this recently when my Casio atomic solar watch died, its backlight button gummed up from too much swimming, riding and sweating over the giant, clunky, but remarkably functional device. I had to start remembering the date and the day of the week. I had to stop using my preset alarms to remind me of the differences between Monday early-release days and other days for my son’s school. I had to expect to be inaccurate with my analog backup watch.

And then there was another day in September when I was out of town and having lunch with friends. The topic of tapioca pudding came up and none of us could recall what the origin of tapioca was. Out came the iPhone and we quickly resolved the question, fixing my partially inaccurate recollection from my Peace Corps time in Fiji that tapioca was related to dalo (taro). In fact, tapioca is from another root crop called cassava.

So these technologies may sometimes be reducing our cognitive commitments to certain information (the day and date), but they are also allowing us to reduce ambiguity in answering questions of a factual nature, and in a manner that exceeds what our natural mental capacities allow.

But what is missing for me is information convergence, where the knowledge contained in your DVR or GPS path memory is instantly accessible, exportable, importable, and available for all your needs. My car links via Bluetooth to my phone, but has separate voice memory for dialing. I can download new tunes through iTunes but my car can’t automatically grab Car Talk off NPR through the satellite radio and store it for later listening. Nor can I export my DVR content easily to my laptop for watching while at the park.

But the right technologies are available for the next step forward. The Semantic Web offers a beginning by providing a standard for self-describing information. Extensible Markup Language (XML) is the basis for the Semantic Web: an information representation language similar to HTML but not designed merely for web page presentation. The problem is that a representation language is useless unless every technology agrees on how to represent different information resources. Semantic Web standards provide a meta-description for XML data that makes it possible to encode information in meaningful ways, where “meaningful” is defined by the capacity to share information between systems, converting information into knowledge in some sense.
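As a toy illustration of self-describing data (in Python, using a made-up vocabulary URI rather than any real Semantic Web namespace), two systems can exchange records so long as they agree on a shared set of namespaced tags:

```python
import xml.etree.ElementTree as ET

# Hypothetical shared vocabulary: both systems agree on this namespace URI,
# which is what makes the data "self-describing" rather than ad hoc.
NS = "http://example.org/vocab/media#"
ET.register_namespace("media", NS)

def describe(title, source, minutes):
    """Wrap a recording's metadata in namespaced XML tags."""
    item = ET.Element(f"{{{NS}}}recording")
    ET.SubElement(item, f"{{{NS}}}title").text = title
    ET.SubElement(item, f"{{{NS}}}source").text = source
    ET.SubElement(item, f"{{{NS}}}minutes").text = str(minutes)
    return item

doc = describe("Car Talk", "NPR", 60)
xml_bytes = ET.tostring(doc)

# A second system can interpret the record without knowing anything about
# the producer's code, only the shared vocabulary.
parsed = ET.fromstring(xml_bytes)
print(parsed.find(f"{{{NS}}}title").text)
```

The namespace plays the role of the meta-description: any consumer that recognizes the vocabulary can look up `title` unambiguously, no matter which device produced the record.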

Someday soon, my Ofamind.com approximation of Vannevar Bush’s Memory Extender may be able to talk to my DVR and help me find quotes from PBS’s The War, and my GPS will help me retrace the path of the California gold rush, as exported by Wikipedia. Someday soon.

Monday, October 22, 2007

Funding and Alternatives

My weekend ended with marvelous news: a new funding round for my startup effort! The final terms are still under negotiation, but it should serve as an excellent next step for critical phases of R&D towards a large-scale, online knowledge management technology.

Meanwhile, my 9-year-old has been spending a remarkable amount of time on YouTube...

Friday, October 19, 2007

Mere Reason and Education

I commented recently on the problematic issue of education and indoctrination of the young in religious matters on Recursivity. My position has always been that extremism in religious viewpoints (and note the emphasis) must be primarily related to early religious indoctrination that essentially forbids nuance and careful evaluation of facts, opinions and ideas. I can just barely imagine that an adult exposed to a rich panoply of ideas and perspectives can come to hold extremist views, and this relates to my concern over issues like school voucher programs that contribute to religious schools. In fact, when confronted with an extremist, my folk psychology would immediately ask how it is that a person raised in a nurturing and unbiased environment could possibly have transitioned to a perspective of extremism. In effect, I would want to know who harmed them or what crystallizing moment of injustice caused their change.

I’ve argued previously that it is precisely the post-WWII exposure to the humanity of others via television that has changed the way we regard war, aggression and the universality of human rights. Racism, cruelty and mass civilian deaths in wartime are no longer acceptable because we now see others as human, instantaneously, via satellite, and with full translations. Why not accord them the dignity of humanity, spare them collective punishment, and avoid torture when we would want the same for ourselves? Our morals and ethics have improved because of secular education and reason combined with extrinsic factors like technology and social dialog, not through some new form of faith that took hold during that period.

The standard response and challenge to me is to apply a standard of intellectual arbitrariness to the topic and claim that any perspective is still a perspective, and therefore I am as guilty as the religious. There is a curious symmetry with the postmodernist critique of science and reason, here, in the claim that there is no standard for judging the merits of ideas except through a subjective narrative. And my narrative is no better than anyone else's.

But, by this standard, I see faith and reason leveled to the same standard as intellectual mechanisms, and that, I think, robs the faithful of their most powerful way of regarding faith: that they have special knowledge transcendental to mere reason. Evolutionary arguments then become interesting but irrelevant because they are “mere reason,” and the schools are no threat whatsoever in the matter of ideas.

This will make little difference to people like Tim LaHaye, author of the Left Behind series, who writes in The Atlantic this month:

“Until we break the secular educational monopoly that currently expels God, Judeo-Christian moral values and personal accountability from the halls of learning, we will continue to see academic performance decline and the costs of education increase, to the great detriment of millions of young lives.”

His article comes directly after Sam Harris’ musings titled “God Drunk Society” in a collection of short subjects on “The American Idea” by many august writers. LaHaye even slurs together some dubious claims about socialism in early America to justify his claims. Actually, California schools teach a complex set of values that seem to transcend and encompass LaHaye’s desires quite nicely:

"Each teacher shall endeavor to impress upon the minds of the pupils the principles of morality, truth, justice, patriotism, and a true comprehension of the rights, duties, and dignity of American citizenship, and the meaning of equality and human dignity, including the promotion of harmonious relations, kindness toward domestic pets and the humane treatment of living creatures, to teach them to avoid idleness, profanity, and falsehood, and to instruct them in manners and morals and the principles of a free government. (b) Each teacher is also encouraged to create and foster an environment that encourages pupils to realize their full potential and that is free from discriminatory attitudes, practices, events, or activities, in order to prevent acts of hate violence, as defined in subdivision (e) of Section 233."

And this is merely the result of very modern reason.


Monday, October 15, 2007

Discovery and Multiple Explanations

My startup, Ofamind, has both a “classification engine” and a “discovery engine” as part of the core technology. A discovery engine is an algorithmic system that tries to show you new content based on what you have been looking at in the past. For Ofamind, the discoveries are currently over scientific newsfeeds, scientific papers and patents. The role of the classification engine is to make it easier to add content to your interest collections (“views” in Ofamind) as you browse the web by automatically suggesting how to add the new web content (via a Firefox extension).

Both are based on a combination of content and linkages between documents. For content, the system goes a step further than current methods by using extracted people, places and organizations to improve the quality of the matches, as well as leveraging aspects of document structure. Ongoing work (prior to the full public release of the system) is trying to further improve the disambiguation of “named entities” to make it possible to answer useful research questions about topics and the researchers who are involved in those topics.
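A minimal sketch of the content-matching idea (not Ofamind’s actual algorithm): score document similarity by cosine over term counts, with extracted named entities given an assumed extra weight relative to ordinary words:

```python
from collections import Counter
from math import sqrt

# Assumed weight for extracted entities; the real system's weighting is unknown.
ENTITY_BOOST = 3.0

def vectorize(words, entities):
    """Build a weighted term-count vector; entities count extra toward matches."""
    vec = Counter()
    for w in words:
        vec[w] += 1.0
    for e in entities:
        vec[e] += ENTITY_BOOST
    return vec

def cosine(a, b):
    """Standard cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

d1 = vectorize(["gold", "rush", "history"], ["California", "Wikipedia"])
d2 = vectorize(["gold", "mining", "history"], ["California"])
d3 = vectorize(["pudding", "recipe"], ["Fiji"])
print(cosine(d1, d2) > cosine(d1, d3))  # the shared entity pulls d2 closer
```

The boost means two documents that mention the same person or place are pulled together more strongly than two that merely share common vocabulary.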

I was therefore intrigued when I learned about the Netflix Prize from 3QuarksDaily. The Netflix Prize is offering a US$1M purse to any group that can do a better job of predicting people’s Netflix film ratings than their current system, Cinematch. The prize term runs through 2011 and I am seriously considering giving it a run.

Reading through the work by the leading group (at around 8% improvement over Cinematch so far), the approaches seem rather ho-hum at first glance: look at the ratings of people who have seen movies similar to my viewing choices, then use their other ratings to suggest new movies to me. Then we get into the fine details and start to see two main themes develop. First, there is the problem of high levels of variability in some interest areas versus others. In other words, the landscape of choices is not very smooth. Different movie genres, directors’ outputs and actor choices may all influence small pools of choices made by individuals who otherwise share my interests. So smoothing methods are introduced that try to capture latent variables or trends in the data, reducing the distortions of outliers and improving overall system performance.
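One simple smoothing scheme of this flavor, purely illustrative and not drawn from any contest entry, shrinks a movie’s sparsely sampled average rating toward the global mean, so a handful of outlier ratings can’t dominate the estimate:

```python
# Shrinkage (regularized) average: behave like the global mean when evidence
# is thin, and like the raw movie average as ratings accumulate.
# The prior strength k is an assumed constant, not a Netflix Prize value.
def shrunk_mean(ratings, global_mean, k=5.0):
    return (k * global_mean + sum(ratings)) / (k + len(ratings))

global_mean = 3.6

# One rave review barely moves the estimate off the global mean...
print(shrunk_mean([5.0], global_mean))

# ...but eight consistent raves pull it much closer to 5.0.
print(shrunk_mean([5.0] * 8, global_mean))
```

The effect is exactly the smoothing described above: small, noisy pools of ratings are discounted until there is enough data to trust them.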

But the other methodology that several groups have started looking at is based on combining the decisions of several different approaches. Indeed, one blogger from Columbia University noted that this is similar to Epicurus’ Principle of Multiple Explanations. It is also widely used in classification algorithms like AdaBoost, in hopes of overcoming the problem of overfitting to training data, which means creating a decision process that is too finely tuned to any special oddities lurking in the training data.
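The combining idea can be sketched in a few lines; the component predictors here are deliberately crude stand-ins, not real contest models:

```python
# "Multiple explanations" as model blending: average the predictions of
# several weak models rather than trusting any single one of them.
def movie_mean(ratings_for_movie):
    """Explanation 1: predict how everyone rated this movie."""
    return sum(ratings_for_movie) / len(ratings_for_movie)

def user_mean(ratings_by_user):
    """Explanation 2: predict how generous this user usually is."""
    return sum(ratings_by_user) / len(ratings_by_user)

def blend(predictions, weights=None):
    """Weighted average of the component predictions (uniform by default)."""
    weights = weights or [1.0 / len(predictions)] * len(predictions)
    return sum(p * w for p, w in zip(predictions, weights))

movie_avg = movie_mean([4, 5, 4])   # a well-liked movie
user_avg = user_mean([2, 3, 2])     # a harsh rater
print(blend([movie_avg, user_avg])) # the blend splits the difference
```

Each component overfits in its own way, so their errors partially cancel; in practice the blending weights themselves are fit to held-out data.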

One area that is not currently exploited, though, is the direct use of movie metadata (directors, actors, release date, genre) in the models. It can be argued that some (if not most) of that metadata and its influence is encapsulated in the choices made by people, thus producing an expert analysis in their ranking strategy. But I think there may be some significant value in a hybrid approach that looks at when the metadata connections make better predictions than the crowds. And that kind of hybrid approach is precisely what I am working on with Ofamind.

Tuesday, October 9, 2007

Swearing and Power

Steven Pinker lassos up a whole rodeo of bad words and swearing at The New Republic. His arguments largely concern the connotative nature of swear words for creating an affective reaction in the reader or hearer and are a paring down of the broad treatment in The Stuff of Thought.

There is an oddity, though, in his treatment of the word "fuck" from an evolutionary psychology perspective. The problem of parasitism has been put forward as a driver for the evolution of sex because sexual recombination is a good diversity pump for immunological competence in the face of a rapidly changing threat environment. So it is natural that Pinker would invoke this theory of sex to try to get a handle on why sexual terms might carry taboo weight down through history. But the argument falls somewhat flat in the most common way "fuck" is used: "fuck you." He rightly points out that there is a symmetry with "damn you" but with the unmodern religious nature of "damning" replaced by the up-to-date invocation of taboo sexual terminology. "Eat shit" has about the same level of nastiness as "fuck you" but is more clearly tied to the problem of parasitism.

Yet, the idea that our connotations are tied to the icky and emotionally fraught aspects of sex (he even invokes parental investment theory at one point) strikes me as less likely than seeing the word as exactly the kind of power word that the 70s feminists saw in "cunt." "Fuck you" is a power phrase that is tied to non-voluntary sex and rape. Rape (and the protection of females from rape) must have had a profound role in the environment of evolutionary adaptedness that backgrounds our psychological makeup. The fact that "fuck you" was most often used between men until recently (I suspect) bears this out. It ungrammatically asserts that the receiver of the phrase is somehow going to receive the profane act in a passive role. It demeans them with a conveniently short and phonologically plosive imprecation.

True, the term "fuck" has common currency these days but still resonates with a more animalistic reference to sex when used in that fashion: "What have those two been up to? They were fucking" as compared with "They were having sex" (clinical) or "They were making love" (euphemistic and 70s-ish).

So do we linguistically sort and invent terms over time to find optimal curses that carry the right level of phonological, semantic and pragmatic properties? Absofuckinglutely.

Monday, October 8, 2007

Strong assumptions and Instinct


I am a strong AI proponent, meaning that I believe that, given sufficient time, we will succeed in building a thinking machine that thinks in a non-trivial way beyond the capabilities it was programmed with.

And I have pretty much always been a strong AI proponent. I remember very well wandering irrigation canals in Southern New Mexico on my way home from high school and thinking about this topic. My foster dad and mom had an HP3000 minicomputer in our house that they were using for applications development after a try at timesharing was displaced by the rise of the personal computer. It sat right next to the Altair that was used for burning EPROMs back then. Among the first applications on that HP was the game of animal guessing, in which a twenty-questions-style interrogation of the user tries to guess the animal they are thinking about. Does it have hair? Does it have claws? Etc. If your animal is not in the program’s database, it gets added to the decision tree, growing the learned response set as time progresses.
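The learning step of that game can be sketched as a growing binary decision tree; this is a reconstruction of the idea, not the original HP program:

```python
# A binary tree of yes/no questions whose leaves are animals. A wrong guess
# grows the tree by splicing in a new distinguishing question.
class Node:
    def __init__(self, text, yes=None, no=None):
        self.text, self.yes, self.no = text, yes, no  # leaf when yes is None

def play(node, answer):
    """Walk the tree; `answer` maps a question to True/False (the player)."""
    while node.yes is not None:
        node = node.yes if answer(node.text) else node.no
    return node.text  # the program's guess

def learn(leaf, new_animal, question, new_answer_is_yes):
    """Replace a wrong leaf with a question distinguishing the new animal."""
    old = Node(leaf.text)
    new = Node(new_animal)
    leaf.text = question
    leaf.yes, leaf.no = (new, old) if new_answer_is_yes else (old, new)

root = Node("Does it have claws?", yes=Node("cat"), no=Node("worm"))

# The player thought of a dog; the tree wrongly guessed "cat", so it learns
# a new question at that leaf and grows its response set.
learn(root.yes, "dog", "Does it bark?", True)
print(play(root, lambda q: True))  # claws? yes -> barks? yes -> dog
```

Each wrong guess adds exactly one question and one leaf, which is the whole of Animal’s “learning”: accumulation, not the behavioral plasticity discussed below.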

So I was walking home, thinking about Animal, convinced that programmed intelligent response and learning were not what we mean by the notion of artificial intelligence. We mean something more: something that demonstrates behavioral plasticity, that specifically overcomes the limitations inherent in the programmed capabilities imbued in the system, something that shows us novel behavior.

I came up with the idea of the “instinct” kernel while walking along that day, based on the assumption that understanding the natural phenomena of intelligence would, inevitably, lead to the natural-like capabilities that I was certain were essential for intelligence. This instinct kernel somehow encompassed that natural history that girds intelligence but also served as an essentially unknowable core that expressed the metaphorical notion that intelligence is difficult and unprogrammable.

So here we are 25 years later, and we have made remarkable progress in automatic speech recognition, machine translation, Roomba vacuums, and search. Yet the level of autonomous action akin to that ultimate goal still seems out of reach, despite the achievements in chess or in efficient stock picking. What are we to make of this outcome? Does 25 years of effort mean that the goal is unachievable? Of course not. The goal remains because it is tied directly to a fundamental philosophical question: either there is something metaphysically unique about mind that can’t be simulated, or strong AI will ultimately be untangled in much the same way that thermodynamics was untangled from demons gating atoms from vessel to vessel.

Anything less robs us of our humanity by declaring there are limitations we can never overcome. I can’t believe that, and there is no evidence that says we should believe that.


Thursday, October 4, 2007

Wireless and Metaphors

I fixed two wireless problems today, both of which were obscure metaphorical problems in a way, since I could only visualize and not directly observe the radio waves pulsing through the non-ether. I mention the issue of metaphor because I picked up Steven Pinker's new book, The Stuff of Thought, which deals in part with the topic of metaphor. Well, since I am only in the introductory section of the book, that is the initial takeaway at any rate.

So my problems with wireless were odd enough, even without the rhetorical wind-up of the problem of metaphor. First, my key fobs were recalled by my car manufacturer because they could be reprogrammed by close proximity to cell phones. There is a certain irony to this in that the car also bridges through Bluetooth to provide voice-recognition dialing and general operations. So I could start the car remotely and use the phone, but if I brought the phone and starter device together too closely, the result would be that I could neither start the car nor use the phone.

Finally, though, the manufacturer fixed the problem and it only took about an hour at a local dealership while my wife and I got Indian down the street.

The second problem was weirder, though. My 802.11g router started to fail randomly and oddly. And it started around the same time as when I added an HP wireless printer to the zoo of technology. The symptoms were bizarre and manifested themselves randomly with packet dropping from my wireless camera, different laptops and the aforementioned printer. Since one laptop was initially affected more than others, I started by trying to solve the problem with an external USB wireless adapter. No luck. So, after backtracking to an old 802.11b router and seeing some improvements across the board (and after 10 hours' work reconfiguring everything), I replaced the router with a new Linksys model today.

There were moments in there where I speculated my neighbor's old 2.4 GHz wireless phones were interfering with the system, but couldn't shuffle channels enough to make any difference. There were moments when I thought the printer had to be to blame since the problems first started around the same time that I got the printer. But no, that was irrational, and ultimately the packet failures didn't match any conceivable model of digital system failure, in which any chip failure results in total non-functionality. Cosmic rays?

If only I could have seen (maybe with false coloration of UV/IR/2.4 GHz rays) what was happening. If only I could have had diagnostics beyond traceroute and ping that would tell me not just that packets were dropped, but who dropped them. If only I had something more than a metaphor (wireless networking is like networking, but slightly different) to work from.

Even tonight, following resolution of the recall, I diligently separated my phone from my key fob. The metaphor of fearful interactions was still lingering.

Monday, October 1, 2007

Neoepicureanism and Joplin

Sometime in late high school, sometime after I had been ejected from the partier crowd for being just not cool enough, and sometime after I decided that my role-playing gaming and interests in ideas were good things despite their incompatibility with even the stable and geeky cliques, I proclaimed the philosophy of "neoepicureanism" and held that "we never do anything we don't want to do." It served as a Socratic seed for discussions (sometimes under the influence) with friends concerning parents, tribulations, the role of fear in human action, and my own libertarian leanings.

No one does anything except by choice, even if that choice is under duress. A youthfully simplistic principle, but one that could defuse anger and hostility and transform discussions into positive appreciations of ambitions, goals, and baser pleasures, as well.

So here I have Veronica Gventsadze's "Atomism and Gassendi's Conception of the Human Soul" in front of me describing the Epicurean atomic swerve that was used in opposition to the purely deterministic atomism of Democritus. It's a long way out from high school and my interests in philosophy took on a terrifically sober and analytic form through college and then largely folded into scientific practice with the arrival at evolutionary epistemology and algorithmic information theory. Still, revisiting Epicureanism strikes me as remarkable in its monism, in the conception of the gods as prime movers detached from interaction with the corporeal world, and in the atomistic justification for free will and an ethics derived from reciprocity.

Gassendi was a 17th-century thinker who expanded on Epicureanism, reintroducing it to the West on the cusp of the Scientific Revolution. He resurrected the core ideas while enhancing the psychological descriptions, suggesting how we create mental simulacra and how those simulacra influence the creation of new ideas.

I wonder, though, since I had not read any Epicurus back in high school, and certainly hadn't read any Gassendi, whether the real source of my theory was "freedom is just another word for nothing left to lose?"