Teaching the Web to Think: Mindpixel Corpus attempts to create an artificial intelligence on the Web.

On That Topic ...
Resume/Clippings of Christopher Simpson


Contents

Personal Articles

On That Topic - Blog
A blog. Not too religiously maintained, but there are a few items of interest -- at least, I think so.

Christopher's Shorts
No underwear. Some short stories, but no underwear.

Canadian Politics

V8 Juice & Canadian Unity
Who knew the secret to Canadian unity lay in French-only V8 Juice ads?

Art

Chicken Art & Canadian Politics
When Rob Thompson caged two people to protest the plight of commercially grown chickens, he was merely following a great Canadian political tradition.

Web Items

Teaching The Web To Think
With the help of surfers everywhere, the Mindpixel Corpus group hopes to create an online consciousness.

Culture & Traditions

Mummers And Pagans And Wrens, Oh My!
A history of the mummering tradition and why it's better than plunging into a frozen canal.

Author Interview

In The Foxhole With Maeve
An interview with internationally acclaimed author Maeve Binchy in which she discovers the secret numerology of Tara Road.

Business & Information Articles

Wipe That Smile Off Your Face
Marketing research is an honourable profession — maybe. But when your product is toilet paper ... ?

Where Have All The Toasters Gone?
Being a student is tough; but being a student and trying to decide which bank has the best Student Account is almost impossible.

You may not know GAC yet, but when he's put together he'll be sure to introduce himself.

Millions of years ago, out of a primordial sludge consisting of water, saline, and amino acids, arose life forms. Through the course of evolution and natural selection, some of these life forms evolved into rational, thinking beings (others, of course, became nearly-elected leaders).

Today, a similar process is taking place right here on the Web. Out of a primordial sludge consisting of chat rooms, tiny wireless cameras, and Spice Girls fan pages, a new life form is being born which may well evolve into a new kind of rational, thinking consciousness. Either that or we've got a new presidential candidate for 2004.

Making the Implicit, Explicit

The force behind this new brain is the Mindpixel Corpus. With their vaguely perplexing motto, "Making the implicit, explicit," Mindpixel's aim is to gather millions and millions of simple human observations which are true regardless of race, gender, or individual differences, and compile them into a working model of the human mind.

Here's how it works.

Visitors are encouraged to submit what the Mindpixel people call "a binary statement of consensus fact such as 'Water is wet' or 'It is difficult to swim with ski pants on'." These statements are called "mindpixels."

When you submit your entry, ten previously submitted mindpixels appear, which you are asked to rate according to their truth and value. The truth rating of the statement is a simple "True" or "False"; the value rating is a five-unit scale from "Poor" to "Excellent."

The plan is to gradually create a "brain" composed of individual units of human experience, graded according to consensus and reliability.
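To make the collection scheme concrete, here is a minimal sketch in Python of how a consensus-graded corpus of this kind might be represented. The class, field, and method names are my own invention for illustration; they are not taken from the Mindpixel project itself.

```python
from dataclasses import dataclass, field

@dataclass
class Mindpixel:
    """One hypothetical unit of 'human consensus experience'."""
    statement: str                                      # e.g. "Water is wet"
    truth_votes: list = field(default_factory=list)     # True / False ratings
    value_votes: list = field(default_factory=list)     # 1 (Poor) .. 5 (Excellent)

    def consensus_truth(self):
        """Fraction of raters who judged the statement true."""
        return sum(self.truth_votes) / len(self.truth_votes) if self.truth_votes else None

    def mean_value(self):
        """Average usefulness rating on the five-point scale."""
        return sum(self.value_votes) / len(self.value_votes) if self.value_votes else None

# A contributor submits a statement; later visitors rate it.
pixel = Mindpixel("It is difficult to swim with ski pants on")
pixel.truth_votes += [True, True, True, False]
pixel.value_votes += [5, 4, 4, 3]
print(pixel.consensus_truth(), pixel.mean_value())      # 0.75 3.75
```

In the real corpus each statement would presumably carry ratings from many contributors, and consensus figures like these would feed the statistical model the project plans to build.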

You Don't Know GAC!

And Turing said, "Let there be circuits!"

The brain being created in this grand experiment is called "Generic Artificial Consciousness," or GAC (pronounced "Jack").

The project head is Dr. Robert Epstein, whom the Mindpixel site calls "one of the world's leading experts on human and machine behavior." Dr. Epstein has a psychology doctorate from Harvard (1981), is the founder and Director Emeritus of the Cambridge Center for Behavioral Studies in Massachusetts, and is Adjunct Professor of Psychology at San Diego State University.

In its first year on-line, the Mindpixel Corpus has received nearly 8 million individual measurements of more than 355,000 items of "human consensus experience" from contributors. When its collection phase is completed in 2010, work will begin on a statistical model of an average human mind, with the aim of using it as a foundation for true artificial consciousness.

And the whole thing is possible only because of the Web. Without the aid of the Internet, the data entry alone would have cost $250 million.

This Translator Definitely Needs A Union

Will it be successful? To his credit, Dr. Epstein isn't sure. "We don't know if it is possible to build a normal personality out of millions of little pieces. This experiment will tell us how reasonable the idea is."

Of course, whether or not it succeeds depends largely upon how we define "success." If by "success" we mean the creation of a truly artificial consciousness, many experts believe it is probably doomed to failure. They claim that electronically storing millions of simple statements about experience and processing them according to an established set of rules is no more likely to produce consciousness than doing the same thing with statements written on pieces of paper.

One long-standing argument against the belief that consciousness is inherently a rule-defined process is John Searle's "Chinese Room Argument."

In this thought-experiment, a man is locked into a room with nothing more than a book of complex rules. Through a slot in the door come slips of paper upon which are written Chinese words. The man compares these symbols with his rule book, writes down the result (which is an English translation), and shoves it back through the slot.

The point Searle is making is that nowhere in this entire system is there any actual understanding of Chinese. In other words, mental states, such as the understanding of language, cannot be created by a system of input and output rules, even when a human consciousness is involved as part of the system. (See sidebar).
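Searle's point can be made painfully literal in code. The following is a toy sketch (the "rule book" here is a lookup table of my own invention, standing in for Searle's far more elaborate instructions): the program answers questions by matching incoming symbols against stored entries, and at no point does anything in it understand what the symbols mean.

```python
# A toy "Chinese Room": symbols in, symbols out, rules in between.
# Nothing in the system understands Chinese; it only compares shapes.
RULE_BOOK = {
    "水是湿的吗？": "是的。",        # "Is water wet?" -> "Yes."
    "天空是什么颜色？": "蓝色。",    # "What colour is the sky?" -> "Blue."
}

def chinese_room(slip_of_paper: str) -> str:
    """Look the symbols up in the rule book and shove the result back through the slot."""
    return RULE_BOOK.get(slip_of_paper, "对不起。")    # fallback: "Sorry."

print(chinese_room("水是湿的吗？"))   # a sensible answer, produced with zero comprehension
```

However convincing the answers, the lookup is purely syntactic, which is exactly the complaint Searle levels at any rule-driven system.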

If, on the other hand, we consider "success" to mean the creation of a system which will emulate and mirror human consciousness in ways that can improve our understanding, then there is every reason for hope. Mechanical models (be they levers and cranks or electrons and silicon) have a long history of providing invaluable insights into the way minds work.

Survey Says ...

If nothing else, GAC could completely revamp the way we conduct polls and surveys. No need for dozens of employees to phone thousands of average citizens to discover which product name they like better — soon we can just ask GAC.

Of course, we'd have to know exactly what GAC's "demographic" is. In other words, we'd have to find out just what kind of "person" is residing in GAC's virtual mind.

Fortunately, even as I write these words, GAC is undergoing a months-long psychological test. In fact, GAC will be the first machine-based artificial personality to be tested by the MMPI (Minnesota Multiphasic Personality Inventory), the same test used in both corporate hiring practices and criminal court proceedings.

It will be interesting to see whether GAC turns out to be an ideal CEO or is found "not guilty by reason of insanity."

Reprinted from Circa2000, May 16, 2001.


John Searle's Chinese Room Argument

John Searle, philosopher and cognitive scientist.

Against "strong AI," Searle (1980a) asks you to imagine yourself a monolingual English speaker "locked in a room, and given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first batch."

The rules "correlate one set of formal symbols with another set of formal symbols"; "formal" (or "syntactic") meaning you "can identify the symbols entirely by their shapes." A third batch of Chinese symbols and more instructions in English enable you "to correlate elements of this third batch with elements of the first two batches" and instruct you, thereby, "to give back certain sorts of Chinese symbols with certain sorts of shapes in response."

Those giving you the symbols "call the first batch 'a script'" [a data structure with natural language processing applications]; "they call the second batch 'a story', and they call the third batch 'questions'"; the symbols you give back "they call . . . 'answers to the questions'"; "the set of rules in English . . . they call 'the program'": you yourself know none of this. Nevertheless, you "get so good at following the instructions" that "from the point of view of someone outside the room" your responses are "absolutely indistinguishable from those of Chinese speakers." Just by looking at your answers, nobody can tell you "don't speak a word of Chinese."

Producing answers "by manipulating uninterpreted formal symbols," it seems "[a]s far as the Chinese is concerned," you "simply behave like a computer"; specifically, like a computer running Schank and Abelson's (1977) "Script Applier Mechanism" story understanding program (SAM), which Searle takes for his example.

But in imagining himself to be the person in the room, Searle thinks it's "quite obvious . . . I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing." "For the same reasons," Searle concludes, "Schank's computer understands nothing of any stories" since "the computer has nothing more than I have in the case where I understand nothing" (1980a, p. 418). Furthermore, since in the thought experiment "nothing . . . depends on the details of Schank's programs," the same "would apply to any [computer] simulation" of any "human mental phenomenon" (1980a, p. 417); that's all it would be, simulation.

Contrary to "strong AI", then, no matter how intelligent-seeming a computer behaves and no matter what programming makes it behave that way, since the symbols it processes are meaningless (lack semantics) to it, it's not really intelligent. It's not actually thinking. Its internal states and processes, being purely syntactic, lack semantics (meaning); so, it doesn't really have intentional (i.e., meaningful) mental states.




Rebutting the Chinese Room Argument

Naturally, those who believe in "Strong AI" (meaning artificial intelligence that actually has consciousness rather than merely mimicking it) have developed many counterarguments to Searle's Chinese Room. Although they differ in approach, these arguments share essentially the same basic strategy, which can best be described as "throwing sand in the opponent's eyes."

For an example, read Larry Steven Hauser's dissertation against Searle here.
