A companion blog site to the communications studies course

Sunday, November 19, 2006

Anthony Tsikouras
#0655775
Alex Sévigny
CMST 1A03
Rita Tourkova
Due Date: November 20, 2006

A Critical Examination of Alan Turing’s Concept of Artificial Intelligence

“The machines will rise.”

- Tagline from “Terminator 3: Rise of the Machines”

Artificial intelligence has long been a popular subject in science fiction. The most popular of these stories feature machines that become self-aware, rebel against the human race, and ultimately take control. While this makes for an admittedly terrifying plot, many critics believe the entire idea of artificial intelligence to be an impossibility. Alan Turing, however, argues that if a machine is indistinguishable from a human being in a controlled study, it has achieved a level of intelligence (p. 77). He calls this study the imitation game: an interviewer holds text conversations with both another person and a machine, and must then identify which correspondent was the real person and which was the machine.

While the imitation game does provide a good starting point from which artificial intelligence can be measured, it has two serious flaws. First, Turing never explains how the machine would understand language, or how it would be able to respond in language. The only way the imitation game would then be possible is if the same questions were always asked, so that the machine could keep set answers stored in its memory. This would not actually be a form of artificial intelligence, since all the answers are predictable and none are learned; it is the equivalent of a well-trained parrot. The second flaw threatens to destroy Turing's basis for what constitutes artificial intelligence: the machine has no life to protect and no genes to pass on. Because of this, its conversation would differ, sometimes drastically, from that of a human. This is not to say that the theory is lost and artificial intelligence impossible, but it certainly requires tweaking.
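The stored-answer machine described above, the "well-trained parrot," can be sketched in a few lines of Python. The questions and replies here are invented purely for illustration:

```python
# A hypothetical stored-answer machine: replies are retrieved, not produced.
# Nothing here is learned or understood; the table is fixed in advance.
CANNED_ANSWERS = {
    "how are you?": "I am fine, thank you.",
    "what is your name?": "My name is Alan.",
}

def respond(question: str) -> str:
    """Look up a stored reply for a known question, or give up."""
    return CANNED_ANSWERS.get(question.strip().lower(), "I do not understand.")
```

Any question outside the fixed table immediately exposes the trick, which is exactly why such a machine fails to count as intelligent.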

Before elaborating on how to fix this definition, it is important to realize that man is a machine in the most literal sense. To understand this, consider a squirrel. If one were to watch a squirrel's movements throughout a day, one would find them very repetitive and predictable. A squirrel is one of the clearest examples of an animal controlled by its instincts, and it is not a stretch of the imagination to consider programming a robot that would behave exactly like one. Next, consider a bird: while its behaviour is somewhat less predictable, it still follows its instincts for the most part. The idea is that as we move to species of higher and higher intellect, the programming code associated with them becomes harder and harder to write, until we reach humans, the species of highest intellect on the planet. Though the coding does get harder as we advance toward humans, there is no single point at which we can draw a line, where one animal was predictable enough to code and the next was not. It is simply that the animals become so complex that we cannot understand their processes well enough to code them. Therefore, if one considers a squirrel to be a machine, then a human must be a machine as well.
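The claim that instinct is, in effect, a program can be made concrete with a toy sketch. The stimuli and responses below are invented for illustration, not drawn from any real study of squirrel behaviour:

```python
# A toy "squirrel as program": a day's behaviour reduced to fixed
# stimulus-response rules, with a default behaviour when nothing special
# is happening. The rules themselves are invented for illustration.
def squirrel_instinct(stimulus: str) -> str:
    """Map a perceived stimulus to an instinctual action."""
    rules = {
        "sees_predator": "flee_up_tree",
        "sees_food": "gather_and_bury",
        "winter_coming": "dig_up_caches",
    }
    return rules.get(stimulus, "forage")
```

The essay's point is that a more intelligent animal does not escape this picture; its rule table simply becomes too large and interdependent for us to write down.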

It is now possible to define the problem correctly. If we cannot base artificial intelligence on the result of the imitation game, then what qualities do we look for in artificial intelligence? What seems simplest is that a thinking machine must be capable of producing thought. This may seem too vague and unprovable to serve as the definition, but by combining Turing's theory with some of Steven Pinker's ideas, it becomes suitable. We must also specify that the machine is man-made, since otherwise a human being could still qualify.

Pinker's main theory involves man's ability to learn language naturally (p. 44). He claims that language is not an invention of man but rather an instinct, the proof being that language has developed in every human tribe on Earth. His idea is that every human mind contains some basis of grammar that transcends all languages, referred to as the Universal Grammar. In other words, every language is formed from the same sub-components: nouns, verbs, and adjectives, for instance. A child is able to learn a language from the people speaking around him by using the Universal Grammar along with the situational context of the sentences.

At first glance, this process appears to be already installed in every word processor: the Spell Check feature has become standard. Spell Check can now correct text according to the grammar and syntax of nearly any language. It can also add new words to its repertoire if the user so wishes, and find synonyms and antonyms for words. Despite all of this, it still does not understand language. It has a basic Universal Grammar of sorts, since it can recognize sentence fragments and therefore verify that well-formed sentences are written, but it is far from comprehending what it is reading. To understand language, words must exist as more than letters; they must represent something to the computer as well. Suppose the computer's dictionary contains the word "apple" with the definition "a kind of fruit". The computer still has no idea what an "apple" is, since it does not know what a "fruit" is.
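The circularity of a words-only dictionary can be shown with a short sketch. The entries are invented for illustration:

```python
# Definitions that bottom out only in other words: the machine can chase
# the chain from word to word without ever reaching a meaning.
DICTIONARY = {
    "apple": "a kind of fruit",
    "fruit": "the edible product of a plant",
    "plant": "a living thing that grows in soil",
}

def chase_definition(word: str, depth: int = 3) -> list[str]:
    """Follow definitions word-to-word; the chain never leaves the dictionary."""
    chain = []
    for _ in range(depth):
        definition = DICTIONARY.get(word)
        if definition is None:
            break
        chain.append(f"{word}: {definition}")
        # move to the first word in the definition that itself has an entry
        nxt = next((w for w in definition.split() if w in DICTIONARY), None)
        if nxt is None:
            break
        word = nxt
    return chain
```

Chasing "apple" only ever leads to more words; at no point does the chain terminate in anything the machine can perceive.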

The only way for a machine to learn a language is for it to have senses as humans have. How would you describe an apple to me if you could not make reference to what it looks like, what it feels like, what it tastes like, or what it smells like? It is technically impossible. Therefore, for a computer to learn a language, it must have sensors that can deduce the sensory qualities of a given object. If a computer were set up with the proper sensors and a basic Universal Grammar, and it were immersed in a language, given suitable situations with references to objects, it would quite literally be able to learn words, with the correct associated meanings. Up to now, this only explains how to define objects for machines. Verbs can be learnt in a similar way, but adjectives cannot be learnt until after nouns and verbs, since their meanings are based on the characteristics of nouns and verbs.
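A minimal sketch of the grounding step this paragraph proposes, assuming hypothetical sensors that report their readings as simple labels:

```python
# A sketch of grounding: a heard word is paired with sensor readings taken
# at the same moment, rather than with other words. The sensor channels and
# their values are invented for illustration.
lexicon: dict[str, dict[str, str]] = {}

def observe(word: str, senses: dict[str, str]) -> None:
    """Associate a word with whatever the sensors report when it is heard."""
    lexicon.setdefault(word, {}).update(senses)

# Repeated encounters accumulate sensory qualities under the same word.
observe("apple", {"sight": "round, red", "touch": "smooth, firm"})
observe("apple", {"taste": "sweet", "smell": "fresh"})
```

Here "apple" ends up defined by perceptions rather than by a chain of other undefined words, which is the distinction the essay draws against the dictionary approach.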

This may seem like a very long process before the machine could respond in the language of its surroundings, and that is absolutely true. It would take years of near-constant immersion in the language before the machine knew enough to use it. However, far from being a problem with the learning process, this could be seen as a great success. Human children take a year or two of constant immersion before they produce their first words. Considering that humans are machines, it is likely that the process designed for the machine closely matches the organic equivalent inside every human.

Despite this major accomplishment, there is still a large flaw that will prevent the computer from ever being able to communicate like a human being. Consider the stereotypical example of artificial intelligence: the on-board computer HAL from "2001: A Space Odyssey". After HAL incorrectly detects a malfunction on the ship, the astronauts on board see that HAL is a danger to their safety and try to disconnect him. HAL, however, wants to survive, and so does everything he can to prevent the astronauts from shutting him down. This is unrealistic, since the machine should have no interest in its own survival. That is the great error people make when theorizing about artificial intelligence: just because a machine is conscious does not mean that its life has meaning to it. The machine will pursue what it was programmed for with the same perseverance with which animals try to stay alive.

If we return to the idea that humans are machines, then one might expect them to have a main driving goal as well, and one would be correct. Humans, along with every other animal, are built to survive and procreate: the survival of their genes. This may seem like a bleak oversimplification, but in the end that is what we all have in common, and the activities we take part in are all different means to the same end. A human's goal is survival, just as antivirus software's goal is to scan for viruses, and the life of each does not extend past its respective goal. Proof of this is everywhere in society: the pressure on offspring to find good jobs and the immense focus on sex in the media are examples of survival and procreation respectively.

Beyond the many ways our primary objective shapes our lives overall, there are many ways it affects our communication. Adler and Rodman catalogue many observed human tendencies in communication: we cling to first impressions, we tend to believe the worst about people, and we are more likely to interpret comments as insults when we are already in a bad mood (p. 37). These are survival mechanisms that have been proven to work, and at a glance it is not hard to see why they are effective. First impressions are usually correct, since the other person is dressed in a way that represents who they are. The other two mechanisms are based on a "better safe than sorry" approach, in which our survival is better assured by taking the cautious, worst-case-scenario route.

These are just a select few of the survival mechanisms that constantly play a role in our interpersonal communication. Aside from the more complex mechanisms observed by Adler and Rodman, there are the more basic ones: emotions. These are all based on human survival methods, so they will not be apparent in a machine's side of a conversation.

Combining the methods discussed for creating an artificially intelligent machine, we have a machine that can learn language through sensory recognition of objects. It would actually be intelligent in the terms Turing presented, and yet it would still fail his test. This is because, while it would be more genuinely intelligent, since intelligence demands producing one's own sentences, it would be clear that the artificial intelligence displays no emotion, none of the qualities we take for granted in people.

If a machine could talk, what would it say? Like us, it would depend on its goal.


Works Cited:

Adler, Ronald B. and George Rodman. Understanding Human Communication, 9th ed. New York: Oxford University Press, 2006.

Pinker, Steven. "An Instinct to Acquire an Art". Communications Studies 1A03 Custom Courseware. Ed. Alex Sévigny. Dubuque, Iowa: Kendall/Hunt Publishing Company, 2006, 41-45.

Terminator 3: Rise of the Machines. Dir. Jonathan Mostow. Perf. Arnold Schwarzenegger, Nick Stahl and Claire Danes. Warner Bros., 2003.

Turing, A.M. "Computing Machinery and Intelligence". Communications Studies 1A03 Custom Courseware. Ed. Alex Sévigny. Dubuque, Iowa: Kendall/Hunt Publishing Company, 2006, 77-85.
