Can Computers Think?

by Steven Zierk

Computers have become astronomically faster since the introduction of ENIAC, the first general-purpose electronic computer, in 1946 [4]. That first computer's rate of 18 calculations per second, so advanced at the time, is left behind by the simplest of calculators today. The most advanced machines, such as the K Computer in Kobe, Japan, now measure their speed in quadrillions of calculations per second. These computers can play perfect checkers, crack some of the most complex codes imaginable, or render high-resolution video in real time.

Yet these machines generally lack one thing: the ability to think for themselves. They lack any semblance of independence. Programmers give them specific instructions, which they follow exactly. No matter how good a computer may be at checkers, it is only carrying out the orders given to it by a human. Even a basic error in a program will not be "realized" by the computer running it, as any programmer learns the hard way. In other words, computers, no matter how fast or complex, cannot do anything other than follow the exact directions given to them; the slightest miscommunication will not be corrected, but will instead be carried out like any other order.

Genetic Algorithms: How Computers Learn

However, the gap between computers and intelligence is much smaller than most people realize. It is important to note that computers can learn, though obviously not yet in the same manner as humans. Even simple programs can change their actions based on past results and current input. One such approach, the genetic algorithm, continually adapts a program to whatever information it is given. A classic example is a program known as the "trash collector." The trash collector consists of a robot in a 10 x 10 grid strewn with soda cans. The robot's objective is to pick up as many soda cans as possible in a certain number of steps. It can see one square in each direction as well as the one it is standing on, and can identify whether each square holds a wall, nothing, or a soda can. There is a fine of one point for trying to pick up a can in an empty square, and a fine of five points for walking into a wall. Each can the robot picks up is worth ten points.

Figure 1: The robot with all its trash. (Source: Complexity: A Guided Tour)
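To make the setup concrete, here is a minimal sketch of that world in Python. The grid size and the three things the robot can distinguish (a wall, an empty square, a can) come from the description above; the can count, function names, and encoding are illustrative assumptions, not Mitchell's actual code.

    import random

    GRID = 10  # the 10 x 10 grid described above

    def make_grid(cans=50):
        """Scatter a given number of soda cans over random squares."""
        squares = [(r, c) for r in range(GRID) for c in range(GRID)]
        filled = set(random.sample(squares, cans))
        return [[(r, c) in filled for c in range(GRID)] for r in range(GRID)]

    # The three things the robot can see in any one square.
    WALL, EMPTY, CAN = 0, 1, 2

    def look(grid, row, col):
        """Report the contents of one square from the robot's point of view."""
        if not (0 <= row < GRID and 0 <= col < GRID):
            return WALL
        return CAN if grid[row][col] else EMPTY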

There are two ways to give a robot a good strategy: design one by hand, or let the robots evolve one. The approach explored by Dr. Melanie Mitchell of Portland State University in her book Complexity: A Guided Tour is the latter: create a large number of robots and breed from the strategies that do well. Dr. Mitchell begins with 200 robots whose strategies are completely random. A strategy is a list of all possible situations (based on what is in the robot's square and what it sees in each direction), with a command for each scenario; initially, every command is chosen at random. Needless to say, the robots will at first be very inept trash collectors. Unless they are quite lucky, they will not know that they "should" pick up the can in their square, and will sometimes walk straight into walls. Worse, many of these robots will try to walk into a wall, fail, and, since their situation has not changed, try again and again, accumulating huge penalties.
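A strategy under this scheme is nothing more than a lookup table. The robot sees five squares (its own plus one in each direction), each of which is a wall, empty, or a can, so there are 3^5 = 243 possible situations, each mapped to one command. Continuing the sketch above, and assuming a six-command set (the essay does not enumerate the commands):

    # The command set is an assumption: four moves, a random move, "pick up".
    ACTIONS = ["north", "south", "east", "west", "random", "pickup"]

    def random_strategy():
        """A completely random strategy, like those of the first generation."""
        return [random.choice(ACTIONS) for _ in range(3 ** 5)]

    def situation(grid, row, col):
        """Encode the five visible squares as an index into the table."""
        views = [look(grid, row, col),        # the square the robot is on
                 look(grid, row - 1, col),    # north
                 look(grid, row + 1, col),    # south
                 look(grid, row, col - 1),    # west
                 look(grid, row, col + 1)]    # east
        index = 0
        for v in views:
            index = index * 3 + v             # base-3: 3**5 = 243 entries
        return index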

Obviously, these uneducated programs are far from ideal. But this is where learning comes in. It is highly unlikely that any of the initial robots will do well, but invariably some will do better than others. These "better" robots are used to create the next generation of trash collectors. The genetic algorithm randomly chooses two of the better robots and takes half of the code from each. It gives these halves to a new robot, adds one or two random mutations, and creates a new set of 200 robots in this manner. Even if the children have yet to learn a good strategy, they will on average be much better than the last generation. Repeatedly applying the genetic algorithm in this way eventually results in a near-ideal strategy. Genetic algorithms have an enormous variety of applications, including aircraft and microchip design, computer animation, and financial analysis.
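Returning to the trash collector, a single generation of the algorithm might look like the sketch below. The essay gives the breeding rule only loosely ("half of the code from each," "one or two random mutations"), so keeping the top half as parents and the exact crossover and mutation mechanics here are assumptions.

    def next_generation(population, scores):
        """Breed a new population of the same size from the better strategies."""
        ranked = [s for _, s in sorted(zip(scores, population),
                                       key=lambda pair: pair[0], reverse=True)]
        parents = ranked[:len(ranked) // 2]       # keep the better half
        children = []
        while len(children) < len(population):
            mom, dad = random.sample(parents, 2)
            cut = len(mom) // 2                   # half of the code from each
            child = mom[:cut] + dad[cut:]
            for _ in range(random.randint(1, 2)): # one or two random mutations
                child[random.randrange(len(child))] = random.choice(ACTIONS)
            children.append(child)
        return children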

In the case of trash collecting, Mitchell's genetic algorithm was extremely successful regardless of how incapable the robots initially were. As a benchmark, Mitchell first created a robot with the most natural human strategy: pick up a can if the robot is standing on one, move toward a can if one is in sight, and otherwise move in a random direction. This strategy, on a 10 x 10 grid with 50 cans, averaged 346 points out of a possible 500. The first generation of evolved robots, by contrast, scored between –825 and –81 points, obviously far from ideal. Nonetheless, the genetic algorithm enabled rapid improvement: within 1,000 generations the robots were nearly ideal, with a top score of 483. The genetic algorithm, simulating evolution, not only produced drastic improvement; it outperformed the human strategy by a wide margin.
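Scoring follows the rules given earlier: ten points per can, a one-point fine for grabbing at an empty square, and a five-point fine for walking into a wall. Tying the earlier snippets together, the sketch below evaluates and evolves strategies; the session length, the number of grids each strategy is averaged over, and the short demonstration run are assumptions, and a full 1,000-generation run like Mitchell's would be slow in unoptimized Python.

    MOVES = {"north": (-1, 0), "south": (1, 0), "west": (0, -1), "east": (0, 1)}

    def score_strategy(strategy, steps=200, sessions=20):
        """Average a strategy's score over several random grids."""
        total = 0
        for _ in range(sessions):
            grid, row, col = make_grid(), 0, 0
            for _ in range(steps):
                action = strategy[situation(grid, row, col)]
                if action == "random":
                    action = random.choice(list(MOVES))
                if action == "pickup":
                    if grid[row][col]:
                        grid[row][col] = False
                        total += 10           # ten points per can
                    else:
                        total -= 1            # fine for an empty grab
                else:
                    dr, dc = MOVES[action]
                    if look(grid, row + dr, col + dc) == WALL:
                        total -= 5            # fine for hitting a wall
                    else:
                        row, col = row + dr, col + dc
        return total / sessions

    # Evolve 200 random robots and watch the best score climb.
    population = [random_strategy() for _ in range(200)]
    for generation in range(50):
        scores = [score_strategy(s) for s in population]
        population = next_generation(population, scores)
        print(generation, max(scores))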

Naturally, it is quite debatable to what extent intelligence is really involved in this process. Yet the algorithm closely resembles the evolutionary processes that shape both life as a whole and mental decision-making: simply put, if a decision produces a good result, it is reinforced, whether we really understand why or not. Many would correctly point out that the robot is still following directions without any conscious understanding of what it is doing. Yet evolution works the exact same way. One example, pointed out by Sarah Fecht, a graduate student at New York University and writer for Scientific American, is hair-raising, the instinctive human reaction to a sudden shock or scare. As it turns out, hair standing on end makes one look larger, a useful defense mechanism; this trait evolved without any conscious appreciation of its effect. As Fecht put it, "at some point millions of years ago, one of our chilly, puffed-up ancestors scared away a would-be attacker, and hair-raising was slowly established as a useful defense mechanism" (Scientific American, November 2011). There is a strong parallel between this human evolution and the evolution of the trash collectors: both learned and improved without what some would call a conscious appreciation of what they were doing.

Nonetheless, there is still an arguable difference between evolutionary processes and mental ones: mental processes do involve direct thought. Yet the similarities between human decision-making and genetic algorithms are striking. Both involve a straightforward analysis of past experience: touching a hot stove turned out badly, and should be avoided in the future; trying to write an essay an hour before the deadline turned out badly, and should be avoided. (In fact, one could argue that genetic algorithms are more intelligent in this respect, because people do not always learn from past mistakes.) This raises a good question: if the processes involved in the human mind and in genetic algorithms are so similar, can a computer program act just like a human?

The Turing Test

This question is most commonly asked in the form of the Turing test. In the Turing test, a judge enters two conversations: one with a human, and one with a computer. The conversation can be either via text chat (less difficult) or face to face with a human-resembling robot (more difficult, requiring complex facial and body expressions). The judge tries to determine which is the computer and which is the human, while the computer attempts to imitate human behavior as well as possible. Alan Turing's 1950 paper "Computing Machinery and Intelligence" proposed this as a practical test of computer capabilities.

It is indicative of the current state of the art that, as of this writing, every computer entered in the Turing test has failed miserably; no program has yet managed even a mediocre score. Some of the entrants, such as ALICE (Artificial Linguistic Internet Computer Entity), are available online, and anyone who wants a quick laugh can see the current attempts at imitating human conversation. Here is a sample of my own conversation with ALICE, one of the top Turing test competitors:

Judge: Want to help me with my essay?
ALICE: Can you ask for help in the form of a question?
Judge: That was a question.
ALICE: Ask me again a different way please.
Judge: Can you help me with my essay?
ALICE: Certainly, I have an extensive built-in help system.
Judge: Would a human really say that?
ALICE: I don’t care.

Figure 2: A sample conversation with a Turing program attempting humanlike behavior.

Needless to say, few would be convinced that there is a human behind ALICE's words. Yet despite the vast room for improvement, interest in the Turing test has never boomed, for a couple of important reasons.

First, there is a key distinction between knowledge and learning, one that is completely ignored by the Turing test and by many theories of artificial intelligence. Many statements about artificial intelligence refer not so much to acting intelligent as to acting human. As mentioned, a computer program that does not learn, no matter how complex or advanced, cannot truly be considered intelligent; intelligence is tied to the ability to learn and adapt to a given task or environment. In the context of the Turing test, a program that started out with very little understanding of conversation and demonstrated an ability to learn would be far more intelligent than one given a heap of knowledge about conversation and told to use it.

Some of the techniques used by the programs also reflect the disconnect between the Turing test and genuine intelligence. For example, some entrants were deliberately programmed to make occasional grammatical or spelling errors, because errors seem more human. That this improved the programs' scores should be a sign of concern, as it is hard to argue that it made them more intelligent. Rather than test for intelligence, the Turing test somewhat arrogantly equates imitating human behavior with acting intelligent.

Intelligence

Intelligence has two key aspects: the ability to perform a given task, and the ability to learn about that task. Generally, and especially in the case of a robot, it is sensible to define intelligence in terms of the ability to survive. If a robot (or a human) can enter an environment such as a city or a forest, even with prior knowledge, and then adapt and thrive, it can be said to be intelligent. (In everyday usage, intelligence refers to intellectual understanding, which is useful but not broad enough; especially in the context of machine intelligence, the survival-based definition is far more practical.) Subsets of intelligence are useful in particular situations and are commonly invoked. Social and intellectual intelligence, the two types most people imagine intelligent machines having, are not prerequisites for artificial intelligence, although they may be very important. Social intelligence, for example, refers to one's ability to handle social situations; it is very useful in civilization, but to someone lost in a forest it has no relevance.

The importance of different kinds of intelligence varies with the environment, a point UCLA professor Jared Diamond's Pulitzer Prize-winning book Guns, Germs, and Steel demonstrates effectively. Diamond points out that "tests of cognitive ability (like IQ tests) tend to measure cultural learning and not pure innate intelligence" (20). This is the exact problem with the Turing test: it measures a computer's ability to simulate a colloquial conversation with an arbitrary person, a very narrow skill that cannot be said to correlate with intelligence as a whole. A robot's intelligence would be more aptly determined by its ability to thrive in a given environment, which is (not coincidentally) the most important practical test for any life form. In addition, Diamond points out that intelligence in one environment does not translate to intelligence in another. Although he is a UCLA professor, extremely capable in his academic environment, Diamond recalls his "incompetence at simple tasks (such as following a jungle trail or erecting a shelter)" (20) during anthropological fieldwork in Papua New Guinea. Needless to say, these are key skills for New Guineans, and the people Diamond worked with had been familiar with them since birth. In their environment they can be considered more intelligent than Diamond; at UCLA, the exact reverse would be true.

The genetic algorithm for trash collecting is very useful in its given environment. However, it would be all but useless in any real-world situation; a human or robot that followed its instructions would do nothing but pick up trash (at best) and soon die. Thus, although the program is to a degree intelligent in itself, it cannot be said to have artificial intelligence: even though it learns and adapts, it has no ability to survive in the real world. A machine that could both adapt and sustain itself in a real-world situation could be said to be intelligent. Notably, this assessment has no connection with acting human or having human characteristics. Although this may seem odd at first, it is logical; it is somewhat egocentric to equate intelligence with any purely human trait rather than with the ability to learn and survive.

Possibilities Offered by Computers

With this definition in mind, it is safe to say that computers can think; indeed, they can be said to do so already, though not on the same scale as humans. They are capable of adapting and learning, and the real obstacle is writing a program advanced enough, running on a computer fast enough, to handle a place as complicated as the real world. It is much easier to create a program that can play checkers, even though checkers is (mathematically, at least) a very complicated game. Computers have advanced enough that the Chinook checkers program can simply rely on enormous precomputed databases of positions; intelligent programming is no longer necessary. The real world, on the other hand, presents an incalculable number of scenarios that cannot simply be put into a database, so merely developing faster computers with more memory is insufficient, at least for the foreseeable future. However, if computers can adapt to a simple task, it seems only a matter of speed and coding before they are capable of more advanced things. If a genetic algorithm can easily teach a computer to solve the garbage-pickup problem, then a computer can similarly learn and adapt to handle more complicated tasks, such as interacting in a complex social environment. Genetic algorithms provide the key.

Another question, that of emotional intelligence and understanding, is more complicated. Can a machine learn to feel and empathize the way humans and some other animals do? Emotional understanding is often what people mean when they refer to intelligence; in particular, films and literature on the theme of artificial intelligence often involve a robot developing feelings.

This question is much harder to answer definitively than the first, mainly because the use of genetic algorithms is much more questionable here. If a learning program were written and told to act emotional, it could do so, but most people would rightly consider those emotions an "empty shell." The computer would essentially be performing the Turing test: acting human even if there is nothing below the surface. It is difficult to judge how much more there is to emotion than this.

At this point the question becomes philosophical: exactly how significant are human emotions? In my opinion there is no unquantifiable deeper meaning to emotions; that is, there is no aspect of emotion that cannot be explained by the chemistry of the brain and the firing of neurons. There are serious practical difficulties (hence the current lack of progress), but on a fundamental level neurons resemble bits on a computer, with an off state and an on state. If the only physical aspect computers cannot simulate is the chemical balance and imbalance of the brain (serotonin levels, for example), it should certainly be possible to incorporate this into a robot, perhaps with the exact same chemicals.

It should certainly be theoretically possible to design intelligent machines that can learn to perform just about any task at the same level as a human. Although the practical difficulties, far beyond the reach of current technology, have left a strong impression on many people, it is worth noting that the idea of a computer barely existed a century ago. Many of the abilities computers have today were not even recognized as concepts then, let alone considered as possibilities.

Works Cited

  1. Diamond, Jared M. Guns, Germs, and Steel: The Fates of Human Societies. New York: W. W. Norton, 1997. Print.
  2. Fecht, Sarah. "Stuff You Just Need to Know." Scientific American, November 2011: 90-91. Print.
  3. Mitchell, Melanie. Complexity: A Guided Tour. Oxford: Oxford University Press, 2009. Print.
  4. Nordhaus, William D. "Two Centuries of Productivity Growth in Computing." The Journal of Economic History 67.1 (2007): 128-159. Print.
  5. Turing, Alan M. "Computing Machinery and Intelligence." Mind 59 (1950): 433-460. Print.

Steven Zierk grew up in Los Gatos, California but often dreamed of coming to MIT for college. He is a Computer Science major and a brother of the Phi Kappa Sigma Fraternity. He was a very serious chess player for most of his childhood and won the World Under-18 Championship in 2010. Steven's other interests include weightlifting and reading anything he can get his hands on.

Steven wanted to write this essay because he has always found the sheer power of computers fascinating. This is what began his interest in Computer Science and has kept it going ever since. Computers now are nothing short of incredible, and what we have achieved with them is only the tip of the iceberg.