Saturday, September 29, 2007

Supercomputers and Nanotechnology


Many problems we regard as needing cleverness can sometimes be solved by resorting to exhaustive searches, that is, by using massive, raw computer power. This is what happens in most of those inexpensive pocket chess computers. These little machines use programs much like the ones that we developed in the 1960s, using what were then some of the largest research computers in the world. Those old programs worked by examining the consequences of tens of thousands of possible moves before choosing one to actually make. But in those days the programs took so long to make those moves that the concepts they used were discarded as inadequate. Today, however, we can run the same programs on faster computers so that they can consider millions of possible moves, and now they play much better chess. However, that shouldn't fool us into thinking that we now understand the basic problem any better. There is good reason to believe that outstanding human chess players actually examine merely dozens, rather than millions, of possible moves, subjecting each to more thoughtful analysis.
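
A minimal sketch makes the brute-force idea concrete. The code below is plain exhaustive game-tree search (minimax), the examine-every-move procedure described above; the Game interface it assumes (legal_moves, apply, score, is_over) is a hypothetical stand-in for a real chess program's move generator and evaluator, not code from any actual program.

```python
# Sketch of exhaustive game-tree search (minimax) over a hypothetical Game interface.

def minimax(game, depth, maximizing):
    """Score a position by examining every line of play down to a fixed depth."""
    if depth == 0 or game.is_over():
        return game.score()                    # static evaluation of this position
    values = [minimax(game.apply(move), depth - 1, not maximizing)
              for move in game.legal_moves()]
    return max(values) if maximizing else min(values)

def best_move(game, depth):
    """Choose the move whose subtree scores best for the side to move."""
    return max(game.legal_moves(),
               key=lambda move: minimax(game.apply(move), depth - 1, False))
```

With something like thirty legal moves in a typical position, looking only four half-moves ahead already means examining close to a million positions, which is why faster hardware alone was enough to make those old programs play much better.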

In any case, as computers improved in speed and memory size, quite a few programming methods became practical, ones that had actually been discarded in the earlier years of AI research. An Apple desktop computer (or an Amiga, Atari, IBM, or whatever) can do more than a typical million-dollar machine of a decade earlier could, yet private citizens can afford to play games with it. In 1960 a million-bit memory cost a million dollars; today a memory of the same size (and working a hundred times faster) can be purchased for the price of a good dinner. Some seers predict another hundredfold decrease in size and cost, perhaps in less than a decade, when we learn how to make each microcircuit ten times smaller in linear size and thus a hundred times smaller in area. What will happen after that? No one knows, but we can be sure of one thing: those two-dimensional chips we use today make very inefficient use of space. Once we start to build three-dimensional microstructures, we might gain another millionfold in density. To be sure, that would involve serious new problems with power, insulation, and heat. For a futuristic but sensible discussion of such possibilities, I recommend Eric Drexler's Engines of Creation (Anchor Press/Doubleday, 1986).
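
The scaling claim in that prediction is simple arithmetic, and it may help to see it written out; the numbers below are the round figures used in the paragraph, not measurements of any real process.

```python
# Rough scaling arithmetic behind the predictions above (round, illustrative numbers).
linear_shrink = 10                 # each circuit feature made 10x smaller in linear size
area_gain = linear_shrink ** 2     # a flat chip then holds 10^2 = 100x more components
print(f"2-D gain from one shrink step: {area_gain}x")

# A three-dimensional microstructure could stack circuit layers where today's chips
# have essentially one; a layer count on the order of a million is the kind of figure
# behind the "millionfold" guess above (hypothetical, for illustration only).
layers = 1_000_000
print(f"possible extra gain from going 3-D: {layers}x")
```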

Not only have small components become cheaper; they have also become faster. In 1960 a typical component required a microsecond to function; today our circuits operate a thousand times faster. Few optimists, however, predict another thousandfold increase in speed over the next generation. Does this mean that even with decreasing costs we will soon encounter limits on what we can make computers do? The answer is no, because we are just beginning a new era of parallel computers.

Most computers today are still serial; that is, they do only one thing at a time. Typically, a serial computer has millions of memory elements, but only a few of them operate at any moment, while the rest wait for their turn: in each cycle of operation, a serial computer can retrieve and use only one of the items in its memory banks. Wouldn't it be better to keep more of the hardware in actual operation? A more active type of computer architecture was proposed in Daniel Hillis's book The Connection Machine (MIT Press, 1986), which describes a way to assemble a large parallel machine from a great many very small serial computers that operate concurrently and pass messages among themselves. Only a few years after being conceived, Connection Machines are already commercially available, and they indeed appear to have fulfilled their promise to break through some of the speed limitations of serial computers. In certain respects they are now the fastest computers in the world.
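
The difference between the two styles is easiest to see in a small data-parallel sketch. This is not Connection Machine code; it just uses ordinary Python worker processes to stand in for many small processors, each operating on its own piece of the data at the same moment instead of a single processor touching one memory item per cycle.

```python
# Data-parallel sketch: many workers operate on their own parts of the data at once,
# instead of one serial processor retrieving and using a single item per cycle.
from multiprocessing import Pool

def score(x):
    # Per-element work; on a parallel machine each small processor would run this
    # on the data held in its own local memory.
    return x * x + 1

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Serial version: one element at a time.
    serial_result = [score(x) for x in data]

    # Parallel version: the same work spread across a pool of worker processes.
    with Pool() as pool:
        parallel_result = pool.map(score, data)

    assert serial_result == parallel_result
```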

This is not to say that parallel computers do not have their own limitations. For, just as one cannot start building a house before the boards and bricks have arrived, one cannot always start work simultaneously on all aspects of solving a problem. It would certainly be nice if we could take any program for a serial computer, divide it into a million parts, and then get the answer a million times faster by running those parts simultaneously on that many computers in parallel. But that can't be done in general, particularly when certain parts of the solution depend upon the solutions to other parts. Nevertheless, such a division quite often turns out to be feasible in actual practice. And although this is only a guess, I suspect that it will happen surprisingly often for the purposes of artificial intelligence. Why do I think so? Simply because it seems very clear that our brains themselves must work that way.
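
There is a standard way to make that limitation precise, usually credited to Gene Amdahl rather than to anything said above: if a fraction s of a program's work is inherently sequential because later steps depend on earlier results, then no number of processors can push the overall speedup past 1/s. The sketch below simply evaluates that formula for a few illustrative values of s.

```python
# Speedup limit for a program whose sequential fraction is s (Amdahl's observation).

def speedup(s, processors):
    """Overall speedup when a fraction s of the work cannot be parallelized."""
    return 1.0 / (s + (1.0 - s) / processors)

for s in (0.5, 0.1, 0.01):
    print(f"sequential fraction {s:0.2f}: "
          f"a million processors give {speedup(s, 1_000_000):10.1f}x, "
          f"ceiling is {1 / s:10.1f}x")
```

Even with a million processors, a program that is one-tenth sequential cannot run much more than ten times faster; the millionfold speedup appears only when almost nothing has to wait for anything else.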

Consider that brain cells work at very modest speeds in comparison to the speeds of computer parts. They work at rates of less than a thousand operations per second, a million times slower than what happens inside a modern computer circuit chip. Could any computer with such slow parts do all the things that a person can do? The answer must lie in parallel computation: different parts of the brain must do many more different things at the same time. True, that would take at least a billion nerve cells working in parallel, but the brain has many times that number of cells.
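
The arithmetic behind that answer takes only a few lines; the figures below are the rough order-of-magnitude numbers used in the paragraph, not measurements.

```python
# Back-of-the-envelope figures from the paragraph above (orders of magnitude only).
neuron_rate = 1_000                  # operations per second for a single slow nerve cell
chip_rate = neuron_rate * 1_000_000  # a circuit "a million times" faster: 10^9 ops/sec
neurons_in_parallel = 1_000_000_000  # a billion cells working at the same time

brain_throughput = neuron_rate * neurons_in_parallel   # 10^12 operations per second
print(brain_throughput // chip_rate)  # 1000: massive parallelism outweighs slow parts
```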
