
The creativity machine

It writes music, invents soft drinks and dreams up hard materials. The man who built it points the way to immortality.

By Bob Holmes

STEVE THALER THINKS he knows how to kindle a soul in the circuitry of a computer. With his key to consciousness in hand, computers will no longer be mere drones but creative beings with free will. Someday, he predicts, these machines will become so powerful that humans will choose to leave their obsolete flesh-and-blood "wetware" behind and live full time in the hardware world.

"And we wouldn't have to die, provided someone pays the electric bill," he says. He pauses to consider the implications of that. "I think there's going to be a big market for immortality."

Yes, quite.

Who is this guy?

A physical description - middle 40s, shortish, medium build, glasses - misses the essence of the man entirely. What impresses most about Thaler is his inexhaustible energy and enthusiasm. A visitor to his office, in the basement of his St Louis home, is swept away by a torrent of ideas, explanations, anecdotes, dreams, plans, examples and achievements, all focused on Thaler's path to artificial intelligence. Clearly, Thaler believes in what he's doing, even if other AI experts do not.

"People say 'Who in the hell is Steve Thaler anyway? He's making some really bold claims'," he says. "But look at Columbus. He was an outcast for the most part, with a lot of ridicule and scorn. Look at Galileo, excommunicated by the Pope, possibly about to lose his life. The community had not made the same leap they had."

Thaler admits that immortality is a bit beyond his grasp right now, but he's working on it. In the meantime he has developed a computer program that he claims is capable of independent, creative invention. Already, Thaler says, he and his Creativity Machine have written music, designed soft drinks, and discovered novel minerals that may rival diamond in hardness. A high-technology company has hired the Creativity Machine to search for new high-temperature superconductors, while Thaler has put the machine to work designing its own successor.

What began as a casual interest twenty years ago grew into a hobby and finally exploded into an obsession that in recent years has gobbled 60 hours a week. Not unreasonable for a driven professional nowadays, except that until last autumn, Thaler worked full time as a scientist for the aerospace firm McDonnell Douglas. He left there in October to concentrate on the Creativity Machine.

Thaler's creation grew out of his work on neural networks - computers that mimic the structure of interconnected nerve cells within the brain. The type of network Thaler uses is made up of individual nodes, analogous to brain cells, arranged in layers. The user feeds data into an input layer, and the result emerges from an output layer. In between these two are one or more "hidden layers". Each node receives signals from nodes in the previous layer and adds them. If the total is big enough, then - like a brain cell firing - the node sends signals on to nodes in the next layer.

The network takes information as an array of "off" or "on" states in the input layer, processes it, and generates a result as a series of off or on states. For any input, the programmer can change the result by adjusting the strength of signals sent out by a single node to others further on - the "weight" of each connection in neural network jargon.
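The layered arrangement described above can be sketched in a few lines of Python. This is a minimal illustration with made-up layer sizes and random weights; real networks usually use smooth activation functions rather than the hard on/off threshold the article describes:

```python
import numpy as np

def step_layer(inputs, weights, threshold=0.5):
    """One layer of threshold units: each node sums its weighted
    inputs and fires (outputs 1) only if the total is big enough."""
    totals = inputs @ weights              # weighted sum into each node
    return (totals > threshold).astype(float)

def forward(x, w_hidden, w_output):
    """Input layer -> hidden layer -> output layer."""
    hidden = step_layer(x, w_hidden)
    return step_layer(hidden, w_output)

# A toy 3-input, 4-hidden, 2-output network with random weights.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(3, 4))
w_output = rng.normal(size=(4, 2))

x = np.array([1.0, 0.0, 1.0])              # an "on/off" input pattern
print(forward(x, w_hidden, w_output))      # an "on/off" output pattern
```

Adjusting any entry of `w_hidden` or `w_output` changes the "weight" of one connection, which is the knob the article refers to.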

Unlike conventional computers, which are programmed in painstaking, step-by-step detail, neural networks can be trained. For example, to train a network to recognise cars, one might feed in a digitised representation of a Ford car, then jiggle the connection weights in the network until the output shows the desired on-off pattern that stands for "Ford". After doing the same with several other Ford models, one might go on to Mercedes or Jaguars.

As the training continues, a surprising thing happens. Nodes in the hidden layer begin to specialise, so that each stands for a particular characteristic. One node might activate whenever a bumper with a particular curve is present, for instance, while another might turn on only in response to a boxy profile.

At the same time, the network comes to associate certain combinations of these characteristics with particular makes of car. In effect, the network picks out the key features needed to identify cars so it can classify even models it has never seen before. And it learns these concepts without having to be fed an exhaustive set of "if-then" rules.
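The "jiggle the weights until the output is right" recipe amounts to random hill-climbing, which can be sketched as below. This is a toy illustration, not Thaler's actual code, and the patterns and sizes are invented; practical networks are trained with the far more directed backpropagation algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, w2):
    """Two-layer threshold network: on/off pattern in, on/off pattern out."""
    h = (x @ w1 > 0).astype(float)
    return (h @ w2 > 0).astype(float)

# Toy training set: four 4-bit input patterns and the on/off output
# we want for each of them.
inputs  = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]], float)
targets = np.array([[1], [1], [0], [0]], float)

w1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=(3, 1))

def mistakes(a, b):
    """How many output bits are wrong across the whole training set."""
    return int(sum(np.sum(forward(x, a, b) != t) for x, t in zip(inputs, targets)))

# "Jiggle the connection weights", keeping a jiggle only if it helps.
best = mistakes(w1, w2)
for _ in range(5000):
    if best == 0:
        break
    t1 = w1 + rng.normal(scale=0.3, size=w1.shape)
    t2 = w2 + rng.normal(scale=0.3, size=w2.shape)
    if mistakes(t1, t2) < best:
        w1, w2, best = t1, t2, mistakes(t1, t2)

print("wrong output bits after training:", best)
```

During a run like this, the hidden nodes drift towards representing whichever input features best separate the training examples, which is the specialisation the article describes.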

Near-death experience

Thaler took this basic design and added an extra wrinkle, born of his personal experience. "I had a near-death experience when I was a child," he says. The strangeness of this experience prompted him, years later, to see what would happen to a neural network if you tried to kill it.

He trained a network, then held its input constant and watched what happened to the output as he gradually turned off connections at random - the human equivalent of having individual connections between neurons die. To his surprise, instead of disintegrating, the network produced an ever-changing stream of familiar outputs. The network interpreted the shifts in its internal state as though they were caused by changes in input, and tried to guess at reasonable outputs, much as a human might guess a word with some letters missing.

Later, Thaler found that the network responded the same way if instead of deleting links he simply changed some connection weights. All it took was some sort of disturbance, or noise, in the network's internal workings and the network would begin generating responses to nonexistent inputs - what Thaler calls the virtual input effect. "This is the equivalent of internal imagery. It's like staring at a blank screen and having internal images parade past," says Thaler. This "imagining" network forms half of the Creativity Machine.
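The virtual input effect is easy to demonstrate on a toy network: hold the input fixed, repeatedly disturb the connection weights, and watch the output change anyway. In this sketch, random weights stand in for a genuinely trained network, so the outputs are arbitrary patterns rather than meaningful "imagery":

```python
import numpy as np

rng = np.random.default_rng(42)

def forward(x, w1, w2):
    """Two-layer threshold network."""
    h = (x @ w1 > 0).astype(float)
    return (h @ w2 > 0).astype(float)

# A (pretend-)trained network and one fixed input pattern.
w1 = rng.normal(size=(6, 8))
w2 = rng.normal(size=(8, 4))
x = np.array([1, 0, 1, 1, 0, 0], float)

# Hold the input constant and repeatedly disturb the connection
# weights: the output keeps changing even though the input doesn't,
# as though the network were responding to "virtual" inputs.
outputs = set()
for _ in range(200):
    n1 = w1 + rng.normal(scale=0.5, size=w1.shape)   # internal noise
    n2 = w2 + rng.normal(scale=0.5, size=w2.shape)
    outputs.add(tuple(forward(x, n1, n2)))

print(len(outputs), "distinct output patterns from one fixed input")
```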

The noisy network will generate variations on whatever the original network was trained to produce. To generate silhouettes of car shapes, for example, Thaler defines the salient points of a car's profile - top of windscreen, bottom of windscreen, front of bonnet - using a string of zeros and ones, which represent the position of these points in two dimensions. As he trains the network, nodes in the hidden layer come to represent particular components of car shape, and the connection weights that link these nodes to the output layer represent ways in which the components can be combined.

Then Thaler begins to change connection weights at random, activating unusual combinations of components. If he adds too much noise, the network tosses components together haphazardly, producing what he calls "Picasso cars" - bizarre, twisted shapes with wasp waists and their wheels in the air, for example. If he adds too little noise, the network produces only the cars it has seen during training. But between these extremes is fertile ground in which the network assembles components in new ways that still conform to a basic notion of carness.

This brainstorming phase produces boring, ugly or poor variations as well as good ones, however. So Thaler adds a second, filtering neural network, which he trains by showing it a set of examples and saying for each "this is good" or "this is bad".
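The two-network arrangement can be sketched as a generate-and-filter loop. Everything here is an illustrative assumption: the "trained" imagining weights are random, and a simple scoring function stands in for the second, trained filtering network:

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(x, w1, w2):
    """Two-layer threshold network (the imagining half)."""
    h = (x @ w1 > 0).astype(float)
    return (h @ w2 > 0).astype(float)

# Imagining network: weights are random here, standing in for training.
w1 = rng.normal(size=(5, 10))
w2 = rng.normal(size=(10, 8))
x = np.ones(5)                  # input held constant

def filter_score(candidate):
    """Stand-in for the filtering network: this toy version simply
    prefers candidates with roughly half their bits switched on."""
    return -abs(candidate.sum() - len(candidate) / 2)

keepers = []
for _ in range(500):
    # Brainstorm: moderate noise on the imagining network's weights...
    n1 = w1 + rng.normal(scale=0.3, size=w1.shape)
    n2 = w2 + rng.normal(scale=0.3, size=w2.shape)
    candidate = forward(x, n1, n2)
    # ...then let the filter pass only the promising variations.
    if filter_score(candidate) >= -1:
        keepers.append(candidate)

print(f"kept {len(keepers)} of 500 brainstormed patterns")
```

Turning the `scale` of the noise up or down reproduces the trade-off described above: too much gives haphazard "Picasso" outputs, too little gives only the training examples back.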

The Creativity Machine's basic design can be used for myriad purposes, says Thaler. One weekend, for example, he showed the machine a smattering of popular songs - actually just short phrases of about 10 notes without any accompanying harmonies - then turned it loose to imagine some new ones. The filtering network selected 11 000 of the best themes and Thaler sent them to the US Library of Congress to be copyrighted. "That makes me technically the most prolific songwriter of all time," he boasts.

"It is mind-boggling to think about it," says Thaler. "Who else wrote 11 000 musical [themes] in their lifetime? And the same applies to patents. I could become the most prolific inventor, technically speaking. According to my attorney, whatever the Creativity Machine invents is, by default, my intellectual property."

Sanity check

Thaler also set to work on the more serious question of discovering ultrahard materials. He is reluctant to talk about the precise configuration of the Creativity Machine he used because he has applied for a patent on it. But he will say that the imagining network is large, with inputs and outputs representing every possible quantum state for every electron in every atom of a molecule.

Thaler trained this network by showing it about 200 examples of two-element molecules, such as water and iron oxide, to teach it plausible combinations and proportions. He also trained the filtering network so its output correctly gave the Mohs score - a standard measure of hardness - for each molecule, and then filtered out those with a low score. Then he turned on the noise and waited. "I felt I was in a good position as a materials scientist to run a sanity check on the output of that machine," says Thaler, whose work with McDonnell Douglas included making synthetic diamonds.

The results looked promising. The machine correctly identified known ultrahard materials such as boron nitride and boron carbide, even though it had never seen these during training. It also proposed C, which a group of theoreticians at Harvard University suggested almost simultaneously as a likely ultrahard material.

The machine also pointed out several untested polymers of boron, beryllium, or carbon doped with small amounts of hydrogen. "At first I looked at this output and said this is ridiculous. But then I made a model, and it made sense. It recognised a rule, implicitly, that I think human beings hadn't thought about much." He has now licensed his Creativity Machine to Advanced Refractories Technology, a company in Buffalo, New York, to develop new ultrahard materials and high-temperature superconductors.

The other possibilities are endless, Thaler says. "Five years from now, you're going to have every computer science graduate student writing a creativity machine program - sometimes for very mundane tasks like how to successfully flirt or how to optimise your wardrobe," he says. He is now developing software that will let anyone turn their home computers into creativity machines.

Few other AI researchers know of Thaler's work, and those who do view it with caution. The Creativity Machine sounds clever, they say, but Thaler may be overstepping the line in calling it truly creative.

"Such claims in AI ought to be subject to the same scepticism as claims of perpetual motion," says Jordan Pollack, a cognitive scientist at Brandeis University in Waltham, Massachusetts. His is a typical reaction to Thaler's machine. "The entire history of AI is filled with what is called 'creative naming'. You have a program that does something and you name it 'the thinker'. By naming the program that way, you can get confused and think that's what your program is actually doing."

Thaler's program takes a familiar path into the realm of computer creativity, says Stan Franklin of the University of Memphis, Tennessee. "Almost everyone who has looked at creativity, in cognitive science and computer science, has agreed that it's a generate-and-test process," he says. Such programs - and there are a number of them - work by producing a range of possibilities, then testing them for usefulness or aesthetic value.

Thaler's approach is different in that his idea generator is trained so that it implicitly "knows" the constraints of the subject, be it music or minerals. Given the right level of noise - something only Thaler adds to the hidden layer of a neural network - the network generates possibilities without wasting time on clearly unworkable solutions.

Even so, Thaler's imagining network still tosses out a mishmash of good and bad ideas indiscriminately - just like all the others. To be called truly creative, a program must be able to sort the good from the bad, says Pollack. Here, Thaler imposes all the creativity from outside by training the filtering network to his own standards. "Is that creativity or is that trainable, mechanical filtering based on the person who selected the network?" asks Pollack.

In the end, experts may never agree on whether a computer is being creative, because no one knows what creativity is. "There's no consensus about how you can tell when a human being is doing something creative," says Margaret Boden, an expert on creativity at the University of Sussex. "The concept isn't a scientific one. It's an intuitive one."

Thaler sees deep links between his program and true humanlike creativity. Human brains use the same principles of construction as neural network computers, he argues, and all biological systems have some level of noise, so the virtual input effect should apply to human brains just as readily as to computers. These spontaneous outputs in our brains are what we call dreams and, at higher levels of noise, hallucinations, he says.

In waking life, noise may also be the source of the thoughts and flashes of inspiration from the unconscious, the very essence of free will, human consciousness, and even the soul. "I tend to think of the soul as the manifestation of this stochastic process going on," says Thaler. "It's the noise in the machine, rather than the ghost in the machine. I'm sure the Church is going to want to burn me on the front lawn, but I'll be in good company."

But this spontaneous fount of ideas - the soul, if you believe Thaler - is not all that's going on in a human mind. "You can be led astray into thinking that I'm saying that the stream of consciousness is nothing more than noise being translated into different thoughts. But that's not the case," he says. "You've got noise, which is producing neural images in no distinct order, but those helter-skelter thoughts can now excite whole chains of associations. That is the systematic part of intelligence."

Of course, to put all this into a neural network would probably need hundreds of billions of neurons connected in a complex tangle - rather like the human brain, in fact. Building and programming such a monstrosity is still far beyond the ability of Thaler or anyone else. Indeed some, like Pollack, think the "scaling up" is the hardest part of creating true artificial intelligence.

But for Thaler, the hard work has been done and scaling up will inevitably follow. "The big breakthrough is building something that has a free will. That technical problem has already been overcome.

"This is almost a private religion for me, in that at last there's something tangible which I can see as having a free will. People tend to think of evolution as being strictly protoplasmic or biological, but I think that evolution can now take many forms, including silicon-based electronics," he says. "People will start to ask the question: how am I distinct from these machines? And I think the inevitable answer, though sidestepped for decades, is that we're the same. Once we looked at the birds and emulated what was needed to fly, and now we're flying. And now I think we know enough about the brain that we can make real brains and do the things we thought were sacrosanct, like creation."

From New Scientist, 20 January 1996

© Copyright New Scientist, RBI Limited 1999