Saturday 3 April 2010

Discovery News - Will a Computer's Conscious Mind Emerge (Nov 2009)

http://news.discovery.com/tech/computer-conscious-mind-emerge.html

Will a Computer's Conscious Mind Emerge?

Fri Nov 20, 2009 09:48 AM ET | content provided by Greg Fish

As you might have heard, supercomputers are now powerful enough to simulate crucial parts of cat brains and are on their way to mapping sections of the human brain to learn more about its basic functions. One day in the near future, we may very well be looking at complete simulations of a human brain that can imitate our key mental abilities. And if you believe some of the more ambitious computer science theoreticians, that would be a giant leap towards creating conscious and aware artificial intelligence.

If the human brain is data being passed from neuron to neuron at its basic level and we can simulate that in a computer, shouldn’t a conscious mind start to emerge?

Simulated Thought Is a Long Way from Real Thinking

This argument, advanced by Michael Vassar of the Singularity Institute and his colleagues, is one of those ideas that sound intuitively plausible but prove highly dubious in practice. The difference between simulated thinking and conscious thinking can be illustrated by comparing a computer-simulated boat with a real one.

A high-end graphics program will let you draw a boat and put it on a virtual plane of water. It will let you specify the environment, solve a set of Navier-Stokes equations to model the water, calculate the force to apply to each section of the ship, and then work out how the ship reacts to those forces. The end result is a visualization of what we think the motion should look like, not a real boat.
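To see what such a "simulated boat" amounts to under the hood, here is a minimal sketch in Python: a handful of numbers updated by simple force rules. The hull sections, drag coefficients and mass are invented for illustration; no real graphics package works exactly this way.

```python
# Hypothetical toy model: a boat's vertical motion computed from per-section
# buoyancy and drag. All numbers are placeholders chosen for illustration.

RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def step(draft, velocity, hull_sections, mass, dt=0.01):
    """Advance the boat's draft (how deep it sits in the water) by one step."""
    force = mass * G                                      # gravity pulls it down
    for area, drag_coeff in hull_sections:
        force -= RHO_WATER * G * area * max(draft, 0.0)   # buoyancy pushes back up
        force -= drag_coeff * area * velocity             # water resists the motion
    accel = force / mass
    velocity += accel * dt
    draft += velocity * dt
    return draft, velocity

# Drop a 500 kg boat with two hull sections onto the water and let it settle.
draft, velocity = 0.0, 0.0
sections = [(1.5, 50.0), (1.5, 50.0)]   # (waterline area in m^2, drag coefficient)
for _ in range(2000):
    draft, velocity = step(draft, velocity, sections, mass=500.0)
print(f"settled draft: about {draft:.2f} m")
```

However elaborate the rules become, the output is still just numbers to draw from, which is exactly the point being made here.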

If you want to simulate how the brain works, you need to imitate the electrical signals that tell neurons which neurotransmitters to release. It's a messy and complicated process, rife with constant misfiring.
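As a rough illustration of what imitating those signals means in code, here is a toy leaky integrate-and-fire neuron with a small chance of misfiring. The threshold, leak and misfire probability are invented for the example and are not taken from the IBM or cat-brain work.

```python
import random

# Toy neuron: it leaks charge, integrates its inputs, fires when a threshold
# is crossed, and occasionally misfires. Parameters are illustrative only.

def simulate_neuron(inputs, threshold=1.0, leak=0.9, misfire_p=0.02):
    """Return a list of 0/1 spikes, one per input sample."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current     # integrate with leak
        fired = potential >= threshold
        if random.random() < misfire_p:            # the "constant misfiring"
            fired = not fired
        if fired:
            potential = 0.0                        # reset after a spike
        spikes.append(int(fired))
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.6, 0.7]))
```

A whole-brain simulation is, at bottom, an enormous number of rules of roughly this character being evaluated over and over.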

Just like our example of a virtual boat, a digital human brain would be a visualization of what we’re pretty sure happens in our heads according to current scientific knowledge. This is why the manager of IBM’s Cognitive Computing Unit, Dharmendra Modha, says, "Our hope is that by incorporating many of the ingredients that neuroscientists think may be important to cognition in the brain, such as general statistical connectivity pattern and plastic synapses, we may be able to use the model as a tool to help understand how the brain produces cognition."

Translation: simulations of a human brain will give us an approximate map of how the thought process plays out; a conscious, self-aware mind is not going to arise from this statistical construct. The point is to try to make a computer that comes up with several approaches to tackling a problem, not to create a virtual human, or a digital cat, that can match wits with a real person or a real feline.

A Computer Brain Is Still Just Code

In the future, if we model an entire brain in real time, down to every neuron, every signal and every burst of neurotransmitter, we'll still just end up with a very complex visualization controlled by a complex set of routines and subroutines.

These models could help neurosurgeons by mimicking what would happen during novel brain surgery, or provide ideas for neuroscientists, but they're not going to come alive or become self-aware: as far as the computer is concerned, they exist as millions of lines of code based on a multitude of formulas and rules. The real chemistry that makes our brains work will stay locked in our heads, far away from the circuitry trying to reproduce its results.

Now, if we built a new generation of computers using organic components, the simulations we could run might have some very interesting results.

New Scientist magazine - Memristor Minds

New Scientist magazine - 04 July 2009 Issue 2715

Memristor minds


What connects our own human intelligence to the unsung cunning of slime moulds? An electronic component that no one thought existed, as Justin Mullins explains

EVER had the feeling something is missing? If so, you're in good company. Dmitri Mendeleev did in 1869 when he noticed four gaps in his periodic table. They turned out to be the undiscovered elements scandium, gallium, technetium and germanium. Paul Dirac did in 1929 when he looked deep into the quantum-mechanical equation he had formulated to describe the electron. Besides the electron, he saw something else that looked rather like it, but different. It was only in 1932, when the electron's antimatter sibling, the positron, was sighted in cosmic rays, that such a thing was found to exist.

In 1971, Leon Chua had that feeling. A young electronics engineer with a penchant for mathematics at the University of California, Berkeley, he was fascinated by the fact that electronics had no rigorous mathematical foundation. So like any diligent scientist, he set about trying to derive one.

And he found something missing: a fourth basic circuit element besides the standard trio of resistor, capacitor and inductor. Chua dubbed it the “memristor”. The only problem was that as far as Chua or anyone else could see, memristors did not actually exist.

Except that they do. Within the past couple of years, memristors have morphed from obscure jargon into one of the hottest properties in physics. They’ve not only been made, but their unique capabilities might revolutionise consumer electronics. More than that, though, along with completing the jigsaw of electronics, they might solve the puzzle of how nature makes that most delicate and powerful of computers – the brain.

That would be a fitting pay-off for a story which, in its beginnings, is a triumph of pure logic. Back in 1971, Chua was examining the four basic quantities that define an electronic circuit. First, there is electric charge. Then there is the change in that charge over time, better known as current. Currents create magnetic fields, leading to a third variable, magnetic flux, which characterises the field’s strength. Finally, magnetic flux varies with time, leading to the quantity we call voltage.

Four interconnected things, mathematics says, can be related in six ways. Charge and current, and magnetic flux and voltage, are connected through their definitions. That’s two. Three more associations correspond to the three traditional circuit elements. A resistor is any device that, when you pass current through it, creates a voltage. For a given voltage a capacitor will store a certain amount of charge. Pass a current through an inductor, and you create a magnetic flux. That makes five. Something missing?

Indeed. Where was the device that connected charge and magnetic flux? The short answer was there wasn’t one. But there should have been.

Chua set about exploring what this device would do. It would be something that no combination of resistors, capacitors and inductors could do: because moving charges make currents, and changing magnetic fluxes breed voltages, the new device would generate a voltage from a current rather like a resistor, but in a complex, dynamic way. In fact, Chua calculated, it would behave like a resistor that could "remember" what current had flowed through it before (see diagram below). Thus the memristor was born.
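In standard notation, the four quantities and their pairwise relations can be summarised like this, with the memristance M(q) supplying the missing charge-flux link:

```latex
% Definitions of current and voltage
i = \frac{dq}{dt}, \qquad v = \frac{d\varphi}{dt}
% The three classical circuit elements
v = R\,i \ \text{(resistor)}, \qquad q = C\,v \ \text{(capacitor)}, \qquad \varphi = L\,i \ \text{(inductor)}
% The missing charge-flux relation: Chua's memristor
d\varphi = M(q)\,dq \quad\Longrightarrow\quad v = M(q)\,i
```

When M is a constant this collapses to Ohm's law; the interesting behaviour appears when M depends on the charge that has already flowed through the device.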

And promptly abandoned. Though it was welcome in theory, no physical device or material seemed capable of the resistance-with-memory effect. The fundamentals of electronics have kept Chua busy ever since, but even he had low expectations for his baby. "I never thought I'd see one of these devices in my lifetime," he says.

He had reckoned without Stan Williams, senior fellow at the Hewlett-Packard Laboratories in Palo Alto, California. In the early 2000s, Williams and his team were wondering whether you could create a fast, low-power switch by placing two tiny resistors made of titanium dioxide over one another, using the current in one to somehow toggle the resistance in the other on and off.

Nanoscale novelty

They found that they could, but the resistance in different switches behaved in a way that was impossible to predict using any conventional model. Williams was stumped. It took three years and a chance tip-off from a colleague about Chua’s work before the revelation came. “I realised suddenly that the equations I was writing down to describe our device were very similar to Chua’s,” says Williams. “Then everything fell into place.”

What was happening was this: in its pure state of repeating units of one titanium and two oxygen atoms, titanium dioxide is a semiconductor. Heat the material, though, and some of the oxygen is driven out of the structure, leaving electrically charged bubbles that make the material behave like a metal.

In Williams's switches, the upper resistor was made of pure semiconductor, and the lower of the oxygen-deficient metal. Applying a voltage to the device pushes charged bubbles up from the metal, radically reducing the semiconductor's resistance and making it into a full-blown conductor. A voltage applied in the other direction starts the merry-go-round revolving the other way: the bubbles drain back down into the lower layer, and the upper layer reverts to a high-resistance, semiconducting state.

The crucial thing is that, every time the voltage is switched off, the merry-go-round stops and the resistance is frozen. When the voltage is switched on again, the system "remembers" where it was, waking up in the same resistance state (Nature, vol 453, p 80).
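A common way to capture this behaviour in simulation is a linear ion-drift model, sketched below in Python with illustrative parameters rather than the values from the Nature paper. The state w stands for the fraction of the film that is oxygen-deficient and metallic; the current drags it up or down, and switching the voltage off leaves it exactly where it was.

```python
# Minimal linear-drift memristor sketch (illustrative parameters only).
R_ON, R_OFF = 100.0, 16000.0    # ohms: fully metallic vs fully semiconducting film
K = 10000.0                     # lumped drift coefficient, chosen for illustration
DT = 1e-4                       # time step in seconds

def drive(w, voltage, seconds):
    """Apply a constant voltage for a while; return the updated state w."""
    for _ in range(int(seconds / DT)):
        r = R_ON * w + R_OFF * (1.0 - w)          # resistance interpolates with the state
        i = voltage / r                           # Ohm's law at this instant
        w = min(max(w + K * i * DT, 0.0), 1.0)    # state drifts with the charge passed
    return w

w = 0.1
w = drive(w, voltage=+1.0, seconds=0.5)   # forward bias lowers the resistance
print(f"after +1 V: w = {w:.2f}")
w = drive(w, voltage=0.0, seconds=1.0)    # power off: nothing moves
print(f"after 0 V:  w = {w:.2f}  (state remembered)")
w = drive(w, voltage=-1.0, seconds=0.5)   # reverse bias raises the resistance again
print(f"after -1 V: w = {w:.2f}")
```

The middle step is the whole trick: with the power off, the state, and hence the stored bit, simply sits there.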

Williams had accidentally made a memristor just as Chua had described it. Williams could also show why a memristor had never been seen before. Because the effect depends on atomic-scale movements, it only popped up on the nanoscale of Williams’s devices. “On the millimetre scale, it is essentially unobservable,” he says.

Nanoscale or no, it rapidly became clear just how useful memristors might be. Information can be written into the material as the resistance state of the memristor in a few nanoseconds using just a few picojoules of energy – “as good as anything needs to be”, according to Williams. And once written, memristive memory stays written even when the power is switched off.

Memory mould

[Diagram: A memristor never forgets. The "resistor with memory" that Leon Chua described behaves like a pipe whose diameter varies according to the amount and direction of the current passing through it. If the current is turned off, the pipe's diameter stays the same until it is switched on again – it "remembers" what current has flowed through it.]

This was a revelation. For 50 years, electronics engineers had been building networks of dozens of transistors – the building blocks of memory chips – to store single bits of information, without knowing it was memristance they were attempting to simulate. Now Williams, standing on the shoulders of Chua, had shown that a single tiny component was all they needed.

The most immediate potential use is as a powerful replacement for flash memory – the kind used in applications that require quick writing and rewriting capabilities, such as in cameras and USB memory sticks. Like flash memory, memristive memory can only be written 10,000 times or so before the constant atomic movements within the device cause it to break down. That makes it unsuitable for computer memories. Still, Williams believes it will be possible to improve the durability of memristors. Then, he says, they could be just the thing for a superfast random access memory (RAM), the working memory that computers use to store data on the fly, and ultimately even for hard drives.

Were this an article about a conventional breakthrough in electronics, that would be the end of the story. Better memory materials alone do not set the pulse racing. We have come to regard ever zippier consumer electronics as a basic right, and are notoriously insouciant about the improvements in basic physics that make them possible. What’s different about memristors?

Explaining that requires a dramatic change of scene – to the world of the slime mould Physarum polycephalum. In an understated way, this large, gloopy, single-celled organism is a beast of surprising intelligence. It can sense and react to its environment, and can even solve simple puzzles. Perhaps its most remarkable skill, though, was reported last year by Tetsu Saigusa and his colleagues at Hokkaido University in Sapporo, Japan: it can anticipate periodic events.

Here’s how we know. P. polycephalum can move around by passing a watery substance known as sol through its viscous, gelatinous interior, allowing it to extend itself in a particular direction. At room temperature, the slime mould moves at a slothful rate of about a centimetre per hour, but you can speed this movement up by giving the mould a blast of warm, moist air.

You can also slow it down with a cool, dry breeze, which is what the Japanese researchers did. They exposed the gloop to 10 minutes of cold air, allowed it to warm up again for a set period of time, and repeated the sequence three times. Sure enough, the mould slowed down and sped up in time with the temperature changes.

But then they changed the rules. Instead of giving P. polycephalum a fourth blast of cold air, they did nothing. The slime mould's reaction was remarkable: it slowed down again, in anticipation of a blast that never came (Physical Review Letters, vol 100, p 018101).

It’s worth taking a moment to think about what this means. Somehow, this single-celled organism had memorised the pattern of events it was faced with and changed its behaviour to anticipate a future event. That’s something we humans have trouble enough with, let alone a single-celled organism without a neuron to call its own.

The Japanese paper rang a bell with Max Di Ventra, a physicist at the University of California, San Diego. He was one of the few who had followed Chua's work, and recognised that the slime mould was behaving like a memristive circuit. To prove his contention, he and his colleagues set about building a circuit that would, like the slime mould, learn and predict future signals.

The analogous circuit proved simple to derive. Changes in an external voltage applied to the circuit simulated changes in the temperature and humidity of the slime mould's environment, and the voltage across a memristive element represented the slime mould's speed. Wired up the right way, the memristor's voltage would vary in tempo with an arbitrary series of external voltage pulses. When "trained" through a series of three equally spaced voltage pulses, the memristor voltage repeated the response even when subsequent pulses did not appear (www.arxiv.org/abs/0810.4179).
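The sketch below is a cartoon of that behaviour rather than the authors' actual circuit: a damped oscillator tuned to the one-second rhythm of the pulses, whose damping plays the role of a memristive state – rhythmic driving wears it down, and it stays down when the drive stops. After three "training" pulses, the response keeps ringing at the trained rhythm even though the fourth pulse never arrives. All of the numbers are invented for illustration.

```python
import math

DT = 0.001                  # time step, seconds
OMEGA = 2.0 * math.pi       # natural frequency: one cycle per second, matching the pulses

def run(pulse_times, total_time=8.0):
    x, v = 0.0, 0.0         # the circuit's "response" and its rate of change
    damping = 2.0           # memristive state: starts high
    trace = []
    for n in range(int(total_time / DT)):
        t = n * DT
        drive = 5.0 if any(abs(t - p) < 0.1 for p in pulse_times) else 0.0
        accel = -OMEGA ** 2 * x - damping * v + drive
        v += accel * DT
        x += v * DT
        # driving the element lowers its damping, and the change persists afterwards
        damping = max(0.05, damping - 5.0 * drive * abs(v) * DT)
        trace.append((t, x))
    return trace

trace = run(pulse_times=[1.0, 2.0, 3.0])        # regular pulses; none arrives at t = 4
late_peak = max(x for t, x in trace if t > 3.5)
print(f"still responding after the missed pulse: peak = {late_peak:.2f}")
```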

Di Ventra speculates that the viscosities of the sol and gel components of the slime mould make for a mechanical analogue of memristance. When the external temperature rises, the gel component starts to break down and become less viscous, creating new pathways through which the sol can flow and speeding up the cell’s movement. A lowered temperature reverses that process, but how the initial state is regained depends on where the pathways were formed, and therefore on the cell’s internal history.

In true memristive fashion, Chua had anticipated the idea that memristors might have something to say about how biological organisms learn. While completing his first paper on memristors, he became fascinated by synapses – the gaps between nerve cells in higher organisms across which nerve impulses must pass. In particular, he noticed their complex electrical response to the ebb and flow of potassium and sodium ions across the membranes of each cell, which allow the synapses to alter their response according to the frequency and strength of signals. It looked maddeningly similar to the response a memristor would produce. “I realised then that synapses were memristors,” he says. “The ion channel was the missing circuit element I was looking for, and it already existed in nature.”
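To make the analogy concrete, here is a deliberately simple toy of a synapse-like element whose conductance depends on the history of the spikes it has seen. The update rule and every number in it are invented for illustration; this is not Chua's analysis or any published synapse model.

```python
# Toy memristive synapse: rapid bursts of spikes strengthen its conductance,
# long pauses let it relax, so its response depends on the signal history.

def memristive_synapse(spike_times, g=0.2, g_min=0.1, g_max=1.0,
                       boost=0.15, decay=0.05):
    """Return the synapse's conductance as seen by each incoming spike."""
    responses, last_t = [], None
    for t in spike_times:
        if last_t is not None:
            gap = t - last_t
            g = max(g_min, g - decay * gap)   # slow relaxation between spikes
            if gap < 0.02:                    # spikes arriving close together strengthen it
                g = min(g_max, g + boost)
        responses.append(round(g, 3))
        last_t = t
    return responses

# Two rapid bursts separated by a pause: the bursts strengthen, the pause weakens.
print(memristive_synapse([0.00, 0.01, 0.02, 0.50, 0.51, 0.52]))
```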

To Chua, this all points to a home truth. Despite years of effort, attempts to build an electronic intelligence that can mimic the awesome power of a brain have seen little success. And that might be simply because we were lacking the crucial electronic components – memristors.

So now we’ve found them, might a new era in artificial intelligence be at hand? The Defense Advanced Research Projects Agency certainly thinks so. DARPA is a US Department of Defense outfit with a strong record in backing high-risk, high-pay-off projects – things like the internet. In April last year, it announced the Systems of Neuromorphic Adaptive Plastic Scalable Electronics Program, SyNAPSE for short, to create “electronic neuromorphic machine technology that is scalable to biological levels”.

I, memristor

Williams's team from Hewlett-Packard is heavily involved. Late last year, in an obscure US Department of Energy publication called SciDAC Review, his colleague Greg Snider set out how a memristor-based chip might be wired up to test more complex models of synapses. He points out that in the human cortex synapses are packed at a density of about 10^10 per square centimetre, whereas today's microprocessors only manage densities 10 times less. "That is one important reason intelligent machines are not yet walking around on the street," he says.

Snider’s dream is of a field he calls “cortical computing” that harnesses the possibilities of memristors to mimic how the brain’s neurons interact. It’s an entirely new idea. “People confuse these kinds of networks with neural networks,” says Williams. But neural networks – the previous best hope for creating an artificial brain – are software working on standard computing hardware. “What we’re aiming for is actually a change in architecture,” he says.

The first steps are already being taken. Williams and Snider have teamed up with Gail Carpenter and Stephen Grossberg at Boston University, who are pioneers in reducing neural behaviours to systems of differential equations, to create hybrid transistor-memristor chips designed to reproduce some of the brain's thought processes. Di Ventra and his colleague Yuriy Pershin have gone further and built a memristive synapse that they claim behaves like the real thing (www.arxiv.org/abs/0905.2935).

The electronic brain will be some time coming. "We're still getting to grips with this chip," says Williams. Part of the problem is that the chip is just too intelligent – rather than a standard digital pulse it produces an analogue output that flummoxes the standard software used to test chips. So Williams and his colleagues have had to develop their own test software. "All that takes time," he says.

Chua, meanwhile, is not resting on his laurels. He has been busy extending his theory of fundamental circuit elements, asking what happens if you combine the properties of memristors with those of capacitors and inductors to produce compound devices called memcapacitors and meminductors, and then what happens if you combine those devices, and so on.

“Memcapacitors may be even more useful than memristors,” says Chua, “because they don’t have any resistance.” In theory at least, a memcapacitor could store data without dissipating any energy at all. Mighty handy – whatever you want to do with them. Williams agrees. In fact, his team is already on the case, producing a first prototype memcapacitor earlier this year, a result that he aims to publish soon. “We haven’t characterised it yet,” he says. With so many fundamental breakthroughs to work on, he says, it’s hard to decide what to do next. Maybe a memristor could help.

[Photo caption: They might not look much, but slime moulds can be surprisingly quick-witted beasts]

Justin Mullins is a consultant editor for New Scientist