Wednesday 13 October 2010

How Quantum Computers Work

 
Source: How Stuff Works



The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn't count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.

Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
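As a rough check on that date range (my back-of-the-envelope arithmetic, not the article's, assuming feature sizes halve about every three years, i.e. two transistor doublings): shrinking from the roughly 100-nanometre features of early-2000s chips to an atomic scale of about 0.2 nanometres takes

100\,\mathrm{nm} \times 2^{-t/(3\,\mathrm{yr})} \approx 0.2\,\mathrm{nm}
\;\Rightarrow\; t \approx 3\log_2(500) \approx 27\ \mathrm{years},

which lands at the far end of the 2020-2030 window.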

Scientists have already built basic quantum computers that can perform certain calculations; but a practical quantum computer is still years away. In this article, you'll learn what a quantum computer is and just what it'll be used for in the next era of computing.

You don't have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981, when he theorized about creating a quantum Turing machine. Most digital computers, like the one you are using to read this article, are based on the Turing machine. Learn what this is in the next section.


Defining the Quantum Computer

The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write head reads these symbols and blanks, which gives the machine its instructions for performing a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be 0, 1 or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.

Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.
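To make the superposition idea concrete, here is a minimal sketch in Python (with NumPy) of how a single qubit's state is written down. The example is an addition of mine, not part of the original article, and simulating one qubit classically like this carries none of the quantum speed-up:

import numpy as np

# A qubit is a pair of complex amplitudes (a, b) for the states |0> and |1>,
# normalized so that |a|^2 + |b|^2 = 1.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# An equal superposition of 0 and 1:
psi = (zero + one) / np.sqrt(2)

# Measurement collapses the state; the outcome probabilities are the
# squared magnitudes of the amplitudes.
print(np.abs(psi) ** 2)  # [0.5 0.5] -- an even chance of reading 0 or 1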


The Bloch sphere is a representation of a qubit, the fundamental building block of quantum computers.

This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
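The scaling behind that claim, in standard notation (a summary I have added, not the article's): an n-qubit register is described by 2^n complex amplitudes, and a quantum operation acts on all of them in a single step.

% State of an n-qubit register: one amplitude per basis state.
\left|\psi\right\rangle = \sum_{x=0}^{2^{n}-1} a_x \left|x\right\rangle,
\qquad \sum_{x} |a_x|^2 = 1
% For n = 30 that is 2^{30}, roughly a billion, amplitudes at once.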

Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, so that the second atom takes on the properties of the first. Left alone, an atom will spin in all directions; the instant it is disturbed it chooses one spin, or one value, and at the same moment the second entangled atom chooses the opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
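The same kind of toy simulation illustrates the anti-correlated pair described above (again my illustration, using the standard two-qubit singlet state):

import numpy as np

# Two-qubit state: four amplitudes, for |00>, |01>, |10> and |11>.
# The singlet (|01> - |10>) / sqrt(2) is entangled: the two qubits always
# measure opposite, as the paragraph above describes.
state = np.zeros(4, dtype=complex)
state[0b01] = 1 / np.sqrt(2)
state[0b10] = -1 / np.sqrt(2)

print(np.abs(state) ** 2)  # [0. 0.5 0.5 0.] -- only "01" or "10", never equal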

Next, we'll look at some recent advancements in the field of quantum computing.

Qubit Control

Computer scientists control the microscopic particles that act as qubits in quantum computers by using control devices.

· Ion traps use electric or magnetic fields (or a combination of both) to trap ions.

· Optical traps use light waves to trap and control particles.

· Quantum dots are made of semiconductor material and are used to contain and manipulate electrons.

· Semiconductor impurities contain electrons by using "unwanted" atoms found in semiconductor material.

· Superconducting circuits allow electrons to flow with almost no resistance at very low temperatures.


Today's Quantum Computers

Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.

The most advanced quantum computers have not gone beyond manipulating 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers one day could perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers. Several key advancements have been made in quantum computing in the last few years. Let's look at a few of the quantum computers that have been developed.

1998
Los Alamos and MIT researchers managed to spread a single qubit across three nuclear spins in each molecule of a liquid solution of alanine (an amino acid used to analyze quantum state decay) or trichloroethylene (a chlorinated hydrocarbon used for quantum error correction) molecules. Spreading out the qubit made it harder to corrupt, allowing researchers to use entanglement to study interactions between states as an indirect method for analyzing the quantum information.

2000
In March, scientists at Los Alamos National Laboratory announced the development of a 7-qubit quantum computer within a single drop of liquid. The quantum computer uses nuclear magnetic resonance (NMR) to manipulate particles in the atomic nuclei of molecules of trans-crotonic acid, a simple fluid consisting of molecules made up of six hydrogen and four carbon atoms. The NMR is used to apply electromagnetic pulses, which force the particles to line up. These particles in positions parallel or counter to the magnetic field allow the quantum computer to mimic the information-encoding of bits in digital computers.

In August, researchers at IBM's Almaden Research Center developed what they claimed was the most advanced quantum computer to date. The 5-qubit quantum computer was designed to allow the nuclei of five fluorine atoms to interact with each other as qubits, be programmed by radio frequency pulses and be detected by NMR instruments similar to those used in hospitals (see How Magnetic Resonance Imaging Works for details). Led by Dr. Isaac Chuang, the IBM team was able to solve in one step a mathematical problem that would take conventional computers repeated cycles. The problem, called order-finding, involves finding the period of a particular function, a typical aspect of many mathematical problems involved in cryptography.

2001
Scientists from IBM and Stanford University successfully demonstrated Shor's Algorithm on a quantum computer. Shor's Algorithm is a method for finding the prime factors of numbers (which plays an intrinsic role in cryptography). They used a 7-qubit computer to find the factors of 15. The computer correctly deduced that the prime factors were 3 and 5.
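For a sense of what was actually computed, here is a classical Python sketch of the number theory underlying Shor's Algorithm. The brute-force loop below performs the order-finding step that the 7-qubit machine did quantum-mechanically; the code is my illustration, not IBM's:

from math import gcd

def factor_via_order(N, a):
    """Factor N from the order of a modulo N (the core of Shor's Algorithm)."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g  # lucky guess: a already shares a factor with N
    # Order-finding: the smallest r with a^r = 1 (mod N). This search is
    # the step a quantum computer speeds up exponentially.
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None  # this base fails; retry with a different a
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

print(factor_via_order(15, 7))  # (3, 5), matching the 2001 experiment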

2005
The Institute of Quantum Optics and Quantum Information at the University of Innsbruck announced that scientists had created the first qubyte, or series of 8 qubits, using ion traps.

2006
Scientists in Waterloo and Massachusetts devised methods for quantum control on a 12-qubit system. Quantum control becomes more complex as systems employ more qubits.

2007
Canadian startup company D-Wave demonstrated a 16-qubit quantum computer. The computer solved a sudoku puzzle and other pattern-matching problems. The company claims it will produce practical systems by 2008. Skeptics believe practical quantum computers are still decades away, that the system D-Wave has created isn't scalable, and that many of the claims on D-Wave's Web site are simply impossible (or at least impossible to know for certain given our understanding of quantum mechanics).

If functional quantum computers can be built, they will be valuable in factoring large numbers, and therefore extremely useful for decoding and encoding secret information. If one were to be built today, no information on the Internet would be safe. Our current methods of encryption are simple compared to the complicated methods possible in quantum computers. Quantum computers could also be used to search large databases in a fraction of the time that it would take a conventional computer. Other applications could include using quantum computers to study quantum mechanics, or even to design other quantum computers.
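The database-search speed-up mentioned above is quadratic rather than exponential: it comes from Grover's quantum search algorithm (Lov Grover's overview appears in the sources below). In rough terms:

% Finding one marked item among N unsorted entries:
T_{\mathrm{classical}} \sim N/2 \ \text{lookups on average}, \qquad
T_{\mathrm{Grover}} \sim \frac{\pi}{4}\sqrt{N} \ \text{quantum queries}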

But quantum computing is still in its early stages of development, and many computer scientists believe the technology needed to create a practical quantum computer is years away. Quantum computers must have at least several dozen qubits to be able to solve real-world problems, and thus serve as a viable computing method.

For more information on quantum computers and related topics, check out the sources listed below.


Photo courtesy © 2007 D-Wave Systems, Inc.
D-Wave's 16-qubit quantum computer





Sources

· "12-qubits Reached In Quantum Information Quest." Science Daily, May 2006.
http://www.sciencedaily.com/releases/2006/05/060508164700.htm

· Aaronson, Scott. "Shtetl-Optimized." April 10, 2007.
http://scottaaronson.com/blog

· Bone, Simone and Matias Castro. "A Brief History of Quantum Computing." Imperial College, London, Department of Computing. 1997.
http://www.doc.ic.ac.uk/~nd/surprise_97/journal/vol4/spb3/

· Boyle, Alan. "A quantum leap in computing." MSNBC, May 18, 2000.
http://www.msnbc.msn.com/id/3077363

· "Center for Extreme Quantum Information Theory (xQIT), MIT." TechNews, March 2007.
http://www.technologynewsdaily.com/node/6280

· Centre for Quantum Computer Technology
http://www.qcaustralia.org/

· Cory, D.G., et al. "Experimental Quantum Error Correction." American Physical Society, Physical Review Online Archive, September 1998.
http://prola.aps.org/abstract/PRL/v81/i10/p2152_1

· Grover, Lov K. "Quantum Computing." The Sciences, July/August 1999.
http://cryptome.org/qc-grover.htm

· Hogg, Tad. "An Overview of Quantum Computing," in Quantum Computing and Phase Transitions in Combinatorial Search. Journal of Artificial Intelligence Research, 4, 91-128 (1996).
http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume4/hogg96a-html/node6.html

· "IBM's Test-Tube Quantum Computer Makes History." IBM Research, December 19, 2001.
http://domino.watson.ibm.com/comm/pr.nsf/pages/ news.20011219_quantum.html

· Institute for Quantum Computing.
http://www.iqc.ca

· Jonietz, Erika. "Quantum Calculation." Technology Review, July 2005.
http://www.technologyreview.com/Infotech/14591

· Maney, Kevin. "Beyond the PC: Atomic QC." USA Today.
http://www.amd1.com/quantum_computers.html

· "Quantum Computing." Stanford Encyclopedia of Philosophy, February 26, 2007.
http://plato.stanford.edu/entries/qt-quantcomp

· Qubit.org
http://www.qubit.org

· Simonite, Tom. "Flat 'ion trap' holds quantum computing promise." NewScientistTech, July 2006.
http://www.newscientisttech.com/article/dn9502-flat-ion-trap-holds-quantum-computing-promise.html

· Vance, Ashlee. "D-Wave qubits in the era of Quantum Computing." The Register, February 13, 2007.
http://www.theregister.co.uk/2007/02/13/dwave_quantum

· West, Jacob. "The Quantum Computer." Computer Science at CalTech, April 28, 2000.
http://www.cs.caltech.edu/~westside/quantum-intro.html

Monday 11 October 2010

Second Life - Use shortcut keys for easier navigation

Task: Use Second Life Shortcut Keys
Required software: Second Life

1.
Alt+Left (Right) Arrow key - hold the Alt key, point to an object (by left-clicking on it) and press the Left (Right) Arrow key; the camera will start rotating around this object

Alt+Left Mouse Button+moving the mouse left (right) - does the same thing as specified above

2.
Alt+Up (Down) Arrow key - hold the Alt key, point to an object (by left-clicking on it) and press the Up (Down) Arrow key; the camera will start zooming in to (out from) the object.

Alt+Left Mouse Button+moving the mouse forward (backwards) - does the same thing as specified above

*If you want to return to the default camera view, just press the Left (Right) Arrow key or the "C" key

3.
Ctrl+Alt+Left Mouse Button+moving the mouse left / right - rotates the camera horizontally around the focused object (the distance between the camera and the object stays fixed)
Ctrl+Alt+Left Mouse Button+moving the mouse forward / backwards - rotates the camera up / down (the distance between the camera and the object stays fixed)

4.
Ctrl+8 - zooms out from the object in focus

Ctrl+0 - zooms in on the object the camera is pointing at

Ctrl+9 - resets to the default distance between the camera and the object in focus


5. By pressing the M key you will switch into Mouselook view, which means the camera becomes your avatar's eyesight. In this view, moving the mouse to the sides turns your body, and moving it up/down turns your head up/down (during these movements you don't see your avatar's body).


6. E (PgUp) key - Jump

7. F (Home) - Fly on / off

*Note: Most basic keys:

Left (Right) Arrow Key - turn left , turn right
Up (Down) Arrow Key - move forward, move backwards

References:
- special thanks to Melissa
- http://wiki.secondlife.com/wiki/All_keyboard_shortcut_keys
- http://wiki.secondlife.com/wiki/Help:Keyboard_shortcut_keys

Linux (Ubuntu) - Capture, convert and edit Second Life videos with glc and kdenlive


Task: Capture, edit and convert Second Life videos
Operating System: Ubuntu Linux 10.04 LTS
Used software: glc, kdenlive


The second video I captured in Second Life, at The Galaxy Night Club: http://www.youtube.com/watch?v=BzJ_qV-HU-U

Full instructions for capturing Second Life videos on Ubuntu Linux 10.04 with glc:


Part A.

Install glc and mencoder on your computer (running Ubuntu Linux)

1. Ubuntu (32-bit) users need the following packages installed in order to compile glc:
sudo apt-get install build-essential cmake libx11-dev libxxf86vm-dev libgl1-mesa-dev libasound2-dev libpng12-dev


2. Additionally, on 64-bit systems the following commands are necessary:

sudo apt-get install gcc-multilib
sudo ln -s /usr/lib32/libGL.so.1 /usr/lib32/libGL.so
sudo ln -s /usr/lib32/libasound.so.2 /usr/lib32/libasound.so
sudo ln -s /usr/lib32/libXxf86vm.so.1 /usr/lib32/libXxf86vm.so
sudo ln -s /usr/lib32/libX11.so.6 /usr/lib32/libX11.so
sudo ln -s /usr/lib32/libpng12.so.0 /usr/lib32/libpng.so

Note: For some other Linux distributions, please follow the instructions on the link:
http://nullkey.ath.cx/projects/glc/wiki/HowtoInstall

3. Install and compile glc from the Terminal

wget http://nullkey.ath.cx/glc/scripts/glc-build.sh
chmod a+x glc-build.sh
./glc-build.sh

*While installing the package you will be prompted to choose among some options (yes / no) - please choose the default values and complete the installation

4. Install mencoder, which is required in Part B, Step 6 for converting the .glc project file to an .avi video

Open Synaptic Package Manager and on the left side highlight All. In the search field type mencoder, mark the box (if it is not installed yet) and click Apply



Part B.


1. Download the Second Life Viewer for Linux from http://secondlife.com/support/downloads/?lang=en-US

Then unarchive the .tar.bz2 file to a folder on your computer (for example, your Desktop):

/home/username/Desktop/SecondLife-i686-2.1.1.208043/secondlife

2. Create a GLC project folder where you will save your glc project file (.glc) and the converted .avi file (no sudo is needed for a folder inside your own home directory):

mkdir /home/username/Desktop/glcvideos

3. In my specific case I get an error when I try to extract the audio from the raw .glc file, so I use the command that captures only video, which also avoids recording unreasonably large files. When you run the command below in the Terminal, the Second Life Viewer will launch and glc will start capturing from it:

glc-capture -o /home/username/Desktop/glcvideos/test.glc -r 1 -l disable-audio -s /home/username/Desktop/SecondLife-i686-2.1.1.208043/secondlife

If you want to use the resize option while recording (-r) and to have the audio capturing enabled, just type:

glc-capture -o /home/username/Desktop/glcvideos/test.glc -r 0.5 -s /home/username/Desktop/SecondLife-i686-2.1.1.208043/secondlife

* -r is a parameter specifying the resize (scale) factor

4. While you have the Second Life window maximized, press Shift+F8 to stop capturing media

5. Go to the Terminal and type the command below to play back the captured video from the .glc project file saved in your GLC project folder:

glc-play /home/username/Desktop/glcvideos/test.glc

6. Transcode the video from the .glc file into an .avi file (without sound)
*The glc audio encoding option explained at http://nullkey.ath.cx/projects/glc/wiki/HowtoEncode (Part 1: transcode audio to MP3) does not work for me with videos captured in Second Life (I get the error "Warning: Unsupported audio format"), so I apply only the command that compresses the video:

glc-play /home/username/Desktop/glcvideos/test.glc -o - -y 1 | mencoder -demuxer y4m - -nosound -ovc x264 -x264encopts qp=18:pass=1 -of avi -o /home/username/Desktop/glcvideos/video.avi

Part C.

In this part I will show you how to add an audio track to the captured video, how to add some effects, and how to save the video in .mp4 format using Kdenlive ... coming soon

References:


http://nullkey.ath.cx/projects/glc
http://nullkey.ath.cx/projects/glc/wiki/HowtoInstall
http://nullkey.ath.cx/projects/glc/wiki/HowtoCapture
http://nullkey.ath.cx/projects/glc/wiki/HowtoPlayback
http://nullkey.ath.cx/projects/glc/wiki/HowtoEncode
http://www.linux.com/community/blogs/Recording-your-3D-Games-Made-Easy.html
http://primforge.com/2009-02-07/capturing-with-glc/
http://www.5min.com/Video/How-to-Render-Videos-in-Kdenlive-161784133
http://www.kdenlive.org/rendering-profiles
http://www.kdenlive.org/youtube-1280x720-3000k-2-pass
http://www.kdenlive.org/mlt-profiles



Acknowledgements:


*I would like to express my endless gratitude to the glc & kdenlive developers and contributors, also to the people that have written the amazing user guides and articles for these software packages.

*Special thanks to my friend Pav for helping me out with my blog's web design and saving me a lot of time. Many kisses !!!

Wednesday 6 October 2010

Second Life - The Galaxy pt.2 (captured with glc & kdenlive, Ubuntu)



This is my second video uploaded to YouTube (captured in SL). I have improved the video capture by:

1. Enabling The Galaxy club's permanent dancing option: I left-clicked on the Galaxy Host, a window popped up asking "Do you want to animate your avatar?", and I clicked Yes.

2. Using the Fade in / Fade out effect in kdenlive

Open your kdenlive project. Highlight the audio track on the timeline, click Tools - Add Audio Effect - Fade in / Fade out, and choose the time span for the relevant fade in / fade out effect.

3. Using Second Life shortcut keys for easier navigation (please follow the link)

Sunday 3 October 2010

Second Life - The Galaxy pt.1 (captured with glc & kdenlive)



Hi, guys! I hope you enjoy the video I captured at the Galaxy club in Second Life. I initially recorded it in .avi format using glc, then added the Korn track and converted it to .mp4 format using kdenlive (on Ubuntu Linux 10.04).

Here is the link to the full article "Capture, edit & convert Second Life videos"

Saturday 3 April 2010

Discovery News - Will a Computer's Conscious Mind Emerge (Nov 2008)

http://news.discovery.com/tech/computer-conscious-mind-emerge.html

Will a Computer's Conscious Mind Emerge?

If the human brain is data being passed from neuron to neuron at its basic level and we can simulate that in a computer, shouldn’t a conscious mind start to emerge?


Fri Nov 20, 2009 09:48 AM ET | content provided by Greg Fish


If you want to simulate how the brain works, you need to imitate the electrical signals there that tell neurons which neurotransmitters to release.
iStockphoto

As you might have heard, supercomputers are now powerful enough to simulate crucial parts of cat brains and are on their way to map sections of the human mind to learn more about its basic functions. One day in the near future, we may very well be looking at complete simulations of a human brain that can imitate our key mental abilities. And if you believe some of the more ambitious computer science theoreticians, we’d make a giant leap towards creating conscious and aware artificial intelligence.

If the human brain is data being passed from neuron to neuron at its basic level and we can simulate that in a computer, shouldn’t a conscious mind start to emerge?

Simulated Thought Is a Long Way from Real Thinking

This argument, advanced by Michael Vassar of the Singularity Institute and his colleagues, is one of those ideas that sound intuitively plausible but prove highly dubious in practice. The difference between simulated thinking and conscious thinking can be illustrated by the difference between a computer-simulated boat and a real one.

High-end graphics programs will let you draw a boat and put it on a virtual plane of water. They will let you specify the environment, solve a number of Navier-Stokes equations, calculate the exact amount of force to apply to each section of the ship, and then work out how the ship reacts to the changes. The end result is a visualization of what we think looks right, not a real boat.

If you want to simulate how the brain works, you need to imitate the electrical signals there that tell neurons which neurotransmitters to release. It's a messy and complicated process rife with constant misfiring.

Just like our example of a virtual boat, a digital human brain would be a visualization of what we’re pretty sure happens in our heads according to current scientific knowledge. This is why the manager of IBM’s Cognitive Computing Unit, Dharmendra Modha, says, "Our hope is that by incorporating many of the ingredients that neuroscientists think may be important to cognition in the brain, such as general statistical connectivity pattern and plastic synapses, we may be able to use the model as a tool to help understand how the brain produces cognition."

Translation: the simulations of a human brain will give us an approximate map of how the thought process plays out and a conscious, self-aware mind is not going to arise from this statistical construct. The point is to try and make a computer that comes up with several approaches to tackling a problem, not to create a virtual human, or a digital cat that can match wits with a real human or a real feline respectively.

A Computer Brain is Still Just Code

In the future, if we model an entire brain in real time on the level of every neuron, every signal and every burst of neurotransmitter, we’ll just end up with a very complex visualization controlled by a complex set of routines and subroutines.

These models could help neurosurgeons by mimicking what would happen during novel brain surgery, or provide ideas for neuroscientists, but they’re not going to become alive or self-aware, since as far as a computer is concerned they live as millions of lines of code based on a multitude of formulas and rules. The real chemistry that makes our brains work will be locked in our heads, far away from the circuitry trying to reproduce its results.

Now, if we built a new generation of computers using organic components, the simulations we could run could have some very interesting results.

New Scientist magazine - Memristor Minds

New Scientist magazine - 04 July 2009 Issue 2715

Memristor minds


What connects our own human intelligence to the unsung cunning of slime moulds? An electronic component that no one thought existed, as Justin Mullins explains

EVER had the feeling something is missing? If so, you’re in good company. Dmitri Mendeleev did in 1869 when he noticed four gaps in his periodic table. They turned out to be the undiscovered elements scandium, gallium, technetium and germanium. Paul Dirac did in 1929 when he looked deep into the quantum-mechanical equation he had formulated to describe the electron. Besides the electron, he saw something else that looked rather like it, but different. It was only in 1932, when the electron’s antimatter sibling, the positron, was sighted in cosmic rays that such a thing was found to exist.

In 1971, Leon Chua had that feeling. A young electronics engineer with a penchant for mathematics at the University of California, Berkeley, he was fascinated by the fact that electronics had no rigorous mathematical foundation. So like any diligent scientist, he set about trying to derive one.

And he found something missing: a fourth basic circuit element besides the standard trio of resistor, capacitor and inductor. Chua dubbed it the “memristor”. The only problem was that as far as Chua or anyone else could see, memristors did not actually exist.

Except that they do. Within the past couple of years, memristors have morphed from obscure jargon into one of the hottest properties in physics. They’ve not only been made, but their unique capabilities might revolutionise consumer electronics. More than that, though, along with completing the jigsaw of electronics, they might solve the puzzle of how nature makes that most delicate and powerful of computers – the brain.

That would be a fitting pay-off for a story which, in its beginnings, is a triumph of pure logic. Back in 1971, Chua was examining the four basic quantities that define an electronic circuit. First, there is electric charge. Then there is the change in that charge over time, better known as current. Currents create magnetic fields, leading to a third variable, magnetic flux, which characterises the field’s strength. Finally, magnetic flux varies with time, leading to the quantity we call voltage.

Four interconnected things, mathematics says, can be related in six ways. Charge and current, and magnetic flux and voltage, are connected through their definitions. That’s two. Three more associations correspond to the three traditional circuit elements. A resistor is any device that, when you pass current through it, creates a voltage. For a given voltage a capacitor will store a certain amount of charge. Pass a current through an inductor, and you create a magnetic flux. That makes five. Something missing?

Indeed. Where was the device that connected charge and magnetic flux? The short answer was there wasn’t one. But there should have been.
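The bookkeeping, in standard notation (my summary, not the magazine's): the two definitions and the three classical elements give five relations, and the missing sixth would link charge and flux.

% Definitions: current is flowing charge, voltage is changing flux.
i = \frac{dq}{dt}, \qquad v = \frac{d\varphi}{dt}
% The three classical elements:
v = R\,i \ (\text{resistor}), \qquad q = C\,v \ (\text{capacitor}), \qquad \varphi = L\,i \ (\text{inductor})
% The missing charge-flux link defines the memristance M(q):
d\varphi = M(q)\,dq \;\Longrightarrow\; v = M(q)\,i
% M is a resistance whose value depends on the charge that has already
% flowed through the device -- a resistor with memory.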

Chua set about exploring what this device would do. It was something that no combination of resistors, capacitors and inductors could do. Because moving charges make currents, and changing magnetic fluxes breed voltages, the new device would generate a voltage from a current rather like a resistor, but in a complex, dynamic way. In fact, Chua calculated, it would behave like a resistor that could “remember” what current had flowed through it (see “A memristor never forgets”, below). Thus the memristor was born.

And promptly abandoned. Though it was welcome in theory, no physical device or material seemed capable of the resistance-with-memory effect. The fundamentals of electronics have kept Chua busy ever since, but even he had low expectations for his baby. “I never thought I’d see one of these devices in my lifetime,” he says.

He had reckoned without Stan Williams, senior fellow at the Hewlett-Packard Laboratories in Palo Alto, California. In the early 2000s, Williams and his team were wondering whether you could create a fast, low-power switch by placing two tiny resistors made of titanium dioxide over one another, using the current in one to somehow toggle the resistance in the other on and off.

Nanoscale novelty

They found that they could, but the resistance in different switches behaved in a way that was impossible to predict using any conventional model. Williams was stumped. It took three years and a chance tip-off from a colleague about Chua’s work before the revelation came. “I realised suddenly that the equations I was writing down to describe our device were very similar to Chua’s,” says Williams. “Then everything fell into place.”

What was happening was this: in its pure state of repeating units of one titanium and two oxygen atoms, titanium dioxide is a semiconductor. Heat the material, though, and some of the oxygen is driven out of the structure, leaving electrically charged bubbles that make the material behave like a metal.

In Williams’s switches, the upper resistor was made of pure semiconductor, and the lower of the oxygen-deficient metal. Applying a voltage to the device pushes charged bubbles up from the metal, radically reducing the semiconductor’s resistance and making it into a full-blown conductor. A voltage applied in the other direction starts the merry-go-round revolving the other way: the bubbles drain back down into the lower layer, and the upper layer reverts to a high-resistance, semiconducting state.

The crucial thing is that, every time the voltage is switched off, the merry-go-round stops and the resistance is frozen. When the voltage is switched on again, the system “remembers” where it was, waking up in the same resistance state (Nature, vol 453, p 80).
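The model published in that Nature paper (Strukov et al.) can be written compactly; this summary is my addition, with D the film thickness and w the width of the oxygen-deficient, conducting region:

v(t) = \left( R_{\mathrm{ON}}\,\frac{w(t)}{D} + R_{\mathrm{OFF}}\left(1 - \frac{w(t)}{D}\right) \right) i(t),
\qquad \frac{dw}{dt} = \mu_{V}\,\frac{R_{\mathrm{ON}}}{D}\, i(t)
% w tracks the total charge that has flowed, so the resistance carries a
% memory of the current history; when i = 0, w -- and the resistance -- is frozen.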

Williams had accidentally made a memristor just as Chua had described it. Williams could also show why a memristor had never been seen before. Because the effect depends on atomic-scale movements, it only popped up on the nanoscale of Williams’s devices. “On the millimetre scale, it is essentially unobservable,” he says.

Nanoscale or no, it rapidly became clear just how useful memristors might be. Information can be written into the material as the resistance state of the memristor in a few nanoseconds using just a few picojoules of energy – “as good as anything needs to be”, according to Williams. And once written, memristive memory stays written even when the power is switched off.
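A minimal numerical sketch of that model in Python makes the freezing behaviour visible (the parameter values are illustrative assumptions, not HP's measured figures):

# Toy integration of the titanium dioxide memristor model above.
R_ON, R_OFF = 100.0, 16000.0   # ohms
D, MU_V = 10e-9, 1e-14         # film thickness (m), ion mobility (m^2/V.s)
DT = 1e-4                      # time step (s)

def step(w, v):
    # Advance the conducting-region width w one time step under voltage v.
    resistance = R_ON * (w / D) + R_OFF * (1 - w / D)
    current = v / resistance
    w += MU_V * (R_ON / D) * current * DT
    return min(max(w, 0.0), D), resistance

w = 0.1 * D
for v in [1.0] * 2000:        # voltage pulse: the resistance drifts down
    w, r = step(w, v)
print("after pulse:", round(r), "ohms")
for v in [0.0] * 2000:        # power off: no current, so the state is frozen
    w, r = step(w, v)
print("after idle: ", round(r), "ohms")  # unchanged -- the memory effect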

Memory mould

A memristor never forgets

[Diagram] The “resistor with memory” that Leon Chua described behaves like a pipe whose diameter varies according to the amount and direction of the current passing through it. If the current is turned off, the pipe’s diameter stays the same until it is switched on again – it “remembers” what current has flowed through it.

This was a revelation. For 50 years, electronics engineers had been building networks of dozens of transistors – the building blocks of memory chips – to store single bits of information, without knowing it was memristance they were attempting to simulate. Now Williams, standing on the shoulders of Chua, had shown that a single tiny component was all they needed.

The most immediate potential use is as a powerful replacement for flash memory – the kind used in applications that require quick writing and rewriting capabilities, such as in cameras and USB memory sticks. Like flash memory, memristive memory can only be written 10,000 times or so before the constant atomic movements within the device cause it to break down. That makes it unsuitable for computer memories. Still, Williams believes it will be possible to improve the durability of memristors. Then, he says, they could be just the thing for a superfast random access memory (RAM), the working memory that computers use to store data on the fly, and ultimately even for hard drives.

Were this an article about a conventional breakthrough in electronics, that would be the end of the story. Better memory materials alone do not set the pulse racing. We have come to regard ever zippier consumer electronics as a basic right, and are notoriously insouciant about the improvements in basic physics that make them possible. What’s different about memristors?

Explaining that requires a dramatic change of scene – to the world of the slime mould Physarum polycephalum. In an understated way, this large, gloopy, single-celled organism is a beast of surprising intelligence. It can sense and react to its environment, and can even solve simple puzzles. Perhaps its most remarkable skill, though, was reported last year by Tetsu Saigusa and his colleagues at Hokkaido University in Sapporo, Japan: it can anticipate periodic events.

Here’s how we know. P. polycephalum can move around by passing a watery substance known as sol through its viscous, gelatinous interior, allowing it to extend itself in a particular direction. At room temperature, the slime mould moves at a slothful rate of about a centimetre per hour, but you can speed this movement up by giving the mould a blast of warm, moist air.

You can also slow it down with a cool, dry breeze, which is what the Japanese researchers did. They exposed the gloop to 10 minutes of cold air, allowed it to warm up again for a set period of time, and repeated the sequence three times. Sure enough, the mould slowed down and sped up in time with the temperature changes.

But then they changed the rules. Instead of giving P. polycephalum a fourth blast of cold air, they did nothing. The slime mould’s reaction was remarkable: it slowed down again, in anticipation of a blast that never came (Physical Review Letters, vol 100, p 018101).

It’s worth taking a moment to think about what this means. Somehow, this single-celled organism had memorised the pattern of events it was faced with and changed its behaviour to anticipate a future event. That’s something we humans have trouble enough with, let alone a single-celled organism without a neuron to call its own.

The Japanese paper rang a bell with Max Di Ventra, a physicist at the University of California, San Diego. He was one of the few who had followed Chua’s work, and recognised that the slime mould was behaving like a memristive circuit. To prove his contention, he and his colleagues set about building a circuit that would, like the slime mould, learn and predict future signals.

The analogous circuit proved simple to derive. Changes in an external voltage applied to the circuit simulated changes in the temperature and humidity of the slime mould’s environment, and the voltage across a memristive element represented the slime mould’s speed. Wired up the right way, the memristor’s voltage would vary in tempo with an arbitrary series of external voltage pulses. When “trained” through a series of three equally spaced voltage pulses, the memristor voltage repeated the response even when subsequent pulses did not appear (www.arxiv.org/abs/0810.4179).

Di Ventra speculates that the viscosities of the sol and gel components of the slime mould make for a mechanical analogue of memristance. When the external temperature rises, the gel component starts to break down and become less viscous, creating new pathways through which the sol can flow and speeding up the cell’s movement. A lowered temperature reverses that process, but how the initial state is regained depends on where the pathways were formed, and therefore on the cell’s internal history.

In true memristive fashion, Chua had anticipated the idea that memristors might have something to say about how biological organisms learn. While completing his first paper on memristors, he became fascinated by synapses – the gaps between nerve cells in higher organisms across which nerve impulses must pass. In particular, he noticed their complex electrical response to the ebb and flow of potassium and sodium ions across the membranes of each cell, which allow the synapses to alter their response according to the frequency and strength of signals. It looked maddeningly similar to the response a memristor would produce. “I realised then that synapses were memristors,” he says. “The ion channel was the missing circuit element I was looking for, and it already existed in nature.”

To Chua, this all points to a home truth. Despite years of effort, attempts to build an electronic intelligence that can mimic the awesome power of a brain have seen little success. And that might be simply because we were lacking the crucial electronic components – memristors.

So now we’ve found them, might a new era in artificial intelligence be at hand? The Defense Advanced Research Projects Agency certainly thinks so. DARPA is a US Department of Defense outfit with a strong record in backing high-risk, high-payoff projects – things like the internet. In April last year, it announced the Systems of Neuromorphic Adaptive Plastic Scalable Electronics Program, SyNAPSE for short, to create “electronic neuromorphic machine technology that is scalable to biological levels”.

I, memristor

Williams’s team from Hewlett-Packard is heavily involved. Late last year, in an obscure US Department of Energy publication called SciDAC Review, his colleague Greg Snider set out how a memristor-based chip might be wired up to test more complex models of synapses. He points out that in the human cortex synapses are packed at a density of about 10^10 per square centimetre, whereas today’s microprocessors only manage densities 10 times less. “That is one important reason intelligent machines are not yet walking around on the street,” he says.

Snider’s dream is of a field he calls “cortical computing” that harnesses the possibilities of memristors to mimic how the brain’s neurons interact. It’s an entirely new idea. “People confuse these kinds of networks with neural networks,” says Williams. But neural networks – the previous best hope for creating an artificial brain – are software working on standard computing hardware. “What we’re aiming for is actually a change in architecture,” he says.

The first steps are already being taken. Williams and Snider have teamed up with Gail Carpenter and Stephen Grossberg at Boston University, who are pioneers in reducing neural behaviours to systems of differential equations, to create hybrid transistor-memristor chips designed to reproduce some of the brain’s thought processes. Di Ventra and his colleague Yuriy Pershin have gone further and built a memristive synapse that they claim behaves like the real thing (www.arxiv.org/abs/0905.2935).

The electronic brain will be some time coming. “We’re still getting to grips with this chip,” says Williams. Part of the problem is that the chip is just too intelligent – rather than a standard digital pulse it produces an analogue output that flummoxes the standard software used to test chips. So Williams and his colleagues have had to develop their own test software. “All that takes time,” he says.

Chua, meanwhile, is not resting on his laurels. He has been busy extending his theory of fundamental circuit elements, asking what happens if you combine the properties of memristors with those of capacitors and inductors to produce compound devices called memcapacitors and meminductors, and then what happens if you combine those devices, and so on.

“Memcapacitors may be even more useful than memristors,” says Chua, “because they don’t have any resistance.” In theory at least, a memcapacitor could store data without dissipating any energy at all. Mighty handy – whatever you want to do with them. Williams agrees. In fact, his team is already on the case, producing a first prototype memcapacitor earlier this year, a result that he aims to publish soon. “We haven’t characterised it yet,” he says. With so many fundamental breakthroughs to work on, he says, it’s hard to decide what to do next. Maybe a memristor could help.

They might not look much, but slime moulds can be surprisingly quick-witted beasts

Justin Mullins is a consultant editor for New Scientist