Thursday, May 20, 2010

Scientific American - When Will We Be Able to Build Brains Like Ours?

This is an older article that I hadn't seen until now - more silliness from the artificial intelligence folks. If the mind were just the brain, they might be right, but a brain without a body, and without its psycho-social, environmental, and temporal context, is not much use.

I wonder if they have figured out how to program irrational and emotionally based cognition?

When Will We Be Able to Build Brains Like Ours?

Sooner than you think -- and the race has lately caused a 'catfight'

By Terry Sejnowski

When physicists puzzle out the workings of some new part of nature, that knowledge can be used to build devices that do amazing things -- airplanes that fly, radios that reach millions of listeners. When we come to understand how brains function, we should become able to build amazing devices with cognitive abilities -- such as cognitive cars that are better at driving than we are because they communicate with other cars and share knowledge on road conditions. In 2008, the National Academy of Engineering chose as one of its grand challenges to reverse-engineer the human brain. When will this happen? Some are predicting that the first wave of results will arrive within the decade, propelled by rapid advances in both brain science and computer science. This sounds astonishing, but it’s becoming increasingly plausible. So plausible, in fact, that the great race to reverse-engineer the brain is already triggering a dispute over historic “firsts.”

The backdrop for the debate is one of dramatic progress. Neuroscientists are disassembling brains into their component parts, down to the last molecule, and trying to understand how they work from the bottom up. Researchers are racing to work out the wiring diagrams of big brains, starting with mice and cats and eventually moving to humans, in a new field called connectomics. New techniques are making it possible to record from many neurons simultaneously and to selectively stimulate or silence specific neurons. There is excitement in the air and a sense that we are beginning to understand how the brain works at the circuit level. Brain modelers have so far been limited to small networks of only a few thousand neurons, but this is rapidly changing.

Meanwhile, digital computers are increasing exponentially in processing power, memory storage and communications bandwidth. Until recently, this was accomplished by accelerating the clock speed, which has leaped from kilohertz to gigahertz in my lifetime. But computer clocks have plateaued, and advances in computing power now come from increases in the number of processors and improved abilities to distribute a problem across them. The fastest supercomputers have hundreds of thousands of processors, and graphics processing units (GPUs) give desktop personal computers the same speed that supercomputers had ten years ago. If Moore’s Law of exponential growth in computing power does not break down first, at some point computers should become powerful enough, and our knowledge of the brain should be complete enough, to build devices based on the principles of neural computation. Like brains, these devices will be based on probabilistic rather than deterministic logic and will reason inductively rather than deductively.
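As a rough illustration of what "probabilistic rather than deterministic logic" can mean in practice, here is a minimal Python sketch (my own, not from the article) in which a hypothetical cognitive car updates its degree of belief that the road ahead is icy from reports shared by other cars; the priors and sensor reliabilities below are made-up numbers.

```python
# Minimal sketch of probabilistic, inductive reasoning: instead of a hard rule
# ("if wet road then brake"), the system keeps a degree of belief and nudges it
# with each noisy observation. All numbers here are illustrative assumptions.

def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Return the posterior probability of a hypothesis after one observation."""
    evidence = prior * likelihood_if_true + (1.0 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Hypothetical scenario: a cognitive car estimating whether the road ahead is icy,
# combining its prior with reports shared by nearby cars.
belief_icy = 0.05                                    # prior belief that the road is icy
for reported_ice in [True, True, False, True]:       # reports from other cars
    if reported_ice:
        belief_icy = bayes_update(belief_icy, 0.8, 0.1)   # assumed detection/false-alarm rates
    else:
        belief_icy = bayes_update(belief_icy, 0.2, 0.9)

print(f"posterior probability of ice: {belief_icy:.2f}")
```

Rather than flipping between true and false, the belief shifts gradually with each report, which is the inductive style of reasoning the article contrasts with conventional deductive computing.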

Now, to the dispute, widely known as the “catfight.” Last November, IBM researcher Dharmendra Modha announced at a supercomputing conference that his team had written a program that simulated a cat brain. This news took many by surprise, since he had leapfrogged over the mouse brain and beaten other groups to this milestone. For this work, Modha won the prestigious ACM Gordon Bell Prize, which is awarded to recognize outstanding achievement in high-performance computing applications.

However, his audacious claim was challenged by Henry Markram, a neuroscientist at the Ecole Polytechnique Fédérale de Lausanne and leader of the Blue Brain project, who had announced in 2009 that "it is not impossible to build a human brain, and we can do it in 10 years." In an open letter to IBM Chief Technical Officer Bernard Meyerson, Markram accused Modha of “mass deception” and called his paper a “hoax” and a “scam.” This has become a cause célèbre in the blogosphere and remains a hot topic among those of us who inhabit the intersection of brain and computer science.

The crux of the dispute is: What does it mean to model the cat brain? Both groups are simulating a large number of model neurons and connections between them. Both models run much, much slower than real time. The neurons in Modha’s model only have a soma -- the cell body containing the cell nucleus -- and simplified spikes. In contrast, Markram’s model has detailed reconstructions of neurons, with complex systems of branching connections called dendrites and even a full range of gating and communication mechanisms such as ion channels. The synapses and connections between the neurons in Modha’s model are simplified compared to the detailed biophysical synapses in Markram’s model. These two models are at the extremes of simplicity and complex realism.
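For a sense of what the "simplified" end of this spectrum looks like, here is a minimal sketch of a generic leaky integrate-and-fire point neuron, a soma-only model with stereotyped spikes in the spirit of the simplified neurons described above; the parameters are illustrative and are not taken from either group's simulation.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) point neuron: a single soma
# whose membrane potential leaks toward rest, integrates input current, and emits
# a stereotyped spike when it crosses threshold. Parameter values are assumptions.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Simulate one LIF neuron over a list of input currents; return spike times (s)."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and is driven by the input current.
        dv = (-(v - v_rest) + resistance * i_in) * (dt / tau)
        v += dv
        if v >= v_threshold:            # threshold crossed: record a spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant drive for one simulated second produces a regular spike train.
spikes = simulate_lif([2.0] * 1000)
print(f"{len(spikes)} spikes, first at {spikes[0]:.3f} s" if spikes else "no spikes")
```

A biophysically detailed model of the kind Markram builds would replace this single equation with branching dendritic compartments and many ion-channel equations per neuron, which is exactly the gap in complexity the dispute turns on.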

This controversy puts into perspective a tension between using simplified models of neurons, in order to run simulations faster, and including the biological details of neurons in order to understand them. Looking at the same neuron, physicists and engineers tend to see the simplicity, whereas biologists tend to see the complexity. The problem with simplified models is that they may throw the baby out with the bathwater. The problem with biophysical models is that the number of details is nearly endless, and many of them are unknown. How much brain function is lost by using simplified neurons and circuits? This is one of the questions we might be able to answer if we could get Modha and Markram to directly compare their models.

Unfortunately, the large-scale simulations from both groups at present resemble sleep rhythms or epilepsy far more closely than they resemble cat behavior, since neither has sensory inputs or motor outputs. They are also missing essential subcortical structures, such as the cerebellum that organizes movements, the amygdala that creates emotional states and the spinal cord that runs the musculature. Nonetheless, from Modha’s model we are learning how to program large-scale parallel architectures to perform simulations that scale up to the large numbers of neurons and synapses in real brains. From Markram’s models, we are learning how to integrate many levels of detail into these models. In his paper, Modha predicts that the largest supercomputer will be able to simulate the basic elements of a human brain in real time by 2019, so apparently he and Markram agree on this date; however, at best these simulations will resemble a baby brain, or perhaps a psychotic one. There is much more to a human brain than the sum of its parts.

Of course, it may not be necessary or desirable to build a cat or a human brain, since we already have fully functional cats and humans. This technology could, however, enable other applications. In 2005, Simon Haykin, director of the Cognitive Systems Laboratory at McMaster University, wrote an influential article called “Cognitive radio: Brain-empowered wireless communications,” which laid the groundwork for a new generation of wireless networks that use computational principles from brains to predictively model the use of the electromagnetic spectrum, and that are more efficient at using the bandwidth than current standards. This is not pie in the sky. Plans to deploy early versions of these intelligent communications systems in the next federal auction of the electromagnetic spectrum were discussed at a recent meeting of the Council of Advisors on Science and Technology with President Obama.
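As a toy illustration of the core idea behind cognitive radio (not Haykin's actual method), the sketch below learns from recent observations how likely each channel is to be busy and transmits on the one predicted to be most likely free; the channel names and observation history are invented.

```python
# Toy sketch of predictive spectrum use: track how often each channel has been
# observed busy and pick the channel with the lowest estimated occupancy.
# Channels, counts, and smoothing are illustrative assumptions, not a real protocol.

class SpectrumPredictor:
    """Tracks per-channel occupancy frequency and picks the least-busy channel."""

    def __init__(self, channels):
        # [busy observations, total observations], lightly smoothed so every
        # channel starts at an estimated 50% occupancy.
        self.counts = {ch: [1, 2] for ch in channels}

    def observe(self, channel, busy):
        busy_count, total = self.counts[channel]
        self.counts[channel] = [busy_count + int(busy), total + 1]

    def p_busy(self, channel):
        busy_count, total = self.counts[channel]
        return busy_count / total

    def best_channel(self):
        return min(self.counts, key=self.p_busy)

# Hypothetical usage: channel "B" has been observed idle most often, so it is chosen.
predictor = SpectrumPredictor(["A", "B", "C"])
for ch, busy in [("A", True), ("B", False), ("C", True), ("B", False), ("A", True)]:
    predictor.observe(ch, busy)
print(predictor.best_channel())   # -> "B"
```

A real cognitive radio would use far richer models of spectrum usage over time and space, but the principle is the same: predict where the bandwidth is free rather than follow a fixed allocation.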

Soon to come are similar ways to enhance other utilities, such as the “cognitive power grid,” and other devices, such as the cognitive car. The sensorium and motorium of these cognitive systems will be the infrastructure of the world. Sensors will stream information -- on the use of electricity, road conditions, weather patterns, the spread of diseases -- and use this information to optimize goals, such as reducing power usage and travel time, by regulating the flow of resources. Parts of this system are already in place but there is as yet no central nervous system to integrate this torrent of information and take appropriate actions. Someday soon, it appears, there will be. And gradually, as it increasingly mimics the workings of our brains, the world around us will become smarter and more efficient. As this cognitive infrastructure evolves, it may someday even reach a point where it will rival our brains in power and sophistication. Intelligence will inherit the earth.

ABOUT THE AUTHOR(S)

Terry Sejnowski is the Francis Crick Professor at the Salk Institute for Biological Studies, where he directs the Computational Neurobiology Laboratory.
