When René Descartes said, “I think, therefore I am,” the philosopher probably didn’t imagine a stamp-sized clump of rat neurons grown in a dish, hooked to a computer.
For years, scientists have learned about brain development by watching the firing patterns of lab-raised brain cells. Until recently, though, the brains-in-a-dish couldn’t receive information. Unlike actual gray matter, they could only send signals.
Scientists at the Georgia Institute of Technology figured they could learn more from neuron clumps that acted more like real brains, so they’ve developed “neurally controlled animats” — a few thousand rat neurons grown atop a grid of electrodes and connected to a robot body or computer-simulated virtual environment.
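The closed loop described above — spikes recorded from the electrode grid drive a body, and the body's sensor readings come back to the culture as electrical stimulation — can be pictured with a toy sketch. Everything below (the function names, the electrode count, the decoding scheme) is an illustrative assumption, not the lab's actual hardware or software.

```python
import random

# Illustrative closed loop: record spikes -> decode a motor command ->
# act in the (virtual) world -> encode the resulting sensation as stimulation.
NUM_ELECTRODES = 60  # a typical multi-electrode array has on the order of 60 channels

def read_spike_counts():
    """Stand-in for the recording hardware: spikes per electrode in one time bin."""
    return [random.randint(0, 5) for _ in range(NUM_ELECTRODES)]

def decode_motor_command(spikes):
    """Toy decoder: compare total firing on the 'left' vs 'right' half of the grid."""
    half = NUM_ELECTRODES // 2
    left, right = sum(spikes[:half]), sum(spikes[half:])
    return "turn_left" if left > right else "turn_right"

def encode_stimulation(sensor_value):
    """Toy encoder: map a sensor reading (e.g. distance) to a stimulation pulse rate."""
    return min(int(sensor_value * 10), 50)  # pulses per second, capped

# One cycle of the loop
spikes = read_spike_counts()
command = decode_motor_command(spikes)
sensor_reading = 0.7  # e.g. a simulated distance to an obstacle
stim_rate = encode_stimulation(sensor_reading)
print(command, stim_rate)
```

The point of the sketch is only the loop's shape: the culture is no longer send-only, because its own output changes the stimulation it receives next.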
In theory, animats seem to cross the line from mass of goo to autonomous brain. But Steve Potter, a neuroscientist and head of the Georgia Tech lab where the animats were created, said his brain clumps won’t be reciting French philosophy anytime soon.
“Our goal is not to get something as conscious as a person,” he said. “We’re studying basic mechanisms of learning and memory.” The researchers are focusing on how groups of individual cells interact and change when stimulated.
Rather than create a sentient being, the goal of the work is to learn about the earliest human brain development, according to Daniel Wagenaar, a California Institute of Technology neuroscientist who worked with Potter on the animat.
“When someone is born, they’re still not able to control much of their behavior,” Wagenaar said. “Somehow this system has to learn to control a body. Part of that comes from interactions with the environment. We hope to get, at the very simple level of a small nervous system, some insight into how that occurs.”
The scientists rely on these models because no technology exists to watch live human brain cells in action in real time.
The first generation of animats performed simple tasks. The virtual mouse tended to move in one direction (right). A dish-brain-controlled robot did manage to stay away from a moving target — impressive-sounding perhaps but not particularly complicated. A robotic arm holding a set of pens and attached to a clump of neurons created art — albeit in the eye of the beholder.
“Since our cultured networks are so interconnected, they have some sense of what is going on in themselves,” Potter said. “We can also feed their activity back to them, to mediate their ‘sense of self.’”
The next phase of animats will likely have an even keener sense of self.
“In the next wave, we hope to sequence behaviors,” Potter said. “The sensory input resulting from one behavior will trigger the next appropriate behavior.” In other words, he hopes the animats will learn.
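Potter's sequencing idea — the sensation produced by one behavior selecting the next — can be pictured as a sensory-triggered state machine. The behaviors and transitions below are my own invented illustration, not the lab's design.

```python
# Toy illustration of sensory-triggered behavior sequencing:
# the feedback generated by one behavior triggers the next.
TRANSITIONS = {
    # (current_behavior, sensory_input) -> next_behavior
    ("explore", "wall_detected"): "turn",
    ("turn", "path_clear"): "explore",
    ("explore", "target_detected"): "approach",
    ("approach", "target_reached"): "stop",
}

def next_behavior(current, sensed):
    """Return the behavior this sensory input triggers, or keep the current one."""
    return TRANSITIONS.get((current, sensed), current)

behavior = "explore"
for sensed in ["wall_detected", "path_clear", "target_detected", "target_reached"]:
    behavior = next_behavior(behavior, sensed)
print(behavior)  # the chain of sensations walks the animat to "stop"
```

In the real system the "table" would not be hand-written; the hope is that the network's plasticity builds those transitions through experience — which is what "the animats will learn" means here.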
And if consciousness is a function of complexity, what would happen if a whole bunch of dish-brains were hooked together? Right now, Potter said, the biggest obstacle to trying is the $60,000 price tag of each “rig.”
“That’s the present limit,” he said. “If we had a rich patron, I would love to get more rigs to do some ‘social networks’ experiments.”
Potter hopes his research will eventually lead to better neural prosthetics, understanding of neural pathologies and even artificial intelligence. As for consciousness, he said, “I don’t think it will get that far. But I’d love to be proven wrong.”
I don’t think $60,000 is all that much. As a matter of fact, in the scientific world, $60,000 is nothing, especially if you take a look at what kind of amazing thing you can set up with it.
But apparently these researchers don’t have the money to set up a few dozen of these rigs just like that. That is okay, because the cost of the enabling technology behind this will come down fast.
Nano-imaging techniques will make possible real-time analysis of neuromolecular events in the human brain. The brain-imaging bottleneck will be broken around 2015.