Category Archives: AI / robotics

Simulated brain closer to thought


A detailed simulation of a small region of the brain, built molecule by molecule, has recreated experimental results from real brains.

The “Blue Brain” has been put in a virtual body, and observing it gives the first indications of the molecular and neural basis of thought and memory.

Scaling the simulation to the human brain is only a matter of money, says the project’s head.

The work was presented at the European Future Technologies meeting in Prague.

The Blue Brain project launched in 2005 as the most ambitious brain simulation effort ever undertaken.

Many computer simulations have attempted to code in “brain-like” computation or to mimic parts of the nervous systems and brains of various animals. The Blue Brain project, by contrast, was conceived to reverse-engineer mammalian brains from real laboratory data, building up a computer model down to the level of the molecules that make them up.

The first phase of the project is now complete: researchers have modeled the neocortical column, a repeating unit of the neocortex, the region of the mammalian brain responsible for higher functions and thought.


Building a Brain on a Silicon Chip

An international team of scientists in Europe has created a silicon chip designed to function like a human brain. With 200,000 neurons linked up by 50 million synaptic connections, the chip is able to mimic the brain’s ability to learn more closely than any other machine.

Although the chip has a fraction of the number of neurons or connections found in a brain, its design allows it to be scaled up, says Karlheinz Meier, a physicist at Heidelberg University, in Germany, who has coordinated the Fast Analog Computing with Emergent Transient States project, or FACETS.

The hope is that recreating the structure of the brain in computer form may help to further our understanding of how to develop massively parallel, powerful new computers, says Meier.

This is not the first time someone has tried to recreate the workings of the brain. One effort called the Blue Brain project, run by Henry Markram at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, has been using vast databases of biological data recorded by neurologists to create a hugely complex and realistic simulation of the brain on an IBM supercomputer.

FACETS has been tapping into the same databases. “But rather than simulating neurons,” says Meier, “we are building them.” Using a standard eight-inch silicon wafer, the researchers recreate the neurons and synapses as circuits of transistors and capacitors, designed to produce the same sort of electrical activity as their biological counterparts.

A neuron circuit typically consists of about 100 components, while a synapse requires only about 20. However, because there are so many more of them, the synapses take up most of the space on the wafer, says Meier.
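The kind of dynamics these circuits reproduce can be illustrated in software with a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models. The sketch below is purely illustrative; the parameters are hypothetical round numbers, not values from the FACETS hardware.

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks
# toward rest, integrates input current, and emits a spike on crossing
# a threshold. Illustrative only; FACETS implements such dynamics in
# analog hardware, and all parameters here are hypothetical.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return membrane voltages and spike times for an input-current trace."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leak toward rest while integrating the input current.
        v += dt / tau * (v_rest - v + i_in)
        if v >= v_thresh:              # threshold crossed: spike
            spikes.append(step * dt)
            v = v_reset                # reset after the spike
        voltages.append(v)
    return voltages, spikes

# A constant supra-threshold current makes the neuron fire periodically.
voltages, spikes = simulate_lif([1.5] * 300)
```

In the analog version, a capacitor plays the role of `v`, with transistor circuits providing the leak and reset.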


The Future of Machine Intelligence

In early March 2009, 100 intellectual adventurers journeyed from various corners of Europe, Asia, America and Australasia to the Crowne Plaza Hotel in Arlington, Virginia, to take part in the Second Conference on Artificial General Intelligence, AGI-09: a conference aimed explicitly at the grand goal of the AI field, the creation of thinking machines with general intelligence at the human level and ultimately beyond.

While the majority of the crowd hailed from academic institutions, major firms like Google, GE, AT&T and Autodesk were also represented, along with a substantial contingent of entrepreneurs involved with AI startups, and independent researchers. The conference benefited from sponsorship by several organizations, including Japanese entrepreneur and investor Joi Ito’s Joi Labs, Itamar Arel’s Machine Intelligence Lab at the University of Tennessee, the University of Memphis, Novamente LLC, Rick Schwall, and the Enhanced Education Foundation.

Since I was the chair of the conference and played a large role in its organization – along with a number of extremely competent and passionate colleagues – my opinion must be considered rather subjective … but, be that as it may, my strong feeling is that the conference was an unqualified success! Admittedly, none of the research papers were written and presented by an AI program, which is evidence that the field still has a long way to go to meet its goals. Still, a great number of fascinating ideas and mathematical and experimental results were reported, building confidence in the research community that real progress toward advanced AGI is occurring.


Regulate armed robots before it’s too late

In this age of super-rapid technological advance, we do well to obey the Boy Scout injunction: “Be prepared”. That requires nimbleness of mind, given that the ever accelerating power of computers is being applied across such a wide range of applications, making it hard to keep track of everything that is happening. The danger is that we only wake up to the need for forethought when in the midst of a storm created by innovations that have already overtaken us.

We are on the brink, and perhaps to some degree already over the edge, in one hugely important area: robotics. Robot sentries patrol the borders of South Korea and Israel. Remote-controlled aircraft mount missile attacks on enemy positions. Other military robots are already in service, and not just for defusing bombs or detecting landmines: a coming generation of autonomous combat robots capable of deep penetration into enemy territory raises questions about whether they will be able to discriminate between soldiers and innocent civilians. Police forces are looking to acquire miniature Taser-firing robot helicopters. In South Korea and Japan the development of robots for feeding and bathing the elderly and children is already advanced. Even in a robot-backward country like the UK, some vacuum cleaners autonomously sense their way around furniture. A driverless car has already negotiated its way through Los Angeles traffic.


Tokyo school to host first robot teacher

Students at a Tokyo primary school will soon be learning from the first robot teacher, a Japanese science professor says.

University of Tokyo Professor Hiroshi Kobayashi has created a robot capable of teaching human students while also expressing a limited range of emotions, including anger when faced with unruly children, The Daily Telegraph said Thursday.

The robot is named Saya and has been under development for 15 years leading up to the scheduled school trial.

The robot’s 18 facial motors are what give it the ability to mimic certain human emotions while the humanoid’s other inner workings allow it to speak multiple languages and set tasks, the newspaper said.

Saya’s planned appearance at the primary school will mark the most recent attempt by Japan to integrate robotics into everyday life.


Child-like robots only a few years away

The iCub robot, modelled on a human child, made its first appearance in Britain this week – the latest result of cutting edge robotics research funded by the European Commission.

iCub is capable of human-style eye, head and leg movement, as well as basic object recognition and a realistic hand-grasping movement.

The mini humanoid robot has been modelled on a three-and-a-half-year-old child and is the result of a five-year £7.5m project to develop a fully functioning child-like robot.

“Scientists want to give it the ability to crawl on all fours and sit up, to handle objects with precision and to have head and eye movements that echo those of humans,” reports PA News.

Open source robotic development

For more details you can head over to the RobotCub website, which covers the background to the project and is the “home of the iCub”.

“Our main goal is to study cognition through the implementation of a humanoid robot the size of a 3.5 year old child: the iCub,” reads the site’s blurb.

“This is an open project in many different ways: we distribute the platform openly, we develop software open-source, and we are open to including new partners and forming collaborations worldwide.”

The iCub made its first trip to the UK this week, appearing at the Symposium on Humanoid Robotics at the University of Manchester.


Inside the minds of the thinking computers

What if your computer had a brain, one that worked like our very own grey matter?

It sounds like science fiction, but with incredible advancements in the fields of neuroscience, nanotechnology and supercomputing technology, the time is right for computer scientists to begin trying to create computers that are able to approach the brain’s abilities.

So what would that mean for tomorrow’s computers? It’s a tantalising question that scientists working in the field of cognitive computing are striving to answer. And, if they’re successful in their goal of ousting silicon from the PC and inserting a brain, we could witness a revolution in computing power and potential. Tomorrow’s computers may be able to think rather than just follow programs.


Will Artificial Organism with Advanced Group Intelligence Evolve?

Remember Michael Crichton’s science-fiction novel “Prey”? Well, researchers at the University of York have investigated large swarms of up to 10,000 miniature robots that can work together to form a single, artificial life form. The multi-robot approach to artificial intelligence is a relatively new one, and has developed from studies of the swarm behavior of social insects such as ants.

Swarm robotics is a field of study based on the supposition that simple, individual robots can interact and collaborate to form a single artificial organism with more advanced group intelligence.

As part of an international collaboration dubbed the “Symbiotic Evolutionary Robot Organisms” project, or “Symbrion” for short, researchers are developing an artificial immune system that can protect both the individual robots forming part of a swarm and the larger, collective organism.

The aim of the project is to develop the novel principles behind the ways in which robots can evolve and work together in large ‘swarms’ so that – eventually – these can be applied to real-world applications. The swarms of robots are capable of forming themselves into a ‘symbiotic artificial organism’ and collectively interacting with the physical world using sensors.

The multi-robot organisms will be made up of large-scale swarms of robots, each slightly larger than a sugar cube, which can dock with each other and share energy and computing resources within a single artificial life form. The organisms will also be able to manage their own hardware and software; they will be self-healing and self-organizing.
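The energy-pooling idea can be sketched in a few lines. Everything below is a hypothetical toy, not the actual Symbrion docking or energy-management algorithms: all robots join one organism and split their energy evenly.

```python
import random

# Toy model of robots docking into one organism and pooling energy.
# The docking rule (everyone joins a single organism, energy is split
# equally) is a made-up simplification for illustration.

class Robot:
    def __init__(self, energy):
        self.energy = energy
        self.organism = None          # set once docked into a collective

def dock(robots):
    """Dock all robots into a single organism and share energy equally."""
    shared = sum(r.energy for r in robots) / len(robots)
    for r in robots:
        r.organism = robots           # every member knows its organism
        r.energy = shared             # pooled energy, split evenly
    return robots

swarm = [Robot(random.uniform(0.0, 1.0)) for _ in range(100)]
organism = dock(swarm)
```

A real swarm would of course dock selectively and transfer energy through physical connectors; the point here is only the shift from individual to collective resource management.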

Professor Alan Winfield, a member of the project team, explains, “A future application of this technology might be for example where a Symbrion swarm could be released into a collapsed building following an earthquake, and they could form themselves into teams searching for survivors or to lift rubble off stranded people. Some robots might form a chain allowing rescue workers to communicate with survivors while others assemble themselves into a ‘medicine bot’ to give first aid.”


Scientists Decode the Super Computer Inside Our Brains

Scientists have decoded the short-term supercomputer that sits inside your head, the processor that wraps up trajectories, wind speeds, rebounds and rough surfaces into a gut feeling that lets you catch a football.  This advance could lead to a new wave of prosthetics, as well as being another piece in the permanently interesting puzzle that is “The Brain”.

Researchers from McGill, MIT and Caltech focused on the posterior parietal cortex (PPC), the section of brain responsible for taking all the “what is going on” data from the senses and planning what your thousand muscles and bones are going to do about it.

Working with robot-arm-equipped monkeys (god but science is awesome), they discovered that the PPC runs its own realtime simulation of the future. Of course, you instinctively knew that – when you try to catch a ball you don’t flail at where you see it, you run to where it’s going to be. More usefully, they uncovered the nature of two distinct signals from this gooey futurefinder: a “goal” signal which describes what the brain wants to happen, and a “trajectory” signal which lays out the path the body part must take to get there.

This pair of signals is incredibly useful data for any robotic limbs or other extras we might add to our limited human forms – whether they be replacements for carelessly lost parts, or entirely new structures. By working from the “goal” signal the mechanical parts can swiftly prepare to move in the desired manner, preparing any components needed and checking the path for hazards, before the “trajectory” signal gets to the fine details of movement.
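As a sketch of how the two signals might divide the work, the toy controller below takes a coarse step toward the decoded goal and then a fine correction toward each decoded trajectory waypoint. The function names, gains and waypoints are invented for illustration; they are not from the study.

```python
# Hypothetical two-signal limb controller: "goal" pre-positions coarsely,
# "trajectory" supplies the fine path. All numbers are illustrative.

def control_step(position, goal, waypoint, coarse_gain=0.5, fine_gain=0.9):
    """Blend a coarse move toward the goal with fine waypoint tracking."""
    # Coarse phase: head toward the decoded goal.
    coarse = [p + coarse_gain * (g - p) for p, g in zip(position, goal)]
    # Fine phase: correct toward the decoded trajectory waypoint.
    return [c + fine_gain * (w - c) for c, w in zip(coarse, waypoint)]

pos = [0.0, 0.0]                       # current 2-D limb position
goal = [1.0, 1.0]                      # decoded "goal" signal
trajectory = [[0.2, 0.2], [0.5, 0.5],  # decoded "trajectory" waypoints
              [0.8, 0.8], [1.0, 1.0]]
for waypoint in trajectory:
    pos = control_step(pos, goal, waypoint)
```

The design point is the one the article makes: the goal signal lets the hardware commit early to roughly the right movement, while the trajectory signal refines it step by step.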


iRobis Announces Complete Cognitive Software System for Robots

The Institute of Robotics in Scandinavia (iRobis) has announced that the world’s first “complete cognitive software system for robotics” is ready for application. The system, called Brainstorm, turns robots into self-developing, adaptive, problem-solving, “thinking” machines.

Brainstorm automatically adapts to onboard sensors and actuators, immediately builds a model of any robot on which it is installed, and automatically writes control programs for the robot’s movements. It can then explore and model its environment. Through simulated interaction using these models, it solves problems and develops new behavior using “imagination.” Once it has “learned” to do something, it can use its imagination to adapt its behavior to a wide range of circumstances.
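The “imagination” loop described above, trying actions inside an internal model before committing to one, can be sketched as follows. The one-dimensional robot and its forward model here are hypothetical stand-ins for illustration, not the Brainstorm system.

```python
# Toy "imagination": simulate candidate actions in an internal model
# and pick the one predicted to end closest to the goal. The model
# below is a made-up 1-D position update, purely for illustration.

def internal_model(state, action):
    """Learned forward model: predict the next state from an action."""
    return state + action

def imagine_best_action(state, goal, candidate_actions, horizon=3):
    """Mentally roll each action forward, then choose the best one."""
    best_action, best_error = None, float('inf')
    for action in candidate_actions:
        simulated = state
        for _ in range(horizon):       # repeat the action in imagination
            simulated = internal_model(simulated, action)
        error = abs(goal - simulated)  # predicted distance from the goal
        if error < best_error:
            best_action, best_error = action, error
    return best_action

# Robot at position 0 wants to reach 6; each action is held for 3 steps.
best = imagine_best_action(0.0, 6.0, [-1.0, 0.0, 1.0, 2.0])
```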

A methodology known as genetic programming (GP) is “the trick” that makes it all possible. GP is an automated programming methodology, inspired by natural evolution, that is used to evolve computer programs. Because the system evolves programs, the logic it develops can be anything that can be expressed by a computer program, which basically means anything. Robots are given descriptions of what they are supposed to do, and they figure out how to do it. GP itself is not an approach exclusive to robotic behavior; it has been applied to a variety of problems, some already yielding commercial successes. A well-known example in the field was the development of invention machines that had created two new patentable inventions by 2002. The potential for “thinking robots” goes well beyond developing their own actions.
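A miniature genetic-programming loop makes the idea concrete: random expression trees are scored against a target, the fittest survive, and mutated copies refill the population. This is a generic GP toy under invented settings (target function, operators, population sizes), not the iRobis implementation.

```python
import random

# Tiny genetic-programming sketch: evolve an expression tree to fit
# the target f(x) = x*x + x. Trees are terminals ('x' or a constant)
# or tuples (operator, left, right). Illustrative only.

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.7 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Sum of squared errors against the target on sample points.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=2):
    if random.random() < 0.3:
        return random_tree(depth)          # replace a whole subtree
    if isinstance(tree, tuple):
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left, depth), right)
        return (op, left, mutate(right, depth))
    return tree

def evolve(generations=60, pop_size=80):
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[:pop_size // 4]   # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=fitness)

random.seed(1)
best = evolve()
```

Real GP systems add crossover and much larger populations, but the loop above shows the principle the article describes: the programmer supplies a description of success (the fitness function), and the system evolves the program.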

The system is built from components, and the learning/adaptive mechanisms can be turned on and off, providing a broad range of choices to satisfy requirements. It can, for example, be used for rapid development of control systems that cannot be modified after testing is complete; alternatively, the learning/adaptive system can remain on during use, allowing the robot to continue to evolve as it gains real-life experience. The level of learning and adaptation can be adjusted to requirements. It can be used to build robot software from the ground up, fulfilling all requirements, or as an add-on to an existing system that provides learning and adaptive behavior. Although product development can be significantly shorter and less costly, it will still follow a familiar pattern: product developers define their product requirements, and engineers make decisions about the best configurations and settings.