Tag Archives: artificial intelligence

The future of robots is rat-shaped

If so, it will be time to scream… but out of joy, rather than fear, for it could be a turning point in the history of robotics.

Psikharpax, named after a cunning king of the rats in a tale attributed to Homer, is the brainchild of European researchers who believe it may push back a frontier in robotics.

Scientists have strived for decades to make a robot that can do something more than make repetitive, programmed gestures. These are fine for making cars or amusing small children, but are of little help in the real world.

One of the biggest obstacles is learning ability. Without the smarts to figure out dangers and opportunities, a robot is helpless without human intervention.

“The autonomy of robots today is similar to that of an insect,” snorts Agnès Guillot, a researcher at France’s Institute for Intelligent Systems and Robotics (ISIR) and a member of the Psikharpax team.

Such failures mean it is time to change tack, argue some roboticists.

source

Will Artificial Organism with Advanced Group Intelligence Evolve?

Remember Michael Crichton’s science-fiction novel, “Prey”? Well, researchers at the University of York are investigating large swarms of up to 10,000 miniature robots that can work together to form a single artificial life form. The multi-robot approach to artificial intelligence is a relatively new one, and has developed from studies of the swarm behavior of social insects such as ants.

Swarm robotics is a field of study based on the supposition that simple, individual robots can interact and collaborate to form a single artificial organism with more advanced group intelligence.

As a part of an international collaboration dubbed the “Symbiotic Evolutionary Robot Organisms” project, or “Symbrion” for short, researchers are developing an artificial immune system which can protect both the individual robots that form part of a swarm, as well as the larger, collective organism.

The aim of the project is to develop the novel principles behind the ways in which robots can evolve and work together in large ‘swarms’ so that – eventually – these can be applied to real-world applications. The swarms of robots are capable of forming themselves into a ‘symbiotic artificial organism’ and collectively interacting with the physical world using sensors.

The multi-robot organisms will be made up of large-scale swarms of robots, each slightly larger than a sugar cube, which can dock with each other and share energy and computing resources within a single artificial life form. The organisms will also be able to manage their own hardware and software; they will be self-healing and self-organizing.
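To make the swarm idea concrete, here is a minimal toy simulation of my own, nothing to do with the actual Symbrion code: simple agents wander a grid and dock with any neighbour they touch, merging into ever larger “organisms”. The Robot class, the docking rule and the grid are all made up for illustration.

# Minimal sketch of the swarm-to-organism idea: simple agents wander a grid
# and "dock" with any neighbour they touch, merging into larger organisms.
# All names here (Robot, dock, organism ids) are illustrative, not Symbrion's.
import random

GRID = 20          # side length of the square arena
STEPS = 200        # simulation steps
N_ROBOTS = 40      # number of cube-sized robots

class Robot:
    def __init__(self, rid, x, y):
        self.rid, self.x, self.y = rid, x, y
        self.organism = rid        # initially every robot is its own organism

def step(robots):
    # each robot takes one random step, staying inside the arena
    for r in robots:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        r.x = min(GRID - 1, max(0, r.x + dx))
        r.y = min(GRID - 1, max(0, r.y + dy))

def dock(robots):
    # robots that end up on adjacent cells merge their organism ids
    for a in robots:
        for b in robots:
            if a is not b and abs(a.x - b.x) + abs(a.y - b.y) <= 1:
                merged = min(a.organism, b.organism)
                old_a, old_b = a.organism, b.organism
                for r in robots:
                    if r.organism in (old_a, old_b):
                        r.organism = merged

robots = [Robot(i, random.randrange(GRID), random.randrange(GRID))
          for i in range(N_ROBOTS)]
for _ in range(STEPS):
    step(robots)
    dock(robots)

sizes = {}
for r in robots:
    sizes[r.organism] = sizes.get(r.organism, 0) + 1
print("largest organism:", max(sizes.values()), "of", N_ROBOTS, "robots")

Run it a few times and a handful of large organisms end up absorbing most of the stragglers, which is roughly the flavour of self-organization the project is after.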

Professor Alan Winfield, a member of the project team, explains, “A future application of this technology might be for example where a Symbrion swarm could be released into a collapsed building following an earthquake, and they could form themselves into teams searching for survivors or to lift rubble off stranded people. Some robots might form a chain allowing rescue workers to communicate with survivors while others assemble themselves into a ‘medicine bot’ to give first aid.”

source

Google Is Taking Questions (Spoken, via iPhone)

Pushing ahead in the decades-long effort to get computers to understand human speech, Google researchers have added sophisticated voice recognition technology to the company’s search software for the Apple iPhone.

Users of the free application, which Apple is expected to make available as soon as Friday through its iTunes store, can place the phone to their ear and ask virtually any question, like “Where’s the nearest Starbucks?” or “How tall is Mount Everest?” The sound is converted to a digital file and sent to Google’s servers, which try to determine the words spoken and pass them along to the Google search engine.

The search results, which may be displayed in just seconds on a fast wireless network, will at times include local information, taking advantage of iPhone features that let it determine its location.
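The article only outlines the pipeline: record the question on the handset, ship the audio to a server for recognition, then hand the transcript (plus the phone’s location) to the search engine. Here is a rough sketch of that client-side flow; the URLs and response fields are placeholders, not any real Google endpoint.

# Rough sketch of the client flow the article describes: capture audio,
# send it to a recognition service, then pass the transcript (plus the
# phone's location) to a search endpoint. The URLs below are placeholders.
import requests

SPEECH_URL = "https://example.com/speech-to-text"   # hypothetical endpoint
SEARCH_URL = "https://example.com/search"           # hypothetical endpoint

def recognize(audio_bytes: bytes) -> str:
    """Upload the recorded utterance and return the server's best transcript."""
    resp = requests.post(SPEECH_URL, data=audio_bytes,
                         headers={"Content-Type": "audio/wav"})
    resp.raise_for_status()
    return resp.json()["transcript"]

def search(query: str, lat: float, lon: float) -> list:
    """Send the transcript to the search engine with the phone's location,
    so questions like 'nearest Starbucks' can get local results."""
    resp = requests.get(SEARCH_URL, params={"q": query, "lat": lat, "lon": lon})
    resp.raise_for_status()
    return resp.json()["results"]

if __name__ == "__main__":
    with open("question.wav", "rb") as f:      # a pre-recorded spoken question
        transcript = recognize(f.read())
    for hit in search(transcript, lat=37.42, lon=-122.08):
        print(hit)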

The ability to recognize just about any phrase from any person has long been the supreme goal of artificial intelligence researchers looking for ways to make man-machine interactions more natural. Systems that can do this have recently started making their way into commercial products.

source

Tech support software closes in on Turing Test pass

In addition to being one of the fathers of computer science, Alan Turing postulated a very simple test for when computers move beyond calculations and start engaging in what we might consider thought. For Turing, the ultimate test was whether a person, engaged in a text-based conversation with a machine, would believe that it was conversing with another human.

Each year, the University of Reading hosts a competition where software is put to this test, with the winner taking home the Loebner Prize in Artificial Intelligence. This year’s winner, called Elbot, came within one judge of passing the test, but its success may be less important than the underlying technology: Elbot is the product of a company that promises its software can help companies take the requirement for humans out of live chats and e-mail.

Over a dozen competitors took part in this year’s contest, including older favorites like ALICE and Jabberwacky, both of which wound up among the six finalists. Elbot took home the Loebner Prize by convincing three of a dozen judges that it was human; it and most of the rest of the bots received high scores for portions of their conversation.

Typically, fooling 30 percent of the judges is considered a pass on the Turing Test, so this result suggests that the combination of fast processors and sophisticated software is on the verge of passing.
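For what it’s worth, the arithmetic is easy to check:

# Elbot fooled 3 of the 12 judges; the oft-cited Turing yardstick is 30 percent.
judges, fooled = 12, 3
share = fooled / judges                      # 0.25, i.e. 25 percent
print(f"{share:.0%} of judges fooled; pass threshold is 30%")
print("one more judge would have done it:", (fooled + 1) / judges >= 0.30)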

source

Looks to me like Kurzweil will be cashing in on his Turing bet with Mitch Kapor soon.

Stealth Semantic Startup Raises $8.5 Million, Won’t Tell Us Anything

I had a phone call late last week with a semantic startup called Siri that was spun out of SRI International (the birthplace of the computer mouse and the LCD screen, among many other important technologies). Most startups are willing to talk about their products “off the record” but this one wouldn’t divulge much beyond the fact that they’ve raised $8.5 million in Series A funding from Menlo Ventures and Morgenthaler.

What we do know is that the company was incorporated in December 2007 with the goal of commercializing aspects of the CALO cognitive learning system, which receives heavy funding ($200 million plus) from the PAL program of the Defense Advanced Research Projects Agency (DARPA), a supporter of research in a broad range of technologies that could potentially benefit the Department of Defense.

From the sound of things, Siri’s 19 developers – mostly engineers who count Yahoo, Google, Apple, Xerox, NASA, and Netscape as their former employers – have been working on a system that will use artificial intelligence to automate many of the tasks that people currently conduct manually online. The founders describe themselves as out to change the fundamental ways that people use the internet, apparently by leveraging artificial intelligence that will learn from you and then give you the luxury of thinking less on your own.

source

‘Intelligent’ computers put to the test

Can machines think? That was the question posed by the great mathematician Alan Turing. Half a century later, six computers are about to converse with human interrogators in an experiment that will attempt to prove that the answer is yes.

In the Turing test a machine seeks to fool judges into believing that it could be human. The test is performed by conducting a text-based conversation on any subject. If the computer’s responses are indistinguishable from those of a human, it has passed the Turing test and can be said to be “thinking”.

No machine has yet passed the test devised by Turing, who helped to crack German military codes during the Second World War. But at 9am next Sunday, six computer programs – “artificial conversational entities” – will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognised “thinking” machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be “conscious” – and if humans should have the ‘right’ to switch it off.

source

Robot With Biological Brain

Meet Gordon, probably the world’s first robot controlled exclusively by living brain tissue.

Stitched together from cultured rat neurons, Gordon’s primitive grey matter was designed at the University of Reading by scientists who unveiled the neuron-powered machine on Wednesday.

Their groundbreaking experiments explore the vanishing boundary between natural and artificial intelligence, and could shed light on the fundamental building blocks of memory and learning, one of the lead researchers told AFP.

“The purpose is to figure out how memories are actually stored in a biological brain,” said Kevin Warwick, a professor at the University of Reading and one of the robot’s principal architects.

Observing how the nerve cells cohere into a network as they fire off electrical impulses, he said, may also help scientists combat neurodegenerative diseases that attack the brain, such as Alzheimer’s and Parkinson’s.

“If we can understand some of the basics of what is going on in our little model brain, it could have enormous medical spinoffs,” he said.

Looking a bit like the garbage-compacting hero of the blockbuster animation “Wall-E”, Gordon has a brain composed of 50,000 to 100,000 active neurons.

Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (roughly three-by-three inch) array of 60 electrodes.

This “multi-electrode array” (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robot, and receiving impulses delivered by sensors reacting to the environment.

Because the brain is living tissue, it must be housed in a special temperature-controlled unit — it communicates with its “body” via a Bluetooth radio link.

The robot has no additional control from a human or computer.
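To see how such a closed loop hangs together, here is a toy version in code. The Culture class below is a crude random stand-in for the living tissue, and the decoding rule is invented; only the overall sense, stimulate, record, drive cycle mirrors what the article describes.

# Toy version of the closed loop: environment sensors stimulate the culture
# via the multi-electrode array (MEA), the recorded spike activity is decoded
# into wheel speeds, and the cycle repeats. Culture is a crude stand-in,
# not a model of real neurons.
import random

N_ELECTRODES = 60                      # the MEA in the article has 60 electrodes

class Culture:
    """Stand-in for the living tissue: stimulation raises firing probability."""
    def respond(self, stimulus: float) -> list:
        # return a spike (1) or no spike (0) per electrode this cycle
        return [1 if random.random() < 0.1 + 0.5 * stimulus else 0
                for _ in range(N_ELECTRODES)]

def sense_wall_distance() -> float:
    """Placeholder for the robot's proximity sensor (0 = far, 1 = touching)."""
    return random.random()

def decode(spikes: list) -> tuple:
    """Map activity on the two halves of the array to left/right wheel speeds."""
    half = N_ELECTRODES // 2
    left = sum(spikes[:half]) / half
    right = sum(spikes[half:]) / half
    return left, right

culture = Culture()
for _ in range(10):                    # ten control cycles
    stimulus = sense_wall_distance()   # closer wall -> stronger stimulation
    spikes = culture.respond(stimulus)
    left, right = decode(spikes)
    print(f"stimulus {stimulus:.2f} -> wheels L={left:.2f} R={right:.2f}")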

From the very start, the neurons get busy. “Within about 24 hours, they start sending out feelers to each other and making connections,” said Warwick.

“Within a week we get some spontaneous firings and brain-like activity” similar to what happens in a normal rat — or human — brain, he added.

source

Advances like these are why I think superior artificial intelligence will be built in the next decade or two.

How many of you expected to see this happening in 2008?

If you didn’t see a robot with a biological brain coming, then why even bother to hold on to the idea that superior AI won’t be possible for hundreds of years?

Scientists teach a computer to recognize attractiveness in women

“Beauty,” goes the old saying, “is in the eye of the beholder.” But does the beholder have to be human? Not necessarily, say scientists at Tel Aviv University. Amit Kagian, an M.Sc. graduate from the TAU School of Computer Sciences, has successfully “taught” a computer how to interpret attractiveness in women.

Kagian published the findings in the scientific journal Vision Research. Co-authors on the work were Kagian’s supervisors Prof. Eytan Ruppin and Prof. Gideon Dror. The study combined the worlds of computer programming and psychology, an example of the multidisciplinary research for which TAU is world-renowned.

But there’s a more serious dimension to this issue that reaches beyond mere vanity. The discovery is a step towards developing artificial intelligence in computers. Other applications for the software could be in plastic and reconstructive surgery and computer visualization programs such as face recognition technologies.
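The article does not spell out the method, but the general recipe, supervised learning on faces that humans have already rated, is easy to sketch. Everything below, from the synthetic feature vectors to the ridge regression, is illustrative rather than what Kagian and colleagues actually did.

# Generic sketch of learning an attractiveness score from human ratings.
# Assumes faces have already been reduced to numeric feature vectors
# (e.g. landmark distances, symmetry measures); this is not the paper's pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_faces, n_features = 90, 12
X = rng.normal(size=(n_faces, n_features))            # placeholder face features
y = X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_faces)
y = np.interp(y, (y.min(), y.max()), (1, 7))          # ratings on a 1-7 scale

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)        # learn rating from features
print("correlation with held-out human ratings:",
      round(np.corrcoef(model.predict(X_test), y_test)[0, 1], 2))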

Matrix-style virtual worlds ‘a few years away’

Are supercomputers on the verge of creating Matrix-style simulated realities? Michael McGuigan at Brookhaven National Laboratory in Upton, New York, thinks so. He says that virtual worlds realistic enough to be mistaken for the real thing are just a few years away.

In 1950, Alan Turing, the father of modern computer science, proposed the ultimate test of artificial intelligence – a human judge engaging in a three-way conversation with a machine and another human should be unable to reliably distinguish man from machine.

A variant on this “Turing Test” is the “Graphics Turing Test”, the twist being that a human judge viewing and interacting with an artificially generated world should be unable to reliably distinguish it from reality.

“By interaction we mean you could control an object – rotate it, for example – and it would render in real-time,” McGuigan says.
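McGuigan’s criterion, then, is about interaction under a real-time budget rather than still-image quality. A bare-bones way to see the budget side of it: rotate a large vertex cloud a little each frame and check the work fits inside roughly 33 ms, i.e. 30 frames per second. Actual photorealistic rendering, of course, costs vastly more per frame than this.

# Bare-bones illustration of the "interact and re-render in real time" idea:
# rotate a vertex cloud a little each frame and check the work fits a 30 fps budget.
# Real photorealistic rendering (lighting, ray tracing) is far heavier than this.
import time
import numpy as np

FRAME_BUDGET = 1 / 30                       # about 33 ms per frame at 30 fps
vertices = np.random.rand(100_000, 3)       # stand-in for an object's mesh

def rotation_z(theta):
    # rotation matrix about the z axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

angle, misses = 0.0, 0
for frame in range(60):                     # simulate 60 frames of interaction
    start = time.perf_counter()
    angle += 0.02                           # the "user" nudges the object round
    posed = vertices @ rotation_z(angle).T  # re-pose the mesh for this frame
    misses += (time.perf_counter() - start) > FRAME_BUDGET
print(f"{misses} of 60 frames missed the 30 fps budget")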

Machines ‘to match man by 2029’

Machines will achieve human-level artificial intelligence by 2029, a leading US inventor has predicted.

Humanity is on the brink of advances that will see tiny robots implanted in people’s brains to make them more intelligent, said engineer Ray Kurzweil.

He said machines and humans would eventually merge through devices implanted in the body to boost intelligence and health.

“It’s really part of our civilisation,” Mr Kurzweil said.

“But that’s not going to be an alien invasion of intelligent machines to displace us.”

Machines were already doing hundreds of things humans used to do, at human levels of intelligence or better, in many different areas, he said.

“I’ve made the case that we will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029,” he said.