Meet Laura, the virtual personal assistant for those of us who cannot afford a human one.
Built by researchers at Microsoft, Laura appears as a talking head on a screen. You can speak to her and ask her to handle basic tasks like booking appointments for meetings or scheduling a flight.
More compelling, however, is Laura’s ability to make sophisticated decisions about the people in front of her, judging things like their attire, their apparent impatience, their importance and their preferred appointment times.
Instead of being a relatively dumb terminal, Laura represents a nuanced attempt to recreate the finer aspects of a relationship that can develop between an executive and an assistant over the course of many years.
“What we’re after is common sense about etiquette and what people want,” said Eric Horvitz, a researcher at Microsoft who specializes in machine learning.
Microsoft wants to put a Laura on the desk of every person who has ever dreamed of having a personal aide. Laura and other devices like her stand as Microsoft’s potential path for diversifying beyond the personal computer, sales of which are stagnating.
Mitsubishi will be pitching a 3D product consisting of Nvidia driver software, 3D glasses with a receiver and a sender that is placed on top of a TV. If you own a home entertainment PC with a potent Nvidia graphics card, the driver software can create 3D imagery from regular video games, we are told. The sending unit reacts to the position of the 3D glasses to create a true 3D feeling.
We were able to test-drive the technology for a few minutes and were deeply impressed. Mitsubishi said that the product will be offered for $200 beginning next month – home entertainment PC and 3D-enabled LCD TV not included.
Several religions have suggested that the human soul can flow seamlessly into new vessels of entrapment during various stages of life. Now Swedish neuroscientists have attained some kind of spiritual plane by simulating the phenomenon in the laboratory.
One experiment involves making participants believe that they have swapped bodies with a mannequin. Participants face screens displaying the output of cameras attached to a mannequin’s eyes, and thus share the mannequin’s vantage point: glancing downwards, they see not their own stomach but that of their plastic counterpart. Scientists then press on the stomachs of the participants and the mannequins at the same time. Their study of the brain waves of experiment participants suggests that the participants really sensed they were in the bodies of the mannequins.
Through a similar experiment, scientists also established that they could make a person believe he had been transferred into the body of another living being. The experiment did not seem to work when attempting to transfer people into inanimate objects, suggesting that there is something special about being human – or at least human-like, in the case of the mannequins.
News items do not make clear how scientists established that the person under scrutiny really perceived himself to be in the position of another. Is it really as simple as measuring electrical voltages across the expanse of our scalps? Surely there is more to human perception than that.
Extraordinarily lifelike characters are to begin appearing in films and computer games thanks to a new type of animation technology.
Emily – the woman in the above animation – was produced using a new modelling technology that enables the most minute details of a facial expression to be captured and recreated.
She is considered to be one of the first animations to have overleapt a long-standing barrier known as the ‘uncanny valley’ – the tendency for animation to strike viewers as eerie rather than lifelike as it approaches, but does not quite reach, human likeness.
Researchers at a Californian company which makes computer-generated imagery for Hollywood films started with a video of an employee talking. They then broke the facial movements down into dozens of smaller movements, each of which was given a ‘control system’.
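The article does not name the underlying technique, but decomposing a face into dozens of independently weighted ‘control systems’ resembles standard blend-shape animation, where each control stores the vertex offsets it contributes at full strength. A minimal sketch of that idea – all names and numbers here are hypothetical, not taken from the company’s pipeline:

```python
import numpy as np

# Toy face mesh: 4 vertices in 3D, starting from a neutral pose.
neutral = np.zeros((4, 3))

# Each "control system" is a displacement shape; a weight in [0, 1]
# says how strongly it is applied.
controls = {
    "brow_raise":      np.array([[0, 0.2, 0], [0, 0.2, 0], [0, 0, 0], [0, 0, 0]]),
    "lip_corner_pull": np.array([[0, 0, 0], [0, 0, 0], [0.1, 0.05, 0], [-0.1, 0.05, 0]]),
}

def pose_face(weights):
    """Blend the weighted control shapes onto the neutral mesh."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * controls[name]
    return mesh

# A faint brow raise combined with a full smile.
smile = pose_face({"brow_raise": 0.3, "lip_corner_pull": 1.0})
```

Capturing an actor’s expression then reduces to solving for the weights frame by frame, which is why the fidelity of the captured ‘smaller movements’ matters so much.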
The source has a movie clip you will not want to miss.
LivePlace.com has posted a video displaying a very impressive render of a 3D virtual world called City Space. At this point very little is known about LivePlace, other than that the WHOIS lists the domain’s owner as Brad Greenspan, one of the co-founders of MySpace. Note: It appears that in the 20 minutes since I spoke to Greenspan about this post, someone was told to take LivePlace down (apparently nobody was supposed to find it).
The other nugget of information found in the video is that the game is running on OTOY, the 3D engine that renders graphics in the cloud. The technology allows relatively weak computers (or even mobile phones) to display incredibly detailed graphics comparable to those seen in Hollywood movies.
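The division of labour described above – heavy rendering in the cloud, a thin client that only decodes and displays – can be sketched in a few lines. This is a toy illustration of the architecture only; OTOY’s actual protocol and APIs are not public, and every name here is hypothetical:

```python
import zlib

def server_render(scene_description: str) -> bytes:
    """Stand-in for the cloud GPU: produce pixel data, then compress it
    for transmission. Here the 'framebuffer' is just repeated text."""
    pixels = (scene_description * 100).encode()
    return zlib.compress(pixels)

def client_display(stream: bytes) -> int:
    """Thin client: decompress the received frame and 'display' it.
    Returns the uncompressed frame size in bytes."""
    frame = zlib.decompress(stream)
    return len(frame)

sent = server_render("city-space")
shown = client_display(sent)
```

The point is that the client’s work is independent of scene complexity – it handles a compressed video-like stream, which is why even a phone can show film-quality imagery.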
Are supercomputers on the verge of creating Matrix-style simulated realities? Michael McGuigan at Brookhaven National Laboratory in Upton, New York, thinks so. He says that virtual worlds realistic enough to be mistaken for the real thing are just a few years away.
In 1950, Alan Turing, the father of modern computer science, proposed the ultimate test of artificial intelligence – a human judge engaging in a three-way conversation with a machine and another human should be unable to reliably distinguish man from machine.
A variant on this “Turing Test” is the “Graphics Turing Test”, the twist being that a human judge viewing and interacting with an artificially generated world should be unable to reliably distinguish it from reality.
“By interaction we mean you could control an object – rotate it, for example – and it would render in real-time,” McGuigan says.
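McGuigan’s interaction criterion comes down to a frame budget: each time the judge rotates the object, the new view must be drawn within roughly 33 ms to sustain 30 frames per second. A minimal sketch of the transform at the heart of that loop, with the actual renderer left abstract (all names hypothetical):

```python
import math

FRAME_BUDGET_S = 1 / 30  # ~33 ms per frame for 30 fps "real-time"

def rotate_y(point, angle):
    """Rotate a 3D point about the y-axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def render_frame(vertices, angle):
    # Stand-in for the real renderer: just transform the geometry.
    return [rotate_y(v, angle) for v in vertices]

# Simulated interaction: the judge drags one corner of a cube through 90°.
cube_corner = [(1.0, 1.0, 1.0)]
frame = render_frame(cube_corner, math.pi / 2)
```

Passing the Graphics Turing Test means doing this, photorealistically, for every visible surface within the budget – which is why McGuigan looks to supercomputers rather than desktop GPUs.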