The surgeons describe their innovative technique in the December 2008 issue of the journal Urology. They have now performed the operation, using the DaVinci robotic surgical system, six times, with good results and no significant complications.
The first patient, treated Feb. 21, 2008, suffered from a very small, spasmodic bladder, a birth defect that led to gradual kidney damage and loss of urinary control.
“We refer to this condition as neurogenic bladder,” said team leader Mohan S. Gundeti, MD, assistant professor of surgery and chief of pediatric urology at the University of Chicago’s Comer Children’s Hospital. “Her bladder could barely hold six ounces. Worse, it produced frequent involuntary contractions, which forced the urine back up into the kidneys, where it slowly but inevitably causes damage, including frequent infections.”
The girl always felt that she urgently had to go to the bathroom. She stopped drinking juice or soda. She even cut back on water, to less than two cups a day. Medication helped a little, but despite two years of trying different treatments, the problem continued to get worse and began to cause kidney damage, which made surgery necessary.
Although Gundeti had performed the operation to enlarge and relax a tiny spasmodic bladder many times, it had never been done robotically, an approach that has produced quicker recovery, less pain and minimal scarring in other procedures.
“This is a major, lengthy operation,” he said, “essentially five smaller procedures done in sequence.”
Known as an augmentation ileocystoplasty with Mitrofanoff appendicovesicostomy, the surgery normally begins with a big incision, about six inches long, from above the navel down to the pubic area, followed by placement of retractors to pull the stomach muscles out of the way.
Previous investigations of the neural code for complex object shape have focused on two-dimensional pattern representation. This may be the primary mode for object vision given its simplicity and direct relation to the retinal image. In contrast, three-dimensional shape representation requires higher-dimensional coding derived from extensive computation. We found evidence for an explicit neural code for complex three-dimensional object shape. We used an evolutionary stimulus strategy and linear/nonlinear response models to characterize three-dimensional shape responses in macaque monkey inferotemporal cortex (IT). We found widespread tuning for three-dimensional spatial configurations of surface fragments characterized by their three-dimensional orientations and joint principal curvatures. Configural representation of three-dimensional shape could provide specific knowledge of object structure to support guidance of complex physical interactions and evaluation of object functionality and utility.
A primary goal in the study of object vision is to decipher the neural code for complex object shape. At the retinal level, object shape is represented isomorphically (that is, replicated point for point) across a two-dimensional map comprising approximately 10^6 pixels. This isomorphic representation is far too unwieldy and unstable (as a result of continual changes in object position and orientation) to be useful for object perception. The ventral pathway of visual cortex [1, 2] must transform the isomorphic image into a compact, stable neural code that efficiently captures the shape information needed for identification and other aspects of object vision.
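The "linear/nonlinear response models" mentioned in the abstract follow a standard two-stage form: a weighted (linear) sum of stimulus features passed through a nonlinearity. A minimal sketch, with invented weights and a simple rectifying nonlinearity standing in for the authors' fitted models:

```python
def ln_response(stimulus_features, weights, threshold=0.0):
    """Linear stage: weighted sum of stimulus features.
    Nonlinear stage: half-wave rectification above a threshold."""
    linear = sum(w * s for w, s in zip(weights, stimulus_features))
    return max(0.0, linear - threshold)

# Hypothetical unit tuned along three surface-fragment feature dimensions
weights = [1.0, -0.5, 0.25]
print(ln_response([2.0, 1.0, 4.0], weights))  # 2.0 - 0.5 + 1.0 = 2.5
print(ln_response([-3.0, 0.0, 0.0], weights))  # rectified to 0.0
```

The real models characterize tuning for 3-D orientations and joint principal curvatures; this sketch only shows the linear/nonlinear cascade itself.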
Computer science — it’s not just about hardware and software anymore.
It’s about oceans, stars, cancer cells, proteins and networks of friends. Ken Birman, a computer science professor at Cornell University, says his discipline is on the way to becoming “the universal science,” a framework underpinning all others, including the social sciences.
An extravagant claim from someone with a vested interest? The essence of Birman’s assertion is that computers have gone from being a tool serving science — basically an improvement on the slide rule and abacus — to being part of the science. Consider these recent developments:
“Systems biologists” at Harvard Medical School have developed a “computational language” called “Little b” for modeling biological processes. Going beyond the familiar logical, arithmetic and control constructs of most languages, it reasons about biological data, learns from it, and incorporates past learning into new models and predictors of cells’ behaviors. Its creators call it a “scientific collaborator.”
Microsoft Research (MSR) is supporting a U.S.-Canadian consortium building an enormous underwater observatory on the Juan de Fuca Plate off the coast of Washington state. Project Neptune will connect thousands of chemical, geological and biological sensors on more than 1,000 miles of fiber-optic cables and will stream data continuously to scientists for as long as a decade. Researchers will be able to test their theories by looking at the data, but software tools that MSR is developing will search for patterns and events not anticipated by scientists and present their findings to the scientists.
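The idea of software flagging "patterns and events not anticipated by scientists" can be illustrated with a toy anomaly detector on a sensor stream. The actual MSR tools are unspecified, so everything below is an assumption:

```python
import statistics

def flag_anomalies(stream, window=5, k=3.0):
    """Yield (index, value) for readings more than k standard deviations
    from the mean of the preceding `window` readings."""
    history = []
    for i, x in enumerate(stream):
        if len(history) >= window:
            recent = history[-window:]
            mu = statistics.mean(recent)
            sd = statistics.pstdev(recent)
            if sd > 0 and abs(x - mu) > k * sd:
                yield i, x
        history.append(x)

# Steady readings with one spike nobody predicted
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 42.0, 10.0]
print(list(flag_anomalies(readings)))  # [(6, 42.0)]
```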
Last year, researchers from Harvard Medical School and the University of California, San Diego, used statistical analysis to mine heart-disease data from 12,000 people in the Framingham Heart Study and learned that obesity appears to spread via social ties. They were able to construct social networks by employing previously unused information about acquaintances that had been gathered solely for the purpose of locating subjects during the 32-year study.
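Building a social network from acquaintance records and asking whether a trait clusters among connected subjects can be sketched as follows; the names and labels are invented for illustration and this is not the Framingham analysis itself:

```python
from collections import defaultdict

# Hypothetical acquaintance records of the kind gathered to track subjects
records = [("alice", "bob"), ("bob", "carol"), ("carol", "dave"), ("alice", "carol")]
obese = {"alice", "carol"}  # invented labels for illustration

# Build an undirected adjacency list from the pairs
graph = defaultdict(set)
for a, b in records:
    graph[a].add(b)
    graph[b].add(a)

def obese_neighbor_fraction(person):
    """Fraction of a subject's acquaintances who are obese."""
    friends = graph[person]
    return len(friends & obese) / len(friends) if friends else 0.0

print(obese_neighbor_fraction("bob"))    # alice and carol are both obese -> 1.0
print(obese_neighbor_fraction("alice"))  # bob and carol; only carol -> 0.5
```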
Computer scientists and plant biologists at Cornell developed algorithms to build and analyze 3-D maps of tomato proteins. They discovered the “plumping” factor that is responsible for the evolution of the tomato from a small berry to the big fruit we eat today. Researchers then devised an algorithm for matching 3-D shapes and used it to determine that the tomato-plumping gene fragment closely resembles an oncogene associated with human cancers. That work would have taken decades without computer science, researchers say.
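One simple, generic way to compare 3-D shapes is the root-mean-square deviation between corresponding points after centering; the Cornell researchers devised their own matching algorithm, so the sketch below is only a stand-in for the general idea:

```python
import math

def centered(points):
    """Translate a list of (x, y, z) points so their centroid is the origin."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(x - cx, y - cy, z - cz) for x, y, z in points]

def rmsd(a, b):
    """Root-mean-square deviation between corresponding points,
    after centering both shapes to remove translation."""
    a, b = centered(a), centered(b)
    sq = sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2
             for p, q in zip(a, b))
    return math.sqrt(sq / len(a))

shape1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
shape2 = [(5, 5, 5), (6, 5, 5), (5, 6, 5)]  # same shape, translated
print(round(rmsd(shape1, shape2), 9))  # 0.0 -> identical up to translation
```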
Combining solar and robots could never be bad (WALL-E!), but at the Solar Power International convention it wasn’t about solar-powered robots as much as it was robotics that can help with the manufacturing and production of solar gear. There were at least four booths touting robotics for stacking solar panels, assembling products and inspecting systems.
We took this short 15-second video of the solar robotic solution from Adept. In the video the Adept Quattro quickly picks up and places the solar products into exact locations, which the company says maximizes productivity and minimizes breakage.
The latest request from the Pentagon jars the senses. At least, it did mine. They are looking for contractors to provide a “Multi-Robot Pursuit System” that will let packs of robots “search for and detect a non-cooperative human”.
One thing that really bugs defence chiefs is having their troops diverted from other duties to control robots. So having a pack of them controlled by one person makes logistical sense. But I’m concerned about where this technology will end up.
Given that iRobot last year struck a deal with Taser International to mount stun weapons on its military robots, how long before we see packs of droids hunting down pesky demonstrators with paralysing weapons? Or could the packs even be lethally armed? I asked two experts on automated weapons what they thought – click the continue reading link to read what they said.
Both were concerned that packs of robots would be entrusted with tasks – and weapons – they were not up to handling without making wrong decisions.
In addition to being one of the fathers of computer science, Alan Turing postulated a very simple test for when computers move beyond calculations and start engaging in what we might consider thought. For Turing, the ultimate test was whether a person, engaged in a text-based conversation with a machine, would believe that it was conversing with another human.
Each year, the University of Reading hosts a competition where software is put to this test, with the winner taking home the Loebner Prize in Artificial Intelligence. This year’s winner, called Elbot, came within one judge of passing the test, but its success may be less important than the underlying technology: Elbot is the product of a company that promises its software can help companies take the requirement for humans out of live chats and e-mail.
Over a dozen competitors took part in this year’s contest, including older favorites like ALICE and Jabberwacky, both of which wound up among the six finalists. Elbot took home the Loebner Prize by convincing three of a dozen judges that it was human; it and most of the rest of the bots received high scores for portions of their conversation.
Typically, fooling 30 percent of judges is considered a pass on the Turing test, so this result suggests that the combination of fast processors and sophisticated software is on the verge of passing the test.
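The arithmetic behind "within one judge of passing" can be checked directly against the 30 percent criterion:

```python
judges = 12
fooled = 3          # judges Elbot convinced it was human
threshold = 0.30    # commonly cited pass criterion

rate = fooled / judges
print(rate, rate >= threshold)             # 0.25 False: just short of a pass
print((fooled + 1) / judges >= threshold)  # True: one more judge would pass
```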
Looks to me like Kurzweil will be cashing in on his Turing bet with Mitch Kapor soon.
I had a phone call late last week with a semantic startup called Siri that was spun out of SRI International (the birthplace of the computer mouse and the LCD screen, among many other important technologies). Most startups are willing to talk about their products “off the record” but this one wouldn’t divulge much beyond the fact that they’ve raised $8.5 million in Series A funding from Menlo Ventures and Morgenthaler.
What we do know is that the company was incorporated in December 2007 with the goal of commercializing aspects of the CALO cognitive learning system, which receives heavy funding ($200 million plus) through the PAL (Personalized Assistant that Learns) program of the Defense Advanced Research Projects Agency (DARPA), a supporter of research in a broad range of technologies that could potentially benefit the Department of Defense.
From the sound of things, Siri’s 19 developers – mostly engineers who count Yahoo, Google, Apple, Xerox, NASA, and Netscape as their former employers – have been working on a system that will use artificial intelligence to automate many of the tasks that people currently conduct manually online. The founders describe themselves as out to change the fundamental ways that people use the internet, apparently by leveraging artificial intelligence that will learn from you and then give you the luxury of thinking less on your own.
Can machines think? That was the question posed by the great mathematician Alan Turing. More than half a century later, six computers are about to converse with human interrogators in an experiment that will attempt to prove that the answer is yes.
In the Turing test a machine seeks to fool judges into believing that it could be human. The test is performed by conducting a text-based conversation on any subject. If the computer’s responses are indistinguishable from those of a human, it has passed the Turing test and can be said to be “thinking”.
No machine has yet passed the test devised by Turing, who helped to crack German military codes during the Second World War. But at 9am next Sunday, six computer programs – “artificial conversational entities” – will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognised “thinking” machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be “conscious” – and if humans should have the “right” to switch it off.
A new method of house construction is in the final stages of development and you could well be seeing signs stating “Beware Contour Crafting in progress” sometime soon.
Boffins at the University of Southern California have developed a robotic gantry that builds up walls to almost any shape and specification without any manual labour. Plans are input into a computer and the concrete-laying machine goes about its duty, able to finish an entire house within a day without a single tea break.
The building machine is the brainchild of Behrokh Khoshnevis, who began looking into methods of rapid construction as a way to reconstruct areas devastated by natural disasters such as the earthquakes that have plagued his native Iran.