Monthly Archives: July 2006

Suspended Animation In Surgery

Stuck Pig.

Mike Duggan, a veterinary surgeon, holds his gloved hands over an 8-inch incision in the belly of pig 78-6, a 120-pound, pink Yorkshire. He’s waiting for a green light from Hasan Alam, a trauma surgeon at Massachusetts General Hospital.

“Make the injury,” Alam says. Duggan nods and slips his hands into the gash, fingers probing through inches of fat and the rosy membranes holding the organs in place. He pushes aside the intestines, ovaries, and bladder, and with a quick scalpel stroke slices open the iliac artery. It’s 10:30 am. Pig 78-6 loses a quarter of her blood within moments. Heart rate and blood pressure plummet. Don’t worry – Alam and Duggan are going to save her.

Alam goes to work on the chest, removing part of a rib to reveal the heart, a throbbing, shiny pink ball the size of a fist. He cuts open the aorta – an even more lethal injury – and blood sprays all over our scrubs. The EKG flatlines. The surgeons drain the remaining blood and connect tubes to the aorta and other vessels, filling the circulatory system with chilled organ-preservation fluid – a nearly frozen daiquiri of salts, sugars, and free-radical scavengers.

Her temperature is 50 degrees Fahrenheit; brain activity has ceased. Alam checks the wall clock and asks a nurse to mark the time: 11:25 am.

But 78-6 is, in fact, only mostly dead – the common term for her state is, believe it or not, suspended animation. Long the domain of transhumanist nut-jobs, cryogenic suspension may be just two years away from clinical trials on humans (presuming someone can solve the sticky ethical problems). Trauma surgeons can’t wait – saving people with serious wounds, like gunshots, is always a race against the effects of blood loss. When blood flow drops, toxins accumulate; just five minutes of low oxygen levels causes brain death.

Suspended animation would give surgeons hours, as opposed to minutes, to perform surgery on life-threatening wounds.

Kurzweil: We Will Have Human-level Artificial Intelligence Within 25 Years

Why We Can Be Confident of Turing Test Capability Within a Quarter Century.

Ray Kurzweil, renowned inventor and futurist with a good track record of predictions, has written a paper in which he argues why we can be confident that we will have human-level AI within 25 years. This, he believes, will eventually result in a Singularity.

The Turing test that Ray talks about is a test in which a person sits behind a computer and holds a conversation with an AI on the other side. If the AI can convince the person chatting with it that it is a real person rather than an AI, the AI is said to have passed the Turing test.

Such an AI is also said to be human-level AI, because it is functionally indistinguishable from a real person with regard to intelligence.
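
As an aside, the protocol itself is simple enough to sketch in a few lines of Python. Everything below (the judge, human, and machine objects and their methods) is hypothetical stand-in code, meant only to show the structure of the imitation game:

```python
import random

def run_turing_test(judge, human, machine, num_rounds=10):
    """Minimal sketch of the imitation game: the judge converses with a
    hidden partner and must then guess whether it was human or machine."""
    label, respondent = random.choice([("human", human),
                                       ("machine", machine)])
    for _ in range(num_rounds):
        question = judge.ask()               # judge types a question
        answer = respondent.reply(question)  # hidden partner answers
        judge.observe(answer)
    # The machine passes this round if it played and was mistaken
    # for the human.
    return label == "machine" and judge.guess() == "human"
```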

Ray provides a few key insights that may change your mind, in case you had already convinced yourself that human-level AI (and beyond) is impossible. A quote from Ray’s paper:

Of the three primary revolutions underlying the Singularity (G, N, and R), the most profound is R, which refers to the creation of nonbiological intelligence that exceeds that of unenhanced humans. A more intelligent process will inherently outcompete one that is less intelligent, making intelligence the most powerful force in the universe.

Artificial intelligence at human levels will necessarily greatly exceed human intelligence for several reasons. As I pointed out earlier machines can readily share their knowledge. As unenhanced humans we do not have the means of sharing the vast patterns of interneuronal connections and neurotransmitter-concentration levels that comprise our learning, knowledge, and skills, other than through slow, language-based communication. Of course, even this method of communication has been very beneficial, as it has distinguished us from other animals and has been an enabling factor in the creation of technology.

Machines can pool their resources in ways that humans cannot. Although teams of humans can accomplish both physical and mental feats that individual humans cannot achieve, machines can more easily and readily aggregate their computational, memory and communications resources. As discussed earlier, the Internet is evolving into a worldwide grid of computing resources that can be instantly brought together to form massive supercomputers.

Machines have exacting memories. Contemporary computers can master billions of facts accurately, a capability that is doubling every year. The underlying speed and price-performance of computing itself is doubling every year, and the rate of doubling is itself accelerating.

As human knowledge migrates to the Web, machines will be able to read, understand, and synthesize all human-machine information. The last time a biological human was able to grasp all human scientific knowledge was hundreds of years ago.

Another advantage of machine intelligence is that it can consistently perform at peak levels and can combine peak skills. Among humans one person may have mastered music composition, while another may have mastered transistor design, but given the fixed architecture of our brains we do not have the capacity (or the time) to develop and utilize the highest level of skill in every increasingly specialized area. Humans also vary a great deal in a particular skill, so that when we speak, say, of human levels of composing music, do we mean Beethoven, or do we mean the average person? Nonbiological intelligence will be able to match and exceed peak human skills in each area.

For these reasons, once a computer is able to match the subtlety and range of human intelligence, it will necessarily soar past it, and then continue its double-exponential ascent.

Absolutely true. I agree 100% with Ray on the above.
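
It’s worth pausing on what Ray’s “doubling every year” amounts to over his 25-year horizon. A one-line compounding calculation (and a conservative one, since Ray says the doubling rate itself accelerates):

```python
years = 25
growth = 2 ** years  # one doubling per year, compounded
print(f"Doubling every year for {years} years gives a {growth:,}x gain")
# Doubling every year for 25 years gives a 33,554,432x gain
```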

Our opinions diverge, however, from here on:

I pointed out above that machines will match (and quickly exceed) peak human skills in each area of skill. So instead of one hundred humans from a shopping mall, let’s take one hundred scientists and engineers: a group of technically trained people with the right backgrounds, capable of improving accessible designs. If a machine attained equivalence to one hundred (and eventually one thousand, then one million) technically trained humans, each operating much faster than a biological human, a rapid acceleration of intelligence would ultimately follow.

However, this acceleration won’t happen immediately when a computer passes the Turing test. The Turing test is comparable to matching the capabilities of an average, educated human and thus is closer to the example of humans from a shopping mall. It will take time for computers to master all of the requisite skills and to marry these skills with all the necessary knowledge bases.

Once we’ve succeeded in creating a machine that can pass the Turing test (around 2029), the succeeding period will be an era of consolidation in which nonbiological intelligence will make rapid gains. However, the extraordinary expansion contemplated for the Singularity, in which human intelligence is multiplied by billions, won’t take place until the mid-2040s (as discussed in chapter 3 of The Singularity is Near).

My opinion is that there will be a hard take-off towards a full-blown Singularity once an AI achieves human-level intelligence.

Maybe it’s me, or maybe it’s just that my definition of the Singularity differs from Ray’s. But I really don’t see why it would take 15 years after the achievement of human-level AI for our society to be transformed beyond recognition.

It seems to me that once you attain human-level AI, thousands of copies of that AI can easily be made. All copies could learn different skills and merge with one another not long after that. All of this would happen at blinding speed, since the computers of that time will be extremely fast and globally interconnected into one giant virtual supercomputer. A radical transformation of the world we live in would be sure to follow.

The only thing that could prevent this scenario, as far as I can see, would be if the first human-level AI used up all our computational resources at once, thereby preventing thousands of copies from being made. However, I think it is very unlikely that we will have a computational-power scarcity 25 years from now.

What are your thoughts on superior AI and its implications?

Please, feel free to comment on this one. I want to hear my readers’ opinions.

(For everybody interested in more of Ray Kurzweil’s future predictions, read Renowned thinker sees boundless future.)

This Is a Computer on Your Brain

This Is a Computer on Your Brain.

A new brain-computer-interface technology could turn our brains into automatic image-identifying machines that operate faster than human consciousness.

Researchers at Columbia University are combining the processing power of the human brain with computer vision to develop a novel device that will allow people to search through images ten times faster than they can on their own.

Darpa, or the Defense Advanced Research Projects Agency, is funding research into the system with hopes of making federal agents’ jobs easier. The technology would allow hours of footage to be very quickly processed, so security officers could identify terrorists or other criminals caught on surveillance video much more efficiently.

The “cortically coupled computer vision system,” known as C3 Vision, is the brainchild of professor Paul Sajda, director of the Laboratory for Intelligent Imaging and Neural Computing at Columbia University. He received a one-year, $758,000 grant from Darpa for the project in late 2005.

The system harnesses the brain’s well-known ability to recognize an image much faster than the person can consciously identify it.

“Our human visual system is the ultimate visual processor,” says Sajda. “We are just trying to couple that with computer vision techniques to make searching through large volumes of imagery more efficient.”

The brain emits a signal as soon as it sees something interesting, and that “aha” signal can be detected by an electroencephalogram, or EEG cap. While users sift through streaming images or video footage, the technology tags the images that elicit a signal and ranks them in order of the strength of the neural signatures. Afterwards, users can examine only the information that their brains identified as important, instead of wading through thousands of images.
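
The article doesn’t include any code, but the tag-and-rank loop it describes is easy to sketch. In this hypothetical Python version, read_eeg_signal stands in for the real EEG cap and signal processing:

```python
def triage_images(image_stream, read_eeg_signal, threshold=0.5):
    """Sketch of the C3 Vision idea: tag each image whose neural
    response crosses a threshold, then rank the tagged images by the
    strength of that response."""
    tagged = []
    for image in image_stream:
        strength = read_eeg_signal(image)  # stand-in for the EEG cap
        if strength >= threshold:          # the "aha" signal fired
            tagged.append((strength, image))
    # Strongest neural signatures first, so the user reviews the most
    # promising hits instead of wading through the whole stream.
    tagged.sort(key=lambda pair: pair[0], reverse=True)
    return [image for _, image in tagged]
```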

This is only the beginning of our coming mind-machine merger.

Ever thought about what could happen when we start amplifying our intelligence?

If not, read the Singularity FAQ.

Ray Kurzweil was right when he said We Are Becoming Cyborgs.

Protein DVD To Store 50 Terabytes

Indian-born scientist developing coated DVDs that can make hard disks obsolete.

An Indian-born scientist in the US is working on developing DVDs which can be coated with a light-sensitive protein and can store up to 50 terabytes (about 50,000 gigabytes) of data.

Professor V Renugopalakrishnan of the Harvard Medical School in Boston claims to have developed a coating made from tiny, genetically altered microbial proteins which could store enough data to make computer hard disks almost obsolete.

“What this will do eventually is eliminate the need for hard drive memory completely,” ABC quoted Prof. Renugopalakrishnan – a BSc in chemistry from Madras University and a PhD in biophysics from Columbia/State University of New York, Buffalo – as saying.

“The protein-based DVDs will be able to store at least 20 times more than Blu-ray and eventually even up to 50,000 gigabytes (about 50 terabytes) of information. You can pack literally thousands and thousands of those proteins on a media like a DVD, a CD or a film or whatever,” he said.

The high-capacity storage devices will be essential to the defence, medical and entertainment industries.
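
The quoted numbers are easy to sanity-check. Assuming a standard single-layer Blu-ray disc at 25 GB (my figure, not the article’s):

```python
blu_ray_gb = 25                 # single-layer Blu-ray disc capacity
near_term_gb = 20 * blu_ray_gb  # "at least 20 times more than Blu-ray"
eventual_gb = 50_000            # the eventual claim: ~50 terabytes
print(f"Near term: {near_term_gb} GB, eventually: {eventual_gb:,} GB")
# Near term: 500 GB, eventually: 50,000 GB
```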

Robot Car Parks Itself

Park the Beamer by Bot.

Driving your car into a cramped parking space can be a harrowing experience, but BMW says it has developed a robotic parking system to solve the problem.

The luxury carmaker’s parking-assist technology will park your car for you as you stand outside and watch, BMW said during a demo of a working prototype at its Munich headquarters this week.

All you do is press down on a remote-control button and your Beamer parks itself.

The parking system is very straightforward to operate, said Raymond Freymann, managing director of BMW group research and technology.

“You just press a button,” Freymann said. “It is something simple, but something that is really smart.”

The company says the technology will be available within three years.

There’s a movie clip of the car in the source article.

Extremely Fast Computers In Our Near Future

Intel aims for 32 cores by 2010.

Chicago (IL) and Westlake Village (CA) – Five years ago, Intel envisioned processors running at 20 GHz by the end of this decade. Today we know that the future will look different. CPUs will sacrifice clock speed for core count: Intel’s first “many core” CPU will run at only two thirds of the clock speed of today’s fastest Xeon CPU – but achieve 15x the performance, thanks to 32 cores.

“Dual-core” is a term Intel never really warmed up to. In fact, two cores per processor is just the first step on a ladder of increasing core counts that, as we believe today, will lead the microprocessor industry into another period of growth. Instead of promoting “dual-core”, Intel typically talks about “multi-core” – a term the company internally refers to as project “Kevet” – and explains to the press and analysts that “many-core” processors – chips that could potentially hold “dozens of cores” – will be available sometime in the future.
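
The opening numbers – 32 cores at two thirds of today’s clock for 15x the performance – imply a specific parallel-scaling efficiency, which a little arithmetic can back out (assuming, as a rough model, that ideal performance scales with core count times clock speed):

```python
cores = 32
clock_ratio = 2 / 3          # clock relative to today's fastest Xeon
ideal = cores * clock_ratio  # naive speedup if scaling were perfect
claimed = 15                 # Intel's claimed overall speedup
print(f"Ideal: {ideal:.1f}x, claimed: {claimed}x, "
      f"implied efficiency: {claimed / ideal:.0%}")
# Ideal: 21.3x, claimed: 15x, implied efficiency: 70%
```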

Freescale Unveils Magnetic Memory Chip.

Achieving a long-sought goal of the $48 billion memory chip industry, Freescale Semiconductor Inc. (FSL) announced the commercial availability of a chip that combines traditional memory’s endurance with a hard drive’s ability to keep data while powered down.

The chips, called magnetoresistive random-access memory or MRAM, maintain information by relying on magnetic properties rather than an electrical charge. Unlike flash memory, which also can keep data without power, MRAM is fast to read and write bits, and doesn’t degrade over time.

Freescale, which was spun off from Motorola Inc. (MOT) in July 2004, said Monday it has been producing the 4-megabit MRAM chips at an Arizona factory for two months to build inventory. A number of chip makers have been pursuing the technology for a decade or more, including IBM Corp.

Sometimes referred to as “universal” memory, MRAM could displace a number of chips found in every electronic device, from PCs, cell phones, music players and cameras to the computing components of kitchen appliances, cars and airplanes.

“This is the most significant memory introduction in this decade,” said Will Strauss, an analyst with research firm Forward Concepts. “This is radically new technology. People have been dabbling in this for years, but nobody has been able to make it in volume.”

MRAM is totally cool. It has all of the advantages of our current memories (hard disks, DDR-RAM, flash RAM, etc.) and none of the disadvantages.

MRAM is fast and non-volatile. The latter allows for instant-on PCs, because loading the OS on every boot won’t be necessary anymore.

The only reason we have different types of memory nowadays is that each type has its own advantages. Hard disks allow for permanent storage. DDR-RAM is fast and therefore well suited to processing data. Cache memory is extremely fast but also very expensive, which is why conventional computers have only very little of it.

In the years to come, all of these types of memory will be replaced by MRAM. And that is how MRAM earned the name “universal memory”.
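
To make the trade-offs concrete, here is that comparison reduced to a small table. The rows are the memory types mentioned above; the yes/no values follow the usual textbook characterizations plus the article’s claims about MRAM:

```python
# Why MRAM gets the nickname "universal memory": per the article's
# claims, it is the only row with a "yes" in both columns.
#                  fast read/write   keeps data without power
memory_types = {
    "hard disk":     (False,          True),
    "DDR-RAM":       (True,           False),
    "flash RAM":     (False,          True),
    "cache memory":  (True,           False),
    "MRAM":          (True,           True),
}
```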

Computers are not only getting a lot faster; they’re also getting a lot smarter:

How a Computer Knows What Many Managers Don’t.

So why ever trust a computer model to run your investments? Because, in the real world, it seems to pay off.

Replace your mouse with your eye.

“Eye-trackers will one day be so reliable and so simple that they will become yet another input device on your computer, like a much more sophisticated mouse,” said Professor Guang-Zhong Yang of the Department of Computing at Imperial College.

Also see The Future Of Computers.

Tweaking Genes In The Basement

Tweaking Genes in the Basement.

In the 1970s, before the PC era, there were computer hobbyists. A group of them formed the Homebrew Computer Club in a Menlo Park garage in 1975 to trade integrated circuits and swap tips on assembling rudimentary computers, like the Altair 8800, a rig with no inputs or outputs and memory measured in kilobytes.

Among the Club’s members were Apple founders Steve Wozniak and Steve Jobs.

As the tools of biotechnology become accessible (and affordable) to a wider public for the first time, hobbyists are recapturing that collaborative ethos and applying it to tinkering with the building blocks of life.

Eugene Thacker is a professor of literature, culture and communications at Georgia Tech and a member of the Biotech Hobbyist collective. Just as the computer hobbyists sought unconventional applications for computer circuitry, the new collective is looking for “non-prescribed uses” of biotechnology, Thacker said.

The group has published a set of informal DIY articles, mimicking the form of the newsletters and magazines of the computer hobbyists — many of which are archived online. Thacker walks readers through the steps of performing a basic computation using a DNA “computer” in his article “Personal Biocomputing” (PDF). The tools for the project include a $100 high-school science education kit and some used lab equipment.
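
Thacker’s article walks through real wet-lab steps, which obviously can’t be reproduced here. As a software analogue, though, here is a hypothetical sketch of the generate-and-filter strategy behind classic DNA computing (Adleman’s Hamiltonian-path experiment), with random strand assembly standing in for the massively parallel chemistry:

```python
import random

# Tiny graph: find a path from "start" to "end" visiting every node once.
nodes = ["start", "a", "b", "end"]
edges = [("start", "a"), ("start", "b"), ("a", "b"), ("b", "a"),
         ("a", "end"), ("b", "end")]

def random_path(num_edges):
    """Stand-in for DNA ligation: chain together random compatible edges."""
    path = [random.choice(edges)]
    while len(path) < num_edges:
        nxt = [e for e in edges if e[0] == path[-1][1]]
        if not nxt:
            return None
        path.append(random.choice(nxt))
    return path

# "Synthesize" a huge soup of candidate paths, then filter them, just as
# the lab protocol filters strands by start/end markers and length.
solutions = set()
for _ in range(10_000):
    p = random_path(len(nodes) - 1)
    if p is None:
        continue
    visited = [p[0][0]] + [e[1] for e in p]
    if visited[0] == "start" and visited[-1] == "end" \
            and len(set(visited)) == len(nodes):
        solutions.add(tuple(visited))

print(solutions)  # e.g. {('start', 'a', 'b', 'end'), ('start', 'b', 'a', 'end')}
```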

High-tech Prosthetics

High-tech prosthetics: Out on a limb.

Advances such as telemedicine and the use of wireless devices in hospitals have become an accepted part of medical technology, but the notion of replacing limbs with computer-powered devices seems more like something out of “RoboCop” or “The Six Million Dollar Man.”

As far back as the Civil War, prosthetic limbs have consisted of unwieldy lumps of wood, plastic or metal. While some advances in materials have improved comfort for amputees, prosthetics still lack the responsiveness and feel of actual limbs.

Icelandic prosthetic maker Ossur is trying to change that with its Rheo Knee. Billed as the first knee with artificial intelligence, it combines up to 15 sensors, a processor, software and a memory chip to analyze the motion of the prosthetic and learn how to move accordingly. More recently, Ossur introduced the Power Knee, which houses a motor and more sensors. The motor helps replicate some of the action of muscles that have been lost along with the limb.
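
Ossur’s firmware isn’t public, so the following is pure illustration: a generic sketch of the sensor-to-actuator loop the paragraph describes, with invented names for the sensors, model, and adaptation step:

```python
def knee_control_step(sensors, gait_model, actuator):
    """One cycle of a hypothetical control loop, loosely mirroring the
    Rheo Knee description: read the sensors, let a learned gait model
    choose the knee resistance, then update the model."""
    reading = {name: sensor.read() for name, sensor in sensors.items()}
    resistance = gait_model.predict(reading)  # learned from past strides
    actuator.set_resistance(resistance)
    gait_model.update(reading)                # adapt to the wearer's gait
```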

Bionics industry researchers estimate the next five years will bring major advances, including mind-controlled prosthetics in which sensors are attached directly to a patient’s brain. Already, companies and universities are developing bionic feet, new cochlear implants to restore hearing to deaf people, prosthetic arms with embedded chips to control elbow and wrist movement, and hand prosthetics with artificial intelligence to control grip.

Jesse Sullivan, who lost both arms in a 2001 electrical accident, is testing technology that allows him to use his thoughts to control a bionic arm (the other is prosthetic). Dr. Todd Kuiken at the Rehabilitation Institute of Chicago took nerves from Sullivan’s shoulder and implanted them in his chest, where sensors translate nerve impulses into instructions for a processor in the bionic arm.

How long until bionic appendages outperform our own biological ones?

Food for thought. 🙂

Progress In Stem Cell Research

Crucial immune cells derived from stem cells.

For the first time human embryonic stem cells have been coaxed into becoming T-cells, suggesting new ways to fight immune disorders including AIDS and the “bubble boy” disease, X-SCID.

Embryonic stem cells (ESCs) are an attractive source of human T-cells for research and therapy because ESCs can be genetically manipulated with relative ease and can be grown in large quantities.

T-cells are crucial to the working of the immune system. If these cells are destroyed or absent – as occurs during HIV infection and X-SCID, respectively – the body cannot fight off infections. But despite their importance, much about human T-cell function is unknown because the cells are difficult to analyse with standard tools of genetic engineering.

‘Virgin birth’ stem cells bypass ethical objections.

“VIRGIN-BIRTH” embryos have given rise to human embryonic stem cells capable of differentiating into neurons. The embryos were produced by parthenogenesis, a form of asexual reproduction in which eggs can develop into embryos without being fertilised by sperm. The technique could lead to a source of embryonic stem (ES) cells that could be used therapeutically without having to destroy a viable embryo.

Human eggs have two sets of chromosomes until fertilisation, when the second set is usually expelled. If this expulsion is blocked but the egg is accidentally or experimentally activated as if it had been fertilised, a parthenote is formed.

Because some of the genes needed for development are only activated in chromosomes from the sperm, human parthenotes never develop past a few days. This means that stem cells taken from them should bypass ethical objections of harvesting them from embryos with the potential to form human lives, say Fulvio Gandolfi and Tiziana Brevini of the University of Milan, Italy.

This is valuable research. Stem cells will be able to boost our health immensely.

Say goodbye to cumbersome organ transplants and functionally limited artificial prostheses. With these babies, we can regrow our diseased/damaged/missing limbs and organs.

Science might even find a way to give us periodic stem cell injections using cells that have our own DNA but are younger than the cells in our body. That way, we would progressively grow younger, instead of older. And the concept is fairly simple.

Is immortality around the corner?

The possibilities boggle the mind.

Also see this post about super regenerative mice.

Nanotechnology To Enable Hydrogen Economy

Nanotechnology to Lower Hydrogen Economy and Fuel Cell Roadblocks.

Nanotechnology will play an important role in addressing many daunting technical challenges to hydrogen-based transportation, a highly regarded scientist and MIT professor said on Tuesday.

Mildred Dresselhaus, a professor of physics and electrical engineering at MIT, gave the keynote address at an MIT conference on nanotechnology and energy. Among other science management positions, Dresselhaus chaired a 2003 Department of Energy report called Basic Research Needs for a Hydrogen Economy.

During her talk, Dresselhaus said there has been progress since the 2003 report was published, but there remain a number of challenges in hydrogen production, storage, and fuel cells, the devices which convert hydrogen to power.

“If we’re going to use hydrogen for transportation or other large-scale uses, we are faced with a scale factor – we have to increase production by factors of many to achieve the levels of the energy supply for transportation,” she said.

Storage, too, remains a “vexing problem,” she said. “Energy density is the biggest challenge,” Dresselhaus said. “Even if we address that, we still have a whole bunch of things to do.”

For example, work needs to be done to reduce the amount of energy released and heat created when transferring hydrogen into a car.

In the short term, most production-related research is focused on using fossil fuels to make hydrogen, since hydrogen cannot simply be harvested the way fossil fuels can.

Longer term, nanostructured materials, used in fuel-cell catalysts and elsewhere, could lead to technical breakthroughs, she said.

Echoing the comments of her colleagues at MIT, Dresselhaus said that in the next 50 years new technologies will need to be developed to satisfy growing energy demand and to address climate change from increased carbon in the atmosphere.

In 2003, the report’s authors said that a hydrogen-based economy was difficult but achievable, she noted. “Probably nobody has changed their assessment: the problem is difficult, but we are making rapid progress, and nanostructures are an important component,” Dresselhaus said.