A little while ago, I posted How Long Until Photorealistic Virtual Environments?
But that was before I had seen this:
My guess would be… not long.
The term “sketch interpretation” may not get people on the edge of their seats all by itself… but wait until you’ve seen the demonstration of what it can do: Sketch Interpretation.
The idea is that you can draw anything and the software will understand what you have drawn. Not only will it understand, it will also apply the proper physics to it. You've gotta see it to get it. The movie clip shows a guy drawing a car rolling down a hill, and the software then actually makes it happen.
This is a great tool for people who want to build their own videogames or virtual environments. It's tools like these that will ensure we have a very rich Internet experience a few years from now, when every site you visit will be a 3D environment.
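As a toy illustration of what "applying the proper physics" to a recognized drawing involves, here is a minimal sketch. Everything below is my own invention, not ASSIST's actual code: I assume the sketch has already been recognized as a rigid body on a frictionless 30-degree incline, and then just integrate gravity.

```python
import math

# Hypothetical post-recognition step: the drawn "car" is treated as a
# rigid body on a frictionless incline, so ordinary kinematics applies.
g = 9.81                      # gravity, m/s^2
angle = math.radians(30)      # slope of the drawn hill (assumed)
accel = g * math.sin(angle)   # acceleration along the incline

position, velocity, dt = 0.0, 0.0, 0.1
for _ in range(10):           # simulate one second in 0.1 s steps
    velocity += accel * dt    # simple Euler integration
    position += velocity * dt

print(round(velocity, 2))     # speed along the slope after 1 s
```

The real system's hard part is of course the recognition, not this integration step.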
In general, technology like this simply allows people to easily build stuff that’s cool.
A link to the official site: ASSIST.
When René Descartes said, "I think, therefore I am," the philosopher probably didn't imagine a stamp-sized clump of rat neurons grown in a dish, hooked to a computer.
For years, scientists have learned about brain development by watching the firing patterns of lab-raised brain cells. Until recently, though, the brains-in-a-dish couldn’t receive information. Unlike actual gray matter, they could only send signals.
Scientists at the Georgia Institute of Technology figured they could learn more from neuron clumps that acted more like real brains, so they’ve developed “neurally controlled animats” — a few thousand rat neurons grown atop a grid of electrodes and connected to a robot body or computer-simulated virtual environment.
In theory, animats seem to cross the line from mass of goo to autonomous brain. But Steve Potter, a neuroscientist and head of the Georgia Tech lab where the animats were created, said his brain clumps won’t be reciting French philosophy anytime soon.
“Our goal is not to get something as conscious as a person,” he said. “We’re studying basic mechanisms of learning and memory.” The researchers are focusing on how groups of individual cells interact and change when stimulated.
Rather than create a sentient being, the goal of the work is to learn about the earliest human brain development, according to Daniel Wagenaar, a California Institute of Technology neuroscientist who worked with Potter on the animat.
“When someone is born, they’re still not able to control much of their behavior,” Wagenaar said. “Somehow this system has to learn to control a body. Part of that comes from interactions with environment. We hope to get, at the very simple level of small nervous system, some insight into how that occurs.”
The scientists rely on these models because no technology exists to watch live human brain cells in real-time action.
The first generation of animats performed simple tasks. The virtual mouse tended to move in one direction (right). A dish-brain-controlled robot did manage to stay away from a moving target — impressive-sounding perhaps but not particularly complicated. A robotic arm holding a set of pens and attached to a clump of neurons created art — albeit in the eye of the beholder.
“Since our cultured networks are so interconnected, they have some sense of what is going on in themselves,” Potter said. “We can also feed their activity back to them, to mediate their ‘sense of self.'”
The next phase of animats will likely have an even keener sense of self.
“In the next wave, we hope to sequence behaviors,” Potter said. “The sensory input resulting from one behavior will trigger the next appropriate behavior.” In other words, he hopes the animats will learn.
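To make the idea of sequenced behaviors concrete, here is a minimal sketch of a feedback-driven behavior loop. The behavior names, the fake sensory feedback, and the transition table are all invented for illustration; they are not Potter's actual experimental setup.

```python
# Each behavior produces sensory feedback, and that feedback selects
# the next behavior -- the "sequencing" Potter describes.
transitions = {
    ("search", "target_seen"): "approach",
    ("approach", "target_near"): "touch",
    ("touch", "contact"): "search",   # loop back and start over
}

def feedback(behavior):
    # Stand-in for the sensory input each behavior would generate.
    return {"search": "target_seen",
            "approach": "target_near",
            "touch": "contact"}[behavior]

behavior, trace = "search", []
for _ in range(4):
    trace.append(behavior)
    behavior = transitions[(behavior, feedback(behavior))]

print(trace)
```

In the animats, of course, the "transition table" would not be written down anywhere; it would have to emerge from how the neuron culture rewires itself under stimulation.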
And if consciousness is a function of complexity, what would happen if a whole bunch of dish-brains were hooked together? Right now, Potter said, the biggest obstacle to trying is the $60,000 price tag of each “rig.”
“That’s the present limit,” he said. “If we had a rich patron, I would love to get more rigs to do some ‘social networks’ experiments.”
Potter hopes his research will eventually lead to better neural prosthetics, understanding of neural pathologies and even artificial intelligence. As for consciousness, he said, “I don’t think it will get that far. But I’d love to be proven wrong.”
I don’t think $60,000 is all that much. As a matter of fact, in the scientific world, $60,000 is nothing. Especially if you take a look at what kind of amazing thing you can set up with it.
But apparently these researchers don’t have the money to set up a few dozen of these rigs just like that. That is okay, because the costs of the enabling technology behind this will come down fast.
Nano-imaging techniques will make possible real-time analysis of neuro-molecular level events in the human brain. The brain imaging bottleneck will be broken around 2015.
“We are developing the tools to reprogram the processes involved in disease and aging,” says Ray Kurzweil in his article “Reprogramming Biology” in the July 2006 Scientific American, available free in an extended Web version.
He also cites accelerating progress in turning specific genes off by blocking the messenger RNA; adding beneficial genes to patients’ bodies; activating and deactivating enzymes, to increase good cholesterol, for example; regrowing our own cells, tissues and even whole organs; capturing stem cells out of the bloodstream, to create new heart cells, for example; using nanoparticles that recognize and destroy cancer cells; and understanding and even reprogramming the brain.
Kurzweil is also optimistic about radical life extension. “I expect that within 15 years, we’ll be adding more than a year each year to remaining life expectancy. So my advice is: take care of yourself the old-fashioned way for a while longer and you may get to experience the remarkable century ahead.”
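The arithmetic behind "adding more than a year each year" is worth spelling out: if remaining life expectancy grows faster than you use it up, it never runs out. A minimal sketch, with a starting expectancy and gain rate that are invented numbers, not Kurzweil's:

```python
# If research adds more expectancy per year than the year it costs,
# remaining expectancy climbs instead of falling.
remaining = 30.0        # assumed remaining life expectancy, years
gain_per_year = 1.2     # assumed years of expectancy added per year lived

for year in range(20):
    remaining = remaining - 1 + gain_per_year  # net +0.2 per year

print(round(remaining, 1))  # higher than the starting 30.0
```

With any gain rate above 1.0 the same runaway happens; below 1.0, expectancy still drains away, just more slowly.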
By 2020, virtual reality will allow for a full-immersion sensual encounter involving all five senses, says Ray Kurzweil in “The New Human,” an interview in the July 2005 issue of Playboy.
“You’ll feel as though you’re really with that person…. The whole idea of what it means to have a sexual relationship will be different.”
“Computers used to be remote: now they’re in our pockets,” says Kurzweil. Next, they’ll make their way into our clothing, our body, and our brain. “You can’t point to a single organ for which we haven’t made enhancements or started work on them.” The latest FDA-approved neural implant even allows you to “upload software from outside the patient.”
Ray Kurzweil has been making predictions for a long time now. So far, he just keeps on being right. He’s got a good track record.
His models, which are basically exponential extrapolations of technologies, seem to be quite reliable when it comes to looking into the future. That’s why I choose to take him seriously.
Taking issue with the perception that computer models lack realism, a Sandia National Laboratories researcher named Fang told his audience that simulations of the nanoscale provide researchers more detailed results — not less — than experiments alone.
Fang derided the pejorative “garbage in, garbage out” description of computer modeling — the belief that inputs for computer simulations are so generic that outcomes fail to generate the unexpected details found only by actual experiment.
Fang not only denied this truism but reversed it. “There’s another, prettier world beyond what the SEM [scanning electron microscope] shows, and it’s called simulation,” he told his audience. “When you look through a microscope, you don’t see some things that modeling and simulation show.”
“We need to sit back and put our mindset in a different mode,” he told his audience. “We’re all too busy doing [laboratory] research [instead of considering] how we can leverage resources to push our science to the next level.”
I’ve said it before elsewhere on this blog and I’ll say it again: simulations are the future of science.
Soon, all computers worldwide will be linked up to form one giant virtual supercomputer. Computational power will be readily available to run very sophisticated simulations of multi-cellular systems or even organs.
Sidenote for all the animal lovers out there: this means laboratory test animals will be a thing of the past.
These simulations will be a boon to science. I suspect we may expect an enormous boost in health and longevity to come from science once it has shifted into the next gear.
I’ve also said that the virtual world is better than the physical world in every aspect. If you care to read about it, take a look at The Future Of Virtual Environments.
If Mr. Cerf and about two dozen other pundits Red Herring interviewed about the future of the Internet are right, in 10 years’ time the barriers between our bodies and the Internet will blur as will those between the real world and virtual reality.
Automakers, for instance, might conceivably post their parts catalogs in the virtual world of Second Life, a pixelated 3D online blend of MySpace, eBay, and a Renaissance fair crossed with a Star Trek convention. Second Life participants—who own the rights to whatever intellectual property they create online—will make money both by using the catalog to design their own cars in cyberspace and by selling their online designs back to the manufacturers, says Danish economist and tech entrepreneur Nikolaj Nyholm.
Today’s devices will disappear. Electronics will instead be embedded in our environment, woven into our clothing, and written directly to our retinas from eyeglasses and contact lenses, predicts inventor, entrepreneur, author, and futurist Ray Kurzweil. “Devices will no longer be spokes on the Internet—they will be the nodes themselves,” he says.
Everything from the family fridge to the office coffee pot—as well as heating, cooling, and security systems—will be managed through the Internet, possibly using souped-up mobile phones doubling as universal remote controls, says Google’s Mr. Cerf. By 2016, he predicts the online population of 1 billion will treble, and a huge portion will be mobile. And by then, the Internet will become so pervasive that connecting to it will no longer be a conscious act.
Bandwidth access of 100 megabits per second or more will become the norm. “It is probably a safe bet that everyone will be able to have a full-motion, high-definition real-time link to anyone,” says Bram Cohen, creator of the popular peer-to-peer program BitTorrent. Once that happens, “the concept of who is online and who is offline will melt away,” says Bradley Horowitz, Yahoo’s director of media and desktop search.
So just how big will Internet business be? “My whole thesis is that information technologies are growing exponentially. Things that we can measure like price performance, capacity, and bandwidth are doubling every year so that’s actually a factor of a thousand in 10 years,” says Mr. Kurzweil. “So if the Internet is already very influential—if there is already a trillion dollars of e-commerce, already a very democratizing technology, then multiplying its size and scope by a factor of a thousand will be a very significant change.”
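Kurzweil's "factor of a thousand" is just compound doubling: a quantity that doubles every year grows by 2^10 in ten years, which is close enough to a thousand.

```python
# Ten annual doublings yield Kurzweil's "factor of a thousand".
factor = 2 ** 10
print(factor)  # 1024, i.e. roughly a thousandfold in 10 years
```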
The article goes on to provide yet more keen insights into what the web will look like, how it will transform our lives once again, developments to look out for, and issues that may arise (such as Big Brother).
TCSDaily has an article on the Singularity and various people’s opinions on it.
From the article:
I’ve written before about the so-called “Singularity.” In a famous essay, Vernor Vinge described the concept this way:
When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work — the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct “what if’s” in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.
From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in “a million years” (if ever) will likely happen in the next century. (In , Greg Bear paints a picture of the major changes happening in a matter of hours.)
I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace.
Still don’t understand what the Singularity is? Read my Singularity FAQ.
As a strongly progressive person, I don’t care much for conservatives calling the Singularity a ‘techno religion’, so I won’t be copy-pasting any of that here.
What I will be copy-pasting here is this extremely funny yet oh-so-insightful quote:
In fact, rather than serving as a dismissal of the Singularity, it seems to me that the Singularity-as-religion argument cuts the other way. How do we know that people want the kinds of things that advanced technology is supposed to offer? Because they’ve been trying to get them through non-technological means for all of recorded history. And as history demonstrates, they’ve been willing to try awfully hard, and in a wide variety of ingenious ways: Jihadists are strapping on suicide bombs today, in the hope of attaining the kind of environment that virtual reality will deliver in 20 years.
Having trouble imagining how we might be having sex with 72 virgins in virtual reality, just 20 years from now?
Then read The Future Of Virtual Environments.
Researchers at Sheffield Hallam University in the UK have created a technology that allows a person to easily and quickly create a full 3D scan of his or her face.
This technology is truly amazing, since it constructs a 3D face in 40 ms from a single snapshot taken head-on of the face to be digitized.
Be sure to check out the videoclips at the source. They are well worth your time.
A few screenshots from one of the videos (click to enlarge):
We are spending more and more of our time in virtual environments (VEs). Every time you converse with someone over the phone or over the Internet, you are spending time in a VE. Every time you play a videogame that really draws you in, you are effectively living your adventure in a VE.
As you can read in The Future Of Virtual Environments, VEs will become more and more compelling in the near future. As a direct result, we will spend more and more of our time in them. Eventually, we will be living the bigger portion of our lives there.
Naturally, we’ll want digital representations of ourselves (which we will likely end up enhancing) in such a future. This face-scanning technology is just the technology we need in order to do that.
For those interested in videogames, it’s a good idea to have a look at these demo movie clips of the AGEIA PhysX Processor.
This nifty piece of hardware promises to deliver physics effects in videogames that rival real-life physics (from what I can see in the movie clips, that is).
If you only have the time to download just one, go with Cell Factor. It is by far the most impressive of the three.
If videogames start looking like this in about a year or so, can you imagine what virtual environments will look like once we get to The Future Of Virtual Environments?