Within the next three to four years, most PC users will see their machines morph into personal supercomputers. This change will be enabled by the emergence of multicore CPUs and, perhaps more importantly, the arrival of massively parallel cores in graphics processing units.
In fact, ATI (a division of Advanced Micro Devices) and Nvidia are already offering multiple programmable cores in their high-end discrete graphics processing platforms. These cores can be programmed to do many parallel processing tasks, resulting in dramatically better display features and functions for video, especially for gaming. But these platforms currently come at a hefty price and often require significant amounts of power, making them impractical in many laptop designs.
But preliminary steps are being taken to make these high-end multicore and programmable components available to virtually any machine. Vendors are moving to create integrated multicore platforms, with 64 or more specialty cores that can be used in conjunction with the various multicore CPUs now taking hold in the market. Using the most advanced semiconductor processes and geometries (32nm and soon 22nm and beyond), these new classes of devices will achieve incredible processing capability. They will also morph from the primarily graphics-oriented tasks they currently perform to include many more tasks associated with business and personal productivity.
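To get a feel for why dozens of specialty cores help, here's a minimal sketch of the data-parallel pattern these cores are built for: the same tiny operation applied independently to every element of a frame. The `brighten` operation and worker count are made up for illustration; with 64 specialty cores the same pattern just uses more workers.

```python
from multiprocessing import Pool

def brighten(pixel):
    """A trivially parallel per-pixel operation, the kind of work
    GPU-style specialty cores excel at. Values are clamped to 0-255."""
    return min(pixel + 40, 255)

def process_frame(pixels, workers=4):
    # Each worker handles a slice of the frame independently --
    # no element depends on any other, so the work scales with cores.
    with Pool(workers) as pool:
        return pool.map(brighten, pixels)

if __name__ == "__main__":
    frame = [0, 100, 200, 250]
    print(process_frame(frame))  # [40, 140, 240, 255]
```

The key property is that there is no communication between elements, which is exactly what lets graphics-style workloads spread across many simple cores.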
It’s tough to predict the future, especially with cutbacks to R&D budgets in the face of a global economic slowdown. Still, it’s always nice to see a forward-looking corporate slide related to mobile handsets from the taller, blonder half of that Sony Ericsson partnership. LTE and fast CPUs are certainly no surprise, nor is that 1,024 x 768 XGA screen resolution that Japan’s superphones are already bumping up against. The most compelling vision is that of the embedded camera sensors: 12-20 megapixels capable of recording Full HD video by 2012, adding more fuel to fiery speculation that handsets are about to find themselves embroiled in a megapixel war.
Three-dimensional processors took a baby step towards commercial reality today, thanks to IBM’s water-cooling research. Big Blue and the Fraunhofer Institute have successfully tested a multistack CPU prototype that’s cooled by pumping water directly through the separate layers of the processor. If you aren’t used to thinking of processors in terms of layers, you may need to check Jon Stokes’ “Dagwood Sandwich” analogy before continuing on.
3-D chip stacking uses a technology referred to as “through silicon via” (TSV) to build processors vertically, rather than just horizontally. By using both dimensions, CPU engineers can reduce wire delay, improve CPU efficiency, and significantly reduce total power consumption. We’ve previously covered both Intel and IBM’s efforts in this area; readers should consult those articles for a more comprehensive treatment of the subject.
Thermal dissipation, however, is the Achilles’ heel of any three-dimensional processor. The more layers in a processor, the more difficult it is to effectively remove heat emanating from the lower levels. CPU architects can compensate for this by placing the hotter parts of a core on upper layers and by avoiding designs that stack core hotspots vertically, but the complexity of the problem increases with every additional layer. Simply leaving more space between the individual layers is not a solution, as this would quickly recreate the wire delay problems three-dimensional processors are meant to alleviate.
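A toy model makes the problem concrete. Assuming the heatsink sits on top of the stack, heat generated in a lower layer has to cross every interface above it on the way out, so the bottom of the stack gets hotter superlinearly as layers are added. The wattage and thermal-resistance numbers below are invented for illustration, not taken from any real process:

```python
def layer_temperatures(n_layers, power=10.0, r_interface=0.5, ambient=25.0):
    """Toy 1-D thermal model of a chip stack.

    Layer 0 sits nearest the heatsink; each interface has thermal
    resistance r_interface (K/W), and every layer dissipates `power`
    watts. Heat from layer i must cross all interfaces above it.
    Returns the steady-state temperature of each layer, top to bottom.
    """
    temps = []
    t = ambient
    for i in range(n_layers):
        heat_through = (n_layers - i) * power  # watts crossing interface i
        t += heat_through * r_interface        # temperature rise across it
        temps.append(t)
    return temps

# One layer runs at 30 C; stack four and the bottom layer hits 75 C,
# because the total drop grows roughly with the square of the layer count.
```

Pumping water between the layers, as IBM and Fraunhofer did, effectively gives every layer its own heatsink instead of making the lower layers queue behind the upper ones.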
Follow the above link to see part 2. Higher-resolution and WMV versions are also available at the source.
As you can see, the visuals are stunning.
Of course, I was expecting nothing less, given some earlier screenshots that were released quite some time ago:
You can find more “Crysis vs Real Life” pictures here and here (both links Dutch, but that’s okay because it’s about the pictures anyway).
Personally, I prefer Crysis environments over real life environments.
But then again, I’m a techno geek. 😉
Also very realistic looking (in motion, not visually) is the upcoming game Little Big Planet.
I found another realistic render which is not related to any game, but it’s so impressive I just had to include it.
I’m not sure which one is more impressive… this one, or the guy from Crysis above.
On with the rest of this mixed bag of kick-ass techno links!
It’s got everything… from selectively wiping out memories in mice to regenerating hair. From robots to the upcoming revolution of renewable energy. From artificial intelligence to really fast internet.
First off, something that struck me as really special.
I was just surfing around and stumbled upon a video that demonstrated blind people receiving electrical signals to their tongues, which allowed them to see.
Cameras are used as replacements for their eyes and electrical signals are sent to their tongues. Eventually, or so the video claims, the blind person will start processing this input with their visual cortex.
This means they would actually see, just like you and me.
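The signal-processing side of this is conceptually simple. As a rough sketch (the grid size and encoding here are my assumptions, not BrainPort's actual specs), a camera frame is average-pooled down to a coarse grid of stimulation intensities, one per tongue electrode:

```python
def frame_to_electrodes(frame, grid=4):
    """Average-pool a grayscale frame (list of rows, pixel values 0-255)
    down to a grid x grid array of stimulation intensities (0.0-1.0).
    Assumes the frame dimensions divide evenly by the grid size.
    Illustrative only -- not BrainPort's real electrode layout.
    """
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid, w // grid
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) / len(block) / 255.0)
        out.append(row)
    return out
```

The remarkable part isn't this downsampling, of course; it's that the brain learns to route the resulting tingle pattern to the visual cortex.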
I looked up an article to go with the video. The article I found is from 2003 so it’s already pretty old stuff. The technology is called BrainPort.
[update]I also found a newer article. The video shows the technology in its current state, not the 2003-state.[/update]
BrainGate technology pulled off something similar. BrainGate is responsible for allowing Matthew Nagle to move a mouse cursor with his thoughts.
Also in this mixed bag, two easily digestible, semi-scientific writings:
You don’t have to destroy an embryo to create stem cells for medical research. An American biosciences company has succeeded in deriving the cells from embryos without killing them, raising hopes that President Bush will reconsider his veto on federal funding for such work….
Lanza hopes that because the method does not involve destroying embryos, it will lead to the lifting of the veto on federal funding for stem cell research. “We need to jump-start the field – it’s been crippled by a lack of funding,” he says. “This will hopefully solve the political impasse and bring the president on board, as no embryos will be harmed with this method.”
Once we understand how the mind operates, we will be able to program detailed descriptions of these principles into inexpensive computers, which, by the late 2020s, will be thousands of times as powerful as the human brain—another consequence of the law of accelerating returns. So we will have both the hardware and software to achieve human-level intelligence in a machine by 2029. We will also by then be able to construct fully humanlike androids at exquisite levels of detail and send blood-cell-size robots into our bodies and brains to keep us healthy from inside and to augment our intellect. By the time we succeed in building such machines, we will have become part machine ourselves. We will, in other words, finally transcend what we have so long thought of as the ultimate limitations: our bodies and minds.
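Kurzweil's arithmetic is easy to reproduce. Assuming, as he roughly does, that performance per dollar doubles every 18 months (the doubling time is the assumption here), the factor between now and 2029 comes out in the tens of thousands:

```python
def growth_factor(years, doubling_time=1.5):
    """How much compute improves over `years` if performance per dollar
    doubles every `doubling_time` years -- an assumed rate, in the
    spirit of Kurzweil's law of accelerating returns."""
    return 2 ** (years / doubling_time)

# From 2006 to 2029 at an 18-month doubling: roughly 40,000x,
# comfortably in the "thousands of times as powerful" range he claims.
factor = growth_factor(2029 - 2006)
```

Whether the doubling actually holds that long is the entire debate, but the claim is at least internally consistent.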
Nanosolar is a company based in Palo Alto, California, which uses an innovative technique to produce a kind of “solar film”. To make the film, Nanosolar prints CIGS (copper-indium-gallium-selenium) onto a thin polymer using machines that look like printing presses. There is no costly silicon involved in the process, and, ultimately, a solar cell from Nanosolar will cost about one-fifth to one-tenth the cost of a standard silicon solar panel. Nanosolar is only a few years old, but it has laid plans to take on multinational corporations, such as BP and Sharp, in the solar industry.
First it was the typewriter, then the teleprinter. Now a US news service has found a way to replace human beings in the newsroom and is instead using computers to write some of its stories. Thomson Financial, the business information group, has been using computers to generate some stories since March and is so pleased with the results that it plans to expand the practice.
The computers work so fast that an earnings story can be released within 0.3 seconds of the company making results public.
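A sketch of how such template-driven copy works (this is a toy of my own, not Thomson's actual system): structured earnings data drops into a canned sentence, which is why the story can ship fractions of a second after the numbers do.

```python
def earnings_story(company, quarter, eps, eps_prior):
    """Fill a canned wire-story template from structured earnings data.
    All field names are illustrative."""
    direction = "rose" if eps > eps_prior else "fell"
    pct = abs(eps - eps_prior) / eps_prior * 100
    return (f"{company} said {quarter} earnings per share {direction} "
            f"to ${eps:.2f} from ${eps_prior:.2f} a year earlier, "
            f"a change of {pct:.0f}%.")

# earnings_story("Acme Corp", "third-quarter", 1.10, 1.00)
# -> "Acme Corp said third-quarter earnings per share rose to $1.10
#     from $1.00 a year earlier, a change of 10%."
```

No natural-language understanding required: the hard part is the structured data feed, and financial results are about the most structured news there is.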
Ditto’s chip is like the microelectronic version of a stem cell: It’s a device that can assume all sorts of different functions. But a chaotic chip goes one step further: It can morph over and over again. For computer design, this has huge implications. In a traditional chip, the basic elements, called logic gates, are hardwired to perform a single, specific task. In a chaotic chip, each logic gate can be converted on the fly to perform any function. What this means is that computers will no longer need separate, costly chips for the CPU, memory, video RAM, graphics accelerators, arithmetic processing units, and so on. Instead, one chip will convert itself to whatever functions the software needs at a given moment.
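The closest everyday analogy is an FPGA lookup table. Here's a rough Python sketch of the idea (the `MorphingGate` class and its function tables are my own illustration, not Ditto's actual design): one gate, re-wired on the fly.

```python
class MorphingGate:
    """A 2-input gate whose behavior is a 4-entry truth table that can
    be swapped at any time -- the same trick an FPGA lookup table uses,
    offered as a rough analogy for a gate that 'morphs' its function."""

    TABLES = {
        "AND":  (0, 0, 0, 1),
        "OR":   (0, 1, 1, 1),
        "XOR":  (0, 1, 1, 0),
        "NAND": (1, 1, 1, 0),
    }

    def __init__(self, function="AND"):
        self.morph(function)

    def morph(self, function):
        # Reconfigure the gate: same 'hardware', new truth table.
        self.table = self.TABLES[function]

    def __call__(self, a, b):
        return self.table[a * 2 + b]

g = MorphingGate("AND")
g(1, 1)          # 1
g.morph("XOR")   # same gate, new function
g(1, 1)          # now 0
```

An FPGA reconfigures in milliseconds at best; the promise of a chaotic chip, as the article describes it, is that the morphing is fast enough to happen mid-computation.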
I entered a conference room in Manhattan and a woman on the TV tossed a handful of rose petals out of the screen, where they floated in the air before my eyes. At least, that’s what I saw. In truth, the image resided on a perfectly flat, 42-inch LCD screen. But the 3-D illusion was fully believable, and I didn’t have to wear a dorky set of polarizing glasses.
A new line of 3-D televisions by Philips uses the familiar trick of sending slightly different images to the left and right eyes — mimicking our stereoscopic view of the real world. But where old-fashioned 3-D movies rely on the special glasses to block images meant for the other eye, Philips’ WOWvx technology places tiny lenses over each of the millions of red, green and blue subpixels that make up an LCD or plasma screen. The lenses cause each subpixel to project light at one of nine angles fanning out in front of the display.
Chicago (IL) and Westlake Village (CA) – Five years ago, Intel envisioned processors running at 20 GHz by the end of this decade. Today we know that the future will look different: CPUs will sacrifice clock speed for core count. Intel’s first “many-core” CPU will run at only two thirds of the clock speed of today’s fastest Xeon CPU – but achieve 15x the performance, thanks to 32 cores.
“Dual-core” is a term Intel never really warmed up to. In fact, two cores per processor is just the first step on a ladder of increasing core counts that, as we believe today, will lead the microprocessor industry into another period of growth. Instead of promoting “dual-core”, Intel typically talks about “multi-core” – a term the company internally refers to as project “Kevet” – and explains to press and analysts that “many-core” processors – chips that could potentially hold “dozens of cores” – will be available sometime in the future.
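Intel's numbers can be sanity-checked with Amdahl's law plus a clock-speed penalty. The parallel fraction below is my assumption, chosen to match the claim; the point is that getting 15x out of 32 slower cores requires a workload that is almost entirely parallelizable:

```python
def speedup(cores, clock_ratio, parallel_fraction):
    """Amdahl's law with a clock-speed penalty: every instruction runs
    at `clock_ratio` of the old clock, and only `parallel_fraction` of
    the work spreads across `cores`."""
    serial = 1.0 - parallel_fraction
    return clock_ratio / (serial + parallel_fraction / cores)

# 32 cores at 2/3 clock with a ~98.6% parallel workload lands near
# Intel's claimed 15x; drop the parallel fraction to 90% and the
# same chip manages only about 4.8x.
speedup(32, 2/3, 0.986)  # ~14.9
```

The ideal case (100% parallel) would be 32 × 2/3 ≈ 21x, so the 15x claim implicitly concedes some serial overhead.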
Achieving a long-sought goal of the $48 billion memory chip industry, Freescale Semiconductor Inc. (FSL) announced the commercial availability of a chip that combines traditional memory’s endurance with a hard drive’s ability to keep data while powered down.
The chips, called magnetoresistive random-access memory or MRAM, maintain information by relying on magnetic properties rather than an electrical charge. Unlike flash memory, which also can keep data without power, MRAM is fast to read and write bits, and doesn’t degrade over time.
Freescale, which was spun off of Motorola Inc. (MOT) in July 2004, said Monday it has been producing the 4-megabit MRAM chips at an Arizona factory for two months to build inventory. A number of chip makers have been pursuing the technology for a decade or more, including IBM Corp.
Sometimes referred to as “universal” memory, MRAM could displace a number of chips found in every electronic device, from PCs, cell phones, music players and cameras to the computing components of kitchen appliances, cars and airplanes.
“This is the most significant memory introduction in this decade,” said Will Strauss, an analyst with research firm Forward Concepts. “This is radically new technology. People have been dabbling in this for years, but nobody has been able to make it in volume.”
MRAM is totally cool. It combines the advantages of all our current memory types (hard disks, DDR RAM, flash RAM, etc.) with none of the disadvantages.
MRAM is fast and non-volatile. The latter allows for instant-on PCs, because loading the OS on every boot won’t be necessary anymore.
The only reason we have different types of memory nowadays is that each type has its own advantages. Hard disks allow for permanent storage. DDR RAM is fast and therefore adequate for processing data. Cache memory is extremely fast but also very expensive, which is why conventional computers have so little of it.
In the years to come, all of these types of memory will be replaced by MRAM. And that’s how MRAM earned the name ‘universal memory’.
Computers are not only getting a lot faster; they’re also getting a lot smarter:
“Eye-trackers will one day be so reliable and so simple that they will become yet another input device on your computer, like a much more sophisticated mouse,” said Professor Guang-Zhong Yang of the Department of Computing at Imperial College.
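What makes gaze tricky as an input device is jitter: the eye's raw fixation point wanders constantly, so feeding it straight to the cursor would be unusable. A generic fix (my sketch, not tied to any particular tracker) is to exponentially smooth the samples before moving the cursor:

```python
class GazeCursor:
    """Exponential smoothing of noisy gaze samples into a stable
    cursor position. The alpha value is an illustrative default."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # lower = smoother but laggier cursor
        self.pos = None

    def update(self, x, y):
        """Feed one raw gaze sample; returns the smoothed position."""
        if self.pos is None:
            self.pos = (x, y)
        else:
            px, py = self.pos
            self.pos = (px + self.alpha * (x - px),
                        py + self.alpha * (y - py))
        return self.pos
```

Real gaze interfaces layer more on top (fixation detection, dwell-to-click), but the filtering step is why "a much more sophisticated mouse" is a fair description.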
An international team of university and industry scientists has discovered a way to improve nanoparticles used to make advanced circuits. These findings could help improve the reliable large-scale manufacture of high quality chips, experts told UPI’s Nano World.
When it comes to making advanced circuitry, the silicon wafers they are based on must be as free of defects and flat as possible. Nanoparticles made of ceria, or cerium dioxide, are some of the abrasives used to smooth out these wafers.
As the size of circuitry features shrinks to pack more computing power into microchips, the industry has to keep defects down to ensure mass manufacture of chips remains viable. This remains especially true as inventors develop electronic structures only nanometers, or billionths of a meter, in size — the scale of molecules. The problem is that ceria nanoparticles synthesized by existing techniques are irregularly faceted crystals, the sharp edges of which are prone to scratching the silicon wafers, explained researcher Zhong Lin Wang, a materials scientist in nanotechnology at the Georgia Institute of Technology in Atlanta.
For superior performance, nanoparticles that are perfect spheres are ideal because they would act like ball bearings, polishing the silicon surface without scratching it. After three years of research, Wang and his colleagues in the United States, Britain and China have now developed a way of creating spherical ceria nanoparticles at large scales.
We are headed towards full-blown nanocomputation. In other words: the CPUs of the future will be built entirely with nanotechnology.
This allows for extremely fast CPUs that are easy to cool and hardly need any power to run.