Category Archives: supercomputer

IBM demonstrates water-cooling for 3D processors

Three-dimensional processors took a baby step towards commercial reality today, thanks to IBM’s water-cooling research. Big Blue and the Fraunhofer Institute have successfully tested a multistack CPU prototype that’s cooled by pumping water directly through the separate layers of the processor. If you aren’t used to thinking of processors in terms of layers, you may need to check Jon Stokes’ “Dagwood Sandwich” analogy before continuing on.

3-D chip stacking uses a technology referred to as “through silicon via” (TSV) to build processors vertically, rather than just horizontally. By building upward as well as outward, CPU engineers can reduce wire delay, improve CPU efficiency, and significantly reduce total power consumption. We’ve previously covered both Intel’s and IBM’s efforts in this area; readers should consult those articles for a more comprehensive treatment of the subject.

Thermal dissipation, however, is the Achilles’ heel of any three-dimensional processor. The more layers in a processor, the more difficult it is to effectively remove heat emanating from the lower levels. CPU architects can compensate for this by placing the hotter parts of a core on upper layers and by avoiding designs that stack core hotspots vertically, but the complexity of the problem increases with every additional layer. Simply leaving more space between the individual layers is not a solution, as this would quickly recreate the wire delay problems three-dimensional processors are meant to alleviate.
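To see why stacked hotspots are the worst case, here is a toy Python sketch (purely illustrative, nothing like a real thermal simulation): treat each layer as a grid of power values and assume heat from a lower layer has to escape through the column of cells directly above it. Aligning two hotspots in the same column roughly doubles the worst-case load on that column, while staggering them spreads it out.

# Toy illustration (not a real thermal model): each layer is a grid of
# power values; a column's "load" is the sum of the power of every cell
# stacked in that column, since heat from the lower layers must pass
# through the layers above on its way out.

def worst_column_load(stack):
    """stack: list of layers, each a 2D list of power values of equal size."""
    rows, cols = len(stack[0]), len(stack[0][0])
    worst = 0.0
    for r in range(rows):
        for c in range(cols):
            column = sum(layer[r][c] for layer in stack)
            worst = max(worst, column)
    return worst

cool, hot = 1.0, 5.0

# Two layers whose hotspots sit in the same column...
aligned = [
    [[hot, cool], [cool, cool]],
    [[hot, cool], [cool, cool]],
]

# ...versus the same two hotspots staggered across different columns.
staggered = [
    [[hot, cool], [cool, cool]],
    [[cool, cool], [cool, hot]],
]

print("aligned hotspots:  ", worst_column_load(aligned))    # 10.0
print("staggered hotspots:", worst_column_load(staggered))  # 6.0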

Researchers Create Self-Healing Computer Systems for Spacecraft

We’ve all heard about the space missions that are DOA when NASA engineers lose touch with the spacecraft or lander. In other cases, some critical system fails and the mission is compromised.

Both are maddening scenarios because the spacecraft probably could be easily fixed if engineers could just get their hands on the hardware for a few minutes.

Ali Akoglu and his students at The University of Arizona are working on hybrid hardware/software systems that one day might use machine intelligence to allow the spacecraft to heal themselves.

Akoglu, an assistant professor in electrical and computer engineering, is using Field Programmable Gate Arrays, or FPGAs, to build these self-healing systems. FPGAs combine software and hardware to produce flexible systems that can be reconfigured at the chip level.

Because some of the hardware functions are carried out at the chip level, the software can be set up to mimic hardware. In this way, the FPGA “firmware” can be reconfigured to emulate different kinds of hardware.
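As a rough sketch of what such a supervisor might look like in software (all names here are hypothetical; real FPGA reconfiguration goes through vendor tools and partial bitstreams, not a Python loop), a self-healing controller watches for a failed function and reprograms a spare region of the fabric to take over its job:

# Hypothetical self-healing controller: when a hardware block stops
# responding, a spare FPGA region is reconfigured to emulate it.
# The bitstream names and FakeFpgaRegion class are invented for illustration.

BITSTREAMS = {
    "imu_filter": "imu_filter.bit",
    "radio_codec": "radio_codec.bit",
}

class FakeFpgaRegion:
    """Stand-in for a partially reconfigurable region of the fabric."""
    def __init__(self, name):
        self.name = name
        self.loaded = None

    def load_bitstream(self, path):
        print(f"[{self.name}] reconfiguring with {path}")
        self.loaded = path

def heal(failed_function, spare_region):
    """Move a failed function onto a spare region by reloading its firmware."""
    spare_region.load_bitstream(BITSTREAMS[failed_function])
    return spare_region

if __name__ == "__main__":
    spare = FakeFpgaRegion("spare_region_0")
    # Suppose a health monitor reports that the IMU filter block stopped responding.
    heal("imu_filter", spare)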

Terahertz computing may not be dead after all

The Gigahertz race was probably one of the most ill-fated ideas in the microprocessor industry in the late 1990s and early 2000s. Intel was almost brought to its knees by the enormous power consumption and heat dissipation of 3+ GHz speeds in circuits of the time, eventually hitting a wall at 4 GHz. The Gigahertz race has since become a multi-core race, but scientists have ideas for ramping up clock speeds at a faster pace again: Terahertz computers may be within reach, if data is carried over optical instead of electrical circuits.

Researchers at the University of Utah have not given up on the idea of dazzling clock speeds in processors, reminding us of landmark comments made by Intel’s Pat Gelsinger back in 2001, when the executive said that 30 to 40 GHz might be reached by 2010, requiring nuclear power plant-like energy systems within PCs. Ajay Nahata, a University of Utah professor of electrical and computer engineering, believes that clock speeds, which are stalling in the range of 3 to 4 GHz today, could grow at a faster pace again within the next few years if system designs take advantage of optical technologies. Within ten years, Nahata said, superfast far-infrared computers could become commercially available.

Matrix-style virtual worlds ‘a few years away’

Are supercomputers on the verge of creating Matrix-style simulated realities? Michael McGuigan at Brookhaven National Laboratory in Upton, New York, thinks so. He says that virtual worlds realistic enough to be mistaken for the real thing are just a few years away.

In 1950, Alan Turing, the father of modern computer science, proposed the ultimate test of artificial intelligence – a human judge engaging in a three-way conversation with a machine and another human should be unable to reliably distinguish man from machine.

A variant on this “Turing Test” is the “Graphics Turing Test”, the twist being that a human judge viewing and interacting with an artificially generated world should be unable to reliably distinguish it from reality.

“By interaction we mean you could control an object – rotate it, for example – and it would render in real-time,” McGuigan says.
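To make “render in real time” concrete, here is a small Python sketch (a toy, not McGuigan’s benchmark): rotate an object’s vertices in response to the user and check that each update fits inside a 60-frames-per-second budget.

import math
import time

FRAME_BUDGET = 1.0 / 60.0   # roughly 16.7 ms per frame for 60 fps interaction

def rotate_y(vertices, angle):
    """Rotate a list of (x, y, z) points around the vertical axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in vertices]

# A toy "object": the eight corners of a cube.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

for frame in range(5):
    start = time.perf_counter()
    cube = rotate_y(cube, 0.05)     # respond to the user's rotation input
    elapsed = time.perf_counter() - start
    status = "within" if elapsed < FRAME_BUDGET else "over"
    print(f"frame {frame}: {elapsed * 1000:.3f} ms ({status} the 60 fps budget)")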

Why scientists love games consoles

Reprogram a PlayStation and it will perform feats that would be unthinkable on an ordinary PC. The kinds of calculations required to produce the realistic graphics now seen in sophisticated video games are similar to those used by chemists and physicists as they simulate the interactions between particles ranging from the molecular to the astronomical.

Such simulations are usually carried out on a supercomputer, but time on these machines is expensive and in short supply. By comparison, games consoles are cheap and easily available, says New Scientist.
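Those particle simulations typically boil down to evaluating a force or potential for every pair of bodies, the same kind of data-parallel arithmetic that graphics hardware is built for. A minimal Python sketch of such a kernel (gravity between point masses, purely illustrative) looks like this:

import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def pairwise_forces(positions, masses):
    """Naive O(n^2) force summation, the core loop of a toy N-body simulation."""
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            dist = math.sqrt(sum(d * d for d in dx)) + 1e-12   # soften to avoid division by zero
            f = G * masses[i] * masses[j] / dist ** 2
            for k in range(3):
                forces[i][k] += f * dx[k] / dist   # pull body i toward body j
                forces[j][k] -= f * dx[k] / dist   # and body j toward body i
    return forces

# Three toy bodies; real runs spread this loop across many cores, SPEs or GPUs.
positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
masses = [5.0e10, 3.0e10, 1.0e10]
print(pairwise_forces(positions, masses))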

“There is no doubt that the entertainment industry is helping to drive the direction of high performance computational science – exploiting the power available to the masses will lead to many research breakthroughs in the future,” comments Prof Peter Coveney of University College London, who uses supercomputing in chemistry.

Prof Gaurav Khanna at the University of Massachusetts has used an array of 16 PS3s to calculate what will happen when two black holes merge.

According to Prof Khanna, the PS3 has unique features that make it suitable for scientific computations, namely the Cell processor, dubbed a “supercomputer-on-a-chip.” It also runs Linux, “so it does not limit what you can do.”

“A single high-precision simulation can sometimes cost more than 5,000 hours on the TeraGrid supercomputers. For the same cost, you can build your own supercomputer using PS3s. It works just as well, has no long wait times and can be used over and over again, indefinitely,” Prof Khanna says.

Intel Produces 80 Core Chips

We’ve seen dual core chips and quad core chips.

You’d think 8-core chips would be next, right?

Think again. The title of this post does not contain a typo. 😉

CPUs with 80 cores are coming.

Intel predicts these 80-core chips will be commercially available within five years.

This degree of computing performance is currently available only to scientists who have access to supercomputers.

These 80-core chips open the door to photorealistic gaming and deeper artificial intelligence.

Can IBM Connect Cores In a Chip With Light?

IBM has come up with a technology that could one day let different cores on a processor exchange signals with pulses of light, rather than electrons, a change that could lead to faster and far more energy-efficient chips. The device, known as a silicon Mach-Zehnder electro-optic modulator, converts electrical signals into pulses of light. The trick is that IBM’s modulator is 100 or more times smaller than comparable modulators produced by other labs. Eventually, IBM hopes the modulator could be integrated into chips.

Here’s how it works. Electrical pulses hit the modulator, which is also being hit with a constant beam of light from a laser. The modulator emits light pulses that correspond to the electrical pulses. In a sense, the modulator is substituting photons for electrons.

Since the beginning of the decade, several companies, including Intel, Primarion, Luxtera and IBM, have been coming up with components that, ideally, will let chip designers replace wires in computers, and ultimately in chips, with optical fiber. Wires radiate heat, a big problem, and electrical signals don’t travel as fast as light pulses. (The research in this area is known as silicon photonics and optoelectronics.)
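In essence, the modulator multiplies a steady laser carrier by the electrical data stream, so that voltage pulses come out the other side as light pulses. A toy Python sketch of that on-off keying idea (illustrative only, and nothing like the actual device physics):

# Toy on-off keying: a constant laser "carrier" is gated by electrical pulses,
# so the optical output mirrors the electrical data. Purely illustrative.

LASER_POWER = 1.0                 # constant optical carrier from the laser

def modulate(electrical_bits):
    """Return the optical output for each electrical bit (1 = pulse, 0 = dark)."""
    return [LASER_POWER * bit for bit in electrical_bits]

data = [1, 0, 1, 1, 0, 0, 1, 0]   # incoming electrical pulses
light = modulate(data)
print("electrical:", data)
print("optical:   ", light)       # the light pulses line up with the electrical ones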

Researchers Simulate Photosynthesis, Design Better Leaf

University of Illinois researchers have built a better plant, one that produces more leaves and fruit without needing extra fertilizer. The researchers accomplished the feat using a computer model that mimics the process of evolution. Theirs is the first model to simulate every step of the photosynthetic process.

Photosynthesis converts light energy into chemical energy in plants, algae, phytoplankton and some species of bacteria and archaea. Photosynthesis in plants involves an elaborate array of chemical reactions requiring dozens of protein enzymes and other chemical components. Most photosynthesis occurs in a plant’s leaves.

It wasn’t feasible to tackle this question with experiments on actual plants, Long said. With more than 100 proteins involved in photosynthesis, testing one protein at a time would require an enormous investment of time and money.

“But now that we have the photosynthetic process ‘in silico,’ we can test all possible permutations on the supercomputer,” he said.

Using “evolutionary algorithms,” which mimic evolution by selecting for desirable traits, the model hunted for enzymes that – if increased – would enhance plant productivity. If higher concentrations of an enzyme relative to others improved photosynthetic efficiency, the model used the results of that experiment as a parent for the next generation of tests.

This process identified several proteins that could, if present in higher concentrations relative to others, greatly enhance the productivity of the plant. The new findings are consistent with results from other researchers, who found that increases in one of these proteins in transgenic plants increased productivity.

“By rearranging the investment of nitrogen, we could almost double efficiency,” Long said.
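The search loop behind those results is simple to sketch. Here is a minimal Python version (with a made-up fitness function standing in for the team’s full photosynthesis model): mutate the enzyme-concentration vector under a fixed nitrogen budget, keep the variant that scores best, and repeat.

import random

N_ENZYMES = 8
NITROGEN_BUDGET = 1.0   # total protein investment is fixed; only the split between enzymes changes

def normalize(conc):
    """Rescale concentrations so they always spend exactly the nitrogen budget."""
    total = sum(conc)
    return [c * NITROGEN_BUDGET / total for c in conc]

# Stand-in for the photosynthesis model: a fixed "ideal" allocation. The real
# model instead scores each allocation by simulating every photosynthetic reaction.
_rng = random.Random(42)
TARGET = normalize([_rng.uniform(0.5, 1.5) for _ in range(N_ENZYMES)])

def fitness(conc):
    """Higher is better; closeness to the made-up ideal stands in for CO2 uptake."""
    return -sum((c - t) ** 2 for c, t in zip(conc, TARGET))

def mutate(conc, step=0.05):
    """Jitter each enzyme level, then renormalize back onto the nitrogen budget."""
    return normalize([max(1e-6, c + random.gauss(0, step)) for c in conc])

# Start from an even split of nitrogen across the enzymes, then evolve.
parent = normalize([1.0] * N_ENZYMES)
for generation in range(500):
    child = mutate(parent)
    if fitness(child) > fitness(parent):   # keep the better allocation as the next parent
        parent = child

print("evolved enzyme allocation:", [round(c, 3) for c in parent])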

64 Core Processors… Coming Up!

TILE64 PROCESSOR FAMILY

The TILE64™ family of multicore processors delivers immense compute performance to drive the latest generation of embedded applications. This revolutionary processor features 64 identical processor cores (tiles) interconnected with Tilera’s iMesh™ on-chip network. Each tile is a complete full-featured processor, including integrated L1 & L2 cache and a non-blocking switch that connects the tile into the mesh. This means that each tile can independently run a full operating system, or multiple tiles taken together can run a multi-processing operating system like SMP Linux.

The TILE64™ processor family slashes board real estate and system cost by integrating a complete set of memory and I/O controllers, thus eliminating the need for an external North Bridge or South Bridge. It delivers scalable performance, power efficiency and low processing latency in an extremely compact footprint.
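On a mesh like this, a message from one tile to another is typically carried by simple dimension-ordered (XY) routing: travel along the row to the destination column, then along the column to the destination tile. A small Python sketch of that idea (illustrative only; it is not based on Tilera’s published iMesh internals):

GRID = 8   # TILE64: an 8 x 8 grid of tiles, each addressed by (x, y)

def xy_route(src, dst):
    """Dimension-ordered routing: move along X to the target column, then along Y."""
    for x, y in (src, dst):
        assert 0 <= x < GRID and 0 <= y < GRID, "tile coordinates must be on the mesh"
    x, y = src
    path = [(x, y)]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

route = xy_route((0, 0), (5, 3))
print("hops:", len(route) - 1)   # 8 hops: 5 along X plus 3 along Y
print("path:", route)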