Tag Archives: supercomputer

Building The Exascale Computer

What if we gave scientists machines that dwarf today’s most powerful supercomputers? What could they tell us about the nature of, say, a nuclear explosion? Indeed, what else could they discover about the world? This is the story of the quest for an exascale computer – and how it might change our lives.

What is exascale?

One exaflop is 1,000 times faster than a petaflop. The fastest computer in the world is currently the IBM-based Roadrunner, which is located in Los Alamos, New Mexico. Roadrunner runs at an astounding one petaflop, which equates to more than 1,000 trillion operations per second. The supercomputer has 129,600 processing cores and takes up more room than a small house, yet it’s still not quite fast enough to run some of the most intense global weather simulations, nuclear tests and brain modelling tasks that modern science demands. For example, the lab currently uses the processing power of Roadrunner to run complex visual cortex and cellular modelling experiments in almost real time. In the next six months, the computer will be used for nuclear simulation and stockpile tests to make sure that the US nuclear weapon reserves are safe. However, when exascale calculations become a reality in the future, the lab could step up to running tests on ocean and atmosphere interactions. These are not currently possible because the data streams involved are simply too large. The move to exascale is therefore critical, because researchers require increasingly fast results from their experiments.
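
For a sense of what that factor of 1,000 means in practice, here is a quick back-of-the-envelope calculation in Python. The workload size is an arbitrary assumption, chosen purely to illustrate the gap.

```python
PETAFLOP = 1e15  # floating-point operations per second
EXAFLOP = 1e18   # 1,000 times a petaflop

# Hypothetical workload of 10^21 operations (an assumed figure,
# picked only to show the difference between the two machines).
workload_ops = 1e21

print(f"Petaflop machine: {workload_ops / PETAFLOP / 3600:.0f} hours")
print(f"Exaflop machine:  {workload_ops / EXAFLOP / 3600:.2f} hours")
```

A job that would tie up a petaflop machine for well over a week finishes in under twenty minutes at exascale.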

Source

Building a Brain on a Silicon Chip

An international team of scientists in Europe has created a silicon chip designed to function like a human brain. With 200,000 neurons linked up by 50 million synaptic connections, the chip is able to mimic the brain’s ability to learn more closely than any other machine.

Although the chip has a fraction of the number of neurons or connections found in a brain, its design allows it to be scaled up, says Karlheinz Meier, a physicist at Heidelberg University, in Germany, who has coordinated the Fast Analog Computing with Emergent Transient States project, or FACETS.

The hope is that recreating the structure of the brain in computer form may help to further our understanding of how to develop massively parallel, powerful new computers, says Meier.

This is not the first time someone has tried to recreate the workings of the brain. One effort called the Blue Brain project, run by Henry Markram at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, has been using vast databases of biological data recorded by neurologists to create a hugely complex and realistic simulation of the brain on an IBM supercomputer.

FACETS has been tapping into the same databases. “But rather than simulating neurons,” says Karlheinz, “we are building them.” Using a standard eight-inch silicon wafer, the researchers recreate the neurons and synapses as circuits of transistors and capacitors, designed to produce the same sort of electrical activity as their biological counterparts.

A neuron circuit typically consists of about 100 components, while a synapse requires only about 20. However, because there are so many more of them, the synapses take up most of the space on the wafer, says Karlheinz.
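
The FACETS wafer is analog hardware rather than software, but the kind of dynamics its neuron circuits reproduce can be sketched in a few lines of Python. The model below is a generic leaky integrate-and-fire neuron with made-up parameters; it is meant only to illustrate the behaviour such circuits emulate, not the project’s actual design.

```python
# Minimal leaky integrate-and-fire neuron (parameters are illustrative).
DT, TAU = 0.1, 10.0                                # time step and membrane time constant, ms
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0    # membrane potentials, mV

def simulate(input_current, steps=1000):
    """Integrate a constant input current and return the spike times (ms)."""
    v, spikes = V_REST, []
    for step in range(steps):
        # Membrane potential leaks toward rest while integrating the input.
        v += DT * (-(v - V_REST) + input_current) / TAU
        if v >= V_THRESH:        # threshold crossing: emit a spike
            spikes.append(step * DT)
            v = V_RESET          # and reset
    return spikes

print(simulate(input_current=20.0)[:5])   # first few spike times
```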

source

Personal Supercomputer Is Coming

Within the next three to four years, most PC users will see their machines morph into personal supercomputers. This change will be enabled by the emergence of multicore CPUs and, perhaps more importantly, the arrival of massively parallel cores in the graphical processing units.

In fact, ATI (a division of Advanced Micro Devices) and Nvidia are already offering multiple programmable cores in their high-end discrete graphics processing platforms. These cores can be programmed to do many parallel processing tasks, resulting in dramatically better display features and functions for video, especially for gaming. But these platforms currently come at a hefty price and often require significant amounts of power, making them impractical in many laptop designs.

But preliminary steps are being taken to make these high-end multicore and programmable components available to virtually any machine. Vendors are moving to create integrated multicore platforms, with 64 or more specialty cores that can be used in conjunction with the various multicore CPUs now taking hold in the market. Using the most advanced semiconductor processes and geometries (32nm and soon 22nm and beyond), these new classes of devices will achieve incredible processing capability. They will also morph from the primarily graphics-oriented tasks they currently perform to include many more tasks associated with business and personal productivity.
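
As a toy illustration of what those extra cores buy you, here is a minimal data-parallel sketch using Python’s standard multiprocessing module. The core count and workload are placeholder assumptions, and real GPU workloads are written with vendor toolkits rather than anything like this; the point is simply that the same kernel runs on every chunk at once, so more cores means more chunks in flight.

```python
from multiprocessing import Pool
import math

def work(chunk):
    """A stand-in compute kernel: sum of square roots over a slice of the data."""
    return sum(math.sqrt(i) for i in chunk)

if __name__ == "__main__":
    N, CORES = 1_000_000, 8          # assumed core count, purely illustrative
    chunks = [range(i, N, CORES) for i in range(CORES)]
    with Pool(processes=CORES) as pool:
        total = sum(pool.map(work, chunks))   # each chunk runs on its own core
    print(f"total = {total:.2f}")
```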

source

Scientists Decode the Super Computer Inside Our Brains

Scientists have decoded the short-term supercomputer that sits inside your head, the processor that wraps up trajectories, wind speeds, rebounds and rough surfaces into a gut feeling that lets you catch a football.  This advance could lead to a new wave of prosthetics, as well as being another piece in the permanently interesting puzzle that is “The Brain”.

Researchers from McGill, MIT and Caltech focused on the posterior parietal cortex (PPC), the section of brain responsible for taking all the “what is going on” data from the senses and planning what your thousand muscles and bones are going to do about it.

Working with robot-arm equipped monkeys (god but science is awesome), they discovered that the PPC runs its own real-time simulation of the future. Of course, you instinctively knew that – when you try to catch a ball you don’t flail at where you see it, you run to where it’s going to be. More usefully, they uncovered the nature of two distinct signals from this gooey futurefinder: a “goal” signal which describes what the brain wants to happen, and a “trajectory” signal which lays out the path the body part must take to get there.

This pair of signals is incredibly useful data for any robotic limbs or other extras we might add to our limited human forms – whether they be replacements for carelessly lost parts, or entirely new structures. By working from the “goal” signal the mechanical parts can swiftly prepare to move in the desired manner, preparing any components needed and checking the path for hazards, before the “trajectory” signal gets to the fine details of movement.
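
To make that concrete, here is a rough sketch of how a prosthetic controller might consume a decoded “goal” signal: given the current limb position and the goal, it plans a smooth path for the low-level motors to follow. The decoded values and the minimum-jerk profile are illustrative assumptions, not the researchers’ actual decoding method.

```python
import numpy as np

def plan_trajectory(start, goal, steps=50):
    """Minimum-jerk interpolation from start to goal (illustrative only)."""
    s = np.linspace(0.0, 1.0, steps)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5   # smooth 0 -> 1 profile
    return start + np.outer(blend, goal - start)

# Hypothetical decoded signals: where the arm is and where the brain wants it to be.
current_pos = np.array([0.0, 0.0, 0.0])    # metres
decoded_goal = np.array([0.3, 0.1, 0.2])   # the "goal" signal
path = plan_trajectory(current_pos, decoded_goal)
print(path[0], path[-1])   # starts at current_pos, ends at decoded_goal
```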

source

New solar cell material achieves almost 100% efficiency, could solve world-wide energy problems

Researchers at Ohio State University have accidentally discovered a new solar cell material capable of absorbing all of the sun’s visible light energy. The material is composed of a hybrid of plastics, molybdenum and titanium. The team discovered it not only fluoresces (as most solar cells do), but also phosphoresces. Electrons in a phosphorescent state remain at a place where they can be “siphoned off” as electricity over 7 million times longer than those generated in a fluorescent state. This combination of materials also utilizes the entire visible spectrum of light energy, translating into a theoretical potential of almost 100% efficiency. Commercial products are still years away, but this foundational work may well pave the way for a truly renewable form of clean, global energy.

Traditional solar cell materials use a property called fluorescence to gather electricity. Energy from the sun strikes whatever material they are made of, momentarily “dislodging” electrons into an excited state known as a “singlet state.” These fluorescing electrons last only a dozen or so picoseconds (trillionths of a second), a dwell time that is fairly typical of the traditional solar cell materials in use today.

The new material, which was accidentally discovered using supercomputers to determine possible theoretical molecular configurations, causes not only fluorescing electrons in the singlet state to be created, but also phosphorescing electrons in what’s called a “triplet state.”

These triplet state electrons remain in their excited state of phosphorescence for scores of microseconds (up to about 200 microseconds, or 0.0002 seconds). With such a long-lasting state of free electron flow, their ability to be captured is theoretically significantly greater than with existing technologies.
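
Those two lifetimes are where the “over 7 million times longer” figure in the summary above comes from. A quick check, assuming a singlet lifetime of a couple of dozen picoseconds (our assumption, since only a rough range is quoted):

```python
singlet_lifetime = 25e-12   # a couple of dozen picoseconds (assumed value)
triplet_lifetime = 200e-6   # up to ~200 microseconds, per the article

print(f"{triplet_lifetime / singlet_lifetime:.1e}")  # ~8e6, i.e. millions of times longer
```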

And if the research team’s current work (which so far uses only a few molecules of the hybrid material suspended in a liquid solution) can be extended to practical, real-world scales, then products yielding nearly 100% solar efficiency may soon be achievable.

source

New solar tech is over the rainbow

Ohio State University chemists have created a new material that could revolutionise photovoltaic solar panels.

Today’s solar cell materials are sensitive to only a limited range of frequencies, so they can only capture a small fraction of the energy contained in sunlight.

The new hybrid material – an electrically conductive plastic combined with metals including molybdenum and titanium – is the first that is sensitive to all the colours in the rainbow, allowing it to absorb all the energy contained in visible light at once.

Not only is the hybrid material more sensitive than normal solar panels, it also generates much more charge (more free electrons) than the researchers were expecting.

“This long-lived excited state should allow us to better manipulate charge separation,” said Professor Malcolm Chisholm, chair of Ohio State’s Chemistry Department.

To design the as-yet-unnamed hybrid material, Chisholm explored different molecular configurations on a supercomputer before synthesizing molecules of the new material in a liquid solution.

However, he warns that it could be years before high-power hybrid solar panels find their way onto our roofs. Until then, we’re stuck with today’s traditional silicon panels – and hopefully the more efficient thin-film technologies coming soon.

source

‘Intelligent’ computers put to the test

Can machines think? That was the question posed by the great mathematician Alan Turing. Half a century later six computers are about to converse with human interrogators in an experiment that will attempt to prove that the answer is yes.

In the Turing test a machine seeks to fool judges into believing that it could be human. The test is performed by conducting a text-based conversation on any subject. If the computer’s responses are indistinguishable from those of a human, it has passed the Turing test and can be said to be “thinking”.
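
Stripped to its essentials, the judging protocol is simple enough to sketch in code. The respond functions below are trivial placeholders; the Reading experiment uses human volunteers and dedicated chatbot programs.

```python
import random

def turing_round(judge_ask, human_respond, machine_respond, turns=5):
    """One text-only session: the judge never sees which party is answering."""
    is_machine = random.choice([True, False])
    respond = machine_respond if is_machine else human_respond
    transcript = []
    for _ in range(turns):
        question = judge_ask(transcript)
        transcript.append((question, respond(question)))
    # The machine "passes" a round if the judge then guesses wrongly.
    return is_machine, transcript

# Example wiring with placeholder functions:
ask = lambda transcript: "What did you have for breakfast?"
human = lambda q: "Toast and coffee."
bot = lambda q: "I do not eat, but I enjoy a good byte."
print(turing_round(ask, human, bot, turns=1))
```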

No machine has yet passed the test devised by Turing, who helped to crack German military codes during the Second World War. But at 9am next Sunday, six computer programs – “artificial conversational entities” – will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognised “thinking” machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be “conscious” – and whether humans should have the ‘right’ to switch it off.

source

IBM’s eight-core Power7 chip to clock in at 4.0GHz

IBM looks set to join the seriously multi-core set with the Power7 chip. Internal documents seen by The Register show Power7 with eight cores per processor and also some very, very large IBM boxes based on the chip.

The IBM documents have the eight-core Power7 being arranged in dual-chip modules. So, that’s 16-cores per module. As IBM tells it, each core will show 32 gigaflops of performance, bringing each chip to 256 gigaflops. Just on the gigaflop basis, that makes Power7 twice as fast per core as today’s dual-core Power6 chips, although the actual clock rate on the Power7 chips should be well below the 5.0GHz Power6 speed demon.

In fact, according to our documents, IBM will ship Power7 at 4.0GHz in 2010 on a 45nm process. We’re also seeing four threads per core on the chip.

For some customers, IBM looks set to create 2U systems with four of the dual-chip modules, giving the server 64 cores of fun. These 2U systems will support up to 128GB of memory and hit 2 teraflops.

IBM has an architecture that will let supercomputing types combine these 2U boxes to form a massive unit with 1,024 cores, hitting 32 teraflops of performance with 2TB of memory.
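
The headline numbers all follow from the 32 gigaflops per core figure; here is a quick sanity check of the arithmetic.

```python
gflops_per_core = 32                                           # per the leaked documents
cores_per_chip = 8
chips_per_module = 2

chip_gflops = gflops_per_core * cores_per_chip                 # 256 gigaflops per chip
server_cores = 4 * chips_per_module * cores_per_chip           # 64 cores in a 2U box
server_tflops = server_cores * gflops_per_core / 1000          # ~2 teraflops
cluster_tflops = 1024 * gflops_per_core / 1000                 # ~32 teraflops for 1,024 cores

print(chip_gflops, server_cores, server_tflops, cluster_tflops)
```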

And, er, if you are a seriously demanding type, boy, does IBM have the system for you.

source

IBM is planning to build a supercomputer that runs at 10 petaflops in 2011. A petaflop is 10^15 calculations per second, so 10 petaflops would be 10^16.

The military recently built the very first supercomputer to break the 1 petaflop barrier. A tenfold increase within 3 years is no laughing matter.

A 10 petaflop supercomputer would also be powerful enough (according to Kurzweil’s calculations, that is) to simulate a human brain in real time.
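
The comparison is a one-liner, with the 10^16 operations-per-second figure being Kurzweil’s estimate rather than an established fact:

```python
planned_machine = 10 * 1e15   # 10 petaflops
brain_estimate = 1e16         # Kurzweil's ballpark for real-time brain simulation
print(planned_machine >= brain_estimate)  # True
```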

Matrix-style virtual worlds ‘a few years away’

Are supercomputers on the verge of creating Matrix-style simulated realities? Michael McGuigan at Brookhaven National Laboratory in Upton, New York, thinks so. He says that virtual worlds realistic enough to be mistaken for the real thing are just a few years away.

In 1950, Alan Turing, the father of modern computer science, proposed the ultimate test of artificial intelligence – a human judge engaging in a three-way conversation with a machine and another human should be unable to reliably distinguish man from machine.

A variant on this “Turing Test” is the “Graphics Turing Test”, the twist being that a human judge viewing and interacting with an artificially generated world should be unable to reliably distinguish it from reality.

“By interaction we mean you could control an object – rotate it, for example – and it would render in real-time,” McGuigan says.

The future of biomedicine: virtual humans

This is your brain on a chip. This is your liver on a slide. This is your body in a supercomputer. Any questions?

It’s a bit more complicated than that, but recently scientists have provided a sneak preview of the future of biomedicine with a range of projects seeking to assemble virtual humans — or parts of them — on computers and “labs on a chip.” Someday, the descendants of these sophisticated new programs and devices could serve as our stand-ins for clinical tests on drugs, cosmetics and toxic compounds.

“I would predict that this century is going to be dominated by our ability to handle biomedical problems in a computational domain,” said Peter Coveney, director of the Centre for Computational Science at University College London.