The 96-rack IBM Blue Gene Q Sequoia supercomputer at Lawrence Livermore National Laboratory © Alamy

With its flying buttresses and domed roof, the Terascale Simulation Facility at the Lawrence Livermore National Laboratory was built as a cathedral to the supercomputer.

The design may suggest some religious reverence for the all-powerful machines. More practically, though, it means they are unencumbered by supporting columns, saving space and creating a more direct route for the arteries of cables to feed and cool the world’s fastest supercomputers. But less than 10 years after being built, the building is already in danger of becoming outdated.


“When we designed this building, we thought it would be good for 50 years but already it’s only adequate and not robust,” says Mike McCoy, who leads the supercomputing effort at Lawrence Livermore as director of the Advanced Simulation and Computing programme.

Even its name risks becoming an anachronism. Terascale computing has been superseded by petascale computing – 1,000 times faster. By 2020, we will be in the exascale age – a thousand times faster again.
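For reference, the prefixes map on to powers of ten, each step a factor of 1,000 in floating-point operations per second (flops):

\[
\text{terascale} = 10^{12}\ \text{flops},\qquad \text{petascale} = 10^{15}\ \text{flops},\qquad \text{exascale} = 10^{18}\ \text{flops}
\]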

Located less than an hour’s drive from San Francisco, the laboratory – a mile-square high-security campus with 6,000 workers – was built to simulate nuclear tests, thus safeguarding the US nuclear stockpile.

The first supercomputer housed here was the fastest in the world for nearly four years. But the latest to replace it – called Sequoia – was the world’s fastest for only six months before being overtaken last November by Titan – another Department of Energy machine at Oak Ridge National Laboratory in Tennessee.

Last month, Milky Way 2, based at the National Supercomputing Centre in Guangzhou, gave China a decisive lead over Titan on the Top 500 list of the world’s fastest supercomputers, offering more than double the performance, at 55 petaflops, or 55 quadrillion operations per second.

China is now expected to be first to exascale – the target of an international race also involving the EU, India, Japan and Russia. The winner would be first to reap the significant scientific and economic advantages from supercomputers that can be used for everything from detecting weather patterns to making a more efficient combustion engine.

China’s lead comes as budget gridlock in Washington means there is no approved plan – or funding in place – for the US to reach exascale computing by 2020. It is a source of intense frustration among US scientists, engineers and technology companies that have been accustomed to dominating the field and maintaining their technical superiority.

On a tour of the 48,000 sq ft computing floor at the Terascale facility, the big names of American high technology are all on show: Sequoia consists of rows and rows of IBM’s Blue Gene Q servers, with 96 racks packed with 1.6m processing cores. Older machines feature Sun Microsystems servers powered by Intel processors. But these tech titans will be joined – and possibly replaced – by Chinese brands as the country increases its investment and expertise.

Mr McCoy says the building could soon begin creaking under the strain of the latest machines: “The density of the racks is getting so huge they may exceed the capacity of this floor to support them. A single rack of Blue Gene Q weighs 4,400lbs but they could get up to 10,000lbs and we would barely be able to bring them up in the elevators.”

But weight is the least of the problems that need to be overcome to reach exascale computing. Engineers must also reduce the computers’ power requirements, create potent software and remove memory bottlenecks. Resiliency – the ability of the machines to run for more than a day to complete their tasks – is also an important issue. With more than 100m operating parts, supercomputers need the ability to self-heal.
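The article does not describe Livermore’s fault-tolerance machinery, but the standard defence on long runs is checkpoint/restart: the job periodically writes its state to disk so that, when a component fails, it can resume from the last checkpoint rather than start again. A minimal Python sketch of the pattern (the file name, state layout and step counts are illustrative, not Livermore’s):

```python
import os
import pickle

CHECKPOINT = "simulation.ckpt"  # illustrative file name


def load_checkpoint():
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "state": 0.0}


def save_checkpoint(data):
    """Write atomically so a crash mid-write cannot corrupt the file."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(data, f)
    os.replace(tmp, CHECKPOINT)


def run(total_steps=1_000_000, checkpoint_every=10_000):
    data = load_checkpoint()
    for step in range(data["step"], total_steps):
        data["state"] += 1e-6      # stand-in for the real computation
        data["step"] = step + 1
        if data["step"] % checkpoint_every == 0:
            # A node failure after this point loses at most
            # checkpoint_every steps of work.
            save_checkpoint(data)
    save_checkpoint(data)


if __name__ == "__main__":
    run()
```

The hard part at this scale is writing checkpoints fast enough that saving state does not dominate the run time.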

An exascale computer, capable of a quintillion – or 10 to the power of 18 – floating point operations per second, could probably be built today. But it would draw about 100MW of power, equivalent to that used by 80,000 US homes. The US plan to achieve exascale will require lower-power chips, denser circuitry and more efficient cooling to be developed.
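That comparison is simple division, and it squares with the roughly 1.2kW an average US household draws:

\[
\frac{100\ \text{MW}}{80{,}000\ \text{homes}} = \frac{1\times 10^{8}\ \text{W}}{8\times 10^{4}\ \text{homes}} = 1{,}250\ \text{W per home}
\]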

Inside an armadillo-shaped auditorium at Livermore, scientists give presentations on the simulations they are achieving in the petaflop era: a tomography of the planet, showing seismic activity under the surface; the inner workings of the human heart; and how best to optimise the firing of the 192 giant lasers in the adjoining National Ignition Facility. In a simulation of the effect of a nuclear bomb, the world’s largest laser system is focused on an object the size of a pencil eraser, triggering an explosion lasting 20 nanoseconds.

While these simulations are impressive, they lack the resolution and scale that exaflop computing will make possible.

Simulation has become the third leg of science alongside theory and experimentation, says Steve Scott, chief technology officer at Nvidia, whose 18,600 Kepler processors help power the Titan machine in Tennessee.

“There are fundamental problems in science that do need an exaflop and new sciences will be enabled as well, so it can be transformative,” he says. “For example, scientists believe you need to do a global climate model at a 1km resolution to understand cloud convections and ocean eddies that are really important to the process. That needs exaflops.”
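A rough back-of-envelope calculation (mine, not Mr Scott’s) hints at why. The Earth’s surface is about 5.1 × 10⁸ sq km, so a 1km global grid has roughly 500m columns; assuming, say, 100 vertical levels gives about 5 × 10¹⁰ cells to update at every simulated timestep:

\[
5.1\times 10^{8}\ \text{km}^{2} \times \frac{1\ \text{column}}{\text{km}^{2}} \times 100\ \text{levels} \approx 5\times 10^{10}\ \text{cells}
\]

With thousands of floating-point operations per cell per step, and the millions of short timesteps a multi-decade run needs, only an exaflop-class machine gets through the work in useful time.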

It is this kind of work that President Barack Obama referred to in his 2011 State of the Union speech when he acknowledged China had just grabbed the lead in supercomputing.

“This is our generation’s Sputnik moment,” he said, comparing supercomputing to the space race between the Soviet Union and the US in the 1960s.

1983: engineers put the Cray supercomputer through its paces © Corbis

More than two years later, an exascale strategic plan is only now being submitted to Congress. No direct funding has been approved. Concerned members of Congress have drafted the American High-End Computing Leadership Act to try to co-ordinate and fund the exascale effort in the US.

Experts testifying at a House energy subcommittee hearing on it in May warned that the US had fallen behind China and Japan, despite retaining leadership in intellectual property. “I think it would be truly shameful for us to give up the elements of leadership that we have,” said Roscoe Giles, chairman of the Advanced Scientific Computing Advisory Committee.

Rick Stevens, co-leader of exascale planning in the Department of Energy’s laboratories, said Washington needed to provide an extra $400m a year for an exascale computer in the US to be feasibly deployed by 2020. Current funding levels would put this back to 2025 – seven years after China is expected to reach exascale.

But US tech companies are pressing ahead with their own exascale investments. Intel began assembling an exascale team more than two years ago, hiring Mark Seager, who deployed Livermore’s earlier Blue Gene supercomputers, and Alan Gara, chief architect of three generations of Blue Gene machines. They have launched Xeon Phi, an accelerator processor that China is using in Milky Way 2. Intel plans to keep developing the chip and hopes eventually to seal a partnership with the US Department of Energy – if the bigger funding demands for exascale are met.

“In China, the politburo has approved a five-year plan for them to get to exascale,” Mr Seager says. “They have unlimited funding to get there with three parallel developments. They are as serious as a heart attack.”

While Milky Way 2 depends on Xeon processors, significant parts of the machine are made in China, including the software “stack” and interconnect – a complex piece of networking equipment that needs to be ultra-efficient in connecting the different banks of processors to one another for parallel computing. A second effort is based entirely on western technology, while a third is completely indigenous but further behind. China has also developed its own processor, nicknamed Godson. Its latest version has eight “cores”, which act as the brains of the chip, and features 1.14bn transistors.
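The interconnect matters because parallel codes constantly combine partial results from thousands of nodes, and the speed of those collective operations over the network often sets the limit on scaling. A minimal sketch using the mpi4py library (my illustration, not the Milky Way 2 software stack):

```python
# Run with, for example: mpirun -n 4 python reduce_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank owns a slice of the problem and computes a partial result.
local = np.full(1_000_000, float(rank))
partial_sum = local.sum()

# This all-reduce crosses the interconnect; its latency and bandwidth
# largely determine how well the job scales across banks of processors.
total = comm.allreduce(partial_sum, op=MPI.SUM)

if rank == 0:
    print(f"global sum across {comm.Get_size()} ranks: {total}")
```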

“It’s currently believed to be well behind best-of-breed chips like Intel’s,” says Addison Snell, computing analyst with the Intersect360 research firm. “Nevertheless, it’s certainly feasible that by the end of the decade they will be able to field a very large supercomputer made entirely out of Chinese technology.”

China’s move towards a homegrown supercomputer is seen as inevitable, with the US likely in future to limit the export and sale of its best technology for national security and economic reasons. It also wants to have a big say in the establishment of computing standards.

“Where we don’t want to go in the near term is in deep hardware partnerships internationally. I think that’s a place where we want to maintain our competitive edge,” Mr Stevens told the congressional subcommittee.

Intellectual property is the US’s strongest card, but rival exascale efforts are already well funded. The EU has three projects under way and $2bn in funding for them over the next three years. The Indian government has committed $2bn and Japan has a $1.1bn investment programme.

The lack of US funding means a 1960s-style space race remains a dream, as do the spin-offs that could be expected from new advances in technology.

“There would absolutely be a trickle-down from exascale, just as the laptops of today were supercomputers 10 years ago,” says Mr Gara. “We’re not sitting on our hands right now but the lack of investment at the high end means that we don’t push as hard in some interesting dimensions that have historically paid dividends for mainstream technology.”

The space race analogy is a false one, says Mr Snell. The moon, he adds, was a final destination, while beyond the exaflop there are zettaflop and yottaflop targets.

2013: China boasts the world’s fastest computer © Corbis

“The goal is always receding into the distance – and this is the nice thing about supercomputing – it’s really a tool for science and engineering where there are always harder problems to solve. Until we’ve reached the end of science, we will always need more powerful tools.”

The fastest computer does not always produce the best science. But for a country used to being at the leading edge of science and technology, falling behind in its ultimate expression could be damaging to national pride and competitiveness.

“We are in a perilous place right now. Leadership is ours to lose,” Mr McCoy says. “We have to make a decision. Do we want to control the instruction sets, the programming models? Do we want American technology in these machines or do we want someone else’s in 15 years?”

From nappy absorption to the apocalypse

Whether measuring the decay and safety of the US nuclear stockpile or improving the absorbency of nappies for American babies, the power of supercomputers to simulate conditions from the cradle to the apocalypse is proving vital in materials science.

At Lawrence Livermore National Laboratory in California, years of simulations of atmospheric and underground nuclear explosions have evolved into larger projects of modelling the planet’s weather patterns and creating a subsurface seismic tomography – although earthquakes remain as unpredictable as ever.

Just outside the high-security perimeter of the complex, a High-Performance Computing Center was set up two years ago to establish partnerships with industry – allowing companies to buy time on supercomputers to create their own simulations and improve their products.

“Leadership is about being able to model physical systems that can be a strategic and economic advantage for this country,” says Mike McCoy, director of the Advanced Simulation and Computing programme. “Think of how a Boeing or Ford or IBM could make use of this.”

The partnerships are still in their early stages. Apart from nappy discussions with Johnson & Johnson, the most impressive feat is the Cardioid project – a simulation of the electrophysiology of the human heart that can model its reaction to drugs intended to treat conditions such as arrhythmia, an irregular heartbeat.

At Oak Ridge National Laboratory in Tennessee, the Titan supercomputer is being used to simulate fuel chemistries to try to create engines that are 25-50 per cent more efficient in the combustion process.

Exascale computing will allow modelling that goes beyond a single part of the body or unit of a machine.

“Manufacturers designing complicated machinery won’t have to focus on single-point designs in future,” says Jack Wells, director of science at Oak Ridge.

“We can do whole ensembles of design and collapse months-long workflows into one day.”
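The article gives no detail of Oak Ridge’s workflow tools, but the ensemble idea Mr Wells describes is simple: sweep many candidate designs at once rather than analyse one “single-point” design at a time. A hedged Python sketch, with an invented stand-in for the expensive simulation:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product


def evaluate_design(bore_mm, compression_ratio):
    """Stand-in for an expensive engine simulation; returns a score."""
    return bore_mm / compression_ratio  # placeholder metric


if __name__ == "__main__":
    # The ensemble: every combination of the swept parameters.
    candidates = list(product([80, 85, 90, 95], [9.5, 10.5, 11.5]))

    # On a supercomputer each case would run on its own set of nodes;
    # here a local process pool stands in for that parallelism.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(evaluate_design, *zip(*candidates)))

    best_score, best_design = max(zip(scores, candidates))
    print("best design:", best_design, "score:", best_score)
```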

——————————————-

Quest for the world’s fastest computer

1964: Fred Hoyle, the great science thinker, and an early IBM © Getty Images

1958 Seymour Cray, the “father of supercomputing”, leaves Sperry Corporation to join Control Data Corporation. Two years later, Cray finishes the CDC 1604, the first solid-state computer. It is followed by the CDC 6600 in 1964. In 1968 he introduces the CDC 7600, then the fastest computer in the world at 36MHz.

1976 Cray builds the Cray-1, which is installed at the Los Alamos National Laboratory for $8.8m. It has a speed of 160m floating-point operations per second, or 160 megaflops. The Cray-1’s “C” shape allows the integrated circuits to sit closer together, and it is cooled with a refrigeration system using Freon. The Cray Y-MP, released in 1988, sustains over 1 gigaflop on many applications.

1990 NEC’s SX-3/44R becomes the world’s fastest supercomputer with a four-processor model. Fujitsu’s Numerical Wind Tunnel, installed in 1993 at Tokyo’s National Aerospace Laboratory, reaches 124.5 gigaflops. By the end of the 20th century, Intel’s ASCI Red supercomputer is the fastest, using Pentium Pro processors. It has 12 terabytes of disk storage.

2005 IBM’s Blue Gene/L is installed at the Lawrence Livermore National Laboratory. By 2007, it reaches 478.2 teraflops.

2013 Milky Way 2, based at the National Supercomputing Centre in Guangzhou, gives China the lead in supercomputing with 55 petaflops, or 55 quadrillion (10 to the power of 15) operations per second.

——————————————-

