Twice a year, an organization called the TOP500 publishes a ranking of the world's fastest supercomputers: mammoth installations with names like Sunway TaihuLight and Tianhe-2. Those are both Chinese machines, and the former is the world's fastest. The most recent version of the list came out on Monday, and the top five supercomputers hail from China, Switzerland, Japan, and the United States.
But while the ranking is a timely who's who of brawny computers (right now, China dominates the list, with 202 of the top 500), its publication is also a good time to ask: what makes a supercomputer a supercomputer, and what do scientists actually use these machines for?
“A supercomputer is a large machine designed to focus its power on a single problem,” says Bill Gropp, who runs the National Center for Supercomputing Applications at the University of Illinois, home to a machine called Blue Waters. In other words, a large server farm might be powering your Gmail experience or streaming your Netflix, but its computing power is focused on many individual tasks, not a single, complex one.
And importantly, supercomputers are meant to handle problems that can be broken down into smaller pieces—but pieces that don’t remain in isolation. “Those pieces have to communicate with their neighbors,” Gropp says.
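That division of labor is easier to see in miniature. Below is a toy sketch in Python of the pattern Gropp describes: a problem (here, heat spreading along a one-dimensional bar) is split among several hypothetical "workers," and at every step each piece swaps its edge values with its neighbors before updating its own cells. The worker count and diffusion numbers are illustrative assumptions, not figures from the article; a real supercomputer runs this kind of exchange across thousands of nodes.

```python
# Toy illustration (not from the article): split a problem into pieces,
# but let each piece trade boundary values with its neighbors each step.
import numpy as np

def diffuse(grid, steps, workers=4, alpha=0.1):
    chunks = np.array_split(grid, workers)   # each worker owns one piece
    for _ in range(steps):
        padded = []
        for i, c in enumerate(chunks):
            # the "communicate with neighbors" step: borrow edge cells
            left = chunks[i - 1][-1] if i > 0 else c[0]
            right = chunks[i + 1][0] if i < workers - 1 else c[-1]
            padded.append(np.concatenate(([left], c, [right])))
        # with borrowed borders in hand, every worker updates independently
        chunks = [p[1:-1] + alpha * (p[:-2] - 2 * p[1:-1] + p[2:])
                  for p in padded]
    return np.concatenate(chunks)

bar = np.zeros(16)
bar[8] = 100.0                    # a hot spot in the middle of the bar
print(diffuse(bar, steps=50))     # heat spreads across worker boundaries
```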
To picture what one looks like, imagine rows of refrigerator-sized cabinets packed with components like processors. The biggest installations can take up thousands of square feet.
The top supercomputers are ranked using a metric called flops, short for floating-point operations per second, which measures how many arithmetic calculations a machine can carry out each second. The Sunway TaihuLight machine topped out at 93 petaflops, which is 93 quadrillion flops. The fastest U.S. machine on the list, Titan, clocks in at over 17 petaflops. (Just don't confuse them with belly flops, which are totally different and much less useful.)
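To get a feel for those numbers, here is a quick back-of-envelope calculation in Python using the flops figures quoted above; the laptop speed is a made-up assumption for comparison, not a number from the article.

```python
# Back-of-envelope with the flops figures quoted above. The laptop speed
# (3 gigaflops) is an illustrative assumption.
PETA = 10**15                     # peta- means quadrillion

taihulight = 93 * PETA            # Sunway TaihuLight: 93 petaflops
titan = 17 * PETA                 # Titan: over 17 petaflops
laptop = 3 * 10**9                # a hypothetical 3-gigaflop laptop

# How long would the laptop need to match one second of TaihuLight's work?
seconds = taihulight / laptop
print(f"{seconds / (365 * 24 * 3600):.2f} years")   # about a year
print(f"TaihuLight is roughly {taihulight / titan:.0f}x faster than Titan")
```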
"We’re studying nature at a very high resolution, atom by atom."
The world on silicon
Think about the complexity of the natural world: the way molecules interact, the way a tornado forms, or the path a hurricane takes. Simulating any of that digitally takes a lot of computing power.
Steve Scott, the chief technology officer at Cray Inc., which makes supercomputers, says the powerful machines are central to the scientific process. "Basically what computers are doing is simulating the natural world," he says.
For example: consider HIV. That virus is wrapped in a shell called a capsid, which is made up of 1,300 proteins. To better understand the interplay between the capsid and the cell the virus enters, Juan Perilla, an assistant professor of chemistry and biochemistry at the University of Delaware, used two supercomputers to run a simulation: Titan, at Oak Ridge National Laboratory, and Blue Waters, in Illinois.
The simulation produced so much data, almost 100 terabytes, that the team needed Blue Waters again just to crunch it.