Linux Supercomputer Howls

IBM execs say the creation of the world’s fastest Linux supercomputer will revolutionize science. Then it’ll take on e-commerce. By Michelle Finley and John Gartner.

THE UNIVERSITY OF New Mexico and IBM are teaming up to build the world’s fastest Linux-based supercomputer.

Named “LosLobos,” the new supercomputer is scheduled to be fully operational by the summer.

LosLobos is a departure from the traditional supercomputer setup: it’s built from 256 IBM Netfinity servers.

The Netfinity servers are linked together by special clustering software and high-speed networking hardware, which makes the separate units act as one computer, delivering a processing speed of 375 gigaflops, or 375 billion floating-point operations per second.
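For a rough sense of scale, the aggregate figure is simply the sum of the nodes’ individual throughputs. A minimal back-of-the-envelope sketch, using only the numbers reported above (the per-node figure is derived from those totals, not an official IBM specification):

```python
# Rough arithmetic for the cluster's aggregate throughput, assuming
# (as reported above) 256 nodes delivering 375 gigaflops in total.
# The per-node figure below is derived, not an IBM specification.

TOTAL_GFLOPS = 375.0  # aggregate speed reported for LosLobos
NUM_NODES = 256       # number of IBM Netfinity servers in the cluster

per_node_gflops = TOTAL_GFLOPS / NUM_NODES
print(f"Per-node throughput: {per_node_gflops:.2f} gigaflops")
print(f"Total floating-point ops per second: {TOTAL_GFLOPS * 1e9:.0f}")
```

Each server contributes only about a gigaflop and a half; the headline number comes entirely from running many commodity boxes in parallel, which is the whole premise of the supercluster approach.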

Although its creators believe LosLobos is the fastest Linux supercomputer, it will rank only 24th on the Top 500 list of the world’s fastest supercomputers.

Dr. Frank Gilfeather, executive director for the High Performance Computing Education and Research Center at the University of New Mexico, said that LosLobos is part of a “major supercluster movement” involving many people in the Linux and Open-System communities.

He believes superclusters will revolutionize the high-end computing environment.

“The evolution of large Linux superclusters emerges from the proliferation of commodity components such as PCs, the development of high-speed COTS networks, such as Myrinet, and rapid expansion of the open software movement,” Gilfeather said. “Thus, true supercomputers can be created at an extremely reasonable cost in comparison to traditional supercomputers.”

One researcher who tests distributed cluster environments said there is definitely a move away from the standard all-in-one supercomputer model.

“Power is unlimited when you’re using clusters,” said Stephen Scott, a research scientist in Oak Ridge National Laboratory’s computer science division. “An average ninth-grader can plug the machinery in.”

“The problem is dealing with the networking. I can buy a single switch with non-blocking communications, so any machine can talk to any machine – but it maxes out at 64 machines. Nobody has developed 128-machine switches yet.”

Gilfeather said that the biggest problem with deploying scalable production-class superclusters is the lack of mature and tested management tools comparable to what the traditional supercomputer vendors provide.

“I/O, including scalable file systems, continues to be a universal problem for supercomputers,” he said.

The LosLobos supercluster is part of the National Science Foundation’s National Computational Sciences Alliance program, which gives scientists remote access to the fast machines needed for scientific research.

The foundation is developing a nationwide technology grid that will connect researchers across the country by linking together the supercomputers housed at national labs and universities.

“The scientific world likes Linux because it’s close to standard Unix,” Scott said. “Most high-performance environments are Unix, but all of the free GNU tools make it much easier and cheaper to deploy Linux.”

John Patrick, vice president of Internet technology at IBM, said the introduction of superclusters at the top university and government research facilities will impact e-business down the road.

“Right now, even the largest websites are tiny compared to what we will see in the near future,” Patrick said. “The superclusters of today will provide the test bed to create the e-business systems of tomorrow.”

David A. Bader
Distinguished Professor and Director of the Institute for Data Science

David A. Bader is a Distinguished Professor in the Department of Computer Science at New Jersey Institute of Technology.