DARPA awards Georgia Tech energy-efficient high-performance computing contract

Goal is to create algorithms that carry supercomputing into the field

Georgia Tech has received $561,130 for the first phase of a negotiated three-phase, $2.9 million cooperative agreement contract from the U.S. Defense Advanced Research Projects Agency (DARPA). The goal is to create the algorithmic framework for supercomputing systems that require far less energy than traditional high-speed machines, enabling devices in the field to perform calculations that currently require room-sized supercomputers.

Awarded under DARPA’s Power Efficiency Revolution for Embedded Computing Technologies (PERFECT) program, the negotiated cooperative agreement contract (with options out to five years) is one piece of a national effort to increase the computational power efficiency of “embedded systems” by 75-fold over the best current computing performance, in areas extending beyond traditional scientific computing. Professor David Bader, executive director of high-performance computing in the School of Computational Science & Engineering, is principal investigator on the Georgia Tech cooperative agreement; research scientist Jason Riedy is co-principal investigator.

“Power efficiency is one of the greatest challenges confronting the designer of any computing system, much less one that’s capable of this kind of speed,” Bader said. “We could build this system today, but it would require megawatts of electricity, enough to power a medium-sized city. Our goal is to deliver the same graph analytic capabilities on platforms that require only watts or kilowatts.”

Such a system would have benefits in energy conservation, of course, but it could also save lives. The tactical advantages of supercomputing in military situations, such as quickly and comprehensively mapping individual or group social-media activity, are becoming more critical every day, and the capacity simply doesn’t exist to deliver massive amounts of data from the field to a central computing system. Georgia Tech’s objective is to bring supercomputer graph-analysis capabilities where they’re needed, from vehicles to field hospitals and beyond. The project bears the acronym GRATEFUL: “Graph Analysis Tackling power-Efficiency, Uncertainty and Locality.”
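To make the idea of graph analysis concrete, here is a minimal sketch, in Python and purely for illustration rather than drawn from the GRATEFUL project, of the kind of computation involved: grouping accounts in an interaction graph into connected communities. The edge list and function names are hypothetical.

```python
# Illustrative sketch only: a toy version of the kind of graph analysis the
# release describes (grouping accounts by who interacts with whom). The data
# and names below are hypothetical, not part of the GRATEFUL project.
from collections import defaultdict, deque

def connected_groups(edges):
    """Return the connected components of an undirected interaction graph."""
    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)

    seen, groups = set(), []
    for start in adjacency:
        if start in seen:
            continue
        # Breadth-first search from each unvisited account.
        queue, group = deque([start]), []
        seen.add(start)
        while queue:
            node = queue.popleft()
            group.append(node)
            for neighbor in adjacency[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        groups.append(group)
    return groups

# Hypothetical interaction edges (account <-> account).
interactions = [("a", "b"), ("b", "c"), ("d", "e")]
print(connected_groups(interactions))   # [['a', 'b', 'c'], ['d', 'e']]
```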

In addition to power efficiency, the project's second priority is computational resiliency: the resulting algorithms must withstand errors at the application and even the hardware level, whether those errors arise from faulty input or from environmental factors such as weather and hardware damage.
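As a rough illustration of what algorithm-level resiliency can look like, the sketch below, which is not the project's actual approach, re-runs a degree computation whenever a cheap consistency check fails. The simulated fault injection and retry limit are hypothetical; the invariant used (the sum of vertex degrees equals twice the number of edges) is a standard graph identity.

```python
# Illustrative sketch only: one common way an algorithm can tolerate soft
# errors is to carry a cheap invariant check and recompute when it fails.
import random

def degrees_with_check(edges, max_retries=3):
    """Compute vertex degrees, re-running if a consistency check fails."""
    for _ in range(max_retries):
        degree = {}
        for a, b in edges:
            degree[a] = degree.get(a, 0) + 1
            degree[b] = degree.get(b, 0) + 1

        # Simulate a rare soft error flipping one counter (for demonstration).
        if random.random() < 0.1:
            victim = random.choice(list(degree))
            degree[victim] += 1

        # Invariant: total degree must equal twice the edge count.
        if sum(degree.values()) == 2 * len(edges):
            return degree
    raise RuntimeError("degree computation repeatedly failed its consistency check")

edges = [("a", "b"), ("b", "c"), ("c", "a")]
print(degrees_with_check(edges))  # {'a': 2, 'b': 2, 'c': 2}
```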

Bader and Riedy’s task is to develop the algorithmic framework upon which these new embedded systems will operate, and they will consciously remain “architecture-agnostic” so that the end product can be applied as widely as possible. As with all programs funded under DARPA PERFECT, research and testing will be done in simulation rather than on actual embedded systems. GRATEFUL will proceed in three stages: research and startup (18 months), risk mitigation (18 months) and prototyping (two years).

“Our goal is to make sure we have graph-analysis algorithms that can manage issues across architectures,” Riedy said. “And we’ll be looking at all the issues that concern hardware designers. Today’s platforms maximize the number of operations running at once, while these new platforms consider the most power-efficient levels of that concurrency. These are not new concerns, but our job is to find new ways to deal with them.”
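One way to picture the trade-off Riedy describes is to choose a concurrency level by operations per joule rather than raw throughput. The sketch below is a hypothetical model with assumed throughput and power curves, not a measurement of any real platform or an algorithm from the project.

```python
# Illustrative sketch only: pick the concurrency level that maximizes work
# done per joule rather than work per second. The throughput and power
# models are hypothetical placeholders.

def throughput(threads):
    """Modeled operations/second: parallel speedup with diminishing returns."""
    serial_fraction = 0.05                      # assumed Amdahl-style limit
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

def power(threads):
    """Modeled watts: a fixed base cost plus an assumed per-thread cost."""
    return 2.0 + 0.75 * threads

def best_concurrency(max_threads=64):
    """Return the thread count maximizing operations per joule."""
    return max(range(1, max_threads + 1),
               key=lambda t: throughput(t) / power(t))

print(best_concurrency())   # thread count with the best ops/joule under this model
```

Under this model the most power-efficient setting is well below the maximum thread count, which is the point of the comparison: maximizing concurrency and maximizing efficiency are different objectives.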

https://www.eurekalert.org/pub_releases/2012-11/giot-dag111312.php

David A. Bader
Distinguished Professor and Director of the Institute for Data Science

David A. Bader is a Distinguished Professor in the Department of Computer Science at New Jersey Institute of Technology.