Announcing the winners of the AI System Hardware/Software Co-Design research awards

In January, Facebook invited university faculty to respond to a call for research proposals on AI System Hardware/Software Co-Design. Co-design refers to the simultaneous design and optimization of several aspects of a system, including its hardware and software, to meet a target for a given system metric, such as throughput, latency, power, size, or any combination thereof. Deep learning has proved particularly amenable to such co-design across the software and hardware stack, leading to a variety of novel algorithms, numerical optimizations, and AI hardware.
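As one concrete illustration of the kind of software-side numerical optimization co-design enables (quantization is also the subject of the first winning proposal below), the sketch that follows maps a float weight tensor to 8-bit integers. This is a minimal, illustrative example of symmetric post-training quantization; the function names and the per-tensor scaling scheme are our own assumptions, not drawn from the post or from any winning proposal. Choosing a bit width this way trades a small amount of model accuracy for lower memory bandwidth and cheaper arithmetic on hardware with int8 support.

```python
# Illustrative sketch only: symmetric per-tensor int8 quantization,
# one of the numerical optimizations that hardware/software co-design
# makes attractive. Names and scheme are assumptions, not Facebook's.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights to int8 values plus a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0          # largest magnitude maps to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover a float approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"max quantization error: {err:.5f}")  # small relative to the weights
```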

Facebook AI teams have also been using co-design to develop high-performance AI solutions for existing as well as future AI hardware. Through these research awards, we looked to support further exploration of co-design opportunities across a number of new dimensions.

We received 88 submissions, many of which proposed promising research directions. The selection committee was composed of 10 engineers representing a wide range of AI hardware/algorithm co-design research areas.

Research award winners

Efficient Neural Network Through Systematic Quantization
Kurt Keutzer (UC Berkeley), Amir Gholami (UC Berkeley)

Hardware-Centric AutoML: Design Automation for Efficient Deep Learning
Song Han (Massachusetts Institute of Technology)

Low Memory-Bandwidth DNN Accelerator for Training Sparse Models
Mattan Erez (The University of Texas at Austin), Michael Orshansky (The University of Texas at Austin)

Making Typical Values Matter in Out-of-the-Box Deep Learning Models
Andreas Moshovos (University of Toronto)

ML-Driven HW-SW Co-Design of Efficient Tensor Core Architectures
Tushar Krishna (Georgia Institute of Technology)

Realistic Benefits of Near-Data Processing for Emerging ML Workloads
Onur Mutlu (ETH Zurich)

Scalable Graph Learning Algorithms
David A. Bader (Georgia Institute of Technology)

Structure-Exploiting Optimization Algorithms for Deep Learning
Jorge Nocedal (Northwestern University)

Runners-up

A Holistic Approach to Scalable DNN Training
Gennady Pekhimenko (University of Toronto)

Accelerating Graph Recommender Systems with Co-designed Memory Extensions
Scott Beamer (UC Santa Cruz)

Automatic Hardware-Software Co-Design for Deep Learning with TVM/AutoVTA
Luis Ceze (University of Washington)

Differentiable Neural Architecture Search for Ads CTR Prediction
Kurt Keutzer (UC Berkeley)

Enabling Scalable Training Using Waferscale Systems
Rakesh Kumar (University of Illinois at Urbana-Champaign)

Fast Embeddings: Construction and Lookups
Francesco Silvestri (University of Padua), Flavio Vella (Free University of Bozen-Bolzano)

Hardware-Neural Architecture Search-Based Co-Design for Efficient NNs
Diana Marculescu (Carnegie Mellon University)

Processing-in-Memory Architecture for Word Embedding
Jung Ho Ahn (Seoul National University)

Thank you to all the researchers who submitted proposals, and congratulations to the winners. To view our currently open research awards and to subscribe to our email list, visit our Research Awards page.
