SC15 Panel Line-Up for November 18
As data-intensive science matures, the need for high performance computing (HPC) to converge capacity and capability with Big Data becomes more apparent and urgent. Capacity requirements stem from science data processing and the creation of large-scale data products (e.g., Earth observations, the Large Hadron Collider, the Square Kilometre Array) and from simulation model output (e.g., flight mission plans, weather and climate models).
Capacity growth is further amplified by the need to ingest, analyze, and visualize voluminous data more rapidly in order to improve understanding of known physical processes, discover new phenomena, and compare results.
- How does HPC need to change in order to meet these Big Data needs?
- What can HPC and Big Data communities learn from each other?
- What impact will this have on conventional workflows, architectures, and tools?
An invited international panel of experts will examine these disruptive technologies and consider their long-term impacts and research directions.
- George O. Strawn (Moderator) – Networking and Information Technology Research and Development National Coordination Office
- David Bader – Georgia Institute of Technology
- Ian Foster – University of Chicago
- Bruce Hendrickson – Sandia National Laboratories
- Randy Bryant – Executive Office of the President, Office of Science and Technology Policy
- George Biros – The University of Texas at Austin
- Andrew W. Moore – Carnegie Mellon University
Mentoring Undergraduates Through Competition
The next generation of HPC talent will face significant challenges in creating software ecosystems and optimally using the next generation of HPC systems. The rapid advance of HPC makes it difficult for academic institutions to keep pace.
The Student Cluster Competition (SCC), now in its ninth year, was created to address this issue by immersing students into all aspects of HPC. This panel will examine the impact of the SCC on the students and schools that have participated.
Representatives from five institutions around the world will discuss their experiences with the SCC with regard to their students’ career paths and its integration with curricula and academic HPC centers.
The panel will further discuss whether “extracurricular” activities, such as the SCC, provide sufficient return on investment and what activities could change or replace the competition to meet these goals more effectively.
- Brent Gorda (Moderator) – Intel Corporation
- Jerry Chou – Tsinghua University
- Rebecca Hartman-Baker – Lawrence Berkeley National Laboratory
- Doug Smith – University of Colorado Boulder
- Xuanhua Shi – Huazhong University of Science and Technology
- Stephen Lien Harrell – Purdue University
Programming Models for Parallel Architectures and Requirements for Pre-Exascale
Relying on domain scientists to hand-tune applications for emerging exascale platforms is a real challenge. A scientist prefers to express the mathematics of the science, not to describe the parallelism of the implementing algorithms.
Do we expect too much of scientists in asking them to code for high parallel performance given the immense capabilities of the platform? At the same time, a scientist may have a mandate to code for a new architecture while still preserving portability in their code.
This panel will bring together experts in user experience, programming models, and architecture to discuss the pressing needs in finding a path forward for porting scientific codes to such platforms. We hope to discuss the evolving programming stack and application-level requirements, and to address the hierarchical nature of large systems in terms of heterogeneous cores, memory levels, and power consumption, as well as the pragmatic advances of near-term technology.
- Fernanda Foertter (Moderator) – Oak Ridge National Laboratory
- Barbara Chapman – University of Houston
- Steve Oberlin – NVIDIA Corporation
- Satoshi Matsuoka – Tokyo Institute of Technology
- Jack Wells – Oak Ridge National Laboratory
- Si Hammond – Sandia National Laboratories