Posts

Institute for Data Science Unveils 2020-2021 Talks, Many to Discuss COVID-19

Written by: Evan Koblentz

Institute for Data Science will host many more talks than last year.

About two dozen experts on data science are giving seminars to the NJIT community this semester and next, with many of them excited to participate virtually from far away with the students and faculty here in New Jersey. David Bader, director of NJIT’s Institute for Data Science, said he is excited to host such prestigious guest speakers representing academia, government and industry.

1st Algorithmic Breakthrough in 40 years for solving the Minimum Spanning Tree (MST) Replacement Edges problem

One of the most studied problems in computer science is the “Minimum Spanning Tree,” or MST. In this problem, one is given a graph composed of vertices and weighted edges and asked to find a subset of edges that connects all of the vertices such that the total sum of the edge weights is as small as possible. Many real-world optimization problems are solved by finding a minimum spanning tree, such as finding the lowest-cost distribution routes on a road network, where intersections are vertices and edge weights could be the length of a road segment or the time to drive it.
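
To make the setup concrete, here is a minimal sketch of Kruskal’s classic MST algorithm, a standard textbook method (not the new replacement-edges algorithm the post describes); the toy road network at the bottom is invented for illustration.

```python
# Minimal Kruskal's algorithm sketch: sort edges by weight, then greedily
# keep each edge that joins two previously unconnected components.
def minimum_spanning_tree(num_vertices, edges):
    """edges: list of (weight, u, v) tuples; vertices are 0..num_vertices-1."""
    parent = list(range(num_vertices))

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):
        root_u, root_v = find(u), find(v)
        if root_u != root_v:  # edge connects two components: keep it
            parent[root_u] = root_v
            mst.append((weight, u, v))
    return mst

# Toy road network: intersections 0-3, weights are drive times in minutes.
roads = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (7, 2, 3), (5, 1, 3)]
print(minimum_spanning_tree(4, roads))  # [(1, 1, 2), (3, 0, 2), (5, 1, 3)]
```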

Northeast Big Data Hub Seed Fund Open for Applications!

Seed Fund Announcement The Northeast Big Data Hub is delighted to announce our Seed Fund program this month. The Seed Fund is designed to promote collaboration and support the cross-pollination of tools, data, and ideas across disciplines and sectors including academia, industry, government, and communities. Funding provided through this program is intended to support the northeast region and align with the Major Goals and Focus Areas of the Northeast Big Data Hub.

Opportunities: New Seed Fund Program, Coming Soon

The Northeast Big Data Innovation Hub Seed Fund will launch by this fall to support data science activities within our community. We are pleased to announce David Bader as Chair of the Seed Fund Steering Committee. Bader is a Distinguished Professor in the Department of Computer Science and inaugural Director of the Institute for Data Science at New Jersey Institute of Technology, and a leading expert in solving global grand challenges in science, engineering, computing, and data science.

Dr. David Bader Driving Global Transformations through Data Science Solutions

New Jersey Institute of Technology (NJIT) announced last summer that the University would establish a new Institute for Data Science, focusing on cutting-edge interdisciplinary research and development in all areas of digital data. To head up this endeavor, Dr. David Bader joined NJIT as a Distinguished Professor and Director of the Institute for Data Science. Previously serving as Professor and Chair of the School of Computational Science and Engineering at Georgia Institute of Technology, Bader has become a leading expert in solving global grand challenges in science, engineering, computing, and data science.

Leadership Updates: Steering Committee and Seed Fund Steering Committee

Dear Northeast Hub Community, As new program activities launch under our second phase of NSF funding, we are delighted to welcome new community members to our leadership team who will help guide the Hub forward, and to thank those whose service has helped us reach this point. Following a search conducted by our project team last month, we are pleased to welcome John Goodhue of the Massachusetts Green High Performance Computing Center, Carsten Eickhoff of Brown University, and Laura Dietz of the University of New Hampshire to our Steering Committee.

Meet the 2019-2020 New Faculty in Ying Wu College of Computing

Written by: Prerna Dar

Top row: Jacob Chakareski, Matthew Toegel, Ryan Tolboom, Salam Daher, Tomer Weiss, Cody Buntain. Bottom row: David Bader, Jertishta Qerimaj, Pan Xu, Ravi Varadarajan, Przemyslaw Musialski, Yao Shen.

Whether promising young scholars, researchers with years of experience or seasoned instructors, individuals from around the world join the faculty of Ying Wu College of Computing each year. In the 2019-2020 academic year, the college welcomed 12 new faculty members, continuing a strategic effort to broaden its impact in research and teaching.

Snapping Foggy Narratives Into Focus

David Bader, director of NJIT’s Institute for Data Science, works on computing initiatives that will help people make sense of large, diverse and evolving streams of data from news reports, distributed sensors and lab test equipment, among other sources connected to worldwide networks. When a patient arrives in an emergency room with high fever, coughing and shivering, the speed of diagnosis and treatment depends on the skills of the medical staff, but also on information.

Shawn Cicoria, M.S. in Data Science, NJIT@JerseyCity

Shawn Cicoria, Principal Software Engineering Manager, Microsoft, discusses the M.S. in Data Science program at NJIT@JerseyCity, highlighting Prof. David A. Bader. https://www.youtube.com/watch?v=xQt5GSgwk8k

NJIT Experts Presenting AI Answers to Real-World Problems at NYC Forum

Experts in artificial intelligence from the Ying Wu College of Computing will highlight how their work solves real-world problems at a prestigious meeting in New York next week. The professors — Chaoran Cheng, Jing Li, Zhi Wei and Pan Xu — will share their session stages with researchers from IBM, Facebook, Yahoo and other prominent organizations for audiences at the 34th annual conference of the Association for the Advancement of Artificial Intelligence.

NJIT’s Ying Wu College of Computing Launches New Location in Jersey City

NJIT@JerseyCity is located at 101 Hudson Street on the Jersey City waterfront and, in addition to an ultra-modern learning environment, also provides an expansive view of the iconic Manhattan skyline. NJIT’s Ying Wu College of Computing (YWCC) offers a master’s degree in Data Science as well as graduate certificates in Big Data and Data Mining at NJIT@JerseyCity. YWCC plans to add a graduate certificate in Data Visualization in spring 2020 and further expand next fall to include Cyber Security graduate programs.

This Week in Neo4j

Our featured community member this week is Dr. David Bader, Distinguished Professor at New Jersey Institute of Technology. Without doing too much ego-boosting, we can just say David has been a graph addict since long before it was a ‘thing’. Alongside his role as a professor, he’s a fellow of the IEEE, AAAS, and SIAM, and advises the White House’s National Strategic Computing Initiative (NSCI).

Data science expert Bader looks to Fed funding for info analysis

By Evan Koblentz Data science has reached a point where techniques such as deep learning can beat humans at recognizing objects, although experts are still figuring out how to make explainable predictions from massive data, NJIT distinguished professor David Bader said. Bader leads the university’s Institute for Data Science in collaboration with the Ying Wu College of Computing, Newark College of Engineering, Martin Tuchman School of Management, and College of Science and Liberal Arts.

The Chronicle of Higher Education / NJIT

David Bader, Distinguished Professor and Director of NJIT’s Institute for Data Science

What is NJIT’s new Institute for Data Science? The growing abundance and variety of data we amass give us unprecedented opportunities to improve lives in many arenas: manufacturing, health care, financial management, data protection, food safety and traffic navigation are just a few. The Institute for Data Science (IDS) will focus NJIT’s multidisciplinary research and workforce skills training on developing technology leaders who will solve global challenges involving data and high-performance computing (HPC).

Supercomputer analyzes web traffic across entire internet

By Rob Matheson, MIT News Office

Image caption: Using a supercomputing system, MIT researchers developed a model that captures what global web traffic could look like on a given day, including previously unseen isolated links (left) that rarely connect but seem to impact core web traffic (right). Image courtesy of the researchers, edited by MIT News.

Using a supercomputing system, MIT researchers have developed a model that captures what web traffic looks like around the world on a given day, which can be used as a measurement tool for internet research and many other applications.

Researchers Set to Receive Two Innovation Awards at HPEC’19

Defined by the practice of aggregating computing power to achieve greater performance, high-performance computing (HPC) is becoming increasingly diverse. Now, this market, which is expected to reach $59.65 billion by 2025, is setting its sights on new applications, including the use of graphics processing units (GPUs) for deep learning, cloud computing, and more. These applications will ultimately speed processing rates and cut computational costs for embedded computing systems used in transportation, healthcare, manufacturing, retail, and a host of other industries.

Big Data Career Notes: September 2019 Edition

By Oliver Peckham

In this monthly feature, we’ll keep you up-to-date on the latest career developments for individuals in the high-performance computing community. Whether it’s a promotion, new company hire, or even an accolade, we’ve got the details. Check in each month for an updated list and you may even come across someone you know, or better yet, yourself!

HPC Career Notes: September 2019 Edition

By Oliver Peckham

In this monthly feature, we’ll keep you up-to-date on the latest career developments for individuals in the high-performance computing community. Whether it’s a promotion, new company hire, or even an accolade, we’ve got the details. Check in each month for an updated list and you may even come across someone you know, or better yet, yourself!

NJIT Professor Receives Facebook Research Award for Data Science

The director of NJIT’s new Institute for Data Science has received an award from Facebook to support real-world analytics research. The research aims to develop faster learning patterns to make it easier for companies to extract actionable information from extremely large data sets. Institute Director and Distinguished Professor David Bader joined NJIT last month from Georgia Tech, where he previously served as chair of the School of Computational Science and Engineering within the College of Computing.

Future Computing Community of Interest Meeting

On August 5-6, 2019, I was invited to attend the Future Computing (FC) Community of Interest Meeting sponsored by the National Coordination Office (NCO) of NITRD. The Networking and Information Technology Research and Development (NITRD) Program is a formal Federal program that coordinates the activities of 23 member agencies to tackle multidisciplinary, multitechnology, and multisector cyberinfrastructure R&D needs of the Federal Government and the Nation. The meeting was held in Washington, DC, at the NITRD NCO office.

HPC Career Notes: August 2019 Edition

By Oliver Peckham

In this monthly feature, we’ll keep you up-to-date on the latest career developments for individuals in the high-performance computing community. Whether it’s a promotion, new company hire, or even an accolade, we’ve got the details. Check in each month for an updated list and you may even come across someone you know, or better yet, yourself!

David Bader: The New Jersey Institute of Technology (NJIT) has announced that it is establishing a new Institute for Data Science directed by David Bader.

Big Data Career Notes: July 2019 Edition

David Bader: The New Jersey Institute of Technology has announced that it will establish a new Institute for Data Science, directed by Distinguished Professor David Bader. Bader recently joined NJIT’s Ying Wu College of Computing from Georgia Tech, where he was chair of the School of Computational Science and Engineering within the College of Computing. Bader was recognized as one of HPCwire’s People to Watch in 2014. “Complementing our new facility in Jersey City, which will focus on data science training, NJIT is making significant investments in technological R&D to drive the new AI economy,” said Fadi P.

NJIT to Establish New Institute for Data Science

Former CRA Board Member David Bader will direct the new institute for data science at New Jersey Institute of Technology (NJIT). The institute will focus on cutting-edge interdisciplinary research and development in all areas pertinent to digital data. It will bring existing research centers in big data, medical informatics and cybersecurity together with new research centers in data analytics and artificial intelligence, cutting across all NJIT colleges and schools, and conduct both basic and applied research.

NJIT creates Institute of Data Science to propel AI economy

New Jersey Institute of Technology is creating a center that will conduct basic and applied research focusing on interdisciplinary research and development for all areas pertaining to digital data. The Institute of Data Science, unveiled July 9, will be led by David Bader, a distinguished professor at NJIT. At the institute, scientists, engineers and users will develop technologies applicable in the “real world,” NJIT said, working beyond academic research to solve “problems in the modern data-driven economy.”

NJIT to Establish New Institute for Data Science

Continuing its mission to lead in computing technologies, NJIT announced today that it will establish a new Institute for Data Science, focusing on cutting-edge interdisciplinary research and development in all areas pertinent to digital data. The institute will bring existing research centers in big data, medical informatics and cybersecurity together with new research centers in data analytics and artificial intelligence, cutting across all NJIT colleges and schools, and conduct both basic and applied research.

Faculty Showcase Parallel Computing Research at IPDPS 2019

Researchers from the School of Computational Science and Engineering (CSE) will present seven papers at the 33rd IEEE International Parallel and Distributed Processing Symposium (IPDPS 2019) in Rio de Janeiro, Brazil, May 20-24. “IPDPS is one of the premier parallel and distributed computing conferences in the world that provides broad coverage of all areas in high performance computing (HPC) and parallel computing,” said CSE Professor Ümit V. Çatalyürek, one of the leaders of Georgia Tech’s participation at this year’s symposium.

Facebook Research: Announcing the winners of the AI System Hardware/Software Co-Design research awards

In January, Facebook invited university faculty to respond to a call for research proposals on AI System Hardware/Software Co-Design. Co-design implies simultaneous design and optimization of several aspects of the system, including hardware and software, to achieve a set target for a given system metric, such as throughput, latency, power, size, or any combination thereof. Deep learning has been particularly amenable to such co-design processes across various parts of the software and hardware stack, leading to a variety of novel algorithms, numerical optimizations, and AI hardware.

NVIDIA AI Laboratory (NVAIL)

Georgia Tech, UC Davis, Texas A&M Join NVAIL Program with Focus on Graph Analytics By Sandra Skaff NVIDIA is partnering with three leading universities — Georgia Tech, the University of California, Davis, and Texas A&M — as part of our NVIDIA AI Labs program, to build the future of graph analytics on GPUs. NVIDIA’s work with these three new NVAIL partners aims to ultimately create a one-stop shop for customers to take advantage of accelerated graph analytics algorithms.

Chronicle of Higher Education: Gazette

The Society for Industrial and Applied Mathematics selected 28 fellows for 2019 in recognition of their research and service to the community. David A. Bader, a professor and chair of computational science and engineering at the Georgia Institute of Technology, was recognized for contributions in high-performance algorithms and streaming analytics and for leadership in the field of computational science. https://www.chronicle.com/article/Transitions-New-President-at/246103

Bader Set to Return to Faculty, Research

(Georgia Tech, Atlanta, GA. 8 April 2019) After five years, Professor David Bader has decided not to seek another term as the chair of the School of Computational Science and Engineering (CSE) and is returning to faculty and his research. Bader, a founding faculty member of the school (then a division), became chair in summer 2014. Since then, enrollment in the school’s M.S. program has more than doubled to 190 students.

SIAM Announces Class of 2019 Fellows

SIAM Recognizes Distinguished Work through Fellows Program Society for Industrial and Applied Mathematics (SIAM) is pleased to announce the 2019 Class of SIAM Fellows. These distinguished members were nominated for their exemplary research as well as outstanding service to the community. Through their contributions, SIAM Fellows help advance the fields of applied mathematics and computational science. SIAM congratulates these 28 esteemed members of the community, listed below in alphabetical order:

Solving Real-World Problems: 5-Minute Interview with David Bader

When David Bader started working with graphs 25 years ago, it was a niche that required designing specific algorithms and even specific computers. Now the Neo4j graph database is widely used by analysts and researchers who work with Georgia Tech to rapidly ask questions and visualize results. In this week’s five-minute interview (conducted at GraphConnect 2018 in NYC), we spoke with Dr. Bader about using graph technology to solve real-world problems, including a knowledge graph to track emerging problems and threats.

Georgia Tech’s Leading High-Performance Computing Scientists Showcase Research Highlights at Supercomputing 2018

Georgia Tech high-performance computing (HPC) experts are gathered in Dallas this week to take part in the HPC community’s largest annual event — the International Conference for High Performance Computing, Networking, Storage, and Analysis — commonly referred to as Supercomputing. This year’s conference, SC’18, opened Sunday at the Kay Bailey Hutchison Convention Center Dallas and runs through Nov. 16. “Georgia Tech researchers are presenting 23 separate events this week, including four technical paper presentations, several workshops, panels, and even a doctoral showcase with CSE [School of Computational Science and Engineering] Ph.

CSE Chair David Bader Named Editor-in-Chief of ACM Transactions on Parallel Computing

School of Computational Science and Engineering Chair and Professor David Bader has been named Editor-in-Chief (EiC) of ACM Transactions on Parallel Computing (ACM ToPC). ACM Transactions on Parallel Computing is a forum for novel and innovative work on all aspects of parallel computing, and addresses all classes of parallel-processing platforms, from concurrent and multithreaded to clusters and supercomputers. “I am excited for this opportunity to operate as the Editor-in-Chief of such a prestigious publication for a three-year term.

ACM Transactions on Parallel Computing Names David Bader as Editor-in-Chief

ACM Transactions on Parallel Computing (TOPC) welcomes David Bader as new Editor-in-Chief, for the term November 1, 2018 to October 31, 2021. David is a Professor and Chair in the School of Computational Science and Engineering and College of Computing at Georgia Institute of Technology. About TOPC ACM Transactions on Parallel Computing (TOPC) is a forum for novel and innovative work on all aspects of parallel computing, including foundational and theoretical aspects, systems, languages, architectures, tools, and applications.

CSE Chair David Bader Named Editor-in-Chief of ACM Transactions on Parallel Computing

School of Computational Science and Engineering Chair and Professor David Bader was named Editor-in-Chief (EiC) of ACM Transactions on Parallel Computing (ACM ToPC) this morning. ACM Transactions on Parallel Computing is a forum for novel and innovative work on all aspects of parallel computing, and addresses all classes of parallel-processing platforms, from concurrent and multithreaded to clusters and supercomputers. “I am excited for this opportunity to operate as the Editor-in-Chief of such a prestigious publication for a three-year term.

NVIDIA Introduces RAPIDS Open-Source GPU-Acceleration Platform for Large-Scale Data Analytics and Machine Learning

NVIDIA today announced a GPU-acceleration platform for data science and machine learning, with broad adoption from industry leaders, that enables even the largest companies to analyze massive amounts of data and make accurate business predictions at unprecedented speed. RAPIDS(TM) open-source software gives data scientists a giant performance boost as they address highly complex business challenges, such as predicting credit card fraud, forecasting retail inventory and understanding customer buying behavior. Reflecting the growing consensus about the GPU’s importance in data analytics, an array of companies is supporting RAPIDS – from pioneers in the open-source community, such as Databricks and Anaconda, to tech leaders like Hewlett Packard Enterprise, IBM and Oracle.
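
As a rough illustration of the programming model, here is a minimal cuDF sketch; it assumes a CUDA-capable GPU with RAPIDS installed, and the file name and column names are invented placeholders rather than part of any announced demo.

```python
import cudf  # RAPIDS GPU DataFrame library (requires an NVIDIA GPU)

# Load a CSV and aggregate entirely on the GPU with a pandas-like API.
# "transactions.csv" and its columns are hypothetical placeholders.
df = cudf.read_csv("transactions.csv")
fraud_rate = df.groupby("merchant_category")["is_fraud"].mean()
print(fraud_rate.sort_values(ascending=False).head(10))
```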

David Bader on Real World Challenges for Big Data Analytics

In this video from PASC18, David Bader from Georgia Tech summarizes his keynote talk on Big Data Analytics. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms, and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams.

Massive-Scale Analytics Applied to Real-World Problems

In this keynote video from PASC18, David Bader from Georgia Tech presents: Massive-Scale Analytics Applied to Real-World Problems. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams.

The PlayStation Supercomputer

By Sebastian Moss In 1999, Sony was ascendant. Its video gaming business was its most profitable division, the PlayStation 2 was the world’s most successful console, and hopes were high that the successor would prove even more popular. To achieve this, the designers believed they would have to make the PlayStation 3 the most powerful console possible, with its own custom microprocessor. Sony turned to IBM and realized that “there was a potential synergy between the consumer oriented technology that Sony worked on, and the more business and data center oriented technology that IBM worked on,” Peter Hofstee, distinguished researcher at IBM, told DCD.

Keynote Speaker at Lehigh University 125th Anniversary of EE

In April 2018, the ECE department at Lehigh University celebrated a huge milestone: the 125th Anniversary of the department! Alumni, students, guests and faculty members all showed their support in our one-day celebration. Many thanks to our keynote speakers who took time out of their day to talk to our department about the “Future Directions of ECE”, “Electrical Engineering: Future in Industry & Areas”, and “How can we Improve ECE at Lehigh”.

"I Lost 50 Pounds Making One Simple Change"

By Jen Babakhan More than one-third of Americans are obese: We’re facing a national crisis, and solutions are in short supply. Here’s how one man turned a personal tracker into weight-loss success, one step at a time. Image courtesy of David Bader Even though David Bader had followed a vegan diet since his early twenties, his life-long struggle with weight remained. There are many great reasons for going vegan, but Bader, a 48-year-old Georgia Tech professor, couldn’t keep the pounds off eating that way.

David Bader from Georgia Tech Joins PASC18 Speaker Lineup

Today PASC18 announced that this year’s Public Lecture will be delivered by David Bader from Georgia Tech. Dr. Bader will speak on Massive-Scale Analytics Applied to Real-World Problems. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams.

CSE Explores the Boundaries of HPC, Data Science, and India

As 2017 drew to a close, several faculty and alumni from the School of Computational Science and Engineering (CSE) traveled across the globe to attend two conferences in India – The 24th IEEE International Conference on High Performance Computing (HiPC2017), held in Jaipur, and the IEEE International Conference on Machine Learning and Data Science (ICMLDS2017) in Greater Noida. CSE Professor and Associate Chair Ümit Çatalyürek served as the program chair of HiPC2017, which, coincidentally, was where he presented his first peer-reviewed paper in 1995.

5G will enable a new era of opportunity, says David Bader

Recently, David Bader visited India to give a keynote talk at IEEE International Conference on Machine Learning and Data Science at Bennett University, Greater Noida. David A. Bader is Professor and Chair of the School of Computational Science and Engineering, College of Computing, at Georgia Institute of Technology. He is a fellow of the IEEE and AAAS and served on the White House’s National Strategic Computing Initiative (NSCI) panel. He was in conversation with Prof.

International Conference on Machine learning and Data Science

The Computer Science and Engineering Department (CSE) of Bennett University organised its first International Conference on Machine Learning and Data Science at the University campus in Greater Noida. Touted as the biggest conference in this space, it promises to further boost India’s crucial leap in embracing the AI revolution. The conference witnessed an overwhelming response from organizations, researchers and academicians. There were thought-provoking deliberations on the new wave of technologies and their impact in the world of big data, machine learning and AI.

Bennett University, IEEE hold global meet on machine learning and data science

The conference coincided with the recent MoU signed between Bennett University and Nvidia, making it the first educational institute in the country to get the DGX-1V100 AI supercomputer.

NEW DELHI: Bennett University’s computer science and engineering (CSE) department held its first international conference on machine learning and data science at its Greater Noida campus, where researchers and academicians deliberated on the new wave of technologies and their impact on the world of big data, machine learning and artificial intelligence (AI).

15th Graph500 List Reveals Top Machines for Running Data Applications

The 15th Graph500 list – which ranks supercomputers based on how quickly they can build knowledge from massive-scale data sets – was released Nov. 15 at Supercomputing 2017 (SC17), with Japan’s K-Computer defending its number-one spot, which it has now held for several years in a row. The Graph500 is recognized as a leading indicator of high-performance computing (HPC) development and investment globally and often reveals trends regarding new technologies used in the machines.
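
The list’s core kernel is breadth-first search over a huge synthetic graph, scored in traversed edges per second (TEPS). Here is a toy, single-machine illustration of how such a score could be computed; real Graph500 runs use graphs with billions of edges under carefully specified rules, so everything below is simplified.

```python
import time
from collections import deque

def bfs_teps(adj, source):
    """Run breadth-first search on an adjacency-list graph and
    report a toy traversed-edges-per-second (TEPS) score."""
    visited = {source}
    queue = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            edges_traversed += 1  # count every edge the search inspects
            if v not in visited:
                visited.add(v)
                queue.append(v)
    elapsed = time.perf_counter() - start
    return edges_traversed / elapsed

# Tiny example graph; Graph500 uses synthetic Kronecker graphs at massive scale.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(f"{bfs_teps(adj, 0):.0f} TEPS")
```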

Georgia Tech Awarded IARPA Contract to Evaluate Emu Technology System

Georgia Tech’s School of Computational Science and Engineering finalized a $662,525 contract for a one-year Intelligence Advanced Research Projects Activity (IARPA) grant. CSE Senior Research Scientist Jason Riedy leads the project titled, Evaluating Memory-Centric Architectures for High Performance Data Analysis, while working alongside CSE Chair David Bader and School of Computer Science Professor Tom Conte. The IARPA funding will be used to assess the capabilities of one of the first commercially available, memory-centric architectures.

Georgia Tech Partners with USC for $6.8 Million DARPA Project

(Georgia Tech, Atlanta, GA, 10 July 2017) The Georgia Institute of Technology and the University of Southern California Viterbi School of Engineering have been selected to receive Defense Advanced Research Projects Agency (DARPA) funding under the Hierarchical Identify Verify Exploit (HIVE) program. Georgia Tech and USC are to receive total funding of $6.8 million over 4.5 years to develop a powerful new data analysis and computing platform. Many security and consumer applications – including identifying and zeroing in on erratic driving behavior of vehicles in real-time, recognizing terrorist cells through patterns of communication, protecting critical infrastructure facilities such as power, communication and water grids, or even predicting the spread of a cyber attack – can be modeled using the graph data-analysis formalisms envisioned in the HIVE program.

Forward-Looking Panel on the Future of CSE

By Karthika Swamy Cohen Where is computational science and engineering (CSE) headed? What new “grand challenges” will stimulate progress? How will the field benefit from machine learning and scientific computing? How will it drive new application areas like computational biology and medicine, computational geoscience, and materials science? What opportunities and challenges may we encounter in extending CSE to new subjects such as social network analysis, cybersecurity and the social sciences?

New Approach Seeks to Automate Parallel Programming

by George Leopold

Artificial intelligence researchers are leveraging an emerging divide-and-conquer computing approach called “dynamic programming” to greatly accelerate the process of solving problems ranging from genomic analysis to cyber security. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University in New York reported at a computing conference earlier this month that their approach can be used to “parallelize” algorithms that leverage dynamic programming.
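
Classic dynamic programming fills a table of subproblem solutions, and the dependencies between table cells are precisely what makes automatic parallelization hard. Below is a minimal serial example (longest common subsequence, a staple of genomic analysis); it is a textbook illustration, not the MIT/Stony Brook system itself.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b.

    Each cell dp[i][j] depends on dp[i-1][j-1], dp[i-1][j], and dp[i][j-1];
    these cross-cell dependencies are what make naive parallelization hard.
    """
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

print(lcs_length("GATTACA", "GCATGCU"))  # 4 (one common subsequence: "GATC")
```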

Georgia Tech Professor Helps Set White House’s HPC Agenda

Georgia Tech Professor David Bader, chair of the School of Computational Science and Engineering (CSE), participated in the National Strategic Computing Initiative (NSCI) Anniversary Workshop in Washington D.C., held July 29. Created in 2015 via an Executive Order by President Barack Obama, the NSCI is responsible for ensuring the United States continues leading in high-performance computing (HPC) in coming decades. Bader, a renowned HPC expert, leads several initiatives and projects that connect to the NSCI’s core objectives.

2016 IBM Faculty Awards

Congratulations to the following faculty who have been selected for IBM Faculty Awards granted in 2016.

Big Data / Analytics: Bader, David, Georgia Institute of Technology, Optimizing Graph Analytics for Cognitive Computing

https://www.research.ibm.com/university/awards/faculty_innovation_2016.shtml

Latest Graph500 Ranking of Fastest Supercomputers Released by Leading Universities at SC15

The eleventh Graph500 list was released today at the Supercomputing 2015 conference (SC’15), with Japan’s K-Computer maintaining its top spot for the second consecutive time. Fujitsu, IBM and China’s National University of Defense Technology dominated the top 10, with the BlueGene/Q architecture holding eight of the top 10 positions. Other notable entries are:

Largest problem: DOE/NNSA/LANL Sequoia at Scale 41

Best single-node performance: Institute of Statistical Mathematics ismuv2k (SGI UV 2000), ranking #40 on the list with 175 GE/s

Largest single-node problem: UV 2000 (#79), 19.

Georgia Tech Students Named Finalists for Best Student Research Paper at SC15

Four doctoral students comprising two research projects in the School of Computational Science & Engineering at the Georgia Institute of Technology are finalists for “Best Student Research Paper” at Supercomputing ’15, the International Conference for High Performance Computing, Networking, Storage and Analysis. For the first project, the finalists developed a fast algorithm; in the second, they created a GPU-based framework that can process large graphs exceeding a device’s internal memory capacity.

SC15 Panel Line-Up for November 18

As data intensive science emerges, the need for high performance computing (HPC) to converge capacity and capabilities with Big Data becomes more apparent and urgent. Capacity requirements have stemmed from science data processing and the creation of large scale data products (e.g., earth observations, Large Hadron Collider, square-kilometer array antenna) and simulation model output (e.g., flight mission plans, weather and climate models). Capacity growth is further amplified by the need for more rapidly ingesting, analyzing, and visualizing voluminous data to improve understanding of known physical processes, discover new phenomena, and compare results.

White House National Strategic Computing Initiative Workshop Proceedings

The White House National Strategic Computing Initiative Workshop Proceedings is a summary of the workshop held at the Hilton McLean Tysons Corner in McLean, Virginia on October 20-21, 2015. The Networking and Information Technology Research and Development (NITRD) program National Coordination Office (NCO) has edited and released this report in support of the White House National Strategic Computing Initiative (NSCI). This workshop was made possible by the financial and organizational support of the U.

What Exactly Is a 'Flop,' Anyway?

By Michael Byrne

Earlier this week, President Obama signed an executive order creating the National Strategic Computing Initiative, a vast effort at creating supercomputers at exaflop scales. Cool. An exaflop-scale supercomputer is capable of 10^18 floating point operations per second (FLOPS), which is a whole lot of FLOPS. But this raises the question: What’s a floating point operation and why does it matter so much to supercomputing when our mere mortals of computers are rated according to the simple-seeming notion of clock speed?
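
For a sense of scale, a machine’s theoretical peak is commonly estimated as cores × clock rate × floating point operations per cycle. The figures below are invented for illustration and do not describe any real system.

```python
# Back-of-the-envelope peak-FLOPS estimate (all numbers illustrative).
nodes = 50_000
cores_per_node = 64
clock_hz = 2.5e9          # 2.5 GHz
flops_per_cycle = 16      # e.g., wide SIMD units issuing fused multiply-adds
peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"{peak_flops:.2e} FLOPS")  # 1.28e+17, still ~8x short of an exaflop
```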

Better Decisions through Big Data

by David Bader, IEEE Computer Society member, and Chair, School of Computational Science & Engineering at Georgia Institute of Technology Companies and governments increasingly rely on ‘big data’ to operate efficiently and competitively. Analytics and security must keep pace. What research underpins the latest big data-enabled advances? Good decisions are well-informed decisions, of late powered by a diversity of data. Big data is creating profound business and social opportunities in nearly every field because of its ability to speed discovery and customize solutions.

World’s Best Supercomputers Ranked by Georgia Tech Expert

Two highly anticipated lists of the world’s fastest supercomputers were released this week, and Georgia Tech’s David Bader was behind one of them. The lists – Top 500 and Graph 500 – are published twice each year to coincide with supercomputer conferences, and are closely watched as indicators of development and investment in high-performance computing worldwide. The lists can also indicate trends in which technologies are popular in the machines. Bader, chair of the School of Computational Science and Engineering, co-leads Graph 500 – a high-performance computing benchmark focused on data-intensive applications.

IBM, Nvidia rev their HPC engines in next-gen supercomputer push

By Katherine Noyes, Senior U.S. Correspondent, IDG News Service Hard on the heels of the publication of the latest Top 500 ranking of the world’s fastest supercomputers, IBM and Nvidia on Monday announced they have teamed up to launch two new supercomputer centers of excellence to develop the next generation of contenders. Created as part of IBM’s supercomputing contract with the U.S. Department of Energy, the new centers will be located at Lawrence Livermore National Laboratory and Oak Ridge National Laboratory and will focus on development of the forthcoming Summit and Sierra supercomputer systems, which are expected to be delivered in 2017.

Accenture Awards 11 Research Grants to Leading Universities to Promote Greater R&D Collaboration, Accelerate Pace of Innovation

Accenture (NYSE:ACN) has awarded 11 research grants to top universities around the world to significantly broaden and deepen the relationships between Accenture’s technology research and development (R&D) groups and leading university researchers. The grant program helps to support the ground-breaking efforts of leading university research teams, which will be invited to work in collaboration with researchers from the Accenture Technology Labs on R&D projects that are of strategic importance to the technology industry and Accenture’s enterprise clients.

David Bader joins Accelogic's Advisory Board

To reach our goals, we have assembled a Top R&D Team with several of the world’s most renowned scientists in the fields of software engineering, high-performance computing, algorithm design, and hardware systems.

Dr. David Bader, Professor and Chair, School of Computational Science and Engineering, Georgia Tech

Dr. Bader is a world authority on graph-theoretic frameworks and author of STINGER, one of the most renowned graph platforms. He is Chair of the School of Computational Science and Engineering, College of Computing, at Georgia Institute of Technology, and Executive Director of High Performance Computing.

How to find the needle in a big data haystack

David Bader, professor and chair of the School of Computational Science and Engineering at Georgia Tech, delivered the keynote address during the recent “International Opportunities in Cloud Computing & Big Data” conference. Bader, executive director of High Performance Computing, explains why graph analysis is crucial for big data. Social networks use big data to spot influencers and target advertising to the right user. Biologists need data to understand common drug interactions and design better medication.

HPC and Big Data: A View from the Corner of Science and Industry

David A. Bader, Chair of the School of Computational Science and Engineering and Executive Director of HPC, Georgia Institute of Technology

With the last decade’s rapid advances in computational power and the resulting explosion of available data, two major areas of computational application—business and scientific research—are now converging. And this digitally fluid world of data science and application is providing remarkable value for industry. High performance computing, once an area reserved for technical or scientific application, has for some time now affected the realm of business and enterprise computing.

Bader gives Triangle Computer Science Distinguished Lecture

The computer science departments at Duke University, North Carolina State University, and the University of North Carolina at Chapel Hill have joined forces to create the Triangle Computer Science Distinguished Lecturer Series. The series, which began in the 1995-1996 academic year, has been made possible with a number of grants from the U.S. Army Research Office, rotated between the departments. Bader’s lecture was titled Massive-Scale Streaming Analytics.

Surviving Cyberspace

Making room for privacy in cybersecurity: In this video, Annie Antón explains why she disagrees with the Director of the FBI and the Attorney General on the topic of cell phone encryption. Balancing individual privacy with national security needs will continue to be a major challenge in the years to come. Antón has been working on these dual objectives since the 1990s, when she was a graduate student at Georgia Tech.

Why You’d Never Know Atlanta is Tops for Data Centers

By Jim Burress

Office parks, like this one in DeKalb County, are one place data storage centers are housed. Chances are you’d never know it, though. Companies prefer to keep their data center locations top-secret. Big data needs big closets.

Google Can Now Describe Your Cat Photos

By Rolfe Winkler

Image caption: Google’s trained computers recognized that this is a photo of “two pizzas sitting on top of a stove top oven.” (Photo: Google)

Google’s computers learned to recognize cats in photos. Now, they’re learning to describe cats playing with a ball of string. Computer scientists in the search giant’s research division, and a separate team working at Stanford University, independently developed artificial-intelligence software that can decipher the action in a photo, and write a caption to describe it.

Dynamic graph analytics tackle social media and other big data

By Rick Robinson A GTRI team consisting of (left to right) Dan Campbell, Rob McColl, Jason Poovey, and David Ediger is bringing graph analytics to bear on a range of data-related challenges including social networks, surveillance intelligence, computer-network functionality, and industrial control systems. Today, petabytes of digital information are generated daily by such sources as social media, Internet activity, surveillance sensors, and advanced research instruments. The results are often referred to as “big data” – accumulations so huge that highly sophisticated computer techniques are required to identify useful information hidden within.

George Michael HPC Fellowships Announced

ACM (the Association for Computing Machinery) and IEEE Computer Society have named Harshitha Menon of the University of Illinois, Urbana Champaign (UIUC) and Alexander Breuer of Technische Universität München (TUM) as recipients of 2014 ACM/IEEE Computer Society George Michael Memorial HPC Fellowships. Menon and Breuer will each receive a $5,000 honorarium, plus travel and registration to receive the award at SC14 during the awards ceremony in November. A doctoral candidate advised by Laxmikant V.

Researchers working on computers which can last 75 times longer

Researchers at Georgia Tech are helping the Defense Advanced Research Projects Agency (DARPA) develop an energy-efficient computer that can last 75 times longer than present-day computers. The computer is being developed as part of an initiative called Power Efficiency Revolution for Embedded Computing Technologies (PERFECT), which is still in its elementary stages. The success of this project could result in smaller and more efficient systems that could be used in aircraft and ground vehicles, as well as by soldiers on the ground.

GT Computing Flexes Power at Parallel Computation Symposium

Georgia Tech is putting forth a dominating presence at one of the premier parallel computation symposia this week in Phoenix as it sends 30 of its professors and researchers to present nine papers, two of which earned “best paper” honors. The Institute of Electrical and Electronics Engineers (IEEE) International Parallel & Distributed Processing Symposium (IPDPS), held from May 19 to 23 in Phoenix, is the flagship activity of the IEEE Computer Society’s Technical Committee on Parallel Processing (TCPP), representing a unique international gathering of computer scientists from around the world.

ECE Alumnus Bader Promoted to Chair at Georgia Tech

Professor David Bader, ECE alumnus and founder of ECE GSA, is named Chair of Georgia Tech’s CSE. (Photo courtesy of Raftermen Photography)

University of Maryland Electrical and Computer Engineering alumnus David Bader (Ph.D., ’96) was recently promoted to Chair of Georgia Tech’s School of Computational Science and Engineering (CSE). He will assume his role beginning July 2014. “I’m thrilled that we found within our own ranks a candidate as viable as David to take the helm of the School of CSE,” said Zvi Galil, John P.

David Bader Chosen to Lead Georgia Tech's School of Computational Science and Engineering

Following a national search for new leadership of its School of Computational Science and Engineering (CSE), Georgia Tech’s College of Computing has selected its own David A. Bader, a renowned leader in high-performance computing, to chair the school. Bader, a professor in the School of CSE and executive director of the High-Performance Computing Lab, succeeds Regents’ Professor Richard Fujimoto, who has served in the role since 2007 and through CSE’s elevation to “school” status in 2010.

College of Computing Picks Bader to Lead School of CSE

Following a national search for new leadership of its School of Computational Science and Engineering (CSE), Georgia Tech’s College of Computing has selected its own David A. Bader, a renowned leader in high-performance computing, to chair the school. Bader, a professor in the School of CSE and executive director of the High-Performance Computing Lab, succeeds Regents’ Professor Richard Fujimoto, who has served in the role since 2007 and through CSE’s elevation to “school” status in 2010.

Alumnus 'HPC rock star' to lead Ga. Tech's School of CSE

Georgia Tech’s College of Computing has selected David A. Bader ‘90 ‘91G, a renowned leader in high-performance computing, to chair its School of Computational Science and Engineering (CSE). A professor in the School of CSE and executive director of its High-Performance Computing Lab, Bader will assume his new role in July 2014. Bader, a 1987 graduate of Liberty High School in Bethlehem, PA, attended Lehigh University, where he earned a bachelor’s in computer engineering in 1990 and a master’s in electrical engineering in 1991.

For Google, a leg up in the artificial intelligence arms race

By Verne Kopytoff

Google’s executives have long dreamed of solving one of the technology industry’s biggest riddles. How do you predict what people want — hockey scores or new Ugg boots, for example — before they even ask for it? Reading users’ minds, or at least seeming to, would make Google’s products that much faster and more convenient. It could also help the company fend off rivals. Last week, Google (GOOG) took its biggest step yet to ramp up its predictive powers.

HPCwire Reveals the 2014 People to Watch

HPCwire, the leader in world-class journalism covering high performance computing (HPC), announced its ‘HPCwire People to Watch 2014’ list today. The annual list comprises an elite group of some of the brightest minds in HPC. The annual selections are made following an extensive review process by the HPCwire editorial and executive staff along with guidance from industry analysts and luminaries across the HPC community. “It’s our great pleasure each year to step back for a moment and be truly inspired by the intriguing personalities in the high performance computing space, and to honor what they do,” said Alan El Faye, President of Tabor Communications, Inc.

David Bader Selected as One of HPCwire’s “People to Watch” in 2014

The College of Computing at the Georgia Institute of Technology, one of only two major universities to house its computing program within a college of its own, today announced that David A. Bader, professor and executive director of High Performance Computing, has been selected as one of HPCwire’s “People to Watch” in 2014. The 2014 list is a compilation of the 16 best and brightest minds from academia, science, and technology whose contributions in high performance computing (HPC) have the potential to profoundly impact the world this year and beyond.

David Bader discusses High Performance Computing at Georgia Tech

The Executive Director of High Performance Computing at Georgia Tech, David Bader, discusses High Performance Computing. https://youtu.be/Jbd_fW5l6ls

Opening Up the Accelerator Advantage

By Tiffany Trader Researchers at Georgia Institute of Technology and University of Southern California will receive nearly $2 million in federal funding for the creation of tools that will help developers exploit hardware accelerators in a cost-effective and power-efficient manner. The purpose of this three-year NSF grant is to bring formerly niche supercomputing capabilities into the hands of a more general audience to help them achieve high-performance for applications that were previously deemed hard to optimize.

Creating an app store for multi-core

Until now the niche area of supercomputing was available only to those in national research laboratories or leading research universities. That’s about to change. Researchers at the Georgia Institute of Technology and University of Southern California recently received a nearly $2 million federal grant to develop tools to assist all developers in using hardware accelerators productively and effectively. The goal of the three-year grant from the National Science Foundation is to bring capability to a general audience, including those using tablets, smartphones and other ubiquitous devices, said David Bader, the lead principal investigator.

Bader’s Love for Teaching Goes Way Back

It’s around 9:30 a.m. on a Thursday, and David Bader is explaining how trick-or-treating is connected to graph theory. “For example, we get together with our friends to figure out which houses to go to for optimal candy rewards — and optimization is one component of graph theory,” said Bader, professor in the College of Computing and executive director for High Performance Computing. “And when you get home, you sort your candy — yet another concept.

David A. Bader elected to IEEE Computer Society's Board of Governors

DAVID ALAN GRIER
2013 President
George Washington University
Elliott School of International Affairs
1957 E Street NW, Suite 401
Washington, DC 20052 USA

G. P. “Bud” Peterson
President
Georgia Institute of Technology

Dear Dr. Peterson,

It is with great pleasure that I inform you that David A. Bader has been elected by the IEEE Computer Society membership to the Society’s Board of Governors. Dr. Bader will begin his 3-year term on 1 January 2014.

IEEE Computer Society election results

Thomas M. Conte, first vice president for Publications and professor of Computer Science and Electrical and Computer Engineering at Georgia Institute of Technology, has been voted IEEE Computer Society 2014 president-elect. Conte, who will serve as 2015 president, garnered 4,035 votes, compared with 2,418 cast for Roger Fujii, 2013-2014 IEEE Division VIII Director and President, Fujii Systems. The president oversees IEEE-CS programs and operations and is a nonvoting member of most IEEE-CS program boards and committees.

David Bader named Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems

David Bader, a professor in the College of Computing at the Georgia Institute of Technology, has been named the new Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems starting in 2014. http://news.computer.org/ef1/preview_campaign.php?lf1=755533786d987916105340c10442888

HPCwire Live! Atlanta’s Big Data Kick Off Week Meets HPC: What does the future hold for HPC?

Join HPCwire Editor Nicole Hemsoth and Dr. David Bader from Georgia Tech as they take center stage on opening night at Atlanta’s first Big Data Kick Off Week, filmed in front of a live audience. Nicole and David look at the evolution of HPC and today’s big data challenges, discuss real-world solutions, and reveal their predictions. Exactly what does the future hold for HPC?

Speech Transcript

Moderator: I am really pleased to introduce our next speaker tonight and she is Nicole Hemsoth.

Type 'S' for Suspicious

By Joshua E. Keating Jung Yeon-Je / AFP / Getty Images Government-funded trolls. Decoy documents. Software that identifies you by how you type. Those are just a few of the methods the Pentagon has pursued in order to find the next Edward Snowden before he leaks. The small problem, military-backed researchers tell Foreign Policy, is that every spot-the-leaker solution creates almost as many headaches as it’s supposed to resolve.

Six Can’t Miss Sessions for ISC’13

By Nicole Hemsoth Outside of the main attractions, including the keynote sessions, vendor showdowns, Think Tank panels, BoFs, and tutorial elements, the International Supercomputing Conference has balanced its five-day agenda with some striking panels, discussions and topic areas that are worthy of some attention. We scoured the agenda in search of the sessions we thought would strike the most resonant chords with the diverse mix of user, vendor and researcher attendees and compiled them here.

Understanding the Human Condition with Big Data and HPC

In this guest feature from Scientific Computing World, Georgia Institute of Technology’s David A. Bader discusses his upcoming ISC’13 session, Better Understanding Brains, Genomes & Life Using HPC Systems. Supercomputing at ISC has traditionally focused on problems in areas such as the simulation space for physical phenomena. Manufacturing, weather simulations and molecular dynamics have all been popular topics, but an emerging trend is the examination of how we use high-end computing to solve some of the most important problems that affect the human condition.

Georgia Tech has High Participation in IPDPS2013 Technical Program

Georgia Tech’s leadership in education and research came through clearly at the 27th IEEE International Parallel & Distributed Processing Symposium (IPDPS2013) in Cambridge, MA the week of May 19-24, 2013. IPDPS accepted only 22% of submissions, and Georgia Tech’s Schools of Computational Science and Engineering, Computer Science, and Electrical and Computer Engineering participated in nine of the total 108 papers and one of the 23 PhD posters. Four of the associated and well-known workshops contained Georgia Tech research.

The human condition

Georgia Institute of Technology’s David A. Bader discusses his upcoming ISC’13 session, Better Understanding Brains, Genomes & Life Using HPC Systems. Supercomputing at ISC has traditionally focused on problems in areas such as the simulation space for physical phenomena. Manufacturing, weather simulations and molecular dynamics have all been popular topics, but an emerging trend is the examination of how we use high-end computing to solve some of the most important problems that affect the human condition.

TCPP Outstanding Service Award

This is to recognize those individuals in the broader community who have had major professional service roles in conferences (TCPP and others), journals, various committees, major events, community resources, and international outreach, and those who have had a major impact on the community at large in possibly other ways. 2013 Winner: David A. Bader, Georgia Institute of Technology https://tc.computer.org/tcpp/awards/

CRA Announces Four New Board Members

There are four new additions to the CRA Board of Directors. David A. Bader, Georgia Institute of Technology, is the new IEEE-CS Representative. Julia Hirschberg, Columbia University, and P. Takis Metaxas, Wellesley College, replace members who resigned, and their terms end June 30, 2014. Henry Kautz, University of Rochester, is the new AAAI representative. David Bader David A. Bader is a Full Professor in the School of Computational Science and Engineering, College of Computing, at Georgia Institute of Technology, and Executive Director for High Performance Computing.

The Era of Big Analytics

Did You Know Lehigh alumni are engaged in high performance computing and data analytics in a variety of ways. Here are a few examples: David A. Bader ‘90, ‘92G is executive director of high performance computing at Georgia Tech, conducting research at the intersection of high performance computing, computational biology and genomics, and social network analysis. https://engineering.lehigh.edu/research/resolve/volume-2-2013

Emcien Corp. Taps Big Data Thought Leader Dr. David A. Bader for Advisory Board

Emcien Corp., a leading provider of pattern-based analytics solutions, announced today the addition of internationally renowned High-Performance Computing luminary Dr. David A. Bader to the Emcien Advisory Board, thereby lending his expertise to Emcien’s next generation of solutions designed for massive-scale analytics in real time in multiple sectors, including government, financial services, healthcare, medical research, and insurance. Dr. Bader is a recognized leader in designing large-scale parallel algorithms for data-intensive problems, such as social network analysis, and is known for his decades-long research and innovation in data-intensive computing.

Emcien Taps Big Data Leader for Advisory Board

Emcien Corp., a leading provider of pattern-based analytics solutions, announced today the addition of internationally renowned High-Performance Computing luminary Dr. David A. Bader to the Emcien Advisory Board, thereby lending his expertise to Emcien’s next generation of solutions designed for massive-scale analytics in real time in multiple sectors, including government, financial services, healthcare, medical research, and insurance. Dr. Bader is a recognized leader in designing large-scale parallel algorithms for data-intensive problems, such as social network analysis, and is known for his decades-long research and innovation in data-intensive computing.

Sequoia HPC tops new Graph 500 ranking of Big Data machines

By Joab Jackson The Graph 500 ranking of the most powerful supercomputers at handling Big Data workloads has been introduced to complement the Top500 HPC list. So while a new Cray supercomputer took first place on the Top500, it was another machine, Lawrence Livermore National Laboratory’s Sequoia, that proved to be the most adept at processing data intensive workloads on the Graph 500. Such differences in ranking between the two scales highlight the changing ways in which the world’s most powerful supercomputers are being used.

World's Most Powerful Big Data Machines Charted on Graph 500

By Joab Jackson, U.S. Correspondent, IDG News Service The Top500 is no longer the only ranking game in town: make way for the Graph 500, which tracks how well supercomputers handle big-data-styled workloads. So while a new Cray supercomputer took first place on the Top500, it was another machine, Lawrence Livermore National Laboratory’s Sequoia, that proved to be the most adept at processing data intensive workloads on the Graph 500.

DARPA awards Georgia Tech energy-efficient high-performance computing contract

Georgia Tech has received $561,130 for the first phase of a negotiated three-phase $2.9 million cooperative agreement contract from the U.S. Defense Advanced Research Projects Agency (DARPA) to create the algorithmic framework for supercomputing systems that require much less energy than traditional high-speed machines, enabling devices in the field to perform calculations that currently require room-sized supercomputers. Awarded under DARPA’s Power Efficiency Revolution for Embedded Computing Technologies (PERFECT) program, the negotiated cooperative agreement contract (with options out to five years) is one piece of a national effort to increase the computational power efficiency of “embedded systems” by 75-fold over the best current computing performance in areas extending beyond traditional scientific computing.

DARPA Awards GA Tech Energy-Efficient HPC Contract

Georgia Tech has received $561,130 for the first phase of a negotiated three-phase $2.9 million cooperative agreement contract from the U.S. Defense Advanced Research Projects Agency (DARPA) to create the algorithmic framework for supercomputing systems that require much less energy than traditional high-speed machines, enabling devices in the field to perform calculations that currently require room-sized supercomputers. Awarded under DARPA’s Power Efficiency Revolution for Embedded Computing Technologies (PERFECT) program, the negotiated cooperative agreement contract (with options out to five years) is one piece of a national effort to increase the computational power efficiency of “embedded systems” by 75-fold over the best current computing performance in areas extending beyond traditional scientific computing.

Who’s the Most Influential in a Social Graph?

Georgia Tech researchers say they have developed an algorithm that quickly determines betweenness centrality for streaming graphs. They say the algorithm also can identify influencers as information changes within a network. “Our algorithm stores the graph’s prior centrality data and only does the bare minimal computations affected by the inserted edges,” says Georgia Tech professor David Bader. In some situations, Bader says the software can compute betweenness centrality more than 100 times faster than conventional methods.
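
For readers unfamiliar with the metric, here is a minimal sketch of what betweenness centrality measures on a toy graph, using the standard static routine in the networkx library. It is illustrative only; the Georgia Tech streaming algorithm described above instead updates stored scores incrementally as edges arrive, rather than recomputing from scratch.

```python
# Illustrative only: static betweenness centrality on a toy graph via
# networkx. The streaming algorithm described in the article updates
# prior scores incrementally instead of recomputing everything.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("gate", "controller"), ("fuel", "controller"),
    ("crew", "controller"), ("pilot", "controller"),
    ("gate", "crew"),
])

# Score each vertex by the fraction of all shortest paths passing through it.
scores = nx.betweenness_centrality(G)
print(max(scores, key=scores.get))  # -> "controller"
```

On this toy graph the "controller" vertex lies on nearly every shortest path between the other vertices, so it receives the highest score, mirroring the air-traffic-controller analogy used in the articles below.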

Georgia Tech Develops New Graph Algorithm

At an airport, many people are essential for planes to take off. Gate staffs, refueling crews, flight attendants and pilots are in constant communication with each other as they perform required tasks. But it’s the air traffic controller who talks with every plane, coordinating departures and runways. Communication must run through her in order for an airport to run smoothly and safely. In computational terms, the air traffic controller is the “betweenness centrality,” the most connected person in the system.

Who’s the Most Influential in a Social Graph?

At an airport, many people are essential for planes to take off. Gate staffs, refueling crews, flight attendants and pilots are in constant communication with each other as they perform required tasks. But it’s the air traffic controller who talks with every plane, coordinating departures and runways. Communication must run through her in order for an airport to run smoothly and safely. In computational terms, the air traffic controller is the “betweenness centrality,” the most connected person in the system.

How researchers are letting us uncover secrets in social data

By Derrick Harris It’s not easy work turning the Mayberry Police Department into the team from C.S.I., or turning an idea for a new type of social network analysis into something like Klout on steroids, but those types of transformations are becoming increasingly possible. The world’s universities and research institutions are hard at work figuring out ways to make the mountains of social data generated every day more useful and, hopefully, to make us realize there’s more to social data than just figuring out whose digital voice is the loudest.

Data Analysts Seek to Make Social Media More Useful

By Derrick Harris It’s not easy turning the Mayberry Police Department into the team from CSI, or turning an idea for a new type of social network analysis into something like Klout on steroids, but those types of transformations are becoming ever more realistic. The world’s universities and research institutions are hard at work figuring out ways to make the mountains of social data generated every day more useful and, hopefully, make us realize there’s more to social data than just figuring out whose digital voice is the loudest.

How DARPA Does Big Data

By Nicole Hemsoth The world lost one of its most profound science fiction authors in the early eighties, long before the flood of data came down the vast virtual mountain. It was a sad loss for literature, but it also bore a devastating hole in the hopes of those seeking a modern fortuneteller who could so gracefully grasp the impact of the data-humanity dynamic. Philip K. Dick foresaw a massively connected society—and all of the challenges, beauties, frights and potential for invasion (or safety, depending on your outlook).

Your Laptop Can Now Analyze Big Data

By John Pavlus Computer scientists from Carnegie Mellon University have devised a framework for running large-scale computations for tasks such as social network or Web search analysis efficiently on a single personal computer. The software could help developers working on many modern tasks: for example, designing a new recommendation engine using social network connections. In order to make effective recommendations—“your friends liked this movie, so here is another movie that you haven’t seen yet, but you will probably like”—the software has to be able to analyze the connections between the members of a social network.
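
As a deliberately tiny illustration of the friend-based recommendation pattern described above, the sketch below ranks items a user’s friends liked but the user has not yet seen. The data and names are hypothetical; the single-machine framework’s real contribution is performing such graph computations at web scale, not this logic itself.

```python
# Hypothetical toy example of friend-based recommendation: rank items
# liked by friends that the user hasn't seen, by how many friends liked
# them. Web-scale systems compute this over billions of edges.
from collections import Counter

def recommend(user, friends, likes):
    """friends: {user: set of friends}; likes: {user: set of liked items}."""
    seen = likes.get(user, set())
    tally = Counter(
        item
        for friend in friends.get(user, ())
        for item in likes.get(friend, ())
        if item not in seen
    )
    return [item for item, _ in tally.most_common()]

friends = {"ann": {"bob", "cat"}}
likes = {"ann": {"Alien"},
         "bob": {"Alien", "Blade Runner"},
         "cat": {"Blade Runner"}}
print(recommend("ann", friends, likes))  # -> ['Blade Runner']
```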

Feds Look to Fight Leaks with 'Fog of Disinformation'

By Noah Shachtman (Photo: Air Force One waits for U.S. President Barack Obama in the fog at London’s Stansted Airport, Friday, April 3, 2009. AP / Kirsty Wigglesworth) Pentagon-funded researchers have come up with a new plan for busting leakers: Spot them by how they search, and then entice the secret-spillers with decoy documents that will give them away. Computer scientists call it “Fog Computing” – a play on today’s cloud computing craze.

Supercomputer Learns How to Recognize Cats

By Robert Gelber Search giant Google along with researchers from Stanford University have made an interesting discovery based on an X labs project. After being fed 10 million images from YouTube, a 16,000-core cluster learned how to recognize various objects, including cats. Earlier this week, the New York Times detailed the program, explaining its methods and potential use cases. The project began a few years ago when Google researchers planned to make a human brain simulation.

KOMO News Radio Interview of David Bader on Google's Neural Network Experiment

[KOMO NewsRadio Interview](KOMO Radio–GOOGLE NEURAL NETWORK EXPERIMENT.mp3) In this clip, Seattle’s KOMO NewsRadio interviews David Bader on Google’s neural network experiment that recognizes cats in YouTube videos.

Graph500 adds new measurement of supercomputing performance

Supercomputing performance is getting a new measurement with the Graph500 executive committee’s announcement of specifications for a more representative way to rate the large-scale data analytics at the heart of high-performance computing. An international team that includes Sandia National Laboratories announced the single-source shortest-path specification to assess computing performance on Tuesday at the International Supercomputing Conference in Hamburg, Germany. The latest benchmark “highlights the importance of new systems that can find the proverbial needle in the haystack of data,” said Graph500 executive committee member David A.
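
The kernel being specified is ordinary single-source shortest paths (SSSP); what the benchmark stresses is running it on enormous, irregular graphs. As a sketch of the kernel’s semantics only, not the benchmark’s reference implementation, plain Dijkstra looks like this:

```python
# A toy single-source shortest-path kernel -- the computation the new
# Graph500 specification rates at massive scale. Plain Dijkstra over an
# adjacency list; the benchmark's difficulty comes from graph size and
# irregular memory access, not from the algorithm's statement.
import heapq

def sssp(adj, source):
    """adj: {vertex: [(neighbor, weight), ...]}. Returns shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(sssp({0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0)]}, 0))
# -> {0: 0.0, 1: 2.0, 2: 3.0}
```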

Sandia Labs Details Update to Graph500 Benchmark

Supercomputing performance is getting a new measurement with the Graph500 executive committee’s announcement of specifications for a more representative way to rate the large-scale data analytics at the heart of high-performance computing. An international team that includes Sandia National Laboratories announced the single-source shortest-path specification to assess computing performance on Tuesday at the International Supercomputing Conference in Hamburg, Germany. The latest benchmark “highlights the importance of new systems that can find the proverbial needle in the haystack of data,” said Graph500 executive committee member David A.

How Many Computers to Identify a Cat? 16,000

(Photo: an image of a cat that a neural network taught itself to recognize. Credit: Jim Wilson / The New York Times) By John Markoff, June 25, 2012 MOUNTAIN VIEW, Calif. — Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain. There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

Georgia Tech takes leading role in IPDPS 2012

The 26th IEEE International Parallel & Distributed Processing Symposium (IPDPS) took place May 21-25, 2012, in Shanghai, China. Georgia Tech’s participation in the technical program included 23 faculty and students presenting six accepted papers, eight workshops, two Ph.D. Forum research posters, as well as roles in the sessions and invited panel talks. IPDPS, which drew more than 600 participants this year, is an international forum for engineers and scientists from around the world to present their latest research findings in all aspects of parallel computation.

Georgia Tech Recognized as Charter Member of New HPC500 Group at ISC ’12

Georgia Institute of Technology will be recognized as one of the 50 charter members of the HPC500, an exclusive community of High-Performance Computing user organizations at the vanguard of their areas of specialization, during the International Supercomputing conference, ISC’12, in Hamburg, Germany, June 17-21. Research in computational science and engineering at Georgia Tech spans many areas ranging from the development of new computational methods that may be applied to one or more fields in science and engineering to novel computational approaches specific to a particular domain such as biology or aerospace engineering.

IEEE to Award 14 Industry Professionals

The IEEE Computer Society will honor 14 prominent technologists at its annual awards dinner in Seattle, including the inventor of MATLAB, two dedicated computer science educators, a parallel programming languages expert, and innovators in the fields of data mining, distributed computing, database theory, computer standards, and other technologies. The ceremony will take place at 6 p.m. PT on Wednesday, June 13 at the Renaissance Hotel in Seattle, Washington. “For the IEEE Computer Society, the awards ceremony represents an opportunity to acknowledge these innovators for their sizable contributions to the field of computing,” said David A.

IEEE Computer Society to Honor 14 Technologists at Awards Dinner in Seattle

The IEEE Computer Society will honor 14 prominent technologists at its annual awards dinner in Seattle, including the inventor of MATLAB, two dedicated computer science educators, a parallel programming languages expert, and innovators in the fields of data mining, distributed computing, database theory, computer standards, and other technologies. The ceremony will take place at 6 p.m. PT on Wednesday, June 13 at the Renaissance Hotel in Seattle, Washington. “For the IEEE Computer Society, the awards ceremony represents an opportunity to acknowledge these innovators for their sizable contributions to the field of computing,” said David A.

CHASM Team Tackling Scalable Capability on Defense Application Requirements

Project CHASM, funded by the Defense Advanced Research Projects Agency (DARPA) as part of its Ubiquitous High Performance Computing (UHPC) program announced in 2010, is focusing UHPC architecture designs on achieving scalable capability on defense application requirements. The primary milestones in the project have been the creation of several forward-looking large scale benchmarks or “challenge problems” that model difficult computational problems that the Department of Defense will face in the upcoming decade, said Dan Campbell, the project’s research lead and a principal research engineer in the Georgia Tech Research Institute.

Bader to speak in University of Delaware's Distinguished Lecture Series

Can diseases in human populations be detected and prevented? How many currents can one electrical power grid handle? Who detects the community structure of large social networks? These are just a few of the emerging real-world graph problems that engineers currently examine. David A. Bader, from Georgia Institute of Technology, is a leading expert on massive-scale social networks, combinatorial optimization and parallel algorithms. He will discuss the real-world applications and scalability challenges in high performance computing on Thursday, May 3, at 3:30 p.

HPCwire Announces the 2012 People to Watch List

HPCwire, the most widely recognized and accessed news and information site covering the ecosystem of High Performance Computing (HPC), announced today that it has published the HPCwire 2012 ‘People to Watch’ list. The annual list comprises an elite group of community leaders selected from academia, government, business, and industry whose work is expected to impact and influence the future of High Performance Computing in 2012 and beyond. The annual selections are made following a stringent review process, in-depth discussions with the HPCwire editorial and publishing teams, guidance from industry analysts and past recipients, and with input solicited from industry luminaries across the HPC community.

University of Maryland Distinguished Alumni Award

In 2011, The University of Maryland’s Department of Electrical and Computer Engineering established the Distinguished Alumni Award to recognize alumni who have made significant and meritorious contributions to their fields. Alumni are nominated by their advising professors or the department chair, and the Department Council then approves their selection. In early May, the faculty and staff gather to honor the recipients at a luncheon. Dr. David Bader was inducted in the inaugural 2012 class of distinguished alumni.

AAAS Members Elected as Fellows

In November 2011, the AAAS Council elected 539 members as Fellows of AAAS. These individuals will be recognized for their contributions to science and technology at the Fellows Forum to be held on 18 February 2012 during the AAAS Annual Meeting in Vancouver, British Columbia. The new Fellows will receive a certificate and a blue and gold rosette as a symbol of their distinguished accomplishments. Presented by section affiliation, they are:

Pentagon under 24/7 DARPA surveillance

(Photo: US marines from the First Battalion, 5th Marines, Bravo Company browse the internet at Camp Mercury, Fallujah, Iraq, 25 April 2004. AFP Photo / Nicolas Asfouri) The Pentagon will soon be prying through the personal correspondence and computer files of US military personnel, thanks to a $9-million program that will put soldiers’ private emails under Uncle Sam’s microscope. Defense Advanced Research Projects Agency, or DARPA, has awarded the grant to five institutions led by Georgia Tech to help develop a system of spying on soldiers’ Internet and computer habits, a multi-million dollar investment that they say will serve as a preemptive measure to make sure “insider threats” can’t materialize in the military.

Pentagon to monitor military emails for "insider threats"

By Trent Nouveau The Pentagon is kicking off a new initiative to monitor military emails in an attempt to detect insider threats. Backed by DARPA, the project seeks to create “a suite of algorithms [capable of] detecting multiple types of insider threats by analyzing massive amounts of data – including email, text messages and file transfers – for unusual activity.” According to the Army Times, military officials hope to identify potential security threats like WikiLeaks suspect Pfc.

Four Georgia Tech Faculty Named AAAS Fellows

The American Association for the Advancement of Science (AAAS) has named four Georgia Tech professors as 2011 Fellows. AAAS is the world’s largest general scientific society, and the election as a Fellow is an honor bestowed upon AAAS members by their peers. Three of the new AAAS Fellows at Georgia Tech hail from the College of Engineering and one is on the faculty in the College of Computing. The Fellows were announced today in the journal Science and will be honored at the Fellows Forum, held Feb.

Pentagon to Develop Computer System to Prevent Another Wikileaks

The Pentagon is currently working to prevent another WikiLeaks situation within the department. Wired.com recently reported a group of military-funded scientists are developing a sophisticated computer system that can scan and interpret every keystroke, log-in and file upload over the Pentagon’s networks. The Defense Advanced Research Projects Agency awarded about $9 million for five institutions’ work on the project. Georgia Tech will be leading the institutions and will kick off an initiative called the “Proactive Discovery of Insider Threats Using Graph Analysis and Learning.”

Analysis tool would scan military e-mail for insider threats

A project funded through the Defense Advanced Research Projects Agency would identify the most serious insider threats to security by scanning all user e-mail messages, text messages, logins, file transfers and Web browsing on military networks, reports Katie Drummond at Wired’s Danger Room blog. The two-year Proactive Discovery of Insider Threats Using Graph Analysis and Learning (PRODIGAL) project is being conducted by a consortium of five institutions led by Georgia Tech.

Warning to Gossipy Grunts: DARPA's Eyeing Your E-Mails

(Photo: D. Sharon Pruitt / Flickr) If you don’t have anything nice to say, then definitely don’t say it, type it or text it over a military network. The Pentagon’s intent on weeding out “insider threats” – troops or other military personnel who might be disgruntled enough to (Wiki)leak some documents, or mentally unhinged enough to go on a shooting rampage. Now, military-funded scientists are plotting a computer system that’d boast unprecedented abilities to scan and interpret every keystroke, log-in and file upload performed over Pentagon networks.

SC11 Boasts High Georgia Tech Participation in Technical Program

Georgia Tech leveraged its strengths in high performance computing Nov 12-18 at the SC11 conference in Seattle, with a strong showing in the technical program and through faculty members building on current research partnerships and collaborations. More than 10,000 attendees in industry, academic research and government came to experience the latest developments in Big Data and high performance computing at SC11, the international conference for HPC, networking, storage and analysis.

Big Brothers, PRODIGAL Sons, and Cybersecurity

By Julian Sanchez I wrote on Monday that a cybersecurity bill overwhelmingly approved by the House Permanent Select Committee on Intelligence risks creating a significantly broader loophole in federal electronic surveillance law than its boosters expect or intend. Creating both legal leeway and a trusted environment for limited information sharing about cybersecurity threats—such as the identifying signatures of malware or automated attack patterns—is a good idea. Yet the wording of the proposed statute permits broad collection and disclosure of any information that would be relevant to protecting against “cyber threats,” broadly defined.

Georgia Tech Online Spying

(Photo captions: Georgia Tech DARPA ADAMS leaders; Georgia Tech DARPA ADAMS team; data collection environment.) When a soldier in good mental health becomes homicidal or a government employee abuses access privileges to share classified information, we often wonder why no one saw it coming. When looking through the evidence after the fact, a trail often exists that, had it been noticed, could have possibly provided enough time to intervene and prevent an incident.

Sifting through petabytes: PRODIGAL monitoring for lone wolf insider threats

By Darlene Storm Homeland Security Director Janet Napolitano said the “risk of ‘lone wolf’ attackers, with no ties to known extremist networks or grand conspiracies, is on the rise as the global terrorist threat has shifted,” reported CBSNews. An alleged example of such a lone wolf terror suspect is U.S. citizen Jose Pimentel, who learned “bomb-making on the Internet and considered changing his name to Osama out of loyalty to Osama bin Laden.

System would monitor feds for signs they're 'breaking bad'

By Kevin McCaney Researchers backed by the Defense Advanced Research Projects Agency are developing a system that could scan up to 250 million text messages, e-mail messages and file transfers a day in search of anomalies that could help identify insider threats or employees who might be about to “break bad.” The system, dubbed PRODIGAL, for Proactive Discovery of Insider Threats Using Graph Analysis and Learning, will combine graph processing, anomaly detection and relational machine learning on a massive scale to create a prototype Anomaly Detection at Multiple Scales (ADAMS) system, according to a release from the Georgia Institute of Technology, which is working with four other organizations on the project.
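
PRODIGAL’s actual graph-processing and relational-learning methods are not detailed in these reports. Purely as an illustration of the “short, ranked list” goal quoted below, this toy sketch flags users whose latest daily activity count deviates sharply from their own history:

```python
# Purely illustrative stand-in for insider-threat anomaly ranking:
# z-score a user's latest daily event count against their own history.
# PRODIGAL's real methods (graph analysis, relational learning) are far
# more sophisticated than this sketch.
from statistics import mean, stdev

def rank_anomalies(daily_counts, threshold=3.0):
    """daily_counts: {user: [events per day, ...]}.
    Returns (user, z) pairs whose latest day exceeds the threshold, ranked."""
    flagged = []
    for user, counts in daily_counts.items():
        history, today = counts[:-1], counts[-1]
        if len(history) < 2 or stdev(history) == 0:
            continue  # not enough baseline to judge
        z = (today - mean(history)) / stdev(history)
        if z > threshold:
            flagged.append((user, round(z, 1)))
    return sorted(flagged, key=lambda p: -p[1])

print(rank_anomalies({"alice": [10, 12, 11, 9, 250],
                      "bob": [5, 6, 5, 7, 6]}))  # -> [('alice', 185.5)]
```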

DARPA Scanner To Search E-mail For "Insider Threats"

By Claire Shefchik A new project from DARPA (Defense Advanced Research Projects Agency) and Georgia Institute of Technology will begin scanning e-mails, text messages and file transfers to pick up on what it calls “insider threats,” the university announced earlier this month. Analysts “may now have the bandwidth to investigate five anomalies per day out of thousands of possibilities. Our goal is to develop a system that will provide analysts for the first time a very short, ranked list of unexplained events that should be further investigated,” project investigator David A.

Could the U.S. Government Start Reading Your Emails?

By John Brandon, Fox News, Nov. 10, 2011 (Photo: the “data collection environment” at Georgia Tech, where researchers use a combination of massively scalable graph-processing algorithms and statistical analysis to scan through emails, text messages and IMs for “anomalies.” Rick Robinson / Georgia Tech) Cherie Anderson runs a travel company in southern California, and she’s convinced the federal government is reading her emails. But she’s all right with that.

HPCwire People to Watch 2012

David Bader, Full Professor in the School of Computational Science and Engineering, College of Computing, at Georgia Institute of Technology, and Executive Director for High Performance Computing Dr. Bader is a lead scientist in the DARPA Ubiquitous High Performance Computing (UHPC) program. He received his Ph.D. in 1996 from The University of Maryland, and his research is supported through highly competitive research awards, primarily from NSF, NIH, DARPA, and DOE.

Interview: DARPA's ADAMS Project Taps Big Data to Find the Breaking Bad

By Rich Brueckner, insideHPC In this video, Professor David Bader from Georgia Tech discusses his participation in the DARPA ADAMS project. The Anomaly Detection at Multiple Scales (ADAMS) program uses Big Data Analytics to look for cleared personnel that might be on the verge of “Breaking Bad” and becoming internal security threats. Recorded at SC11.

Analyzing Data To Pinpoint Rogue Insiders

By Robert Lemos, Contributing Editor The hunt for technology to identify malicious insiders took off in 2011 with the research arm of the Pentagon, the Defense Advanced Research Projects Agency (DARPA), offering up millions of dollars in grants to fund research. Earlier this month, for example, the Georgia Institute of Technology announced that DARPA had funded a collective effort by the school and four other organizations to create a suite of algorithms that turn disparate data feeds into real-time alerts of anomalous activity.

Announcing Our Newest Rock Star of HPC: David Bader

At insideHPC, we are pleased to announce that David Bader is our latest Rock Star of HPC. “While HPC tends to focus on compute-intensive problems, Big Data challenges require novel architectures for data-intensive computing. My group has been the first to parallelize and implement large-scale graph theoretic algorithms, which are quite a challenge because of the irregular memory accesses, little computation to overlap with these memory references, and fine grain synchronization.

Detecting Threats Before They Happen

“We never saw it coming.” Those words often fall from the lips of victims or witnesses to incidents when a person takes the law into their own hands. However, when the forensics are done in such cases, a trail often exists that, had anyone noticed it, could have provided enough time to intervene and prevent an incident. Potentially solving those mysteries could be a bit closer because researchers at the Georgia Institute of Technology are collaborating with scientists from four other organizations to develop new approaches for identifying these “insider threats” before an incident occurs.

Georgia Tech Helps to Develop System That Will Detect Insider Threats From Massive Data Sets

Researchers at the U.S. Defense Advanced Research Projects Agency (DARPA), the Army Research Office, and Georgia Tech are developing new approaches for identifying insider threats before a data breach occurs. The researchers are developing a suite of algorithms that can detect different types of insider threats by analyzing massive amounts of data for unusual activity. “Our goal is to develop a system that will provide analysts for the first time a very short, ranked list of unexplained events that should be further investigated,” says Georgia Tech professor David A.

Georgia Tech Helps to Develop System That Will Detect Insider Threats from Massive Data Sets

Researchers from Georgia Tech are helping to create a suite of algorithms that can detect multiple types of insider threats by analyzing massive amounts of data for unusual activity. The Georgia Tech research team includes (left-right) Erica Briscoe, Andy Register, David A. Bader, Richard Boyd, Anita Zakrzewska, Oded Green, Lora Weiss, Edmond Chow and Oguz Kaya. (Credit: Gary Meek) When a soldier in good mental health becomes homicidal or a government employee abuses access privileges to share classified information, we often wonder why no one saw it coming.

CODONiS Teaming with ISB to Meet Big Challenge: Managing Terabytes of Personal Biomedical Data

Seattle-based CODONiS, a provider of advanced computing platforms for life sciences and healthcare, has teamed up with scientists from the world-renowned Institute for Systems Biology, a nonprofit research organization in Seattle, to advance biomedical computing for future personalized healthcare. The results from this ground-breaking collaboration will be discussed at the “Personalized Healthcare Challenges for High Performance Computing” panel discussion being held at the SC11 Conference in Seattle on November 15, 2011.

Computing innovations to imitate, not replace, human brain

By Ellyne Phneah Innovations around computing are increasingly designed to imitate the human brain but the day artificial intelligence becomes “smarter” than human intelligence is a “long time” away, said experts. Leong Tze Yun, associate professor at National University of Singapore’s (NUS) School of Computing, said alternate computational models have been “actively pursued” in recent years. Emulating the human brain–which she described as the “ultimate and best computer”–is one of the main directions researchers have been moving toward.

CSI to Host Two-Day Workshop on Data-Intensive Computing, Graphs, and Combinatorics in Bio-Informatics, Finance, Linguistics, and National Security

By Carlo Alaimo (Image: an artist’s rendering of a proposed Interdisciplinary High Performance Computation Center, part of the CSI Master Plan.) CSI will host a two-day workshop on Data-Intensive Computing on July 26-27, 2011 from 8:15am to 4:45pm in the Lecture Hall of the Center for the Arts. The event is designed to cover topics in data-intensive computing including graph-theoretic and combinatorial approaches in bio-informatics, financial data analytics, linguistics, and national security.

Department Status for CSE at Georgia Tech

The Georgia Institute of Technology inaugurated its School of Computational Science and Engineering at a convocation in February. The result of an initiative that began in 2005, the new school is located within the College of Computing, where it joins the School of Computer Science and the School of Interactive Computing. Georgia Tech is one of the rare institutions to have a separate department of CSE with its own faculty.

Computational Nanogami: RNA Sequence Search Stretches Across Georgia Tech Boundaries

Back in 2009, Josh Anderson didn’t know much about biology. But he knew that a summer undergraduate research assistantship working on something called “RNA folding” had to be better than the job his mother had lined up for him. Prashant Gaurav recalls that in 2009 he was thinking about applying to graduate school in computer science—not about the base pairings of a nasty RNA virus like Hepatitis C. Yet in 2011 both Anderson (double-major in discrete mathematics and computer science) and Gaurav (M.

Genomics on the Petascale

As a new era centered on human health dawns around the world, the life sciences — accelerated by the tremendous bloom of genomics — are poised to open new horizons. And, in this relatively new, interdisciplinary branch of biology called genomics, computing plays a critical role, particularly in such areas as genome assembly, analysis and interpretation. Genome sequences are available for many organisms, but making biological sense of the genomic data requires high performance computing methods and an evolutionary perspective, whether one is trying to understand how genes of new functions arise, why genes are organized as they are in chromosomes, or why these arrangements are subject to change.

Tougher rating system evaluates nine supercomputer capabilities

Nine supercomputers have been tested, validated and ranked by the new “Graph500” challenge, first introduced this week by an international team led by Sandia National Laboratories. The list of submitters and the order of their finish was released Nov. 17 at the supercomputing conference SC10 meeting in New Orleans. The machines were tested for their ability to solve complex problems involving random-appearing graphs, rather than for their speed in solving a basic numerical problem, today’s popular method for ranking top systems.

Georgia Tech Keeps Sights Set On Exascale at SC10

The road to exascale computing is a long one, but the Georgia Institute of Technology, a leader in high-performance computing (HPC) research and education, continues to win new awards and attract new talent to drive technology innovation. From algorithms to architectures and applications, Georgia Tech’s researchers are collaborating with top companies, national labs and defense organizations to solve the complex challenges of tomorrow’s supercomputing systems. Ongoing projects and new research initiatives spanning several Georgia Tech disciplines directly addressing core HPC issues such as sustainability, reliability and massive data computation will be on display November 13-19, 2010 at SC10 in New Orleans, LA.

Georgia Tech engaged in $100 million next-generation high-performance computing initiative

Imagine that one of the world’s most powerful high performance computers could be packed into a single rack just 24 inches wide and powered by a fraction of the electricity consumed by comparable current machines. That would allow an unprecedented amount of computing power to be installed on aircraft, carried onto the battlefield for commanders - and made available to researchers everywhere. Putting this computing power into a small and energy-efficient package, and making it reliable and easier to program, are among the goals of the new DARPA Ubiquitous High Performance Computing (UHPC) initiative.

Georgia Tech Engaged in $100 Million DARPA Program to Develop Next Generation of High Performance Computers

Imagine that one of the world’s most powerful high performance computers could be packed into a single rack just 24 inches wide and powered by a fraction of the electricity consumed by comparable current machines. That would allow an unprecedented amount of computing power to be installed on aircraft, carried onto the battlefield for commanders – and made available to researchers everywhere. Putting this computing power into a small and energy-efficient package, and making it reliable and easier to program, are among the goals of the new DARPA Ubiquitous High Performance Computing (UHPC) initiative.

Supercomputing Meets Social Media

In supercomputing these days, it’s usually the big science applications (astrophysics, climate simulations, earthquake predictions and so on) that seem to garner the most attention. But a new area is quickly emerging onto the HPC scene under the general category of informatics or data-intensive computing. To be sure, informatics is not new at all, but its significance to the HPC realm is growing, mainly due to emerging application areas like cybersecurity, bioinformatics, and social networking.

Supercomputer Digests Twitter in Real-Time

By Christopher Mims Determining the most influential users of Twitter is probably not what the creators of the Cray XMT supercomputer had in mind when they designed their machine. But when you’re packing this much computational heat, you go where the hard problems are. Twitter, Facebook and the rest of the social Web have become the modern-day equivalent of the water cooler, albeit with an automatic transcriptionist present. And processing all the data that conversation generates turns out to be a very hard problem.

NVIDIA Names GATech CUDA Center of Excellence

NVIDIA announced today that they have officially named Georgia Institute of Technology a CUDA Center of Excellence. Jeff Vetter, joint professor of the Georgia Tech College of Computing and Group Leader at Oak Ridge National Laboratory will serve as principal investigator for the center. “Georgia Tech has a long history of education and research that depends heavily on the parallel processing capabilities that NVIDIA has introduced with its CUDA architecture,” Vetter said.

NVIDIA Expands CUDA Developer Ecosystem With New CUDA Research and Teaching Centers in the U.S., Canada and Europe

NVIDIA today announced the addition of new research and educational centers dedicated to leveraging the immense processing power of graphics processing units (GPUs) to address today’s most challenging computing issues. CUDA Research Centers are recognized institutions that embrace and utilize GPU computing across multiple research fields. CUDA Teaching Centers are institutions that have integrated GPU computing techniques into their mainstream computer programming curriculum. The new centers are: CUDA Research Centers:

NVIDIA Names Georgia Institute of Technology a CUDA Center of Excellence

NVIDIA today recognized Georgia Institute of Technology (Georgia Tech) as a CUDA Center of Excellence. One of the world’s premier engineering and science universities, Georgia Tech is engaged in a wide number of research, development and educational activities which leverage GPU Computing. Jeffrey Vetter, joint professor of the Georgia Tech College of Computing and Group Leader at Oak Ridge National Laboratory, will serve as principal investigator of the CUDA Center of Excellence.

Burrowing into the bran tub

View the article http://ftp.scientific-computing.com/feature/burrowing-bran-tub

DARPA Sets Ubiquitous HPC Program in Motion

By Michael Feldman The US Defense Advanced Research Projects Agency (DARPA) has selected four “performers” to develop prototype systems for its Ubiquitous High Performance Computing (UHPC) program. According to a press release issued on August 6, the organizations include Intel, NVIDIA, MIT, and Sandia National Laboratories. Georgia Tech was also tapped to head up an evaluation team for the systems under development. The first UHPC prototype systems are slated to be completed in 2018.

DARPA Developing ExtremeScale Supercomputer System

Advanced computing is the backbone of the Department of Defense and of critical strategic importance to our nation’s defense. All DoD sensors, platforms and missions depend heavily on computer systems. To meet the escalating demands for greater processing performance, it is imperative that future computer system designs be developed to support new generations of advanced DoD systems and enable new computing application code. Targeting this crucial need, the Defense Advanced Research Projects Agency (DARPA) has initiated the Ubiquitous High Performance Computing (UHPC) program to create an innovative, revolutionary new generation of computing systems that overcomes the limitations of the current evolutionary approach.

High Performance Computational Life Sciences at ISC'10

‘Life Sciences’ is becoming a strategic discipline at the frontier between Molecular Biology and Computer Science, impacting medicine and biotechnology, as well as society. Leading-edge life sciences research is among the most challenging supercomputer applications of the future. At ISC’10 we will organize a special session about ‘Life Sciences’ to fulfill the growing interest from the public and the scientific world. ‘Computational Life Sciences’ is fast emerging as an important discipline for academic research and industrial application.

Georgia Tech launches institute for data and high performance computing

by Rick Smith Georgia Institute of Technology has launched the Georgia Tech Institute for Data and High Performance Computing (IDH) in recognition of the need to advance and coordinate institute research and education activities in this area, both in its application to key areas of science and engineering as well as in the advancement of the technology itself. “As we look to high performance computing to drive advanced breakthroughs in science, health, energy and other industries, leveraging Georgia Tech’s strongest assets – world class researchers in computing, experts across nearly every problem domain, and low barriers to collaboration – is what will set us apart,” said Dr.

Georgia Tech Creates Institute for Data and High Performance Computing Research

Today the Georgia Institute of Technology Office of the Provost announced the formation of the Georgia Tech Institute for Data and High Performance Computing (IDH) in recognition of the need to advance and coordinate institute research and education activities in this area. High performance computing (HPC) continues to grow as a strategically important area for Georgia Tech, both in its application to key areas of science and engineering as well as in the advancement of the technology itself.

ISC'10 Announces Four Special Sessions

The 25th International Supercomputing Conference – ISC’10 – the leading HPC event in 2010, introduces four new special sessions to address some of the biggest challenges in science, industry and technology today. The sessions on HPC Computational Life Sciences, New Markets, Networking and Energy form the first part of the ISC’10 conference special sessions. The four-day conference and trade fair will once again be held at the Congress Center Hamburg, Germany, from May 30 – June 3, 2010, bringing together some 2,000 IT managers, researchers and enthusiasts to exchange knowledge and ideas, foster contacts and do business.

The supercomputer on your desktop

By John Brandon High-performance computing (HPC) has almost always required a supercomputer – one of those room-size monoliths you find at government research labs and universities. And while those systems aren’t going away, some of the applications traditionally handled by the biggest of Big Iron are heading to the desktop. One reason is that processing that took an hour on a standard PC about eight years ago now takes six seconds, according to Ed Martin, a manager in the automotive unit at computer-aided design software maker Autodesk Inc.

A niche social network for researchers

By Therese Poletti, Columnist Entrepreneur Ijad Madisch initially wanted to be a virologist and studied how to stop certain viruses in their path. Now, the 29-year-old scientist, technologist and medical doctor is learning the positive aspects of possibly having a “viral” product on the Internet. Madisch is the founder of a niche social network called ResearchGATE. It’s like Facebook for researchers and scientists. And like Facebook, it has a clean look and it seems rather easy to use.

IEEE Computer Society Golden Core Award

In 2010, David A. Bader received the IEEE Computer Society Golden Core Award. A plaque is awarded for long-standing member or staff service to the society. This program was initiated in 1996 with a charter membership of 450. Each year the Awards Committee will select additional recipients from a continuing pool of qualified candidates and permanently include their names in the Golden Core Member master list.

IEEE Computer Society Meritorious Service Award

In 2010, David A. Bader received the IEEE Computer Society Meritorious Service Award for service as General Chair for the 2010 International Parallel and Distributed Processing Symposium (IPDPS).

GRAPPAling with evolutionary history

By Miriam Boon, iSGTW (Figure: how gene order changes among the eight species; each thin line represents a single gene and its position in the different species. Most genes are conserved on the same chromosomal arm or Muller element, but gene order is shuffled between species. The figure appeared in the July 2008 issue of Genetics; image courtesy of Stephen Schaeffer.) We’ve known for several years now that chimpanzees share 96 percent of our DNA.
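
One of the simplest quantities in this line of work is the breakpoint distance between two gene orders: the number of adjacent gene pairs in one genome that are not adjacent, with the same relative orientation, in the other. The sketch below computes it for signed gene orders; GRAPPA itself works with much richer rearrangement models (inversions and related events), so treat this only as a flavor of the underlying computation.

```python
# A minimal breakpoint-distance sketch for signed gene orders. GRAPPA
# and related tools use richer rearrangement models; this only counts
# adjacencies of genome `a` that are not conserved in genome `b`.
def breakpoints(a, b):
    """a, b: lists of signed gene IDs, e.g. [1, -3, 2]."""
    conserved = set()
    for x, y in zip(b, b[1:]):
        conserved.add((x, y))
        conserved.add((-y, -x))  # the same adjacency read in reverse
    return sum(1 for x, y in zip(a, a[1:]) if (x, y) not in conserved)

print(breakpoints([1, 2, 3, 4], [1, 2, 3, 4]))    # 0: identical gene order
print(breakpoints([1, -3, -2, 4], [1, 2, 3, 4]))  # 2: one inversion
```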

Five Georgia Tech Faculty Members Elected as IEEE Fellows

The Georgia Institute of Technology is one of only six U.S. universities to have five of its faculty members elevated to the rank of IEEE Fellow, the most at any academic institution in the United States. The five Georgia Tech faculty members promoted to IEEE Fellow, effective January 1, 2010, are David A. Bader, Ian T. Ferguson, Richard A. Hartlein, David C. Keezer, and Emmanouil M. Tentzeris. Chosen by the IEEE board of directors, the IEEE Fellows class of 2010 consists of 309 engineering professionals from around the world.

Why Does the Air Force Want Thousands of PlayStations?

By KI MAE HEUSSNER Guess what’s on the U.S. Air Force’s wish list this holiday season. Sony’s popular PlayStation 3 gaming console. Thousands of them. The Air Force Research Laboratory in Rome, N.Y., recently issued a request for proposal indicating its intention to purchase 2,200 PlayStation 3 (PS3) consoles. But the military researchers don’t plan to play “Call of Duty: Modern Warfare 2” or any of the season’s other blockbuster games.

CS Members Elevated to Fellows

Seventy IEEE Computer Society members will be elevated to IEEE Fellow grade in 2010. The grade of Fellow recognizes unusual distinction in the profession. The IEEE Board of Directors elevated 309 members to Fellow status. Computer Society members recommended for Fellow status in 2010 include: Tinku Acharya Raj Acharya Charu Aggarwal Srinivas Aluru David Bader Grady Booch Athman Bouguettaya Lionel Briand Douglas Burger Srimat Chakradhar Stefano Chiaverini Thomas Cloonan Laurent Cohen Ray Dolby Ahmed El-Magarmid Elmootazbellah Elnozahy Mário Figueiredo William Gropp Baining Guo Richard Hartley Yutaka Hata Joseph Hellerstein James Hendler John Impagliazzo Yannis Ioannidis Dimitrios Ioannou David Kaeli Andrew Kahng Matti Karjalainen Nikola Kasabov David Keezer Fanny Klett Andrew Laine Kwei-Jay Lin Chih-Min (Jimmy) Lin John C.

Nuke labs show the future of hybrid computing

By Timothy Prickett Morgan The Hybrid Multicore Consortium is on a mission that perhaps all of computing - on the desktop and in the data center - will one day embark on: making hybrid computing architectures as easy to program and use as monolithic platforms have been. There is a growing consensus - but by no means a complete one - that the future of energy-efficient and yet powerful systems will be based on the coupling of general purpose, multicore CPUs with various kinds of co-processors that also have hundreds of cores to do specific kinds of accelerations needed by particular applications.

NSF Awards $1M to Support Petascale Computational Tools for Genome Rearrangement Research

The National Science Foundation has funded three research teams under a four-year $1 million project to help ensure that certain bioinformatics tools are compatible with upcoming petascale computers. NSF awarded the grants under the American Recovery and Reinvestment Act to support the development of algorithms that will infer evolutionary relationships from genomic rearrangement events — a task that could take “centuries” to analyze on today’s fastest parallel computers, according to the agency.

NSF Funds Genomics Petascale Project

By Matthew Dublin A team of researchers led by Georgia Tech’s David Bader is poised to bring genomic evolution to petascale heights. Equipped with a four-year $1M grant courtesy of the National Science Foundation’s PetaApps program, Bader and his team are setting their sights on developing algorithms to take advantage of petascale computing platforms for studying genome rearrangement events. Bader says that they will be writing their own open source software development framework from scratch and eventually releasing it under the GNU GPL on SourceForge.

NSF Funds Petascale Algorithms for Genomic Relatedness Research

Scientists at three universities will use funding from the American Recovery and Reinvestment Act to develop computational biology tools that researchers will use with next-generation computers to study genomic evolution, according to Georgia Tech. The $1 million grant from the National Science Foundation’s PetaApps program, which funds development of computer technologies for petascale machines that can conduct trillions of calculations per second, will include Georgia Tech, the University of South Carolina, and Pennsylvania State University.

Petascale computing tools could provide deeper insight into genomic evolution

Technological advances in high-throughput DNA sequencing have opened up the possibility of determining how living things are related by analyzing the ways in which their genes have been rearranged on chromosomes. However, inferring such evolutionary relationships from rearrangement events is computationally intensive on even the most advanced computing systems available today. Research recently funded by the American Recovery and Reinvestment Act of 2009 aims to develop computational tools that will utilize next-generation petascale computers to understand genomic evolution.

Georgia Tech focuses on experimental systems and computational sciences at SC09

The Georgia Institute of Technology, an emerging leader in high-performance computing research and education, will be showcasing scientific research at the technical edge at next week’s SC09, the international conference on high-performance computing, networking, storage and analysis scheduled for Nov. 14-20, 2009, at the Oregon Convention Center in Portland, Oregon. An SC09 best paper nomination for work in computational biology, a new GPU based experimental HPC system in the works, and expert presence across a range of hardware, software and application domains are just some of the ways Georgia Tech will feature multidisciplinary, cross-industry research efforts focusing on computational scientific discovery and sustainable high-performance computing (HPC).

Book Review: Petascale Computing: Algorithms and Applications

By John E. West, for HPCwire Petascale Computing: Algorithms and Applications, edited by David A. Bader (Chapman & Hall/CRC, 2007), is the first book in CRC’s Computational Science Series, edited by Horst Simon at Lawrence Berkeley National Lab. Although the book is a collection of papers, Bader has done an excellent job of creating a compilation that holds together and covers a broad topic very well. At the same time, Petascale Computing remains accessible to anyone with HPC or scientific application experience.

Petascale Coming Down the Pike

By Matthew Dublin Given the combination of the ever-increasing power of compute hardware and researchers’ desire to unlock the mysteries of life, it’s no surprise that high-performance computing in the early 21st century is now talking in terms of a whole new scale of computation. While the life sciences community has for some time now been concerned with terrifying amounts of data in terabyte-scale proportions — that’s 1,024 gigabytes — there is an even larger scale on the computational horizon: petascale computing.

Computer Science and Engineering Grad Student Wins Best Poster

Kamesh Madduri won the best poster award in the Ph.D. Forum at the 22nd IEEE International Parallel and Distributed Processing Symposium (IPDPS) held April 14-18 in Miami. Madduri’s research in computational science and high-performance computing beat out 72 other submissions to win one of two prizes in the competition. The main track of IPDPS is highly competitive and contains peer-reviewed papers submitted from researchers worldwide. Only 105 of the submitted 410 papers were accepted for presentation.

Multicore and Parallelism: Catching Up

By Jonathan Erickson David Bader is Executive Director of High-Performance Computing in the College of Computing at Georgia Institute of Technology and author of Petascale Computing: Algorithms and Applications. Q: David, you’ve been actively involved with multicore and parallelism since the early 1990s and the rest of the computing world seems just now to be catching up. Any idea of why it’s taken so long? A: Overnight the computing world has changed, and we are starting a new day when parallelism is essential not just for solving grand challenge applications in science and engineering, but for speeding up everything, from our 3G cellphone apps, desktop spreadsheets and office productivity tools to our web browsers, media players, and Web services.

PlayStation 3 Processor Speeds Financial-Risk Calculation

By Samuel K. Moore Of the many things that have gone wrong on Wall Street this past year, the use and misuse of computational algorithms meant to give financiers a clear picture of the risk of big losses was one of them. One important calculation, called Value-at-Risk (VaR), is a way of assessing the probability that an investment portfolio will lose a specified value over a certain period of time. Though VaR’s reputation is much maligned, experts say firms have little choice but to continue, if not accelerate, their use of computational algorithms as the need to calculate risk and value has become more acute.
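
As a back-of-the-envelope illustration of what a VaR calculation does, here is a minimal Monte Carlo sketch in C. Every modeling choice in it is an illustrative assumption (normally distributed one-day returns, invented mu and sigma, a 95% confidence level); production risk engines are far more elaborate, which is why firms keep reaching for more compute.

```c
/* Minimal Monte Carlo Value-at-Risk (VaR) sketch.
 * All modeling choices are illustrative assumptions: normally
 * distributed one-day returns, made-up mu/sigma, 95% level. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI 3.14159265358979323846

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void) {
    const int    n_sims = 100000;  /* simulated one-day scenarios      */
    const double value  = 1e6;     /* portfolio value in dollars       */
    const double mu     = 0.0005;  /* assumed mean daily return        */
    const double sigma  = 0.02;    /* assumed daily return volatility  */
    const double level  = 0.95;    /* VaR confidence level             */

    double *loss = malloc(n_sims * sizeof *loss);
    if (loss == NULL) return 1;

    srand(42);
    for (int i = 0; i < n_sims; i++) {
        /* Box-Muller transform: two uniforms -> one standard normal */
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double z  = sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
        loss[i] = -(mu + sigma * z) * value;  /* positive = a loss */
    }

    /* VaR(level) is the loss not exceeded in `level` of scenarios. */
    qsort(loss, n_sims, sizeof *loss, cmp_double);
    printf("1-day %.0f%% VaR: $%.2f\n", level * 100.0,
           loss[(int)(level * n_sims)]);

    free(loss);
    return 0;
}
```

The sorted-losses percentile at the end is the whole idea: VaR is simply a quantile of the simulated loss distribution.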

Georgia Tech Enters the Spotlight at SC08

The Georgia Institute of Technology, an emerging leader in high-performance computing research and education, will command a significant presence at next week’s SC08, the international conference on high-performance computing, networking, storage and analysis scheduled for Nov. 15-21, 2008, at the Austin Convention Center in Austin, Texas. Georgia Tech will co-chair one workshop, participate in four panel (or “Birds-of-a-Feather”) discussions, present three technical papers and one research poster, and host 16 booth presentations and video interviews on emerging high-performance computing projects and application areas.

Executive Guide to SC08: Tuesday

By Michael Feldman Tuesday marks the first full day of the conference technical program. This year’s conference keynote will be given by Michael Dell, chairman and CEO of Dell, Inc. Dell’s selection reflects both the changing face of the industry and the conference’s location: Dell is headquartered about 20 miles north of Austin in Round Rock, Texas. Computing at Scale: There are many sessions on Tuesday that address the programming and operational challenges faced by managers of large computing systems today.

How the Large Hadron Collider Might Change the Web

By Mark Anderson When the Large Hadron Collider (LHC) begins smashing protons together this fall inside its 17-mile (27-kilometer) circumference underground particle racetrack near Geneva, Switzerland, it will usher in a new era not only of physics but also of computing. Before the year is out, the LHC is projected to begin pumping out a tsunami of raw data equivalent to one DVD (five gigabytes) every five seconds. Its annual output of 15 petabytes (15 million gigabytes) will soon dwarf that of any other scientific experiment in history.
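
A quick sanity check on those figures, assuming only a year of roughly $3.15 \times 10^{7}$ seconds:

$$ \frac{5\ \text{GB}}{5\ \text{s}} = 1\ \text{GB/s}, \qquad 1\ \text{GB/s} \times 3.15 \times 10^{7}\ \text{s/yr} \approx 31.5\ \text{PB/yr}, $$

so the quoted 15 petabytes per year corresponds to sustaining the DVD-every-five-seconds rate about half the time, consistent with an experiment that does not run around the clock.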

Supercomputing: New Processor Architecture Holds Promise for Protein, Gene Studies

A supercomputing architecture that first appeared in prototype form more than 10 years ago has been given a new lease on life, thanks in part to a recent $4 million Department of Defense grant issued to seed the new Center for Adaptive Supercomputing Software. The joint project teams up Pacific Northwest National Laboratory and supercomputer maker Cray, as well as several institutions including Georgia Institute of Technology and Sandia National Laboratories.

Systems Biology

By Abby Vogel, Research Horizons, Summer 2008, Vol. 25, No. 3

Oil reserves may raise false US hopes

By Sheila McNulty in Houston No one knows the extent of US oil and natural gas reserves in the offshore and Arctic areas that are off-limits to drilling. The last time they were surveyed was in the 1980s and the technology then used is no longer considered accurate, say industry experts. “The youngest seismic [tests] in some of these areas is 25 years old,” said Bobby Ryan, Chevron’s vice-president for global exploration.

Multithreaded supercomputer seeks software for data-intensive computing

The newest breed of supercomputers has hardware set up not just for speed, but also to better tackle large networks of seemingly random data. And now, a multi-institutional group of researchers has been awarded $4.0 million to develop software for these supercomputers. Applications arise anywhere complex webs of information can be found: from internet security and power grid stability to complex biological networks. The difference between the new breed and traditional supercomputers is how they access data, a difference that significantly increases computing power.
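
To give a flavor of the access patterns involved, here is a small level-synchronous breadth-first search in C with OpenMP over a made-up toy graph; chasing edges through a large network produces exactly the scattered, hard-to-prefetch memory references described here. Machines of the class in this story hide that latency with hardware multithreading rather than OpenMP, so treat this only as an illustration of the workload, not of their programming model.

```c
/* Illustrative level-synchronous parallel BFS in C/OpenMP.
 * The graph, sizes, and names are invented for illustration. */
#include <stdio.h>
#include <string.h>
#include <omp.h>

#define N 8  /* number of vertices in the toy graph */

int main(void) {
    /* Toy directed graph in compressed sparse row (CSR) form */
    int row[N + 1] = {0, 2, 4, 6, 8, 9, 10, 11, 12};
    int col[12]    = {1, 2, 0, 3, 0, 4, 1, 5, 6, 7, 2, 3};

    int dist[N];
    for (int i = 0; i < N; i++) dist[i] = -1;  /* -1 = unvisited */

    int frontier[N], next[N];
    int fsize = 1, nsize, level = 0;
    frontier[0] = 0;  /* start BFS from vertex 0 */
    dist[0] = 0;

    while (fsize > 0) {
        nsize = 0;
        /* Edge lists of frontier vertices are scanned in parallel; the
           reads of col[] and dist[] are effectively random, which is
           the access pattern data-intensive hardware is built to hide. */
        #pragma omp parallel for
        for (int i = 0; i < fsize; i++) {
            int u = frontier[i];
            for (int e = row[u]; e < row[u + 1]; e++) {
                int v = col[e], claimed = 0;
                #pragma omp critical
                if (dist[v] == -1) { dist[v] = level + 1; claimed = 1; }
                if (claimed) {
                    int slot;
                    #pragma omp atomic capture
                    slot = nsize++;
                    next[slot] = v;  /* add newly found vertex */
                }
            }
        }
        memcpy(frontier, next, nsize * sizeof(int));
        fsize = nsize;
        level++;
    }

    for (int i = 0; i < N; i++)
        printf("vertex %d: distance %d\n", i, dist[i]);
    return 0;
}
```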

The Cell Processor Builds Its Mojo

With the arrival of the new QS22 blade, IBM and its partners are pushing hard to flesh out the software ecosystem for the Cell Broadband Engine. The QS22 contains the double-precision enhanced version of the processor, and as such, represents the first mainstream Cell platform for HPC. Since the QS22 is also at the heart of the new Roadrunner petaflop machine for Los Alamos, there’s been even more interest in exploring the application possibilities for the Cell.

Sony Group, Toshiba and IBM Renew Cell Broadband Engine Center with Georgia Tech

The Georgia Tech College of Computing today announced the renewal of the Sony Corporation/Sony Computer Entertainment Inc. (Sony Group)-Toshiba-IBM Center of Competence (STI Center), based on Georgia Tech’s exceptional work in multiple areas of research and evangelism for the Cell Broadband Engine™ (Cell/B.E.) technology. Through Georgia Tech’s efforts, the STI Center has been responsible for creating and disseminating software optimized for Cell/B.E. systems, and for performing research on the design of Cell/B.

Ga. Tech finds creative uses for gaming chip

By David Ho, Cox New York Correspondent A video game microchip may be the key to a system that keeps aging commercial and military airplanes structurally safe, Georgia Tech’s College of Computing plans to announce Thursday. The research is among several projects being unveiled this week by an alliance of the institute’s scientists and tech companies Sony Corp., Toshiba Corp. and IBM Corp. The work focuses on the Cell processor, which is the heart of the Sony PlayStation 3 game console but also has been used in fields such as medical research and oil exploration.

NPR interviews David Bader on the chip in the PlayStation 3

In this radio interview on Atlanta’s NPR station, 90.1 WABE, John Lemley interviews David Bader about the IBM Cell microchip in the Sony PlayStation 3 that could save lives.

Georgia Tech studying Playstation chip for plane safety

By David Ho Cox News Service A video game microchip may be the key to a system that keeps aging commercial and military airplanes structurally safe, Georgia Tech’s College of Computing plans to announce Thursday. The research is among several projects being unveiled this week by an alliance of the institute’s scientists and the technology companies Sony Corp., Toshiba Corp. and IBM Corp. The work focuses on the Cell processor, which is the heart of the Sony PlayStation 3 game console but also has been used in fields such as medical research and oil exploration.

Featured Keynote Speaker: David Bader at Third Annual High Performance Computing Day at Lehigh University

Third Annual High Performance Computing Day at Lehigh
http://www.lehigh.edu/computing/hpc/hpcday.html
Friday, April 4, 2008
Featured Keynote Speaker: David Bader ‘90, ‘91G, Executive Director of High Performance Computing, College of Computing, Georgia Institute of Technology
Petascale Phylogenetic Reconstruction of Evolutionary Histories
http://www.lehigh.edu/computing/hpc/hpcday/2008/hpckeynote.html
https://webapps.lehigh.edu/hpc/2008/hpcday_2008.pdf

Intel, Microsoft fund university chip research

By Tom Abate, Chronicle Staff Writer After more than two decades of boosting chip performance by using two hardware tricks that work hand-in-glove, Intel Corp. has realized that one trick makes computers too hot, so it has been forced to redesign its microprocessors in a way that requires the invention of a whole new approach to PC software. On Tuesday, Intel, joined by Microsoft Corp., said it will invest $20 million over the next five years to fund software research at UC Berkeley and the University of Illinois at Urbana-Champaign to program a way around this unexpected dead-end in chip design.

H-P to unveil big revamp in its famed labs

By Therese Poletti No technology company wants to end up with great research that it fails to commercialize. Silicon Valley is too familiar with the failure of the research lab previously known as Xerox PARC to capitalize on its early innovations for the personal computer in the 1980s. Their work provided the seeds for the point-and-click user interface commercialized first by Apple Inc. and then Microsoft Corp.

Will Faster Supercomputers Help Solve World’s Problems?

Over twice as fast as today’s most commonly used supercomputers, petascale systems—which are able to perform 1,000 trillion calculations per second—are slated to be released as early as 2008. With the power of 100,000 desktop computers, the newest generation of supercomputers is predicted to have a profound impact in the fields of business and industry. Not only will stock brokers be able to use their processing power to better predict swings in the stock market, but the automotive industry will be able to use the higher processing speed to limit the need for building vehicle models—thus mitigating most defects before a prototype is built.
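
The “100,000 desktop computers” comparison is easy to check with the article’s own numbers:

$$ \frac{10^{15}\ \text{calculations/s}}{10^{5}\ \text{desktops}} = 10^{10}\ \text{calculations/s} = 10\ \text{GFLOPS per desktop}, $$

roughly the peak rate of a high-end desktop processor of that era.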

Petascale computers: the next supercomputing wave

By Liz Tay The author of the world’s first published collection on petascale techniques, David A. Bader, discusses petascale, exascale and the future of computing. Supercomputing has come a long way in the past half-century. Far from CDC’s single-operation scalar processors in the 1960s, present-day terascale computers in development by companies like Intel boast up to 100 processor cores and the ability to perform one trillion operations per second.

Students Explore Video Game Programming and Architecture in New Course

ECE introduced a new video game programming class to students this fall: Multicore and GPU Programming for Video Games. The course focuses on the architecture and programming of multicore processors and graphics processing units (GPUs). ECE faculty members Aaron Lanterman and Hsien-Hsin “Sean” Lee helped develop the class (ECE 4893A/CS4803MPG) along with David Bader, an associate professor in the College of Computing. The three professors are co-teaching the class this semester.

First book on petascale computing launched at SC07

The College of Computing at Georgia Tech and Chapman & Hall/CRC Press today announced the launch of “Petascale Computing: Algorithms and Applications”, the first published collection on petascale techniques for computational science and engineering, at the SC07 conference. Edited by David A. Bader, associate professor of computing and executive director of high-performance computing at Georgia Tech, this collection represents an academic milestone in the high-performance computing industry and is the first work to be released through Chapman & Hall/CRC Press’ new Computational Science series.

Supercharging Multi-Core Designs

“Multi-core is a disruptive technology — and I mean that in a good way — because it’s only when you have disruption of the status quo that new innovations can affect technology with revolutionary advances,” says David A. Bader, executive director of high-performance computing at Georgia Tech. Software developers who optimize their code for this groundbreaking technology can deliver new levels of functionality more cost-effectively. However, to achieve this optimization they’ll need to tap into a new set of resources.

High Performance Computing and Web Science Initiatives Receive Seed Money from Provost's Office

The Office of the Provost has awarded Focused Research Program (FRP) funding for research proposals in high-performance computing ($30,000) and web science ($29,313). The proposals were coordinated by College of Computing Associate Professor David A. Bader and co-coordinated by Associate Professors Amy Bruckman and Milena Mihail, respectively. An FRP is a grant given by the Georgia Tech Provost’s Office designed to provide start-up support for research programs that could not be carried out through individual effort or within the resources of a single unit.

David Bader Elected to Serve on Advisory Board for Internet2

College of Computing Associate Professor and Executive Director of High-Performance Computing David A. Bader has been elected to the 2007 Internet2 Advisory Council in its first-ever election held this summer. Formed in 1996, Internet2 is a non-profit consortium contributing to the advancement of networking research with projects like the Abilene Network and the National LambdaRail (NLR) project, which has deployed the highest-bandwidth research network in the country. The election for Advisory Council members closed on June 28, with 57% of eligible member institutions from industry and research casting ballots.

NAE selects two from Tech for ‘Frontiers’ symposium

The National Academy of Engineering (NAE) has selected two Georgia Tech faculty — College of Computing Associate Professor David Bader and Mechanical Engineering Assistant Professor Samuel Graham — to participate in the NAE’s annual Frontiers of Engineering symposium, a three-day event that will bring together engineers ages 30 to 45 who are performing cutting-edge engineering research and technical work in a variety of disciplines. Participants were nominated by fellow engineers or organizations and chosen from 260 applicants.

Creative Young Engineers Selected to Participate in NAE's 2007 Frontiers of Engineering Symposium

The thirteenth annual US Frontiers of Engineering Symposium was held in September 2007 in Redmond, Washington. Chaired by Julia M. Phillips of Sandia National Laboratories, the meeting included sessions on engineering trustworthy computer systems, control of protein conformations, biotechnology for fuels and chemicals, modeling and simulating human behavior, and safe water technologies. US Frontiers of Engineering is an annual meeting that brings together 100 of the nation’s outstanding young engineers (ages 30-45) from industry, academia, and government to discuss pioneering technical and leading-edge research in various engineering fields and industry sectors.

Georgia Tech 'CellBuzz' Cluster in Production Use

Georgia Tech is one of the first universities to deploy the IBM BladeCenter QS20 Server for production use, through Sony-Toshiba-IBM (STI) Center of Competence for the Cell Broadband Engine (http://sti.cc.gatech.edu/) in the College of Computing at Georgia Tech. The QS20 uses the same ground-breaking Cell/B.E. processor appearing in products such as Sony Computer Entertainment’s PlayStation3 computer entertainment system, and Toshiba’s Cell Reference Set, a development tool for Cell/B.E. applications. The Georgia Tech installation includes a cluster of 28 Cell/B.

Georgia Tech High Performance Computing Pioneer Joins Nation's Brightest Engineers to Tackle Green Issues

College of Computing Associate Professor David A. Bader is one of eighty-three of the nation’s brightest young engineers who have been selected to take part in the National Academy of Engineering’s (NAE) 13th annual U.S. Frontiers of Engineering symposium. The two-and-a-half-day event will bring together engineers ages 30 to 45 who are performing exceptional engineering research and technical work in a variety of disciplines. The participants — from industry, academia, and government — were nominated by fellow engineers or organizations.

College of Computing Hosts Workshop to Drive Innovation in Cell Broadband Engine Processor Research

The College of Computing at Georgia Tech today announced it will host the Georgia Tech Cell Broadband Engine™ (Cell/B.E.) Processor Workshop from June 18-19, 2007, focusing on applications for the Cell/B.E. processor, including gaming, virtual reality, home entertainment, tools and programmability and high performance scientific and technical computing. The two-day workshop is sponsored by Sony Computer Entertainment Inc. (SCEI), Toshiba and IBM and will be held at the Klaus Advanced Computing Building on Georgia Tech’s campus.

College of Computing at Georgia Tech Hosts Workshop to Drive Innovation in Cell Broadband Engine Processor Research

The College of Computing at Georgia Tech today announced it will host the Georgia Tech Cell Broadband Engine™ (Cell/B.E.) Processor Workshop from June 18-19, 2007, focusing on applications for the Cell/B.E. processor, including gaming, virtual reality, home entertainment, tools and programmability and high performance scientific and technical computing. The two-day workshop is sponsored by Sony Computer Entertainment Inc. (SCEI), Toshiba and IBM and will be held at the Klaus Advanced Computing Building on Georgia Tech’s campus.

Georgia Tech to Host Cell BE Workshop

Georgia Tech will be hosting a two-day workshop on software and applications for the Cell Broadband Engine, to be held on Monday, June 18 and Tuesday, June 19, at the Klaus Advanced Computing Building, (http://www.cc.gatech.edu/ ) at Georgia Institute of Technology, in Atlanta, GA, United States. The workshop is sponsored by Georgia Tech and the Sony, Toshiba, IBM, (STI) Center of Competence for the Cell BE. The theme of the workshop will include Gaming, Virtual Reality, Home Entertainment, Tools and

David Bader Joins IBM Technical Leadership Forum

What do Harley-Davidson, Lehman Brothers, eBay, Volkswagen, Electronic Arts, and Georgia Tech all have in common? Each is a member of the highly selective IBM Technical Leadership Forum, which is composed of high-tech representatives from around the world who serve as an advisory board for IBM on information technologies. College of Computing Associate Professor David A. Bader was selected as the first academic member of the IBM Technical Leadership Forum and attended the forum’s recent meeting held at the IBM Executive Briefing Center in Mainz, Germany, on May 3-4, 2007.

The College of Computing Honors Exceptional Students, Faculty, and Staff

The College of Computing at Georgia Tech hosted its 16th Annual Awards Celebration on April 17, 2007. Master of Ceremonies and CoC Associate Dean and Honors & Awards Chair Merrick Furst led the College in congratulating students, faculty, and staff on another exciting and productive year. The 2006-2007 Undergraduate Awards:

Outstanding Freshman - Christopher Adam Sladky
Outstanding Sophomore - Michael E. Hale
Outstanding Junior - Megan Leigh Elmore
Outstanding Undergraduate - David Schachter
Outstanding Undergraduate Research - Steven Dalton
Outstanding Undergraduate Teaching Assistant - David Schachter
Outstanding Undergraduate Research Assistant - Steven French
The Dave & Carrie Armento Scholarship - Gary Mann

CSE Leadership in Petascale Computing

David A. Bader, Executive Director of High-Performance Computing, and Associate Professor in the Computational Science and Engineering Division of the College of Computing at Georgia Tech, recently delivered a keynote talk on “Petascale Computing for Large-Scale Graph Problems“ at the 8th IEEE International Workshop on Parallel and Distributed Scientific and Engineering Computing (PDSEC), on March 30 in Long Beach, CA, held in conjunction with the 21st IEEE International Parallel and Distributed Processing Symposium (IPDPS).

Multicore Processors for Science and Engineering

by Pam Frost Gorder There’s no question that multicore processors have gone mainstream. These computer chips, which have more than one CPU, first hit the consumer market less than two years ago. Today, practically every new computer has a dual-core (two-CPU) chip, and Intel just launched a quad-core chip with four CPUs. One of 2006’s most in-demand holiday gifts was Sony’s PlayStation 3, which boasts a “cell” chip with nine CPUs for faster and more realistic video gaming.

Intel announces it's built a better microprocessor

The Intel Teraflop Research Chip is seen Tuesday, Feb. 6, 2007, near San Jose, Ore. Intel Corp. has designed a computer chip that promises to do as many calculations as quickly as an entire data center, while consuming as much energy as a light bulb. (AP Photo/The Oregonian, Fredrick D. Joe) by Tom Abate, Chronicle Staff Writer Scientists at Intel Corp. have made an experimental microprocessor the size of a fingertip that has the same computational power that it took a 2,500-square-foot supercomputer to deliver just 11 years ago.

Pioneering Petascale Computing in the Biological Sciences - Workshop Report

Supercomputers help scientists build virtual worlds to explore blood flow for stroke prevention, design new proteins for life-saving drugs, and diagnose brain disorders. But even with today’s largest machines, researchers are only beginning to capture the key features of many complex problems. To take supercomputers to the next level of power and realism, today’s frontier goal is “petascale” computing, announced by the National Science Foundation (NSF) in the Leadership-Class System Acquisition - Creating a Petascale Computing Environment for Science and Engineering.

IBM Announces Winners of Shared University Research Awards

IBM (NYSE: IBM) today announced that ten universities spanning multiple geographies have been chosen as winners of the latest IBM Shared University Research (SUR) awards. For the first time, each of the universities will be using the Cell Broadband Engine™ (Cell/B.E.) technology to enable students and faculty to drive innovation, collaborate and foster skill development in the creation of digital media, software platform performance and medical imaging solutions. As research helps drive innovation and growth, new skills are required to staff the emerging disciplines and technologies, leading to tremendous opportunities to drive Cell/B.

Pioneering Petascale Computing in Biological Sciences

Supercomputers help scientists build virtual worlds to explore blood flow for stroke prevention, design new proteins for life-saving drugs, and diagnose brain disorders. But even with today’s largest machines, researchers are only beginning to capture the key features of many complex problems. To take supercomputers to the next level of power and realism, today’s frontier goal is “petascale” computing, announced by the National Science Foundation (NSF) in the Leadership-Class System Acquisition - Creating a Petascale Computing Environment for Science and Engineering.

College of Computing Wins Cell Processor Center

The College of Computing at Georgia Tech has been designated as the first Sony-Toshiba-IBM Center of Competence to build a community of programmers and broaden industry support for the Cell Broadband Engine microprocessor. Cell BE “supercharges” compute-intensive applications, offering fast performance for computer entertainment and handhelds, virtual reality, wireless downloads, real-time video chat, interactive TV shows and other “image-hungry” computing environments. The Cell BE processor already appears in such products as Sony’s PlayStation 3, Toshiba’s Cell Reference Set and the IBM BladeCenter QS20.

David Bader Wins Microsoft Research Award

College of Computing Associate Professor and Executive Director of High-Performance Computing David Bader received a $75,000 award from Microsoft Research to investigate the design and optimization of algorithms that fully exploit multi-core processors. The project, “Enabling MS Visual Studio Programmers to Design Efficient Parallel Algorithms for Multi-Core Processors,” is one of approximately six projects selected under Microsoft Research’s 2006-2007 Parallel and Concurrent Programming initiative, a targeted research effort that funds work dedicated to pursuits in concurrency, parallelism and multi-core technology.

College of Computing Designated First STI Center of Competence Focused on Cell Processor

The College of Computing at Georgia Tech today announced its designation as the first Sony-Toshiba-IBM (STI) Center of Competence focused on the Cell Broadband Engine™ (Cell BE) microprocessor. IBM® Corp., Sony Corporation and Toshiba Corporation selected the College of Computing at Georgia Tech as their partner to build a community of programmers and broaden industry support for the Cell BE processor. The revolutionary Cell BE processor is a breakthrough design featuring a central processing core, based on IBM’s industry-leading Power Architecture™ technology, and eight synergistic processors.

College of Computing at Georgia Tech Selected as First Sony-Toshiba-IBM Center of Competence Focused on the Cell Processor

The College of Computing at Georgia Tech today announced its designation as the first Sony-Toshiba-IBM (STI) Center of Competence focused on the Cell Broadband Engine™ (Cell BE) microprocessor. IBM® Corp., Sony Corporation and Toshiba Corporation selected the College of Computing at Georgia Tech as their partner to build a community of programmers and broaden industry support for the Cell BE processor. The revolutionary Cell BE processor is a breakthrough design featuring a central processing core, based on IBM’s industry-leading Power Architecture™ technology, and eight synergistic processors.

Georgia, not Austin, gets chip center

By Bob Keefe WEST COAST BUREAU Three of the biggest names in technology plan to announce today that they will start a research center at the Georgia Institute of Technology to explore ways to expand the reach of a promising new semiconductor design. The move would sidestep Austin, which was a contender for the center and is where the technology was developed. Sony Corp., IBM Corp. and Toshiba Corp. compare their new Cell microprocessor to a supercomputer on a chip that can handle some applications 10 times as fast as traditional computer microprocessors.

Ga. Tech lands research facility!

By Bob Keefe Cox Washington Bureau Three of the biggest names in technology plan to announce today they will start a research center at Georgia Tech to explore ways to expand the reach of a promising new semiconductor design. Sony Corp., IBM Corp. and Toshiba Corp. compare their new “Cell” microprocessor to a supercomputer on a chip that can handle some applications 10 times faster than traditional computer chips. The technology that the companies jointly developed in Austin, Texas, over five years at a cost of $400 million is debuting in Sony’s new PlayStation3 video game console.

Bader Gives Keynote On Petascale Computing

College of Computing Associate Professor David Bader gave an invited keynote on “Petascale Computing for Large-Scale Graph Problems” at the second international conference on High Performance Computing and Communications (HPCC ‘06) in Munich, Germany. With the rapid growth in computing and communication technology, the past decade has witnessed a proliferation of powerful parallel and distributed systems, and an ever-increasing demand for practice of high performance computing and communication (HPCC). HPCC has moved into the mainstream of computing and become a key technology in determining future research and development activities in many academic and industrial branches, especially when the solution of large and complex problems must cope with very tight timing schedules.

Bader Receives 2006 IBM Faculty Award

Congratulations to Associate Professor David Bader who recently received a 2006 IBM Faculty Award in recognition of his outstanding achievement and importance to industry. The highly competitive award, valued at $40,000, was given to Bader for making fundamental contributions to the design and optimization of parallel scientific libraries for multicore processors, such as the IBM Cell. As an international leader in innovation for the most advanced computing systems, IBM recognizes the strength of collaborative research with the College of Computing at Georgia Tech’s Computational Science and Engineering (CSE) division.

Kamesh Madduri Wins Prestigious NASA Fellowship

College of Computing Ph.D. student Kamesh Madduri has received a NASA Graduate Student Researchers Program (GSRP) Fellowship for his proposal titled, “Performance Analysis and Optimization of NASA Scientific Applications on the NAS Supercomputers.” The National Aeronautics and Space Administration began the competitive program in 1980 to support 100 promising students each year who are pursuing advanced degrees in science and engineering, and to cultivate research ties to the academic community.

Alumnus Bader Joins DSPlogic Advisory Board

Ph.D. alumnus David Bader ‘96, Associate Professor of Computational Science and Engineering at Georgia Tech, has joined the Technical Advisory Board of DSPlogic, a provider of FPGA-based, reconfigurable computing and signal processing products and services. Bader has been a pioneer in the field of high performance computing for problems in bioinformatics and computational genomics, and has co-authored over 75 articles in peer-reviewed journals and conferences.

Preparing Researchers To Use Petascale Computation

This year, the National Science Foundation (NSF) will award the acquisition of a national supercomputer for production use by 2011. The supercomputer will achieve petascale computation which is a rate several orders of magnitude more powerful than the fastest supercomputers available today. Although breakthroughs across science and engineering are anticipated with this incredible resource, few researchers are prepared to use this massive computing capability. An NSF-sponsored workshop co-organized by College of Computing Associate Professor David Bader, Allan Snavely (UC San Diego), and Gwen Jacobs (Montana State) will be held August 29-30, 2006 in Arlington, Virginia to identify key challenges in the biological sciences which may lead to early breakthroughs on petascale supercomputers.

People and Positions: HPC Expert Joins DSPlogic's Technical Advisory Board

Dr. David A. Bader, a Georgia Tech faculty member and international leader in high performance computing, has joined DSPlogic’s Technical Advisory Board. “At DSPlogic we increase our customers’ productivity by improving the software and tools needed for high performance reconfigurable computing solutions,” said Michael Babst, president of DSPlogic. “We expect growth in our core business with the addition of one of the world’s foremost parallel computing experts, Dr. David A. Bader, to our advisory board.

High-Performance Computing Expert, Dr. David A. Bader, joins DSPlogic's Technical Advisory Board

Dr. David A. Bader, a Georgia Tech faculty member and international leader in high-performance computing, has joined DSPlogic’s Technical Advisory Board. “At DSPlogic we increase our customers’ productivity by improving the software and tools needed for high-performance reconfigurable computing solutions,” said Michael Babst, president of DSPlogic. “We expect growth in our core business with the addition of one of the world’s foremost parallel computing experts, Dr. David A. Bader, to our advisory board.

CERCS Researchers Receive Industry Awards

Research faculty from the Center for Experimental Research in Computer Systems (CERCS) at Georgia Tech recently received several new industry awards. CERCS, located within the College of Computing, is one of the largest experimental systems programs in the U.S., focusing on complex hardware, communications and system-level software, and applications that lead the innovation of new information and computing technologies. Prof. Ling Liu received an IBM Faculty Award for her work on “Building Secure Publish-subscribe Systems for Large Scale Event Dissemination,” and CERCS Research Scientist Ada Gavrilovska received funding from Intel Corporation to support her research on application-specific processing on IXA routers.

Creative Young Engineers Selected to Participate in NAE's 2007 U.S. Frontiers of Engineering Symposium

WASHINGTON — Eighty-three of the nation’s brightest young engineers have been selected to take part in the National Academy of Engineering’s (NAE) 13th annual U.S. Frontiers of Engineering symposium. The 2½-day event will bring together engineers ages 30 to 45 who are performing exceptional engineering research and technical work in a variety of disciplines. The participants — from industry, academia, and government — were nominated by fellow engineers or organizations and chosen from more than 260 applicants.

CSE Faculty-Student Research Accepted at ICPP 2006

David Bader, associate professor within the College’s Computational Science and Engineering (CSE) division, along with Ph.D. students Kamesh Madduri and Vaddadi Chandu, has had three papers accepted at this year’s 35th International Conference on Parallel Processing (ICPP). ICPP is the longest-running conference dedicated to parallel processing with a significant impact within the field, and will be hosted by Ohio State University on August 14-18, 2006. The papers include: “Designing Multithreaded Algorithms for Breadth-First Search and st-connectivity on the Cray MTA-2,” D.

David Bader Organizes Premier International Conference

David Bader, associate professor within the College of Computing’s Computational Science and Engineering (CSE) division, was co-organizer and steering committee member of the International Parallel and Distributed Processing Symposium (IPDPS) 2006, held in Rhodes, Greece on April 25-29. IPDPS celebrated its 20th year and is considered the premier academic conference in the areas of parallel and distributed computing. Not only did David serve as a program vice-chair for the Applications Track, he also chairs the Institute of Electrical and Electronics Engineers’ (IEEE) Technical Committee on Parallel Processing, which sponsors IPDPS.

Georgia Tech, Oak Ridge team up

By Aliya Sternstein After President Bush in his State of the Union address proposed spending more on supercomputing, the College of Computing at the Georgia Institute of Technology announced a new partnership with the federal government that could advance the president’s agenda. The College of Computing, the Energy Department’s Oak Ridge National Laboratory in Tennessee and the nonprofit company UT-Battelle said they would share facilities and staff for large-scale research efforts that rely on supercomputing.

Infocrats get unique opportunity to know latest techs at HiPC '05

Mr David Bader, Associate Professor in the College of Computing, Atlanta, USA, said that the two papers to be presented at the Goa conference are excellent and that professionals will gain the latest technological know-how taking shape around the globe.

Computer Recognizes Expert Reviewers

Computer’s editor in chief extends her thanks to the more than 200 professionals who contributed their time and expertise as reviewers of article submissions in 2005. https://ieeexplore.ieee.org/document/1556489

Cray co-founder, CEO steps down

Seattle supercomputer maker Cray yesterday revealed that it lost more than Wall Street expected, including its co-founder and chief executive. Jim Rottsolk, 60, retired yesterday after 27 years of running the company. He’s being replaced by Peter Ungaro, 36, a former IBM sales executive who was promoted to Cray president in March. Rottsolk’s biggest contribution was building a company with “a culture of innovation, one that can rapidly move on new technology opportunities,” said Thomas Sterling, principal scientist at the Jet Propulsion Laboratory (JPL) and a faculty associate at the California Institute of Technology in Pasadena.

People and Positions: David Bader Joins Georgia Tech for HPC Work

David A. Bader has joined Georgia Institute of Technology’s College of Computing, effective August 15, 2005. David will advance Georgia Tech’s capabilities in the areas of computational science, high-performance computing, and biomedical engineering. Prior to this, he was a faculty member at the University of New Mexico from 1998 to 2005. He received his Ph.D. in 1996 from The University of Maryland, and was awarded a National Science Foundation (NSF) Postdoctoral Research Associateship in Experimental Computer Science.

ECE Alumnus David Bader Joins Georgia Tech Faculty

Maryland Electrical and Computer Engineering (ECE) alumnus David Bader will join the Computational Science and Engineering faculty at Georgia Tech this fall. Dr. Bader completed his Ph.D. at the University of Maryland in 1996. As a graduate student, he was advised by Professor Joseph JaJa (ECE/UMIACS). Bader founded and served as president of the Electrical and Computer Engineering Graduate Student Association while he attended school at Maryland. Prior to his recent hiring at Georgia Tech, Dr.

Computing Life's Family Tree

Call it yet another biological gold rush. When Charles Darwin published The Origin of Species in 1859, scientists began working in earnest to document the world’s plant and animal species and build a phylogeny—a map of how all those species relate to each other. More scientists came to the discipline in the 1980s, when automated DNA sequencing offered a new way to classify species and new applications for phylogenetics. Today, the newest prospectors in the gold rush are those with enough expertise in computing to connect all that genetic data in a meaningful way. The goal is the same as it was 150 years ago: build the ultimate family tree.

Reconstructing the Tree of Life

by Greg Johnston Biologist Terry Yates distinctly remembers a question posed in 2000, when he served as director of the Division of Environmental Biology at the National Science Foundation (NSF). About a month after he started his work at NSF in Arlington, Virginia, Director Rita Colwell called a division directors’ retreat to pose a challenge: “Give me your craziest idea that would represent a major unmet need for the nation or the world.

Prof. Bader Selected as an Associate Editor for the IEEE Transactions on Parallel and Distributed Systems (TPDS)

Prof. David Bader, who is an Associate Editor for The ACM Journal of Experimental Algorithmics and IEEE Distributed Systems Online, as well as a member of the founding editorial board of the International Journal of High Performance Computing and Networking, has been selected as an Associate Editor for the IEEE Transactions on Parallel and Distributed Systems (TPDS). http://www.ece.unm.edu/event/news/single.php?id=90

NSF Awards Major Research Instrumentation Grant to UNM

The National Science Foundation has awarded a $350,000 Major Research Instrumentation grant to the University of New Mexico for the purchase and maintenance of a state-of-the-art shared memory high-performance computer. “The new computer will allow UNM researchers to tackle a wide range of problems in computational science and engineering,” said Principal Investigator, Hua Guo, professor, Chemistry Department. “It will also provide educational opportunities for students interested in high performance computing.”

IEEE Distributed Systems Online Names First Editorial Board

IEEE Distributed Systems Online, the IEEE’s first online-only publication has named the charter members of its editorial board. Editor-in-Chief Jean Bacon culled the 17 board members from leading professionals within academia and the distributed systems industry worldwide. “I wanted senior people, experts in each area, to ensure that we had high quality information,” says Bacon. “These people have very little time, so I encouraged them to recruit enthusiastic young colleagues to assist.

Wiz All for Fiddling Under the Software Hood

by John Fleck, Journal Staff Writer Richard Stallman is one of those people who isn’t famous but should be. … https://abqjournal.newspapers.com/image/443547089/

Richard Stallman, Founder of GNU Project, to Give Speech at UNM

Noted scientist Richard Stallman will be the featured speaker at a seminar hosted by the University of New Mexico School of Engineering (SOE) Wednesday, Oct. 8, in the Student Union Building, Ballroom B, at 3 p.m. Stallman’s speech is titled, “Copyright vs. Community in the Age of Computer Networks.” Stallman is the founder of the GNU Project, which was launched in 1984, to develop the free operating system GNU, which gives computer users the freedom that most have lost.

UNM to Collaborate on Two Information Technology Research Awards Through the National Science Foundation

The University of New Mexico will collaborate with a number of institutions on two separate Information Technology Research (ITR) “large” (over $5 million) awards announced by the National Science Foundation today. The grants, totaling more than $24 million, are two of only eight awarded from an initial field of 70. This is the second year in a row that UNM is the lead institution on a large ITR grant; last year’s was the SEEK project, led by Biology Professor William Michener.

Center for High Performance Computing at UNM Undergoes Reorganization Plan

Sometimes change is good, and in the case of the Center for High Performance Computing (HPC), it’s a welcome opportunity. The HPC recently underwent a few changes, including new management, mission and focus, which gives Marc Ingber, who was hired as the director earlier this year, reason to be excited. “In a sense, it’s good for the center because we can concentrate more effort on academic aspects of high performance computing,” said Ingber, who is also a professor in the Mechanical Engineering Department at the UNM School of Engineering (SOE).

UNM Computing Faculty Collaborating with IBM to Design Next-Gen Supercomputer

UNM Mirage UNM Computing faculty David A. Bader, Patrick Bridges, Arthur B. Maccabe, and Bernard Moret are collaborating on IBM’s Productive, Easy-to-use, Reliable Computing Systems (PERCS) project, a new initiative to design a supercomputer several orders of magnitude faster than today’s high-end systems. IBM has received more than $53 million in funding from the Defense Advanced Research Projects Agency (DARPA) for the second phase of DARPA’s High Productivity Computing Systems (HPCS) initiative to perform this research and development effort in technology risk reduction demonstrations and a preliminary design review.

David Bader on the Challenges of Linux Clusters

CIO Insight reporter Debra D’Agostino spoke with Dr. David A. Bader, a professor in the Electrical and Computer Engineering Department and researcher at the Center for High Performance Computing at the University of New Mexico, about the differences between Linux clusters and supercomputers, and the challenges CIOs can expect to face when evaluating the two strategies. CIO Insight: Why are we seeing more and more companies choose Linux clusters rather than supercomputers?

People and Positions: Bader Elected Chair Of IEEE Committee

Prof. David A. Bader, an Associate Professor and Regents’ Lecturer in the Electrical and Computer Engineering Department of The University of New Mexico, has been elected Chair of the IEEE Computer Society’s Technical Committee on Parallel Processing (TCPP). The Chair serves a two-year term beginning July 1. The TCPP acts as an international forum to promote parallel processing research and education, and participates in setting up technical standards in this area.

UNM Associate Professor Elected Chair of Prestigious IEEE Committee on Parallel Processing

David A. Bader, an associate professor and Regents’ Lecturer in the Electrical and Computer Engineering Department at the University of New Mexico, has been elected chair of the IEEE Computer Society’s Technical Committee on Parallel Processing (TCPP). He will serve a two-year term that began on July 1. The TCPP acts as an international forum to promote parallel processing research and education, and participates in setting up technical standards in the area.

David A. Bader elected chair of IEEE Computer Society's technical committee on parallel processing

David A. Bader, an associate professor and regents’ lecturer in the electrical and computer engineering department at UNM, has been elected chair of the IEEE Computer Society‘s technical committee on parallel processing. The committee acts as an international forum to promote parallel processing research and education and participates in setting up international technical standards. https://abqjournal.newspapers.com/image/436323008/

Applause: David A. Bader selected for IEEE Computer Society's Distinguished Visitors Program

David A. Bader, an assistant professor and Regents’ Lecturer in the electrical and computer engineering department at UNM, has been selected to be a lecturer for the IEEE Computer Society’s Distinguished Visitors Program for a three-year term. https://abqjournal.newspapers.com/image/428366766/

UNM Professor Bader selected as speaker for national program

UNM Mirage David A. Bader, assistant professor in the Electrical and Computer Engineering Department, has been selected as an Institute of Electrical and Electronics Engineers (IEEE) Computer Society Distinguished Speaker. Bader is named to a group of about three dozen speakers from throughout the country and will serve a three-year term. “I am honored that I have been nominated and selected for this prestigious program,” Bader said. “As a distinguished speaker, I will have the support to visit with IEEE Computer Society student and professional chapters and give presentations related to my high-performance computing research.

Assembling The Tree of Life

“Simple identification via phylogenetic classification of organisms has, to date, yielded more patent filings than any other use of phylogeny in industry.” Bader et al. (2001)

Journal Of Parallel & Distributed Computing: Special Issue

CALL FOR PAPERS
Journal of Parallel and Distributed Computing, Special Issue on High-Performance Computational Biology
Guest Editors:
Prof. Srinivas Aluru, Electrical & Computer Engg., Iowa State University, 3218 Coover Hall, Ames, IA 50014 USA. Email: aluru@iastate.edu. Tel: 515-294-3539
Prof. David A. Bader, Electrical & Computer Engg., University of New Mexico, Albuquerque, NM 87131 USA. Email: dbader@eece.unm.edu. Tel: 505-277-6724
Computational Biology is fast emerging as an important discipline for academic research and industrial application.

SC2002: Discussions to Cover Computation and Controversy

Attendees of SC2002 will be treated to a thought-provoking set of panel discussions on topics from homeland security to innovations in high-end computing to the impact of the Earth Simulator, the world’s fastest supercomputer. This year’s conference, with the theme “From Terabytes to Insights,” will convene Nov. 16-22 at the Baltimore Convention Center. Some of the best-known experts in the field will lead the discussions on significant questions and major accomplishments in high performance computing, including:

Large-Scale Phylogenetic Analysis

The goal of phylogenetic analysis is to reconstruct the evolutionary history of different taxa (e.g., species, genera). Recent advances in molecular biology and genomics have provided biologists with molecular data at an unprecedented rate and scale. New approaches are necessary because the most accurate analyses are obtained through solving (or attempting to solve) NP-hard optimization problems. Furthermore, any such analysis can return hundreds or thousands of trees. Finally, some taxa evolve down networks rather than down trees.
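
A standard combinatorial fact makes the difficulty concrete: the number of distinct unrooted binary trees on n taxa is (2n-5)!!, the product of the odd numbers 3, 5, ..., 2n-5. The short C sketch below (illustrative only, not code from any project mentioned here) prints how quickly that search space explodes:

```c
/* Count unrooted binary tree topologies on n taxa:
 * (2n-5)!! = 3 * 5 * 7 * ... * (2n-5), a standard combinatorial fact.
 * Stop at n = 19: the next value overflows 64-bit unsigned integers. */
#include <stdio.h>

int main(void) {
    unsigned long long trees = 1;  /* n = 3 has exactly one topology */
    printf("%3s  %22s\n", "n", "unrooted binary trees");
    for (int n = 3; n <= 19; n++) {
        if (n > 3) trees *= (unsigned long long)(2 * n - 5);
        printf("%3d  %22llu\n", n, trees);
    }
    return 0;
}
```

By 13 taxa, the size of the bluebell dataset described elsewhere on this page, there are already about 13.7 billion candidate topologies, and each added taxon multiplies the count by another factor of 2n-5.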

UNM Post-Doc receives Sloan Foundation fellowship

University of New Mexico post-doctoral researcher Tiffani Williams has been awarded an Alfred P. Sloan Foundation Post-doctoral Fellowship in Computational Molecular Biology for two years. Twenty-six past Sloan Fellows have become Nobel Laureates. Williams will work with School of Engineering Professors Bernard Moret, Computer Science, and David Bader, Electrical and Computer Engineering. She also taught one course in computer science at UNM this semester. Williams’ research is in the area of phylogenetic reconstruction, the inference of the evolutionary history of a collection of organisms.

$1.1 million grant goes to professors

University of New Mexico School of Engineering professors Bernard Moret of Computer Science and David Bader of Electrical and Computer Engineering have received more than $1 million in grants this fall from the National Science Foundation for research in reconstructing evolutionary “trees.” The $1.1 million is broken up into three grants for different aspects of the project. The research is being conducted in collaboration with the University of Texas at Austin.

UNM Engineering Professors receive $1.1 million in NSF Grants

UNM Mirage University of New Mexico School of Engineering Professors Bernard Moret, Computer Science, and David Bader, Electrical and Computer Engineering, have received more than $1.1 million in grants this fall from the National Science Foundation (NSF) to pursue research in reconstructing evolutionary trees (known as “phylogenies”). This research program is being conducted in collaboration with the University of Texas at Austin. The UT-UNM group received two other awards from the NSF, made directly to the University of Texas, bringing the total group funding for the next five years to more than $7 million.

Bader Receives NSF CAREER Award

Dr. David Bader, who earned his doctoral degree from the Department in May 1996, has received a National Science Foundation (NSF) Faculty Early Career Development (CAREER) Award for his work on High-Performance Algorithms for Scientific Applications. This prestigious grant emphasizes the importance NSF places on the early development of academic careers dedicated to research, inspired teaching and enthusiastic learning. Bader’s CAREER research plan will investigate new algorithms to support irregular computations, mostly tree and graph-based, along with new insights on how to leverage the theoretical research in PRAM algorithms.

Supercomputer Math Speeds Up

By John Fleck, Journal Staff Writer https://abqjournal.newspapers.com/image/359985517/

Award honors computer work

By Liz Otero Vallejos David A. Bader, an assistant professor of electrical and computer engineering at the University of New Mexico, has been awarded the National Science Foundation’s Faculty Early Career Development (CAREER) Award. “Parallel computing has long offered the promise of very high performance, but it has delivered only in a narrow range of applications,” Bader said. “With the advent of symmetric multiprocessors (SMPs), however, shared memory on a modest scale is becoming an available commodity.

A Flower's Family Tree

By J. William Bell, NCSA Senior Science Writer Gene data allow researchers to recover the evolutionary history of plants, but even the smallest dataset can require impossibly large computations. Using an Alliance Linux cluster and newly designed software, a team from the University of New Mexico and the University of Texas has increased the speed of the process a millionfold for one family of plants. If you’re looking for extreme diversity, consider bluebells, officially the Campanulaceae family.

UNM Engineering Professors receive NSF CAREER Awards

UNM Mirage University of New Mexico School of Engineering professors David A. Bader and Hy D. Tran recently received the National Science Foundation Faculty Early Career Development (CAREER) Awards. Bader, assistant professor in Electrical and Computer Engineering, has been awarded the grant in High-Performance Algorithms for Scientific Applications. His CAREER research plan will investigate and develop algorithms for high-performance computers that have multiple processors, advanced memory subsystems and state-of-the-art communication networks.

ECE Prof. David A. Bader Receives NSF Career Award

Prof. David A. Bader has been awarded the National Science Foundation’s Faculty Early Career Development (CAREER) Award. This highly prestigious grant in High-Performance Algorithms for Scientific Applications emphasizes the importance NSF places on the early development of academic careers dedicated to stimulating the discovery process in which the excitement of research is enhanced by inspired teaching and enthusiastic learning. Parallel computing has long offered the promise of very high performance, but it has delivered only in a narrow range of applications.

New Phylogeny Reconstruction Code, LosLobos Cluster, Mean Speedy Solution to Computational Problem

Using the largest open, production Linux supercluster, LosLobos, researchers at The University of New Mexico’s Albuquerque High Performance Computing Center have achieved a nearly one-million-fold speedup in solving the computationally-hard phylogeny reconstruction problem for the family of twelve Bluebell species (scientific name: Campanulaceae) from the flowers’ chloroplast gene order data. (The problem size includes a thirteenth plant, Tobacco, used as a distantly-related outgroup). Phylogenies derived from gene order data may prove crucial in answering some fundamental open questions in biomolecular evolution.

GRAPPA Runs In A Record Time

Using the largest open-production Linux supercluster in the world, LosLobos, researchers at The University of New Mexico’s Albuquerque High Performance Computing Center (http://www.ahpcc.unm.edu) have achieved a nearly one-million-fold speedup in solving the computationally-hard phylogeny reconstruction problem for the family of twelve Bluebell species (scientific name: Campanulaceae) from the flowers’ chloroplast gene order data. (The problem size includes a thirteenth plant, Tobacco, used as a distantly-related outgroup). Phylogenies derived from gene order data may prove crucial in answering some fundamental open questions in biomolecular evolution.

Solving the mystery of life with sixfold speedup

To understand the evolutionary process from the beginning of life itself to present-day species, we must first determine the “tree of life.” In this tree, called a phylogeny, known species reside at the tree’s leaves, while conjectured ancestor (extinct) species reside where the tree’s branches split. We follow this process down to the tree’s base, normally represented by the three major limbs for plants, animals, and single-celled organisms. In recent years, geneticists have made wondrous progress in determining genetic sequences from generation to generation; they have now mapped complete genomes for several species.

Gigantic clusters: Where are they and what are they doing?

COLOSSAL CLUSTERS AT ALLIANCE CENTERS The Albuquerque High Performance Computing Center at the University of New Mexico has long been a proponent of colossal clusters. The AHPCC and the National Computational Science Alliance (the Alliance), comprising more than 50 academic, government, and industry research partners from across the US, have formed a partnership funded by the National Science Foundation. The Alliance, which aims to provide an advanced computational infrastructure, is running a 128-processor Linux SuperCluster with Myrinet (Roadrunner) from Alta Technologies using dual Intel 450-MHz nodes, each with 512 Mbytes of RAM.

UNM plans to buy 'supercluster'

By John Fleck, Journal Staff Writer https://abqjournal.newspapers.com/image/262850552/

IBM RS/6000 SP & Linux Clusters Getting Hitched in New Mexico, IBM and University of New Mexico Researchers to Build 'Vista Azul' Hypercluster

IBM and the University of New Mexico (UNM) today announced a joint research project to integrate leading-edge IBM RS/6000 SP supercomputing technology running AIX, IBM’s UNIX operating system, with the fast-developing world of Linux superclusters. A Linux supercluster is made up of off-the-shelf PCs or workstations interconnected with high-speed networking technologies, having large storage capabilities and running under the Linux operating system. The system, called Vista Azul (Blue Vista in Spanish), will create a unique “hypercluster” environment composed of IBM SP and Linux technologies, that will allow researchers to explore the optimal use of Linux for scientific applications as well as management strategies for hybrid clusters.

Alliance Partners Showcase Progress at SC99

Participants at the upcoming SC99 conference in Portland, OR, will have ample opportunity to experience the progress made by the National Computational Science Alliance in prototyping the next century’s advanced computational and information infrastructure. Alliance partners will host seven research exhibits and give numerous speeches, presentations, and tutorials. SC99, the annual high-performance networking and computing conference, will be held Nov. 13-19 at the Oregon Convention Center. For the first time at an SC conference, researchers will show how the Alliance is developing the Access Grid, a system that links people in virtual spaces for collaborative science, workshops, and distance education sessions.

Linux Clustering Extends Trend

By Lenny Liebmann Linux delivers lots of computing power on commodity Intel processors, and it’s especially popular with Net devotees. So could Linux turn out to be the OS of choice for dot.com server clustering? Plenty of vendors think so. Network Engines recently started to ship its XEngine Linux cluster. And Linux systems leaders VA Research Inc. and TurboLinux Inc. are also shipping clustering solutions based on Linux. Some prominent users are buying in: The University of New Mexico built a 128-server cluster using technology from Alta Technology Corp.

SC99 Tutorials Address Latest Issues in HPC

From cutting-edge research projects, to the most talked about trends in networking, programming, performance analysis, and computer and network security, the SC99 Tutorials Program offers something for everyone with an interest in high performance computing and networking. SC99, the annual high performance computing and networking conference, takes place Nov. 13-19 at the Oregon Convention Center. The conference’s 12 full-day and eight half-day tutorials will be offered Sunday, Nov. 14, and Monday, Nov. 15.

Alliance Roadrunner Supercluster Now on the Grid

The National Computational Science Alliance’s (Alliance) Linux Roadrunner Supercluster at the University of New Mexico (UNM) is officially open for business as a node on the Alliance Grid. A prototype of the national information infrastructure of the 21st century, the Alliance Grid is an emerging integrated computational and collaborative environment that links people, resources, and services over high speed networks. Joining the Alliance’s arsenal of parallel computing systems located at facilities from Boston to Maui, Roadrunner is a 64-node AltaCluster by Alta Technology Corporation.

Plugged In: Linux Showing Up In Supercomputers

By Therese Poletti Linux – the renegade operating system that is among the hottest topics in Silicon Valley – is also making its way into the most serious bastion of computing, the supercomputing world. Linux, developed by Finnish programmer Linus Torvalds in 1991, is given away over the Internet and managed by a far-flung group of programmers, part of what is known as the open source movement. Linux has been catching on among some corporations and Internet service providers as a reliable system to run Web servers or e-mail servers.

Catching Up with the Roadrunner

By Jessica Schneider University of New Mexico, Daily Lobo

UNM To Crank Up $400,000 Supercomputer Today

By John Fleck, Journal Staff Writer https://www.newspapers.com/image/319289210/

Access ParaScope from Concurrency’s home page

IEEE Concurrency’s home page (http://computer.org/concurrency/) now includes a link to ParaScope, a comprehensive listing of parallel computing sites on the Internet. The list is maintained by David A. Bader, assistant professor in the University of New Mexico’s Department of Electrical and Computer Engineering. You can also go directly to the links at http://computer.org/parascope/#parallel.

Methodology for HPC Programming on SMPs Released

David A. Bader and Joseph Ja’Ja’ have released a technical report entitled “SIMPLE: A Methodology for Programming High Performance Algorithms on Clusters of Symmetric Multiprocessors (SMPs)” (Technical Report CS-TR-3798 / UMIACS-TR-97-48, Institute for Advanced Computer Studies (UMIACS), University of Maryland, College Park, May 1997). The report describes a methodology for developing high performance programs running on clusters of SMP nodes. The methodology is based on a small kernel (SIMPLE) of collective communication primitives that make efficient use of the hybrid shared-memory and message-passing environment.
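The report defines its own primitive set; as a rough illustration of the hybrid model such a methodology targets, the C sketch below combines shared-memory threading within an SMP node (OpenMP) with message passing between nodes (MPI). None of the names here come from the SIMPLE kernel itself.

    /* Illustrative hybrid shared-memory + message-passing sum, in
     * the spirit of SMP-cluster programming (not the SIMPLE API):
     * threads reduce within a node, then MPI reduces across nodes.
     * Compile with, e.g.: mpicc -fopenmp hybrid.c -o hybrid */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Phase 1: each SMP node reduces its local data using
         * all of its processors via a shared-memory reduction. */
        double node_sum = 0.0;
        #pragma omp parallel for reduction(+:node_sum)
        for (int i = 0; i < 1000000; i++)
            node_sum += (double)(rank + 1);  /* stand-in for real data */

        /* Phase 2: one message-passing reduction across nodes. */
        double total = 0.0;
        MPI_Reduce(&node_sum, &total, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %.0f\n", total);
        MPI_Finalize();
        return 0;
    }

The two-phase pattern is the point: intra-node communication goes through shared memory, which is far cheaper than sending messages between every pair of processors in the cluster.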

CuraGen Assembles Most Complete Mouse EST Database to Date

Scientists at CuraGen Corporation have coordinated the assembly of the most complete mouse EST database to date. The assembly took as input 45,683 clusters of public mouse ESTs. The CAP2 program was used to perform the assembly, producing 49,228 assembled sequences (some clusters produced multiple assemblies) with 3-fold coverage on average. The assembly was computationally intensive, taking approximately 2 minutes per cluster on a Sun workstation, so the work was distributed across computing resources worldwide, allowing the entire data set to be assembled in a weekend.
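For scale (our back-of-the-envelope arithmetic, not CuraGen’s): 45,683 clusters at roughly 2 minutes each is about 91,000 CPU-minutes, or more than 60 days on a single workstation, so finishing within a roughly 48-hour weekend implies on the order of 30 or more machines working in parallel, and the clusters are independent jobs that parallelize trivially.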

HPC Web Sites of Interest

By Alan Beck, managing editor Contrary to strident popular opinion, the enervating quality of the Web stems more from an abundance of information than from an accumulation of moral deficits. Nowhere is this more evident than in the enormous collection of sites dealing with HPC. This new column, slated to appear quarterly, is not meant to provide a thorough compendium of HPC-related material on the Web; several resources already perform that function quite admirably.

Bethlehem Scout Becomes An Eagle

David Albert Bader, son of Dr. and Mrs. Morris Bader of Bethlehem, was elevated to the rank of Eagle Scout during ceremonies held recently in the Haupert Union Building at Moravian College. David is a member of Boy Scout Troop 346 of St. Mark’s Lutheran Church, Bethlehem. His Eagle project consisted of making a complete inventory of the Brith Sholom Community Center library, to be used by center members and the general public.

Snake Produced 4 'Beans'

By Ann Pongracz