HPC Web Sites of Interest

By Alan Beck, managing editor

Contrary to strident popular opinion, the enervating quality of the Web stems more from an abundance of information than from an accumulation of moral deficits. Nowhere is this more evident than in the enormous collection of sites dealing with HPC. This new column, slated to appear quarterly, is not meant to provide a thorough compendium of HPC-related material on the Web; several resources already perform that function quite admirably, e.g. David A. Bader's page, http://www.umiacs.umd.edu/~dbader/sites.html, Caltech's list, http://www.ccsf.caltech.edu/other_sites.html, Jonathan Hardwick's page, http://www.cs.cmu.edu/~scandal/resources.html, and others. Nor can it hope to review sites of inordinate significance; while journalists may lay legitimate claim to insight, they cannot do the same for prescience.

Ideally, what the column can accomplish is simply this: to pique the curiosity of a few interested readers – perhaps motivating them to investigate promising areas they may have inadvertently neglected – and to provide some notion, however inadequate, of the breadth and depth of HPC resources currently residing in cyberspace. Only time – and reader comments – will tell if these goals are being met, or, indeed, if they are worth pursuing at all.


Bulk Synchronous Parallel (BSP) Primer
http://www.scs.carleton.ca/~palepu/bsp_primer.html

The handiwork of Ravi Palepu at Carleton University's School of Computer Science, this site offers a thorough grounding in elements of the BSP model of parallel computing. Palepu covers BSP's evolution, components, algorithms, languages and extensions. He notes: "the BSP model directs the programmer to perform many local referenced memory operations before making a non-local reference. After a sequence of local memory reference operations and at most only one nonlocal memory reference, a global barrier synchronization is performed. At this time, all processors are blocked until nonlocal memory references can be carried out. This sequence of steps is called a superstep. A series of supersteps would encompass the entire computation."
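The superstep pattern Palepu describes – local computation, a non-local communication, then a global barrier – can be sketched in a few lines. The following is an illustrative toy, not taken from Palepu's site: it simulates four "processors" as Python threads, with a shared inbox array standing in for remote memory and `threading.Barrier` playing the role of the global barrier synchronization.

```python
import threading

NPROCS = 4
barrier = threading.Barrier(NPROCS)  # global barrier ending each superstep
inbox = [0] * NPROCS   # stands in for remote memory, one slot per processor
result = [0] * NPROCS

def worker(pid):
    # Superstep 1: local-memory computation, then at most one
    # non-local reference (a write into a neighbor's inbox).
    local = pid * pid
    inbox[(pid + 1) % NPROCS] = local
    barrier.wait()  # all processors block here until communication completes
    # Superstep 2: values communicated in the previous superstep
    # are now guaranteed to be visible.
    result[pid] = inbox[pid]

threads = [threading.Thread(target=worker, args=(p,)) for p in range(NPROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(result)  # each processor receives its left neighbor's square
```

The essential point of the model is visible even in this sketch: no processor reads a communicated value until the barrier has been crossed, so correctness does not depend on the relative speeds of the processors within a superstep.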

For those committed to exploring all avenues toward more efficient utilization of parallel environments, Palepu's presentation is certainly worth a look – or even more, given that Harvard and Oxford are actively pursuing work in this area.

Fermilab’s Computation Division
http://www.fnal.gov/cd/

A good antidote for those inclined toward cynicism about nationally-funded HPC efforts, Fermilab provides a wealth of useful resources for those with interests extending into areas beyond its specialty of particle physics proper, including UNIX and standard resources, CAD, parallel and distributed computing, and databases.

Visualization of Parallel and Distributed Programs
http://www.cc.gatech.edu/gvu/softviz/parviz/parviz.html

This remarkable Georgia Tech site provides numerous striking images generated through PARADE, an environment designed to enable visualization of the dynamics of parallel and distributed programs. As visualization is proving to be an invaluable tool for understanding physical, chemical, economic and other complex processes, this site has taken the matter a step further, turning the mirror of illuminating graphics upon advanced computation itself. The results must be seen to be appreciated. Devotees of 20th century art (like this reviewer) will find the experience as impressive aesthetically as it is analytically revealing.

https://www.hpcwire.com/1996/07/04/hpc-web-sites-of-interest/
