Workshop on Programming and Performance Visualization Tools (ProTools 19)
Denver, Colorado, USA
November 17, 2019 (Sunday)
Held in conjunction with SC19: The International Conference on High Performance Computing, Networking, Storage and Analysis, and
in cooperation with TCHPC: The IEEE Computer Society Technical Consortium on High Performance Computing
Sponsor: SPPEXA (Software for Exascale Computing, a priority program of the German Research Foundation, DFG)
Understanding program behavior is critical to overcoming the architectural and programming complexities that arise on modern HPC platforms, such as limited power budgets, heterogeneity, hierarchical memories, shrinking I/O bandwidths, and performance variability. To do so, HPC software developers need intuitive support tools for debugging, performance measurement, analysis, and tuning of large-scale HPC applications. Moreover, the data collected by these tools, such as hardware counters, communication traces, and network traffic, can be far too large and complex to analyze in a straightforward manner. We need new automatic analysis and visualization approaches that help application developers intuitively understand the multiple, interdependent effects their algorithmic choices have on application correctness and performance. The ProTools workshop combines two prior SC workshops: the Workshop on Visual Performance Analytics (VPA) and the Workshop on Extreme-Scale Programming Tools (ESPT).
The Workshop on Programming and Performance Visualization Tools (ProTools) brings together HPC application developers, tool developers, and researchers from the visualization, performance, and program analysis fields to exchange new approaches for analyzing, understanding, and optimizing programs on extreme-scale platforms. Topics of interest include, but are not limited to:
- Performance tools for scalable parallel platforms
- Debugging and correctness tools for parallel programming paradigms
- Scalable displays of performance data
- Case studies demonstrating the use of performance visualization in practice
- Program development tool chains (incl. IDEs) for parallel systems
- Methodologies for performance engineering
- Data models to enable scalable visualization
- Graph representation of unstructured performance data
- Tool technologies for extreme-scale challenges (e.g., scalability, resilience, power)
- Tool support for accelerated architectures and large-scale multi-cores
- Presentation of high-dimensional data
- Visual correlation between multiple data sources
- Measurement and optimization tools for networks and I/O
- Tool infrastructures and environments
- Human-Computer Interfaces for exploring performance data
- Multi-scale representations of performance data for visual exploration
- Application developer experiences with programming and performance tools
Previous Workshops
- ESPT 18 (Dallas, TX, USA)
- VPA 18 (Dallas, TX, USA)
- ESPT 17 (Denver, CO, USA)
- VPA 17 (Denver, CO, USA)
- ESPT 16 (Salt Lake City, UT, USA)
- VPA 16 (Salt Lake City, UT, USA)
- ESPT 15 (Austin, TX, USA)
- VPA 15 (Austin, TX, USA)
- ESPT 14 (New Orleans, LA, USA)
- VPA 14 (New Orleans, LA, USA)
- ESPT 13 (Denver, CO, USA)
- ESPT 12 (Salt Lake City, UT, USA)
Call for Papers
We solicit 8-page full papers as well as 4-page short papers and position papers that focus on performance, debugging, and correctness tools for parallel programming paradigms as well as techniques and case studies at the intersection of performance analysis and visualization.
Papers must be submitted in PDF format (readable by Adobe Acrobat Reader 5.0 and higher) and formatted for 8.5” x 11” (U.S. Letter). Submissions are limited to 8 pages in the IEEE Conference format, using the sample-sigconf template. The 8-page limit includes figures, tables, and references.
All papers must be submitted through the Supercomputing 2019 Linklings site. Submitted papers will be peer-reviewed and accepted papers will be published by IEEE TCHPC.
Reproducibility at ProTools19
For ProTools19, we adopt the reproducibility model of the SC19 technical papers program. Participation in the reproducibility initiative is optional but highly encouraged. To participate, authors provide a completed Artifact Description appendix (at most 2 pages) along with their submission, using the SC19 appendix format (see template). Note: a paper cannot be disqualified based on information provided, or not provided, in this appendix, nor for omitting the appendix altogether. However, the availability and quality of an appendix may be used in ranking a paper; in particular, if two papers are of similar quality, the existence and quality of their appendices can be part of the evaluation process. For more information, please refer to the SC19 reproducibility page and the FAQs below.
FAQ for authors
Q. Is the Artifact Description appendix required in order to submit a paper to ProTools 19?
A. No. These appendices are not required. If you do not submit any appendix, it will not disqualify your submission. At the same time, if two papers are otherwise comparable in quality, the existence and quality of appendices can be a factor in ranking one paper over another.
Q. Do I need to make my software open source in order to complete the Artifact Description appendix?
A. No. You are not required to make your software open source, nor to make any changes to your computing environment, in order to complete the appendix. The Artifact Description appendix is meant to document the computing environment you used to produce your results, reducing barriers to future replication of those results. However, in order to be eligible for the ACM Artifacts Available badge, your software must be downloadable by anyone without restriction.
Q. Who will review my appendices?
A. The Artifact Description and Computational Results Analysis appendices will be submitted at the same time as your paper and will be reviewed as part of the standard review process by the same reviewers who handle the rest of your paper.
Q. Does the Artifact Description appendix really impact scientific reproducibility?
A. The Artifact Description appendix is simply a description of the computing environment used to produce the results in a paper. By itself, the appendix does not directly improve scientific reproducibility. However, if it is done well, it can be used by scientists (including the authors at a later date) to more easily replicate and build upon the results in the paper. The appendix can therefore reduce the barriers and costs of replicating published results, an important first step toward full scientific reproducibility.
Important Dates
- Submission deadline: September 9, 2019 (AoE), extended from August 26, 2019
- Notification of acceptance: September 30, 2019 (AoE)
- Camera-ready deadline: October 7, 2019 (AoE)
The workshop takes place on Sunday, November 17, 2019, from 9:00 am to 5:30 pm in room 704-706. The detailed workshop program is published below and on the SC19 schedule page.
|9:00-9:10||Opening remarks. Abhinav Bhatele, David Boehme, Josef Weidendorfer, Tom Vierjahn.|
|9:10-10:00||Keynote: Hardware Performance Monitoring Landscape. Stephane Eranian (Google). Slides|
|10:30-11:00||Understanding the Performance of GPGPU Applications from a Data-Centric View. Hui Zhang, Jeff Hollingsworth. Slides|
|11:00-11:30||Asvie: A timing-agnostic SVE optimization methodology. Miguel Tairum, Daniel Ruiz, Roxana Rusitoru|
|11:30-12:00||Designing Efficient Parallel Software via Compositional Performance Modeling. Alexandru Calotoiu, Thomas Höhl, Heiko Mantel, Toni Nguyen, Felix Wolf.|
|12:00-12:30||Performance Analysis of Tile Low-Rank Cholesky Factorization Using PaRSEC Instrumentation Tools. Qinglei Cao, Yu Pei, Thomas Herault, Kadir Akbudak, Aleksandr Mikhalev, George Bosilca, Hatem Ltaief, David Keyes, Jack Dongarra. Slides|
|2:00-2:15||The Case for a Common Instrumentation Interface for HPC Codes. David Boehme, Kevin Huck, Jonathan Madsen, Josef Weidendorfer.|
|2:15-3:00||Keynote: The Many Faces of Instrumentation: Debugging and Better Performance using LLVM in HPC. Hal Finkel (Argonne National Laboratory). Slides|
|3:30-4:00||Automatic Instrumentation Refinement for Empirical Performance Modeling. Jan-Patrick Lehr, Alexandru Calotoiu, Christian Bischof, Felix Wolf.|
|4:00-4:15||Multi-Level Performance Instrumentation for Kokkos Applications using TAU. Sameer Shende, Nicholas Chaimov, Allen Malony, Neena Imam.|
|4:15-4:30||CHAMPVis: Comparative Hierarchical Analysis of Microarchitectural Performance. Lillian Pentecost, Udit Gupta, Elisa Ngan, Johanna Beyer, Gu-Yeon Wei, David Brooks, Michael Behrisch.|
|4:30-5:00||Visualization of Performance Metrics in the Context of Machine, Application, and Communication Domains. Allen Sanderson, John Schmidt, Alan Humphrey, Michael Papka, Robert Sisneros.|
|5:00-5:30||Towards A Programmable Analysis and Visualization Framework for Interactive Performance Analytics. Tanzima Islam, Alexis Alaya, Quentin Jensen, Khaled Ibrahim.|
Workshop Organizers
Abhinav Bhatele, University of Maryland, College Park, USA
David Boehme, Lawrence Livermore National Laboratory, USA
Tom Vierjahn, Westphalian University of Applied Sciences, Germany
Josef Weidendorfer, Leibniz Supercomputing Centre Munich, Germany
Contact e-mail: email@example.com.
Program Committee
Felix Wolf, TU Darmstadt, Germany
Jean-Baptiste Besnard, ParaTools, France
Harsh Bhatia, Lawrence Livermore National Laboratory, USA
Holger Brunst, TU Dresden, Germany
Alexandru Calotoiu, TU Darmstadt, Germany
Karl Fürlinger, Ludwig Maximilian University of Munich, Germany
Todd Gamblin, Lawrence Livermore National Laboratory, USA
Judit Gimenez, Barcelona Supercomputing Center, Spain
Marc-Andre Hermanns, RWTH Aachen University, Germany
Katherine Isaacs, University of Arizona, USA
William Jalby, Université de Versailles St-Quentin-en-Yvelines, France
Andreas Knüpfer, TU Dresden, Germany
Joshua A. Levine, University of Arizona, USA
John Linford, Arm, USA
Allen D. Malony, University of Oregon, USA
Naoya Maruyama, Lawrence Livermore National Laboratory, USA
Bart Miller, University of Wisconsin-Madison, USA
Paul Rosen, University of South Florida, USA
Martin Schulz, Technical University of Munich, Germany
Nathan Tallent, Pacific Northwest National Laboratory, USA
Brian Wylie, Jülich Supercomputing Centre, Germany