An introduction to compilation issues for parallel machines
- Author
- William Carlson and Maya Gokhale
- Subjects
- Computer science, Distributed computing, Program transformation, Parallel computing, Dependence analysis, Theoretical Computer Science, Scheduling (computing), Automatic parallelization, Program analysis, Hardware and Architecture, Programmer, Pointer analysis, Massively parallel, Software, Information Systems
- Abstract
- The exploitation of today's high-performance computer systems requires the effective use of parallelism in many forms and at numerous levels. This survey article discusses program analysis and restructuring techniques that target parallel architectures. We first describe various categories of architectures that are oriented toward parallel computation models: vector architectures, shared-memory multiprocessors, massively parallel machines, message-passing architectures, VLIWs, and multithreaded architectures. We then describe a variety of optimization techniques that can be applied to sequential programs to make effective use of the vector and parallel processing units. After an overview of basic dependence analysis, we present restructuring transformations on DO loops targeted both at vectorization and at concurrent execution, interprocedural and pointer analysis, task scheduling, instruction-level parallelization, and compiler-assisted data placement. We conclude that although tremendous advances have been made in dependence theory and in the development of a “toolkit” of transformations, parallel systems are used most effectively when the programmer participates in the optimization process.
- Published
- 1992
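To make the abstract's mention of DO-loop restructuring concrete, here is a minimal sketch, not taken from the paper, of one transformation from the "toolkit" it surveys: loop distribution. The example is written in C rather than the Fortran DO loops the survey discusses, and the function names, array size, and loop bodies are illustrative assumptions. Dependence analysis shows that the first statement is independent across iterations while the second is a recurrence, so splitting the loop isolates the vectorizable part.

```c
/* Illustrative sketch of loop distribution (not from the paper).
 * N and the function names are hypothetical. */
#define N 1024

/* Original loop: the recurrence in S2 blocks vectorization of the
 * whole loop body, even though S1 has no loop-carried dependence. */
void fused(float a[N], float b[N], const float c[N]) {
    for (int i = 1; i < N; i++) {
        a[i] = a[i] + c[i];      /* S1: independent across iterations */
        b[i] = 2.0f * b[i - 1];  /* S2: loop-carried dependence on b  */
    }
}

/* After distribution: S1 gets its own loop, which a vectorizing
 * compiler can map onto vector hardware or run concurrently;
 * S2 remains a sequential recurrence. */
void distributed(float a[N], float b[N], const float c[N]) {
    for (int i = 1; i < N; i++)
        a[i] = a[i] + c[i];
    for (int i = 1; i < N; i++)
        b[i] = 2.0f * b[i - 1];
}
```

The transformation is legal here because dependence analysis finds no dependence between S1 and S2 in either direction, so reordering them into separate loops preserves the original semantics; when such cross-statement dependences do exist, they constrain the order in which the distributed loops may appear.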