Parallel computing is one of the most sought-after approaches to speeding up computation. Many computationally intensive problems can be divided into smaller subproblems, each of which can be assigned to an independent processor with shared or independent resources.
The level of complexity varies depending on whether these processors are on the same workstation or on different workstations, and, if on the same workstation, whether they share memory or have independent memory units with communication possible between the processors. If, on the other hand, the machines are distributed across an intranet or a WAN, the communication overhead becomes large and the probability of data loss also increases.
A program written in a high-level language such as C/C++ or Fortran has to be converted into terms the hardware understands: the machine language (instruction set) of the target architecture. This translation is done by compilers and interpreters. Parallel compilers are programs that attempt to parallelize a program during compilation. There are two approaches to parallel compilation:
Programming in Existing Languages
The program is coded in an existing language, and an optimizing compiler is used to extract parallelism from the program for substantial performance improvement.
Data Parallel Programming
Parallel constructs (supported by an augmented language specification) are added to the program and converted into the standard language by a simple preprocessor. Researchers have often worked on machine-independent data-parallel programming.
Data parallel languages simplify parallel programming by eliminating the need to explicitly manage concurrency, communication, and synchronization. The abstraction is based on data parallelism, a strategy for breaking large computations into parallel element-wise operations on large data structures. The result is portability, but maintaining this abstraction is costly. In particular, it requires complex compilers capable of mapping a data-parallel program into explicitly parallel code that matches the underlying parallel computer and programming tools that can explain the behavior and performance of executing programs in terms of the abstract program source. The cost of building the complex compiler and tools is justifiable, however, because supporting the high-level abstraction simplifies parallel programming and results in more portable programs.
Extracting parallelism from existing programs requires a new compiler. Early researchers felt it was necessary to build such a compiler in order to design a parallel architecture.