Parallel Computing Environment

Definition of Parallel Computing Environment – Parallel computing is a computing environment in which many calculations, or the execution of many processes, are carried out simultaneously. It is a type of computing architecture in which several processors execute an application or computation at the same time. In this system, large problems can be divided into smaller ones, which are then solved concurrently. Put another way, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem. Parallel computing can be classified into bit-level, instruction-level, data, and task parallelism.

What are the steps followed in a Parallel Computing Environment?

  1. The large problem is broken down into discrete parts that can be solved concurrently.
  2. Each part is further broken down into a series of instructions.
  3. The instructions from distinct parts are executed simultaneously on different processors.
  4. An overall control and coordination mechanism manages the many processes and processors, as the sketch after this list illustrates.
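
As a concrete illustration of these steps, here is a minimal sketch using POSIX threads (the array size, thread count, and names are arbitrary choices for illustration, not part of any standard): a large array sum is broken into discrete parts, each part runs as its own series of instructions on a separate thread, and the main thread coordinates the results.

    /* Decomposition sketch: sum a large array in NTHREADS concurrent parts.
       Compile with: cc -pthread sum.c */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 4

    static long data[N];
    static long partial[NTHREADS];

    static void *sum_part(void *arg) {
        long id = (long)arg;                     /* which discrete part this is */
        long chunk = N / NTHREADS;
        long start = id * chunk;
        long end = (id == NTHREADS - 1) ? N : start + chunk;
        long s = 0;
        for (long i = start; i < end; i++)       /* this part's instructions */
            s += data[i];
        partial[id] = s;                         /* no sharing between parts */
        return NULL;
    }

    int main(void) {
        for (long i = 0; i < N; i++)
            data[i] = i;

        pthread_t tid[NTHREADS];
        for (long t = 0; t < NTHREADS; t++)      /* parts run concurrently */
            pthread_create(&tid[t], NULL, sum_part, (void *)t);

        long total = 0;
        for (long t = 0; t < NTHREADS; t++) {    /* control and coordination */
            pthread_join(tid[t], NULL);
            total += partial[t];
        }
        printf("total = %ld\n", total);
        return 0;
    }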

Programming Concepts of Parallel Computing Environment

A parallel task consists of multiple streams of program instructions executing simultaneously; each such stream is called a thread. A process may have one or more threads. All threads associated with a process must run on the same node, because they typically communicate through the main memory that the node's processors share. Multiple processes can execute on one node or on many. When multiple processes of one application run on multiple nodes, they must communicate over the network using a message-passing paradigm. Because the network is slower than fetching data from memory or disk, programmers should design their code to minimize data transfers across the network.
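
To make the shared-memory point concrete, here is a minimal sketch, again assuming POSIX threads (the counter and its name are purely illustrative): two threads of one process communicate through a single variable in the process's main memory, with a mutex coordinating their updates.

    /* Two threads of one process sharing main memory.
       Compile with: cc -pthread counter.c */
    #include <pthread.h>
    #include <stdio.h>

    static long shared_counter = 0;      /* lives in the process's memory */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);   /* coordinate access to shared data */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", shared_counter);  /* 200000: both threads
                                                       saw the same memory */
        return 0;
    }

Processes running on different nodes have no such shared variable, which is why they must exchange messages over the network instead.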

What are the types of Parallelism?

There are three basic types of parallelism: bit-level parallelism, instruction-level parallelism, and task parallelism.

Bit level Parallelism – This is a form of parallel computing based on increasing the processor word size, which reduces the number of instructions the processor must execute to accomplish a task. Bit-level parallelism has driven VLSI computer chip design since the 1970s, and nowadays 64-bit processors are used in abundance. Bit-level parallelism exploits redundant logic, keeps all circuits busy, and reduces the critical path.
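
The word-size effect can be seen in plain C. In the sketch below (the buffer size and function names are arbitrary), XORing two buffers one byte at a time takes eight times as many loop iterations as doing the same work through 64-bit words, because each wide operation covers eight bytes at once.

    /* Bit-level parallelism: the same XOR job with 8-bit vs 64-bit words. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define N 1024                        /* bytes; a multiple of 8 */

    /* 8-bit words: N iterations */
    static void xor_bytes(uint8_t *dst, const uint8_t *src, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] ^= src[i];
    }

    /* 64-bit words: N/8 iterations for the same total work */
    static void xor_words(uint8_t *dst, const uint8_t *src, size_t n) {
        uint64_t a, b;
        for (size_t i = 0; i + 8 <= n; i += 8) {
            memcpy(&a, dst + i, 8);       /* memcpy sidesteps alignment rules */
            memcpy(&b, src + i, 8);
            a ^= b;                       /* one operation covers 64 bits */
            memcpy(dst + i, &a, 8);
        }
    }

    int main(void) {
        uint8_t x[N] = {0}, y[N];
        for (size_t i = 0; i < N; i++)
            y[i] = (uint8_t)i;
        xor_bytes(x, y, N);               /* x becomes y */
        xor_words(x, y, N);               /* XOR again: x returns to zero */
        printf("x[100] = %u\n", x[100]);  /* prints 0 */
        return 0;
    }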

Instruction Level Parallelism – This is a measure of how many instructions in a computer program can be executed simultaneously. A processor that can complete only less than one instruction per clock cycle is called a subscalar processor; instruction-level parallelism aims to raise that rate by issuing several instructions at once. There are two major approaches to accomplishing ILP: hardware and software. The hardware-level approach works on dynamic parallelism, whereas the software-level approach works on static parallelism. In dynamic parallelism the processor decides at run time which instructions to execute in parallel, whereas in static parallelism the compiler decides at compile time which instructions to execute in parallel.
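
The following sketch (the function names and constants are illustrative) shows the kind of code ILP exploits: the four multiplies in the first function have no data dependencies, so a superscalar processor, or a compiler scheduling statically, can overlap them; the chain in the second function must run one step at a time.

    #include <stdio.h>

    /* Four independent multiplies: no result feeds into another, so
       the hardware may issue them in the same clock cycle. */
    static double step_independent(double a, double b, double c, double d) {
        a *= 1.1;
        b *= 1.2;
        c *= 1.3;
        d *= 1.4;
        return a + b + c + d;
    }

    /* A dependency chain: each multiply needs the previous result,
       so these instructions cannot overlap. */
    static double step_dependent(double a) {
        a *= 1.1;
        a *= 1.2;
        a *= 1.3;
        a *= 1.4;
        return a;
    }

    int main(void) {
        printf("%f %f\n", step_independent(1, 1, 1, 1), step_dependent(1));
        return 0;
    }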

Task Parallelism – Task parallelism is the concept of decomposing a single task into sub-tasks and then allocating each sub-task to a processor for execution at run time. The processors execute these sub-tasks simultaneously. Task parallelism usually does not scale with the size of the problem. For example, pipelining consists of moving a single set of data through a series of separate tasks, where each task can execute independently of the others.
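
As a closing sketch of task decomposition (again with POSIX threads; the sub-tasks chosen here are arbitrary), a single job of summarising an array is split into two distinct sub-tasks, finding the minimum and finding the maximum, each allocated to its own thread.

    /* Task parallelism: two different sub-tasks run at the same time.
       Compile with: cc -pthread tasks.c */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000
    static int data[N];
    static int min_val, max_val;

    static void *find_min(void *arg) {           /* sub-task 1 */
        (void)arg;
        min_val = data[0];
        for (int i = 1; i < N; i++)
            if (data[i] < min_val) min_val = data[i];
        return NULL;
    }

    static void *find_max(void *arg) {           /* sub-task 2 */
        (void)arg;
        max_val = data[0];
        for (int i = 1; i < N; i++)
            if (data[i] > max_val) max_val = data[i];
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            data[i] = (i * 37) % 1000;           /* arbitrary test data */

        pthread_t t1, t2;
        pthread_create(&t1, NULL, find_min, NULL);  /* distinct sub-tasks */
        pthread_create(&t2, NULL, find_max, NULL);  /* run simultaneously */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("min=%d max=%d\n", min_val, max_val);
        return 0;
    }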
