Friday, November 23, 2007

Assignment: Message Passing Interface


SUBJECTIVE QUESTIONS

Q1: Write the features of Message Passing Interface (MPI).

The goal of the Message Passing Interface (MPI), simply stated, is to develop a widely used standard for writing message-passing programs. As such, the interface should establish a practical, portable, efficient, and flexible standard for message passing. The standard is maintained by the MPI Forum.
MPI is a standard, portable message-passing library developed in 1993 by a group of parallel computer vendors, software writers, and application scientists. It is available to both Fortran and C programs and runs on a wide variety of parallel machines. Its target platform is a distributed memory system such as the SP.
MPI is a set of library routines used to design scalable parallel applications. These routines provide a wide range of operations that include computation, communication, and synchronization. MPI 1.2 is the current standard supported by major vendors.
Features:
· MPI has full asynchronous communication: immediate send and receive operations can completely overlap computation.
· MPI groups are solid, efficient, and deterministic: group membership is static, so there are no race conditions caused by processes independently entering and leaving a group. New group formation is collective, and group membership information is distributed, not centralized.
· MPI efficiently manages message buffers: messages are sent and received directly from user data structures, not from staging buffers within the communication library. Buffering may, in some cases, be avoided entirely.
· MPI is a standard: its features and behaviour were arrived at by consensus in an open forum, and it can change only by the same process.
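To make the asynchronous-communication feature concrete, below is a minimal sketch (not part of the original notes) of how immediate sends and receives can overlap computation. The file name and the placeholder do_local_work() are purely illustrative, and the program assumes exactly two processes.

/* isend_overlap.c -- illustrative sketch of immediate (nonblocking)
   communication overlapping computation; assumes exactly 2 processes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, partner, out, in;
    MPI_Request sreq, rreq;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (size != 2) {
        fprintf(stderr, "run with exactly 2 processes\n");
        MPI_Finalize();
        return 1;
    }
    partner = 1 - rank;             /* the other process */
    out = rank;

    /* post the communication immediately; both calls return at once */
    MPI_Irecv(&in, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &rreq);
    MPI_Isend(&out, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &sreq);

    /* ... useful local computation could run here, overlapping the
       message transfer (e.g. a hypothetical do_local_work()) ... */

    /* complete both operations before using the data or reusing the buffer */
    MPI_Wait(&rreq, &status);
    MPI_Wait(&sreq, &status);
    printf("rank %d received %d\n", rank, in);

    MPI_Finalize();
    return 0;
}
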
Reasons for using MPI:
o Standardization - MPI is the only message-passing library that can be considered a standard. It is supported on virtually all HPC platforms.
o Portability - there is no need to modify your source code when you port your application to a different platform that supports MPI.
o Performance - vendor implementations should be able to exploit native hardware features to optimize performance.
o Functionality - over 115 routines are defined.
o Availability - a variety of implementations are available, both vendor and public domain.
The target platform is a distributed memory system, including massively parallel machines, SMP clusters, workstation clusters, and heterogeneous networks.
All parallelism is explicit: the programmer is responsible for correctly identifying parallelism and implementing the resulting algorithm using MPI constructs.
The number of tasks dedicated to running a parallel program is static. New tasks cannot be dynamically spawned during run time (MPI-2 is attempting to address this issue).
MPI can be used with C and Fortran programs. C++ and Fortran 90 language bindings are being addressed by MPI-2.

Q2: Give the principles on which the concept of Parallel Virtual Machine
(PVM) is based.

PVM was developed by Oak Ridge National Laboratory in conjunction with several universities, principal among them the University of Tennessee at Knoxville and Emory University. The original intent was to facilitate high performance scientific computing by exploiting parallelism whenever possible. Because PVM utilizes existing heterogeneous networks (Unix at first) and existing languages (FORTRAN, C, and C++), there was no cost for new hardware, and the costs for design and implementation were minimized.
Briefly, the principles upon which PVM is based include the following:
· User-configured host pool : The application's computational tasks execute on a set of machines that are selected by the user for a given run of the PVM program. Both single-CPU machines and hardware multiprocessors (including shared-memory and distributed-memory computers) may be part of the host pool. The host pool may be altered by adding and deleting machines during operation (an important feature for fault tolerance).
· Translucent access to hardware: Application programs either may view the hardware environment as an attributeless collection of virtual processing elements or may choose to exploit the capabilities of specific machines in the host pool by positioning certain computational tasks on the most appropriate computers.
· Process-based computation: The unit of parallelism in PVM is a task (often but not always a Unix process), an independent sequential thread of control that alternates between communication and computation. No process-to-processor mapping is implied or enforced by PVM; in particular, multiple tasks may execute on a single processor.
· Explicit message-passing model: Collections of computational tasks, each performing a part of an application's workload using data-, functional-, or hybrid decomposition, cooperate by explicitly sending and receiving messages to one another. Message size is limited only by the amount of available memory.
· Heterogeneity support: The PVM system supports heterogeneity in terms of machines, networks, and applications. With regard to message passing, PVM permits messages containing more than one datatype to be exchanged between machines having different data representations.
· Multiprocessor support: PVM uses the native message-passing facilities on multiprocessors to take advantage of the underlying hardware. Vendors often supply their own optimized PVM for their systems, which can still communicate with the public PVM version.
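As a concrete illustration of the explicit message-passing and heterogeneity principles above, the fragment below sketches how a PVM task might pack values of different datatypes into a single message. The tag 10 and the variable dest_tid are placeholders chosen for illustration; PvmDataDefault selects an encoding suitable for hosts with different data representations.

int    n = 5;
double x = 3.14;
int    dest_tid;                /* TID of the receiving task (obtained elsewhere) */

/* sender: pack an int and a double into one message and send it */
pvm_initsend(PvmDataDefault);   /* portable encoding for heterogeneous hosts */
pvm_pkint(&n, 1, 1);
pvm_pkdouble(&x, 1, 1);
pvm_send(dest_tid, 10);

/* receiver: unpack the items in the same order they were packed */
pvm_recv(-1, 10);               /* -1 matches a message from any task */
pvm_upkint(&n, 1, 1);
pvm_upkdouble(&x, 1, 1);
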


Q3: Explain the Parallel Virtual Machine in detail.
The PVM system is composed of two parts. The first part is a daemon, called pvmd3 and sometimes abbreviated pvmd, that resides on all the computers making up the virtual machine. (An example of a daemon program is the mail program that runs in the background and handles all the incoming and outgoing electronic mail on a computer.) Pvmd3 is designed so that any user with a valid login can install this daemon on a machine. When a user wishes to run a PVM application, he first creates a virtual machine by starting up PVM. The PVM application can then be started from a Unix prompt on any of the hosts. Multiple users can configure overlapping virtual machines, and each user can execute several PVM applications simultaneously.
The second part of the system is a library of PVM interface routines. It contains a functionally complete repertoire of primitives that are needed for cooperation between tasks of an application. This library contains user-callable routines for message passing, spawning processes, coordinating tasks, and modifying the virtual machine.
The PVM computing model is based on the notion that an application consists of several tasks. Each task is responsible for a part of the application's computational workload. Sometimes an application is parallelized along its functions; that is, each task performs a different function, for example, input, problem setup, solution, output, and display. This is often called functional parallelism. A more common method of parallelizing an application is called data parallelism. In this method all the tasks are the same, but each one knows and solves only a small part of the data. This is also referred to as the SPMD (single-program multiple-data) model of computing. PVM supports either of these methods or a mixture of them. Depending on their functions, tasks may execute in parallel and may need to synchronize or exchange data, although this is not always the case. A diagram of the PVM computing model and an architectural view of the PVM system, highlighting the heterogeneity of the computing platforms supported by PVM, are shown in the accompanying figures.
The PVM system currently supports the C, C++, and Fortran languages. This set of language interfaces has been included based on the observation that the predominant majority of target applications are written in C and Fortran, with an emerging trend toward experimenting with object-based languages and methodologies.
The C and C++ language bindings for the PVM user interface library are implemented as functions, following the general conventions used by most C systems, including Unix-like operating systems.
Fortran language bindings are implemented as subroutines rather than as functions. This approach was taken because some compilers on the supported architectures would not reliably interface Fortran functions with C functions. One immediate implication of this is that an additional argument is introduced into each PVM library call for status results to be returned to the invoking program.
All PVM tasks are identified by an integer task identifier (TID). Messages are sent to and received from TIDs. Since TIDs must be unique across the entire virtual machine, they are supplied by the local pvmd and are not user chosen. Although PVM encodes information into each TID, the user is expected to treat TIDs as opaque integer identifiers. PVM contains several routines that return TID values so that the user application can identify other tasks in the system.
There are applications where it is natural to think of a group of tasks, and there are cases where a user would like to identify his tasks by the numbers 0 to (p - 1), where p is the number of tasks. PVM includes the concept of user-named groups. When a task joins a group, it is assigned a unique "instance" number in that group. Instance numbers start at 0 and count up. In keeping with the PVM philosophy, the group functions are designed to be very general and transparent to the user. For example, any PVM task can join or leave any group at any time without having to inform any other task in the affected groups. Also, groups can overlap, and tasks can broadcast messages to groups of which they are not a member. To use any of the group functions, a program must be linked with libgpvm3.a.
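As an illustration of these group functions, here is a small, hedged sketch that is not from the original text. The group name "workers", the group size of 4, and the message tag 99 are assumptions chosen for the example, and the program must be linked with libgpvm3.a.

#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int inst, data = 42;

    /* join (or implicitly create) the named group; the returned
       instance number starts at 0 and counts up as tasks join */
    inst = pvm_joingroup("workers");

    /* block until the assumed 4 members have reached the barrier */
    pvm_barrier("workers", 4);

    if (inst == 0) {
        /* instance 0 packs an integer and broadcasts it to the group */
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&data, 1, 1);
        pvm_bcast("workers", 99);
    } else {
        pvm_recv(-1, 99);           /* receive the broadcast message */
        pvm_upkint(&data, 1, 1);
        printf("instance %d received %d\n", inst, data);
    }

    pvm_lvgroup("workers");         /* leave the group */
    pvm_exit();
    return 0;
}
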
The general paradigm for application programming with PVM is as follows. A user writes one or more sequential programs in C, C++, or Fortran 77 that contain embedded calls to the PVM library. Each program corresponds to a task making up the application. These programs are compiled for each architecture in the host pool, and the resulting object files are placed at a location accessible from machines in the host pool. To execute an application, a user typically starts one copy of one task (usually the "master" or "initiating" task) by hand from a machine within the host pool. This process subsequently starts other PVM tasks, eventually resulting in a collection of active tasks that then compute locally and exchange messages with each other to solve the problem. Note that while the above is a typical scenario, as many tasks as appropriate may be started manually. As mentioned earlier, tasks interact through explicit message passing, identifying each other with a system-assigned, opaque TID.
Figure: PVM program hello.c
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int cc, tid, msgtag;
    char buf[100];

    /* print this task's TID (the first PVM call enrolls the task) */
    printf("i'm t%x\n", pvm_mytid());

    /* spawn one copy of hello_other on any available host */
    cc = pvm_spawn("hello_other", (char**)0, 0, "", 1, &tid);

    if (cc == 1) {
        msgtag = 1;
        pvm_recv(tid, msgtag);      /* wait for the reply */
        pvm_upkstr(buf);            /* unpack the string it carries */
        printf("from t%x: %s\n", tid, buf);
    } else
        printf("can't start hello_other\n");

    pvm_exit();                     /* leave the PVM session */
    return 0;
}
Figure: PVM program hello_other.c
#include <string.h>
#include <unistd.h>
#include "pvm3.h"

int main(void)
{
    int ptid, msgtag;
    char buf[100];

    /* TID of the task that spawned this one */
    ptid = pvm_parent();

    /* build the greeting, appending this host's name */
    strcpy(buf, "hello, world from ");
    gethostname(buf + strlen(buf), 64);

    /* pack the string and send it back to the parent */
    msgtag = 1;
    pvm_initsend(PvmDataDefault);
    pvm_pkstr(buf);
    pvm_send(ptid, msgtag);

    pvm_exit();                     /* leave the PVM session */
    return 0;
}

Q4: Differences between MPI and PVM.

When your program must be able to use the resources of multiple systems, you choose between MPI and PVM. In many ways, MPI and PVM are similar:
· Each is designed, specified, and implemented by third parties that have no direct interest in selling hardware.
· Support for each is available over the Internet at low or no cost.
· Each defines portable, high-level functions that are used by a group of processes to make contact and exchange data without having to be aware of the communication medium.
· Each supports C and Fortran 77.
· Each provides for automatic conversion between different representations of the same kind of data so that processes can be distributed over a heterogeneous computer network.
The chief differences between the current versions of PVM and MPI libraries are as follows:
· PVM supports dynamic spawning of tasks, whereas MPI does not.
· PVM supports dynamic process groups; that is, groups whose membership can change dynamically at any time during a computation. MPI does not support dynamic process groups.
MPI does not provide a mechanism to build a group from scratch, but only from other groups that have been defined previously. Closely related to groups in MPI are communicators, which specify the communication context for a communication operation and an ordered process group that shares this communication context. The chief difference between PVM groups and MPI communicators is that any PVM task can join/leave a group independently, whereas in MPI all communicator operations are collective.
· A PVM task can add or delete a host from the virtual machine, thereby dynamically changing the number of machines a program runs on. This is not available in MPI.
· A PVM program (or any of its tasks) can request various kinds of information from the PVM library about the collection of hosts on which it is running, the tasks that make up the program, and a task's parent. The MPI library does not provide such calls.
· Some of the collective communication calls in PVM (for instance, pvm_reduce()) are nonblocking. The MPI collective communication routines are not required to return as soon as their participation in the collective communication is complete.
· PVM provides two methods of signaling other PVM tasks: sending a UNIX signal to another task, and notifying a task about an event (from a set of predefined events) by sending it a message with a user-specified tag that the application can check. A PVM call is also provided through which a task can kill another PVM task. These functions are not available in MPI.
· A task can leave/unenroll from a PVM session as many times as it wants, whereas an MPI task must initialize/finalize exactly once.
· A PVM task need not explicitly enroll: the first PVM call enrolls the calling task into a PVM session. An MPI task must call MPI_Init() before calling any other MPI routine and it must call this routine only once.
· A PVM task can be registered by another task as responsible for adding new PVM hosts, or as a PVM resource manager, or as responsible for starting new PVM tasks. These features are not available in MPI.
· A PVM task can multicast data to a set of tasks. As opposed to a broadcast, this multicast does not require the participating tasks to be members of a group. MPI does not have a routine to do multicasts.
· PVM tasks can be started in debug mode (that is, under the control of a debugger of the user's choice). This capability is not specified in the MPI standard, although it can be provided on top of MPI in some cases.
· In PVM, a user can use the pvm_catchout() routine to specify collection of task outputs in various ways. The MPI standard does not specify any means to do this.
· PVM includes a receive routine with a timeout capability, which allows the user to block on a receive for a user-specified amount of time. MPI does not have a corresponding call.
· PVM includes a routine that allows users to define their own receive contexts to be used by subsequent PVM receive routines. Communicators in MPI provide this type of functionality to a limited extent.
On the other hand, MPI provides several features that are not available in PVM, including a variety of communication modes, communicators, derived data types, additional group management facilities, and virtual process topologies, as well as a larger set of collective communication calls.
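As a concrete illustration of the group and communicator facilities just mentioned, here is a small, hedged sketch that is not from the original text. It assumes at least two processes and derives a new communicator containing ranks 0 and 1 from the group of MPI_COMM_WORLD, since MPI cannot build a group from scratch.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, ranks[2] = {0, 1};    /* ranks to include in the subgroup */
    MPI_Group world_group, sub_group;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* extract the group of an existing communicator ... */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    /* ... derive a new group from it ... */
    MPI_Group_incl(world_group, 2, ranks, &sub_group);
    /* ... and create a communicator for it; unlike PVM's independent
       join/leave, this call is collective over MPI_COMM_WORLD */
    MPI_Comm_create(MPI_COMM_WORLD, sub_group, &sub_comm);

    if (sub_comm != MPI_COMM_NULL) {
        int sub_rank;
        MPI_Comm_rank(sub_comm, &sub_rank);
        printf("world rank %d is rank %d in the new communicator\n",
               rank, sub_rank);
        MPI_Comm_free(&sub_comm);
    }

    MPI_Group_free(&sub_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}
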

Q5: Give an example of an MPI program using different routines and derived
data types.

A simple master-slave program in which the goal is to evaluate the expression (a + b) * (c - d). The master reads the values of a, b, c, and d from the user; one slave calculates (a + b) and the other calculates (c - d). The program is as follows.
mpi_demo.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h> /* for MPI constants and functions */

#define MSG_DATA 100 /* message from master to slaves */
#define MSG_RESULT 101 /* message from slave to master */

#define MASTER 0 /* rank of master */
#define SLAVE_1 1 /* rank of first slave */
#define SLAVE_2 2 /* rank of second slave */

/* functions to handle the tasks of master, and the two slaves */
void master(void);
void slave_1(void);
void slave_2(void);

int main(int argc, char** argv)
{
int myrank, size;
/* initialize the MPI system */
MPI_Init(&argc, &argv);

/* get the size of the communicator i.e. number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &size);

/* check for proper number of processes */
if(size != 3)
{
fprintf(stderr, "Error: Three copies of the program should be run.\n");
MPI_Finalize();
exit(EXIT_FAILURE);
}
/* get the rank of the process */
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

/* perform the tasks according to the rank */
if(myrank == MASTER)
master();
else if(myrank == SLAVE_1)
slave_1();
else
slave_2();

/* clean up and exit from the MPI system */
MPI_Finalize();

exit(EXIT_SUCCESS);
} /* end main() */

/* function to carry out the master's tasks */
void master(void)
{
int a, b, c, d;
int buf[2];
int result1, result2;
MPI_Status status;

printf("Enter the values of a, b, c, and d: ");
scanf("%d %d %d %d", &a, &b, &c, &d);

/* send a and b to the first slave */
buf[0] = a;
buf[1] = b;
MPI_Send(buf, 2, MPI_INT, SLAVE_1, MSG_DATA, MPI_COMM_WORLD);

/* send c and d to the second slave */
buf[0] = c;
buf[1] = d;
MPI_Send(buf, 2, MPI_INT, SLAVE_2, MSG_DATA, MPI_COMM_WORLD);

/* receive results from the slaves */
MPI_Recv(&result1, 1, MPI_INT, SLAVE_1, MSG_RESULT,
MPI_COMM_WORLD, &status);
MPI_Recv(&result2, 1, MPI_INT, SLAVE_2, MSG_RESULT,
MPI_COMM_WORLD, &status);

/* final result */
printf("Value of (a + b) * (c - d) is %d\n", result1 * result2);
} /* end master() */

/* function to carry out the tasks of the first slave */
void slave_1(void)
{
int buf[2];
int result;
MPI_Status status;
/* receive the two values from the master */
MPI_Recv(buf, 2, MPI_INT, MASTER, MSG_DATA, MPI_COMM_WORLD, &status);
/* find a + b */
result = buf[0] + buf[1];

/* send result to the master */
MPI_Send(&result, 1, MPI_INT, MASTER, MSG_RESULT, MPI_COMM_WORLD);
} /* end slave_1() */

/* function to carry out the tasks of the second slave */
void slave_2(void)
{
int buf[2];
int result;
MPI_Status status;
/* receive the two values from the master */
MPI_Recv(buf, 2, MPI_INT, MASTER, MSG_DATA, MPI_COMM_WORLD, &status);
/* find c - d */
result = buf[0] - buf[1];

/* send result to master */
MPI_Send(&result, 1, MPI_INT, MASTER, MSG_RESULT, MPI_COMM_WORLD);
} /* end slave_2() */

/* end mpi_demo.c */

To use the MPI system and functions, you first need to include the header file mpi.h, as is done at the top of mpi_demo.c. The MPI system assigns each process a unique integer called its rank, beginning with 0. The rank is used to identify a process and communicate with it. Secondly, each process is a member of some communicator. A communicator can be thought of as a group of processes that may exchange messages with each other. By default, every process is a member of the communicator called MPI_COMM_WORLD.
Any MPI program must first call the MPI_Init() function. This function is used by the process to enter the MPI system and also to do any system-specific initialization. Next, we get the size of the MPI_COMM_WORLD communicator, i.e. the number of processes in it, using the MPI_Comm_size() function. The first parameter is the communicator and the second is a pointer to an integer in which the size will be returned. Here, we need exactly 3 processes: one master and two slaves. After that, we get the rank by calling MPI_Comm_rank(). The three processes will have ranks 0, 1 and 2. All these processes are essentially identical, i.e. there is no inherent master-slave relationship between them. So it is up to us to decide who will be the master and who will be the slaves. We choose rank 0 as the master and ranks 1 and 2 as the slaves. Depending upon the rank, we execute the appropriate function. Note that there is no spawning of processes as in PVM; the number of processes is fixed when the program is launched (typically through an argument to the MPI launch command) rather than being spawned by the program itself. Once the execution is finished, we must call the MPI_Finalize() function to perform the final clean up.
Let us now consider the master function. After reading the values of a, b, c, and d from the user, the master must send a and b to slave 1 and c and d to slave 2. Instead of sending the variables individually, we pack them into an array and send the array of 2 integers instead. Once the buffer is ready, unlike PVM, we do not need to pack or encode the data; MPI manages these details internally. So we can directly call the MPI_Send() function to send the data. The first parameter is the address of the buffer, the second is the number of elements in the message, and the third is a specification of the data type of the buffer, which here is MPI_INT, specifying that the buffer is an array of integers. Next comes the rank of the process to which we want to send the message; here it is SLAVE_1. Next is the message tag, similar to that in PVM. The final parameter is the communicator of which the receiver is a member, which in this case is MPI_COMM_WORLD.
Once the data is distributed among the slaves, the master must wait for the slaves to send back the results. For simplicity, we first collect the message from slave 1 and then from slave 2. To receive a message, we use the MPI_Recv() function. Again, packing and decoding are handled by MPI internally. The first argument is the address of the buffer in which to receive the data. The second is the size of the buffer in terms of the number of elements, which in this case is 1. Next is the data type, which is MPI_INT here. The next three parameters specify the rank of the source of the message, the tag of the expected message, and the communicator of which the source is a member. The final argument is a pointer to a structure of type MPI_Status in which some status information will be returned (here we ignore this information).
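The program above sends the two integers as a count of 2 elements of type MPI_INT. The derived data types mentioned in the question could be used for the same purpose; the fragment below is a hedged sketch (the name pair_type is illustrative, not part of the program above) of how the master's send might look with a derived type, with the matching receive using the same type.

MPI_Datatype pair_type;

/* describe two consecutive MPI_INTs as a single unit */
MPI_Type_contiguous(2, MPI_INT, &pair_type);
MPI_Type_commit(&pair_type);

/* send one element of the derived type instead of two ints */
MPI_Send(buf, 1, pair_type, SLAVE_1, MSG_DATA, MPI_COMM_WORLD);

/* the matching receive in slave_1() would be:
   MPI_Recv(buf, 1, pair_type, MASTER, MSG_DATA, MPI_COMM_WORLD, &status); */

/* release the type when it is no longer needed */
MPI_Type_free(&pair_type);
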
MULTIPLE CHOICE QUESTIONS::

1) The keyword MPI_COMM_WORLD signifies::
a) Rank
b) System Buffer
c) Communicator
d) Application Buffer

Ans:: Communicator

2) The current Message Passing Interface (MPI) library supports which two languages::
a) Fortran & C
b) C & C++
c) Java & C
d) Java & Fortran

Ans:: Fortran & C

3) Message Passing Interface targets distributed memory system including::
a) Parallel Machines
b) Workstation Clusters
c) Heterogeneous Networks
d) All of above

Ans:: All of above

4) What are the five basic MPI routines?

Ans:: (i) MPI_Init (*argc,*argv)
(ii) MPI_Comm_size (comm,*size)
(iii) MPI_Comm_rank (comm,*rank)
(iv) MPI_Abort (comm,errorcode)
(v) MPI_Finalize ()


5) PVM was developed by ______________in conjunction with several universities.

Ans:: Oak Ridge National Laboratory

6) PVM permits messages containing more than one datatype to be exchanged between machines having different data representations.
True or False?

Ans:: True

7) The PVM system is composed of two parts. Name them.
Ans:: (i) a daemon, called pvmd3 (sometimes abbreviated pvmd)
(ii) a library of PVM interface routines

8) All PVM tasks are identified by an integer___________

Ans:: task identifier (TID)

9) Dynamic spawning of tasks is supported by::
(a) PVM
(b) MPI
(c) Both
(d) None

Ans:: PVM

10) What does a Rank signify within a communicator in MPI?

Ans:: Within a communicator, every process has its own unique integer
identifier assigned by the system when the process initializes. A rank is sometimes also called a "process ID". Ranks are contiguous and begin at zero.
