Friday, November 23, 2007

Assignment General Model of Shared Memory Programming


OBJECTIVE QUESTIONS

Q1. Two processes A and B start execution simultaneously. A is executing a 100 instructions program while B is executing a 50 instructions program. Which of the two processes will execute first?

B
A
Both at same time
Can’t say


Q2. id=create_process(N)
If the value of ‘id’ returned by this primitive is 0, it indicates

any child process
first child process
parent process
No process created


Q3. In parallel processing, if Unix processes are used as independent units of execution, any computation or memory update that a process does is, by default, _______________ other processes.

visible to
not visible to


Q4. Special constructs are needed to ___________________ data from/with other threads while using threads as independent units of execution in shared memory parallelism.

share
hide
update
correct

Q5. The variables that can be modified by any process, with the update immediately visible to all other processes, are called
local
shared
special
parallel

Q6. The mechanism to ensure that certain blocks of statements are executed by only one process at a time is called

deadlock
block scheduling
self scheduling
mutual exclusion


Q7. To ensure mutual exclusion, locks are used to protect regions of statements. Locks are declared as
pointers
array
lists
integer variables

Q8. Locks are allotted from _______________ since more than one process needs access to them.

Q9. To parallelise the processing of loops, the mechanism of dividing a single loop into multiple loops and assigning them to the respective processors is called

loop scheduling
self splitting
loop splitting
self scheduling

Q10. The method of loop parallelization in which processes choose their work dynamically at run time is called

loop scheduling
self splitting
loop splitting
self scheduling

Q11. The drawback of loop splitting arises when the processing of elements involved in a loop is __________.

uniform
non-uniform
complex
simple

Q12. The advantage of self scheduling is good _____________ and its drawback is __________________.

Q13. The common pool of work in the self-scheduling method is protected as a ________________.

Q14. For overall synchronization, the meeting point where all processes meet and then proceed to their own tasks is called

Point
Barrier Point
Barrier
Meet Point

Q15. A barrier is a shared lock initialized to N, the number of processes, on which everyone waits till the lock value becomes

Zero
One
N/2
Very small












ANSWERS

Q1. d
Q2. c
Q3. b
Q4. b
Q5. b
Q6. d
Q7. a
Q8. Shared Memory Pool
Q9. c
Q10. d
Q11. b
Q12. i) load balancing
ii) overhead in pool management
Q13. Critical section
Q14. c
Q15. a


SUBJECTIVE QUESTIONS

Q1. Explain the process creation and destruction with primitives in shared memory programming.

Ans. Processes are generated as per the requirement of the problem and destroyed once the parallel part of the processing is completed.
So process management involves:
Generating the required number of processes
Destroying these processes when the parallel part of processing is completed, so that system resources are not wasted and the sequential processing required can be done by a single process without interference from other processes.

For this we require a number of primitives:

1. id=create_process(N);

The execution of this primitive results in the creation of N new processes. All N+1 processes (including the parent process) are identical; only the id is different for each process.
create_process returns an integer from 0 to N.
id=0 indicates the parent process.

2. join_process(N,id)

N is the number of processes.
id is the integer returned by create_process.
This statement is executed by all processes, and only a single process remains alive after the call.
If we put an instruction after the join statement:
- The instruction will not be executed till all processes have executed join_process; hence no process is still in the compute stage.
- Only one process will be active beyond the join statement, and hence parallelism and its consequent problems will not arise.




3. shared()

This primitive allocates shared memory. It also provides an id so that the memory can be discarded after use.
shared() returns a pointer to the shared memory allocated. That is why sum0 is declared as a pointer in the following example.
Example: to sum up 1+2+3+4 in parallel.

int *sum0, *sum1, id1, id2, id;
sum0 = (int *) shared(sizeof(int), &id1);   /* two shared integers              */
sum1 = (int *) shared(sizeof(int), &id2);
*sum0 = 0; *sum1 = 0;
id = create_process(1);                     /* one child; two processes now run */
if (id == 1)
    *sum0 = 1 + 2;                          /* one process adds the first half  */
else
    *sum1 = 3 + 4;                          /* the other adds the second half   */
join_process(2, id);                        /* only one process continues here  */
printf("%d", *sum0 + *sum1);
free_shm(id1); free_shm(id2);               /* release the shared memory        */


4. free_shm(id)

The free_shm primitive takes the id and frees the shared memory allocated by the shared primitive.


Q2. How should the access to shared areas be coordinated? What primitives are used for this purpose?

Ans. In shared memory programming, access to shared areas must be properly coordinated. If one process reads a location with the intention of updating it, it must ensure that nobody else reads that area till it finishes the update. So we use primitives that ensure that certain blocks of statements are executed by only one process at a time. If a process is within such a block, no other process should be allowed to enter the block. This mechanism is called mutual exclusion.

Mutual exclusion is ensured by using a locking mechanism. We lock such a region when we enter it and unlock it when we get out.
The locking primitives are:
- init_lock(id)
It initializes the lock to a known state, locked or unlocked.
- lock(id)
It attempts to acquire the lock. If the lock is already held by some other process, the calling process is made to wait. It resumes only when the lock has been released by the other process and it gets its turn to lock it.
- unlock(id)
It unlocks the lock and returns. It does not check or wait for any condition.

The argument id is required because we may need different locks for different areas. The locks should be allotted in shared memory, since more than one process needs access to them. So the locks are declared as pointers and allotted space from the shared memory pool, as in the sketch below.
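
A minimal sketch of how these primitives might fit together, reusing shared, create_process, join_process and free_shm from Q1. The exact lock type and whether the id argument is the lock pointer itself are assumptions made for illustration:

/* Two processes add into one shared counter; the lock serializes the update. */
int *count, *lck, memid, lockid, id;
count = (int *) shared(sizeof(int), &memid);
lck   = (int *) shared(sizeof(int), &lockid);  /* the lock itself lives in shared memory */
*count = 0;
init_lock(lck);                  /* start in a known (unlocked) state           */
id = create_process(1);
lock(lck);                       /* only one process at a time past this point  */
*count = *count + 1;             /* the critical section                        */
unlock(lck);
join_process(2, id);
printf("count = %d\n", *count);  /* prints 2                                    */
free_shm(memid); free_shm(lockid);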


Q3. Explain the following terms:
i) Loop splitting
ii) Self scheduling
iii) Barrier


Ans. i) Loop splitting
Loop splitting is a method of loop parallelization. It involves splitting a single loop into multiple separate loops. We divide the N elements being processed in a loop into P partitions of N/P elements each and assign them to the respective processes.
We can use a contiguous block of elements for each process, or we can interleave access to the elements.
For example, if N=15 and P=5, we can assign elements to the processes in the following two ways:
I)
P1 → 1, 2, 3
P2 → 4, 5, 6
P3 → 7, 8, 9
P4 → 10, 11, 12
P5 → 13, 14, 15


II)
P1 → 1, 6, 11
P2 → 2, 7, 12
P3 → 3, 8, 13
P4 → 4, 9, 14
P5 → 5, 10, 15
These methods of loop splitting are easy to implement. Deciding which elements a process is going to handle is done statically, so there is no run-time overhead.
Since the processes access disjoint areas, no coordination or mutual exclusion is required. A sketch of the interleaved variant is given below.
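
A minimal sketch of interleaved loop splitting (scheme II above), reusing create_process and join_process from Q1. The array a[], its length N and the per-element function work() are hypothetical names used only for illustration:

/* Process `id` handles elements id, id+P, id+2P, ... (0-based indices). */
int i, id, nproc = 5;                /* P = 5 processes, as in the example above     */
id = create_process(nproc - 1);      /* ids 0..4, parent included                    */
for (i = id; i < N; i += nproc)      /* disjoint index sets, so no locking is needed */
    a[i] = work(a[i]);               /* hypothetical per-element processing          */
join_process(nproc, id);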

ii) Self-scheduling

Self scheduling is a method in which processes choose their work dynamically at run time.

When elements are assigned to processes statically, a difficulty arises if the processing of all elements is not uniform.
For example, if the work happens to fall only to the even-numbered processes, the others will have no work.
In this model, all the work is considered to be available in a common pool. Each process goes to the pool, picks up work if available, executes it, and then goes back for more. Depending upon the complexity of processing the various elements of the loop, the number of iterations handled by each process will vary. This achieves good load balancing among the available processes, but pool management is an overhead of this method. A rough sketch follows.
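
A rough sketch of the pool idea: here the pool is simply a shared index of the next unprocessed element, and the lock around it is the critical section referred to in Q13. The names next, pool_lock, a[], N and work() are illustrative assumptions:

/* Each process repeatedly grabs the next available iteration from the pool. */
int i;
for (;;) {
    lock(pool_lock);                 /* protect the shared pool (critical section) */
    i = *next;                       /* *next lives in shared memory               */
    *next = *next + 1;
    unlock(pool_lock);
    if (i >= N) break;               /* pool is exhausted                          */
    a[i] = work(a[i]);               /* each iteration may take a different time   */
}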

iii) Barrier

For overall synchronization of processes, a meeting point where all processes meet and then proceed to their own tasks again is called a barrier. The meeting ensures that all processes have completed the tasks assigned to them up to the meeting point.
The primitives involved are:
i) bar=barrier_init(nproc)
It creates a barrier structure and returns a pointer to it. Every call to barrier_init creates a new barrier. The reference returned is to be used to identify the barrier in invoking it as well as in cleaning it up.

ii) barrier(bar)
It is the call made by each process that has to go through the barrier.

iii) clean_barrier(bar)
It is to be invoked for every barrier created to clean up any shared memory and semaphores allotted to implement the barrier.
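
Putting the three calls together, a small sketch; phase1() and phase2() are placeholder functions, and nproc is the number of processes, as in Q1:

bar = barrier_init(nproc);           /* one barrier shared by nproc processes          */
id  = create_process(nproc - 1);
phase1(id);                          /* every process does its share of phase 1        */
barrier(bar);                        /* wait here until all nproc processes arrive     */
phase2(id);                          /* phase 2 can now safely use the phase-1 results */
join_process(nproc, id);
clean_barrier(bar);                  /* free the shared memory and semaphores          */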

Content Credit: J.S.
