Tuesday, September 25, 2007

Seminar: Personal Computer vs Supercomputer

All computers require the following hardware components:
Central processing unit (CPU): The heart of the computer, this is the component
that actually executes the instructions organized into programs ("software") that
tell the computer what to do.
Memory: Enables a computer to store, at least temporarily, data, programs,
and intermediate results.
Mass storage device: Allows a computer to permanently retain large amounts of
data and programs between jobs. Examples include hard disk drives and tape drives.
Input device: Usually a keyboard and mouse, the input device is the passage
through which data and instructions enter a computer.
Output device: A display screen, printer, or other device that lets you see what
the computer has accomplished.
We all know the PC, and most of us have probably heard of supercomputers,
but mainframes are not so well known. A mainframe is simply a very large
computer; "mainframe" is an industry term for one. The name comes from the
way the machine is built up: all units (processing, communication, etc.)
were hung in a frame. Thus the main computer is built into a frame,
therefore: mainframe.
Historically, a mainframe was associated with centralized computing, as
opposed to distributed computing. Today, it refers to a class of
ultra-reliable medium and large-scale servers designed for enterprise-class
and carrier-class operations.

A supercomputer is a computer that leads the world in terms of
processing capacity, particularly speed of calculation, at the time
of its introduction. Today, supercomputers are one-off custom
designs produced by "traditional" companies such as IBM and HP,
who purchased many of the 1980s supercomputer companies to gain
their experience, although Cray Inc. still specializes in building
supercomputers. In the 1970s most supercomputers were built around a
vector processor, but attention later turned from vector
processors to massively parallel processing systems with thousands
of simple CPUs.






Download Synopsis here


Saturday, September 22, 2007

Project: Online Books Shopping System

Introduction:

Onlinebook.com is a website for a book store which provides books, magazines, and journals for all types of students or anyone who needs them. Both member and non-member customers buy books from this website under current schemes and discount rates. I am developing the website for the above-said onlinebook.com, that is, a sale and purchase control system with accounts receivable/payable. During analysis I consulted my guide at every stage. I will try my best to introduce the new website on a new platform for onlinebook.com.

Objectives of the project:

The main objective of this project website is to eliminate the middlemen between the customer and the producer and to pass those profits on to the customers, so that they can get home delivery of their required books at effective prices under one roof. With this website, efficiency is improved to a large extent because it is easy for the user or buyer.


Download here
Size: 11.9 MB
Torrent download: you need software such as uTorrent, Azureus, or BitTorrent to download this file; see here for a how-to.

Project: POP3 Email using .Net


Introduction

The .NET Framework 2.0 has revamped its support for sending email with improved SMTP classes, but receiving emails is still missing. There are various articles on CodeProject for POP3 support, but all have some drawbacks, such as:

  • some code is not managed
  • use of DLLs without .NET source code
  • too limited functionality and error reporting
  • no tracing of server commands and responses
  • no support for SSL
  • no XML documentation, etc.

This project builds on the previous projects, but is written entirely in C# 2.0. The present first article focuses on the downloading of raw emails from a POP3 server (RFC1939). There are methods to connect to a POP3 server, to get a list of available emails, to request some emails, to delete some emails and to disconnect. For debugging and for professional use, extensive error reporting and communication tracing is provided. A future article will decompose the received raw email into body, alternative views and attachments following the MIME specification.
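For orientation, the raw POP3 conversation (RFC 1939) that such a library drives looks roughly like the session below; "C:" lines are client commands, "S:" lines are server responses, and the account name and byte counts are invented for illustration:

```
S: +OK POP3 server ready
C: USER alice
S: +OK
C: PASS secret
S: +OK mailbox locked and ready
C: STAT
S: +OK 2 3420
C: RETR 1
S: +OK 1720 octets
S: <raw email headers and body, terminated by a line containing a single dot>
C: DELE 1
S: +OK message 1 deleted
C: QUIT
S: +OK goodbye
```

The connect, list, retrieve, delete, and disconnect methods described above map directly onto these commands.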

I used Gmail for testing, which is freely available for anyone (recommended).


Download here

Monday, September 17, 2007

Parallel Programming: MPI

Parallelization - Parallel Knock

We describe a simple example of message passing in a parallel code. For those of you learning about parallel code, we provide the following discussion of elemental message passing as an introduction to how communication can be performed in parallel codes.
Knock
This example, knock.c, passes a simple message from the even processors to the odd ones. Then, the odd processors send a reply in a message back to the even ones. The "conversation" is displayed on all processors, and the code ends.

This code's main routine performs the appropriate message-passing calls. In a more complex parallel code, these can be used to pass information between processors to coordinate or complete useful work. Here, we focus on the basic technique used to pass these messages. The C source is shown in Listing 1.
Listing 1 - knock.c, or see knock.f
#include <stdio.h>
#include <stdlib.h>
/* get definition of MPI constants */
#include "mpi.h"

int main(int argc, char *argv[])
{
    /*
    this program demonstrates a very simple MPI program illustrating
    communication between two nodes. Node 0 sends a message consisting
    of 3 integers to node 1. Upon receipt, node 1 replies with another
    message consisting of 3 integers.
    */
    /* define variables
       ierror = error indicator
       nproc = number of processors participating
       idproc = each processor's id number (0 <= idproc < nproc)
       len = length of message (in words)
       tag = message tag, used to distinguish between messages */
    int ierror, nproc, idproc, len=3, tag=1;
    /* status = status array returned by MPI_Recv */
    MPI_Status status;
    /* sendmsg = message being sent */
    /* recvmsg = message being received */
    /* replymsg = reply message being sent */
    int recvmsg[3];
    int sendmsg[3] = {1802399587,1798073198,1868786465};
    int replymsg[3] = {2003332903,1931506792,1701995839};

    printf("Knock program initializing...\n");

    /* initialize the MPI execution environment */
    ierror = MPI_Init(&argc, &argv);
    /* stop if MPI could not be initialized */
    if (ierror)
        exit(1);
    /* determine nproc, number of participating processors */
    ierror = MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    /* determine idproc, the processor's id */
    ierror = MPI_Comm_rank(MPI_COMM_WORLD, &idproc);
    /* use only an even number of processors */
    nproc = 2*(nproc/2);
    if (idproc < nproc) {
        /* even processor sends and prints message,
           then receives and prints reply */
        if (idproc%2 == 0) {
            ierror = MPI_Send(sendmsg, len, MPI_INT, idproc+1, tag,
                              MPI_COMM_WORLD);
            printf("proc %d sent: %.12s\n", idproc, (char *)sendmsg);
            ierror = MPI_Recv(recvmsg, len, MPI_INT, idproc+1, tag+1,
                              MPI_COMM_WORLD, &status);
            printf("proc %d received: %.12s\n", idproc, (char *)recvmsg);
        }
        /* odd processor receives and prints message,
           then sends reply and prints it */
        else {
            ierror = MPI_Recv(recvmsg, len, MPI_INT, idproc-1, tag,
                              MPI_COMM_WORLD, &status);
            printf("proc %d received: %.12s\n", idproc, (char *)recvmsg);
            ierror = MPI_Send(replymsg, len, MPI_INT, idproc-1, tag+1,
                              MPI_COMM_WORLD);
            printf("proc %d sent: %.12s\n", idproc, (char *)replymsg);
        }
    }
    /* terminate MPI execution environment */
    ierror = MPI_Finalize();
    if (idproc < nproc) {
        printf("hit carriage return to continue\n");
        getchar();
    }
    return 0;
}
Discussing the parallel aspects of this code:

mpi.h - the header file for the MPI library, required to access information about the parallel system and perform communication
idproc, nproc - nproc describes how many processors are currently running this job, and idproc identifies this particular processor, labeled with an integer from 0 to nproc - 1. This information is sufficient to identify exactly which part of the problem this instance of the executable should work on.
MPI_Init - performs the actual initialization of MPI, setting up the connections between processors for any subsequent message passing. It returns an error code; zero means no error.
MPI_COMM_WORLD - MPI defines communicator worlds or communicators that define a set of processors that can communicate with each other. At initialization, one communicator, MPI_COMM_WORLD, covers all the processors in the system. Other MPI calls can define arbitrary subsets of MPI_COMM_WORLD, making it possible to confine a code to a particular processor subset just by passing it the appropriate communicator. In simple cases such as this, using MPI_COMM_WORLD is sufficient.
MPI_Comm_size - accesses the processor count of the parallel system
MPI_Comm_rank - accesses the identification number of this particular processor
if (idproc%2==0) - Up until this line, the execution of main by each processor has been the same. However, idproc distinguishes each processor. We can take advantage of this difference by guiding the processor to a different execution as a function of idproc. This Boolean statement causes those processors whose idproc is even (0, 2, 4, ...) to execute the one part of the if-block, while the odd processors (1, 3, 5, ...) execute the second part of the block. At this point, no central authority is consulted for assignments; each processor recognizes what part of the problem it should perform using only its idproc.
MPI_Recv and MPI_Send - These elemental MPI calls perform communication between processors. In this case, the even processor, executing the first block of the if statement, begins by using MPI_Send to send the contents of the sendmsg array, whose size is given as 3 integers, specified by len and MPI_INT, to the next processor, specified by idproc+1. tag merely identifies the message for debugging purposes, and must match the tag on the corresponding receive. MPI_Send returns when the MPI library no longer requires that block of memory, allowing your code to continue executing, but the message may or may not have arrived at its destination yet.
Meanwhile, the odd processor uses MPI_Recv to receive the data from the even processor at idproc-1. This processor specifies that the message be written to recvmsg. The call returns information about the message in the status variable. MPI_Recv returns when it is finished with the recvmsg array.
Note that the even and odd processors might not be in lock step with one another. Such tight control is not needed to complete this task. One processor could reach one MPI call sooner than the other, but the MPI library performs the necessary coordination so that the message is properly handled while allowing each processor to continue with work as soon as possible. The processors affect one another only when they communicate.
After printing the first message, each processor switches roles. Now the odd processors call MPI_Send to send back a reply in replymsg, and the even processors call MPI_Recv to receive the odd processors' reply in their recvmsg. For debugging, the tag used is incremented on all processors. Note that this is a distributed-memory model, so the instance of recvmsg on one processor resides in the memory on one machine while the other instance of recvmsg resides on a completely different piece of hardware. Here we are using the even processors' recvmsg for the first time while the odd processors' recvmsg holds data from the first message. Finally, each processor prints this second message.
MPI_Finalize - properly cleans up and closes the connections between codes running on other processors and releases control.
When this code, with the initialization values in sendmsg and replymsg, is run in parallel on two processors, the code produces this output:
Processor 0 output:
Knock program initializing...
proc 0 sent: knock,knock!
proc 0 received: who's there?
hit carriage return to continue

Processor 1 output:
Knock program initializing...
proc 1 received: knock,knock!
proc 1 sent: who's there?
hit carriage return to continue
It performs the opening lines of the classic "knock, knock" joke. This example is an appropriate introduction because that joke requires the first message to be sent one way, then the next message the other way. After each message pass, all processors print the data they have showing how the conversation is unfolding.
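To reproduce this run, a typical MPI installation compiles and launches the listing with commands along these lines (the compiler wrapper and launcher names vary between MPI distributions, so treat this as a sketch):

```
mpicc knock.c -o knock
mpirun -np 2 ./knock
```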

Conclusion
The purpose of this discussion was to highlight the basic techniques needed to perform elemental message passing between processors. It also shows how one identification number, idproc, can be used to cause different behavior by distinguishing between processors. The overhead of initialization (MPI_Init et al.) and shutdown (MPI_Finalize) seems large compared to the actual message passing (MPI_Send and MPI_Recv) only because the nature of the message passing is so simple. In a real-world parallel code, the message-passing calls can be much more complex and interwoven with execution code. We chose a simple example so attention could be drawn to the communication calls. The reader is welcome to explore how to integrate these calls into their own code.

Sunday, September 9, 2007

Quantum Computing


Introduction

Every so often a new technology surfaces that enables the bounds of computer performance to be pushed further forward. From the introduction of valve technology through to the continuing development of VLSI designs, the pace of technological advancement has remained relentless. Lately, the key to improving computer performance has been the reduction of size in the transistors used in modern processors. This continual reduction however, cannot continue for much longer. If the transistors become much smaller, the strange effects of quantum mechanics will begin to hinder their performance. It would therefore seem that these effects present a fundamental limit to our computer technology, or do they?

In 1982, the Nobel prize-winning physicist (Late) Richard Feynman thought up the idea of a 'quantum computer', a computer that uses the effects of quantum mechanics to its advantage.

Quantum computer basics

In the classical model of a computer, the most fundamental building block, the bit, can only exist in one of two distinct states, a 0 or a 1. In a quantum computer the rules are changed. Not only can a 'quantum bit', usually referred to as a 'qubit', exist in the classical 0 and 1 states, it can also be in a coherent superposition of both. When a qubit is in this state it can be thought of as existing in two universes, as a 0 in one universe and as a 1 in the other. An operation on such a qubit effectively acts on both values at the same time. The significant point being that by performing the single operation on the qubit, we have performed the operation on two different values. Likewise, a two-qubit system would perform the operation on 4 values, and a three-qubit system on eight. Increasing the number of qubits therefore exponentially increases the 'quantum parallelism' we can obtain with the system. With the correct type of algorithm it is possible to use this parallelism to solve certain problems in a fraction of the time taken by a classical computer.

Decoherence

The very thing that makes quantum computing so powerful, its reliance on the bizarre subatomic goings-on governed by the rules of quantum mechanics, also makes it very fragile and difficult to control. For example, consider a qubit that is in the coherent state. As soon as it measurably interacts with the environment it will decohere and fall into one of the two classical states. This is the problem of decoherence and is a stumbling block for quantum computers, as the potential power of quantum computers depends on the quantum parallelism brought about by the coherent state. This problem is compounded by the fact that even looking at a qubit can cause it to decohere, making the process of obtaining a solution from a quantum computer just as difficult as performing the calculation itself.

Building a quantum computer

A quantum computer is nothing like a classical computer in design; you can't for instance build one from transistors and diodes. In order to build one, a new type of technology is needed, a technology that enables 'qubits' to exist as coherent superpositions of 0 and 1 states. The best method of achieving this goal is still unknown, but many methods are being experimented with and are proving to have varying degrees of success.

Quantum dots

An example of an implementation of the qubit is the 'quantum dot', which is basically a single electron trapped inside a cage of atoms. When the dot is exposed to a pulse of laser light of precisely the right wavelength and duration, the electron is raised to an excited state; a second burst of laser light causes the electron to fall back to its ground state. The ground and excited states of the electron can be thought of as the 0 and 1 states of the qubit, and the application of the laser light can be regarded as a controlled NOT function as it knocks the qubit from 0 to 1 or from 1 to 0.

If the pulse of laser light is only half the duration of that required for the NOT function, the electron is placed in a superposition of both ground and excited states simultaneously, this being the equivalent of the coherent state of the qubit. More complex logic functions can be modelled using quantum dots arranged in pairs. It would therefore seem that quantum dots are a suitable candidate for building a quantum computer.

Unfortunately there are a number of practical problems that are preventing this from happening:

The electron only remains in its excited state for about a microsecond before it falls to the ground state. Bearing in mind that the required duration of each laser pulse is around 1 nanosecond, there is a limit to the number of computational steps that can be made before information is lost.

Constructing quantum dots is a very difficult process because they are so small. A typical quantum dot measures just 10 atoms (1 nanometer) across. The technology needed to build a computer from these dots doesn't yet exist.

To avoid cramming thousands of lasers into a tiny space, quantum dots could be manufactured so that they respond to different frequencies of light. A laser that could reliably retune itself would thus selectively target different groups of quantum dots with different frequencies of light. This again, is another technology that doesn't yet exist.

Computing liquids

This latest development in quantum computing takes a radical new approach. It drops the assumption that the quantum medium has to be tiny and isolated from its surroundings and instead uses a sea of molecules to store the information. When held in a magnetic field, each nucleus within a molecule spins in a certain direction, which can be used to describe its state; spinning upwards can signify a 1 and spinning down, a 0. Nuclear Magnetic Resonance (NMR) techniques can be used to detect these spin states and bursts of specific radio waves can flip the nuclei from spinning up (1) to spinning down (0) and vice-versa.

The quantum computer in this technique is the molecule itself and its qubits are the nuclei within the molecule. This technique does not, however, use a single molecule to perform the computations; it instead uses a whole 'mug' of liquid molecules. The advantage of this is that even though the molecules of the liquid bump into one another, the spin states of the nuclei within each molecule remain unchanged. Decoherence is still a problem, but the time before decoherence sets in is much longer than in any other technique so far. Researchers believe a few thousand primitive logic operations should be possible within the time it takes the qubits to decohere.

Some potential applications of quantum computers

Artificial Intelligence

The theories of quantum computation have some interesting implications in the world of artificial intelligence. The debate about whether a computer will ever be able to be truly artificially intelligent has been going on for years and has been largely based on philosophical arguments.

To implement AI (Artificial Intelligence) through quantum computing, one has to consider the Church-Turing principle.

The Church-Turing principle - "There exists or can be built a universal computer that can be programmed to perform any computational task that can be performed by any physical object".

The thing to note is that every physical object, from a rock to the universe as a whole, can be regarded as a quantum computer and that any detectable physical process can be considered a computation. Under these criteria, the brain can be regarded as a computer and consciousness as a computation. The next stage of the argument is based on the Church-Turing principle and states that since every computer is functionally equivalent and any given computer can simulate any other, it must be possible to simulate conscious rational thought using a quantum computer.

Exponential rate of processing

In order for a quantum computer to show its superiority over conventional computers, it needs to use algorithms that exploit its power of quantum parallelism (coherence). Such algorithms are difficult to formulate; the most significant theorised to date are Shor's algorithm and Grover's algorithm. Using these algorithms, a quantum computer will be able to outperform classical computers by a significant margin.

Shor's algorithm

This algorithm, devised by Peter Shor in 1994, allows extremely quick factoring of large numbers: a classical computer has been estimated to take 10 million billion billion years to factor a 1000-digit number, whereas a quantum computer would take around 20 minutes. If it is ever implemented it will have a profound effect on cryptography, as it would compromise the security provided by public key encryption (such as RSA).

Grover's algorithm

Lov Grover has written an algorithm that uses quantum computers to search an unsorted database faster than a conventional computer. Normally it would take N/2 number of searches to find a specific entry in a database with N entries. Grover's algorithm makes it possible to perform the same search in root N searches. The speed up that this algorithm provides is a result of quantum parallelism. The database is effectively distributed over a multitude of universes, allowing a single search to locate the required entry. A further number of operations (proportional to root N) are required in order to produce a readable result.

Grover's algorithm has a useful application in the field of cryptography. It is theoretically possible to use this algorithm to crack the Data Encryption Standard (DES), a standard which is used to protect, amongst other things, financial transactions between banks. The standard relies on a 56-bit number that both participants must know in advance; the number is used as a key to encrypt/decrypt data.

If an encrypted document and its source can be obtained, it is possible to attempt to find the 56-bit key. An exhaustive search by conventional means would make it necessary to search 2 to the power 55 keys on average before hitting the correct one. This would take more than a year even if one billion keys were tried every second. By comparison, Grover's algorithm could find the key in a number of searches on the order of the square root of the key space, a few hundred million. For conventional DES, a method to stop modern computers from cracking the code (i.e. if they got faster) would be simply to add extra digits to the key, which would increase the number of searches needed exponentially. However, the effect that this would have on the speed of the quantum algorithm is negligible.

Quantum Communications

Quantum communications makes use of the fact that information can be encoded as the polarisation of photons (i.e. the orientation of a photon's oscillation). An oscillation in one direction can be thought of as 0 and in another as a 1. Two sets of polarisations are commonly used, rectilinear and diagonal.

The property that quantum communication exploits is that in order to receive the correct information, photons have to be measured using the correct filter polarisation, i.e. the same polarisation with which the information was transmitted. If a receiver is in rectilinear polarisation and a diagonally polarised photon is sent, then a completely random result will appear at the receiver. Using this property, information can be sent in such a way as to make it impossible for an eavesdropper to listen undetected.

Conclusion

With classical computers gradually approaching their limit, the quantum computer promises to deliver a new level of computational power. With them comes a whole new theory of computation that incorporates the strange effects of quantum mechanics and considers every physical object to be some kind of quantum computer. A quantum computer thus has the theoretical capability of simulating any finite physical system and may even hold the key to creating an artificially intelligent computer.

The quantum computer's power to perform calculations across a multitude of parallel universes gives it the ability to quickly perform tasks that classical computers will never be able to practically achieve. This power can only be unleashed with the correct type of algorithm, a type of algorithm that is extremely difficult to formulate. Some algorithms have already been invented, and they are proving to have huge implications for the world of cryptography. This is because they enable the most commonly used cryptographic techniques to be broken in a matter of seconds. Ironically, a spin-off of quantum computing, quantum communication, allows information to be sent without eavesdroppers listening undetected.

For now at least, the world of cryptography is safe because the quantum computer is proving to be very difficult to implement. The very thing that makes them powerful, their reliance on quantum mechanics, also makes them extremely fragile. The most successful experiments have only been able to add one and one together. Nobody can tell whether the problems being experienced by researchers can be overcome.

Friday, September 7, 2007

Boxing and Unboxing

Boxing and unboxing enable value types to be treated as objects. Value types, including both struct types and built-in types such as int, can be converted to and from the type object.

Boxing is an implicit conversion of a value type to the type object or to any interface type implemented by that value type. Boxing a value of a value type allocates an object instance and copies the value into the new object.

Consider the following declaration of a value-type variable:

int i = 123;

The following statement implicitly applies the boxing operation on the variable i:

object o = i;

The result of this statement is the creation of an object reference o, on the stack, that refers to a value of type int on the heap. This value is a copy of the value-type value assigned to the variable i. The difference between the two variables, i and o, is illustrated in the following figure.

Boxing Conversion


It is also possible, but never needed, to perform the boxing explicitly, as in the following example:

int i = 123;

object o = (object) i;

Example

This example converts an integer variable i to an object o via boxing. Then the value stored in the variable i is changed from 123 to 456. The example shows that the object keeps the original copy of the contents, 123.

// boxing.cs

// Boxing an integer variable

using System;

class TestBoxing

{

public static void Main()

{

int i = 123;

object o = i; // Implicit boxing

i = 456; // Change the contents of i

Console.WriteLine("The value-type value = {0}", i);

Console.WriteLine("The object-type value = {0}", o);

}

}

Output

The value-type value = 456

The object-type value = 123

A boxing conversion permits a value-type to be implicitly converted to a reference-type. The following boxing conversions exist:

From any value-type (including any enum-type) to the type object.

From any value-type (including any enum-type) to the type System.ValueType.

From any value-type to any interface-type implemented by the value-type.

From any enum-type to the type System.Enum.

Boxing a value of a value-type consists of allocating an object instance and copying the value-type value into that instance.

The actual process of boxing a value of a value-type is best explained by imagining the existence of a boxing class for that type. For any value-type T, the boxing class behaves as if it were declared as follows:

sealed class T_Box: System.ValueType

{

T value;

public T_Box(T t) {

value = t;

}

}

Boxing of a value v of type T now consists of executing the expression new T_Box(v), and returning the resulting instance as a value of type object. Thus, the statements

int i = 123;

object box = i;

conceptually correspond to

int i = 123;

object box = new int_Box(i);

Boxing classes like T_Box and int_Box above do not actually exist and the dynamic type of a boxed value is not actually a class type. Instead, a boxed value of type T has the dynamic type T, and a dynamic type check using the is operator can simply reference type T. For example,

int i = 123;

object box = i;

if (box is int) {

Console.Write("Box contains an int");

}

will output the string "Box contains an int" on the console.

A boxing conversion implies making a copy of the value being boxed. This is different from a conversion of a reference-type to type object, in which the value continues to reference the same instance and simply is regarded as the less derived type object. For example, given the declaration

struct Point

{

public int x, y;

public Point(int x, int y) {

this.x = x;

this.y = y;

}

}

the following statements

Point p = new Point(10, 10);

object box = p;

p.x = 20;

Console.Write(((Point)box).x);

will output the value 10 on the console because the implicit boxing operation that occurs in the assignment of p to box causes the value of p to be copied. Had Point been declared a class instead, the value 20 would be output because p and box would reference the same instance.

Unboxing

C# Language Specification

4.3.2 Unboxing conversions

An unboxing conversion permits a reference-type to be explicitly converted to a value-type. The following unboxing conversions exist:

From the type object to any value-type (including any enum-type).

From the type System.ValueType to any value-type (including any enum-type).

From any interface-type to any value-type that implements the interface-type.

From the type System.Enum to any enum-type.

An unboxing operation consists of first checking that the object instance is a boxed value of the given value-type, and then copying the value out of the instance.

Referring to the imaginary boxing class described in the previous section, an unboxing conversion of an object box to a value-type T consists of executing the expression ((T_Box)box).value. Thus, the statements

object box = 123;

int i = (int)box;

conceptually correspond to

object box = new int_Box(123);

int i = ((int_Box)box).value;

For an unboxing conversion to a given value-type to succeed at run-time, the value of the source operand must be a reference to an object that was previously created by boxing a value of that value-type. If the source operand is null, a System.NullReferenceException is thrown. If the source operand is a reference to an incompatible object, a System.InvalidCastException is thrown.

Routing

(n.) In internetworking, the process of moving a packet of data from source to destination. Routing is usually performed by a dedicated device called a router.

Routing is a key feature of the Internet because it enables messages to pass from one computer to another and eventually reach the target machine.

Each intermediary computer performs routing by passing along the message to the next computer. Part of this process involves analyzing a routing table to determine the best path.

Routing is often confused with bridging, which performs a similar function. The principal difference between the two is that bridging occurs at a lower level and is therefore more of a hardware function, whereas routing occurs at a higher level, where the software component is more important. And because routing occurs at a higher level, it can perform more complex analysis to determine the optimal path for the packet.
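To make the routing-table idea concrete, here is a toy longest-prefix-match lookup in C#. The prefixes, next-hop names, and the string-based matching are all simplifications invented for illustration; real routers match on address bits, not text:

```csharp
using System;
using System.Collections.Generic;

class ToyRouter
{
    // A routing table mapping destination prefixes to next hops.
    // All entries are invented for illustration.
    static readonly Dictionary<string, string> table = new Dictionary<string, string>
    {
        { "10.1.",   "RouterA" },
        { "10.1.2.", "RouterB" },  // more specific than "10.1."
        { "0.",      "DefaultGateway" }
    };

    // Pick the longest (most specific) prefix that matches the destination.
    public static string NextHop(string destination)
    {
        string best = "DefaultGateway";  // fallback route
        int bestLen = 0;
        foreach (var entry in table)
        {
            if (destination.StartsWith(entry.Key) && entry.Key.Length > bestLen)
            {
                best = entry.Value;
                bestLen = entry.Key.Length;
            }
        }
        return best;
    }

    static void Main()
    {
        Console.WriteLine(NextHop("10.1.2.7"));    // RouterB (most specific match)
        Console.WriteLine(NextHop("10.1.9.1"));    // RouterA
        Console.WriteLine(NextHop("192.168.0.1")); // DefaultGateway
    }
}
```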

delegates

A delegate in C# is similar to a function pointer in C or C++. Using a delegate allows the programmer to encapsulate a reference to a method inside a delegate object. The delegate object can then be passed to code which can call the referenced method, without having to know at compile time which method will be invoked. Unlike function pointers in C or C++, delegates are object-oriented, type-safe, and secure.

A delegate declaration defines a type that encapsulates a method with a particular set of arguments and return type. For static methods, a delegate object encapsulates the method to be called. For instance methods, a delegate object encapsulates both an instance and a method on the instance. If you have a delegate object and an appropriate set of arguments, you can invoke the delegate with the arguments.

An interesting and useful property of a delegate is that it does not know or care about the class of the object that it references. Any object will do; all that matters is that the method's argument types and return type match the delegate's. This makes delegates perfectly suited for "anonymous" invocation.

Note Delegates run under the caller's security permissions, not the declarer's permissions.

This tutorial includes two examples:

Example 1 shows how to declare, instantiate, and call a delegate.

Example 2 shows how to combine two delegates.

In addition, it discusses the following topics:

Delegates and Events

Delegates vs. Interfaces

Example 1

The following example illustrates declaring, instantiating, and using a delegate. The BookDB class encapsulates a bookstore database that maintains a database of books. It exposes a method ProcessPaperbackBooks, which finds all paperback books in the database and calls a delegate for each one. The delegate type used is called ProcessBookDelegate. The Test class uses this class to print out the titles and average price of the paperback books.

The use of delegates promotes good separation of functionality between the bookstore database and the client code. The client code has no knowledge of how the books are stored or how the bookstore code finds paperback books. The bookstore code has no knowledge of what processing is done on the paperback books after it finds them.

// bookstore.cs
using System;

// A set of classes for handling a bookstore:
namespace Bookstore
{
    using System.Collections;

    // Describes a book in the book list:
    public struct Book
    {
        public string Title;      // Title of the book.
        public string Author;     // Author of the book.
        public decimal Price;     // Price of the book.
        public bool Paperback;    // Is it paperback?

        public Book(string title, string author, decimal price, bool paperBack)
        {
            Title = title;
            Author = author;
            Price = price;
            Paperback = paperBack;
        }
    }

    // Declare a delegate type for processing a book:
    public delegate void ProcessBookDelegate(Book book);

    // Maintains a book database.
    public class BookDB
    {
        // List of all books in the database:
        ArrayList list = new ArrayList();

        // Add a book to the database:
        public void AddBook(string title, string author, decimal price, bool paperBack)
        {
            list.Add(new Book(title, author, price, paperBack));
        }

        // Call a passed-in delegate on each paperback book to process it:
        public void ProcessPaperbackBooks(ProcessBookDelegate processBook)
        {
            foreach (Book b in list)
            {
                if (b.Paperback)
                    // Calling the delegate:
                    processBook(b);
            }
        }
    }
}

Declaring a delegate

The following statement:

public delegate void ProcessBookDelegate(Book book);

declares a new delegate type. Each delegate type describes the number and types of the arguments, and the type of the return value of methods that it can encapsulate. Whenever a new set of argument types or return value type is needed, a new delegate type must be declared.

Instantiating a delegate

Once a delegate type has been declared, a delegate object must be created and associated with a particular method. Like all other objects, a new delegate object is created with a new expression. When creating a delegate, however, the argument passed to the new expression is special - it is written like a method call, but without the arguments to the method.

The following statement:

bookDB.ProcessPaperbackBooks(new ProcessBookDelegate(PrintTitle));

creates a new delegate object associated with the static method Test.PrintTitle. The following statement:

bookDB.ProcessPaperbackBooks(new ProcessBookDelegate(totaller.AddBookToTotal));

creates a new delegate object associated with the nonstatic method AddBookToTotal on the object totaller. In both cases, this new delegate object is immediately passed to the ProcessPaperbackBooks method.
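The Test class and the totaller object referenced by these statements are not shown above. The following is a sketch of what that client code might look like; the PriceTotaller class body and the sample book data are assumptions, with only the PrintTitle and AddBookToTotal names taken from the statements themselves. Compressed copies of the bookstore types are included so the sketch compiles on its own:

```csharp
using System;
using System.Collections;

// Compressed copies of the types from bookstore.cs above:
public struct Book
{
    public string Title; public string Author;
    public decimal Price; public bool Paperback;
    public Book(string t, string a, decimal p, bool pb)
    { Title = t; Author = a; Price = p; Paperback = pb; }
}

public delegate void ProcessBookDelegate(Book book);

public class BookDB
{
    ArrayList list = new ArrayList();
    public void AddBook(string t, string a, decimal p, bool pb)
    { list.Add(new Book(t, a, p, pb)); }
    public void ProcessPaperbackBooks(ProcessBookDelegate processBook)
    { foreach (Book b in list) if (b.Paperback) processBook(b); }
}

// Sketch of the client side; this class body is an assumption.
public class PriceTotaller
{
    int count = 0; decimal total = 0m;
    public void AddBookToTotal(Book book) { count++; total += book.Price; }
    public decimal AveragePrice() { return total / count; }
}

class Test
{
    // Matches the ProcessBookDelegate signature: void (Book).
    static void PrintTitle(Book b) { Console.WriteLine("   " + b.Title); }

    static void Main()
    {
        BookDB bookDB = new BookDB();
        bookDB.AddBook("A Paperback Title", "Some Author", 10.00m, true);    // invented data
        bookDB.AddBook("A Hardcover Title", "Another Author", 40.00m, false);

        // Static method as the delegate target:
        Console.WriteLine("Paperback Book Titles:");
        bookDB.ProcessPaperbackBooks(new ProcessBookDelegate(PrintTitle));

        // Instance method as the delegate target:
        PriceTotaller totaller = new PriceTotaller();
        bookDB.ProcessPaperbackBooks(new ProcessBookDelegate(totaller.AddBookToTotal));
        Console.WriteLine("Average paperback price: {0}", totaller.AveragePrice());
    }
}
```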

Note that once a delegate is created, the method it is associated with never changes — delegate objects are immutable.
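Immutability also shapes how delegates are combined (the topic of Example 2): the + and - operators build a new multicast delegate object rather than modifying either operand. A minimal sketch, with invented method names:

```csharp
using System;

class CombineDemo
{
    public delegate void Notifier(string msg);

    public static int calls = 0;   // counts target invocations (for illustration)

    public static void ToConsole(string msg) { calls++; Console.WriteLine("console: " + msg); }
    public static void ToLog(string msg)     { calls++; Console.WriteLine("log: " + msg); }

    static void Main()
    {
        Notifier a = new Notifier(ToConsole);
        Notifier b = new Notifier(ToLog);

        // Combining creates a NEW multicast delegate; a and b are unchanged.
        Notifier both = a + b;
        both("hello");         // invokes ToConsole, then ToLog

        // Removing a target also yields a new delegate.
        Notifier onlyLog = both - a;
        onlyLog("world");      // invokes ToLog only
    }
}
```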

Calling a delegate

Once a delegate object is created, the delegate object is typically passed to other code that will call the delegate. A delegate object is called by using the name of the delegate object, followed by the parenthesized arguments to be passed to the delegate. An example of a delegate call is:

processBook(b);

A delegate can either be called synchronously, as in this example, or asynchronously by using BeginInvoke and EndInvoke methods.

Delegates and Events

Delegates are ideally suited for use as events — notifications from one component to "listeners" about changes in that component. For more information on the use of delegates for events, see the Events Tutorial.

Delegates vs. Interfaces

Delegates and interfaces are similar in that they enable the separation of specification and implementation. Multiple independent authors can produce implementations that are compatible with an interface specification. Similarly, a delegate specifies the signature of a method, and authors can write methods that are compatible with the delegate specification. When should you use interfaces, and when should you use delegates?

Delegates are useful when:

A single method is being called.

A class may want to have multiple implementations of the method specification.

It is desirable to allow using a static method to implement the specification.

An event-like design pattern is desired (for more information, see the Events Tutorial).

The caller has no need to know or obtain the object that the method is defined on.

The provider of the implementation wants to "hand out" the implementation of the specification to only a few select components.

Easy composition is desired.

Interfaces are useful when:

The specification defines a set of related methods that will be called.

A class typically implements the specification only once.

The caller of the interface wants to cast to or from the interface type to obtain other interfaces or classes.


Microsoft Common Language Runtime

The Common Language Runtime (CLR) provides a solid foundation for developers to build various types of applications. Whether you're writing an ASP.NET application, a Windows Forms application, a Web Service, a mobile code application, a distributed application, or an application that combines several of these application models, the CLR provides the following benefits for application developers:
  • Vastly simplified development
  • Seamless integration of code written in various languages
  • Evidence-based security with code identity
  • Assembly-based deployment that eliminates DLL Hell
  • Side-by-side versioning of reusable components
  • Code reuse through implementation inheritance
  • Automatic object lifetime management
  • Self-describing objects

From an application developer's perspective, the CLR has much to offer. The Application Infrastructure Overview describes the surfaces of the runtime that application developers come in contact with. This includes private and shared components, incremental code download and caching, native platform interoperability, seamless integration with COM, dynamic inspection (reflection), administration, and configuration. In general, an application developer needn't know about all of the runtime-supported infrastructure, as the tools and frameworks will expose a subset of functionality appropriate to the type of application being built. However, architects and developers who work with a range of tools and application models will benefit from understanding the infrastructure landscape.
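As a small taste of the dynamic inspection (reflection) mentioned above, the following sketch discovers and invokes a method purely through metadata; the choice of System.String and ToUpper is arbitrary:

```csharp
using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // Inspect a type at run time without compile-time knowledge of it.
        Type t = typeof(string);
        Console.WriteLine("Type: " + t.FullName);

        // List methods discovered via metadata (here, the Substring overloads).
        foreach (MethodInfo m in t.GetMethods())
        {
            if (m.Name == "Substring")
                Console.WriteLine("  found method: " + m);
        }

        // Invoke a method dynamically through its MethodInfo:
        object result = t.GetMethod("ToUpper", Type.EmptyTypes)
                         .Invoke("hello", null);
        Console.WriteLine(result);   // HELLO
    }
}
```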

The CLR is a multi-language execution environment. There are currently over 15 compilers being built by Microsoft and other companies that produce code that will execute in the CLR. The CLR Compiler Writers Overview provides information for compiler writers interested in learning about the IL instruction set, file formats, metadata APIs, reflection emit, and other tools.

In addition, developers of other tools can find information in the CLR Tool Developers Overview.

2. Introduction

The Microsoft .NET Framework brings with it one of the most compelling aspects of modern development: cross-language development. Cross-language development encompasses cross-language inheritance, cross-language debugging, and cross-language exception handling. More generally, we can picture it as follows: code written in C# (or, for that matter, any .NET-compliant language) should be usable from another language such as VB; those two modules should in turn be usable from yet another language such as JScript, and so on. To make this possible, there must be a common runtime environment that can understand all of these languages.

This article introduces the cross-language capabilities of the CLR (Common Language Runtime) and describes how IL (Intermediate Language) forms the core of all .NET-compliant languages. In particular, I discuss how to program in Intermediate Language and how to use ILASM (the IL assembler). I also work through some examples that demonstrate the cross-language capabilities of the .NET Framework, using IL, C#, JScript, C++, and VB.

One of the main design goals of .NET is to encourage cross-language development (cross-language inheritance, exception handling, and so on). The advantage is that a developer can choose the language best suited to delivering a given module or unit (each language has its own strengths) and still integrate it into a single application. The end result is that languages become equal. Employers also feel more comfortable, as they have more resources and options at hand.

The .NET Framework also eliminates DLL Hell and allows side-by-side deployment of components, because registration information and state data are no longer stored in the registry, where they can be difficult to establish and maintain.

In the sections that follow, I will introduce the .NET Framework architecture in brief and then move on to cross-language implementations using different .NET-compliant languages.

3. Microsoft .NET Architecture

IL - Intermediate Language

CLR Environment - Common Language runtime Environment

JIT - Just in time Compilation

VES - Virtual Execution System

BCL - Base Class Library

PE - Portable Executable

GC - Garbage Collector

Common Language Runtime

The Common Language Runtime (CLR) manages the execution of code and provides services such as garbage collection and support for the Base Class Libraries. The main constituents of the CLR are described below.

The Common Language Runtime (CLR) provides a rich set of features for cross-language development and deployment. The CLR supports both object-oriented and procedural languages, and provides security, garbage collection, cross-language exception handling, cross-language inheritance, and so on.

The Common Type System (CTS) supports both object-oriented programming languages and procedural languages. Basically, the CTS provides a rich type system intended to support a wide range of languages.

The CLS (Common Language Specification) defines a subset of the Common Type System to which all language compilers targeting the CLR must adhere.

All compilers under .NET generate Intermediate Language, no matter what language is used to develop an application; in fact, the CLR is not even aware of which language was used. Because every compiler emits this uniform, common language, IL can be called the language of the CLR and a platform for cross-language development.

The Just-In-Time (JIT) compiler converts IL code back into platform- or device-specific code. In .NET there are three types of JIT compilers:

Pre-JIT (compiles the entire code into native code in one stretch)

Econo-JIT (compiles code part by part, freeing compiled code when it is no longer required)

Normal JIT (compiles code only when it is called, and places the result in cache)

Type safety is ensured in this phase. In all, the role of a JIT compiler is to deliver higher performance by placing once-compiled code in a cache, so that the next call to the same method or procedure executes at a faster speed.

OVERVIEW

Microsoft's .NET is a broad family of products representing the company's next generation of services, software, and development tools. At the core of the .NET strategy lives the Common Language Runtime. The CLR is a platform for software development that provides services by consuming metadata. It is standards-based and component-oriented. Like any platform, the important pieces are the runtime, the libraries that provide access to them, and the languages that can target the platform.

The aim of this tutorial is to provide a foundation for forming and answering questions about the technical aspects of the CLR. We will examine this technology at a high level, defining and touching on each of the core aspects - runtime, libraries, and languages - in turn. Additionally, we'll look at the extensive support the CLR gives to standards-based and component-oriented software development.

We'll start with a broad overview of the CLR - what are the major pieces, what was the motivation for moving to this new model, and what benefits it provides. Then we'll dive down to cover some aspects of the CLR in greater depth. References to sources for further, more detailed study will be given throughout.

Note that although every effort was made to ensure that the information contained within this document was accurate at the time of publication, the technologies mentioned here are subject to change. Caveat emptor!

CLR BASICS

The Common Language Runtime is the core of Microsoft's .NET vision.

The .NET vision was officially introduced at the Microsoft Professional Developer's conference in Orlando, Florida, in July 2000, although at the time much of the documentation referred to it as "Next Generation Windows Services." Since the PDC, Microsoft has continued to expand upon the list of products and services associated with the .NET name.

In keeping with their tradition of defining vague marketing terms (think ActiveX - did anyone ever figure out exactly what that meant?), the moniker ".NET" has been applied to everything from the next version of the Windows operating system to development tools. It's only half a joke to suggest that we will soon see "Age of Empires.NET" hit the shelves of computer gaming stores.

This effort on Microsoft's part to frame everything from mice to FoxPro in terms of .NET is actually a good sign: it indicates to consumers such as you and me that Microsoft is serious about the product, that it represents a core part of their strategy, and that they are making a fundamental and massive shift. In the same way that they did with COM in the mid 1990s and with the Internet in later years, Microsoft is (in their own words) "betting the company" on this new technology.

But what exactly is .NET? Although the precise meaning can be a little hard to isolate by reading the prolific marketing literature, a little digging reveals that .NET is in fact Microsoft's grand strategy for how all of their software, systems, and services will fit together. It includes development tools (like the new version of Visual Studio, dubbed Visual Studio.NET), future versions of their Windows operating systems, new Internet-based services (like a stepped-up version of their Passport web authentication service), and an entirely new beast called the Common Language Runtime.

The Common Language Runtime is the single most important piece of the .NET product strategy, because it is in essence the engine that pulls the train - the CLR is how developers will write software in the brave new .NET world (see figure 1). For that reason, this tutorial will focus on the CLR exclusively. For information about other .NET technologies, such as HailStorm (a set of web-enabled services based around Microsoft's Passport technology), Visual Studio.NET, and the rest of Microsoft's .NET vision, visit http://www.microsoft.com/net.

The CLR as a development platform

The CLR is a development platform. Like any platform, it provides a runtime, defines functionality in some libraries, and supports a set of programming languages.

The CLR is a platform for developing applications. A platform is a set of programmatic services, exposed through some API to developers using one or more languages. Development generally targets a single platform; when I write a program using Visual Basic, I say that I'm writing it for Windows, my target platform. The forms and controls that I develop won't run directly on, say, Apple's Mac OS X.

The CLR is not an operating system in the strict sense of the term - it does not, for example, provide a file system, relying instead on the underlying OS (such as Windows) to implement that feature. The CLR is, however, a platform, and in much the same way that code written for Unix will not run on Windows, code must specifically target the CLR. Don't panic, though, because there's plenty of consideration given to interoperating with existing, non-CLR code. You'll still be able to use your existing COM objects and DLLs while taking advantage of the new features of CLR development.

The Common Language Runtime is Microsoft's development platform of the future. In Microsoft's vision of the world, most future software will be written to make use of the CLR features. We'll be looking at what the CLR provides you so you can decide for yourself whether the advantages outweigh the costs. Now, assuming that you agree that this new platform offers significant advantages over your current platform, you might wonder, "What's the big picture? What things do I need to learn in order to develop for the CLR?"

When I approach any new platform, be it a new operating system, the CLR, or even an application suite that allows automation of its features, like Microsoft Office or SAS, I mentally break down the feature set of the platform into three fundamental areas: the runtime that the platform offers, the libraries it defines, and the languages I'm going to use. These aspects of the platform overlap (see figure 2), and understanding each of them and the ways in which they interact is crucial to becoming an effective CLR programmer.

JIT

Machines cannot run IL directly. A process known as JIT compilation turns IL into executable code.

We could consider IL to be the machine language of the CLR. Whatever we call it, it's metadata that describes implementation. But because it's not x86 assembly instructions, like we stored in COM and vanilla DLLs, it cannot be executed directly by a computer. We rely on yet another service of the platform to make this magic happen: Just-In-Time Compilation.

Just-In-Time compilation (JIT for short) is the process by which the runtime examines the IL in our assemblies and creates code that can be executed by whatever processor we happen to be running on. The "Just-In-Time" comes from the fact that the runtime performs this compilation at runtime, every time the component is loaded into a new process[22] [23]. This is true compilation - the IL is turned into actual machine code a method at a time. Interpretation - reading one IL instruction at a time and executing it - never occurs in code targeting the CLR.

This is an important point. Because the code is compiled (converted to machine code en masse) rather than read and executed one IL instruction at a time, the performance of code in the CLR should be quite good.

In fact, JIT code could even outperform unmanaged code in some situations, because it knows things that a normal compiler doesn't, like exactly what processor (Pentium II? Pentium IV? AMD Athlon?) the code is executing on. Each processor chip has its own extensions to the standard x86 instruction set, and by using them, the JIT compiler may be able to produce more efficient code[24].

Some of you may remember what a pain it was to move 16-bit Windows 3.1 code to the 32-bit Windows NT or Windows 95 platform. A similar ordeal faces us in the coming years as new 64-bit processors become available. Anyone coding the old way, where your development tools create components that contain actual machine instructions, will need to rewrite everything to take advantage of the new machines. However, anyone that is making use of JIT compilation technology can rely on the runtime to emit the correct code - your components don't have to change at all. The only thing we need to update is the JIT compiler itself, rather than hundreds or thousands of individual components. This is one of the biggest advantages to adding this level of indirection between what our compilers produce and what eventually gets executed by the target computer.

While JIT compilation theoretically[25] gives us the ability to write code that can run on a variety of platforms, there are other great benefits to this approach as well. One of these is something known as code verification. Because we have metadata about the implementation - CIL - and a runtime that understands it, we can get something we never had in COM: an assurance that the code doesn't do anything it shouldn't be allowed to.

Assemblies

An assembly is a collection of types and resources that forms a logical unit of functionality. All types in the .NET Framework must exist in assemblies; the common language runtime does not support types outside of assemblies. Each time you create a Microsoft Windows® Application, Windows Service, Class Library, or other application with Visual Basic .NET, you're building a single assembly. Each assembly is stored as an .exe or .dll file.

Note Although it's technically possible to create assemblies that span multiple files, you're not likely to need multifile assemblies in most situations.

The .NET Framework uses assemblies as the fundamental unit for several purposes:

· Security

· Type Identity

· Reference Scope

· Versioning

· Deployment

Security

An assembly is the unit at which security permissions are requested and granted. Assemblies are also the level at which you establish identity and trust. The .NET Framework provides two mechanisms for this level of assembly security: strong names and SignCode.msi. You can also manage security by specifying the level of trust for code from a particular site or zone.

Signing an assembly with a strong name adds public key encryption to the assembly. This ensures name uniqueness and prevents substituting another assembly with the same name for the assembly that you provided.

The SignCode.msi tool embeds a digital certificate in the assembly. This allows users of the assembly to verify the identity of the assembly's developer by using a public or private trust hierarchy.

You can choose to use either strong names, SignCode.msi, or both, to strengthen the identity of your assembly.

The common language runtime also uses internal hashing information, in conjunction with strong names and SignCode, to verify that the assembly being loaded has not been altered after it was built.

Type Identity

The identity of a type depends on the assembly where that type is defined. That is, if you define a type named DataStore in one assembly, and a type named DataStore in another assembly, the .NET Framework can tell them apart because they are in two different assemblies. Of course you can't define two different types with the same name in the same assembly.

Reference Scope

The assembly is also the location of reference information in general. Each assembly contains information on references in two directions:

· The assembly contains metadata that specifies the types and resources within the assembly that are exposed to code outside of the assembly. For example, a particular assembly could expose a public type named Customer with a public property named AccountBalance.

· The assembly contains metadata specifying the other assemblies on which it depends. For example, a particular assembly might specify that it depends on the System.Windows.Forms.dll assembly.

Versioning

Each assembly has a 128-bit version number that is presented as a set of four decimal pieces: Major.Minor.Build.Revision

For example, an assembly might have the version number 3.5.0.126.

By default, an assembly will only use types from the exact same assembly (name and version number) that it was built and tested with. That is, if you have an assembly that uses a type from version 1.0.0.2 of another assembly, it will (by default) not use the same type from version 1.0.0.4 of the other assembly. This use of both name and version to identify referenced assemblies helps avoid the "DLL Hell" problem of upgrades to one application breaking other applications.

Tip An administrator or developer can use configuration files to relax this strict version checking. Look for information on publisher policy in the .NET Framework Developer's Guide.
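As an illustration of such a configuration file, a binding redirect that relaxes the strict version check might look like the following; the assembly name, public key token, and version numbers here are invented for the example:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- All identity values below are hypothetical. -->
        <assemblyIdentity name="SomeLibrary"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <!-- Requests for 1.0.0.2 are satisfied by 1.0.0.4 instead. -->
        <bindingRedirect oldVersion="1.0.0.2" newVersion="1.0.0.4" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```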

Deployment

Assemblies are the natural unit of deployment. The Windows Installer Service 2.0 can install individual assemblies as part of a larger setup program. You can also deploy assemblies in other ways, including by a simple xcopy to the target system or via code download from a web site. When you start an application, it loads other assemblies as a unit as types and resources from those assemblies are needed.

The Assembly Manifest

Every assembly contains an assembly manifest, a set of metadata with information about the assembly. The assembly manifest contains these items:

· The assembly name and version

· The culture or language the assembly supports (not required in all assemblies)

· The public key for any strong name assigned to the assembly (not required in all assemblies)

· A list of files in the assembly with hash information

· Information on exported types

· Information on referenced assemblies

In addition, you can add other information to the manifest by using assembly attributes. Assembly attributes are declared inside of a file in an assembly, and are text strings that describe the assembly. For example, you can set a friendly name for an assembly with the AssemblyTitle attribute:
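The attribute snippet the sentence above refers to is not shown; it presumably resembled the following, where the title string is an invented example. A Main method is added here only so the attribute can be read back and verified:

```csharp
using System;
using System.Reflection;

// Assembly-level attribute, typically placed in AssemblyInfo.cs.
// The title string is an invented example.
[assembly: AssemblyTitle("Bookstore Sample")]

class ShowTitle
{
    static void Main()
    {
        // Read the friendly name back out of the assembly manifest.
        var attr = (AssemblyTitleAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(AssemblyTitleAttribute));
        Console.WriteLine(attr.Title);   // Bookstore Sample
    }
}
```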

Table 1. Standard assembly attributes

Attribute - Meaning

AssemblyCompany - Company shipping the assembly
AssemblyCopyright - Copyright information
AssemblyCulture - Enumeration indicating the target culture for the assembly
AssemblyDelaySign - True to indicate that delayed signing is being used
AssemblyDescription - Short description of the assembly
AssemblyFileVersion - String specifying the Win32 file version (defaults to the AssemblyVersion value)
AssemblyInformationalVersion - Human-readable version; not used by the common language runtime
AssemblyKeyFile - Name of the file containing keys for signing the assembly
AssemblyKeyName - Key container containing a key pair to use for signing
AssemblyProduct - Product name
AssemblyTitle - Friendly name for the assembly
AssemblyTrademark - Trademark information
AssemblyVersion - Version number expressed as a string

You can also define your own custom attributes by inheriting from the System.Attribute class. These attributes will be available in the assembly manifest just like the attributes listed above.
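A minimal sketch of such a custom attribute; the ReviewedBy name and its usage are invented for illustration:

```csharp
using System;

// A custom attribute: just inherit from System.Attribute.
[AttributeUsage(AttributeTargets.Class)]
public class ReviewedByAttribute : Attribute
{
    public string Reviewer { get; private set; }
    public ReviewedByAttribute(string reviewer) { Reviewer = reviewer; }
}

// Apply the attribute to a (hypothetical) class.
[ReviewedBy("alice")]
public class Payroll { }

class Demo
{
    static void Main()
    {
        // Read the attribute back from metadata at run time.
        var attr = (ReviewedByAttribute)Attribute.GetCustomAttribute(
            typeof(Payroll), typeof(ReviewedByAttribute));
        Console.WriteLine(attr.Reviewer);   // alice
    }
}
```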

The Global Assembly Cache

Assemblies can be either private or shared. By default, assemblies are private, and types contained within those assemblies are only available to applications in the same directory as the assembly. But every computer with the .NET Framework installed also has a global assembly cache (GAC) containing assemblies that are designed to be shared by multiple applications. There are three ways to add an assembly to the GAC:

· Install them with the Windows Installer 2.0

· Use the Gacutil.exe tool

· Drag and drop the assemblies to the cache with Windows Explorer

Note that in most cases you should plan to install assemblies to the GAC on end-user computers by using the Windows Installer. The gacutil.exe tool and the drag and drop method exist for use during the development cycle. You can view the contents of your global

