Parallelism in Computing


Vivek Lahoti
12th December 2008

Moore's Law: "The number of transistors on a chip roughly doubles every two years." (Moore, 3) Gordon Moore, the co-founder of Intel, said this back in 1965, and it has since been the guiding principle of the Integrated Circuit (IC) industry. In the same article Moore wrote, "On the silicon wafer currently used, usually an inch or more in diameter, there is ample room for such a structure if the components can be closely packed with no space wasted for interconnection patterns." (Moore, 3) Though the law has stood the test of time, things have certainly changed. Moore's more recent view of his own law was: "In terms of size [of transistor] you can see that we're approaching the size of atoms which is a fundamental barrier." (Dubash, par.3) Exponential growth cannot continue forever. Yet with technological advances the need for faster computers keeps increasing, and meeting it requires a paradigm shift in computing: instead of burdening one processor with all the work, use more than one processor to do the job. That brings in the need for Multi-Processing.

Multi-Programming vs Multi-Processing:

Multi-Programming: Early computers, such as an old DOS-based environment, could do only one thing at a time, which limited the areas where they could be used. As processor speeds grew by leaps and bounds and more advanced techniques emerged, the processor was required to handle several activities at once. In this technique, called Multi-Programming, several programs appear to run simultaneously on a single processor. Since there is only one processor, there is no true simultaneous execution of two programs; instead, the processor executes one part of a program and then another part of the same or a different program.
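The time-sharing described above can be sketched in Python (a minimal illustration, not from the original essay; under CPython's Global Interpreter Lock, threads are interleaved on one interpreter much as multi-programmed jobs were interleaved on one processor):

```python
import threading

# Two "programs" that each record their progress. On a single processor
# (and under CPython's GIL) their steps are interleaved by the scheduler
# rather than truly executed at the same instant.
trace = []
lock = threading.Lock()

def program(name, steps):
    for i in range(steps):
        with lock:
            trace.append((name, i))

t1 = threading.Thread(target=program, args=("A", 3))
t2 = threading.Thread(target=program, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()

# All six steps complete, in some interleaved, scheduler-chosen order.
print(sorted(trace))
```

The exact interleaving differs from run to run, which is exactly the point: one processor alternates between the two programs rather than running them at the same time.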
This also brought forward the need for faster memory access: cache memory, faster RAM, virtual memory and so on, so that large amounts of data from various programs could be swapped in and out of the processor quickly.

Multi-Processing: Processor speed was always the major bottleneck in a Multi-Programming environment, so the obvious next step was to use more than one processor. In a Multi-Processing environment, two or more processors share the work to be done.

Comparison: The first and most obvious advantage of Multi-Processing over Multi-Programming is that, since the work is divided between two or more processors, proportionately higher speed can be achieved. Another major advantage is that, since different programs can be handled by different processors, data swapping to and from the cache is minimized, improving performance.

Types of Multi-Processing: The main techniques that use physically more than one processor are discussed below.

Master-Slave Configuration: In the earliest version, one processor (the master) was responsible for all the work in the system, while the other (the slave) performed only those tasks assigned to it by the master. The master therefore had to be a more powerful processor than the slave. This arrangement was necessary because issues around sharing common resources had not yet been resolved satisfactorily.

Symmetric Multi-Processing (SMP): In Symmetric Multi-Processing, two or more 'Tightly Coupled' processors share a common Operating System (OS) along with all system resources such as memory and data paths.

Massively Parallel Processing (MPP): As the number of processors in an SMP system grows, communication between the processors becomes an issue and the performance benefit of adding more processors diminishes.
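The division of work between processors can be sketched with Python's standard concurrent.futures module (a hypothetical example; the names sum_squares and partial_sum are mine, not the essay's; each worker process can run on a separate processor):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # Each worker computes the sum of squares over its own chunk.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def sum_squares(n, workers=2):
    # Split [0, n) into one chunk per worker; each chunk is handled by
    # a separate OS process, i.e. potentially a separate processor.
    step = (n + workers - 1) // workers
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(sum_squares(1000))  # same result as the serial sum
```

Because the chunks are independent, no data needs to be shared between the workers while they run, which is the favorable case for Multi-Processing described above.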
A Massively Parallel Processing system can be roughly described as a group of 'Loosely Coupled' SMPs.

The major challenge for multiple-processor systems is resource and data sharing. Since memory must be shared between different processors, maintaining 'Cache Coherence' is a major problem. Systems used to crash, and data recovery was difficult because it was hard to determine how the system had crashed. Malware could even be designed to disturb cache coherence and thereby crash the system.

Multi-Core Processors: Multi-Core Processors combine two or more 'cores' (each normally a CPU) on a single Integrated Circuit. They can be roughly described as advanced SMPs in which the cores are even more 'Tightly Coupled'. The cores may share a single coherent cache at the highest level or have entirely separate caches; in both cases, cache coherence is maintained by close coordination between the cores. They may share I/O ports, and they share other system resources such as the data bus and RAM. Also, since proven designs are reused for the cores with few architectural changes, design risk is significantly reduced.

Major Issues: The three most commonly cited issues, or 'walls', in parallel processing are the Memory Wall, the Instruction Level Parallelism Wall and the Power Wall.

Memory Wall: The Memory Wall is the growing gap between the speed of the processor and that of external memory, mainly because communication outside the processor over the data bus is slow. Natural solutions are to develop higher-speed RAM or, better still, larger on-chip memory.

Instruction Level Parallelism (ILP) Wall: A very interesting law regarding parallel processing was given by Gene Amdahl, known as Amdahl's Law: "A small portion of the program which cannot be parallelized will limit the overall speed-up available from parallelization."
(Amdahl, par.4) Naturally, parallel processing helps only where work can actually be done in parallel. At the software level this can be addressed with Thread Level Parallelism (TLP) and similar techniques.

Power Wall: A processor generally consumes roughly twice the power with each doubling of operating frequency, and more power consumption means more heat dissipation, which poses manufacturing, system-design and deployment problems. However, a Dual-Core processor will generally consume less power than two Single-Core processors, because some circuitry is integrated and shared.

Together, these three walls drive the software and hardware technologies of parallel processing. Its true power can be experienced in multi-threaded applications such as artificial intelligence, data mining, gaming, multimedia and networking.

Expansion Cards: Expansion Cards are circuits that can be inserted into an expansion slot of a computer motherboard to provide additional functionality. They almost always follow the Master-Slave configuration. There are various types, such as video cards, sound cards, network cards and TV tuner cards. Each usually performs a single specific task, such as multimedia processing, that can run in parallel with the rest of the computation. By taking some of the burden off the CPU, they improve the overall computing experience. The most common expansion cards are for multimedia applications and networking.

Summary: Though Moore's Law, the doubling of the number of transistors per chip every two years, has stood the test of time, its fundamental barrier, the size of atoms, is near. A paradigm shift in microprocessor design is therefore needed to meet the ever-increasing demand for faster computing. Multi-Programming made it possible to run many programs on a single CPU using time-sharing.
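Amdahl's Law, quoted in the ILP Wall discussion above, can be made concrete with a small calculation (a sketch using the standard formulation S = 1 / ((1 - p) + p/n), where p is the parallelizable fraction and n the number of processors; this notation is mine, not the essay's):

```python
def amdahl_speedup(p, n):
    """Overall speed-up for parallelizable fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with unlimited processors, a 5% serial portion caps the
# achievable speed-up at 1 / 0.05 = 20x.
for n in (2, 8, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

The diminishing returns as n grows are exactly the ILP Wall: the serial 5% dominates long before the processor count does.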
However, in that technique the speed and availability of the single CPU become a bottleneck. Multi-Processing overcomes this by using more than one CPU to run one or more applications. Multi-Processing systems have evolved from the Master-Slave configuration (in which one processor controls the other), through Symmetric Multi-Processing (SMP) systems (in which two or more similar processors are 'Tightly Coupled'), to Massively Parallel Processing (MPP) systems (in which several SMPs are 'Loosely Coupled'). Multi-Core Processors integrate more than one CPU on a single Integrated Circuit. Multi-Processing techniques face three major issues: the Memory Wall (memory speed, and the time lost swapping data with memory, become the bottleneck), the Instruction Level Parallelism (ILP) Wall (where work cannot be done in parallel) and the Power Wall (difficulties due to the high power consumption of the CPUs).

Works Cited

Moore, Gordon. "Cramming More Components onto Integrated Circuits." Electronics, Volume 38, Number 8, 19 April 1965. Accessed 12 December 2008.

Dubash, Manek. "Moore's Law is Dead, says Gordon Moore." TechWorld.com, 13 April 2005. Accessed 12 December 2008.

Amdahl, Gene M. "Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities." AFIPS Spring Joint Computer Conference, 1967. Accessed 12 December 2008.