Advanced Computer Architecture Classification in Regards to Microprocessing

Summary
The goal of the paper "Advanced Computer Architecture Classification in Regards to Microprocessing" is to shed light on the organization and architecture of multiprocessor computing systems. The writer will particularly discuss the issue of interconnection scheme…

Advanced Computer Technology and Advanced Computer Architecture

A multiprocessor is a computer that uses two or more processing units under integrated control, while multiprocessing is the use of two or more CPUs within a single computer system. As the name indicates, multiprocessors are able to support more than one processor simultaneously. In multiprocessing the processors are organized in parallel, so a large number of instructions can be executed at the same time. Multiprocessing can also be defined as the sharing of the execution process through the interconnection of more than one microprocessor using tightly or loosely coupled technology (Culler, 1999). Multiprocessing tasks involve two simultaneous activities: performing the task of editing and handling the data processing. A multiprocessor device integrates, on a single semiconductor chip, a plurality of processors, including a first group and a second group: the first bus is the one to which the first group of processors is coupled, while a second, external bus is the one to which the second group is coupled. The term is also used to refer to a computer that has many independent processing elements. Processing elements are nearly full computers in their own right; the main difference is that they have been freed from the encumbrance of communication with peripherals (El-Rewini and Abd-El-Barr, 2005).

Multiprocessor systems in terms of architecture

The processors are built from small- and medium-scale ICs that contain a smaller or larger number of transistors. The most common multiprocessor systems today use the SMP (symmetric multiprocessing) architecture. In the case of multicore processors, the SMP architecture applies to the cores, treating them as separate processors. SMP systems permit any processor to work on any task no matter where the data for that task are located in memory; with proper operating system support, tasks can easily be moved between processors to balance the workload efficiently (Stallings, 2013). Architecturally, multiprocessing offers benefits such as increased processing power and the ability to scale resource use to application requirements, but it also adds operating system responsibilities: keeping all processors busy, ensuring they work on consistent copies of shared data, synchronizing the execution of related processes, and enforcing mutual exclusion. Multiprocessing is a processing mode in which two or more processors work together to process more than one program simultaneously; systems with more than one processor are therefore known as multiprocessor systems (Hennessy, 2012). In a master/slave multiprocessor system, there is one master processor and the others are termed slaves. If one processor fails, the master assigns its tasks to another slave processor; if the master processor fails, the whole system fails. The central processor is called the master, and all processors share the hard disk, memory, and other storage devices. Examples of multiprocessor features include SMP, bus interconnection, snoopy cache coherence, and 512 KB of L2 cache per processor, to mention a few (Hamacher, 2012).

Architecture classification

Multiprocessor architectures can be classified by, first, the nature of the data path; second, the interconnection scheme; and third, how processors share resources. A minimal sketch of process-level parallelism on an SMP machine is given after this section.
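To make the idea of several processors executing work simultaneously concrete, the following Python sketch (an illustration, not part of the paper) distributes independent tasks across the CPUs of an SMP machine using only the standard library; the worker function and task list are hypothetical examples.

```python
# Minimal sketch: process-level parallelism on an SMP machine.
# The worker function and the task list are hypothetical examples.
from multiprocessing import Pool, cpu_count

def square(n):
    """A trivial, independent unit of work."""
    return n * n

if __name__ == "__main__":
    tasks = list(range(16))                  # independent work items
    with Pool(processes=cpu_count()) as pool:
        # The OS scheduler may run these worker processes on different
        # processors of the SMP system at the same time.
        results = pool.map(square, tasks)
    print(results)
```

Because each task is independent, the operating system is free to place the worker processes on any available processor, which is exactly the load-balancing property of SMP described above.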
Message-passing architectures provide a separate address space for each processor, and the processors communicate via message passing. Shared-memory architectures provide a single address space shared by all processors; processors communicate through memory reads and writes, as in SMP or NUMA systems (Hurson, 2014). Sequential and parallel architectures can also be classified in terms of streams: a stream is a sequence of bytes, and Flynn's classification distinguishes machines by their instruction streams and data streams.

MISD multiprocessing offers mainly the advantage of redundancy, since multiple processing units perform the same task on the same data, which reduces the chance of incorrect results if one of the units fails. MISD architectures may compare the outputs of the processing units in order to detect failures. Apart from its redundant, fail-safe character, this type of multiprocessing has few advantages and is very costly; it does not improve performance, but it can be implemented in a way that is transparent to software (Lafferty, 1995).

MIMD multiprocessing architecture is suitable for a wide variety of tasks in which completely independent, parallel execution of instructions touching different sets of data can be put to productive use. For this reason, and because it is comparatively easy to implement, MIMD predominates in multiprocessing. Processing is subdivided into several threads, each with its own hardware processor state, within a single software-defined process or within multiple processes. As long as the system has numerous threads awaiting dispatch (either user or system threads), this architecture makes good use of hardware resources (Chapman, 2010). MIMD nevertheless raises issues of deadlock and resource contention, since threads may collide in their access to resources in an unpredictable way that is difficult to manage efficiently. It requires special coding in the operating system, but does not require changes to applications unless the programs themselves use multiple threads. Both system and user software may need to use software constructs such as locks or gates (also known as semaphores) to prevent one thread from interfering with another when they reference the same data; a minimal locking sketch follows this section. This locking increases code complexity, lowers performance, and greatly increases the amount of testing required, although it is not usually enough to negate the advantages of multiprocessing (Cheptsov, 2013).

SISD multiprocessing uses a single instruction stream in which each instruction processes one data item. SIMD multiprocessing handles a stream of instructions, each of which can perform calculations in parallel on multiple data locations. It is well suited to parallel or vector processing, in which a very large set of data can be divided into parts that are subjected to identical but independent operations. A single instruction stream directs multiple processing units to perform the same manipulations simultaneously on large quantities of data; an illustrative data-parallel sketch is also given below. For some kinds of computing applications, this sort of architecture can yield massive increases in performance in terms of the elapsed time required to complete a given task. A disadvantage of this architecture, however, is that a large part of the system falls idle when programs or system tasks are executed that cannot be divided into units which can be processed in parallel (Catanzaro, 1994).
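The locking constructs mentioned in the MIMD discussion can be illustrated with a short, self-contained Python sketch; the shared counter and the number of threads are hypothetical, not taken from the paper. Without the lock, concurrent increments of shared data may interleave and lose updates; with it, access is mutually exclusive.

```python
# Minimal sketch of mutual exclusion between threads sharing data.
# The shared counter and thread count are hypothetical examples.
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:        # only one thread may update the counter at a time
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; without it, updates could be lost
```

The cost described in the text is visible here as well: every increment now pays for acquiring and releasing the lock, which is the performance and complexity overhead the paragraph refers to.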
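The SIMD idea of applying one operation to many data elements at once can be approximated in Python with NumPy; this is an analogy rather than a hardware-level demonstration, and it assumes NumPy is installed. The array sizes below are arbitrary.

```python
# Minimal data-parallel sketch in the SIMD spirit: one operation, many elements.
# NumPy is assumed to be available; the array contents are arbitrary.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# A single vectorized expression applies the same arithmetic to every
# element pair, rather than looping over items one at a time (SISD-style).
c = a * 2.0 + b

print(c[:5])
```

The vectorized expression expresses exactly the "same manipulation on large quantities of data" pattern the text describes, which is why it parallelizes so well on vector or SIMD hardware.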
Interconnection scheme

The interconnection scheme describes how the system's components, such as processors and memory modules, are connected. It involves nodes and links; the parameters used in evaluating interconnection schemes include node degree, bisection width, the shared bus, and cost, and examples range from fast multiprocessors to dual-processor Intel Pentium systems.

NUMA, UMA, and shared memory

The major difference between the NUMA and UMA memory architectures is the location of memory. In the UMA architecture, a node couples the first and second cache levels with the processor, and the next levels of the memory hierarchy are reached through the interconnection network. The NUMA architecture defines a node as a processing element together with its cache lines and a part of the main memory; the nodes are then connected to each other by the network. In the NUMA architecture, therefore, both the memory and the cache are distributed among the nodes, whereas in the UMA architecture only the cache is distributed. Parallel computing is a form of computation in which many calculations are carried out at the same time, operating on the principle that large problems can often be divided into smaller ones which are then solved concurrently (Kempf, 2011). Parallel computers can be classified roughly by the level at which the hardware supports parallelism, with multicore and multiprocessor computers having several processing elements within a single machine. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks.

Architectural background of Non-Uniform Memory Access

NUMA machines provide a linear address space that allows processes to address all memory directly. This feature exploits the 64-bit addressing available in modern scientific computers. The advantages over distributed-memory machines include fast data movement, less duplication of data, and easier programming; the drawbacks include the cost of the hardware routers and the lack of programming standards for large configurations (Kaminsky, 2010). The fundamental building block of a NUMA machine is a uniform memory access region that we will call a "node". Within this region, the processors share a common memory. This local memory provides the fastest memory access for each of the processors on the node, so the number of processors on a node is limited by the speed of the switch that couples the processors with their local memory (Dowd, 1998). For larger configurations, multiple nodes are combined to form a NUMA machine. When a processor on one node accesses data that is stored on another node, hardware routers automatically send the data from the node where it is stored to the node where it is requested. This extra step adds delay to the memory access and thus degrades performance. Small to medium-sized NUMA machines have only one level of memory hierarchy: data is either local or remote. Larger NUMA machines use deeper topologies, with greater delays for nodes that are further away (May, 2001). One design goal of a NUMA machine is to make the routers as fast as possible, to minimize the difference between local and remote references. An individual application's performance depends on the number of nodes used: if only two nodes are used and memory is placed randomly, there is a fifty percent chance that a memory reference will be local, and this probability decreases as the number of nodes increases, as the sketch below illustrates.
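The fifty-percent figure follows from assuming that randomly placed data is equally likely to land on any node, so the probability that a reference is local is 1 divided by the number of nodes. The short Python sketch below (an illustration, not from the paper) tabulates this relationship.

```python
# Minimal sketch: probability that a memory reference is local when data
# is placed uniformly at random across the nodes of a NUMA machine.
def local_reference_probability(num_nodes: int) -> float:
    """With uniform random placement, a reference lands on the
    requesting node with probability 1 / num_nodes."""
    return 1.0 / num_nodes

for nodes in (2, 4, 8, 16):
    print(f"{nodes:2d} nodes -> {local_reference_probability(nodes):.1%} local")
# 2 nodes -> 50.0% local, matching the figure quoted in the text;
# the share of fast, local references falls as the number of nodes grows.
```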
The programming tools described in the next section overcome the scaling issues associated with large NUMA architectures (Buyya, 1999).

Programs for clusters of SMPs (data-parallel)

Clusters of symmetric shared-memory multiprocessors are becoming the most promising parallel computing platforms for scientific computing; examples include systems from SUN, IBM, and SGI. SMP clusters comprise a set of multiprocessor compute nodes connected via a high-speed interconnection network. While the processors of a compute node have direct access to a shared memory, data located on other nodes has to be accessed by means of message passing. As a consequence, the complexity of application development increases significantly: programmers are forced to deal both with shared-memory programming issues, such as multithreading and synchronization, and with distributed-memory issues, such as data distribution and message-passing communication (Kempf, 2011).

Exploiting the hierarchical structure of SMP clusters

HPF provides the concept of abstract processor arrangements for establishing an abstraction of the parallel target architecture in the form of one or more rectilinear processor arrays. Processor arrays are used within data distribution directives to describe a mapping of array elements to abstract processors; array elements mapped to an abstract processor are said to be owned by that processor. Ownership of data is the central concept in the execution of data-parallel programs: based on the ownership of data, the distribution of computations to abstract processors and the necessary communication and synchronization are derived automatically. Consider now an SMP cluster consisting of NN nodes, each equipped with NPN processors (Chapman, 2010). Currently, if an HPF program is targeted to an SMP cluster, abstract processors are either associated with the NN nodes of the cluster or with the NN*NPN processors. In the first case, data arrays are distributed only across the NN nodes of the cluster, and therefore only parallelism of degree NN can be exploited. In the second case, where the abstract HPF processors are associated with the individual processors of the cluster, potential parallelism of degree NN*NPN can be exploited. Nevertheless, by viewing an SMP cluster as a distributed-memory machine consisting of NN*NPN processors, the shared memory available within nodes is usually not exploited, since data distribution and communication are performed within nodes as well (Stallings, 2013).

Exploiting DM and SM parallelism

If an abstract processor array dimension is distributed by block or genblock distribution, contiguous blocks of processors are mapped to the nodes in the corresponding dimension of the specified abstract node array. As a consequence of such a processor mapping, both distributed-memory and shared-memory parallelism may be exploited for all data array dimensions that are mapped to that dimension of the processor array. Conversely, if in a processor mapping a dimension of an abstract processor array is distributed by means of "*", all abstract processors in that dimension are mapped to the same node of the abstract node array, and therefore only shared-memory parallelism may be exploited across the array dimensions mapped to that processor array dimension. A sketch of block ownership computation follows this section.
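To make the ownership idea concrete, the following Python sketch (a generic illustration of block distribution, not the behavior of any particular HPF compiler) computes which contiguous block of array indices each of P abstract processors owns; the array length and processor count are arbitrary assumptions.

```python
# Minimal sketch of block distribution: map N array elements onto P
# abstract processors in contiguous blocks, in the spirit of an HPF-style
# BLOCK distribution. The array length and processor count are hypothetical.
def block_owned_indices(n_elements: int, n_procs: int, proc: int) -> range:
    """Return the range of indices owned by abstract processor `proc`."""
    block = -(-n_elements // n_procs)      # ceiling division: block size
    start = proc * block
    stop = min(start + block, n_elements)
    return range(start, stop)

N, P = 10, 4                               # hypothetical sizes
for p in range(P):
    print(f"processor {p} owns elements {list(block_owned_indices(N, P, p))}")
# Computation on an owned block needs no communication; touching elements
# owned by another processor implies message passing between nodes.
```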
Both distributed- and shared-memory parallelism may be exploited for one of the arrays; for array B, only shared-memory parallelism may be exploited across the first dimension, while both shared-memory and distributed-memory parallelism may be exploited across the second dimension (El-Rewini and Abd-El-Barr, 2005).

Process and thread level distribution

In process-level distribution, each process provides the resources needed to execute a program. A process has a virtual address space, open handles to system objects, executable code, a unique process identifier, a security context, minimum and maximum working set sizes, a priority class, and environment variables. Each process starts with a single thread, often called the primary thread, but it can create additional threads from any of its threads (Culler, 1999). At the thread level, a thread is the entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, thread-local storage, and a set of structures that the system uses to save the thread context until the thread is scheduled. The thread context includes the thread's set of machine registers, a thread environment block, and a user stack in the address space of the thread's process. Threads can also have a security context of their own, which can be used for impersonating clients. A short sketch contrasting process-level and thread-level distribution is given at the end of this section.

Advantages

SMPs have several advantages. They are comparatively cheap: the processors share the same resources, so a separate power supply or motherboard for each chip is not required, which in turn reduces cost. Reliability is also increased, in that the failure of one processor does not affect the other processors, although it slows the machine down; some additional mechanisms are required to achieve this increased dependability (Culler, 1999). SMPs can also run a larger number of processes and complete work in a shorter time. It is important to note, however, that doubling the number of processors does not halve the time needed to complete a workload, because of the overhead of communication between processors and contention for shared resources (Hennessy, 2012). With proper access control mechanisms and proper scheduling of the jobs in the pool, a time-shared machine can offer the same degree of security as a dedicated machine. In mainframe or minicomputer systems, many users access the same computer and the same resources, so the operating system must be managed in such a way that all users get a fair share of resources such as memory, I/O, and available CPU time (Buyya, 1999). On workstations connected to servers, users have dedicated resources of their own but also share resources such as the network and servers like file or print servers, so the operating system must balance individual access to the shared resources. Handheld computers are mostly used by a single user, but limitations of power, speed, and interface sometimes require operations to be carried out remotely, so the operating system must manage the device's resources and memory carefully (El-Rewini and Abd-El-Barr, 2005).
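The contrast between process-level and thread-level distribution can be shown with a short Python sketch; the shared list and worker function are hypothetical illustrations. A thread shares its parent's virtual address space, so its change to a module-level variable is visible to the parent, whereas a child process gets its own address space and its own copy of the data.

```python
# Minimal sketch contrasting thread-level and process-level distribution.
# The shared list and worker function are hypothetical illustrations.
import threading
import multiprocessing

data = []                       # lives in the creating process's address space

def append_marker(marker):
    data.append(marker)

if __name__ == "__main__":
    # A thread shares the virtual address space of its process,
    # so its change to `data` is visible here after it finishes.
    t = threading.Thread(target=append_marker, args=("from-thread",))
    t.start(); t.join()

    # A child process gets its own address space (and its own copy of
    # `data`), so its change is NOT visible in the parent.
    p = multiprocessing.Process(target=append_marker, args=("from-process",))
    p.start(); p.join()

    print(data)                 # ['from-thread'] only
```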
List of References

Buyya, R. 1999. High Performance Cluster Computing. Upper Saddle River, NJ: Prentice Hall PTR.
Catanzaro, B. J. 1994. Multiprocessor System Architectures. Mountain View, CA: Sun Microsystems.
Chapman, B. 2010. Parallel Computing. Amsterdam: IOS Press.
Cheptsov, A. 2013. Tools for High Performance Computing 2012. Berlin: Springer.
Culler, D. E., Singh, J. P. and Gupta, A. 1999. Parallel Computer Architecture. San Francisco: Morgan Kaufmann Publishers.
Dowd, K. 1998. High Performance Computing. Cambridge: O'Reilly & Associates.
El-Rewini, H. and Abd-El-Barr, M. 2005. Advanced Computer Architecture and Parallel Processing. Hoboken, NJ: John Wiley.
Hamacher, V. C. 2012. Computer Organization and Embedded Systems. New York, NY: McGraw-Hill.
Hennessy, J. L., Patterson, D. A. and Asanović, K. 2012. Computer Architecture. Waltham, MA: Morgan Kaufmann.
Hurson, A. 2014. Advances in Computers. Burlington: Elsevier Science.
Kaminsky, A. 2010. Building Parallel Programs. Boston, MA: Course Technology.
Kempf, T., Ascheid, G. and Leupers, R. 2011. Multiprocessor Systems on Chip. New York: Springer.
Lafferty, E. L. 1995. Parallel Computing. Burlington: Elsevier Science.
May, J. M. 2001. Parallel I/O for High Performance Computing. San Francisco, CA: Morgan Kaufmann Publishers.
Stallings, W. 2013. Computer Organization and Architecture. Boston: Pearson.
