Further Computer Systems Architecture - Assignment Example

Summary
The assignment "Further Computer Systems Architecture" states that traditional architecture could no longer address the complex and growing need to process massive data at a given time. Heavy workloads placed too much pressure on high-performance servers and hardware. …

Computer Systems Architecture

Q1. A. Key features of IA-64 Architecture

Traditional architecture could no longer address the complex and growing need to process massive amounts of data at a given time. Heavy workloads placed too much pressure on high-performance servers and hardware. The IA-64 architecture resolves the demand for the complex requirements of the Internet, e-commerce, database management and other applications that traditional architecture could not meet. IA-64 is based on innovative features such as “Explicit Parallelism, and Predication and Speculation, resulting in superior Instruction Level Parallelism (ILP) and increased instructions per cycle (IPC).”1 In addition, IA-64 has scalability that allows for future expansion.

In order to overcome the limitations of traditional architectures, IA-64 uses a technique called predication, which indicates which paths are being utilized and which are not. The paths that are in use proceed with their activity, while paths detected as unused are automatically turned off. Predication is an effective technique for handling complex activities when the computer pursues aggressive instruction-level parallelism (ILP).2

Memory insufficiency is a common problem that traditional architecture cannot address, because CPUs run at far higher speeds than memory can serve data. IA-64 resolves the problem with a technique referred to as speculation.3 Its purpose is to initiate memory loads before the branch that needs the data is reached, which makes the memory available on demand. This also increases instruction-level parallelism and thus reduces the “impact of memory latency”.4 The “NaT” bits allow IA-64 to load data ahead of time without registering an error message.5

Traditional architectures have limited instruction-level parallelism. In the IA-64 architecture, processors typically include “128 general purpose integer registers, 128 floating point registers, 64 predicate registers and many execution units” to accommodate present and future requirements.6 This is especially important if the server handles huge amounts of data at any given time.

IA-64 also handles loops differently from traditional architecture. The use of register rotation prevents code bloat by allowing the “pipelining of loops.”7 Unlike in traditional architecture, each register moves up a notch on every iteration, and the last register wraps back to the beginning, simulating rotation. In combination with predication, this loop feature enables the compiler to generate loop code in a highly parallel form.8 To further enhance efficiency, IA-64 offers 128 registers readily available to compilers, which is appropriate because compilers can easily detect when a program no longer needs a specific register. Compared with other architectures that carry only 32 registers, IA-64 has its registers partitioned into two subsets: 32 static and 96 stacked.9

Q1. B. Benefits of Parallel Computing

Parallel computing is simply the use of two or more computers or processors simultaneously to process or execute a program. It is essentially a method used to solve complex problems in the shortest possible time. In order to run the program, two or more CPUs must be present. The problem is then decomposed into smaller sections, and each section is solved simultaneously on a different unit.
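To make the decomposition idea concrete, the following sketch, which is not part of the cited sources, splits an array sum across two POSIX threads; the thread count, array size and all identifiers are illustrative assumptions.

/* Minimal sketch: decompose an array sum across two POSIX threads.
 * Thread count, array size and names are illustrative assumptions. */
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 2

static long data[N];

struct slice { long start, end, sum; };

static void *partial_sum(void *arg)
{
    struct slice *s = arg;
    s->sum = 0;
    for (long i = s->start; i < s->end; i++)
        s->sum += data[i];               /* each thread works on its own section */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice slices[NTHREADS];

    for (long i = 0; i < N; i++)
        data[i] = i % 10;

    /* decompose the problem into smaller sections */
    for (int t = 0; t < NTHREADS; t++) {
        slices[t].start = t * (N / NTHREADS);
        slices[t].end   = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, partial_sum, &slices[t]);
    }

    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {  /* combine the partial results */
        pthread_join(tid[t], NULL);
        total += slices[t].sum;
    }
    printf("total = %ld\n", total);
    return 0;
}

Compiled with cc -pthread, the two partial sums are computed concurrently and then combined, which is the essence of decomposing a problem across processing units.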
The configuration of parallel computing comes in three forms: “A single computer with multiple processors; An arbitrary number of computers connected by a network; A combination of both.”10 The benefits of parallel computing include the capability of solving complex problems while saving time and money, and it allows multiple tasks to be done at the same time. One of the contributions of parallel computing to current development efforts is the possibility of creating multiple cores and veering away from the single-core configuration of processors. Dual-core microprocessors permit the processing of massive data while keeping the system scalable.11 Parallel computing also makes it possible for users to utilize other resources within the network if the local system is unavailable, and it can overcome memory limitations: a single-core microprocessor has a finite memory capacity, but with parallel computing one can draw on other sources to supplement memory requirements at minimal cost. Parallel computing is applied in many fields such as aerospace, military, research, medicine, database management, telecommunications and gaming, to name a few. It is touted as the future direction of computer architecture development, because it responds to the burgeoning demand for stronger, faster and more cost-effective computing.

Q2. A. The difference between RISC and CISC leading up to the dual core

RISC, or reduced instruction set computer, is a kind of microprocessor with a limited number of instructions. By creating microprocessors that recognize simpler but fewer instructions, the system can execute commands quickly. The design also requires fewer transistors, so it is cheaper to design and manufacture.12 In most personal computers, CISC, or complex instruction set computer, is used. In computers with CISC architecture, the processor can support as many as two hundred instructions.13

Comparing the RISC and CISC architectures, instructions or commands in RISC are executed in a single clock cycle and only simple commands are accepted, while CISC allows multiple clock cycles and more complex commands. RISC places the emphasis on software, while CISC relies on specific hardware configurations. The “load” and “store” operations in CISC are incorporated into a single instruction, while in RISC they are independent instructions.14 By separating the two, each instruction makes the computer do less work. CISC requires small code sizes at high cycles per second, while RISC executes commands using large code sizes at low cycles per second. In CISC, more transistors are needed to store complex instructions, while in RISC the transistors are spent on memory registers.15 The CISC architecture also requires that operands be reloaded: if an operand is required again in another operation, the CISC processor must re-load the data from the memory repository, whereas in RISC the value remains in a register as long as it is not replaced with another.

The early development of the Intel IA-64 originally used a CISC platform with the intention of competing with RISC processors.16 In recent developments, a convergence between RISC and CISC has been seen as a solution to the increasing speed requirements of the microprocessor: CISC processors developed today are able to execute more than one instruction in a single cycle, while RISC designs have taken on more CISC-like commands. With the advent of newer technologies, one can fit ever more transistors into a single chip.17
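As an illustration of the load/store distinction just described, here is a small sketch of my own, not taken from the cited sources; the statement is ordinary C, and the mnemonics in the comments are generic stand-ins rather than any real processor's instruction set.

/* One C statement, two hypothetical instruction sequences. The
 * mnemonics in the comments are generic illustrations, not a real ISA. */
int scale(int *a, int i)
{
    /* A CISC-style encoding might fold load, multiply and store into a
     * single multi-cycle instruction with a memory operand:
     *     MULT  [a + i*4], 5
     *
     * A RISC-style (load/store) encoding uses several simple,
     * single-cycle instructions and keeps the operand in a register:
     *     LOAD  r1, [a + i*4]
     *     MULI  r1, r1, 5
     *     STORE [a + i*4], r1
     */
    a[i] = a[i] * 5;
    return a[i];
}

The RISC sequence is larger in code size, but each step completes in one cycle and the value left in r1 can be reused without another load, which is the reloading difference noted above.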
Q2. B. Comparison of the IA-64, Intel and AMD Dual Core

The IA-64 architecture initially addressed the problem of faster processing by using explicit parallelism to speed things up and accept heavy workloads. With their dual-core architectures, both Intel and AMD took the development of high-performance processors to a higher level. A dual-core processor places two execution units in a single die; in other words, it is a twin processor packaged as one.18 Dual-core CPUs are intended to run multiple tasks without sacrificing the time required and are designed to perform better than their single-core variants.

Brown (2005) conducted an evaluation of both Intel's and AMD's offerings in dual-core technology. Brown (2005) initially tested the capability of the two designs in serving daily activities such as using word-processing programs. AMD's Athlon 64 X2 4800+ outpaced Intel's flagship Pentium Processor Extreme Edition 840 chip.19 AMD's part registered faster speeds in multitasking and multi-threaded computing, especially where 3D applications were involved. Each core of Intel's dual-core design includes “1MB of L2 cache and support Hyper-Threading--for a total of 2MB of L2 cache and support for four execution threads.”20 AMD, on the other hand, had a built-in memory controller that allows faster execution times.

AMD's offering appeared to outgun Intel's dual-core system for several reasons. First, compared with AMD's built-in memory controller, which essentially abbreviated some processing steps, Intel's “PEE 840 must communicate with system memory via a separate memory controller connected via the frontside bus, a pathway on which data can travel at 1,066MHz at best.”21 AMD's built-in memory controller eliminated that additional step. Second, AMD's dual core is adaptable to existing AMD motherboards, so it does not require users to discard usable components in an upgrade; AMD Athlon chips work well with older boards.22 Brown (2005, 2005b) also found that in multitasking, gaming, 3D generation and multi-threading activities, AMD overtook Intel in speed: AMD's dual core performed better than Intel's. Brown (2005, 2005b) concluded that the AMD Athlon chipset was far superior to Intel's; comparing the Athlon 64 X2 4800+ and Intel's PEE 840, AMD had an “average performance gain of 15.2 percent across the board.”23 Finally, Intel's offering is more expensive than AMD's.

Q3. Why is it necessary to control concurrency in multi-processing environments? Explain the methods that can be used for concurrency control.

Concurrency control is necessary for “regulating multiple requests for modification of a shared resource.”24 It is also important in managing multiple database transactions and ensuring that simultaneous activities are serializable.25 Concurrency control is needed because simultaneous operations can otherwise lead to lost updates, uncommitted data and inconsistent retrievals.26 It is also required when huge amounts of data are accessed but main memory can only accommodate a fraction of them, and the presence of multitasking complicates the matter. There are two main techniques for concurrency control: pessimistic concurrency control and optimistic concurrency control. The pessimistic method assumes that a problem will occur; it is useful for detecting conflicts as soon as they occur and resolves them through blocking.27 A typical action is locking.
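A minimal sketch of the locking idea, assuming a POSIX threads environment; the shared balance and the function name are invented for illustration and are not from the cited sources.

/* Pessimistic concurrency control sketch: the shared object is locked
 * before it is used, so competing transactions must wait their turn.
 * The shared balance and function name are illustrative. */
#include <pthread.h>

static long balance = 0;                                  /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void deposit(long amount)
{
    pthread_mutex_lock(&lock);    /* acquire: other transactions now block */
    balance += amount;            /* modify the shared object safely */
    pthread_mutex_unlock(&lock);  /* release: a waiting transaction may proceed */
}

Because the lock is taken before the update, two threads calling deposit at the same time cannot lose an update; the cost is that the second thread is blocked even when no conflict would actually have occurred.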
Locking is considered pessimistic because it assumes that a conflict will occur. The strategy is used in transaction execution: a transaction locks an object before it is used, and when other transactions request the same object they must wait for the first transaction to complete.28

Optimistic concurrency control takes the opposite track. It does not assume that a problem will occur in the simultaneous processing of data; instead, transactions are validated before their results are committed, to avoid conflicts within the system, and no locking is used as a control mechanism. Optimistic concurrency control involves three stages: read, validation and write.29 Whenever an object is requested, the mechanism works on a copy of that object, and nothing is written into the system without going through validation. When a transaction fails validation, a fresh copy is taken and the transaction is restarted; if validation succeeds, the process moves on to the write phase.30
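The following sketch, which is my own and not from the cited sources, expresses that read/validation/write cycle with a C11 atomic compare-and-swap on a single shared value; a real database system would validate whole read and write sets, so this is only an analogy.

/* Optimistic concurrency control sketch: read a private copy, compute,
 * then validate-and-write atomically, retrying if another writer got
 * there first. Uses C11 atomics; the names are illustrative. */
#include <stdatomic.h>

static _Atomic long balance = 0;       /* shared object */

void deposit(long amount)
{
    long seen, updated;
    do {
        seen = atomic_load(&balance);  /* read phase: take a copy */
        updated = seen + amount;       /* work only on the private copy */
        /* validation + write phase: commit only if the object is unchanged
         * since the read; otherwise the "transaction" is retried */
    } while (!atomic_compare_exchange_weak(&balance, &seen, updated));
}

No thread ever blocks here; the price of optimism is the occasional retry when validation fails.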
Q4. Describe two scheduling policies and outline how each is implemented. What are the strengths and weaknesses of each approach?

Scheduling determines which process goes first and which comes last. A scheduling policy determines which thread is executable in relation to the others.31 A thread can have any one of the following policies: SCHED_FIFO (first-in/first-out scheduling), SCHED_RR (round-robin scheduling), SCHED_FG_NP (foreground scheduling, also known as SCHED_OTHER), and SCHED_BG_NP (background scheduling).32 The scheduling policy of the system must satisfy two conditions: fast process response time (low latency) and high process throughput.33 To fulfill these requisites, the scheduler uses a series of complex algorithms to allocate the processor fairly without sacrificing efficiency.34

The simplest scheduling policy is first-in/first-out (FIFO), where the first transaction request is served before subsequent ones. The system is simple, but it is not appropriate for long runs: a long-running job can block even the simplest instructions, and queuing a long series of transactions can obstruct simple tasks, so efficiency is sacrificed. The FIFO policy treats all processes as equal; to discriminate which activity is more important, the user can assign each process a priority, and the ones with high-priority labels are served first. Problems that can occur in FIFO policies are priority inversion and starvation. Priority inversion occurs when a high-priority task depends on the completion of a lower-priority task.35 A further problem in priority scheduling is determining which job must go first. FIFO works well if the list of transactions to be executed is relatively small and manageable, but it runs into problems when huge numbers of tasks are queued waiting for their turn, since a simple task may end up waiting too long to be executed.

The round-robin policy is more efficient and fair compared with FIFO. Under this policy, the process with the highest priority is executed first; when the scheduler encounters two threads with the same priority, they are timesliced.36 The timeslice is “the numeric value that represents how long a task can run until it is preempted.”37 Timeslicing allows each queued transaction of a given priority to run by preempting the other threads at fixed intervals. For example, if a newly runnable process has a higher priority than the one currently operating, the running operation is preempted to give way to the higher-priority process. The round-robin policy has two advantages: it is simple to operate, which translates to low overhead, and starvation is avoided. However, there are also disadvantages.38 First, in its pure form it does not take scheduling priorities into account. Second, interactive jobs are not necessarily given priority, hence longer wait times. Finally, input/output-bound jobs are not given priority.39

Prioritization can be implemented using multiple queues based on priority: high-priority jobs are placed on one queue and lower-priority jobs on another, and the higher-priority queue must run first. However, this sequencing can result in starvation for low-priority jobs, especially when they have to wait for the higher-priority queue to complete its tasks. Another way of determining priority is to use feedback from previous usage, with tasks queued depending on their recent behavior. A job that resumes after being blocked is placed on a priority queue, while jobs that were stopped because they consumed their allocated time slices are placed on the lowest-priority list. With this feedback, shorter jobs get priority because of the frequency of their usage, and interactive processes are favored because of their relatively short executions. However, this scheme can starve more intensive jobs if there are enough interactive processes to overwhelm them.40 The Windows 2000 operating system runs a priority-driven, preemptive scheduling algorithm with 32 priority levels, the top 16 referred to as real-time levels and the bottom half reserved for ordinary user activities.41
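As a concrete illustration, here is a minimal sketch, assuming a POSIX threads system, of a program that requests the SCHED_RR (round-robin) policy for a new thread; the priority value and worker function are illustrative, and real-time policies typically require elevated privileges.

/* Sketch: requesting round-robin scheduling for a new POSIX thread.
 * The worker function and priority choice are illustrative; real-time
 * policies usually need elevated privileges. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    puts("running under the requested policy");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    /* ask for round-robin timeslicing instead of the default policy */
    pthread_attr_setschedpolicy(&attr, SCHED_RR);
    param.sched_priority = sched_get_priority_min(SCHED_RR);
    pthread_attr_setschedparam(&attr, &param);
    /* use the attribute's policy rather than inheriting the creator's */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    int rc = pthread_create(&tid, &attr, worker, NULL);
    if (rc != 0)
        fprintf(stderr, "pthread_create failed: %d (often a permissions issue)\n", rc);
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}

Threads created with SCHED_RR at the same priority share the processor in fixed timeslices, which matches the timeslicing behavior described above; SCHED_FIFO differs in that a thread runs until it blocks or a higher-priority thread becomes runnable.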
References

Barney, B. (2007). Introduction to parallel computing. Retrieved 24 June 2007 from: http://www.llnl.gov/computing/tutorials/parallel_comp/#Whatis
Begun, D. and Brown, R. (2005). Intel ready to ship dual-core processors. Retrieved 24 June 2007 from: http://www.cnet.com/4520-6022_1-6079031-1.html?tag=txt
Bouma, F. (2003). Concurrency control methods: is there a silver bullet? Retrieved 25 June 2007 from: http://weblogs.asp.net/fbouma/archive/2003/05/24/7499.aspx
Brown, R. (2005). Dual-core desktop CPU bout: AMD vs. Intel. Retrieved 24 June 2007 from: http://reviews.cnet.com/4520-10442_7-6389077-1.html
Brown, R. (2005b). AMD's dual-core CPUs come out swinging. Retrieved 24 June 2007 from: http://www.cnet.com/4520-6022_1-6217968-1.html?tag=txt
Chen, C., Novick, G. and Shimano, G. (n.d.). RISC vs. CISC. Retrieved 24 June 2007 from: http://cse.stanford.edu/class/sophomore-college/projects-00/risc/risccisc/index.html
CISC (2001). Retrieved 24 June 2007 from: http://www.webopedia.com/TERM/C/CISC.html
CSCI.4210 Operating systems process scheduling (2004). Retrieved 25 June 2007 from: http://www.cs.rpi.edu/academics/courses/fall04/os/c8/index.html
Execution scheduling (n.d.). Retrieved 25 June 2007 from: http://h30097.www3.hp.com/docs/posix/PCD1C_REV2/DOCU_010.HTM
Huck, J. et al. (2000). Introducing the IA-64 architecture. IEEE Micro (September-October 2000), 12-23.
IA-64 architecture innovations (1999). Retrieved 22 June 2007 from: http://www.csee.umbc.edu/help/architecture/ia64archinn.pdf
Kung, H. T. and Robinson, J. T. (n.d.). Optimistic methods for concurrency control. Retrieved 25 June 2007 from: http://www.cse.scu.edu/~jholliday/COEN317S06/OptimisticCC.ppt
RISC (2001). Retrieved 24 June 2007 from: http://www.webopedia.com/term/r/risc.html
Zelenovsky, R. and Mendonça, A. (2004). Intel 64-bit architecture (IA-64). Retrieved 24 June 2007 from: http://www.hardwaresecrets.com/article/55/1