
The Concept of Process Management: CPU Scheduling


CPU Scheduling

To understand CPU scheduling it is essential to know the concept of process management. All modern computers can do several things at the same time. While running a user program, a computer can also be reading from a disk and outputting text to a screen or printer. In a multiprogramming system, the CPU switches from program to program, running each for tens or hundreds of milliseconds. While, strictly speaking, at any instant of time the CPU is running only one program, in the course of one second it may work on several programs, thus giving the users the illusion of parallelism. Sometimes people speak of pseudoparallelism in this context, to contrast it with the true hardware parallelism of multiprocessor systems (which have two or more CPUs sharing the same physical memory). Keeping track of multiple, parallel activities is hard for people to do. Therefore, operating system designers have over the years evolved a conceptual model (sequential processes) that makes parallelism easier to deal with (Tanenbaum, 2006).

The difference between a process and a program is subtle, but crucial. An analogy may help make this point clearer. Consider a culinary-minded computer scientist who is baking a birthday cake for his daughter. He has a birthday cake recipe and a kitchen well stocked with the necessary input: flour, eggs, sugar, extract of vanilla, and so on. In this analogy, the recipe is the program (i.e., an algorithm expressed in some suitable notation), the computer scientist is the processor (CPU), and the cake ingredients are the input data. The process is the activity consisting of our baker reading the recipe, fetching the ingredients, and baking the cake. The key idea here is that a process is an activity of some kind. It has a program, input, output, and a state. A single processor may be shared among several processes, with some scheduling algorithm being used to determine when to stop work on one process and service a different one.
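The program/process distinction can be made concrete in code: the single program below gives rise to several processes, each with its own private state (a minimal sketch using Python's multiprocessing module; the function and names are purely illustrative):

```python
# One program, several processes: each child runs the same "recipe"
# (the bake function) but keeps its own private state.
from multiprocessing import Process, Queue

def bake(name, steps, results):
    progress = 0
    for _ in range(steps):
        progress += 1          # state private to this process
    results.put((name, progress))

if __name__ == "__main__":
    results = Queue()
    procs = [Process(target=bake, args=(n, s, results))
             for n, s in [("cake", 3), ("bread", 5)]]
    for p in procs:
        p.start()              # the OS creates a new process for each
    done = sorted(results.get() for _ in procs)   # collect the outputs
    for p in procs:
        p.join()
    print(done)                # [('bread', 5), ('cake', 3)]
```

Both children execute the same code, yet the operating system treats them as distinct processes, each with its own program counter and variables.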
Creation of a Process

Operating systems need some way to make sure all the necessary processes exist. In very simple systems, or in systems designed for running only a single application (e.g., controlling a device in real time), it may be possible to have all the processes that will ever be needed present when the system comes up. In general-purpose systems, however, some way is needed to create and terminate processes as needed during operation. Four events cause a process to be created:

1. System initialization.
2. Execution of a process-creation system call by an existing process.
3. A user request to create a new process.
4. Initiation of a batch job.

When an operating system is booted, several processes are often created. Some of these are foreground processes, that is, processes that interact with (human) users and perform work for them. Others are background processes, which are not associated with particular users but instead have some specific function. For example, a background process may be designed to accept incoming requests for web pages hosted on that machine, waking up when a request arrives to service it. Processes that stay in the background to handle some activity such as web pages, printing, and so on are called daemons. Large systems commonly have dozens of them.

During the running of multiple processes, the processes compete among themselves. When more than one process is in the ready state and there is only one CPU available, the operating system must decide which process to run first. The part of the operating system that makes the choice is called the scheduler; the algorithm it uses is called the scheduling algorithm.

Introduction to Scheduling

Back in the old days of batch systems, with input in the form of card images on a magnetic tape, the scheduling algorithm was simple: just run the next job on the tape.
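The tape-era batch policy just described is first-come first-served in its purest form; a minimal sketch (the job names are invented):

```python
# The tape-era batch scheduler reduced to its essence: a FIFO queue of
# jobs, each run to completion in arrival order.
from collections import deque

jobs = deque(["payroll", "claims_batch", "nightly_report"])  # arrival order

completed = []
while jobs:
    job = jobs.popleft()       # "read the next job off the tape"
    completed.append(job)      # run it to completion; no preemption
print(completed)               # ['payroll', 'claims_batch', 'nightly_report']
```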
With timesharing systems, the scheduling algorithm became more complex, because there were generally multiple users waiting for service. There may be one or more batch streams as well (e.g., at an insurance company, for processing claims). On a personal computer you might think there would be only one active process: after all, a user entering a document in a word processor is unlikely to be simultaneously compiling a program in the background. However, there are often background jobs, such as electronic mail daemons sending or receiving e-mail. You might also think that computers have become so much faster over the years that the CPU is rarely a scarce resource any more, but new applications tend to demand more resources; processing digital photographs and watching real-time video are examples.

The Operating System Kernel

Most operating systems rely on the kernel concept. The existence of a kernel is a natural consequence of designing a computer system as a series of abstraction layers, each relying on the functions of the layers beneath it. The kernel, from this viewpoint, is simply the name given to the lowest level of abstraction that is implemented in software. To avoid having a kernel, one would have to design all the software on the system without abstraction layers; this would increase the complexity of the design to such a point that only the simplest systems could feasibly be implemented (Wikipedia, 2006).

The kernel provides a basic set of primitive operations and processes. Primitive operations are subroutine calls or macro expansions that execute as part of the calling process and form critical sections for it; kernel processes execute synchronously with respect to the calling process. The kernel also provides protected entry to system services, through mechanisms such as the supervisor call instruction, and thereby protects the operating system and key OS data structures from interference by user programs.
The fact that a process is executing in kernel mode is indicated by a bit in the program status word. The kernel also provides a set of kernel operations such as process management, resource management, input/output, and interrupt handling.

CPU Scheduling Algorithms

The scheduler decides which process to run first by using a scheduling algorithm. The desirable features of a scheduling algorithm are as follows (Stallings, 2005):

Fairness: make sure each process gets its fair share of the CPU.
Efficiency: keep the CPU busy 100% of the time.
Response time: minimize response time for interactive users.
Turnaround: minimize the time batch users must wait for output.
Throughput: maximize the number of jobs processed per hour.

Scheduling in Batch Systems

First-come first-served: Probably the simplest of all scheduling algorithms is nonpreemptive first-come first-served. With this algorithm, processes are assigned the CPU in the order they request it. Basically, there is a single queue of ready processes. When the first job enters the system from the outside in the morning, it is started immediately and allowed to run as long as it wants to. As other jobs come in, they are put onto the end of the queue. When the running process blocks, the first process on the queue is run next. When a blocked process becomes ready, like a newly arrived job, it is put on the end of the queue.

Shortest job first: This is a nonpreemptive batch algorithm in which run times are known in advance. For example, consider a company in which employees can predict accurately how long it will take to run a batch of 1000 claims. When several equally important jobs are in the input queue waiting to be started, the scheduler picks the shortest job first.

Shortest remaining time next: A preemptive version of shortest job first is shortest remaining time next. With this algorithm, the scheduler always chooses the process whose remaining run time is the shortest. Again, the run time has to be known in advance.
When a new job arrives, its total time is compared to the current process's remaining time. If the new job needs less time to finish than the current process, the current process is suspended and the new job started. This scheme allows new short jobs to get good service.

Scheduling in Interactive Systems

Round-robin scheduling: Each process is assigned a time interval, called its quantum, during which it is allowed to run. If the process is still running at the end of the quantum, the CPU is preempted and given to another process. If the process has blocked or finished before the quantum has elapsed, the CPU switch happens when the process blocks.

Priority scheduling: Each process is assigned a priority, and the runnable process with the highest priority is allowed to run.

Scheduling in Linux

Like any time-sharing system, Linux achieves the magical effect of an apparently simultaneous execution of multiple processes by switching from one process to another in a very short time frame.

Scheduling policy: The scheduling algorithm of traditional Unix operating systems must fulfill several conflicting objectives: fast process response time, good throughput for background jobs, avoidance of process starvation, reconciliation of the needs of low- and high-priority processes, and so on. The set of rules used to determine when and how to select a new process to run is called the scheduling policy (Bovet and Cesati, 2000).

Linux scheduling is based on the time-sharing technique. Several processes are allowed to run "concurrently," which means that the CPU time is roughly divided into "slices," one for each runnable process. If a currently running process has not terminated when its time slice or quantum expires, a process switch may take place. Time-sharing relies on timer interrupts and is thus transparent to processes: no additional code needs to be inserted in programs to ensure CPU time-sharing.
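The policies described above can be compared on average turnaround time with a toy simulation (all jobs arrive together, run times are assumed known in advance, and the job lengths are illustrative):

```python
# A toy comparison of average turnaround time under first-come
# first-served (FCFS), shortest job first (SJF), and round robin (RR).
# All jobs arrive at time 0 and run times are known in advance --
# the same idealization the text makes for the batch algorithms.

def avg_turnaround(order):
    """Average completion time when jobs run back to back in `order`."""
    t, total = 0, 0
    for burst in order:
        t += burst                           # this job finishes at time t
        total += t
    return total / len(order)

def fcfs(bursts):
    return avg_turnaround(list(bursts))      # arrival order as given

def sjf(bursts):
    return avg_turnaround(sorted(bursts))    # shortest job first

def round_robin(bursts, quantum):
    remaining = list(bursts)
    finish = [0] * len(remaining)
    queue = list(range(len(remaining)))      # ready queue of job indices
    t = 0
    while queue:
        i = queue.pop(0)
        run = min(quantum, remaining[i])     # run for at most one quantum
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                  # preempted: back of the queue
        else:
            finish[i] = t
    return sum(finish) / len(finish)

jobs = [8, 4, 4, 4]                          # run times (illustrative)
print(fcfs(jobs))                # 14.0
print(sjf(jobs))                 # 11.0
print(round_robin(jobs, 4))      # 14.0
```

As the numbers suggest, running the shortest jobs first minimizes average turnaround when all jobs are available at once, while round robin trades throughput for responsiveness.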
The scheduling policy is also based on ranking processes according to their priority. Complicated algorithms are sometimes used to derive the current priority of a process, but the end result is the same: each process is associated with a value that denotes how appropriate it is to be assigned to the CPU. In Linux, process priority is dynamic. The scheduler keeps track of what processes are doing and adjusts their priorities periodically; in this way, processes that have been denied the use of the CPU for a long time interval are boosted by dynamically increasing their priority, while processes that have been running for a long time are penalized by decreasing theirs. When speaking about scheduling, processes are traditionally classified as "I/O-bound" or "CPU-bound."

The system calls related to scheduling are:

nice(): change the priority of a conventional process.
getpriority(): get the maximum priority of a group of conventional processes.
setpriority(): set the priority of a group of conventional processes.
sched_getscheduler(): get the scheduling policy of a process.
sched_setscheduler(): set the scheduling policy and priority of a process.
sched_getparam(): get the scheduling priority of a process.
sched_setparam(): set the priority of a process.
sched_yield(): relinquish the processor voluntarily without blocking.
sched_get_priority_min(): get the minimum priority value for a policy.
sched_get_priority_max(): get the maximum priority value for a policy.
sched_rr_get_interval(): get the time quantum value for the round-robin policy.

Process preemption: Linux processes are basically preemptive. If a process enters the TASK_RUNNING state, the kernel checks whether its dynamic priority is greater than the priority of the currently running process. If it is, the execution of the current process is interrupted and the scheduler is invoked to select another process to run.

The scheduling algorithm: The Linux scheduling algorithm works by dividing the CPU time into epochs.
In a single epoch, every process has a specified time quantum whose duration is computed when the epoch begins. In general, different processes have different time-quantum durations. The time-quantum value is the maximum CPU time portion assigned to the process in that epoch. When a process has exhausted its time quantum, it is preempted and replaced by another runnable process. Of course, a process can be selected several times by the scheduler in the same epoch, as long as its quantum has not been exhausted; for instance, if it suspends itself to wait for I/O, it preserves some of its time quantum and can be selected again during the same epoch. The epoch ends when all runnable processes have exhausted their quantum; the scheduler then recomputes the time-quantum durations of all processes and a new epoch begins.

To select a process to run, the Linux scheduler must consider the priority of each process. There are two kinds of priority:

Static priority: assigned by users to real-time processes; it ranges from 1 to 99 and is never changed by the scheduler.
Dynamic priority: applies only to conventional processes; it is essentially the sum of the base time quantum (which is therefore also called the base priority of the process) and the number of ticks of CPU time left to the process before its quantum expires in the current epoch.

Windows 2000 Scheduling

Windows 2000 (W2K) is designed to be as responsive as possible to the needs of a single user in a highly interactive environment or in the role of a server. W2K implements a preemptive scheduler with a flexible system of priority levels that includes round-robin scheduling within each level and, for some levels, dynamic priority variation on the basis of current thread activity.

Process and thread priorities: Priorities in W2K are organized into two bands, or classes: real-time and variable. Each of these bands consists of 16 priority levels.
Threads requiring immediate attention are in the real-time class, which includes functions such as communications and real-time tasks. Overall, because W2K uses a priority-driven preemptive scheduler, threads with real-time priorities have precedence over other threads. On a uniprocessor, when a thread becomes ready whose priority is higher than that of the currently executing thread, the lower-priority thread is preempted and the processor is given to the higher-priority thread.

Priorities are handled somewhat differently in the two classes. In the real-time priority class, all threads have a fixed priority that never changes, and all of the active threads at a given priority level sit in a round-robin queue. In the variable priority class, a thread's priority begins at some initial assigned value and may then change, up or down, during the thread's lifetime. Thus there is a FIFO queue at each priority level, but a thread may migrate to one of the other queues within the variable priority class.

The initial priority of a thread in the variable priority class is determined by two quantities: the process base priority and the thread base priority. The process base priority, an attribute of the process object, can take on any value from 0 through 15. Each thread object associated with a process object has a thread base priority attribute that indicates the thread's base priority relative to that of the process; it can be equal to that of its process or within two levels above or below it.

Multiprocessor scheduling: When W2K is run on a single processor, the highest-priority thread is always active unless it is waiting on an event. If more than one thread has the highest priority, the processor is shared, round robin, among all the threads at that priority level.
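The selection rule described above (highest priority wins, round robin within a level) can be sketched as a toy ready-queue structure; this is an illustration of the idea, not the actual W2K implementation, and the thread names are invented:

```python
# Priority-driven selection with round robin within each level:
# one FIFO queue per priority, highest nonempty level runs first.
from collections import defaultdict, deque

class ReadyQueues:
    def __init__(self):
        self.levels = defaultdict(deque)   # priority -> FIFO of threads

    def make_ready(self, thread, priority):
        self.levels[priority].append(thread)

    def pick_next(self):
        # Highest-priority nonempty queue wins; FIFO (round robin) inside it.
        if not self.levels:
            return None
        top = max(self.levels)             # only nonempty levels are kept
        thread = self.levels[top].popleft()
        if not self.levels[top]:
            del self.levels[top]           # drop emptied level
        return thread

rq = ReadyQueues()
rq.make_ready("background", 4)
rq.make_ready("worker_a", 8)
rq.make_ready("worker_b", 8)
print(rq.pick_next())   # worker_a  (highest level, FIFO within it)
print(rq.pick_next())   # worker_b
print(rq.pick_next())   # background
```

A preempted or quantum-expired thread would simply be fed back through make_ready, landing at the tail of its level's queue.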
In a multiprocessor system with N processors, the (N - 1) highest-priority threads are always active, running exclusively on (N - 1) of the processors. The remaining, lower-priority threads share the single remaining processor. For example, if there are three processors, the two highest-priority threads run on two processors, while all remaining threads run on the remaining processor. The foregoing discipline is affected by the processor affinity attribute of a thread: if a thread is ready to execute but the only available processors are not in its processor affinity set, that thread is forced to wait, and the executive schedules the next available thread.

Hence it can be inferred that the Linux operating system relies mainly on preemptive scheduling and time sharing, as is evident from the above observations. Linux assigns dynamic priorities to its processes. In Linux there are also two types of scheduler invocation: direct invocation and lazy invocation. In direct invocation, the scheduler is invoked directly when the current process must be blocked right away because a resource it needs is not available. Lazy invocation is performed when the current process has used up its quantum of CPU time, and also when a process is woken up and its priority is higher than that of the current process. Windows 2000 likewise uses preemption and round-robin scheduling, and additionally performs multiprocessor scheduling.

Bibliography

1. Stallings, William (2005). Operating Systems: Internals and Design Principles ("Windows 2000").
2. Tanenbaum, Andrew S. (2006). Operating Systems: Design and Implementation (3rd edition). Prentice Hall.
3. Silberschatz, Galvin, and Gagne (2006). Operating System Concepts (6th edition). Wiley.
4. Scheduling. Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Scheduling_%28computing%29
5. Bovet, Daniel P., and Cesati, Marco (2005). Understanding the Linux Kernel (3rd edition), "Scheduling in Linux". O'Reilly.
6. Windows 2000. Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Windows_2000
7. Basic concepts of real-time operating systems (Dec 2006). Retrieved from http://www.linuxdevices.com/articles/AT4627965573.html