
Processes in a system cycle between two states: a CPU (central processing unit) burst, in which calculations are carried out, and an I/O (input/output) burst, in which data is either sent to or received by the system. The CPU scheduler attempts to maximise the efficiency and ‘fairness’ of the CPU by allowing one process to utilise the CPU whilst another waits for I/O. If this is not addressed, CPU cycles are lost whilst the CPU waits for a single process’s I/O. The role of the CPU scheduler is to find the subsequent process for an inactive CPU from a queue of immediately available processes.

The order in which processes are handled depends on the algorithm implemented by the CPU scheduler. There are two types of algorithm: non-preemptive and preemptive. Non-preemptive algorithms must be applied when a new process is required, i.e. when either a process terminates or a process transitions from the running state to the waiting state. However, when a process goes from the running state to the ready state, or from the waiting state to the ready state, preemptive algorithms are preferable. To ensure that preemptive algorithms can be implemented, the hardware must support timer interrupts.

Once the scheduling algorithm has selected a process, the dispatcher hands control of the CPU to it. This involves a context switch, a switch to user mode and a jump to the correct location in the newly loaded program. The time taken for this to complete is called the dispatch latency.

When choosing the optimum algorithm for CPU scheduling, there are criteria to follow in order to make the best decision: CPU utilisation should be maximised, so that as few CPU cycles as possible are wasted, and the ‘throughput’ (the number of processes completed per unit time) should also be maximised. Conversely, the turnaround time, waiting time and response time should all be minimised as far as possible.

For the following evaluation of algorithms, a single CPU burst is assumed per process. First-Come, First-Served (FCFS) is the most primitive form of scheduling: the queue is served on a first-in, first-out basis, and the algorithm is non-preemptive.
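As a rough illustration only, the following Python sketch (hypothetical process names, arrival times and burst lengths; all units arbitrary) simulates FCFS under the single-burst assumption above and reports each process’s waiting and turnaround time.

from collections import namedtuple

# Hypothetical processes: (name, arrival time, CPU burst length).
Process = namedtuple("Process", ["name", "arrival", "burst"])
processes = [Process("P1", 0, 24), Process("P2", 1, 3), Process("P3", 2, 3)]

def fcfs(procs):
    # Serve processes strictly in order of arrival (first in, first out).
    clock = 0
    stats = []
    for p in sorted(procs, key=lambda p: p.arrival):
        start = max(clock, p.arrival)   # the CPU may idle until the process arrives
        finish = start + p.burst        # non-preemptive: the whole burst runs to completion
        stats.append((p.name, start - p.arrival, finish - p.arrival))  # (waiting, turnaround)
        clock = finish
    return stats

for name, waiting, turnaround in fcfs(processes):
    print(name, "waiting:", waiting, "turnaround:", turnaround)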

The benefit of FCFS is that it is simple to implement and allows the process being executed to complete its CPU burst, so there is no need to context switch and take the CPU away from a process preemptively. However, as well as producing long waiting times, FCFS can slow the system down through the convoy effect: short processes pile up behind one long process, so the CPU alternates between periods of high-intensity processing (with minimal I/O) and periods of high-intensity I/O (with the CPU largely idle), preventing the CPU from being used optimally.

Round Robin (RR) shares similarities with FCFS; however, each CPU burst is given a limit called the time quantum. If the process completes before the time quantum expires, the schedule continues in a first-in, first-out manner. If, instead, the quantum expires, a timer interrupt occurs and the process is placed at the back of the queue (the algorithm is preemptive). This is ideal for small tasks: even if a large process sits higher in the queue than a short one, each gets a ‘fair’ attempt, as in the sketch below.
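A minimal Round Robin sketch, again with hypothetical processes and an assumed time quantum of 4 units, showing how an expired quantum sends a process to the back of the ready queue:

from collections import deque

quantum = 4                                         # assumed time quantum
ready = deque([("P1", 24), ("P2", 3), ("P3", 3)])   # hypothetical (name, remaining burst) pairs

clock = 0
while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)             # run until the burst ends or the timer interrupt fires
    clock += run
    if remaining - run > 0:
        ready.append((name, remaining - run)) # quantum expired: back of the queue
    else:
        print(name, "finished at", clock)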

However, RR proves problematic with processes of equal length, where it can deliver a lower throughput than FCFS, and the overhead produced through context switching reduces CPU utilisation.

Shortest-Job-First (SJF) scheduling arranges the queue in ascending order of CPU burst length, relying on an ‘exponential average’ to estimate each subsequent CPU burst. SJF can behave preemptively when a process arrives that is shorter than the one currently being handled; the shorter process is then prioritised, and the average waiting time of the processes is minimised.
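The ‘exponential average’ referred to above is commonly written as τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the most recent CPU burst, τ(n) is the previous estimate and 0 ≤ α ≤ 1. A small sketch of that prediction, using a hypothetical burst history and α = 0.5:

def predict_next_burst(history, alpha=0.5, initial_guess=10):
    # Exponential average: new estimate = alpha * last actual burst + (1 - alpha) * old estimate.
    tau = initial_guess
    for actual in history:
        tau = alpha * actual + (1 - alpha) * tau
    return tau

# Hypothetical sequence of observed CPU bursts for one process.
print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))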

SJF is regarded as the optimum solution for CPU scheduling; however, estimating CPU bursts is inaccurate, and the algorithm is biased towards smaller tasks, to the point of ‘starvation’ (a low-priority task is never tended to). A solution to this is to increase the priority of older tasks in the queue, known as ‘aging’.

Priority scheduling assigns each process a value reflecting the urgency of its execution, and can be preemptive or non-preemptive. Values are assigned either by users or, as in SJF, by taking the inverse of the predicted next CPU burst time. Priority scheduling also suffers from starvation, but this too can be remedied using aging, as sketched below.
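One possible shape for aging, sketched with a hypothetical ready queue in which a lower number means a higher priority: each time a process is dispatched, every process left waiting has its priority value reduced by one, so low-priority work drifts towards the front rather than starving.

import heapq

# Hypothetical ready queue entries: (priority, name); a lower value is more urgent.
ready = [(5, "P1"), (1, "P2"), (9, "P3")]
heapq.heapify(ready)

def dispatch_with_aging(queue):
    # Pick the most urgent process, then age everything left behind by one step
    # so that low-priority processes eventually reach the front of the queue.
    priority, name = heapq.heappop(queue)
    aged = [(max(0, p - 1), n) for p, n in queue]
    queue.clear()
    for entry in aged:
        heapq.heappush(queue, entry)
    return name

while ready:
    print("dispatch", dispatch_with_aging(ready))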

Multilevel queue scheduling can be used where processes can be categorised, with each category assigned to its own queue running a bespoke algorithm for that class of task. This can be expanded further into the most flexible type of scheduling, multilevel feedback-queue scheduling, in which processes may transition from queue to queue, making it possible to incorporate ‘aging’ and to re-evaluate process priorities. It is ideal in the sense that it can be tuned to suit almost any scenario; however, issues arise with the complexity of the system.
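A very small sketch of the multilevel feedback-queue idea, under entirely hypothetical choices (three levels, a growing time quantum per level, demotion when a quantum expires; promotion back up the levels, which is how aging would be incorporated, is left out for brevity):

from collections import deque

quanta = [2, 4, 8]                          # each level has its own, longer, time quantum
levels = [deque(), deque(), deque()]        # level 0 is the highest priority
levels[0].extend([("P1", 10), ("P2", 3)])   # new processes enter the top queue

clock = 0
while any(levels):
    # Always serve the highest non-empty level first.
    idx = next(i for i, q in enumerate(levels) if q)
    name, remaining = levels[idx].popleft()
    run = min(quanta[idx], remaining)
    clock += run
    if remaining - run > 0:
        # Quantum used up: demote the process one level (or keep it at the bottom).
        levels[min(idx + 1, len(levels) - 1)].append((name, remaining - run))
    else:
        print(name, "finished at", clock)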

Multiple-processor scheduling is also widely implemented; the most common form of it is ‘symmetric multiprocessing’ (SMP). This is when each processor is self-scheduling and has a ready queue unique to it (assuming the processors are homogeneous).
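One way to picture per-processor ready queues under SMP is the sketch below; the processor names and the placement rule are purely hypothetical, and a real SMP scheduler would also have to consider load balancing and processor affinity.

from collections import deque

# Hypothetical SMP set-up: each (homogeneous) processor schedules from its own ready queue.
cpu_queues = {"cpu0": deque(), "cpu1": deque()}

def place(process_name):
    # Naive placement rule for illustration: enqueue on the currently shortest queue.
    target = min(cpu_queues, key=lambda cpu: len(cpu_queues[cpu]))
    cpu_queues[target].append(process_name)

for p in ("P1", "P2", "P3", "P4"):
    place(p)
print(cpu_queues)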

Author: Jeffery Tran
