The primary goal of an operating system is to maximize the efficiency and utilization of its resources. Scheduling algorithms serve this goal by determining the order in which processes are executed by the CPU or serviced by any other shared resource. In this article, we will look at the major scheduling algorithms in operating systems for the CPU and the disk (storage device), and compare CPU scheduling with disk scheduling. It is suitable for beginners who want a comprehensive understanding of the topic.
What are Scheduling Algorithms in Operating Systems?
Scheduling algorithms in operating systems are sets of rules that determine the order in which tasks or processes are executed or given access to a shared resource. They are crucial for optimizing resource utilization, whether for the CPU, the disk, or other system components. The component that applies these rules and decides which process runs or is serviced next is called the "scheduler."
Why Do We Need Scheduling in OS?
Scheduling algorithms in OS are essential for the following reasons:
- Optimal Resource Utilization: A scheduling algorithm ensures that no single resource remains idle when tasks are waiting to access it. It aims to increase the system’s throughput and responsiveness.
- For the CPU, scheduling algorithms let many programs and processes share one or more CPU cores efficiently, and they reduce the possibility of starvation.
- For a disk, scheduling is arguably even more important than for the CPU because of the high latency of disk I/O operations. Disk I/O (Input/Output) operations are simply how your computer reads data from and writes data to its storage devices, such as a Hard Disk Drive (HDD) or Solid State Drive (SSD). When multiple processes issue I/O requests for different disk blocks, the scheduler optimizes the sequence in which those requests are serviced.
Preemptive vs Non-Preemptive Scheduling Algorithms
Preemptive refers to the ability of the operating system to interrupt a currently executing process and allocate the CPU to another process. This interruption can occur even if the first process has not completed its task or has voluntarily given up the CPU. Think of it as the OS having the power to “take away” the CPU from a running process.
- Preemptive scheduling algorithms allow the operating system to interrupt an already running process and switch the CPU to another process before the first one completes. Preemption typically happens for one of three reasons: a higher-priority process arrives, the CPU time allotted to the process (its time slice) expires, or an interrupt occurs. Examples of preemptive scheduling algorithms in operating systems include Round Robin, Shortest Remaining Time First (SRTF), and Preemptive Priority Scheduling.
- Non-preemptive scheduling algorithms do not allow the OS to interrupt a process once it starts executing; the process runs until it completes or voluntarily yields the CPU (for example, to wait for I/O). Examples of non-preemptive scheduling algorithms in operating systems are First Come First Serve (FCFS), Shortest Job First (SJF, non-preemptive version), and Non-preemptive Priority Scheduling.
Key Terminologies Used in Scheduling Algorithms in OS
Before we move forward with understanding the types of scheduling algorithms, you must get familiar with a few of the terminologies used.
- Job Queue: This queue holds all the programs in the system. They wait in this queue before being submitted to the ready queue to get executed.
- Ready Queue: This queue holds all the processes that are loaded into the main memory and are ready and waiting to execute on the CPU. It is this queue from which the CPU scheduler selects the processes to execute.
- I/O Queue: This queue is also known as the device queue. This queue consists of processes that are currently blocked because they are waiting for an I/O operation. It could be something like reading data from a hard drive. Once their I/O is complete, they typically move back to the ready queue.
- Throughput: It is the metric to calculate the efficiency and productivity of a resource. It is the total number of processes or requests that complete their execution or service per unit of time.
- Response Time: The time from when a request or process arrives to when it first starts receiving service, not when it finishes. A low response time means a smooth, interactive user experience.
- Starvation: Starvation is a phenomenon where a request/process is repeatedly delayed or continuously waits for a resource and never gets scheduled to execute.
- Context Switching: The process of saving the current state of a running process and loading the saved state of another process. This mechanism creates the illusion of parallel execution.
- Gantt Chart: A Gantt chart is a visual diagram of the timeline that is used to depict the execution schedule of processes on a CPU or on a disk over time.
Terminologies Used in CPU Scheduling
- Arrival Time (AT): This is the time when a process initially enters the ready queue.
- Burst Time (BT): The burst time is also known as Execution or CPU time. It is the time in seconds or milliseconds that a process requires to complete its execution. This is often an estimated value used by scheduling algorithms.
- Completion Time (CT): This is the exact time when the process completes and leaves the CPU.
- Turnaround Time (TAT): The total time a process stays within the system, from the arrival time to its completion.
TAT = CT - AT (Turnaround Time = Completion Time - Arrival Time)
- Waiting Time (WT): It is the time a process spends waiting in the ready queue for its turn to access the CPU. This excludes time spent doing I/O.
WT = TAT - BT (Waiting Time = Turnaround Time − Burst Time)
- CPU Utilization: The percentage of time that the CPU is actively busy executing processes versus the time it is sitting idle doing nothing.
- Time Slice (Quantum): Used in preemptive scheduling, this is the fixed amount of CPU time a process is allowed to run before it is preempted.
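To make the formulas above concrete, here is a minimal Python sketch (the process names, arrival times, and burst times are made up) that computes CT, TAT = CT − AT, and WT = TAT − BT for processes served in arrival order:

```python
# Hypothetical workload: (name, arrival_time, burst_time), already in arrival order.
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

def fcfs_metrics(procs):
    """Return (name, CT, TAT, WT) per process when each runs to completion in order."""
    clock, rows = 0, []
    for name, at, bt in procs:
        clock = max(clock, at) + bt  # completion time: start no earlier than arrival
        tat = clock - at             # Turnaround Time = Completion Time - Arrival Time
        wt = tat - bt                # Waiting Time = Turnaround Time - Burst Time
        rows.append((name, clock, tat, wt))
    return rows

for row in fcfs_metrics(processes):
    print(row)
```

Running this shows, for instance, that P1 never waits (WT = 0) while P3 waits 6 time units for the two processes ahead of it.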
Terminologies Used in Disk Scheduling
- Track: A concentric circle on the surface of a disk platter where data is stored.
- Cylinder: A stack of all tracks that are at the same radial distance on all platters in a disk drive.
- Sector: It is a fixed-size segment of a track, which represents the smallest physical unit of data that can be read from or written to the disk.
- Seek Time: It is the time it takes for a disk’s read/write head to move from its current position to the specific track where the target data is located. The goal is to try and reduce this time.
- Rotational Latency: After the read/write head reaches the correct track, this is the additional time required for the desired sector to physically rotate under the read/write head.
Process States
A process goes through five major states in its lifetime. They are:
- New
- Ready
- Running
- Waiting
- Terminated
When the scheduler assigns the CPU to a process, that process moves into the running state.
CPU Scheduling Algorithms in Operating System
CPU scheduling algorithms are responsible for maximizing CPU performance by keeping the CPU busy and minimizing waiting times. There are four major CPU scheduling algorithms in operating systems: First Come First Serve (FCFS), Shortest Job First (SJF), Round Robin, and Shortest Remaining Time First (SRTF). Let us look at them in detail one by one.
1. FCFS Scheduling Algorithm in OS
The first-come, first-served scheduling algorithm is the simplest CPU scheduling algorithm. It follows the principle that the process that requests the CPU will be allocated the CPU first. It is a non-preemptive scheduling algorithm.
When to choose this algorithm
Choose FCFS when simplicity matters more than performance: it is easy to implement and requires minimal overhead for tracking processes.
Advantages
- It is simple to implement, making it easier for OS designers and students to understand and implement.
- Processes are served in the exact order they arrive.
Disadvantages
- If a process with a long burst time arrives first, all the smaller processes that arrive after it must wait for its completion (the convoy effect).
- This increases the average waiting time and turnaround time, reducing CPU efficiency.
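The convoy effect is easy to see numerically. The sketch below (burst times are illustrative; all jobs are assumed to arrive at time 0) compares the average waiting time when a long job runs first versus last:

```python
def fcfs_avg_wait(bursts):
    """Average waiting time under FCFS when all jobs arrive at time 0."""
    total_wait, elapsed = 0, 0
    for bt in bursts:
        total_wait += elapsed  # each job waits for all jobs ahead of it
        elapsed += bt
    return total_wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # long job first: average wait is 17.0
print(fcfs_avg_wait([3, 3, 24]))  # long job last: average wait is only 3.0
```

The same three jobs produce a nearly sixfold difference in average waiting time depending purely on arrival order.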
2. Shortest Job First (SJF) Scheduling Algorithm in OS
Like the name suggests, the shortest job first scheduling algorithm works on the principle of giving the CPU to the process that has the shortest burst time. It can be both preemptive and non-preemptive. When it is preemptive, it becomes the Shortest Remaining Time First (SRTF) Scheduling Algorithm.
When to choose this algorithm
Operating systems choose SJF when they need minimum average waiting time and minimum average turnaround time for a given set of processes.
Advantages
- This algorithm provides the lowest possible average waiting time and turnaround time for a set of processes, which leads to efficient resource utilization.
Disadvantages
- This algorithm needs to know the burst time of each process beforehand, which cannot be known exactly in a real system; at best it can be estimated from past behavior.
- Processes with longer burst times may be put on hold indefinitely as shorter jobs keep arriving, which can lead to starvation of those processes.
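A small sketch of non-preemptive SJF (assuming, for simplicity, that all jobs arrive at time 0 and their burst times are known) shows the reduction in average waiting time compared with serving the same jobs in arrival order:

```python
def sjf_avg_wait(bursts):
    """Average waiting time under non-preemptive SJF, all jobs arriving at time 0."""
    total_wait, elapsed = 0, 0
    for bt in sorted(bursts):  # run the shortest job first
        total_wait += elapsed
        elapsed += bt
    return total_wait / len(bursts)

print(sjf_avg_wait([6, 8, 7, 3]))  # 7.0; FCFS on the same order would give 10.25
```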
3. Round Robin Scheduling Algorithm
In the round-robin scheduling algorithm, each process gets a small amount of CPU time called a time slice. The process executes for its time slice and is preempted. If the process completes its burst time, the process releases the CPU; otherwise, the process is added at the end of the ready queue and gets a chance to execute again.
When to choose this algorithm
The round-robin algorithm is chosen to provide fair CPU allocation to each process. It prevents starvation and a single process from having the CPU for too long, making it suitable for multitasking environments.
Advantages
- Every process gets a chance to run, preventing starvation and ensuring that all processes receive some CPU time.
- The system feels more responsive because every process receives CPU time at regular intervals; no process waits indefinitely.
Disadvantages
- The performance of the CPU is highly dependent on the time slice. If the time slice is too long, Round Robin starts behaving like FCFS, and if it is too small, the excessive context switching increases overhead.
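The requeue-at-the-back behavior can be sketched with a deque (the job burst times and quantum below are made up; all jobs are assumed to arrive at time 0):

```python
from collections import deque

def round_robin_completion_order(bursts, quantum):
    """Return the order (by job index) in which jobs finish under Round Robin."""
    queue = deque(enumerate(bursts))  # (job_id, remaining_time)
    finished = []
    while queue:
        job, remaining = queue.popleft()
        if remaining > quantum:
            queue.append((job, remaining - quantum))  # preempted: back of the queue
        else:
            finished.append(job)  # completes within this time slice
    return finished

print(round_robin_completion_order([5, 3, 8], quantum=2))  # [1, 0, 2]
```

Note how job 1, the shortest, finishes first even though it arrived second in the queue, while the longest job finishes last.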
4. Shortest Remaining Time First (SRTF) Scheduling Algorithm
As you know from the above section, the shortest remaining time first scheduling algorithm in OS is just the preemptive version of the SJF algorithm. In contrast with SJF, which assigns the CPU to the process that has the least burst time, SRTF assigns the CPU to the process that has the shortest burst time remaining. If a new process arrives with a CPU burst time that is shorter than the remaining time of the currently executing process, the currently executing process is preempted, and the new process starts executing.
When to choose this algorithm
SRTF is chosen when minimizing average waiting time and turnaround time is a top priority, and the system can handle the overhead of frequent preemption.
Advantages
- This algorithm also gives the lowest possible average waiting time, which dynamically adjusts to any new process that arrives.
- It also effectively minimizes the turnaround time by quickly processing shorter jobs, whether newly arrived or already existing, first, reducing their total time in the system.
Disadvantages
- This algorithm also needs to know the burst time beforehand to accurately schedule the processes.
- Constant checking for shorter remaining times and the preemption of running processes can lead to overhead, reducing efficiency.
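SRTF can be sketched as a tick-by-tick simulation: at every time unit, run the arrived job with the least remaining time. The workload below is a common textbook-style example, not taken from this article:

```python
def srtf_completion_times(jobs):
    """jobs: list of (arrival_time, burst_time). Returns each job's completion time."""
    remaining = [bt for _, bt in jobs]
    completion = [None] * len(jobs)
    t = 0
    while any(r > 0 for r in remaining):
        ready = [i for i, (at, _) in enumerate(jobs) if at <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time wins
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            completion[i] = t
    return completion

# Job 1 preempts job 0 at t=1; job 0 only resumes after jobs 1 and 3 finish.
print(srtf_completion_times([(0, 8), (1, 4), (2, 9), (3, 5)]))  # [17, 5, 26, 10]
```

A production scheduler would use a priority queue rather than rescanning the ready list every tick; the loop above just makes the preemption rule explicit.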
Other Important CPU Scheduling Algorithms in OS
| Scheduling Algorithm | Principle It Works On | When to Choose It | One Advantage | One Disadvantage |
|---|---|---|---|---|
| Priority Scheduling | Assigns each process a priority; higher priority is served first | When some processes are more important than others | Handles urgent tasks first | Low-priority processes may starve |
| Longest Job First (LJF) | Picks the process with the longest burst time | When long processes are preferred | Reduces switching overhead | Short processes may wait too long |
| Longest Remaining Time First (LRTF) | Preemptive version of LJF | When longer tasks need to be completed early | Finishes longer tasks quickly | Starvation of shorter tasks |
| Highest Response Ratio Next (HRRN) | Based on response ratio = (waiting time + burst time) / burst time | To balance between short and long processes | Fairer than SJF or LJF | The response ratio must be recomputed for every scheduling decision |
| Multilevel Queue (MLQ) Scheduling | Uses multiple queues based on priority or process type | When tasks can be grouped by priority/class | Organizes different types of processes well | Rigid; processes cannot move between queues |
| Multilevel Feedback Queue (MLFQ) Scheduling | Like MLQ, but allows movement between queues | When flexible, adaptive scheduling is needed | Adapts to process behavior | Complex to implement and tune |
Disk Scheduling Algorithms in OS
In an operating system, a disk refers to secondary storage devices such as Hard Disk Drives (HDDs) and Solid State Drives (SSDs). HDDs have a physical read/write head whose movement introduces a delay called seek time, so the order in which requests are serviced matters. Disk scheduling algorithms are specifically designed to minimize this seek time on HDDs and, more generally, to improve performance across storage devices by ordering read/write requests.
1. FCFS Disk Scheduling Algorithm
FCFS stands for first-come, first-served; it is the simplest disk scheduling algorithm and services requests in the exact order they arrive. This algorithm does not take the physical location of the requested data into account and can therefore lead to long seek times.
When to choose this algorithm
Use this algorithm when the system is simple and ease of implementation matters most. It is only suitable for systems where minimizing disk head movement is not a priority.
Advantages
- This algorithm is easy to implement because it is straightforward and queue-based.
- There is no possibility of starvation because every request gets processed.
Disadvantages
- The seek time is high due to large total head movements.
- Due to the potentially excessive head movement, it can lead to high average response times and low I/O throughput.
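Total head movement under FCFS is just the sum of the distances between consecutive requests. The starting head position (track 50) and request queue below are illustrative:

```python
def total_head_movement(start, requests):
    """Total tracks traveled when servicing requests in the given order."""
    movement, head = 0, start
    for track in requests:
        movement += abs(track - head)
        head = track
    return movement

# FCFS services the queue exactly as given, however far apart the tracks are.
print(total_head_movement(50, [82, 170, 43, 140, 24, 16, 190]))  # 642 tracks
```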
2. SSTF (Shortest Seek Time First) Scheduling Algorithm
The shortest seek time first (SSTF) scheduling algorithm always selects the pending request closest to the current head position, i.e., the one with the shortest seek time. This greedy approach minimizes the head movement for each individual request.
When to choose this algorithm
SSTF is chosen when the primary goal is to minimize the total head movement and reduce the average seek time, which maximizes disk I/O throughput.
Advantages
- This algorithm minimizes the seek time by always moving to the closest request and improves disk efficiency.
- Since there is less head movement, more requests get served in a given time. This increases the throughput as well as improves the response time of a disk.
Disadvantages
- Requests far from the current head position may be postponed indefinitely as closer requests keep arriving, which can lead to starvation of distant requests.
- This algorithm requires knowledge of the locations of all pending requests in order to pick the nearest one.
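SSTF's greedy choice can be sketched as: repeatedly pick the pending request nearest the head. The head position (track 50) and request queue below are illustrative:

```python
def sstf_order(start, requests):
    """Service order when the nearest pending request is always chosen next."""
    pending, head, order = list(requests), start, []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest  # the head is now at the track just serviced
    return order

print(sstf_order(50, [82, 170, 43, 140, 24, 16, 190]))
# [43, 24, 16, 82, 140, 170, 190] -- 208 tracks of total movement
```

The head first clears the nearby low tracks, then sweeps up through the high ones, far less movement than servicing the queue in arrival order.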
3. SCAN Scheduling Algorithm
The SCAN scheduling algorithm starts at one end of the disk and moves towards the other end, servicing all the requests in the path. After reaching the end, it reverses its direction and starts moving again, servicing all the requests on the way.
When to choose this algorithm:
This algorithm is suitable for systems with moderate to heavy loads where requests can be spread across the entire disk.
Advantages
- It removes any starvation possibilities as the arm travels the entire disk, so every request eventually gets serviced.
- This algorithm performs well under heavy disk loads, as it provides an even distribution of service across the disk.
Disadvantages
- The requests in the middle of the disk may be serviced more frequently than those at the ends, creating a bias.
- In the SCAN disk scheduling algorithm, there is a longer waiting time for new requests that arrive just after the arm has passed.
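A sketch of SCAN moving toward higher tracks first (the head position, request queue, and 200-track disk size are all illustrative assumptions):

```python
def scan(start, requests, max_track=199):
    """Service order and total head movement for SCAN, sweeping upward first."""
    up = sorted(track for track in requests if track >= start)
    down = sorted((track for track in requests if track < start), reverse=True)
    order = up + down
    # The head travels from start to the disk's last track, then back down
    # to the lowest pending request.
    movement = (max_track - start) + ((max_track - down[-1]) if down else 0)
    return order, movement

order, movement = scan(50, [82, 170, 43, 140, 24, 16, 190])
print(order)     # [82, 140, 170, 190, 43, 24, 16]
print(movement)  # 332
```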
4. C-SCAN Scheduling Algorithm
It is the modified version of the SCAN disk scheduling algorithm, which deals with the middle request bias. In the C-SCAN algorithm, instead of reversing its direction after reaching the end, it jumps back to the start without servicing any requests on the return trip.
When to choose this algorithm:
C-SCAN is preferred when a more uniform wait time across requests is required.
Advantages
- It provides a uniform wait time for the requests compared to the SCAN algorithm.
- This algorithm also prevents starvation.
Disadvantages
- This algorithm has a larger total head movement, since the head travels all the way back without servicing any requests on the return trip.
- The return trip from one end of the disk to the other is unproductive in terms of servicing requests. This decreases efficiency.
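C-SCAN differs from SCAN only in the return trip: after reaching the top of the disk, the head jumps back to track 0 and sweeps upward again. A sketch, with an illustrative head position, queue, and 200-track disk (whether the jump counts toward head movement is a modeling choice; it is counted here):

```python
def c_scan(start, requests, max_track=199):
    """Service order and head movement for C-SCAN, servicing upward sweeps only."""
    up = sorted(track for track in requests if track >= start)
    low = sorted(track for track in requests if track < start)
    order = up + low  # low requests are serviced on the second upward sweep
    # start -> end of disk, jump back to track 0 (servicing nothing on the way),
    # then up to the highest request below the starting position.
    movement = (max_track - start) + max_track + (low[-1] if low else 0)
    return order, movement

order, movement = c_scan(50, [82, 170, 43, 140, 24, 16, 190])
print(order)     # [82, 140, 170, 190, 16, 24, 43]
print(movement)  # 391
```

Unlike SCAN, the low tracks are serviced in increasing order, which is what evens out the wait times.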
5. LOOK and C-LOOK Disk Scheduling Algorithms
The LOOK algorithm is a variation of SCAN. Instead of traveling all the way to the end of the disk, the arm only moves as far as the furthest pending request in the current direction. Once that request is serviced, the arm reverses and services requests on the way back, again going only as far as the furthest request in the new direction. C-LOOK applies the same idea to C-SCAN: after the furthest request in one direction, the arm jumps back to the first request in the opposite direction without servicing anything on the return trip.
When to choose this algorithm:
These disk scheduling algorithms are chosen when efficiency and performance are the priority. They are generally preferred over SCAN and C-SCAN in real-world scenarios due to their optimized head movement.
Advantages
- The LOOK and C-LOOK algorithms significantly reduce the total head movement. Saving unnecessary movements increases the performance and improves the response time, giving higher disk throughput.
Disadvantages
- These algorithms also face the same disadvantages as the SCAN and C-SCAN disk scheduling algorithms.
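LOOK's saving over SCAN shows up when head movement is summed: the arm stops at the furthest requests (tracks 190 and 16 below) instead of the physical disk ends. The head position and queue are illustrative:

```python
def look(start, requests):
    """Service order and total head movement for LOOK, sweeping upward first."""
    up = sorted(track for track in requests if track >= start)
    down = sorted((track for track in requests if track < start), reverse=True)
    order = up + down  # up to the furthest high request, then back down
    movement, head = 0, start
    for track in order:
        movement += abs(track - head)
        head = track
    return order, movement

order, movement = look(50, [82, 170, 43, 140, 24, 16, 190])
print(order)     # [82, 140, 170, 190, 43, 24, 16]
print(movement)  # 314 tracks, versus 332 for SCAN on a 200-track disk
```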
Comparison of CPU and Disk Scheduling Algorithms
While both CPU scheduling algorithms and disk scheduling algorithms aim to optimize resource utilization, they differ in their objectives and in the nature of the resources they manage. Let us look at some other factors to understand the difference between the two.
| Factor | CPU Scheduling Algorithms | Disk Scheduling Algorithms |
|---|---|---|
| Primary Goal | Maximize CPU utilization and throughput; minimize waiting time | Minimize disk access time, seek time, and head movement |
| Resource Characteristics | CPU is fast and time-shared among processes | Disk is mechanical, slower, and involves physical movement |
| Bottleneck | CPU availability and process coordination | Disk I/O speed, seek time, and rotational latency |
| Preemption | Yes; processes can be interrupted and resumed (e.g., in Round Robin) | Rare; I/O operations are usually not preempted once initiated |
| Key Metrics | Turnaround time, waiting time, response time, CPU utilization | Seek time, rotational latency, throughput, disk utilization |
| Typical Algorithms | FCFS, SJF, Priority, Round Robin, SRTF, HRRN, etc. | FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK |
| Shared Algorithms | FCFS and Priority (with modifications) | FCFS and Priority (scheduling disk access) |
| Nature of Scheduling | Logical/algorithmic (based on process attributes like burst time) | Physical (based on disk head position and track locations) |
| Impact on System | Affects overall multitasking and process responsiveness | Affects read/write performance and I/O throughput |
| Complexity | More dynamic; involves multiple process states and frequent context switches | More spatial; involves optimizing disk arm/head movement |
| User Control | Controlled automatically by the OS scheduler | Can be influenced by the file system and OS I/O strategies |
Conclusion
To summarize, scheduling algorithms are the backbone of an efficient and fast operating system. They maximize resource utilization across both the CPU and the disk by determining the order in which processes are executed or serviced. These algorithms ensure fairness, prevent starvation, and enhance overall system responsiveness. A deep understanding of them will help you design high-performing computing environments.
Scheduling Algorithm in Operating Systems – FAQs
Q1. What are scheduling algorithms in OS?
Scheduling algorithms decide the order in which processes run on the CPU to optimize performance, efficiency, and fairness.
Q2. What are the four types of CPU scheduling algorithms?
Common types of CPU scheduling algorithms include First-Come, First-Served (FCFS), Shortest Job First (SJF), Round Robin (RR), and Priority Scheduling.
Q3. What is preemptive and non-preemptive algorithm in OS?
Preemptive allows a process to be interrupted. Non-preemptive runs a process to completion once started. Preemptive ensures better responsiveness.
Q4. What are the 4 disk scheduling algorithms?
The four commonly discussed disk scheduling algorithms are FCFS (First-Come, First-Served), SSTF (Shortest Seek Time First), SCAN, and LOOK.
Q5. What is the LOOK algorithm in OS?
LOOK sweeps the disk arm in one direction, servicing requests along the way, then reverses; it only travels as far as the last pending request in each direction rather than to the physical end of the disk.