Process Scheduling in Operating System


A computer system often needs to run multiple programs simultaneously. Some programs may be running, while others are waiting for their turn. The operating system decides which program should run first and which should wait, based on priority. This is called process scheduling. It keeps the system working smoothly by making sure the CPU is used in the best possible way. In this blog, you will explore process scheduling, its categories, and the types of queues it uses.


What is Process Scheduling in an Operating System?

Process scheduling in an operating system is the method the OS uses to choose which process will run next on the CPU. Since the CPU can run only one process at a time (on single-core systems), it is the operating system's role to decide which process to allocate the CPU to, and for how long. The OS uses a scheduler, a component that decides which process to run and in what order. The scheduler ensures that all processes get a chance to run on the CPU and keeps the system in a good working state.
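
To make this concrete, here is a minimal sketch in Python of a scheduler loop that repeatedly picks the next process from a ready queue and lets it run to completion, in first-come-first-served order. The process names and burst times are invented for illustration.

```python
from collections import deque

# A hypothetical ready queue of (process_name, cpu_time_needed) pairs.
ready_queue = deque([("P1", 3), ("P2", 1), ("P3", 2)])

# The scheduler loop: while there is work, pick the process at the head
# of the ready queue and let it use the CPU until it finishes.
while ready_queue:
    name, burst = ready_queue.popleft()   # scheduler selects the next process
    print(f"Dispatching {name} for {burst} time units")
    # ... the CPU would execute the process's instructions here ...
    print(f"{name} finished")
```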

Objectives of Process Scheduling in Operating System

The primary objectives of process scheduling are:  

  1. Maximize CPU Usage: The scheduler tries to keep the CPU busy by always assigning it a task so that it does not sit idle.
  2. Minimize Waiting Time: This focuses on reducing how long a process stays in the ready queue before getting CPU time.
  3. Fairness: Every process should get a fair chance to run without waiting too long or getting skipped.
  4. Efficiency: The system should handle tasks in the best way, using the least time and resources.
  5. Improve Response Time: The goal is to reduce the time between a user request and the system’s first response.

Categories of Process Scheduling in Operating System


Let’s explore the different categories of process scheduling in OS, which are based on the state of the process:

1. Long-Term Scheduling

Long-term scheduling refers to the process of determining which jobs (or processes) from the job pool (held in secondary storage) are to be brought into memory and placed in the ready queue to be executed.

Features:

  • Controls the number of processes in memory
  • Decides which processes will be entered into the system
  • Takes place less frequently than other schedulers
  • Helps balance the mix of I/O-bound and CPU-bound processes in the system
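
As an illustration only (not how a real kernel does it), the sketch below shows a long-term scheduler that admits jobs from the job pool into the ready queue while the number of resident processes stays under an assumed limit, alternating between I/O-bound and CPU-bound jobs where possible. The job names, their kinds, and the MAX_IN_MEMORY limit are all made up for the example.

```python
# Hypothetical job pool: (job_name, kind) where kind is "io" or "cpu".
job_pool = [("J1", "cpu"), ("J2", "io"), ("J3", "cpu"), ("J4", "io"), ("J5", "cpu")]

MAX_IN_MEMORY = 3          # assumed limit on the degree of multiprogramming
ready_queue = []

def admit_jobs(pool, ready):
    """Long-term scheduler: bring jobs into memory, alternating kinds when possible."""
    want = "io"                            # try to alternate I/O-bound and CPU-bound jobs
    while pool and len(ready) < MAX_IN_MEMORY:
        # Prefer a job of the wanted kind; fall back to whatever is available.
        match = next((job for job in pool if job[1] == want), pool[0])
        pool.remove(match)
        ready.append(match)
        want = "cpu" if match[1] == "io" else "io"

admit_jobs(job_pool, ready_queue)
print("Admitted to memory:", ready_queue)   # a balanced mix of job kinds
print("Still in job pool: ", job_pool)
```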

2. Short-Term Scheduling (CPU Scheduling)

Short-term scheduling picks one process from the ready queue and allocates the CPU to it. This happens very frequently, so the decision must be made quickly; a simple round-robin sketch is shown after the feature list below.

Features:

  • It takes place frequently, usually every few milliseconds.
  • It is used to decide which process gets the CPU next.
  • It is very fast and helps in avoiding delays.
  • It directly affects overall system performance.
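
A common way to see short-term scheduling in action is a small round-robin simulation with a fixed time quantum. This is a sketch, not an OS implementation; the burst times and the 2-unit quantum are assumptions made for the example.

```python
from collections import deque

QUANTUM = 2                                # assumed time slice, in time units
# Ready queue of (process_name, remaining_cpu_time) pairs.
ready = deque([("P1", 5), ("P2", 3), ("P3", 1)])

clock = 0
while ready:
    name, remaining = ready.popleft()      # short-term scheduler picks the next process
    run = min(QUANTUM, remaining)          # run it for at most one quantum
    clock += run
    remaining -= run
    if remaining > 0:
        ready.append((name, remaining))    # time slice expired: back to the ready queue
        print(f"t={clock}: {name} preempted, {remaining} units left")
    else:
        print(f"t={clock}: {name} finished")
```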

3. Medium-Term Scheduling

Medium-term scheduling suspends some processes from memory for a limited time in order to decrease the load on the system and free up resources.

Features:

  • Helps manage overloaded memory situations.
  • Suspended processes are kept in secondary memory.
  • This frees up CPU and RAM resources for active processes.
  • Suspended processes are resumed when the resources become available.
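
The following sketch mimics medium-term scheduling at a very high level: when the number of resident processes exceeds an assumed limit, some are swapped out to a suspended list, and one is swapped back in when memory frees up. The process names and the limit are invented for illustration.

```python
MEMORY_LIMIT = 2                        # assumed maximum number of resident processes

in_memory = ["P1", "P2", "P3", "P4"]    # currently resident processes
suspended = []                          # processes swapped out to secondary storage

# Swap out: suspend processes until the memory load is back under the limit.
while len(in_memory) > MEMORY_LIMIT:
    victim = in_memory.pop()            # pick a process to suspend (the last one, for simplicity)
    suspended.append(victim)
    print(f"Swapped out {victim}")

# Later, when memory frees up (say P1 terminates), resume a suspended process.
in_memory.remove("P1")
if suspended and len(in_memory) < MEMORY_LIMIT:
    resumed = suspended.pop(0)
    in_memory.append(resumed)
    print(f"Swapped in {resumed}")

print("In memory:", in_memory, "| Suspended:", suspended)
```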

Process Scheduling Queues in Operating System

When a process comes into the system, it passes through several different kinds of queues. These queues allow the OS to manage the process from the time it arrives until it leaves the system:

1. Job Queue

  • This queue contains all of the processes that are waiting to get into the system.
  • The job queue is managed by the long-term scheduler.

2. Ready Queue

  • The ready queue contains all processes that are ready to execute and are waiting for execution on the CPU.
  • The short-term scheduler will select a process from the ready queue.

3. Waiting (or Blocked) Queue

  • Some processes must wait for an input/output operation, such as a disk read, to complete.
  • These tasks stay in the waiting queue until the needed resource becomes available and they can continue running.

4. Device Queue

  • Each device (like a printer or disk) has its own queue.
  • Processes waiting for a device must wait for that particular device in the device queue.

These many different queues help the OS keep track of each process and ensure that nothing gets missed.
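
As a rough mental model (not how a real kernel stores them), the queues can be pictured as separate lists keyed by purpose. The process and device names here are placeholders.

```python
from collections import deque

# Hypothetical snapshot of the scheduling queues at one instant.
queues = {
    "job":     deque(["P7", "P8"]),        # waiting to be admitted into memory
    "ready":   deque(["P1", "P3"]),        # in memory, waiting for the CPU
    "waiting": deque(["P2"]),              # blocked until some event or resource is ready
    "disk":    deque(["P4"]),              # device queue: waiting for the disk
    "printer": deque(["P5", "P6"]),        # device queue: waiting for the printer
}

# When the disk finishes serving P4, that process moves back to the ready queue.
finished_io = queues["disk"].popleft()
queues["ready"].append(finished_io)

for name, q in queues.items():
    print(f"{name:8s}: {list(q)}")
```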


Two-State Process Model in Operating System

The Two-State Process Model is one of the simplest ways the operating system controls running programs. In this model, each process is allowed to exist in only one of two states at a time, making it easier to manage how the CPU is used.

1. Active (Running) State

In this state, the process is being executed directly by the CPU. Only one process can be active at a time on a single-core system. The process uses the CPU to carry out its instructions. The operating system monitors this running process closely.

2. Inactive (Not Running) State

In this state, the process is not currently using the CPU but still exists in the system. It may be in the ready queue, waiting for its turn to use the CPU. Or it may be waiting for input or output to finish, like reading a file. Once the wait is over, it can move back to the ready queue.

How Does the Two-State Model Work?

  • When the CPU is free, the operating system will take a process from the ready queue and put it in the running state.
  • When the running process is done with its time slice or needs to wait for something, it goes to the not-running state.
  • The operating system then selects another process from the ready queue to run.

This cycle continues, allowing all processes to eventually get CPU time.
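
The cycle can be sketched as a tiny simulation in which at most one process is Running at a time and every other process is Not Running. The process names and the order of events are made up for the example.

```python
from collections import deque

not_running = deque(["P1", "P2", "P3"])    # every process starts in the Not Running state
running = None

def dispatch():
    """Move a process from Not Running to Running when the CPU is free."""
    global running
    if running is None and not_running:
        running = not_running.popleft()
        print(f"{running} -> Running")

def pause():
    """Time slice over or waiting for I/O: move the process back to Not Running."""
    global running
    if running is not None:
        print(f"{running} -> Not Running")
        not_running.append(running)
        running = None

# A few rounds of the two-state cycle.
for _ in range(3):
    dispatch()
    pause()
```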

Context Switching in Process Scheduling


Context Switching is the process by which the operating system saves the current state of a running process and loads the state of another process. This switching allows the CPU to pause one task and continue or begin another, enabling smooth multitasking. It helps the system manage multiple tasks efficiently, even when the CPU can only handle one at a time.

Why is Context Switching Necessary?

In a computer, many tasks may be active at the same time, but on a single-core system the CPU can handle only one of them at once. To ensure every task gets a chance to run, the operating system rapidly switches between them.

Every time the CPU pauses one task and starts another, a context switch takes place. This helps the system run smoothly and manage multiple tasks without delay.

Steps Involved in Context Switching

Let’s go through the steps involved in context switching:

Step 1: Suspend the Current Process

When the CPU has determined the need to move from one process to another, the first thing the Operating System does is suspend the process that is running. This happens when a process finishes its time slot or needs to wait for input or output that is currently unavailable or still in progress. The operating system does not end the process but pauses it for a while to give another process access to the CPU.

Step 2: Save the Context

Once the process is suspended, the operating system saves important information about it, including the current values in the CPU registers and the location where the program stopped. This information is stored in a structure called the Process Control Block. It helps the system continue the program later from the same point without losing any progress.

Step 3: Selecting a New Process

The next step is for the short-term scheduler to pick a new process from the ready queue to run on the CPU. The selection is normally based on a scheduling algorithm such as First-Come-First-Served, Round Robin, or Priority Scheduling. The scheduler ensures that each process gets a fair share of CPU time and keeps system performance at a level that satisfies user expectations.

Step 4: Load the Context of the New Process

After selecting a new process, the operating system loads the saved context of that process into the CPU. This includes the correct values for CPU registers, the program counter, and the memory state. So, the CPU starts running the process again from the same point where it was paused. For the process, it feels like nothing was stopped, just like picking up a book and reading from the last page you marked.

Step 5: Execute the Process

The CPU begins running the new process. From the view of that process, it continues as if it was never paused. This finishes the context switching, and the operating system stays ready to switch again whenever needed, making sure every process gets fair time on the CPU.
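
Putting the five steps together, the sketch below keeps each process’s register values and program counter in a simplified Process Control Block, saves the context of the running process, and loads the context of the next one. The fields and values are toy stand-ins for what real hardware and kernels actually track.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block: just enough state to resume a process."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

# Two processes, each with its own saved context.
p1 = PCB(pid=1, program_counter=120, registers={"r0": 7})
p2 = PCB(pid=2, program_counter=40,  registers={"r0": 3})

# A fake "CPU" holding the context of whichever process is currently running.
cpu = {"pc": p1.program_counter, "regs": dict(p1.registers)}

def context_switch(current: PCB, new: PCB):
    # Steps 1-2: suspend the current process and save its context into its PCB.
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    # Step 3 happens outside this function: the scheduler already chose `new`.
    # Step 4: load the saved context of the new process into the CPU.
    cpu["pc"] = new.program_counter
    cpu["regs"] = dict(new.registers)
    # Step 5: the CPU now continues executing `new` from where it left off.
    print(f"Switched from P{current.pid} to P{new.pid}, resuming at PC={cpu['pc']}")

cpu["pc"] += 8                 # P1 executes a few instructions
context_switch(p1, p2)         # the scheduler decided P2 should run next
```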

Criteria for Process Scheduling in OS

1. CPU Utilization: The CPU should be kept as busy as possible, whether with system work or a user program.

2. Throughput: Throughput is the number of processes completed in a given amount of time; higher throughput generally means better performance.

3. Waiting Time: Waiting time means how long a process stays in the ready queue, and it is best to keep this time short.

4. Turnaround Time: Turnaround time is the total time from when a process arrives to when it finishes, and it should be as low as possible.

5. Response Time: Response time is the time taken to produce the first response to a user request. It matters most in interactive and real-time systems. A small worked example of these measures follows below.
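
To see how these criteria are measured, here is a small worked example that computes waiting time, turnaround time, and response time for three processes scheduled first-come-first-served. The arrival and burst times are invented for illustration.

```python
# (name, arrival_time, burst_time) - a hypothetical workload, served FCFS.
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]

clock = 0
for name, arrival, burst in processes:
    start = max(clock, arrival)            # CPU may sit idle until the process arrives
    finish = start + burst
    waiting = start - arrival              # time spent in the ready queue
    turnaround = finish - arrival          # total time from arrival to completion
    response = start - arrival             # equals waiting time for non-preemptive FCFS
    print(f"{name}: waiting={waiting}, turnaround={turnaround}, response={response}")
    clock = finish
```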

Comparison of Process Schedulers in OS

| Feature | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler |
|---|---|---|---|
| Main Role | Chooses which jobs enter the ready queue | Chooses which process runs on the CPU | Suspends and resumes processes in memory |
| Frequency of Execution | Runs occasionally | Executes very frequently, often within milliseconds | Runs when needed |
| Required Speed | Needs moderate speed | Needs fast speed to manage CPU use | Needs moderate speed for memory tasks |
| Key Responsibility | Manages how many jobs run together | Gives CPU to a ready process | Handles memory by pausing and resuming |
| Tasks Handled | Handles job selection and admission | Handles execution of ready processes | Handles moving processes to and from memory |
| Effect on System | Controls system load and memory use | Directly affects speed and performance | Improves memory use and process flow |

Common Mistakes While Process Scheduling in OS

Let’s look at the common mistakes that can affect system performance:

1. Wrong Algorithm for Scheduling: Using the wrong scheduling method for the system can lead to long wait times, slow response, or poor CPU use.

2. Ignoring Process Priority: When all processes are treated the same, important ones may not get enough attention.

3. Starvation of Less Important Tasks: Some methods may stop less important tasks from running because new important ones keep coming.

4. High Context Switching: Switching tasks too often can waste CPU time and lower useful output.

5. No Balance Between I/O and CPU Jobs: An unbalanced mix, such as too many I/O-bound tasks and too few CPU-bound ones, leaves resources idle and degrades system performance.

Conclusion

Process scheduling helps the operating system manage tasks by deciding when each process should run, wait, or be paused. With mechanisms like context switching and the two-state model, the system can handle many tasks smoothly. By using the right scheduling rules and avoiding common errors, process scheduling can improve system speed, switch between tasks more smoothly, and provide a better and more responsive experience for the user.

Take your skills to the next level by enrolling in the Software Engineering Course today and gain hands-on experience. Also, prepare for job interviews with Software Engineering Interview Questions prepared by industry experts.

Process Scheduling in Operating System – FAQs

Q1. What is the main purpose of a process scheduler?

The main purpose of the process scheduler is to decide which process to allocate CPU time to, and for how long, ensuring fairness and efficiency.

Q2. What's the difference between short-term and long-term schedulers?

The short-term scheduler picks a task from the ready queue for the CPU to run. The long-term scheduler decides which tasks from storage enter memory for execution.

Q3. What is context switching?

Context switching in an operating system is the process of saving the state of a running process and loading the state of the next process to run.

Q4. Why is minimizing waiting time important?

It is important to minimize the waiting time as it improves system responsiveness and efficiency.

Q5. What causes process starvation?

Starvation happens when a low-priority process never gets CPU time due to continuous arrival of higher-priority tasks.

About the Author

Senior Consultant Analytics & Data Science, Eli Lilly and Company

Sahil Mattoo, a Senior Software Engineer at Eli Lilly and Company, is an accomplished professional with 14 years of experience in languages such as Java, Python, and JavaScript. Sahil has a strong foundation in system architecture, database management, and API integration. 
