Process in Operating System


A process in an operating system is a fundamental concept: a program that is in execution and actively using system resources. Understanding processes is important in computer science, IT, and software engineering because they form the foundation of multitasking and resource management. In this article, we will discuss what a process is in an OS, its components, states, life cycle, process control block, process scheduling, and the difference between a process and a program.


What is a Process in Operating System?

A process in an operating system is a program that is currently executing on a computer. It is an active instance of a program that uses system resources such as the CPU, memory, and input/output devices. A process is made up of several components: the program code, the current program counter, register values, a stack, a data section, and a heap. Each process has its own process ID and address space.

Processes are managed by the operating system through process control blocks (PCBs), which record each process's state, priority, and the resources it is using. A process can be in different states, including ready, running, and waiting. Processes communicate with one another via inter-process communication (IPC), and the OS scheduler decides when each process gets CPU time to run its instructions.
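As a minimal sketch of IPC, the snippet below uses a pipe: the parent process writes a message to a child Python process through the child's standard input, and reads the child's reply from its standard output. The one-line child program is an illustrative stand-in for any cooperating process.

```python
import subprocess
import sys

# The child process reads a message from its stdin and echoes a reply on stdout.
child_code = "import sys; data = sys.stdin.read(); print('child received: ' + data.strip())"

# subprocess.run wires up pipes between parent and child for us.
result = subprocess.run(
    [sys.executable, "-c", child_code],
    input="hello from parent",
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # child received: hello from parent
```

Pipes are only one IPC mechanism; message queues, shared memory, and sockets serve the same purpose with different trade-offs.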

What Does a Process Look Like in Memory?

Here is the diagram that shows what a process looks like in memory.


When a process is loaded into memory, it is divided into various sections that are used for different purposes:

  • Text or Code Segment: The text segment contains the instructions of the compiled program, which is essentially the executable code of the program. 
  • Data Segment: The data segment contains the global and static variables and is initialized by the program. 
  • Heap: The heap is used for dynamic memory allocations that occur during the execution of the process. 
  • Stack: The stack contains function call information, local variables, and return addresses. This means the stack is used for managing function calls and recursion. 
  • Memory-mapped Region: The memory-mapped region contains shared libraries or files that are mapped into the address space of the process. 
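These segments can be illustrated with a small, conceptual Python sketch. This is not a literal memory map (a high-level runtime hides the raw segments), but the same categories of data still exist in the process:

```python
# Conceptual mapping of program elements to process memory segments.

GREETING = "hello"          # module-level constant: analogous to the data segment

def make_list(n):           # the function's compiled code: analogous to the text segment
    items = []              # 'items' and 'n' live in this call's stack frame
    for i in range(n):
        items.append(i)     # the list's storage is allocated dynamically, i.e. on the heap
    return items

values = make_list(3)
print(GREETING, values)     # hello [0, 1, 2]
```

In a compiled language such as C, the compiler and loader place these categories into the actual text, data, stack, and heap regions of the process's address space.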

Components of a Process in Operating System

Here is a list of the main components of a process in operating system:

  1. Program Code: The executable instructions that make up the program.
  2. Program Counter (PC): A special processor register that keeps track of the address of the next instruction to execute in the process code.
  3. Process Stack: The stack where temporary data is stored, such as function parameters, return addresses, and local variables.
  4. Data Section: The data section is where global and static variables are stored, whether they are initialized or not.
  5. Heap: The heap is used for dynamic memory allocation while the program is running.
  6. Process Control Block (PCB): This is a data structure that is created and maintained by the operating system for each process, which contains information about the process, such as:
    • Process ID (PID)
    • Process State (running, ready, waiting, etc.)
    • CPU registers
    • Memory Management Information
    • Scheduling Information
    • Accounting Information 
    • I/O Status Information
  7. Open Files List: This keeps track of the files opened by the process.

These components allow the operating system to manage, schedule, and execute processes efficiently.


States of Process in Operating System

Here are the main states of a process in an operating system:

  • New: The process has been created but is not yet ready for execution.
  • Ready: The process is loaded in memory and is waiting to be assigned the CPU.
  • Running: The process's instructions are being executed on the CPU.
  • Waiting (Blocked): The process cannot continue until an external event occurs, such as the completion of an I/O operation.
  • Terminated (Exit): The process has completed execution or has failed.

Some systems also support additional suspended states:

  • Ready Suspended: The process has been swapped out to secondary memory but will be ready to run once it is brought back into main memory.
  • Waiting Suspended: The process is both waiting for an event and swapped out of main memory.

Process Life Cycle in Operating System

The process life cycle is the cycle of the different stages a process goes through from when it is created until it finishes execution. 

When a program starts, it becomes a process and is in the New state. Then it moves to the ready state, waiting for the CPU. When the OS schedules the process, it goes into the running state to execute instructions. If the process needs to wait for something (input/output), it goes into the waiting (Blocked) state. After finishing the wait, it returns to the ready state. The process is in the terminated state when it finishes its task or has been terminated by the system.

This process life cycle helps the operating system to manage multiple processes efficiently by controlling how they go through these states.


Steps in the Process Life Cycle

  1. New: The operating system creates and initializes the process.
  2. Ready: The process is loaded into the main memory and is waiting for allocation of CPU time.
  3. Running: The process is executing instructions on the CPU.
  4. Waiting or Blocked: The process cannot continue past this point until an event occurs, like an I/O operation being completed.
  5. Ready (again): After the waiting condition is satisfied, the process returns to the ready state and waits for CPU time again.
  6. Terminated or Exit: The process has completed executing or is killed and is removed from the system.


Process Control Block in Operating System

A process control block (PCB) is a data structure that is used by the operating system to store all the information about a particular process. It works like a record for each process, which allows the OS to manage, schedule, and control the process efficiently.


Main Components of a PCB:

Here are the main components of a PCB that together describe a process:

  1. Process ID (PID): A unique number assigned by the operating system to identify the process and distinguish it from all others. The OS and other processes use this number when referring to the process.
  2. Process State: It records what the process is currently doing, such as new, ready, running, waiting, or terminated. Knowing the state helps the operating system decide what to do next.
  3. Program Counter: A program counter keeps the address of the next instruction to be executed by the CPU. It allows the process to restart where it left off after the context switch.
  4. CPU Registers: The saved contents of the CPU registers the process is using, including accumulators, index registers, and stack pointers. These values are saved and restored during context switches so the process can continue exactly where it left off.
  5. Memory Management Information: It contains information about the process’s memory, such as base and limit registers, segment tables, or page tables. The operating system uses this information to keep processes isolated from each other and to allocate separate memory areas.
  6. Accounting Information: Information such as total CPU time used, execution time limits, process priority, and user or group identification. The operating system uses it to track resource usage, and it can also support billing or the enforcement of scheduling policies.
  7. I/O Status Information: It lists the I/O devices assigned to the process, files opened by the process, and any pending I/O requests. This allows the OS to manage and coordinate input/output operations.
  8. CPU Scheduling Information: It basically consists of scheduling parameters such as process priority, pointers to scheduling queues, and other relevant data used by the OS scheduler to decide the order in which processes are executed.
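A PCB can be sketched as a simple record type. The class below is a toy model (real kernels store far more fields), with a `context_switch` helper showing how the program counter and registers are saved into the outgoing PCB and restored from the incoming one:

```python
from dataclasses import dataclass, field

# A simplified PCB mirroring the components listed above.
@dataclass
class PCB:
    pid: int
    state: str = "new"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    priority: int = 0
    open_files: list = field(default_factory=list)

def context_switch(current: PCB, cpu_pc: int, cpu_regs: dict, nxt: PCB):
    """Save the CPU context into the outgoing PCB and load the incoming one."""
    current.program_counter = cpu_pc       # remember where to resume later
    current.registers = dict(cpu_regs)
    current.state = "ready"
    nxt.state = "running"
    return nxt.program_counter, nxt.registers  # context restored to the CPU

p1 = PCB(pid=101, state="running")
p2 = PCB(pid=102, state="ready", program_counter=40, registers={"acc": 7})
pc, regs = context_switch(p1, cpu_pc=12, cpu_regs={"acc": 3}, nxt=p2)
print(p1.state, p2.state, pc, regs)  # ready running 40 {'acc': 7}
```

Because the outgoing process's counter and registers are preserved in its PCB, it can later resume at exactly the instruction where it was interrupted.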

Difference between Process and Program in OS

| Aspect | Program | Process |
|---|---|---|
| Definition | A passive set of instructions stored on disk | An active instance of a program in execution |
| State | Static (does not change by itself) | Dynamic (changes state as it runs) |
| Existence | Exists as a file on storage | Exists in memory during execution |
| Lifespan | Permanent until deleted | Temporary; lasts until execution completes |
| Resources | Does not need CPU, memory, or I/O | Requires CPU time, memory, and I/O devices |
| Identity | No unique identity (just a file) | Has a unique Process ID (PID) assigned by the OS |
| Multiplicity | One program can give rise to multiple processes | Each process is an independent execution |
| Example | A text editor executable file | A running instance of the text editor |
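The multiplicity point is easy to demonstrate: starting the same program twice produces two distinct processes, each with its own PID. Here the "program" is a one-line Python script, used purely for illustration:

```python
import subprocess
import sys

# The same program, started twice, yields two processes with different PIDs.
script = "import os; print(os.getpid())"

pid_a = int(subprocess.run([sys.executable, "-c", script],
                           capture_output=True, text=True).stdout)
pid_b = int(subprocess.run([sys.executable, "-c", script],
                           capture_output=True, text=True).stdout)
print(pid_a != pid_b)  # True: one program, two independent processes
```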

Process Scheduling in Operating System

Process Scheduling is the action taken by the operating system to select which process from the ready queue should be given CPU time next. Because there are generally more processes than CPUs, the scheduler must fairly and efficiently share limited CPU time among competing processes.


Here are a few main points of the process scheduling in OS: 

  • It maximizes total CPU usage by keeping the CPU busy with as many tasks as possible.
  • It makes sure that each process gets a chance to execute.
  • It optimizes performance measures, such as throughput (number of processes completed per time period), turnaround time, wait time, and response time.
  • Scheduling decisions are made during events, such as process creation, process termination, or a process’s state changing from waiting to ready.

Types of Process Schedulers in OS

There are three types of process schedulers in OS:

  • Long-term Scheduler (Job Scheduler): Determines the processes that should be moved to the ready queue from the pool of new processes.
  • Short-term Scheduler (CPU Scheduler): Determines which process in the ready queue will execute on the CPU next; this scheduler runs very frequently.
  • Medium-term Scheduler (Swapper): Suspends and resumes processes by moving them between main memory and secondary storage, helping to regulate the degree of multiprogramming.

Common Process Scheduling Algorithms in OS

Here are a few common process scheduling algorithms in OS:

  • First-Come First-Serve (FCFS): Processes are executed in the order they arrive; the process that comes first is processed first.
  • Shortest Job First (SJF): Selects the process with the shortest CPU burst time to run next, reducing average waiting time.
  • Shortest Remaining Time First (SRTF): The preemptive version of SJF. The CPU switches to a process whose remaining time is shorter than that of the currently running process.
  • Priority Scheduling: Processes are executed according to a priority number, with higher-priority processes preferred.
  • Round Robin (RR): Each process is allocated a fixed time slice (quantum); when the slice expires, the process is preempted and moved to the back of the ready queue. Every process therefore gets a fair share of CPU time.
  • Multilevel Queue Scheduling: Divides the ready queue into separate queues based on static process categories (e.g., system or user processes), each with its own scheduling rules and a priority relative to the other queues.
  • Multilevel Feedback Queue Scheduling: Similar to multilevel queues, but processes can move between queues based on their observed behavior, improving fairness and response time.
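Two of these policies are simple enough to simulate in a few lines. The sketch below assumes all processes arrive at time 0 and ignores context-switch overhead; burst times and the quantum are arbitrary illustrative values:

```python
from collections import deque

def fcfs_waiting_times(bursts):
    """First-Come First-Serve: each process waits for all earlier bursts."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

def round_robin_completion_times(bursts, quantum):
    """Round Robin: each process runs at most `quantum` units per turn."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    time, done = 0, [0] * len(bursts)
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((i, remaining - run))  # preempted: back of the queue
        else:
            done[i] = time                      # finished at this moment
    return done

print(fcfs_waiting_times([3, 5, 2]))             # [0, 3, 8]
print(round_robin_completion_times([3, 5, 2], 2))  # [7, 10, 6]
```

Note the trade-off visible even in this toy run: under FCFS the short 2-unit job waits behind both longer jobs, while under Round Robin it completes first.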

Conclusion

Understanding processes is fundamental to understanding how operating systems control running programs. A process is an executing program that moves through a life cycle of states such as new, ready, running, waiting, and terminated, managed through operating system mechanisms including the Process Control Block (PCB) and scheduling algorithms. By controlling processes and their states, the operating system provides fair access to the CPU, supports multitasking, and keeps the computer system running efficiently. Understanding these mechanisms helps you appreciate how computers run an enormous number of tasks simultaneously and reliably.

Process in Operating System – FAQs

Q1. What are the differences between a program and a process?

A program is a collection of instructions recorded on disk, while a process is a program in execution that uses system resources.

Q2. Why are process states needed?

Process states give the operating system a way of tracking what each process is doing and of managing transitions between waiting, running, and the other states, which makes multitasking possible.

Q3. What is the role of the process scheduler?

The process scheduler determines which process gets CPU time next and preempts running processes when necessary, ensuring that all processes get a fair and timely share of the CPU.

Q4. How can one process communicate with another process?

Processes communicate using Inter-Process Communication (IPC) mechanisms, such as pipes, message queues, shared memory, or sockets.

Q5. What is a Process Control Block (PCB)?

A Process Control Block (PCB) is a data structure with information about a process, including the state of the process, a program counter, CPU registers, and resource usage information.

About the Author

Senior Consultant Analytics & Data Science, Eli Lilly and Company

Sahil Mattoo, a Senior Software Engineer at Eli Lilly and Company, is an accomplished professional with 14 years of experience in languages such as Java, Python, and JavaScript. Sahil has a strong foundation in system architecture, database management, and API integration. 
