Deadlocks are a critical operating system concept that every developer should understand in order to grasp how their systems really behave under contention. Whether you are building a desktop application or managing server-side processes, multiple processes blocking each other over shared resources can be costly. In this article, you will learn what deadlocks in an OS are, how they occur, and the most effective deadlock handling techniques: deadlock prevention, deadlock avoidance, deadlock detection and recovery, and, finally, deadlock ignorance.
What is a Deadlock in Operating Systems?
A deadlock in OS is a state in which each process in a set is waiting for a resource that is held by another process in the same set, and none of them can proceed.
A deadlock is a critical problem in an OS, as it wastes resources and decreases the efficiency of the CPU. Whether an operating system implements deadlock detection algorithms depends on its design and requirements. Deadlock handling is an important feature in many operating systems, especially those requiring high reliability and uptime.
How Does a Deadlock Occur in Operating Systems?
To understand how a deadlock in OS occurs, consider this particular situation where process P1 holds the resource R1 and is also waiting for the resource R2, while another process, P2, holds the resource R2 and is also waiting for resource R1. This creates a circular waiting condition, leading to a deadlock where neither P1 nor P2 can proceed. As a result, both P1 and P2 wait indefinitely for resources without doing any useful work.
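The following is a minimal sketch of this situation in C using POSIX threads, where two mutexes (lock_r1 and lock_r2, illustrative names) stand in for R1 and R2. Compiled with gcc and -lpthread, it will usually hang, because each thread ends up waiting for a mutex the other thread already holds.

```c
/* Minimal sketch of the P1/P2 deadlock using POSIX threads.
 * lock_r1 and lock_r2 stand in for resources R1 and R2.
 * Build: gcc deadlock_demo.c -o deadlock_demo -lpthread
 * Running it typically hangs: each thread waits for the other's mutex. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t lock_r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_r2 = PTHREAD_MUTEX_INITIALIZER;

void *process_p1(void *arg) {
    pthread_mutex_lock(&lock_r1);        /* P1 holds R1                 */
    printf("P1 acquired R1\n");
    sleep(1);                            /* give P2 time to grab R2     */
    pthread_mutex_lock(&lock_r2);        /* P1 waits for R2             */
    printf("P1 acquired R2\n");          /* never reached once deadlocked */
    pthread_mutex_unlock(&lock_r2);
    pthread_mutex_unlock(&lock_r1);
    return NULL;
}

void *process_p2(void *arg) {
    pthread_mutex_lock(&lock_r2);        /* P2 holds R2                 */
    printf("P2 acquired R2\n");
    sleep(1);
    pthread_mutex_lock(&lock_r1);        /* P2 waits for R1             */
    printf("P2 acquired R1\n");          /* never reached once deadlocked */
    pthread_mutex_unlock(&lock_r1);
    pthread_mutex_unlock(&lock_r2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process_p1, NULL);
    pthread_create(&t2, NULL, process_p2, NULL);
    pthread_join(t1, NULL);              /* blocks forever once the circular wait forms */
    pthread_join(t2, NULL);
    return 0;
}
```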
Resource Allocation Graph for Deadlock Detection in OS
In order to study deadlocks in OS, we use resource allocation graphs. A resource allocation graph is a directed graph that captures which resource is allocated to which process and which process is requesting which resource. It helps you identify whether a system is in a deadlock or not.
- Resources are depicted as squares.
- Processes are depicted as circles.
- An arrow directed from a process to a resource shows a request for that resource. It is called a request edge.
- An arrow directed from a resource to a process indicates that the process is currently holding that resource. It is called an assignment edge.
Deadlock in OS Example: Let us take a series of resource requests and create a resource allocation graph to identify whether this set of processes and resources is in a deadlock or not.
In this example, P indicates a process and R represents a resource.
- 1. P1 requests R1
- 2. P2 requests R2
- 3. P3 requests R3
- 4. P1 requests R2
- 5. P2 requests R3
- 6. P3 requests R1
At first, all the resources are free.
Therefore, when P1 requests R1, the request is granted and P1 holds R1. The same applies to P2 and P3, since R2 and R3 are also free. The first three requests are granted without any issue.
In the fourth request, P1 asks for R2, which is held by P2. This means P1 has to wait until P2 releases R2 before its request can be completed. The same situation occurs with P2 → R3 and P3 → R1.
As you can see, all four conditions of deadlock in OS – mutual exclusion, hold and wait, non-preemption, and circular wait – are met. Hence, this set of processes and resources is in deadlock.
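To make the worked example concrete, here is a small C sketch that encodes the wait-for relation derived above (P1 waits for P2, P2 waits for P3, P3 waits for P1) as an adjacency matrix and looks for a cycle with a depth-first search. The representation is an illustrative assumption; real kernels keep their own bookkeeping structures.

```c
/* Sketch: detect a cycle in the wait-for relation from the example above.
 * Node i represents process P(i+1). waits_for[i][j] = 1 means P(i+1) is
 * waiting for a resource held by P(j+1). A cycle here means deadlock. */
#include <stdio.h>

#define N 3

int waits_for[N][N] = {
    /* P1 waits for P2 (via R2), P2 waits for P3 (via R3), P3 waits for P1 (via R1) */
    {0, 1, 0},
    {0, 0, 1},
    {1, 0, 0},
};

/* Depth-first search; colour: 0 = unvisited, 1 = on current path, 2 = done. */
static int has_cycle_from(int u, int colour[]) {
    colour[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!waits_for[u][v]) continue;
        if (colour[v] == 1) return 1;              /* back edge: cycle found */
        if (colour[v] == 0 && has_cycle_from(v, colour)) return 1;
    }
    colour[u] = 2;
    return 0;
}

int main(void) {
    int colour[N] = {0};
    for (int i = 0; i < N; i++)
        if (colour[i] == 0 && has_cycle_from(i, colour)) {
            printf("Deadlock: circular wait detected\n");
            return 0;
        }
    printf("No deadlock\n");
    return 0;
}
```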
Starvation vs Deadlock in OS: Are They the Same?
Beginners often confuse deadlock with the starvation concept in operating systems. While both deadlock and starvation include processes waiting indefinitely for resources, they are fundamentally different problems with distinct causes and solutions. The table below highlights the key differences between deadlock in OS and starvation in OS.
| Feature | Deadlock | Starvation |
| --- | --- | --- |
| Definition | A situation where two or more processes are blocked forever, waiting for each other. | A situation where a process waits indefinitely to gain access to resources. |
| Cause | Circular waiting on resources. | The resource allocation strategy favors certain processes. |
| Process State | All involved processes are blocked. | Some processes proceed while one or more keep waiting. |
| Detection | Can be detected using algorithms. | Often harder to detect, as it depends on scheduling and may not form a clear pattern. |
| Resolution | Requires external intervention (e.g., killing processes). | Can be resolved by changing scheduling policies. |
The Four Essential Deadlock Conditions
For a deadlock in an operating system to occur, the set of processes must satisfy the following four deadlock conditions.
- Mutual Exclusion: This condition says that each resource must be in one of two states, either available or currently assigned to a single process. It must not be assigned to more than one process for a deadlock in OS to happen.
- Hold and Wait: The hold and wait condition states that a process must be holding at least one resource while waiting to acquire additional resources that are currently held by other processes.
- Non-Preemption: Preemption is the act of temporarily interrupting a running process in order to assign the CPU to another process. For a deadlock to occur in an OS, a resource previously granted to a process cannot be forcibly taken away from it; it must be voluntarily released by the process holding it.
- Circular Wait: Finally, a circular chain of two or more processes must form, each of which is waiting for a resource held by the next process in the chain.
All four of these conditions must hold at the same time for a set of processes to deadlock in an operating system. Because they only coincide under particular timings of requests, a deadlock in an OS is a chance event rather than a certainty.
What are the Negative Effects of Deadlock in Operating Systems?
- Resource wastage: Resource wastage is the primary disadvantage of deadlock in operating systems. Resources such as CPU, memory, and I/O devices remain allocated to processes that are stuck and cannot proceed. This leads to a resource being stuck without use, contributing to inefficiency.
- Performance degradation: Since the processes keep waiting indefinitely for resources, the performance of the whole system goes down.
- Reduced Responsiveness: Deadlock in OS also affects the user experience. If critical processes are deadlocked, the system can become unresponsive or appear to freeze.
- Increased CPU Idle Time: CPU idle time refers to the time a CPU is not executing any process. The CPU may stay idle if all processes are in a deadlock state and no process is available to execute.
- Risk of System Crash: Sometimes, unresolved deadlocks escalate into system crashes and then require a reboot to recover, resulting in loss of data.
Strategies for Handling Deadlock in Operating Systems
Handling deadlock depends on the potential impact on the system and the probability of a deadlock occurring. In some cases, ignoring the deadlock is better than detecting and recovering. In other cases, a deadlock can be easily prevented or avoided. Therefore, handling a deadlock ultimately depends on how critical it is. We have four major methods of handling deadlock in OS.
- 1. Deadlock Prevention
- 2. Deadlock Avoidance
- 3. Deadlock Detection and Recovery
- 4. Deadlock Ignorance
Let’s understand each of the handling techniques in detail.
1. Deadlock Prevention
As the name suggests, in deadlock prevention the operating system never allows the system to enter a deadlocked state. We know that for a set of processes to deadlock in an operating system, all four necessary conditions must hold. If we prevent at least one of these four conditions from being fulfilled, we prevent deadlock in the OS.
1.1. Breaking Mutual Exclusion
Breaking mutual exclusion means making resources shareable wherever possible, so that a resource is not exclusively assigned to a single process.
How to achieve: Allow resources to be shared among processes wherever the nature of the resource permits it.
Example: A printer is a mutually exclusive device. If we use a virtual printer (a print spooler) that queues jobs, multiple processes can use the printer without any single process holding it indefinitely, which helps prevent a deadlock in OS. A minimal sketch of this idea follows.
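The sketch below illustrates spooling in C with POSIX threads. The queue size, job format, and function names (submit_print_job, spooler) are illustrative assumptions rather than a real printing API; the point is that user code only enqueues jobs and never holds the printer device itself.

```c
/* Sketch of the spooling idea: user threads never lock the printer itself;
 * they only enqueue jobs, and a single spooler thread drains the queue.
 * Build: gcc spooler.c -o spooler -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_JOBS 16

static char queue[MAX_JOBS][64];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  jobs  = PTHREAD_COND_INITIALIZER;

/* Called by any thread: returns immediately, never holds the printer. */
void submit_print_job(const char *text) {
    pthread_mutex_lock(&qlock);
    if (count < MAX_JOBS) {
        strncpy(queue[tail], text, sizeof queue[tail] - 1);
        queue[tail][sizeof queue[tail] - 1] = '\0';
        tail = (tail + 1) % MAX_JOBS;
        count++;
        pthread_cond_signal(&jobs);
    }
    pthread_mutex_unlock(&qlock);
}

/* The only code that ever touches the "printer". */
void *spooler(void *arg) {
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (count == 0)
            pthread_cond_wait(&jobs, &qlock);
        char job[64];
        strcpy(job, queue[head]);
        head = (head + 1) % MAX_JOBS;
        count--;
        pthread_mutex_unlock(&qlock);
        printf("Printing: %s\n", job);   /* stand-in for the real device I/O */
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, spooler, NULL);
    submit_print_job("report.pdf from P1");
    submit_print_job("invoice.pdf from P2");
    sleep(1);                            /* let the spooler drain the queue   */
    return 0;
}
```

Because only the spooler thread ever touches the device, user processes cannot end up holding the printer while waiting for something else.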
1.2. Eliminating Hold and Wait
Hold and wait is a condition where a process holds one resource while waiting for other resources. If we prevent processes from holding and waiting, we help prevent deadlock.
How to achieve: The operating system requires each process to request all the resources it needs at once, before execution starts. If they cannot all be allocated, the process must wait without holding any, eliminating the chance of deadlock.
Example: Consider a print-and-save operation. Instead of letting a process obtain printer access first and disk space later, the process is designed to request both resources together before starting the job. If both are not available, the process waits without holding either, avoiding deadlock, as in the sketch below.
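As a rough illustration, the C sketch below uses two mutexes (printer_lock and disk_lock, made-up names) and a helper that either acquires both resources or acquires nothing, so the job never holds one resource while waiting for the other.

```c
/* Sketch of "request everything up front": the job only starts once it has
 * acquired BOTH the printer and the disk; if either is unavailable it releases
 * whatever it grabbed and retries, so it never holds one while waiting.
 * Build: gcc hold_and_wait.c -o hold_and_wait -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t printer_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t disk_lock    = PTHREAD_MUTEX_INITIALIZER;

/* Try to acquire both resources; on any failure, hold nothing. */
int acquire_both(void) {
    if (pthread_mutex_trylock(&printer_lock) != 0)
        return 0;                               /* printer busy: hold nothing */
    if (pthread_mutex_trylock(&disk_lock) != 0) {
        pthread_mutex_unlock(&printer_lock);    /* release, don't hold and wait */
        return 0;
    }
    return 1;                                   /* got both, safe to proceed  */
}

void print_and_save(void) {
    while (!acquire_both())
        usleep(1000);                           /* back off and retry         */
    printf("Printing and saving with both resources held\n");
    pthread_mutex_unlock(&disk_lock);
    pthread_mutex_unlock(&printer_lock);
}

int main(void) {
    print_and_save();
    return 0;
}
```

The busy-wait retry loop is kept deliberately simple; a real system would block the process instead of polling.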
1.3. Allowing Preemption
Preemption here means allowing the OS to step in and forcibly take a resource away from the process that holds it, so the resource becomes available to others.
How to achieve: If a process requests a resource that cannot be granted immediately, the OS can require it to release the resources it already holds. If it does not, the OS has the authority to forcibly take those resources away.
Example: If a low-priority process is holding a resource needed by a higher-priority waiting process, the OS can preempt (take back) the resource and assign it to the higher-priority process.
1.4. Breaking Circular Wait
This strategy ensures that the OS prevents any circular chain of requests.
How to achieve: Impose a strict global order in which resources may be requested. This is like assigning a number or priority to every resource.
Example: The OS can number all resources in the system and require that each process request resources in increasing order only. For instance, if a process has already been allocated resource #2, it cannot then request resource #1. The sketch below shows the same idea applied to numbered locks.
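Here is a minimal sketch of the ordering rule in C. The helper forces every thread to acquire the lower-numbered resource first, so a circular wait can never form; the resource count and names are illustrative assumptions.

```c
/* Sketch of circular-wait prevention by global resource ordering: every
 * thread must acquire locks in ascending index order, so a cycle of waits
 * can never form. Build: gcc lock_order.c -o lock_order -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NUM_RESOURCES 2
pthread_mutex_t resource[NUM_RESOURCES] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* Lock two resources, always in increasing order of their number. */
void lock_pair_in_order(int a, int b) {
    int lo = a < b ? a : b;
    int hi = a < b ? b : a;
    pthread_mutex_lock(&resource[lo]);
    pthread_mutex_lock(&resource[hi]);
}

void unlock_pair(int a, int b) {
    pthread_mutex_unlock(&resource[a]);
    pthread_mutex_unlock(&resource[b]);
}

void *worker(void *arg) {
    /* Even if a thread "wants" resource 1 before resource 0, the helper
     * enforces the global order, so no circular wait is possible. */
    lock_pair_in_order(1, 0);
    printf("Worker holds both resources\n");
    unlock_pair(0, 1);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```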
2. Deadlock Avoidance: Banker’s Algorithm
Deadlock avoidance also ensures that the system never enters a deadlock. However, unlike deadlock prevention, which blocks the possibility of deadlock in operating system by eliminating one of the necessary conditions, deadlock avoidance allows resource allocation as long as it does not lead to an unsafe state.
To apply deadlock avoidance, the system must have prior knowledge of the resource requests that would be made by the processes in the future.
What is a safe state and an unsafe state?
Safe state: A state is safe if there is at least one order in which every process can run to completion, even if each process requests its maximum resources.
Unsafe state: A state from which the system may (but will not necessarily) end up in a deadlock. It is a warning sign rather than a deadlock itself.
2.1. Deadlock Avoidance with a Single Instance of Each Resource
In a system that has a single instance of each resource (printer, scanner, etc.), we use the resource allocation graph with claim edges to avoid deadlock in OS. Claim edges are dashed arrows used to depict potential future requests. A basic resource allocation graph alone, without claim edges, is not sufficient for handling deadlock in an operating system through deadlock avoidance.
How to perform deadlock avoidance?
Step 1: You create a resource allocation graph for a given system with all the claim edges, request edges, and assignment edges.
Step 2: Then, as execution proceeds, the edges are converted as follows:
- When a process actually requests a resource, its claim edge becomes a request edge.
- When the resource is assigned, the request edge is replaced by an assignment edge.
- Once the resource is released, the edge reverts back to a claim edge.
Step 3: The requests that, after being granted, result in a cycle are denied. This prevents the system from entering an unsafe or deadlocked state.
For example, if granting a request from process P1 would create a cycle in the graph, the system would enter an unsafe state that could lead to deadlock, so P1's request is denied.
If, on the other hand, granting the pending requests leaves the graph without any cycle, all of the requests can be granted safely.
2.2. Deadlock Avoidance with Multiple Instances of Each Resource
The Banker's algorithm is very useful for avoiding a deadlock in OS. Before execution begins, each process declares to the operating system the maximum number of resources it may need. When a process makes a request, the Banker's algorithm checks whether granting it would leave the system in an unsafe state. If the system would still be in a safe state, the resources are allocated; if it would be left unsafe, the process is simply told to wait and the resources are not allocated.
The algorithm keeps track of four data structures: Available, Max, Allocation, and Need (where Need = Max - Allocation). When a process makes a request, the system first checks whether it will remain in a safe state, and only then makes a decision. A minimal sketch of the safety check is shown below.
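The sketch below implements the safety check at the heart of the Banker's algorithm in C for three processes and three resource types. The Allocation, Max, and Available numbers are made-up illustrative values; the check simply tries to find an order in which every process could finish with the resources that would be free at that point.

```c
/* Sketch of the Banker's algorithm safety check (illustrative values). */
#include <stdio.h>

#define P 3   /* processes       */
#define R 3   /* resource types  */

int allocation[P][R] = { {0, 1, 0}, {2, 0, 0}, {3, 0, 2} };
int max_need[P][R]   = { {3, 2, 2}, {3, 2, 2}, {4, 2, 2} };
int available[R]     = { 3, 3, 2 };

int is_safe(void) {
    int need[P][R], work[R], finished[P] = {0};

    /* Need = Max - Allocation; Work starts as the available vector. */
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max_need[i][j] - allocation[i][j];
    for (int j = 0; j < R; j++)
        work[j] = available[j];

    for (int done = 0; done < P; ) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            int can_run = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = 0; break; }
            if (can_run) {
                /* P(i+1) can finish and then returns everything it holds. */
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j];
                finished[i] = 1;
                done++;
                progressed = 1;
                printf("P%d can finish\n", i + 1);
            }
        }
        if (!progressed) return 0;   /* nobody can finish: unsafe state */
    }
    return 1;                        /* a safe sequence exists          */
}

int main(void) {
    printf(is_safe() ? "System is in a safe state\n"
                     : "System is in an unsafe state\n");
    return 0;
}
```

In a full implementation, the same check would be run on a copy of the state that pretends the new request has already been granted; the request is approved only if that pretended state is still safe.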
3. Deadlock Detection and Recovery
Deadlock prevention and deadlock avoidance both try to stop a deadlock in OS from ever occurring. An alternative approach is to let the system reach a deadlocked state, detect it, and then recover. This approach is appropriate when deadlocks are infrequent and everyday performance matters more than absolute prevention. For example, on systems such as security devices, blindly killing a process may cause data loss because you cannot predict the consequences of terminating it; in cases like this, detecting the deadlock first and recovering in a controlled way is the better solution.
3.1. Deadlock Detection in OS
If the system has only one instance of each resource, deadlock can be detected by searching for a cycle in the resource allocation graph or, equivalently, in the wait-for graph derived from it. If the system has multiple instances of a resource, a cycle alone is not enough to conclude deadlock, so a detection algorithm similar to the Banker's algorithm safety check is used instead.
3.2. Deadlock Recovery in OS
Once a deadlock is detected, the operating system takes steps to break the cycle and return to normal operation. There are generally two ways to do this: either terminate processes or reclaim resources through preemption.
Terminate Process: The system might choose to kill all affected processes at once in urgent cases, or terminate one process at a time, checking after each termination whether the deadlock has been resolved. This strategy is effective but must be applied carefully to avoid unnecessary disruption. Several factors should be considered before choosing which process to terminate (a hypothetical scoring sketch follows the list below). These are
- How critical the process is (its priority)
- How many resources are held by that process or requested by it
- How long it has been running
- Whether it can be easily restarted or restored from a saved state
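As a purely hypothetical illustration of weighing these factors, the sketch below assigns each deadlocked process a termination cost and picks the cheapest victim. The struct fields, weights, and process names are invented for the example and do not correspond to any real OS interface.

```c
/* Hypothetical victim selection for deadlock recovery: score each
 * deadlocked process by the factors listed above; lower = cheaper to kill. */
#include <stdio.h>

struct proc_info {
    const char *name;
    int priority;        /* higher = more critical            */
    int resources_held;  /* resources currently allocated     */
    int runtime_seconds; /* how long it has been running      */
    int restartable;     /* 1 if it can be restarted cheaply  */
};

/* Illustrative weights only; a real policy would be tuned to the system. */
int termination_cost(const struct proc_info *p) {
    return 10 * p->priority
         +  2 * p->resources_held
         +      p->runtime_seconds
         - (p->restartable ? 20 : 0);
}

int main(void) {
    struct proc_info deadlocked[] = {
        { "db_writer",  8, 5, 120, 0 },
        { "report_gen", 2, 1,  10, 1 },
    };
    int n = 2, victim = 0;
    for (int i = 1; i < n; i++)
        if (termination_cost(&deadlocked[i]) < termination_cost(&deadlocked[victim]))
            victim = i;
    printf("Terminate %s to break the deadlock\n", deadlocked[victim].name);
    return 0;
}
```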
Resource Preemption: Instead of terminating processes, the system may temporarily take back some resources from one process and give them to another to break the deadlock in OS. Although this strategy is complex, resource preemption is often a better fit for systems where it is important to keep long-running processes alive without abrupt terminations, such as security servers. The OS must also ensure that fairness is maintained, so that the same process is not preempted over and over again.
4. Deadlock Ignorance: Ostrich Algorithm
Deadlock ignorance is a strategy where the OS chooses not to handle deadlocks explicitly. In this strategy, the system deliberately avoids implementing any mechanism to detect, prevent, or recover from deadlocks. Instead of investing resources in deadlock handling techniques, the operating system simply assumes that deadlocks occur rarely and that the cost of managing them is not justified.
This method is seen in operating systems like Windows, Linux, and macOS, where deadlock detection and prevention mechanisms are not built in. These systems minimally handle deadlocks or ignore deadlocks to reduce overhead. This technique is also known as the ostrich algorithm. The name comes from the myth that ostriches bury their heads in the sand to ignore danger.
Note: The ‘Ostrich Algorithm’ is not a real algorithm but a metaphorical approach where the system ignores the deadlock, assuming it occurs rarely.
Can Deadlock Be Used in Our Favor?
While it may seem counterintuitive, deadlocks can sometimes be leveraged strategically. Most engineers view deadlocks as critical issues to avoid. But as systems grow more complex, some developers are using deadlock states intentionally to:
- Enforce high-security resource locks
- Trigger controlled failures or alerts
- Freeze execution for debugging
- Simulate resource behavior under stress
Use Cases
- In a bank scenario, the process handling the transactions must never be interrupted. If a high-priority process tries to breach the transaction process, the system enters a controlled deadlock state to freeze or prevent the breach.
- In a monitoring system used in industrial or healthcare settings, a process can deliberately invoke a deadlock when a sensor or signal fails a sanity check, stopping further processing in order to prevent unsafe conditions.
- In simulated environments, especially in academic OS experiments or AI game simulations, triggering a deadlock can be useful to understand or test how a system behaves under pressure.
Deadlock Handling Techniques: Choose the Right Method
The table below outlines key deadlock handling techniques in operating systems, such as deadlock prevention, deadlock avoidance, deadlock detection and recovery, and deadlock ignorance. This will help you decide which method to use based on how frequently deadlocks occur in your system.
| Method | When to Use | Usage |
| --- | --- | --- |
| Deadlock Prevention | 1. In safety-critical systems, where deadlocks must never happen. 2. When you can control how resources are requested and released. | Rarely |
| Deadlock Avoidance | 1. When the system knows maximum resource needs in advance. 2. Suitable for systems with predictable workflows. | Occasionally |
| Deadlock Detection & Recovery | In general-purpose systems where deadlocks may occur, but it is acceptable to detect and fix them afterward. | Occasionally |
| Deadlock Ignorance | 1. In systems where deadlocks are extremely rare and handling them adds unnecessary complexity. 2. Often used in desktop or low-stakes systems. | Very rarely |
Conclusion
It is crucial to understand deadlock handling in OS when designing stable systems. However, not all deadlocks in operating systems need to be addressed. If a deadlock occurs rarely and does not have a significant impact on the system, it can be more cost-effective to leave it alone. When deadlocks occur frequently or have a significant impact, handling them is essential. You can then choose any deadlock handling technique, including deadlock prevention, deadlock avoidance, or deadlock detection and recovery, and you may use the Banker's Algorithm for avoidance. By understanding what a deadlock in OS is and which conditions it requires, you can make better decisions. Ultimately, the best deadlock handling approach is the one that fits your system's needs and performance priorities.
Understanding Deadlocks in Operating Systems – FAQs
Q1. What is a deadlock in an operating system?
A deadlock occurs when two or more processes wait indefinitely for resources held by each other, blocking further execution.
Q2. What are the 4 components of deadlock?
The four components of deadlock are mutual exclusion, hold and wait, no preemption, and circular wait.
Q3. What is RAG in OS?
RAG stands for Resource Allocation Graph. It is a visual tool used to detect and prevent deadlocks by showing relationships between processes and resources.
Q4. How can deadlocks be recovered?
You can recover by terminating processes, preempting resources, or rolling back to a safe state using checkpoints.
Q5. How to prevent deadlock in OS?
You can prevent deadlock by avoiding one or more of the four necessary conditions of deadlock.