Key Takeaways:
- A critical section in OS is a code segment where only one process or thread can access shared resources at a time.
- The Critical Section Problem in OS arises from race conditions that can cause inconsistent or corrupted results.
- Any valid solution to the critical section problem must satisfy mutual exclusion, progress, and bounded waiting.
- Software solutions include Peterson’s, Dekker’s, and Lamport’s Bakery algorithms.
- Hardware solutions include Test-and-Set, Compare-and-Swap, and disabling interrupts.
- Synchronization mechanisms like mutex locks, semaphores, and condition variables are widely used in modern operating systems.
The critical section problem in OS concerns the part of a program that accesses shared resources; if that access is uncontrolled, it can lead to conflicts, data corruption, or inconsistent results. To prevent this interference, the operating system provides synchronization mechanisms such as locks, semaphores, and scheduling policies that enforce mutual exclusion over the critical section. In this blog, we will learn what the critical section problem in OS is, how it occurs, and the various solutions to it. So, let’s get started!
What Is Critical Section in OS?
“A critical section in an operating system is a part of a program that accesses shared resources, such as data structures or devices, and must be executed by only one process or thread at a time to prevent race conditions.”
When multiple processes (say, P1 and P2) try to read and update a common resource at the same time, the final state of that resource depends on the order in which their instructions interleave. This is known as a race condition. To demonstrate, consider two concurrent processes, P1 and P2, operating on a shared variable value = 2:
- P1: value = value + 2
- P2: value = value - 2
If P1 and P2 run one after the other, the final value is 2 (2 + 2 - 2). But each update is really a read-modify-write sequence, and the two can interleave: if P2 reads value = 2 before P1 writes back its result, P1 writes 4 and P2 then overwrites it with 0, so the final value is 0 instead of 2. The accuracy of the variable is compromised because access to the common resource was uncoordinated.
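To see this race in action, here is a minimal C sketch (the file and function names are our own, purely illustrative): two threads repeatedly apply +2 and -2 to a shared variable with no synchronization, so the read-modify-write steps interleave and the final value is usually not the expected 0.

```c
/* race.c -- two threads update a shared variable without synchronization.
   Compile with: gcc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

long value = 0;  /* shared resource */

void *adder(void *arg) {
    for (int i = 0; i < 1000000; i++)
        value += 2;          /* read-modify-write: NOT atomic */
    return NULL;
}

void *subtractor(void *arg) {
    for (int i = 0; i < 1000000; i++)
        value -= 2;          /* races with the adder thread */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, adder, NULL);
    pthread_create(&t2, NULL, subtractor, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final value = %ld\n", value);  /* expected 0, rarely is */
    return 0;
}
```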
The Critical Section Problem in operating systems asks how to prevent this: a program must allow only one process at a time into its critical section (the part of the code that accesses the shared resource). By enforcing this rule, an operating system guarantees that race conditions do not occur, thereby ensuring consistency of data.
Characteristics of Critical Section Problem in OS
Process synchronization in OS is needed when more than one process accesses shared resources at the same time; without it, race conditions and inconsistent values result. When one process is in its critical section, all other processes must be excluded from entering their critical sections for the same resource.
Operating systems solve this through synchronization techniques that give processes exclusive access to the critical section while also guaranteeing fairness and progress.
Any correct/valid solution to the Critical Section Problem must satisfy three basic criteria as given below:
- Mutual Exclusion – Only one process can be inside its critical section at a time. This prevents data corruption by ruling out conflicting concurrent updates to shared resources.
- Progress – If no process is currently inside its critical section, one of the processes waiting to enter must eventually be allowed in; the decision cannot be postponed indefinitely.
- Bounded Waiting – A process must not be forced to wait forever. There must be a bound on how many times other processes can enter their critical sections after a process has requested entry and before that request is granted.
These three conditions form the foundation of concurrency control in operating systems.
Solutions to the Critical Section Problem in Operating System
Operating systems make use of various synchronization mechanisms to meet the requirements above and thereby solve the critical section problem. These mechanisms can be categorized into software-based and hardware-based solutions. So, let’s look at the Critical Section Problem solutions in OS, which are as follows:
Software-Based Approach to Solve Critical Section Problem in OS
Software algorithms use logical constructs and ordinary shared variables to achieve mutual exclusion, without any special hardware instructions. These algorithms are mostly of theoretical interest today, but they illustrate the fundamental concepts behind synchronization.
1. Peterson’s Algorithm
- A classical solution that works for the synchronization of two processes.
- It uses two variables:
- flag[] → indicates if a process intends to enter its critical section.
- turn → indicates whose turn it is to enter.
- Peterson’s algorithm guarantees mutual exclusion, progress, and bounded waiting for two processes on a uniprocessor system with strict memory ordering. However, it is not practical on modern multiprocessors without additional memory barriers.
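For reference, here is a minimal sketch of the algorithm in C (the function names are ours, not from any library); C11 sequentially consistent atomics stand in for the strict memory ordering the algorithm assumes:

```c
/* peterson.c -- Peterson's algorithm for two threads (ids 0 and 1). */
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];  /* flag[i]: thread i wants to enter */
atomic_int  turn;     /* which thread yields first */

void enter_region(int i) {            /* i is 0 or 1 */
    int other = 1 - i;
    atomic_store(&flag[i], true);     /* announce intent to enter */
    atomic_store(&turn, other);       /* give the other thread priority */
    /* wait while the other thread wants in and it is its turn */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                             /* busy-wait */
    /* ... critical section ... */
}

void leave_region(int i) {
    atomic_store(&flag[i], false);    /* no longer interested */
}
```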
2. Dekker’s Algorithm
- Dekker’s algorithm was one of the earliest solutions to the Critical Section Problem.
- Dekker’s algorithm combines flag variables and a turn variable to coordinate access between processes.
- It guarantees mutual exclusion, but it is considerably more complicated than Peterson’s algorithm.
3. Lamport’s Bakery Algorithm
- Lamport’s Bakery Algorithm generalizes Peterson’s idea of taking turns to any number of competing processes.
- Each process takes a ticket (essentially a number) before entering the critical section.
- Tickets are issued in increasing order (like customers taking numbers at a bakery), and the process holding the smallest ticket number enters next.
- This mechanism guarantees fairness while also preventing starvation.
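A compact C sketch of the idea follows (the thread count and names are illustrative): each thread takes a ticket one larger than any it can see, then waits for every smaller ticket to be served.

```c
/* bakery.c -- Lamport's Bakery Algorithm for N threads. */
#include <stdatomic.h>
#include <stdbool.h>

#define N 4                  /* number of threads (illustrative) */

atomic_bool choosing[N];     /* choosing[i]: thread i is picking a ticket */
atomic_int  number[N];       /* ticket numbers; 0 means "not in line" */

void bakery_lock(int i) {
    atomic_store(&choosing[i], true);
    int max = 0;             /* take a ticket larger than any visible one */
    for (int j = 0; j < N; j++) {
        int n = atomic_load(&number[j]);
        if (n > max) max = n;
    }
    atomic_store(&number[i], max + 1);
    atomic_store(&choosing[i], false);

    for (int j = 0; j < N; j++) {
        while (atomic_load(&choosing[j]))
            ;                /* wait until j has finished picking */
        /* wait while j holds a smaller ticket (ties broken by id) */
        while (atomic_load(&number[j]) != 0 &&
               (atomic_load(&number[j]) < atomic_load(&number[i]) ||
                (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)))
            ;
    }
}

void bakery_unlock(int i) {
    atomic_store(&number[i], 0);   /* leave the line */
}
```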
Hardware-Based Approach to Solve Critical Section Problem in OS
Hardware-level instructions provide atomic operations and allow for more efficient synchronization primitives. They are prevalent in modern systems.
1. Test-and-Set
- Uses a shared Boolean variable (commonly referred to as a lock).
- The test_and_set instruction atomically sets the lock to true and returns its previous value.
- If the previous value was already true, another process holds the lock, so the caller keeps retrying (busy-waiting) until the lock becomes free and it can enter the critical section, as sketched below.
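In C, the atomic_flag type from <stdatomic.h> exposes exactly this primitive, so a test-and-set spin lock can be sketched as follows:

```c
/* spinlock.c -- a spin lock built on the test-and-set primitive. */
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* false = unlocked */

void acquire(void) {
    /* atomically set the flag to true and return its previous value;
       a true return means someone else already holds the lock */
    while (atomic_flag_test_and_set(&lock))
        ;                              /* busy-wait until it was false */
}

void release(void) {
    atomic_flag_clear(&lock);          /* set the flag back to false */
}
```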
2. Compare-and-Swap
- It is a hardware instruction that updates a variable only if its current value matches an expected value; if another process changed the variable in the meantime, the update fails and the caller can retry.
- Because the check and the update happen as one atomic step, threads can coordinate access to shared data safely.
- It is commonly used in lock-free data structures.
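As a small illustration, here is a lock-free counter update in C using the standard atomic_compare_exchange_weak, which retries whenever another thread changes the value first:

```c
/* cas_counter.c -- updating a shared counter with compare-and-swap. */
#include <stdatomic.h>

atomic_int counter = 0;

void add(int delta) {
    int old = atomic_load(&counter);
    /* replace counter with old + delta only if it still equals old;
       on failure, old is refreshed with the current value and we retry */
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;   /* another thread got there first: try again */
}
```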
3. Disabling Interrupts
- The process disables interrupts while it executes its critical section.
- With interrupts off, the scheduler cannot preempt the running process, so no other process can enter the critical section on that CPU.
- It is simple, but unsuitable for multiprocessor systems: disabling interrupts on one CPU does not stop the others, and leaving interrupts off for long hurts system responsiveness.
High-Level Synchronization Techniques (or OS-level solutions)
High-Level Synchronization Techniques are operating system–level solutions that manage process coordination without requiring developers to handle low-level details. These mechanisms simplify synchronization while ensuring safety, fairness, and efficiency.
Let’s get to know them one by one:
1. Mutex Locks
- Mutex (mutual exclusion) locks in OS offer two primary operations: acquire() and release().
- At any given time, only one process may hold the lock.
- As long as every process acquires the lock before its critical section and releases it afterwards, the shared resource is accessed safely, as in the sketch below.
- Mutex locks are often found within operating systems and multithreaded applications today.
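A minimal POSIX example (the account balance is just an illustrative shared resource): pthread_mutex_lock() and pthread_mutex_unlock() play the roles of acquire() and release().

```c
/* mutex_demo.c -- guarding a critical section with a POSIX mutex. */
#include <pthread.h>

pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;
long balance = 0;                        /* shared resource */

void deposit(long amount) {
    pthread_mutex_lock(&balance_lock);   /* acquire() */
    balance += amount;                   /* critical section */
    pthread_mutex_unlock(&balance_lock); /* release() */
}
```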
2. Semaphores
- A semaphore in OS is a more general synchronization mechanism, represented as an integer variable.
- There are two atomic operations:
- wait() → If the semaphore value is greater than 0, wait() decrements it and the process continues. If the value is 0, the process is blocked until another process calls signal() to increment it.
- signal() → It will increment the semaphore. If there are any blocked processes, a blocked process will be awakened.
- There are two forms of semaphore: binary (similar to mutexes) and counting (for many resources).
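Here is a small POSIX sketch of a counting semaphore guarding a pool of three identical resources (the pool size and names are illustrative); sem_wait() and sem_post() correspond to wait() and signal():

```c
/* sem_demo.c -- a counting semaphore limiting concurrent access. */
#include <semaphore.h>

sem_t slots;

void pool_init(void) {
    sem_init(&slots, 0, 3);   /* three identical resources available */
}

void use_resource(void) {
    sem_wait(&slots);         /* wait(): decrement, block if value is 0 */
    /* ... critical section: use one of the resources ... */
    sem_post(&slots);         /* signal(): increment, wake a waiter */
}
```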
3. Condition Variables
- Condition variables are used together with mutexes to coordinate threads that must wait for some condition to become true.
- A thread waits on the condition variable while holding the mutex; the wait atomically releases the mutex and re-acquires it on wakeup, which avoids lost wakeups and ensures proper synchronization.
- They provide additional control when processes depend on specific events rather than just on exclusive access, as the sketch below shows.
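The canonical pattern looks like this in POSIX threads (the ready flag is an illustrative predicate); note the while loop, which re-checks the condition after every wakeup:

```c
/* condvar_demo.c -- a condition variable paired with a mutex. */
#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  ready_cv = PTHREAD_COND_INITIALIZER;
bool ready = false;                        /* the condition (predicate) */

void wait_until_ready(void) {
    pthread_mutex_lock(&m);
    while (!ready)                         /* re-test: guards against
                                              spurious/lost wakeups */
        pthread_cond_wait(&ready_cv, &m);  /* releases m while sleeping,
                                              re-acquires it on wakeup */
    /* ... consume the event while still holding m ... */
    pthread_mutex_unlock(&m);
}

void announce_ready(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&ready_cv);        /* wake one waiting thread */
    pthread_mutex_unlock(&m);
}
```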
Strategies for Avoiding Problems of Race Condition in OS
Besides classical synchronization mechanisms like mutex locks and semaphores, many operating systems and concurrent programming frameworks employ sophisticated techniques in order to decrease contention, increase performance, and reduce the chances of deadlocks or starvation.
Here are some commonly used techniques:
1. Fine-Grained Locking
Instead of one lock for the entire structure, multiple smaller locks protect individual sections of the data.
Example: in a hash table, every bucket can get its own lock, rather than a lock for the entire hash table.
Benefit: more parallelism, because various threads can safely access different parts at the same time; they don’t have to wait on one another for the whole structure to “be free”.
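A sketch of the hash-table case in C (the structure and names are illustrative): each bucket carries its own mutex, so operations on different buckets proceed in parallel.

```c
/* buckets.c -- per-bucket locking in a hash table. */
#include <pthread.h>

#define NBUCKETS 16

struct entry;                      /* chained entries (definition elided) */

struct bucket {
    pthread_mutex_t lock;          /* protects this bucket only */
    struct entry *head;
};

struct bucket table[NBUCKETS];

void table_init(void) {
    for (int i = 0; i < NBUCKETS; i++)
        pthread_mutex_init(&table[i].lock, NULL);
}

void with_bucket(unsigned hash, void (*op)(struct bucket *)) {
    struct bucket *b = &table[hash % NBUCKETS];
    pthread_mutex_lock(&b->lock);  /* lock one bucket, not the table */
    op(b);                         /* critical section for this bucket */
    pthread_mutex_unlock(&b->lock);
}
```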
2. Lock Hierarchies
Locks are acquired in a specific, predetermined global order.
Prevents deadlock by avoiding circular lock dependencies.
Common in databases and operating systems (e.g., acquiring filesystem locks in a strict order), as sketched below.
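One common way to impose such an order in C is to sort the locks by address before acquiring them, as in this illustrative sketch:

```c
/* ordered_locks.c -- acquiring two locks in a fixed global order
   (by address) so circular wait, and hence deadlock, cannot occur. */
#include <pthread.h>
#include <stdint.h>

void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a > (uintptr_t)b) {   /* enforce the global order */
        pthread_mutex_t *tmp = a; a = b; b = tmp;
    }
    pthread_mutex_lock(a);               /* always lower address first */
    pthread_mutex_lock(b);
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);             /* release order is flexible */
    pthread_mutex_unlock(b);
}
```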
3. Read-Write Locks
Distinguishes between readers (who need only to read shared data) and writers (who need to modify data).
Readers can read the resource together, but only one writer can modify this resource.
Improves performance in workloads with a high ratio of reads to writes (e.g., caching or querying a database).
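POSIX exposes this directly as pthread_rwlock_t; a minimal sketch (the cached value is an illustrative shared datum):

```c
/* rwlock_demo.c -- many concurrent readers, one exclusive writer. */
#include <pthread.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
int cached_value;                  /* shared data */

int read_value(void) {
    pthread_rwlock_rdlock(&rw);    /* shared: other readers may enter */
    int v = cached_value;
    pthread_rwlock_unlock(&rw);
    return v;
}

void write_value(int v) {
    pthread_rwlock_wrlock(&rw);    /* exclusive: waits out all readers */
    cached_value = v;
    pthread_rwlock_unlock(&rw);
}
```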
4. Optimistic Concurrency Control (OCC)
Assumes that conflicts are rare, so multiple processes/threads execute without taking locks.
Before we commit changes, every process checks to see if there was any conflict.
If no conflict is found, then we have successfully committed changes.
If a conflict is found, we roll back.
Commonly used in databases and transactional memory systems.
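Real database OCC tracks per-transaction read and write sets, but the validate-then-commit idea can be shown in miniature with a version stamp. The following seqlock-style sketch, including all names, is our own illustration, not a standard API:

```c
/* occ_sketch.c -- optimistic update in miniature: read a version stamp,
   compute without locking, commit only if the version is unchanged. */
#include <stdatomic.h>
#include <stdbool.h>

atomic_int version = 0;   /* even = stable, odd = commit in progress */
atomic_int data = 0;      /* shared record (illustrative) */

bool try_increment(void) {
    int v = atomic_load(&version);
    if (v & 1) return false;                /* commit in progress: conflict */
    int snapshot = atomic_load(&data);      /* optimistic read, no lock */
    int new_value = snapshot + 1;           /* compute from the snapshot */
    int expected = v;
    /* validate and begin commit: fails if anyone committed in between */
    if (!atomic_compare_exchange_strong(&version, &expected, v + 1))
        return false;                       /* conflict found: roll back */
    atomic_store(&data, new_value);         /* commit the new value */
    atomic_store(&version, v + 2);          /* stable again, new version */
    return true;
}
```

A caller simply retries try_increment() until it returns true, mirroring the roll-back-and-retry step described above.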
5. Lock-Free and Wait-Free Data Structures
Instead of using locks, they use atomic hardware instructions such as compare-and-swap.
Lock-free → guarantees that at least one process will make progress.
Wait-free → guarantees every process will finish its operation in a finite number of steps.
They are often used in real-time systems and high-performance computing, where blocking is considered expensive.
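As a taste, here is the push half of the classic Treiber lock-free stack in C (pop is trickier because of the ABA problem and safe memory reclamation, so it is omitted here):

```c
/* treiber_push.c -- lock-free stack push via a compare-and-swap loop. */
#include <stdatomic.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

_Atomic(struct node *) top = NULL;

void push(int value) {
    struct node *n = malloc(sizeof *n);
    n->value = value;
    struct node *old = atomic_load(&top);
    do {
        n->next = old;   /* a failed CAS refreshes old with the new top */
    } while (!atomic_compare_exchange_weak(&top, &old, n));
    /* at least one thread's CAS always succeeds: lock-free progress */
}
```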
Examples of Critical Section Problem in Real-World Applications
The critical section problem in OS is not just theoretical; it matters in many real-life systems where shared resources need to be accessed safely. Here are some examples from real life:
1. Banking Systems
Scenario: Two processes are updating the same bank account balance at the same time.
Problem: If one transaction is depositing money while the other is withdrawing money, race conditions may lead to an incorrect balance.
Solution: Synchronization via a mechanism such as a lock or semaphore ensures that only one transaction updates the balance at a time.
2. Ticket Booking Systems
Scenario: Several users try to purchase the last ticket on a flight or movie.
Problem: Without synchronization between the two bookings, both users may be confirmed for the last seat, leaving two people entitled to a single seat.
Approach: The critical section of the booking mechanism will be designed to allow only one user to book the seat at a time.
3. File Systems
Scenario: Several processes want to write to a file in parallel.
Problem: Data corruption or inconsistent results can happen.
Approach: File systems use locking or semaphore mechanisms, implicit or explicit, so that processes can read, write, append, or update records safely.
4. Operating System Kernels
Scenario: The kernel needs to modify process tables, memory management structures, or I/O buffers.
Problem: Interrupts or concurrently running processes could corrupt the kernel’s data structures.
Solution: The Operating System protects critical sections with mutexes, semaphores, or disabling interrupts in certain locations to avoid system corruption.
5. Database Management Systems (DBMS)
Scenario: One query is accessing a record, and another query is updating the same record.
Problem: The concurrent updates could produce problems like lost updates or dirty reads.
Solution: Databases provide transactions, locks, and optimistic concurrency controls to maintain consistent results.
6. Multiplayer Online Games
Scenario: Multiple players are interacting with a game resource (e.g., a treasure chest or score counter).
Problem: Without synchronization, two players could activate the same item or record inconsistent scores.
Solution: Critical sections would ensure updates to game object(s) are done in order.
Advantages of Critical Section in OS
- Prevents Race Conditions in OS: It guarantees that only a single process executes in the critical section at a time, so inconsistent or corrupted data is never produced.
- Data Integrity and Consistency: It shields shared resources (files, databases, memory) from conflicting concurrent accesses.
- Simplicity of Concept: The idea is easy to reason about; restrict concurrent access, and correctness follows.
- Fairness: When properly implemented with appropriate synchronization mechanisms (semaphores, locks), processes are given fair turns to the resource.
- Supports Multi-Process/Thread Environments: Critical sections are important for multi-core processors, multithreading, parallel applications, etc.
Disadvantages of Critical Section in OS
- Performance Overhead: The requirement of time and resources for synchronization will typically cause slower execution than if all access were unsynchronized.
- Deadlocks in OS: If multiple processes are waiting for each other’s locks indefinitely, they can be put into a deadlock.
- Starvation Problem: If scheduling is unfair (e.g., high-priority processes always get preference), some processes may starve, being continually denied access.
- Decreased Parallelism: Serializing access so that only one process at a time runs the critical section limits concurrency and reduces scalability under heavy load.
- Complex and Error Prone: Managing a large number of locks in big systems is complex and error-prone.
Use of Critical Section and Its Impact on Scalability
While critical sections ensure safe access to shared resources, their improper or excessive use can reduce the scalability of a system. Scalability refers to a system’s ability to handle increasing workloads or users without a proportional drop in performance. Critical sections can become a limiting factor due to:
- Bottlenecks: Multiple processes waiting for the same critical section create a queue, reducing throughput.
- Reduced Parallelism: Overuse of locks forces processes into serialized execution, limiting concurrency.
- Locking Overhead: Acquiring and releasing locks adds extra computation, which grows significant in highly concurrent systems.
Note: To maintain scalability, critical sections should be kept as short as possible, and techniques like fine-grained locking, lock-free data structures, or read-write locks should be applied to minimize contention.
Conclusion
Solving the Critical Section Problem in OS guarantees safe access to shared resources by eliminating race conditions and data inconsistency. Operating systems achieve this by enforcing the three requirements, mutual exclusion, progress, and bounded waiting, through synchronization mechanisms like mutexes, semaphores, and condition variables. With this, we have reached the end of this blog, and we hope you now have a clear picture of what the critical section problem in OS is and how to solve it using software and hardware techniques.
Critical Section in Operating System – FAQs
1. What is a critical section in operating system?
Ans. In operating systems, a critical section is a segment of a program that accesses shared resources (variables, data structures, I/O devices). For correct results, only one process or thread should execute in the critical section at a time; otherwise, data inconsistencies may arise. Critical sections are essential for avoiding race conditions.
2. What is a race condition in os?
Ans. A race condition in an operating system is an undesirable situation in which two or more processes or threads access shared data concurrently and the final result depends on the unpredictable order in which their operations execute.
3. What is mutual exclusion in a critical section?
Ans. Mutual Exclusion is a property of process synchronization which states that “no two processes can be in the critical section at the same time”. The concept was first formalized by Dijkstra.
4. Why is Peterson's algorithm important?
Ans. Peterson’s Algorithm is a simple and elegant way of achieving mutual exclusion between two processes. It guarantees that at most one process accesses the critical section at a time, preventing races and inconsistent data. It works well for two processes, but it does not scale directly to larger systems.
5. How does compare-and-swap assist in concurrency?
Ans. Compare-and-swap is a building block for concurrent algorithms. It compares a variable to an expected value; if they are equal, it atomically replaces the variable’s value with a new one. Because the check and the update happen as a single atomic step, threads can coordinate updates without locks.