
Semaphore in Operating System

In operating systems, efficient management of resources and synchronization among processes are crucial for ensuring smooth and reliable system operation. Semaphores play a vital role in this context, serving as a synchronization mechanism that helps control access to shared resources and avoid race conditions. In this blog post, we’ll delve into the concept of semaphores and their types and discuss some classical operating system problems that can be solved using semaphores.

What is Semaphore in OS?

Within operating systems, a semaphore serves as a synchronization mechanism to regulate access to a shared resource among numerous processes or threads. Functioning as a signaling tool, it facilitates communication between processes, enabling them to coordinate activities, prevent conflicts, and ensure the systematic execution of tasks. It is a software-based solution, typically built on top of atomic hardware operations, designed to address the critical section problem.

Semaphores were introduced by Dutch computer scientist Edsger Dijkstra and have become an integral part of modern operating systems, facilitating efficient resource management.

What is a Critical Section Problem?

The critical section problem is a classic synchronization problem in computer science that arises in concurrent or multi-process systems where multiple processes share a common resource. The objective is to ensure that only one process at a time can execute in its critical section, a segment of code where it accesses shared variables or resources. This is crucial to prevent race conditions and ensure the integrity of shared data.

The critical section problem involves finding a solution that satisfies the following conditions:

  • Mutual Exclusion: Only one process is allowed to execute in its critical section at any given time. This ensures that the shared resources are not accessed simultaneously by multiple processes, preventing conflicts and data corruption.
  • Progress: If no process is in its critical section and some processes are interested in entering their critical sections, only those processes that are not in their remainder sections can participate in deciding which one will enter next. This ensures that processes do not deadlock and that progress is made.
  • Bounded Waiting: There exists a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. This prevents a process from being indefinitely delayed in entering its critical section.

Solving the critical section problem is typically achieved using synchronization mechanisms such as semaphores, locks, or other mutual exclusion constructs. These mechanisms help control access to shared resources and coordinate the execution of processes to avoid conflicts.
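The effect of mutual exclusion is easy to see in a minimal Python sketch (an illustration added here, not code from the original text): four threads each increment a shared counter 100,000 times, and a lock around the critical section guarantees that no update is lost.

```python
import threading

counter = 0                      # shared resource
lock = threading.Lock()          # enforces mutual exclusion on the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # entry section: only one thread may proceed
            counter += 1         # critical section
                                 # exit section: the lock is released automatically

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no updates are lost
```

Without the `with lock:` line, `counter += 1` is a read-modify-write that two threads can interleave, so the final count can come out lower than 400,000.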

Problems in the Critical Section

In a scenario where multiple processes attempt to access the critical section concurrently, there is a risk of a race condition. This situation can lead to unexpected outcomes, particularly when the second process attempts to access a variable that has already been modified by the first process. This can result in data inconsistency or corruption if proper synchronization mechanisms are not in place to ensure that only one process at a time is allowed to execute in the critical section. Synchronizing access to shared variables is essential to prevent conflicts and maintain the integrity of the shared resources.

Example 

Suppose we have a shared variable, denoted as ‘x,’ with an initial value of 20. The variable ‘x’ is accessed and modified by two processes, Process 1 and Process 2.

In Process 1,

int a = 20;
int b = 10;
x = a + b;

In Process 2,

int a = 10;
int b = 30;
x = a - b;

If these processes are executed sequentially, one after the other, there is no issue. For instance:

– If only Process 1 is executed, the value of ‘x’ becomes 30.

The shared variable ‘x’ changes from its initial value of 20 to 30.

– If only Process 2 is executed, the value of ‘x’ becomes -20.

The shared variable ‘x’ changes from its initial value of 20 to -20.

However, if both processes are executed simultaneously, a problem arises. The final value of ‘x’ depends on the order in which the two processes run, so it may end up as 30 or -20, and the result can vary from one run to the next. This conflicting state is known as data inconsistency.

To address such issues, synchronization tools like semaphores can be employed. Semaphores provide a mechanism to control access to shared resources and prevent data inconsistency by allowing only one process to access the shared variable at a time. This helps synchronize the execution of processes and ensures that conflicting modifications do not occur concurrently, maintaining the integrity of shared data.
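The example above can be sketched in Python (an illustration added here, not code from the original text): a `threading.Semaphore(1)` acts as a binary semaphore, so each process's update of `x` runs atomically and `x` always finishes as one of the two consistent results, never a corrupted in-between value.

```python
import threading

x = 20                               # shared variable from the example
sem = threading.Semaphore(1)         # binary semaphore guarding x

def process_1():
    global x
    with sem:                        # wait(sem) ... signal(sem)
        a, b = 20, 10
        x = a + b                    # x = 30

def process_2():
    global x
    with sem:
        a, b = 10, 30
        x = a - b                    # x = -20

t1 = threading.Thread(target=process_1)
t2 = threading.Thread(target=process_2)
t1.start(); t2.start()
t1.join(); t2.join()

print(x)  # either 30 or -20, depending on which process ran last
```

The semaphore does not decide which result is "right"; it only guarantees that each update is applied in full, which is exactly the mutual exclusion property discussed above.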

Semaphore Operations

Semaphore operations are fundamental operations used in concurrent programming to control access to shared resources and implement synchronization between multiple processes. A semaphore is an integer variable used for signaling and mutual exclusion; its value is never negative, and for a counting semaphore it can be any non-negative integer. The two primary operations associated with semaphores are “wait” (P) and “signal” (V). Here’s an explanation of each operation:

Wait Operation

The wait operation, often denoted as P (from the Dutch word “proberen,” meaning “to test”), is used to acquire or decrement the value of a semaphore. This operation goes by various names, including sleep operation, down operation, decrease operation, and P function. In the wait operation, the semaphore value is decreased by one if it is initially greater than zero. This decrement indicates that a process has acquired the semaphore or entered a critical section. If the semaphore value is already zero, implying that the resource is currently in use, the process invoking the wait operation is blocked. The process remains in a blocked state until the semaphore value becomes greater than zero, signifying that the resource is available for use. The wait operation is commonly employed to facilitate entry into critical sections or to await the fulfillment of a specific condition before allowing further execution.

P(Semaphore):
  if Semaphore > 0:
    Semaphore = Semaphore - 1
  else:
    // Block the process until Semaphore > 0

Signal Operation

The signal operation, often denoted as V (from the Dutch word “verhogen,” meaning “to increment”), is used to release or increment the value of a semaphore. It is alternatively known as wake-up operation, up operation, increase operation, and V function. The signal operation is executed following the completion of a process’s critical section or the fulfillment of a specific condition. In the signal operation, if there are processes currently blocked on the semaphore, one of them is unblocked, enabling it to resume execution. Simultaneously, the semaphore value is incremented, indicating that the associated resource is now available for use. The signal operation plays a key role in releasing resources, allowing waiting processes to proceed, and signaling the readiness of the shared resource.

V(Semaphore):
  Semaphore = Semaphore + 1
  // Unblock one of the waiting processes, if any
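
The wait/signal semantics above map directly onto Python's `threading.Semaphore` (a hedged illustration added here, not from the original text): `acquire()` is P and blocks while the count is zero, and `release()` is V and wakes a blocked waiter.

```python
import threading

sem = threading.Semaphore(0)     # count 0: the "resource" is not yet available
started = threading.Event()
events = []

def waiter():
    events.append("waiting")
    started.set()                # tell the main thread we are about to block
    sem.acquire()                # wait (P): blocks while the count is 0
    events.append("resumed")

t = threading.Thread(target=waiter)
t.start()
started.wait()                   # ensure the waiter has reached the semaphore
events.append("signaling")
sem.release()                    # signal (V): increments the count, waking the waiter
t.join()

print(events)  # ['waiting', 'signaling', 'resumed']
```

Note that the blocked thread does not busy-wait: the runtime suspends it until the signal arrives, which is the behavior the pseudocode's "Block the process" comment stands for.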

Types of Semaphore

There are two main types of semaphores: binary semaphores and counting semaphores. These types serve different synchronization purposes and are used in various scenarios in concurrent programming.

Binary Semaphores

Binary semaphores are characterized by having only two possible values: 0 and 1. They are often used like a mutex (mutual exclusion lock), although a strict mutex additionally has an owner that must be the one to release it. Serving as a simple on/off switch, these semaphores play a crucial role in controlling access to critical sections and implementing mutual exclusion. In a binary semaphore, a value of 0 signifies that the critical section is currently occupied, while a value of 1 indicates its availability. This type of semaphore is commonly employed to address the critical section problem, ensuring that only one process accesses a critical section at any given time and mitigating the risk of race conditions.

Counting Semaphores

Counting semaphores, also referred to as general semaphores, deviate from the binary counterpart by allowing non-negative integer values. These semaphores are used to regulate access to a resource consisting of a finite number of instances. Unlike binary semaphores, counting semaphores enable multiple processes to access the critical section concurrently, up to the limit defined by the semaphore value. The value of the counting semaphore represents the number of available instances of the resource. Counting semaphores are particularly useful in scenarios where multiple instances of a resource can be shared among processes, facilitating a more flexible approach to synchronization.

Advantages and Disadvantages of Semaphore

Semaphores are powerful synchronization tools, but their proper usage requires careful consideration of the specific requirements of the concurrent system. Let’s explore the advantages and disadvantages of semaphore:

Advantages of Semaphore

  • Effectiveness in Synchronization: Semaphores provide a simple and effective synchronization mechanism. By enforcing mutual exclusion, they facilitate well-coordinated execution of processes, minimizing the risk of data corruption and race conditions.
  • Efficient Resource Management: Semaphores excel in resource management, preventing multiple processes from simultaneously entering critical sections. This effective control mechanism helps in avoiding conflicts and ensuring the proper utilization of shared resources.
  • Platform Independence: Semaphores are machine-independent, as they are typically implemented in the machine-independent code of the kernel. This characteristic enhances their portability across different computing environments.
  • Strict Mutual Exclusion: Semaphores strictly enforce mutual exclusion, particularly in the case of binary semaphores. They ensure that processes enter critical sections one at a time, preventing concurrent access to shared resources.
  • Elimination of Busy Waiting: Blocking semaphore implementations eliminate busy waiting, in which processor time is wasted repeatedly checking a condition before entering a critical section. Instead, waiting processes are suspended and woken when the resource becomes available, resulting in more efficient CPU utilization.

Disadvantages of Semaphore

  • Order Sensitivity in Actions: The wait() and signal() actions must be executed in the correct order to prevent deadlocks. Maintaining this order sensitivity adds a layer of intricacy to semaphore programming.
  • Priority Inversion Possibility: Semaphore usage may lead to priority inversion, where a high-priority process is forced to wait because a low-priority process holds the semaphore it needs. This scenario can impact system performance and responsiveness.
  • Complexity in Design: Semaphore implementation can be somewhat complex. Careful design of wait and signal actions is important to avoid deadlocks, where processes are unable to proceed due to circular dependencies.
  • Programming Challenges: Programming with semaphores presents challenges, and there is a risk that mutual exclusion may not be achieved if not implemented correctly. This complexity requires careful consideration during the development phase.

Classical OS Problems Resolution by Semaphores

Semaphores play an important role in resolving classical operating system problems that involve synchronization and resource sharing among concurrent processes. Let’s see how semaphores can be applied to address three well-known problems.

The Dining Philosophers’ Problem

Definition: The Dining Philosophers’ Problem is a classic synchronization and concurrency problem that illustrates the challenges of resource sharing and avoiding deadlock in a multithreaded or multiprocessor environment. The problem was formulated by E.W. Dijkstra in 1965 to represent issues in concurrent programming and system design.

The scenario involves a number of philosophers sitting around a dining table. Each philosopher alternates between thinking and eating. The dining table is set with a bowl of spaghetti, and there is a single fork placed between each pair of adjacent philosophers. Philosophers need two forks to eat.

The challenge is to design a synchronization protocol that allows the philosophers to alternate between thinking and eating while avoiding deadlock and ensuring that no philosopher starves (i.e., each philosopher gets a chance to eat).

Required Data

In this problem, we have a set of philosophers, a dining table, and a set of forks. The philosophers can be represented as individual processes or threads, and the forks are shared resources that need to be acquired by the philosophers to eat. The key data structures include:

  • Philosophers: A set of concurrent processes or threads, each representing a philosopher. They alternate between thinking and eating.
  • Forks: A set of shared resources (forks) placed between the philosophers. Philosophers need to acquire two forks to eat.

Solution

One common solution to the Dining Philosophers’ Problem involves using semaphores to control access to the shared forks. Here is a simplified solution:

  • Semaphore for Each Fork: Create a semaphore for each fork. This semaphore is used to control access to the fork, allowing only one philosopher to pick up the fork at a time.
  • Semaphore for Mutual Exclusion (Mutex): Create a semaphore to ensure mutual exclusion when updating the state of the system (e.g., changing the state of a philosopher from thinking to eating).

Algorithm:

  • Each philosopher, before picking up forks, must acquire the mutex semaphore to ensure exclusive access to the shared state.
  • Holding the mutex, the philosopher picks up the left fork and then the right fork, waiting on each fork’s semaphore until it is available.
  • After picking up the forks, the philosopher updates their state to “eating” and releases the mutex.
  • After eating, the philosopher puts down the forks, updates their state to “thinking,” and releases the mutex.
  • The use of semaphores ensures that only one philosopher at a time can update the state or pick up forks, preventing conflicts and race conditions.

Algorithm Code

// Define the number of philosophers
const int N = 5;
// Define semaphores for forks and a mutex semaphore
Semaphore mutex = 1; // Binary semaphore for mutual exclusion
Semaphore forks[N] = {1, 1, 1, 1, 1}; // Binary semaphores for each fork
// Philosopher process
void philosopher(int id) {
    while (true) {
        // Thinking
        think();
        // Wait for mutex to ensure exclusive access to shared state
        wait(mutex);
        // Pick up left fork
        wait(forks[id]);
        // Pick up right fork
        wait(forks[(id + 1) % N]);
        // Release mutex to allow other philosophers to access shared state
        signal(mutex);
        // Eating
        eat();
        // Put down left fork
        signal(forks[id]);
        // Put down right fork
        signal(forks[(id + 1) % N]);
    }
}
// Illustrative helper routines for wait and signal. A real implementation
// must perform the test and the decrement atomically (e.g., via an atomic
// instruction or a kernel call); this busy-wait version is for exposition only.
void wait(Semaphore s) {
    while (s <= 0) {
        // Busy-wait (block) until the semaphore becomes positive
    }
    s--;
}
void signal(Semaphore s) {
    s++;
}

In this algorithm:

  • `philosopher(id)` represents the code for each philosopher, where `id` is the identifier for the philosopher.
  • `wait(s)` is a function that decreases the value of semaphore `s` and blocks the process if the semaphore is not available.
  • `signal(s)` is a function that increases the value of semaphore `s`.
  • The philosopher alternates between thinking and eating, acquiring and releasing forks while ensuring mutual exclusion using the mutex semaphore.
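The pseudocode above can be turned into a runnable program. The following Python version (an illustration using `threading.Semaphore`, not code from the article; `MEALS` and the `meals_eaten` list are additions for demonstration) mirrors the same structure: a global mutex serializes fork pickup, and one binary semaphore per fork enforces exclusive use.

```python
import threading

N = 5                                       # number of philosophers
MEALS = 3                                   # meals each philosopher eats
mutex = threading.Semaphore(1)              # serializes fork pickup
forks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per fork
meals_eaten = [0] * N

def philosopher(i):
    for _ in range(MEALS):
        # think() would go here
        mutex.acquire()                     # only one philosopher picks up forks at a time
        forks[i].acquire()                  # pick up left fork
        forks[(i + 1) % N].acquire()        # pick up right fork
        mutex.release()
        meals_eaten[i] += 1                 # eat()
        forks[i].release()                  # put down left fork
        forks[(i + 1) % N].release()        # put down right fork

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals_eaten)  # [3, 3, 3, 3, 3]: every philosopher ate, no deadlock
```

Because only one philosopher is ever mid-pickup, any philosopher holding a fork outside the mutex section holds both forks and is eating, so the circular wait that causes deadlock cannot form.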

The Readers and Writers Problem

Definition: The Readers and Writers Problem is another classic synchronization problem that arises in concurrent programming. It models a situation where multiple threads (readers and writers) contend for access to a shared resource (such as a data structure or file). The problem is to design a solution that allows multiple readers to access the resource concurrently but ensures exclusive access for a writer to prevent data inconsistency.

In this problem, readers can access the resource simultaneously without any issue, but a writer must have exclusive access to the resource when writing to it. The challenge is to find a synchronization scheme that allows for maximum concurrency among readers while preventing race conditions and ensuring the integrity of data when a writer is active.

Required Data

The key data structures involved in the Readers and Writers Problem include:

  • Readers: It is a set of concurrent processes or threads representing readers who want to read the shared resource.
  • Writers: It is a set of concurrent processes or threads representing writers who want to write to the shared resource.
  • Shared Resource: The data structure or resource that both readers and writers want to access. This could be a file, database, or any other shared data structure.
  • Counters or Semaphores: Counters or semaphores are used to control access to the shared resource, indicating the number of readers currently accessing it and whether a writer is active.

Solution:

A common solution to the Readers and Writers Problem involves the use of semaphores to coordinate access to the shared database. Here is a simplified solution:

  • Mutex (for Database Access): It is a semaphore to ensure mutual exclusion when accessing the database. Both readers and writers must acquire this semaphore before accessing the database.
  • Semaphore for Readers Count: It is a semaphore or counter to keep track of the number of readers currently accessing the database. This semaphore allows multiple readers to access the database concurrently.

Algorithm:

  • Reader’s Entry: Before reading, a reader acquires the mutex semaphore, which protects the readers count. The reader increments the count; if it is the first reader, it also performs wait on the wrt semaphore to lock out writers. It then releases the mutex and reads the database.
  • Reader’s Exit: After reading, the reader reacquires the mutex and decrements the readers count. If it is the last reader to leave, it signals wrt, allowing writers to access the database. It then releases the mutex.
  • Writer’s Entry: Before writing, a writer performs wait on the wrt semaphore. This guarantees exclusive access: the writer is blocked until no readers are active and no other writer is writing.
  • Writer’s Exit: After writing, the writer signals wrt, allowing both readers and other writers to access the database.

Algorithm Code:

// Define semaphores and variables
Semaphore mutex = 1; // Binary semaphore for mutual exclusion
Semaphore wrt = 1;   // Binary semaphore for controlling write access
int readers_count = 0; // Variable to track the number of active readers
// Reader process
void reader() {
    while (true) {
        // Wait for mutual exclusion before updating readers_count
        wait(mutex);
        // Increment the number of active readers
        readers_count++;
        // If the first reader, wait for write access
        if (readers_count == 1) {
            wait(wrt);
        }
        // Release mutual exclusion
        signal(mutex);
        // Read the shared resource
        // Wait for mutual exclusion before updating readers_count
        wait(mutex);
        // Decrement the number of active readers
        readers_count--;
        // If the last reader, release write access
        if (readers_count == 0) {
            signal(wrt);
        }
        // Release mutual exclusion
        signal(mutex);
        // Perform other tasks or sleep to simulate reader behavior
    }
}
// Writer process
void writer() {
    while (true) {
        // Wait for write access
        wait(wrt);
        // Write to the shared resource
        // Release write access
        signal(wrt);
        // Perform other tasks or sleep to simulate writer behavior
    }
}
// Helper functions for wait and signal operations on semaphores
void wait(Semaphore s) {
    while (s <= 0) {
        // Block the process/thread
    }
    s--;
}
void signal(Semaphore s) {
    s++;
}

In this algorithm:

  • `reader()` represents the code for each reader, and `writer()` represents the code for each writer.
  • `mutex` is a binary semaphore used for mutual exclusion to protect the `readers_count` variable.
  • `wrt` is a binary semaphore used to control write access to the shared resource.
  • `readers_count` is a variable that tracks the number of active readers.
  • Readers acquire the `mutex` semaphore before updating `readers_count` and check whether they are the first reader to enter. If so, they wait on `wrt` to lock out writers while any reader is active. Readers release the `mutex` semaphore after updating `readers_count`.
  • Writers wait for `wrt` before writing to the shared resource and release it afterward.
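The scheme above translates directly to Python (an illustration added here, not code from the article; `shared_value` and the `observed` list are additions for demonstration): `mutex` protects `readers_count`, and `wrt` grants writers exclusive access.

```python
import threading

mutex = threading.Semaphore(1)   # protects readers_count
wrt = threading.Semaphore(1)     # exclusive access for writers
readers_count = 0
shared_value = 0                 # the shared resource
observed = []                    # values seen by readers

def reader():
    global readers_count
    mutex.acquire()
    readers_count += 1
    if readers_count == 1:       # first reader locks out writers
        wrt.acquire()
    mutex.release()
    observed.append(shared_value)  # read the shared resource
    mutex.acquire()
    readers_count -= 1
    if readers_count == 0:       # last reader readmits writers
        wrt.release()
    mutex.release()

def writer():
    global shared_value
    wrt.acquire()
    shared_value += 1            # write to the shared resource
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_value)  # 3: all writer updates applied, none lost
```

Every reader observes a consistent value between 0 and 3; which value depends on how the threads interleave, but no reader ever sees a half-finished write because writers hold `wrt` for the whole update.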

The Bounded-Buffer Problem

Definition: The Bounded-Buffer Problem, also known as the Producer-Consumer Problem, is a classical synchronization problem that highlights challenges in managing a shared, fixed-size buffer between two types of processes: producers and consumers. Producers generate data items and place them into the buffer, while consumers retrieve and process these items. The key challenge is to ensure proper synchronization to prevent issues such as buffer overflow or underflow and to avoid race conditions.

Required Data:

In the Bounded-Buffer Problem, the critical data structures include:

  • Buffer: A fixed-size shared data structure that serves as a temporary storage space for items produced by producers and consumed by consumers.
  • Producers: A set of processes or threads responsible for generating data items and placing them into the buffer
  • Consumers: Another set of processes or threads responsible for retrieving and processing data items from the buffer
  • Counters or Semaphores: To keep track of the number of items in the buffer and control access to the buffer, two counters or semaphores are typically used: one for tracking the number of filled slots (items) in the buffer and another for tracking the number of available slots in the buffer.

Solution

One common solution to the Bounded-Buffer Problem involves using semaphores to control access to the buffer and the associated counters. Here is a simplified solution:

  • Semaphore for Empty Slots: Create a counting semaphore, initialized to the buffer size, representing the number of empty slots in the buffer. A producer performs wait on this semaphore (decrementing it) before placing an item, and a consumer performs signal on it (incrementing it) after removing an item.
  • Semaphore for Full Slots: Create a counting semaphore, initialized to zero, representing the number of filled slots in the buffer. A consumer performs wait on this semaphore before removing an item, and a producer performs signal on it after placing an item.
  • Mutex (Optional): Optionally, use a mutex semaphore to ensure mutual exclusion when updating shared variables or accessing the buffer.

Algorithm:

  • Producers, before adding an item to the buffer, must perform wait on the empty-slots semaphore. If no empty slots are available, they block.
  • Once the producer has acquired an empty slot (and the mutex, if one is used), it places the item in the buffer and then signals the full-slots semaphore.
  • Consumers, before retrieving an item from the buffer, must perform wait on the full-slots semaphore. If no items are available, they block.
  • Once the consumer has acquired a full slot (and the mutex, if one is used), it removes the item from the buffer and then signals the empty-slots semaphore.
  • The mutex (if employed) ensures mutual exclusion when updating shared variables or accessing the buffer, preventing race conditions.

Algorithm Code:

// Define the size of the buffer
const int BUFFER_SIZE = 5;
// Define semaphores for synchronization
Semaphore mutex = 1; // Binary semaphore for mutual exclusion
Semaphore empty = BUFFER_SIZE; // Counting semaphore to track empty slots in the buffer
Semaphore full = 0; // Counting semaphore to track filled slots in the buffer
// Buffer to hold items
int buffer[BUFFER_SIZE];
// Producer process
void producer() {
    while (true) {
        // Produce item
        int item = produce_item();
        // Wait for an empty slot in the buffer
        wait(empty);
        // Wait for mutex to ensure exclusive access to the buffer
        wait(mutex);
        // Place item into the buffer
        insert_item(item);
        // Signal mutex to release access to the buffer
        signal(mutex);
        // Signal that the buffer has a new item
        signal(full);
    }
}
// Consumer process
void consumer() {
    while (true) {
        // Wait for a filled slot in the buffer
        wait(full);
        // Wait for mutex to ensure exclusive access to the buffer
        wait(mutex);
        // Remove item from the buffer
        int item = remove_item();
        // Signal mutex to release access to the buffer
        signal(mutex);
        // Signal that the buffer has an empty slot
        signal(empty);
        // Consume item
        consume_item(item);
    }
}
// Helper functions for wait and signal operations on semaphores
void wait(Semaphore s) {
    while (s <= 0) {
        // Block the process/thread
    }
    s--;
}
void signal(Semaphore s) {
    s++;
}

In this algorithm:

  • `producer()` represents the code for the producer process.
  • `consumer()` represents the code for the consumer process.
  • `produce_item()` and `consume_item()` are placeholder functions representing the production and consumption of items.
  • `insert_item(item)` and `remove_item()` are placeholder functions representing the insertion and removal of items from the buffer.
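The producer-consumer pseudocode above runs essentially unchanged in Python (an illustration added here; `ITEMS`, the `deque` buffer, and the `consumed` list are additions for demonstration): `empty` starts at the buffer size, `full` starts at zero, and `mutex` guards the buffer itself.

```python
import threading
from collections import deque

BUFFER_SIZE = 5
ITEMS = 20

buffer = deque()                            # the bounded buffer
mutex = threading.Semaphore(1)              # guards the buffer
empty = threading.Semaphore(BUFFER_SIZE)    # counts empty slots
full = threading.Semaphore(0)               # counts filled slots
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()          # wait for an empty slot
        mutex.acquire()
        buffer.append(item)      # insert_item(item)
        mutex.release()
        full.release()           # signal: one more filled slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # wait for a filled slot
        mutex.acquire()
        item = buffer.popleft()  # remove_item()
        mutex.release()
        empty.release()          # signal: one more empty slot
        consumed.append(item)    # consume_item(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(consumed)  # [0, 1, ..., 19]: all items delivered in order, none lost
```

The producer can never overrun the five-slot buffer because `empty` blocks it, and the consumer can never read from an empty buffer because `full` blocks it; the mutex only needs to be held for the brief append or pop.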

Conclusion

Semaphores are a fundamental concept in operating systems, providing a powerful mechanism for synchronizing processes and managing shared resources. By carefully implementing and using semaphores, operating systems can achieve efficient resource allocation, prevent race conditions, and ensure the overall stability and reliability of the system. As technology continues to advance, the role of semaphores remains essential in facilitating the smooth operation of complex concurrent systems.

About the Author

Sahil Mattoo, a Senior Software Engineer at Eli Lilly and Company, is an accomplished professional with 14 years of experience in languages such as Java, Python, and JavaScript. Sahil has a strong foundation in system architecture, database management, and API integration.