Process synchronization coordinates how processes share resources so that concurrent access to shared data does not leave that data in an inconsistent state. It is the mechanism that ensures cooperating processes execute in a coordinated way, which is necessary for maintaining data consistency.
Process synchronization was introduced to address problems that arise when several processes execute concurrently. The next sections discuss some of these problems.
Critical Section Problem
A critical section is a piece of code that accesses shared variables and must therefore be executed as a single atomic operation. In a group of cooperating processes, only one process may be executing its critical section at any given time; the other processes must wait until it finishes before entering their own critical sections.
The Solution to the Critical Section Problem
A solution to the critical section problem must satisfy the following three conditions:
- Mutual Exclusion: Among a group of cooperating processes, only one process at a time can be inside its critical section.
- Progress: If no process is currently in its critical section and one or more processes want to enter theirs, then one of those waiting processes must be allowed to enter; the decision cannot be postponed indefinitely.
- Bounded Waiting: After a process has requested entry to its critical section, there is a bound on how many other processes may enter their critical sections before that request is granted. Once the bound is reached, the system must allow the requesting process to enter its critical section.
Synchronization Hardware
Many platforms provide hardware support for implementing critical sections. On a single-processor system, the critical section problem could be solved simply by preventing interrupts while a shared variable or resource is being modified.
This would guarantee that the current sequence of instructions runs to completion without interference. Unfortunately, this strategy is not viable on a multiprocessor system: disabling interrupts requires sending a message to every processor, and this message-passing delay slows each thread's entry into its critical section and reduces overall system efficiency.
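One common form of such hardware support is an atomic test-and-set instruction, which reads and sets a flag in a single indivisible step. Below is a minimal sketch of a spinlock built on that idea using C11's atomic_flag; the lock variable and the acquire/release function names are illustrative, not part of any standard API beyond <stdatomic.h>.

#include <stdatomic.h>

/* One shared flag acts as the lock. atomic_flag guarantees that
 * test-and-set is performed as a single indivisible hardware operation. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* Spin until the previous value was clear, meaning we now hold the lock. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;  /* busy-wait */
}

void release(void) {
    atomic_flag_clear(&lock_flag);   /* leave the critical section */
}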
Mutex Locks
Because the hardware-based approach is difficult to implement directly, mutex locks were developed as a strict software technique. With this method, a LOCK is acquired in the entry section over the shared resources that are read and modified inside the critical section, and it is released in the exit section. While one process holds the lock and executes its critical section, no other process can access the resource.
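As a concrete illustration, here is a minimal sketch of this entry-section/exit-section pattern using a POSIX thread mutex; the shared counter and the worker function are assumptions made up for the example.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;     /* resource used inside the critical section */

void *worker(void *arg) {
    pthread_mutex_lock(&lock);      /* entry section: acquire the LOCK */
    shared_counter++;               /* critical section */
    pthread_mutex_unlock(&lock);    /* exit section: release the LOCK */
    return NULL;
}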
Semaphores
In 1965, Dijkstra introduced a novel and important method for controlling concurrent processes: synchronizing the progress of interdependent processes through the value of a simple integer variable. This integer variable is called a semaphore. It is essentially a synchronization tool and can be accessed only through two standard atomic operations, wait and signal, denoted P(S) and V(S) respectively.
In the simplest terms, a semaphore is a variable shared by all threads that can only hold a non-negative integer value, with the wait and signal operations behaving as follows:
P(S): if S ≥ 1 then S <- S - 1
      else <block the calling process and place it on the queue>;

V(S): if <some process is blocked on the queue> then <unblock one of them>
      else S <- S + 1;
- Wait: When a thread wishes to enter a critical section of code (i.e., access a shared resource), it first tries to decrement the semaphore value. This is known as a semaphore wait (the P operation, or down operation). If the semaphore value is greater than zero, the decrement succeeds and the thread is permitted to enter the critical section. If the semaphore value is zero, the thread is blocked (put to sleep) and remains blocked until the semaphore value increases.
- Signal: When a thread leaves a critical section, it increments the semaphore value to indicate that it has released the resource. If other threads are waiting on the semaphore, one of them (often the one that has waited longest) is woken up and permitted to enter the critical section.
Properties of Semaphores
- It is simple and always holds a non-negative integer value.
- It can be implemented using a range of techniques.
- A program may have several critical sections, each guarded by its own access semaphore.
- A semaphore can allow multiple processes to enter a critical section at once if that is acceptable.
Types of Semaphores
Semaphores are mainly of two types.
- Binary Semaphore: Often referred to as a mutex, because this special kind of semaphore is used to implement mutual exclusion. A binary semaphore is initialized to 1 and only takes the values 0 and 1 while the program runs.
- Counting Semaphores: These can take any non-negative value and are used to implement bounded concurrency, for example controlling access to a resource that exists in a finite number of instances; the semaphore is initialized to the number of available instances (see the initialization sketch below).
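The practical difference between the two kinds shows up mainly in how the semaphore is initialized. Below is a small sketch using POSIX semaphores; the value 4 for the counting semaphore is an arbitrary, illustrative resource count.

#include <semaphore.h>

sem_t binary_sem;      /* behaves like a mutex: only values 0 and 1 are ever held */
sem_t counting_sem;    /* guards a pool of identical resource instances           */

void init_semaphores(void) {
    sem_init(&binary_sem, 0, 1);    /* binary semaphore starts at 1 */
    sem_init(&counting_sem, 0, 4);  /* counting semaphore starts at the number of instances */
}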
Here is a simple step-wise implementation involving the declaration and usage of a semaphore:

Shared var mutex: semaphore = 1;

Process i
begin
    ...
    P(mutex);
    execute CS;
    V(mutex);
    ...
end;
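The same pattern can be expressed with POSIX semaphores, where sem_wait plays the role of P and sem_post the role of V. The shared counter and the thread function below are illustrative assumptions.

#include <semaphore.h>

static sem_t mutex;                 /* binary semaphore guarding the critical section */
static int shared_data = 0;

void *process_i(void *arg) {
    sem_wait(&mutex);               /* P(mutex): decrement, or block while the value is 0 */
    shared_data++;                  /* execute CS */
    sem_post(&mutex);               /* V(mutex): increment and wake one waiting process */
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);         /* initialized to 1, as in the pseudocode above */
    /* ... create threads that run process_i ... */
    sem_destroy(&mutex);
    return 0;
}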
Limitations of Semaphores
A significant drawback of semaphores is priority inversion. In addition, correct use of semaphores is enforced only by programming convention, not by the compiler or the operating system, so a process can become permanently blocked if they are used improperly. Such a situation is called a deadlock. We will cover deadlocks in detail in the upcoming lessons.
Classical Problems of Synchronization
This course discusses a number of well-known synchronization problems. Semaphores can be used to solve synchronization issues beyond mutual exclusion. The classical problems listed below illustrate the process synchronization issues that arise in systems of cooperating processes.
Bounded Buffer Problem
This is the Producer-Consumer problem, in which producer and consumer processes exchange messages through a shared pool of buffers. Because the buffer pool has a finite capacity, the problem is commonly called the Bounded Buffer Problem. As a solution, two counting semaphores named full and empty are used to track the current number of full and empty buffers, respectively.
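A sketch of this solution with POSIX semaphores is shown below; the buffer size, the circular-buffer indices, and the function names are assumptions made for illustration, with empty initialized to the number of slots, full initialized to 0, and a binary mutex protecting the buffer itself.

#include <semaphore.h>

#define BUFFER_SIZE 8

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;

static sem_t empty;    /* counts empty slots  */
static sem_t full;     /* counts full slots   */
static sem_t mutex;    /* protects the buffer */

void init_bounded_buffer(void) {
    sem_init(&empty, 0, BUFFER_SIZE);   /* all slots start empty */
    sem_init(&full, 0, 0);              /* no slot starts full   */
    sem_init(&mutex, 0, 1);
}

void produce(int item) {
    sem_wait(&empty);                   /* wait for an empty slot */
    sem_wait(&mutex);
    buffer[in] = item;                  /* critical section: insert the item */
    in = (in + 1) % BUFFER_SIZE;
    sem_post(&mutex);
    sem_post(&full);                    /* announce one more full slot */
}

int consume(void) {
    sem_wait(&full);                    /* wait for a full slot */
    sem_wait(&mutex);
    int item = buffer[out];             /* critical section: remove the item */
    out = (out + 1) % BUFFER_SIZE;
    sem_post(&mutex);
    sem_post(&empty);                   /* announce one more empty slot */
    return item;
}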
The Readers-Writers Problem
In this scenario, some processes, known as readers, only read the shared data and never modify it, while other processes, known as writers, may modify the data as well as read it. The readers-writers problem has several variations.
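One classical solution, the readers-preference variant, can be sketched with two semaphores and a reader count; the variable and function names below follow the usual textbook presentation and are assumptions here, not definitions from this text.

#include <semaphore.h>

static sem_t rw_mutex;       /* held by a writer, or by the readers as a group */
static sem_t count_mutex;    /* protects read_count */
static int read_count = 0;   /* number of readers currently accessing the data */

void init_readers_writers(void) {
    sem_init(&rw_mutex, 0, 1);
    sem_init(&count_mutex, 0, 1);
}

void writer(void) {
    sem_wait(&rw_mutex);     /* exclusive access for the writer */
    /* ... modify the shared data ... */
    sem_post(&rw_mutex);
}

void reader(void) {
    sem_wait(&count_mutex);
    if (++read_count == 1)
        sem_wait(&rw_mutex); /* the first reader locks writers out */
    sem_post(&count_mutex);

    /* ... read the shared data ... */

    sem_wait(&count_mutex);
    if (--read_count == 0)
        sem_post(&rw_mutex); /* the last reader lets writers back in */
    sem_post(&count_mutex);
}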
Dining Philosophers Problem
At the heart of the dining philosophers problem is the allocation of scarce resources among a group of processes without causing deadlock or starvation. Five philosophers sit around a table with a bowl of rice at its center. When a philosopher wants to eat, they pick up two chopsticks, one from their left and one from their right. When a philosopher wants to think instead, they put both chopsticks back in their original positions on the table.
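A common semaphore-based sketch treats each chopstick as a binary semaphore; picking up the lower-numbered chopstick first is one well-known way to break the circular wait that causes deadlock. The names and the asymmetric ordering below are illustrative choices, not the only possible solution.

#include <semaphore.h>

#define N 5

static sem_t chopstick[N];   /* one binary semaphore per chopstick */

void init_chopsticks(void) {
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
}

void philosopher(int i) {
    int left  = i;
    int right = (i + 1) % N;

    for (;;) {
        /* think ... */

        /* Always pick up the lower-numbered chopstick first, so the five
         * philosophers cannot all hold one chopstick and wait forever. */
        if (left < right) {
            sem_wait(&chopstick[left]);
            sem_wait(&chopstick[right]);
        } else {
            sem_wait(&chopstick[right]);
            sem_wait(&chopstick[left]);
        }

        /* eat ... */

        sem_post(&chopstick[left]);
        sem_post(&chopstick[right]);
    }
}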