In an operating system, a process in the ready state must be scheduled before it can execute. The prime objectives of process scheduling are to minimize response time and to keep the processor busy: the scheduler allocates the processor to a ready process, deallocating it from a running process when necessary, so that processor idle time is minimized.
Initially, all processes entering the system are kept in the job queue. Processes in the ready state are placed in the ready queue, where each waits until it is selected for execution. Processes waiting for an I/O device are placed in device queues (each device has its own queue).
Once the process has been assigned to the CPU and is executing, one of the following events can occur. The process could
- issue an I/O request, and then be placed in the I/O queue.
- create a new subprocess and wait for its termination.
- be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
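The queue movements described above can be sketched in a few lines. This is a hypothetical illustration, assuming simple FIFO queues and made-up process names (P1, P2, P3); a real kernel manipulates PCB lists, not strings:

```python
from collections import deque

# Illustrative sketch: tracking which queue a process sits in as the
# events above occur. Process names and queue contents are invented.
ready_queue = deque(["P1", "P2", "P3"])
io_queue = deque()

# Dispatch: the scheduler picks the process at the head of the ready queue.
running = ready_queue.popleft()     # P1 now holds the CPU

# Event 1: the running process issues an I/O request.
io_queue.append(running)            # P1 waits in the I/O queue
running = ready_queue.popleft()     # P2 is dispatched next

# Event 3: an interrupt forcibly removes P2 from the CPU.
ready_queue.append(running)         # P2 goes back to the ready queue
running = ready_queue.popleft()     # P3 is dispatched

print(running)               # P3
print(list(ready_queue))     # ['P2']
print(list(io_queue))        # ['P1']
```

When P1's I/O completes, it would move from the I/O queue back to the tail of the ready queue, continuing the cycle described below.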
Scheduling techniques are categorized as non-preemptive and preemptive. In non-preemptive scheduling, a running process cannot be sent back to the ready queue forcibly; it keeps the CPU until it terminates or blocks. In preemptive scheduling, a running process can be sent back to the ready queue when the processor must be allocated to another ready process.
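The difference is easiest to see by comparing one algorithm of each kind. The sketch below contrasts non-preemptive first-come-first-served (FCFS) with preemptive round robin; the process names and burst times are invented for illustration:

```python
from collections import deque

def fcfs(bursts):
    """Non-preemptive: each process runs to completion in arrival order."""
    t, finish = 0, {}
    for pid, burst in bursts:
        t += burst
        finish[pid] = t
    return finish

def round_robin(bursts, quantum):
    """Preemptive: a process exceeding the quantum is sent back to the ready queue."""
    t, finish = 0, {}
    ready = deque(bursts)
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            ready.append((pid, remaining - run))   # preempted, re-queued
        else:
            finish[pid] = t                        # terminated
    return finish

print(fcfs([("A", 6), ("B", 2)]))            # {'A': 6, 'B': 8}
print(round_robin([("A", 6), ("B", 2)], 2))  # {'B': 4, 'A': 8}
```

Under FCFS the short process B must wait for A to finish, completing at time 8; under round robin, preemption lets B finish at time 4, at the cost of extra context switches for A.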
A process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which point it is removed from all queues and its PCB and allocated resources are deallocated.
Types of Schedulers
Let's discuss all the different types of Schedulers in detail:
Long-Term Scheduler: The long-term scheduler (also called the job scheduler) decides which programs are admitted into the job queue, selects processes from it, and loads them into main memory for execution, thereby controlling the degree of multiprogramming.
Medium-Term Scheduler: This scheduler removes the processes from memory to reduce the degree of multiprogramming. Later, the processes are reloaded into memory to continue the execution of the process where it left off. It mainly deals with swapping in and out of the processes to and from main memory.
Short-Term Scheduler (CPU Scheduler): This scheduler improves CPU utilization by selecting, from the ready queue, the process to which the CPU is allocated next. It runs very frequently, and the CPU is allocated to each process for only a short time.
Context Switching
Running multiple processes on a single-CPU system requires the following steps whenever the CPU is deallocated from one process and allocated to another:
- save the state of the old process, and
- load the saved state of the new process.
The context of a process (represented by its PCB) contains the values of the CPU registers, the status of the process, and memory-management information. When a context switch happens, the kernel stores the context of the previous process in its PCB and loads the saved context of the new process that is set to execute.
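The save-and-restore step can be sketched with a toy "CPU" holding two registers and PCBs modeled as dictionaries. The field names and register values here are illustrative, not those of any real kernel:

```python
# Minimal sketch of a context switch, assuming a toy CPU with two
# registers (program counter and accumulator). All values are invented.
cpu = {"pc": 0, "acc": 0}

pcb_old = {"pid": 1, "context": {}}
pcb_new = {"pid": 2, "context": {"pc": 300, "acc": 7}}  # saved earlier

def context_switch(cpu, pcb_out, pcb_in):
    pcb_out["context"] = dict(cpu)   # store old process's registers in its PCB
    cpu.update(pcb_in["context"])    # load new process's saved registers

cpu.update({"pc": 120, "acc": 5})    # process 1 has been running for a while
context_switch(cpu, pcb_old, pcb_new)

print(cpu)                  # {'pc': 300, 'acc': 7}
print(pcb_old["context"])   # {'pc': 120, 'acc': 5}
```

Process 2 resumes exactly where its saved context left off, while process 1's registers sit safely in its PCB until it is dispatched again.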
Since the system accomplishes nothing meaningful during a context switch, the time spent doing so is pure overhead. Its cost varies from machine to machine, depending on memory speed, the number of registers that must be copied, and the presence of special instructions; typical times range from about 1 to 1000 microseconds.
Categories of Scheduling Algorithms
Different environments require different scheduling strategies, because different application domains and types of operating systems have different objectives, and the scheduler's optimization goal varies accordingly. Three distinct contexts worth differentiating are
- Batch : In batch systems, there are no users eagerly awaiting prompt responses at their terminals. Therefore, nonpreemptive algorithms, or preemptive algorithms with lengthy time intervals for each process, are frequently deemed appropriate. This strategy minimizes context switching, resulting in enhanced speed.
- Interactive : In an environment with interactive users, preemption is essential to keep one process from hogging the CPU and denying service to the others. Even if no process intended to run forever, one process might, due to a program bug, shut out all the others indefinitely. Preemption is needed to prevent this behavior.
- Real time : In systems with real-time restrictions, preemption is occasionally unnecessary due to the processes' awareness that they may not execute for extended durations and typically complete their tasks promptly before entering a blocked state. Real-time systems exclusively execute programs that are specifically designed to advance the current application, distinguishing them from interactive systems. Interactive systems are versatile and capable of executing arbitrary programs that may not be cooperative or even malevolent.
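As a small illustration of a real-time dispatch decision, the sketch below picks the task with the earliest deadline, following the well-known earliest-deadline-first (EDF) policy. The task names and deadlines are invented:

```python
# Hypothetical sketch: choosing which real-time task to run next using
# earliest-deadline-first (EDF). Tasks are (name, deadline-in-ms) pairs.
tasks = [("sensor", 50), ("log", 200), ("control", 20)]

def edf_pick(tasks):
    """Return the task whose deadline is nearest."""
    return min(tasks, key=lambda t: t[1])

print(edf_pick(tasks))  # ('control', 20)
```

Because real-time tasks typically run briefly and then block, a simple deadline-based pick like this is often enough, and preemption plays a smaller role than in interactive systems.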