Scheduling and Resource Management in Operating Systems

I will explain this topic in a very simple and clear way.
Modern computer systems consist of processors, memory, storage disks, terminals, network interfaces, printers, and other devices. One of the functions of the operating system is to provide an orderly and well-controlled allocation of the processors, memory, and other resources among the programs or applications that may be competing for them.
What is a Resource?
A resource can be a hardware component of the computer or a piece of information. A typical example of a resource is a printer. For some resources, a number of identical instances may be provided; an example is a computer with two or three CD drives. In such a situation, when several copies of the resource are available, any of them can be used to satisfy any request for that resource.
The sequence of events required to use a resource is as follows:
  • Request for the resource
  • Use the resource
  • Release the resource
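The three steps above can be sketched in Python, using a lock to stand in for a single-instance resource such as a printer (the names `printer`, `print_job`, and the job list are purely illustrative):

```python
import threading

# A hypothetical single-instance resource (e.g. a printer),
# modelled with a lock: only one process may hold it at a time.
printer = threading.Lock()
completed = []

def print_job(job):
    printer.acquire()          # 1. Request the resource (blocks until granted)
    try:
        completed.append(job)  # 2. Use the resource
    finally:
        printer.release()      # 3. Release the resource

print_job("report.pdf")
print_job("photo.png")
```

The `try`/`finally` mirrors good practice with real resources: the release step runs even if using the resource fails.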
The work of the scheduler is to decide:
  • How long a process may hold a resource
  • How long a process executes
  • In which order processes execute
The scheduler is a very important part of the operating system. It performs these tasks using scheduling algorithms. We will now look at a number of scheduling algorithms employed by the scheduler in allocating and controlling resource usage.
Scheduling Algorithms
There are six scheduling algorithms used in modern operating systems that we are going to examine.
1. Round Robin Scheduling
In this algorithm, each process is assigned a time slot or interval known as a quantum. The process runs within this time interval. If the process has still not completed at the end of the interval, the CPU is preempted and given to the next process. That process runs within its own quantum, and so on to the next, until control comes back to the first. If a process completes execution or is blocked before its quantum is exhausted, a process switch (context switch) is done and the next process takes over.
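A minimal round-robin simulation in Python (the function name and the example burst times are my own illustration, not taken from any particular operating system):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling.

    burst_times: {process_name: CPU time still needed}
    Returns the order in which processes finish.
    """
    queue = deque(burst_times.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)  # completes within its quantum
        else:
            # quantum exhausted: preempt and move to the back of the queue
            queue.append((name, remaining - quantum))
    return finished

order = round_robin({"A": 5, "B": 2, "C": 8}, quantum=3)
```

Here B finishes first because it fits inside a single quantum, while A and C are preempted and rejoin the back of the queue.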
2. Priority Scheduling
In this algorithm, each process is assigned a priority, and the runnable process with the highest priority is allowed to run. After it completes, the next runnable process with the highest priority is allowed to run, and so on. There is no time slot. If a process is taking too much time to run, the scheduler may decrease its priority with each clock tick. This may cause its priority to drop below that of the next process; when this happens, a process switch occurs and the next process takes over.
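As a sketch, selecting the highest-priority runnable process can look like this (the process names and priority values are illustrative; real schedulers also decay or age priorities over time, which this sketch omits):

```python
def priority_schedule(processes):
    """Dispatch processes highest-priority first.

    processes: {name: priority}; a larger number means higher priority.
    Returns names in the order they would be dispatched.
    """
    order = []
    remaining = dict(processes)
    while remaining:
        # pick the runnable process with the highest priority
        name = max(remaining, key=remaining.get)
        order.append(name)
        del remaining[name]  # process ran to completion
    return order

order = priority_schedule({"editor": 2, "compiler": 1, "daemon": 3})
```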
3. Multiple Queues
In this algorithm, priority classes are set up. Processes in the highest class are run for one quantum. Processes in the next highest class are run for two quanta, and so on. Whenever a process uses up all of its quanta, it is moved down one class.
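A rough sketch of the scheme, assuming the allotment doubles with each class (1, 2, 4, ... quanta, as in Tanenbaum's CTSS example) and that a demoted process keeps its remaining run time; the names and example values are my own:

```python
from collections import deque

def multilevel_queues(burst_times, base_quantum=1, levels=3):
    """Sketch of multiple-queue scheduling with demotion.

    Class 0 gets 1 quantum, class 1 gets 2, class 2 gets 4, ...
    A process that uses up its allotment moves down one class.
    Returns the order in which processes finish.
    """
    queues = [deque() for _ in range(levels)]
    for name, t in burst_times.items():
        queues[0].append((name, t))  # everyone starts in the top class
    finished = []
    while any(queues):
        # always serve the highest non-empty class first
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        allotment = base_quantum * (2 ** level)
        if remaining <= allotment:
            finished.append(name)
        else:
            # allotment used up: demote one class (bottom class is a floor)
            lower = min(level + 1, levels - 1)
            queues[lower].append((name, remaining - allotment))
    return finished

order = multilevel_queues({"A": 4, "B": 1})
```

Short jobs like B finish in the top class, while longer jobs like A sink to classes with larger but less frequent allotments.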
4. Shortest Job First( SJF)
The SJF algorithm gives priority to the job with the shortest execution time. This algorithm is applied when the run time of each process is known beforehand.
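Since the run times are known in advance, SJF reduces to sorting jobs by run time; a minimal sketch (job names and times are illustrative):

```python
def shortest_job_first(run_times):
    """SJF: dispatch jobs in order of known run time, shortest first.

    run_times: {job_name: known run time}
    Ties keep their original (insertion) order, since Python's sort is stable.
    """
    return sorted(run_times, key=run_times.get)

order = shortest_job_first({"A": 8, "B": 4, "C": 2, "D": 4})
```

Running the shortest jobs first minimises the average waiting time when all jobs are available at once.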
5. Policy-Driven Scheduling
In this algorithm, a policy is created that defines the amount of CPU time assigned to each process. For example, a policy may be defined that:
“if N users are logged on, then each user will receive 1/N of the CPU time”.
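Such a policy is easy to express directly. A sketch of the 1/N rule above (the user names are hypothetical):

```python
def cpu_share(users_logged_on):
    """Policy: with N users logged on, each user receives 1/N of the CPU."""
    n = len(users_logged_on)
    return {user: 1.0 / n for user in users_logged_on}

shares = cpu_share(["alice", "bob", "carol", "dave"])  # each gets 1/4
```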
6. Two-Level Scheduling
Here, two scheduler levels are maintained: a higher-level scheduler and a lower-level scheduler.
This algorithm is used when there is not enough memory space and some of the runnable processes have to be kept on disk. The higher-level scheduler takes responsibility for swapping runnable processes between disk and memory, while the lower-level scheduler picks among the processes that are currently in memory.
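A toy simulation of the two levels, assuming a fixed number of memory slots and a simple rotation policy for the higher-level scheduler (all names, parameters, and the rotation rule are my own illustration):

```python
from collections import deque

def two_level_schedule(processes, memory_slots, ticks):
    """Sketch of two-level scheduling.

    Only memory_slots processes fit in memory at once; the rest wait on
    disk. The higher-level scheduler periodically rotates a process
    between disk and memory; the lower-level scheduler round-robins
    among resident processes only. Returns one dispatched name per tick.
    """
    on_disk = deque(processes)
    in_memory = deque()
    dispatched = []
    for tick in range(ticks):
        # Higher-level scheduler: every memory_slots ticks, swap one
        # process in (and one out if memory is full) so that every
        # process eventually becomes resident.
        if tick % memory_slots == 0 and on_disk:
            if len(in_memory) >= memory_slots:
                on_disk.append(in_memory.popleft())  # swap out to disk
            in_memory.append(on_disk.popleft())      # swap in from disk
        # Lower-level scheduler: round-robin among in-memory processes.
        p = in_memory.popleft()
        dispatched.append(p)
        in_memory.append(p)
    return dispatched

order = two_level_schedule(["A", "B", "C"], memory_slots=2, ticks=8)
```

Note that the lower-level scheduler never sees processes that are on disk; only the higher-level scheduler's swaps bring them into view.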
For more details on resource management and scheduling, see the book “Modern Operating Systems” by Andrew Tanenbaum.
I hope this has been informative. Do leave a comment to let me know your views.