Operating Systems

June 16, 2025

What is an Operating System?

An operating system (OS) serves as an intermediary between users and hardware. It manages resources, allocates them efficiently, and provides a platform for applications to run.

Key aspects of an OS include:

  1. Intermediary Role: The OS acts as a bridge between users and hardware, facilitating communication and instruction execution.
  2. Resource Manager: The OS allocates resources such as CPU time, memory, and system buses among various processes.
  3. Platform Provider: The OS provides a platform on which applications can be installed and executed.

Goals and Functions

The primary goal of an operating system is user convenience; the secondary goal is efficient utilization of the hardware. Functions are the specific tasks the OS performs to achieve these goals, including process management, memory management, input/output device management, file management, network management, and security.

Major Components of an Operating System

The major components include the kernel (the core that manages the CPU, memory, and devices), system programs and utilities, the user interface (shell or GUI), and the file system.

Types of Operating Systems

Batch Operating System

Early computers used batch operating systems, where users prepared jobs consisting of programs, control information, and input data. These jobs were processed sequentially.

Spooling

Spooling (Simultaneous Peripheral Operations Online) involves sending data to input/output devices via secondary memory, reducing the load on the processor.

Example: Print spooling allows the CPU to continue processing while the printer handles the printing task in the background.

Multiprogramming

Multiprogramming allows multiple processes to reside in memory simultaneously, improving CPU utilization by switching between processes when one is waiting for an event.

Note: Multiprogramming does not mean executing multiple processes in parallel. The CPU executes one process at a time, but switches between them rapidly.

Multitasking (Time Sharing)

Multitasking is a logical extension of multiprogramming, where the CPU rapidly switches between multiple tasks, giving users the impression of parallel execution.

Multiprocessing

Multiprocessing involves using two or more CPUs within the same computer, enabling true parallel execution of multiple processes.

Symmetric vs. Asymmetric Multiprocessing

In symmetric multiprocessing (SMP), all processors are peers and each can run operating system code and user processes. In asymmetric multiprocessing, a master processor controls the system and assigns work to the other processors.

Real-Time Operating Systems

Real-time operating systems (RTOS) are designed for applications with strict time constraints. They are categorized into hard real-time and soft real-time systems.

Distributed Operating Systems

Distributed operating systems manage a collection of independent networked computers, allowing them to work together on tasks.

Advantages include resource sharing across machines, computation speedup by distributing work, higher reliability (one machine failing does not stop the others), and communication between users on different machines.

Structure of Operating Systems

Operating system structures include the monolithic design, the layered approach, microkernels, and modular systems built from loadable kernel modules.

User Interface

The user interface facilitates communication between users and the operating system. Common types include the command-line interface (CLI), where users type commands into a shell, and the graphical user interface (GUI), where users interact through windows, icons, and menus.

System Calls

System calls provide an interface for user programs to request services from the operating system kernel. They are essential for performing tasks that require privileged access.

User Mode vs. Kernel Mode

User programs run in user mode, with restricted access to hardware and memory. The kernel runs in kernel mode, with full privileges. A system call traps from user mode into kernel mode; when the kernel finishes the requested service, control returns to user mode.
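As a small illustration, Python's os.write is a thin wrapper over the write() system call: calling it traps into the kernel, which performs the privileged I/O before returning control to user mode. The helper name below is my own, not a standard API:

```python
import os

# os.write wraps the write() system call: the process traps from user
# mode into kernel mode, the kernel performs the privileged I/O on the
# file descriptor, and control then returns to user mode.
def write_message(fd: int, message: str) -> int:
    """Send a message through a direct system-call wrapper.

    Returns the number of bytes the kernel actually wrote."""
    return os.write(fd, message.encode())

# Example: write_message(1, "hello\n") writes to standard output.
```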

Process Management

What is a Process?

A process is a program in execution. It is an active entity that requires resources such as CPU time, memory, and registers. A program, on the other hand, is a passive entity consisting of instructions stored on disk.

Process Control Block (PCB)

The Process Control Block (PCB) is a data structure that stores metadata about a process, including its state, program counter, registers, and memory limits.
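A minimal sketch of the bookkeeping a PCB carries; the field names are illustrative, not taken from any particular kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block: per-process metadata."""
    pid: int                                        # process identifier
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register values
    memory_limits: tuple = (0, 0)                   # (base, limit) of the address space

# The kernel would update this structure on a context switch:
pcb = PCB(pid=42)
pcb.state = "ready"
```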

Process States

A process goes through various states during its lifecycle:

  1. New: The process is being created.
  2. Ready: The process is waiting in the ready queue to be assigned to the CPU.
  3. Running: Instructions are being executed on the CPU.
  4. Waiting: The process is waiting for an event such as I/O completion.
  5. Terminated: The process has finished execution.

Schedulers

Schedulers are programs that manage the movement of processes between different states. Types of schedulers include:

  1. Long-Term Scheduler: Selects which jobs enter the system, controlling the degree of multiprogramming.
  2. Short-Term (CPU) Scheduler: Selects which ready process runs next on the CPU.
  3. Medium-Term Scheduler: Swaps processes out of and back into memory to reduce the load.

Dispatcher

The dispatcher is a module that gives control of the CPU to the process selected by the short-term scheduler. It involves context switching and transitioning to user mode.

CPU-Bound vs. I/O-Bound Processes

CPU-bound processes spend most of their time computing, while I/O-bound processes spend most of their time waiting for input/output. A good scheduling mix of both keeps the CPU and the devices busy.

Context Switch

A context switch is the process of saving the state of one process and loading the state of another process.

CPU Scheduling

CPU scheduling is the process of determining which process in the ready queue is allocated to the CPU.

Scheduling Criteria

Common criteria are CPU utilization, throughput, turnaround time, waiting time, and response time.

Preemptive vs. Non-Preemptive Scheduling

In preemptive scheduling, the CPU can be taken away from a running process, for example when its time quantum expires. In non-preemptive scheduling, a process keeps the CPU until it terminates or blocks for I/O.

Scheduling Algorithms

First-Come, First-Served (FCFS)

FCFS is the simplest scheduling algorithm, where processes are executed in the order they arrive. It is non-preemptive.

Advantages:

  1. Simple to understand and implement.
  2. Fair in the sense that processes run in arrival order.

Disadvantages:

  1. Average waiting time is often long.
  2. Convoy effect: short processes get stuck waiting behind one long process.

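The timing can be sketched in a few lines; the function below computes per-process waiting times under FCFS (processes given in arrival order):

```python
def fcfs_waiting_times(arrival, burst):
    """Waiting time of each process under FCFS scheduling."""
    time, waiting = 0, []
    for at, bt in zip(arrival, burst):
        time = max(time, at)        # CPU may sit idle until the job arrives
        waiting.append(time - at)   # time spent in the ready queue
        time += bt                  # non-preemptive: run to completion
    return waiting

# Three processes arriving at t=0,1,2 with bursts 5,3,8:
# fcfs_waiting_times([0, 1, 2], [5, 3, 8]) -> [0, 4, 6]
```

Here the average waiting time is (0 + 4 + 6) / 3 ≈ 3.3 time units.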
Shortest Job First (SJF)

SJF selects the process with the shortest burst time for execution. It can be preemptive or non-preemptive.

Advantages:

  1. Gives the minimum average waiting time among scheduling algorithms.

Disadvantages:

  1. Burst times must be known or estimated in advance.
  2. Long processes may starve if short jobs keep arriving.

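A sketch of the non-preemptive variant: among the jobs that have already arrived, always run the one with the shortest burst:

```python
def sjf_waiting_times(arrival, burst):
    """Waiting time of each process under non-preemptive SJF."""
    n = len(arrival)
    done, time, waiting = [False] * n, 0, [0] * n
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        if not ready:  # CPU idles until the next pending job arrives
            time = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        i = min(ready, key=lambda j: burst[j])  # shortest available burst
        waiting[i] = time - arrival[i]
        time += burst[i]
        done[i] = True
    return waiting

# sjf_waiting_times([0, 2, 4], [7, 4, 1]) -> [0, 6, 3]:
# the 1-unit job arriving at t=4 jumps ahead of the 4-unit job.
```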
Priority Scheduling

Priority scheduling assigns a priority to each process, and the process with the highest priority is executed first. It can be preemptive or non-preemptive.

Advantages:

  1. Important processes are served first.

Disadvantages:

  1. Low-priority processes may starve; aging (gradually raising a waiting process's priority) mitigates this.

Round Robin

Round Robin allocates a fixed time quantum to each process, ensuring fair distribution of CPU time. It is preemptive.

Advantages:

  1. Fair allocation of CPU time and good response time for interactive users.

Disadvantages:

  1. Performance depends on the time quantum: too small causes excessive context-switch overhead, too large degenerates into FCFS.

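The quantum mechanics can be sketched as follows (all processes assumed to arrive at t=0):

```python
from collections import deque

def round_robin_waiting_times(burst, quantum):
    """Waiting time of each process under Round Robin."""
    remaining = list(burst)
    finish = [0] * len(burst)
    ready = deque(range(len(burst)))
    time = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # one quantum, or less if finishing
        time += run
        remaining[i] -= run
        if remaining[i]:
            ready.append(i)                # preempted: back of the ready queue
        else:
            finish[i] = time
    # waiting = turnaround - burst; turnaround = finish time since arrival is t=0
    return [f - b for f, b in zip(finish, burst)]

# round_robin_waiting_times([5, 3, 8], 2) -> [7, 6, 8]
```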
Multilevel Queue Scheduling

Multilevel queue scheduling involves maintaining multiple ready queues, each with its own scheduling algorithm.

Multilevel Feedback Queue Scheduling

Multilevel feedback queue scheduling allows processes to move between different queues based on their behavior.

Process Synchronization

Process synchronization involves coordinating the execution of multiple processes to ensure data consistency and avoid race conditions.

Race Condition

A race condition occurs when the result of a computation depends on the particular order (interleaving) in which multiple processes access shared data.

Critical Section

A critical section is a section of code where a process accesses shared resources.

Critical Section Problem

The critical section problem arises when multiple processes access shared resources simultaneously. Solutions must satisfy mutual exclusion, progress, and bounded waiting.

Requirements for a Solution to the Critical Section Problem

  1. Mutual Exclusion: At most one process may be inside its critical section at a time.
  2. Progress: If no process is in its critical section, the choice of which process enters next cannot be postponed indefinitely.
  3. Bounded Waiting: There is a limit on how many times other processes may enter their critical sections after a process has requested entry.

Peterson's Solution

Peterson's solution is a classic solution to the critical section problem for two processes. It uses a shared variable turn to indicate which process can enter the critical section, and a shared array flag to indicate whether a process wants to enter the critical section.
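A sketch of the logic using Python threads. Caveat: the real algorithm assumes sequentially consistent shared memory; this simulation illustrates the protocol rather than guaranteeing hardware-level correctness:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to defer to
counter = 0             # shared resource the critical section updates

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(5000):
        flag[i] = True                       # announce intent to enter
        turn = other                         # politely yield the turn
        while flag[other] and turn == other:
            pass                             # busy-wait while the other is inside
        counter += 1                         # critical section
        flag[i] = False                      # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
# With mutual exclusion intact, counter ends at exactly 10000.
```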

Semaphores

Semaphores are synchronization tools that can be used to solve the critical section problem for multiple processes. They are integer variables that can be accessed through two atomic operations: wait and signal.
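Python's threading.Semaphore exposes the two atomic operations as acquire (wait) and release (signal). A binary semaphore initialized to 1 acts as a mutex:

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: 1 = critical section free
balance = 0

def deposit(times):
    global balance
    for _ in range(times):
        mutex.acquire()          # wait(): decrement, blocking if already 0
        balance += 1             # critical section: update shared state
        mutex.release()          # signal(): increment, waking one waiter

workers = [threading.Thread(target=deposit, args=(10000,)) for _ in range(4)]
for t in workers: t.start()
for t in workers: t.join()
# All 4 * 10000 increments are preserved: balance == 40000.
```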

Classic Problems of Synchronization

Classic problems used to test synchronization schemes include the bounded-buffer (producer-consumer) problem, the readers-writers problem, and the dining philosophers problem.

Hardware Synchronization

Hardware synchronization involves using hardware-level mechanisms to solve synchronization problems.

Deadlock

Deadlock is a situation where two or more processes are blocked indefinitely, waiting for each other to release resources.

Necessary Conditions for Deadlock
  1. Mutual Exclusion: Resources are used in a mutually exclusive fashion. (Resources are non-sharable)
  2. Hold and Wait: A process holds resources while waiting for additional resources.
  3. No Preemption: Resources cannot be forcibly taken away from a process.
  4. Circular Wait: A circular chain of processes exists, where each process is waiting for a resource held by the next process in the chain.
Deadlock Handling

There are four broad approaches: prevention, avoidance, detection and recovery, and ignorance.

Deadlock Prevention Strategies

Prevention works by negating at least one of the four necessary conditions, for example requesting all resources at once (no hold and wait) or imposing a global ordering on resource requests (no circular wait).

Deadlock Avoidance: Banker's Algorithm

The Banker's Algorithm is a deadlock avoidance algorithm that requires the system to know in advance the maximum number of resources that each process may request.
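A sketch of the safety check at the heart of the algorithm: a state is safe if some order exists in which every process can acquire its remaining need and finish:

```python
def is_safe(available, max_need, allocation):
    """Banker's safety test over n processes and m resource types."""
    n, m = len(max_need), len(available)
    # Remaining need of each process = maximum claim - current allocation.
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)          # resources currently free
    finished = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):  # process i finishes, returning its resources
                    work[j] += allocation[i][j]
                finished[i] = True
                progressed = True
    return all(finished)            # safe iff every process can finish

# A commonly used example state with 3 resource types is safe:
# is_safe([3, 3, 2],
#         [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
#         [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]) -> True
```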

Deadlock Detection and Recovery

The system allows deadlocks to occur, detects them by searching for cycles in the resource-allocation (wait-for) graph, and recovers by terminating processes or preempting resources.
Deadlock Ignorance: Ostrich Algorithm

The Ostrich Algorithm is a strategy of ignoring deadlocks, based on the assumption that they are rare and the cost of preventing or detecting them is high.

Fork Command and Threading

Fork Command

The fork command creates a new process that is a copy of the existing process. This can be useful for creating multiple processes that need to perform similar tasks.
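A POSIX-only sketch using Python's os.fork, which wraps the fork() call. After the call there are two processes: the parent receives the child's pid and the child receives 0. The helper below is illustrative:

```python
import os

def fork_and_add_one(x):
    """Fork a child that computes x + 1 and returns it via its exit status."""
    pid = os.fork()
    if pid == 0:                        # child: an almost-identical copy
        os._exit(x + 1)                 # exit status (0-255) carries the result
    _, status = os.waitpid(pid, 0)      # parent: wait for and reap the child
    return os.waitstatus_to_exitcode(status)

# fork_and_add_one(41) -> 42, computed in a separate process.
```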

Threading

Threading allows a process to have multiple threads of execution. Threads share the same memory space, which makes it easier for them to communicate with each other.
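Because threads share one address space, a worker thread's writes are directly visible to the main thread with no extra plumbing:

```python
import threading

results = {}   # ordinary dict in the process's shared address space

def square(n):
    results[n] = n * n   # visible to every thread of this process

workers = [threading.Thread(target=square, args=(i,)) for i in range(5)]
for t in workers: t.start()
for t in workers: t.join()
# results -> {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```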

User-Level Threads vs. Kernel-Level Threads

User-level threads are managed by a thread library without kernel involvement; they are fast to create and switch, but one thread blocking on a system call can block the whole process. Kernel-level threads are managed by the OS; they allow true parallelism but are more expensive to create and switch.

Multithreading Models

The common models map user threads onto kernel threads as many-to-one, one-to-one, or many-to-many.

Memory Management

Memory Hierarchy

The hierarchy runs from registers and cache through main memory down to secondary storage: each level down is larger and cheaper per byte, but slower.

Locality of Reference

Locality of reference is the tendency for a process to access the same memory locations repeatedly over a short period of time.

Operating System Responsibilities

For memory, the OS keeps track of which parts are in use and by whom, decides which processes get memory and when, and allocates and reclaims space as needed.

Contiguous vs. Non-Contiguous Allocation

Contiguous Allocation

Each process occupies a single contiguous block of physical memory, described by a base and a limit register.

Allocation Algorithms

First fit places a process in the first hole large enough; best fit in the smallest adequate hole; worst fit in the largest hole.

Fragmentation

External fragmentation is free memory scattered in holes too small to use; internal fragmentation is wasted space inside an allocated block. Compaction can reduce external fragmentation.

Non-Contiguous Allocation: Paging

Paging is a memory management scheme that allows the physical address space of a process to be non-contiguous.
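Address translation splits a logical address into a page number and an offset, then maps the page to a frame through the page table. A sketch assuming 4 KiB pages:

```python
PAGE_SIZE = 4096   # assumed page size: 4 KiB

def translate(logical_address, page_table):
    """Map a logical address to a physical address via the page table."""
    page_number, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page_number]   # a missing entry would model a page fault
    return frame * PAGE_SIZE + offset

# With page 0 in frame 5 and page 1 in frame 2:
# translate(4100, {0: 5, 1: 2}) -> 8196   (page 1, offset 4 -> frame 2)
```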

Page Fault

A page fault occurs when a process tries to access a page that is not currently in main memory. The OS loads the required page from secondary storage into a free frame and restarts the faulting instruction.

Page Replacement Algorithms

When a page must be brought in and no frame is free, a replacement algorithm chooses a victim page to evict. Common choices are FIFO, Optimal (OPT), and Least Recently Used (LRU).
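A sketch of LRU, one common replacement policy: on a fault with all frames full, evict the page that has gone unused the longest:

```python
from collections import OrderedDict

def lru_page_faults(reference_string, frames):
    """Count page faults for LRU replacement with a fixed frame count."""
    memory = OrderedDict()   # resident pages, least- to most-recently used
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)        # hit: now the most recently used
        else:
            faults += 1                     # fault: page is not resident
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = True
    return faults

# lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3) -> 9
```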

Thrashing

Thrashing is a situation in which a process spends more time paging than executing, typically because too many processes are competing for too few frames.

Working-Set Model

The working-set model is a memory management technique that allocates memory to a process based on its working set, which is the set of pages that the process is actively using.

Disk Scheduling

Disk scheduling is the process of determining the order in which disk requests are serviced.

Disk Structure

A disk consists of platters divided into concentric tracks; tracks at the same position across platters form a cylinder, and each track is divided into sectors.

Disk Performance Metrics

The main metrics are seek time (moving the head to the right cylinder), rotational latency (waiting for the sector to rotate under the head), and transfer time.

Disk Scheduling Algorithms

Common algorithms are FCFS, SSTF (shortest seek time first), SCAN and C-SCAN (the elevator algorithms), and LOOK and C-LOOK.
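A sketch of SCAN, the elevator algorithm, in one common formulation: sweep toward higher cylinders, touch the disk edge, then sweep back, totalling the head movement:

```python
def scan_total_movement(requests, head, num_cylinders=200):
    """Total head movement for SCAN, sweeping upward first."""
    upward = sorted(r for r in requests if r >= head)
    downward = sorted((r for r in requests if r < head), reverse=True)
    movement, pos = 0, head
    for r in upward:                        # service requests on the way up
        movement += r - pos
        pos = r
    if downward:                            # go to the edge, then reverse
        movement += (num_cylinders - 1) - pos
        pos = num_cylinders - 1
        for r in downward:                  # service the rest on the way down
            movement += pos - r
            pos = r
    return movement

# Head at cylinder 53 on a 200-cylinder disk, a commonly used request queue:
# scan_total_movement([98, 183, 37, 122, 14, 124, 65, 67], 53) -> 331
```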

File System

File Organization

Files can be laid out on disk contiguously, as linked lists of blocks, or through index blocks that point to the data blocks.

Access Methods

The common access methods are sequential access (records processed in order) and direct (random) access (records read or written by block number).

Directory Structure

Directory organizations range from single-level and two-level directories to tree-structured and acyclic-graph directories that allow sharing.

File Protection

Access is commonly controlled with read, write, and execute permissions for the owner, group, and other users, or with access control lists.