Definition
IO Management
IO management is the operating system's task of mediating the interaction between the processor and external peripheral devices. It provides a uniform interface to the hardware while accommodating the vast diversity in device characteristics.
Categories of IO Devices
Peripheral devices can be grouped into three broad categories:
- Human-Readable: Used for communication with the user (e.g., keyboards, displays, printers).
- Machine-Readable: Used for electronic communication and data storage (e.g., disk drives, sensors, actuators).
- Communication: Used for exchanging data with remote devices (e.g., network interfaces, modems).
Characteristics and Diversity
Managing IO is often considered the “messiest” aspect of OS design due to extreme variations in:
- Data Rate: Differences of several orders of magnitude (e.g., keyboard vs. Gigabit Ethernet).
- Application: The context in which a device is used (e.g., using a disk for a file system vs. virtual memory swap space).
- Control Complexity: The level of sophistication required to manage the device (e.g., simple polling vs. complex interrupt handling).
- Unit of Transfer: Data may be transferred as a stream of bytes (character-stream devices) or in fixed-size blocks (block-oriented devices).
- Data Representation: Differences in byte order, encoding, or parity checks.
- Error Handling: The types of errors that occur and how they are reported or recovered from.
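The control-complexity point can be made concrete with a minimal polling loop. The device below is simulated by a thread that flips a ready flag after a delay; the class and names are illustrative stand-ins, not a real driver API:

```python
import threading
import time

class SimulatedDevice:
    """Toy device: 'hardware' sets the ready flag after a short delay."""
    def __init__(self):
        self.ready = False
        self.data = None

    def start_io(self):
        def complete():
            time.sleep(0.05)       # simulated transfer time
            self.data = b"payload"
            self.ready = True      # status-register bit flips
        threading.Thread(target=complete).start()

def poll_for_data(device):
    """Programmed IO: the CPU busy-waits on the status flag."""
    device.start_io()
    while not device.ready:        # wastes CPU cycles until completion
        pass
    return device.data

print(poll_for_data(SimulatedDevice()))
```

The busy-wait is exactly what interrupt-driven IO avoids: instead of burning cycles in the loop, the process would block and the device would signal completion via an interrupt.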
Design Objectives
The primary goals of the IO subsystem are:
- Efficiency: IO is often the bottleneck of a system. Techniques like buffering and disk scheduling are used to bridge the speed gap between the CPU and peripherals.
- Generality (Flexibility): Hiding device-specific details behind a uniform interface (e.g., using open, close, read, write for all devices) to simplify application development and device replacement.
- Reliability: Ensuring data integrity and handling hardware failures gracefully.
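The generality goal is visible on POSIX-style systems, where the same system calls operate on regular files and device files alike. A minimal sketch (the /dev/zero and /dev/null paths assume a Unix-like system):

```python
import os

def copy_bytes(src_path, dst_path, count):
    """The same open/read/write/close interface works regardless of
    what the paths name: a regular file, a device node, a pipe, ..."""
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY)
    data = os.read(src, count)      # read() on any readable descriptor
    written = os.write(dst, data)   # write() on any writable descriptor
    os.close(src)
    os.close(dst)
    return written

# Reading from one device file and writing to another goes through
# exactly the same code path a pair of regular files would:
print(copy_bytes("/dev/zero", "/dev/null", 16))
```

Swapping either path for an ordinary file changes nothing in the caller, which is precisely what simplifies application development and device replacement.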
IO Synchronisation and Blocking
IO operations can be categorised based on how they interact with the calling process’s execution flow.
Blocking vs. Non-blocking IO
- Blocking IO: The calling process is moved to the Blocked state and suspended until the IO operation completes.
- Non-blocking IO: The system call returns immediately without blocking the process. The return status indicates whether the operation is finished or how many bytes were transferred.
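The difference can be demonstrated with a pipe whose read end is switched to non-blocking mode: with no data available, a blocking read would suspend the process, while the non-blocking variant fails immediately with EAGAIN (surfaced in Python as BlockingIOError). A sketch using POSIX semantics via the os module:

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)      # switch the read end to non-blocking

# Non-blocking read with no data: the call returns at once with an
# error status instead of moving the process to the Blocked state.
try:
    os.read(r, 64)
except BlockingIOError:
    print("no data yet, process keeps running")

os.write(w, b"hello")          # the peer/device produces data
print(os.read(r, 64))          # now the read succeeds: b'hello'
```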
Synchronous vs. Asynchronous IO
- Synchronous IO: The process is conceptually tied to the completion of the IO. Even if it is non-blocking (polling), the process is actively involved in checking for completion.
- Asynchronous IO: The process triggers an IO request and continues execution. The OS notifies the process (e.g., via a signal or callback) only once the entire operation is complete.
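Asynchronous IO can be sketched with asyncio: the process submits a request, continues with other work, and a completion callback fires only once the operation finishes. The slow_read coroutine is a stand-in for a real device transfer, not an actual driver interface:

```python
import asyncio

async def slow_read():
    """Stand-in for a device transfer that takes a while."""
    await asyncio.sleep(0.05)
    return b"sector data"

def on_complete(task):
    # The OS-style notification: runs only once the IO is done.
    print("IO finished:", task.result())

async def main():
    task = asyncio.create_task(slow_read())  # trigger the IO request
    task.add_done_callback(on_complete)      # register the notification
    for i in range(3):                       # keep executing meanwhile
        print("doing other work", i)
        await asyncio.sleep(0.02)
    await task                               # ensure completion before exit

asyncio.run(main())
```

The interleaving in the output shows the defining property: "doing other work" lines appear before the completion notification arrives.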
IO Request Handling Sequence
A typical synchronous IO request follows these steps:
- User Request: The application issues a system call (e.g., read()).
- Kernel Check: The IO subsystem checks if the request can be satisfied from the cache (e.g., disk cache).
- Driver Initiation: If not, the OS blocks the process and sends commands to the device driver.
- Controller Command: The driver configures the device controller to start the transfer.
- Hardware Execution: The device performs the physical IO.
- Completion Interrupt: The controller generates an interrupt when finished.
- Data Transfer: The interrupt handler copies data into the kernel buffer and then to the user process space.
- Unblock: The OS moves the process back to the Ready state.
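The steps above can be condensed into a toy simulation. The cache, device, and interrupt here are all stand-ins (a dict, a thread, and an Event), chosen purely to make the control flow concrete:

```python
import threading

cache = {"block7": b"cached!"}   # toy disk cache
io_done = threading.Event()      # stand-in for the completion interrupt
kernel_buffer = {}

def device_transfer(block):
    """Hardware execution: perform the 'physical' IO, then interrupt."""
    kernel_buffer[block] = b"data-from-disk"  # transfer into kernel buffer
    io_done.set()                             # completion interrupt fires

def sys_read(block):
    # Kernel check: satisfy the request from the cache if possible.
    if block in cache:
        return cache[block]
    # Driver initiation: start the device, then block the process.
    threading.Thread(target=device_transfer, args=(block,)).start()
    io_done.wait()               # process sits in the Blocked state
    data = kernel_buffer[block]  # copy kernel buffer -> user space
    cache[block] = data          # warm the cache for the next request
    return data                  # process moves back to Ready

print(sys_read("block7"))   # served straight from the cache
print(sys_read("block9"))   # full path through the 'device'
```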