operating-systems io

Definition

IO Buffering

IO buffering is the use of main memory as a temporary storage area for data during IO operations. It decouples the speed of the user process from the speed of the IO device.

Strategies

No Buffering

Data is transferred directly into the process's own memory, and the process blocks until the IO completes. This is inefficient for slow devices.
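
A minimal C sketch of this pattern, assuming an illustrative file named data.bin: the read() call transfers data straight into a buffer in the process's own memory, and the call blocks until the transfer finishes. (In practice the kernel's page cache may still intervene; the point here is only the blocking, direct-to-user-memory pattern.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char user_buf[512];                      /* buffer lives in user memory   */
        int fd = open("data.bin", O_RDONLY);     /* illustrative file name        */
        if (fd < 0) { perror("open"); return 1; }

        /* The process is blocked here for the full duration of the transfer. */
        ssize_t n = read(fd, user_buf, sizeof user_buf);
        if (n < 0) perror("read");
        else printf("read %zd bytes directly into user memory\n", n);

        close(fd);
        return 0;
    }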

Single Buffering

The OS allocates a single buffer in kernel memory for the IO operation.

  • Mechanism: The OS reads a block into the buffer while the process continues. When the block is full, the OS copies it into user space (see the sketch after this list).
  • Advantage: The process can process one block of data while the next one is being read into the buffer.
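
A minimal user-space simulation of the idea, where device_read() is a hypothetical stand-in for the slow device: each block lands in one staging ("kernel") buffer and is then copied into the consumer's own buffer, freeing the staging buffer for the next transfer.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8

    /* Hypothetical stand-in for the IO device: fills buf with one block. */
    static int device_read(char *buf, int block_no) {
        if (block_no >= 3) return 0;              /* no more data */
        memset(buf, 'A' + block_no, BLOCK);
        return BLOCK;
    }

    int main(void) {
        char kernel_buf[BLOCK];                   /* the single OS buffer   */
        char user_buf[BLOCK];                     /* the process's own copy */
        int block = 0, n;

        while ((n = device_read(kernel_buf, block++)) > 0) {
            memcpy(user_buf, kernel_buf, n);      /* OS -> user space copy  */
            /* The process works on user_buf while, in a real kernel, the
               device could already be filling kernel_buf with the next block. */
            printf("processing block %d: %.*s\n", block - 1, n, user_buf);
        }
        return 0;
    }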

Double Buffering (Buffer Swapping)

The OS uses two buffers.

  • Mechanism: The IO device fills one buffer while the process consumes data from the other. When both tasks finish, the buffers are swapped (see the sketch after this list).
  • Advantage: Maximises the overlap of processing and IO.
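
A minimal sketch of buffer swapping, reusing the hypothetical device_read() stand-in: two buffers alternate roles, and in a real kernel the filling of one would proceed in parallel with the consumption of the other (here the two steps are simulated sequentially).

    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8

    /* Hypothetical stand-in for the IO device: fills buf with one block. */
    static int device_read(char *buf, int block_no) {
        if (block_no >= 4) return 0;
        memset(buf, 'A' + block_no, BLOCK);
        return BLOCK;
    }

    int main(void) {
        char buf_a[BLOCK], buf_b[BLOCK];
        char *filling = buf_a, *draining = buf_b;
        int block = 0;

        int n = device_read(filling, block++);    /* prime the first buffer */
        while (n > 0) {
            char *tmp = filling;                  /* swap the buffers' roles */
            filling = draining;
            draining = tmp;

            /* In a real system these two steps run in parallel: the device
               fills 'filling' while the CPU consumes 'draining'. */
            int next_n = device_read(filling, block++);
            printf("consuming block %d: %.*s\n", block - 2, n, draining);

            n = next_n;
        }
        return 0;
    }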

Circular Buffering

Uses more than two buffers arranged in a circle. This is effective for handling bursts of IO activity where the processing and IO speeds vary significantly over time.
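
A minimal single-threaded sketch of a buffer pool arranged as a ring, again with a hypothetical device_read(): head and tail indices wrap modulo the pool size, so a burst of incoming blocks can run ahead of the consumer by up to NBUF slots.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8
    #define NBUF  4

    /* Hypothetical stand-in for the IO device: fills buf with one block. */
    static int device_read(char *buf, int block_no) {
        if (block_no >= 6) return 0;
        memset(buf, 'A' + block_no, BLOCK);
        return BLOCK;
    }

    int main(void) {
        char ring[NBUF][BLOCK];
        int head = 0, tail = 0, count = 0;        /* producer, consumer, fill level */
        int block = 0, done = 0;

        while (!done || count > 0) {
            /* Producer side: absorb a burst while free slots remain. */
            while (!done && count < NBUF) {
                if (device_read(ring[head], block) <= 0) { done = 1; break; }
                head = (head + 1) % NBUF;
                count++;
                block++;
            }
            /* Consumer side: drain one block per pass. */
            if (count > 0) {
                printf("consuming: %.*s\n", BLOCK, ring[tail]);
                tail = (tail + 1) % NBUF;
                count--;
            }
        }
        return 0;
    }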

Benefits

  • Performance (Transaction Batching): Bridges the speed gap by collecting individual characters into larger blocks before transfer, significantly reducing the number of IO operations (see the sketch after this list).
  • Decoupling: Separates process access times from device transfer times. The OS can perform transfers when it is most efficient (e.g., following a disk schedule) without holding up the application.
  • Masking Peaks: Buffers act as a reservoir to absorb bursty IO activity, allowing the process to run at full memory speed until the buffer is saturated.
  • Swapping Support: Allows a process to be swapped out to disk while its IO operation is still in progress, because the transfer targets the OS-managed buffer rather than the process's address space.
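
A minimal sketch of the batching idea, assuming an illustrative output file out.log: single characters accumulate in a block-sized buffer and are handed to the kernel with one write() per block rather than one per character.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define BLOCK 4096

    static char batch[BLOCK];
    static size_t used = 0;

    /* Flush the accumulated block with a single system call. */
    static void flush_batch(int fd) {
        if (used > 0) {
            if (write(fd, batch, used) < 0)
                perror("write");
            used = 0;
        }
    }

    /* Collect one character; only cross into the kernel when the block is full. */
    static void put_char(int fd, char c) {
        batch[used++] = c;
        if (used == BLOCK)
            flush_batch(fd);
    }

    int main(void) {
        int fd = open("out.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        for (int i = 0; i < 10000; i++)           /* 10000 chars, only ~3 write() calls */
            put_char(fd, 'a' + (i % 26));
        flush_batch(fd);                          /* drain the final partial block */

        close(fd);
        return 0;
    }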