operating-systems io

Definition

Disk Cache

A disk cache is a portion of main memory (RAM) reserved for storing sectors from a hard disk. It leverages the principle of locality to reduce disk access latency.

Blocks can be fetched on demand or pre-fetched: reading subsequent sectors before they are explicitly requested reduces the latency of future sequential accesses.
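A minimal sketch of read-ahead, assuming `disk` is a list standing in for the device and `cache` is a plain dict; the `readahead` parameter and function name are illustrative, not from the text:

```python
def read_with_prefetch(disk, cache, sector, readahead=2):
    """On a cache miss, fetch the requested sector plus the next
    `readahead` sectors in one pass (pre-fetching)."""
    if sector not in cache:
        # Sequential sectors are loaded together, anticipating future reads.
        for s in range(sector, min(sector + readahead + 1, len(disk))):
            cache[s] = disk[s]
    return cache[sector]
```

After a miss on sector 1 with `readahead=2`, sectors 1 through 3 are all resident, so the next sequential reads hit the cache without touching the disk.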

Replacement Policies

Least Recently Used (LRU)

Replaces the block that has not been accessed for the longest time.
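LRU is commonly implemented with a recency-ordered structure; a minimal sketch using Python's `OrderedDict` (class and method names here are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-size cache that evicts the least recently used block."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # key -> data, oldest access first

    def get(self, key):
        if key not in self.blocks:
            return None
        self.blocks.move_to_end(key)  # mark as most recently used
        return self.blocks[key]

    def put(self, key, data):
        if key in self.blocks:
            self.blocks.move_to_end(key)
        self.blocks[key] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the LRU block
```

Every hit moves the block to the "most recent" end, so the block at the opposite end is always the eviction victim.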

Least Frequently Used (LFU)

Replaces the block accessed the fewest times. May retain old blocks that were frequently accessed in the past but are no longer needed.
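A minimal LFU sketch with a per-block access counter (names are illustrative); note how a block that built up a high count early on survives eviction even after it stops being used, which is exactly the weakness described above:

```python
class LFUCache:
    """Fixed-size cache that evicts the block with the fewest accesses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.count = {}  # key -> number of accesses

    def get(self, key):
        if key not in self.data:
            return None
        self.count[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Victim: the block accessed the fewest times.
            victim = min(self.count, key=self.count.get)
            del self.data[victim], self.count[victim]
        self.data[key] = value
        self.count[key] = self.count.get(key, 0) + 1
```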

Frequency-Based Replacement (FBR)

An enhancement of LFU that prevents “cache pollution” from short-term reference bursts. It divides the cache into three sections:

  • New: Most recently used blocks; references do not increment access count
  • Middle: References increment access count
  • Old: Replacement candidates; OS selects the block with lowest access count here
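The three-section scheme can be sketched as a recency stack partitioned by position; this is a simplified illustration under assumed section sizes (the class name, `access` method, and size parameters are not from the text):

```python
class FBRCache:
    """Simplified frequency-based replacement: a recency stack whose
    front slice is the new section (counts frozen) and whose back
    slice is the old section (eviction candidates)."""
    def __init__(self, capacity, new_size, old_size):
        self.capacity = capacity
        self.new_size = new_size
        self.old_size = old_size
        self.stack = []   # most recently used first
        self.count = {}
        self.data = {}

    def access(self, key, value=None):
        if key in self.data:
            # References inside the new section do NOT bump the count,
            # so a short burst of re-references cannot inflate it.
            if key not in self.stack[:self.new_size]:
                self.count[key] += 1
            self.stack.remove(key)
            self.stack.insert(0, key)
            return self.data[key]
        if len(self.stack) >= self.capacity:
            # Victim: lowest access count within the old section only.
            old = self.stack[-self.old_size:]
            victim = min(old, key=self.count.get)
            self.stack.remove(victim)
            del self.data[victim], self.count[victim]
        self.stack.insert(0, key)
        self.count[key] = 1
        self.data[key] = value
        return value
```

A block must age out of the new section before its count can grow, and only blocks that have drifted into the old section are eligible for replacement.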

Write Strategies

Write-Through

Every write to the cache is immediately propagated to disk. Ensures data safety, but every write pays the full disk latency.

Write-Back

Modified (dirty) sectors are written to disk only on eviction or periodically in batches. Better performance, but updates not yet written back are lost on a power failure.
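The two strategies can be contrasted in one small sketch, with a dict standing in for the disk and a dirty set tracking deferred writes (all names here are illustrative assumptions):

```python
class SectorCache:
    """Cache supporting both write strategies via a flag."""
    def __init__(self, disk, write_through=False):
        self.disk = disk            # dict standing in for the device
        self.cache = {}
        self.dirty = set()          # sectors modified but not yet on disk
        self.write_through = write_through

    def write(self, sector, data):
        self.cache[sector] = data
        if self.write_through:
            self.disk[sector] = data  # immediate disk write: safe but slow
        else:
            self.dirty.add(sector)    # deferred: fast, lost on power failure

    def flush(self):
        """Batch write-back of all modified sectors (e.g., on eviction)."""
        for sector in self.dirty:
            self.disk[sector] = self.cache[sector]
        self.dirty.clear()
```

With write-back, the disk lags the cache until `flush` runs; with write-through, the two are never out of sync.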