Definition
Disk Cache
A disk cache is a portion of main memory (RAM) reserved for storing sectors from a hard disk. It leverages the principle of locality to reduce disk access latency.
Replacement and Fetching
Blocks can be fetched into the cache on demand or in advance (pre-fetching). Pre-fetching exploits the sequential locality of disk accesses by reading subsequent sectors into the cache before they are explicitly requested by a process, thereby reducing future latency.
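A minimal sketch of sequential read-ahead (the `window` parameter and dict-based cache/disk are illustrative, not part of any real OS interface): on a miss, the requested sector and the next few sectors are read in together.

```python
def read_with_prefetch(cache, disk, sector, window=4):
    """On a cache miss, fetch the requested sector plus the
    next (window - 1) sectors in one sequential read-ahead."""
    if sector not in cache:
        for s in range(sector, sector + window):
            if s in disk:               # stay within the disk's sectors
                cache[s] = disk[s]
    return cache[sector]
```

A later request for any of the pre-fetched sectors is then served from RAM without touching the disk.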
Replacement Policies
When the disk cache is full, the OS must select a block to replace:
Least Recently Used (LRU)
Replaces the block that has not been accessed for the longest time.
Least Frequently Used (LFU)
Replaces the block that has been accessed the fewest number of times. LFU can be problematic because it might keep old blocks that were frequently accessed in the past but are no longer needed.
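A sketch of plain LFU, which also makes the problem above visible: a block with a high historical count survives eviction even if it is never referenced again (names are illustrative).

```python
class LFUCache:
    """Sketch of an LFU disk-block cache: evicts the block with
    the fewest recorded accesses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}    # block number -> sector data
        self.count = {}   # block number -> access count

    def access(self, block_no, read_from_disk):
        if block_no in self.data:
            self.count[block_no] += 1
            return self.data[block_no]
        if len(self.data) >= self.capacity:
            victim = min(self.count, key=self.count.get)  # fewest accesses
            del self.data[victim]
            del self.count[victim]
        self.data[block_no] = read_from_disk(block_no)
        self.count[block_no] = 1
        return self.data[block_no]
```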
Frequency-Based Replacement (FBR)
FBR is an enhancement of LFU that prevents “cache pollution” from frequent short-term references. The cache is divided into three logical sections: New, Middle, and Old.
- New Section: Contains the most recently used (MRU) blocks. References to blocks in this section do not increment their access count.
- Middle/Old Sections: If a block is re-referenced here, its access count is incremented.
- Replacement: When space is needed for a new block, the OS selects, from the Old section only, the block with the lowest access count.
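The three sections can be sketched as positions in an MRU-ordered stack: the front of the stack is the New section and the tail is the Old section. Counts are incremented only on hits outside the New section, and victims are drawn only from the Old section. This is a simplified sketch; the section sizes and names are illustrative, not a fixed part of the scheme.

```python
class FBRCache:
    """Sketch of Frequency-Based Replacement over an MRU-first stack
    of block numbers. Section boundaries are positional."""
    def __init__(self, capacity, new_size, old_size):
        self.capacity = capacity
        self.new_size = new_size   # front blocks forming the New section
        self.old_size = old_size   # tail blocks forming the Old section
        self.stack = []            # block numbers, most recently used first
        self.count = {}

    def access(self, block_no):
        if block_no in self.stack:
            pos = self.stack.index(block_no)
            if pos >= self.new_size:          # hit outside the New section
                self.count[block_no] += 1     # only then does the count grow
            self.stack.remove(block_no)
        else:
            if len(self.stack) >= self.capacity:
                old = self.stack[-self.old_size:]   # Old section candidates
                victim = min(old, key=lambda b: self.count[b])
                self.stack.remove(victim)
                del self.count[victim]
            self.count[block_no] = 1
        self.stack.insert(0, block_no)        # block becomes MRU
```

Because repeated hits inside the New section leave the count at 1, a burst of short-term references no longer inflates a block's frequency.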
Write Strategies
- Write-Through: Every write to the cache is immediately written to disk. Ensures safety but is slow.
- Write-Back: Modified sectors are only written back to disk in batches or when the block is evicted from the cache. Significantly improves performance but risks data loss during a power failure.
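The write-back trade-off can be sketched with a dirty set (the class and the dict standing in for the disk are illustrative): writes only touch RAM, and the disk is updated when a modified block is evicted or flushed. A write-through cache would instead update the disk inside `write` itself.

```python
class WriteBackCache:
    """Sketch of a write-back cache: writes mark blocks dirty;
    the disk is updated only on eviction (or an explicit flush)."""
    def __init__(self):
        self.blocks = {}    # block number -> cached data
        self.dirty = set()  # blocks modified since last disk write
        self.disk = {}      # stand-in for the real disk

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.dirty.add(block_no)       # defer the disk write

    def evict(self, block_no):
        if block_no in self.dirty:     # write back only if modified
            self.disk[block_no] = self.blocks[block_no]
            self.dirty.discard(block_no)
        del self.blocks[block_no]
```

The window between `write` and `evict` is exactly where a power failure loses data under write-back.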