Log-structured File System (BSD)

The Log-Structured File System (or LFS) is an implementation of a log-structured file system (a concept originally proposed and implemented by John Ousterhout), originally developed for BSD. It was removed from FreeBSD and OpenBSD; the NetBSD implementation was nonfunctional until work leading up to the 4.0 release made it viable again as a production file system.[1]

Design

Most of the on-disk format of LFS is borrowed from UFS; the indirect block, inode and directory formats are almost identical. This allows well-tested UFS file system code to be reused: current implementations of LFS share the higher-level UFS code with FFS, with each file system supplying its own lower-level allocation and layout code.
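For illustration only, the C sketch below shows the kind of fields a UFS-style inode carries (mode, link count, size, direct and indirect block pointers) that LFS reuses largely unchanged. The names and counts here are simplified assumptions, not the actual on-disk layout.

```c
#include <stdint.h>

/*
 * Illustrative sketch only -- not the real UFS/LFS on-disk layout.
 * Field names and counts are assumptions chosen for clarity.
 */
#define NDIRECT   12   /* direct block pointers (assumed count) */
#define NINDIRECT 3    /* single, double, triple indirect (assumed) */

struct ufs_like_inode {
    uint16_t mode;                 /* file type and permissions */
    uint16_t nlink;                /* number of hard links */
    uint64_t size;                 /* file size in bytes */
    uint32_t direct[NDIRECT];      /* addresses of the first data blocks */
    uint32_t indirect[NINDIRECT];  /* addresses of indirect pointer blocks */
};
```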

LFS divides the disk into segments, only one of which is active at any one time. Each segment has a header called a summary block; each summary block contains a pointer to the next summary block, linking segments into one long chain that LFS treats as a linear log. The segments need not be adjacent to each other on disk; for this reason, larger segment sizes (between 384 KB and 1 MB) are recommended, because they amortize the cost of seeking between segments.[2]
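As a rough illustration of this chaining, the sketch below models a summary block as a C struct. The field names are assumptions made for clarity and do not match the real on-disk structures.

```c
#include <stdint.h>

/* Hypothetical, simplified segment summary block; the real layout differs. */
struct segment_summary {
    uint32_t next_segment;   /* disk address of the next summary block in the log */
    uint32_t num_blocks;     /* number of blocks written into this segment */
    uint32_t num_inodes;     /* number of inode blocks written in this segment */
    uint32_t checksum;       /* guards against partially written segments */
    /* per-block bookkeeping (the map of live blocks) follows on disk */
};
```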

Whenever a file or directory is changed, LFS writes to the head of this log, in order (a sketch of this ordering follows the list):

  1. Any changed or new data blocks.
  2. Indirect blocks updated to point to (1).
  3. Inodes updated to point to (2).
  4. Inode map blocks updated to point at (3).[3]
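The toy program below models that ordering. All names are hypothetical and no real on-disk formats are involved; it only shows that each update appends blocks to the log head in the order data, indirect, inode, inode map.

```c
#include <stdio.h>

/* Toy model of the LFS write path: every update appends to the head of
 * the log, with each layer rewritten to point at the layer below it. */
enum block_kind { DATA, INDIRECT, INODE, INODE_MAP };

static const char *kind_name[] = { "data", "indirect", "inode", "inode-map" };

static unsigned log_head = 0;   /* next free block at the head of the log */

/* Append one block to the log head and return its new address. */
static unsigned log_append(enum block_kind kind)
{
    unsigned addr = log_head++;
    printf("block %u: %s\n", addr, kind_name[kind]);
    return addr;
}

/* One file update, in the order listed above. */
static void write_file_update(void)
{
    unsigned data     = log_append(DATA);       /* 1. new/changed data blocks  */
    unsigned indirect = log_append(INDIRECT);   /* 2. indirect block -> (1)    */
    unsigned inode    = log_append(INODE);      /* 3. inode -> (2)             */
    unsigned imap     = log_append(INODE_MAP);  /* 4. inode map block -> (3)   */
    (void)data; (void)indirect; (void)inode; (void)imap;
}

int main(void)
{
    write_file_update();
    return 0;
}
```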

Unlike UFS, inodes in LFS do not have fixed locations. An inode map—a flat list of inode block locations—is used to track them. As with everything else, inode map blocks are also written to the log when they are changed.
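A minimal sketch of the inode-map idea, assuming a simple flat array indexed by inode number (in the real file system the map is itself stored in log-written blocks):

```c
#include <stdint.h>

#define MAX_INODES 1024   /* assumed fixed size for the sketch */

/* inode number -> disk address of the block holding that inode */
static uint32_t inode_map[MAX_INODES];

/* Find the latest copy of an inode. */
static uint32_t inode_lookup(uint32_t ino)
{
    return inode_map[ino];
}

/* Record an inode's new location after it is rewritten at the log head;
 * the dirtied map block is itself appended to the log later. */
static void inode_relocate(uint32_t ino, uint32_t new_addr)
{
    inode_map[ino] = new_addr;
}
```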

When a segment is filled, LFS goes on to fill the next free or clean segment. Segments are said to be dirty if they contain live blocks, that is, blocks for which no newer copies exist further ahead in the log. The LFS garbage collector turns dirty segments into clean ones by copying live blocks from the dirty segment into the current segment and skipping the rest. The summary block in each segment contains a map to track live blocks.
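The toy model below sketches that copy-forward step, under the assumption that liveness is already known for each block; in the real cleaner, liveness is determined from the segment's summary block and the inode map.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy cleaning step (all names illustrative): copy the blocks that are
 * still live out of a dirty segment into the segment currently being
 * filled, skip the dead ones, and the source segment becomes clean. */
struct block {
    int  addr;
    bool live;     /* stands in for "no newer copy exists further ahead in the log" */
};

struct segment {
    struct block blocks[8];
    size_t       nblocks;
};

static void clean_segment(struct segment *dirty, struct segment *current)
{
    for (size_t i = 0; i < dirty->nblocks; i++) {
        if (dirty->blocks[i].live)                            /* live: copy forward */
            current->blocks[current->nblocks++] = dirty->blocks[i];
        /* dead blocks are simply skipped; nothing references them any more */
    }
    dirty->nblocks = 0;                                       /* segment is now clean */
}

int main(void)
{
    struct segment dirty   = { { {1, true}, {2, false}, {3, true} }, 3 };
    struct segment current = { {{0, false}}, 0 };

    clean_segment(&dirty, &current);
    printf("live blocks copied: %zu\n", current.nblocks);    /* prints 2 */
    return 0;
}
```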

Generally, garbage collection is delayed until there are no clean segments left; it can also be deferred until the system is idle. Even then, only the least-dirty segments are picked for collection. This is intended to avoid the penalty of cleaning full segments when I/O bandwidth is most needed.[2]
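One plausible selection policy consistent with this description is sketched below: among dirty segments, pick the one with the fewest live blocks, since it costs the least I/O to clean. The structure and names are assumptions, not the actual segment-usage table.

```c
#include <stddef.h>

/* Per-segment usage information (illustrative). */
struct seg_usage {
    size_t live_blocks;   /* live blocks currently recorded for this segment */
    int    dirty;         /* nonzero if the segment holds any live data */
};

/* Return the index of the cheapest segment to clean, or -1 if none is dirty. */
static int pick_segment_to_clean(const struct seg_usage *u, size_t nsegs)
{
    int    best      = -1;
    size_t best_live = (size_t)-1;

    for (size_t i = 0; i < nsegs; i++) {
        if (u[i].dirty && u[i].live_blocks < best_live) {
            best_live = u[i].live_blocks;
            best = (int)i;
        }
    }
    return best;
}
```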

At a checkpoint (usually scheduled about once every 30 seconds), LFS writes the last known block locations of the inode map and the number of the current segment to a checkpoint region at a fixed place on disk. There are two such regions; LFS alternates between them each checkpoint. Once written, a checkpoint represents the last consistent snapshot of the file system. Recovery after a crash and normal mounting work the same way—the file system simply reconstructs its state from the last checkpoint and resumes logging from there.
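A minimal in-memory sketch of the alternating checkpoint scheme, assuming a serial number is enough to tell which of the two regions is newer (field names are illustrative, not the real checkpoint format):

```c
#include <stdint.h>

/* Contents of one checkpoint (simplified). */
struct checkpoint {
    uint64_t serial;            /* increases with every checkpoint */
    uint32_t inode_map_addr;    /* last known location of the inode map blocks */
    uint32_t current_segment;   /* segment being filled when the checkpoint was taken */
};

static struct checkpoint region[2];   /* stands in for the two fixed on-disk regions */
static int next_region = 0;

/* Write a checkpoint, alternating between the two regions. */
static void write_checkpoint(uint32_t imap_addr, uint32_t cur_seg)
{
    static uint64_t serial = 0;
    struct checkpoint cp = { ++serial, imap_addr, cur_seg };

    region[next_region] = cp;   /* on disk this would be a single region write */
    next_region ^= 1;
}

/* Mounting and crash recovery both resume from the newer checkpoint. */
static const struct checkpoint *recover(void)
{
    return region[0].serial > region[1].serial ? &region[0] : &region[1];
}
```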

Disadvantages

  • There can be severe file system fragmentation in LFS, especially for slowly growing files or multiple simultaneous large writes. This inflicts a severe performance penalty, even though the design rationale for log-structured file systems assumes that most disk reads will be served from the cache.
  • LFS becomes progressively less efficient as it nears maximum capacity, when the garbage collector has to run almost constantly to make clean segments available.
  • LFS does not allow snapshotting or versioning, even though both features are trivial to implement in general on log-structured file systems.

References

  1. [citation unavailable]
  2. [citation unavailable]
  3. [citation unavailable]