A File System Designed for Embedded

Posted by: Thom Denholm

To protect against unexpected power loss, which is so common in the embedded world, file writes need to be atomic. The Linux file systems ext3 and ext4, however, were designed for server and desktop environments. Google developer Tim Bray suggests that appropriate use of fsync() can mitigate the risk of data loss, but that is hardly an ideal solution. With delayed allocation, metadata is committed while the data is not; alternatively, both can be committed to the journal, at a performance penalty. Performance is crucial on desktops and devices alike, but not at the expense of data corruption.

The problem is readily demonstrated when updating files, an action which usually happens "in place" and is quite common for databases and other important system files. When power is lost mid-update, the data may be only partially overwritten, or the metadata may already point to a location where the new data has not yet been written. Another alternative to liberal use of fsync() is a rename strategy: write the new data to a separate file, then rename it over the old file. Rename, at least, is atomic.

The best solution, and one which does not require applications to change the way they do writes, is for the file system to perform all data writes atomically. Beyond that, the file system should never overwrite live data, and it should always retain a "known good" state on the media. Caching can then stay in place: user data changes either reach the disk fully or not at all. No partial writes, no incorrect metadata, and no mount-time journal rebuilds or disk checks either.

Instead of adapting a desktop or server file system for embedded use, it is far better to use a file system designed specifically for embedded systems.

View whitepaper: Breakthrough Performance with Tree-based File Systems
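As an illustration, the rename strategy mentioned above can be sketched in C using the POSIX API. This is a minimal sketch, not production code: the helper name `atomic_replace` and the `.tmp` suffix are illustrative, and a fully durable version would also fsync() the containing directory after the rename.

```c
/* Sketch of the write-then-rename update pattern: write the new
   contents to a temporary file, flush it with fsync(), then
   rename() it over the old file. rename() is atomic on POSIX,
   so readers see either the old file or the new one, never a mix. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

static int atomic_replace(const char *path, const char *data, size_t len)
{
    char tmp[256];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    /* Push the new data all the way to the media before the rename. */
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    /* The atomic step: the old contents stay intact until this succeeds. */
    if (rename(tmp, path) != 0) {
        unlink(tmp);
        return -1;
    }
    return 0;
}
```

Note what the pattern costs: every update rewrites the whole file and takes an extra fsync(), which is exactly the kind of burden a transactional file system removes from the application.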
