Subscribe to Datalight's Blog


SD Reliability – Best Practices

In an earlier blog post titled Managed NAND Performance: It’s All About Use Case, we referred to an article measuring SD media, specifically sequential write performance. Speed was measured using a camera to shoot continuous pictures. Photographers and other users of SD media have another concern – reliability.

A short survey of photographer websites finds that physical damage causes the most problems with SD cards. This isn’t so much because the internal NAND flash has been damaged, but as a result of the access pins and internal connections failing. This is less of a concern for devices that use SD media as their internal storage, such as medical or industrial equipment. SD makes a lot of sense in those cases, as it can be replaced or upgraded.

Newer SD cards use newer NAND flash, and there are reports of problems with data retention. A good wear-leveling solution in the firmware can help with this, as blocks are worn evenly and rewritten more frequently. Firmware differs between vendors, so buying a name-brand card generally means getting that vendor's own firmware under the hood.

Another suggestion given on the photographer websites is to format the media instead of erasing files. This is most likely done to improve performance for large sequential file allocations. Another benefit is that the format only rewrites the FAT and root directory on the card, whereas individual file deletes touch those locations plus any subdirectory metadata. On an SD card, a format performs fewer writes than a series of deletes.
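A rough back-of-envelope model makes that write-count difference concrete. The per-file and per-format sector counts below are invented for illustration; real numbers depend on cluster size, fragmentation, and the card's FAT geometry.

```python
# Rough model of metadata sector writes on a FAT-formatted SD card.
# All figures are assumptions for illustration only.

def writes_for_deletes(num_files, fat_sectors_per_file=2, dir_sectors_per_file=1):
    """Each delete rewrites the file's directory entry and the FAT
    sectors covering its cluster chain."""
    return num_files * (fat_sectors_per_file + dir_sectors_per_file)

def writes_for_format(fat_sectors=64, root_dir_sectors=32):
    """A quick format rewrites the FAT (both copies) and the root directory once."""
    return 2 * fat_sectors + root_dir_sectors

print(writes_for_deletes(100))   # 300 metadata sector writes
print(writes_for_format())       # 160 metadata sector writes, regardless of file count
```

Under these assumed numbers, a single format beats deleting 100 files, and the gap only widens as the file count grows.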

While cameras will likely continue to use the FAT file system for exchangeability, other devices using SD as storage media are not limited to this file system. Datalight’s Reliance Nitro is an excellent fit for SD storage where data is critical and product life is long. An installable Windows driver is available if the SD card needs to be removed and accessed via a Windows-based desktop.

The one thing most SD manufacturers emphasize is that their cards will last for a long time. The average camera application writes large files, often sequentially, and these stay in place until downloaded and then deleted. This results in a very low count of writes and erases to the NAND media. With proper physical care, an SD card used in this fashion can last ten years or more. It is likely to be replaced for capacity reasons before then – the average size of an SD card in 2003 was 256 MB.

While the camera use case is particularly friendly to SD media, one that isn’t is database applications – which are becoming more prevalent in multi-function devices like handheld terminals and smartphones. These applications generate many small writes which can greatly reduce the life of the flash media. For optimum SD card life these small writes should be grouped into fewer, larger writes. Of course, for reliability reasons, maintaining the order of the writes is critical. Datalight has addressed this need with our newest product, FlashFXe, which enhances Reliance Nitro to bring faster writes, power-efficiency and improved flash endurance while keeping the best reliability for the application.
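The grouping idea can be sketched with a toy coalescing buffer: small writes accumulate in order and reach the media as fewer, larger writes. This is an illustrative model only, not FlashFXe's actual implementation.

```python
# Sketch of grouping small writes into fewer, larger flushes while
# preserving write order. Illustrative model only.

class CoalescingWriter:
    def __init__(self, device_writes, chunk_size=4096):
        self.device_writes = device_writes  # list standing in for the media
        self.chunk_size = chunk_size
        self.pending = bytearray()

    def write(self, data: bytes):
        # Order is preserved: bytes leave the buffer exactly as they arrived.
        self.pending += data
        while len(self.pending) >= self.chunk_size:
            self.device_writes.append(bytes(self.pending[:self.chunk_size]))
            del self.pending[:self.chunk_size]

    def flush(self):
        if self.pending:
            self.device_writes.append(bytes(self.pending))
            self.pending.clear()

media = []
w = CoalescingWriter(media, chunk_size=8)
for _ in range(6):
    w.write(b"ab")          # six small 2-byte writes...
w.flush()
print(len(media))           # ...reach the media as only 2 larger writes
```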

Read more about FlashFXe

Thom Denholm | September 6, 2013 | Consumer Other, Flash File System, Industrial, Medical | Leave a comment

What is Industrial Grade eMMC?

eMMC has seen strong adoption and become the storage of choice for consumer devices such as smartphones, e-readers and tablets. These small devices all run on battery power and require high-density storage with low power consumption — at a low cost. Offered for less than $0.60 per GB and in a wide array of sizes — 4GB to 128GB in a single package — consumer-grade eMMC provides this price-performance-power combination. Built to a JEDEC standard, eMMC is now produced by all the major storage vendors including: Toshiba, Samsung, SanDisk, and Micron.

Newly emerging industrial-grade eMMC seeks to leverage this well-defined standard to satisfy a different set of demands. While consumer storage is focused first on cost and size almost to the exclusion of other requirements, industrial applications have a greater need for endurance and reliability with cost much lower on the priority list. Low power consumption – an escalating priority for the consumer market — is also a growing concern for some industrial uses. Many applications within the industrial market require a wider operating temperature range, greater environmental tolerances, additional data security protection, and power fail recovery features.

The new breed of industrialized eMMC responds to these requirements and is gaining consideration for embedded applications in areas such as automotive, medical, aerospace, and other commercial applications. While the industrial marketplace demands much greater ruggedness than the consumer market, it is also not as cost-sensitive. Industrial devices are much more durable and often mission-critical: products that may be in use for ten or more years, recording important data day in and day out. At least one Datalight customer expects their products to operate for twenty years in the field with zero failures.

Industrial grade devices often need to operate in harsh environments that include extended operating temperature ranges — extremes that would cause consumer-grade eMMC to fail. Components must be specified with the particular need in mind. Automotive and aerospace uses, for example, have broader temperature demands than medical environments. The table below shows typical temperature ranges for different grades of electrical parts.


[Table: typical low and high operating temperatures for each grade of electrical part]
The higher reliability of industrial eMMC comes with a higher price, which is justified by the longer payback period and business-critical performance. The chart below shows the general relationship of cost and reliability for the consumer and industrial eMMC memory.


So how do eMMC manufacturers achieve the endurance necessary to meet the industrial standard? Unlike other industrial solid-state storage, eMMC can be configured to operate in one of two modes: Single-Level Cell (SLC) at one bit per flash cell, sometimes called "enhanced mode," or Multi-Level Cell (MLC) at two bits per flash cell, referred to as "standard mode." While enhanced mode has half the number of storage bits, it lasts about 20 times as long as standard mode. Stated conversely, standard mode provides twice as much storage, but its endurance is reduced by a factor of 20. In more specific terms, standard mode may fail at about 3,000 P/E cycles, while enhanced mode endures for about 60,000 P/E cycles.
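A quick calculation shows what those cycle counts mean for total data written over a device's life. Only the P/E cycle figures come from the text above; the capacity and write-amplification values are assumed examples.

```python
# Comparing total bytes writable in eMMC "enhanced" (SLC) vs "standard"
# (MLC) mode, using the P/E cycle figures quoted above.
# Capacity and write-amplification numbers are illustrative assumptions.

def lifetime_writes_gb(capacity_gb, pe_cycles, write_amplification=1.0):
    """Total host data (GB) writable before reaching the rated P/E limit."""
    return capacity_gb * pe_cycles / write_amplification

standard = lifetime_writes_gb(capacity_gb=8, pe_cycles=3_000)    # MLC "standard" mode
enhanced = lifetime_writes_gb(capacity_gb=4, pe_cycles=60_000)   # SLC "enhanced" mode: half the space

print(standard)   # 24000.0 GB over the part's life
print(enhanced)   # 240000.0 GB -- 10x the total endurance despite half the capacity
```

The arithmetic explains the trade-off neatly: half the bits times twenty times the cycles nets ten times the total data an enhanced-mode part can absorb.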

In addition to the existing consumer eMMC standard packaging of 153- or 169-ball packages, JEDEC has recently standardized a 100-ball industrial-grade package. This new industrial packaging reduces manufacturing cost through simplified PCB trace/space designs, fewer balls, and fewer PCB layers. Industrial eMMC devices are available today from Micron, Kingston, and Memoright. Greenliant and Smart Modular are also entering the industrial eMMC market.

Industrial storage requires increased reliability and endurance. The hardware is only half of the solution. The software driving the storage is the other half, and has an equal or greater effect on reliability and endurance. Are you leveraging all the tools available to you to meet the endurance and reliability requirements? To learn more about solutions for managing eMMC devices, see our FlashFXe page.

RoySherrill | July 23, 2013 | Flash Memory, Flash Memory Manager | 1 Comment

Automotive Challenges

Automobiles and trucks have gone from simple contraptions to full-blown multi-processor networks in just the last few years. Between M2M and the Internet of Things, today’s vehicles are communicating more than ever. Here are some of the challenges we have observed in this industry.

When the power comes on, the system head unit is the first to boot. What the user sees first, besides the dash default display, is the infotainment console – and in most cases, the image from the reverse camera. A strict requirement in Europe is to display this within 3 seconds of vehicle power-up.

From the perspective of system internals, this means the file system and flash must mount very quickly. Only a very small portion of those three seconds is available here – the rest is spent booting the OS kernel and bringing up other system resources. There is no time to resynchronize a journal or catalog each flash region. Datalight software is designed to meet these requirements at both the file system and flash driver level.

Vehicle consumers expect some of the same experience in their car as on their smart phones. This includes the latest version of software and installable applications – with both available over-the-air. Embedded software updates can be risky, especially in an environment where power is not guaranteed. The ideal system would perform the software update on blocks not in use for the media, then switch to the new system in one operation. Reliance Nitro does this and more.
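The update pattern described above (stage the new image on unused blocks, then switch in one operation) can be sketched generically with ordinary files and an atomic rename. This is an illustration of the general technique, not Reliance Nitro's actual mechanism; the slot and marker names are hypothetical.

```python
# Power-safe "A/B" update sketch: write the new image entirely to an
# inactive slot, then atomically repoint an "active" marker.
# os.replace() is atomic on POSIX filesystems. Names are hypothetical.

import os
import tempfile

def stage_and_switch(new_image: bytes, active_link="system_active", slot="system_b"):
    # 1. Write the new image to blocks not in use by the running system.
    with open(slot, "wb") as f:
        f.write(new_image)
        f.flush()
        os.fsync(f.fileno())          # data must be on media before the switch
    # 2. Atomically repoint the active marker. A power loss leaves either
    #    the old or the new system fully intact, never a half-written mix.
    with tempfile.NamedTemporaryFile("w", dir=".", delete=False) as marker:
        marker.write(slot)
        marker.flush()
        os.fsync(marker.fileno())
        tmp_name = marker.name
    os.replace(tmp_name, active_link)

stage_and_switch(b"new firmware image")
print(open("system_active").read())   # system_b
```

The key property is that step 2 is a single atomic operation: interrupted power during staging simply leaves the old system active.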

Security is being talked about a lot, and covers many things for automotive. One of these is the security of networks: keeping packets confidential and preventing malware injection. As we approach drive-by-wire, this one is really important! Talked about less but of equal concern is the confidentiality of user data. When you sell your car, you don’t want your private data going with it! Today’s eMMC media provides a Secure Delete option to remove data from the media completely. Datalight products fully support this hardware, allowing full application control of these important functions.

One important topic receiving a lot of press in other embedded devices is the shrinking lifetime of the flash media. NAND flash, the internal storage on many devices and automotive embedded as well, has a limited life span which is measured in program/erase cycles. Software designed to understand this can reduce the amount of write amplification, or in other words minimize the additional write cycles generated by each media write. Once again, Datalight’s software stack comes through for the automotive embedded designer.
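Write amplification is easy to quantify: it is the ratio of bytes physically written to the flash to bytes the host actually asked to write. A minimal sketch, with illustrative block and sector sizes:

```python
# Write amplification = physical flash writes / logical host writes.
# Block and write sizes below are illustrative examples.

def write_amplification(host_bytes, flash_bytes):
    """Ratio of bytes written to flash vs. bytes requested by the host."""
    return flash_bytes / host_bytes

# Worst case: the host updates one 512-byte sector, but the media must
# copy and rewrite an entire 128 KB erase block.
print(write_amplification(512, 128 * 1024))        # 256.0

# Grouping 256 such updates into one aligned full-block write instead:
print(write_amplification(256 * 512, 128 * 1024))  # 1.0
```

Driving this ratio toward 1.0 is exactly how software extends the media's life: every avoided rewrite is a P/E cycle the flash keeps in reserve.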

While these may be new challenges for the automotive market, they are for the most part old challenges for the embedded world. You don’t necessarily need to reinvent the wheel (pun intended) when there are suppliers such as Datalight with a proven track record and industry recognized customer support.

Read more about Datalight Automotive Solutions

Thom Denholm | July 9, 2013 | Automotive, Datalight Products, Flash File System, Flash Memory Manager | Leave a comment

Wish Granted

OEM customers have told us for years that struggles with NAND supply and a lack of standards cost them a great deal of time and money. A shortage or EOL on a key component like NAND flash memory can cause product delivery delays that impact topline revenue and potentially company reputation. What they wish for is a “plug and play” option that lets them multi-source their flash memory. For years Datalight has provided a software standard to make parts switching less painful, but the lack of hardware standards continued to plague our customers.

As you might expect, many are excited by eMMC because it promises to address a big concern with supply chain and parts availability. By adopting this hardware “standard”, OEMs believe they will be free from vendor lock-in – that is compelling. They and their ODM suppliers can source parts from whichever vendor is closer, has available supply or is easiest to work with. Powerful stuff for negotiating cost of goods.

However, the inconvenient truth is that though eMMC is a hardware standard that ensures pin-compatible alternatives from a plethora of suppliers, there are so many exceptions and vendor-specific variants that substantial software modifications are still required. You will likely find a driver from a BSP provider that enables their board or processor to work with eMMC – at the most basic level. This purpose-built software will be provided in unmodifiable binary form, written expeditiously to “check the box” for eMMC support. It is unlikely that the supplied driver will work as-is with special capabilities in parts from another vendor (or even with the next die shrink of the first vendor’s parts!). Then starts the quest to either get the software updated by the BSP provider or negotiate for source code access and invest in making the changes yourself. Can you say “schedule impact”?

Another potential shortcoming was pointed out to me recently by a long-time FlashFX customer — “how do you know how effective the wear-leveling is when it’s all done inside the black box?”

Ideally, the driver you use with your eMMC should be intelligent enough to assess the vendor-specific features available, the wear-leveling effectiveness and be provided in source code so you can make any modifications for as-yet-undefined capabilities of your hardware. And if you could have everything you wished for, the driver would be written by flash-vendor-neutral software and flash technology experts. Hmmm. I think I know some of those.


KerriMcConnell | July 1, 2013 | Datalight Products, Flash Memory, Flash Memory Manager, Uncategorized | Leave a comment

Software Power Consumption

One of the questions we have received at Datalight is whether our software affects the power consumption of embedded devices. Not being power experts ourselves, we found an intern and a faculty advisor from the nearby University of Washington to help us out with the process. After a little research, we also selected the PicoScope as the best solution for measuring power.

As part of the final conclusions of this project, our intern Cameron wrote up his findings, which have now been published on the Pico Technology website –

The comparisons reflected in that article represent a version of our FlashFXe product that was in production at the time. The use case measured (SQLite operations) is the one targeted for improvement by our software. By writing the eMMC media in the most optimal fashion, FlashFXe generates fewer erases and uses less power. Each access to the media does more meaningful work in a more optimal way. At the media level, this also increases the dwell time, which in general has been shown to decrease the bit error rate over time, and may also improve long term data retention.

Datalight continues to research ways where software changes can improve the hardware experience for our customers.

Find out more about Datalight's FlashFXe

Thom Denholm | June 17, 2013 | Datalight Products, Extended Flash Life, Flash Memory, Flash Memory Manager | Leave a comment

Managed NAND Performance: It’s All About Use Case

Last week the UK journal PC Pro published an interesting article about fast SD cards, with a good description of the SD card Class system. With some clever testing, they show how six cards perform in a continuous shooting situation.

These tests also demonstrate how the SD card manufacturers have customized their firmware to handle sequential write cases. A class 10 card requires a minimum of 10 MB/sec throughput, and a supplemental rating system for Ultra High Speed (UHS) indicates a higher clock rate and correspondingly higher transfer rate. For the larger frame sizes (12 megapixel photos, HD video) high transfer rates are a requirement. The resulting data is almost always sequential, which matches the firmware characteristics well.
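A back-of-envelope check shows why that Class minimum matters for continuous shooting. The frame sizes and burst rates below are assumed examples, not figures from the PC Pro article.

```python
# Can a card's minimum sustained write rate keep up with a camera's
# burst data rate? Frame size and burst rate are illustrative assumptions.

def keeps_up(frame_mb, frames_per_sec, card_min_mb_per_sec):
    """True if the card's guaranteed throughput covers the burst data rate."""
    return frame_mb * frames_per_sec <= card_min_mb_per_sec

# A hypothetical 4 MB frame (roughly a compressed 12-megapixel photo):
print(keeps_up(frame_mb=4.0, frames_per_sec=3, card_min_mb_per_sec=10))  # False: needs 12 MB/s
print(keeps_up(frame_mb=4.0, frames_per_sec=2, card_min_mb_per_sec=10))  # True: 8 MB/s fits
```

When the burst rate exceeds the card's sustained minimum, the camera's internal buffer fills and the shot rate stalls, which is exactly what the continuous-shooting test exposes.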

This article brings out one more interesting point. The authors point out that the performance measurements from using an SD card in a desktop system don’t always reflect the use case. They end up performing their tests using an actual camera, thereby getting as close to the use case as possible.

For an application which uses random I/O (such as tablets and other Android devices), these firmware optimizations aren’t necessary. In some cases, such optimizations actually lower random I/O performance. Similar firmware shows up in eMMC media as well. A software solution (such as FlashFXe) can adjust much of the I/O to be more sequential and more closely match the optimized performance.

At Embedded World a few weeks ago we recorded our demonstration showing the benefits of our new FlashFXe product on eMMC.

Watch our FlashFXe Demo Video Here

Thom Denholm | March 15, 2013 | Flash Memory, Flash Memory Manager, Performance | Leave a comment

Even When Not Using a Database, You Are Still Using a Database

Recently, we’ve focused considerable development effort on improving database performance for embedded devices, specifically for Android. This is because Android is a particularly database-centric environment.

On an Android platform, each application is equipped with its own SQLite database. Data stored here is accessible by any class in the application, but not by outside applications. The database is entirely self-contained and server-less, while still being transactional and still using the standard SQL language for executing queries. With this approach, a crash in one application (the dreaded “force close” message) will not affect the data store of any other application. While fantastic for protection, this method is quite often implemented on flash media, which was designed for large sequential reads and writes.
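Python's standard library shows this pattern in miniature: a self-contained, serverless, transactional SQLite store living in a single file, accessed with standard SQL.

```python
# Minimal illustration of the per-application SQLite pattern described
# above: one file, no server, transactional commits.

import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(db_path)        # the entire database is this one file
con.execute("CREATE TABLE prefs (key TEXT PRIMARY KEY, value TEXT)")

with con:                             # transactional: commits or rolls back as a unit
    con.execute("INSERT INTO prefs VALUES ('theme', 'dark')")

print(con.execute("SELECT value FROM prefs WHERE key='theme'").fetchone()[0])  # dark
```

Each `with con:` block is one transaction, and SQLite flushes it durably to the file, which is precisely the frequent small, flushed write pattern the rest of this post is concerned with.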

For years, benchmarks have touted the pure performance of a drive through large sequential reads and writes. On managed flash media, the firmware programmers have responded by optimizing for this use case – at the expense of the random I/O used by most databases, including SQLite. Another challenge is the very high ratio of flushes performed by the database (sometimes 1:1). The majority of database writes are not done on sector boundaries – especially problematic for flash media which must write an entire block.

While there are a few unified “flash file systems” for Linux such as YAFFS and JFFS2, designed specifically for flash memory, they have fallen out of favor because they do not plug neatly into the standard software stack, and therefore cannot take advantage of standard Linux features such as the system cache. While traditional file systems such as VFAT and Ext2/3/4 can work with flash, they are not designed with that purpose in mind, and therefore their performance and reliability suffer. For example, discard support has largely been tacked onto Linux file systems, and is still considered to be somewhat experimental. To quote the Linux v3.5 Ext4 documentation, discard support is “off by default until sufficient testing has been done.” Another example: file systems on flash memory typically benefit from using a copy-on-write design, which ext4 does not use. The reality is that most file systems are designed for desktop (and often server) environments, where high resource usage is OK, and power-loss is infrequent.

Our solution to improving database performance on flash memory is to provide a more unified solution where the various pieces of the stack work in a cohesive fashion. Furthermore, the solution is specifically designed for embedded systems using flash memory, where power-loss is a common event. Datalight’s Reliance Nitro file system is a transactional, copy-on-write file system, designed from the ground up to support flash memory discards and power-loss safe operations.

The result of our work in this area is FlashFXe, a new Datalight product built on our many years of experience managing raw NAND, but designed for eMMC. When used together with Reliance Nitro, almost all write operations become sequential and aligned on sector boundaries for the highest performance. Internal operations are more efficiently organized for the copy-on-write nature of flash media. A multi-tiered approach allows small random writes with very frequent flushes to be efficiently handled while maintaining power-loss safe operations.

This month at Embedded World, we will be demonstrating the results of our efforts to improve database performance on embedded devices using Android. Prepare to be impressed!

Learn more about FlashFXe

Thom Denholm | February 12, 2013 | Datalight Products, Flash File System, Performance | Leave a comment

Why CRCs are important

Datalight’s Reliance Nitro and journaling file systems such as ext4 are designed to recover from unexpected power interruption. These kinds of “post mortem” recoveries typically consist of determining which files are in which states and restoring them to the proper working state. Methods like these are fine for recovering from a power failure, but what about a media failure?

When a media block fails, it is either in the empty space, the user data, or the file system data. A block from the empty space can be detected on the next write, which will either cause failure at the application, or will be marked bad internally and the system will move on to another block. When a media block in the user space fails, it cannot be reliably read. Often, the media driver will detect and report an unreadable sector, resulting in an error status (and probably no data) to the user application. When a media block containing file system data or metadata fails, it is the responsibility of the file system to detect and (if possible) repair that damage. Often the best thing that can be done is to stop writing to the media immediately.

In some ways, blocks lost due to media corruption present a problem similar to recovering deleted files. If it is detected quickly enough, user analysis can be done on the cyclical journal file, and this might help determine the previous state of the file system metadata. Information about the previous state can then be used to create a replacement for that block, effectively restoring a file.

Metadata checksums were added to several file system data blocks for ext4 in the 3.5 kernel release. Noticeably absent from this list are the indirect and double-indirect pointer blocks, used to allocate trees of blocks for a very large file. The latest release of Datalight’s Reliance Nitro file system (version 3.0) adds CRCs to all file system metadata and internal blocks, allowing for rapid and thorough detection of media failures.

This new version of Reliance Nitro also offers optional CRCs on user data blocks, for individual files or entire volumes. This failsafe can be configured to write-protect the volume or halt system operations. Diagnostic messages are also available to indicate the specific logical block number of the corrupted block.
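The general technique is straightforward to sketch: store a CRC alongside each block, and recompute it on every read. The block layout below is invented for illustration; Reliance Nitro's actual on-disk format is not described here.

```python
# How a per-block CRC catches silent media corruption (generic technique;
# the block layout here is an illustrative invention).

import zlib

BLOCK_SIZE = 4096

def seal(payload: bytes) -> bytes:
    """Append a CRC-32 of the payload, forming a protected block."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def verify(sealed: bytes) -> bool:
    """Recompute the CRC on read; a mismatch means the block is damaged."""
    payload, stored = sealed[:-4], int.from_bytes(sealed[-4:], "little")
    return zlib.crc32(payload) == stored

block = seal(b"\x00" * (BLOCK_SIZE - 4))
print(verify(block))                    # True

corrupted = b"\x01" + block[1:]         # a single flipped byte on the media...
print(verify(corrupted))                # False -- detected on the very next read
```

Without the CRC, a flipped bit in metadata would be silently trusted; with it, the file system can stop writing immediately, as the post recommends.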

The combination of full CRC protection on every metadata block and optional protection of user file data blocks is one of the key attributes of this release of Reliance Nitro. Embedded system designers can detect more media failures in testing, and can diagnose failed units more quickly, leading to greater success in the marketplace.

Learn more about Reliance Nitro

Thom Denholm | January 26, 2013 | Flash File System, Flash Memory, Reliability | Leave a comment

eMMC Problems

If you’ve been following this blog, you’ve probably noticed a lot of discussion and analysis around eMMC. We’ve written about the reasons we are so excited about eMMC, but also why the Write Amplification issues caused by eMMC parts are a problem that needs more attention by the industry.

As more and more device manufacturers use eMMC in their devices, product reviews are beginning to highlight some of the limitations of eMMC that we have been discussing. A case in point is this recent review of the Google Nexus 7 by Anand Lal Shimpi and Brian Klug.

As the review points out, the performance downside of using eMMC parts is that they are “optimized for reading and writing large images as if they were used in a camera.” Also, eMMC was never designed to be used by a “full blown multitasking OS,” and therefore can cause major problems with device responsiveness. This is mainly because multi-tasking (i.e. any other action performed while a download is in progress) effectively “turns the IO stream from purely sequential to pseudo-random.” This corroborates our view that many eMMC parts are not equipped for optimal random read and write performance. The authors’ benchmark results (below) underscore the severity of the problem:

So, how can device manufacturers get better performance from their eMMC parts, and continue to leverage the simplicity of programming and consistency of design parameters that eMMC offers?

Simplistically put, the eMMC driver is responsible for flash-aware allocation of data to flash memory. The combined layers of the driver and the file system, sometimes known as the flash file system, are the level at which hardware behavior can be translated to software behavior in a way that enhances performance without compromising endurance and data integrity. The complementary interaction between the driver and the file system layer can bring further benefits to device performance, endurance and reliability. Getting this part of the system right goes a long way toward solving eMMC’s write amplification problem.

Here at Datalight, we have been researching the most efficient way of doing this, drawing on our decades of experience of developing driver and file system software for a wide array of flash parts. Stay tuned for more in-depth explanations on how we’re doing it, but for now we are very excited about the early test results we’re seeing in our lab, especially enhancements combining an optimized file system with our new eMMC driver.

Learn more about Datalight's eMMC solutions

AparnaBhaduri | December 19, 2012 | Datalight Products, Flash Industry Info, Flash Memory | 1 Comment

Multithreading in Focus: Performance & Power Efficiency

We’re constantly on the lookout for ways to help our customers boost performance and improve power efficiency, and often our inspiration comes by way of the conversations we have with them. Recently, several of these discussions highlighted user scenarios where the complexity of the application would benefit from an enhancement to the classic Dynamic Transaction Point™ technology found in our Reliance Nitro file system. Here are a couple examples of the user scenarios I’m talking about, specifically for multi-threaded environments:

In a multi-threaded system, the activity among threads can be unpredictable, sometimes requiring multiple writes by the file system to the media within milliseconds. Each write requiring its own transaction commit or flush by the file system takes a toll on performance with no real reliability benefit.

Another challenge in a multi-threaded system is power-efficient utilization of the processor when the file system is configured to commit data after specific time intervals. These transactions “wake up” the processor just to generate a request, even though no actual commits or flushes occur if there was no disk activity since the last transaction point. This unnecessary activation of an inactive processor wastes valuable power. By suspending thread activity until new disk activity occurs, battery life could be extended significantly.

Understanding how customers use the configurable transaction points of our Reliance Nitro file system was instrumental in improving Reliance Nitro. Below is a little background on Reliance Nitro and Dynamic Transaction Point technology:

The Reliance Nitro file system is a highly reliable, power interrupt-safe transactional file system. Keeping the reliability intact without risking loss or corruption of data means that customers have the flexibility to configure when a “transaction” (i.e. a set of operations that constitute a change as a whole), is to be written to the storage media from cache. This can even be done during operation of the device (run time), and includes the following options:

(a) Timed: Transacts or commits to storage media after a specified time interval (e.g., commit data to storage media every 10 milliseconds).

(b) Automatic: Transacts every time a file system event happens (e.g., a handheld scanner commits every time the database file is written to (file_close)).

(c) Application-controlled: Transacts every time all conditions are met (e.g., several files that are dependent on each other and need to be updated together have been all changed.)
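The three policies can be modeled with a toy transactional file system object. Every name in this sketch is hypothetical, invented for illustration; it is not Reliance Nitro's real configuration API.

```python
# Toy model of the three transaction-point policies listed above.
# All class and method names are hypothetical illustrations.

import time

class ToyTransactionalFS:
    def __init__(self, mode, interval_ms=None):
        self.mode, self.interval_ms = mode, interval_ms
        self.dirty, self.commits = False, 0
        self.last_commit = time.monotonic()

    def write(self, data):
        self.dirty = True
        if self.mode == "automatic":       # (b) transact on every file system event
            self.commit()

    def tick(self):                        # called periodically by the system
        if self.mode == "timed" and self.dirty:   # (a) transact after an interval...
            if (time.monotonic() - self.last_commit) * 1000 >= self.interval_ms:
                self.commit()              # ...but only if there is dirty data

    def commit(self):                      # (c) or invoked directly by the application
        if self.dirty:
            self.commits += 1
            self.dirty = False
            self.last_commit = time.monotonic()

fs = ToyTransactionalFS(mode="automatic")
for _ in range(3):
    fs.write(b"record")
print(fs.commits)    # 3 -- one commit per event in automatic mode
```

Note how the `tick` path skips the commit entirely when nothing is dirty; that guard is the power-saving behavior the post describes, since an idle processor is never woken just to discover there is nothing to flush.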

Using these options in combination gives customers the flexibility to choose under exactly which conditions they want to transact and protect important data, precision that enables total control over the balance between performance and protection for any use case.

Our efforts to address the needs of our multi-threaded customers described at the beginning of this blog post have led us to the next big breakthrough in embedded file system design, and the next big feature for Reliance Nitro. I will be blogging more about this feature soon!

Also coming soon, keep an eye out for our 2012 Customer Survey, another way we seek to continuously improve our understanding of what our customers need. We sincerely hope to get your feedback on the survey, but don’t hesitate to contact us anytime if you have suggestions for improvement.

Learn More About Dynamic Transaction Point technology

AparnaBhaduri | October 15, 2012 | Datalight Products, Flash File System, Flash Memory | Leave a comment