Automotive Challenges

Automobiles and trucks have gone from simple contraptions to full-blown multi-processor networks in just the last few years. Between M2M and the Internet of Things, today’s vehicles are communicating more than ever. Here are some of the challenges we have observed in this industry.

When the power comes on, the system head unit is the first to boot. What the user sees first, besides the default dash display, is the infotainment console – and in most cases, the image from the reverse camera. A strict requirement in Europe is to display this image within 3 seconds of vehicle power-up.

From the perspective of system internals, this means the file system and flash must mount very quickly. Only a small portion of these three seconds is available here – the rest is spent loading the OS kernel and bringing up other system resources. There is no time to resynchronize a journal or catalog each flash region. Datalight software is designed to meet these requirements at both the file system and flash driver level.

Consumers expect some of the same experience in their car as on their smartphones. This includes the latest version of software and installable applications – with both available over-the-air. Embedded software updates can be risky, especially in an environment where power is not guaranteed. The ideal system would perform the software update on media blocks not currently in use, then switch to the new system in one operation. Reliance Nitro does this and more.
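The “write to unused blocks, then switch in one operation” idea can be sketched in a few lines of C. This is an illustrative A/B-slot model under assumed names (`ab_store`, `ab_stage`, and `ab_commit` are hypothetical, not a Reliance Nitro API): the new image is staged in the inactive slot, and a single small state change performs the switch.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical A/B update layout -- names are illustrative, not a real API.
 * Two image slots; only the small 'active' flag selects which one boots. */
enum { SLOT_SIZE = 1024 };

struct ab_store {
    unsigned char slot[2][SLOT_SIZE];
    int active;                       /* slot the bootloader will load */
};

/* Stage the update in the slot that is NOT running; nothing live is touched. */
static int ab_stage(struct ab_store *s, const unsigned char *img, size_t len)
{
    if (len > SLOT_SIZE)
        return -1;                    /* image too large for a slot */
    memcpy(s->slot[1 - s->active], img, len);
    return 0;
}

/* The switch "in one operation": one tiny state change activates the new
 * image. Power loss before this point leaves the old image bootable. */
static void ab_commit(struct ab_store *s)
{
    s->active = 1 - s->active;
}
```

If power fails at any point during staging, the flag still points at the old image, so the device always has something valid to boot.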

Security is a broad topic for automotive, covering many things. One is the security of in-vehicle networks – keeping packets confidential and preventing malware injection. As we approach drive-by-wire, this one is critical! Talked about less, but of equal concern, is the confidentiality of user data. When you sell your car, you don’t want your private data going with it! Today’s eMMC media provides a Secure Delete option to remove data from the media completely. Datalight products fully support this hardware capability, giving applications full control of these important functions.

One important topic receiving a lot of press for other embedded devices is the shrinking lifetime of flash media. NAND flash – the internal storage in many consumer and automotive embedded devices – has a limited lifespan measured in program/erase cycles. Software designed with this in mind can reduce write amplification, or in other words minimize the additional write cycles generated by each media write. Once again, Datalight’s software stack comes through for the automotive embedded designer.
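Write amplification can be made concrete with a little arithmetic. The function below is a generic sketch (not Datalight code): it divides the bytes the flash actually programs by the bytes the host asked to write, so a 512-byte host write that forces a 128 KB block to be reprogrammed scores 256x.

```c
#include <assert.h>

/* Write amplification = physical bytes programmed / logical bytes requested.
 * Generic arithmetic, not Datalight code. A value of 1.0 is ideal; every
 * point above it is extra wear on a part with a fixed erase-cycle budget. */
static double write_amplification(unsigned long long logical_bytes,
                                  unsigned long long physical_bytes)
{
    if (logical_bytes == 0)
        return 0.0;                 /* avoid dividing by zero on idle media */
    return (double)physical_bytes / (double)logical_bytes;
}
```

Lowering this ratio is exactly what flash-aware software buys you: the same host workload consumes fewer of the part's finite program/erase cycles.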

While these may be new challenges for the automotive market, they are for the most part old challenges for the embedded world. You don’t necessarily need to reinvent the wheel (pun intended) when there are suppliers such as Datalight with a proven track record and industry recognized customer support.

Read more about Datalight Automotive Solutions

Thom Denholm | July 9, 2013 | Automotive, Datalight Products, Flash File System, Flash Memory Manager

Wish Granted

OEM customers have told us for years that struggles with NAND supply and a lack of standards cost them a great deal of time and money. A shortage or end-of-life (EOL) notice on a key component like NAND flash memory can cause product delivery delays that impact topline revenue and potentially company reputation. What they wish for is a “plug and play” option that lets them multi-source their flash memory. For years Datalight has provided a software standard to make parts switching less painful, but the lack of hardware standards continued to plague our customers.

As you might expect, many are excited by eMMC because it promises to address a big concern with supply chain and parts availability. By adopting this hardware “standard”, OEMs believe they will be free from vendor lock-in – and that is compelling. They and their ODM suppliers can source parts from whichever vendor is closest, has available supply, or is easiest to work with. Powerful stuff for negotiating cost of goods.

However, the inconvenient truth is that although eMMC is a hardware standard that ensures pin-compatible alternatives from a plethora of suppliers, there are so many exceptions and vendor-specific variants that substantial software modifications are still required. You will likely find a driver from a BSP provider that enables their board or processor to work with eMMC – at the most basic level. This purpose-built software is typically provided in unmodifiable binary form, written expeditiously to “check the box” for eMMC support. It is unlikely that the supplied driver will work as-is with special capabilities in parts from another vendor (or even with the next die shrink of the first vendor’s parts!). Then begins the quest to either get the software updated by the BSP provider or negotiate for source code access and invest in making the changes yourself. Can you say “schedule impact”?

Another potential shortcoming was pointed out to me recently by a long-time FlashFX customer — “how do you know how effective the wear-leveling is when it’s all done inside the black box?”

Ideally, the driver you use with your eMMC should be intelligent enough to assess the vendor-specific features available and the effectiveness of the wear leveling, and should be provided in source code so you can make modifications for as-yet-undefined capabilities of your hardware. And if you could have everything you wished for, the driver would be written by vendor-neutral flash software and technology experts. Hmmm. I think I know some of those.

 

KerriMcConnell | July 1, 2013 | Datalight Products, Flash Memory, Flash Memory Manager, Uncategorized

Software Power Consumption

One of the questions we receive at Datalight is whether our software affects the power consumption of embedded devices. Not being power experts ourselves, we enlisted an intern and a faculty advisor from the nearby University of Washington to help us with the process. After a little research, we selected the PicoScope as the best solution for measuring power.

As part of the final conclusions of this project, our intern Cameron wrote up his findings, which have now been published on the Pico Technology website – http://www.picotech.com/applications/datalight.html

The comparisons reflected in that article represent a version of our FlashFXe product that was in production at the time. The use case measured (SQLite operations) is the one targeted for improvement by our software. By writing the eMMC media in the most optimal fashion, FlashFXe generates fewer erases and uses less power. Each access to the media does more meaningful work in a more optimal way. At the media level, this also increases the dwell time, which in general has been shown to decrease the bit error rate over time, and may also improve long term data retention.

Datalight continues to research ways where software changes can improve the hardware experience for our customers.

Find out more about Datalight's FlashFXe

Thom Denholm | June 17, 2013 | Datalight Products, Extended Flash Life, Flash Memory, Flash Memory Manager

Managed NAND Performance: It’s All About Use Case

Last week the UK journal PC Pro published an interesting article about fast SD cards (http://www.pcpro.co.uk/features/380167/does-your-camera-need-a-fast-sd-card), with a good description of the SD card Class system. With some clever testing, they show how six cards perform in a continuous-shooting situation.

These tests also demonstrate how SD card manufacturers have customized their firmware to handle sequential write cases. A Class 10 card requires a minimum of 10 MB/s throughput, and a supplemental rating system for Ultra High Speed (UHS) indicates a higher clock rate and correspondingly higher transfer rate. For larger frame sizes (12-megapixel photos, HD video), high transfer rates are a requirement. The resulting data is almost always sequential, which matches the firmware characteristics well.

The article brings out one more interesting point: performance measurements taken with an SD card in a desktop system don’t always reflect the real use case. The authors end up running their tests in an actual camera, thereby getting as close to the use case as possible.

For an application which uses random I/O (such as tablets and other Android devices), these firmware optimizations aren’t necessary. In some cases, such optimizations actually lower random I/O performance. Similar firmware shows up in eMMC media as well. A software solution (such as FlashFXe) can adjust much of the I/O to be more sequential and more closely match the optimized performance.
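The idea of making random I/O "more sequential" can be sketched with a remapping table. This is a generic log-structured model, not FlashFXe internals: every logical write, however random its target, is appended at the next sequential physical location, and a map remembers where each sector now lives.

```c
#include <assert.h>

/* Generic log-structured remapping sketch -- the general technique,
 * not FlashFXe internals. */
enum { SECTORS = 64 };

struct log_remap {
    int map[SECTORS];   /* logical sector -> physical slot, -1 = unmapped */
    int head;           /* next sequential physical slot to program */
};

static void lr_init(struct log_remap *lr)
{
    for (int i = 0; i < SECTORS; i++)
        lr->map[i] = -1;
    lr->head = 0;
}

/* Any logical sector, however random, is written at the current head --
 * so the media only ever sees sequential programming. Returns the
 * physical slot used. */
static int lr_write(struct log_remap *lr, int logical)
{
    lr->map[logical] = lr->head;
    return lr->head++;
}
```

A real implementation also has to garbage-collect stale slots and persist the map across power loss, which is where most of the engineering effort goes.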

At Embedded World a few weeks ago we recorded our demonstration showing the benefits of our new FlashFXe product on eMMC.

Watch our FlashFXe Demo Video Here

Thom Denholm | March 15, 2013 | Flash Memory, Flash Memory Manager, Performance

Even When Not Using a Database, You Are Still Using a Database

Recently, we’ve focused considerable development effort on improving database performance for embedded devices, specifically for Android. This is because Android is a particularly database-centric environment.

On an Android platform, each application is equipped with its own SQLite database. Data stored here is accessible by any class in the application, but not by outside applications. The database is entirely self-contained and serverless, while still being transactional and still using standard SQL for executing queries. With this approach, a crash in one application (the dreaded “force close” message) will not affect the data store of any other application. While fantastic for protection, this method is quite often implemented on flash media, which was designed for large sequential reads and writes.

For years, benchmarks have touted the pure performance of a drive through large sequential reads and writes. On managed flash media, firmware programmers have responded by optimizing for this use case – at the expense of the random I/O used by most databases, including SQLite. Another challenge is the very high ratio of flushes performed by the database (sometimes 1:1). The majority of database writes are not aligned on sector boundaries – especially problematic for flash media, which must program an entire page at a time.

While there are a few unified “flash file systems” for Linux, such as YAFFS and JFFS2, designed specifically for flash memory, they have fallen out of favor because they do not plug neatly into the standard software stack and therefore cannot take advantage of standard Linux features such as the system cache. While traditional file systems such as VFAT and Ext2/3/4 can work with flash, they were not designed with that purpose in mind, and therefore their performance and reliability suffer. For example, discard support has largely been tacked onto Linux file systems and is still considered somewhat experimental. To quote the Linux v3.5 Ext4 documentation, discard support is “off by default until sufficient testing has been done.” Another example: file systems on flash memory typically benefit from a copy-on-write design, which ext4 does not use. The reality is that most file systems are designed for desktop (and often server) environments, where high resource usage is acceptable and power loss is infrequent.
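The copy-on-write principle mentioned above can be illustrated in miniature (a toy model, not Reliance Nitro source): an update writes a fresh copy of a block first, and only then repoints the root at it, so an interrupted update leaves either the old or the new version fully intact.

```c
#include <assert.h>
#include <string.h>

/* Toy copy-on-write volume -- a model of the principle, not Reliance Nitro. */
enum { BLK = 16, NBLK = 8 };

struct cow_vol {
    unsigned char blocks[NBLK][BLK];
    int root;        /* physical block currently holding the live data */
    int next_free;   /* naive allocator: next never-used block */
};

/* Update = write a fresh copy first, then repoint the root. If power fails
 * mid-copy, 'root' still names the intact old block. */
static int cow_update(struct cow_vol *v, const unsigned char data[BLK])
{
    int dst = v->next_free++;
    memcpy(v->blocks[dst], data, BLK);  /* new version written out of place */
    v->root = dst;                      /* single atomic repoint */
    return dst;
}
```

An in-place file system would have overwritten the live block directly, leaving a torn half-old, half-new block if power failed mid-write.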

Our solution to improving database performance on flash memory is to provide a more unified solution where the various pieces of the stack work in a cohesive fashion. Furthermore, the solution is specifically designed for embedded systems using flash memory, where power-loss is a common event. Datalight’s Reliance Nitro file system is a transactional, copy-on-write file system, designed from the ground up to support flash memory discards and power-loss safe operations.

The result of our work in this area is FlashFXe, a new Datalight product built on our many years of experience managing raw NAND, but designed for eMMC. When used together with Reliance Nitro, almost all write operations become sequential and aligned on sector boundaries for the highest performance. Internal operations are more efficiently organized for the copy-on-write nature of flash media. A multi-tiered approach allows small random writes with very frequent flushes to be efficiently handled while maintaining power-loss safe operations.

This month at Embedded World, we will be demonstrating the results of our efforts to improve database performance on embedded devices using Android. Prepare to be impressed!

Learn more about FlashFXe

Thom Denholm | February 12, 2013 | Datalight Products, Flash File System, Performance

Why CRCs are important

Datalight’s Reliance Nitro and journaling file systems such as ext4 are designed to recover from unexpected power interruption. These kinds of “post mortem” recoveries typically consist of determining which files are in which state, and restoring them to the proper working state. Methods like these are fine for recovering from a power failure, but what about a media failure?

When a media block fails, it lies either in empty space, user data, or file system data. A failing block in empty space is detected on the next write, which will either cause a failure at the application, or the block will be marked bad internally and the system will move on to another block. When a media block in the user space fails, it cannot be reliably read; often, the media driver will detect and report an unreadable sector, returning an error status (and probably no data) to the user application. When a media block containing file system data or metadata fails, it is the responsibility of the file system to detect and (if possible) repair that damage. Often the best thing that can be done is to stop writing to the media immediately.

In some ways, blocks lost due to media corruption present a problem similar to recovering deleted files. If it is detected quickly enough, user analysis can be done on the cyclical journal file, and this might help determine the previous state of the file system metadata. Information about the previous state can then be used to create a replacement for that block, effectively restoring a file.

Metadata checksums were added to several file system data blocks for ext4 in the 3.5 kernel release. Noticeably absent from this list are the indirect and double-indirect pointer blocks, used to allocate trees of blocks for a very large file. The latest release of Datalight’s Reliance Nitro file system (version 3.0) adds CRCs to all file system metadata and internal blocks, allowing for rapid and thorough detection of media failures.

Optional within this new version of Reliance Nitro is using CRCs on user data blocks, for individual files or entire volumes. This failsafe can be configured to write protect the volume or halt system operations. Diagnostic messages are also available to indicate the specific logical block number of the corrupted block.
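For readers unfamiliar with CRCs, here is a standard bitwise CRC-32 (the common reflected 0xEDB88320 polynomial used by Ethernet and zip). The post does not state which polynomial Reliance Nitro actually uses, so treat this as a generic example of how a per-block check detects corruption.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32, reflected polynomial 0xEDB88320. A metadata block's CRC
 * is computed at write time and verified at read time; any mismatch means
 * the block was corrupted on the media. Generic example only. */
static uint32_t crc32_calc(const unsigned char *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            if (crc & 1u)
                crc = (crc >> 1) ^ 0xEDB88320u;
            else
                crc >>= 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}
```

Even a single flipped bit anywhere in the block changes the CRC, which is what makes it effective for catching the media failures described above.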

The combination of full CRC protection on every metadata block and optional protection of user file data blocks is one of the key attributes of this release of Reliance Nitro. Embedded system designers can detect more media failures in testing, and can diagnose failed units more quickly, leading to greater success in the marketplace.

Learn more about Reliance Nitro

Thom Denholm | January 26, 2013 | Flash File System, Flash Memory, Reliability

eMMC Problems

If you’ve been following this blog, you’ve probably noticed a lot of discussion and analysis around eMMC. We’ve written about the reasons we are so excited about eMMC, but also why the Write Amplification issues caused by eMMC parts are a problem that needs more attention by the industry.

As more and more device manufacturers use eMMC in their devices, product reviews are beginning to highlight some of the limitations of eMMC that we have been discussing. A case in point is this recent review of the Google Nexus 7 by Anand Lal Shimpi and Brian Klug.

As the review points out, the performance downside of using eMMC parts is that they are “optimized for reading and writing large images as if they were used in a camera.” Also, eMMC was never designed to be used by a “full blown multitasking OS,” and therefore can cause major problems with device responsiveness. This is mainly because multitasking (i.e., any other action performed while a download is in progress) effectively “turns the IO stream from purely sequential to pseudo-random.” This corroborates our view that many eMMC parts are not equipped for optimal random read and write performance, and the authors’ benchmark results underscore the severity of the problem.

So, how can device manufacturers get better performance from their eMMC parts, and continue to leverage the simplicity of programming and consistency of design parameters that eMMC offers?

Simply put, the eMMC driver is responsible for flash-aware allocation of data to flash memory. The combined layers of the driver and the file system, sometimes known as the flash file system, are the level at which hardware behavior can be translated to software behavior in a way that enhances performance without compromising endurance or data integrity. The complementary interaction between the driver and the file system layer can bring further benefits to device performance, endurance, and reliability. Getting this part of the system right goes a long way toward solving eMMC’s write amplification problem.

Here at Datalight, we have been researching the most efficient way of doing this, drawing on our decades of experience developing driver and file system software for a wide array of flash parts. Stay tuned for more in-depth explanations of how we’re doing it; for now, we are very excited about the early test results we’re seeing in our lab, especially enhancements combining an optimized file system with our new eMMC driver.

Learn more about Datalight's eMMC solutions

AparnaBhaduri | December 19, 2012 | Datalight Products, Flash Industry Info, Flash Memory

Multithreading in Focus: Performance & Power Efficiency

We’re constantly on the lookout for ways to help our customers boost performance and improve power efficiency, and often our inspiration comes by way of the conversations we have with them. Recently, several of these discussions highlighted user scenarios where the complexity of the application would benefit from an enhancement to the classic Dynamic Transaction Point™ technology found in our Reliance Nitro file system. Here are a couple of examples of the user scenarios I’m talking about, specifically for multi-threaded environments:

In a multi-threaded system, activity among threads can be unpredictable, sometimes requiring the file system to perform multiple writes to the media within milliseconds. When each write requires its own transaction commit or flush, performance takes a toll with no real reliability benefit.

Another challenge in a multi-threaded system is power-efficient use of the processor when the file system is configured to commit data at specific time intervals. These transactions “wake up” the processor just to generate a request, even though no actual commits or flushes occur if there has been no disk activity since the last transaction point. This unnecessary activation of an idle processor wastes valuable power. By suspending thread activity until new disk activity occurs, battery life could be extended significantly.
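The power argument above can be made concrete with a toy model (illustrative only, not Reliance Nitro source). A timer-driven committer wakes the processor every tick whether or not anything changed; an event-driven one is woken only by the write itself, so an idle system generates no wakeups at all. In a real port, the event-driven committer would block on an OS event or condition variable instead of polling.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model comparing two commit-thread designs. */
struct committer {
    bool dirty;           /* has anything been written since the last commit? */
    unsigned wakeups;     /* times the processor was woken -- the power cost */
    unsigned commits;     /* times the wakeup actually did useful work */
};

/* Timer-driven: the processor is woken every interval regardless of activity. */
static void tick_timer_driven(struct committer *c)
{
    c->wakeups++;                    /* this wakeup costs power... */
    if (c->dirty) {
        c->commits++;                /* ...but only sometimes does work */
        c->dirty = false;
    }
}

/* Event-driven: the write itself signals the committer, so idle = asleep. */
static void write_event_driven(struct committer *c)
{
    c->dirty = true;
    c->wakeups++;                    /* exactly one wakeup, caused by the write */
    c->commits++;
    c->dirty = false;
}
```

On an idle device the timer-driven design racks up wakeups with zero commits, which is precisely the wasted battery life described above.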

Understanding how customers use the configurable transaction points of our Reliance Nitro file system was instrumental in improving Reliance Nitro. Below is a little background on Reliance Nitro and Dynamic Transaction Point technology:

The Reliance Nitro file system is a highly reliable, power-interrupt-safe transactional file system. Keeping that reliability intact without risking loss or corruption of data means that customers have the flexibility to configure when a “transaction” (i.e., a set of operations that constitute a change as a whole) is written to the storage media from cache. This can even be done while the device is running, and includes the following options:

(a) Timed: Transacts or commits to storage media after a specified time interval (e.g., commit data to storage media every 10 milliseconds).

(b) Automatic: Transacts every time a specified file system event happens (e.g., a handheld scanner commits every time the database file is written (file_close)).

(c) Application-controlled: Transacts every time all conditions are met (e.g., several files that are dependent on each other and need to be updated together have been all changed.)

Using these options in combination gives customers the flexibility to choose under exactly which conditions they want to transact and protect important data, precision that enables total control over the balance between performance and protection for any use case.

Our efforts to address the needs of our multi-threaded customers described at the beginning of this blog post have led us to the next big breakthrough in embedded file system design, and the next big feature for Reliance Nitro. I will be blogging more about this feature soon!

Also coming soon, keep an eye out for our 2012 Customer Survey, another way we seek to continuously improve our understanding of what our customers need. We sincerely hope to get your feedback on the survey, but don’t hesitate to contact us anytime if you have suggestions for improvement.

Learn More About Dynamic Transaction Point technology

AparnaBhaduri | October 15, 2012 | Datalight Products, Flash File System, Flash Memory

Device Longevity using Software

The new chief executive of Research in Motion Ltd., Thorsten Heins, mentioned recently that 80 to 90 percent of all BlackBerry users in the U.S. are still using older devices, rather than the latest BlackBerry 7.

Longevity of a consumer device is something that we at Datalight believe belongs firmly in the hands of the product designer, rather than being limited by the shortened lifespan of incorrectly programmed NAND flash media. Both Datalight’s FlashFX Tera and Reliance Nitro incorporate algorithms that reduce write amplification on all flash media. These methods are especially important on eMMC, which is at its heart NAND flash. In addition, the static and dynamic wear leveling in FlashFX Tera wears all flash evenly for the maximum achievable lifetime.
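Dynamic wear leveling can be sketched with a single policy decision (a generic illustration, not FlashFX Tera's actual algorithm): when a fresh block is needed, allocate the free block with the fewest erase cycles, so wear spreads across the whole part instead of concentrating on a few hot blocks.

```c
#include <assert.h>

/* Toy dynamic wear-leveling policy -- the general idea, not FlashFX Tera. */
enum { NBLOCKS = 8 };

/* Pick the free block with the fewest erase cycles. Without this policy,
 * a naive allocator can wear out its favorite blocks while others sit at
 * zero erases, ending the device's life early. */
static int pick_block(const unsigned erase_count[NBLOCKS])
{
    int best = 0;
    for (int i = 1; i < NBLOCKS; i++)
        if (erase_count[i] < erase_count[best])
            best = i;
    return best;
}
```

Static wear leveling goes one step further, occasionally relocating long-lived data so that even blocks holding cold data get cycled into the free pool.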

A shorter lifetime for some consumer devices, such as low-end cell phones, may be acceptable. However, many newer converged mobile devices that command a higher price, such as tablets, are expected by consumers to have a much longer lifetime. These devices may be replaced by the primary user with some frequency, although since they are viewed as mini-computers and therefore less “disposable,” they will likely be handed down to younger users rather than being discarded or recycled. Consumers will protest if they discover their $500 tablet has a lifespan of only 3 years, and they will be even more upset if, due to shrinking flash geometries and write amplification, the next version they purchase has an even shorter lifespan.

How will flash longevity affect your new embedded design?

Thom Denholm | March 6, 2012 | Extended Flash Life, Flash Industry Info, Flash Memory, Flash Memory Manager

Datalight Sponsors Local High School Robotics Team

The Arlington Neobots are not like other high school technology clubs. For one thing they have access to a phenomenal pool of mentors from local technology companies like Boeing, Microsoft and now Datalight. They also have a growing number of female members, a rarity in youth organizations oriented to math and science.

Founded in 2008 with seed money from Boeing, the team competes in an annual robot-building competition created by the national non-profit organization FIRST (For Inspiration and Recognition of Science and Technology), and this year the competition is already ramping up. For 2012, FIRST has challenged the robotics teams to a game similar to basketball called Rebound Rumble. Six teams are split into two alliances of three; one alliance is blue and the other red. During the 2-minute, 15-second match, teams compete by trying to make as many baskets as they can. Part of the match is devoted to a 15-second autonomous mode where the robot is controlled through an Xbox Kinect instead of the robot’s standard remote control. There are four hoops – one high, two middle, and one low. The higher the hoop, the more points awarded for making a basket in it.

The Neobots will need to work together in teams to finish their robot by the competition deadline. First, the one-week design phase involves team analysis of the game and its rules manual, and a group decision on game strategy and design criteria for the team robot. Next, the team will split into design groups to brainstorm, research and present their findings to the team. Then, using 3D models and prototypes, each group will propose a robot design to be voted on by the team. After the design is established, the build phase involves again breaking into sub-groups that are each assigned projects like System Integration, Programming, and Drive-Base, and other functions. The team will follow an iterative process; every major milestone will be tested rigorously before they proceed.

You might ask why Datalight would sponsor a high school robotics club. VP of Engineering Ken Whitaker puts it this way: “This is one of the most important things we can do as a technology company. What you’re seeing in its raw form is the next generation of embedded engineers, and we have a responsibility to nurture and support them. In a few years’ time I could see any of these motivated students ending up on my engineering team.”

Learn more about Datalight

RobHart | February 20, 2012 | Datalight Products