That cracking you may or may not have heard last month was the sound of SanDisk and Toshiba breaking the sub-20 nanometer NAND barrier. Flying in the face of conventional wisdom (and more than a few industry analysts), both companies recently announced they will be delivering 19nm NAND this year. Intel and Micron are close behind, each with their own 20 nanometer announcements.

Those who said it couldn't (or shouldn't) be done had some very compelling reasons, chief among them that the physics behind multi-level cell architecture at a 1x nanometer geometry is shaky at best. How many electrons will there be in a 1x nanometer cell? How many levels of data can possibly be detected with so few of them? The supporting technologies for this detection, not to mention correction of the unavoidable errors that will creep in, will be critical. In an industry that has come to expect product innovation in the form of a die shrink roughly every 12 to 18 months, keeping pace with this trend indefinitely pushes not only the boundaries of physics but also manufacturers' technical abilities.

How low can they go? While the introduction of 19nm parts shows that innovation and scaling of NAND flash memory continue at breakneck speed, one wonders when this shrinkage will finally reach its end point. And while the drive for NAND innovation has dramatically improved both the cost and performance of the technology, moving to ever smaller geometries is beginning to have severe consequences for data storage reliability and flash endurance, challenges that must be addressed not only by the supporting hardware but also by the file system and flash management software.

Bottom line: Will the devices you're responsible for provide the performance, life span and flexibility your customers require? What contingencies should you be planning for as storage technologies get ever smaller?
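To make the skeptics' concern concrete, here is a back-of-envelope sketch. The total electron count used below (on the order of 100 electrons on the gate at a 1x-nm node) is an assumed, often-quoted order of magnitude, not vendor data; the point is simply how quickly multi-level cells divide that electron budget:

```python
# Back-of-envelope: electrons separating adjacent threshold-voltage levels
# in a shrinking NAND cell. All numbers are illustrative assumptions.

def electrons_per_level(total_electrons: int, bits_per_cell: int) -> float:
    """Electrons available between adjacent threshold-voltage levels.

    A cell storing n bits must distinguish 2**n charge levels,
    i.e. 2**n - 1 gaps between levels.
    """
    levels = 2 ** bits_per_cell
    return total_electrons / (levels - 1)

# Assume roughly 100 electrons total at a 1x-nm node (assumption, not a
# measured figure) and compare SLC, MLC and TLC cells.
for bits in (1, 2, 3):
    gap = electrons_per_level(100, bits)
    print(f"{bits} bit(s)/cell: ~{gap:.0f} electrons per level gap")
```

With a budget that small, a TLC cell leaves only a dozen or so electrons between levels, which is why sensing accuracy and error correction become critical rather than optional.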
Posted by: Thom Denholm