The challenge: making ext4 just as reliable as Datalight's Reliance Nitro file system, within the limitations of the POSIX specification. Unlike most real-world embedded designs, performance and media lifetime are not considerations for this exercise.
1) Change the way programs are written. By changing the habits of coders (and the underlying libraries), it is possible to gain some measure of reliability. I'm talking specifically about the write(), fsync(), close(), rename() combination discussed on some forums. This gets around ext4's aggressive buffering and gives some assurance that a file exists either in its complete old state or its complete new state.
What this does not handle is an update which overwrites part of an existing file, unless an entire new copy of the file is written. It also fails in a multithreaded environment: since these operations are not individually atomic, consistency can be lost if power is interrupted while one thread is performing an fsync() as another is writing or issuing a rename().
2) Journal the data as well as the metadata. For smaller files or overwrites, this puts all the data in the journal, available for playback when recovering from a power loss. While this worked well in ext3, the behavior changed in ext4: the delayed allocation strategy improves performance, but can lose more data on power failure than is expected. Applications which frequently rewrite many small files are especially vulnerable.
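Full data journaling is selected at mount time with the data=journal option. A hypothetical /etc/fstab entry might look like this (the device and mount point are placeholders):

```
/dev/mmcblk0p2  /data  ext4  data=journal,errors=remount-ro  0  2
```

The trade-off is roughly doubled write traffic, since every data block passes through the journal before reaching its final location.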
Shortening the kernel's writeback interval, via tunables such as vm.dirty_expire_centisecs, can help mitigate this by reducing the amount of data that can be lost.
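These tunables live under /proc/sys/vm and can be set persistently through sysctl. The values below are illustrative, not a recommendation; shorter intervals trade throughput and media wear for a smaller window of loss:

```
# /etc/sysctl.conf: flush dirty pages after 1 s instead of the 30 s default
vm.dirty_expire_centisecs = 100
vm.dirty_writeback_centisecs = 100
```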
Some changes just aren't possible, however. Writes are not atomic, as the metadata and data are written separately; power can be interrupted between the two, no matter what is done at the application level. While these suggestions make ext4 more reliable, the cost in changes to user code, lost performance, and additional wear is not insubstantial. Far better to use a file system designed for unexpected power loss, where atomic writes let the system designer decide when to put data at risk.