[clug] Flash memory: Load-levelling question.

Paul Wayper paulway at mabula.net
Fri Mar 27 08:41:32 GMT 2009



steve jenkin wrote:
| Daniel Pittman wrote on 27/3/09 11:31 AM:
| <big snip>
|>> If someone can point me at a definitive answer, I'd be most
|>> appreciative :-)
|> Sadly, these algorithms are trade secrets, so while you can get general
|> answers by googling up the whitepapers that flash folks publish, details
|> are practically impossible to come by and we mostly rely on the word
|> from people who have signed the NDAs. :?

It should be noted here that half the problem is that each block of flash
memory can only do one of two things: accumulate bits in one direction - as
far as I understand it, both NAND and NOR flash erase a block to all ones,
and programming can only clear individual bits to zero - or erase the entire
block back to the 'nothing accumulated' state.  Each erasure is what hurts
the chips.  Erase block sizes vary, but if you think in terms of 64 KiB
blocks then you probably won't be far wrong.
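
To make that asymmetry concrete, here's a rough sketch in C of the two
operations as I understand them (the function names and the 64 KiB figure
are placeholders of mine, not any real controller's interface):

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 65536        /* a typical 64 KiB erase block */

/* Programming can only clear bits: the new data is effectively ANDed
 * with what the cells already hold, so a 1 can become a 0 but never
 * the reverse without an erase. */
static void flash_program(uint8_t *cell, uint8_t data)
{
        *cell &= data;
}

/* Erasing is the only way back to all ones, and it works on the whole
 * block at once.  This is the operation that wears the chip out. */
static void flash_erase(uint8_t block[BLOCK_SIZE])
{
        memset(block, 0xFF, BLOCK_SIZE);
}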

My idea is to aggregate each block so that it presents a smaller number of
bits.  For example, each byte in the raw block might represent one bit in the
output.  The representation might vary, but for example that bit might be the
odd parity of the byte - whether the number of bits set is odd.  Then, each
time you have to write to that bit, you first see if you can flip one more
bit of the raw byte in the allowed direction to change the parity.  That
means you effectively get eight changes of state out of each byte - probably
around sixteen writes on average, since a write that stores the value the
byte already holds costs nothing - before you have to erase the whole block
and start over.
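
In C, the per-byte bookkeeping might look something like this (a sketch
only: bit_read and bit_write are names I've made up, __builtin_parity is the
GCC builtin, and clearing the lowest set bit first is an arbitrary choice):

#include <stdbool.h>
#include <stdint.h>

/* The logical bit is the odd parity of the raw byte.  A freshly
 * erased byte (0xFF, eight bits set) therefore reads as 0. */
static bool bit_read(uint8_t raw)
{
        return __builtin_parity(raw);
}

/* Store a logical bit.  If the byte already holds that value, nothing
 * is touched.  Otherwise clear one more 1-bit, which flips the
 * parity.  Returns false once the byte is exhausted (0x00) and the
 * whole block needs erasing first. */
static bool bit_write(uint8_t *raw, bool value)
{
        if (bit_read(*raw) == value)
                return true;    /* free: no change of state needed */
        if (*raw == 0x00)
                return false;   /* used up: erase the block */
        *raw &= *raw - 1;       /* clear the lowest set bit */
        return true;
}

Feeding that random bit values gives exactly eight changes of state per byte,
and about sixteen writes, between erases - which is where the figures above
come from.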

You now have a device that is one eighth of its raw size but can sustain
around eight to sixteen times as many writes (depending on how you do your
accounting) as the equivalent raw flash memory would before wearing out.  You
also have the modest overhead of the circuitry doing that bit combining - I
say modest because, next to the circuitry already managing the block
addressing, reading, writing, and erasures, it's not that much extra.
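
To put illustrative numbers on that (mine, not from any datasheet): a 1 GiB
raw part whose blocks are rated for 10,000 erase cycles would present as a
128 MiB device, but each logical bit could change state on the order of
80,000 times - and absorb perhaps 160,000 writes, counting the free ones -
before exhausting its block's erase budget.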

It's anyone's guess whether any companies are already doing this.

| I'm still left wondering:
|
|   If I build an embedded system with a CF as an IDE disk,
|   will I get better write life if the disk image is (much)
|   smaller than the CF capacity?

As a generalisation, I'd say yes for a wear-levelled device - e.g. a USB
key - since the spare capacity gives the wear-levelling algorithm more erased
blocks to rotate writes through.  For other devices, where you can't tell
whether you're dealing with raw or wear-levelled flash, I don't know.
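
To put a rough number on the 'yes' (an illustrative figure, not a vendor
spec): if a 2 GiB wear-levelled CF card carries a 256 MiB image and the
controller rotates erases through the unused blocks, the hot data has roughly
eight times as many blocks to spread its wear across, so all else being equal
you'd hope for something like eight times the write life of a full card.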

HTH,

Paul

