[clug] Flash memory: Load-levelling question.

David Cottrill cottrill.david at gmail.com
Fri Mar 27 13:29:09 GMT 2009


Having spare capacity could help, particularly if you do not partition your
flash media. Disk fragmentation is actually the friend of flash media,
because seek time is zero on a good flash device and a very small constant on
the others. Fragmentation only makes seek times longer where there is a
physical distance to cross - in flash media all distances between addresses
are equal: the time it takes the wear-levelling chip to decide where it put
that chunk (cache size) of data. Presumably the location is a hash of the
address of the data itself, in the cheap cards at least.
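A real controller's translation layer is a trade secret, but the principle is
easy to mock up. The sketch below is a toy model only (the class, names and
policy are all invented for illustration): logical blocks are remapped to the
least-worn free physical block on each rewrite.

```python
# Toy flash translation layer: each rewrite of a logical block is
# redirected to the least-worn free physical block, and the stale copy
# is scheduled for erase.  Purely illustrative - real wear-levelling
# controllers keep persistent mapping tables and do far more.

class ToyFTL:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # wear per physical block
        self.mapping = {}                      # logical -> physical
        self.free = set(range(num_blocks))     # physical blocks not in use

    def write(self, logical):
        """Remap `logical` to the least-worn free block; return it."""
        if logical in self.mapping:
            old = self.mapping.pop(logical)
            self.erase_counts[old] += 1        # stale copy must be erased
            self.free.add(old)
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.mapping[logical] = target
        return target
```

Rewriting the same logical block repeatedly then lands on a different
physical block each time, spreading the erasures across the device.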

As for preserving a raw flash device, this all changes with a change of
fundamental principles. When you move from a CF (Compact Flash) card, made to
resemble an IDE disk, to a bare flash chip working on direct addressing, a
few things are going to change. For one, wear levelling is now done in
software, which creates almost as many opportunities as problems. With direct
software control you could (but current systems don't) place data according
to how many bits would need to change. Just think - I need to store 0xFFE4,
and as luck would have it there's an unaddressed segment right there already
containing 0xFFE4. No bits would need to be changed except in the addressing,
giving the minimum possible number of writes.
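That content-addressed idea amounts to minimising the Hamming distance
between the new data and whatever a candidate segment already holds. A toy
illustration (nothing like a real controller; both functions are invented
for the example):

```python
def bit_flips(a, b):
    """Number of bit positions in which two words differ."""
    return bin(a ^ b).count("1")

def best_segment(value, free_segments):
    """Pick the free segment whose current contents need the fewest
    bit changes to become `value`; returns its address.
    `free_segments` maps address -> current raw contents."""
    return min(free_segments,
               key=lambda addr: bit_flips(free_segments[addr], value))
```

If some free segment already holds the value, zero cells change and only the
address mapping needs to be written.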

That isn't going to happen of course, but by compressing the raw data and
integrating wear levelling in software, as JFFS2 does, you write the minimum
(compressed) number of bits to an address that hopefully hasn't been hammered
repeatedly. What happens when that software wear levelling sits on top of a
device that already wear-levels in hardware is largely unknown - but
presumably disastrous.
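Part of what makes the JFFS approach workable is the programming asymmetry of
flash cells: between erases, bits can be driven in one direction only (1 to 0
on NAND). A small sketch of that constraint, assuming NAND-style cells (the
function name is mine, not any real API):

```python
def can_overwrite_in_place(old, new):
    """True if `new` can be programmed over `old` without an erase,
    assuming NAND-style cells that can only be cleared (1 -> 0)."""
    # Every 1 bit in `new` must already be 1 in `old`, i.e. the
    # write only ever turns 1s into 0s.
    return (old & new) == new
```

An erased block reads as all 1s, so any first write succeeds; later writes
succeed only while they do nothing but clear bits.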

David



On Fri, Mar 27, 2009 at 7:41 PM, Paul Wayper <paulway at mabula.net> wrote:

> steve jenkin wrote:
> | Daniel Pittman wrote on 27/3/09 11:31 AM:
> | <big snip>
> |>> If someone can point me at a definitive answer, I'd be most
> |>> appreciative :-)
> |> Sadly, these algorithms are trade secrets, so while you can get general
> |> answers by googling up the whitepapers that flash folks publish, details
> |> are practically impossible to come by and we mostly rely on the word
> |> from people who have signed the NDAs. :?
>
> It should be noted here that half the problem is that each block of flash
> memory can either accumulate bits in an 'or' fashion - 'nand' memory
> accumulates zeros, 'or' memory accumulates ones, as far as I understand
> it - or be erased back to the 'nothing accumulated' state.  Each erasure
> is what hurts the chips.  Block sizes vary, but if you think in terms of
> 64 KiB blocks then you probably won't be far wrong.
>
> My idea is to aggregate each block so that it represents a smaller number
> of bits.  For example, each byte in the raw block might represent one bit
> in the output.  The representation might vary, but for example that bit
> might be the odd parity of the byte - whether the number of bits set is
> odd.  Then, each time you have to write to that bit, you first see if you
> can add bits to the raw byte to change the parity.  That means you
> effectively get eight changes of state - probably around sixteen writes
> on average - in that bit before you have to erase the whole block and
> start over.
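Paul's parity scheme can be sketched as a toy model (here assuming
'or'-style cells that can only accumulate ones between erases; a NAND
version would clear bits instead - the function names are invented):

```python
def parity(byte):
    """The logical bit a raw byte encodes: its odd parity."""
    return bin(byte).count("1") & 1

def write_logical_bit(byte, want):
    """Return a new raw byte encoding the logical bit `want` by
    setting at most one more cell, or None when all eight cells are
    used and the block needs an erase.  Assumes 'or'-style cells
    that can only go 0 -> 1 between erases."""
    if parity(byte) == want:
        return byte                    # already encodes the right bit
    for i in range(8):
        if not (byte >> i) & 1:        # find a cell still at 0
            return byte | (1 << i)     # setting it flips the parity
    return None                        # all eight cells used: erase required
```

Each byte absorbs eight changes of the logical bit before an erase, which
matches the eight-to-sixteen-writes accounting above.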
>
> You now have a device that is one eighth of its raw size but can sustain
> around eight to sixteen times as many writes (depending on how you do your
> accounting) before the equivalent raw flash memory would have worn out.
> You also have the modest overhead of the circuitry doing that bit
> combining - modest because, next to the circuitry already managing block
> addressing, reading, writing, and erasures, it's not that much extra.
>
> It's anyone's conjecture whether any companies are already using this.
>
> | I'm still left wondering:
> |
> |   If I build an embedded system with a CF as an IDE disk,
> |   will I get better write life if the disk image is (much)
> |   smaller than the CF capacity?
>
> As a generalisation, I'd say yes for a wear-levelled device - e.g. a USB
> key - and I don't know for other devices, where you can't tell whether
> they are raw or wear-levelled.
>
> HTH,
>
> Paul
>
> --
> linux mailing list
> linux at lists.samba.org
> https://lists.samba.org/mailman/listinfo/linux
>

