[clug] Kernel without initramfs

Daniel Pittman daniel at rimspace.net
Sun Mar 29 00:03:36 GMT 2009

steve jenkin <sjenkin at canb.auug.org.au> writes:
> Daniel Pittman wrote on 28/3/09 10:07 AM:
>> That varies, as the other thread says, a lot based on your SSD.
>> Notably, the Intel SSDs use a block map, and combine multiple small
>> writes into a single large write with little regard for the LBA of the
>> individual blocks.
>> At least, in so far as filesystems written by Ted go. :)
> I thought the thread originator was using a CF+IDE adaptor, not an SSD
> unit.
> AFAI-can-see the only logical difference between the two flash devices
> is the connectors - but chip manufacturers might make very different
> designs & trade-offs.
> I'm suspecting there is more difference between manufacturers and
> product lines than between generic {CF vs SSD}.

Generally speaking, yes.

Manufacturers are probably going to target different use cases for each
device class, based on the bulk of its use:

CF cards will mostly be deployed to limited write environments, running
FAT, and used to store configuration files or pictures.

SSD devices will be deployed to active environments running NTFS.

USB devices will mostly run FAT, occasionally NTFS, and be used as giant
floppy disks.

That said, the cost of building multiple wear levelling technologies
means that vendors will try to avoid doing so unless there is a
compelling reason not to.

(In other words, I would expect anything except a SATA-SSD to have more
 or less the same algorithm, and generation 2+ SSD devices to have
 better ones.)

> Anyone know about this?

Again, this mostly stays in the hands of the vendors, who don't talk a
lot about the wear levelling algorithms.
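A generic dynamic wear-levelling scheme can nonetheless be sketched.
This is an illustrative toy only, not any vendor's actual algorithm: it
keeps a block map (as the Intel SSDs above are described as doing) and
always lands a write on the least-worn free physical block, regardless
of the logical address.

```python
# Toy dynamic wear levelling via a block map.  All names and structure
# here are illustrative assumptions, not a real device's firmware.

class WearLevelledDevice:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.block_map = {}          # logical block -> physical block
        self.free = set(range(num_physical_blocks))

    def write(self, logical_block):
        # Place the write on the least-worn free physical block,
        # with no regard for the logical (LBA) address.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        old = self.block_map.get(logical_block)
        if old is not None:
            # The stale copy is invalidated and its block erased
            # for reuse, which is what costs an erase cycle.
            self.erase_counts[old] += 1
            self.free.add(old)
        self.free.discard(target)
        self.block_map[logical_block] = target
        return target

dev = WearLevelledDevice(4)
for _ in range(100):
    dev.write(0)                     # hammer a single logical block
# Wear is spread across all physical blocks instead of burning one out:
print(dev.erase_counts)              # e.g. [25, 25, 25, 24]
```

The point of the sketch is the failure mode it avoids: without the map,
100,000 rewrites of one logical block would exhaust one physical block;
with it, every block ages at roughly the same rate.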

> PS: I read a piece by consultant/journalist/salesman Robin Harris in
> 2006 talking about using 100K-cycle Single-Level NAND flash SSD in
> 'enterprise' devices for tasks such as writing 2Kb log-file blocks.
> I've wondered if he got it right or was off in a hype-bubble.
> <http://storagemojo.com/2006/10/19/ram-based-ssds-are-toast-yippie-ki-yay/>
> "Let’s say you want to use it for a log file running 2k I/Os (question:
> do systems still do 2k I/Os? readers please help). So a 32 GB drive has
> 16,384,000 2k locations, which multiplied by 100,000 equals 1.64
> trillion 2k I/Os. So if your server is updating the log file 500 times
> per second, which would be a reasonably busy server, you’d be doing
> 1,800,000 RW cycles per hour.

The author here assumes that every write is synchronous.  In some cases
that is true, but it certainly isn't the common case in practice: most
writes are buffered and coalesced by the page cache before they ever
reach the device.

He also assumes that writes are strictly "one per 2K block" from block
zero to block N, then wrapping.  Again, this isn't likely in most cases.


> All for, I estimate, based on chip prices for about $1k per drive, or
> about 1/40th the price of a standard RAM-based SSD. So call me crazy,
> but I say flash is set to conquer the esoteric world of
> high-performance SSDs.

He is right, but his math is off.  Goes to show that being right doesn't
automatically make your numbers correct, or something. ;)

