[clug] Disk size

rodney peters rodneyp at iinet.net.au
Wed Aug 29 06:47:28 UTC 2018



On 29/08/18 13:02, Eyal Lebedinsky via linux wrote:
> On 29/08/18 12:35, Bob Edwards via linux wrote:
>> On 29/08/18 09:26, Chris Smart via linux wrote:
>>> On Tue, 28 Aug 2018, at 14:28, Eyal Lebedinsky via linux wrote:
>>>> I thought we need some comic relief after the recent fun on the hill.
>>>> Not Linux specific but...
>>>>
>>>> For a while now I noticed that recently acquired USB disks offer less
>>>> storage than advertised.
>>>>
>>>> For example, a 64GB disk
>>>>     SanDisk Ultra USB 3.0 Flash Drive 64GB
>>>> provides (as fdisk tells)
>>>>     Disk /dev/sdj: 57.9 GiB, 62109253632 bytes, 121307136 sectors
>>>>
>>>> A smaller one
>>>>     SanDisk Ultra Fit USB 3.0 Flash Drive 32GB
>>>> has
>>>>     Disk /dev/sdj: 28.7 GiB, 30752636928 bytes, 60063744 sectors
>>>>
>>>> I just now saw a message (unrelated) on the linux-raid list mentioning:
>>>>     "I should note that for some reason the "32G" Optane only has
>>>>     29.260 G bytes (27.3 GiB)"
>>>> So SSDs are afflicted too.
>>>>
>>>> What is this trend? The GB/GiB excuse ran out of steam? What is it
>>>> now?
>>>>
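
A quick check of the units alone (Python; the byte count is taken from
the fdisk output quoted above) shows the decimal/binary difference no
longer covers the whole gap:

    box_bytes = 64 * 10**9             # decimal GB, as printed on the box
    print(box_bytes / 2**30)           # 59.60 GiB if all 64e9 bytes were usable
    print(62_109_253_632 / 2**30)      # 57.84 GiB, matching fdisk's 57.9 GiB
    # so roughly 1.8 GiB is missing beyond the GB/GiB difference
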
>>>> Will disk space collapse into a black hole and become a write-only 
>>>> medium?
>>>>
>>>
>>> The extra difference is probably due to over-provisioning, the 
>>> amount of space reserved on an SSD for managing the writing of 
>>> data (mostly needed as the drive starts getting full). The amount 
>>> of space often differs between vendors, but is typically 
>>> somewhere between 5% and 10%.
>>>
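
Plugging the quoted fdisk figures into that range (Python; the raw
flash sizes are assumptions, since vendors don't publish them):

    for box_gb, reported in ((64, 62_109_253_632), (32, 30_752_636_928)):
        decimal_raw = box_gb * 10**9   # if raw flash = advertised decimal size
        binary_raw = box_gb * 2**30    # if raw flash = the same figure in GiB
        print(box_gb,
              f"{1 - reported / decimal_raw:.1%}",
              f"{1 - reported / binary_raw:.1%}")
    # prints: 64 3.0% 9.6% / 32 3.9% 10.5% -- a binary-sized die with
    # roughly 10% held in reserve would land right at the top of that range
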
>>> IIRC, an SSD cannot write new data to a block unless it first goes 
>>> and clears out that block. This is a slow operation, so SSDs 
>>> instead write to blocks they know are already empty and then later 
>>> come back and clear out (garbage-collect) the unused blocks so that 
>>> they can take new data straight away next time.
>>>
>>> Writing to fresh blocks is fine while there's lots of spare space, 
>>> but as an SSD gets full the drive can't easily write to new blocks 
>>> due to fragmentation (where only some pages in a block hold live 
>>> data), so it slows down a lot as it has to move data and clear out 
>>> blocks first. The over-provisioned space works around this problem 
>>> by always making sure there is some free space (which the user 
>>> cannot use).
>>>
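
A toy model of that write path (a minimal Python sketch, not any real
drive's flash translation layer; the block and page counts are made up):

    PAGES_PER_BLOCK = 4
    TOTAL_BLOCKS = 8
    SPARE_BLOCKS = 2          # over-provisioning: never shown to the host

    class ToySSD:
        """Pages are write-once; erasure happens a whole block at a time."""
        def __init__(self):
            self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(TOTAL_BLOCKS)]
            self.where = {}   # logical page number -> (block, page)

        def _free_slot(self):
            for b, blk in enumerate(self.blocks):
                for p, page in enumerate(blk):
                    if page is None:
                        return b, p
            return None

        def write(self, lpn):
            slot = self._free_slot()
            if slot is None:             # no erased page left...
                self._collect()          # ...so GC first: the slow path
                slot = self._free_slot()
            if lpn in self.where:        # the old copy just goes stale in place
                ob, op = self.where[lpn]
                self.blocks[ob][op] = "stale"
            b, p = slot
            self.blocks[b][p] = lpn
            self.where[lpn] = (b, p)

        def _collect(self):
            # erase the block with the most stale pages, relocating live ones
            victim = max(range(TOTAL_BLOCKS),
                         key=lambda b: self.blocks[b].count("stale"))
            live = [x for x in self.blocks[victim] if x not in (None, "stale")]
            self.blocks[victim] = [None] * PAGES_PER_BLOCK
            for lpn in live:             # these extra copies are write amplification
                del self.where[lpn]
                self.write(lpn)

    ssd = ToySSD()
    visible = (TOTAL_BLOCKS - SPARE_BLOCKS) * PAGES_PER_BLOCK
    for lpn in range(visible):           # fill the advertised capacity
        ssd.write(lpn)
    for lpn in range(visible):           # overwrite it all: the spare blocks
        ssd.write(lpn)                   # absorb the churn between collections

Because the host can never hold more than "visible" live pages, every
collection victim has stale pages to reclaim; shrink SPARE_BLOCKS to 0
and the same workload stalls, since a victim with no stale pages frees
nothing.
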
>>> Notably, some manufacturers such as Samsung will allow you to 
>>> change the size of this over-provisioning (with a Windows tool), 
>>> and making it larger (say 25%) results in much greater performance 
>>> as the drive gets full.
>>>
>>> -c
>>>
>>
>> Nice reply, Chris - clear and to the point. Couldn't have said it any 
>> better myself.
>>
>> cheers,
>> Bob Edwards
>
>
> Regarding my original posting, my concern is that until recently a 
> 32GB (decimal) disk really had at least 32,000,000,000 bytes, and now 
> it does not. I expect the over-provisioning was there earlier too.
>
> So I expect that either the actual raw storage is 10% less today (a 
> business move), or they increased the over-provisioning by about 10% 
> (improving robustness?) using the same die size.
>
> cheers
>
It might be more a matter of chip size in the SSD.  Those who have 
bare-board devices such as mSATA or M.2 will probably find only 1, 2 or 
4 main chips on the board.  It is the chip makers, rather than the 
device makers, determining the size.

One of my nominal 240 GB SSDs has 468862128 sectors, which is not an 
even multiple of 1000 nor of 1024.  It might be a case of maximising 
the placement of dies on a wafer and sizing each die accordingly.
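
For the record, the arithmetic (Python; 512-byte logical sectors
assumed, which is what fdisk reports for these devices):

    sectors = 468_862_128
    size = sectors * 512
    print(size)               # 240057409536 bytes: just clears a nominal 240e9
    print(size / 10**9)       # 240.06 decimal GB
    print(size / 2**30)       # 223.57 GiB
    print(sectors % 1000, sectors % 1024)   # 128 176: round in neither base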

There has been quite a drought of flash memory chips for SSDs during 
the last 18 months and prices soared accordingly.  Prices have recently 
tumbled, probably indicating that the next iteration of fabrication 
processes has settled.  I don't have any of the latest generation, but 
it is possible the actual size has changed again.


Cheers,

Rod


