[clug] Fwd: Re: Storage and networking for Storage as used for supporting [Enterprise] virtualized environments

George at Clug Clug at goproject.info
Sun Aug 18 23:19:17 UTC 2019


Brett,

You said, "The specific line that caught my eye was the one for iSCSI.   We're only using 100Gb/s
ethernet today and 400Gb/s is still a way off.
iSCSI                1GB/s, 10GB/s, and 40GB/s"

Thanks for the info.  I cannot even afford 10Gb/s switches. Would you know anywhere in Australia where one can buy 10Gb/s ethernet switches below $200? Most that I see are around $500.

I had based the iSCSI figures in the list on this information:
https://esj.com/articles/2014/05/16/fibre-channel-or-iscsi.aspx
iSCSI uses standard Ethernet switches and cabling and operates at speeds of 1GB/s, 10GB/s, and 40GB/s. Basically, as Ethernet continues to advance, iSCSI advances right along with it.
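
To make the bits-versus-bytes distinction concrete, here is a minimal Python sketch (my own, not from any of the sources) that converts an Ethernet line rate in Gb/s into an approximate payload figure in MB/s. The ~94% efficiency factor is an assumption taken from the standard-frame Gigabit Ethernet figures quoted further down this thread:

def payload_MBps(line_rate_gbps, efficiency=0.94):
    """Approximate usable payload (MB/s) for a line rate given in Gb/s."""
    # divide by 8 for bits -> bytes, then scale down to megabytes;
    # the 0.94 efficiency is an assumed figure for standard-frame TCP
    return line_rate_gbps * 1e9 * efficiency / 8 / 1e6

for rate in (1, 10, 40, 100, 400):
    print(f"{rate:>3} Gb/s Ethernet ~= {payload_MBps(rate):8,.0f} MB/s payload")

On those numbers the article's "40GB/s" iSCSI entry only makes sense if read as 40Gb/s, i.e. roughly 4.7GB/s of actual payload.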

From what I can find on the Internet, 400Gb/s ethernet devices can be purchased today, though I don't see any for sale in online shops. I wonder how much they cost? As the old saying goes, "if you have to ask, you can't afford it".

https://www.theregister.co.uk/2014/06/12/400gbs_ethernet/
(2014) Californian company Ixia has shown what it claims is the world’s first functioning 400Gb/s Higher Speed Ethernet test rig, based on the IEEE P802.3bs protocol.

https://ir.mellanox.com/news-releases/news-release-details/mellanox-introduces-ethernet-cloud-fabric-technology-based
SUNNYVALE, Calif. & YOKNEAM, Israel--(BUSINESS WIRE)--May 20, 2019-- Mellanox® Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today introduced breakthrough Ethernet Cloud Fabric (ECF) technology based on Spectrum-2, the world’s most advanced 100/200/400 Gb/s Ethernet switches.

https://interfacemasters.com/company/blog/the-migration-to-400-gigabit-ethernet-400gbe/
The majority of hyper-scale data centers have used 100 Gigabit Ethernet (100GbE) links and are in the process of transitioning to 400 Gigabit Ethernet (400GbE) links to achieve higher throughput. Per Crehan Research, 400G deployments started in 2018 and will become routine in data centers by 2020 as rapid 400GbE adoption by cloud vendors enables dramatically lower unit pricing in the initial phase of its lifecycle.

If you ever see any in Australia, please do let me know. The media do tend to overstate things.

Do you use large scale RAID (i.e. arrays of 24 or more drives)?  An 8-drive RAID is all that I have used, and I did not see any great performance improvements there.
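
As a rough sanity check on the RAID figures in the table quoted further down, here is a back-of-envelope Python sketch. It assumes the rule of thumb that RAID 6 sequential reads scale with the N-2 data drives while writes see no gain, and it reuses the 100MB/s 7200 RPM SATA figure from that same table:

PER_DRIVE_MBPS = 100  # 7200 RPM SATA sequential figure from the table below

def raid6_seq_read(total_drives, per_drive=PER_DRIVE_MBPS):
    """Estimated RAID 6 sequential read throughput in MB/s (rule of thumb only)."""
    # two drives' worth of capacity hold parity, so reads scale with N-2
    return (total_drives - 2) * per_drive

for n in (6, 8, 24):
    print(f"RAID 6, {n:>2} drives: ~{raid6_seq_read(n)} MB/s sequential read")

That gives 2200MB/s for the 24-drive case, matching the 22x entry in the table, though the 6-drive row's 600MB/s implies faster drives than 100MB/s.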

George.


On Sunday, 18-08-2019 at 23:09 George at Clug wrote:

    Brett,

    Apologies that I included your personal email address when I replied. I keep forgetting to remove the sender's email address.

    Thanks for your response.

    George.


    On Sunday, 18-08-2019 at 23:05 George at Clug wrote:

        On Sunday, 18-08-2019 at 21:56 Brett Worth wrote:
        > George,
        >
        > I think you are mixing bits per second and bytes per second in that list.
        > E.g. the Ethernet numbers would be Gb/s and the SATA numbers would be
        > MB/s.
        >
        > Brett

        Brett,

        I had been very careful to ensure that all numbers were converted to bytes of actual data transferred per second, not bits per second. Network speeds are normally quoted in 'bits transmitted per second', not 'actual data bytes per second'.

        If anyone does find errors, please let me know.

        Theoretical values tend to be the ones specified, while I have been trying to determine actual data transfer speeds in bytes per second (B/s).
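
        As an example of the conversions I have been doing, this small Python sketch derives the per-frame efficiency of TCP over Ethernet from the header sizes, and reproduces (to within rounding) the 118MB/s and 123MB/s Gigabit figures quoted below. The overhead constants are the standard Ethernet/TCP/IPv4 header sizes, not taken from any one source:

        ETH_OVERHEAD = 18 + 8 + 12  # Ethernet header+FCS, preamble, inter-frame gap (bytes)
        TCP_IP_HEADERS = 40         # IPv4 + TCP headers without options (bytes)

        def tcp_payload_MBps(link_gbps, mtu):
            """Usable TCP payload (MB/s) on an Ethernet link with the given MTU."""
            payload = mtu - TCP_IP_HEADERS  # TCP payload bytes carried per frame
            wire = mtu + ETH_OVERHEAD       # bytes actually sent on the wire per frame
            return link_gbps * 1e9 / 8 * payload / wire / 1e6

        print(f"GbE, MTU 1500: {tcp_payload_MBps(1, 1500):.1f} MB/s")  # ~118.7
        print(f"GbE, MTU 9000: {tcp_payload_MBps(1, 9000):.1f} MB/s")  # ~123.9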

        Interestingly, NVMe (M.2) provides local storage that is faster than most wired LANs, even 10GBASE-T. It would be nice if my laptop could use NVMe; sadly it only supports mSATA (a bit less than SATA III speeds).
        https://www.ramcity.com.au/data-storage/internal-ssd/m.2-pcie/SSDPEKKW020T8X1
        Intel 760P 2TB NVMe M.2 PCIe 3.0 x4 80mm (2280) Internal SSD - 2TB  Specs: 3230MB/s Read, 1625MB/s Write • PCIe 3.0 x4, NVMe • 5 Years Warranty

        I realised that I did not include Bluetooth, WiFi, and USB devices, so I am researching these too (most values are stated as theoretical bits per second, and there is a huge variance due to configuration, so standard real-world configurations will likely be at the lower end).

        https://www.intel.com/content/www/us/en/support/articles/000005725/network-and-i-o/wireless-networking.html
        Theoretical Values
        Legacy 802.11       1 and 2 Mbps
        802.11b             1, 2, 5.5 and 11 Mbps
        802.11a             6, 9, 12, 18, 24, 36, 48 and 54 Mbps
        802.11g             6, 9, 12, 18, 24, 36, 48 and 54 Mbps; can revert to 1, 2, 5.5, and 11 Mbps using DSSS and CCK
        802.11n             72.2 Mbps ~ 450 Mbps
        802.11ac wave1      200 Mbps ~ 866 Mbps
        802.11ac wave2      200 Mbps ~ 1.73 Gbps
        802.11ax (Wi-Fi 6)  143 Mbps ~ 2.4 Gbps

        802.11c  Bridging 802.11 and 802.1d
        802.11d  Internationalization
        802.11e  Improving service quality
        802.11f  Roaming
        802.11g  The 802.11g standard offers high bandwidth (54 Mbps maximum throughput, 30 Mbps in practice) on the 2.4 GHz frequency range.
        802.11h  The 802.11h standard is intended to bring together the 802.11 standard and the European standard (HiperLAN 2).
        802.11i  The 802.11i standard is meant to improve the security of data transfers.
        802.11IR The 802.11IR standard was elaborated to use infra-red signals; it has become technologically obsolete.
        802.11j  The 802.11j standard is to Japanese regulation what 802.11h is to European regulation.

        https://www.speedguide.net/faq/what-is-the-actual-real-life-speed-of-wireless-374
        Below is a breakdown of the various 802.11 WiFi standards and their corresponding maximum speeds. Theoretical wireless speeds (combined upstream and downstream) are as follows:
            802.11b      2-3 Mbps : Theoretical 11 Mbps (2.4GHz)
            802.11a       30 Mbps : Theoretical 54 Mbps (5 GHz)
            802.11g      ~20 Mbps : Theoretical 54 Mbps (2.4GHz)
            802.11n    40-50 Mbps : Theoretical 600 Mbps (2.4GHz and 5 GHz) - 150Mbps typical when not using bonding channels
            802.11ac 70-100+ Mbps : Theoretical 1300+Mbps (5 GHz) - newer standard that uses wider channels, QAM and spatial streams for higher throughput
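
        To put that gap into perspective, here is a quick Python calculation of real-world throughput as a fraction of theoretical, using the speedguide.net figures above (where a range is given I have taken a midpoint, which is my own choice):

        # (real-world Mbps, theoretical Mbps) from the list above
        wifi = {
            "802.11b":  (2.5, 11),
            "802.11a":  (30, 54),
            "802.11g":  (20, 54),
            "802.11n":  (45, 600),
            "802.11ac": (85, 1300),
        }
        for std, (real, theory) in wifi.items():
            print(f"{std:>9}: {real / theory:5.0%} of theoretical")

        The newer standards deliver a much smaller fraction of their headline rate, which is exactly why I want to record real-world figures rather than theoretical ones.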

        https://community.arm.com/developer/ip-products/system/b/embedded-blog/posts/time-to-consider-zigbee-in-your-next-design-zigbee-vs-bluetooth-vs-wifi
        Bluetooth   1 Mbit/s, Range: 10m

        https://www.howtogeek.com/179803/usb-2.0-vs.-usb-3.0-should-you-upgrade-your-flash-drives/
        USB 2.0 drives are at the bottom of the charts, at between 7.9 MB/s and 9.5 MB/s in write speed.
        The USB 3.0 drives they tested go from 11.4 MB/s all the way up to 286.2 MB/s.
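
        The same theoretical-versus-real gap shows up with USB. Here is a quick Python comparison of the bus signalling rates (480Mb/s for USB 2.0 and 5Gb/s for USB 3.0, standard figures not taken from the article) against the best measured drive speeds quoted above:

        # (bus rate in Mb/s, best measured drive speed in MB/s from the article)
        usb = {"USB 2.0": (480, 9.5), "USB 3.0": (5000, 286.2)}
        for ver, (bus_mbps, measured) in usb.items():
            theoretical = bus_mbps / 8  # MB/s, ignoring protocol overhead
            print(f"{ver}: {theoretical:.0f} MB/s theoretical vs {measured} MB/s measured")

        In the USB 2.0 case it is clearly the cheap flash drives, not the bus, doing the limiting.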

        >
        > On Sun, 18 Aug. 2019, 5:57 pm George at Clug via linux, <
        > linux at lists.samba.org> wrote:
        >
        > > Hi,
        > >
        > > I am looking for comments on my below research on "real world, as opposed
        > > to theoretical" data speeds for various storage and network technologies:
        > >
        > > Below are the figures from my research (formatted for a mono spaced font).
        > > Device                Realistic data throughput
        > > ADSL                  100KB/s, 200KB/s, 400KB/s, 1MB/s
        > > 7200 RPM HD           30-80MB/s
        > > 7200 RPM SATA         100MB/s
        > > 10K RPM               100-130MB/s
        > > 15K RPM               150-190MB/s
        > > Ethernet 100BASE-T    10MB/s
        > > Ethernet 1000BASE-T   116MB/s
        > > Ethernet 1000BASE-T (Jumbo Frames)  123MB/s
        > > SATA II               300MB/s
        > > SSD                   530/500MB/s
        > > SATA III              600MB/s
        > > RAID 6 x6 HD          600MB/s (4x read speed, no write speed gain)
        > > Ethernet 10GBASE-T    1.25GB/s
        > > RAID 6 x24 HD         2.2GB/s (22x read speed, no write speed gain)
        > > NVMe (M.2)            3.4GB/s (3500MB/s)
        > > iSCSI                 1GB/s, 10GB/s, and 40GB/s
        > > Fibre Channel         1GB/s, 2GB/s, 4GB/s, 8GB/s, 10GB/s, 16GB/s, 32GB/s and 128GB/s
        > >
        > > Not having much practical experience in Enterprise storage, I am curious
        > > about any comments people would like to make.
        > >
        > > Regards,
        > >
        > > George.
        > >
        > >
        > >
        > >
        > > Sources
        > >
        > >
        > > https://forum.huawei.com/enterprise/en/some-differences-between-scsi-iscsi-fcp-fcoe-fcip-nfs-cifs-das-nas-san/thread/229549-891
        > > Some differences between SCSI, ISCSI, FCP, FCoE, FCIP, NFS, CIFS, DAS,
        > > NAS, SAN.
        > >
        > > http://www.raid-calculator.com/default.aspx
        > >
        > > https://lenovopress.com/sg247986.pdf
        > >
        > >
        > > https://forum.huawei.com/enterprise/en/some-differences-between-scsi-iscsi-fcp-fcoe-fcip-nfs-cifs-das-nas-san/thread/229549-891
        > > Network-attached storage (NAS), in contrast, uses file-based protocols
        > > such as NFS or SMB/CIFS where it is clear that the storage is remote,
        > > and computers request a portion of an abstract file rather than a disk
        > > block. The key difference between direct-attached storage (DAS) and NAS
        > > is that DAS is simply an extension to an existing server and is not
        > > necessarily networked. NAS is designed as an easy and self-contained
        > > solution for sharing files over the network.
        > > Network File System (NFS) is a distributed file system protocol originally
        > > developed by Sun Microsystems in 1984, allowing a user on a client computer
        > > to access files over a network in a manner similar to how local storage is
        > > accessed. On the contrary, CIFS is its Windows-based counterpart used in
        > > file sharing.
        > >
        > >
        > > https://www.sciencedirect.com/topics/computer-science/storage-virtualization
        > >
        > > https://esj.com/articles/2014/05/16/fibre-channel-or-iscsi.aspx
        > > Fibre Channel infrastructure operates at throughput speeds of 1, 2, 4, 8,
        > > 10 and 16GB per second (GB/s). Over the years, speeds have continued to
        > > increase as storage performance demands have accelerated. Even faster
        > > speeds of 32GB/s and 128GB/s are expected to hit the market in the next
        > > couple of years.
        > > iSCSI uses standard Ethernet switches and cabling and operates at speeds
        > > of 1GB/s, 10GB/s, and 40GB/s.
        > >
        > >
        > > https://www.enterprisestorageforum.com/storage-hardware/ssd-vs-hdd-speed.html
        > >
        > >
        > > https://www.velocitymicro.com/blog/nvme-vs-m-2-vs-sata-whats-the-difference/
        > >
        > >
        > > https://www.convert-me.com/en/convert/data_transfer_rate/dEtherthous.html?u=dEtherthous&v=1
        > >
        > >
        > > https://www.cablefree.net/wireless-technology/maximum-throughput-gigabit-ethernet/
        > > Theoretical throughput of Gigabit Ethernet with jumbo frames, and using
        > > TCP:
        > > 997Mbps – .886 – 1.33 – 1.55 – .443 – 2.21 – 2.21 – 1.33 = 987Mbps or
        > > 123MB/s.
        > > The approximate throughput for Gigabit Ethernet without jumbo frames and
        > > using TCP is around 928Mbps or 116MB/s.
        > >
        > > http://rickardnobel.se/actual-throughput-on-gigabit-ethernet/
        > > Conclusion: Default Gigabit Ethernet has an impressive number of frames
        > > (about 81000 per second) possible and a high throughput for actual data
        > > (about 118 MB/s). By increasing the MTU to 9000 we could deliver even more
        > > data on the same bandwidth, up to 123 MB/s, thanks to the decreased amount
        > > of overhead due to a lower number of frames. Jumbo Frames could use the
        > > whole of 99% of Gigabit Ethernet bandwidth to carry our data.
        > >
        > >
        > > https://www.velocitymicro.com/blog/nvme-vs-m-2-vs-sata-whats-the-difference/
        > > Modern motherboards use SATA III which maxes out at a throughput of
        > > 600MB/s (or 300MB/s for SATA II, in which case, it’s time to upgrade). Via
        > > that connection, most SSDs will provide Read/Write speeds in the
        > > neighborhood of 530/500 MB/s. For comparison, a 7200 RPM SATA drive manages
        > > around 100MB/s depending on age, condition, and level of fragmentation.
        > > NVMe drives, on the other hand, provide write speeds as high as 3500MB/s.
        > > That’s 7x over SATA SSDs and as much as 35x over spinning HDDs!
        > >
        > > https://photographylife.com/nvme-vs-ssd-vs-hdd-performance
        > > In just read performance alone, my M.2 drive turned out to be a whopping
        > > 25x times faster than my enterprise-grade WD 2 TB 7200 RPM drive. That is
        > > just mind boggling, considering that SSD is only around 5x times faster in
        > > comparison. In write speed, I was able to witness up to 15x more
        > > performance, which is also a very impressive number. And that’s just for
        > > one type of sequential read/write load – if you look at the above numbers,
        > > other performance metrics indicate even larger, more noticeable gains.
        > >
        > > https://www.zdnet.com/article/why-raid-6-stops-working-in-2019/
        > > 7200 RPM full drive writes average about 115 MB/sec
        > >
        > >
        > > https://www.microsemi.com/product-directory/raid-controllers/4047-raid-levels#16
        > >
        > >
        > > https://www.atto.com/software/files/techpdfs/TechnicalSpecifications_FastFrameNIC.pdf
        > >
        > > https://www.sciencedirect.com/topics/engineering/gigabit-ethernet
        > > Given a further 1% of overhead for TCP, this leaves a total of 118.75 MB/s
        > > for data transmission.
        > >
        > >
        > > --
        > > linux mailing list
        > > linux at lists.samba.org
        > > https://lists.samba.org/mailman/listinfo/linux
        > >
        >




