[clug] Fwd: Re: Storage and networking for Storage as used for supporting [Enterprise] virtualized environments

George at Clug Clug at goproject.info
Mon Aug 19 08:32:07 UTC 2019



On Monday, 19-08-2019 at 16:17 Wayne Haig via linux wrote:
> George,
> 
> I think what Brett was getting at with the speeds actually stems from
> the ESJ article on Fibre Channel and iSCSI. In that, the author seems to
> have mixed the GB/s and Gb/s.

Wayne,

I had thought that when the article said 40GB/s it was actually talking about 400GbE expressed in bytes. Looking at the date the article was written, I now realise I was wrong.

Thanks for pointing out what I had missed.

If you wanted to mirror two systems with SATA III 4TB SSDs at the fastest possible speed, would you recommend using 10GbE (1.239 GB/s)?

And then NVMe is around 3.4 GB/s. Hmmm. I have not seen a 4TB NVMe drive as yet.
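
To sanity-check my own question, here is a rough back-of-the-envelope comparison (a minimal Python sketch; the figures are nominal interface rates I have assumed, not measured throughput):

GB = 1000**3  # decimal gigabytes, as drive and network vendors count them

# Nominal interface figures (assumed), not measured throughput.
interfaces = {
    "SATA III SSD":       0.6 * GB,    # 6 Gbit/s link, 8b/10b encoding -> ~600 MB/s
    "NVMe (PCIe 3.0 x4)": 3.4 * GB,    # ~3.94 GB/s raw, ~3.4 GB/s in practice
    "10GbE (iSCSI)":      1.239 * GB,  # figure from the Wikipedia list below
}

link = interfaces["10GbE (iSCSI)"]
for name, rate in interfaces.items():
    if name == "10GbE (iSCSI)":
        continue
    verdict = "can saturate" if rate >= link else "cannot saturate"
    print(f"{name:20s} {rate / GB:5.2f} GB/s -> {verdict} a 10GbE link")

So for a single SATA III SSD the drive, not the 10GbE link, would be the bottleneck; it would take NVMe (or several SATA drives striped) before the network becomes the limit.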

https://en.wikipedia.org/wiki/List_of_interface_bit_rates has the list of bits/second and bytes/second. Most useful. I tend to think practical (application layer) data throughput would be even lower than these figures? A rough back-of-the-envelope check follows the tables below.
iSCSI, 1Gb Ethernet (jumbo frames)    123.9 MB/s
iSCSI, 10GbE                          1.239 GB/s
iSCSI, InfiniBand 4x (32 Gbit/s)      4 GB/s
iSCSI, 100GbE                         12.392 GB/s
FCoE, 100GbE                          12.064 GB/s
400GBASE-X                            50 GB/s

1GFC      103.23 MB/s
2GFC      206.5 MB/s
4GFC      413 MB/s
8GFC      826 MB/s
16GFC     1.652 GB/s
32GFC     3.303 GB/s
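
To convince myself where figures like 1.239 GB/s come from, here is a minimal Python sketch of the arithmetic. The per-frame overheads are my own assumptions (jumbo frames, standard header sizes, no TCP options), so it only lands in the same ballpark as the Wikipedia numbers:

# Rough estimate of usable iSCSI payload rate over Ethernet with jumbo frames.
MTU = 9000           # jumbo frame payload, bytes
ETH_OVERHEAD = 38    # preamble+SFD 8, MAC header 14, FCS 4, inter-frame gap 12
IP_HDR = 20          # IPv4 header, no options
TCP_HDR = 20         # TCP header, no options
ISCSI_BHS = 48       # iSCSI basic header segment

def iscsi_payload_rate(line_rate_gbit):
    """Approximate application-layer bytes/s for a given Ethernet line rate."""
    wire_bytes_per_frame = MTU + ETH_OVERHEAD
    payload_per_frame = MTU - IP_HDR - TCP_HDR - ISCSI_BHS
    efficiency = payload_per_frame / wire_bytes_per_frame
    return line_rate_gbit * 1e9 / 8 * efficiency  # bytes per second

for gbit in (1, 10, 25, 100):
    print(f"{gbit:>3} GbE -> ~{iscsi_payload_rate(gbit) / 1e9:.3f} GB/s of iSCSI payload")

Real application-layer throughput will be lower again once TCP acknowledgements, retransmissions and the storage stack itself are taken into account.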

> 
> As it stands Fibre Channel variants offer 1Gbit/s (or 100MBytes/s),
> 2Gbits/s (200MBytes/s), 4Gbits/s (400MBytes/s), 8Gbits/s (800MBytes/s),
> 16Gbits/s (1600MBytes/s), 32Gbits/s (3200MBytes/s), 64Gbits/s
> (6400MBytes/s), 128Gbits/s (12800MBytes/s) and 256Gbits/s (25600MBytes/s).
> 
> iSCSI matches Ethernet speeds, so you have 1Gbit/s using 1GigE
> (~120MBytes/s), 10Gbits/s for 10GigE (~1200MBytes/s), 25Gbits/s for
> 25GigE (~3000MBytes/s), 40Gbits/s for 40GigE (~4800MBytes/s), 50Gbits/s
> for 50GigE (~6000MBytes/s), and 100Gbits/s for 100GigE (~12300MBytes/s).
> Variants at 200Gbits/s (200GigE) and 400Gbits/s (400GigE) exist in some
> forms and will become readily available soon - the PCI slots that can
> drive the technology from servers are only just becoming available.
> 
> But both Fibre Channel and Ethernet speeds of greater than 10Gbit/s are
> not for your generic home users without a substantial bank loan.

Certainly above my pay grade, but then a few years ago 10Gbit/s switches and interface cards were beyond the price range of the small business I was working for.


> 
> Cheers,
> 
> Wayne
> 
> 
> On 19/8/19 1:07 pm, George at Clug via linux wrote:
> >
> > On Monday, 19-08-2019 at 12:06 Brett Worth wrote:
> >> On 19/8/19 11:46 am, Alastair D'Silva via linux wrote:
> >>
> >>> One of my switches at home is a 48 port gigabit switch with a 10GB XFP
> >>> uplink on the back. I recently stuck a dual port 10Gb card, plus an
> >>> optical transceiver in my fileserver, and a matching XFP module into
> >>> the switch, so my server is now connected at 10Gb.
> >> Hmmm.  Looks like general use of 400Gb ethernet is closer than I thought.
> >>
> >> I still use a lot of 16Gb FC and 100Gb IB which I thought was pretty fast.
> >>
> >> iSCSI still scares me but maybe I've just been burnt by it.  i.e.
> > I wonder if that is iSCSI's fault, or whether some of the hardware was not up to scratch. I have had issues with 1Gb/s networks when there was an odd, hard-to-track-down network problem (whether wiring, the network switch, or a network interface card, I never did find out).
> >
> > I have only used iSCSI on 1Gb/s networks. It was stable, but I only used it as a proof of concept for a three month period using Openfiler/DRBD, all in the same server rack, so cable distances were very short.
> > https://www.howtoforge.com/openfiler-2.3-active-passive-cluster-heartbeat-drbd-with-offsite-replication-node
> >
> > In test virtualised environments I have used Openfiler to create simple iSCSI SANs for VMware ESXi, so I could perform live migration and other migration tests; I never had issues there either.
> >
> > Due to my very limited iSCSI experience, the above has given me the impression that iSCSI is rock solid.  
> >
> > I am not currently using iSCSI, but would like to learn more about configuring iSCSI in Linux. I can search up links later, but if you know of any really easy ones to follow, please send them through. I was going to ask Bob about DRBD as he once mentioned it; DRBD worked well, though I wonder if GlusterFS is a more modern technology to use? Then how does NFS compare to iSCSI (other than one being file based and the other block based storage)?
> > https://computingforgeeks.com/ceph-vs-glusterfs-vs-moosefs-vs-hdfs-vs-drbd/
> > https://duckduckgo.com/?q=openfiler+drdb+failover+cluster&t=ffnt&ia=web
> > https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/sect-nfs
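
Answering part of my own question above: on the initiator side, Linux uses the open-iscsi tools, and the basics boil down to a discovery and a login. A minimal sketch, wrapped in Python purely for illustration; the portal address and IQN are made-up examples, and the target/LIO side is not shown:

# Minimal initiator-side sketch using the open-iscsi command line tools.
import subprocess

PORTAL = "192.168.1.10"                     # hypothetical iSCSI target address
IQN = "iqn.2019-08.info.goproject:target1"  # hypothetical target IQN

def run(*args):
    """Run a command, echoing it first, and fail loudly on error."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. Discover targets offered by the portal.
run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)

# 2. Log in to the discovered target; a new /dev/sdX block device appears.
run("iscsiadm", "-m", "node", "-T", IQN, "-p", PORTAL, "--login")

# 3. Later, log out again to release the device.
# run("iscsiadm", "-m", "node", "-T", IQN, "-p", PORTAL, "--logout")

The Linux target side is the in-kernel LIO target, normally configured with targetcli; once the initiator logs in, the exported LUN appears as an ordinary SCSI block device.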
> >
> > Without useful, real-world experience it is difficult to know what performs better and what is more stable, so thanks for replying.
> >
> > George.
> >
> >>   Network outage: does happen sometimes - no big deal.
> >>   SCSI Bus failure:  Almost never happens so seems really bad when it happens.
> >>
> >>   iSCSI:   Network failure == SCSI Bus failure.
> >>
> >> Maybe iSCSI with dm-multipath would be less scary but then you'd need multiple paths of
> >> ethernet to the target.
> >>
> >> Brett
> >>
> >> -- 
> >>   /) _ _ _/_/ / / /  _ _//
> >>  /_)/</= / / (_(_/()/< ///
> >>
> >>
> >>
> 
> -- 
> linux mailing list
> linux at lists.samba.org
> https://lists.samba.org/mailman/listinfo/linux
> 


