[clug] Fwd: Re: Storage and networking for Storage as used for supporting [Enterprise] virtualized environments

Wayne Haig wayne at haig.id.au
Mon Aug 19 06:17:28 UTC 2019


George,

I think what Brett was getting at with the speeds actually stems from
the ESJ article on Fibre Channel and iSCSI. In that, the author seems to
have mixed up GB/s and Gb/s.

As it stands, Fibre Channel variants offer 1Gbit/s (or 100MBytes/s),
2Gbit/s (200MBytes/s), 4Gbit/s (400MBytes/s), 8Gbit/s (800MBytes/s),
16Gbit/s (1600MBytes/s), 32Gbit/s (3200MBytes/s), 64Gbit/s
(6400MBytes/s), 128Gbit/s (12800MBytes/s) and 256Gbit/s (25600MBytes/s).
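
If it helps to see where those numbers come from, here is a rough Python
sketch of the arithmetic. The 0.8 efficiency factor is only my
back-of-envelope allowance for the 8b/10b line encoding the earlier FC
generations use; the newer ones actually run 64b/66b at a higher baud rate,
but the "N x 100 MBytes/s" convention carried over:

    # Back-of-envelope: nominal Fibre Channel rate in Gbit/s -> usable MBytes/s.
    # The 0.8 factor is a rough allowance for 8b/10b line encoding overhead.
    def fc_mbytes(nominal_gbit):
        usable_bits = nominal_gbit * 1e9 * 0.8   # strip encoding overhead
        return usable_bits / 8 / 1e6             # 8 bits per byte, report MBytes/s

    for gfc in (1, 2, 4, 8, 16, 32, 64, 128, 256):
        print("%3dGFC  ~ %6.0f MBytes/s" % (gfc, fc_mbytes(gfc)))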

iSCSI matches Ethernet speeds, so you have 1Gbit/s using 1GigE
(~120MBytes/s), 10Gbit/s for 10GigE (~1200MBytes/s), 25Gbit/s for
25GigE (~3000MBytes/s), 40Gbit/s for 40GigE (~4800MBytes/s), 50Gbit/s
for 50GigE (~6000MBytes/s), and 100Gbit/s for 100GigE (~12300MBytes/s).
Variants at 200Gbit/s (200GigE) and 400Gbit/s (400GigE) exist in some
forms but will only become readily available soon - the PCIe slots that
can drive the technology from servers are only just becoming available.
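
The same bits-to-bytes sum, with a rough 4% allowance for Ethernet, IP, TCP
and iSCSI framing at standard MTU (my guess - jumbo frames do a little
better), gives the ~120MBytes/s per 1GigE rule of thumb:

    # Same sum for iSCSI over Ethernet; 0.96 is a rough protocol-overhead guess.
    for gige in (1, 10, 25, 40, 50, 100):
        print("%3dGigE ~ %6.0f MBytes/s" % (gige, gige * 1e9 * 0.96 / 8 / 1e6))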

But Fibre Channel and Ethernet speeds greater than 10Gbit/s are not for
the generic home user without a substantial bank loan.

Cheers,

Wayne


On 19/8/19 1:07 pm, George at Clug via linux wrote:
>
> On Monday, 19-08-2019 at 12:06 Brett Worth wrote:
>> On 19/8/19 11:46 am, Alastair D'Silva via linux wrote:
>>
>>> One of my switches at home is a 48 port gigabit switch with a 10Gb XFP
>>> uplink on the back. I recently stuck a dual port 10Gb card, plus an
>>> optical transceiver in my fileserver, and a matching XFP module into
>>> the switch, so my server is now connected at 10Gb.
>> Hmmm.  Looks like general use of 400Gb ethernet is closer than I thought.
>>
>> I still use a lot of 16Gb FC and 100Gb IB which I thought was pretty fast.
>>
>> iSCSI still scares me but maybe I've just been burnt by it.  i.e.
> I wonder if that is iSCSI's fault, or whether some of the hardware was not up to scratch. I have had issues with 1Gb/s networks when there has been an odd, hard-to-track-down network problem (whether the wiring, the network switch, or the network interface card, I never did find out).
>
> I have only used iSCSI on 1Gb/s networks. It was stable, but I only used it as a proof of concept for a three month period using Openfiler/DRBD, all in the same server rack, so cable distances were very short.
> https://www.howtoforge.com/openfiler-2.3-active-passive-cluster-heartbeat-drbd-with-offsite-replication-node
>
> In test virtualised environments I have used Openfiler to create simple iSCSI SANs for VMware ESXi, so I could perform live migration and other migration tests; I never had issues there either.
>
> Due to my very limited iSCSI experience, the above has given me the impression that iSCSI is rock solid.  
>
> I am not currently using iSCSI, but I would like to learn more about configuring iSCSI on Linux. I can search for links later, but if you know of any really easy ones to follow, please send them through. I was going to ask Bob about DRBD as he once mentioned it. DRBD worked well, though I wonder if GlusterFS is a more modern technology to use? And how does NFS compare to iSCSI (other than one being file based and the other block based storage)?
> https://computingforgeeks.com/ceph-vs-glusterfs-vs-moosefs-vs-hdfs-vs-drbd/
> https://duckduckgo.com/?q=openfiler+drdb+failover+cluster&t=ffnt&ia=web
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/sect-nfs
>
> Without useful, real-world experience it is difficult to know what performs better and is more stable, so thanks for replying.
>
> George.
>
>>   Network outage: does happen sometimes - no big deal.
>>   SCSI Bus failure:  Almost never happens so seems really bad when it happens.
>>
>>   iSCSI:   Network failure == SCSI Bus failure.
>>
>> Maybe iSCSI with dm-multipath would be less scary, but then you'd need multiple
>> Ethernet paths to the target.
>>
>> Brett
>>
>> -- 
>>   /) _ _ _/_/ / / /  _ _//
>>  /_)/</= / / (_(_/()/< ///
>>
>>
>>


