[clug] Fwd: Re: Storage and networking for Storage as used for supporting [Enterprise] virtualized environments

George at Clug Clug at goproject.info
Mon Aug 19 03:07:32 UTC 2019

On Monday, 19-08-2019 at 12:06 Brett Worth wrote:
> On 19/8/19 11:46 am, Alastair D'Silva via linux wrote:
> > One of my switches at home is a 48 port gigabit switch with a 10GB XFP
> > uplink on the back. I recently stuck a dual port 10Gb card, plus an
> > optical transceiver in my fileserver, and a matching XFP module into
> > the switch, so my server is now connected at 10Gb.
> Hmmm.  Looks like general use of 400Gb ethernet is closer than I thought.
> I still use a lot of 16Gb FC and 100Gb IB which I thought was pretty fast.
> iSCSI still scares me but maybe I've just been burnt by it.  i.e.

I wonder if that is iSCSI's fault, or whether some of the hardware was not up to scratch. I have had issues with 1Gb/s networks when there was an odd, hard-to-track-down fault (whether in the wiring, the switch, or the network interface card, I never did find out).

I have only used iSCSI on 1Gb/s networks. It was stable, but I only ran it as a proof of concept for a three-month period using Openfiler/DRBD, all in the same server rack, so cable distances were very short.

In test virtualised environments I have used Openfiler to create simple iSCSI SANs for VMware ESXi so I could perform live migration and other migration tests; I never had issues there either.

Due to my very limited iSCSI experience, the above has given me the impression that iSCSI is rock solid.  

I am not currently using iSCSI, but would like to learn more about configuring it in Linux. I can search for links later, but if you know of any really easy ones to follow, please send them through. I was going to ask Bob about DRBD, as he once mentioned it. DRBD worked well, though I wonder if GlusterFS is a more modern technology to use? And how does NFS compare with iSCSI (other than one being file-based and the other block-based storage)?
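For anyone else wanting to experiment, here is a minimal sketch of an iSCSI target and initiator on Linux using the standard targetcli (LIO) and open-iscsi tools. The IQNs, the backing file path, and the 192.168.1.10 portal address are all made-up placeholder values; both sides need root.

```shell
# Target side (targetcli package, LIO kernel target):
# create a 10G file-backed LUN and export it -- paths/IQNs are hypothetical.
targetcli /backstores/fileio create disk0 /srv/iscsi/disk0.img 10G
targetcli /iscsi create iqn.2019-08.info.goproject:disk0
targetcli /iscsi/iqn.2019-08.info.goproject:disk0/tpg1/luns \
    create /backstores/fileio/disk0
targetcli /iscsi/iqn.2019-08.info.goproject:disk0/tpg1/acls \
    create iqn.2019-08.info.goproject:initiator1
targetcli saveconfig

# Initiator side (open-iscsi package):
# discover the targets a portal offers, then log in to one.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2019-08.info.goproject:disk0 -p 192.168.1.10 --login
lsblk    # the LUN shows up as a new local block device
```

After login the LUN behaves like any local disk, which is also the main difference from NFS: with iSCSI you partition/format the block device yourself, whereas NFS hands you a ready-made filesystem over the wire.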

Without useful, real world experience it is difficult to know what performs better and is more stable, so thanks for replying.


>   Network outage: does happen sometimes - no big deal.
>   SCSI Bus failure:  Almost never happens so seems really bad when it happens.
>   iSCSI:   Network failure == SCSI Bus failure.
> Maybe iSCSI with dm-multipath would be less scary but then you'd need multiple paths of
> ethernet to the target.
> Brett
> -- 
>   /) _ _ _/_/ / / /  _ _//
>  /_)/</= / / (_(_/()/< ///
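On the dm-multipath point above: if the initiator logs in to the same target over two separate NICs/portals, the LUN appears twice (say /dev/sdb and /dev/sdc), and multipathd can coalesce them so a single path failure is survivable. A minimal sketch of /etc/multipath.conf, with a made-up WWID and alias:

```
# Hypothetical fragment -- replace the wwid with the one
# reported by `multipath -ll` or /lib/udev/scsi_id.
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
multipaths {
    multipath {
        wwid  36001405abcdef1234567890abcdef123
        alias iscsi-disk0
    }
}
```

With that in place you use /dev/mapper/iscsi-disk0 instead of the raw /dev/sdX paths, and `multipath -ll` shows both paths and their state.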
