[Samba] suggestions for a "fast" fileserver - 1G / 10G

Linda W samba at tlinx.org
Mon Mar 24 22:55:57 MDT 2014


Emmanuel Florac wrote:
>  On Tue, 25 Mar 2014 00:04:26 +0800
>  Chan Min Wai <dcmwai at gmail.com> wrote:
>  It depends on many parameters, really. RAID-10 is better at writes and
>  rewrites; modifying an already allocated block has no negative impact
>  on RAID-10. However it's very uncommon to modify existing data
>  extensively on file servers.

-----
Note: I found your info excellent, but I had some questions.
----
Well, here's a partial quote from one of the filesystem experts on the
XFS list:

-------- Original Message --------
Date: Sun, 20 Jan 2013 03:33:08 -0600
From: Stan Hoeppner
To:  xfs-oss <xfs at oss.sgi.com>
>  On Sat, Jan 19, 2013 at 03:55:17PM -0800, Linda Walsh wrote:
> >     All that talk about RAIDs recently, got me depressed a bit when
> >     I realize that while I can get fast speeds, type speeds in
> >     seeking around are about 1/10-1/20th the speed...sigh.

    Hey Linda, if you're going to re-architect your storage, the first thing
I'd do is ditch that RAID50 setup.  RAID50 exists strictly to reduce some of
the penalties of RAID5.  But then you find new downsides specific to
RAID50, including the alignment issues you mentioned.  ....

In general, yes, more spindles will always be faster if utilized
properly.  But depending on your workload(s) you might be able to fix
your performance problems by simply moving your current array to a
non-parity RAID10, a stripe layered over RAID1 pairs, a concat, etc., thus
eliminating the RMW penalty entirely.  You'll need more drives to
maintain the same usable capacity, but as a consequence you wind up with
even more spindles, thus more performance.  .....
--------------

From the above, I don't see how RAID6 could be faster than RAID0 unless
you are exceeding the card's link capacity (3.0Gb/s, 6.0Gb/s, or 12Gb/s,
depending on SAS generation).
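
A quick back-of-envelope in python, just to show where that ceiling
sits (the ~150MB/s per-drive figure is an assumption for a 7200RPM
disk, not a measurement):

    # Rough sketch: how many streaming drives saturate one host link?
    # PER_DRIVE_MBS is an assumed sustained rate for a 7200RPM disk.
    PER_DRIVE_MBS = 150
    SAS_GEN_MBS = {                 # usable MB/s per lane after 8b/10b encoding
        "SAS-1 (3Gb/s)": 300,
        "SAS-2 (6Gb/s)": 600,
        "SAS-3 (12Gb/s)": 1200,
    }

    for gen, link_mbs in SAS_GEN_MBS.items():
        print(f"{gen}: ~{link_mbs / PER_DRIVE_MBS:.0f} streaming drives fill one lane")

So a handful of modern disks streaming in parallel can already fill an
older link, which is the only case where I'd expect the extra RAID6
spindles to stop paying off.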



>
>  It also depends on the number of drives you have, and the RAID
>  controller you're using. With 4 or 6 drives, RAID-10 is probably
>  better overall. However starting with 8 drives it begins to fall
>  behind modern RAID-6 controllers, even for write speed (but not
>  rewrite speed).
>
>  Here's the catch: RAID-5 and RAID-6 use striping. Basically, that
>  means that sequential reads are very efficient (about as fast as
>  a RAID-0 with the same number of drives), random reads are quite
>  efficient (because you can access drives separately), and sequential
>  writes are also very efficient (with good controllers, as fast as
>  number of drives minus parity drives).
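
Back-of-envelope, that striping math works out like this (a toy sketch
assuming N identical drives and an ideal controller -- real hardware
lands somewhere below these numbers):

    # Toy sketch: effective "data spindles" for big sequential transfers
    # with N identical drives, assuming ideal full-stripe I/O and no bus
    # or controller bottleneck.
    N = 8
    layouts = {
        "RAID-0":  {"read": N,     "write": N},
        "RAID-5":  {"read": N - 1, "write": N - 1},  # one parity block per stripe
        "RAID-6":  {"read": N - 2, "write": N - 2},  # two parity blocks per stripe
        "RAID-10": {"read": N,     "write": N // 2}, # mirrors double reads, halve writes
    }
    for name, t in layouts.items():
        print(f"{name}: read ~{t['read']}x, write ~{t['write']}x one drive")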

Theoretically, RAID1 can be faster than RAID0 on reads if the controller
keeps track of rotational position; i.e., one mirror's copy may be
rotationally closer to its head than the other's.  Cheap cards won't, but
some of the LSI cards might, given their rigorous drive requirements.  (I
once got an order of Hitachi Deskstars by mistake instead of Ultrastars;
75% of them wouldn't pass initialization -- basically their rotational
rates were up to 20% off from the rated 7200RPM.)
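
For what it's worth, a toy sketch of the idea -- the sector counts and
head positions are hypothetical, it just shows the "pick the
rotationally-closer mirror" logic:

    # Hypothetical sketch of nearest-copy read selection on a RAID1 pair.
    SECTORS_PER_TRACK = 1000     # made-up geometry

    def rotational_distance(head_pos, target):
        # sectors the platter still has to rotate before target reaches the head
        return (target - head_pos) % SECTORS_PER_TRACK

    def pick_mirror(head_a, head_b, target):
        # serve the read from whichever mirror is rotationally closer
        da = rotational_distance(head_a, target)
        db = rotational_distance(head_b, target)
        return ("A", da) if da <= db else ("B", db)

    print(pick_mirror(head_a=100, head_b=600, target=650))   # -> ('B', 50)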

When you say "8 drives", do you mean 8 data drives or 8 total drives?
Because if you mean 8 total drives, then I would easily agree; but if you
are comparing 8 data drives in a RAID6 vs. a RAID10, I can't see how the
RAID6 would beat the RAID10 -- at best it would tie, no...?

>
>  However, if you modify a block without rewriting the entire stripe
>  (across all drives), you must either reread all the other data blocks
>  from the stripe, or reread the old parities and the overwritten data
>  blocks, to recompute the parities.  Therefore random writes/rewrites suck,
>  because each write operation becomes 3 reads (old block, 2 parities)
>  and 3 writes ( new block, 2 parities) in RAID-6.
>
>  As I said however, this is mostly a problem for database, email or
>  similar applications. For a true fileserver, this shouldn't really be
>  a problem unless it's chock full with small files (smaller than the
>  stripe width).  The same goes for the filesystem journal, which should
>  fit easily in cache to allow write reordering.
---
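
To put that penalty in numbers -- a little sketch counting disk ops per
small random write, assuming an ideal controller with no write cache
absorbing anything:

    # Sketch: physical disk ops generated by one small random write,
    # assuming an ideal controller with no write-back cache.
    def ops_per_small_write(level):
        if level == "raid10":
            return 0, 2      # no reads; write both mirrors
        if level == "raid5":
            return 2, 2      # read old data + parity; write new data + parity
        if level == "raid6":
            return 3, 3      # read old data + P + Q; write new data + P + Q
        raise ValueError(level)

    for level in ("raid10", "raid5", "raid6"):
        r, w = ops_per_small_write(level)
        print(f"{level}: {r} reads + {w} writes = {r + w} disk ops per app write")

Two ops for RAID10 vs. six for RAID6 is the 3x random-write gap being
described above.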

If you only serve files, sure... but with samba, I have profiles and all
data/content on the server, where it gets modified regularly.  The only
things I keep on windows are the programs.  Things that get modified
regularly get put on the server -- so by definition, it gets a lot of
writes.

I've used XFS for about 14-15 years on my linux boxen and have rarely
had major problems -- but I also have backups.  Additionally, I have my
boxes on UPSs, so uptime for them, when I'm not keeping up w/the current
kernel, has been as long as 90-120 days...

Someone mentioned Redhat is going with XFS for their servers -- so is
Novell/Suse.  If you need more reliability on XFS, you can get it by
reducing the caching and writeback delays -- until you have it down to
the performance of ext3/4.  But if you have a reliable system and power
(UPS), XFS is well worth the trade-off.  If you have heavy I/O in
flight all the time, as with a heavily used MTA, it might be better to
go solid-state anyway; not to mention MTAs aren't XFS's forte -- it was
built to support the heavy I/O of uncompressed video and sound recording
and production.  It was built for speed with large files ("large", when
it was initially designed in the early 1990's, meaning the MB-GB
range)... mail messages are generally smaller and wouldn't benefit
nearly as much as other loads...
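
For the record, the sort of writeback tightening I mean looks like this
(the knobs are the standard Linux vm writeback sysctls; the values are
illustrative assumptions, not tested recommendations):

    # Sketch: tighten Linux writeback so less dirty data sits in RAM.
    # Same effect as `sysctl -w` on these knobs; needs root.
    KNOBS = {
        "/proc/sys/vm/dirty_background_ratio": "2",      # start background flush sooner
        "/proc/sys/vm/dirty_ratio": "5",                 # hard-cap dirty pages sooner
        "/proc/sys/vm/dirty_expire_centisecs": "500",    # age out dirty pages faster
        "/proc/sys/vm/dirty_writeback_centisecs": "100", # wake the flusher more often
    }

    for path, value in KNOBS.items():
        with open(path, "w") as f:
            f.write(value)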


As you can see from my forwarded message, I have been toying w/getting
a RAID1+0 setup... the expense is a bit icky though... disks haven't
dropped in price according to historical trends over the past 3-5
years... or rather, they probably have -- but the dollar dropped a lot
due to heavy devaluation to pay for the bailout back then ;-(.


Will definitely keep your tuning notes and play w/them.  Thanks much for
sharing your experience.  It is appreciated...



