[clug] iSCSI and shared filesystems

David Schoen neerolyte at gmail.com
Wed Apr 28 02:22:10 MDT 2010


On 23 April 2010 22:28, Brett Worth <brett at worth.id.au> wrote:
> Just this week I was also playing with GFS on RedHat but have only used it on a single
> node so far.  It's all I could find that could create a 32TB filesystem that actually came
> with RedHat.  BTW even though the doco says that ext4 supports VERY large filesystems it
> seems that the tools (mkfs, fsck etc...) are still limited to 16TB.  Maybe RHEL6.

I maintain a few application stacks that sit on top of GFS2 clusters,
so far only 2- and 3-node clusters. I think they all use iSCSI
backends, but I'm not certain, as I'm not involved in that part of the
support agreement; I'm more of a GFS user.

Even though I don't directly maintain GFS I still have a tip or two.

Read a tuning guide and implement its suggestions [0]. There are a
few very basic parameters that should be adjusted unless you have a
very good reason not to adjust them. Having other, more critical
applications on the same physical storage would be a good reason not
to remove the rate limits, though it might still be worth adjusting
them. "I don't yet know what that parameter does" is not a good
reason; most of them are documented somewhere.
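As a sketch of what that tuning looks like in practice, here are the
kinds of gfs2_tool settune calls guides like [0] tend to recommend.
The mount point and the specific values are assumptions, not something
from our clusters; check the tunables available on your own release
before applying any of them. The script only prints the commands:

```shell
#!/bin/sh
# Sketch of typical GFS2 tuning commands. MOUNT and all values are
# illustrative assumptions; verify against your release's docs.
MOUNT=${MOUNT:-/mnt/gfs2}

tuning_commands() {
    # demote_secs: how long an unused glock is held before demotion
    echo "gfs2_tool settune $MOUNT demote_secs 200"
    # glock_purge: percentage of unused glocks to trim on each scan
    echo "gfs2_tool settune $MOUNT glock_purge 50"
    # statfs_fast: trade exact df figures for much cheaper statfs calls
    echo "gfs2_tool settune $MOUNT statfs_fast 1"
}

# Print the commands; on a real cluster node you could pipe this to sh.
tuning_commands
```

Note these settune values do not persist across remounts, so anything
you settle on needs to go into a boot-time script.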

Test with Tridge's ping_pong tester [1] even if you don't care about
benchmarking or performance. In one instance we managed to crash a
3-node cluster repeatedly, and RedHat had to get involved in fixing
the system. They ended up writing patches, and this was as recently as
the few weeks leading up to the RHEL 5.4 release (we ended up
upgrading to the patches they added in 5.4 to solve the problem, which
is why I remember).
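For reference, the basic ping_pong procedure is to run the same
command on every node at once against a file on the shared mount. The
mount point here is an assumption, and the Samba wiki's suggestion is
to use a lock count of (number of nodes + 1); this sketch just prints
the command you would run:

```shell
#!/bin/sh
# Sketch of a ping_pong invocation for a GFS2 cluster. MOUNT and NODES
# are assumptions; adjust for your cluster.
MOUNT=${MOUNT:-/mnt/gfs2}
NODES=${NODES:-3}

ping_pong_cmd() {
    # Coherent fcntl lock test: run this same command on each node
    # simultaneously and watch the reported lock rate.
    echo "ping_pong $MOUNT/ping_pong.dat $((NODES + 1))"
}

# Print rather than run, since this only makes sense on cluster nodes.
ping_pong_cmd
```

If the lock rate collapses (or the cluster falls over, as ours did)
when a second node joins in, you have found a problem before your
application did.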

Also, purely as hearsay, I don't believe GFS functions well with
SELinux yet (even in permissive mode). This is apparently due to a bug
in the extended attribute handling, but I haven't seen any reliable
documentation on it yet. What I have seen (as recently as yesterday)
is a 3-node cluster that was reporting more than 30% free inodes and
free space, but was getting "No space left on device" and similar
errors for some actions. We could touch new files and put content in
them, but mktemp would consistently fail when we specified a path
under the GFS mount. The RedHat techs believe this is due to a bad
interaction between GFS2 and the extended attributes from SELinux
that they found and apparently patched in February. The theory is we
had some left-over corruption.
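If you want to reproduce that check, this is roughly the sequence we
used: compare what df reports for space and inodes against whether
mktemp actually succeeds under the mount. The directory here defaults
to /tmp purely so the sketch runs anywhere; point TESTDIR at your GFS2
mount on a real cluster:

```shell
#!/bin/sh
# Sketch of the "free space reported but ENOSPC" check. TESTDIR is an
# assumption; set it to your GFS2 mount point in real use.
TESTDIR=${TESTDIR:-/tmp}

check_fs() {
    dir=$1
    df -h "$dir"    # what the filesystem claims for free space
    df -i "$dir"    # what it claims for free inodes
    # On the affected cluster, touch worked but mktemp failed here.
    f=$(mktemp "$dir/gfs-check.XXXXXX") || { echo "mktemp failed"; return 1; }
    echo "mktemp ok: $f"
    rm -f "$f"
}

check_fs "$TESTDIR"
```

On a healthy filesystem both df and mktemp agree; the suspicious case
is df showing plenty free while mktemp returns ENOSPC.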

[0] - http://www.linuxdynasty.org/howto-increase-gfs2-performance-in-a-cluster.html
[1] - http://wiki.samba.org/index.php/Ping_pong

