Parallel serving via NFS from CTDB considered harmful

Amitay Isaacs amitay at gmail.com
Wed Oct 24 21:19:48 MDT 2012


Hi Jeff,

On Sat, Oct 20, 2012 at 10:43 PM, Jeff Layton <jlayton at samba.org> wrote:
> I've been playing with CTDB recently and ran across a couple of pages
> that seem to indicate that serving a clustered filesystem from multiple
> nodes using NFS is safe:
>
>     http://ctdb.samba.org/nfs.html
>
> ...and...
>
>     https://wiki.samba.org/index.php/CTDB_Setup#Setting_up_CTDB_for_clustered_NFS
>
> I'm concerned that these pages are misleading since, from what I can
> tell, when a failover event occurs the lock recovery grace period is
> not reinstated across the entire cluster.
>
> The problematic scenario is something like this:
>
> - client1 mounts an exported filesystem from a public IP on serverA and
>   acquires a lock on it
>
> - serverA crashes, and its locks are released by DLM (or another
>   clustered lock manager)
>
> - client2 now races in and acquires a conflicting lock on serverB
>
> - the IP address now "floats" to serverC
>
> - client1 tries to reclaim its lock now, but can't
>
> ...even worse is a variant of the above case where client2 just briefly
> takes the lock, modifies the data it protects, and then releases it.
> client1 will believe it has had exclusive access to the lock the whole
> time. The problem in that case will manifest itself as silent data
> corruption.
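>
> To make the race concrete, here is a hypothetical timeline sketched
> with flock(1). The mount point /mnt/nfs and the lock file name are
> made up for illustration:
>
>     # client1: takes the lock through serverA's public IP
>     flock /mnt/nfs/lockfile -c 'sleep 300' &
>
>     # serverA crashes and DLM drops its locks cluster-wide.
>     # client2 (talking to serverB) succeeds immediately, because no
>     # cluster-wide grace period was re-entered:
>     flock -n /mnt/nfs/lockfile -c 'echo got the conflicting lock'
>
>     # the IP floats to serverC; client1's lockd tries to reclaim its
>     # lock there and is refused -- or, worse, succeeds after client2
>     # has already modified the protected data
>
> Nothing in that sequence ever blocks client2, which is the heart of
> the problem.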
>
> Getting clustered NFS locking right is really, really hard due to the
> recovery semantics. I'd like to suggest that we either remove the pages
> above, or at least add some warnings that locking may not be reliable
> in such configurations.
>
> Thoughts?
> --
> Jeff Layton <jlayton at samba.org>

CTDB provides a mechanism, via the "ipreallocated" event, to protect
against this. Here is the relevant portion of the code from the
statd-callout handler:

  notify)
        # we must restart the lockmanager (on all nodes) so that we get
        # a clusterwide grace period (so other clients don't take out
        # conflicting locks through other nodes before all locks have been
        # reclaimed)

        # we need these settings to make sure that no tcp connections survive
        # across a very fast failover/failback
        #echo 10 > /proc/sys/net/ipv4/tcp_fin_timeout
        #echo 0 > /proc/sys/net/ipv4/tcp_max_tw_buckets
        #echo 0 > /proc/sys/net/ipv4/tcp_max_orphans

        # Delete the notification list for statd; we don't want it to
        # ping any clients
        rm -f /var/lib/nfs/statd/sm/*
        rm -f /var/lib/nfs/statd/sm.bak/*

        # we must keep a monotonically increasing state variable for the
        # entire cluster, so that state always increases when ip addresses
        # fail over from one node to another.
        # We use the epoch time and hope the nodes' clocks are close enough.
        # Even numbers mean the service is shut down, odd numbers mean the
        # service is started.
        STATE=$(( $(date '+%s') / 2 * 2))
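        # Worked example (illustrative values only): if date '+%s'
        # returns 1351112389, STATE becomes 1351112388 -- rounded down
        # to even, i.e. "service stopped".  The notification loop below
        # then increments it to the odd value 1351112389 ("started").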

        # we must also let some time pass between stopping and restarting
        # the lockmanager, since otherwise there is a window where the
        # lockmanager will respond "strangely" immediately after restarting,
        # which causes clients to fail to reclaim their locks.
        #
        if [ "$NFS_SERVER_MODE" = "ganesha" ] ; then
            startstop_ganesha stop >/dev/null 2>&1
            sleep 2
            startstop_ganesha start >/dev/null 2>&1
        else
            startstop_nfslock stop >/dev/null 2>&1
            sleep 2
            startstop_nfslock start >/dev/null 2>&1
        fi

        # we now need to send out additional statd notifications to ensure
        # that clients understand that the lockmanager has restarted.
        # we have three cases:
        # 1, clients that ignore the ip address the statd notification came
        #    from and ONLY care about the 'name' in the notify packet.
        #    these clients ONLY work with lock failover IFF that name
        #    can be resolved into an ip address that matches the one used
        #    to mount the share.  (== linux clients)
        #    This is handled when starting the lockmanager above, but those
        #    packets are sent from the "wrong" ip address, something linux
        #    clients are ok with but other clients will barf at.
        # 2, Some clients only accept statd packets IFF they come from the
        #    'correct' ip address.
        # 2a,Send out the notification using the 'correct' ip address and
        #    also specify the 'correct' hostname in the statd packet.
        #    Some clients require both the correct source address and the
        #    correct name. (these clients also ONLY work if the ip address
        #    used to mount the share can be resolved into the name returned
        #    in the notify packet.)
        # 2b,Other clients require that the source ip address of the notify
        #    packet matches the ip address used to take out the lock,
        #    i.e. that the correct source address is used.
        #    These clients also require that the statd notify packet carries,
        #    as the name, the ip address used when the lock was taken out.
        #
        # Both 2a and 2b are commonly used in lockmanagers since they
        # maximize the probability that the client will accept the statd
        # notify packet and not just ignore it.
        # For each public IP served by this node, notify every client
        # that took out a lock through that IP
        PNN=$(ctdb xpnn | sed -e "s/.*://")
        ctdb ip -Y | tail -n +2 | while read LINE; do
                NODE=$(echo $LINE | cut -f3 -d:)
                [ "$NODE" = "$PNN" ] || continue
                IP=$(echo $LINE | cut -f2 -d:)

                ls $CTDB_VARDIR/state/statd/ip/$IP | while read CLIENT; do
                        rm $CTDB_VARDIR/state/statd/ip/$IP/$CLIENT
                        # notify once with the even ("stopped") state ...
                        smnotify --client=$CLIENT --ip=$IP --server=$IP --stateval=$STATE
                        smnotify --client=$CLIENT --ip=$IP --server=$NFS_HOSTNAME --stateval=$STATE
                        # ... and again with the odd ("started") state,
                        # using both the ip address and the hostname as
                        # the server name (cases 2b and 2a above)
                        STATE=$(($STATE + 1))
                        smnotify --client=$CLIENT --ip=$IP --server=$IP --stateval=$STATE
                        smnotify --client=$CLIENT --ip=$IP --server=$NFS_HOSTNAME --stateval=$STATE
                done
        done
        ;;
esac
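
For context, the handler above is driven from CTDB's NFS eventscript on
the "ipreallocated" event. A minimal sketch of that hook, with the
eventscript name and paths assumed rather than quoted from any
particular CTDB version:

    # in e.g. events.d/60.nfs
    case "$1" in
        ipreallocated)
            # after IPs move, restart the lockmanager cluster-wide and
            # send the statd notifications implemented above
            [ -x "$CTDB_BASE/statd-callout" ] && \
                "$CTDB_BASE/statd-callout" notify >/dev/null 2>&1 &
            ;;
    esac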

Since Ronnie wrote most of these comments, he may be able to explain
them better, or comment on any limitations.

Amitay.

