CTDB: Split brain and banning

David Disseldorp ddiss at samba.org
Tue Oct 30 14:54:44 UTC 2018

Hi Michel,

Thanks for the report...

On Tue, 30 Oct 2018 15:10:18 +0100, Michel Buijsman via samba-technical wrote:

> Hi list, 
> I'm building a 3-node cluster of storage gateways using CTDB to connect 
> various NFS and iSCSI clients to Ceph storage. I'm using a RADOS object as 
> the reclock via ctdb_mutex_ceph_rados_helper.
> I'm having two problems:
> 1. Node banning: Unless I disable bans, the whole cluster tends to ban 
>    itself when something goes wrong. As in: Node #1 (recovery master) dies, 
>    then nodes #2 and #3 will both try to get the reclock, fail, and ban 
>    themselves.
>    I've "fixed" this for now with EnableBans=0.

As of commit ce289e89e5c469cf2c5626dc7f2666b945dba3bd, which is carried in
Samba 4.9.1 as a fix for bso#13540, the recovery master's reclock should
time out after 10 seconds, allowing one of the remaining nodes to
successfully take over. How long after the recovery master outage do you
see the ban occur? Full logs of this would be helpful.
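For reference, this is roughly how the reclock setup under discussion looks in
the new-style ctdb.conf (Samba 4.9+). The helper path, cluster name, Ceph user,
pool and object names below are placeholders for your environment, not values
from your report:

```ini
# ctdb.conf sketch: use the Ceph RADOS mutex helper as the recovery lock.
# The "!" prefix tells CTDB the value is a helper command to execute.
# All values (helper path, cluster name, user, pool, object) are examples
# only; adjust them to match your deployment.
[cluster]
	recovery lock = !/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.ctdb ctdb_pool ctdb_reclock
```

With the 4.9.1 fix, the RADOS lock taken by the helper carries a duration, so
a dead recovery master's lock should expire instead of blocking takeover
indefinitely.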

> 2. Split brain: If the current recovery master drops off the network for 
>    whatever reason but keeps running, it will ignore the fact that it can't 
>    get the reclock: "Time out getting recovery lock, allowing recmode set 
>    anyway". It will remain at status "OK" and start to claim every virtual
>    IP in the cluster.

Is the recovery master dropping off both the Ceph and CTDB networks in
this case, or just the latter? I've not done much testing of either
scenario, so I think it's worth tracking via a new bugzilla.samba.org
ticket. The failure to obtain quorum in this case should cause the
isolated node to stop taking part in IP failover, etc.

Cheers, David