CTDB: Split brain and banning

Michel Buijsman michelb at bit.nl
Wed Oct 31 09:39:41 UTC 2018


On Tue, Oct 30, 2018 at 03:54:44PM +0100, David Disseldorp via samba-technical wrote:
> On Tue, 30 Oct 2018 15:10:18 +0100, Michel Buijsman via samba-technical wrote:
> > 1. Node banning: Unless I disable bans, the whole cluster tends to ban 
> >    itself when something goes wrong. As in: Node #1 (recovery master) dies, 
> >    then nodes #2 and #3 will both try to get the reclock, fail, and ban 
> >    themselves.
> >    
> >    I've "fixed" this for now with EnableBans=0.
> 
> As of ce289e89e5c469cf2c5626dc7f2666b945dba3bd, which is carried in
> Samba 4.9.1 as a fix for bso#13540, the recovery master's reclock should
> time out after 10 seconds, allowing one of the remaining nodes to
> successfully take over. How long after the recovery master outage do you
> see the ban occur? Full logs of this would be helpful.

I've run a test with RecoveryBanPeriod=10 to keep the ban time somewhat
manageable. It takes only a few seconds for the whole cluster to get banned,
certainly less than 10 seconds. I've attached the relevant logs from the two
remaining nodes after I'd killed the third. It looks like lock contention.
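For completeness, the settings involved look roughly like this on my side.
The ctdb.tunables path and the [cluster] section name are what I believe the
4.9-style config layout uses, so adjust if your layout differs:

    # /etc/ctdb/ctdb.tunables -- RecoveryBanPeriod=10 for this test only;
    # EnableBans=0 is the workaround I mentioned for issue 1, not active here
    RecoveryBanPeriod=10

    # same thing at runtime, per node
    ctdb setvar RecoveryBanPeriod 10

    # /etc/ctdb/ctdb.conf, [cluster] section -- this matches the helper
    # invocation visible in the attached logs
    recovery lock = !/usr/local/bin/ctdb_mutex_ceph_rados_helper ceph client.test foo ctdb_sgw_recovery_lock 30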

> > 2. Split brain: If the current recovery master drops off the network for 
> >    whatever reason but keeps running, it will ignore the fact that it can't 
> >    get the reclock: "Time out getting recovery lock, allowing recmode set 
> >    anyway". It will remain at status "OK" and start to claim every virtual
> >    IP in the cluster.
> 
> Is the recovery master dropping off both the ceph and ctdb networks in
> this case, or just the latter? I've not done much testing of either
> scenario, so I think it's worth tracking via a new bugzilla.samba.org
> ticket. I think the failure to obtain quorum in this case should see the
> isolated node stop taking part in IP failover, etc. 

I've been testing this by killing openvswitch, so everything on that node is
offline. I've attached some relevant logs of this too.
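For reference, "killing openvswitch" here just means stopping the service on
the node, something along these lines:

    # unit name depends on the distribution
    # (openvswitch-switch on Debian/Ubuntu, openvswitch on RHEL/CentOS)
    systemctl stop openvswitch-switch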

-- 
Michel Buijsman
BIT BV | Unix beheer | michelb at bit.nl | 08B90948
-------------- next part --------------
2018/10/30 15:36:22.733357 ctdbd[13348]: dead count reached for node 0
2018/10/30 15:36:22.733388 ctdbd[13348]: 2001:7b8:80:3:0:1:9:2:4379: node 2001:7b8:80:1:0:1:9:1:4379 is dead: 1 connected
2018/10/30 15:36:22.733424 ctdbd[13348]: Tearing down connection to dead node :0
2018/10/30 15:36:22.733482 ctdbd[13348]: dead count reached for node 2
2018/10/30 15:36:22.733487 ctdbd[13348]: 2001:7b8:80:3:0:1:9:2:4379: node 2001:7b8:80:2:0:1:9:3:4379 is dead: 0 connected
2018/10/30 15:36:22.733486 ctdb-recoverd[13456]: ctdb_control error: 'node is disconnected'
2018/10/30 15:36:22.733515 ctdbd[13348]: Tearing down connection to dead node :2
2018/10/30 15:36:22.733518 ctdb-recoverd[13456]: ctdb_control error: 'node is disconnected'
2018/10/30 15:36:22.733537 ctdb-recoverd[13456]: Async operation failed with ret=-1 res=-1 opcode=80
2018/10/30 15:36:22.733545 ctdb-recoverd[13456]: ctdb_control error: 'node is disconnected'
2018/10/30 15:36:22.733562 ctdb-recoverd[13456]: ctdb_control error: 'node is disconnected'
2018/10/30 15:36:22.733566 ctdb-recoverd[13456]: Async operation failed with ret=-1 res=-1 opcode=80
2018/10/30 15:36:22.733569 ctdb-recoverd[13456]: Async wait failed - fail_count=2
2018/10/30 15:36:22.733573 ctdb-recoverd[13456]: ../ctdb/server/ctdb_client.c:1931 Failed to read node capabilities.
2018/10/30 15:36:22.733577 ctdb-recoverd[13456]: ../ctdb/server/ctdb_recoverd.c:370 Failed to get node capabilities
2018/10/30 15:36:22.733581 ctdb-recoverd[13456]: ../ctdb/server/ctdb_recoverd.c:2731 Unable to update node capabilities.
2018/10/30 15:36:22.734269 ctdb-recoverd[13456]: ../ctdb/server/ctdb_recoverd.c:2953 The vnnmap count is different from the number of active lmaster nodes: 3 vs 1
2018/10/30 15:36:22.734302 ctdb-recoverd[13456]: ../ctdb/server/ctdb_recoverd.c:1317 Starting do_recovery
2018/10/30 15:36:22.734307 ctdb-recoverd[13456]: Already holding recovery lock
2018/10/30 15:36:22.734310 ctdb-recoverd[13456]: ../ctdb/server/ctdb_recoverd.c:1392 Recovery initiated due to problem with node 1
2018/10/30 15:36:22.734329 ctdb-recoverd[13456]: ../ctdb/server/ctdb_recoverd.c:1417 Recovery - created remote databases
2018/10/30 15:36:22.735998 ctdb-recoverd[13456]: ../ctdb/server/ctdb_recoverd.c:1446 Recovery - updated flags
2018/10/30 15:36:22.737649 ctdbd[13348]: Recovery mode set to ACTIVE
2018/10/30 15:36:22.737671 ctdb-recovery[7278]: Set recovery mode to ACTIVE
2018/10/30 15:36:22.737705 ctdbd[13348]: Recovery has started
2018/10/30 15:36:22.738055 ctdbd[13348]: Monitoring event was cancelled
2018/10/30 15:36:22.753639 ctdb-recovery[7278]: start_recovery event finished
2018/10/30 15:36:22.753742 ctdb-recovery[7278]: updated VNNMAP
2018/10/30 15:36:22.753766 ctdb-recovery[7278]: recover database 0x6645c6c4
2018/10/30 15:36:22.753938 ctdbd[13348]: Freeze db: ctdb.tdb
2018/10/30 15:36:22.781406 ctdbd[13348]: Thaw db: ctdb.tdb generation 2041215824
2018/10/30 15:36:22.781422 ctdbd[13348]: Release freeze handle for db ctdb.tdb
2018/10/30 15:36:22.781618 ctdb-recovery[7278]: 1 of 1 databases recovered
2018/10/30 15:36:22.781645 ctdbd[13348]: Recovery mode set to NORMAL
2018/10/30 15:36:27.782614 ctdbd[13348]: ../ctdb/server/ctdb_recover.c:787Time out getting recovery lock, allowing recmode set anyway
2018/10/30 15:36:27.782788 ctdb-recovery[7278]: Set recovery mode to NORMAL
2018/10/30 15:36:27.783451 ctdbd[13348]: Recovery has finished
2018/10/30 15:36:27.798121 ctdb-recovery[7278]: recovered event finished
2018/10/30 15:36:27.798189 ctdb-recoverd[13456]: Takeover run starting

-------------- next part --------------
2018/10/30 16:36:16.382280 ctdbd[1987]: dead count reached for node 2
2018/10/30 16:36:16.382308 ctdbd[1987]: 2001:7b8:80:1:0:1:9:1:4379: node 2001:7b8:80:2:0:1:9:3:4379 is dead: 1 connected
2018/10/30 16:36:16.382330 ctdbd[1987]: Tearing down connection to dead node :2
2018/10/30 16:36:16.382407 ctdb-recoverd[2361]: ctdb_control error: 'node is disconnected'
2018/10/30 16:36:16.382421 ctdb-recoverd[2361]: ctdb_control error: 'node is disconnected'
2018/10/30 16:36:16.382442 ctdb-recoverd[2361]: Async operation failed with ret=-1 res=-1 opcode=80
2018/10/30 16:36:16.382448 ctdb-recoverd[2361]: Async wait failed - fail_count=1
2018/10/30 16:36:16.382466 ctdb-recoverd[2361]: ../ctdb/server/ctdb_client.c:1931 Failed to read node capabilities.
2018/10/30 16:36:16.382471 ctdb-recoverd[2361]: ../ctdb/server/ctdb_recoverd.c:370 Failed to get node capabilities
2018/10/30 16:36:16.382476 ctdb-recoverd[2361]: ../ctdb/server/ctdb_recoverd.c:2731 Unable to update node capabilities.
2018/10/30 16:36:16.383423 ctdb-recoverd[2361]:  Current recmaster node 2 does not have CAP_RECMASTER, but we (node 0) have - force an election
2018/10/30 16:36:16.383473 ctdbd[1987]: Recovery mode set to ACTIVE
2018/10/30 16:36:16.383806 ctdbd[1987]: This node (0) is now the recovery master
2018/10/30 16:36:19.387132 ctdb-recoverd[2361]: Election period ended
2018/10/30 16:36:19.389388 ctdb-recoverd[2361]: Node:0 was in recovery mode. Start recovery process
2018/10/30 16:36:19.389676 ctdb-recoverd[2361]: Node:1 was in recovery mode. Start recovery process
2018/10/30 16:36:19.389699 ctdb-recoverd[2361]: ../ctdb/server/ctdb_recoverd.c:1317 Starting do_recovery
2018/10/30 16:36:19.389704 ctdb-recoverd[2361]: Attempting to take recovery lock (!/usr/local/bin/ctdb_mutex_ceph_rados_helper ceph client.test foo ctdb_sgw_recovery_lock 30)
2018/10/30 16:36:19.407603 ctdb-recoverd[2361]: Unable to take recovery lock - contention
2018/10/30 16:36:19.407656 ctdb-recoverd[2361]: Unable to take recovery lock
2018/10/30 16:36:19.407661 ctdb-recoverd[2361]: Abort recovery, ban this node for 10 seconds
2018/10/30 16:36:19.407680 ctdb-recoverd[2361]: Banning node 0 for 10 seconds
2018/10/30 16:36:19.407777 ctdbd[1987]: Banning this node for 10 seconds
2018/10/30 16:36:19.407801 ctdbd[1987]: This node has been banned - releasing all public IPs and setting the generation to INVALID.
2018/10/30 16:36:19.407865 ctdbd[1987]: ../ctdb/server/ctdb_takeover.c:1638 Released 0 public IPs
2018/10/30 16:36:20.089256 ctdbd[1987]: Remote node (1) is now the recovery master
2018/10/30 16:36:23.090501 ctdb-recoverd[2361]: Election period ended
2018/10/30 16:36:23.093966 ctdb-recoverd[2361]: Node 0 has changed flags - now 0x8  was 0x0
2018/10/30 16:36:23.392377 ctdbd[1987]: Freeze all
2018/10/30 16:36:23.392400 ctdbd[1987]: Freeze db: ctdb.tdb
2018/10/30 16:36:29.408695 ctdbd[1987]: Banning timed out
2018/10/30 16:36:30.405253 ctdb-recoverd[2361]: Recmaster node 1 is inactive. Force election
2018/10/30 16:36:30.405641 ctdbd[1987]: This node (0) is now the recovery master
2018/10/30 16:36:33.409040 ctdb-recoverd[2361]: Election period ended
2018/10/30 16:36:33.411652 ctdb-recoverd[2361]: ../ctdb/server/ctdb_recoverd.c:1317 Starting do_recovery
2018/10/30 16:36:33.411688 ctdb-recoverd[2361]: Attempting to take recovery lock (!/usr/local/bin/ctdb_mutex_ceph_rados_helper ceph client.test foo ctdb_sgw_recovery_lock 30)
2018/10/30 16:36:33.505511 ctdb-recoverd[2361]: Recovery lock taken successfully
2018/10/30 16:36:33.505539 ctdb-recoverd[2361]: ../ctdb/server/ctdb_recoverd.c:1392 Recovery initiated due to problem with node 0
2018/10/30 16:36:33.506860 ctdb-recoverd[2361]: ../ctdb/server/ctdb_recoverd.c:1417 Recovery - created remote databases
2018/10/30 16:36:33.508873 ctdb-recoverd[2361]: ../ctdb/server/ctdb_recoverd.c:1446 Recovery - updated flags
2018/10/30 16:36:33.511932 ctdb-recovery[17712]: Set recovery mode to ACTIVE
2018/10/30 16:36:33.512044 ctdbd[1987]: Recovery has started
2018/10/30 16:36:33.526738 ctdb-recovery[17712]: start_recovery event finished
2018/10/30 16:36:33.527215 ctdb-recovery[17712]: updated VNNMAP
2018/10/30 16:36:33.527239 ctdb-recovery[17712]: recover database 0x6645c6c4
2018/10/30 16:36:33.527591 ctdbd[1987]: Freeze db: ctdb.tdb frozen
2018/10/30 16:36:33.556014 ctdbd[1987]: Thaw db: ctdb.tdb generation 1301072303
2018/10/30 16:36:33.556035 ctdbd[1987]: Release freeze handle for db ctdb.tdb
2018/10/30 16:36:33.556570 ctdb-recovery[17712]: 1 of 1 databases recovered
2018/10/30 16:36:33.556681 ctdbd[1987]: Recovery mode set to NORMAL
2018/10/30 16:36:33.556696 ctdbd[1987]: Thawing all
2018/10/30 16:36:33.556700 ctdbd[1987]: Release freeze handle
2018/10/30 16:36:33.573810 ctdb-recovery[17712]: Set recovery mode to NORMAL
2018/10/30 16:36:33.573928 ctdbd[1987]: Recovery has finished
2018/10/30 16:36:33.588100 ctdb-recovery[17712]: recovered event finished
2018/10/30 16:36:33.588162 ctdb-recoverd[2361]: Takeover run starting
2018/10/30 16:36:33.590387 ctdbd[1987]: Takeover of IP [etc...]

-------------- next part --------------
2018/10/30 16:36:17.082699 ctdbd[2603]: dead count reached for node 2
2018/10/30 16:36:17.082724 ctdbd[2603]: 2001:7b8:80:3:0:1:9:2:4379: node 2001:7b8:80:2:0:1:9:3:4379 is dead: 1 connected
2018/10/30 16:36:17.082760 ctdbd[2603]: Tearing down connection to dead node :2
2018/10/30 16:36:17.082822 ctdb-recoverd[2712]: ctdb_control error: 'node is disconnected'
2018/10/30 16:36:17.082850 ctdb-recoverd[2712]: ctdb_control error: 'node is disconnected'
2018/10/30 16:36:17.082871 ctdb-recoverd[2712]: Async operation failed with ret=-1 res=-1 opcode=80
2018/10/30 16:36:17.082877 ctdb-recoverd[2712]: Async wait failed - fail_count=1
2018/10/30 16:36:17.082881 ctdb-recoverd[2712]: ../ctdb/server/ctdb_client.c:1931 Failed to read node capabilities.
2018/10/30 16:36:17.082885 ctdb-recoverd[2712]: ../ctdb/server/ctdb_recoverd.c:370 Failed to get node capabilities
2018/10/30 16:36:17.082890 ctdb-recoverd[2712]: ../ctdb/server/ctdb_recoverd.c:2731 Unable to update node capabilities.
2018/10/30 16:36:19.384992 ctdb-recoverd[2712]: Election period ended
2018/10/30 16:36:20.088412 ctdb-recoverd[2712]: Recmaster node 0 is inactive. Force election
2018/10/30 16:36:20.088558 ctdbd[2603]: This node (1) is now the recovery master
2018/10/30 16:36:23.091797 ctdb-recoverd[2712]: Election period ended
2018/10/30 16:36:23.093502 ctdb-recoverd[2712]: Node 0 has changed flags - now 0x8  was 0x0
2018/10/30 16:36:23.093804 ctdb-recoverd[2712]: Remote node 0 had flags 0x8, local had 0x0 - updating local
2018/10/30 16:36:23.094017 ctdb-recoverd[2712]: Node:1 was in recovery mode. Start recovery process
2018/10/30 16:36:23.094027 ctdb-recoverd[2712]: ../ctdb/server/ctdb_recoverd.c:1317 Starting do_recovery
2018/10/30 16:36:23.094031 ctdb-recoverd[2712]: Attempting to take recovery lock (!/usr/local/bin/ctdb_mutex_ceph_rados_helper ceph client.test foo ctdb_sgw_recovery_lock 30)
2018/10/30 16:36:23.111688 ctdb-recoverd[2712]: Unable to take recovery lock - contention
2018/10/30 16:36:23.111720 ctdb-recoverd[2712]: Unable to take recovery lock
2018/10/30 16:36:23.111743 ctdb-recoverd[2712]: Abort recovery, ban this node for 10 seconds
2018/10/30 16:36:23.111746 ctdb-recoverd[2712]: Banning node 1 for 10 seconds
2018/10/30 16:36:23.111817 ctdbd[2603]: Banning this node for 10 seconds
2018/10/30 16:36:23.111828 ctdbd[2603]: This node has been banned - releasing all public IPs and setting the generation to INVALID.
2018/10/30 16:36:23.111863 ctdbd[2603]: ../ctdb/server/ctdb_takeover.c:1638 Released 0 public IPs
2018/10/30 16:36:24.094361 ctdbd[2603]: Freeze all
2018/10/30 16:36:24.094386 ctdbd[2603]: Freeze db: ctdb.tdb
2018/10/30 16:36:30.406297 ctdbd[2603]: Remote node (0) is now the recovery master
2018/10/30 16:36:33.112055 ctdbd[2603]: Banning timed out
2018/10/30 16:36:33.407389 ctdb-recoverd[2712]: Election period ended
2018/10/30 16:36:33.508008 ctdb-recoverd[2712]: Node 0 has changed flags - now 0x0  was 0x8
2018/10/30 16:36:33.512282 ctdbd[2603]: Recovery has started
2018/10/30 16:36:33.527700 ctdbd[2603]: Freeze db: ctdb.tdb frozen
2018/10/30 16:36:33.556146 ctdbd[2603]: Thaw db: ctdb.tdb generation 1301072303
2018/10/30 16:36:33.556167 ctdbd[2603]: Release freeze handle for db ctdb.tdb
2018/10/30 16:36:33.556859 ctdbd[2603]: Recovery mode set to NORMAL
2018/10/30 16:36:33.556876 ctdbd[2603]: Thawing all
2018/10/30 16:36:33.556880 ctdbd[2603]: Release freeze handle
2018/10/30 16:36:33.573947 ctdbd[2603]: Recovery has finished
2018/10/30 16:36:33.588355 ctdb-recoverd[2712]: Disabling takeover runs for 60 seconds
2018/10/30 16:36:33.590411 ctdbd[2603]: Takeover of IP [etc...]
