[Samba] CTDB: testing network oscillations on the leader node, resulting in split brain

tu.qiuping tu.qiuping at qq.com
Thu Jun 15 07:48:52 UTC 2023


Thank you so much for taking the time to reply to my email.


I'm sorry that I left out the log of node 2 earlier. Its log after 2023-06-12T19:39:40.086620 is as follows:


2023-06-12T19:39:40.086536+08:00 host-192-168-40-133 ctdb-recovery[2603566]: recovered event finished
2023-06-12T19:39:40.086620+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Takeover run starting
2023-06-12T19:39:40.100386+08:00 host-192-168-40-133 ctdbd[2520003]: Takeover of IP 101.101.40.133/16 on interface bond1.906
2023-06-12T19:39:40.101038+08:00 host-192-168-40-133 ctdbd[2520003]: Takeover of IP 101.101.40.132/16 on interface bond1.906
2023-06-12T19:39:40.101633+08:00 host-192-168-40-133 ctdbd[2520003]: Takeover of IP 101.101.40.131/16 on interface bond1.906
2023-06-12T19:39:40.424385+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Takeover run completed successfully
2023-06-12T19:39:40.424452+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ../../ctdb/server/ctdb_recoverd.c:1404 Recovery complete
2023-06-12T19:39:40.424467+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Resetting ban count to 0 for all nodes
2023-06-12T19:39:40.424500+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Just finished a recovery. New recoveries will now be suppressed for the rerecovery timeout (10 seconds)
2023-06-12T19:39:40.424516+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Disabling recoveries for 10 seconds
2023-06-12T19:39:40.687836+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Takeover run starting
2023-06-12T19:39:40.774038+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Takeover run completed successfully
2023-06-12T19:39:41.420552+08:00 host-192-168-40-133 ctdbd[2520003]: 192.136.40.133:4379: connected to 192.136.40.131:4379 - 1 connected
2023-06-12T19:39:41.424307+08:00 host-192-168-40-133 ctdbd[2520003]: Node 0 has changed flags - 0x2 -> 0x0
2023-06-12T19:39:42.013192+08:00 host-192-168-40-133 ctdbd[2520003]: 192.136.40.133:4379: connected to 192.136.40.132:4379 - 2 connected
2023-06-12T19:39:42.690010+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Remote node 1 had flags 0x0, local had 0x2 - updating local
2023-06-12T19:39:42.690138+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Pushing updated flags for node 1 (0x0)
2023-06-12T19:39:42.690201+08:00 host-192-168-40-133 ctdbd[2520003]: Node 1 has changed flags - 0x2 -> 0x0
2023-06-12T19:39:42.690282+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Pushing updated flags for node 2 (0x0)
2023-06-12T19:39:42.866980+08:00 host-192-168-40-133 ctdbd[2520003]: Lock contention during renew: -16
2023-06-12T19:39:42.888760+08:00 host-192-168-40-133 ctdbd[2520003]: /usr/libexec/ctdb/ctdb_mutex_clove_rados_helper: Failed to drop lock on RADOS object 'ctdb_reclock' - (No such file or directory)
2023-06-12T19:39:42.894979+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Cluster lock helper terminated
2023-06-12T19:39:42.895168+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Start election
2023-06-12T19:39:42.895267+08:00 host-192-168-40-133 ctdbd[2520003]: Recovery mode set to ACTIVE
2023-06-12T19:39:42.895637+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Attempting to take cluster lock (!/usr/libexec/ctdb/ctdb_mutex_clove_rados_helper clove client.admin fs_pool ctdb_reclock 4)
2023-06-12T19:39:43.119817+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Unable to take cluster lock - contention
2023-06-12T19:39:47.897800+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Leader broadcast timeout
2023-06-12T19:39:47.898145+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Start election
2023-06-12T19:39:50.354774+08:00 host-192-168-40-133 ctdbd[2520003]: dead count reached for node 1
2023-06-12T19:39:50.355104+08:00 host-192-168-40-133 ctdbd[2520003]: Tearing down connection to dead node :1
2023-06-12T19:39:50.355153+08:00 host-192-168-40-133 ctdbd[2520003]: 192.136.40.133:4379: node 192.136.40.132:4379 is dead: 1 connected
2023-06-12T19:39:50.355322+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ctdb_control error: 'node is disconnected'
2023-06-12T19:39:50.355380+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ctdb_control error: 'node is disconnected'
2023-06-12T19:39:50.355438+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Async operation failed with ret=-1 res=-1 opcode=16
2023-06-12T19:39:50.355489+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ctdb_control error: 'node is disconnected'
2023-06-12T19:39:50.355539+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ctdb_control error: 'node is disconnected'
2023-06-12T19:39:50.355555+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Async operation failed with ret=-1 res=-1 opcode=80
2023-06-12T19:39:50.424654+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Reenabling recoveries after timeout
2023-06-12T19:39:52.355600+08:00 host-192-168-40-133 ctdbd[2520003]: dead count reached for node 0
2023-06-12T19:39:52.355946+08:00 host-192-168-40-133 ctdbd[2520003]: Tearing down connection to dead node :0
2023-06-12T19:39:52.356002+08:00 host-192-168-40-133 ctdbd[2520003]: 192.136.40.133:4379: node 192.136.40.131:4379 is dead: 0 connected
2023-06-12T19:39:52.356236+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ctdb_control error: 'node is disconnected'
2023-06-12T19:39:52.356282+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ctdb_control error: 'node is disconnected'
2023-06-12T19:39:52.356302+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Async operation failed with ret=-1 res=-1 opcode=16
2023-06-12T19:39:52.356321+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Async wait failed - fail_count=2
2023-06-12T19:39:52.356339+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ../../ctdb/server/ctdb_recoverd.c:504 Unable to set recovery mode. Recovery failed.
2023-06-12T19:39:52.356358+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ../../ctdb/server/ctdb_recoverd.c:1921 Unable to set recovery mode to active on cluster
2023-06-12T19:39:52.356374+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ctdb_control error: 'node is disconnected'
2023-06-12T19:39:52.356392+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ctdb_control error: 'node is disconnected'
2023-06-12T19:39:52.356408+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Async operation failed with ret=-1 res=-1 opcode=80
2023-06-12T19:39:52.356431+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Async wait failed - fail_count=2
2023-06-12T19:39:52.356443+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ../../ctdb/server/ctdb_client.c:1558 Failed to read node capabilities.
2023-06-12T19:39:52.356455+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ../../ctdb/server/ctdb_recoverd.c:456 Failed to get node capabilities
2023-06-12T19:39:52.356467+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ../../ctdb/server/ctdb_recoverd.c:2636 Unable to update node capabilities.
2023-06-12T19:39:57.357507+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Leader broadcast timeout
2023-06-12T19:39:57.357811+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Start election
2023-06-12T19:39:57.358129+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Attempting to take cluster lock (!/usr/libexec/ctdb/ctdb_mutex_clove_rados_helper clove client.admin fs_pool ctdb_reclock 4)
2023-06-12T19:39:57.580652+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Unable to take cluster lock - contention
2023-06-12T19:39:58.361021+08:00 host-192-168-40-133 ctdbd[2520003]: 192.136.40.133:4379: connected to 192.136.40.132:4379 - 1 connected
......
2023-06-12T19:41:21.460624+08:00 host-192-168-40-133 ctdbd[2520003]: 192.136.40.133:4379: connected to 192.136.40.131:4379 - 2 connected
2023-06-12T19:41:28.459502+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Leader broadcast timeout
2023-06-12T19:41:28.459854+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Start election
2023-06-12T19:41:28.460222+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Attempting to take cluster lock (!/usr/libexec/ctdb/ctdb_mutex_clove_rados_helper clove client.admin fs_pool ctdb_reclock 4)
2023-06-12T19:41:28.685596+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Unable to take cluster lock - contention
2023-06-12T19:41:38.462504+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Leader broadcast timeout
2023-06-12T19:41:38.462856+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Start election
2023-06-12T19:41:38.463151+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Attempting to take cluster lock (!/usr/libexec/ctdb/ctdb_mutex_clove_rados_helper clove client.admin fs_pool ctdb_reclock 4)
2023-06-12T19:41:38.699469+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Unable to take cluster lock - contention
2023-06-12T19:41:42.896272+08:00 host-192-168-40-133 ctdbd[2520003]: ../../ctdb/server/ctdb_recover.c:609 Been in recovery mode for too long. Dropping all IPS
2023-06-12T19:41:42.899299+08:00 host-192-168-40-133 ctdbd[2520003]: ../../ctdb/server/ctdb_takeover.c:1689 Released 0 public IPs
2023-06-12T19:41:43.464497+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Leader broadcast timeout
2023-06-12T19:41:43.464592+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Start election
2023-06-12T19:41:43.464971+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Attempting to take cluster lock (!/usr/libexec/ctdb/ctdb_mutex_clove_rados_helper clove client.admin fs_pool ctdb_reclock 4)
2023-06-12T19:41:43.688413+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: Unable to take cluster lock - contention



As you can see from the log, node 2 never received a leader broadcast from node 0.


For reference, my CTDB cluster is configured with the following tunables:
MonitorInterval=5
RecoveryBanPeriod=5
KeepaliveInterval=1
KeepaliveLimit=2
ElectionTimeout=2
The cluster lock renew time is 4 seconds, and all other settings use their default values.
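
For completeness, this is roughly how those values are laid out in the
configuration files (the paths below are the usual defaults for this
version; the lock helper line is the same one that appears in the logs
above):

  # /etc/ctdb/ctdb.tunables - values changed from the defaults
  MonitorInterval=5
  RecoveryBanPeriod=5
  KeepaliveInterval=1
  KeepaliveLimit=2
  ElectionTimeout=2

  # /etc/ctdb/ctdb.conf - cluster lock via the RADOS mutex helper;
  # the trailing 4 corresponds to the 4-second renew time mentioned above
  [cluster]
      recovery lock = !/usr/libexec/ctdb/ctdb_mutex_clove_rados_helper clove client.admin fs_pool ctdb_reclock 4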


I used the following command on node 2 to generate the network oscillation:
for i in {1..1000}; do ifdown bond1.906 && sleep 5s && ifup bond1.906 && sleep 5s; done
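
So each cycle is 5 seconds down followed by 5 seconds up. If it helps to
line the transitions up with the CTDB logs, the same loop can also record
each down/up event in syslog, e.g. (just a sketch; the "netosc" tag is
arbitrary):

  for i in {1..1000}; do
      logger -t netosc "cycle $i: taking bond1.906 down"   # timestamp the down event
      ifdown bond1.906
      sleep 5s
      logger -t netosc "cycle $i: bringing bond1.906 up"   # timestamp the up event
      ifup bond1.906
      sleep 5s
  done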


It should be noted that the interface bond1.906 carries both the CTDB private address and the CTDB public addresses.


Could you take another look and help me figure out what is causing this?


thanks
tuqp







------------------ Original ------------------
From: Martin Schwenke <martin at meltin.net>
Date: Thu,Jun 15,2023 9:22 AM
To: tu.qiuping <tu.qiuping at qq.com>
Cc: samba <samba at lists.samba.org>
Subject: Re: [Samba] CTDB: testing network oscillations on the leader node, resulting in split brain



Hi,

On Wed, 14 Jun 2023 09:15:58 +0800, "tu.qiuping via samba" wrote:

> My ctdb version is 4.17.7

> My ctdb cluster configuration is correct and the cluster is healthy before operation.

> My cluster has three nodes, namely 192.168.40.131(node 0),
> 192.168.40.132(node 1), and 192.168.40.133(node 2). And the node
> 192.168.40.133 is the leader.

> I conducted network oscillation testing on node 192.168.40.133, and
> after a period of time, the lock update of this node failed, and at
> this time, the lock was taken away by node 0. Amazingly, after node 0
> received the lock, it sent a message with leader=0 to node 1, but did
> not send it to node 2, even though the network of node 2 was healthy
> at this time. And when I restored the network of node 2, node 1 and
> node 2 kept trying to acquire the lock and reported an error: Unable
> to take cluster lock - contention.

It would be excellent if you could describe your network oscillation
testing in some more detail.  I assume you are taking down a network
connection for some time and then bringing it back up.  The down and up
timings would be useful for debugging.

By the way, you should never be amazed by clustering.  It is all based
on timeout-based heuristics, so you will almost certainly see unexpected
things.  :-)

I strongly suggest that you merge the logs by sorting them when trying
to debug these types of issues.
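
Because each line starts with an ISO-8601 timestamp, a plain textual
sort already interleaves them in time order.  Something like this (the
file names are just placeholders for the per-node logs) gives a single
merged timeline:

  # merge the per-node CTDB logs into one chronological stream
  sort node0.log node1.log node2.log > merged.log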

The main events are...

Nodes 0 and 1 are worried about their leader:

  2023-06-12T19:39:32.603600+08:00 host-192-168-40-131 ctdb-recoverd[2853595]: Leader broadcast timeout
  2023-06-12T19:39:32.604185+08:00 host-192-168-40-132 ctdb-recoverd[659348]: Leader broadcast timeout

The nodes notice that node 2 has become disconnected from the rest of
the cluster:

  2023-06-12T19:39:34.005643+08:00 host-192-168-40-132 ctdbd[659192]: dead count reached for node 2
  2023-06-12T19:39:34.343927+08:00 host-192-168-40-133 ctdbd[2520003]: dead count reached for node 0
  2023-06-12T19:39:34.344332+08:00 host-192-168-40-133 ctdbd[2520003]: dead count reached for node 1
  2023-06-12T19:39:35.414699+08:00 host-192-168-40-131 ctdbd[2853172]: dead count reached for node 2

During this time, node 2 completes recovery with itself:

  2023-06-12T19:39:34.361318+08:00 host-192-168-40-133 ctdbd[2520003]: Recovery mode set to ACTIVE
  ...
  2023-06-12T19:39:34.445242+08:00 host-192-168-40-133 ctdbd[2520003]: Recovery mode set to NORMAL

Node 2 starts to have problems because it can now take the cluster
lock during recovery:

  2023-06-12T19:39:37.680876+08:00 host-192-168-40-133 ctdbd[2520003]: ERROR: Daemon able to take recovery lock on "!/usr/libexec/ctdb/ctdb_mutex_clove_rados_helper clove client.admin fs_pool ctdb_reclock 4" during recovery

Node 0 successfully takes the cluster lock:

  2023-06-12T19:39:39.229085+08:00 host-192-168-40-131 ctdb-recoverd[2853595]: Took cluster lock, leader=0

Node 0 runs a recovery for nodes 0 and 1:

  2023-06-12T19:39:39.420005+08:00 host-192-168-40-131 ctdb-recoverd[2853595]: ../../ctdb/server/ctdb_recoverd.c:1312 Starting do_recovery
  ...
  2023-06-12T19:39:39.432822+08:00 host-192-168-40-131 ctdbd[2853172]: Recovery has started
  2023-06-12T19:39:39.432969+08:00 host-192-168-40-132 ctdbd[659192]: Recovery has started
  ...
  2023-06-12T19:39:39.549895+08:00 host-192-168-40-131 ctdb-recovery[3065583]: 1 of 1 databases recovered
  2023-06-12T19:39:39.549985+08:00 host-192-168-40-131 ctdbd[2853172]: Recovery mode set to NORMAL
  2023-06-12T19:39:39.551083+08:00 host-192-168-40-132 ctdbd[659192]: Recovery mode set to NORMAL

Node 2 still tries to recover because it thinks it is leader:

  2023-06-12T19:39:39.684162+08:00 host-192-168-40-133 ctdb-recoverd[2520071]: ../../ctdb/server/ctdb_recoverd.c:1312 Starting do_recovery

Node 1 receives a leader broadcast (note that node 2 is still disconnected):

  2023-06-12T19:39:39.685181+08:00 host-192-168-40-132 ctdb-recoverd[659348]: Received leader broadcast, leader=0

Nodes 0 & 1 notice the return of node 2:

  2023-06-12T19:39:41.420521+08:00 host-192-168-40-131 ctdbd[2853172]: 192.136.40.131:4379: connected to 192.136.40.133:4379 - 2 connected
  2023-06-12T19:39:42.013100+08:00 host-192-168-40-132 ctdbd[659192]: 192.136.40.132:4379: connected to 192.136.40.133:4379 - 2 connected

Node 2 thinks it is still leader, so sends a leader broadcast,
received by node 1:

  2023-06-12T19:39:42.617032+08:00 host-192-168-40-132 ctdb-recoverd[659348]: Received leader broadcast, leader=2

Note that there aren't enough logs from node 2 to know if it also
received a leader broadcast from node 0.

Now node 2 is gone again:

  2023-06-12T19:39:51.420623+08:00 host-192-168-40-131 ctdbd[2853172]: dead count reached for node 2

This is only 10s after it was rejoined, so KeepaliveInterval and/or
KeepaliveLimit must be greatly reduced from the defaults.  It is hard
to understand what is happening without knowing which configuration
values have been changed from the defaults.

I can see 1 main problem here: you have an unresolved split brain.
Node 2 does not realise it has lost the cluster lock.  It still thinks
it is leader while node 0 is able to take the lock.  It seems like you
need to reduce timeouts on the cluster lock.  For the standard fcntl
helper, we added a configurable re-check time.
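
From memory, the re-check interval is the helper's optional second
argument, so it is configured something like this (the lock path is
only a placeholder):

  # /etc/ctdb/ctdb.conf - fcntl helper, re-checking the lock every 2 seconds
  [cluster]
      recovery lock = !/usr/libexec/ctdb/ctdb_mutex_fcntl_helper /clusterfs/.ctdb/reclock 2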

The general situation could be helped by introducing a stronger notion
of cluster membership (where membership is granted by the leader).  At
the moment nodes join the cluster when they are connected and this is
too weak.  Fixing this requires a major redesign, of the type we have
spoken about at SambaXP for years but have not yet had time to do.

By the way, putting "CTDB" in the subject is good because I don't read
anything on the general samba list that doesn't have CTDB in the
subject.  Also, if you post a merge request for CTDB, please add me as
a reviewer so I see it.  I don't get time to check all merge requests.

Thanks...

peace & happiness,
martin

