CTDB IP takeover/failover tunables - do you use them?

Martin Schwenke martin at meltin.net
Thu Apr 20 10:42:54 UTC 2017

On Thu, 20 Apr 2017 12:24:48 +0200, hvjunk <hvjunk at gmail.com> wrote:

> However, would it be difficult to have an IP "preferred" to a
> specific node? i.e., in the CTDB_PUBLIC_ADDRESSES for a node, have
> something like:

> eth1 prefer
> eth1

Not with the current code.
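For context, the standard public_addresses file pairs an address/mask with an interface, one per line; the "prefer" keyword quoted above is the hypothetical extension being proposed, not existing syntax (addresses below are illustrative):

```
# /etc/ctdb/public_addresses  (current format: address/mask interface)
192.168.1.10/24 eth1
192.168.1.11/24 eth1
```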

> In a case with the cluster “local” I know it’ll not make a
> difference, but with my “distributed” nodes, the locality would be
> nice to have it assigned, rather than computed, especially if the
> algorithm might change during an upgrade.

OK, I can see why you want the locality.

If everything goes according to plan then we will completely rewrite
the way IP failover is done in CTDB, while maintaining approximately
the same functionality.  It is likely that we will factor out the
program that takes the IP layout and the node states and produces a new
IP layout. If we do this, then it would be simple to make it
pluggable... and you could easily replace it with a script that handles
your locality requirement.  However, that's probably 10 or 12 months
away.
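To make the idea concrete, here is a rough sketch of what such a pluggable allocator script might do; the function name, inputs, and interface are my assumptions, not a real CTDB API:

```python
# Hypothetical pluggable IP allocator (sketch only; not a real CTDB
# interface).  Given each IP's preferred "home" node and the set of
# currently healthy nodes, keep IPs local where possible and spread
# the rest across the least-loaded healthy nodes.

def allocate_ips(preferences, healthy_nodes):
    """preferences: dict of ip -> preferred node; healthy_nodes: set."""
    load = {node: 0 for node in healthy_nodes}
    layout = {}
    # First pass: honour locality when the preferred node is healthy.
    for ip, node in preferences.items():
        if node in healthy_nodes:
            layout[ip] = node
            load[node] += 1
    # Second pass: place orphaned IPs on the least-loaded healthy node.
    for ip, node in preferences.items():
        if ip not in layout:
            target = min(load, key=load.get)
            layout[ip] = target
            load[target] += 1
    return layout
```

A wrapper script could read the node states and current layout from CTDB, call something like this, and emit the new layout, which is exactly the kind of replacement a pluggable design would allow.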

A bit of history...  The original algorithm was deterministic IPs,
which is good for simple configurations (e.g. all nodes host IPs and
they all have the same configuration). The next algorithm was
non-deterministic IPs, where things are constantly rebalanced and can
end up anywhere. However, this doesn't work well for multiple
networks/interfaces.  The current default LCP2 algorithm uses a
heuristic to be able to balance a lot of IPs across multiple
networks/interfaces per node, with different configurations on
different nodes.  So, we have worked to support more complex scenarios
rather than the simple ones... like the one you want.
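As an illustration of why deterministic IPs suit simple configurations, here is a simplified sketch of the idea (not the actual CTDB implementation): IP number i normally lives on node i mod N, so the layout is predictable, and an IP only moves when its home node is unhealthy:

```python
# Simplified illustration of the "deterministic IPs" idea: each IP
# has a fixed home node (i % num_nodes); if that node is unhealthy,
# the IP falls through to the next healthy node.

def deterministic_layout(num_ips, num_nodes, healthy):
    """healthy: set of node numbers currently able to host IPs."""
    layout = {}
    for i in range(num_ips):
        home = i % num_nodes
        # Walk forward from the home node until a healthy node is found.
        for offset in range(num_nodes):
            candidate = (home + offset) % num_nodes
            if candidate in healthy:
                layout[i] = candidate
                break
    return layout
```

With all nodes healthy this gives the same answer every time, which is the appeal; it breaks down once nodes have different interface/network configurations, which is what LCP2's heuristic addresses.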

> In any case, thank you Martin for the help, it helped me a lot!

You're very welcome!  :-)

peace & happiness,
