ctdb client python bindings

Andrew Walker awalker at ixsystems.com
Fri Apr 29 13:09:23 UTC 2022


On Fri, Apr 29, 2022 at 5:35 AM Amitay Isaacs <amitay at gmail.com> wrote:

> Hi Andrew,
>
> On Fri, Apr 29, 2022 at 1:04 AM Andrew Walker <awalker at ixsystems.com>
> wrote:
> > On Thu, Apr 28, 2022 at 4:14 AM Amitay Isaacs <amitay at gmail.com> wrote:
> >>
> >> I appreciate the efforts to implement python bindings for ctdb client
> >> interfaces.  However, I fail to understand the motivation behind this
> >> work.  Is there a requirement from some applications to have a python
> >> interface to CTDB?  Or do you have some other plans?
> >
> >
> > Well, I was working on this because our own product (truenas) has
> > python-based middleware, and I wanted to be able to get ctdb status info
> > without having to launch subprocesses. I was also planning to write a
> > python-based collectd plugin to gather stats from ctdb at configurable
> > intervals.
>
> Thanks for describing the motivation for python bindings for CTDB client
> API.
>
> >> In the past, Martin and I had considered developing python bindings
> >> for client interfaces.  The motivation there was to rewrite the ctdb
> >> tool in python. However, we never got around to doing that.
> >
> >
> > That's a good idea. I could go that route, which would reduce code
> > duplication. Basically keep the existing behaviors and arguments for the
> > ctdb tool, but have it be a python tool. Then it will probably not
> > increase the maintenance load.
>
> Before you commit to the idea of rewriting the ctdb tool in python,
> there are a few things that need some consideration (Martin might have
> more things to add.)
>
> For the last few years, Martin and I have been discussing splitting the
> monolithic ctdb daemon into separate daemons based on gross
> functionality.  These include database, cluster, failover, event, etc.
> Unfortunately we have not been able to make much progress on that
> front in recent years.  That does not mean we have given up on the
> idea; it's still in the works.  Whenever that happens, obviously the
> python bindings will need to be modified accordingly.


Hmm... do you have a general idea of the gross functionality you want to
have separate daemons for? I was already breaking up functionality into
different classes in the bindings I was writing. E.g.

ctdb.Ctdb - interacting with databases
ctdb.Node - interacting with cluster nodes
ctdb.IP - interacting with public IP addresses

I could eliminate the convention of
```
cl = ctdb.Client() # get client handle
ctdb.Ctdb(cl, ...)
```
and just initialize whatever client structures are needed in the object's
tp_init() function, so that the backend daemon / API can change without
requiring significant changes to the python bindings.
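
For illustration, a minimal sketch of what that could look like from the
caller's side; ctdb.Node is from the bindings I've been writing, but the
pnn keyword and the status() method are provisional names rather than
settled API:
```
import ctdb

# current convention: build a client handle and pass it to each object
cl = ctdb.Client()            # get client handle
node = ctdb.Node(cl, pnn=0)   # provisional constructor arguments

# proposed: the object sets up whatever client state it needs in tp_init(),
# so a later split of ctdbd into multiple daemons stays hidden from callers
node = ctdb.Node(pnn=0)
print(node.status())          # status() is a placeholder method name
```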

> This has implications on the ctdb tool also.  We would like to group the
> ctdb functions as per the gross functionality.  This means restructuring
> the ctdb commands in the style of "ctdb event" and subcommands, rather
> than top-level commands.  This breakup of ctdb tool functionality is
> likely to happen sooner than splitting of the daemon code.  We are
> also thinking of transforming the ctdb tool into a ctdb shell with
> readline.
>

I think these sorts of changes are probably quicker if the tool is written
in python.
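
For example, the "ctdb event" style of grouping falls out fairly naturally
from argparse subparsers. This is only a rough sketch of the shape; the
group and subcommand names are illustrative, not a proposed final layout:
```
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="ctdb")
    groups = parser.add_subparsers(dest="group", required=True)

    # one group per area of gross functionality, e.g. "ctdb event status"
    event = groups.add_parser("event", help="event daemon commands")
    event_sub = event.add_subparsers(dest="command", required=True)
    event_sub.add_parser("status", help="show event script status")
    event_sub.add_parser("run", help="run an event")

    return parser

if __name__ == "__main__":
    print(build_parser().parse_args())
```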


> I am sure it will be possible to adapt all the features in a python
> implementation.  One thing I haven't yet figured out is how to run the
> tevent event loop along with the python main loop.


I wonder if we need to have separate temporary tevent contexts for each
awaitable that we return. Basically let python wrap around loop_once() to
iterate until it completes. Cf. PEP 492 and the tp_as_async.am_await
function (granted, this is a pre-coffee thought).
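
Something along these lines, purely as a sketch: ev.loop_once() and
req.is_done()/req.recv() here are hypothetical python-level bindings
standing in for tevent_loop_once() and the usual _send/_recv pair, not
existing API.
```
class CtdbRequest:
    """Awaitable wrapper around one in-flight ctdb request (PEP 492)."""

    def __init__(self, ev, req):
        self._ev = ev    # temporary tevent context for this request
        self._req = req  # hypothetical in-flight request object

    def __await__(self):
        # drive the temporary tevent context one event at a time,
        # yielding back to the python event loop between iterations
        while not self._req.is_done():
            self._ev.loop_once()
            yield
        return self._req.recv()
```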

> If we decide to rewrite the ctdb tool in python, then it's essential to
> maintain the async event handling in python. I would like to get rid of
> the synchronous api layer completely.


I could switch to using the async API in the python bindings. Even for
implementing a non-async module, this probably would result in better
exception handling.
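
Caller-side it might end up looking something like this; asyncio usage is
my assumption, and Client.connect(), node_status() and ctdb.Error are
hypothetical names rather than the current binding surface:
```
import asyncio
import ctdb

async def main():
    client = await ctdb.Client.connect()   # hypothetical async constructor
    try:
        print(await client.node_status())  # hypothetical request method
    except ctdb.Error as err:
        # failures surface as exceptions on the awaiting task instead of
        # being hidden inside a synchronous wrapper layer
        print(f"ctdb request failed: {err}")

asyncio.run(main())
```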

