ctdb reclock file issues with glusterfs

Amitay Isaacs amitay at gmail.com
Mon Oct 8 23:32:12 MDT 2012


Hi Gilbert,

On Tue, Oct 9, 2012 at 1:55 PM, patrick medina <pgmedinajr at gmail.com> wrote:
> Howdy samba folks,
>
> I've been running into a lot of issues lately with CTDB's recovery lock
> (reclock) file on glusterfs shared storage.  At first I could get one
> node or the other to become healthy, but at least one would always
> complain that it could not take the lock on the reclock file.  Now I'm
> at the point where neither node becomes healthy and both stay stuck in
> a recovery loop.  Just to be sure it was the reclock file, I commented
> it out in the config and both nodes became healthy.
>
> I verified that POSIX locking is enabled and ran ping_pong on both
> nodes (919565 locks/sec on node 1, 924201 locks/sec on node 2).
>
> Here is a snippet from the log file; it loops continuously like this
> until I stop the service.
>
> 2012/10/08 20:24:57.165153 [recoverd:29685]: The rerecovery timeout has
> elapsed. We now allow recoveries to trigger again.
> 2012/10/08 20:24:57.167774 [recoverd:29685]: server/ctdb_recoverd.c:2739
> reclock child process returned error 5
> 2012/10/08 20:24:57.167840 [recoverd:29685]: server/ctdb_recoverd.c:2839
> reclock child failed when checking file
> 2012/10/08 20:24:57.169273 [recoverd:29685]: Failed check_recovery_lock.
> Force a recovery
> 2012/10/08 20:24:57.169288 [recoverd:29685]: server/ctdb_recoverd.c:1364
> Starting do_recovery
> 2012/10/08 20:24:57.169297 [recoverd:29685]: Taking out recovery lock from
> recovery daemon
> 2012/10/08 20:24:57.169306 [recoverd:29685]: Take the recovery lock
> 2012/10/08 20:24:57.174917 [recoverd:29685]: Recovery lock taken
> successfully
> 2012/10/08 20:24:57.174931 [recoverd:29685]: ctdb_recovery_lock: Got
> recovery lock on '/mnt/gluster/ctdb/lock'
> 2012/10/08 20:24:57.174999 [recoverd:29685]: Recovery lock taken
> successfully by recovery daemon
> 2012/10/08 20:24:57.175011 [recoverd:29685]: server/ctdb_recoverd.c:1401
> Recovery initiated due to problem with node 0
> 2012/10/08 20:24:57.175063 [recoverd:29685]: server/ctdb_recoverd.c:1426
> Recovery - created remote databases
> 2012/10/08 20:24:57.175076 [recoverd:29685]: server/ctdb_recoverd.c:1433
> Recovery - updated db priority for all databases
> 2012/10/08 20:24:57.175160 [29614]: Freeze priority 1
> 2012/10/08 20:24:57.176049 [29614]: Freeze priority 2
> 2012/10/08 20:24:57.177009 [29614]: Freeze priority 3
> 2012/10/08 20:24:57.177943 [29614]: server/ctdb_recover.c:724 Recovery mode
> set to ACTIVE
> 2012/10/08 20:24:57.178041 [29614]: server/ctdb_recover.c:1080
> startrecovery eventscript has been invoked
> 2012/10/08 20:24:57.322767 [recoverd:29685]: server/ctdb_recoverd.c:1470
> Recovery - updated flags
> 2012/10/08 20:24:57.323054 [recoverd:29685]: server/ctdb_recoverd.c:1514
> started transactions on all nodes
> 2012/10/08 20:24:57.323069 [recoverd:29685]: server/ctdb_recoverd.c:1527
> Recovery - starting database commits
> 2012/10/08 20:24:57.323120 [recoverd:29685]: server/ctdb_recoverd.c:1539
> Recovery - committed databases
> 2012/10/08 20:24:57.323224 [recoverd:29685]: server/ctdb_recoverd.c:1589
> Recovery - updated vnnmap
> 2012/10/08 20:24:57.323275 [recoverd:29685]: server/ctdb_recoverd.c:1598
> Recovery - updated recmaster
> 2012/10/08 20:24:57.323399 [recoverd:29685]: server/ctdb_recoverd.c:1615
> Recovery - updated flags
> 2012/10/08 20:24:57.323431 [29614]: server/ctdb_recover.c:724 Recovery mode
> set to NORMAL
> 2012/10/08 20:24:57.323455 [29614]: Thawing priority 1
> 2012/10/08 20:24:57.323465 [29614]: Release freeze handler for prio 1
> 2012/10/08 20:24:57.323494 [29614]: Thawing priority 2
> 2012/10/08 20:24:57.323504 [29614]: Release freeze handler for prio 2
> 2012/10/08 20:24:57.323517 [29614]: Thawing priority 3
> 2012/10/08 20:24:57.323527 [29614]: Release freeze handler for prio 3
> 2012/10/08 20:24:57.328926 [recoverd:29685]: server/ctdb_recoverd.c:1624
> Recovery - disabled recovery mode
> 2012/10/08 20:24:57.329037 [recoverd:29685]: Failed to find node to cover
> ip 10.100.55.198
> 2012/10/08 20:24:57.329161 [29614]: Forced running of eventscripts with
> arguments ipreallocated
> 2012/10/08 20:24:57.471796 [29614]: Recovery has finished
> 2012/10/08 20:24:57.577986 [recoverd:29685]: server/ctdb_recoverd.c:1650
> Recovery - finished the recovered event
> 2012/10/08 20:24:57.578053 [recoverd:29685]: server/ctdb_recoverd.c:1656
> Recovery complete
> 2012/10/08 20:24:57.578063 [recoverd:29685]: Resetting ban count to 0 for
> all nodes
> 2012/10/08 20:24:57.578071 [recoverd:29685]: Just finished a recovery. New
> recoveries will now be supressed for the rerecovery timeout (10 seconds)
> 2012/10/08 20:24:57.758349 [29614]: server/ctdb_monitor.c:257 wait for
> pending recoveries to end. Wait one more second.
>
> Thanks in advance,
> Gilbert
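
For context, CTDB takes and checks the recovery lock with a POSIX fcntl
byte-range lock, so the shared filesystem must provide coherent fcntl
locking across all nodes. A minimal sketch of the usual checks, assuming
the reclock path shown in the log above and a Red Hat-style sysconfig
location (both paths are examples; adjust them to the actual setup):

    # Where the recovery lock file is configured; commenting this out is
    # what disables the reclock check entirely.
    grep CTDB_RECOVERY_LOCK /etc/sysconfig/ctdb
    # e.g. CTDB_RECOVERY_LOCK="/mnt/gluster/ctdb/lock"

    # Coherence test for fcntl locking on the shared mount: run ping_pong
    # on all nodes at the same time against the same file, with the lock
    # count set to number-of-nodes + 1 (3 for a two-node cluster).
    ping_pong /mnt/gluster/ctdb/ping_pong.dat 3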

What version of CTDB are you using? Can you attach the log file from
the period where you see CTDB continuously going into recovery? It
would be useful to have the log files from all the nodes.
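
A quick way to gather that on each node is sketched below; the log
location and package query are assumptions (the default is
/var/log/log.ctdb unless CTDB_LOGFILE points elsewhere), so adjust them
to the actual installation:

    # CTDB version on this node
    ctdb version          # or: rpm -q ctdb / dpkg -l ctdb

    # Capture the recent daemon log covering the recovery loop
    tail -n 1000 /var/log/log.ctdb > ctdb-$(hostname).log

    # Optionally raise verbosity while reproducing the reclock failure
    ctdb setdebug DEBUG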

Thanks.

Amitay.

