[Samba] CTDB: About the setting of take cluster lock timeout time

tu.qiuping tu.qiuping at qq.com
Sun Jul 23 01:58:23 UTC 2023


My ctdb version is 4.17.7


Hello everyone.
My ctdb cluster is configured as follows:
[cluster]
recovery lock = !/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.admin meta_pool ctdb_reclock 4
node address = 100.100.40.131


[legacy]
realtime scheduling = false


[logging]
log level = NOTICE


When I power off one node in the cluster, taking the cluster lock times out. The log from one of the remaining nodes is as follows:
2023-07-21T10:02:46.753424+08:00 host-192-168-134-22 ctdbd[6751]: abcd:909::a8f8:8989:eeff:cd22:4379: node abcd:909::a8f8:8989:eeff:cd21:4379 is dead: 0 connected
2023-07-21T10:02:46.753519+08:00 host-192-168-134-22 ctdb-recoverd[8338]: ctdb_control error: 'node is disconnected'
2023-07-21T10:02:46.753598+08:00 host-192-168-134-22 ctdb-recoverd[8338]: ctdb_control error: 'node is disconnected'
2023-07-21T10:02:46.753619+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Async operation failed with ret=-1 res=-1 opcode=80
2023-07-21T10:02:46.753637+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Async wait failed - fail_count=1
2023-07-21T10:02:46.753652+08:00 host-192-168-134-22 ctdb-recoverd[8338]: ../../ctdb/server/ctdb_client.c:1558 Failed to read node capabilities.
2023-07-21T10:02:46.753667+08:00 host-192-168-134-22 ctdb-recoverd[8338]: ../../ctdb/server/ctdb_recoverd.c:456 Failed to get node capabilities
2023-07-21T10:02:46.753683+08:00 host-192-168-134-22 ctdb-recoverd[8338]: ../../ctdb/server/ctdb_recoverd.c:2640 Unable to update node capabilities.
2023-07-21T10:02:49.400062+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:02:49.400163+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:02:49.400282+08:00 host-192-168-134-22 ctdbd[6751]: Recovery mode set to ACTIVE
2023-07-21T10:02:49.400400+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Attempting to take cluster lock (!/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.admin pool ctdb_reclock 4)
2023-07-21T10:02:54.401302+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:02:54.401418+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:02:59.402798+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:02:59.402890+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:00.832008+08:00 host-192-168-134-22 ctdb-eventd[6753]: 64.lvs_nat_maps: Control GET_IFACES failed, ret=-1
2023-07-21T10:03:04.403350+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:04.403454+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:09.404295+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:09.404358+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:14.405346+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:14.405428+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:19.406774+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:19.406843+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:24.407414+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:24.407496+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:29.408295+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:29.408356+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:34.409306+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:34.409392+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:39.410757+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:39.410835+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:44.411290+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:44.411352+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:48.756449+08:00 host-192-168-134-22 ctdbd[6751]: Recovery daemon ping timeout. Count : 0
2023-07-21T10:03:49.412758+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:49.412829+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:54.413749+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:54.413822+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:03:59.414345+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:03:59.414426+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:04.415746+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:04.415810+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:09.416352+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:09.416434+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:14.417719+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:14.417804+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:19.418327+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:19.418388+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:24.419726+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:24.419822+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:29.420351+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:29.420431+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:34.421347+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:34.421428+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:39.422348+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:39.422437+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:44.423467+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:44.423531+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:48.757408+08:00 host-192-168-134-22 ctdbd[6751]: Recovery daemon ping timeout. Count : 1
2023-07-21T10:04:49.400713+08:00 host-192-168-134-22 ctdbd[6751]: ../../ctdb/server/ctdb_recover.c:609 Been in recovery mode for too long. Dropping all IPS
2023-07-21T10:04:49.401346+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Unable to take cluster lock - timeout
2023-07-21T10:04:49.423974+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Leader broadcast timeout
2023-07-21T10:04:49.424021+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Start election
2023-07-21T10:04:49.424120+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Attempting to take cluster lock (!/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.admin pool ctdb_reclock 4)
2023-07-21T10:04:49.647722+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Cluster lock taken successfully
2023-07-21T10:04:49.647814+08:00 host-192-168-134-22 ctdb-recoverd[8338]: Took cluster lock, leader=1


I think the 120-second timeout is too long. In the log above, the lock attempt starts at 10:02:49 and only fails with "Unable to take cluster lock - timeout" at 10:04:49. Even if the Ceph cluster recovers quickly, ctdb still has to wait out the full 120 seconds before it tries to take the lock again, which has a significant impact on our service.


So, if I set this timeout to 30s, will it have any negative effects or waste system resources? Or can I reduce the timeout by changing a variable in ctdb.tunables? Looking at the source code, the value appears to be hard-coded in the call h = ctdb_cluster_mutex(s, ctdb, ctdb->recovery_lock, 120, take_cluster_lock_handler, s, lost_cluster_lock_handler, rec).
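
For reference, here is the kind of local change I have in mind, as a rough sketch rather than a tested patch. Only the ctdb_cluster_mutex() call itself is quoted from the source; the surrounding code is omitted and the value 30 is just an example:

/* ctdb/server/ctdb_recoverd.c -- sketch of a local change, untested.
 * The fourth argument to ctdb_cluster_mutex() is the lock take timeout
 * in seconds, which is hard-coded to 120 in my 4.17.7 tree.
 */
h = ctdb_cluster_mutex(s, ctdb, ctdb->recovery_lock,
                       30, /* was 120 */
                       take_cluster_lock_handler, s,
                       lost_cluster_lock_handler, rec);

If there is a supported way to change this (a tunable or a config option), that would of course be preferable to carrying a local patch.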

