[Samba] CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?

Michael Adam obnox at samba.org
Mon Aug 3 07:39:31 MDT 2009


Hi,

CTDB is pretty ignorant of CMAN as such.
It just relies on a cluster file system, like GFS2.

So you should only start ctdbd once the cluster is up
and the gfs2 file system is mounted. I think you should
not start ctdbd as a cluster service managed by cman,
since ctdbd can itself be considered a cluster manager
for certain services (like samba). Apart from that,
ctdb should be considered pretty much independent of
the Red Hat cluster manager.
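One way to enforce that ordering is a small wrapper that
refuses to start ctdbd until the gfs2 mount is present.
This is just a sketch, not part of ctdb itself; the mount
point /smb-ctdb and the init script path are taken from
your config below:

```shell
#!/bin/sh
# Guard: only start ctdbd once the shared gfs2 filesystem is mounted.
# /smb-ctdb is the mount point from the cluster.conf/ctdb setup below.

is_gfs2_mounted() {
    # succeed only if a gfs2 filesystem is mounted at the given path
    awk -v m="$1" '$2 == m && $3 == "gfs2" { found = 1 } END { exit !found }' /proc/mounts
}

if is_gfs2_mounted /smb-ctdb; then
    /etc/init.d/ctdb start
else
    echo "gfs2 not mounted at /smb-ctdb; not starting ctdbd" >&2
fi
```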

CTDB needs a file in the cluster file system, the
recovery lock file. The location of this file (or of a
directory in which such a file can be created) should
be specified in the CTDB_RECOVERY_LOCK=... setting
in /etc/sysconfig/ctdb.
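For example, with the gfs2 filesystem mounted at
/smb-ctdb (as in your config below), the setting would
look like this (the file name is illustrative; on Debian
the file may be /etc/default/ctdb instead):

```shell
# /etc/sysconfig/ctdb (or /etc/default/ctdb on Debian)
# The recovery lock file must live on the shared gfs2 filesystem so
# that all nodes contend for the same lock; /smb-ctdb is the mount
# point of the gfs2 filesystem here.
CTDB_RECOVERY_LOCK="/smb-ctdb/.ctdb_locking"
```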

At a glance, your cluster.conf looks sane, but
I think manual fencing can be a real problem with
cman.

GPFS is very well tested with ctdb, and I think many
people are testing ctdb with gfs2. I have also heard
positive feedback from people using ctdb with GlusterFS
and Lustre (and recently with OCFS2).

You might want to join the #ctdb irc channel on freenode.
There are usually some people around with more expertise
in gfs2 than I have.

Cheers - Michael

Yauheni Labko wrote:
> Hi everybody,
> 
> I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not 
> understand some points.
> It is possible to run CTDB by defining it under the services section in 
> cluster.conf, but starting it on the second node shuts down the process on the 
> first one. My CTDB configuration implies 2 active-active nodes.
> 
> Does CTDB care if the node starts with clean_start="0" or clean_start="1"? man 
> fenced says this is a safe way, especially during startup, because it prevents 
> data corruption if a node was dead for some reason. From my understanding, 
> CTDB uses CMAN only as a "module" to get access to gfs/gfs2 partitions. Or 
> maybe it is better to look at GPFS and LustreFS?
> 
> Could anybody show the working configuration of cluster.conf for 
> CTDB+GFS2+CMAN? 
> 
> I used the following cluster.conf and ctdb configuration:
> 
> <?xml version="1.0"?>
> <cluster name="smb-cluster" config_version="8">
>   <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>   <cman expected_votes="1" two_node="1" cluster_id="101"/>
>   <clusternodes>
>     <clusternode name="smb01" votes="1" nodeid="1">
>       <fence>
>         <!-- Handle fencing manually -->
>         <method name="human">
>           <device name="human" nodename="smb01"/>
>         </method>
>       </fence>
>     </clusternode>
>     <clusternode name="smb02" votes="1" nodeid="2">
>       <fence>
>         <!-- Handle fencing manually -->
>         <method name="human">
>           <device name="human" nodename="smb02"/>
>         </method>
>       </fence>
>     </clusternode>
>   </clusternodes>
>   <fencedevices>
>     <!-- Define manual fencing -->
>     <fencedevice name="human" agent="fence_manual"/>
>     <!-- Define ilo fencing -->
>     <fencedevice name="ilo" agent="fence_ilo" login="admin" password="foo"/>
>   </fencedevices>
> </cluster>
> 
> # Options to ctdbd. This is read by /etc/init.d/ctdb
> CTDB_RECOVERY_LOCK="/smb-ctdb/.ctdb_locking"
> CTDB_PUBLIC_INTERFACE=eth2
> CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
> CTDB_MANAGES_SAMBA=yes
> CTDB_INIT_STYLE=ubuntu
> CTDB_NODES=/etc/ctdb/nodes
> CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh
> CTDB_DBDIR=/var/ctdb
> CTDB_DBDIR_PERSISTENT=/var/ctdb/persistent
> CTDB_SOCKET=/tmp/ctdb.socket
> CTDB_LOGFILE=/var/log/ctdb.log
> CTDB_DEBUGLEVEL=2
> 
> Yauheni Labko (Eugene Lobko)
> Junior System Administrator
> Chapdelaine & Co.
> (212)208-9150

