[Samba] CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?

Yauheni Labko yyl at chappy.com
Mon Aug 17 07:11:50 MDT 2009


Thank you, Michael. I tried OCFS2; its administration looks easier than
GFS's.

Yauheni Labko (Eugene Lobko)
Junior System Administrator
Chapdelaine & Co.
(212)208-9150

On Tuesday 11 August 2009 05:10:22 pm Michael Adam wrote:
> Yauheni Labko wrote:
> > Thank you for the answer, Michael.
> >
> > As far as I understood, clean_start="1" is absolutely OK for GFS/GFS2?
>
> Sorry, I am not an expert in GFS settings. (But read on...)
>
> > CTDB is not going to work without the Red Hat Cluster Manager. CMAN starts
> > dlm_controld and gfs_controld. ccsd handles node-to-node communication.
>
> Well, GFS needs the cman processes, so CTDB needs them, too.
> But CTDB only uses one lock file in the cluster file system.
> Apart from that, the CTDB daemons communicate with each other
> via tcp all on their own.
>
> > I think GPFS has a similar manager to CMAN. clean_start="1" is
> > the only setting that can provide the necessary access to the GFS/GFS2
> > partitions, as CTDB requires. Correct me if I'm wrong.
>
> Sorry again. CTDB is completely ignorant of GFS and
> CMAN configuration options. It only needs a cluster file system
> that supports POSIX fcntl() byte range locks. CTDB basically treats
> the file system as a black box.
> So CTDB does not care about the value of "clean_start" as such. Just make
> sure that you don't start ctdbd before the cman
> stuff is up and running and the GFS file system is mounted.
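>
> To enforce that ordering, a startup wrapper can simply check /proc/mounts
> before launching ctdbd. This is just a sketch of mine, not anything the
> ctdb package ships; the mount point and fs type match the config quoted
> below but are otherwise assumptions.

```python
# Hypothetical pre-start check (not part of ctdb): only start ctdbd once
# the cluster file system appears in a /proc/mounts-style listing.
def is_mounted(mounts_text, mountpoint, fstype="gfs2"):
    """Return True if mountpoint is mounted with the given fs type."""
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts format: device mountpoint fstype options dump pass
        if len(fields) >= 3 and fields[1] == mountpoint and fields[2] == fstype:
            return True
    return False

# Example against a made-up mounts listing (device name is invented):
sample = "/dev/mapper/vg-smb /smb-ctdb gfs2 rw 0 0"
print(is_mounted(sample, "/smb-ctdb"))   # True
print(is_mounted(sample, "/var/ctdb"))   # False
```

In a real init script one would read /proc/mounts, bail out if the check
fails, and only then invoke the ctdb init script.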
>
> > Btw, I thought OCFS2 was not ready to use with CTDB due to its lack of
> > some features. This was the primary reason why I started with GFS.
>
> OCFS2 was lacking support for POSIX fcntl byte range locks (which
> are required to run ctdb) until recently. But this has changed!
> I have not tried it myself, but I think Jim McDonough
> (jmcd at samba.org, I have added him to Cc) might be able to give
> you some details (versions and such).
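>
> A quick way to see what such a byte-range lock is (and to sanity-check
> a mounted file system) is a small fcntl test. This sketch is my own
> illustration, not a ctdb tool; I believe the ctdb sources ship a
> ping_pong utility that does a much more thorough coherence test.

```python
# Take and release a POSIX fcntl() byte-range lock -- the primitive CTDB
# needs from the cluster file system. Point the path at a file on the
# mounted GFS2/OCFS2 volume; here a temp file is used so it runs anywhere.
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # Exclusively lock bytes 0..9; raises OSError if the file system
    # does not support fcntl byte-range locks.
    fcntl.lockf(fd, fcntl.LOCK_EX, 10, 0)
    print("byte-range lock acquired")
    fcntl.lockf(fd, fcntl.LOCK_UN, 10, 0)
finally:
    os.close(fd)
    os.unlink(path)
```

On a local file system this always succeeds; the interesting case is
running it (from two nodes) against the shared volume.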
>
> > I left manual fencing for testing only. I was going to use iLO in
> > production.
>
> OK.
>
> Hope this somewhat helps... :-)
>
> Cheers - Michael
>
> > Yauheni Labko (Eugene Lobko)
> > Junior System Administrator
> > Chapdelaine & Co.
> > (212)208-9150
> >
> > > CTDB is pretty ignorant of CMAN as such.
> > > It just relies on a cluster file system, like GFS2.
> > >
> > >
> > > So you should only start ctdbd when the cluster is up
> > > and the gfs2 file system is mounted. I think you should
> > > not start ctdbd as a cluster service managed by cman,
> > > since ctdbd can be considered a cluster manager for
> > > certain services (like samba...) itself. Apart from
> > > that, ctdb should be considered pretty much independent
> > > of the red hat cluster manager.
> > >
> > > CTDB needs a file in the cluster file system, the
> > > recovery lock file. The location of this file (or a
> > > directory, in which such a file can be created) should
> > > be specified in the CTDB_RECOVERY_LOCK=... setting
> > > in /etc/sysconfig/ctdb.
> > >
> > > At a glance, your cluster.conf looks sane, but
> > > I think manual fencing can be a real problem with
> > > cman.
> > >
> > > GPFS is very well tested with ctdb.
> > > I think there are many people testing ctdb with gfs2.
> > > I have heard positive feedback of people using ctdb
> > > with GlusterFS and lustre (and recently with ocfs2).
> > >
> > > You might want to join the #ctdb irc channel on freenode.
> > > There are usually some people around with more expertise
> > > in gfs2 than me.
> > >
> > > Cheers - Michael
> > >
> > > Yauheni Labko wrote:
> > > > Hi everybody,
> > > >
> > > > I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
> > > > understand some points.
> > > > It is possible to run CTDB by defining it under the services section in
> > > > cluster.conf, but running it on the second node shuts down the process
> > > > on the first one. My CTDB configuration implies 2 active-active
> > > > nodes.
> > > >
> > > > Does CTDB care if the node starts with clean_start="0" or
> > > > clean_start="1"? man fenced says clean_start="0" is the safe way,
> > > > especially during startup, because it prevents data corruption if a
> > > > node died for some reason. From my understanding, CTDB uses CMAN only
> > > > as a "module" to get access to gfs/gfs2 partitions. Or maybe it is
> > > > better to look at GPFS and LustreFS?
> > > >
> > > > Could anybody show the working configuration of cluster.conf for
> > > > CTDB+GFS2+CMAN?
> > > >
> > > > I used the following cluster.conf and ctdb config:
> > > >
> > > > <?xml version="1.0"?>
> > > > <cluster name="smb-cluster" config_version="8">
> > > >   <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
> > > >   <cman expected_votes="1" two_node="1"/>
> > > >   <cman cluster_id="101"/>
> > > >   <clusternodes>
> > > >     <clusternode name="smb01" votes="1" nodeid="1">
> > > >       <fence>
> > > >         <!-- Handle fencing manually -->
> > > >         <method name="human">
> > > >           <device name="human" nodename="smb01"/>
> > > >         </method>
> > > >       </fence>
> > > >     </clusternode>
> > > >     <clusternode name="smb02" votes="1" nodeid="2">
> > > >       <fence>
> > > >         <!-- Handle fencing manually -->
> > > >         <method name="human">
> > > >           <device name="human" nodename="smb02"/>
> > > >         </method>
> > > >       </fence>
> > > >     </clusternode>
> > > >   </clusternodes>
> > > >   <fencedevices>
> > > >     <!-- Define manual fencing -->
> > > >     <fencedevice name="human" agent="fence_manual"/>
> > > >     <!-- Define ilo fencing -->
> > > >     <fencedevice name="ilo" agent="fence_ilo" login="admin" password="foo"/>
> > > >   </fencedevices>
> > > > </cluster>
> > > >
> > > > # Options to ctdbd. This is read by /etc/init.d/ctdb
> > > > CTDB_RECOVERY_LOCK="/smb-ctdb/.ctdb_locking"
> > > > CTDB_PUBLIC_INTERFACE=eth2
> > > > CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
> > > > CTDB_MANAGES_SAMBA=yes
> > > > CTDB_INIT_STYLE=ubuntu
> > > > CTDB_NODES=/etc/ctdb/nodes
> > > > CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh
> > > > CTDB_DBDIR=/var/ctdb
> > > > CTDB_DBDIR_PERSISTENT=/var/ctdb/persistent
> > > > CTDB_SOCKET=/tmp/ctdb.socket
> > > > CTDB_LOGFILE=/var/log/ctdb.log
> > > > CTDB_DEBUGLEVEL=2
> > > >
> > > > Yauheni Labko (Eugene Lobko)
> > > > Junior System Administrator
> > > > Chapdelaine & Co.
> > > > (212)208-9150
> >
