Setting up CTDB on OCFS2 and VMs ...

Richard Sharpe realrichardsharpe at gmail.com
Mon Dec 8 15:10:07 MST 2014


On Sat, Dec 6, 2014 at 7:33 AM, Rowland Penny <repenny241155 at gmail.com> wrote:
> On 06/12/14 15:24, Richard Sharpe wrote:
>>
>> On Sat, Dec 6, 2014 at 2:58 AM, Rowland Penny <repenny241155 at gmail.com>
>> wrote:
>>>
>>> On 04/12/14 18:08, Richard Sharpe wrote:
>>>>
>>>> Hi folks,
>>>>
>>>> Here are the steps I used, as far as I can remember them. Please
>>>> excuse any mistakes and be prepared to think for yourself when
>>>> following them.
>>>>
>>>> 1. Create two VirtualBox VMs with enough memory and disk for your
>>>> Linux Distro. I used CentOS 6.6 with 4GB and 20GB. (I actually
>>>> installed CentOS 6.3 and upgraded because I had the ISO handy.) You
>>>> will also need an extra interface on each VM for the clustering
>>>> private network. I set them to an internal type.
>>>>
>>>> 2. Because you will need a shared disk, create one:
>>>>
>>>> vboxmanage createhd --filename ~/VirtualBox\ VMs/SharedHD1 \
>>>>     --size 10240 --variant Fixed --format VDI   # Creates a 10GB fixed-size disk
>>>> vboxmanage modifyhd 22ae1fcc-fda7-4e42-be9f-3b8bd7fc0c0e \
>>>>     --type shareable   # Make it shareable.
>>>>
>>>> Note, for that second command use the UUID for your disk which you can
>>>> find with:
>>>>
>>>> vboxmanage list hdds --brief
>>>>
>>>> Also, use the GUI to add the shared disk to both VMs.
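>>>>
>>>> If you prefer not to use the GUI, something like the following should
>>>> also work (the VM names ocfs2-1/ocfs2-2 and the controller name "SATA"
>>>> are only examples; check yours with "vboxmanage showvminfo <vm>"):
>>>>
>>>> vboxmanage storageattach ocfs2-1 --storagectl SATA --port 1 \
>>>>     --device 0 --type hdd --medium ~/VirtualBox\ VMs/SharedHD1.vdi
>>>> vboxmanage storageattach ocfs2-2 --storagectl SATA --port 1 \
>>>>     --device 0 --type hdd --medium ~/VirtualBox\ VMs/SharedHD1.vdi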
>>>>
>>>> 3. Install the OS on each of the VMs.
>>>>
>>>> 4. I installed a bunch of clustering RPMs next:
>>>>
>>>> yum install openais corosync pacemaker-libs pacemaker-libs-devel gcc \
>>>>     corosync-devel openais-devel rpm-build e2fsprogs-devel libuuid-devel \
>>>>     git pygtk2 python-devel readline-devel clusterlib-devel redhat-lsb \
>>>>     sqlite-devel gnutls-devel byacc flex nss-devel
>>>>
>>>> It is not clear to me that all of these were needed (openais, for example).
>>>>
>>>> 5. Next I installed Oracle's UEK kernel and ocfs2-tools:
>>>>
>>>> wget http://public-yum.oracle.com/public-yum-ol6.repo \
>>>>     -O /etc/yum.repos.d/public-yum-ol6.repo
>>>> wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 \
>>>>     -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
>>>> yum install kernel-uek kernel-uek-devel
>>>> yum install ocfs2-tools
>>>> yum install openaislib-devel corosync-devel
>>>> # It is not clear that I needed to install the first of those two.
>>>>
>>>> echo 'KERNEL=="ocfs2_control", NAME="misc/ocfs2_control", MODE="0660"' \
>>>>     > /etc/udev/rules.d/99-ocfs2_control.rules
>>>>
>>>> reboot # on each
>>>>
>>>> 6. Configure cman and pacemaker
>>>>
>>>> # configure corosync first
>>>> cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
>>>>
>>>> # Make sure that bindnetaddr is defined and points to your private
>>>> # network. I set it to 192.168.122.0
>>>>
>>>> # Make sure that the mcastaddr is defined. I used 239.255.1.1
>>>> # make sure that the mcastport is defined. I used 5405
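>>>>
>>>> # For reference, the totem/interface section should end up looking
>>>> # roughly like this (corosync 1.x syntax; adjust the addresses for
>>>> # your own private network):
>>>>
>>>>       totem {
>>>>               version: 2
>>>>               secauth: off
>>>>               interface {
>>>>                       ringnumber: 0
>>>>                       bindnetaddr: 192.168.122.0
>>>>                       mcastaddr: 239.255.1.1
>>>>                       mcastport: 5405
>>>>               }
>>>>       }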
>>>>
>>>> # Copy that file to the other node.
>>>> scp /etc/corosync/corosync.conf root@172.16.170.6:/etc/corosync
>>>>
>>>> /etc/init.d/pacemaker stop  # Stop these in case they were running
>>>> /etc/init.d/corosync stop # Same here
>>>>
>>>> yum install ccs pcs
>>>>
>>>> # Create a cluster
>>>>
>>>> ccs -f /etc/cluster/cluster.conf --createcluster ctdbdemo
>>>> ccs -f /etc/cluster/cluster.conf --addnode ocfs2-1
>>>> ccs -f /etc/cluster/cluster.conf --addnode ocfs2-2
>>>> ccs -f /etc/cluster/cluster.conf --addfencedev pcmk agent=fence_pcmk
>>>> ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect ocfs2-1
>>>> ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect ocfs2-2
>>>> ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk ocfs2-1 \
>>>>     pcmk-redirect port=ocfs2-1
>>>> ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk ocfs2-2 \
>>>>     pcmk-redirect port=ocfs2-2
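>>>>
>>>> # The resulting /etc/cluster/cluster.conf should look roughly like
>>>> # this (the config_version number will differ):
>>>>
>>>>       <cluster config_version="8" name="ctdbdemo">
>>>>         <clusternodes>
>>>>           <clusternode name="ocfs2-1" nodeid="1">
>>>>             <fence>
>>>>               <method name="pcmk-redirect">
>>>>                 <device name="pcmk" port="ocfs2-1"/>
>>>>               </method>
>>>>             </fence>
>>>>           </clusternode>
>>>>           <clusternode name="ocfs2-2" nodeid="2">
>>>>             <fence>
>>>>               <method name="pcmk-redirect">
>>>>                 <device name="pcmk" port="ocfs2-2"/>
>>>>               </method>
>>>>             </fence>
>>>>           </clusternode>
>>>>         </clusternodes>
>>>>         <fencedevices>
>>>>           <fencedevice agent="fence_pcmk" name="pcmk"/>
>>>>         </fencedevices>
>>>>       </cluster>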
>>>>
>>>> # Copy the cluster config file to the other node:
>>>> scp /etc/cluster/cluster.conf root@172.16.170.6:/etc/cluster
>>>>
>>>> # Now turn off NetworkManager:
>>>> chkconfig NetworkManager off
>>>> service NetworkManager stop
>>>>
>>>> # now start the cluster
>>>> service cman start
>>>> pcs property set stonith-enabled=false
>>>> service pacemaker start
>>>>
>>>> # Also start it on the other node(s).
>>>>
>>>> # Now check the status:
>>>>       [root@ocfs2-1 ~]# crm_mon -1
>>>>       Last updated: Thu Dec  4 09:40:16 2014
>>>>       Last change: Tue Dec  2 10:12:50 2014
>>>>       Stack: cman
>>>>       Current DC: ocfs2-2 - partition with quorum
>>>>       Version: 1.1.11-97629de
>>>>       2 Nodes configured
>>>>       0 Resources configured
>>>>
>>>>
>>>>       Online: [ ocfs2-1 ocfs2-2 ]
>>>>
>>>> If you do not see all the other nodes online, then you have to debug
>>>> the problem.
>>>>
>>>> These are essentially the steps from here:
>>>> http://clusterlabs.org/quickstart-redhat.html
>>>>
>>>> 7. Configure the Oracle cluster
>>>>
>>>> o2cb add-cluster ctdbdemo
>>>> o2cb add-node --ip 192.168.122.10 --port 7777 --number 1 ctdbdemo ocfs2-1
>>>> o2cb add-node --ip 192.168.122.11 --port 7777 --number 2 ctdbdemo ocfs2-2
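>>>>
>>>> # Those commands populate /etc/ocfs2/cluster.conf, which should end
>>>> # up looking roughly like this:
>>>>
>>>>       cluster:
>>>>               name = ctdbdemo
>>>>               node_count = 2
>>>>
>>>>       node:
>>>>               name = ocfs2-1
>>>>               cluster = ctdbdemo
>>>>               number = 1
>>>>               ip_address = 192.168.122.10
>>>>               ip_port = 7777
>>>>
>>>>       node:
>>>>               name = ocfs2-2
>>>>               cluster = ctdbdemo
>>>>               number = 2
>>>>               ip_address = 192.168.122.11
>>>>               ip_port = 7777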
>>>>
>>>> service o2cb configure
>>>> # This step will fail, claiming that it can't find
>>>> # /sbin/ocfs2_controld.cman. However, it does the important stuff.
>>>> #
>>>> # NOTE: during the configuration steps you MUST SELECT cman AS THE
>>>> # CLUSTER STACK!
>>>>
>>>> 8. Clone and build ocfs2-tools from its git repo
>>>>
>>>> git clone git://oss.oracle.com/git/ocfs2-tools.git ocfs2-tools
>>>>
>>>> # install stuff needed
>>>> yum install libaio libaio-devel
>>>> yum install pacemaker-libs-devel
>>>>
>>>> # Now build
>>>> cd ocfs2-tools
>>>> ./configure
>>>> make
>>>>
>>>> # This will likely fail. If it first fails complaining about
>>>> # xml/tree.h, then you can do the following:
>>>> CPPFLAGS='-I/usr/include/libxml2' ./configure
>>>> make
>>>>
>>>> # The build might fail again, complaining about some AIS include files
>>>> # that are no longer in the installed packages. That is OK; it should
>>>> # still have built ocfs2_controld.cman, so copy it to where it needs
>>>> # to be:
>>>>
>>>> cp ocfs2_controld.cman /usr/sbin/
>>>> scp ocfs2_controld.cman root@172.16.170.6:/usr/sbin/
>>>>
>>>> # Now stop those two you started and start everything:
>>>>
>>>> service pacemaker stop
>>>> service cman stop
>>>>
>>>> service cman start
>>>> service o2cb start
>>>> service pacemaker start
>>>>
>>>> 9. Create the shared file system on one node:
>>>>
>>>> mkfs.ocfs2 -L CTDBdemocommon --cluster-name ctdbdemo \
>>>>     --cluster-stack ocfs2 -N 4 /dev/sdb
>>>>
>>>> 10. Mount it on both nodes and ensure that you can create files/dirs on one
>>>> node and see them on the other node.
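>>>>
>>>> # For example (the mount point /mnt/ctdb is just a suggestion):
>>>>
>>>> mkdir -p /mnt/ctdb
>>>> mount -t ocfs2 /dev/sdb /mnt/ctdb    # run on both nodes
>>>> touch /mnt/ctdb/hello                # on node 1
>>>> ls -l /mnt/ctdb                      # on node 2; "hello" should show up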
>>>>
>>>> 11. Install ctdb and samba
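>>>>
>>>> # On CentOS that is roughly the following (exact package names vary
>>>> # between distros and repos):
>>>>
>>>> yum install ctdb samba samba-client samba-winbind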
>>>>
>>>> 12. Configure samba for the domain you want to join
>>>>
>>>> # Make sure you have clustering = yes and the other things you need.
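>>>>
>>>> # A minimal clustered [global] section might look something like this
>>>> # (SAMDOM / SAMDOM.EXAMPLE.COM are placeholders for your own domain):
>>>>
>>>>       [global]
>>>>               clustering = yes
>>>>               security = ads
>>>>               workgroup = SAMDOM
>>>>               realm = SAMDOM.EXAMPLE.COM
>>>>               netbios name = CTDBDEMO
>>>>               idmap config * : backend = tdb2
>>>>               idmap config * : range = 1000000-1999999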
>>>>
>>>> 13. Configure ctdb (/etc/sysconfig/ctdb) and make sure that you
>>>> disable winbindd for now
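>>>>
>>>> # As a rough example (the recovery lock just needs to live somewhere
>>>> # on the shared OCFS2 file system):
>>>>
>>>>       CTDB_RECOVERY_LOCK=/mnt/ctdb/.ctdb.lock
>>>>       CTDB_NODES=/etc/ctdb/nodes
>>>>       CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
>>>>       CTDB_MANAGES_SAMBA=yes
>>>>       CTDB_MANAGES_WINBIND=no
>>>>
>>>> # /etc/ctdb/nodes just lists the private addresses, one per line:
>>>>
>>>>       192.168.122.10
>>>>       192.168.122.11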
>>>>
>>>> 14. Start ctdb on all nodes
>>>>
>>>> # You must have ctdb started so that the secrets file will get
>>>> distributed
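>>>>
>>>> service ctdb start   # on every node
>>>> ctdb status          # wait until all nodes show OK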
>>>>
>>>> 15. Join the domain
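>>>>
>>>> # For an AD domain, something like this on one node:
>>>>
>>>> net ads join -U Administrator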
>>>>
>>>> 16. Enable winbindd in the ctdb config
>>>>
>>>> 17. Restart ctdb on all nodes
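>>>>
>>>> # That is, set CTDB_MANAGES_WINBIND=yes in /etc/sysconfig/ctdb on
>>>> # every node, then:
>>>>
>>>> service ctdb restart   # on every node
>>>> ctdb status            # check that all nodes come back OK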
>>>>
>>>> At this point you should be done. The steps you need might vary.
>>>>
>>>> I have limited time to help you with this.
>>>>
>>> OK, I have followed Richard's 'howto', but using Debian 7.7 instead of
>>> CentOS and with the standard Debian kernel. I have got up to step 10
>>> (mounting the shared file system) after a bit of a battle, and it all
>>> started so well. :-)
>>>
>>> Most of the required packages are available from the repos:
>>>
>>> apt-get install openais corosync pacemaker cman ocfs2-tools-cman
>>> ocfs2-tools-pacemaker
>>>
>>> Unfortunately, it turned out that pacemaker is not built to use the cman
>>> stack, so I had to rebuild it
>>>
>>> Next problem: ccs and pcs are not available, so I had to download &
>>> build them, though even this was not without problems; ccs put
>>> 'empty_cluster.conf' in the wrong place and pcs is hardwired to use
>>> /usr/libexec.
>>>
>>> Next problem: 'o2cb' appears to be called 'o2cb_ctl' on Debian.
>>>
>>> Started cman, o2cb and pacemaker (first time round, this is where I found
>>> that pacemaker wouldn't work with cman)
>>>
>>> I then created the shared file system and mounted it on both nodes.
>>
>> OK, looks like you got the real stuff done.
>>
>>> At this point I have a shared cluster, but in a way that I cannot see
>>> any sane sysadmin using. Most of the software is heavily modified or
>>> not available from the distro's repos. I am going to have to stop and
>>> think about this and see if there is a Debian way of doing it, without
>>> modifying anything or using anything that is not available from a repo.
>
>
> OK, I am getting closer :-)
>
> I have got it working with just packages available from the Debian repos,
> apart from 'ccs'. Once I find a replacement for that, I will move on to
> ctdb & samba.

I have made some progress on getting ocfs2-tools to build under CentOS
6.6 as well. There is still at least one build problem to resolve.

-- 
Regards,
Richard Sharpe
(What can dispel my sorrow? Only Du Kang's wine. -- Cao Cao)

