about active/active clustered nfs with ctdb

风无名 wuming_81 at 163.com
Thu Jan 28 10:00:02 UTC 2021


"In your scenario, is the filesystem on each LUN associated with a particular public IP address?"
Yes.


"It would be good if you could do this without modifying 10.interface. It would be better if you could do it by adding a new event script."
Thanks.
I am sorry, but I have another question.
Red Hat provides another solution:
https://www.linuxtechi.com/configure-nfs-server-clustering-pacemaker-centos-7-rhel-7/
They use pacemaker to build an active/passive NFS cluster. Its goal is very similar to mine.


If the cluster consists of just two nodes, we know that no correct algorithm for the consensus problem exists. Red Hat's pacemaker solution uses a fence device (a shared disk, for example an iSCSI LUN, can serve as the fencing device), so it may be correct.
But I have not found any documentation about fence devices and CTDB, so in theory my solution may not be correct for a two-node cluster.
I am very curious how CTDB tackles this problem, or whether it is not tackled at all.
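For reference, the shared-disk fencing in the pacemaker solution is typically registered with the fence_scsi agent, which uses SCSI persistent reservations on a shared LUN. A sketch (the resource name, node names, and multipath device path are my assumptions, not from this thread):

```shell
# Register a SCSI-reservation fencing device in pacemaker (sketch only;
# "node1 node2" and /dev/mapper/mpatha are placeholder assumptions).
pcs stonith create iscsi-fence fence_scsi \
    pcmk_host_list="node1 node2" \
    devices="/dev/mapper/mpatha" \
    meta provides=unfencing
```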


I would be glad to see any how-tos or documentation on CTDB's implementation/principles.
Sorry to bother you, and thanks for your reply.
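To make the discussion concrete, the "new event script" approach Martin suggested could look roughly like the sketch below for the modification I described in my quoted mail. The script name, the IP-to-device mapping, and the export paths are all placeholder assumptions; only the takeip/releaseip event names and their arguments come from CTDB's event-script interface.

```shell
#!/bin/sh
# Hypothetical new CTDB event script, e.g. /etc/ctdb/events/legacy/06.nfsshare.script
# (name and location are assumptions). CTDB runs legacy event scripts in
# lexical order as:  <script> <event> [args...]
# For public IP movement the relevant events are:
#   takeip    <interface> <ip-address> <netmask-bits>
#   releaseip <interface> <ip-address> <netmask-bits>
# A number below 10 means this runs before 10.interface adds the address
# on "takeip", matching step 1 (create the share before the IP appears).

# Placeholder mapping: each public IP has a LUN-backed filesystem and export.
add_share () {
    ip="$1"
    mount "/dev/mapper/lun-$ip" "/export/$ip" &&
        exportfs -o rw "*:/export/$ip"
}

drop_share () {
    ip="$1"
    exportfs -u "*:/export/$ip"
    umount "/export/$ip"
}

case "$1" in
takeip)    add_share "$3" ;;
releaseip) drop_share "$3" ;;
*)         : ;;                 # ignore monitor, startup, shutdown, ...
esac
```

Note that lexical ordering also applies on "releaseip", so deleting the share strictly *after* the address is gone would need a second, higher-numbered script; this sketch keeps both halves in one file only for brevity.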
At 2021-01-28 17:25:16, "Martin Schwenke" <martin at meltin.net> wrote:
>Hmmm.  Sorry, I might have read too quickly and misunderstood.  70.iscsi
>is only designed to run tgtd on nodes and export LUNs from public IP
>addresses. In your example the nodes are iSCSI clients, mounting a
>filesystem on the LUN and exporting it via NFS.  That is very different.
>
>Sorry for the confusion.
>
>In your scenario, is the filesystem on each LUN associated with a
>particular public IP address?
>
>It would be good if you could do this without modifying 10.interface.
>It would be better if you could do it by adding a new event script.
>
>peace & happiness,
>martin
>
>On Thu, 28 Jan 2021 09:55:29 +0800 (CST), 风无名 <wuming_81 at 163.com>
>wrote:
>
>> martin, thanks for your reply.
>> No, I did not modify 70.iscsi. Maybe I need to gain a full understanding of it.
>> 
>> 
>> After many days of reading and debugging the source code of CTDB and its shell scripts, I found the key point in the script 10.interface.
>> My modification is:
>> 1. create the NFS share (mount the fs, modify /etc/exports, restart the NFS service, ...) before any public IP is added to an interface
>> 2. delete the corresponding NFS share after any public IP is removed from an interface
>> 
>> 
>> I tested many shutdown-reboot cycles (of nodes in a CTDB cluster), and the results matched my expectations.
>> I think I need more tests, and more scenario tests.


More information about the samba-technical mailing list