Custom VFS Module Help

Petr Cervenka [cerw] cerw at nano.cz
Fri Aug 17 13:46:43 GMT 2007


From my Boss Adam:
>>
>> On Fri, Aug 17, 2007 at 03:34:57PM +0800, Petr Cervenka [cerw]
>> wrote:
>>>>>> So as I understand it, part of this would be SMB coding to
>>>>>> get the lock status etc., and then use KERNEL to do a real
>>>>>> FLOCK. Do we need to use kernel functions for that? The
>>>>>> files will be read/written only via SMB, with no other real
>>>>>> access to them. (I am not a real kernel hacker, OK :)
>>
>> Well, the KERNEL_FLOCK routine might be a bit mis-named in your
>> case. And it's a bit of an abuse of it.

I don't know about the KERNEL_FLOCK etc., because in reality each
server might hold locks on the local file, but those are not relevant to
the Samba server process. The Samba server process will decide whether a
lock is available or not based on the information in the SQL DB.

>>>>>>>> I'm pretty sure this can be made to work, my main
>>>>>>>> worries would be speed and robustness in case of any
>>>>>>>> failure in between.
>>>>>> The synchronization would be run periodically, once or
>>>>>> twice per hour, and all access to files (LOCK status)
>>>>>> would be logged to the MySQL server (for audit purposes as
>>>>>> well), so later we can just transfer the files we need to.
>>>>>> It's not real-time synchronization, but as long as both
>>>>>> sides can lock some files and use them without conflicting
>>>>>> with the others, it's a good solution.
>>
>> That's the difficult part. If you just want logging of files
>> changed, that's trivial. Just intercept the open call and log all
>> files that are opened for writing. Maybe even just the pwrite
>> call, but then you need to make sure you only catch the first
>> write. It's the locking that worries me. What are your semantics?
>> Do you have cross-site locking or not? What happens if site A
>> wants to open a file that site B has already modified? If you
>> don't have cross-site locking, then there's no point in doing
>> locking at all, you can just rely on logging.
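
For the logging side, something like the following would be enough: only
consider opens for writing, and record each file the first time it is
written. (Illustrative C only; the struct and function names here are
made up, not the real Samba VFS hooks.)

/*
 * Sketch of the "log only the first write" idea: remember per open file
 * whether a write has already been logged, and record the path once.
 */
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>

struct tracked_file {
    const char *path;     /* path as seen by the SMB client  */
    int  open_flags;      /* flags the file was opened with  */
    bool write_logged;    /* set after the first write       */
};

/* Called from the open hook: only files opened for writing matter. */
static bool opened_for_write(int flags)
{
    return (flags & O_ACCMODE) != O_RDONLY;
}

/* Called from the write hook; logs the path exactly once. */
static void log_first_write(struct tracked_file *f)
{
    if (!f->write_logged && opened_for_write(f->open_flags)) {
        /* In the real module this would go to the audit table in MySQL. */
        printf("AUDIT: first write to %s\n", f->path);
        f->write_logged = true;
    }
}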

As for the semantics: it should work the same way as if there was only
one server. In other words, the first person to request a read/write
lock gets it; the second person is denied a read/write lock but can be
granted a read-only lock.

If the connection between the sites is down, then a read/write lock
should always be denied, and a read only lock can be granted.
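
To pin that rule down, the grant decision could be as simple as this
sketch (the names are made up, and in reality the state would live in
the central DB rather than in a local struct):

#include <stdbool.h>

enum lock_request { LOCK_READ_ONLY, LOCK_READ_WRITE };

struct lock_state {
    bool rw_lock_held;   /* someone already holds the read/write lock */
    bool link_up;        /* connection to the primary site is alive   */
};

/* Returns true if the requested lock may be granted. */
static bool may_grant(const struct lock_state *s, enum lock_request req)
{
    if (req == LOCK_READ_WRITE) {
        /* Read/write needs the link up and nobody else holding it. */
        return s->link_up && !s->rw_lock_held;
    }
    /* Read-only access is always allowed, even while the link is down. */
    return true;
}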

One thing I would worry about is what happens if we have a read/write
lock on a file, the connection to the primary site goes down, and then
the user closes the file. We need to remember to clear the lock when the
link comes back up, as well as sync the file later if needed.
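
Roughly, I picture remembering those unlocks like this (just a sketch
with made-up names; a real module would persist the queue on disk so a
restart does not lose it):

#include <stdio.h>

#define MAX_PENDING 128

static char pending_unlock[MAX_PENDING][1024];
static int  pending_count;

/* Called on close while the link to the primary is down. */
static void queue_unlock(const char *path)
{
    if (pending_count < MAX_PENDING) {
        snprintf(pending_unlock[pending_count++], 1024, "%s", path);
    }
}

/* Called when the link comes back up: clear the stale locks and
 * schedule a sync of each file if needed. */
static void replay_pending_unlocks(void (*clear_remote_lock)(const char *))
{
    for (int i = 0; i < pending_count; i++) {
        clear_remote_lock(pending_unlock[i]);
    }
    pending_count = 0;
}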

BTW, if the file contents are different between the primary/secondary,
and a user requests to open the file (read only or read/write) then we
must update the local copy before granting access to the file.
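
That freshness check could be as simple as comparing size and mtime
against the primary before every open (a sketch only; fetch_from_primary()
is a made-up helper, and checksums might be safer than timestamps):

#include <stdbool.h>
#include <sys/types.h>
#include <sys/stat.h>

struct remote_meta {
    off_t  size;
    time_t mtime;
};

static bool local_copy_is_stale(const char *local_path,
                                const struct remote_meta *primary)
{
    struct stat st;

    if (stat(local_path, &st) != 0) {
        return true;                 /* no local copy yet */
    }
    return st.st_size != primary->size || st.st_mtime != primary->mtime;
}

/* Before any open: refresh the cached copy if the primary has changed. */
static void ensure_fresh(const char *local_path,
                         const struct remote_meta *primary,
                         void (*fetch_from_primary)(const char *))
{
    if (local_copy_is_stale(local_path, primary)) {
        fetch_from_primary(local_path);
    }
}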

>>>>>> Also, if there is someone out there who would take this
>>>>>> on as a paid job, please email me. We are located in Sydney.
>>
>> Well, SerNet (www.sernet.de) could do it, but right now I'm not
>> sure it can work the way you want it to work. Possibly I just
>> misunderstood it though.

From what I understand of SerNet, it only works with WINS, but we
need something that will work with the shares and files, not just the
Windows names.

One issue that will need to be resolved is how it will all work.
Originally I considered a MySQL DB at each end, with some sort of
replication, but I don't think that will work, because we need to ensure
that taking a lock is atomic. I think we will only have a single DB on
the master Samba server side, and the remote 'cache' will need to do its
locking against this DB directly. The problem there is the delay
(latency) of connecting to the DB and running possibly two or three
queries before being able to return the result; ideally, we want to
return the result much more quickly.
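
To illustrate why a single central DB keeps the lock atomic, here is a
sketch using the MySQL C API: one INSERT into a table keyed on the path
either succeeds (we now own the lock) or fails (someone else holds it),
so the grant decision is a single round trip. The table name and columns
are assumptions on my part, and real code must escape the path.

#include <mysql/mysql.h>
#include <stdbool.h>
#include <stdio.h>

/* Returns true if this site now owns the read/write lock on 'path'. */
static bool grab_rw_lock(MYSQL *conn, const char *path, const char *site)
{
    char sql[2048];

    /* A PRIMARY KEY on path makes the insert fail if the lock is taken.
     * Real code must escape 'path' (mysql_real_escape_string); omitted
     * here for brevity. */
    snprintf(sql, sizeof(sql),
             "INSERT INTO file_locks (path, owner, taken_at) "
             "VALUES ('%s', '%s', NOW())", path, site);

    return mysql_query(conn, sql) == 0;
}

static void release_rw_lock(MYSQL *conn, const char *path, const char *site)
{
    char sql[2048];

    snprintf(sql, sizeof(sql),
             "DELETE FROM file_locks WHERE path='%s' AND owner='%s'",
             path, site);
    mysql_query(conn, sql);
}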

I thought perhaps we could just take control of some of the API (i.e.
the lock/unlock portions of the SMB API) and leave the rest to operate
on the cache as if it were the master. Then when the lock API is called,
we simply send a lock request to the remote SMB server, and if we get
the lock, we pass the success back. Same on unlock. This removes the DB
from lock/unlock, but we still need some way to ensure the file content
is up to date before granting access. (Perhaps on unlock we could force
a sync of that file, but we still need to ensure the file is up to date
on lock, in case it was modified by a head office user.)
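
On the cache side I picture it roughly like this (a sketch only; the
forwarding calls are hypothetical helpers, not an existing Samba or SMB
client API):

#include <stdbool.h>

struct master_link {
    bool (*ask_master_for_lock)(const char *path);
    void (*tell_master_unlock)(const char *path);
    void (*pull_file_from_master)(const char *path);
    void (*push_file_to_master)(const char *path);
};

static bool cache_lock(struct master_link *m, const char *path)
{
    /* Ensure the cached copy is current before granting the lock,
     * in case the head office changed it in the meantime. */
    m->pull_file_from_master(path);
    return m->ask_master_for_lock(path);
}

static void cache_unlock(struct master_link *m, const char *path)
{
    /* Push local changes back so the head office copy is current,
     * then release the lock on the master. */
    m->push_file_to_master(path);
    m->tell_master_unlock(path);
}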

Does that help at all??

Regards,
Adam


