don_mccall at hp.com
Mon Mar 26 20:51:11 GMT 2001
I like Chris's idea of a 'wrapper' fs. I supported Advanced Server for Unix
for a long time (still do), and the way it implements NTFS emulation is via
a database implementation using 'blob' (binary large object) files. The
support difficulties we hit most often were:
1. as ntfs permission use grew, the database got larger, slower, and more
prone to corruption. Since any of the daemons could be writing to the
database on behalf of a client, a daemon crash could corrupt the db.
2. the 'syncing' of the db when file removal was done from the OS (so the
db might have an entry for a file that no longer actually existed).
3. backup and restore from the unix side always lost ntfs
permission/ownership info; same with 'mv' commands, etc. To ensure that you
had a 'good' backup, you had to stop AS/U, so that the acl database exactly
matched the filesystems being backed up; then if you had to do a restore,
you'd be sure to get your acl info matched up.
4. large directories (many files) tended to be susceptible to hashing
collisions, or very slow access if the hashing algorithm was sufficiently
complex to ensure collision avoidance.
AS/U had a number of very specific 'maintenance tools' to 'prune',
'repair', and enumerate this acl database after users were deleted or
files/directories were deleted/moved (outside AS/U), etc.; but they WEREN'T
foolproof.
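The 'prune' pass those tools performed can be sketched in a few lines. This is purely illustrative (a plain dict stands in for the blob/tdb database, and `prune_stale_acl_entries` is an invented name, not a real AS/U or Samba API): walk the ACL db and drop every entry whose file no longer exists on the native filesystem, which is exactly the drift described in item 2 above.

```python
# Hypothetical sketch of an ACL-database "prune" repair pass.
# acl_db maps a path (relative to some root) to its stored ACL blob;
# a dict stands in for the real blob/tdb database.
import os

def prune_stale_acl_entries(acl_db, root):
    """Remove ACL records for paths that no longer exist under root."""
    stale = [path for path in acl_db
             if not os.path.lexists(os.path.join(root, path))]
    for path in stale:
        del acl_db[path]
    return stale  # report what was pruned, as a repair tool would log
```

Of course, this only repairs the damage after the fact; between runs the db and the filesystem can still disagree, which is the whole problem.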
What we always wished for was a 'true' NTFS fs type that we could run on top
of, where the ntfs permission/ownership, etc, was actually kept in the
native OS with the file itself; so when a file was removed/manipulated, ONLY
that inode was at risk.
NetWare for Unix ran into many of the SAME issues once they started moving
towards a single 'inode database' per 'netware volume' on Unix to keep
Novell file/directory permissions that couldn't be represented on the
native filesystem.
I guess I'm thinking that a wrapper fs, where both Samba and the OS accessed
the files thru the same mechanism would do a lot to address concerns #2 & #3
above. Many of us who are sharing out Unix files using Samba, AS/U, etc.
are doing so because we access these files
both from Unix and PC's. It sure would be nice to have both accesses
governed by the same 'virtual' filesystem....
In addition, avoiding one single LARGE db implementation to cover all the
shares on a samba server might be a good idea; perhaps a 'per share' ntfs
db, so you might have one .tdb in a central location like
/usr/local/samba/private... for each share - keeps them individually
smaller and less susceptible to corruption? In addition to being able to
cache more RELEVANT info per smbd; a tree connect&X could initiate a cache
of some of the info from the .tdb for that share, without having to go
searching thru a huge db with info about shares/fs'es it doesn't care about
at that point. The issue this brings to mind is 'shares within shares' - if
you are working in shareA\dir1\dir2, which can also be reached thru another
share point shareB\dirA\dirB\shareA\dir1\dir2, do you respect the ntfs info
all the way back to the shareB db (complicated), or consider the sharepoint
to be a virtual 'root' for the ntfs fs when entered? IMHO, trying to
keep ntfs info beyond the sharepoint is a lot of work for not much gain.
It's not like we are REALLY an NT fileserver where you could log on to the
console and DIRECTLY manipulate ntfs filesystem permissions OUTSIDE smb/cifs
structured network calls.
So your 'entrypoint' to a given file is always going to be a sharename...
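The per-share-db-with-virtual-root idea could look something like the sketch below. All of this is invented for illustration (dicts stand in for per-share .tdb files; `PerShareAclStore` is not a real Samba interface): ACL records are keyed by the path *relative to the sharepoint*, so the same directory reached through two different shares is resolved against each share's own small database, with no walk back past the share root.

```python
# Illustrative sketch of per-share ACL databases where the sharepoint
# acts as a virtual root for all ACL lookups.

class PerShareAclStore:
    def __init__(self):
        self._dbs = {}          # share name -> that share's own small "db"

    def _db(self, share):
        return self._dbs.setdefault(share, {})

    def set_acl(self, share, relpath, acl):
        # Key by path relative to the sharepoint, never the full OS path.
        self._db(share)[relpath] = acl

    def get_acl(self, share, relpath):
        # The sharepoint is the virtual root: no lookup ever crosses
        # into another share's database.
        return self._db(share).get(relpath)
```

A tree connect for shareA would then only ever touch shareA's db, which is what keeps each one small and the cached info per smbd relevant.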
Anyway, some things to think about.
I hear Veritas is getting ready to market (or already has?) a product that
delivers an enhanced installable FS that keeps the ntfs metadata IN the FS
itself, including a modified version of Samba to access it, but I don't
know a lot about it at the moment.
Wouldn't it be nice if everyone SHARED....
My 2 cents worth.
From: Christopher R. Hertel [mailto:crh at nts.umn.edu]
Sent: Monday, March 26, 2001 1:32 PM
To: Jeremy Allison
Cc: Mayers, Philip J; 'samba-technical at samba.org'
Subject: Re: ACL database
Just my 2cents to the discussion.
When I got back from Connectathon I suggested that this sort of thing
would work very well on appliance servers. That is, systems with
controlled access to the underlying filesystem so that all file I/O would
need to come through the Samba VFS layer, thus preventing problems keeping
the database in sync with the actual filesystem. JF talked about picking
up the idea.
The other thought that I presented, one that might prove simpler than
running a daemon to keep the database in sync, was to create a "wrapper"
filesystem that used the native filesystem plus the kind of database
you've described. The OS would actually mount the wrapper filesystem,
thus forcing all file access to go through the ACL database system, but
the wrapper would access the underlying system.
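A toy model of that wrapper idea, with invented names (a real wrapper would live at the kernel/VFS layer, not in Python): every operation goes through one layer that updates the native filesystem and the ACL database together, so removing a file can never leave a stale ACL record behind.

```python
# Toy sketch of a "wrapper" filesystem: the native fs and the ACL
# database are updated on the same code path, so they can't drift.
import os

class AclWrapperFs:
    def __init__(self, root):
        self.root = root
        self.acl_db = {}        # stand-in for the ACL database

    def _full(self, relpath):
        return os.path.join(self.root, relpath)

    def create(self, relpath, acl):
        open(self._full(relpath), "w").close()
        self.acl_db[relpath] = acl          # two updates, one code path

    def unlink(self, relpath):
        os.unlink(self._full(relpath))
        self.acl_db.pop(relpath, None)      # no stale entry left behind
```

The point is simply that if the OS mounts the wrapper, there is no second path to the files that could bypass the bookkeeping.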
There are a lot of ways that this could be done. It is just an idea, but
it might prove easier and more reliable than running a daemon. Needs more
thought, though.
> "Mayers, Philip J" wrote:
> > I suggested this a *long* time ago (couple of years, before the VFS
> > IIRC) and was shot down in flames, for ease-of-use, engineering and
> > performance reasons. I'll be very interested to see if this works.
> Yes, this was before we had tdb, vfs and a very easy way to store external
> data. It'll be slower, and we definitely don't want it to be the only
> permission check due to security concerns. But thinking has changed
> along these lines since it was first proposed :-) :-).
> Buying an operating system without source is like buying
> a self-assembly Space Shuttle with no instructions.
Christopher R. Hertel -)----- University of Minnesota
crh at nts.umn.edu Networking and Telecommunications Services
Ideals are like stars; you will not succeed in touching them
with your hands...you choose them as your guides, and following
them you will reach your destiny. --Carl Schultz