CIFS VFS posted

Urban Widmark urban at
Thu Jun 20 18:32:01 GMT 2002

On Thu, 20 Jun 2002, Steven French wrote:

> Good question - at some point I need to look at that.   Clearly the
> structure is quite a bit different between the two.  Most visibly the mount
> specification.   Over time function between the two will probably diverge
> quite a bit more.    I wanted to be much more aggressive in adding function
> and in design risks in the cifs vfs (ie more aggressive than I guessed we
> would be able to do in the smbfs which people rely on today to be stable)
> e.g. in adding function such as access control and Kerberos integration.

smbfs does the ActiveDirectory kerberos thing already, though I haven't
released the patches yet because of a silly mapping bug. It needs 32-bit
error codes and a flag in smbmount (and possibly a connection to the
kerberos tools/libs to refresh tickets for long-lived mounts).

I saw the mention of a future userspace helper; if you do that, why not
reuse samba code for other (non-performance-critical) things such as ...
smbmount is backwards, it should be mount that mounts smbfs and then smbfs
calls back to smbmount (smbconnect). I have some rough patches for that
too. Using "net use" to mount seems like a really bad idea to me, but
maybe that was just for getting connections?

smbfs used to have the connect code inside the kernel. I think someone was
tired of copying work from samba and fitting it into the kernel when the
current smbmount was done. That is a difference, but I'm not sure why it's
an improvement over "smbconnect".

To me the structure of the code isn't all that different. Although there
are some differences in what is supported, I find that to be more on the
level of where work has been done than some fundamental design difference.
Some examples:

If you look at the current in-kernel smbfs code it is totally single
threaded within one mountpoint (I think that dates back to when the kernel
itself was single threaded), and that alone makes it a lot different.

But if you take a look at the current work in progress you will see
something that resembles the demultiplex_thread. The smbfs variant is
called smbiod and is responsible for all network IO, including oplock
breaks, which is why it exists in the first place.

This also adds "alloc_request", which is similar to the allocations
smb_init() does, and "smb_rput", which matches the buf_release() calls.
That makes the locking requirements similar, and gives us the ability to
do multiple parallel smb requests.

smbfs supports mmap and does all I/O through the page cache. This limits
reads and writes to the page size (typ. 4k) but the readpage code allows
it to plug into the kernel's readahead code (with an async version that's
not been written yet). Readahead will allow merging requests into larger
reads.

Unless I'm missing something, cifs vfs does not support mmap or any
caching, and does reads and writes directly to the userspace buffer. But
to support mmap you need to implement readpage and friends, which I
believe will limit reads and writes to page-size blocks.

DFS support isn't available in smbfs. Although I haven't checked all the
details of the cifs vfs support it seems to use one mount for each
"referral" which is how I have thought about doing it in smbfs too.
(Is dfs working in this version of cifs vfs?)

cifs vfs stores the fileid in the file struct, making each local open do
one open on the server. smbfs stores the fileid in the inode and does one
open to serve all its local clients, counting the open/release calls
instead.

I believe this is also an old design decision in smbfs, but it's certainly
not unreasonable to change and it is under consideration, especially for
the fcntl locking support.

Having the server only see one file open may help when trying to make
smbfs behave like some programs expect when more than one program accesses
the same file locally and possibly with caching. It seems to complicate
things in other cases such as when you do fcntl locks.

ACLs, multi-user mounts, signing, quotas are all more or less unknowns to
me so I can't comment on what those would require of smbfs. Maybe there
are reasons why those would be a lot harder to support in smbfs than from
the current cifs vfs base.

In the 2.5 kernels you can temporarily break anything you want :), so
disrupting current users isn't that much of a problem. The only cost of
breaking things is all the questions it generates.
