[linux-cifs-client] Slow Folder Traversal on One system, Fast on Another
Jeff Layton
jlayton at samba.org
Fri Jan 22 07:32:57 MST 2010
On Thu, 21 Jan 2010 00:34:05 -0800
Tanthrix <tanthrix at gmail.com> wrote:
> Jeff Layton wrote:
> > It seems strange (and is probably a bug) that the older system doesn't
> > cause all of those QPathInfo calls. I suppose the ls on that system
> > might be able to use the d_type field in the dirent to determine a type
> > and decide that it doesn't need to stat() each entry. Or it could be a
> > bug and that older kernel didn't do an on-the-wire call for each
> > stat(). You might consider stracing both colorized 'ls' commands to
> > see what they're actually doing at a system call level.
> >
> > As far as preventing the "problem", it's hard for me to say, since it
> > all sort of depends on what the program is doing at the system call
> > level. I suspect that it too is probably doing a stat() on each
> > directory entry. In that case, there's little you can do to help
> > this. A stat() system call currently means at least one call out on
> > the wire per file.
> >
> > One thing you can try is turning off querying for server inode numbers
> > (mount with -o noserverino). It'll mean that you can't properly detect
> > hardlinks but it'll prevent one on-the-wire call per stat() syscall in
> > your case.
> Well, I just tried a few tests of -o noserverino on my big test share,
> as well as -o nounix just for fun, and again neither made any real
> dent in the time. At best it was faster by about 200 ms, but that's well
> within the margin of error of me pressing my stopwatch button.
>
> There is some good news, though, as I have managed to side-step the
> problem a bit. I picked up a gigabit switch today, figuring it
> certainly couldn't hurt. The results: going from my old
> 100 Mbit switch to the new 1000 Mbit switch dropped my test share from ~18
> seconds to 4-5 seconds, and the share I actually use on a daily basis is
> now down to about 2 seconds flat.
>
> For now, I think I can live with that, considering how much time I have
> already spent on this issue. Besides, now that I can transfer files
> between my two systems at 60 MB/s sustained, I may end up just moving
> all my media right onto the linux box in question and sharing it with my
> other linux machines via NFS, further making this a non-issue. So unless
> you would like me to do any experiments for your own curiosity, I think
> I'm going to consider this matter closed until the next time I've got
> some free time to kill.
>
> Thanks again for all your help, Jeff; it has been much appreciated.
>
No problem. I think what we probably need to do is establish some
better attribute caching in CIFS. Right now, we revalidate attributes
every second. That's probably a bit too aggressive, especially in this
case, where we've just fetched attributes for the inodes via the
FIND_FILE.
We have to be very cautious when doing that though. With any network
filesystem there's always a tradeoff between correctness and
performance and we have to take great care not to sacrifice too much of
either.
--
Jeff Layton <jlayton at samba.org>