[linux-cifs-client] [PATCH 0/4] cifs: fix "Busy inodes after umount" issues (RFC)

Jeff Layton jlayton at redhat.com
Mon May 24 04:52:42 MDT 2010


On Mon, 24 May 2010 12:43:52 +0530
Suresh Jayaraman <sjayaraman at suse.de> wrote:

> On 05/21/2010 11:55 PM, Jeff Layton wrote:
> > We've had a spate of "Busy inodes after umount" problems in recent
> > kernels. With the help of a reproducer that Suresh provided, I tracked
> > down the cause of one of these and wrote it up here:
> > 
> >     https://bugzilla.samba.org/show_bug.cgi?id=7433
> > 
> > The main problem is that CIFS opens files during create operations,
> > puts these files on a list and then expects that a subsequent open
> > operation will find those on the list and make them full, productive
> > members of society.
> > 
> > This expectation is wrong, however. There's no guarantee that cifs_open
> > will be called at all after a create. There are several scenarios that
> > can prevent it from occurring. When this happens, these entries get left
> > dangling on the list and nothing will ever clean them up. Recent changes
> > have made it so that cifsFileInfo structs hold an inode reference as
> > well, which is what actually leads to the busy inodes after umount
> > problems.
> > 
> > This patch is intended to fix this in the right way. It has the create
> > operations properly use lookup_instantiate_filp to create an open file
> > during the operation that does the actual create. With this, there's
> > no need to have cifs_open scrape these entries off the list.
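
[To make that concrete: the shape of the change is roughly the sketch
below. This is illustrative only, not the actual patch; example_create,
example_open_helper and server_create_and_open are made-up stand-ins
for the CIFS-specific pieces.]

    #include <linux/fs.h>
    #include <linux/namei.h>
    #include <linux/err.h>

    /* stand-in for the filesystem-specific on-the-wire create+open */
    static int server_create_and_open(struct inode *dir,
                                      struct dentry *dentry, int mode);

    /*
     * Open callback passed to lookup_instantiate_filp(). This runs while
     * the create-time context is still in hand, so the per-open private
     * data (the cifsFileInfo in our case) can be attached directly
     * instead of being parked on a list for cifs_open to find later.
     */
    static int example_open_helper(struct inode *inode, struct file *file)
    {
            file->private_data = NULL;      /* attach real state here */
            return 0;
    }

    static int example_create(struct inode *dir, struct dentry *dentry,
                              int mode, struct nameidata *nd)
    {
            struct file *filp;
            int rc;

            rc = server_create_and_open(dir, dentry, mode);
            if (rc)
                    return rc;

            /*
             * If this create came in via open(O_CREAT), hand the VFS a
             * fully instantiated struct file right now. Nothing is left
             * dangling waiting for a ->open call that may never come.
             */
            if (nd && (nd->flags & LOOKUP_OPEN)) {
                    filp = lookup_instantiate_filp(nd, dentry,
                                                   example_open_helper);
                    if (IS_ERR(filp))
                            return PTR_ERR(filp);
            }

            return 0;
    }

[NFS does something similar in its open-intent handling, for comparison.]
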
> 
> I think this is how we should do it. This makes the code a lot cleaner
> and simpler to follow.
> 
> > This fixes the busy inodes problem I was able to reproduce. It's not
> > very well tested yet, however, and I could use someone else to
> > review it and help test it.
> > 
> 
> However, I still see "VFS: Busy inode" errors with my reproducer (with
> all patches except 1/4). Perhaps the patchset has made the problem less
> frequent, or it has something to do with quick inode recycling.
> 
> Nevertheless, I think this patchset is a good step in the right direction.

Good to know. That probably means there's more than one problem. You
may need to get out a debugger and see if you can figure out why you're
seeing that on your machines.

With my test for this (just running fsstress on the mount), I'm also
seeing a memory leak in the kmalloc-512 slab that's potentially
related. I'm not sure yet whether that predates this patchset.
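
FWIW, if you want to chase that leak too, one low-effort approach
(assuming a kernel built with SLUB and CONFIG_SLUB_DEBUG) is to turn on
user tracking for that cache and look at the recorded call sites:

    # add to the kernel command line, then reboot and reproduce:
    #     slub_debug=U,kmalloc-512
    cat /sys/kernel/slab/kmalloc-512/alloc_calls
    cat /sys/kernel/slab/kmalloc-512/free_calls

Anything cifs-related that keeps growing in alloc_calls without a
matching entry in free_calls would be a good suspect.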

-- 
Jeff Layton <jlayton at redhat.com>

