[linux-cifs-client] Re: fsx-linux failing with latest cifs-2.6 git tree

Jeff Layton jlayton at redhat.com
Wed Nov 26 16:37:58 GMT 2008


On Wed, 26 Nov 2008 16:23:32 +0100
Nick Piggin <npiggin at suse.de> wrote:
> 
> > > That seems pretty reasonable, although keep in mind that
> > > AOP_FLAG_UNINTERRUPTIBLE is not going to be the common case (unless
> > > you're running loop or nfsd or something on the filesystem).
> > > 
> > > It would be really nice to figure out a way to avoid the reads in
> > > the interruptible case as well.
> > 
>  
> > > I can't remember the CIFS code very well, but in several of the new
> > > aops conversions I did, I added something like a BUG_ON(!PageUptodate())
> > > in the write_end methods to ensure I wasn't missing some key part of
> > > the logic. It's entirely possible that cifs is almost ready to handle
> > > a !uptodate page in write_end...
> > 
> > Well, CIFS is "special". Rather than just updating the pagecache, we
> > can fall back to doing a sync write instead. So I don't think we want
> > to BUG if the page isn't up to date. It's not ideal, but I think it's a
> > situation we can deal with if necessary.
> 
> Yes, that would be better. That sync write fallback is quite clever I
> think...
> 

Since we're able to fall back to a sync write here, I don't think we
need to check for AOP_FLAG_UNINTERRUPTIBLE at all. As long as write_end
only sets the page uptodate when the copy isn't short, we should be OK
even in the interruptible case.

This does make the assumption that the interrupted case is somewhat
rare...
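
For reference, the existing fallback path in cifs_write_end looks
roughly like this (paraphrased from memory, so the details may not
match fs/cifs/file.c exactly): if the page isn't uptodate we just push
the copied range to the server with a synchronous write instead of
dirtying the page.

	if (!PageUptodate(page)) {
		char *page_data;
		unsigned offset = pos & (PAGE_CACHE_SIZE - 1);

		/* page contents aren't trustworthy -- push just the
		   bytes that were copied in to the server synchronously */
		page_data = kmap(page);
		rc = cifs_write(file, page_data + offset, copied, &pos);
		kunmap(page);
	} else {
		/* whole page is uptodate -- normal writeback is fine */
		rc = copied;
		set_page_dirty(page);
	}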

Thoughts on this patch? 

------------------[snip]-------------------

From a8e5ea4d9859f590b5c04954e8e2a13023f690f5 Mon Sep 17 00:00:00 2001
From: Jeff Layton <jlayton at redhat.com>
Date: Wed, 26 Nov 2008 11:32:09 -0500
Subject: [PATCH] cifs: fix regression in cifs_write_begin/cifs_write_end

The conversion to the write_begin/write_end interfaces had a bug where
we were passing a bad parameter to cifs_readpage_worker. Rather than
the offset into the page at which the write starts, we need to pass the
file offset of the beginning of the page. This was reliably showing up
as data corruption in the fsx-linux test from LTP.

It also became evident that this code was occasionally doing unnecessary
read calls. Optimize those away by using the PG_checked flag to indicate
that the unwritten part of the page has been initialized.

Signed-off-by: Jeff Layton <jlayton at redhat.com>
---
 fs/cifs/file.c |   79 +++++++++++++++++++++++++++++++++++++++++---------------
 1 files changed, 58 insertions(+), 21 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index b691b89..49d4a97 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1475,7 +1475,11 @@ static int cifs_write_end(struct file *file, struct address_space *mapping,
 	cFYI(1, ("write_end for page %p from pos %lld with %d bytes",
 		 page, pos, copied));
 
-	if (!PageUptodate(page) && copied == PAGE_CACHE_SIZE)
+	if (PageChecked(page)) {
+		if (copied == len)
+			SetPageUptodate(page);
+		ClearPageChecked(page);
+	} else if (!PageUptodate(page) && copied == PAGE_CACHE_SIZE)
 		SetPageUptodate(page);
 
 	if (!PageUptodate(page)) {
@@ -2062,39 +2066,72 @@ static int cifs_write_begin(struct file *file, struct address_space *mapping,
 {
 	pgoff_t index = pos >> PAGE_CACHE_SHIFT;
 	loff_t offset = pos & (PAGE_CACHE_SIZE - 1);
+	loff_t page_start = pos & PAGE_MASK;
+	loff_t i_size;
+	struct inode *inode;
+	struct page *page;
+	int rc = 0;
 
 	cFYI(1, ("write_begin from %lld len %d", (long long)pos, len));
 
-	*pagep = __grab_cache_page(mapping, index);
-	if (!*pagep)
-		return -ENOMEM;
-
-	if (PageUptodate(*pagep))
-		return 0;
+	page = __grab_cache_page(mapping, index);
+	if (!page) {
+		rc = -ENOMEM;
+		goto out;
+	}
 
-	/* If we are writing a full page it will be up to date,
-	   no need to read from the server */
-	if (len == PAGE_CACHE_SIZE && flags & AOP_FLAG_UNINTERRUPTIBLE)
-		return 0;
+	if (PageUptodate(page))
+		goto out;
 
-	if ((file->f_flags & O_ACCMODE) != O_WRONLY) {
-		int rc;
+	/*
+	 * If we write a full page it will be up to date, no need to read from
+	 * the server. If the write is short, we'll end up doing a sync write
+	 * instead.
+	 */
+	if (len == PAGE_CACHE_SIZE)
+		goto out;
 
-		/* might as well read a page, it is fast enough */
-		rc = cifs_readpage_worker(file, *pagep, &offset);
+	/*
+	 * optimize away the read when we have an oplock, and we're not
+	 * expecting to use any of the data we'd be reading in. That
+	 * is, when the page lies beyond the EOF, or straddles the EOF
+	 * and the write will cover all of the existing data.
+	 */
+	inode = page->mapping->host;
+	if (CIFS_I(inode)->clientCanCacheRead) {
+		i_size = i_size_read(inode);
+		if (page_start >= i_size ||
+		    (offset == 0 && (pos + len) >= i_size)) {
+			zero_user_segments(page, 0, offset,
+					   offset + len,
+					   PAGE_CACHE_SIZE);
+			/*
+			 * PageChecked means that the parts of the page
+			 * to which we're not writing are considered up
+			 * to date. Once the data is copied to the
+			 * page, it can be set uptodate.
+			 */
+			SetPageChecked(page);
+			goto out;
+		}
+	}
 
-		/* we do not need to pass errors back
-		   e.g. if we do not have read access to the file
-		   because cifs_write_end will attempt synchronous writes
-		   -- shaggy */
+	if ((file->f_flags & O_ACCMODE) != O_WRONLY) {
+		/*
+		 * might as well read a page, it is fast enough. If we get
+		 * an error, we don't need to return it. cifs_write_end will
+		 * do a sync write instead since PG_uptodate isn't set.
+		 */
+		cifs_readpage_worker(file, page, &page_start);
 	} else {
 		/* we could try using another file handle if there is one -
 		   but how would we lock it to prevent close of that handle
 		   racing with this read? In any case
 		   this will be written out by write_end so is fine */
 	}
-
-	return 0;
+out:
+	*pagep = page;
+	return rc;
 }
 
 const struct address_space_operations cifs_addr_ops = {
-- 
1.5.5.1


