Intermittent file corruption problems with cifs driver?

Jeff Layton jlayton at redhat.com
Mon Sep 12 07:16:24 MDT 2011


On Mon, 12 Sep 2011 10:36:55 +0200
sean finney <seanius at seanius.net> wrote:

> Hi all,
> 
> Recently at $customer I've been tasked with looking into a problem they
> are intermittently having with corrupt file transfers from linux servers
> to a windows share.
> 
> A little info on the servers:
> 
> 	Ubuntu Lucid 10.04
> 	Stock and up to date Linux 2.6.32-33-server distro package
> 	Stock cifs-utils 4.5-2 packages
> 
> Description of behavior:
> 
> 	The servers are all part of a distributed service where each server
> 	regularly uploads 100-200MB zipfiles to the windows share.  Intermittently
> 	the resulting files will be corrupted.  On the client that performs
> 	the upload, the corrupted file will appear to have the correct checksum,
> 	but any other remote client will see it as corrupted.
> 
> 	The problem used to be much more frequent, and mounting with -o directio
> 	seems to have greatly reduced, but not eliminated, the recurrence of the
> 	corruption.  But recently (perhaps due to higher rates of uploads?),
> 	the problem has started recurring.  It doesn't seem to occur uniformly,
> 	but rather in spurts, where a couple of files will be corrupted in one
> 	day and then a week will go by with no corruptions.
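> 
> 	For reference, the mounts look roughly like this; the server and
> 	share names here are placeholders rather than the real ones:
> 
> 		mount -t cifs //winserver/upload /mnt/upload -o directio,credentials=/etc/cifs.creds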
> 
> 	I do see occasional errors in the kernel logs, though I'm not sure if
> 	they are relevant or not (note that they're at substantially different
> 	times, and at present I have no way to correlate them with corruption,
> 	though I'm working on that):
> 
> 	[170873.721023]  CIFS VFS: Error -104 sending data on socket to server
> 	[170873.728747]  CIFS VFS: Error -32 sending data on socket to server
> 	[515039.940104]  CIFS VFS: No response to cmd 115 mid 32714
> 	[515039.947933]  CIFS VFS: Send error in SessSetup = -11
> 	[521901.595381]  CIFS VFS: No response to cmd 46 mid 37426
> 	[521901.603422]  CIFS VFS: Send error in read = -11
> 	[2097744.571138]  CIFS VFS: No response for cmd 50 mid 48502
> 	[2097849.771138]  CIFS VFS: No response for cmd 114 mid 48519
> 
> 
> Reading through the archives, along with the rest of the internet, I've
> found very little info.  Someone posted here back in February about a
> similar-sounding problem, though I do not see wsize-sized blocks of NULL
> bytes in the resulting files like they did.
> 
> I've written a small python script, currently running on a pair of these
> servers, which uses a couple dozen threads to upload similarly sized files
> of arbitrary data, with each server verifying the other's uploaded results.
> After a few hours I haven't seen the problem yet, but I'll keep it running
> for a couple of days to see if it shows up.
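> 
> For the curious, the core of the test script is roughly this (a
> simplified sketch; the mount point, file size, and thread count are
> stand-ins for the real values):
> 
> 	import hashlib
> 	import os
> 	import threading
> 
> 	MOUNT = "/mnt/upload"        # stand-in for the real cifs mount point
> 	SIZE = 150 * 1024 * 1024     # roughly the size of the real zipfiles
> 
> 	def upload_and_verify(n):
> 	    path = os.path.join(MOUNT, "test-%d.bin" % n)
> 	    # write arbitrary data in chunks, remembering its checksum
> 	    md5 = hashlib.md5()
> 	    with open(path, "wb") as f:
> 	        remaining = SIZE
> 	        while remaining > 0:
> 	            chunk = os.urandom(min(1024 * 1024, remaining))
> 	            md5.update(chunk)
> 	            f.write(chunk)
> 	            remaining -= len(chunk)
> 	    local_sum = md5.hexdigest()
> 	    # re-read the file through the mount and compare; on the real
> 	    # servers the re-read is done by a peer, since the uploading
> 	    # client's own page cache will always look correct
> 	    md5 = hashlib.md5()
> 	    with open(path, "rb") as f:
> 	        for chunk in iter(lambda: f.read(1024 * 1024), ""):
> 	            md5.update(chunk)
> 	    if md5.hexdigest() != local_sum:
> 	        print "corruption detected in %s" % path
> 
> 	threads = [threading.Thread(target=upload_and_verify, args=(i,))
> 	           for i in range(24)]
> 	for t in threads:
> 	    t.start()
> 	for t in threads:
> 	    t.join()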
> 
> I've also found a couple of suggestions out there to "disable linux
> extensions" and "disable oplocks" when searching on the above kernel error
> messages, but I'm hesitant to try them unless there's a strong indication
> that they will help, and I'm not entirely sure whether they will.
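> 
> If I were to try them, my understanding is that they'd translate to
> something like this, untested on my part:
> 
> 	mount -t cifs //winserver/upload /mnt/upload -o directio,nounix
> 	echo 0 > /proc/fs/cifs/OplockEnabled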
> 
> 
> Does this ring a bell with anyone?  At this point I can't just do a
> blanket "try the latest" upgrade of these servers because they're part
> of a production application, at least not without some indication that
> a fix for this problem landed between the current and latest versions.
> If I can repro the problem and then take it to a non-prod machine, I
> might have a bit more flexibility, but in the meantime I thought I'd
> throw the question out here on the off chance...
> 
> 
> thanks!
> 	sean
> 


Older kernels were particularly bad about giving up on writes that
timed out. When that happens, the kernel typically marks the mapping
as bad so that you get an error back on fsync or close, but that's
small consolation since a lot of programs don't check the return value
of close.
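
As an aside, if the uploading program is under your control, it's worth
making sure it checks both. In python, for instance, the careful pattern
looks roughly like this (just a sketch, where "f" is the file you wrote):

	import os

	def flush_and_close(f):
	    # flush python's userspace buffers first
	    f.flush()
	    # fsync forces writeback and will raise OSError if the
	    # cached writes to the server failed
	    os.fsync(f.fileno())
	    # close() can also report a deferred write error, so don't
	    # ignore its result either
	    f.close()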

That said, the messages you posted above seem to indicate timeouts on
reads, not writes, though I don't recall whether writepages spewed any
errors when writes timed out.

The patchset that converted cifs to use async writes should not only
improve performance, but also make this more robust. One thing you can
try is backporting 941b853 and seeing if that helps. Other than that,
I'd suggest moving to a newer kernel.
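
If you do attempt the backport, the starting point against a git tree
of your distro kernel would be something like:

	git cherry-pick 941b853

though you may well have to resolve conflicts by hand on a 2.6.32 base.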

Anything before 2.6.38 is probably going to suck for data integrity for
those reasons, unless someone backported the newer code to it (like we
did for RHEL6).

-- 
Jeff Layton <jlayton at redhat.com>

