read caching across close

Steve French smfltc at us.ibm.com
Wed Apr 28 21:03:14 GMT 2004


Unless I start doing lazy close of oplocked files (which is an option),
we have the following problem.

Although the cifs vfs gets good cache across close performance in the
common case of:

1) open file, get oplock
2) read 1st page
3) close
4) reopen file, get oplock
5) read 1st page out of the client's cache

We are not getting cache across close performance in the following case:

1) open file, get oplock
2) read 1st page
3) write anything
4) flush write data to server
5) close
6) reopen file, get oplock
(cached pages are discarded because the local and server times differ)
7) read 1st page - this read has to go to the server, hurting performance

Even though the page would still be current in the client's page cache,
the client has to discard the cached data on reopen, because the time
returned by the reopen differs from the client's cached inode last write
time (the server stamps the file when the data arrives at step 4,
slightly after the client set its inode last write time).

There are multiple options:
1) Lazy close the file (close it at unmount time, on a timer, or when
the oplock is broken)

2) Create a slightly modified SMB close for the CIFS Unix extensions
that sets the file time in the SMB close request (this cannot be done
today because, AFAIK, the 100-nanosecond NT time is not accepted in SMB
close).

3) Find a different way to do file close that accepts the NT timestamp
(100-nanosecond units)

4) Do a SetFileInfo of the client's view of the last write time after
the flush completes, but before the formerly dirty oplocked file is
closed (before we lose our oplock)

5) Do a QueryFileInfo of the server's view of the last write time after
the flush of any dirty data completes, but before the formerly dirty
oplocked file is closed (before we lose our oplock)

6) Do what filesystems without oplocks (or an equivalent) do and just
use timers (I don't like this since it makes data corruption more
likely)

7) Use the CIFS change notification mechanisms to watch for changes
(more complex)

Suggestions?
