read caching across close
Peter Waechtler
peter at helios.de
Thu Apr 29 07:18:39 GMT 2004
On Wednesday, 28 April 2004 at 23:03, Steve French wrote:
> unless I start doing lazy close of oplocked files (which is an option),
>
> Although the cifs vfs gets good cache across close performance in the
> common case of:
>
> 1) open file, get oplock
> 2) read 1st page
> 5) close
> 6) reopen file, get oplock
> 7) read 1st page out of client's cache
>
You give up the oplock when you send a close over the wire.
IMHO you then have to purge the page and read it in again, and not
rely on the timestamp.
With a BATCH oplock you wouldn't send the close over the wire.
Instead you start a timer and only close the file if no program
opens the file again before the timer expires.
If you close the file, you give up the oplock.
> We are not getting cache across close performance in the following case:
>
> 1) open file, get oplock
> 2) read 1st page
> 3) write anything
> 4) flush write data to server
> 5) close
> 6) reopen file, get oplock
> (discard pages from cache due to difference between local and server
> time)
> 7) read 1st page - this read has to go to server hurting performance
>
> Even though the page would have been current in the client's page cache,
> the client has to discard the cached data on reopen because the reopen
> of the file returns a different time than the time that ended up on the
> server for the file (due to data being written to server at step 4
> slightly after the client's inode last write time)
>
Again: with a BATCH oplock you don't send close, write, or lock over the wire.
> There are multiple options:
> 1) Lazy close file (close the file at unmount time, or on a timer or
> when oplock is broken)
>
Only when holding a BATCH oplock.
> 2) Create a slightly modified SMB close for the CIFS Unix extensions -
> to set the file time in the SMB close request (can not do this today
> because the 100 nanosecond NT time is not accepted in SMB close AFAIK).
I wouldn't want to rely on timestamps. The oplocks are there for a reason.
> 4) Do a SetFileInfo of the client's view of last write time, after flush
> is complete, but before the formerly dirty oplocked file is closed
> (before we lose our oplock)
If you had a BATCH oplock and decide to close the file on the
server, you have to flush everything first (you as in the redirector :)
>
> 6) Do like those filesystems without oplock or equivalent and just use
> timers (I don't like this since it makes data corruption more likely)
But NFS comes with a lock server, doesn't it?
>
> 7) Use the CIFS change notification mechanisms to watch for changes
> (more complex)
No, just use the oplocks properly. The clients make heavy use of BATCH
oplocks: the redirector (here, the cifs vfs) "buffers" the open/lock, write,
close sequence until a timer expires; then the cache is flushed, the file is
closed, and with the close the oplock is gone.
More information about the samba-technical mailing list