[linux-cifs-client] Re: Page allocation failure

Steven French sfrench at us.ibm.com
Mon Mar 28 16:26:57 GMT 2005

> Mar 25 21:05:29 igestion-l kernel: cifsd: page allocation failure.
> order:3, mode:0xd0
>
> After this error the server hangs and needs hardware reset.

It is an indication of severe memory pressure. Linux can run into memory
pressure when caching large files even without a particular application
intentionally allocating large amounts of memory. Normally this message
is harmless, since the kernel retries memory allocations, and if an
allocation actually fails the cifs demultiplex thread itself will wait a
few seconds and retry. When the Linux memory management code is caching
writebehind data for CIFS, freeing memory can become difficult, because the
cifs requests that write out the dirty pages can block indirectly on the
cifs threads that themselves need memory. If memory is still so tight that
the memory allocation retries keep failing, it is possible the system will
grind to a halt, although I have not seen that with 2.6.11 and later kernels.

I have switched CIFS to use two buffer sizes: a large pool of 512 byte
buffers, and a smaller pool of CIFS network buffers (sized to the typical
negotiated SMB buffer size plus header), which are somewhat large by
default (just over 4 pages, so they require an "order 3" (eight page)
request to the memory manager). To avoid having to allocate them at run
time there is a pool of them by default, and in reasonably current code
(2.6.11 kernel and later) the pool size is adjustable at module install
(insmod) time via a parameter (you can run /sbin/modinfo on cifs.ko to get
the name of the parameter - I think it is min_rcv_buffer or something
similar).
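
For example, something along these lines should work to inspect and enlarge
the pool (the parameter name below is only a placeholder - use whatever name
the modinfo output actually reports on your kernel):

    /sbin/modinfo cifs.ko                    # list the module parameters and their descriptions
    /sbin/rmmod cifs                         # unload the module (unmount any cifs shares first)
    /sbin/insmod cifs.ko min_rcv_buffer=16   # reload with a larger pool of network buffers

A larger pool costs a little extra memory up front, but it makes it less
likely that the demultiplex thread has to ask the memory manager for a fresh
order 3 allocation while the system is already under pressure.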

There are three things (besides tuning the aggressiveness of Linux file
caching, which is outside the scope of CIFS) which can help:
1st: Increase the number of cifs network buffers in the memory pool as I
describe above. The default is 5 (plus one additional buffer for every
subsequent server you are mounted to).

2nd: Decrease the size of the cifs network buffers (also a module install
parameter). Decreasing it to 15K instead of the default of 16K should make
it an order 2 (4 page) rather than an order 3 (8 page) allocation, which
reduces memory pressure at little performance cost.

3rd: Mount with the forcedirectio mount option if you don't care about
client caching (this is fine if you are reading or copying large sequential
files and your app uses reasonably big read or write operations). See the
example commands below.
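
For instance (with 4K pages, an order 2 allocation is 4 contiguous pages,
i.e. 16K, and an order 3 allocation is 8 pages, i.e. 32K - the default 16K
buffer plus the SMB header no longer fits in 16K, which is why it gets
rounded up to order 3, while roughly 15K plus header still fits in order 2).
The parameter and option names below are my best guess - double check them
against the modinfo output and the mount.cifs documentation, and substitute
your own server, share and credentials:

    /sbin/rmmod cifs
    /sbin/insmod cifs.ko CIFSMaxBufSize=15360     # ~15K network buffers -> order 2 allocations
    mount -t cifs //server/share /mnt/share -o username=someuser,forcedirectio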

There are a few things on the development side which would help a lot here.
One is reducing the number of times we use the large buffers -
cifs_demultiplex_thread could default to using smaller buffers when the SMB
response is small. The second is finishing the experimental code in the
write path that passes the page to be written directly to the socket buffer
without reallocating a buffer to copy it into (that might cut a third of the
large buffer allocations in large file writes to the server).

Steve French
Senior Software Engineer
Linux Technology Center - IBM Austin
phone: 512-838-2294
email: sfrench at-sign us dot ibm dot com