[clug] nr-file question

Andrew andrew at donehue.net
Sun Oct 24 02:36:21 GMT 2004


Thank you for replying.

I am going to take two approaches to this problem -

1) As suggested, look at why so many file descriptors are being used

2) At the same time raise the limit, for temporary relief.

ulimit -a reports 1024 open files. I know I can change this at the
start of a bash script with ulimit -S -n 2048 (for example), or in
/etc/profile. Where should I place this limit so it takes effect for
any Apache threads that get started? (This is a Debian system.)
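
One option I'm looking at (a sketch only; the init script path and the
value 2048 are guesses for a stock Debian install): since the Apache init
script is an ordinary shell script, a ulimit call near the top should be
inherited by the daemon and every child it forks:

    # Near the top of /etc/init.d/apache (path assumed), before the daemon
    # is started; the raised soft limit is then inherited by all children.
    ulimit -S -n 2048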

Cheers,
Andrew



Peter Barker wrote:

>On Sun, 24 Oct 2004, Andrew wrote:
>
>>The kernel documents say that the first value of file-nr (from sysctl
>>-a) is dynamically set by the kernel (up to a maximum of the third
>>value).  Am I correct to assume that if the second value gets close to
>>the first, then the first will be raised automatically without giving
>>errors? Or is there some intervention required on my part?  I am trying
>>to work out why a system I am looking after sometimes gives "Too many
>>open files" errors in php - there seems to be a huge capacity here?
>
>You might like to look in
>/usr/src/linux/Documentation/filesystems/proc.txt; search for file-nr:
>
>---
>The three values in file-nr denote the number of allocated file handles,
>the number of used file handles, and the maximum number of file handles.
>When the allocated file handles come close to the maximum, but the number
>of actually used ones is far behind, you've encountered a peak in your
>usage of file handles and you don't need to increase the maximum.
>---
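
For reference, the same three fields can be read straight out of /proc;
the example output shown is just the sysctl value quoted below:

    cat /proc/sys/fs/file-nr
    # e.g.  2264    425     104032   (allocated, used, maximum)
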
>
>>fs.file-nr = 2264       425     104032
>
>>Any thoughts?
>
>Yes, that's a whole heap of filehandles you allow on your system. You
>/can/ raise that by poking new values into /proc/sys/fs/file-max, but
>that's not your problem here :)
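
For completeness, raising it would look something like this (a sketch;
the number is arbitrary, and the /etc/sysctl.conf line is what makes it
stick across reboots):

    # raise the system-wide ceiling immediately
    echo 209708 > /proc/sys/fs/file-max
    # or, equivalently
    sysctl -w fs.file-max=209708
    # to make it permanent, add a line to /etc/sysctl.conf:
    #   fs.file-max = 209708
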
>
>You possibly have two problems here.
>
>The first is that you're possibly leaking file descriptors; 2264 is a
>whole heap. /Should/ your application have that many open? If your
>application is long lived (php indicates it probably is), then over many
>runs of your script in the same process a file descriptor leak is
>cumulative. What this basically means is: look for an open(...)
>unbalanced by a close(...). Failing that, socket() calls and various
>other bits and pieces will also allocate file descriptors.
>
>At a guess, the limit you are hitting is the per-process file descriptor
>limit. Type "ulimit -a" in a shell and it will give you what your
>/shell's/ per-process limits are (they might be different for your
>script). "open files" is the figure you're looking for.
>
>Oh, one more thing; while a process is running, /proc/<pid>/fd contains
>a list of the descriptors that process has open (also /proc/self/fd,
>which is handy for self-diagnosis :)).
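
A quick way to watch whether the count keeps creeping up (the pid below
is just a placeholder for one Apache child):

    # how many descriptors the process currently holds
    ls /proc/12345/fd | wc -l
    # and what they actually point at
    ls -l /proc/12345/fd
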
>
>>Andrew.
>
>Yours,


