[clug] kernel options

Andrew andrew at donehue.net
Wed Aug 6 12:18:15 EST 2003


Hi Matt,
             Would this be causing the error "Too many open files"?  (I
have had this problem before, and it makes it almost impossible to open
a new bash session...)
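
(For what it's worth, a quick way to tell which limit is actually being
hit is something like the following -- /proc/sys/fs/file-nr reports fds
allocated, fds free, and the system-wide maximum:)

    $ ulimit -n                   # per-process fd limit (1024 by default)
    $ cat /proc/sys/fs/file-nr    # allocated / free / system-wide max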

CPU and RAM aren't much of an issue (dual 2.8GHz Xeon, 2GB of RAM).


Cheers.
             Andrew.
P.S. Please excuse the email I sent to you directly earlier...

Matthew Hawkins wrote:

> Daniel said:
>
>> Thanks Andrew and Martijn - this is heading me in the right direction
>> for a diagnosis of why I can't copy a really huge maildir directory.
>> /proc/sys/fs# cat file-max
>> 52352
>>   
>
> Don't go chasing red herrings.  file-max is how many file descriptors the
> system is able to have open at once.  Your userspace cp(1) (or whatever)
> program is also limited by the per-process file descriptor limit (1024 by
> default) and by any further restrictions imposed via setrlimit(2) by your
> systems administrator.
>
> Unless you have some threaded version of cp(1) that's capable of shifting
> many files simultaneously, whatever the limit is, it is highly unlikely to
> be causing your problem.
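
(As a rough illustration of that per-process limit -- ulimit is the
shell's front-end to setrlimit(2):)

    $ ulimit -Sn        # soft per-process fd limit (1024 by default)
    $ ulimit -Hn        # hard ceiling, set by the administrator
    $ ulimit -n 4096    # raise the soft limit, up to the hard ceiling
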
> My guess would be that, should you have some lame old filesystem like
> ext2 or ext3, with sufficient files in a single directory you're simply
> running into the fs's inability to quickly obtain the inodes for these
> files.  Does running ls(1) take a long time in this maildir?  If this is
> the case, you may like to consider using a modern filesystem like
> reiserfs or XFS or something.  This applies if either end (source or
> destination) fits this description.
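
(A minimal sketch of that ls(1) test; the maildir path is just a
placeholder for wherever the real one lives:)

    $ time ls -f /path/to/Maildir/cur > /dev/null

(-f skips the sort, so the elapsed time reflects the directory lookups
themselves rather than ls sorting the output.)
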
>
>> http://pierre.mit.edu/compfac/linux/Securing-Optimizing-Linux-RH-Edition-v1.3/chap6sec72.html
>> I read above and wonder what happens if I set the number too high -
>> does it just slow down the system or something worse?
>>   
>
> It'll waste a little kernel memory, and if you also increase the
> per-process limit you'll slow down any select(2) or similar syscalls.
> It's usually not noticeable (read: unnoticeable) *unless* you actually
> start using those extra fd's, or you have (a) slow CPU(s).
>
> I crank file-max (and related settings) up on proxy servers and web
> servers mainly to stave off performance *degradation* (funnily enough)
> and denial of service due to fd starvation, but haven't really seen a use
> beyond that in the normal course of things I set Linux boxes doing.  Even
> in this case (and I've tested) I haven't had a real need for file-max to
> go beyond 16384.
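
(For the record, raising it looks something like this, run as root --
16384 here is just Matthew's figure above, not a recommendation:)

    # echo 16384 > /proc/sys/fs/file-max
    # sysctl -w fs.file-max=16384

(The two are equivalent; a fs.file-max line in /etc/sysctl.conf makes
the setting persistent across reboots.)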
