Recommended max. limit of number of files per directory?

Ric Wheeler rwheeler at redhat.com
Thu Mar 26 15:56:32 UTC 2009


On 03/26/2009 09:58 AM, howard chen wrote:
> Hi,
>
> On Thu, Mar 26, 2009 at 12:58 PM, Christian Kujau <lists at nerdbynature.de> wrote:
>    
>> http://nerdbynature.de/bench/sid/2009-03-26/di-b.log.txt
>> http://nerdbynature.de/bench/sid/2009-03-26/
>> (dmesg, .config, JFS oops, benchmark script)
>>
>>      
>
> Apparently ext3 starts to suck when files > 1000000. Not bad in fact,
>
> I will try to run your script on my server for a comparison.
>
> Also I might try to measure the random read time when many directories
> contain many files. But I want to know:
>
> If I am writing a script to do such testing, what steps are needed to
> prevent things such as OS caching effects (not sure if that is the right
> name), so I can arrive at a fair test?
>
> Thanks.
>

I ran similar tests using fs_mark: basically, run it against one directory, 
writing 10 or 20 thousand files per iteration, and watch as performance 
(files/sec) degrades as the file system fills or the directory 
limitations kick in.
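
For reference, a run along those lines might look something like the 
following (the mount point, file size and counts are just placeholder 
values rather than the exact ones I used; check fs_mark -h for the full 
option list):

    # 50 iterations of 10,000 files of 10 KB each, all in one directory;
    # fs_mark prints a files/sec figure per iteration, so you can watch
    # it fall off as the directory grows
    fs_mark -d /mnt/test/bigdir -n 10000 -s 10240 -L 50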

If you want the test to be reproducible, you should probably start with a 
freshly made file system, but note that this does not reflect the reality 
of a naturally aged (say, a year or so old) file system very well.
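
If you do go that route, the usual mkfs/mount dance on a scratch partition 
is all it takes (the device name below is only an example, and this 
destroys whatever is on it):

    mkfs.ext3 /dev/sdb1          # wipe and recreate the file system
    mount /dev/sdb1 /mnt/test    # mount it fresh for the next run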

You can also unmount/remount to clear out cached state for an older file 
system (or tweak the /proc/sys/vm/drop_caches knob to clear out the cache).
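
Concretely, that is either a plain umount/mount cycle, or, on 2.6.16 and 
later kernels, something like the following as root before each timed run:

    sync                                  # flush dirty data first
    echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes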

Regards,

Ric



