ext3 filesystem performance issues

John Wendel john.wendel at metnet.navy.mil
Wed Sep 12 16:50:45 UTC 2007


aragonx at dcsnow.com wrote:
> I'm wondering at what point ext3 starts having issues with the number of
> files in a directory.
> 
> For instance, will any of the utilities fail (ls, mv, chown etc) if you
> have more than x files in a directory?
> 
> At what point do things really start slowing down?
> 
> I was told by a coworker that all UNIX varieties have to do an ordered
> list search when they have to perform any operations on a directory.  They
> also stated that if there are more than 100k files in a directory, these
> tools would fail.
> 
> This seems like a low number to me but I was looking for some expert
> analysis.  :)
> 
> Thanks
> Will


Your cow-orker is (mostly) wrong! The failures you can run into are a 
command-line length limitation, not a problem with the filesystem itself.

Ext3 uses hashed directory indexing (the dir_index / htree feature), which 
keeps directory lookups fast even in very large directories, provided the 
feature is enabled. I work with some ext3 filesystems that keep over 100K 
files in a single directory.
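
If you want to check, tune2fs will tell you whether dir_index is turned 
on (the device name below is just a placeholder, substitute your own):

    # Check whether hashed directory indexing (dir_index) is enabled
    tune2fs -l /dev/sda1 | grep dir_index

    # If it is missing, enable it and re-index existing directories
    # while the filesystem is unmounted
    tune2fs -O dir_index /dev/sda1
    e2fsck -fD /dev/sda1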

Shell wildcards can fail with "Argument list too long" when a directory 
holds a huge number of files, but this is usually easy to work around 
(man xargs, man find); see the example below.
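
For example, something like this (directory and pattern are only 
placeholders) avoids expanding every file name onto one command line:

    # "rm /bigdir/*.log" can fail with "Argument list too long";
    # letting find feed names to rm in batches avoids that
    find /bigdir -maxdepth 1 -name '*.log' -print0 | xargs -0 rm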

I seem to recall that XFS has been tested with a million files in a 
single directory, but we only use XFS on our SGI IRIX systems.

Things work as expected, but in general things are better with fewer 
files per directory. The command-line length limitation is a major pain 
for the casual user.
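
If you're curious what that limit actually is on your box, it is easy to 
check (the number varies by system):

    # Maximum bytes of arguments plus environment passed to exec()
    getconf ARG_MAX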

Regards,

John
