problem restated RE: inode-max and file-max question (less cryptic)

Ben Yau byau at cardcommerce.com
Tue Mar 16 16:34:34 UTC 2004


Weird .. a lot of my last email got accidentally chopped off...  here it is
again


> >
> > I just realized my last message was too cryptic.
> >
> > You really don't want to start creating a directory with up to 1.5
> > million files.  The performance will be horrendous.
> >
> > What you would really want to do is have your tracking application use a
> > database to keep the tracking records.  Will also make it much easier to
> > search/retrieve information.
> >
> > Regards,
> > Ed
> >
>
> Yes, I agree with you.  I'm glad you pointed it out also.  At the
> same time I emailed this request to the mailing list (a lot of it out
> of curiosity), I sent an email back to the programmer who requested
> it.  Doing some simple calculations, basically they would be creating
> a file every 2 seconds (!!!)


Basically, I agree with you.  It should be DB driven, and that's what I'm
suggesting to the programmer.  Especially considering that with that many
files you're looking at one file created every 2 seconds over the course of
a year.
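Quick sanity check on that rate (assuming all 12 dirs get filled over one
year -- 12 x 1.4 million files):

```shell
# Rough rate check: seconds in a year divided by total files
files=$((12 * 1400000))        # 16,800,000 tracking files
secs=$((365 * 24 * 3600))      # 31,536,000 seconds in a year
awk -v s="$secs" -v f="$files" 'BEGIN { printf "%.1f s/file\n", s / f }'
# prints "1.9 s/file"
```

So "one file every 2 seconds" is about right.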

However, I got very curious about how to accomplish this task anyway.  Would
I need to change file-max/inode-max?  Doing some searching on Google I
found that quite a few admins believe the practical max is about 30,000
files per directory, maybe up to 40,000 (they did not tie the number to
physical memory or disk space), before system performance starts to suffer.
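For what it's worth, as far as I know only file-max is still a tunable on a
2.4 kernel (inode-max went away; inodes are allocated dynamically now).
Something like this would check and raise it -- the 3000000 is just an
example value, not a recommendation:

```shell
# Check the current system-wide open-file limit and usage (2.4-kernel paths)
cat /proc/sys/fs/file-max    # max open file handles, system-wide
cat /proc/sys/fs/file-nr     # allocated / free / max
# To raise the limit (as root; value is only an example):
# echo 3000000 > /proc/sys/fs/file-max
```

Note that file-max only caps *open* files, not files sitting on disk -- the
on-disk count is bounded by the filesystem's inode table (check `df -i`),
which is really the limit that matters for this exercise.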

Anyway, I have a non-networked Debian box on my desk, so I wrote a script to
create the file structure requested by the admins...  12 dirs, 1.4 million
files per dir.  It's running now and I'll let you know when the smoke starts
coming out of the CPU.  Right now it's on file 44,000 of directory 1 :D
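The script boils down to something like this (function and file names here
are made up for illustration -- I didn't paste the real one):

```shell
# Create ndirs directories under root, each holding nfiles empty files.
make_tree() {
    root=$1; ndirs=$2; nfiles=$3
    d=1
    while [ "$d" -le "$ndirs" ]; do
        mkdir -p "$root/dir$d"
        i=1
        while [ "$i" -le "$nfiles" ]; do
            : > "$root/dir$d/file$i"   # empty file; still costs one inode
            i=$((i + 1))
        done
        d=$((d + 1))
    done
}
# The box on my desk is running roughly: make_tree /scratch 12 1400000
```

Each empty file still eats a directory entry and an inode, which is exactly
what makes the lookup performance (and `df -i`) interesting to watch.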

More information about the redhat-list mailing list