10 million files on Redhat?

Matthew Melvin matthewm at webcentral.com.au
Tue Jul 12 02:02:49 UTC 2005


On Mon, 11 Jul 2005 at 4:46pm (-0700), Cabbar Duzayak wrote:

> I am planning to undertake a project where I need to store around 10
> million images on (a) dedicated server(s)... At this point, I am
> trying to decide whether I should store them in a database or on the
> filesystem.
>
> And, my question is: do you think Redhat can handle this many files? I
> will probably store them on a RAID 5 system, and of course I will
> partition them over 4 levels of directories, with each directory
> containing at most 100 files or 100 subdirectories, but still, I am
> curious to know whether Redhat can handle this.
>
> Does anyone have experience with this many files on Linux filesystems?
> Also, can you recommend which FS would be the best in terms of
> reliability and/or speed? Any suggestions/feedback?
>

I have two mailstores of 19.3 and 18.4 million files on (mostly*) vanilla 
rh7.3 ext3 partitions without drama.  The number of files per dir is a much 
bigger issue, but your 100 files per dir isn't going to cause any problems 
there.  Naturally, fsck'ing a filesystem with that many files on it takes a 
looooong time, but since moving to ext3 I haven't had to do that.
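Purely as an illustration (my own made-up sketch, not how our mailstore is 
laid out - the paths and the image_path() name are invented), something like 
this in Python spreads files over four levels with at most 100 entries per 
directory:

import hashlib
import os

def image_path(image_id, root="/srv/images"):
    # Hash the ID so files land evenly across the buckets.
    digest = hashlib.md5(str(image_id).encode()).hexdigest()
    n = int(digest, 16)
    # Four two-digit decimal buckets (00-99) -> at most 100
    # subdirectories per directory level.
    levels = []
    for _ in range(4):
        levels.append("%02d" % (n % 100))
        n //= 100
    directory = os.path.join(root, *levels)
    # You'd os.makedirs(directory) before writing the file there.
    return os.path.join(directory, "%s.jpg" % image_id)

print(image_path(1234567))
# prints something like /srv/images/42/07/91/13/1234567.jpg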

I can't say whether this is better or worse than a database for you, but it 
does work. :)

M.

* bytes per inode was tuned so that inodes and blocks ran out at roughly the 
same rate, but this isn't relevant to your question - it actually results in 
fewer inodes than would be available normally.
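In case it helps, the back-of-the-envelope version of that tuning looks like 
this in Python (the average file size is an assumption you'd replace with 
your own measurement; mke2fs's -i flag takes a bytes-per-inode ratio):

# Pick a bytes-per-inode ratio so inodes and data blocks run out
# at about the same time.  Numbers below are assumptions.
avg_file_size = 80 * 1024    # assumed average image size, in bytes
block_size = 4096            # common ext2/ext3 block size

# Each file occupies roughly this much disk once rounded up to whole
# blocks, so that's about how many bytes should map to one inode.
blocks_per_file = -(-avg_file_size // block_size)   # ceiling division
bytes_per_inode = blocks_per_file * block_size

print("suggested: mke2fs -i %d <device>" % bytes_per_inode)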

-- 
:wq!



