[Linux-cluster] Cluster Project FAQ - GFS tuning section
Jon Erickson
erickson.jon at gmail.com
Thu Jan 11 16:41:04 UTC 2007
I have a couple of questions regarding the Cluster Project FAQ – GFS
tuning section (http://sources.redhat.com/cluster/faq.html#gfs_tuning).
First:
- Use -r 2048 on gfs_mkfs and mkfs.gfs2 for large file systems.
I noticed that when I used the -r 2048 switch while creating my file
system, it ended up with a 256MB resource group size. When I omitted
the -r flag, the file system was created with a 2048MB resource group
size. Is there a problem with the -r flag, and does gfs_mkfs
dynamically come up with the best resource group size based on your
file system size?

Another thing I did which ended up in a problem was executing the
gfs_mkfs command while my current GFS file system was mounted. The
command completed successfully, but when I went into the mount point
all the old files and directories still showed up. When I attempted
to remove files, bad things happened: I believe I received an invalid
metadata block error, and the cluster went into an infinite loop
trying to restart the service. I ended up fixing this by unmounting
the file system, re-creating the GFS file system, and re-mounting it.
This was user error on my part, but maybe gfs_mkfs should check
whether the file system is currently mounted before proceeding.
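As a minimal sketch of the kind of check I mean: a wrapper could scan
/proc/mounts for the target device before running gfs_mkfs. The device
path, cluster name, and gfs_mkfs options below are examples of my own,
not anything from the FAQ.

```shell
#!/bin/sh
# Hypothetical safety wrapper: refuse to run gfs_mkfs if the target
# block device is currently mounted. Device path is an example.
DEV=${1:-/dev/sdb1}

is_mounted() {
    # /proc/mounts holds one "device mountpoint fstype ..." line per
    # mount; succeed (exit 0) only if the device appears in column 1
    awk -v dev="$1" '$1 == dev { found=1 } END { exit !found }' /proc/mounts
}

if is_mounted "$DEV"; then
    echo "$DEV is mounted; refusing to run gfs_mkfs" >&2
    exit 1
fi

# Safe to (re)create the file system; -r sets resource group size in MB
# gfs_mkfs -p lock_dlm -t mycluster:mygfs -j 4 -r 2048 "$DEV"
echo "$DEV is not mounted; OK to mkfs"
```

Something this simple would have caught my mistake before any metadata
got clobbered.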
Second:
- Break file systems up when huge numbers of files are involved.
The FAQ states that there is some overhead when dealing with lots
(millions) of files. What is a recommended limit on the number of
files in a file system? The theoretical limit of 8 exabytes for a
file system does not seem at all realistic if you can't have millions
of files in it.
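For what it's worth, a rough way to see how close a given mount is to
that "millions of files" range is just to count inodes under it. The
mount point below is an example of mine, not from the FAQ.

```shell
#!/bin/sh
# Rough inode count for one mount, to judge when a GFS volume is
# approaching the file counts the FAQ warns about.
MOUNTPOINT=${1:-/mnt/gfs}

count_inodes() {
    # -xdev keeps find from descending into other mounted file systems
    find "$1" -xdev 2>/dev/null | wc -l
}

count_inodes "$MOUNTPOINT"
```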
I'm just curious to see what everyone thinks about this. Thanks.
--
Jon