[Linux-cluster] GFS RG size (and tuning)

Wendy Cheng wcheng at redhat.com
Fri Oct 26 23:57:18 UTC 2007


Jos Vos wrote:

>Hi,
>
>The gfs_mkfs manual page (RHEL 5.0) says:
>
>  If not specified, gfs_mkfs will choose the RG size based on the size
>  of the file system: average size file systems will have 256 MB RGs,
>  and bigger file systems will have bigger RGs for better performance.
>
>My 3 TB filesystems still seem to have 256 MB RGs (I don't know how to
>see the RG size, but there are 11173 of them, so that seems to indicate
>a size of 256 MB).  Is 3 TB considered to be "average size"? ;-)
>
>Anyway, is it recommended to try rebuilding the fs's with "-r 2048" for
>3 TB filesystems, each with between 1 and 2 million files on it?
>Especially gfs_scand uses *huge* amounts of CPU time and doing df takes
>a *very* long time....
>
1. 3 TB is not "average size". A smaller RG count (i.e. larger RGs, as 
"-r 2048" would give you) can help with the "df" command - but if your 
system is congested, it won't help much.
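For example, a rebuild with 2 GB RGs would look something like this (the 
cluster name, filesystem name, journal count, and device below are 
placeholders, and gfs_mkfs destroys the existing filesystem, so back up 
first):

  # 2048 MB RGs give roughly 1500 RGs on a 3 TB volume instead of ~11000
  gfs_mkfs -p lock_dlm -t mycluster:myfs -j 4 -r 2048 /dev/vg0/gfslv

To see the current layout, "gfs_tool rindex <mountpoint>" dumps the 
resource group index of a mounted filesystem.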
2. The gfs_scand issue has more to do with the glock count. One way to 
tune this is via the "glock_purge" tunable. There is an old write-up at 
http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4 
- it is for RHEL4 but should work the same way on RHEL5.
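For example, to trim half of the unused glocks on each gfs_scand pass 
(the mount point is a placeholder; the value is a percentage, and 0 - 
the default - disables trimming):

  gfs_tool settune /mnt/gfs glock_purge 50

  # confirm the setting
  gfs_tool gettune /mnt/gfs | grep glock_purge

Keep in mind settune values are per-mount and do not survive an umount, 
so this needs to be reapplied (e.g. from a script) after each mount.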
3. If you don't need to know the exact disk usage and/or can tolerate 
some delay in disk usage updates, there is another tunable, 
"statfs_fast". The old write-up (RHEL4) is at 
http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_fast_statfs.R4 
(and it should work the same way on RHEL 5).
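For example (mount point again a placeholder):

  # return statfs data from a cached copy instead of walking every RG
  gfs_tool settune /mnt/gfs statfs_fast 1

With this on, "df" comes back quickly but the numbers can lag slightly 
behind the true usage; set it back to 0 if you need exact accounting.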

-- Wendy

More information about the Linux-cluster mailing list