[Linux-cluster] GFS2 performance on large files

Andy Wallace andy at linxit.eclipse.co.uk
Wed Apr 22 23:11:45 UTC 2009


Hi,

I've just set up a GFS2 filesystem for a client, but I'm having some serious
performance issues with large files (i.e. > 2GB). This is a real problem,
as the files we'll actually be working with will range from approximately
20GB up to 170GB.

Hardware setup is:
2 x IBM X3650 servers with 2 x dual Xeon, 4GB RAM, 2 x 2Gb/s HBAs per
server;
Storage on IBM DS4700 - 48 x 1TB SATA disks

Files will be written to the storage via FTP and read via NFS mounts, both
through an LVS virtual IP address.
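
For context, the relevant mounts look roughly like this (the device name,
mount points and options below are illustrative rather than my exact fstab
entries):

    # on each cluster node
    /dev/vg_san/lv_gfs2   /mnt/gfs2   gfs2   defaults,noatime   0 0

    # on the NFS clients, pointing at the LVS VIP
    lvs-vip:/mnt/gfs2     /mnt/data   nfs    defaults           0 0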

Although it's not as quick as I'd like, I'm getting about 150MB/s on
average when reading/writing files in the 100MB - 1GB range. However, if
I try to write a 10GB file, throughput drops to about 50MB/s. That's just
doing dd to the mounted gfs2 filesystem on an individual node. If I do a
get from an FTP client I see about half that, and a cp from an NFS mount
is more like a fifth of it.
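
For reference, the dd runs are nothing exotic; they're roughly along these
lines (the mount point, block size and count below are illustrative rather
than my exact commands):

    # 10GB write test, straight onto the gfs2 mount on one node
    dd if=/dev/zero of=/mnt/gfs2/test10g bs=1M count=10240

    # read the same file back
    dd if=/mnt/gfs2/test10g of=/dev/null bs=1M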

I've spent a lot of time reading up on GFS2 performance, but I haven't
found anything useful for improving throughput with large files. Has
anyone got any suggestions or managed to solve a similar problem?

-- 
Andy Wallace



