[Linux-cluster] since GFS we are having issues with load and bad fs performance

Patrick Neuner ortsinfo at gmail.com
Thu Aug 28 10:40:29 UTC 2008


We are using Virtuozzo virtualisation and have migrated everything to the GFS file system.

All virtual environments live in separate directories.
The GFS is mounted at /vz/.
Virtual environments are under /vz/private/###.
Templates that can be used by all environments (read-only) are under

The hardware is blades with two quad-core CPUs each, connected to an HP 4100 SAN.
We use multipath, which manages four routes to the storage. On the SAN side it is
VRAID 5 with lots of disks for fast performance.
The GFS LUN is 450 GB.
The servers are connected to each other via Gigabit Ethernet.

The filesystem was formatted using Conga (luci) with defaults.
We are using the newest packages from 5.2, with all current updates.

Since then (using the exact same hardware as before, and we even tried another
server) we have been seeing very poor performance. In terms of load: all servers
used to sit between 1 and 2 and behaved normally; now we are between 4 and 12,
with the load jumping around all the time, and during backups it gets even worse.
We are already in the process of moving environments away from GFS back to ext3.

The virtual environments usually run web servers and mail servers, so lots of
small files, usually more reads than writes.

Opening web pages, especially with CMS systems, is slower, and backups (pure
reads) are very slow.

I know there is some overhead with GFS, but here are the numbers for a single
780 MB compressed file on our setup:
copy from ext3 to ext3: 6 seconds
copy from ext3 to gfs: 12 seconds
copy from gfs to gfs: 15 seconds

(The gfs and ext3 filesystems are LUNs on the same SAN.)
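For what it's worth, the numbers above came from plain timed `cp` runs. Here is a
minimal sketch of the method with stand-in temp directories; in the real test the
source and destination were the ext3 and GFS mounts, and the file was the 780 MB
archive rather than 10 MB of zeros:

```shell
# Stand-in directories; in the real test these were the ext3 and GFS mounts.
src=$(mktemp -d)
dst=$(mktemp -d)

# Create a test file (10 MB here; the real test used a 780 MB archive).
dd if=/dev/zero of="$src/test.bin" bs=1M count=10 2>/dev/null

# Time the copy; repeat per pair (ext3->ext3, ext3->gfs, gfs->gfs).
t0=$(date +%s)
cp "$src/test.bin" "$dst/test.bin"
t1=$(date +%s)
echo "copy took $((t1-t0)) seconds"

# Verify the copy is intact.
cmp -s "$src/test.bin" "$dst/test.bin" && echo "copy ok"
```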

Here I am talking about a single file, so locking shouldn't be an issue.
I have googled a lot about performance and found some hints (fast_fs and so on)
that can supposedly be changed using gfs_tool, but it seems they are not
available with the standard RPMs.
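In case it helps others reading this, the kind of tuning I found looks roughly
like the following. This is only a sketch: it assumes /vz is the GFS mount point
and an example device name, and each tunable should be checked against
`gfs_tool gettune` on your version before relying on it:

```shell
# List the tunables this GFS version actually supports.
gfs_tool gettune /vz

# Mount with noatime/nodiratime (set in /etc/fstab) to avoid an atime
# write, and the lock traffic it causes, on every read:
#   /dev/mapper/vz_lun  /vz  gfs  defaults,noatime,nodiratime  0 0

# Enable the faster (less exact) statfs, if this version supports it.
gfs_tool settune /vz statfs_fast 1
```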

Everything else seems stable for us; it is just the filesystem performance that
is so slow on all servers connected to the GFS. I know it's not our hardware:
ext3 on the same hardware is way faster, not 20% or so, but several times faster.

So what I would like to know, since I am sure lots of people run GFS in bigger
environments than ours (only 3 servers on a 450 GB GFS): is this normal
behaviour? Is GFS really so much slower than ext3, producing so much locking
overhead, or could there be some other reason?

Our hardware is not the problem here.
This is probably my last attempt to see if there is anything else we can do
before we have to move back to ext3.

BTW: how can I do a filesystem check of a GFS without having to unmount the
running one?
I thought of taking a snapshot and presenting that to a host, but my first
attempt didn't work: gfs_fsck said there was no GFS file system on the presented
LUN, even though it was a live snapshot.
So I am not sure whether it is possible at all to snapshot a GFS and then run a
filesystem check on it. I just want to make sure it is not a filesystem error
left over from formatting. (I read that someone else had that problem too.)
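What I had in mind for the snapshot check was something like the following. This
is only a sketch with an example device name, not our actual LUN, and it assumes
gfs_fsck can find the GFS superblock on the presented snapshot:

```shell
# Present the snapshot LUN to a host (SAN-side step), then check it
# read-only. -n answers "no" to every repair question, so the
# snapshot is only inspected, never modified; -v is verbose output.
gfs_fsck -n -v /dev/mapper/vz_snapshot
```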
