[Linux-cluster] GFS in a small cluster

Kees Hoekzema kees at tweakers.net
Thu May 29 14:37:02 UTC 2008

Hello List,

Recently we have been looking at replacing our NFS server with a SAN in our
(relatively small) webserver cluster. We decided to go with the Dell
MD3000i, an iSCSI SAN. Right now I have it for testing purposes and I'm
trying to set up a simple cluster to get more experience with it. At the
moment we do not run Redhat, but Debian; so although this is probably the
wrong mailing list for me, I could not find any other place where problems
like this are discussed.

The cluster, if it goes into production, will have to serve 'dynamic' files
to the webservers; these include images, videos and generic downloads. So
what will happen on the SAN is many reads and relatively few writes: at the
moment the read/write proportions on the NFS server are around 99% reads vs
1% writes, and the only writes that occur are users uploading a new image or
one server creating some graphs.

Not only the webservers will use this SAN; the database servers will also
read some files from it. I have been looking at different filesystems to run
on this SAN that suit my needs, and GFS is one of them, but I have a few
problems and questions.
- Is locking really needed? There is no chance one webserver will try to
write to a file that is being written to by another server.
- How about fencing? I'd rather have a corrupt filesystem than a corrupt
database, however silly that may sound. I do not want the webservers to be
able to switch off the (infinitely more important) database servers, and all
servers can work without problems without the share; they will still serve
most of the content, just not the user-uploaded images / videos / downloads.
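On the fencing question: in Red Hat Cluster Suite, fencing is configured per node in /etc/cluster/cluster.conf, so one option worth exploring is giving the database nodes a fence method that cannot power them off automatically. A sketch (hostname and device names are made up, and fence_manual is generally not supported for production; it only illustrates that the agent is chosen per node):

```xml
<clusternode name="db1" nodeid="1">
  <fence>
    <method name="1">
      <!-- fence_manual requires an operator to acknowledge the fence,
           so no webserver can power-cycle db1 automatically -->
      <device name="human"/>
    </method>
  </fence>
</clusternode>
<fencedevices>
  <fencedevice name="human" agent="fence_manual"/>
</fencedevices>
```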

Is GFS the right FS for me, or do I need to look at other (cluster-aware)
filesystems?

From the FAQ: http://sources.redhat.com/cluster/wiki/FAQ/GFS#gfs_whatgood
What I really need is a filesystem that is cluster-aware, i.e. one that
knows and reacts to the fact that systems other than itself are able to
write to the disk. As said, ext3 does not know that: mount it on both
systems and they do see the original data, but as soon as one changes
something the other won't pick it up.

Anyway, I tried GFS with the lock_nolock protocol, but then I might as well
use ext3. With any other protocol, the mount just hangs with:

Trying to join cluster "lock_dlm", "tweakers:webdata"
dlm: Using TCP for communications
dlm: connect from non cluster node
BUG: soft lockup - CPU#2 stuck for 11s! [dlm_send:3566]
Pid: 3566, comm: dlm_send Not tainted (2.6.24-1-686 #1)
EIP: 0060:[<c02bdbe9>] EFLAGS: 00000202 CPU: 2
EIP is at _spin_unlock_irqrestore+0xa/0x13
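The "dlm: connect from non cluster node" line above usually indicates that a node trying to join is not listed in /etc/cluster/cluster.conf on every member, or that the lock table given at mkfs time does not match the cluster name. A sketch of the relevant pieces (hostnames and device are assumptions; "tweakers:webdata" comes from the log above):

```shell
# The lock table is <clustername>:<fsname> and must match cluster.conf
mkfs.gfs -p lock_dlm -t tweakers:webdata -j 2 /dev/sdb1
```

```xml
<?xml version="1.0"?>
<cluster name="tweakers" config_version="1">
  <clusternodes>
    <!-- every node that will mount the FS must appear here,
         with the same file on all nodes -->
    <clusternode name="web1" nodeid="1"/>
    <clusternode name="web2" nodeid="2"/>
  </clusternodes>
</cluster>
```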

The other FS we looked at was OCFS2. Although it is a lot easier to set up
and works without any problems, it has a limit of 32k subdirectories per
directory, something we easily surpass on our current shares (over 50k
directories in one dir).
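For anyone wanting to check whether an existing share would hit that limit, a quick sketch (the `count_subdirs` helper is my own name, not part of any tool) that counts the immediate subdirectories of a path, which is what OCFS2's link-count limit constrains:

```python
import os
import tempfile

def count_subdirs(path):
    """Count immediate subdirectories of path (each adds a hard link
    to the parent, which is what OCFS2's 32k limit caps)."""
    return sum(1 for e in os.scandir(path) if e.is_dir(follow_symlinks=False))

# Demo against a throwaway tree with 5 subdirectories
root = tempfile.mkdtemp()
for i in range(5):
    os.mkdir(os.path.join(root, "d%d" % i))
print(count_subdirs(root))  # -> 5
```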

Anyway, is there a method to have GFS mounted without locking, but still
cluster-aware (i.e. the FS can be updated by other servers) and without
fencing?
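For completeness: the lock protocol chosen at mkfs time can be overridden at mount time with the lockproto option, but that does not give cluster-awareness. A sketch (device name and mount point are assumptions):

```shell
# Override the on-disk lock protocol at mount time. With lock_nolock
# only ONE node may ever mount the filesystem; mounting it on two
# nodes at once will corrupt it, so this is not a cluster-aware setup.
mount -t gfs -o lockproto=lock_nolock /dev/sdb1 /mnt/webdata
```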


More information about the Linux-cluster mailing list