RES: [Linux-cluster] GFS Performance advise

Leonardo Rodrigues de Mello Leonardo.Mello at
Wed Jul 26 12:51:07 UTC 2006

Other things I forgot in my last message:

1- You don't need a SAN to run an ACTIVE/PASSIVE cluster with data replication between the servers. Check out DRBD 0.7 + Heartbeat or the RH Cluster Suite; together they can do the job without the cost of an external storage device.
A good place to start is: 

2- Red Hat is developing a cluster RAID approach that should be better than DRBD, because it will make it possible to create a distributed RAID that spans the storage of the servers. I don't know how the development of this "draid" is going or when it will be made stable. Can anyone from RH say something about this topic? 

Some links for you: 
a - Official documentation

b - DRBD installation

c - drbd + heartbeat integration
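
The DRBD + Heartbeat setup from point 1 could be sketched roughly as below. This is only an illustrative fragment, not a tested configuration: the hostnames (node1/node2), devices, IP addresses, mount point, and resource name are all placeholders you would replace with your own.

```
# /etc/drbd.conf -- minimal DRBD 0.7-style resource (all names/addresses
# are placeholders for illustration)
resource r0 {
  protocol C;                 # synchronous replication: safest for failover
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda3;      # local backing partition
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}

# /etc/ha.d/haresources -- Heartbeat v1 resource line: node1 is the
# preferred active node; it promotes DRBD, mounts ext3, starts Samba
node1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 smb
```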

best regards
Leonardo Rodrigues de Mello
-----Original Message-----
From:	Leonardo Rodrigues de Mello on behalf of Leonardo Rodrigues de Mello
Sent:	Wed 26/7/2006 09:36
To:	linux clustering
Subject:	RES: [Linux-cluster] GFS Performance advise

GFS is only necessary if you have two or more machines that access (READ+WRITE) the filesystem at the same time. GFS creates and manages a global lock for the filesystem, among other things, to make sure the filesystem can be shared among the cluster nodes without corruption.

Besides that, if you have an active/passive setup you can use ext3 without any problem. You can use GFS with lock_nolock too; if you are having problems with GFS lock_nolock, something is probably misconfigured in your setup. You CAN'T use GFS lock_nolock the same way you use GFS with DLM or GULM: mounting a lock_nolock filesystem from more than one node at a time will corrupt it. I don't know whether GFS even permits a configuration like that. 
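
The lock protocol is chosen when the filesystem is created (and can be overridden at mount time). A rough sketch of the two cases described above, with placeholder device, cluster, and filesystem names:

```shell
# Active/passive (only ever mounted on ONE node at a time):
# no cluster locking needed, one journal is enough
gfs_mkfs -p lock_nolock -j 1 /dev/vg0/share      # device is a placeholder

# Active/active (mounted on SEVERAL nodes simultaneously):
# cluster-wide DLM locking; one journal per node
gfs_mkfs -p lock_dlm -t mycluster:share -j 2 /dev/vg0/share

# Mount on the active node
mount -t gfs /dev/vg0/share /srv/share
```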

best regards
Leonardo Rodrigues de Mello

-----Original Message-----
From:	linux-cluster-bounces at on behalf of Tomer Okavi
Sent:	Wed 26/7/2006 03:17
To:	linux-cluster at
Subject:	[Linux-cluster] GFS Performance advise

I have a Samba file server cluster (active/passive) with 2 cluster nodes on
CentOS 4.3. Both nodes are connected to a shared storage through a Fibre
Channel switch + HBAs. The shared storage holds the file system that Samba
shares to the Windows clients; only one cluster node (the active one) mounts
the file system. Currently I'm using ext3 as the file system on the shared
storage. Because I've experienced slow response times and locking problems
from Samba, I've tried formatting the shared file system with GFS, disabling
locks (lock_nolock), and tried mounting the file system with
with no success: Samba still complains about oplock breaks, and the Windows
systems connecting to the shares see slow performance.
The Samba server exports the file system to 3 IIS servers through UNC paths;
it deals with lots (around 1,000,000) of small (under 250 KB) files.
When using ext3 as the file system for the Samba shares I have no problems.
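
For reference, oplock behaviour on a clustered filesystem is usually tuned in smb.conf rather than at the filesystem level. A minimal sketch (the share name and path are assumptions, not from the original setup):

```
# smb.conf fragment -- [data] and its path are placeholders
[data]
   path = /srv/share
   # Cluster filesystems such as GFS have no kernel-oplock support,
   # so let Samba handle (or simply disable) opportunistic locks:
   kernel oplocks = no
   oplocks = no
   level2 oplocks = no
```

Disabling oplocks trades some Windows-client caching for safety when the same files can be reached by another path (e.g. the IIS servers mentioned above).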

1. Should I use GFS for the file system to avoid file system corruption in
case one cluster node crashes, or is ext3 a good enough solution?
2. Why, when using GFS with lockproto=lock_nolock, localcaching, and localflocks, do I
still see "glock nq calls" and "lm_lock calls" in gfs_tool counters?

My main goal is to achieve maximum Samba performance with the lowest chance
of file system corruption in case of a failover or a crashed cluster node.


Tom Ok.


More information about the Linux-cluster mailing list