[Linux-cluster] Performance of ES3+GFS6.0+GNBD+LOCK_GULM

Hong Zheng hong.zheng at wsdtx.org
Tue Jan 31 15:02:31 UTC 2006


I found a problem with lock_nolock. If I mount the same GFS filesystem on two
nodes, each node just treats it as a local drive: whenever I make a change on
one node, the change doesn't show up on the other node. I don't know if this
is why the manual says that lock_nolock only works for a single node. Is there
a workaround for this issue?
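
For reference, my understanding is that a cluster-aware setup on GFS 6.0 would
use the lock_gulm protocol instead, chosen when the filesystem is created and
mounted, roughly like the following (the cluster name, filesystem name,
journal count and device path are only placeholders, not our real
configuration):

  # make the filesystem with the cluster-aware lock_gulm protocol
  gfs_mkfs -p lock_gulm -t mycluster:gfs01 -j 2 /dev/gnbd/gfsvol

  # mount it on each node
  mount -t gfs /dev/gnbd/gfsvol /mnt/gfs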

 

Thank you all.

 

________________________________

From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Hong Zheng
Sent: Monday, January 30, 2006 3:21 PM
To: linux clustering
Subject: RE: [Linux-cluster] Performance of ES3+GFS6.0+GNBD+LOCK_GULM

 

 

Today I tried the configuration with lock_nolock. I configured one GFS node
with lock_nolock, and the performance is like that of a local drive. But here
is the question: I still want to run it as a cluster, at least an
active-passive cluster. Since an active-passive cluster has only one active
node at any time, I assume the data will stay consistent when the backup node
takes over. I'm not sure whether this is an acceptable compromise between
better performance and data consistency.
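
My assumption is that the failover script would just move the mount between
the nodes, so the filesystem is never mounted on both at the same time.
Roughly (device path and mount point are placeholders):

  # active node only; the lock protocol can also be forced at mount time
  mount -t gfs -o lockproto=lock_nolock /dev/gnbd/gfsvol /mnt/gfs

  # on failover: make sure the old active node is unmounted (or fenced)
  # before the backup node mounts
  umount /mnt/gfs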

________________________________

From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Hong Zheng
Sent: Monday, January 30, 2006 9:20 AM
To: linux-cluster at redhat.com
Subject: Re: [Linux-cluster] Performance of ES3+GFS6.0+GNBD+LOCK_GULM

 

Thanks, Kevin.

 

Actually, I did try the approach you recommended. I configured one GFS
application node with a software iSCSI initiator and two lock_gulm servers.
The data transfer speed improved a little, but for our application the
performance is about the same. Do you know if there is a way to tune GFS
performance?
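
For example, are the GFS mount options and the gfs_tool tunables the right
place to look? Something like this is what I have in mind (the mount point and
values are just examples, and I'm not sure which tunables apply to our GFS 6.0
release):

  # avoid access-time updates, which add extra lock traffic
  mount -t gfs -o noatime /dev/gnbd/gfsvol /mnt/gfs

  # list the per-filesystem tunables, then adjust one
  gfs_tool gettune /mnt/gfs
  gfs_tool settune /mnt/gfs demote_secs 600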

 

 
