[Linux-cluster] doubts about using clvm

Bowie Bailey Bowie_Bailey at BUC.com
Tue Jan 3 19:44:50 UTC 2006


Ian Blenke wrote:
> Bowie Bailey wrote:
> > carlopmart wrote:
> > 
> > > Thanks Erling. But I have one last question. I will try to combine
> > > two disks on the GFS client side. If I understand correctly, I
> > > first need to import the gnbd devices on both GFS nodes, right?
> > > And second, I need to set up LVM from the GFS nodes and start the
> > > clvm service on both nodes too. But do I need to create the shared
> > > LVM disk on both GFS nodes, or only on one node?
> > > 
> > 
> > I am doing the same thing on my server using AoE drives rather than
> > GNBD. 
> > 
> > You create the clvm volumes and GFS filesystem(s) from one node,
> > and then use "vgscan" to load it all in on the second node.
> 
> When a node goes down or is rebooted, how do you clear the "down,
> closewait" state on the remaining nodes that refer to that
> vblade/vblade-kernel export?
> 
> The "solution" appears to be stop lvm (to release open file handles to
> the /dev/etherd/e?.? devices), unload "aoe", and reload "aoe". On the
> remaining "good" nodes.
> 
> This particular problem has me looking at gnbd devices again.
> 
> If aoe were truly stateless, and the aoe clients could recover
> seamlessly on the restore of a vblade server, I'd have no issues.

I'm not sure what you mean.  I have shut down my GFS nodes several times
now without any effect whatsoever on the remaining nodes.  The only
issue I have had has been with fencing, since I am currently using
manual fencing while trying to get my WTI power switches configured.
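If I ever do run into that, I assume the recovery you describe would
boil down to something like the following on each surviving node (the
volume group and mount point names here are made up, and I would expect
the GFS filesystem to need unmounting first):

    umount /mnt/shared
    vgchange -an shared_vg   # release the handles on /dev/etherd/e?.?
    rmmod aoe
    modprobe aoe
    vgchange -ay shared_vg
    mount -t gfs /dev/shared_vg/shared_lv /mnt/shared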

We still need to do quite a bit of testing on this setup, so it's
possible there are problems that I have not encountered yet, but so far
it has worked very well for me.
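For reference, the sequence I described above was roughly the
following.  The device paths, sizes, and names are only placeholders
from my AoE setup; with GNBD the imported devices should show up under
/dev/gnbd/ instead of /dev/etherd/:

    # On the first node (clvmd already running on both nodes):
    pvcreate /dev/etherd/e0.0 /dev/etherd/e1.0
    vgcreate shared_vg /dev/etherd/e0.0 /dev/etherd/e1.0
    lvcreate -L 100G -n shared_lv shared_vg
    gfs_mkfs -p lock_dlm -t mycluster:shared_gfs -j 2 /dev/shared_vg/shared_lv

    # On the second node:
    vgscan
    vgchange -ay shared_vg
    mount -t gfs /dev/shared_vg/shared_lv /mnt/shared

Since clvmd keeps the LVM metadata in sync between the nodes, the
volumes only need to be created once; the second node just scans and
activates them.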

-- 
Bowie



