[Linux-cluster] doubts about using clvm

Ian Blenke ian at blenke.com
Tue Jan 3 18:18:04 UTC 2006


Bowie Bailey wrote:
> carlopmart wrote:
>   
>> Thanks, Erling. One last question: I will try to combine two disks
>> on the GFS client side. If I understand correctly, I first need to
>> import the gnbd devices on both GFS nodes, and then set up LVM from
>> the GFS nodes and start the clvmd service on both nodes as well.
>> But do I need to create the shared LVM disk on both GFS nodes, or
>> only on one node?
>>     
>
> I am doing the same thing on my server using AoE drives rather than
> GNBD.
>
> You create the clvm volumes and GFS filesystem(s) from one node, and
> then run "vgscan" on the second node to pick them up.
>   
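For reference, that procedure might look roughly like the sketch below.
It is only a sketch: the server name "gnbdserver", the device names
under /dev/gnbd/, the volume group "vg_gfs", the logical volume
"lv_gfs", and the cluster name "mycluster" are all placeholders, and
exact command syntax may differ between releases.

    # On both GFS nodes: import the exported gnbd devices and start clvmd
    gnbd_import -i gnbdserver
    service clvmd start

    # On ONE node only: create the clustered VG and the GFS filesystem
    pvcreate /dev/gnbd/disk1 /dev/gnbd/disk2
    vgcreate vg_gfs /dev/gnbd/disk1 /dev/gnbd/disk2
    lvcreate -n lv_gfs -L 10G vg_gfs
    gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 2 /dev/vg_gfs/lv_gfs

    # On the second node: rescan and activate; nothing is created here
    vgscan
    vgchange -ay vg_gfs

With clvmd running on both nodes, the LVM metadata written on the first
node is visible to the second; vgscan just makes it show up there.
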
When a node goes down or is rebooted, how do you clear the "down,
closewait" state on the remaining nodes that refer to that
vblade/vblade-kernel export?

The "solution" appears to be to stop LVM (to release the open file
handles on the /dev/etherd/e?.? devices), then unload and reload the
"aoe" module on the remaining "good" nodes.
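
Concretely, that workaround looks something like this on each surviving
node (aoe-stat comes from the aoetools package; the volume group name
is again just a placeholder):

    # A dead vblade leaves its device stuck in "down,closewait"
    aoe-stat

    # Release LVM's open handles on the /dev/etherd/e?.? devices
    vgchange -an vg_gfs
    service clvmd stop

    # Unload and reload the aoe driver to clear the stale state
    rmmod aoe
    modprobe aoe

    # Bring everything back up
    service clvmd start
    vgchange -ay vg_gfs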

This particular problem has me looking at gnbd devices again.

If AoE were truly stateless, and the AoE clients could recover
seamlessly when a vblade server comes back, I'd have no issues.

 - Ian C. Blenke <ian at blenke.com> http://ian.blenke.com/
