[Linux-cluster] Storage Cluster Newbie Questions - any help with answers greatly appreciated!

Michael @ Professional Edge LLC m3 at professionaledgellc.com
Mon Mar 15 05:37:54 UTC 2010


Rafael,

Appreciate the link to Brem's script.  I had actually found that 
previously, and I think the piece of the puzzle I'm missing to make 
it useful is figuring out how to disable either the auto-load of 
"qla2xxx" (my fiber HBA driver) and/or the auto-detect and 
reassembly on boot of all mdadm volumes... or, as Leo said, allow it 
all to start but put in an S00 mdadm stop script (which seems a bit 
too much like a hack for my tastes).
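For what it's worth, both pieces can usually be kept out of the boot path 
on RHEL 5 without the S00 hack.  A hedged sketch - the helper name and the 
optional prefix argument (for rehearsing against a scratch directory) are 
mine, and the paths are the stock RHEL 5 locations, so verify on your boxes:

```shell
# Sketch: stop the HBA driver auto-loading and stop rc.sysinit from
# auto-assembling mdadm arrays at boot on RHEL 5.
disable_storage_autostart() {
    root="${1:-}"    # optional prefix, e.g. a scratch dir for a dry run

    # 1. Blacklist the HBA driver so modprobe/udev will not auto-load it.
    #    (If a copy is baked into the initrd, the initrd also needs a
    #    rebuild with mkinitrd to keep it out of early boot.)
    mkdir -p "$root/etc/modprobe.d"
    echo "blacklist qla2xxx" >> "$root/etc/modprobe.d/blacklist-qla2xxx"

    # 2. Comment out the ARRAY lines so the "mdadm -A -s" that RHEL 5's
    #    rc.sysinit runs at boot finds nothing to assemble; the cluster
    #    can then assemble the arrays itself when a service starts.
    if [ -f "$root/etc/mdadm.conf" ]; then
        sed -i 's/^ARRAY/#ARRAY/' "$root/etc/mdadm.conf"
    fi
}
```

Run with no argument on the real system; that way the driver never loads 
and the arrays never assemble, instead of being torn down after the fact.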

As for CLVM with mirrors... just reading through the technical docs 
made me worry, and your results are about on par with what I was 
expecting... which is that I would like to wait a few more releases 
before I try that in production.

-Michael

Rafael Micó Miranda wrote, On 3/11/2010 3:13 PM:
> Hi Michael
>
> On Wed, 03-03-2010 at 11:16 -0800, Michael @ Professional Edge LLC 
> wrote:
>> Hail Linux Cluster gurus,
>>
>
> [...]
>
>>    From what I can find messing with Luci (Conga), though, I don't
>> see any resource scripts listed for "mdadm" (on RHEL 5.4) - so would
>> my idea even work? (I have found some posts asking for an mdadm resource
>> script, but I've seen no response.)  I also see that as of RHEL 5.3, LVM
>> mirrors can be clustered - is this the right answer?  I've done
>> a ton of reading, but everything I've dug up so far assumes that the
>> fiber devices are being presented by a SAN that is doing the redundancy
>> before the RHEL box sees the disk... or there are a ton of examples
>> where fiber is not in the picture and a bunch of locally
>> attached hosts present storage over TCP (ethernet) - but I've found
>> nearly nothing on my situation...
>>
>
> You can find an unofficial mdadm rgmanager resource script in this thread:
>
> https://www.redhat.com/archives/linux-cluster/2009-August/msg00111.html
>
> The resource was developed by Brem Belguebli; maybe if he sees this 
> thread he can give you more detail on the capabilities of the 
> resource script and its compatibility with LVM / CLVM on top of it.
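[Editor's note: for illustration, a service using such an agent might be 
wired into /etc/cluster/cluster.conf roughly as below.  This is a 
hypothetical sketch - the "mdraid" resource name and its attributes depend 
entirely on Brem's unofficial agent; only the <lvm> child is a stock 
rgmanager resource.]

```xml
<rm>
  <resources>
    <!-- Hypothetical: element/attribute names depend on the agent -->
    <mdraid name="md0" mddev="/dev/md0" raidconf="/etc/cluster/mdadm.conf"/>
  </resources>
  <service autostart="1" name="storage" recovery="relocate">
    <mdraid ref="md0">
      <!-- stock rgmanager lvm resource layered on top of the array -->
      <lvm name="vg01" vg_name="volgrp01"/>
    </mdraid>
  </service>
</rm>
```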
>
> On the other hand, you can use mirrored LVM volumes directly with 
> CLVM. I did some testing of the behaviour of mirrored LVM and found 
> some strange things when only one node loses its connection to the 
> device - in your situation this would happen when one of the nodes 
> loses one of its links to one of the shelves. The device was marked 
> as "unknown device" on the node with the failed path, while on the 
> other node everything seemed fine. What happens during the 
> "reconstruction" of the failed volume can be unpredictable. Note the 
> "99,60" in the Copy% column; it kept the same value the whole time 
> the path was down:
>
> NODE A
> lvs -a -o +vg_name,device
> LV                      VG       Attr   LSize   Origin Snap% Move Log               Copy%  Convert VG       Devices
> logvolquorum            volgrp01 mwi-ao 996,00M                  logvolquorum_mlog 100,00         volgrp01 logvolquorum_mimage_0(0),logvolquorum_mimage_1(0)
> [logvolquorum_mimage_0] volgrp01 iwi-ao 996,00M                                                   volgrp01 /dev/mpath/quorum01(0)
> [logvolquorum_mimage_1] volgrp01 iwi-ao 996,00M                                                   volgrp01 /dev/mpath/quorum02(0)
> [logvolquorum_mlog]     volgrp01 lwi-ao   4,00M                                                   volgrp01 /dev/mpath/logquorum01(0)
>
>
> NODE B (failed path)
> lvs -a -o +vg_name,device
> LV                      VG       Attr   LSize   Origin Snap% Move Log               Copy%  Convert VG       Devices
> logvolquorum            volgrp01 mwi-ao 996,00M                  logvolquorum_mlog  99,60         volgrp01 logvolquorum_mimage_0(0),logvolquorum_mimage_1(0)
> [logvolquorum_mimage_0] volgrp01 Iwi-ao 996,00M                                                   volgrp01 /dev/mpath/quorum01(0)
> [logvolquorum_mimage_1] volgrp01 Iwi-ao 996,00M                                                   volgrp01 unknown device(0)
> [logvolquorum_mlog]     volgrp01 lwi-ao   4,00M                                                   volgrp01 /dev/mpath/logquorum01(0)
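[Editor's note: the first character of the Attr column is what actually 
flags the problem in these listings - node A's images show "iwi-ao" 
(mirror image, in sync) while node B's show "Iwi-ao", the capital I 
meaning an out-of-sync mirror image per lvs(8).  A quick decoder for the 
type characters that appear above:]

```shell
# Decode the first character of the lvs "Attr" string for the volume
# types seen in the listings above (see lvs(8) for the full table).
lv_attr_type() {
    c=$(printf '%.1s' "$1")    # first character of the attr string
    case "$c" in
        m) echo "mirrored volume" ;;
        i) echo "mirror image, in sync" ;;
        I) echo "mirror image, OUT of sync" ;;
        l) echo "mirror log" ;;
        *) echo "other" ;;
    esac
}

lv_attr_type mwi-ao   # mirrored volume
lv_attr_type iwi-ao   # mirror image, in sync       (node A)
lv_attr_type Iwi-ao   # mirror image, OUT of sync   (node B, failed path)
```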
>
>
> I hope this brings some light to your questions.
>
> Cheers,
>
> Rafael
>
> -- 
> Rafael Micó Miranda
>



