[linux-lvm] Mirror between different SAN fabrics

Christian.Rohrmeier at SCHERING.DE
Thu Dec 28 10:55:49 UTC 2006


Hi,

I haven't tried it in a cluster yet. I was planning on using HP's
MC/ServiceGuard to handle HA clustering. I don't see why the LUNs that
are used on one system with mdadm can't be used on another, since the
RAID superblock is on the disk and is readable even on a system where
the array wasn't created. /etc/mdadm.conf will of course need to be
copied and kept current on all cluster nodes, but with the config file
and the RAID superblock on disk, an "mdadm --assemble" should work.
Importing the LVM structures should then also not be a problem.
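
For example, on a second node I would expect something along these
lines to bring the array and the volume group up (untested, so only a
sketch; "node2" is an illustrative hostname):

[root@node2 ~]# mdadm --assemble --scan    # uses the copied /etc/mdadm.conf
[root@node2 ~]# vgscan                     # rescan for LVM metadata on /dev/md0
[root@node2 ~]# vgchange -ay vg01          # activate the volume group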

Well, like I said, that's what I assume will work, as I have not yet
tested this setup. I will do so shortly and report my findings to the
list. If that doesn't work, then I would, as you suggested, also try
Red Hat Cluster Suite.

Cheers,

Christian



                                                                           
From: <mathias.herzog at postfinance.ch>
Sent by: linux-lvm-bounces at redhat.com
Date: 28.12.2006 11:13
To: <linux-lvm at redhat.com>
Subject: RE: [linux-lvm] Mirror between different SAN fabrics
Reply-To: LVM general discussion and development <linux-lvm at redhat.com>

Hi

Your solution looks nice. But I just found out that, unlike LVM2,
mdadm is not cluster-aware: it does not seem possible to transfer RAID
state information from one node to another.
As we use Red Hat Cluster Suite, we depend on a cluster-aware solution.

Regards, Mathias

> -----Original Message-----
> From: linux-lvm-bounces at redhat.com
> [mailto:linux-lvm-bounces at redhat.com] On Behalf Of
> Christian.Rohrmeier at SCHERING.DE
> Sent: Thursday, 28 December 2006 09:49
[...]
> Hi,
>
> Here is a nice example from one of my RHEL 4 Oracle servers:
>
> We have three layers:
>
> first the LUNs from the SAN are multipathed to device aliases:
>
> [root@ ~]# multipath -ll
> sanb (XXXX60e8003f653000000XXXX000001c7)
> [size=101 GB][features="1 queue_if_no_path"][hwhandler="0"]
> \_ round-robin 0 [active]
>  \_ 0:0:1:1 sdb 8:16 [active][ready]
>  \_ 1:0:1:1 sdd 8:48 [active][ready]
>
> sana (XXXX60e80039cbe000000XXXX000006ad)
> [size=101 GB][features="1 queue_if_no_path"][hwhandler="0"]
> \_ round-robin 0 [active]
>  \_ 0:0:0:1 sda 8:0  [active][ready]
>  \_ 1:0:0:1 sdc 8:32 [active][ready]
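>
> (For reference, those alias names are defined in /etc/multipath.conf;
> a minimal sketch, with the WWIDs masked as above:)
>
> multipaths {
>     multipath {
>         wwid  XXXX60e80039cbe000000XXXX000006ad
>         alias sana
>     }
>     multipath {
>         wwid  XXXX60e8003f653000000XXXX000001c7
>         alias sanb
>     }
> }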
>
> Next these multipath aliases are RAIDed:
>
> [root@ ~]# mdadm --detail /dev/md0
> /dev/md0:
>         Version : 00.90.01
>   Creation Time : Thu Nov  2 13:07:01 2006
>      Raid Level : raid1
>      Array Size : 106788160 (101.84 GiB 109.35 GB)
>     Device Size : 106788160 (101.84 GiB 109.35 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 0
>     Persistence : Superblock is persistent
>
>     Update Time : Thu Dec 28 09:36:19 2006
>           State : clean
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
>
>
>            UUID : b5ac4ae9:99da8114:744a7ebb:aba6f687
>          Events : 0.4254576
>
>     Number   Major   Minor   RaidDevice State
>        0     253        2        0      active sync   /dev/mapper/sana
>        1     253        3        1      active sync   /dev/mapper/sanb
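>
> (The mirror itself was created with something along these lines;
> treat the exact command as a sketch from memory:)
>
> [root@ ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
>            /dev/mapper/sana /dev/mapper/sanb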
>
> And finally, the RAID device is used with LVM:
>
> [root@ ~]# vgs -o +devices
>   VG   #PV #LV #SN Attr   VSize   VFree Devices
>   vg00   2   2   0 wz--n-  31.78G    0  /dev/cciss/c0d0p2(0)
>   vg00   2   2   0 wz--n-  31.78G    0  /dev/cciss/c0d0p4(0)
>   vg00   2   2   0 wz--n-  31.78G    0  /dev/cciss/c0d0p2(250)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(0)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(5120)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(5376)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(5632)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(8192)
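>
> (Stacking LVM on top is the usual sequence; the LV name and size
> below are only illustrative:)
>
> [root@ ~]# pvcreate /dev/md0
> [root@ ~]# vgcreate vg01 /dev/md0
> [root@ ~]# lvcreate -n lvdata -L 20G vg01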
>
> This works very well: both a path and a mirror leg can break away
> without any disruption in disk access.
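>
> (You can exercise the mirror side by failing and re-adding one leg
> by hand; a sketch, using the sanb leg:)
>
> [root@ ~]# mdadm /dev/md0 --fail /dev/mapper/sanb     # mark the leg faulty
> [root@ ~]# mdadm /dev/md0 --remove /dev/mapper/sanb   # detach it
> [root@ ~]# mdadm /dev/md0 --add /dev/mapper/sanb      # re-add; md resyncs
> [root@ ~]# cat /proc/mdstat                           # watch the resync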
>
> Cheers,
>
> Christian
