[Linux-cluster] How do you HA your storage?

Corey Kovacs corey.kovacs at gmail.com
Sat Apr 30 12:27:51 UTC 2011


This has nothing to do with any network. It's all over the fiber...

Points in time? It's RAID 1, so it's effectively instant. If anything,
managing a failover the way you describe is more complex.
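
A quick way to see what md did after a leg drops, assuming the mirror is
/dev/md0 and the legs are multipath devices (names are just examples):

    cat /proc/mdstat           # shows the mirror running degraded
    mdadm --detail /dev/md0    # per-leg state
    # once the failed SAN volume is back:
    mdadm /dev/md0 --re-add /dev/mapper/mpathb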

Well, my $0.02 anyway.

-C

On Sat, Apr 30, 2011 at 11:03 AM, urgrue <urgrue at bulbous.org> wrote:
> Yes, these work, but then each server is handling the job of mirroring
> its own disks, which has some disadvantages: network usage instead of
> fiber, more complex management of points-in-time compared to a nice big
> fat centralized SAN, etc. In my experience most companies favor
> SAN-level replication.
> The challenge is just getting Linux to recover gracefully when the SAN
> fails over. Worst case you can just reboot, but that's not very HA.
>
>
> On 30/4/11 13:23, Corey Kovacs wrote:
>>
>> What you seem to be describing is the mirror target for device mapper.
>>
>> Another alternative would be to set up a software RAID using
>> multipath'd LUNs:
>>
>> SANVOL1     SANVOL2
>>    |           |
>> MPATH1      MPATH2
>>     \         /
>>      RAID 1 DEV
>>           |
>>           PV
>>           |
>>           VG
>>           |
>>           LV
>>
>> That might work
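>>
>> Roughly, assuming the two LUNs show up as /dev/mapper/mpatha and
>> /dev/mapper/mpathb (example names; check yours with 'multipath -ll'):
>>
>>     # mirror the two multipath'd LUNs with md
>>     mdadm --create /dev/md0 --level=1 --raid-devices=2 \
>>         /dev/mapper/mpatha /dev/mapper/mpathb
>>     # then stack LVM on top as usual
>>     pvcreate /dev/md0
>>     vgcreate sanvg /dev/md0
>>     lvcreate -L 10G -n datalv sanvg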
>>
>> -C
>>
>>
>> On Sat, Apr 30, 2011 at 10:08 AM, urgrue <urgrue at bulbous.org> wrote:
>>>
>>> But how do you get dm-multipath to consider two different LUNs to be,
>>> in fact, two paths to the same device?
>>> I mean, normally multipath has two paths to one device.
>>> When we're talking about SAN-level mirroring, we've got two paths to
>>> two different devices (which just happen to contain identical data).
>>>
>>> On 30/4/11 11:47, Kit Gerrits wrote:
>>>>
>>>> With dual-controller arrays, dm-multipath keeps checking whether the
>>>> current device is still responding and switches to a different path if
>>>> it is not (for example, by reading sector 0).
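>>>>
>>>> The checker is configurable per array type in /etc/multipath.conf; a
>>>> rough sketch (vendor/product strings are placeholders for your array):
>>>>
>>>>     devices {
>>>>         device {
>>>>             vendor        "EXAMPLE"
>>>>             product       "ARRAY"
>>>>             path_checker  readsector0  # probe paths by reading sector 0
>>>>             no_path_retry 5            # queue I/O for 5 checker intervals
>>>>         }
>>>>     }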
>>>>
>>>> With SAN failover, you may need to tell the secondary SAN LUN to go
>>>> into read-write mode.
>>>> Unfortunately, I am not familiar with tying this into RHEL.
>>>> (Also, sector 0 will already be readable on the secondary LUN, but not
>>>> writable.)
>>>>
>>>> Maybe there is a write test which tries to write to both SANs; the
>>>> one that allows write access would become the active LUN.
>>>>
>>>> If you can switch your SANs inside 30 seconds, you might even be able to
>>>> salvage/execute pending write operations.
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Kit
>>>>
>>>> -----Original Message-----
>>>> From: linux-cluster-bounces at redhat.com
>>>> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of urgrue
>>>> Sent: Saturday, 30 April 2011 11:01
>>>> To: linux-cluster at redhat.com
>>>> Subject: [Linux-cluster] How do you HA your storage?
>>>>
>>>> I'm struggling to find the best way to deal with SAN failover.
>>>> By this I mean the common scenario where you have SAN-based mirroring.
>>>> It's pretty easy with host-based mirroring (md, DRBD, LVM, etc.), but
>>>> how can you minimize the impact and the manual effort to recover from
>>>> losing a LUN, when you somehow need to get your system to realize the
>>>> data is now on a different LUN (the now-active mirror)?
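>>>>
>>>> (By "easy" with host-based mirroring I mean something like an LVM
>>>> mirror with one leg on each array's LUN; device names are examples:
>>>>
>>>>     pvcreate /dev/mapper/mpatha /dev/mapper/mpathb
>>>>     vgcreate vg00 /dev/mapper/mpatha /dev/mapper/mpathb
>>>>     # one mirror leg on each array
>>>>     lvcreate -m 1 --mirrorlog core -L 10G -n datalv vg00 \
>>>>         /dev/mapper/mpatha /dev/mapper/mpathb
>>>>
>>>> If one array dies, the LV keeps running on the surviving leg.)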



