[Linux-cluster] Quorum disk over RAID software device
Rafael Micó Miranda
rmicmirregs at gmail.com
Mon Dec 21 20:05:03 UTC 2009
Hi Brem
On Wed, 16-12-2009 at 20:41 +0100, brem belguebli wrote:
> In my multipath setup I use the following :
>
> polling_interval 3 (checks the storage every 3 seconds)
> no_path_retry 5 (will retry a failed path 5 times, so a failure
> lasts the scsi timer (/sys/block/sdXX/device/timeout) + 5*3
> seconds)
>
> path_grouping_policy multibus (to load-balance across all paths;
> group_by_prio may be recommended with the MSA if it is an
> active/passive array?)
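As a sketch, assuming those values go in the defaults section of /etc/multipath.conf (the exact section layout and comments are mine, not from Brem's mail):

```
defaults {
        polling_interval        3
        no_path_retry           fail      # or 5; "fail" when md/LVM mirroring provides redundancy
        path_grouping_policy    multibus  # group_by_prio instead for active/passive arrays
        flush_on_last_del       yes       # flush I/O rather than queue it when the last path goes
}
```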
>
> From my experience,
> no_path_retry, when using mirror (md or LVM), could be set to fail
> instead of 5 in my case.
>
> Concerning flush_on_last_del, it simply controls, for a given LUN,
> what behaviour to adopt when the last remaining path fails.
>
> Same consideration, if using mirror, just fail.
>
> The thing to take into account is the interval at which your qdisk
> process accesses the qdisk LUN. If it is configured to a high value
> (let's imagine every 65 seconds), it'll take (worst case) 60 seconds
> of SCSI timeout (default) + 12 times the default polling interval
> (30 seconds if I'm not wrong) + 5 seconds = 425 seconds...
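That worst-case estimate can be checked with a quick calculation (the 12 retries and 30-second polling interval are the assumed defaults from Brem's mail, not values I have verified against multipathd):

```python
# Worst-case time for multipath to give up on a dead qdisk LUN,
# using the values assumed in the mail above.
scsi_timeout = 60        # default SCSI command timeout, /sys/block/sdXX/device/timeout
polling_interval = 30    # assumed default multipathd polling interval
retries = 12             # assumed number of path retries before failing
grace = 5                # extra seconds mentioned in the example

worst_case = scsi_timeout + retries * polling_interval + grace
print(worst_case)  # 425
```

So a quorum disk timeout shorter than roughly seven minutes could fire before multipath has even finished failing the paths.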
>
> Brem
>
> 2009/12/16 Rafael Micó Miranda <rmicmirregs at gmail.com>:
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
After some testing I have to drop the idea.
With both solutions (MD software RAID and LVM mirror) I see
inconsistencies: when one disk of the pair fails, it sometimes leaves
the qdisk device unreachable. I have tested different multipath
options (fail_if_no_queue with the fail value) and I find the
behaviour unpredictable.
I'm dropping the idea and going back to a 6-node + 1 qdisk device
architecture.
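For reference, a qdiskd stanza for that architecture in /etc/cluster/cluster.conf might look roughly like this (the device path, label, timing values and heuristic are placeholders, not my actual config):

```xml
<!-- sketch: interval * tko = 20 s before a node is declared dead by qdiskd -->
<quorumd interval="2" tko="10" votes="1" label="qdisk">
        <heuristic program="ping -c1 -w1 192.168.0.1" score="1" interval="2"/>
</quorumd>
```

With a plain shared LUN behind multipath, the qdisk timeout just has to be kept longer than the multipath failover window discussed above.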
Thanks to all. Cheers,
Rafael
--
Rafael Micó Miranda