[Linux-cluster] fs.sh status hangs after device failures
jose nuno neto
jose.neto at liber4e.com
Mon Apr 19 15:30:39 UTC 2010
> On Mon, 2010-04-19 at 14:07 +0000, jose nuno neto wrote:
>> Hellos
>>
>> I'm testing the SAN under multipath failures and found a behavior in
>> fs.sh that is not what I wanted.
>>
>> When simulating a SAN failure, either with a port-down on the SAN switch
>> or from the OS ( echo offline > /sys/block/$DEVICE/device/state ), the
>> fs.sh status script doesn't return an error.
>>
>> I looked at the script and think it hangs on the ls or touch test
>> (depending on timing).
>> In fact, if I issue an ls/touch on the failed mountpoints, it hangs
>> forever.
>
> Set multipath configuration to "no_path_retry fail"
I already have that set:
blacklist {
    wwid SSun_VOL0_266DCF4A
    wwid SSun_VOL0_5875CF4A
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
defaults {
    user_friendly_names yes
    bindings_file /etc/multipath/bindings
}
devices {
    device {
        vendor "HITACHI"
        product "OPEN-V"
        path_grouping_policy multibus
        failback immediate
        no_path_retry fail
    }
    device {
        vendor "IET"
        product "VIRTUAL-DISK"
        path_checker tur
        path_grouping_policy failover
        failback immediate
        no_path_retry fail
    }
}
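One way to double-check that the running maps really picked up "no_path_retry fail" is to look for the queue_if_no_path feature flag in the map's device-mapper table. A sketch (the map name mpath0 and the sample table lines in the test are illustrative, not from my system):

```shell
# Report whether a multipath map would queue I/O when all paths are lost.
# Feed it the output of `dmsetup table <map>`; a map that still queues
# carries the queue_if_no_path feature flag, and that is exactly the case
# where ls/touch on the mountpoint hangs instead of erroring.
map_queues_io() {
    grep -q queue_if_no_path
}

# Live usage (mpath0 is a placeholder map name, needs root):
#   dmsetup table mpath0 | map_queues_io && echo "still queueing on path loss"
```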
>
>
>> If I fail the devices with /sys/block/$DEVICE/device/delete, then the
>> touch test returns an error and the service switches.
>
> Right.
>
> -- Lon
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
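PS: until the queueing is sorted out, wrapping the status probe with coreutils timeout would at least make the hang finite. A sketch only — probe_mount, the 10-second limit, and the probe file name are all made up here, not what fs.sh actually does:

```shell
# Sketch: bound the touch test so a hung path fails the status check after
# a fixed delay instead of blocking forever. The function name, the 10s
# limit, and the .probe file name are hypothetical, not taken from fs.sh.
probe_mount() {
    mnt=$1
    if timeout 10 touch "$mnt/.probe" 2>/dev/null; then
        rm -f "$mnt/.probe"
        return 0   # mountpoint responded
    fi
    return 1       # I/O error, or no response within 10 seconds
}

# Usage: probe_mount /my/mountpoint || echo "status FAILED"
```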