[dm-devel] multipath - AAArgh! How do I turn "features=1 queue_if_no_path" off?

John Hughes john at Calva.COM
Thu Oct 1 08:55:12 UTC 2009


Hannes Reinecke wrote:
> malahal at us.ibm.com wrote:
>> John Hughes [john at Calva.COM] wrote:
>>> I want to turn queue_if_no_path off and use
>>>
>>>                polling_interval        5
>>>                no_path_retry           5
>>>
>>> because I've had problems with things hanging when a lun "vanishes" 
>>> (I deleted it from my external raid box).
>>>
>>> But whatever I put in /etc/multipath.conf when I do a "multipath -l" 
>>> or "multipath-ll" it shows:
>>
>> Did you reload the mapper table?
>>
>>> 360024e80005b3add000001b64ab05c87 dm-28 DELL    ,MD3000
>>> [size=68G][features=1 queue_if_no_path][hwhandler=1 rdac]
>>> \_ round-robin 0 [prio=3][active]
>>>  \_ 3:0:1:13 sdad 65:208 [active][ready]
>>> \_ round-robin 0 [prio=0][enabled]
>>>  \_ 4:0:0:13 sdas 66:192 [active][ghost]
>>>
> Which is entirely correct. The 'queue_if_no_path' flag _has_ to
> be set here as we do want to retry failed paths, if only for
> a limited amount of retries.
>
> The in-kernel dm-multipath module should handle the situation correctly
> and switch off the queue_if_no_path flag (= pass I/O errors upwards)
> when the amount of retries is exhausted.
As far as I can tell it retries forever (even with polling_interval 5
and no_path_retry 5).  The mdadm raid10 built on top of the multipath
devices hangs; even reading /proc/mdstat hangs.

Are you saying that without queue_if_no_path multipath basically won't
work, i.e. mdadm will see I/O errors on the multipath devices whenever a
path fails?
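
For reference, a sketch of the settings and runtime commands under
discussion.  This is not from the thread itself; the commands assume
root access and multipath-tools/dmsetup as shipped around this era, and
the map name is the WWID from the multipath -ll output above:

```
# /etc/multipath.conf fragment: fail I/O after 5 retries instead of
# queueing forever (no_path_retry N means roughly N * polling_interval
# seconds of retrying before errors are passed up):
#   defaults {
#       polling_interval  5
#       no_path_retry     5
#   }

# Edits to multipath.conf do not take effect until the maps are reloaded:
multipath -r                   # force a reload of the multipath maps
multipathd -k"reconfigure"     # or reconfigure via the multipathd console

# To stop queueing on an already-hung map immediately, send the
# device-mapper message that clears queue_if_no_path at runtime:
dmsetup message 360024e80005b3add000001b64ab05c87 0 fail_if_no_path

# Verify: the features field should no longer show queue_if_no_path.
multipath -ll
```

The dmsetup message only changes the live map; without the
multipath.conf change a later reload would bring queueing back.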
