[dm-devel] boot scripts and dm-multipath

Edward Goggin egoggin at emc.com
Wed Oct 4 19:42:02 UTC 2006


SuSE and Red Hat (and perhaps other) enterprise distributions currently
invoke multipath fairly early in the boot sequence and start multipathd
some time later.  I think these distributions risk a boot-time hang when
read I/Os issued to dm-multipath mapped devices before multipathd is
started (by kpartx in /etc/boot.multipath, for instance) fail, due to
failures which are not SCSI transport related, against storage targets
configured with the dm-multipath queue_if_no_path feature.  Without
multipathd running there is no way to time out the "queue I/O forever"
behavior in an all-paths-down case.
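As a rough illustration of the problem (the map name mpath0 and its
table contents below are made up for the example), a map created with
queueing enabled carries the queue_if_no_path feature in its
device-mapper table, and without multipathd the only way to break out
of the queue-forever state by hand is to send the map a fail_if_no_path
message:

    # the feature is visible in the map's table
    dmsetup table mpath0
    #   0 2097152 multipath 1 queue_if_no_path 0 1 1 \
    #     round-robin 0 2 1 8:16 1000 8:32 1000

    # with multipathd not yet running, disable queueing manually so the
    # outstanding I/O fails instead of hanging the boot
    dmsetup message mpath0 0 fail_if_no_path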

In these cases, the dm-multipath device is created because the storage
target responds successfully to the device id (VPD page 0x83) inquiry
request on each path, yet all path tests and read/write I/O requests
issued on any or all paths to the block device fail.  The
kernel-resident dm-multipath code will queue the failed read/write I/O
indefinitely if the queue_if_no_path attribute is configured for the
mapped device, since it is currently unable to differentiate between
SCSI transport related failures and other errors.
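For reference, the usual way to bound the queueing is no_path_retry in
/etc/multipath.conf (the value below is arbitrary), but since it is
multipathd that counts down the retries and then clears
queue_if_no_path on the map, the setting has no effect until the daemon
is running:

    defaults {
            # queue for at most 12 path-checker intervals when all
            # paths are down, then fail outstanding I/O -- enforced
            # by multipathd, not by the kernel
            no_path_retry   12
    }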

Newer versions of multipathd (that is, ones based on
multipath-tools-0.4.7) do not need multipath to be invoked in order to
configure the dm-multipath mapped devices; simply invoking multipathd
suffices.  Is it reasonable to change these scripts to invoke multipathd
instead of multipath at early boot and not invoke multipath at all from
these scripts?
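A minimal sketch of what that change might look like in an early-boot
script (paths, map names and the mpath* naming are illustrative, not
taken from any particular distribution's script; a real script would
also have to wait for the daemon to finish creating the maps before
running kpartx):

    # old fragment (roughly): create the maps, then read their
    # partition tables
    #   /sbin/multipath -v0
    #   /sbin/kpartx -a /dev/mapper/<map>

    # proposed: start the daemon instead; with multipath-tools-0.4.7
    # it sets up the maps itself
    /sbin/multipathd
    for map in /dev/mapper/mpath*; do
            [ -e "$map" ] && /sbin/kpartx -a "$map"
    done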



