[dm-devel] [PATCH v3 0/8] dm: add request-based blk-mq support

Mike Snitzer <snitzer at redhat.com>
Thu Dec 18 04:58:24 UTC 2014


On Wed, Dec 17 2014 at  8:41pm -0500,
Keith Busch <keith.busch at intel.com> wrote:

> On Wed, 17 Dec 2014, Mike Snitzer wrote:
> >As for blk-mq support... I don't have access to any NVMe hardware, etc.
> >I only tested with virtio-blk (to a ramdisk, scsi-debug, device on the
> >host), so I'm really going to need to lean on Keith and others to
> >validate blk-mq performance.
> 
> There's a reason no one has multipath-capable NVMe drives: they are not
> generally available to anyone right now. :) Mine is a prototype, so it's
> not a good candidate for performance comparisons.
> 
> I was able to get my loaner back a couple of hours ago though, so I built
> and tested your tree, and I'm happy to say it was very successful. While
> running filesystem fio, I simulated alternating path hot-removal/add
> sequences and everything worked. So functionally it appears great, but I
> can't speak to performance right now.

Great news.
 
> One thing with dual ported PCI-e SSDs is each path can be on a different
> pci domain local to different NUMA nodes. I think there's performance
> to gain if we select the target path closest to the CPU that the thread
> is scheduled on. I don't have data to back that up yet, but could such
> a path selection algorithm be considered in the future?

Definitely.  If you look at the comment above
dm-mpath.c:parse_path_selector() you'll see that we have a very generic
mechanism for seeding the path selectors with information that they'll
use as the basis for deciding which path to select.  In this case we'd
have userspace supply the path-to-NUMA-node mapping, etc.  I'm not
exactly sure what it'd look like at this point, but it should be doable.
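
As a rough sketch (purely illustrative: there is no "numa" path selector
upstream, and I'm just borrowing the per-path selector arg layout that
the existing queue-length and service-time selectors already use), the
multipath table could carry each path's NUMA node as a per-path selector
argument:

  # Hypothetical table: 2 NVMe paths, a made-up "numa" selector taking
  # 0 selector args and 1 per-path selector arg (the node the path's
  # PCI domain is local to).  Device size and numbers are examples only.
  echo "0 41943040 multipath 0 0 1 1 numa 0 2 1 259:0 0 259:1 1" | \
      dmsetup create mpath_nvme

The selector's .add_path hook would parse that per-path argument, and
.select_path could then prefer paths whose node matches the node of the
CPU submitting the request, falling back to the remaining path(s) when
the local one has failed.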