[dm-devel] [PATCH 5/5] dm-mpath: improve I/O schedule
Mike Snitzer
snitzer at redhat.com
Fri Sep 15 20:10:24 UTC 2017
On Fri, Sep 15 2017 at 12:44pm -0400,
Ming Lei <ming.lei at redhat.com> wrote:
> The actual I/O scheduling is done in the dm-mpath layer, and the
> underlying queue's I/O scheduler is simply bypassed.
>
> This patch sets the underlying queue's nr_requests to that queue's
> queue_depth, so we can get busy feedback from the underlying queue
> simply by checking whether blk_get_request() succeeds.
>
> This way dm-mpath can report queue busy to the block layer
> effectively, so I/O scheduling is much improved.
>
> Test results on lpfc* follow:
>
> - fio(libaio, bs:4k, dio, queue_depth:64, 64 jobs,
> over dm-mpath disk)
> - system(12 cores, dual sockets, mem: 64G)
>
> -----------------------------------------------------
> |           | v4.13+         | v4.13+
> |           | +scsi_mq_perf  | +scsi_mq_perf+patches
> -----------------------------------------------------
> | IOPS(K)   | MQ-DEADLINE    | MQ-DEADLINE
> -----------------------------------------------------
> | read      |  30.71         | 343.91
> | randread  |  22.98         |  17.17
> | write     |  16.45         | 390.88
> | randwrite |  16.21         |  16.09
> -----------------------------------------------------
>
> *:
> 1) lpfc.lpfc_lun_queue_depth=3, so that it matches .cmd_per_lun
> 2) scsi_mq_perf means the patchset 'blk-mq-sched: improve SCSI-MQ performance (V4)'[1]
> 3) v4.13+: top commit is 46c1e79fee41 ("Merge branch 'perf-urgent-for-linus'")
> 4) the patchset 'blk-mq-sched: improve SCSI-MQ performance (V4)' focuses
> on improving SCSI-MQ, and all the test results in that cover letter were
> run against the raw lpfc/ib devices (after 'multipath -F'), not dm-mpath.
> 5) this patchset itself doesn't depend on the scsi_mq_perf patchset[1]
>
> [1] https://marc.info/?t=150436555700002&r=1&w=2
>
> Signed-off-by: Ming Lei <ming.lei at redhat.com>
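The busy-feedback scheme described in the quoted patch can be sketched as a small userspace model (all names here are hypothetical illustrations, not the kernel code; the real path clamps the underlying queue's nr_requests to its queue_depth and treats a failed blk_get_request() as the busy signal):

```python
# Userspace model (hypothetical) of the busy-feedback idea: cap the
# underlying queue at its real queue_depth, and let an allocation
# failure propagate as "busy" feedback to the upper (dm-mpath) layer,
# so the block layer can hold back and merge I/O instead of dispatching.

class PathQueue:
    """Models an underlying request queue whose nr_requests has been
    clamped to the device's queue_depth."""
    def __init__(self, queue_depth):
        self.nr_requests = queue_depth   # nr_requests == queue_depth
        self.in_flight = 0

    def get_request(self):
        # Analogous to blk_get_request(): fails once the queue is full.
        if self.in_flight >= self.nr_requests:
            return None                  # busy: no request available
        self.in_flight += 1
        return object()

    def put_request(self):
        # Analogous to completing a request and freeing its slot.
        self.in_flight -= 1

def mpath_map(path_queue):
    """Upper layer: a failed allocation is reported as BUSY instead of
    being queued blindly, giving the block layer accurate feedback."""
    rq = path_queue.get_request()
    return "DISPATCHED" if rq is not None else "BUSY"

q = PathQueue(queue_depth=2)
print(mpath_map(q))  # DISPATCHED
print(mpath_map(q))  # DISPATCHED
print(mpath_map(q))  # BUSY: feedback to the block layer
q.put_request()      # a request completes, freeing a slot
print(mpath_map(q))  # DISPATCHED
```

With nr_requests larger than queue_depth (the situation before the patch), the upper layer would keep succeeding at allocation and never see the device's real saturation point, which is why the feedback only works once the two are equal.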
Very impressive gains Ming!
I'll review in more detail next week but Bart's concerns on patch 1 and
2 definitely need consideration.
Thanks,
Mike