[dm-devel] [PATCH V5 8/8] blk-mq: improve bio merge from blk-mq sw queue

Ming Lei ming.lei at redhat.com
Mon Oct 9 04:28:37 UTC 2017


On Tue, Oct 03, 2017 at 02:21:43AM -0700, Christoph Hellwig wrote:
> This looks generally good to me, but I really worry about the impact
> on very high iops devices.  Did you try this e.g. for random reads
> from unallocated blocks on an enterprise NVMe SSD?

There looks to be no such impact; please see the following data
from a fio test (libaio, direct=1, bs=4k, 64 jobs, randread, none scheduler)
on an NVMe device:

[root@storageqe-62 results]# ../parse_fio 4.14.0-rc2.no_blk_mq_perf+-nvme-64jobs-mq-none.log 4.14.0-rc2.BLK_MQ_PERF_V5+-nvme-64jobs-mq-none.log
---------------------------------------------------
 IOPS(K)  | none (unpatched) | none (V5 patched)
---------------------------------------------------
randread  |           650.98 |            653.15
---------------------------------------------------
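
For reference, a fio run matching those parameters would look something
like the command below; the device path, iodepth and runtime are
assumptions, not taken from the logs above:

fio --name=randread --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --numjobs=64 --iodepth=64 --runtime=60 --time_based \
    --group_reporting --filename=/dev/nvme0n1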

Alternatively:

If you are worried about this impact, could we simply disable merging
on NVMe when the none scheduler is used? It is basically impossible to
merge NVMe requests/bios under none, but merging is still doable with
the kyber scheduler.
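
To illustrate, here is a minimal sketch of that idea; bio_merge_allowed()
is a hypothetical helper, not an existing kernel function, while
q->elevator and blk_queue_nomerges() are existing kernel machinery:

/*
 * Hypothetical sketch, not the actual patch: a fast driver such as
 * NVMe could opt out of bio merging under the none scheduler, e.g.
 * by setting the existing QUEUE_FLAG_NOMERGES flag at queue init.
 */
static bool bio_merge_allowed(struct request_queue *q)
{
	/*
	 * A real scheduler (e.g. kyber) is attached: merging can
	 * still pay off, so let its bio_merge hook run.
	 */
	if (q->elevator)
		return true;

	/* none scheduler: honor a driver-set "no merges" hint */
	return !blk_queue_nomerges(q);
}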

-- 
Ming