[libvirt-users] nvme, spdk and host linux version
Michal Privoznik
mprivozn at redhat.com
Thu Dec 12 17:35:38 UTC 2019
On 12/12/19 3:51 PM, Mauricio Tavares wrote:
> On Thu, Dec 12, 2019 at 5:40 AM Michal Privoznik <mprivozn at redhat.com> wrote:
>>
>> On 11/27/19 4:12 PM, Mauricio Tavares wrote:
>>> I have been following the patches on nvme support on the list and was
>>> wondering: If I wanted to build a vm host to be on the bleeding edge
>>> for nvme and spdk fun in libvirt, which linux distro --
>>> fedora/ubuntu/centos/etc -- should I pick?
>>>
>>
>> For NVMe itself it probably doesn't matter, as it doesn't require any
>> special library. However, I'm not so sure about SPDK, esp. whether my
>> NVMe patches are what you really need. My patches enable the only
>> missing combination:
>>
>> host kernel storage stack + qemu storage stack = <disk type='block'>
>> <source dev='/dev/nvme0n1'/> </disk>
>> This has the disadvantage of latency added by both stacks, but allows
>> migration.
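
A fuller sketch of that first combination, the host-kernel-backed block disk, might look like the following domain XML fragment (the target name, bus, and device path are illustrative, not taken from the thread):

```xml
<!-- Guest disk backed by the host kernel's NVMe block device.
     I/O traverses both the host and the guest storage stacks. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```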
>>
>> neither host kernel nor qemu storage stack = <hostdev/> (aka PCI assignment)
>> This offers near bare metal latencies, but prohibits migration.
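
The PCI-assignment variant would instead hand the whole NVMe controller to the guest; a sketch might look like this (the PCI address is a placeholder for the controller's actual address on the host):

```xml
<!-- Whole NVMe controller assigned to the guest via VFIO.
     Near bare-metal latency, but the device cannot be migrated. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```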
>>
>> qemu storage stack only = <disk type='nvme'/>
>> This is what my patches implement and should combine the above two:
>> small latencies and ability to migrate.
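
Assuming the syntax from the patch series under discussion (which later landed in libvirt), a disk using QEMU's userspace NVMe driver might be declared roughly like this; again the PCI address, namespace, and target name are illustrative:

```xml
<!-- QEMU's userspace NVMe driver: libvirt/QEMU talk to the
     controller directly via VFIO, bypassing the host kernel's
     storage stack while keeping the disk a migratable <disk>. -->
<disk type='nvme' device='disk'>
  <driver name='qemu' type='raw'/>
  <source type='pci' managed='yes' namespace='1'>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
```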
>>
> That is actually my question: is handing the hard drive through
> PCI assignment faster or slower than disk type='nvme'?
According to:
https://www.linux-kvm.org/images/4/4c/Userspace_NVMe_driver_in_QEMU_-_Fam_Zheng.pdf
(slide 25)
the fastest is the host itself, followed by PCI assignment (in qemu it's
called VFIO), then disk type='nvme', and the slowest is disk type='block'
with /dev/nvme0 (referred to as linux-aio).
Michal