[Libguestfs] Is there any plan to support SPDK disks?

Bob Chen a175818323 at gmail.com
Fri Jul 20 10:05:32 UTC 2018


Attachment: CentOS-7.3-x86_64-SPDK-NVMe.img.tar.bz2
<https://drive.google.com/file/d/1AL8yXbcCa6LAJYcykE2OF31GkRUTP2um/view?usp=drive_web>
Theoretically, SPDK should be able to support multiple storage backends,
including NVMe drives, Linux AIO, and a malloc ramdisk.

But after trying the latter two methods, I found the VM just couldn't boot
up, even after I had set up a virtual CD-ROM drive and reinstalled the
guest OS from it. I don't know the reason yet; I will consult the SPDK
community later.
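For reference, the two non-NVMe backends were created roughly like this (a
sketch only, based on the SPDK 18.x-era RPC names; the bdev names and the
backing-file path are placeholders, and the exact argument order may differ
between SPDK releases):

# ./scripts/rpc.py construct_malloc_bdev -b malloc0 1024 512    (1 GB malloc ramdisk, 512-byte blocks)
# ./scripts/rpc.py construct_aio_bdev /path/to/backing-file aio0 512    (Linux AIO bdev on a file or block device)

The rest of the vhost-scsi setup (lvol store, controller, LUN) would follow
the same pattern as the NVMe steps below.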

So it looks like you do need to have an NVMe drive...

Below are instructions for how to set up the test environment.

--------

Host Environment: CentOS 7

1. Compile SPDK
$ git clone https://github.com/spdk/spdk
$ cd spdk
$ git submodule update --init    (clone the DPDK submodule)
$ sudo ./scripts/pkgdep.sh    (install dependencies; ignore unnecessary packages)
$ ./configure
$ make -j8


2. Install QEMU 2.12.0
$ wget https://download.qemu.org/qemu-2.12.0.tar.xz
$ tar xvf qemu-2.12.0.tar.xz && cd qemu-2.12.0
$ ./configure --target-list=x86_64-softmmu
$ make -j8 && sudo make install


3. Setup Hugepages
Edit /etc/default/grub and append the following to the kernel command line
(GRUB_CMDLINE_LINUX):
intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=16    (reserve 16 GB of hugepages for testing)
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg


4. Auto-load the vfio module
Edit /etc/modules-load.d/vfio.conf and add:
vfio_pci


5. Reboot
$ sudo reboot
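
(Optional check, not part of the original steps: after the reboot, confirm
that the hugepages were reserved and that vfio_pci is loaded.)

$ grep Huge /proc/meminfo    (HugePages_Total should be 16, Hugepagesize 1048576 kB)
$ lsmod | grep vfio_pci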


6. Setup SPDK
$ cd spdk && sudo su
# ulimit -l unlimited
# HUGEMEM=16384 ./scripts/setup.sh
# ./app/vhost/vhost -S /var/tmp -s 1024 -m 0x1 &        (start the vhost daemon)
# ./scripts/rpc.py construct_nvme_bdev -b nvme0 -t pcie -a 0000:03:00.0    (the NVMe drive's PCI address)
# ./scripts/rpc.py construct_lvol_store nvme0n1 lvs.nvme0n1
# ./scripts/rpc.py construct_lvol_bdev -l lvs.nvme0n1 lvol.0 20480    (create a 20 GB logical volume)
# ./scripts/rpc.py construct_vhost_scsi_controller vhost.0
# ./scripts/rpc.py add_vhost_scsi_lun vhost.0 0 lvs.nvme0n1/lvol.0    (you should then have a /var/tmp/vhost.0 socket)
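
(A quick sanity check, assuming the same SPDK 18.x RPC names as above: list
the registered bdevs and make sure the vhost socket was created.)

# ./scripts/rpc.py get_bdevs
# ls -l /var/tmp/vhost.0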


7. Copy rootfs image into logical volume
# ./scripts/rpc.py start_nbd_disk lvs.nvme0n1/lvol.0 /dev/nbd0
# dd if=CentOS-7.3-x86_64-SPDK-NVMe.img of=/dev/nbd0 oflag=direct bs=1M    (copy the 20 GB image into the logical volume; see the attached file)
# ./scripts/rpc.py stop_nbd_disk /dev/nbd0
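
(Optional: before detaching the NBD device you can verify that the image was
written, for example by checking that its partition table is visible.)

# fdisk -l /dev/nbd0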


8. Download the UEFI bootloader (https://github.com/tianocore/tianocore.github.io/wiki/OVMF)
# wget https://www.kraxel.org/repos/jenkins/edk2/edk2.git-ovmf-x64-0-20180612.196.gb420d98502.noarch.rpm
# rpm -i edk2.git-ovmf-x64-0-20180612.196.gb420d98502.noarch.rpm


9. Start QEMU
# /usr/local/bin/qemu-system-x86_64 \
    -enable-kvm -cpu host -smp 4 \
    -m 4G \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -drive file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd,format=raw,if=pflash \
    -chardev socket,id=spdk_vhost_scsi0,path=/var/tmp/vhost.0 \
    -device vhost-user-scsi-pci,chardev=spdk_vhost_scsi0,num_queues=4,bootindex=0 \
    ...
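
(Note: the hugepage-backed memory-backend-file with share=on is what allows
the external SPDK vhost process to map the guest's RAM, which vhost-user
requires; a plain -m 4G without a shared memory backend will not work with
vhost-user-scsi-pci.)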


Richard W.M. Jones <rjones at redhat.com> wrote on Wed, Jul 18, 2018 at 9:44 PM:

> On Wed, Jul 18, 2018 at 12:10:30PM +0800, Bob Chen wrote:
> > Considering that the technique of SPDK + QEMU is making progress toward
> > maturity.
> >
> > Personally I'd like to do the integration work. Not sure somebody would
> > mind to give me some clue on that? Because I'm not familiar with
> libguestfs
> > code structures.
>
> If qemu supports SPDK then we should get it "for free".  At most we
> might need to add support for the qemu command line like we do for
> other non-file stuff (eg. NBD, iSCSI).
>
> What does a qemu command line accessing SPDK look like?
>
> What sort of resources do we need to set up an SPDK test environment?
> (For example, are NVMe drives absolutely required?)
>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-builder quickly builds VMs from scratch
> http://libguestfs.org/virt-builder.1.html
>