[vfio-users] [FEEDBACK NEEDED] Rewriting the Arch wiki article

Alex Williamson alex.l.williamson at gmail.com
Tue Apr 12 21:24:39 UTC 2016


On Tue, Apr 12, 2016 at 2:30 PM, Bronek Kozicki <brok at spamcop.net> wrote:

> On 12/04/2016 20:36, Nicolas Roy-Renaud wrote:
>
>> I've already rewritten the first two sections ("Prerequisites" and
>> "Setting up IOMMU"), and the rest of the article should essentially
>> follow the same basic structure and style. Replies here or on the wiki's
>> discussion page would be much appreciated.
>>
>
> Hope Alex can clear any misconception I am about to present below
>
>
> 1. I am using option iommu=pt, under the impression this is expected to
> improve performance (my CPU is Xeon IvyBridge)
>

It only affects the performance of host devices.  There's less latency if
host DMAs don't go through dynamic mappings in the IOMMU, but there's also
less isolation between drivers.
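For reference, iommu=pt is typically set alongside the option that enables the
IOMMU in the first place, on the kernel command line.  A sketch for a
GRUB-based setup (intel_iommu=on for Intel; AMD hosts use amd_iommu instead --
adjust for your distro and CPU vendor):

```shell
# /etc/default/grub -- illustrative values, not a drop-in config.
# intel_iommu=on enables the IOMMU; iommu=pt keeps host devices on
# identity (passthrough) mappings, avoiding dynamic-mapping overhead.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# Then regenerate the config and reboot:
# grub-mkconfig -o /boot/grub/grub.cfg
```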


> 2. does PCI bridge have to be in a separate IOMMU group than
> passed-through device?
>

No.  Blank is mostly correct on this; newer kernels have removed the pcieport
driver test and presume that any driver attached to a bridge device is ok.
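To see which devices actually share a group on a given host, a minimal
listing loop over sysfs is enough (the paths are standard; the loop simply
prints nothing on a host where the IOMMU isn't enabled):

```shell
#!/bin/sh
# List every IOMMU group and the devices it contains.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue          # glob didn't match: no IOMMU groups
    # .../iommu_groups/<group>/devices/<device>
    group=$(basename "$(dirname "$(dirname "$dev")")")
    printf 'IOMMU group %s: %s\n' "$group" "$(basename "$dev")"
done
```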


> 3. would be nice to provide hints for headless host. FWIW, I use
> combination of
> 3.1. kernel options:
> console=ttyS0,115200N8R nomodeset video=vesa:off video=efifb:off vga=normal
> 3.2. the following line in /etc/modprobe.d/vfio.conf:
> options vfio-pci disable_vga=1
> 3.3. large list of blacklisted modules (all framebuffers and nvidia and
> AMD drivers) in /etc/modprobe.d/blacklist.conf:
> # This host is headless, prevent any modules from attaching to video hardware
> # NVIDIA
> blacklist nouveau
> blacklist nvidia
> # AMD
> blacklist radeon
> blacklist amdgpu
> blacklist amdkfd
> blacklist fglrx
> # HDMI sound on a GPU
> blacklist snd_hda_intel
> # Framebuffers (ALL of them)
> blacklist vesafb
> blacklist aty128fb
> blacklist atyfb
> blacklist radeonfb
> blacklist cirrusfb
> blacklist cyber2000fb
> blacklist cyblafb
> blacklist gx1fb
> blacklist hgafb
> blacklist i810fb
> blacklist intelfb
> blacklist kyrofb
> blacklist lxfb
> blacklist matroxfb_base
> blacklist neofb
> blacklist nvidiafb
> blacklist pm2fb
> blacklist rivafb
> blacklist s1d13xxxfb
> blacklist savagefb
> blacklist sisfb
> blacklist sstfb
> blacklist tdfxfb
> blacklist tridentfb
> blacklist vfb
> blacklist viafb
> blacklist vt8623fb
> blacklist udlfb
>

I suspect that blacklisting framebuffer drivers doesn't actually do
anything.
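Rather than blacklisting every framebuffer driver, it's usually enough to
check what actually bound to the card after boot.  A small sketch that only
reads sysfs (no assumptions beyond a standard PCI sysfs layout):

```shell
#!/bin/sh
# Print the kernel driver (if any) bound to each PCI device, so you can
# confirm the passthrough GPU ended up on vfio-pci (or on nothing at all).
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev" ] || continue
    drv='(none)'
    # "driver" is a symlink to the bound driver's sysfs directory
    [ -L "$dev/driver" ] && drv=$(basename "$(readlink "$dev/driver")")
    printf '%s -> %s\n' "$(basename "$dev")" "$drv"
done
```

lspci -nnk shows the same information per device, if available.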


> 4. ignore_msrs=1 also helps running Linux guests
>

Never been needed in my experience.
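Should unhandled MSR accesses ever show up in dmesg, note that the option
belongs to the kvm module, not the kernel command line.  An illustrative
modprobe fragment:

```shell
# /etc/modprobe.d/kvm.conf -- only if dmesg shows "unhandled rdmsr/wrmsr";
# as noted above, most setups never need this.
options kvm ignore_msrs=1
```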


> 5. do not use qemu:arg for binding host device to guest, here is example
> how to do it properly:
>     <hostdev mode='subsystem' type='pci' managed='yes'>
>       <driver name='vfio'/>
>       <source>
>         <address domain='0x0000' bus='0x82' slot='0x00' function='0x0'/>
>       </source>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
>     </hostdev>
> 5.1. for nVidia Quadro, add just below </source>:
> <rom bar='off'/>
>

Shouldn't be necessary to hide the ROM; a Quadro should never be the primary
graphics device, so its ROM should never get executed.


>
> 6. if guest is started from BIOS rather than UEFI, keep the above <hostdev
> ...> but replace emulator with a script, e.g.
> # virsh dumpxml gdynia-vfio1 | grep emulator
>     <emulator>/usr/bin/qemu-system-x86_64.xvga.sh</emulator>
> # cat /usr/bin/qemu-system-x86_64.xvga.sh
> #!/bin/sh
> exec nice --adjustment=-5 /usr/bin/qemu-system-x86_64 `echo "$@" | \
>     sed 's/-device vfio-pci,host=82:00.0/-device vfio-pci,host=82:00.0,x-vga=on/g' | \
>     sed 's/-device vfio-pci,host=03:00.0/-device vfio-pci,host=03:00.0,x-vga=on/g'`
>
>
> 7. performance optimizations
> 7.1. use huge pages
> 7.2. use isolcpus
> 7.3. use vCPU pinnig
> 7.4. use virtio-scsi with multiple queues (depending on number of
> available CPUs, after removing these dedicated to guest)
> 7.5. use multiple queues for virtio-net
> 7.6. for Linux guests, use 9p for mounting host filesystems in the guest


There are numerous ways to do this; that's one.  It's hard to make any
universal recommendation there, since NFS, sshfs, and SMB are also options.
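As a starting point for 7.1, a hugepage reservation can be sketched as
follows (sizes are illustrative and the commands need root; libvirt then
picks the pages up via <memoryBacking><hugepages/> in the domain XML):

```shell
# Reserve 4096 x 2 MiB hugepages (8 GiB) for the guest -- sizes illustrative.
echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# hugetlbfs is usually mounted already; if not:
# mount -t hugetlbfs hugetlbfs /dev/hugepages

# Verify the reservation took (memory fragmentation can make it fall short):
grep HugePages /proc/meminfo
```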

