<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>On 02/09/2017 07:23 PM, Alex Williamson wrote:<br>
</p>
<blockquote
cite="mid:CAEMbtc+LjRp__sR59aArz-WnqaTi-ojQnNrNnc01zOkKoBn_yQ@mail.gmail.com"
type="cite">
<pre wrap="">On Thu, Feb 9, 2017 at 5:09 PM, David Reed <a class="moz-txt-link-rfc2396E" href="mailto:david.byui@gmail.com"><david.byui@gmail.com></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">I've successfully been able to get two VMs setup with GPU/USB pass-through
and individually they both work, but I can't run both of them at the same
time. Virt-manager will complain that the other's PCI device (GPU) is
already in use even though they don't share the same GPU.
I suspect it is because both GPUs have the same IOMMU group that is being
assigned to the vfio driver. I was hoping there would be some way to make
this work as they are both being controlled by vfio.
</pre>
</blockquote>
<pre wrap="">
Sorry, this is working as expected for your hardware. The PCIe root ports
do not guarantee upstream routing, allowing the possibility of non-IOMMU
translated peer-to-peer between downstream devices. See here for further
info <a class="moz-txt-link-freetext" href="http://vfio.blogspot.com/2014/08/iommu-groups-inside-and-out.html">http://vfio.blogspot.com/2014/08/iommu-groups-inside-and-out.html</a>
Hacks to bypass this isolation are not supported upstream. You can find
information about processors supporting isolation on root ports here:
<a class="moz-txt-link-freetext" href="http://vfio.blogspot.com/2015/10/intel-processors-with-acs-support.html">http://vfio.blogspot.com/2015/10/intel-processors-with-acs-support.html</a>
(it's a bit dated but you can extrapolate from the trend). Thanks,
Alex
</pre>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
vfio-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:vfio-users@redhat.com">vfio-users@redhat.com</a>
<a class="moz-txt-link-freetext" href="https://www.redhat.com/mailman/listinfo/vfio-users">https://www.redhat.com/mailman/listinfo/vfio-users</a>
</pre>
</blockquote>
If the OP wants hardware recommendations, I like AMD's C32/G34 Opterons
on a coreboot motherboard; they are a great option for a cheap
virtualization setup with *proper* IOMMU support ($20 per 16 cores). <br>
On my blob-free coreboot KGPE-D16 system, every device gets its own
IOMMU group.<br>
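For anyone wanting to verify this on their own box: the kernel exposes group membership under sysfs, so a minimal sketch (assuming the standard /sys/kernel/iommu_groups layout) is:

```shell
#!/bin/sh
# List PCI devices by IOMMU group via sysfs (standard kernel layout).
# Devices that share a group number cannot be split between separate VMs.
listing=""
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue                 # glob may match nothing
    group=${dev%/devices/*}                   # .../iommu_groups/<N>
    listing="${listing}IOMMU group ${group##*/}: ${dev##*/}
"
done
if [ -n "$listing" ]; then
    printf '%s' "$listing"
else
    listing="No IOMMU groups found (IOMMU disabled or unsupported)"
    echo "$listing"
fi
```

If the two GPUs show up under different group numbers, they can be assigned to different VMs; if they share a number, you are in the situation described above.<br>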
<br>
Before that, I bought two different computers that "supported" IOMMU
and wasted too much money. Even many new Intel "server" motherboards
don't implement it properly for one reason or another, so it is a good
idea to go with a board/system that supports free firmware, so that
problems are fixable.<br>
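One rough way to check whether a board's root ports provide the isolation Alex describes is to look for the ACS (Access Control Services) capability with lspci; a sketch, assuming pciutils is installed (full capability decoding usually needs root):

```shell
#!/bin/sh
# Look for PCI functions advertising ACS; without ACS on a root port,
# everything downstream of it typically lands in one IOMMU group.
acs=$(lspci -vvv 2>/dev/null | grep -i 'Access Control Services')
if [ -n "$acs" ]; then
    echo "$acs"
else
    acs="No ACS capability reported (no hardware support, or rerun as root)"
    echo "$acs"
fi
```

This is only a heuristic: as the linked blog posts note, some chipsets isolate correctly without advertising ACS, and quirks in the kernel handle a few of those cases.<br>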
</body>
</html>