[vfio-users] problem with hugepages and sound

Alex Williamson alex.l.williamson at gmail.com
Thu Nov 5 03:49:29 UTC 2015


On Wed, Nov 4, 2015 at 8:06 PM, Okky Hendriansyah <okky at nostratech.com>
wrote:

> On November 5, 2015 at 09:49:27, Alex Williamson (
> alex.l.williamson at gmail.com) wrote:
>
> I don't buy that hugepages allocated at boot perform any different than
> hugepages allocated dynamically.
>
> I stand corrected. So that means memory fragmentation on the host has no
> practical performance hit on using hugepages for device assignment?
>

The fragmentation issue with hugepages is generally that as memory gets
fragmented, there's no guarantee that you can reliably allocate hugepages
at runtime.  A script that doesn't do any error checking to verify how many
hugepages are actually available might work 99% of the time, but the 100th
time you start that guest without a host reboot, it may not get all the
pages you requested and the VM will fail to start.  That's the more
significant benefit of boot-time-allocated hugepages, I think.  The
downside is that you lose the flexibility of having that memory available
for general use, which is why transparent hugepages are so nice, but as I
already mentioned, those don't work with the page pinning required for
device assignment.
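
Something like the following is what I mean by error checking (just a
sketch, assuming 2MB pages and the standard sysfs paths; the page count
and anything after the check are placeholders, adjust for your guest):

#!/usr/bin/env python3
# Sketch: request hugepages at runtime and verify the kernel actually
# delivered them before launching a hugepage-backed guest.  The sysfs
# paths are the standard ones for 2MB pages; PAGES_NEEDED is a placeholder.
import sys

HUGEPAGE_DIR = "/sys/kernel/mm/hugepages/hugepages-2048kB"
PAGES_NEEDED = 4096   # e.g. 8GB of guest RAM in 2MB pages

def read_counter(name):
    with open(f"{HUGEPAGE_DIR}/{name}") as f:
        return int(f.read())

# Dynamically size the pool.  On a fragmented host this request can
# silently come up short; allocating at boot with 'hugepages=N' on the
# kernel command line avoids that, at the cost of reserving the memory
# permanently.
with open(f"{HUGEPAGE_DIR}/nr_hugepages", "w") as f:
    f.write(str(PAGES_NEEDED))

free = read_counter("free_hugepages")
if free < PAGES_NEEDED:
    sys.exit(f"only {free} of {PAGES_NEEDED} hugepages free, not starting the VM")

print(f"{free} hugepages free, OK to launch the guest")

Run it as root; the same check works with boot-time pages, it just
shouldn't ever fail there.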

I can imagine that contiguous hugepages have some tiny benefit for TLB hits
and prefetching, but I also expect it would be well within the noise of
anything other than a very targeted benchmark.

AMD-Vi does actually have quite robust superpage support in the IOMMU, so
on those platforms the IOTLB might see a benefit from being able to map
4MB, 8MB, 16MB, etc. contiguous memory ranges, but it's probably still very
difficult to measure.  It also only seems to work reliably on FX systems,
since many APU users need to disable the feature for stability.
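
To put numbers on that: a mapping path basically picks the largest
power-of-two page size that the current alignment and remaining length
allow.  Roughly (illustrative only, not the actual AMD IOMMU driver code;
the addresses and sizes below are made up):

# Sketch of superpage size selection for a physically contiguous range.
def split_into_superpages(iova, length):
    """Yield (address, page_size) pairs covering [iova, iova + length).
    Assumes both values are multiples of the minimum (4KB) page size."""
    while length > 0:
        # Largest power of two permitted by the current address alignment,
        # capped by the largest power of two that fits the remaining length.
        align = iova & -iova if iova else 1 << 62
        size = min(align, 1 << (length.bit_length() - 1))
        yield iova, size
        iova += size
        length -= size

# If boot-time allocation happens to hand out physically contiguous
# hugepages, a 12MB run at a 1GB-aligned address can be covered by an
# 8MB and a 4MB mapping instead of 3072 4KB entries, which is where the
# IOTLB benefit would come from.
for addr, size in split_into_superpages(0x40000000, 12 << 20):
    print(f"map {size >> 20}MB at {addr:#x}")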