[vfio-users] Speed tips requested

Zachary Boley zboley00 at gmail.com
Sun Mar 26 22:04:18 UTC 2017


Also some source:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Tuning_and_Optimization_Guide/chap-Virtualization_Tuning_Optimization_Guide-BlockIO.html

I really couldn't find the exact one I read, but it basically says to use
raw with the settings I mentioned above. I'd give most of that a good
read, because it covers several small things to tune.
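Putting the guide's recommendations together (raw image, no cache, a dedicated I/O thread), the libvirt domain XML ends up looking roughly like the sketch below. The image path, iothread count, and io= mode are my assumptions, not from the guide:

```xml
<!-- Hypothetical libvirt domain fragment: raw image on virtio with
     cache=none and a dedicated I/O thread. Path, iothread count and
     io mode are placeholders. -->
<iothreads>1</iothreads>
<devices>
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/>
    <source file='/var/lib/libvirt/images/guest.img'/>
    <target dev='vda' bus='virtio'/>
  </disk>
</devices>
```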

On Sun, Mar 26, 2017 at 4:47 PM, Zachary Boley <zboley00 at gmail.com> wrote:

> Just normal virtio. I have it set to that; I don't know how I would go
> about setting up virtio-scsi, or whether I would need to.
>
> On Mar 26, 2017 12:45 PM, "Nick Sarnie" <commendsarnex at gmail.com> wrote:
>
>> Sorry, is that a SCSI controller set to virtio type, or the virtio option
>> selected for the disk instead of SCSI? I've seen both recommended.
>>
>> Thanks,
>> Sarnex
>>
>> On Sun, Mar 26, 2017 at 12:58 PM, Zachary Boley <zboley00 at gmail.com>
>> wrote:
>>
>>> From what I've read, Red Hat recommends virtio with a raw image, no
>>> cache, and an I/O thread, for the reasons listed above. Not sure about
>>> LVM, but they (or someone) did also say not to use BTRFS for keeping
>>> the image.
>>>
>>> The only optimization I would immediately recommend is to use
>>> host-passthrough as the CPU option in your guest's XML. It makes a
>>> noticeable difference, assuming you haven't already done it.
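In libvirt terms that is a one-line change to the guest XML; a minimal sketch (the `mode` attribute is real libvirt syntax):

```xml
<!-- Expose the host CPU model directly to the guest instead of a
     generic QEMU model. -->
<cpu mode='host-passthrough'/>
```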
>>>
>>> On Mar 26, 2017 11:13 AM, "Bronek Kozicki" <brok at spamcop.net> wrote:
>>>
>>>> On 26/03/2017 15:31, Alex Williamson wrote:
>>>>
>>>>> On Sun, Mar 26, 2017 at 4:33 AM, Patrick O'Callaghan <poc at usb.ve> wrote:
>>>>>
>>>>>     On Sun, 2017-03-26 at 10:58 +0100, Bronek Kozicki wrote:
>>>>>     > Assuming you use libvirt, make sure to use vCPU pinning. For
>>>>> disk access, try cache='writeback' io='threads'. If you switch to
>>>>> virtio-scsi, this will give you the ability to define the queue
>>>>> length, which might additionally improve IO. Also, try the QCOW2
>>>>> format for the guest disk; it might enable some additional
>>>>> optimizations. However, given your host seems to have little spare
>>>>> capacity, YMMV.
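A rough sketch of the pinning and queue-length settings mentioned above, as libvirt XML; the core numbers, vCPU count, and queue count are placeholders, not recommendations:

```xml
<!-- Hypothetical fragment: pin two vCPUs to host cores 2-3 and give a
     virtio-scsi controller multiple request queues. All numbers are
     placeholders to be tuned to the host topology. -->
<vcpu placement='static'>2</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
</cputune>
<devices>
  <controller type='scsi' index='0' model='virtio-scsi'>
    <driver queues='4'/>
  </controller>
</devices>
```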
>>>>>
>>>>>     Thanks. I'm already using CPU pinning as I said. The disk options
>>>>> are
>>>>>     both set to "hypervisor default" so I'll try changing them. I'd
>>>>>     configured the guest disk as 'raw' assuming that would be faster
>>>>> than
>>>>>     QCOW2 but I'll look into it.
>>>>>
>>>>>
>>>>>
>>>>> Generally the recommendation is to use raw (not sparse), LVM, or a
>>>>> block
>>>>> device for the best performance.  QCOW is never going to be as fast as
>>>>> these at writing unused blocks since it needs to go out and allocate
>>>>> new
>>>>> blocks from the underlying file system when this happens.
>>>>>
>>>>
>>>> I am not going to argue with your experience here, only wanted to note
>>>> that QCOW2 can be created with preallocation=falloc (or full, which is
>>>> not very useful), which means there will be no extra allocations at
>>>> runtime. Everything will be allocated at the moment of disk creation
>>>> with qemu-img create -f qcow2 -o preallocation=falloc ....
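A sketch of that invocation; the image name and size are placeholders, while the preallocation option itself is real qemu-img syntax:

```shell
# Skip gracefully on hosts without qemu-img installed.
command -v qemu-img >/dev/null 2>&1 || { echo "qemu-img not installed"; exit 0; }

# Create a preallocated qcow2 image up front: falloc reserves the blocks
# without writing zeroes, so creation is fast but runtime allocation is
# avoided ("full" writes every block out and is much slower).
qemu-img create -f qcow2 -o preallocation=falloc guest.qcow2 1G

# Inspect the result; virtual size and format are reported here.
qemu-img info guest.qcow2
```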
>>>>
>>>>
>>>>
>>>> B.
>>>>
>>>> _______________________________________________
>>>> vfio-users mailing list
>>>> vfio-users at redhat.com
>>>> https://www.redhat.com/mailman/listinfo/vfio-users
>>>>
>>>
>>

