[vfio-users] regression from linux 4.0.4 to 4.2.0
Janusz
januszmk6 at gmail.com
Thu Oct 1 23:08:52 UTC 2015
Check this: https://lkml.org/lkml/2015/10/1/735
I wanted to test this, but 4.3 gives me some other problems, like:
xcb_connection_has_error() returned true
Could not initialize SDL(No available video device) - exiting
and with -nographic the VM starts (still printing the first error), but it
is not working properly. I am not sure whether these new problems of mine
are causing this, or whether the bug was introduced somewhere else.
On 01.10.2015 at 18:27, Okky Hendriansyah wrote:
> Hmm, my intention is to replicate the physical Intel Core i7-4770,
> which has 4 cores with 8 threads in a single socket. I just double
> checked: if I use cpus=8,cores=4,threads=2,sockets=1, my Windows 10
> detects 8 processors in Device Manager, 8 virtual processors in
> Task Manager, and CPU-Z also detects 4 cores with 8 threads, just as
> I want.
>
> If I change the config to cpus=8,cores=4,threads=1,sockets=2, Device
> Manager detects the same, and CPU-Z detects 2 sockets of 4 single-threaded
> cores, but Task Manager is smart: it knows that it only runs on 1
> socket with 8 virtual processors.
>
> Best regards,
> Okky Hendriansyah
>
> On Oct 1, 2015, at 12:05, Blank Field <ihatethisfield at gmail.com
> <mailto:ihatethisfield at gmail.com>> wrote:
>
>> Just a side-note:
>> AFAIR the -smp option ignores the topology spec (sockets-cores-threads)
>> if a regular value is specified.
>> And doing cores=8 threads=2 should give you a 16 (logical) core vCPU.
>> I'm pretty sure that's not what you want to do. Consult the
>> qemu-system-x86_64 help to make sure.
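A quick way to check this before launching: QEMU expects sockets * cores * threads to equal the cpus count, and the topologies discussed in this thread can be sanity-checked the same way. A minimal sketch (the variable names and the echo output are illustrative, not part of any QEMU tooling):

```shell
# Illustrative check: an -smp spec is only consistent when
# sockets * cores * threads equals the cpus count.
cpus=8 sockets=1 cores=4 threads=2

if [ $((sockets * cores * threads)) -eq "$cpus" ]; then
  echo "topology OK: -smp cpus=$cpus,sockets=$sockets,cores=$cores,threads=$threads"
else
  echo "inconsistent -smp topology" >&2
fi
```

With cores=8,threads=2 and cpus=8 (the combination warned about above), the product is 16, not 8, which is why that spec misbehaves.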
>>
>> On Oct 1, 2015 3:26 AM, "Okky Hendriansyah" <okky at nostratech.com
>> <mailto:okky at nostratech.com>> wrote:
>>
>> Hi Alex,
>>
>> My setup also stopped working as it used to
>> after upgrading from 4.1.6 to 4.2.1. After reading what was mentioned
>> on the list, I googled around and found [1].
>>
>> Changing my processor and memory settings from -smp
>> cpus=8,cores=4,threads=2,sockets=1 -m size=16G to -smp
>> cpus=4,cores=4,threads=1,sockets=1 -m size=8G results in the VM
>> booting successfully as normal. But changing to other values did
>> not always succeed. I did not know the formula, so I kind of
>> brute-forced my way through. I think I also managed to boot the VM
>> with -smp cpus=8,cores=4,threads=2,sockets=1 -m size=4G, but I forgot
>> the details.
>>
>> Currently I’m rolling back to 4.1.6, but if there’s something
>> I could provide to help find out the issue, I can upgrade
>> and test it again. Please let me know.
>>
>> [1] https://github.com/tianocore/edk2/issues/21
>>
>> --
>> *Okky Hendriansyah*
>>
>> On September 28, 2015 at 23:53:08, globalgorrilla at fastmail.fm
>> <mailto:globalgorrilla at fastmail.fm> (globalgorrilla at fastmail.fm
>> <mailto:globalgorrilla at fastmail.fm>) wrote:
>>
>>> Alex,
>>>
>>> I reported this on 08/18. It's been echoed several times on this
>>> list
>>> since.
>>>
>>> You've said everything is working for you, and you appear to have
>>> a very similar setup to the rest of us (passing through devices with
>>> VFIO to QEMU and using OVMF).
>>>
>>> The circumstantial evidence is that something in > 4.1 (often)
>>> breaks OVMF in QEMU.
>>>
>>> How could we dig into this? Perhaps it's not related to vfio?
>>>
>>> Regarding the MTRR patch, I had made the fix myself in the 4.2
>>> RCs, and I believe the patch is already merged into 4.2+. I don't
>>> think that is the culprit...?
>>>
>>> Thoughts?
>>>
>>> On 20 Sep 2015, at 9:09, Kővágó Zoltán wrote:
>>>
>>> > Hello,
>>> >
>>> > I've been using vfio-pci to pass through a GPU to a virtual machine
>>> > for some time now, and it worked great. But this weekend I finally
>>> > had enough time to update the kernel, and things are completely
>>> > broken with the new kernel...
>>> >
>>> > I've been using the ACS override patch (and a quick-and-dirty fix
>>> > for multiple GOPs, but I created a proper-ish patch yesterday, see
>>> > http://article.gmane.org/gmane.linux.kernel.efi/6332 ), CSM disabled
>>> > in UEFI, and OVMF virtual machines. The motherboard is an ASRock
>>> > Z87M Extreme4, with two PCI video cards: an NVidia GT640 (the
>>> > primary card, used for Linux), for which I almost had to beg
>>> > Gigabyte support to send a UEFI-compatible VBIOS, and a GTX980
>>> > (secondary card, to pass through). The integrated Intel GPU is
>>> > disabled in the UEFI settings. I'm not sure if it's supposed to
>>> > work, but with 4.0.4 kernels it worked like a charm.
>>> >
>>> > Now with 4.2.0, when I start qemu the monitor attached to the
>>> > secondary card powers down, and then nothing happens, except qemu
>>> > eating about 150% CPU. I started bisecting the kernel, and found
>>> > out that
>>> >
>>> > d69afbc6b1b5d0579f13d1a6339d952c4f60a9f4 KVM: MMU: fix decoding
>>> > cache type from MTRR
>>> >
>>> > is the culprit. When MTRR is disabled, the old code returns 0xFF
>>> > while the new code returns MTRR_TYPE_UNCACHABLE. I have absolutely
>>> > no idea what the hell is going on here, but changing that return
>>> > statement back solves the problem, until
>>> >
>>> > b18d5431acc7a2fd22767925f3a6f597aa4bd29e KVM: x86: fix CR0.CD
>>> > virtualization
>>> >
>>> > If I comment out the if kvm_read_cr0 part, it works... until
>>> > 4e241557fc1cb560bd9e77ca1b4a9352732a5427, which is a merge
>>> > commit(!). I'm attaching a patch; it fixes the problem until
>>> > f2ae45edbca7ba5324eef01719ede0151dc5cead for me. But as I said
>>> > earlier, I have no freakin' idea what's going on here.
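For anyone who wants to repeat this kind of hunt, the bisect workflow described above can be sketched roughly as follows (a command sketch only, assuming a mainline kernel git tree; the tags stand in for the versions mentioned in this thread):

```shell
# Sketch of the bisect run described above, in a kernel git checkout.
git bisect start
git bisect bad v4.2         # kernel where GPU passthrough breaks
git bisect good v4.0.4      # last known-good kernel

# At each step: build and boot the candidate kernel, test the VM, then
# mark the result with `git bisect good` or `git bisect bad` until git
# reports the first bad commit (here, d69afbc6b1b5...).

git bisect reset            # return to the original branch when done
```

Each marking step halves the remaining commit range, which is what makes narrowing a regression across a whole release cycle feasible.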
>>> >
>>> > I recompiled OVMF from svn yesterday evening, and have a
>>> > recent-ish qemu master (with some audio-related patches). Tell me
>>> > if you need any more information.
>>> >
>>> > Thanks,
>>> > Zoltan
>>> >
>>> > [magic.patch]
>>> > _______________________________________________
>>> > vfio-users mailing list
>>> > vfio-users at redhat.com <mailto:vfio-users at redhat.com>
>>> > https://www.redhat.com/mailman/listinfo/vfio-users
>>>
>>
>>
>