[libvirt-users] libvirtd segfault when using oVirt 4.1 with graphic console - CentOS7.3

Rafał Wojciechowski it at rafalwojciechowski.pl
Fri Apr 21 13:34:12 UTC 2017


Hi,

I can confirm: without tun0, after a reboot it is working fine. It was not
so obvious. Thanks!


Regards,

Rafal Wojciechowski


On 21.04.2017 at 10:53, Pavel Hrdina wrote:
> On Thu, Apr 20, 2017 at 06:10:04PM +0200, Rafał Wojciechowski wrote:
>> hello,
>>
>> I attached the core dump - not sure if it is what you asked for;
>> I am just an admin, not a developer :)
>>
>> Regards,
>>
>> Rafal Wojciechowski
> The core dump attached to the private mail helped to figure out where the crash
> happened.  Backtrace:
>
> Thread 1 (Thread 0x7f194b99d700 (LWP 5631)):
> #0  virNetDevGetifaddrsAddress (addr=0x7f194b99c7c0, ifname=0x7f193400e2b0 "ovirtmgmt") at util/virnetdevip.c:738
> #1  virNetDevIPAddrGet (ifname=0x7f193400e2b0 "ovirtmgmt", addr=addr@entry=0x7f194b99c7c0) at util/virnetdevip.c:795
> #2  0x00007f19467800d6 in networkGetNetworkAddress (netname=<optimized out>, netaddr=netaddr@entry=0x7f1924013f18) at network/bridge_driver.c:4780
> #3  0x00007f193e43a33c in qemuProcessGraphicsSetupNetworkAddress (listenAddr=0x7f19340f7650 "127.0.0.1", glisten=0x7f1924013f10) at qemu/qemu_process.c:4062
> #4  qemuProcessGraphicsSetupListen (vm=<optimized out>, graphics=0x7f1924014f10, cfg=0x7f1934119f00) at qemu/qemu_process.c:4133
> #5  qemuProcessSetupGraphics (flags=17, vm=0x7f19240155d0, driver=0x7f193411f1d0) at qemu/qemu_process.c:4196
> #6  qemuProcessPrepareDomain (conn=conn@entry=0x7f192c00ab50, driver=driver@entry=0x7f193411f1d0, vm=vm@entry=0x7f19240155d0, flags=flags@entry=17) at qemu/qemu_process.c:4969
> #7  0x00007f193e4417c0 in qemuProcessStart (conn=conn@entry=0x7f192c00ab50, driver=driver@entry=0x7f193411f1d0, vm=0x7f19240155d0, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_START, migrateFrom=migrateFrom@entry=0x0, migrateFd=migrateFd@entry=-1, migratePath=migratePath@entry=0x0, snapshot=snapshot@entry=0x0, vmop=vmop@entry=VIR_NETDEV_VPORT_PROFILE_OP_CREATE, flags=17, flags@entry=1) at qemu/qemu_process.c:5553
> #8  0x00007f193e490030 in qemuDomainCreateXML (conn=0x7f192c00ab50, xml=<optimized out>, flags=<optimized out>) at qemu/qemu_driver.c:1774
> #9  0x00007f195aa6af81 in virDomainCreateXML (conn=0x7f192c00ab50, xmlDesc=0x7f1924003f00 "<?xml version='1.0' encoding='UTF-8'?>\n<domain xmlns:ovirt=\"http://ovirt.org/vm/tune/1.0\" type=\"kvm\">\n    <name>zzzzz</name>\n    <uuid>7d14f6e2-978b-47a5-875d-be5d6b28af2c</uuid>\n    <memory>1048576</"..., flags=0) at libvirt-domain.c:180
> #10 0x00007f195b6e8dfa in remoteDispatchDomainCreateXML (server=0x7f195bc4eb40, msg=0x7f195bc73ee0, ret=0x7f19240038b0, args=0x7f1924003a70, rerr=0x7f194b99cc50, client=0x7f195bc71e90) at remote_dispatch.h:4257
> #11 remoteDispatchDomainCreateXMLHelper (server=0x7f195bc4eb40, client=0x7f195bc71e90, msg=0x7f195bc73ee0, rerr=0x7f194b99cc50, args=0x7f1924003a70, ret=0x7f19240038b0) at remote_dispatch.h:4235
> #12 0x00007f195aaea002 in virNetServerProgramDispatchCall (msg=0x7f195bc73ee0, client=0x7f195bc71e90, server=0x7f195bc4eb40, prog=0x7f195bc62fa0) at rpc/virnetserverprogram.c:437
> #13 virNetServerProgramDispatch (prog=0x7f195bc62fa0, server=server@entry=0x7f195bc4eb40, client=0x7f195bc71e90, msg=0x7f195bc73ee0) at rpc/virnetserverprogram.c:307
> #14 0x00007f195b6f9c6d in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7f195bc4eb40) at rpc/virnetserver.c:148
> #15 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7f195bc4eb40) at rpc/virnetserver.c:169
> #16 0x00007f195a9d6d41 in virThreadPoolWorker (opaque=opaque@entry=0x7f195bc44160) at util/virthreadpool.c:167
> #17 0x00007f195a9d60c8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
> #18 0x00007f1957ff9dc5 in start_thread (arg=0x7f194b99d700) at pthread_create.c:308
> #19 0x00007f1957d2873d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>
> I'll send a patch to upstream libvirt to fix this crash.  However, it can take
> a while for the fix to make it back into CentOS/RHEL.  The source of this crash is that you
> have a "tun0" network interface without an IP address, and that interface is
> checked before "ovirtmgmt", which causes the crash.  You can work around it
> by removing the "tun0" interface if it doesn't have any IP address.
>
> Pavel
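For anyone else who hits this before a fixed build reaches CentOS/RHEL: the failure mode Pavel describes above is the usual getifaddrs() pitfall - entries for interfaces that have no address (such as an unconfigured tun0) come back with ifa_addr set to NULL. Below is a minimal standalone sketch of that pattern, assuming an address-less tun0-style interface is present; it is only an illustration, not the actual libvirt code.

#include <stdio.h>
#include <ifaddrs.h>
#include <sys/socket.h>

int
main(void)
{
    struct ifaddrs *ifap = NULL;
    struct ifaddrs *ifa;

    if (getifaddrs(&ifap) < 0) {
        perror("getifaddrs");
        return 1;
    }

    for (ifa = ifap; ifa; ifa = ifa->ifa_next) {
        /* An interface without any address (e.g. an unconfigured tun0)
         * has ifa_addr == NULL; reading ifa->ifa_addr->sa_family without
         * this check is a NULL-pointer dereference. */
        if (!ifa->ifa_addr)
            continue;

        if (ifa->ifa_addr->sa_family == AF_INET ||
            ifa->ifa_addr->sa_family == AF_INET6)
            printf("%s has an IP address\n", ifa->ifa_name);
    }

    freeifaddrs(ifap);
    return 0;
}

Dropping the NULL check in the loop dereferences a NULL pointer in the same way the backtrace above suggests happened inside virNetDevGetifaddrsAddress().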
>
>> On 20.04.2017 at 16:44, Pavel Hrdina wrote:
>>> On Thu, Apr 20, 2017 at 07:36:42AM +0200, Rafał Wojciechowski wrote:
>>>> hello,
>>>>
>>>> I am getting the following error:
>>>> libvirtd[27218]: segfault at 0 ip 00007f4940725721 sp 00007f4930711740
>>>> error 4 in libvirt.so.0.2000.0[7f4940678000+353000]
>>>>
>>>> when I try to start a VM with a graphical spice/vnc console - in
>>>> headless mode (without a graphical console) it runs fine.
>>>> I noticed this after updating from oVirt 4.0 to oVirt 4.1; however, I also noticed
>>>> that libvirtd and related packages were upgraded from 1.2.7 to 2.0.0:
>>>> libvirt-daemon-kvm-2.0.0-10.el7_3.5.x86_64
>>>> libvirt-daemon-driver-qemu-2.0.0-10.el7_3.5.x86_64
>>>>
>>>> I am running up-to-date CentOS 7.3 with kernel 3.10.0-514.16.1.el7.x86_64.
>>>> I have the same issue with and without SELinux (checked after a reboot
>>>> in permissive mode).
>>>>
>>>>
>>>> I tried to get information about this issue from the oVirt team, but after
>>>> a mail conversation both the oVirt team and I think that it might be an issue
>>>> in libvirtd.
>>>>
>>>> In the link below I am putting the XMLs generated by vdsm which are passed to
>>>> libvirtd to run the VMs;
>>>> the first one is from the vdsm log and the second one is extracted from the dump after
>>>> the libvirtd segfault:
>>>>
>>>> https://paste.fedoraproject.org/paste/eqpe8Byu2l-3SRdXc6LTLl5M1UNdIGYhyRLivL9gydE=
>>> Hi,
>>>
>>> could you please provide a core backtrace? I've tried to reproduce it on
>>> CentOS 7.3 with the same graphics configuration, but the guest started
>>> and there was no segfault.
>>>
>>> Thanks,
>>>
>>> Pavel



