[PATCH 4/7] qemu: Fix resource leak and coding error in qemuDomainGetIOThreadsMon
John Ferlan
jferlan at redhat.com
Wed Dec 2 16:20:39 UTC 2020
On 12/2/20 9:44 AM, Ján Tomko wrote:
> The commit summary uses 'and' which makes me think there are two
> issues, but the commit message and code only seem to fix one.
>
True - I went back and forth on this one w/r/t how involved the fix
should be. Originally I had the cleanup added here, but then decided to
let the caller handle it and pass &niothreads instead.
I can rework this one - thanks for the reviews/pushes. I'll also drop
the pidfilefd one and keep it on my false-positive list.
Tks -
John
> On a Wednesday in 2020, John Ferlan wrote:
>> If qemuDomainGetIOThreadsMon fails because qemuDomainObjExitMonitor
>> fails, then a -1 is returned which overwrites @niothreads causing a
>> memory leak. Let's pass @niothreads instead.
>>
>> Found by Coverity.
>>
>> Signed-off-by: John Ferlan <jferlan at redhat.com>
>> ---
>> src/qemu/qemu_driver.c | 19 +++++++++----------
>> 1 file changed, 9 insertions(+), 10 deletions(-)
>>
>> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
>> index 8eaa3ce68f..870159de47 100644
>> --- a/src/qemu/qemu_driver.c
>> +++ b/src/qemu/qemu_driver.c
>> @@ -4972,17 +4972,16 @@ qemuDomainGetMaxVcpus(virDomainPtr dom)
>> static int
>> qemuDomainGetIOThreadsMon(virQEMUDriverPtr driver,
>> virDomainObjPtr vm,
>> - qemuMonitorIOThreadInfoPtr **iothreads)
>> + qemuMonitorIOThreadInfoPtr **iothreads,
>> + int *niothreads)
>
> Returning niothreads separately from success/error might clear things
> up,
>
>> {
>> qemuDomainObjPrivatePtr priv = vm->privateData;
>> - int niothreads = 0;
>>
>> qemuDomainObjEnterMonitor(driver, vm);
>> - niothreads = qemuMonitorGetIOThreads(priv->mon, iothreads);
>> - if (qemuDomainObjExitMonitor(driver, vm) < 0 || niothreads < 0)
>> + *niothreads = qemuMonitorGetIOThreads(priv->mon, iothreads);
>
> but qemuMonitorGetIOThreads can also return -1. In that case we
> should not:
> a) return 0 in this function
> b) compare the return value against unsigned size_t in the cleanup
> section of qemuDomainGetStatsIOThread
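
For point (b) in isolation, a minimal standalone sketch (not libvirt
code) of why a -1 count is dangerous next to a size_t loop index: the
usual arithmetic conversions turn the int -1 into SIZE_MAX, so the
bound never looks negative.

```c
#include <stddef.h>

/* If niothreads is left at -1 after a monitor failure, a cleanup loop
 * indexed by an unsigned size_t never sees a negative bound: -1 is
 * converted to SIZE_MAX, "i < niothreads" holds for every small i, and
 * the loop walks past the end of the (empty) iothreads array.
 * Returns how many iterations a probe of that loop runs before an
 * artificial safety limit stops it. */
static size_t
probe_loop_iterations(int niothreads, size_t limit)
{
    size_t i, count = 0;

    /* (size_t)niothreads is what the implicit conversion does anyway */
    for (i = 0; i < (size_t)niothreads && i < limit; i++)
        count++;               /* would dereference iothreads[i] here */
    return count;
}
```

With niothreads == -1 the probe runs all the way to the safety limit;
with 0 it correctly runs zero times.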
>
>> + if (qemuDomainObjExitMonitor(driver, vm) < 0)
>
> I think that now that we take a job even for destroying the domain
> in processMonitorEOFEvent, this should not happen.
>
> But if it still can, it would be more consistent if
> qemuDomainGetIOThreadsMon
> cleaned up after itself if it returns -1, instead of leaving it up
> to the caller.
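
The shape of that suggestion, with hypothetical names (a sketch of the
error discipline only, not the libvirt code): the function owns its
allocation until it can succeed completely, so a -1 return guarantees
the caller's pointers were never touched.

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the real types/calls; only the pattern
 * matters: clean up after ourselves on every failure path and return
 * plain 0/-1, with the count in an out-parameter. */
static int
fetch_items(int **list, int *nitems, int fail_late)
{
    int n = 3;
    int *tmp = calloc(n, sizeof(*tmp));

    if (!tmp)
        return -1;

    if (fail_late) {          /* e.g. a later ExitMonitor-style failure */
        free(tmp);            /* ...then this function frees its own   */
        return -1;            /* allocation; caller sees only -1       */
    }

    *list = tmp;              /* success: hand everything over at once */
    *nitems = n;
    return 0;
}
```

A caller can then simply goto its cleanup label on -1 without checking
whether the list was partially filled.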
>
> (Other callers of qemuMonitorGetIOThreads and qemuDomainGetIOThreadsMon
> seem to handle it properly and check if (iothreads) in their cleanup
> sections, it's only qemuDomainGetStatsIOThread that relies on
> niothreads being right)
>
> Jano
>
>> return -1;
>> -
>> - return niothreads;
>> + return 0;
>> }
>>
>>
>> @@ -5014,7 +5013,7 @@ qemuDomainGetIOThreadsLive(virQEMUDriverPtr driver,
>> goto endjob;
>> }
>>
>> -    if ((niothreads = qemuDomainGetIOThreadsMon(driver, vm, &iothreads)) < 0)
>> +    if (qemuDomainGetIOThreadsMon(driver, vm, &iothreads, &niothreads) < 0)
>> goto endjob;
>>
>> /* Nothing to do */
>> @@ -18507,7 +18506,7 @@ qemuDomainGetStatsIOThread(virQEMUDriverPtr driver,
>> qemuDomainObjPrivatePtr priv = dom->privateData;
>> size_t i;
>> qemuMonitorIOThreadInfoPtr *iothreads = NULL;
>> - int niothreads;
>> + int niothreads = 0;
>> int ret = -1;
>>
>> if (!HAVE_JOB(privflags) || !virDomainObjIsActive(dom))
>> @@ -18516,8 +18515,8 @@ qemuDomainGetStatsIOThread(virQEMUDriverPtr driver,
>> if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD))
>> return 0;
>>
>> -    if ((niothreads = qemuDomainGetIOThreadsMon(driver, dom, &iothreads)) < 0)
>> -        return -1;
>> +    if (qemuDomainGetIOThreadsMon(driver, dom, &iothreads, &niothreads) < 0)
>> +        goto cleanup;
>>
>>     /* qemuDomainGetIOThreadsMon returns a NULL-terminated list, so we must free
>>      * it even if it returns 0 */
>> --
>> 2.28.0
>>
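
The comment in that last hunk refers to freeing a NULL-terminated list
even when the reported count is 0. As a standalone sketch (hypothetical
element type, not the qemuMonitorIOThreadInfoPtr handling itself), the
idiom looks roughly like:

```c
#include <stdlib.h>

struct info { int id; };    /* hypothetical element type */

/* Free a NULL-terminated array of pointers. Even when zero entries
 * were reported, the array (holding just the NULL sentinel) may have
 * been allocated, so it must be freed unconditionally. Returns the
 * number of entries freed, purely for illustration. */
static size_t
free_info_list(struct info **list)
{
    size_t i = 0;

    if (!list)
        return 0;
    for (i = 0; list[i]; i++)    /* walk to the NULL sentinel */
        free(list[i]);
    free(list);                  /* free the array itself, count or not */
    return i;
}
```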