[libvirt] [PATCH] qemuBuildMemoryBackendStr: Handle one more corner case

Martin Kletzander mkletzan at redhat.com
Fri Aug 11 05:34:10 UTC 2017


On Thu, Aug 10, 2017 at 05:43:13PM +0200, Michal Privoznik wrote:
>On 08/10/2017 03:44 PM, Martin Kletzander wrote:
>> On Tue, Aug 08, 2017 at 05:04:15PM +0200, Michal Privoznik wrote:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1458638
>>>
>>> This code is so complicated because we allow enabling the same
>>> bits at many places. Just like in this case: huge pages can be
>>> enabled by global <hugepages/> element under <memoryBacking> or
>>> on per <memory/> basis. To complicate things a bit more, users
>>> are allowed to not specify any specific page size in which case
>>
>> s/to not specify/not to specify/
>>
>> reads a bit more naturally, or even
>>
>> s/to not specify any/to specify no/ or "allowed to omit the page size"
>
>I'm going with the latter.
>
>>
>> Choice is yours
>>
>> Reviewed-by: Martin Kletzander <mkletzan at redhat.com>
>
>Pushed, thanks.
>
>>
>> One more note that I told you in person, but I can't resist the urge to
>> share it more widely.  The code is gross because it can't be made much
>> better as long as we keep the parsed data the same way it is in the
>> XML.  I think this could be made nicer if we were to fill in all the
>> information during parsing.
>>
>> When parsing /memory/hugepages we would fill in missing information
>> right away.  Then, when parsing /cpu/numa/cell we could copy that
>> information to the specific cells (if missing), and because the whole
>> definition is already available from parsing /memory/hugepages there
>> are fewer lookups.  Finally, when parsing /devices/memory/source we can
>> directly get missing info from the numa cell if a nodemask is present,
>> or from the global hugepages if it is not, and we don't have to look
>> anything up, because it is already guaranteed to be filled in by the
>> previous parts.
>>
>> I hope it's understandable.  What I hope even more is that someone
>> refactors the code this way =)
>
>Right. One thing, though, that I hadn't realized during our conversation
>is that we would need to keep track of whether the info was supplied by
>the user or by this algorithm. The problem is the following: users can
>specify <memoryBacking> <hugepages/> </memoryBacking>  in which case the
>so-called default huge page size is used. The default is whatever page
>size is printed in /proc/meminfo. For instance, on my local machine:
>
>$ cat /proc/meminfo | grep Hugepagesize:
>Hugepagesize:       2048 kB
>
>$ cat /proc/cpuinfo | grep -o pdpe1gb
>pdpe1gb
>
>So the default huge page size is 2M, but the processor also supports
>1GB pages. However, on a different machine the default may be something
>else; 2MB pages might not even be available there. Therefore, if we
>filled in that info during parsing we might be screwed. Alternatively,
>if the admin has 2MB pages enabled at define time, but then prohibits
>2MB in favour of 1GB, we are back to the same problem. I hope what I'm
>saying makes sense.
>

You don't have to copy the size, just whatever information you might
have.  We can talk it through later or I can explain myself with the
code, but I don't want to do the rework right now, to be honest.
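For reference, the two places in the domain XML that this thread keeps juggling look roughly like this (sizes and topology are illustrative):

```xml
<domain type='kvm'>
  ...
  <memoryBacking>
    <hugepages>
      <!-- omitting <page/> here means the host default size is used -->
      <page size='2048' unit='KiB'/>
    </hugepages>
  </memoryBacking>
  <devices>
    <memory model='dimm'>
      <source>
        <!-- per-device page size, overriding the global element -->
        <pagesize unit='KiB'>1048576</pagesize>
        <nodemask>0</nodemask>
      </source>
      ...
    </memory>
  </devices>
</domain>
```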

>Michal

