[libvirt] [PATCH 4/9] xen: Add coverity[ptr_arith] and [sign_extension] tags

John Ferlan jferlan at redhat.com
Tue Jan 29 13:41:05 UTC 2013


On 01/28/2013 06:52 PM, Eric Blake wrote:
> On 01/22/2013 12:28 PM, John Ferlan wrote:
> 
>>>> @@ -1795,8 +1795,11 @@ virXen_setvcpumap(int handle, int id, unsigned int vcpu,
>>>>              return -1;
>>>>  
>>>>          memset(pm, 0, sizeof(cpumap_t));
>>>> -        for (j = 0; j < maplen; j++)
>>>> +        for (j = 0; j < maplen; j++) {
>>>> +            /* coverity[ptr_arith] */
>>>> +            /* coverity[sign_extension] */
>>>>              *(pm + (j / 8)) |= cpumap[j] << (8 * (j & 7));
>>>> +        }
>>>
>>> Having to add two comments to shut up Coverity feels awkward.  Would it
>>> also work to do 'uint64_t j' instead of the current 'int j' in the
>>> declaration a few lines earlier?  Not only would it be a smaller diff,
>>> but the fewer Coverity comments we have to use, the better I feel.
>>>
>>> I know this has already been pushed, but it is still worth seeing if a
>>> followup patch can clean things further.
> 
> Ouch, we really DO have a bug, not to mention some very horrible code
> trying to do nasty aliasing that is not very portable.  I'm surprised we
> don't have alignment complaints, by trying to treat cpumap_t as an array
> of 64-bit integers.
> 
>>>
>>
>> Nope, just tried using uint64_t on 'j' without any luck.  I also tried putting the comments on the same line without the desired effect. Here's data on the two reported defects (I turned OFF line wrap for this - the line numbers are from an older analysis):
>>
>> Error: ARRAY_VS_SINGLETON (CWE-119): [#def1]
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1751: cond_false: Condition "hv_versions.hypervisor > 1", taking false branch
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1790: else_branch: Reached else branch
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1792: address_of: Taking address with "&xen_cpumap" yields a singleton pointer.
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1792: assign: Assigning: "pm" = "&xen_cpumap".
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1795: cond_false: Condition "maplen > 8 /* (int)sizeof (cpumap_t) */", taking false branch
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1795: cond_false: Condition "0UL /* sizeof (cpumap_t) & 7 */", taking false branch
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1799: cond_true: Condition "j < maplen", taking true branch
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1800: ptr_arith: Using "pm" as an array.  This might corrupt or misinterpret adjacent memory locations.
> 
> This one, I don't know if we can silence without a coverity comment.
> Basically, it boils down to whether cpumap_t is typedef'd to something
> that can possibly be larger than 64 bits (it isn't - Coverity just
> confirmed that sizeof(cpumap_t) is 8 bytes).  Since we just ensured that
> maplen will not go beyond the bounds of a 64-bit int array that overlays
> the same memory space, I'm okay with the /* coverity[ptr_arith] */
> comment, but see below...
> 
>>
>> AND
>>
>> Error: SIGN_EXTENSION (CWE-194): [#def245]
>> libvirt-1.0.0/src/xen/xen_hypervisor.c:1800: sign_extension: Suspicious implicit sign extension: "cpumap[j]" with type "unsigned char" (8 bits, unsigned) is promoted in "cpumap[j] << 8 * (j & 7)" to type "int" (32 bits, signed), then sign-extended to type "unsigned long" (64 bits, unsigned).  If "cpumap[j] << 8 * (j & 7)" is greater than 0x7FFFFFFF, the upper bits of the result will all be 1.
> 
> Here is the real bug (but I'm surprised why it didn't go away when you
> changed j from int to int64_t).  When j==4, you are attempting to do
> 'int << (8*4)'; but you _can't_ portably shift a 32-bit integer by any
> more than 31 bits.  We _have_ to add in a type conversion to force this
> shift to occur in 64-bit math, such as:
> 
> *(pm + (j / 8)) |= cpumap[j] << (8ULL * (j & 7));
> 
> Or better yet, why even futz around with 64-bit aliasing?  It looks like
> this code is trying to take endian-independent input and force it into
> an endian-dependent xen_cpumap variable.  I think it might be cleaner as:
> 
>     } else {
>         cpumap_t xen_cpumap; /* limited to 64 CPUs in old hypervisors */
>         uint64_t val = 0;
>         int j;
> 
>         if ((maplen > (int)sizeof(cpumap_t)) || (sizeof(cpumap_t) & 7))
>             return -1;
> 
>         memset(&xen_cpumap, 0, sizeof(*xen_cpumap));
>         for (j = 0; j < maplen; j++) {
>             val |= cpumap[j] << (8ULL * (j & 7));
>             if (j % 7 == 7) {
>                 memcpy(((char *)&xen_cpumap) + j, &val, sizeof(val));
>                 val = 0;
>             }
>         }
> 
> and see if that shuts up Coverity.
> 

FWIW: this path is only taken when "(hv_versions.hypervisor <=
1)" - IOW really old code. I checked the history; the code was added by
commit id '86247f2c'.  Also, since the target of the "|=" operation is a
'uint64_t' (i.e. *(pm + (j / 8))), wouldn't a shift of 0->56
(i.e. (8 * (j & 7))) be OK?  That is, it's not an 'int << (8*4)', it's a
'uint64_t << (8*4)'.

When I first approached this I didn't want to introduce a bug into code
that has been around a long time and may not have anyone using it. I
agree the line looks ugly, and it did take me a bit to think it through.

Mathematically, what you propose with the memcpy() works; however, I
come from an architecture where a memcpy() to an unaligned address
causes a fault, and there would be many such copies here. I wrote a
little sample loop that ran j from 0->63 and printed the values, just to
see what one would get.  The "memcpy" column is the target address under
the newer algorithm and the "*pm" column is the target under the former
algorithm.

j=0 j%7=0 j&7=0 memcpy=0x7fffde5b4828 *pm=0x7fffde5b4828 shift=0
j=1 j%7=1 j&7=1 memcpy=0x7fffde5b4829 *pm=0x7fffde5b4828 shift=8
j=2 j%7=2 j&7=2 memcpy=0x7fffde5b482a *pm=0x7fffde5b4828 shift=16
j=3 j%7=3 j&7=3 memcpy=0x7fffde5b482b *pm=0x7fffde5b4828 shift=24
j=4 j%7=4 j&7=4 memcpy=0x7fffde5b482c *pm=0x7fffde5b4828 shift=32
j=5 j%7=5 j&7=5 memcpy=0x7fffde5b482d *pm=0x7fffde5b4828 shift=40
j=6 j%7=6 j&7=6 memcpy=0x7fffde5b482e *pm=0x7fffde5b4828 shift=48
j=7 j%7=0 j&7=7 memcpy=0x7fffde5b482f *pm=0x7fffde5b4828 shift=56
j=8 j%7=1 j&7=0 memcpy=0x7fffde5b4830 *pm=0x7fffde5b4830 shift=0
j=9 j%7=2 j&7=1 memcpy=0x7fffde5b4831 *pm=0x7fffde5b4830 shift=8
j=10 j%7=3 j&7=2 memcpy=0x7fffde5b4832 *pm=0x7fffde5b4830 shift=16
j=11 j%7=4 j&7=3 memcpy=0x7fffde5b4833 *pm=0x7fffde5b4830 shift=24
j=12 j%7=5 j&7=4 memcpy=0x7fffde5b4834 *pm=0x7fffde5b4830 shift=32
j=13 j%7=6 j&7=5 memcpy=0x7fffde5b4835 *pm=0x7fffde5b4830 shift=40
j=14 j%7=0 j&7=6 memcpy=0x7fffde5b4836 *pm=0x7fffde5b4830 shift=48
j=15 j%7=1 j&7=7 memcpy=0x7fffde5b4837 *pm=0x7fffde5b4830 shift=56
...
j=56 j%7=0 j&7=0 memcpy=0x7fffde5b4860 *pm=0x7fffde5b4860 shift=0
j=57 j%7=1 j&7=1 memcpy=0x7fffde5b4861 *pm=0x7fffde5b4860 shift=8
j=58 j%7=2 j&7=2 memcpy=0x7fffde5b4862 *pm=0x7fffde5b4860 shift=16
j=59 j%7=3 j&7=3 memcpy=0x7fffde5b4863 *pm=0x7fffde5b4860 shift=24
j=60 j%7=4 j&7=4 memcpy=0x7fffde5b4864 *pm=0x7fffde5b4860 shift=32
j=61 j%7=5 j&7=5 memcpy=0x7fffde5b4865 *pm=0x7fffde5b4860 shift=40
j=62 j%7=6 j&7=6 memcpy=0x7fffde5b4866 *pm=0x7fffde5b4860 shift=48
j=63 j%7=0 j&7=7 memcpy=0x7fffde5b4867 *pm=0x7fffde5b4860 shift=56

John



