[Crash-utility] Crash-utility Digest, Vol 178, Issue 21

lijiang lijiang at redhat.com
Fri Aug 7 05:52:22 UTC 2020


On 2020-08-06 15:51, Mathias Krause wrote:
> Hi Lianbo,
> 
> On 06.08.20 at 05:03, lijiang wrote:
>> On 2020-07-30 21:34, crash-utility-request at redhat.com wrote:
>>> Message: 4
>>> Date: Thu, 30 Jul 2020 15:34:30 +0200
>>> From: Mathias Krause <minipli at grsecurity.net>
>>> To: crash-utility at redhat.com
>>> Subject: [Crash-utility] [PATCH 3/3] Support core files with "unusual"
>>> 	layout
>>> Message-ID: <20200730133430.7773-4-minipli at grsecurity.net>
>>> Content-Type: text/plain; charset=US-ASCII
>>>
>>> The netdump code not only gets used for netdump/diskdump files, but also
>>> for kdump core files. These can also be generated with the 'vmss2core'
>>> tool that'll produce a slightly different format that isn't as densely
>>> packed as we expect it to be. In fact, the implicit assumption that the
>>> ELF program headers directly follow the ELF header isn't always true for
>>
>> Hi, Mathias
>>
>> Thanks for your patch. I agree with you, the actual files may differ.
>>
>>> these files, as they may contain a small padding area after the ELF
>>> header -- which is totally conforming in regards to the ELF spec. This
>>> padding in combination with the implicit assumption of densely packed
>>> headers make us interpret the padding bytes as program headers which is
>>> obviously wrong.
>>>
>>> Support these kind of core files too by not blindly assuming the program
>>> headers follow the ELF header but by looking at the program header
>>> offset in the ELF header and use that instead. Add some guarding sanity
>>> checks to decline operating on obviously malicious or broken core files.
>>>
>>> To not needlessly make things too complicated, allow a "padding space" of
>>> up to 128 bytes only.
>>>
>>> Signed-off-by: Mathias Krause <minipli at grsecurity.net>
>>> ---
>>>  netdump.c | 86 ++++++++++++++++++++++++++++++++++++++-----------------
>>>  netdump.h |  2 ++
>>>  2 files changed, 61 insertions(+), 27 deletions(-)
>>>
>>> diff --git a/netdump.c b/netdump.c
>>> index 406416af36bf..0490bb52a8ed 100644
>>> --- a/netdump.c
>>> +++ b/netdump.c
>>> @@ -132,7 +132,7 @@ is_netdump(char *file, ulong source_query)
>>>  		}
>>>  	}
>>>  
>>> -	size = MIN_NETDUMP_ELF_HEADER_SIZE;
>>> +	size = SAFE_NETDUMP_ELF_HEADER_SIZE;
>>>          if ((eheader = (char *)malloc(size)) == NULL) {
>>>                  fprintf(stderr, "cannot malloc minimum ELF header buffer\n");
>>>                  clean_exit(1);
>>> @@ -219,8 +219,22 @@ is_netdump(char *file, ulong source_query)
>>>  		    source_query))
>>>  			goto bailout;
>>>  
>>> -                load32 = (Elf32_Phdr *)
>>> -                        &eheader[sizeof(Elf32_Ehdr)+sizeof(Elf32_Phdr)];
>>
>> For the Program Header table, could it be optional? If present, the value of
>> e_phoff should be non-zero, otherwise its value is zero. Would it be better
>> to check if the value of e_phoff is valid?
> 
> We already ensure the file has at least two program headers by testing
> e_phnum to be >= 2 before. So program headers aren't optional any more
> when reaching this code.
> 

Thank you for the detailed explanation.

> Admitted, while it would be rather unusual to have the program headers
> starting at file offset 0, I don't think we should decline parsing such

Only the ELF header has a fixed position in the file, and it always resides at
the beginning. If e_phoff is zero, something is probably wrong.

> files just because. IMHO, it's rather more sensible to test for the
> minimal number of program headers -- as we already do -- and otherwise
> take e_phoff as-is while still ensuring it's within the file's size
> limit. We could add a warning if e_phoff is zero, though.
> 
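That makes sense to me. For the 64-bit case, I imagine the combined check would
look roughly like this (just an untested sketch, reusing the locals from
is_netdump()):

	if (elf64->e_phoff == 0)
		error(WARNING, "%s: e_phoff is zero\n", file);
	/* okay as long as both PHdrs are within the bytes we have read */
	if (elf64->e_phoff > size - 2 * sizeof(Elf64_Phdr))
		goto bailout;
	/* skip the NOTE program header */
	load64 = (Elf64_Phdr *)&eheader[elf64->e_phoff + sizeof(Elf64_Phdr)];
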
>>
>>> +		if (elf32->e_phoff != sizeof(Elf32_Ehdr)) {
>>> +			if (CRASHDEBUG(1))
>>> +				error(WARNING, "%s: first PHdr not following "
>>> +					"EHdr (PHdr offset = %u)", file,
> 
> Just noticed the missing '\n' here. Will fix in v2, if needed.
> 
Yes.  Thanks.

BTW: there are some coding style issues, such as the indentation. I found them
with the checkpatch.pl script.

>>> +					elf32->e_phoff);
>>> +			/* it's okay as long as we've read enough data */
>>> +			if (elf32->e_phoff > size - 2 * sizeof(Elf32_Phdr)) {
>>> +				error(WARNING, "%s: PHdr to far into file!\n",
>>> +					file);
>>> +				goto bailout;
>>> +			}
>>> +		}
>>> +
>>> +		/* skip the NOTE program header */
>>> +		load32 = (Elf32_Phdr *)
>>> +			&eheader[elf32->e_phoff+sizeof(Elf32_Phdr)];
>>>  
>>>  		if ((load32->p_offset & (MIN_PAGE_SIZE-1)) ||
>>>  		    (load32->p_align == 0))
>>> @@ -291,8 +305,22 @@ is_netdump(char *file, ulong source_query)
>>>  		    source_query))
>>>  			goto bailout;
>>>  
>>> -                load64 = (Elf64_Phdr *)
>>> -                        &eheader[sizeof(Elf64_Ehdr)+sizeof(Elf64_Phdr)];
>>> +		if (elf64->e_phoff != sizeof(Elf64_Ehdr)) {
>>> +			if (CRASHDEBUG(1))
>>> +				error(WARNING, "%s: first PHdr not following "
>>> +					"EHdr (PHdr offset = %u)", file,
> 
> Here too.
> 
>>> +					elf64->e_phoff);
>>> +			/* it's okay as long as we've read enough data */
>>> +			if (elf64->e_phoff > size - 2 * sizeof(Elf64_Phdr)) {
>>> +				error(WARNING, "%s: PHdr to far into file!\n",
>>> +					file);
>>> +				goto bailout;
>>> +			}
>>> +		}
>>> +
>>> +		/* skip the NOTE program header */
>>> +		load64 = (Elf64_Phdr *)
>>> +			&eheader[elf64->e_phoff+sizeof(Elf64_Phdr)];
>>>  
>>>  		if ((load64->p_offset & (MIN_PAGE_SIZE-1)) ||
>>>  		    (load64->p_align == 0))
>>> @@ -353,9 +381,8 @@ is_netdump(char *file, ulong source_query)
>>>  			clean_exit(1);
>>>  		}
>>>          	nd->notes32 = (Elf32_Phdr *)
>>> -		    &nd->elf_header[sizeof(Elf32_Ehdr)];
>>> -        	nd->load32 = (Elf32_Phdr *)
>>> -		    &nd->elf_header[sizeof(Elf32_Ehdr)+sizeof(Elf32_Phdr)];
>>> +		    &nd->elf_header[nd->elf32->e_phoff];
>>> +		nd->load32 = nd->notes32 + 1;
>>>  		if (format == NETDUMP_ELF32)
>>>  			nd->page_size = (uint)nd->load32->p_align;
>>>                  dump_Elf32_Ehdr(nd->elf32);
>>> @@ -392,9 +419,8 @@ is_netdump(char *file, ulong source_query)
>>>                          clean_exit(1);
>>>                  }
>>>                  nd->notes64 = (Elf64_Phdr *)
>>> -                    &nd->elf_header[sizeof(Elf64_Ehdr)];
>>> -                nd->load64 = (Elf64_Phdr *)
>>> -                    &nd->elf_header[sizeof(Elf64_Ehdr)+sizeof(Elf64_Phdr)];
>>> +                    &nd->elf_header[nd->elf64->e_phoff];
>>> +                nd->load64 = nd->notes64 + 1;
>>>  		if (format == NETDUMP_ELF64)
>>>  			nd->page_size = (uint)nd->load64->p_align;
>>>                  dump_Elf64_Ehdr(nd->elf64);
>>> @@ -469,8 +495,8 @@ resize_elf_header(int fd, char *file, char **eheader_ptr, char **sect0_ptr,
>>>  	case NETDUMP_ELF32:
>>>  	case KDUMP_ELF32:
>>>  		num_pt_load_segments = elf32->e_phnum - 1;
>>> -		header_size = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr) +
>>> -			(sizeof(Elf32_Phdr) * num_pt_load_segments);
>>> +		header_size = MAX(sizeof(Elf32_Ehdr), elf32->e_phoff) +
>>> +			(sizeof(Elf32_Phdr) * (num_pt_load_segments + 1));
>>>  		break;
>>>  
>>>  	case NETDUMP_ELF64:
>>> @@ -513,8 +539,8 @@ resize_elf_header(int fd, char *file, char **eheader_ptr, char **sect0_ptr,
>>>  		} else
>>>  			num_pt_load_segments = elf64->e_phnum - 1;
>>>  
>>> -		header_size = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr) +
>>> -			(sizeof(Elf64_Phdr) * num_pt_load_segments);
>>> +		header_size = MAX(sizeof(Elf64_Ehdr), elf64->e_phoff) +
>>> +			(sizeof(Elf64_Phdr) * (num_pt_load_segments + 1));
>>>  		break;
>>>  	}
>>>  
>>> @@ -544,7 +570,7 @@ resize_elf_header(int fd, char *file, char **eheader_ptr, char **sect0_ptr,
>>>  	{
>>>  	case NETDUMP_ELF32:
>>>  	case KDUMP_ELF32:
>>> -		load32 = (Elf32_Phdr *)&eheader[sizeof(Elf32_Ehdr)+sizeof(Elf32_Phdr)];
>>> +		load32 = (Elf32_Phdr *)&eheader[elf32->e_phoff+sizeof(Elf32_Phdr)];
>>>  		p_offset32 = load32->p_offset;
>>>  		for (i = 0; i < num_pt_load_segments; i++, load32 += 1) {
>>>  			if (load32->p_offset && 
>>> @@ -556,7 +582,7 @@ resize_elf_header(int fd, char *file, char **eheader_ptr, char **sect0_ptr,
>>>  
>>>  	case NETDUMP_ELF64:
>>>  	case KDUMP_ELF64:
>>> -		load64 = (Elf64_Phdr *)&eheader[sizeof(Elf64_Ehdr)+sizeof(Elf64_Phdr)];
>>> +		load64 = (Elf64_Phdr *)&eheader[elf64->e_phoff+sizeof(Elf64_Phdr)];
>>>  		p_offset64 = load64->p_offset;
>>>  		for (i = 0; i < num_pt_load_segments; i++, load64 += 1) {
>>>  			if (load64->p_offset &&
>>> @@ -4459,8 +4485,12 @@ proc_kcore_init_32(FILE *fp, int kcore_fd)
>>>  		close(fd);
>>>  
>>>  	elf32 = (Elf32_Ehdr *)&eheader[0];
>>> -	notes32 = (Elf32_Phdr *)&eheader[sizeof(Elf32_Ehdr)];
>>> -	load32 = (Elf32_Phdr *)&eheader[sizeof(Elf32_Ehdr)+sizeof(Elf32_Phdr)];
>>> +	if (elf32->e_phoff > sizeof(eheader) - 2 * sizeof(Elf32_Phdr)) {
>>> +		error(INFO, "/proc/kcore: ELF program header offset too big!\n");
>>> +		return FALSE;
>>> +	}
>>> +	notes32 = (Elf32_Phdr *)&eheader[elf32->e_phoff];
>>> +	load32 = notes32 + 1;
>>>  
>>>  	pkd->segments = elf32->e_phnum - 1;
>>>  
>>> @@ -4479,9 +4509,8 @@ proc_kcore_init_32(FILE *fp, int kcore_fd)
>>>  	}
>>>  
>>>  	BCOPY(&eheader[0], &pkd->elf_header[0], pkd->header_size);	
>>> -	pkd->notes32 = (Elf32_Phdr *)&pkd->elf_header[sizeof(Elf32_Ehdr)];
>>> -	pkd->load32 = (Elf32_Phdr *)
>>> -		&pkd->elf_header[sizeof(Elf32_Ehdr)+sizeof(Elf32_Phdr)];
>>> +	pkd->notes32 = (Elf32_Phdr *)&pkd->elf_header[elf32->e_phoff];
>>> +	pkd->load32 = pkd->notes32 + 1;
>>>  	pkd->flags |= KCORE_ELF32;
>>>  	
>>>  	kcore_memory_dump(CRASHDEBUG(1) ? fp : pc->nullfp);
>>> @@ -4529,8 +4558,12 @@ proc_kcore_init_64(FILE *fp, int kcore_fd)
>>>  		close(fd);
>>>  
>>>  	elf64 = (Elf64_Ehdr *)&eheader[0];
>>> -	notes64 = (Elf64_Phdr *)&eheader[sizeof(Elf64_Ehdr)];
>>> -	load64 = (Elf64_Phdr *)&eheader[sizeof(Elf64_Ehdr)+sizeof(Elf64_Phdr)];
>>> +	if (elf64->e_phoff > sizeof(eheader) - 2 * sizeof(Elf64_Phdr)) {
>>> +		error(INFO, "/proc/kcore: ELF program header offset too big!\n");
>>> +		return FALSE;
>>> +	}
>>> +	notes64 = (Elf64_Phdr *)&eheader[elf64->e_phoff];
>>> +	load64 = notes64 + 1;
>>>  
>>>  	pkd->segments = elf64->e_phnum - 1;
>>>  
>>> @@ -4550,9 +4583,8 @@ proc_kcore_init_64(FILE *fp, int kcore_fd)
>>>  	}
>>>  
>>>  	BCOPY(&eheader[0], &pkd->elf_header[0], pkd->header_size);	
>>> -	pkd->notes64 = (Elf64_Phdr *)&pkd->elf_header[sizeof(Elf64_Ehdr)];
>>> -	pkd->load64 = (Elf64_Phdr *)
>>> -		&pkd->elf_header[sizeof(Elf64_Ehdr)+sizeof(Elf64_Phdr)];
>>> +	pkd->notes64 = (Elf64_Phdr *)&pkd->elf_header[elf64->e_phoff];
>>> +	pkd->load64 = pkd->notes64 + 1;
>>>  	pkd->flags |= KCORE_ELF64;
>>>  	
>>>  	kcore_memory_dump(CRASHDEBUG(1) ? fp : pc->nullfp);
>>> diff --git a/netdump.h b/netdump.h
>>> index 7fa04f7c3a0f..844279bc4a00 100644
>>> --- a/netdump.h
>>> +++ b/netdump.h
>>> @@ -25,6 +25,8 @@
>>>          sizeof(Elf64_Ehdr)+sizeof(Elf64_Phdr)+sizeof(Elf64_Phdr)
>>>  #define MIN_NETDUMP_ELF_HEADER_SIZE \
>>>          MAX(MIN_NETDUMP_ELF32_HEADER_SIZE, MIN_NETDUMP_ELF64_HEADER_SIZE)
>>> +#define SAFE_NETDUMP_ELF_HEADER_SIZE \
>>> +	(MIN_NETDUMP_ELF_HEADER_SIZE+128)
>>>  

Would it be better to define the macros like this? That would avoid using the
magic number directly.

 #define MIN_NETDUMP_ELF32_HEADER_SIZE \
-        sizeof(Elf32_Ehdr)+sizeof(Elf32_Phdr)+sizeof(Elf32_Phdr)
+        sizeof(Elf32_Ehdr)+sizeof(Elf32_Phdr)+sizeof(Elf32_Phdr)+sizeof(Elf32_Shdr)
 #define MIN_NETDUMP_ELF64_HEADER_SIZE \
-        sizeof(Elf64_Ehdr)+sizeof(Elf64_Phdr)+sizeof(Elf64_Phdr)
+        sizeof(Elf64_Ehdr)+sizeof(Elf64_Phdr)+sizeof(Elf64_Phdr)+sizeof(Elf64_Shdr)
 #define MIN_NETDUMP_ELF_HEADER_SIZE \
         MAX(MIN_NETDUMP_ELF32_HEADER_SIZE, MIN_NETDUMP_ELF64_HEADER_SIZE)

Actually, it would be better not to assume the size at all, but to read the ELF
header first and then decide how to access the program header and section header
tables based on e_phnum, e_shnum, e_phentsize, e_shentsize, e_phoff, e_shoff,
etc. But that would make the code logic more complicated.
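Roughly something like the following (only an illustrative sketch, not meant to
fit the current netdump.c structure as-is):

	Elf64_Ehdr ehdr;
	Elf64_Phdr *phdrs;
	size_t phsize;

	/* read the fixed-size ELF header first */
	if (read(fd, &ehdr, sizeof(ehdr)) != sizeof(ehdr))
		return FALSE;

	/* then read the PHdr table from wherever e_phoff points to */
	phsize = (size_t)ehdr.e_phnum * ehdr.e_phentsize;
	if ((phdrs = malloc(phsize)) == NULL)
		return FALSE;
	if (lseek(fd, ehdr.e_phoff, SEEK_SET) < 0 ||
	    read(fd, phdrs, phsize) != (ssize_t)phsize) {
		free(phdrs);
		return FALSE;
	}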

>> Can you please describe more details why the size of padding area is 128 bytes?
>> Are there any particular reasons?
> 
> The reasoning behind the 128 bytes limit is simple: to not complicate
> things too much. ;)
> The rest of the code in netdump.c assumes the header is contiguous, i.e.
> the ELF header and the program headers can be read and put into a single
> (small) buffer. Assuming the "padding" after the ELF header to the first
> program header isn't all that big in the exceptional case made me go for
> this arbitrary (but effective) limit.
> 
> The padding I've observed is as follows:
> 
> $ readelf -h vmw.core
> ELF Header:
>   Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
>   Class:                             ELF64
>   Data:                              2's complement, little endian
>   Version:                           1 (current)
>   OS/ABI:                            UNIX - System V
>   ABI Version:                       0
>   Type:                              CORE (Core file)
>   Machine:                           Advanced Micro Devices X86-64
>   Version:                           0x1
>   Entry point address:               0x0
>   Start of program headers:          128 (bytes into file)
>   Start of section headers:          64 (bytes into file)
>   Flags:                             0x0
>   Size of this header:               64 (bytes)
>   Size of program headers:           56 (bytes)
>   Number of program headers:         8
>   Size of section headers:           64 (bytes)
>   Number of section headers:         1
>   Section header string table index: 0
> 
> So here the section headers have been generated in front of the program

It may be related to how the tool generates the ELF headers.

> headers. Thereby the "padding bytes" are actually the section headers,
> occupying 64 bytes, making the program headers start at offset 128
> instead of the "usual" 64.
> 
Indeed, the section header table is optional in some cases.
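
For reference, my reading of the readelf output above gives the following layout:

	offset   0: Elf64_Ehdr      64 bytes (e_ehsize)
	offset  64: Elf64_Shdr      1 * 64 bytes (e_shoff = 64)
	offset 128: Elf64_Phdr[8]   8 * 56 bytes (e_phoff = 128)

i.e. the 64 "padding" bytes between the ELF header and the program headers are
exactly the single section header.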

Thanks.
Lianbo

>>
>> Thanks.
>> Lianbo
> 
> Thanks for your review!
> 
> Mathias
> 
>>
>>>  #define NT_TASKSTRUCT 4
>>>  #define NT_DISKDUMP   0x70000001
>>> -- 2.20.1
> 



