[edk2-devel] [PATCH 2/3] ArmPkg/CpuDxe: Perform preliminary NX remap of free memory

Ard Biesheuvel ardb at kernel.org
Wed Feb 8 22:52:49 UTC 2023


On Wed, 8 Feb 2023 at 19:57, Taylor Beebe <t at taylorbeebe.com> wrote:
>
>
>
> On 2/8/2023 10:49 AM, Ard Biesheuvel wrote:
> > On Wed, 8 Feb 2023 at 19:32, Marvin Häuser <mhaeuser at posteo.de> wrote:
> >>
> >> Thanks!! :) Comments inline.
> >>
> >>> On 8. Feb 2023, at 18:58, Ard Biesheuvel <ardb at kernel.org> wrote:
> >>>
> >>> The DXE core implementation of PcdDxeNxMemoryProtectionPolicy already
> >>> contains an assertion that EfiConventionalMemory and EfiBootServicesData
> >>> are subjected to the same policy when it comes to the use of NX
> >>> permissions. The reason for this is that we may otherwise end up with
> >>> unbounded recursion in the page table code, given that allocating a page
> >>> table would then involve a permission attribute change, and this could
> >>> result in the need for a block entry to be split, which would trigger
> >>> the allocation of a page table recursively.
> >>>
> >>> For the same reason, a shortcut exists in ApplyMemoryProtectionPolicy()
> >>> where, instead of setting the memory attributes unconditionally, we
> >>> compare the NX policies and avoid touching the page tables if they are
> >>> the same for the old and the new memory types. Without this shortcut, we
> >>> may end up in a situation where, as the CPU arch protocol DXE driver is
> >>> ramping up, the same unbounded recursion is triggered, due to the fact
> >>> that the NX policy for EfiConventionalMemory has not been applied yet.
> >>>
> >>> To break this cycle, let's remap all EfiConventionalMemory regions
> >>> according to the NX policy for EfiBootServicesData before exposing the
> >>> CPU arch protocol to the DXE core and other drivers. This ensures that
> >>> creating EfiBootServicesData allocations does not result in memory
> >>> attribute changes, and therefore no recursion.
> >>>
> >>> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> >>> ---
> >>> ArmPkg/Drivers/CpuDxe/CpuDxe.c   | 77 ++++++++++++++++++++
> >>> ArmPkg/Drivers/CpuDxe/CpuDxe.inf |  2 +
> >>> 2 files changed, 79 insertions(+)
> >>>
> >>> diff --git a/ArmPkg/Drivers/CpuDxe/CpuDxe.c b/ArmPkg/Drivers/CpuDxe/CpuDxe.c
> >>> index d04958e79e52..83fd6fd4e476 100644
> >>> --- a/ArmPkg/Drivers/CpuDxe/CpuDxe.c
> >>> +++ b/ArmPkg/Drivers/CpuDxe/CpuDxe.c
> >>> @@ -11,6 +11,8 @@
> >>>
> >>> #include <Guid/IdleLoopEvent.h>
> >>>
> >>> +#include <Library/MemoryAllocationLib.h>
> >>> +
> >>> BOOLEAN  mIsFlushingGCD;
> >>>
> >>> /**
> >>> @@ -227,6 +229,69 @@ InitializeDma (
> >>>    CpuArchProtocol->DmaBufferAlignment = ArmCacheWritebackGranule ();
> >>> }
> >>>
> >>> +STATIC
> >>> +VOID
> >>> +RemapUnusedMemoryNx (
> >>> +  VOID
> >>> +  )
> >>> +{
>
> This feels somewhat hacky, but it's strictly better than what we have
> right now and adds value, so I'm all for this change.
>
> >>> +  UINT64                     TestBit;
> >>> +  UINTN                      MemoryMapSize;
> >>> +  UINTN                      MapKey;
> >>> +  UINTN                      DescriptorSize;
> >>> +  UINT32                     DescriptorVersion;
> >>> +  EFI_MEMORY_DESCRIPTOR      *MemoryMap;
> >>> +  EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry;
> >>> +  EFI_MEMORY_DESCRIPTOR      *MemoryMapEnd;
> >>> +  EFI_STATUS                 Status;
> >>> +
> >>> +  TestBit = LShiftU64 (1, EfiBootServicesData);
> >>> +  if ((PcdGet64 (PcdDxeNxMemoryProtectionPolicy) & TestBit) == 0) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  MemoryMapSize = 0;
> >>> +  MemoryMap     = NULL;
> >>> +
> >>> +  Status = gBS->GetMemoryMap (
> >>> +                  &MemoryMapSize,
> >>> +                  MemoryMap,
> >>> +                  &MapKey,
> >>> +                  &DescriptorSize,
> >>> +                  &DescriptorVersion
> >>> +                  );
> >>> +  ASSERT (Status == EFI_BUFFER_TOO_SMALL);
> >>> +  do {
> >>> +    MemoryMap = (EFI_MEMORY_DESCRIPTOR *)AllocatePool (MemoryMapSize);
> >>
> >> nitpick: *If* there is a V2, maybe drop the cast?
> >>
> >>> +    ASSERT (MemoryMap != NULL);
> >>
> >> I know ASSERTing for NULL is a common pattern for some reason, but I'd rather we not add more code that follows it.
> >>
> >>> +    Status = gBS->GetMemoryMap (
> >>> +                    &MemoryMapSize,
> >>> +                    MemoryMap,
> >>> +                    &MapKey,
> >>> +                    &DescriptorSize,
> >>> +                    &DescriptorVersion
> >>> +                    );
> >>
> >> Another nitpick, isn't it common practice to call the Core* functions directly within *Core? I know there is code that doesn't, but the former should be more efficient.
> >>
> >>> +    if (EFI_ERROR (Status)) {
> >>> +      FreePool (MemoryMap);
> >>> +    }
> >>> +  } while (Status == EFI_BUFFER_TOO_SMALL);
> >>
> >> Is this guaranteed to terminate? I mean, I get the idea - try again with the larger size, and when allocating the bigger buffer, its potential new entry will already be accounted for. However, I saw downstream code that tried something like this (it actually added a constant overhead ahead of time), bounded by something like 5 iterations. That might just have been defensive programming (it probably was), but it's not trivial to verify, I think, is it?
> >>
> >> Regarding the added constant, the spec actually says the following, which obviously is just to shortcut a second round of GetMemoryMap(), but still:
> >> "The actual size of the buffer allocated for the consequent call to GetMemoryMap() should be bigger then the value returned in MemoryMapSize"
> >>
> >> It appears the spec did not define a canonical way to call GetMemoryMap(), did it? :(
> >>
> >
> > This is all copy-pasted from MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
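For reference, the bounded variant Marvin describes can be sketched in a few lines. This is a hedged, self-contained toy, not EDK2 code: get_memory_map() below is a hypothetical stand-in for gBS->GetMemoryMap() that reports the size needed and fails when the caller's buffer is too small, and the descriptor count grows by one per retry to model the pool allocation made between the two calls. The point is the defensive shape: add slack beyond the reported size (per the spec wording quoted above) and cap the retries so termination is trivially provable.

```c
#include <stdlib.h>

enum { OK = 0, BUFFER_TOO_SMALL = 1 };
#define DESC_SIZE 48   /* arbitrary toy descriptor size */

static size_t current_descriptors = 10;

/* Toy stand-in for gBS->GetMemoryMap(): on failure, reports the
 * size currently needed via *size. */
static int get_memory_map(size_t *size, void *buffer)
{
  size_t needed = current_descriptors * DESC_SIZE;
  if (buffer == NULL || *size < needed) {
    *size = needed;
    return BUFFER_TOO_SMALL;
  }
  *size = needed;
  return OK;
}

int fetch_memory_map(void **map_out, size_t *size_out)
{
  size_t size   = 0;
  void   *map   = NULL;
  int    status = get_memory_map(&size, NULL);   /* size query */

  /* Bounded retry: slack covers the descriptor added by our own
   * pool allocation, and the cap guarantees the loop terminates. */
  for (int attempt = 0; attempt < 5 && status == BUFFER_TOO_SMALL; attempt++) {
    free(map);
    size += 2 * DESC_SIZE;        /* slack beyond the reported size */
    map = malloc(size);
    if (map == NULL) {
      return -1;
    }
    current_descriptors++;        /* the allocation itself grows the map */
    status = get_memory_map(&size, map);
  }

  if (status != OK) {
    free(map);
    return -1;
  }
  *map_out  = map;
  *size_out = size;
  return 0;
}
```

With the slack in place, the toy converges on the second GetMemoryMap() call; without it, the retry cap still guarantees the loop cannot spin forever.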
> >
> >>> +
> >>> +  ASSERT_EFI_ERROR (Status);
> >>> +
> >>> +  MemoryMapEntry = MemoryMap;
> >>> +  MemoryMapEnd   = (EFI_MEMORY_DESCRIPTOR *)((UINT8 *)MemoryMap + MemoryMapSize);
> >>> +  while ((UINTN)MemoryMapEntry < (UINTN)MemoryMapEnd) {
> >>> +    if (MemoryMapEntry->Type == EfiConventionalMemory) {
> >>> +      ArmSetMemoryRegionNoExec (
> >>> +        MemoryMapEntry->PhysicalStart,
> >>> +        EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages)
> >>> +        );
> >>> +    }
> >>> +
> >>> +    MemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >>> +  }
> >>> +}
> >>> +
> >>> EFI_STATUS
> >>> CpuDxeInitialize (
> >>>    IN EFI_HANDLE        ImageHandle,
> >>> @@ -240,6 +305,18 @@ CpuDxeInitialize (
> >>>
> >>>    InitializeDma (&mCpu);
> >>>
> >>> +  //
> >>> +  // Once we install the CPU arch protocol, the DXE core's memory
> >>> +  // protection routines will invoke it to manage the permissions of page
> >>> +  // allocations as they are created. Given that this includes pages
> >>> +  // allocated for page tables by this driver, we must ensure that unused
> >>> +  // memory is mapped with the same permissions as boot services data
> >>> +  // regions. Otherwise, we may end up with unbounded recursion, due to
> >>> +  // the fact that updating permissions on a newly allocated page table
> >>> +  // may trigger a block entry split, which triggers a page table
> >>> +  // allocation, and so on.
> >>> +  //
> >>> +  RemapUnusedMemoryNx ();
> >>
> >> Hmm. I might be too tired, but why is this not already done by InitializeDxeNxMemoryProtectionPolicy(), assuming BS_Data and ConvMem have the same XP configuration?
> >>
> >> This reassures me that the CPU Arch protocol producers should be linked into DxeCore rather than loaded at some arbitrary point in time... Unrelated to the patch, of course.
> >>
> >
> > The ordering here is a bit tricky. As soon as the CPU arch protocol is
> > exposed, every allocation will be remapped individually, resulting in
> > page table splits and therefore recursion.
> >
> >
>
> As Ard says, this is done in InitializeDxeNxMemoryProtectionPolicy(),
> but at that point the calls to ApplyMemoryProtectionPolicy() will cause
> the infinite recursion issue when you no longer skip applying attributes
> if the to-from types have the same NX policy.
>
> Yes, it is arbitrary that protection is linked to the CPU Arch Protocol.
> But, it would also be bad practice to apply the memory protection policy
> before the interface for manipulating attributes has been published for
> use by modules outside of the core. Page table and translation table
> manipulation is architecturally defined, so I think a good long term
> goal should be to allow the manipulation of attributes via a library
> rather than a protocol in DXE as ARM platforms currently do.
>

Note that the routine above is slightly dodgy in the sense that the
memory map might change while it is being traversed, due to the page
allocations that result from splitting block entries. This is why it
is important that EfiConventionalMemory and EfiBootServicesData use
the same NX policy, so that such changes to the memory map do not
affect the outcome of the traversal.
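The recursion hazard this thread keeps circling back to can be modeled in a few lines. The following is a toy C sketch with hypothetical names, not the actual DxeCore code: apply_policy() plays the role of ApplyMemoryProtectionPolicy() with its NX-comparison shortcut, and set_memory_attributes() models an attribute change whose block-entry split costs a page-table page, i.e. another free-to-BS_Data conversion. When free memory has already been premapped per the BS_Data NX policy (what RemapUnusedMemoryNx() achieves), the shortcut fires and the recursion bottoms out immediately; flip the flag and the mutual recursion never terminates (caught here by a depth cap).

```c
#include <assert.h>
#include <stdbool.h>

enum { FREE_MEM = 0, BS_DATA = 1 };

/* NX bit per memory type in the (toy) protection policy: both set,
 * matching the requirement that EfiConventionalMemory and
 * EfiBootServicesData share the same NX policy. */
static bool nx_policy[2] = { true, true };

/* Has free memory already been remapped NX before the CPU arch
 * protocol is published? */
static bool free_mem_premapped_nx = false;

static int depth = 0, max_depth = 0;

static void apply_policy(int from, int to);

/* An attribute change may split a block entry, which costs a fresh
 * page-table page: itself a FREE_MEM -> BS_DATA conversion. */
static void set_memory_attributes(void)
{
  apply_policy(FREE_MEM, BS_DATA);
}

static void apply_policy(int from, int to)
{
  depth++;
  assert(depth < 32 && "unbounded recursion");
  if (depth > max_depth) {
    max_depth = depth;
  }

  /* The shortcut: leave the page tables alone when the effective
   * NX state of the region already matches the target policy. */
  bool from_is_nx =
    (from == FREE_MEM) ? free_mem_premapped_nx : nx_policy[from];
  if (from_is_nx != nx_policy[to]) {
    set_memory_attributes();
  }
  depth--;
}
```

With free_mem_premapped_nx set, a BS_Data allocation's policy application returns at depth 1; without it, each attribute change triggers another, which is the unbounded recursion the patch breaks.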

If we want to move this into generic code, I think it should be
feasible for the DXE core to
- grab the CPU arch protocol but don't make it available globally
right away, so that AllocatePages() will not recurse
- call the CPU arch protocol to map all EfiConventionalMemory as NX
(with the caveat above)
- proceed with remapping the active regions, which will now set the
attributes on all EfiBootServicesData regions again.

That would make the thing a bit more robust, but I am reluctant to add
this to generic code and deal with the fallout on x86 or RISC-V if it
breaks something there.


View/Reply Online (#99821): https://edk2.groups.io/g/devel/message/99821



