[edk2-devel] [PATCH 18/23] OvmfPkg: Enable Tdx in SecMain.c

Yao, Jiewen jiewen.yao at intel.com
Wed Aug 25 16:28:51 UTC 2021


Comment below:

> -----Original Message-----
> From: kraxel at redhat.com <kraxel at redhat.com>
> Sent: Wednesday, August 25, 2021 10:52 PM
> To: Yao, Jiewen <jiewen.yao at intel.com>
> Cc: devel at edk2.groups.io; Ard Biesheuvel <ardb at kernel.org>; Xu, Min M
> <min.m.xu at intel.com>; Ard Biesheuvel <ardb+tianocore at kernel.org>; Justen,
> Jordan L <jordan.l.justen at intel.com>; Brijesh Singh <brijesh.singh at amd.com>;
> Erdem Aktas <erdemaktas at google.com>; James Bottomley
> <jejb at linux.ibm.com>; Tom Lendacky <thomas.lendacky at amd.com>
> Subject: Re: [edk2-devel] [PATCH 18/23] OvmfPkg: Enable Tdx in SecMain.c
> 
>   Hi,
> 
> > > > In the TDVF design, we chose to use the TDX-defined initial pointer to
> > > > pass the initial memory information (TD_HOB), instead of the CMOS region.
> > > > Please help me understand what is the real concern here.
> > >
> > > Well, qemu settled on the fw_cfg design for a number of reasons.  It is
> > > pretty flexible for example.  The firmware can ask for the information
> > > it needs at any time and can store it as it pleases.
> > >
> > > I'd suggest to not take it for granted that an additional alternative
> > > way to do basically the same thing will be accepted to upstream qemu.
> > > Submit your patches to qemu-devel to discuss that.
> >
> > [Jiewen] I think Intel Linux team is doing that separately.
> 
> Please ask them to send the patches.  Changes like this obviously need
> coordination and agreement between qemu and edk2 projects, and ideally
> both guest and host code is reviewed in parallel.

[Jiewen] Sure.

I have added Yamahata, Isaku <isaku.yamahata at intel.com> here. He can help answer KVM/QEMU-related questions.

Some references for QEMU:
https://lists.nongnu.org/archive/html/qemu-devel/2021-07/msg01682.html
In patchwork: https://patchwork.kernel.org/project/qemu-devel/cover/cover.1625704980.git.isaku.yamahata@intel.com/

And I guess you probably need to look at the KVM side as well.



> 
> > > Most fw_cfg entries are constant anyway, so we can easily avoid a second
> > > call by caching the results of the first call if that helps TDVF.
> >
> > [Jiewen] It is possible. We can have multiple ways:
> > 1) Per-usage cache. However, that means every driver needs its own way to
> cache the data, either a PCD or a HOB in the PEI phase. Also, driver A needs
> to know clearly that driver B will use the same data; only then will it
> cache, otherwise it will not. I treat that as a huge burden for the developer.
> > 2) Always cache, per driver. That means every driver needs to follow the
> same pattern: search the cache; on a miss, get the data and cache it. But it
> still cannot guarantee the data order across different paths architecturally.
> > 3) Always cache in one common driver. One driver can get all the data at
> one time and cache it. That resolves the data-order problem. I am not sure if
> that is desired, but I cannot see much difference from passing the data at
> the entry point.
> 
> Not investigated yet.  seabios fw_cfg handling is close to (3) for small
> items (not kernel or initrd or other large data sets) so I think I would
> look into that first.

[Jiewen] I don't think it is urgent at this moment.
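For reference, the common-cache idea in option (3) above could be sketched roughly as below. This is a hypothetical illustration, not real edk2 code: QemuFwCfgCacheRead() and the item numbers are made up, and the device access is mocked by a lookup so the sketch stands alone.

```c
/*
 * Hypothetical sketch of option (3): one common driver fetches each
 * fw_cfg item once and serves later readers from a cache, so every
 * consumer sees the same bytes regardless of execution order.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CACHE_SLOTS 8

typedef struct {
  uint16_t Item;       /* fw_cfg selector key */
  uint8_t  Data[64];   /* cached payload */
  uint32_t Size;
  int      Valid;
} CACHE_ENTRY;

static CACHE_ENTRY mCache[CACHE_SLOTS];
static int mFetchCount;  /* counts trips to the (mock) device */

/* Mock of the real device read: in firmware this would be the
   select + read sequence against the QEMU fw_cfg registers. */
static uint32_t MockDeviceRead(uint16_t Item, uint8_t *Buf, uint32_t Max) {
  mFetchCount++;
  const char *payload = (Item == 0x19) ? "etc/e820" : "unknown";
  uint32_t n = (uint32_t)strlen(payload);
  if (n > Max) n = Max;
  memcpy(Buf, payload, n);
  return n;
}

/* Search the cache first; on a miss, fetch once and remember it. */
uint32_t QemuFwCfgCacheRead(uint16_t Item, uint8_t *Buf, uint32_t Max) {
  for (int i = 0; i < CACHE_SLOTS; i++) {
    if (mCache[i].Valid && mCache[i].Item == Item) {
      uint32_t n = mCache[i].Size < Max ? mCache[i].Size : Max;
      memcpy(Buf, mCache[i].Data, n);
      return n;
    }
  }
  for (int i = 0; i < CACHE_SLOTS; i++) {
    if (!mCache[i].Valid) {
      mCache[i].Size = MockDeviceRead(Item, mCache[i].Data,
                                      sizeof mCache[i].Data);
      mCache[i].Item  = Item;
      mCache[i].Valid = 1;
      uint32_t n = mCache[i].Size < Max ? mCache[i].Size : Max;
      memcpy(Buf, mCache[i].Data, n);
      return n;
    }
  }
  return 0; /* cache full; a real driver would allocate more slots */
}
```

Because all consumers go through the one cache, the bytes are read (and could be measured) exactly once, independent of which driver asks first.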


> 
> > > > Using HOB in the initial pointer can be an alternative pattern to
> > > > mitigate such risk. We just need measure them once then any component
> > > > can use that. Also, it can help the people to evaluate the RTMR hash
> > > > and TD event log data for the configuration in attestation flow,
> > > > because the configuration is independent with the code execution flow.
> > >
> > > Well, it covers only the memory map, correct?  All other configuration
> > > is still loaded from fw_cfg.  I can't see the improvement here.
> >
> > [Jiewen] At this point of time, memory map is the most important
> > parameter in the TD Hob, because we do need the memory information at
> > the TD entrypoint. That is mandatory for any TD boot.
> 
> Well, I can see that the memory map is kind of special here because you
> need that quite early in the firmware initialization workflow.

[Jiewen] That is correct.


> 
> > The fw_cfg is still allowed in the TDVF design guide, just because we
> > feel it is a burden to convert everything suddenly.
> 
> What is the longer-term plan here?
> 
> Does it make sense to special-case the memory map?
> 
> If we want to handle other fw_cfg items that way too later on, shouldn't we
> better check how we can improve the fw_cfg interface so it works better
> with confidential computing?

[Jiewen] So far, my hope is to limit fw_cfg use as much as possible.
My worry is that we have to measure fw_cfg everywhere; if we miss one place, it becomes a completeness vulnerability for trusted computing.

I also think we could add measurement code inside the fw_cfg get function.
Then we need to improve the FwCfg API. The current style, QemuFwCfgSelectItem() + QemuFwCfgReadxxx(), is not friendly for measurement. For example, we could combine them into a single QemuFwCfgSelectRead().
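A rough sketch of that combined call might look like the following. Everything here is hypothetical: the device access and the measurement are both mocked (a real TDVF would hash the bytes and extend an RTMR), and the function names are illustrative, not an existing QemuFwCfgLib API.

```c
/*
 * Hypothetical sketch of a combined QemuFwCfgSelectRead():
 * select, read, and measure happen in one call, so no caller
 * can forget the measurement step.
 */
#include <assert.h>
#include <stdint.h>

static uint64_t mMockRtmr;     /* stand-in for an RTMR register    */
static uint16_t mSelectedItem; /* stand-in for the fw_cfg selector */

static void MockSelect(uint16_t Item) { mSelectedItem = Item; }

static void MockReadBytes(uint8_t *Buf, uint32_t Size) {
  /* Pretend the device streams Size bytes for the selected item. */
  for (uint32_t i = 0; i < Size; i++)
    Buf[i] = (uint8_t)(mSelectedItem ^ i);
}

/* Mock "extend": a real implementation would SHA-384 the data and
   extend an RTMR; here we just fold the bytes into a rolling value. */
static void MockMeasure(const uint8_t *Buf, uint32_t Size) {
  for (uint32_t i = 0; i < Size; i++)
    mMockRtmr = mMockRtmr * 131 + Buf[i];
}

/* One call: select the item, read it, and measure what was read. */
void QemuFwCfgSelectRead(uint16_t Item, uint8_t *Buf, uint32_t Size) {
  MockSelect(Item);
  MockReadBytes(Buf, Size);
  MockMeasure(Buf, Size);
}
```

The point of the shape, rather than the mock internals, is that the measurement is a side effect of every read, which addresses the "if we miss one place" completeness worry above.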

The QemuFwCfgWritexxx() interface may also bring an inconsistency issue. If we use this API, we have two copies of the data: one in TDVF (trusted), and the other in VMM/QEMU (untrusted). What if the VMM modifies its untrusted copy?


What I can see is many potential attack surfaces. :-(


> 
> > > How do you pass the HOB to the guest?  Copy data to guest ram?  Map a
> > > ro page into guest address space?  What happens on VM reset?
> 
> > [Jiewen] Yes, the VMM will prepare the memory information based upon the
> > TDVF metadata.  The VMM needs to copy the TD HOB data to a predefined
> > memory region according to the TDVF metadata.
> 
> Is all that documented somewhere?  The TDVF design overview focuses on
> the guest/firmware side of things, so it isn't very helpful here.

[Jiewen] The TDX architecture defines this architecturally: the RCX/R8 registers hold the pointer.

We have a couple of TDX documents at https://software.intel.com/content/www/us/en/develop/articles/intel-trust-domain-extensions.html

https://software.intel.com/content/dam/develop/external/us/en/documents/tdx-module-1eas-v0.85.039.pdf
Section 8.1 defines the VCPU init state. The RCX/R8 hold the launch parameter.

https://software.intel.com/content/dam/develop/external/us/en/documents/tdx-virtual-firmware-design-guide-rev-1.pdf
Section 4.1.2 describes the TD HOB usage for RCX/R8. Section 7 adds more detail on memory map reporting.
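To illustrate how the firmware side consumes that pointer, here is a minimal sketch of walking a TD HOB list and picking out the resource descriptor HOBs that carry the memory map. The structs are simplified stand-ins for the PI-spec EFI_HOB_* definitions, and the HOB list is built in a local buffer instead of coming from the VMM at the RCX/R8 address.

```c
/*
 * Simplified sketch: walk a HOB list and sum the memory described
 * by resource descriptor HOBs.  Stand-in types, not the PI spec.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HOB_TYPE_RESOURCE_DESCRIPTOR 0x0003
#define HOB_TYPE_END_OF_HOB_LIST     0xFFFF

typedef struct {
  uint16_t HobType;
  uint16_t HobLength;   /* total length of this HOB, in bytes */
} HOB_HEADER;

typedef struct {
  HOB_HEADER Header;
  uint64_t   PhysicalStart;
  uint64_t   ResourceLength;
} HOB_RESOURCE_DESCRIPTOR;

/* Walk the list and sum the bytes described by resource HOBs. */
uint64_t SumResourceHobs(const uint8_t *HobList) {
  uint64_t total = 0;
  const uint8_t *p = HobList;
  for (;;) {
    HOB_HEADER h;
    memcpy(&h, p, sizeof h);
    if (h.HobType == HOB_TYPE_END_OF_HOB_LIST || h.HobLength == 0)
      break;
    if (h.HobType == HOB_TYPE_RESOURCE_DESCRIPTOR) {
      HOB_RESOURCE_DESCRIPTOR r;
      memcpy(&r, p, sizeof r);
      total += r.ResourceLength;
    }
    p += h.HobLength;  /* HobLength chains the HOBs together */
  }
  return total;
}

/* Build a sample list: two resource HOBs plus a terminator, the
   way a VMM might describe low memory and a main RAM block. */
uint32_t BuildSampleHobList(uint8_t *Buf) {
  HOB_RESOURCE_DESCRIPTOR r1 = {
    { HOB_TYPE_RESOURCE_DESCRIPTOR, (uint16_t)sizeof r1 },
    0x0, 0x80000 };
  HOB_RESOURCE_DESCRIPTOR r2 = {
    { HOB_TYPE_RESOURCE_DESCRIPTOR, (uint16_t)sizeof r2 },
    0x100000, 0x3FF00000 };
  HOB_HEADER end = { HOB_TYPE_END_OF_HOB_LIST, (uint16_t)sizeof end };
  uint32_t off = 0;
  memcpy(Buf + off, &r1, sizeof r1); off += sizeof r1;
  memcpy(Buf + off, &r2, sizeof r2); off += sizeof r2;
  memcpy(Buf + off, &end, sizeof end); off += sizeof end;
  return off;
}
```

In real TDVF, SEC would receive the list's physical address from the VMM (per the design guide sections above) rather than building it locally.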

Please let me know if you need any other information.


> 
> Did I mention posting the qemu patches would be a good idea?
> 
> > I don't fully understand the VM reset question. I will try to answer. But if that is not
> what you are asking, please clarify.
> 
> What happens if you reboot the guest?
> 
> On non-TDX guests the VM will be reset, the cpu will jump to the reset
> vector (executing from rom / flash), firmware will re-initialize
> everything and re-load any config information it needs from fw_cfg.
> 
> > The VMM initializes the TD HOB when it launches a TD guest.
> > After that, the region becomes TD private memory, owned by the TD. The VMM
> can no longer access it (no read, no write).
> > If the VM resets, this memory is gone.
> > If the VMM needs to launch a new TD, it needs to initialize the data again.
> 
> Sounds like reset is not supported, you need to stop and re-start the
> guest instead.  Is that correct?

[Jiewen] That is correct. In our definition, reset == stop + restart.


> 
> take care,
>   Gerd



-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#79803): https://edk2.groups.io/g/devel/message/79803
-=-=-=-=-=-=-=-=-=-=-=-