[edk2-devel] [PATCH 18/23] OvmfPkg: Enable Tdx in SecMain.c

Yao, Jiewen jiewen.yao at intel.com
Wed Aug 25 09:07:13 UTC 2021


Comment below:

> -----Original Message-----
> From: devel at edk2.groups.io <devel at edk2.groups.io> On Behalf Of Gerd
> Hoffmann
> Sent: Wednesday, August 25, 2021 3:52 PM
> To: Yao, Jiewen <jiewen.yao at intel.com>
> Cc: Ard Biesheuvel <ardb at kernel.org>; Xu, Min M <min.m.xu at intel.com>;
> devel at edk2.groups.io; Ard Biesheuvel <ardb+tianocore at kernel.org>; Justen,
> Jordan L <jordan.l.justen at intel.com>; Brijesh Singh <brijesh.singh at amd.com>;
> Erdem Aktas <erdemaktas at google.com>; James Bottomley
> <jejb at linux.ibm.com>; Tom Lendacky <thomas.lendacky at amd.com>
> Subject: Re: [edk2-devel] [PATCH 18/23] OvmfPkg: Enable Tdx in SecMain.c
> 
>   Hi,
> 
> > fw_cfg is just a KVM/QEMU-specific way to pass some parameters, but not
> > all parameters.  For example, OVMF today still gets the memory size from
> > CMOS:
> >
> > https://github.com/tianocore/edk2/blob/master/OvmfPkg/PlatformPei/MemDetect.c#L278
> 
> Patches to fix that are on the list.

[Jiewen] That is surprising. It was sent one week ago;
I obviously missed that email.

Please file a Bugzilla and include me in the CC list next time.
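(For readers following the thread: the CMOS path referenced above, OvmfPkg/PlatformPei/MemDetect.c, boils down to reading two CMOS registers that QEMU fills with the memory size above 16 MB in 64 KB units. A minimal sketch along the lines of the upstream routine, assuming the CmosRead8 helper from PlatformPei is available; not a drop-in copy of the current code:)

UINT32
GetSystemMemorySizeBelow4gb (
  VOID
  )
{
  UINT8  Cmos0x34;
  UINT8  Cmos0x35;

  //
  // CMOS 0x34/0x35 report system memory above 16 MB in 64 KB units
  // (0x35 is the high byte, 0x34 the low byte), so add the 16 MB back in.
  //
  Cmos0x34 = CmosRead8 (0x34);
  Cmos0x35 = CmosRead8 (0x35);

  return (UINT32)(((UINTN)((Cmos0x35 << 8) + Cmos0x34) << 16) + SIZE_16MB);
}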

> 
> > In the TDVF design, we chose to use the TDX-defined initial pointer to pass
> > the initial memory information (the TD_HOB) instead of the CMOS region.
> > Please help me understand what the real concern is here.
> 
> Well, qemu settled on the fw_cfg design for a number of reasons.  It is
> pretty flexible, for example: the firmware can ask for the information
> it needs at any time and can store it as it pleases.
> 
> I'd suggest not taking it for granted that an additional, alternative
> way to do basically the same thing will be accepted into upstream qemu.
> Submit your patches to qemu-devel to discuss that.

[Jiewen] I think the Intel Linux team is doing that separately.

> 
> > That means that if you get the same data twice from fw_cfg, TDVF
> > must measure it twice. And TDVF may need to handle the case where the
> > data in the first call differs from the data in the second call.
> 
> Most fw_cfg entries are constant anyway, so we can easily avoid a second
> call by caching the results of the first call if that helps TDVF.


[Jiewen] It is possible. I can see multiple ways to do it:
1) Cache per usage. Every driver would need its own way to cache the data, either a PCD or a HOB in the PEI phase. Driver A would also have to know that driver B will consume the same data in order to decide whether to cache it at all. I consider that a huge burden on the developer.
2) Always cache per driver. Every driver would follow the same pattern: look up the cache; on a miss, fetch the data and cache it. Architecturally, this still cannot guarantee the same data ordering across different boot paths.
3) Always cache in one common driver. A single driver fetches all the data once and caches it, which resolves the ordering problem (see the sketch below). I am not sure that is what is desired, and I do not see much difference between it and passing the data at the entry point.
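A minimal sketch of option 3, assuming a dedicated PEIM, a hypothetical gFwCfgCacheHobGuid, and an illustrative item list; the measurement step is only indicated in a comment:

#include <PiPei.h>
#include <Library/HobLib.h>
#include <Library/QemuFwCfgLib.h>

extern EFI_GUID  gFwCfgCacheHobGuid;   // hypothetical GUID for the cache HOBs

STATIC CONST CHAR8  *mCachedItems[] = { "etc/e820", "etc/boot-menu" };  // illustrative

EFI_STATUS
EFIAPI
CacheFwCfgItems (
  VOID
  )
{
  UINTN                 Index;
  FIRMWARE_CONFIG_ITEM  Item;
  UINTN                 Size;
  VOID                  *Data;

  for (Index = 0; Index < ARRAY_SIZE (mCachedItems); Index++) {
    if (RETURN_ERROR (QemuFwCfgFindFile (mCachedItems[Index], &Item, &Size))) {
      continue;                       // item not provided by the VMM
    }

    //
    // One GUIDed HOB per item; a real implementation would also record
    // the item name and size in a small header inside the HOB.
    //
    Data = BuildGuidHob (&gFwCfgCacheHobGuid, Size);
    if (Data == NULL) {
      return EFI_OUT_OF_RESOURCES;
    }

    QemuFwCfgSelectItem (Item);
    QemuFwCfgReadBytes (Size, Data);  // read once; measure/extend RTMR here
  }

  return EFI_SUCCESS;
}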

> 
> > Using a HOB at the initial pointer can be an alternative pattern to
> > mitigate that risk. We only need to measure it once, and then any
> > component can use it. It also helps people evaluate the RTMR hash
> > and TD event log data for the configuration in the attestation flow,
> > because the configuration is independent of the code execution flow.
> 
> Well, it covers only the memory map, correct?  All other configuration
> is still loaded from fw_cfg.  I can't see the improvement here.

[Jiewen] At this point in time, the memory map is the most important parameter in the TD HOB, because we need the memory information at the TD entry point. That is mandatory for any TD boot.

fw_cfg is still allowed in the TDVF design guide, simply because we feel it would be a burden to convert everything at once.
I hope to limit the configuration coming from the VMM. Most fw_cfg items will NOT be used by TDVF, for example "etc/smi", "etc/tpm", "etc/edk2/https/cacerts", "etc/msr_feature_control", and "etc/system-states", especially in the container use case.

That flexibility is a double-edged sword.
You can treat the TD HOB as the boot parameters for a kernel, where here the kernel is TDVF.
Having a static way to get the configuration data in memory, all at once, is the simplest solution from my perspective.
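For illustration, consuming the memory map from the TD HOB is just a standard HOB-list walk; a sketch with a hypothetical helper name, counting only EFI_RESOURCE_SYSTEM_MEMORY descriptors (a real TDVF would also have to handle unaccepted-memory resource types):

#include <PiPei.h>
#include <Library/HobLib.h>

UINT64
SumTdHobSystemMemory (
  IN VOID  *TdHobStart     // base of the TD HOB region defined by the TDVF metadata
  )
{
  EFI_PEI_HOB_POINTERS  Hob;
  UINT64                Total;

  Total   = 0;
  Hob.Raw = (UINT8 *)TdHobStart;

  //
  // Walk the HOB list built by the VMM and add up the system memory
  // described by the resource descriptor HOBs.
  //
  while (!END_OF_HOB_LIST (Hob)) {
    if ((GET_HOB_TYPE (Hob) == EFI_HOB_TYPE_RESOURCE_DESCRIPTOR) &&
        (Hob.ResourceDescriptor->ResourceType == EFI_RESOURCE_SYSTEM_MEMORY)) {
      Total += Hob.ResourceDescriptor->ResourceLength;
    }

    Hob.Raw = GET_NEXT_HOB (Hob);
  }

  return Total;
}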


> 
> How do you pass the HOB to the guest?  Copy data to guest ram?  Map a
> ro page into guest address space?  What happens on VM reset?
[Jiewen] Yes, the VMM prepares the memory information based upon the TDVF metadata.
The VMM needs to copy the TD HOB data to a predefined memory region described by the TDVF metadata.

I don't fully understand the VM reset question, but I will try to answer; if that is not what you are asking, please clarify.

The VMM populates the TD HOB when it launches a TD guest.
After that, the region becomes TD private memory owned by the TD; the VMM can no longer access it (no read, no write).
If the VM is reset, this memory is gone.
If the VMM needs to launch a new TD, it has to populate the data again.
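For context, the placement of the TD HOB region is driven by a section descriptor in the TDVF metadata. The following C rendering paraphrases the TDVF design guide, so treat the exact field names and types as an approximation rather than the authoritative layout:

//
// Rough shape of a TDVF metadata section descriptor.  A TD_HOB-typed
// section carries no data in the firmware image (RawDataSize == 0); it
// only tells the VMM which guest-physical range to populate with the HOB.
//
typedef struct {
  UINT32    DataOffset;       // offset of the section's data in the firmware image
  UINT32    RawDataSize;      // 0 for the TD_HOB section: nothing to copy from the image
  UINT64    MemoryAddress;    // guest-physical base the VMM must populate
  UINT64    MemoryDataSize;   // size of that region in guest memory
  UINT32    Type;             // e.g. BFV, CFV, TD_HOB, temporary memory
  UINT32    Attributes;       // e.g. whether the region is extended into the measurement
} TDVF_METADATA_SECTION;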



> 
> > Please be aware that confidential computing (TDX) changes the threat
> > model completely: any input from the VMM is considered malicious.
> > The current solution might be OK for normal OVMF, but that does not
> > mean the same solution is still the best one for the confidential
> > computing use case.
> 
> Well, SEV seems to be happy with fw_cfg.
> Any input from AMD on the topic?



> 
> take care,
>   Gerd
> 
> 
> 
> 
> 


