[edk2-devel] VirtIO Sound Driver (GSoC 2021)

Ethin Probst harlydavidsen at gmail.com
Fri Apr 16 05:33:06 UTC 2021


Thanks for that explanation (I missed Mike's message). Earlier I sent
a summary of the things we can agree on: mainly that we need mute,
volume control, a buffer-load function, (maybe) a buffer-unload
function, and start/stop stream functions. Now that I fully understand
the ramifications of this, I don't mind settling on a specific format
and sample rate, and signed 16-bit PCM is, I think, the most widely
used format out there; the only other one I see regularly is 64-bit
floating-point samples, which I've only encountered in DAWs, and
that's something we don't need.

Are you sure you want the firmware itself to handle the decoding of
WAV audio? I can make a library class for that, but I'll definitely
need help with the security aspect.
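
To make sure we are all picturing the same surface area, here is a
rough sketch of the protocol I have in mind. Everything below -- the
type names, the members, the fixed-format struct, the use of an event
for completion -- is a placeholder I made up for discussion, not a
proposal for the final spec:

//
// Sketch only: names and layout are placeholders, not a final design.
// The one baseline format every driver would have to support is signed
// 16-bit little-endian PCM, described by this stream info.
//
typedef struct _EFI_SIMPLE_AUDIO_OUTPUT_PROTOCOL EFI_SIMPLE_AUDIO_OUTPUT_PROTOCOL;

typedef struct {
  UINT32  SampleRateHz;    // e.g. 44100 or 48000
  UINT8   ChannelCount;    // e.g. 2 for stereo
} EFI_AUDIO_STREAM_INFO;

typedef
EFI_STATUS
(EFIAPI *EFI_SIMPLE_AUDIO_OUTPUT_LOAD_BUFFER)(
  IN EFI_SIMPLE_AUDIO_OUTPUT_PROTOCOL  *This,
  IN EFI_AUDIO_STREAM_INFO             *StreamInfo,
  IN VOID                              *Samples,     // signed 16-bit PCM
  IN UINTN                             SampleBytes
  );

typedef
EFI_STATUS
(EFIAPI *EFI_SIMPLE_AUDIO_OUTPUT_START)(
  IN EFI_SIMPLE_AUDIO_OUTPUT_PROTOCOL  *This,
  IN EFI_EVENT                         Finished  OPTIONAL  // async completion
  );

typedef
EFI_STATUS
(EFIAPI *EFI_SIMPLE_AUDIO_OUTPUT_STOP)(
  IN EFI_SIMPLE_AUDIO_OUTPUT_PROTOCOL  *This
  );

typedef
EFI_STATUS
(EFIAPI *EFI_SIMPLE_AUDIO_OUTPUT_SET_VOLUME)(
  IN EFI_SIMPLE_AUDIO_OUTPUT_PROTOCOL  *This,
  IN UINT8                             Percent,      // 0-100
  IN BOOLEAN                           Mute
  );

struct _EFI_SIMPLE_AUDIO_OUTPUT_PROTOCOL {
  EFI_SIMPLE_AUDIO_OUTPUT_LOAD_BUFFER  LoadBuffer;
  EFI_SIMPLE_AUDIO_OUTPUT_START        Start;
  EFI_SIMPLE_AUDIO_OUTPUT_STOP         Stop;
  EFI_SIMPLE_AUDIO_OUTPUT_SET_VOLUME   SetVolume;
};

With something shaped like that, the application side is exactly the
"load blob, pass blob" flow Michael describes below: read the
build-time-encoded file, call LoadBuffer(), call Start(), and
optionally wait on the event.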
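
And on the WAV question, here is the flavor of bounds-checked parsing
I am imagining for the library class. Again, this is only a sketch
under names I made up (WaveFindPcm16Samples is not an existing EDK2
function), and it is the part where I would want extra review, since
the file has to be treated as attacker-controlled data:

#include <Uefi.h>
#include <Library/BaseLib.h>
#include <Library/BaseMemoryLib.h>

//
// Find the signed 16-bit PCM samples inside a RIFF/WAVE blob, or fail.
// Every read is bounds-checked against FileSize before it happens, and
// no declared chunk size is trusted without checking it against the
// data actually present in the buffer.
//
EFI_STATUS
WaveFindPcm16Samples (
  IN  CONST UINT8  *File,
  IN  UINTN        FileSize,
  OUT CONST UINT8  **Samples,
  OUT UINTN        *SampleBytes
  )
{
  UINTN    Offset;
  UINT32   ChunkSize;
  BOOLEAN  FormatOk;

  // The smallest possible RIFF/WAVE header is 12 bytes.
  if (File == NULL || Samples == NULL || SampleBytes == NULL || FileSize < 12) {
    return EFI_INVALID_PARAMETER;
  }
  if (CompareMem (File, "RIFF", 4) != 0 || CompareMem (File + 8, "WAVE", 4) != 0) {
    return EFI_UNSUPPORTED;
  }

  FormatOk = FALSE;
  Offset   = 12;
  while (Offset + 8 <= FileSize) {
    ChunkSize = ReadUnaligned32 ((CONST UINT32 *)(File + Offset + 4));
    if (ChunkSize > FileSize - Offset - 8) {
      return EFI_VOLUME_CORRUPTED;  // chunk claims more data than the file holds
    }
    if (CompareMem (File + Offset, "fmt ", 4) == 0) {
      // Format tag 1 is plain PCM; bits-per-sample sits at offset 14 of
      // the chunk payload. Anything other than 16-bit PCM is rejected.
      if (ChunkSize < 16 ||
          ReadUnaligned16 ((CONST UINT16 *)(File + Offset + 8)) != 1 ||
          ReadUnaligned16 ((CONST UINT16 *)(File + Offset + 22)) != 16) {
        return EFI_UNSUPPORTED;
      }
      FormatOk = TRUE;
    } else if (CompareMem (File + Offset, "data", 4) == 0 && FormatOk) {
      // Only hand back samples once the format chunk has been validated.
      *Samples     = File + Offset + 8;
      *SampleBytes = ChunkSize;
      return EFI_SUCCESS;
    }
    // Chunks are word-aligned; a pad byte follows an odd-sized chunk.
    Offset += 8 + (UINTN)ChunkSize + (ChunkSize & 1);
  }
  return EFI_NOT_FOUND;
}

Keeping this in its own library class, as Andrew suggests, would also
make it easy to unit test against deliberately malformed files without
any audio hardware in the loop.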

On 4/16/21, Andrew Fish via groups.io <afish=apple.com at groups.io> wrote:
>
>
>> On Apr 15, 2021, at 5:59 PM, Michael Brown <mcb30 at ipxe.org> wrote:
>>
>> On 16/04/2021 00:42, Ethin Probst wrote:
>>> Forcing a particular channel mapping, sample rate and sample format on
>>> everyone would complicate application code. From an application point
>>> of view, one would, with that type of protocol, need to do the
>>> following:
>>> 1) Load an audio file in any audio file format from any storage
>>> mechanism.
>>> 2) Decode the audio file format to extract the samples and audio
>>> metadata.
>>> 3) Resample the (now decoded) audio samples and convert (quantize) the
>>> audio samples into signed 16-bit PCM audio.
>>> 4) Forward the samples on to the EFI audio protocol.
>>
>> You have made an incorrect assumption that there exists a requirement to
>> be able to play audio files in arbitrary formats.  This requirement does
>> not exist.
>>
>> With a protocol-mandated fixed baseline set of audio parameters (sample
>> rate etc), what would happen in practice is that the audio files would be
>> encoded in that format at *build* time, using tools entirely external to
>> UEFI.  The application code is then trivially simple: it just does "load
>> blob, pass blob to audio protocol".
>>
>
>
> Ethin,
>
> Given the goal is an industry standard, we value interoperability more
> than flexibility.
>
> How about another use case? Let's say the Linux OS loader (Grub) wants to
> have an accessible UI, so it decides to store sound files on the EFI System
> Partition and use our new fancy UEFI Audio Protocol to add audio to the OS
> loader GUI. That version of Grub needs to work on 1,000s of different PCs
> and a wide range of UEFI Audio driver implementations. It is a much easier
> world if Wave PCM 16-bit just works every place. You could add a lot of
> complexity and try to encode the audio on the fly, maybe even in Linux
> proper, but that falls down if you are booting from read-only media like a
> DVD or backup tape (yes, people still do that in server land).
>
> The other problem with flexibility is that you just made the test matrix
> very large for every driver that needs to get implemented. For something as
> complex as Intel HDA, how you hook up the hardware and what CODECs you use
> may impact the quality of the playback for a given board. Your EFI is likely
> going to pick a single encoding that will get tested all the time if your
> system has audio, but the 50 other things you support, not so much. So that
> will require testing, and someone with audiophile ears (or an AI program)
> to test all the combinations. I'm not kidding, I get BZs on the quality of
> the boot bong on our systems.
>
>
>>> typedef struct EFI_SIMPLE_AUDIO_PROTOCOL {
>>>   EFI_SIMPLE_AUDIO_PROTOCOL_RESET Reset;
>>>   EFI_SIMPLE_AUDIO_PROTOCOL_START Start;
>>>   EFI_SIMPLE_AUDIO_PROTOCOL_STOP Stop;
>>> } EFI_SIMPLE_AUDIO_PROTOCOL;
>>
>> This is now starting to look like something that belongs in boot-time
>> firmware.  :)
>>
>
> I think that got a little too simple. I'd go back and look at the example I
> posted to the thread, but add an API to load the buffer and then play the
> buffer (that way we can add an API in the future to twiddle knobs). That API
> should also implement the async EFI interface. Trust me, the first thing
> that is going to happen when we add audio is someone complaining that in xyz
> state we should mute audio, or that we should honor audio volume and mute
> settings from setup, or from values set in the OS. Or someone is going to
> want the volume keys on the keyboard to work in EFI.
>
> Also, if you need to pick apart the Wave PCM 16-bit file to feed it into the
> audio hardware, that probably means we should have a library that does that
> work so other audio drivers can share that code. Having a library also
> makes it easier to write a unit test. We need to be security conscious, as
> we need to treat the audio file as attacker-controlled data.
>
> Thanks,
>
> Andrew Fish
>
>> Michael


-- 
Signed,
Ethin D. Probst

