Sharing sound hardware

Ryan Gammon rgammon at
Mon Feb 7 20:56:19 UTC 2005

Sitsofe Wheeler wrote:

>You have to be a member of the helix community in order to get a
>prebuilt helix player that supports ALSA don't you? 

Is that a bad thing? Membership is free, all are welcome.

In any case, the helix player code is all available under a GPL license, 
and you have the associated freedoms to modify & distribute. Fedora, for 
example, has a helix player source rpm, though the alsa code in that 
particular branch was in rough shape.

I put together some docs on how to do an alsa-enabled build if you are 
interested.

If you want to try to do a similar build with the gtk player, we can 
help you out on the helix mailing lists, eg: player-dev at

>>3. The first process to play back sound defines the characteristics of 
>>the sound device. If it opens up the device at a low sample rate, 
>>all playback would suffer.
>Does it? I thought it was fixed in a configuration file (I'm not an ALSA
>developer or someone who develops apps with ALSA - I'm just looking at
> ). Perhaps apps should try and work with whatever it defaults to.

The asoundrc file I put together seemed to do this back when I was 
looking at this... I'm not an alsa expert, so certainly take all of 
this with a grain of salt.
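
For the curious, an asoundrc along those lines routes the default PCM 
through dmix so multiple apps can share the card in software. A minimal 
sketch (the device name, rate, and ipc_key here are illustrative, not 
necessarily what my file used):

```
# Minimal dmix-based ~/.asoundrc sketch (illustrative values).
# Every app that opens the "default" PCM is mixed in software
# at the fixed slave rate below.
pcm.!default {
    type plug            # plug converts each app's format/rate as needed
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024         # any unique integer, shared by all dmix clients
    slave {
        pcm "hw:0,0"     # first device on first card -- adjust as needed
        rate 48000       # fixed mixing rate for the shared device
        channels 2
    }
}
```

Note this pins the slave to one rate, which is exactly the "first 
opener defines the characteristics" issue discussed above.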

>I have to inject here. For legacy reasons wouldn't it be better to
>support esd? Then you would get artsd support for free (sure it will
>be latent but it's better than nothing) and support old OSS systems. Newer
>systems with ALSA would get ALSA (with dmix) support.
>(goes away and reads posted links)
>Heh. You already know this and are submitting patches to make this work.
>I guess that tells me...

Good suggestions.

One thing to keep in mind with esound is that, last time I checked, 
esound had limited support for measuring the latency between when an app 
writes a sound for playback, and when the sound actually hits the 
speakers. This missing functionality makes it generally unappealing for 
A/V playback.

Most distributions have their esound configured to release the sound 
device when not in use, which allows us to play back video with good 
A/V sync as an OSS app. Not ideal, but hopefully good enough.

IMO, OSS is still the best way to get a generic binary download working 
on the largest possible number of distributions without popping up a 
"select your sound server" type of dialog, which we're not really 
interested in doing for various reasons.

>>Ideally IMHO dmix / asym would:
>>- Be a sound server that runs on startup instead of something that forks 
>>off the first process to open the sound device
>I think you might be better off suggesting this on the alsa-devel list.
>I don't know if Fedora could afford to deviate too far from upstream
>ALSA packages so the best thing would be to have a change like this come
>"down" from upstream. I suspect this won't be a popular idea because the
>"server-less" set up of dmix appears to have been done for design
>reasons (low latency problems, no need for RT priorities). I'll give a
>link in a minute.

We'll see what Colin finds out in his testing... There are potentially 
other ways to solve this problem than with ALSA if things don't end up 
looking good.

>>- Open the sound device using sensible maximum capabilities of the 
>>device in terms of sampling rate and # of channels, etc. The guys who 
>>write our resamplers generally prefer to have helix doing any sample 
>>rate conversion where possible:
>This doesn't sound like it will happen with dmix (although it could with
>another ALSA plugin). In the UKUUG paper "Sound Systems on
>Linux" ( ) Takashi Iwai talks a little about the rationale behind dmix.
>It sounds like the aim is to be simple, so existing ALSA apps do not
>have to be rewritten, low enough latency for consumer use, and free of
>RT-scheduling requirements.

Thanks for the link, I'll check it out.

>>I'm very interested in what others are thinking here for fedora, be it 
>>alsa-related or otherwise.
>My general take on things so far has been tainted because, although helix
>and real player work great on hardware-mixed cards, the lack of shipped
>support for esd/artsd has users wondering why things like real player
>don't start when they are streaming a radio station in their web browser
>('cos the sound device is locked and there's no hardware mixing is the
>answer).

There's no good answer... If we did ship with esound support turned on, 
users would wonder why they're getting bad A/V sync. There are also 
issues around how helix uses the audio device playback rate to drive its 
overall playback timeline.

Given the choice between fixing helix esound support or fixing something 
like alsa, I'd tend toward fixing alsa.

>I suspect the number of sound servers for typical applications will
>eventually drop to two - whatever ALSA usually supplies (virtual or
>otherwise) and esd. 

I'd say it's just going to be alsa. The only thing esound offers over 
dmix / alsa is network transparency.

When it comes to the (fairly rare) dumb terminal / network transparency 
case, my take would be to put the media engine on the terminal side & 
figure out how to make it work.  The alternative (coming up with some 
transcoding / streaming translation framework that sits between 
something like a helix server, terminal server, and client) doesn't make 
sense to me.

>Rumour has it that KDE will drop artsd and move to
>gstreamer and since gstreamer supports both esd and ALSA out of box
>sound experience on typical consumer hardware should stop being such a

As I understand it, KDE has a kdemm project (currently a little stalled) 
that has the goal of providing basic access to multiple media engines.

I have some links on the helix-qt resources page:

It's cool that gstreamer ships with good esound support, but that will 
only get you so far given the state of the underlying esound technology.

Ryan Gammon
rgammon at
Developer for Helix Player

More information about the fedora-devel-list mailing list