Fedora Tour

Jeff Spaleta jspaleta at gmail.com
Thu Dec 15 03:51:01 UTC 2005


On 12/14/05, Jeff Spaleta <jspaleta at gmail.com> wrote:
> but there has to be a way to do all this
> inside the existing gstreamer framework.

I think I've found a working gst pipeline that can take an audio.wav
file, apply it to a desktop-recording.ogg Theora video, and end up
with a result.ogg Theora video with audio. I'm pretty sure something
similar will work with a Vorbis audio track instead of a wav. For now
I'm going to update the screencast wiki page with a simple bash script
that uses gst-launch; I'll probably replace it with a slightly better
Python script as soon as I get the chance. Nothing fancy, mind you,
just something that can be used to batch localized audio for the same
video.


Consequences:
*Everything necessary to encode video and audio separately exists in
Core/Extras development, as provided by gstreamer-* and audacity

*Istanbul makes encoding videos point-and-click easy, and it should
work from both KDE and GNOME desktops

*Audacity should provide a reasonable interface for creating an audio
track whose length corresponds to the video. We might require some
sort of visual cues in the video to help with audio sync if the audio
is going to be made after the video.

*A gst pipeline to splice the audio and video into a final video,
which I can turn into a simple cmdline tool for batch use as needed.

*It should be possible to construct some other post-processing gst
pipelines to add a title sequence or text overlays, but I haven't
looked deeply into that yet.

Here's the pipeline to mux the wav audio together with the Istanbul video:
{ oggmux name=mux ! filesink location=result.ogg }
{ filesrc location=desktop-recording.ogg ! decodebin name=v }
{ filesrc location=audio.wav ! decodebin name=a }
{ v. ! queue ! ffmpegcolorspace ! theoraenc ! queue name=theora-q ! mux. }
{ a. ! queue ! audioconvert ! rawvorbisenc ! queue name=vorbis-q ! mux. }
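
For the batch case, the script I have in mind would just loop that
pipeline over a set of per-language wav files. A rough, untested
sketch (the audio-<lang>.wav naming is only a placeholder I made up,
not something Istanbul or Audacity produce):

#!/bin/bash
# Untested sketch: splice each localized audio-<lang>.wav onto the same
# desktop-recording.ogg, producing result-<lang>.ogg.  The audio-*.wav
# naming convention is just a placeholder.
VIDEO=desktop-recording.ogg

for wav in audio-*.wav; do
    lang=${wav#audio-}          # strip the "audio-" prefix
    lang=${lang%.wav}           # strip the ".wav" suffix
    out=result-${lang}.ogg
    echo "muxing $wav + $VIDEO -> $out"
    gst-launch \
      "{ oggmux name=mux ! filesink location=$out }" \
      "{ filesrc location=$VIDEO ! decodebin name=v }" \
      "{ filesrc location=$wav ! decodebin name=a }" \
      "{ v. ! queue ! ffmpegcolorspace ! theoraenc ! queue name=theora-q ! mux. }" \
      "{ a. ! queue ! audioconvert ! rawvorbisenc ! queue name=vorbis-q ! mux. }"
done

gst-launch concatenates its arguments into one pipeline description,
so splitting the thread blocks across quoted arguments just keeps the
script readable.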

-jef



