[FFmpeg-devel] Transcoding HLS segments post-hoc

Ray Hilton ray at wirestorm.net
Thu Aug 15 09:17:21 CEST 2013


Sorry, this isn't directly to do with internal dev, but the consulting
page recommended emailing this list if I wasn't sure who to contact.
Rather than individually emailing everyone with more or less the same
thing, I'll post here and hopefully someone will be able to help!  We
may be able to put money together to pay for consulting if the
solution to this problem turns out to require some custom ffmpeg work.

Bit of background: we are providing a streaming live/on-demand
solution to the community radio sector in Australia.  We're trying to
find a way to provide HLS streaming from live, continuous audio in a
variety of bitrates and codecs and ideally take advantage of adaptive
streaming (i.e. segments are in sync).

First, is this possible?  I am segmenting audio from a live stream
encoded as AAC at 128k.  I send those segments to our cluster, which
transcodes them into various other bitrates/formats using tools like
neroaac (better low-bitrate quality) and lame.  This works, but it
introduces audible gaps between the segments.  Otherwise it would be
an ideal solution: the origin (radio stations) uploads only the
highest-quality format and we distribute the workload for the rest,
and the same implementation serves both "live" and "on demand".  What
is ffmpeg doing internally when segmenting to avoid these gaps?
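For what it's worth, a common cause of gaps like these is encoder delay: AAC and MP3 encoders prepend "priming" samples to each independently encoded file, so every segment starts with a sliver of silence unless the player or packager trims it. This is a sketch of the arithmetic only, assuming typical priming values (1024 samples for AAC, 576 for LAME), not measurements from your actual encoders:

```python
# Estimate the leading silence that encoder priming samples add to each
# independently encoded segment. The priming counts below are typical
# defaults, not values measured from neroaac or lame.

def priming_gap_ms(priming_samples: int, sample_rate_hz: int) -> float:
    """Duration of the leading silence, in milliseconds."""
    return priming_samples / sample_rate_hz * 1000.0

# AAC encoders commonly prepend 1024 priming samples;
# LAME's encoder delay is commonly 576 samples.
print(round(priming_gap_ms(1024, 44100), 2))  # ~23.22 ms per segment
print(round(priming_gap_ms(576, 44100), 2))   # ~13.06 ms per segment
```

Even ~23 ms of inserted silence per segment is clearly audible as a click or dropout on continuous audio, which matches the symptom described.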

The second option is to use small machines in the studio to transcode
from the audio device into multiple bitrates/formats.  Ideally, for
adaptive streaming, we want these segments to be in sync - is that
possible?
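One way to keep independent encoders in sync is to cut segments at absolute boundaries derived from a shared epoch rather than from each encoder's own start time. This is a minimal sketch of that idea under my own assumptions (the function names are hypothetical, not an ffmpeg API):

```python
# Sketch: derive segment boundaries from a shared stream epoch so that
# encoders started at different times still cut at identical instants.
# All names here are illustrative, not part of any real API.

SEGMENT_SECONDS = 10  # assumed segment duration

def segment_index(stream_epoch: float, now: float,
                  segment_seconds: int = SEGMENT_SECONDS) -> int:
    """Index of the segment containing time `now` (seconds since epoch)."""
    return int((now - stream_epoch) // segment_seconds)

def next_boundary(stream_epoch: float, now: float,
                  segment_seconds: int = SEGMENT_SECONDS) -> float:
    """Absolute time at which the current segment should be cut."""
    idx = segment_index(stream_epoch, now, segment_seconds)
    return stream_epoch + (idx + 1) * segment_seconds

# Two encoders that both know the epoch agree on segment numbering and
# on when to cut, regardless of when each one was launched:
epoch = 1_000_000.0
print(segment_index(epoch, epoch + 25.0))   # segment 2
print(next_boundary(epoch, epoch + 25.0))   # cut at epoch + 30 s
```

With aligned boundaries and identical segment numbering across renditions, an HLS client can switch bitrates at segment edges without desync, which is the property adaptive streaming needs.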

Ray Hilton
(maker of mobile apps)
13/243 Collins St, Melbourne VIC 3000  |  +61 (0) 430 484 708  |  http://ray.sh
