[FFmpeg-devel] [RFC] possible API for opening external demuxer references

Gabriel Forté gforte
Wed Feb 27 20:01:58 CET 2008


On Wed, Feb 27, 2008 at 03:07:25PM +0100, Reimar Döffinger wrote:
> > it may seem a bit irrelevant here (and other problems need to be
> > addressed as well), but I've been thinking for a while about a similar thing
> > to open auxiliary streams such as external subtitle files
> > (SRT, vob/idx, whatever, ...), to be able to get their frames
> > synchronized with the streams contained in the main file.
> > 
> > currently I'm doing this as a hack completely external to libavformat,
> > by fiddling with AVFormatContext->streams (quite dirty).
> > 
> > would it be relevant to discuss that in this thread ?
> 
> and it might be better to keep such a thing outside of libavformat,

why not? actually, doing it from outside (at least from my point of view,
which is primarily that of a user of the libav* APIs) looks much worse
than having a clean way to do it from inside.

then there's the question of whether or not it would be desirable to
have av_read_frame() return frames from two "merged" containers
(be it for external subtitle files or anything else...).

from an application standpoint (i.e. not necessarily ffmpeg, but a player
for example), it would be nice to handle external SRT files the same way
as subtitle tracks muxed into a single Matroska file.

i.e. providing one (or more) auxiliary URLs or AVFormatContexts to
av_open_input_file() (through AVFormatParameters or whatever), finding
their streams "merged" into the main AVFormatContext (although that
raises a problem with AVStream->id collisions...), and getting their
frames through av_read_frame() interleaved with those of the main context.
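
to make that concrete, here's roughly what I'd imagine on the caller
side. note that the aux_urls field doesn't exist anywhere, I'm making
it up purely to illustrate the intent:

/* hypothetical caller-side sketch -- the aux_urls field is invented
 * here just to show the intent */
const char *aux[] = { "movie.srt", NULL };
AVFormatContext *ic;
AVFormatParameters ap;
AVPacket pkt;

memset(&ap, 0, sizeof(ap));
ap.aux_urls = aux;

if (av_open_input_file(&ic, "movie.avi", NULL, 0, &ap) == 0) {
        /* packets from movie.avi and movie.srt would come back
         * interleaved by DTS, with pkt.stream_index covering the
         * merged set of streams */
        while (av_read_frame(ic, &pkt) >= 0)
                av_free_packet(&pkt);
        av_close_input_file(ic);
}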


here's the snippet of code I currently use to perform the opening
and "merging".
I think it sucks a bit, and I'm not really satisfied with it, although
it serves my current purpose (i.e. synchronising an SRT file containing
only one AVStream, created with a quick and dirty parser/demuxer).

static AVFormatContext *add_aux(AVFormatContext *ctx, char *url)
{
        AVFormatContext *ret = NULL;
        AVStream *aux;
        int i;

        if (!url || av_open_input_file(&ret, url, NULL, 0, NULL) != 0)
                return NULL;

        /* only handle aux files exposing a single stream (e.g. SRT),
         * and make sure the main context has room left for it */
        if (ret->nb_streams != 1 || ctx->nb_streams + 1 > MAX_STREAMS) {
                av_close_input_file(ret);
                return NULL;
        }

        /* insert fake streams in the aux context so that the real subtitles
         * stream gets the same index in both contexts */
        for (i = 0; i < ctx->nb_streams; i++) {
                AVStream *fake = av_new_stream(ret, i);
                if (!fake) {
                        av_close_input_file(ret);
                        return NULL;
                }
                fake->codec->codec_type = CODEC_TYPE_UNKNOWN;
        }

        /* move the real stream after the fakes and register it in the
         * main context under the same index */
        aux = ret->streams[0];
        ret->streams[0] = ret->streams[ctx->nb_streams];
        ret->streams[ctx->nb_streams] = aux;
        ret->streams[0]->index = 0; /* keep the displaced fake consistent */
        aux->index = aux->id = ctx->nb_streams;
        ctx->streams[ctx->nb_streams++] = aux;

        return ret;
}


the reading part is IMO even dirtier, as I'm forced to keep track of the
last pts/dts values seen on every stream, and to decide before each
av_read_frame() call which AVFormatContext to read from based on them:
i.e. if the aux context's DTS <= the main context's DTS, read from the aux.
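
roughly, that selection logic looks like this (simplified down to a
single aux context; the read_next() name and the saved-DTS bookkeeping
are mine, just to illustrate):

static int read_next(AVFormatContext *main_ctx, AVFormatContext *aux_ctx,
                     int64_t *main_dts, int64_t *aux_dts, AVPacket *pkt)
{
        int ret;

        /* NB: for a correct comparison both values should first be
         * rescaled to a common time base (e.g. with av_rescale_q() to
         * AV_TIME_BASE_Q); I don't do that yet, which is probably one
         * source of my bugs */
        if (aux_ctx && *aux_dts <= *main_dts) {
                if ((ret = av_read_frame(aux_ctx, pkt)) >= 0) {
                        if (pkt->dts != AV_NOPTS_VALUE)
                                *aux_dts = pkt->dts;
                        return ret;
                }
                /* aux context exhausted, fall through to the main one */
        }

        if ((ret = av_read_frame(main_ctx, pkt)) >= 0 &&
            pkt->dts != AV_NOPTS_VALUE)
                *main_dts = pkt->dts;
        return ret;
}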

I'm open to suggestions on how to do this more robustly, since my
current hack has a few annoying bugs.

I hope this doesn't sound too absurd (while on the subject, it would
also be worth considering external index files such as the vob/idx
pairs used for subtitles ripped from DVDs).


cheers,

-- 
Gabriel Forté <gforte at wyplay.com>
