[FFmpeg-devel] [PATCH 2/2] lavfi: add concat filter.

Stefano Sabatini stefasab at gmail.com
Sat Jul 21 19:09:58 CEST 2012


On date Friday 2012-07-20 10:40:55 +0200, Nicolas George encoded:
> 
> Signed-off-by: Nicolas George <nicolas.george at normalesup.org>
> ---
>  Changelog                |    1 +
>  doc/filters.texi         |   75 ++++++++
>  libavfilter/Makefile     |    1 +
>  libavfilter/allfilters.c |    1 +
>  libavfilter/f_concat.c   |  428 ++++++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 506 insertions(+)
>  create mode 100644 libavfilter/f_concat.c
> 
> diff --git a/Changelog b/Changelog
> index 4242cea..15ac8d6 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -32,6 +32,7 @@ version next:
>  - 3GPP Timed Text decoder
>  - GeoTIFF decoder support
>  - ffmpeg -(no)stdin option
> +- concat filter
>  
>  
>  version 0.11:
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 4a6c092..306d82d 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -3988,6 +3988,81 @@ tools.
>  
>  Below is a description of the currently available transmedia filters.
>  
> +@section concat
> +
> +Concatenate audio and video streams, joining them together one after the
> +other.
> +
> +The filter works on segments of synchronized video and audio streams. All
> +segments must have the same number of streams of each type, and that will
> +also be the number of streams at output.
> +
> +The filter accepts the following named parameters:
> +@table @option
> +
> +@item n
> +Set the number of segments. Default is 2.
> +
> +@item v
> +Set the number of output video streams, that is also the number of video
> +streams in each segment. Default is 1.
> +
> +@item a
> +Set the number of output audio streams, that is also the number of audio
> +streams in each segment. Default is 0.
> +
> +@end table
> +
> +The filter has @var{v}+@var{a} outputs: first @var{v} video outputs, then
> +@var{a} audio outputs.
> +

> +There are @var{s}×(@var{v}+@var{a}) inputs: first the inputs for the first

@var{n}x... (the option is named n, there is no s)

> +segment, in the same order as the outputs, then the inputs for the second
> +segment, etc.
> +
> +Related streams do not always have exactly the same duration, for various
> +reasons including codec frame size or sloppy authoring. For that reason,
> +related synchronized streams (e.g. a video and its audio track) should be
> +concatenated at once. The concat filter will use the duration of the longest
> +stream in each segment (except the last one), and if necessary pad shorter
> +audio streams with silence.
> +
> +For this filter to work correctly, all segments must start at timestamp 0.
> +
> +All corresponding streams must have the same parameters in all segments; the
> +filtering system will automatically select a common pixel format for video
> +streams, and a common sample format, sample rate and channel layout for
> +audio streams, but other settings, such as resolution, must be converted
> +explicitly by the user.
> +
> +Different frame rates are acceptable but will result in variable frame rate
> +at output; be sure to configure the output file to handle it.
> +
> +Examples:
> +@itemize
> +@item
> +Concatenate an opening, an episode and an ending, all in bilingual version
> +(video in stream 0, audio in streams 1 and 2):
> +@example
> +ffmpeg -i opening.mkv -i episode.mkv -i ending.mkv -filter_complex \
> +  '[0:0] [0:1] [0:2] [1:0] [1:1] [1:2] [2:0] [2:1] [2:2]
> +   concat=s=3:v=1:a=2 [v] [a1] [a2]' \

n=3 (the example uses s=3, but the option is named n)

> +  -map '[v]' -map '[a1]' -map '[a2]' output.mkv
> +@end example
> +
> +@item
> +Concatenate two parts, handling audio and video separately, using the
> +(a)movie sources, and adjusting the resolution:
> +@example
> +movie=part1.mp4, scale=512:288 [v1] ; amovie=part1.mp4 [a1] ;
> +movie=part2.mp4, scale=512:288 [v2] ; amovie=part2.mp4 [a2] ;
> +[v1] [v2] concat [outv] ; [a1] [a2] concat=v=0:a=1 [outa]
> +@end example
> +Note that a desync will happen at the stitch if the audio and video streams
> +do not have exactly the same duration in the first file.
> +
> +@end itemize
> +
>  @section showwaves
>  
>  Convert input audio to a video output, representing the sample waves.
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index b094f59..642a105 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -197,6 +197,7 @@ OBJS-$(CONFIG_MP_FILTER) += libmpcodecs/vf_yvu9.o
>  OBJS-$(CONFIG_MP_FILTER) += libmpcodecs/pullup.o
>  
>  # transmedia filters
> +OBJS-$(CONFIG_CONCAT_FILTER)                 += f_concat.o
>  OBJS-$(CONFIG_SHOWWAVES_FILTER)              += avf_showwaves.o

Bikeshed: # multimedia filters?

Such a category could be useful to contain filters dealing with more than
one media type, and thus also the transmedia filters. In that case it would
make sense to name the file avf_concat.c (consistent with the filter
private prefix).

>  TOOLS     = graph2dot
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 706405e..cae4c99 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -134,6 +134,7 @@ void avfilter_register_all(void)
>      REGISTER_FILTER (NULLSINK,    nullsink,    vsink);
>  
>      /* transmedia filters */
> +    REGISTER_FILTER (CONCAT,      concat,      avf);
>      REGISTER_FILTER (SHOWWAVES,   showwaves,   avf);
>  
>      /* those filters are part of public or internal API => registered
> diff --git a/libavfilter/f_concat.c b/libavfilter/f_concat.c
> new file mode 100644
> index 0000000..fb8fb15
> --- /dev/null
> +++ b/libavfilter/f_concat.c
> @@ -0,0 +1,428 @@
> +/*
> + * Copyright (c) 2012 Nicolas George
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> + * See the GNU Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public License
> + * along with FFmpeg; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * concat audio-video filter
> + */
> +
> +#include "libavutil/avassert.h"
> +#include "libavutil/opt.h"
> +#include "avfilter.h"
> +#define FF_BUFQUEUE_SIZE 256
> +#include "bufferqueue.h"
> +#include "internal.h"
> +#include "video.h"
> +#include "audio.h"
> +
> +#define TYPE_ALL 2
> +
> +typedef struct {
> +    const AVClass *class;
> +    unsigned nb_streams[TYPE_ALL]; /**< number of out streams of each type */
> +    unsigned nb_segments;
> +    unsigned cur_idx; /**< index of the first input of current segment */
> +    int64_t delta_ts; /**< timestamp to add to produce output timestamps */
> +    unsigned nb_in_active; /**< number of active inputs in current segment */
> +    struct concat_in {
> +        int64_t pts;
> +        int64_t nb_frames;
> +        unsigned eof;
> +        struct FFBufQueue queue;
> +    } *in;
> +} ConcatContext;
> +
> +#define OFFSET(x) offsetof(ConcatContext, x)
> +
> +static const AVOption concat_options[] = {
> +    { "n", "specify the number of segments", OFFSET(nb_segments),
> +      AV_OPT_TYPE_INT, { .dbl = 2 }, 2, INT_MAX },
> +    { "v", "specify the number of video streams",
> +      OFFSET(nb_streams[AVMEDIA_TYPE_VIDEO]),
> +      AV_OPT_TYPE_INT, { .dbl = 1 }, 1, INT_MAX },
> +    { "a", "specify the number of audio streams",
> +      OFFSET(nb_streams[AVMEDIA_TYPE_AUDIO]),
> +      AV_OPT_TYPE_INT, { .dbl = 0 }, 0, INT_MAX },
> +    { 0 }
> +};
> +
> +AVFILTER_DEFINE_CLASS(concat);
> +
> +static int query_formats(AVFilterContext *ctx)
> +{
> +    ConcatContext *cat = ctx->priv;
> +    unsigned type, nb_str, idx0 = 0, idx, str, seg;
> +    AVFilterFormats *formats, *rates;
> +    AVFilterChannelLayouts *layouts;
> +

> +    for (type = 0; type < TYPE_ALL; type++) {
> +        nb_str = cat->nb_streams[type];
> +        for (str = 0; str < nb_str; str++) {
> +            idx = idx0;

**

> +            /* Set the output formats */
> +            formats = ff_all_formats(type);
> +            if (!formats)
> +                return AVERROR(ENOMEM);
> +            ff_formats_ref(formats, &ctx->outputs[idx]->in_formats);
> +            if (type == AVMEDIA_TYPE_AUDIO) {
> +                rates = ff_all_samplerates();
> +                if (!rates)
> +                    return AVERROR(ENOMEM);
> +                ff_formats_ref(rates, &ctx->outputs[idx]->in_samplerates);
> +                layouts = ff_all_channel_layouts();
> +                if (!layouts)
> +                    return AVERROR(ENOMEM);
> +                ff_channel_layouts_ref(layouts, &ctx->outputs[idx]->in_channel_layouts);
> +            }

**

> +            /* Set the same formats for each corresponding input */
> +            for (seg = 0; seg < cat->nb_segments; seg++) {
> +                ff_formats_ref(formats, &ctx->inputs[idx]->out_formats);
> +                if (type == AVMEDIA_TYPE_AUDIO) {
> +                    ff_formats_ref(rates, &ctx->inputs[idx]->out_samplerates);
> +                    ff_channel_layouts_ref(layouts, &ctx->inputs[idx]->out_channel_layouts);
> +                }
> +                idx += ctx->nb_outputs;
> +            }

**

> +            idx0++;
> +        }
> +    }

Nit+: some empty lines at the indicated points may help readability.

> +    return 0;
> +}
> +
> +static int config_output(AVFilterLink *outlink)
> +{
> +    AVFilterContext *ctx = outlink->src;
> +    ConcatContext *cat   = ctx->priv;
> +    unsigned out_no = FF_OUTLINK_IDX(outlink);
> +    unsigned in_no  = out_no, seg;
> +    AVFilterLink *inlink = ctx->inputs[in_no];
> +
> +    /* enhancement: find a common one */
> +    outlink->time_base           = AV_TIME_BASE_Q;
> +    outlink->w                   = inlink->w;
> +    outlink->h                   = inlink->h;
> +    outlink->sample_aspect_ratio = inlink->sample_aspect_ratio;
> +    outlink->format              = inlink->format;
> +    for (seg = 1; seg < cat->nb_segments; seg++) {
> +        inlink = ctx->inputs[in_no += ctx->nb_outputs];
> +        /* possible enhancement: unsafe mode, do not check */
> +        if (outlink->w                       != inlink->w                       ||
> +            outlink->h                       != inlink->h                       ||
> +            outlink->sample_aspect_ratio.num != inlink->sample_aspect_ratio.num ||
> +            outlink->sample_aspect_ratio.den != inlink->sample_aspect_ratio.den) {

> +            av_log(ctx, AV_LOG_ERROR, "Input link %s parameters "
> +                   "(%dx%d, SAR %d:%d) do not match the corresponding output "

nit: I'd prefer "size %dx%d" or "size:%dx%d" but whatever

> +                   "link %s parameters (%dx%d, SAR %d:%d)\n",
> +                   ctx->input_pads[in_no].name, inlink->w, inlink->h,
> +                   inlink->sample_aspect_ratio.num,
> +                   inlink->sample_aspect_ratio.den,
> +                   ctx->input_pads[out_no].name, outlink->w, outlink->h,
> +                   outlink->sample_aspect_ratio.num,
> +                   outlink->sample_aspect_ratio.den);
> +            return AVERROR(EINVAL);
> +        }
> +    }
> +
> +    return 0;
> +}
> +
[...]
> +static void send_silence(AVFilterContext *ctx, unsigned in_no, unsigned out_no)
> +{
> +    ConcatContext *cat = ctx->priv;
> +    AVFilterLink *outlink = ctx->outputs[out_no];
> +    int64_t base_pts = cat->in[in_no].pts;
> +    int64_t nb_samples, sent = 0;
> +    int frame_size;
> +    AVRational rate_tb = { 1, ctx->inputs[in_no]->sample_rate };
> +    AVFilterBufferRef *buf;
> +
> +    if (!rate_tb.den)
> +        return;
> +    nb_samples = av_rescale_q(cat->delta_ts - base_pts,
> +                              outlink->time_base, rate_tb);
> +    frame_size = FFMAX(9600, rate_tb.den / 5); /* arbitrary */

> +    while (nb_samples) {
> +        frame_size = FFMIN(frame_size, nb_samples);
> +        buf = ff_get_audio_buffer(outlink, AV_PERM_WRITE, frame_size);
> +        if (!buf)
> +            return;
> +        buf->pts = base_pts + av_rescale_q(sent, rate_tb, outlink->time_base);
> +        ff_filter_samples(outlink, buf);
> +        sent       += frame_size;
> +        nb_samples -= frame_size;
> +    }

I'm not sure I understand this. Shouldn't you fill the audio buffer
with silence? (a zeroed buffer does not always correspond to silence).

> +}
> +
> +static void flush_segment(AVFilterContext *ctx)
> +{
> +    ConcatContext *cat = ctx->priv;
> +    unsigned str, str_max;
> +
> +    find_next_delta_ts(ctx);
> +    cat->cur_idx += ctx->nb_outputs;
> +    cat->nb_in_active = ctx->nb_outputs;
> +    av_log(ctx, AV_LOG_VERBOSE, "Segment finished at pts=%"PRId64"\n",
> +           cat->delta_ts);
> +
> +    if (cat->cur_idx < ctx->nb_inputs) {
> +        /* pad audio streams with silence */
> +        str = cat->nb_streams[AVMEDIA_TYPE_VIDEO];
> +        str_max = str + cat->nb_streams[AVMEDIA_TYPE_AUDIO];
> +        for (; str < str_max; str++)
> +            send_silence(ctx, cat->cur_idx - ctx->nb_outputs + str, str);
> +        /* flush queued buffers */
> +        /* possible enhancement: flush in PTS order */
> +        str_max = cat->cur_idx + ctx->nb_outputs;
> +        for (str = cat->cur_idx; str < str_max; str++)
> +            while (cat->in[str].queue.available)
> +                push_frame(ctx, str, ff_bufqueue_get(&cat->in[str].queue));
> +    }
> +}
[...]

I read the rest of the patch and can't spot apparent errors.
-- 
FFmpeg = Foolish & Freak Martial Practical Erudite Gospel

