[FFmpeg-soc] Help needed with new concatenate filter

Brandon Mintern bmintern at gmail.com
Mon Apr 5 17:39:33 CEST 2010


On Sun, Apr 4, 2010 at 7:23 PM, Stefano Sabatini
<stefano.sabatini-lala at poste.it> wrote:
> On date Sunday 2010-04-04 16:16:28 -0400, Brandon Mintern encoded:
>> On Thu, Apr 1, 2010 at 9:03 AM, Stefano Sabatini
>> <stefano.sabatini-lala at poste.it> wrote:
>> > On date Wednesday 2010-03-31 20:06:17 -0400, Brandon Mintern encoded:
>> >> Thanks a lot for the feedback. A new, much cleaner patch (actually
>> >> similar to what I wrote on my first try, before changing it to what
>> >> you saw) is at the bottom of this post. I still can't see why the
>> >> second input is not being output; it seems like everything should be
>> >> getting properly propagated.
>> >
>> > You need to shift the PTSes of the second source, something of the
>> > kind:
>> > pts += last_pts_of_first_source;
>>
>> I tried to do this in start_frame:
>>
>> static void start_frame(AVFilterLink *link, AVFilterPicRef *picref)
>> {
>>     ConcatenateContext *con = link->dst->priv;
>>
>>     /* Shift the second input's PTSes past the end of the first input;
>>      * while still on the first input, just track its latest PTS. */
>>     if (con->first_input_consumed)
>>         picref->pts += con->first_input_last_pts;
>>     else
>>         con->first_input_last_pts = picref->pts;
>>
>>     avfilter_null_start_frame(link, picref);
>> }
[snip]
> I don't know how you are testing this, but I suppose you're using
> ffplay.
>
> In order to understand what's going on you need to understand the
> threading model used by ffplay (hint: from gdb use thread apply all
> bt).
>
> The problem here looks like this: when the decode thread stops
> processing packets, it no longer sends packets to the queue on which
> the video thread is waiting.
>
> So the video thread has no way to signal to the filterchain
> that the input source has ended, and ffplay will simply stop playing
> the video.
>
> Yes, maybe this should be fixed in ffplay; patches are welcome, of
> course :-).

Here is a sample usage:

./ffmpeg_g -y -i input1.wmv -vfilters "movie=0:wmv3:input2.wmv [in2];
[in][in2] concatenate" -r 15 out.wmv

I really don't know much about ffmpeg/vfilters at all. I am trying my
best to understand it, but I'm clearly missing something. I thought
that in this case, the output end of the filter chain would call
concatenate's request_frame until it got a nonzero return value.
concatenate then propagates the request_frame back either to the input
end of the filter chain or to the movie filter (depending on how I
wrote concatenate). Either way, my understanding was that start_frame
calls are then propagated forward from the beginning, followed by
get_video_buffer, draw_slice, end_frame, and all that other good
stuff. So it makes no sense to me that [in2] is never processed at
all.

For me, the point of this filter is to avoid temporarily storing
processed segments of video to the filesystem. I was hoping to do a
whole bunch of processing (overlay, scale, fade, etc.) to produce a
title sequence, and then prepend that title sequence to a long video.
If I can't make this work (and it seems like I can't), then I guess
I'll just have to write the title sequence out and make a second call
to ffmpeg in order to put the two pieces together.

Thanks for your bits of assistance,
Brandon
