[FFmpeg-devel] Allow interrupt callback for AVCodecContext

Don Moir donmoir at comcast.net
Mon Jan 6 09:33:42 CET 2014

----- Original Message ----- 
From: "Reimar Döffinger" <Reimar.Doeffinger at gmx.de>
To: "FFmpeg development discussions and patches" <ffmpeg-devel at ffmpeg.org>
Sent: Monday, January 06, 2014 8:11 AM
Subject: Re: [FFmpeg-devel] Allow interrupt callback for AVCodecContext

> On 06.01.2014, at 09:34, "Don Moir" <donmoir at comcast.net> wrote:
>> ----- Original Message ----- From: "Reimar Döffinger" <Reimar.Doeffinger at gmx.de>
>> To: "FFmpeg development discussions and patches" <ffmpeg-devel at ffmpeg.org>
>> Sent: Monday, January 06, 2014 5:17 AM
>> Subject: Re: [FFmpeg-devel] Allow interrupt callback for AVCodecContext
>>> On 06.01.2014, at 10:01, "Don Moir" <donmoir at comcast.net> wrote:
>>>>> In other words: I think this looks so attractive to you because it would work
>>>>> well if it was implemented specifically _just for you_. But having code specifically
>>>>> for one person in such a large project doesn't make much sense.
>>>> I think it's a real issue, and I know better than to ask for something just for me,
>>>> although the people who would benefit would be in the minority.
>>>> Player apps would not benefit much, but timeline and editing apps can.
>>> Your examples make me suspicious. Why would timeline and editing apps have to
>>> completely unpredictably stop decoding? Player apps should be far more affected...
>>> If it's predictable, you can just flush the context...
>> It's predictable in the sense that the user has chosen to seek or swap out a video. In my apps the seek call is immediate and 
>> interruptible, but the actual seeking process takes time. Some of this time is spent waiting on avcodec_decode_video2, and the rest 
>> depends on the seek position etc.
>> I can flush the context while it is in the middle of an avcodec_decode_video2? This is the thing I am waiting to finish when not 
>> using cached context.
> No, but you possibly might avoid feeding it more than 1 frame at a time (note: I doubt that would work currently), thus avoiding 
> having to wait for it to decode n frames in the case of n-way multithreading.
> Slice threading instead of frame threading where possible should help, too.
>>> Without an analysis of what exactly the cost is, that is micro-optimization.
>>> There are going to be loads of places where you can micro-optimize that don't need new
>>> FFmpeg features and might gain your application more in overall performance.
>>> If you did an analysis and the cost was relevant, then we should look at that and
>>> see if we can fix it so that cached contexts are nothing anyone has to worry about.
>> It's not clear how much more memory is used when allocating a new context. It depends mostly on the codec id, I think. I am not too 
>> much bothered by that.
>> Each open context, though, does in my worst case create 2 new threads that just sit idle waiting to be swapped out when 
>> needed. Right now I have just one cached context for video. Audio does not matter too much, but I just have not determined whether I 
>> need to do anything for that yet. I limit the context to a maximum of 2 threads; diminishing returns with more 
>> threads than that.
>> So if I have 10 videos open at once (not unusual), then that is 20 additional threads open doing nothing. I know they are idle 
>> and not doing anything, but it still bothers me.
> Unless you heavily customized the OS you will have 50 to 100 useless _processes_ around.
> Threads are specifically designed to be lightweight; unless you have actual numbers indicating the opposite, being bothered even by 
> 200 useless threads is just silly.
> I expect you could create 10000 threads and would not notice any overhead.
> Apache used to create/use a whole _process_ for each request and it could handle quite large workloads anyway, and processes are 
> vastly more expensive than threads.
>>> The other option is to make creating contexts really fast, so you
>>> don't have to cache them in the first place.
>> It's not the creation, it's the open that takes the most time. If avcodec_open2 could be made blindingly fast, then that would 
>> be good.
>> The only way that works across the board right now for a new context is the following:
>> AVCodecContext *new_context = avcodec_alloc_context3 (NULL);
>> avcodec_copy_context (new_context, existing_context);
>> avcodec_open2 (new_context, codec, NULL);
>> You can use the approach below for some codecs, but it fails too often in avcodec_open2 or elsewhere:
>> AVCodecContext *new_context = avcodec_alloc_context3 (codec);
>> avcodec_open2 (new_context, codec, NULL);
> As I said before, a full bug report would be welcome; this should _not_ be failing as far as I can tell.
> And by "creation" I meant the open call.
> That one is simply not optimized at all in most cases (nobody considered its performance relevant so far I think), so it's likely 
> there will be a lot of low-hanging fruit to make it faster.

In the future I will look into what takes the most time in avcodec_open2. Before this I never paid much attention to it either, but I 
did look quickly at the code and then left quickly haha.

Right now I don't even know if avcodec_open2 is supposed to work for an allocated context if you use this:

AVCodecContext *new_context = avcodec_alloc_context3 (codec);
avcodec_open2 (new_context, codec, NULL);

In the cases where it did not work, setting the codec defaults did not help either, though you should not have to set them anyway.

It doesn't work for, say, Theora, which fails complaining about missing side data, and I think it fails elsewhere for other codecs. The 
first time I tried it, it worked for mpeg2; then the next two files failed, so I moved on and started using the copy-context method.

The original avcodec_open2 on the stream's codec context always works as expected; that is the existing_context used above.
