[FFmpeg-devel] [PATCH 2/2] lavc: add h264 mediacodec decoder

wm4 nfxjfg at googlemail.com
Tue Feb 23 11:32:16 CET 2016


On Tue, 23 Feb 2016 10:21:10 +0100
Matthieu Bouron <matthieu.bouron at gmail.com> wrote:

> On Mon, Feb 22, 2016 at 02:55:06PM +0100, wm4 wrote:
> > On Mon, 22 Feb 2016 12:20:36 +0100
> > Matthieu Bouron <matthieu.bouron at gmail.com> wrote:
> >   
> > > From: Matthieu Bouron <matthieu.bouron at stupeflix.com>
> > > 
> > > ---
> > > ...  
> > 
> > Some remarks:
> > 
> > - The qcom stuff should probably be moved into its own source file,
> >   because it's a lot of code, but self-contained.  
> 
> I'll move the copy / conversion routines to another file. Would you be OK
> with that? Or do you want only the qcom stuff in a separate file
> (probably named mediacodec_buffer.c,h)?

Well, I just thought that the qcom code looks relatively
self-contained, so it'd be a good candidate to move out.

> > 
> > - Do you really need h264_extradata_to_annexb_sps_pps? The BSF already
> >   creates mp4-style extradata, which MediaCodec apparently can take
> >   directly.  
> 
> I will do more testing and remove it if possible in the next revision of
> the patch. The original idea behind it was to comply with the
> documentation, which states that for the h264 codec csd-0 should contain
> the sps and csd-1 the pps (those values are also set this way by the
> MediaExtractor (SDK MP4 demuxer)).

Yeah, but in the non-Annex-B branch you set csd-0 to the whole
extradata, without splitting it into sps and pps.
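
Splitting it there isn't much work either. A minimal sketch of pulling
the first sps and pps out of mp4-style (avcC) extradata, assuming
well-formed input (the function name and error handling are illustrative,
not from the patch):

#include <stdint.h>
#include <stddef.h>

static int avcc_split_sps_pps(const uint8_t *data, size_t size,
                              const uint8_t **sps, size_t *sps_size,
                              const uint8_t **pps, size_t *pps_size)
{
    size_t off = 5;               /* skip version/profile/compat/level/len-size */
    unsigned n, len;

    if (size < 7 || data[0] != 1) /* configurationVersion must be 1 */
        return -1;

    n = data[off++] & 0x1f;       /* numOfSequenceParameterSets */
    if (!n || size - off < 2)
        return -1;
    len = data[off] << 8 | data[off + 1];
    off += 2;
    if (len > size - off)
        return -1;
    *sps      = data + off;       /* first SPS -> csd-0 */
    *sps_size = len;
    off += len;

    while (--n) {                 /* skip any additional SPS entries */
        if (size - off < 2)
            return -1;
        len = data[off] << 8 | data[off + 1];
        off += 2;
        if (len > size - off)
            return -1;
        off += len;
    }

    if (size - off < 1)
        return -1;
    n = data[off++];              /* numOfPictureParameterSets */
    if (!n || size - off < 2)
        return -1;
    len = data[off] << 8 | data[off + 1];
    off += 2;
    if (len > size - off)
        return -1;
    *pps      = data + off;       /* first PPS -> csd-1 */
    *pps_size = len;
    return 0;
}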

> > 
> > - There are several timeouts to avoid deadlocks in the dataflow,
> >   apparently. Is this inherently needed, or only because of the
> >   libavcodec API, which requires returning 0 or 1 output frames per
> >   input packet?  
> 
> It is needed for several reasons:
>   * The MediaCodec API does not guarantee 1 input - 1 output.
> 
>   * The MediaCodec API needs to be fed enough buffers before it
>   outputs its first buffer, and you want to do that as fast as
>   possible (this is why, in this case, I make the API return
>   immediately when I try to dequeue the output buffers while it
>   hasn't returned any frames yet).
> 
>   * You also don't want to wait forever for an input buffer or an output
>   buffer to become available: if the internal queue of the codec is
>   full, you'll never get an input buffer, and if the codec hasn't
>   consumed enough packets, you'll never get an output buffer.
> 
>   * Also, in theory, you could block forever while the codec is
>   flushing its remaining frames. I preferred not to block forever but
>   only for a fair amount of time, so that if there is a bug in the
>   codec implementation or something else, it won't deadlock the whole
>   process.
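
In code, that policy comes out roughly like this - a sketch using the
NDK flavour of MediaCodec rather than the JNI path the patch takes,
with illustrative timeout values and a made-up helper shape:

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>
#include <media/NdkMediaCodec.h>

/* Bounded waits, so a buggy codec can't deadlock the whole process. */
#define INPUT_DEQUEUE_TIMEOUT_US  1000000
#define OUTPUT_DEQUEUE_TIMEOUT_US 1000000

static int decode_step(AMediaCodec *codec,
                       const uint8_t *pkt, size_t pkt_size, int64_t pts,
                       int *got_first_frame)
{
    ssize_t idx;
    AMediaCodecBufferInfo info;

    /* Input side: wait a bounded amount of time. If the codec's
     * internal queue is full, this times out instead of blocking
     * forever. */
    idx = AMediaCodec_dequeueInputBuffer(codec, INPUT_DEQUEUE_TIMEOUT_US);
    if (idx >= 0) {
        size_t cap;
        uint8_t *buf = AMediaCodec_getInputBuffer(codec, idx, &cap);
        if (!buf || pkt_size > cap)
            return -1;
        memcpy(buf, pkt, pkt_size);
        AMediaCodec_queueInputBuffer(codec, idx, 0, pkt_size, pts, 0);
    }

    /* Output side: before the first frame appears, poll with a zero
     * timeout so input can be fed as fast as possible; afterwards,
     * wait, but again only a bounded amount of time. */
    idx = AMediaCodec_dequeueOutputBuffer(codec, &info,
              *got_first_frame ? OUTPUT_DEQUEUE_TIMEOUT_US : 0);
    if (idx >= 0) {
        *got_first_frame = 1;
        /* ... copy/convert the frame out here ... */
        AMediaCodec_releaseOutputBuffer(codec, idx, false);
        return 1; /* got a frame */
    }
    return 0;     /* no frame this round */
}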

All these problems sound pretty similar to the problems I had in MMAL
(an API relatively similar to OMX), which is asynchronous and has
decoupled input/output. It's very fragile and can't do without
timeouts, but I didn't find a better way to handle it either. I'm
thinking about proposing a new decode API for libavcodec, which would
make interacting with this type of API much simpler.
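
From the caller's side, such a decoupled API could look something like
the sketch below (avcodec_send_packet()/avcodec_receive_frame() here
stand in for whatever such an API would be called; the loop structure,
with no 1:1 input/output coupling, is the point):

#include <libavcodec/avcodec.h>

static int decode_decoupled(AVCodecContext *avctx, const AVPacket *pkt,
                            AVFrame *frame)
{
    int ret = avcodec_send_packet(avctx, pkt);  /* feed input */
    if (ret < 0 && ret != AVERROR(EAGAIN))
        return ret;  /* EAGAIN just means: drain output, then resend */

    /* drain however many frames happen to be ready: zero, one or more */
    while ((ret = avcodec_receive_frame(avctx, frame)) >= 0) {
        /* ... consume the frame ... */
        av_frame_unref(frame);
    }
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}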

> > 
> > - The JNI code uses braces in case labels: "case (FF_JNI_FIELD): {"
> >   (Seems unusual coding style.)  
> 
> The reason for this is to declare local variables scoped to the
> case block. If this style is not allowed in FFmpeg, I will remove the
> braces.

That's fine. I mean, why isn't it just "case FF_JNI_FIELD: {"?
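
To illustrate (FF_JNI_FIELD is from the patch; the enum set and the
rest are made up for the example):

#include <stdio.h>

enum FFJniType { FF_JNI_CLASS, FF_JNI_FIELD, FF_JNI_METHOD };

static void describe(enum FFJniType type)
{
    switch (type) {
    case FF_JNI_FIELD: {            /* braces scope locals to this case, */
        const char *kind = "field"; /* no parentheses needed on the label */
        printf("resolving a %s\n", kind);
        break;
    }
    default:
        printf("something else\n");
        break;
    }
}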

