[FFmpeg-devel] [PATCH] libvpx: alt reference frame / lag

Reimar Döffinger Reimar.Doeffinger
Wed Jun 16 21:28:53 CEST 2010


On Wed, Jun 16, 2010 at 02:52:17PM -0400, John Koleszar wrote:
> Allowing the application to run
> more frequently is always a good thing, especially on embedded
> platforms, for single threaded applications.

Then it makes more sense to provide an API to pass whatever data is
already there to the decoder immediately, instead of having to wait
for a full frame.
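
Roughly what I have in mind (a sketch only; these functions do not
exist anywhere, and recv_some_bytes()/show() are just placeholders):

    /* Hypothetical push-style API: hand the decoder whatever bytes
     * have arrived so far, poll for a picture whenever one is ready. */
    int          vpx_codec_feed(vpx_codec_ctx_t *ctx,
                                const uint8_t *buf, size_t len);
    vpx_image_t *vpx_codec_poll_frame(vpx_codec_ctx_t *ctx);

    while ((len = recv_some_bytes(buf, sizeof(buf))) > 0) {
        vpx_codec_feed(&ctx, buf, len);   /* no waiting for a full frame */
        vpx_image_t *img;
        while ((img = vpx_codec_poll_frame(&ctx)))
            show(img);                    /* picture as soon as possible */
    }

That would give single-threaded/embedded applications their frequent
returns to the main loop without forcing everyone to know where the
frame boundaries are.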

> These frames are frames in every sense except that they're not
> displayed on their own. They're not just a fancy header. Here's an
> example: You can have an ARF that's taken from some future frame and
> not displayed. Then later, when that source frame's PTS is reached,
> code a non-ARF frame that has no residual data at all, effectively a
> header saying "present the ARF buffer now." Which packet do you call
> the "frame" and which is the "header" in that case?

The one that you put into the decoder and then get a frame out of is
the frame, and it is the only real frame.
It may not look that way from inside the decoder, but why do you want
to force your users, at all costs, to bother with _internals_ of your codec?
Yes, there may be advanced users who need more control, but
why should ordinary users have to pay the price in complexity for them?
The first rule IMO is still "keep simple things simple".
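
To illustrate, this is all the decode loop of an ordinary user looks
like against the current libvpx API (minimal sketch, init/error
handling trimmed; pkt_data, pkt_size and show() are placeholders):

    #include <vpx/vpx_decoder.h>
    #include <vpx/vp8dx.h>

    vpx_codec_ctx_t codec;
    vpx_codec_dec_init(&codec, vpx_codec_vp8_dx(), NULL, 0);

    /* one compressed packet in ... */
    vpx_codec_decode(&codec, pkt_data, pkt_size, NULL, 0);

    /* ... zero or one picture out.  An invisible ARF packet simply
     * produces no picture here; the later packet that "presents" it
     * does.  The user never has to care which buffer it came from. */
    vpx_codec_iter_t  iter = NULL;
    vpx_image_t      *img;
    while ((img = vpx_codec_get_frame(&codec, &iter)))
        show(img);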

> >> A packet stream is a clean abstraction that everybody
> >> understands, the only twist here is that not all packets are
> >> displayed.
> >
> > That argument works just as well for claiming that e.g. for JPEG
> > the SOI, EOI etc. should each be in a separate packet.
> > Or that for H.264 each slice should go into its own packet, after
> > all someone might want to decode only the middle slice for some
> > reason.
> 
> That data is all related to the same frame. An ARF is not necessarily
> related to the frame preceding or following it.

Neither, most of the time, are things like the SPS and PPS for H.264.
At least we still don't put them into a separate packet (well, into
extradata for formats where that is possible, but only because
it does not change).
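
To be concrete, this is all the application ever sees of the SPS/PPS
with e.g. MP4 input (sketch only; assumes fmt is an already opened
AVFormatContext and video_index is known):

    AVStream *st = fmt->streams[video_index];
    if (st->codec->codec_id == CODEC_ID_H264 &&
        st->codec->extradata_size > 0) {
        /* SPS/PPS live here in the (unchanging) extradata, handed
         * over once at codec init -- not as packets of their own. */
    }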

> There are existing
> applications that very much care about the contents of each reference
> buffer and what's in each packet, this isn't a hypothetical like
> decoding a single slice.

Which applications exactly? What exactly are they doing? And why exactly
do they absolutely need to have things in a separate packet?


