[FFmpeg-devel] SOC Qualification tasks

Uoti Urpala
Sat Mar 15 01:45:33 CET 2008


On Sat, 2008-03-15 at 00:00 +0100, Michael Niedermayer wrote:
> On Fri, Mar 14, 2008 at 10:44:01PM +0200, Uoti Urpala wrote:
> > - Doesn't work with LOW_DELAY.
> 
> low_delay == no delay, which is just the opposite of frame-based multithreading.

I don't see it as the opposite or incompatible. Doing reordering in the
decoder is IMO orthogonal to whether you want multiple threads to work
on different frames.

> > - Without LOW_DELAY, it will make DTS (even more) meaningless.
> 
> I cannot make sense of the "even more". Except that yes, DTS will need
> some work; this is inevitable no matter what the API, DTS just won't be
> magically correct.

I was basically referring to DTS not being very reliable with non-MPEG
containers. It's not really relevant to the main point of discussion.

> > MPlayer's basic method of reordering
> > pts in a decode-delay-sized buffer would keep working, but it would
> > break the comparison with avctx->has_b_frames used to detect cases where
> > a pts will really never have a corresponding visible frame. The timing
> > problems are not purely the fault of the thread-related API changes
> > though;
> 
> MPlayer's basic method of reordering pts does AFAIK not work.
> IIRC (please correct me if I am wrong) it needs to have timestamps for
> each frame. These are generally not available and not required by the specs.
> Neither MPEG-2 nor H.264 in MPEG-PS/TS needs to have timestamps on every
> frame; they only need them once every 0.5 or so seconds, and the frames
> in between can have varying duration and arbitrary reordering. Only the
> decoder or some other code decoding headers and SEIs can recover the
> timestamps.

The -correct-pts mode is based on pts values and doesn't work if it
doesn't get pts from the demuxer. Good handling of MPEG containers would
probably best be done with a special DTS-based mode. Anyway, that's not
relevant to the issue with the proposed thread API that I described
above.

> > if FFmpeg had better functionality to track frame timestamps and
> > reordering across decoding then such changes would not break things so
> > easily.
> 
> I am not sure what you are talking about, but if you can point at some
> concrete problems and solutions with patches, those would be welcome.

The most basic issue is keeping track of timestamps (or any other
information) associated with particular frames across decoding. I think
the get_buffer() trick that ffplay.c uses to reorder timestamps is an
ugly hack, and the need for such hacks shows that the library API is
lacking.
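
For reference, here is roughly the shape of that hack (my own minimal
sketch of what ffplay.c does; the my_* names are mine and error
handling is omitted):

#include <libavcodec/avcodec.h>

/* Shared between the packet loop and the callback; set this to the
 * packet's pts just before each avcodec_decode_video() call. */
static uint64_t global_video_pkt_pts = AV_NOPTS_VALUE;

static int my_get_buffer(AVCodecContext *c, AVFrame *pic)
{
    /* The decoder allocates a frame's buffer while parsing the packet
     * that frame comes from, so a pts stashed here comes back out
     * attached to the right frame even after reordering. */
    int ret = avcodec_default_get_buffer(c, pic);
    uint64_t *pts = av_malloc(sizeof(uint64_t));
    *pts = global_video_pkt_pts;
    pic->opaque = pts;
    return ret;
}

static void my_release_buffer(AVCodecContext *c, AVFrame *pic)
{
    if (pic)
        av_freep(&pic->opaque);
    avcodec_default_release_buffer(c, pic);
}

After decoding, *(uint64_t *)frame->opaque is the pts of the packet
that produced the frame. It works, but the caller has to smuggle its
data through a codec-internal callback, which is exactly the kind of
bookkeeping the library should do itself.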

Another issue is that if you use LOW_DELAY then (AFAIK) there's no way
to access the reordering information. So if you decode a bunch of
frames, you don't necessarily even know whether the next visible frame
will be one of the frames you decoded or one further along in decode
order.

A better API for the first case is obvious: allow the user to give a
marker to avcodec_decode_video() that is later returned with the
corresponding frame. I haven't thought about a good API, much less an
implementation, for the second case.
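
For the first case, something along these lines would do (entirely
hypothetical; avcodec_decode_video_marked is a made-up name, shown
only to illustrate the shape):

/* Hypothetical variant of avcodec_decode_video(): the caller attaches
 * an opaque marker to the input data, and the library returns the
 * marker belonging to whatever frame comes out, however the codec
 * reordered frames internally. */
int avcodec_decode_video_marked(AVCodecContext *avctx, AVFrame *picture,
                                int *got_picture_ptr,
                                const uint8_t *buf, int buf_size,
                                void *marker_in, void **marker_out);

A player could then pass a struct holding the pts and anything else it
cares about, and never touch get_buffer() at all.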

> Also I'd like to point you to AVStream.pts_buffer, which I believe is
> equivalent to what you call "MPlayer's basic method of reordering pts".
> In libavformat it's one of several methods to calculate missing
> timestamps. If you think MPlayer's method can do something which ours
> cannot, then I am very interested to hear what that is.

As explained above, I'm not talking about anything that can (easily) be
done outside libavcodec. Basically I think libavcodec should give better
access to timing/reordering information from the codec. A player using
libavcodec shouldn't need anything like ffplay's get_buffer() hack or
MPlayer's pts sorting buffer to implement simple playback based on
known-correct pts values.
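
For concreteness, that sorting buffer amounts to roughly the following
(my own sketch, not MPlayer's actual code; as discussed above, it
assumes the demuxer provides a pts for every packet):

#include <string.h>

#define MAX_DELAY 16                 /* must be >= decoder delay + 1 */
static double pts_buf[MAX_DELAY];
static int    pts_count;

/* Insert the pts of each packet fed to the decoder, keeping the
 * buffer sorted ascending. */
static void push_pts(double pts)
{
    int i = pts_count++;
    while (i > 0 && pts_buf[i - 1] > pts) {
        pts_buf[i] = pts_buf[i - 1];
        i--;
    }
    pts_buf[i] = pts;
}

/* For each frame the decoder outputs, the smallest buffered pts is
 * the one that belongs to it (presentation order is sorted order). */
static double pop_pts(void)
{
    double pts = pts_buf[0];
    pts_count--;
    memmove(pts_buf, pts_buf + 1, pts_count * sizeof(pts_buf[0]));
    return pts;
}

Note how the correct buffer depth depends on the decoder's delay; any
extra, hidden thread-related delay silently changes it, which is the
has_b_frames breakage mentioned earlier.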

> > On a more general level I don't like the idea of managing concurrently
> > running threads behind the caller's back as the ONLY supported mode of
> > operation.
> 
> Is this just a feeling or do you know of some concrete use cases where it
> would be a limitation?

Say you want to keep a constant number of actively running threads (for
example equal to the number of cores, or one less to leave a free core
for other activity), and also have tasks other than decoding (like
video filtering). I think this is a realistic enough use case, and it
seems difficult to achieve with the proposed API: it basically requires
threads to occasionally "move" from the "FFmpeg side" to the "user
side" as they complete their current tasks.
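
One way to support this would be to let the caller drive the threading:
the library exposes runnable work items and the caller schedules them
on its own pool. All names below are made up, just to illustrate the
shape of such an interface:

/* Hypothetical caller-driven threading interface: instead of
 * libavcodec spawning threads behind the caller's back, it hands out
 * independent work items that the caller may run on any thread. */
typedef struct AVWorkItem AVWorkItem;

AVWorkItem *avcodec_get_work(AVCodecContext *avctx); /* NULL if none runnable */
void        avcodec_run_work(AVWorkItem *item);      /* safe on any caller thread */

With something like this, decode tasks and video-filter tasks could
share one fixed-size pool with exactly as many threads as cores.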
