[FFmpeg-devel] libavfilter API design in a realtime environment

Kieran Kunhya kierank at obe.tv
Wed Mar 16 01:42:24 CET 2016


Hello,

I want to try using the libavfilter API to overlay bitmap subtitles on
video from a realtime source. This seems difficult or impossible to do
with the current API, hence I'm asking on the main devel list.
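
For concreteness, this is roughly the construction I have in mind. It's a
minimal, untested sketch, not working code; the 1920x1080 size, the pixel
formats and the 1/90000 time base are placeholders for my actual source:

/* Minimal sketch: video + subtitle buffer sources feeding overlay.
 * On FFmpeg before 4.0 you'd also call avfilter_register_all() first. */
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>

static AVFilterGraph   *graph;
static AVFilterContext *vsrc, *ssrc, *ovl, *sink;

static int build_graph(void)
{
    int ret;

    graph = avfilter_graph_alloc();
    if (!graph)
        return AVERROR(ENOMEM);

    /* Main video input; size, format and time base are placeholders. */
    ret = avfilter_graph_create_filter(&vsrc, avfilter_get_by_name("buffer"),
        "vin", "video_size=1920x1080:pix_fmt=yuv420p:"
               "time_base=1/90000:pixel_aspect=1/1", NULL, graph);
    if (ret < 0)
        return ret;

    /* Subtitle bitmap input, sharing the video's time base. */
    ret = avfilter_graph_create_filter(&ssrc, avfilter_get_by_name("buffer"),
        "sin", "video_size=1920x1080:pix_fmt=rgba:"
               "time_base=1/90000:pixel_aspect=1/1", NULL, graph);
    if (ret < 0)
        return ret;

    ret = avfilter_graph_create_filter(&ovl, avfilter_get_by_name("overlay"),
        "ovl", "x=0:y=0", NULL, graph);
    if (ret < 0)
        return ret;

    ret = avfilter_graph_create_filter(&sink, avfilter_get_by_name("buffersink"),
        "out", NULL, NULL, graph);
    if (ret < 0)
        return ret;

    if ((ret = avfilter_link(vsrc, 0, ovl, 0)) < 0 ||
        (ret = avfilter_link(ssrc, 0, ovl, 1)) < 0 ||
        (ret = avfilter_link(ovl,  0, sink, 0)) < 0)
        return ret;

    return avfilter_graph_config(graph, NULL);
}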

Some questions:

1: How do I know the end-to-end latency of the pipeline? Is it fixed, or
does it vary? This matters because this latency needs to be added to my
wallclock PTS.
2: Do I need to interleave video and subtitles (e.g. VSVSVSVS) in
monotonically increasing timestamp order (see the feed/drain sketch after
this list)? What happens if the subtitles stop for a bit (magic queues are
bad in a realtime environment)? My timestamps are guaranteed to match,
though.
3: My world is CFR but libavfilter is VFR, so how does the API know when
to start releasing frames? Does this then add one frame of video latency
while it waits for the next video frame to arrive?
4: What are the differences between the FFmpeg and Libav implementations?
FFmpeg's overlay uses framesync and Libav's doesn't?
5: I know exactly which frames have associated subtitle bitmaps and which
don't; is there a way I can overlay without an extra frame of delay?
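
To make question 2 concrete, this is the kind of per-frame driving loop I
would expect to write (again an untested sketch, reusing vsrc/ssrc/sink
from the graph above; how the frames are obtained is elided):

#include <libavutil/frame.h>

/* Push the subtitle bitmap, if any, before the video frame carrying the
 * same PTS, then drain whatever the graph is willing to release. */
static int process(AVFrame *video, AVFrame *subtitle /* may be NULL */)
{
    AVFrame *out = av_frame_alloc();
    int ret;

    if (!out)
        return AVERROR(ENOMEM);

    if (subtitle) {
        ret = av_buffersrc_add_frame_flags(ssrc, subtitle,
                                           AV_BUFFERSRC_FLAG_KEEP_REF);
        if (ret < 0)
            goto end;
    }
    ret = av_buffersrc_add_frame_flags(vsrc, video,
                                       AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret < 0)
        goto end;

    /* AVERROR(EAGAIN) here means the graph is holding frames back;
     * this is exactly where any extra latency would show up. */
    while ((ret = av_buffersink_get_frame(sink, out)) >= 0) {
        /* ... hand `out` to the encoder / output ... */
        av_frame_unref(out);
    }
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        ret = 0;
end:
    av_frame_free(&out);
    return ret;
}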

If possible, please answer these questions with respect to libavfilter
only (no lavc/lavf specifics).

Kieran

