[FFmpeg-devel] [RFC] Move ffplay engine to separate library

Stefano Sabatini stefasab at gmail.com
Tue Oct 1 12:53:46 CEST 2013

On date Monday 2013-09-30 23:29:33 +0200, Lukasz M encoded:
> Hi,
> I'd like to propose a new library for the FFmpeg project that provides a
> middle layer between the current ffmpeg tools and a player that uses them.
> There is a lot to do to make a player based on FFmpeg work correctly,
> and this is probably common to most players.
> It would be good to make it part of the FFmpeg project so it can be
> maintained globally and people can contribute to it.
> I once made an iOS player for my own purposes, based on FFmpeg, and I
> ported ffplay as its core.
> Unfortunately it is hard to keep it up to date with git master; I believe
> there are many people who would appreciate this kind of library inside
> FFmpeg.
> I made some work as kind of "proof of concept". I pushed it on
> https://github.com/lukaszmluki/ffmpeg/tree/libavengine
> This is not ready to merge. On the contrary, a lot of things are still
> unfinished; it is just a rough vision of how it would look.
> In general I wanted to make a library that is platform independent (it
> depends only on FFmpeg). As it requires threads, synchronization, etc.,
> these would be provided as callbacks by the player (with ffplay as the
> internal player). These callbacks would be global (initialized once).
> There would also be callbacks for player tasks such as audio
> initialization/closing, picture rendering, etc.
> The player would need to call (names kept from ffplay) video_refresh
> periodically, and sdl_audio_callback whenever the audio system needs
> data.

> At this moment I don't want to discuss details; I just want to hear your
> opinion about the idea.

One problem with your approach is that it depends on SDL. Ideally a
player should be able to select the output device depending on
availability and preference, and a generic framework such as FFmpeg
should abstract away from any specific implementation.

As I envision FFmpeg, a player should be built from the basic blocks
provided by the libraries, in particular by composing filters and
input/output devices.

For example you could have something like this:

[input device] -> [movie source] -> [filters] -> [output device]
                    [control device]

and provide some way to send interactive commands, e.g. to adjust
filtering and/or to seek back.

What we lack right now, among other things: better output devices
(e.g. OpenGL, audio outputs), an efficient way to build the
filtergraph, more filter commands (e.g. to seek back in the movie
source), and a high-level API and/or high-level language bindings.
FFmpeg = Fiendish and Fundamental Mastodontic Pitiful Elitarian Geisha
