[FFmpeg-soc] [soc]: r4390 - in afilters: Makefile.dummy af_null.c avfilter.c avfilter.h dummy.c

Kevin DuBois kdub432 at gmail.com
Tue Jun 9 14:52:21 CEST 2009


On Tue, Jun 9, 2009 at 7:56 AM, Michael Niedermayer <michaelni at gmx.at> wrote:

> On Mon, Jun 08, 2009 at 10:17:51PM +0200, Vitor Sessak wrote:
> > @Michael: If you are going to complain about having start_buffer() for
> > audio and start_frame() for video, this is the best time for it.
>

> well, I am not sure if it's the best time ...
> the best time would be when there's a description of the planned design on
> the table, or when there's enough code to infer the planned design from it.
>

> As is, it's hard to comment, because I simply don't yet see where this
> is heading.
>
> In video we pass around single frames or single slices at a time, and the
> functions are designed to handle that sequence:
> frame start, slice0, slice1, ... sliceN, frame end
>
> In audio we have samples; handling a single sample at a time is not
> practical, and samples are not truly bundled into clear frames ...
>

I haven't used a clear-cut naming convention up to this point, and I agree
that my choice of variable names has been a bit inconsistent. I can
standardize them to be in line with that description.


>
> request_frame() possibly should be extended with an int sample_count
> argument; draw_slice() seems useful with y & height representing the
> start sample and count
>

I added something like that in the commit I pushed last night. The buffer
size should be adjustable by the user.
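
To make that concrete, the shape I am aiming for is something like the
following (an illustration with made-up names, not the committed code
verbatim): request_frame() gains a sample count, and the draw_slice()
analogue works on a start sample and a count instead of y and height.

/* Illustration only: hypothetical names and signatures, not necessarily
 * what is in the repository. */

#include <stdint.h>

typedef struct AudioLink {
    int16_t *buf;        /* samples currently held by the link */
    int      nb_samples; /* number of valid samples in buf     */
} AudioLink;

/* audio analogue of draw_slice(link, y, h): filter a sub-range in place */
static void filter_slice(AudioLink *link, int start, int count)
{
    for (int i = start; i < start + count && i < link->nb_samples; i++)
        link->buf[i] /= 2;              /* e.g. a trivial 6 dB attenuation */
}

/* audio analogue of request_frame(link): ask upstream for sample_count
 * samples; a real implementation would pull from the previous filter */
static int request_samples(AudioLink *link, int sample_count)
{
    return sample_count <= link->nb_samples ? 0 : -1;   /* toy check only */
}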


>
> one big question is about the buffer layout, and the answer would
> likely affect the API that makes sense for it.
>
> there have been old discussions on mplayer-dev with rich and
> possibly ivan about audio filters and buffering ...
>
> Goals would certainly be
> * filters should be simple
> * core should be simple
> * no or few memcpy/memmove
>
> Things to keep in mind
> * many filters do need multiple input samples for an output sample
> -> thus the filter core must be able to do some buffering magic
>   giving a filter a little future and past context if needed, the
>   amount of that would be requested by the filter
> * audio data can be stored packed or planar, and both must be supported;
>  there were also some discussions on how to fit this into
>  data & linesize
>
> [...]
>

Yes, I've considered these things while thinking about the design;
unnecessary memcpy/memmove calls would be detrimental to the speed of the
system.
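
For the context-buffering point in particular, here is roughly what I have
in mind (hypothetical names, nothing like this is committed yet): a filter
declares how much past and future context it needs, and the core only calls
it once that much data is available around the current block, so the filter
itself never has to copy or shift anything.

#include <stdint.h>

typedef struct AudioPad {
    int min_past;    /* past samples the filter needs   */
    int min_future;  /* future samples the filter needs */
    /* samples[-min_past] .. samples[count + min_future - 1] are valid */
    void (*filter_samples)(int16_t *samples, int count);
} AudioPad;

/* example: a 3-tap smoothing filter needs 1 past and 1 future sample */
static void smooth3(int16_t *s, int count)
{
    int16_t prev = s[-1];                 /* past context kept by the core */
    for (int i = 0; i < count; i++) {
        int16_t cur = s[i];
        s[i] = (prev + 2 * cur + s[i + 1]) / 4;
        prev = cur;
    }
}

static const AudioPad smooth3_pad = { 1, 1, smooth3 };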

As for the design of the filters, I currently have them set up much like the
video filters: they have a linked list of input and output pads, plus
init/end function hooks. As is done in the video filters, the actual
functions that manipulate the data are hooked into the AVFilterPad structure
(like start_frame, end_frame, draw_slice), and I have begun to replicate
these functions to handle the audio equivalent of AVFilterPic, which I have
called AVFilterSamples.
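
Roughly the shape I am heading toward for AVFilterSamples is below -- take
it as a sketch, the fields are still in flux and may not match what ends up
in the tree. The data[]/linesize[] pair mirrors AVFilterPic and should cover
both layouts: packed audio lives entirely in data[0], planar audio puts one
channel in each data[i].

#include <stdint.h>

typedef struct AVFilterSamples {
    uint8_t *data[8];      /* packed: only data[0] used; planar: one pointer
                            * per channel                                  */
    int      linesize[8];  /* size in bytes of each plane                  */
    int      nb_samples;   /* samples per channel in this buffer           */
    int      sample_rate;  /* in Hz                                        */
    int      channels;
    int      sample_fmt;   /* e.g. SAMPLE_FMT_S16 / SAMPLE_FMT_FLT         */
    int      planar;       /* nonzero if one channel per plane             */
    int      refcount;     /* so buffers can be shared instead of copied   */
} AVFilterSamples;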

As for the core, I don't anticipate much change from what is currently in
place. The biggest part of modifying the core will be negotiating audio
format conversions where necessary, e.g. when a filter A outputs fl32 to a
filter B that expects sl16.
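
The kind of conversion such negotiation would trigger is nothing exotic --
if A only produces 32-bit float and B only accepts signed 16-bit, the core
would have to insert something that does roughly this in between (a sketch,
the helper name is made up):

#include <stdint.h>

static void convert_flt_to_s16(const float *in, int16_t *out, int n)
{
    for (int i = 0; i < n; i++) {
        float v = in[i] * 32768.0f;
        if (v >  32767.0f) v =  32767.0f;   /* clip rather than wrap */
        if (v < -32768.0f) v = -32768.0f;
        out[i] = (int16_t)v;
    }
}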

I'm planning to avoid memcpys by passing a pointer to the buffer along to
each filter in sequence, so the memory doesn't have to move.
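
In other words, something like this (again a sketch with made-up names):
each filter receives a reference to the same buffer and either forwards it
untouched or modifies it in place; only the reference count changes hands,
never the sample data.

#include <stdlib.h>
#include <stdint.h>

typedef struct SampleRef {
    int16_t *data;
    int      nb_samples;
    int      refcount;
} SampleRef;

static SampleRef *sample_ref(SampleRef *ref)        /* share, don't copy */
{
    ref->refcount++;
    return ref;
}

static void sample_unref(SampleRef *ref)
{
    if (--ref->refcount == 0) {
        free(ref->data);
        free(ref);
    }
}

/* an af_null-style pass-through: no copying, just hand the pointer on */
static SampleRef *null_filter(SampleRef *in)
{
    return sample_ref(in);
}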

My plan is to put in about 2-3 hours a night; hopefully by the end of the
week we'll have something that people can seriously criticize :-)


-- 
Kevin DuBois
kdub432 at gmail.com
PGP fingerprint
80CF 7C1D 0A1C BE03 2203
95B6 1515 C3DC B6BE 7E88

