[Ffmpeg-devel] Embedded preview vhook/voutfb.so /dev/fb

Rich Felker dalias
Wed Mar 28 20:56:47 CEST 2007


On Wed, Mar 28, 2007 at 06:49:42PM +0200, Michael Niedermayer wrote:
> one thing which seems not to have been mentioned yet is that the scaling
> operation for, let's say, line 5 is not the same as for line 6; scaling
> 501 lines to 500 shows that nicely, for example, so a scale filter needs
> to know and deal with its position

yes, this is why i said in the original post that the non-one-to-one
case needs thought. still, i think, as you said, that telling the
filter its position is sufficient.
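
to make that concrete, here's a toy sketch (not swscale's actual code)
of the usual center-aligned mapping from output line to source
position. scaling 501 lines to 500 gives every output line a slightly
different phase, which is exactly why a filter that only ever sees a
slice needs to be told its absolute position:

#include <stdio.h>

int main(void)
{
    int src_h = 501, dst_h = 500;
    /* center-aligned mapping: src = (y + 0.5) * src_h / dst_h - 0.5 */
    for (int y = 4; y <= 7; y++) {
        double src = (y + 0.5) * src_h / dst_h - 0.5;
        printf("dst line %d reads around src line %d, phase %.4f\n",
               y, (int)src, src - (int)src);
    }
    return 0;
}

the phase (and hence the coefficient set) changes on every line, so the
same code run on line 5 and line 6 computes different filters.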

> also the number of lines of context for luma and chroma can differ, or
> it can be the same but the space which they cover can differ. that is,
> if we seriously want to split swscale into horizontal and vertical
> filters, such things must be supported. inside sws this is trivial; i'm
> not sure how nice that would be outside.
> having a ring buffer with more lines than needed is bad due to caching
> issues

one thing i didn't mention is that picture filters could indicate that
they operate only on certain planes, or that the same operation needs
to be performed on each plane. so for planar-to-planar scaling,
swscale could in principle even be factored into separate scalers for
each plane. i'm not sure whether that's desirable; i'm more trying to
indicate the expressive power of the design than to suggest a
preferred implementation for swscale.
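
something like the following (entirely hypothetical names, nothing
that exists in ffmpeg) is the kind of declaration i mean; note that it
also covers your point about luma and chroma needing different amounts
of context:

enum plane_mode {
    PLANES_SAME,     /* same operation run independently on each plane */
    PLANES_SUBSET,   /* filter touches only the planes in plane_mask   */
    PLANES_COUPLED,  /* filter needs all planes together (colorspace)  */
};

struct slice_filter {
    enum plane_mode mode;
    unsigned plane_mask;            /* only used with PLANES_SUBSET */
    /* per-plane vertical context, since luma and chroma can differ
     * both in line count and in the space those lines cover */
    int ctx_above[4], ctx_below[4];
    void (*filter_lines)(struct slice_filter *f, int plane,
                         unsigned char *dst, const unsigned char *src,
                         int stride, int y, int h);
};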

as far as context goes, i don't think it hurts to claim more context
than you actually need. it will just defer a line or two from being
processed during one slice until the next slice. it won't cause any
duplicate processing, so the only performance penalties that could
exist would be cache-related, and hopefully any such penalties would
be extremely small.
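
to put numbers on it, the bookkeeping is just this (a sketch, with a
hypothetical helper name):

/* how many output lines are safe to produce after `have` input lines
 * of a `height`-line frame have been delivered, if the filter declares
 * it needs `ctx` lines of context below each output line */
static int lines_ready(int have, int ctx, int height)
{
    if (have >= height)          /* end of frame: context is clamped */
        return height;
    int ready = have - ctx;
    return ready < 0 ? 0 : ready;
}

a filter that really needs 2 lines of context but declares 4 just sees
two lines per slice boundary slip into the next call; nothing is ever
computed twice.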

> let's consider a few other filters.
> wavelets, while they are local, differ between even and odd pixels;
> also, practical implementations perform several passes over the image,
> and each pass will generally be done as several lifting steps, that is,
> the data will be written and read a few times.
> i can imagine how to do both libmpcodec slice input and output,
> but random order fails. also, output together with extra context seems
> tricky; i mean, we should reuse the lines as soon as possible to
> improve caching, but here our filter needs to know how much context
> the next filter needs

hmm, sounds difficult in any system.. :)
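
for anyone following along, a one-dimensional 5/3 lifting step (a
sketch, not snow's or any production wavelet code) shows the structure
you describe: even and odd lines are treated differently, and the
second pass re-reads lines the first pass just wrote, so the transform
can't hand data downstream line by line after a single pass:

/* forward 5/3 lifting on one column of n lines, n even, in place;
 * assumes arithmetic right shift, as lifting code usually does */
static void fwd53(int *x, int n)
{
    /* pass 1 (predict): odd lines become high-pass */
    for (int i = 1; i < n - 1; i += 2)
        x[i] -= (x[i - 1] + x[i + 1]) >> 1;
    x[n - 1] -= x[n - 2];                 /* mirrored edge */
    /* pass 2 (update): even lines become low-pass, re-reading the
     * odd lines that pass 1 just rewrote */
    x[0] += (x[1] + 1) >> 1;              /* mirrored edge */
    for (int i = 2; i < n; i += 2)
        x[i] += (x[i - 1] + x[i + 1] + 2) >> 2;
}

and a real codec then repeats this over multiple decomposition levels,
which is the "several passes over the image" part.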

> box blur and similar filters are trivial to implement with
> libmpcodec slices, but they need slices to be in a sensible order

they should work with any order as long as you have sufficient
context, no? i don't see why they depend on order at all..
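
as a sketch (an assumed function, not an existing ffmpeg or mplayer
filter), a naive vertical box blur over one slice only touches input
lines ystart-r .. ystart+h-1+r, so slices really can arrive in any
order:

static void boxblur_v_slice(unsigned char *dst, const unsigned char *src,
                            int stride, int width, int height,
                            int ystart, int h, int r)
{
    for (int y = ystart; y < ystart + h; y++)
        for (int x = 0; x < width; x++) {
            int sum = 0;
            for (int d = -r; d <= r; d++) {
                int yy = y + d;
                if (yy < 0)       yy = 0;           /* clamp at edges */
                if (yy >= height) yy = height - 1;
                sum += src[yy * stride + x];
            }
            dst[y * stride + x] = sum / (2 * r + 1);
        }
}

the usual running-sum optimization does want lines top to bottom, but
that order only has to hold within a slice; across slices it's still
just a question of having r lines of context above and below.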

rich



