[FFmpeg-devel] libavfilter: extending overlay filter

Stefano Sabatini stefano.sabatini-lala
Mon Mar 14 22:24:36 CET 2011


On date Monday 2011-03-14 13:31:20 +0000, Mark Himsley encoded:
> On 14/03/11 11:55, Stefano Sabatini wrote:
> >On date Sunday 2011-03-13 20:54:09 +0000, Mark Himsley encoded:
> >>On 13/03/11 19:28, Mark Himsley wrote:
> >>>On 13/03/11 18:38, Stefano Sabatini wrote:
> >>>>On date Sunday 2011-03-13 15:18:04 +0000, Mark Himsley encoded:
> >>>>>On 13/03/11 14:26, Stefano Sabatini wrote:
> >>>>>>On date Sunday 2011-03-13 14:18:42 +0000, Mark Himsley encoded:
> >>>>[...]
> [...]
> >>+enum { RED = 0, GREEN, BLUE, ALPHA };
> >>+
> >>  typedef struct {
> >>      int x, y;                   ///<  position of overlayed picture
> >>
> >>+    int allow_packed_rgba;
> >>+    uint8_t inout_is_packed_rgba;
> >>+    uint8_t inout_is_alpha;
> >>+    uint8_t inout_rgba_map[4];
> >>+    uint8_t blend_is_packed_rgba;
> >>+    uint8_t blend_rgba_map[4];
> >
> >just a minor remark, I wonder if "overlay" is more clear than "blend".
> 
> I agree. But in other parts of the current code the word blend is
> used, such as blend_pix_fmts[].
> 
> Using either "overlay" everywhere or "blend" everywhere would be a
> good change. Using both is confusing.
> 
> My preference would be to use "main" and "overlay" throughout.

+1

[...]  
> >>+    if (over->inout_is_packed_rgba != over->blend_is_packed_rgba) {
> >>+        av_log(ctx, AV_LOG_ERROR,
> >>+               "Main and overlay are not similar formats, cannot mix YUV and RGB.\nInsert a format filter to change a stream's format.\n");
> >>+        return AVERROR(EINVAL);
> >>+    }
> >
> >I believe this should be avoided/removed.
> 
> I agree, but within the current framework - what should happen when
> an experienced user who has set the extra command line parameter to
> enable RGB processing gets it wrong and feeds the overlay filter
> with a mixture of YUV and RGB?

If RGB is set: main and overlay inputs are both converted to RGB.
If YUV is set: main and overlay inputs are both converted to YUV.

This is somewhat inefficient, but it should still allow you to get the
job done.

E.g.
movie=logo.png, format=yuv420p [over]; [in][over] overlay:0:0:1 [out]

The logo is converted to yuv420p by the format filter, then converted
back to RGB for the overlay filter (yes, this is slightly
counter-intuitive). The trick is to avoid forcing the conversion and let
the filterchain format negotiation manage it. So the graph becomes:

movie=logo.png [over]; [in][over] overlay:0:0:1 [out]

and no unnecessary conversion is performed.
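
For illustration, a rough sketch of a query_formats() that offers both
packed RGBA and planar YUV formats on every link, so the negotiation is
free to pick a common one; this is not the patch itself, and the helper
and pixel format names are those of the current libavfilter tree, which
may differ from the API the patch is written against:

static int query_formats(AVFilterContext *ctx)
{
    /* offer both packed RGBA and planar YUV on all links; the filterchain
     * negotiation then picks a common format, so no conversion is inserted
     * when main and overlay already share one */
    static const enum AVPixelFormat pix_fmts[] = {
        AV_PIX_FMT_ARGB, AV_PIX_FMT_RGBA, AV_PIX_FMT_ABGR, AV_PIX_FMT_BGRA,
        AV_PIX_FMT_YUVA420P, AV_PIX_FMT_YUV420P,
        AV_PIX_FMT_NONE
    };
    AVFilterFormats *fmts = ff_make_format_list(pix_fmts);
    if (!fmts)
        return AVERROR(ENOMEM);
    return ff_set_common_formats(ctx, fmts);
}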
 
> >>+                // m is the blend multiplication of overlay over the main main
> >
> >can be removed
> 
> True.
> 
> 
> >>+                if (over->inout_is_alpha)
> >>+                {
> >>+                    switch (blend)
> >>+                    {
> >
> >Nit: the favored style is
> >if (...) {
> >     switch (...) {
> 
> Understood - K&R it is.
> 
> 
> >>+                        case 0:
> >>+                        case 0xff:
> >>+                            break;
> >>+                        default:
> >>+                            // the un-premultiplied calculation is:
> >>+                            // (255 * 255 * overlay_alpha) / ( 255 * (overlay_alpha + main_alpha) - (overlay_alpha * main_alpha) )
> >>+                            blend =
> >>+                            // the next line is a faster version of:  0xff * 0xff * blend
> >>+                                ( (blend<<  16) - (blend<<  9) + blend )
> >>+                                / (
> >>+                            // the next line is a faster version of:  0xff * (blend + d[over->inout_rgba_map[ALPHA]])
> >>+                                    ((blend + d[over->inout_rgba_map[ALPHA]])<<  8 ) - (blend + d[over->inout_rgba_map[ALPHA]])
> >>+                                    - d[over->inout_rgba_map[ALPHA]] * blend
> >>+                                );
> >
> >I can't understand this.
> >
> >When overlaying over the main input, the main alpha should remain
> >unchanged; only the overlay alpha channel is used for blending over
> >the main input.
> 
> Not when the background has alpha. When the background has alpha the
> proportion of the overlay video to the background video is
> proportional to the overlay alpha and inversely proportional to the
> background alpha.
> 
> This is how, for instance, Adobe After Effects does it when you are
> blending an overlay over an alpha. This code returns pixel values to
> within 1/255th of the values After Effects returns.

Uhm OK, I just wonder if this shouldn't be parametrized.
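
For reference, here is a minimal standalone sketch of that
un-premultiplied blend for a single 8-bit RGBA pixel; it is untested and
the variable names are illustrative, not taken from the patch:

#include <stdint.h>

/* "over" compositing of an un-premultiplied RGBA overlay pixel onto an
 * un-premultiplied RGBA main pixel, both 8-bit */
static void blend_pixel(uint8_t main_px[4], const uint8_t over_px[4])
{
    int as = over_px[3];                  /* overlay alpha */
    int ad = main_px[3];                  /* main alpha    */
    /* resulting alpha of the composite: as + ad * (255 - as) / 255 */
    int ao = as + (ad * (255 - as) + 127) / 255;
    int w  = 0;                           /* weight of the overlay color, 0..255 */

    if (ao)
        /* 255*255*as / (255*(as + ad) - as*ad), i.e. roughly 255*as/ao;
         * in the patch 255*x is written as (x<<8) - x and 255*255*x as
         * (x<<16) - (x<<9) + x */
        w = 255 * 255 * as / (255 * (as + ad) - as * ad);

    for (int i = 0; i < 3; i++)
        main_px[i] = (over_px[i] * w + main_px[i] * (255 - w) + 127) / 255;
    main_px[3] = ao;
}

Note that when the main alpha is 255 the weight reduces to the plain
overlay alpha, so the extra maths only changes the result when the main
input actually carries transparency.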

[...]
-- 
FFmpeg = Formidable and Frightening Majestic Peaceless Erotic Guru


