[FFmpeg-user] Remove everything but a single color (range)

Hans Carlson forbyta at gmx.com
Wed Mar 18 03:20:24 EET 2020


On Tue, 17 Mar 2020, Ted Park wrote:

> Sorry I wasn’t clear, that is not what I meant at all and in fact I am 
> surprised that produced results that you describe as “sort of works.” 
> I’m kind of curious, is the material something you could share?

I came up with an example I think might illustrate the video and goal 
better.  See below.

> Try this…
>
> First, choose rgb or yuv, then stick to one. (The video is yuv so I 
> would choose that.) Then, add an alpha channel to serve as the mask. It 
> might be added/converted automatically before the keying filters, but 
> then it might switch from yuv to argb and back, which you probably would 
> not like. If yuv, yuva444p would be sufficient in this case. If you 
> really don’t want to work in yuv, I think any of the plain rgb formats with 
> alpha would work (rgba, argb, etc).
>
> ffmpeg -i $INPUT -filter_complex "format=yuva444p"
>
> Chroma key out the box (or color key, if you went that route)
> ffmpeg -i $INPUT -filter_complex "format=yuva444p,chromakey=0xC8438A:yuv=1"

I'm assuming 0xC8438A is the YCbCr equivalent of the RGB value 0xe6e65c 
that I was using?  My assumption is based on this converter 
(http://www.picturetopeople.org/color_converter.html), which says RGB 
0xe6e65c (230,230,92) is YCbCr (200,67,138) 0xc8438a.
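
(If I've got the BT.601 studio-range formulas right, a quick check seems 
to agree: Y = 16 + (65.738*230 + 129.057*230 + 25.064*92)/256 ≈ 200, and 
the corresponding Cb/Cr formulas give ≈ 67 and ≈ 138, i.e. 0xC8438A.)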

> ffmpeg -i $INPUT -filter_complex "format=yuva444p,chromakey=0xC8438A:yuv=1,negate=negate_alpha=1,negate=negate_alpha=0"
>
> That should give you the yellow box sort of rotoscoped out. You might 
> still have to refine the mask, which you can do by isolating the alpha 
> plane as a separate movie then working on that.

I had to use "chromakey=color=0xC8438A:similarity=0.1", and then this does 
work... but again ONLY if ffplay is used.  If ffmpeg is used, the output 
looks exactly like $INPUT.

I'm not sure what you mean by "refine the mask" and "isolating the alpha 
plane as a separate movie".
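
(If "isolating the alpha plane" means saving/viewing the alpha by itself 
as a grayscale video, I'm guessing that's something like tacking 
alphaextract onto the chain -- untested on my end:

   ffplay -i $INPUT -vf "format=yuva444p,chromakey=color=0xC8438A:similarity=0.1:yuv=1,alphaextract"

where the keyed-out yellow box should show up black on an otherwise white 
frame.)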

>> What I want is ONLY the yellow bar and everything else black, as in the 
>> ffplay command.  How can I get the same thing with ffmpeg?
>
> A rudimentary explanation, but I think it's like the renderer used by 
> ffplay draws on a black screen, and libx264 uses a white canvas.

Is there a way to tell libx264 to use a black canvas?  Or can I use a 
different codec and/or format?  I don't really care what the output 
codec/format is, as you will see from the end goal explained below.
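
If the real issue is that libx264 simply can't store an alpha channel 
(which is my understanding), then maybe writing PNG frames, which I 
believe do keep alpha, would sidestep the canvas question entirely.  
Untested, and "keyed-%04d.png" is just a name I made up:

   ffmpeg -i $INPUT -vf "format=yuva444p,chromakey=color=0xC8438A:similarity=0.1:yuv=1,negate=negate_alpha=1,negate=negate_alpha=0" keyed-%04d.png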

So... maybe I should back up a bit and provide A) a TEST VIDEO and B) a 
better explanation of my END GOAL.

The TEST VIDEO can be generated using the "life" filter and some other 
bits.  It will basically be "life" with a 6-section grid overlay where a 
yellow box moves from section to section.  I see the exact same behavior 
using this test video as I do with the real video.

First create a filter script file.  This isn't really necessary, but I 
found it easier to adjust where and when the yellow square appears by 
simply editing this file, rather than 1 really long command line.

Save this as life-boxes.filter.  By all means change or add new drawbox 
entries; just be sure the last one does NOT end with ','.  The drawgrid 
and drawtext filters are not necessary; they just help show where and when 
the yellow box (drawbox) shows up.

--------------- CUT -----------------
drawgrid=w=200:h=200:color=black,

drawtext=text='1':fontsize=40:fontcolor=white:x=10:y=10,
drawtext=text='2':fontsize=40:fontcolor=white:x=210:y=10,
drawtext=text='3':fontsize=40:fontcolor=white:x=410:y=10,
drawtext=text='4':fontsize=40:fontcolor=white:x=10:y=210,
drawtext=text='5':fontsize=40:fontcolor=white:x=210:y=210,
drawtext=text='6':fontsize=40:fontcolor=white:x=410:y=210,

drawtext=text='%{pts}':fontsize=30:fontcolor=white:x=10:y=main_h-max_glyph_h-10,

drawbox=enable='between(t, 0, 1.96)':w=50:h=50:x=75:y=75:color=0xffff00,
drawbox=enable='between(t, 2, 2.96)':w=50:h=50:x=275:y=75:color=0xffff00,
drawbox=enable='between(t, 3, 4.96)':w=50:h=50:x=275:y=275:color=0xffff00,
drawbox=enable='between(t, 5, 7.96)':w=50:h=50:x=475:y=75:color=0xffff00,
drawbox=enable='between(t, 8, 8.96)':w=50:h=50:x=75:y=275:color=0xffff00,
drawbox=enable='between(t, 9,10.96)':w=50:h=50:x=475:y=275:color=0xffff00,
drawbox=enable='between(t,11,11.96)':w=50:h=50:x=275:y=275:color=0xffff00,
drawbox=enable='between(t,12,14.96)':w=50:h=50:x=275:y=75:color=0xffff00,
drawbox=enable='between(t,15,16.96)':w=50:h=50:x=475:y=275:color=0xffff00,
drawbox=enable='between(t,17,19.96)':w=50:h=50:x=75:y=75:color=0xffff00
--------------- CUT -----------------

Then run the following to generate the "life-boxes.mkv" test video:

   ffmpeg -f lavfi -i "life=size=600x400:life_color=0x00ff00:death_color=0xff0000:mold_color=0x0000ff:mold=10,format=yuv420p" -filter_complex_script life-boxes.filter -t 20 life-boxes.mkv

A couple notes about the test video:

   1) the yellow box will only be in one section at a time.
   2) which section and when the yellow box appears is unknown.

Obviously the where and when ARE known in the test video, but neither is 
known in the real video.

The END GOAL is to figure out which section and most importantly WHEN the 
yellow box appears.  So, given the above test video I want to end up with 
something like this:

The first number is the section number (numbered left to right, top to 
bottom) and the second number is the absolute time (in seconds) at which 
it appeared.

   1, 0  # section 1, timecode 0 seconds
   2, 2  # section 2, timecode 2 seconds
   5, 3  # section 5, timecode 3 seconds
   3, 5  # section 3, timecode 5 seconds
   4, 8  # section 4, timecode 8 seconds
   6, 9  # section 6, timecode 9 seconds
   5,11  # section 5, timecode 11 seconds
   2,12  # section 2, timecode 12 seconds
   6,15  # section 6, timecode 15 seconds
   1,17  # section 1, timecode 17 seconds

I don't expect ffmpeg to produce that output (or anything close to that) 
directly.  I fully expect to use ffprobe or verbose/debug output or some 
other method to actually get the info.
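
One mechanism I have my eye on (untested) is that some filters export 
per-frame metadata which ffprobe can print, e.g. signalstats' per-frame 
luma stats:

   ffprobe -v error -f lavfi -i "movie=life-boxes.mkv,signalstats" -show_entries frame=pkt_pts_time:frame_tags=lavfi.signalstats.YMAX -of csv=p=0

On the raw video that's not very useful, but run on an "only the yellow 
box" version (see below) the per-frame max luma should basically become a 
present/absent flag.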

Exactly HOW I go about this I haven't fully figured out, but it seemed to 
me the first step was to somehow isolate the yellow box.  I figured if I 
could get the yellow box on a completely black (or transparent) 
background, then there should be some way I could distinguish between a 
frame that's entirely black/transparent (one color) vs a frame that 
contains some yellow (i.e., the box).  Maybe I'd use blackdetect in some 
way, or maybe I'd generate png files for each frame and then manipulate 
them with imagemagick or some other image-processing tool.  Again, I 
haven't fully figured it all out.
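
Concretely, building on the chromakey + negate chain from above, I think 
adding alphaextract would give exactly that kind of video: entirely black 
except for a white box wherever the yellow box is.  Untested, and 
"life-boxes-mask.mkv" is just a name I picked:

   ffmpeg -i life-boxes.mkv -vf "format=yuva444p,chromakey=color=0xd21092:similarity=0.1:yuv=1,negate=negate_alpha=1,negate=negate_alpha=0,alphaextract,format=yuv420p" life-boxes-mask.mkv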

In terms of figuring out WHERE the box appears, if there's no 
filter/method that could give me the location of the box, I think I could 
crop the video into 6 separate files (1 for each section), then process 
each file separately to determine WHEN the yellow box appears in that 
section.  But perhaps there's a better way to do that.
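
For what it's worth, this is the sort of thing I had in mind for the 
per-section approach.  Completely untested; the crop offsets assume the 
600x400 video and 200x200 grid from the filter script.  The idea is that 
blackdetect reports the intervals where a section's mask is entirely 
black, so the gaps between those intervals are when the box is in that 
section:

--------------- CUT -----------------
#!/bin/sh
# Untested: crop out each 200x200 section, key the yellow box to an alpha
# mask, turn the mask into a black/white video, and let blackdetect report
# the intervals where that section is entirely black (box NOT there).
# pic_th is raised above the default 0.98 since the drawbox outline only
# covers a small fraction of the section; it may need tuning.
i=1
for xy in 0:0 200:0 400:0 0:200 200:200 400:200; do
   x=${xy%:*}; y=${xy#*:}
   echo "=== section $i (x=$x y=$y) ==="
   ffmpeg -i life-boxes.mkv -vf "crop=200:200:$x:$y,format=yuva444p,chromakey=color=0xd21092:similarity=0.1:yuv=1,negate=negate_alpha=1,negate=negate_alpha=0,alphaextract,blackdetect=d=0.5:pic_th=0.99" -f null - 2>&1 | grep black_start
   i=$((i+1))
done
--------------- CUT -----------------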

Soo.... trying some of the previous commands using the test video 
generated above (life-boxes.mkv), I get these results...

NOTE: The RGB 0xffff00 value is the color of the yellow box found in 
life-boxes.filter.  The YCbCr 0xd21092 value is the RGB -> YCbCr 
conversion of 0xffff00 (according to 
http://www.picturetopeople.org/color_converter.html).

My original trial-and-error (colorhold,chromakey) method.  Using ffplay, 
this works, i.e. all I see is the yellow box moving around on a black 
background:

   ffplay -i life-boxes.mkv -vf "colorhold=color=0xffff00:similarity=0.1,chromakey=color=black:similarity=.2"

If I change that to ffmpeg, then the yellow box is in color, but I still 
see the grid lines and "life" is now grayscale:

   ffmpeg -i life-boxes.mkv -vf "colorhold=color=0xffff00:similarity=0.1,chromakey=color=black:similarity=.2" life-boxes-colorhold.mkv

With both of the following commands, I don't see any difference in the 
output; it looks exactly like life-boxes.mkv... at least visually.

   ffmpeg -i life-boxes.mkv -vf "format=yuva444p,chromakey=color=0xd21092:similarity=0.1:yuv=1" life-boxes-chromakey.mkv

   ffmpeg -i life-boxes.mkv -vf "format=yuva444p,chromakey=color=0xd21092:similarity=0.1:yuv=1,negate=negate_alpha=1,negate=negate_alpha=0" life-boxes-chromakey-negate.mkv
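
(One thing I suppose I should check is whether the alpha even survives the 
encode.  Something like this should show the output stream's pixel format 
-- untested:

   ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,pix_fmt -of default=nw=1 life-boxes-chromakey-negate.mkv

If it reports a pix_fmt without alpha, e.g. yuv444p instead of yuva444p, 
then I'd guess the alpha is simply being dropped at encode time, which 
would explain why the output looks just like the input.)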

BUT... if ffplay is used...

This one shows "life" in color (just like life-boxes.mkv), and the yellow 
box is sort of a mixture of black and yellow.  Increasing the 
similarity to 0.4 makes the yellow box completely black, but then it's too 
high once the negate filters are added:

   ffplay -i life-boxes.mkv -vf "format=yuva444p,chromakey=color=0xd21092:similarity=0.1:yuv=1"

And this one looks very similar to my "colorhold,chromakey" command using 
ffplay, i.e. only the yellow box is displayed and everything else is 
black.  But this only works if ffplay is used:

   ffplay -i life-boxes.mkv -vf "format=yuva444p,chromakey=color=0xd21092:similarity=0.1:yuv=1,negate=negate_alpha=1,negate=negate_alpha=0"

And using the RGB color works as well... as long as ffplay is used:

   ffplay -i life-boxes.mkv -vf "format=yuva444p,chromakey=color=0xffff00:similarity=0.1,negate=negate_alpha=1,negate=negate_alpha=0"

Soo... given the above END GOAL, is there a better way to separate the 
yellow box from the rest of the video and determine at least WHEN it 
appears?

