[FFmpeg-user] concat demuxer filter_complex (fade)

mail-login+ffmpeg at protonmail.com mail-login+ffmpeg at protonmail.com
Fri Apr 17 02:04:46 EEST 2020


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, April 16, 2020 3:16 PM, Ted Park <kumowoon1025 at gmail.com> wrote:

> Hi,
>
> > Actually, I don't see why it should be complicated. Maybe we're writing about different things. You're still writing about the concat demuxer, but I already came to the conclusion that it's easier to use the concat filter if you want to insert transitions, since otherwise you'd have to preprocess the images into videos and add the transitions at that point. With the concat filter, you can use one filterchain to add the transitions and concatenate the images in a single step (no additional video files in the preprocessing stage).
>
> Oh yeah, you're right, I assumed the original approach would be the one in the OP. But I meant it's complicated because ffmpeg didn't really have a transition filter for video until recently, and you're left managing everything yourself: when to start reading/queuing frames for clip 2, when you can stop looping the first part, etc. In short, one filter vs. one filterchain.


Oh no, since transitions do not seem to be possible with the concat demuxer (without generating one video file for each image), I switched to the concat filter. And now I understand what you mean about it being much more complicated: you have to set each image's duration by looping it and then trimming it (at least that's the only way I know of; if anyone knows a better way, I'd appreciate it ;). And with the concat filter, in contrast to the concat demuxer, you of course have to take care of many more parameters of the input files, which all have to match (resolution, pixel format, frame rate, ...).
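To make that concrete, here is a rough sketch of the general idea for just two images with a one-second fade between them. This is not my exact script: the file names, the 1280x720 size and the 5-second durations are only placeholders, I left out the padding/rotation steps, and I use the image2 demuxer's -loop/-t options here to set the duration instead of the loop/trim filters, which would work as well:

  ffmpeg -loop 1 -t 5 -i img1.jpg -loop 1 -t 5 -i img2.jpg -filter_complex \
  "[0:v]scale=1280:720,setsar=1,format=yuv420p,fade=t=out:st=4:d=1[v0]; \
   [1:v]scale=1280:720,setsar=1,format=yuv420p,fade=t=in:st=0:d=1[v1]; \
   [v0][v1]concat=n=2:v=1:a=0[v]" \
  -map "[v]" slideshow.mp4

The scale/setsar/format steps are exactly the part that has to be identical for every input, otherwise the concat filter refuses to join the streams.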

And yes, you have to do many things on your own, and the ffmpeg commands get really long with that many input files. But since the single parts of the graph are very much the same (scaling, padding, rotating, ...), differing only in their input and output streams, this is something that can easily be managed by a script: e.g. read the length of the video, subtract a few seconds/frames and use that as the offset for the fade(-out) filter, as in the sketch below.
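For the video inputs, that part of the script boils down to something like this (a simplified sketch with placeholder file names and a fixed 1-second fade; in the real script the fade of course ends up inside the big filter_complex rather than a separate -vf call):

  # clip duration in milliseconds, as reported by mediainfo
  dur_ms=$(mediainfo --Inform="Video;%Duration%" clip.mp4)
  fade_d=1   # fade length in seconds
  # start the fade-out fade_d seconds before the end of the clip
  fade_st=$(echo "scale=3; ${dur_ms}/1000 - ${fade_d}" | bc)
  ffmpeg -i clip.mp4 -vf "fade=t=out:st=${fade_st}:d=${fade_d}" faded.mp4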

I now have such a script (written in bash, requiring ffmpeg, mediainfo and convert from imagemagick). Anyhow, I'd still appreciate it if there were a more direct way to do something like this (maybe someone who reads this will think about it ;).
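For the record, I assume the transition filter Ted mentioned is the new xfade filter. With a build recent enough to include it, a crossfade between two clips could look roughly like the sketch below (placeholder names and timings; both inputs have to share resolution, pixel format and frame rate, and offset is the point on the first clip's timeline where the transition starts):

  ffmpeg -i clip1.mp4 -i clip2.mp4 -filter_complex \
  "[0:v][1:v]xfade=transition=fade:duration=1:offset=4[v]" \
  -map "[v]" out.mp4

That would remove a lot of the manual bookkeeping described above.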

With kind regards

(If someone is interested, my bash script (requires bash, ffmpeg, mediainfo and imagemagick; exiftool is required for automatic rotation, but this will be merged into the imagemagick part soon) is located at https://gitlab.com/AtticusSullivan/publbashscripts/-/blob/master/img+vid2slideshow.bash. Any feedback is welcome ;)

