[FFmpeg-user] Merge two videos into one with side-by-side composition

Lou lou at lrcd.com
Sat Jun 15 03:39:20 CEST 2013

On Fri, 14 Jun 2013 15:48:49 -0700
John Crossman <johncrossman at berkeley.edu> wrote:

> Thanks for the quick reply. Here is the output.
> ffmpeg -i camera.mpg -i screen.mpg -filter_complex
> "[0:v:0]pad=iw*2:ih[bg]; [bg][1:v:0]overlay=w" output.mpg
> Input #0, mpeg, from 'camera.mpg':
>   Duration: 00:00:08.61, start: 0.500000, bitrate: 513 kb/s
>     Stream #0:0[0x1e0]: Video: mpeg1video, yuv420p, 768x480 [SAR 1:1 DAR
> 8:5], 104857 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 29.97 tbc
> [mpeg @ 0x7f896a813600] max_analyze_duration 5000000 reached at 5000000
> microseconds
> Input #1, mpeg, from 'screen.mpg':
>   Duration: 00:00:08.83, start: 0.500000, bitrate: 530 kb/s
>     Stream #1:0[0x1e0]: Video: mpeg1video, yuv420p, 1024x768 [SAR 1:1 DAR
> 4:3], 104857 kb/s, 30 fps, 30 tbr, 90k tbn, 30 tbc
> [Parsed_overlay_1 @ 0x7f896a413a60] Overlay area with coordinates x1:1024
> y1:0 x2:2048 y2:768 is not completely contained within the output with size
> 1536x480

My original example assumed that your inputs were the same frame size.
Your inputs differ in size, and your padding was based on the smaller
input, so the larger input did not fit within the padded output.

You can pad from the larger video and center the smaller video in the
padded area (you should check the arithmetic since it's Friday and I
just got off of work):

ffmpeg -i camera.mpg -i screen.mpg -filter_complex \
"[1:v]pad=iw*2:ih[bg];[bg][0:v]overlay=W/2+((W/2-w)/2):(H-h)/2" \
-qscale:v 2 output.mpg
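Since the arithmetic was written at the end of a Friday, here is a quick sanity check of the overlay coordinates, plugging in the stream sizes from the log above (screen.mpg 1024x768 as the padded background, camera.mpg 768x480 overlaid on the right half):

```python
# Background: screen.mpg padded to iw*2:ih -> 2048x768 (W, H in overlay).
W, H = 1024 * 2, 768
# Overlaid input: camera.mpg (w, h in overlay).
w, h = 768, 480

# overlay=W/2+((W/2-w)/2):(H-h)/2 -- center camera in the right half.
x = W // 2 + (W // 2 - w) // 2
y = (H - h) // 2

print(x, y)  # 1152 144

# Unlike the failing command, this overlay area fits inside the output:
assert x + w <= W and y + h <= H
```

So the overlay lands at (1152, 144) and its far corner (1920, 624) stays inside the 2048x768 frame, which is exactly what the earlier "not completely contained" error was complaining about.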

Or you could add the scale filter to make the inputs a more similar size.
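For example, scaling both inputs to a common height of 480 and placing them side by side might look like this (a sketch, untested; the hstack filter only exists in newer ffmpeg builds, so on older versions you would keep the pad/overlay approach instead):

```shell
# Scale screen.mpg (1024x768) down to 640x480 to match camera.mpg's
# height; -2 picks a width that preserves aspect and stays even.
ffmpeg -i camera.mpg -i screen.mpg -filter_complex \
"[0:v]scale=-2:480[left];[1:v]scale=-2:480[right];[left][right]hstack" \
-qscale:v 2 output.mpg
```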

Again, please remember that top-posting is not recommended on this
mailing list.
