[FFmpeg-user] Audio-Video Synchronization and Fastest Encoding

Stephan Monecke stephanmonecke at gmail.com
Thu Jun 4 18:11:11 EEST 2020


Hi all,


I have a weak computer (an i3 of some sort) connected to an HDMI grabber
(Magewell USB Capture HDMI PLUS, which acts like a webcam), with audio
coming in as line level over the microphone port.

I want to merge those two streams with as little audio-video offset as
possible and as little CPU load as possible, BUT I need multiple other
programs to be able to read the HDMI grabber simultaneously (so I
somehow need to mirror /dev/video0).

For the latter, I currently use an instance of the nginx-rtmp plugin
as a local video relay. I might also have a look at the v4l2loopback
module, but I don't know about its performance or side effects so far
(a rough sketch of that route follows below).
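
In case it helps the discussion, this is roughly what I imagine the
v4l2loopback route would look like. The device number /dev/video10 and
the card label are just placeholders, and I have not measured the cost
of the extra raw-video copy:

# load the loopback module; video_nr and card_label are arbitrary choices
sudo modprobe v4l2loopback video_nr=10 card_label="hdmi-mirror"

# copy the grabber's frames into the loopback device so that several
# programs can open /dev/video10 at the same time
ffmpeg \
    -f v4l2 -i /dev/video0 \
    -f v4l2 -c:v rawvideo -pix_fmt yuv420p \
    /dev/video10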

I now have the following questions:

  1. What is the best way to automatically produce synchronized audio
and video? Is there some resource someone can point me at? I currently
use `-itsoffset` with an empirically determined value for the video and
HOPE it stays constant (roughly along the lines of the sketch below).
Post-production would not be a problem as long as it is automatable
from the command line.
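
For reference, this is the kind of command I have in mind. The ALSA
device name `default` and the 0.3 s offset are just placeholders for my
setup:

# delay the video input by an empirically found offset and mux it with
# the line-in audio in a single process
ffmpeg \
    -itsoffset 0.3 -f v4l2 -i /dev/video0 \
    -f alsa -i default \
    -map 0:v -map 1:a \
    -c:v libx264 -preset ultrafast \
    -c:a aac \
    -f flv rtmp://localhost/live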

  2. How can I send the video to the local nginx with as little CPU
load as possible?

When using the GPU via

ffmpeg \
    -vaapi_device /dev/dri/renderD128 \
    -r 60 \
    -i /dev/video0 \
    -vf 'format=nv12,hwupload' \
    -c:v h264_vaapi \
    -r 25 \
    -profile:v high \
    -threads 1 \
    -an \
    -f flv rtmp://localhost/live

I have around 70-80 % CPU usage on the respective core, but the
audio-video offset seems to vary by about a second between recordings.
When I use

ffmpeg \
    -r 60 \
    -i /dev/video0 \
    -c:v libx264 \
    -preset ultrafast \
    -r 25 \
    -threads 1 \
    -an \
    -f flv rtmp://localhost/live

I get a dangerous 100 % CPU utilization on the respective core, but
have not noticed the offset variation so far. Is there something
lighter? (One idea I have is sketched below.)
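
One thing I have been considering: capture at a lower resolution and
framerate from the grabber before encoding, since the relay is only
local anyway. The 1280x720 / 30 fps values are just guesses at what the
grabber can deliver:

ffmpeg \
    -f v4l2 -framerate 30 -video_size 1280x720 \
    -i /dev/video0 \
    -c:v libx264 -preset ultrafast -tune zerolatency \
    -threads 1 \
    -an \
    -f flv rtmp://localhost/live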

  3. Is there a completely different, saner approach?


Thanks a lot for any help!

Stephan

