[FFmpeg-user] 4K 60Hz Directshow Video Capture

James Girotti james.girotti at gmail.com
Tue Feb 13 02:03:24 EET 2018


>
> ffmpeg -f dshow -video_size 3840x2160 -framerate 60000/1001 -rtbufsize
> 2100000000 -pixel_format bgr24 -i video="MZ0380 PCI, Analog 01 Capture"
> -c:v h264_nvenc -preset lossless -f null -
> Gives me the same error
>

That's surprising; I can get about 200fps encoding to a file on a ramdisk
with "-c:v h264_nvenc -preset lossless". Have you also tried "-c:v
hevc_nvenc -preset lossless"? What encoding FPS are you getting? You
technically shouldn't be able to get much more than 60fps, since that's
what your capture card is supplying. Can you monitor the "Video Engine
Utilization" during encoding? On Linux it's listed in the nvidia-settings
GUI, or "nvidia-smi dmon" on the CLI will show enc/dec%.


> ffmpeg -f dshow -video_size 3840x2160 -framerate 60000/1001 -rtbufsize
> 2100000000 -pixel_format bgr24 -i video="MZ0380 PCI, Analog 01 Capture"
> -c:v rawvideo -f null -
> Gets me nearly x1 performance when executing from a ram disk but
>
> ffmpeg -f dshow -video_size 3840x2160 -framerate 60000/1001 -rtbufsize
> 2100000000 -pixel_format bgr24 -i video="MZ0380 PCI, Analog 01 Capture"
> -c:v rawvideo raw.nut
> Only gets me x0.5 and the buffer overflows.


> Is there a way of accelerating rawvideo decoding? Would using my
> colleagues 1080 make a difference? Thanks.


I think rawvideo is already decoded, so there's no way (or need) to
accelerate it. You might try a different pix_fmt from your capture card
while using hw-encoding, but you'd have to test. I don't know the
internals, i.e. at what point the pixel format is converted during
hw-encoding, so it might make a difference.
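
A sketch of how you could check (untested here; dshow's "-list_options
true" prints the formats your card actually advertises, and yuyv422 is
just a guess at one of them):

# see which pixel formats/sizes the capture device offers
ffmpeg -f dshow -list_options true -i video="MZ0380 PCI, Analog 01 Capture"

# then request one of the listed YUV formats instead of bgr24
ffmpeg -f dshow -video_size 3840x2160 -framerate 60000/1001 -rtbufsize 2100000000 -pixel_format yuyv422 -i video="MZ0380 PCI, Analog 01 Capture" -c:v h264_nvenc -preset lossless -f null -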

Changing pixel formats might be a concern if you are trying to achieve
"100% lossless" capture. I've read that yuv444p should be a sufficient
pixel format to carry bgr24 without losing chroma resolution.
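
If you want to keep full chroma while still using NVENC, something along
these lines might work (a sketch: Pascal-era NVENC accepts yuv444p input,
but whether the bgr24-to-yuv444p conversion is bit-exact depends on
swscale, so verify before trusting it as lossless):

# convert to yuv444p before the encoder instead of feeding bgr24
ffmpeg -f dshow -video_size 3840x2160 -framerate 60000/1001 -rtbufsize 2100000000 -pixel_format bgr24 -i video="MZ0380 PCI, Analog 01 Capture" -pix_fmt yuv444p -c:v hevc_nvenc -preset lossless out.mkv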

There isn't a lot of info out there on encoding-speed differences between
GPU models. It's a complex subject, but from what I have observed the
NVENC ASIC clock is tied to the GPU clock (GPU clock speed increases as
ASIC load increases). If that's true, then a GTX 1080, with its higher max
clock, could encode faster, but I have no data to back that up.
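
If you want to check that correlation yourself, nvidia-smi can print
clocks and encoder load side by side (field names assume a reasonably
recent driver; see "nvidia-smi --help-query-gpu" for what yours supports):

# log the SM clock and NVENC utilization once per second during an encode
nvidia-smi --query-gpu=clocks.sm,utilization.encoder --format=csv -l 1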

