[FFmpeg-devel] have some major changes for nvenc support

Andrey Turkin andrey.turkin at gmail.com
Fri Jan 8 22:48:07 CET 2016

2016-01-08 20:25 GMT+03:00 Michael Niedermayer <michael at niedermayer.cc>:

> Also slightly orthogonal, but if you have 4 filters each written for a
> different hwaccel, you can write a generic filter that passes its
> stuff to the appropriate one.
> If there's not much shareable code between hw-specific filters then this
> might be cleaner than throwing all hw-specific code in one filter
> directly.
> Such separate filters could use the existing conditional compilation
> code, could be used directly if the user wants to force a specific
> hw type, ...
So we'd have a default filter and then specific ones just in case anyone
needs them. That is a great idea.
I'm not sure which way is cleaner - to have a proxy filter, or to lump
everything together in one filter and add extra filter definitions to force
a specific hwaccel. It would work either way.

What I'm looking for in the end, though, is a more "intelligent" ffmpeg
with respect to hardware acceleration. I'd much rather have it
automagically use hwaccel where possible, in the optimal way, without
having to describe every detail.
For example, this thread started because NVidia had a specific scenario and
they had to offload scaling to CUDA - and they had to concoct something
arguably messy to make it happen. There's a very specialized filter with
many outputs, you have to use -filter_complex, you have to map filter
outputs to output files, etc. It would be awesome if they could just do
ffmpeg -i input -s 1920x1080 -c:v nvenc_h264 hd1080.mp4 -s 1280x720 -c:v
nvenc_h264 hd720.mp4 ..., and it'd do the scaling on the GPU without explicit
configuration.
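For comparison, the explicit setup looks roughly like this (the filenames
and the split/scale graph are illustrative, not NVidia's exact setup):

```shell
# Today: an explicit filter graph, with every output mapped by hand
ffmpeg -i input.mp4 \
    -filter_complex "[0:v]split=2[a][b];[a]scale=1920:1080[hd];[b]scale=1280:720[sd]" \
    -map "[hd]" -c:v nvenc_h264 hd1080.mp4 \
    -map "[sd]" -c:v nvenc_h264 hd720.mp4
```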

In my opinion (which might be totally wrong), it would take 4 changes to
make that happen:
- make ffmpeg use avfilter for scaling - i.e. connect a scale filter to the
filterchain output (unless it already does)
- add another pixel format for CUDA, add support for it to scale, and add it
as an input format for nvenc_h264
- adjust the pixel format negotiation logic in avfilter to make sure it
selects the CUDA pixel format for scale (maybe just preferring hwaccel
formats over software ones would work?)
- add an interop filter to perform CPU-to-GPU and GPU-to-CPU data transfers
- i.e. convert between hwaccel and the corresponding software formats;
avfilter would insert it in the appropriate places when negotiating pixel
formats (I think it does something similar with scale). This might be
tricky - e.g. in my example a single interop filter needs to be added and
its output has to be shared between all the scale filters. If, say, there
were 2 GPUs used in encoding then there would have to be 2 interop filters.
On the plus side, all existing host->device copying code in encoders could
be thrown out (or rather moved into that filter), as well as the existing
device->host copying code in ffmpeg_*.c. Also, it would make writing new
hwaccel-enabled filters easier.

Actually, there is one more thing to do - filters would somehow have to
share hwaccel contexts with their neighbouring filters, as well as with the
filterchain inputs and outputs.
