[FFmpeg-user] CPU and GPU

MediaMouth communque at gmail.com
Mon Mar 2 02:45:19 EET 2020

> On Mar 1, 2020, at 2:28 PM, Dennis Mungai <dmngaie at gmail.com> wrote:
> On Sun, 1 Mar 2020, 19:55 Carl Eugen Hoyos, <ceffmpeg at gmail.com <mailto:ceffmpeg at gmail.com>> wrote:
>> Am So., 1. März 2020 um 14:11 Uhr schrieb Ted Park <kumowoon1025 at gmail.com
>>> :
>>>> FFmpeg's videotoolbox implementation is missing ProRes support.
>>> Oh, I never noticed that. It shouldn’t be too difficult right?
>> Hopefully not.
>>> Though I’m guessing the decoder would be of limited use for the command
>> line tools.
>> Why?
>> Carl Eugen
> From the context of his email, this mostly refers to the perceived
> performance gain H/W-accelerated decode brings to an encoding
> workload. It's not likely to be faster than a software-based decoder, but
> it will nonetheless do the same task without using any CPU cycles.
> Secondly, consider that (almost) zero filters in FFmpeg can take advantage
> of videotoolbox (at the moment), so even with the command line tools,
> chaining together workflows with complex filter chains that can take
> advantage of H/W acceleration on Mac OSX would be limited to the OpenCL
> (and Vulkan, now that we have a Vulkan HW Context in FFmpeg) based filters
> only. Inserting hwupload/hwdownload filters into these chains further slows
> things down, often offsetting any performance benefit such filters bring
> to the workflow.
> This is likely to change in the future if filters that can tap into
> videotoolbox come into play.

Part of the original reason for posting the CPU vs. GPU question in the first place was curiosity about how FFmpeg was handling various workloads.  I already use it in automated pipelines as a welcome alternative to Adobe Media Encoder.

I only recently started paying attention to how it was making use of CPU cores and GPUs.

What struck me watching the ProRes transcodes was how the process used all 8 cores and (presumably) hyper-threading (represented as 16 logical cores in use).

Is FFmpeg making the determination of how to distribute work across the cores, or is the operating system (in this case macOS) making those decisions?  Does FFmpeg have any control over these things, and if so, is there any reason an FFmpeg user might want manual control -- say, to either maximize processing speed ("use 'em all") or manage computing resources ("use only cores 1, 2, and 3, but leave 4, 5, and 6 free for something else")?
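(My rough understanding so far, which I'd welcome corrections on: FFmpeg only sizes its own thread pools via -threads and per-filter thread options, while actual core placement is left to the OS scheduler. An untested sketch, with placeholder file names:)

```shell
# Sketch only: cap FFmpeg's own thread pools.
# -threads before -i limits decoder threads; after -i it limits the encoder.
ffmpeg -threads 4 -i input.mov -c:v prores_ks -threads 4 output.mov

# Pinning to specific cores is an OS-level affair. On Linux, for example:
#   taskset -c 0,1,2 ffmpeg -i input.mov -c:v prores_ks output.mov
# macOS exposes no user-facing equivalent; its scheduler decides placement.
```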

Was also curious: Does anyone know if Media Encoder is using FFmpeg under the hood?
