[FFmpeg-devel] [PATCH 2/2] lavf: Add coreimage filter for GPU based image filtering on OSX.

Thilo Borgmann thilo.borgmann at mail.de
Mon Mar 14 15:38:52 CET 2016

Am 14.03.16 um 13:56 schrieb Clément Bœsch:
> On Mon, Mar 14, 2016 at 01:46:52PM +0100, Thilo Borgmann wrote:
>> Am 14.03.16 um 11:22 schrieb Clément Bœsch:
>>> [...]
>>>> +./ffmpeg -f lavfi -i nullsrc=s=100x100,coreimage=filter=CIQRCodeGenerator@@inputMessage=https\\\\\://FFmpeg.org/@@inputCorrectionLevel=H -frames:v 1 QRCode.png
>>> remove ./
>>> also, it's probably better to have 2 filters: one for usage as a source,
>>> and another one for filtering (coreimagesrc vs coreimage).
>> I want to extend it later to allow the user to define an output size avoiding
>> the need to have it defined by the input image size. Then it could be used as an
>> image source, too. However, I'll have to think about many more cases for that to
>> work well. Thus I really want to do it as further work.
>> Also, if we split out the generator category into coreimagesrc, the following
>> would not work anymore without GPU-HOST transfer:
>> coreimage=filter=boxblur at default#qrcode at whatever@somewhere
>> Instead the user would have to use a complex filter:
>> coreimagesrc=filter=qrcode at whatever [qr],
>> [v:0] coreimage=filter=boxblur at default [blur],
>> [blur][qr]overlay=somewhere [output]
>> To do it without a transfer would then require allowing coreimage to do generators
>> like coreimagesrc - which would void the argument for a separate src due to
>> complete feature duplication, don't you think?
> If you can hold the picture into a CVPixelBuffer, you could use
> AV_PIX_FMT_VIDEOTOOLBOX, which makes it possible to create one single
> apple filter per coreimage instance in the filtergraph.
> It's probably a cleaner design. You'll probably need a
> AV_PIX_FMT_VIDEOTOOLBOX to AV_PIX_FMT_RGBA (or whatever) convert filter
> (transparently in vf scale, or maybe simply a coreimage=torgba)

Spec says: "A Core Video pixel buffer is an image buffer that holds pixels in
main memory." A more suitable choice might be CVImageBuffer.

Anyway, the object itself should be retainable across filter instances, also as
a CIImage object. The filter would also have to know about the complete filter
chain to know if the image should be kept as CIImage object (probably in VRAM)
for further processing or if it has to be transferred to the host... and where
in the chain is the current instance? Did it correctly retain an image for this
instance? What if an error occurs in between? Do we know that?

All that is missing is some work to make the preceding nullsrc avoidable. The
alternative would bring object conversion (PIX_FMT & toRGB filters?), much more
complex code and filter graph parsing, together with a separate modularization
of almost the same code. I see this as much more error-prone and far harder to
debug and maintain. Once the user can specify an output size, the existing code
would already serve perfectly as an image source.

Sorry, I don't think it would be the better choice.

>>>> +#define SafeCFRelease(ptr) { \
>>>> +    if (ptr) {               \
>>>> +        CFRelease(ptr);      \
>>>> +        ptr = NULL;          \
>>>> +    }                        \
>>>> +}
>>> please use do while(0) form
>> I don't understand how do while(0) form could be applied here, please explain.
> #define SafeCFRelease(ptr) do { \
>     if (ptr) {                  \
>         CFRelease(ptr);         \
>         ptr = NULL;             \
>     }                           \
> } while (0)

I don't know why this is beneficial but I'll change it locally.

