[FFmpeg-devel] GSoC
Dylan Fernando
dylanf123 at gmail.com
Thu Mar 15 03:11:07 EET 2018
On Thu, Mar 15, 2018 at 12:08 PM, Dylan Fernando <dylanf123 at gmail.com>
wrote:
>
>
> On Sun, Mar 11, 2018 at 10:18 PM, Mark Thompson <sw at jkqxz.net> wrote:
>
>> On 11/03/18 04:36, Dylan Fernando wrote:
>> > On Thu, Mar 8, 2018 at 8:57 AM, Mark Thompson <sw at jkqxz.net> wrote:
>> >
>> >> On 07/03/18 03:56, Dylan Fernando wrote:
>> >>> Thanks, it works now
>> >>>
>> >>> Would trying to implement an OpenCL version of vf_fade be a good idea
>> >> for a
>> >>> qualification task, or would it be a better idea to try a different
>> >> filter?
>> >>
>> >> That sounds like a sensible choice to me, though if you haven't
>> written a
>> >> filter before you might find it helpful to write something simpler
>> first to
>> >> understand how it fits together (for example: vflip, which has trivial
>> >> processing parts but still needs the surrounding boilerplate).
>> >>
>> >> - Mark
>> >>
>> >> (PS: be aware that top-posting is generally frowned upon on this
>> mailing
>> >> list.)
>> >>
>> >>
>> >>> On Wed, Mar 7, 2018 at 1:20 AM, Mark Thompson <sw at jkqxz.net> wrote:
>> >>>
>> >>>> On 06/03/18 12:37, Dylan Fernando wrote:
>> >>>>> Hi,
>> >>>>>
>> >>>>> I am Dylan Fernando. I am a Computer Science student from
>> Australia. I
>> >> am
>> >>>>> new to FFmpeg and I wish to apply for GSoC this year.
>> >>>>> I would like to do the Video filtering with OpenCL project and I
>> have a
>> >>>> few
>> >>>>> questions. Would trying to implement an opencl version of vf_fade
>> be a
>> >>>> good
>> >>>>> idea for the qualification task, or would I be better off using a
>> >>>> different
>> >>>>> filter?
>> >>>>>
>> >>>>> Also, I’m having a bit of trouble with running unsharp_opencl. I
>> tried
>> >>>>> running:
>> >>>>> ffmpeg -hide_banner -nostats -v verbose -init_hw_device
>> opencl=ocl:0.1
>> >>>>> -filter_hw_device ocl -i space.mpg -filter_complex unsharp_opencl
>> >>>> output.mp4
>> >>>>>
>> >>>>> but I got the error:
>> >>>>> [AVHWDeviceContext @ 0x7fdac050c700] 0.1: Apple / Intel(R) Iris(TM)
>> >>>>> Graphics 6100
>> >>>>> [mpeg @ 0x7fdac3132600] max_analyze_duration 5000000 reached at
>> 5005000
>> >>>>> microseconds st:0
>> >>>>> Input #0, mpeg, from 'space.mpg':
>> >>>>> Duration: 00:00:21.99, start: 0.387500, bitrate: 6108 kb/s
>> >>>>> Stream #0:0[0x1e0]: Video: mpeg2video (Main), 1 reference frame,
>> >>>>> yuv420p(tv, bt470bg, bottom first, left), 720x480 [SAR 8:9 DAR 4:3],
>> >> 6000
>> >>>>> kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
>> >>>>> Stream mapping:
>> >>>>> Stream #0:0 (mpeg2video) -> unsharp_opencl
>> >>>>> unsharp_opencl -> Stream #0:0 (mpeg4)
>> >>>>> Press [q] to stop, [?] for help
>> >>>>> [graph 0 input from stream 0:0 @ 0x7fdac0418800] w:720 h:480
>> >>>> pixfmt:yuv420p
>> >>>>> tb:1/90000 fr:30000/1001 sar:8/9 sws_param:flags=2
>> >>>>> [auto_scaler_0 @ 0x7fdac05232c0] w:iw h:ih flags:'bilinear' interl:0
>> >>>>> [Parsed_unsharp_opencl_0 @ 0x7fdac0715a80] auto-inserting filter
>> >>>>> 'auto_scaler_0' between the filter 'graph 0 input from stream 0:0'
>> and
>> >>>> the
>> >>>>> filter 'Parsed_unsharp_opencl_0'
>> >>>>> Impossible to convert between the formats supported by the filter
>> >> 'graph
>> >>>> 0
>> >>>>> input from stream 0:0' and the filter 'auto_scaler_0'
>> >>>>> Error reinitializing filters!
>> >>>>> Failed to inject frame into filter network: Function not implemented
>> >>>>> Error while processing the decoded data for stream #0:0
>> >>>>> Conversion failed!
>> >>>>>
>> >>>>> How do I correctly run unsharp_opencl? Should I be running it on a
>> >>>>> different video file?
>> >>>>
>> >>>> It's intended to be used in filter graphs where much of the activity
>> is
>> >>>> already happening on the GPU, so the input and output are in the
>> >>>> AV_PIX_FMT_OPENCL format which contains GPU-side OpenCL images.
>> >>>>
>> >>>> If you want to use it standalone then you need hwupload and
>> hwdownload
>> >>>> filters to move the frames between the CPU and GPU. For your
>> example,
>> >> it
>> >>>> should work with:
>> >>>>
>> >>>> ffmpeg -init_hw_device opencl=ocl:0.1 -filter_hw_device ocl -i
>> space.mpg
>> >>>> -filter_complex hwupload,unsharp_opencl,hwdownload output.mp4
>> >>>>
>> >>>> (There are constraints on what formats can be used and therefore
>> >> suitable
>> >>>> files (or required format conversions), but I believe a normal
>> yuv420p
>> >>>> video like this should work in all cases.)
>> >>>>
>> >>>> - Mark
>> >>
>> >
>> > Thanks.
>> >
>> > How is AV_PIX_FMT_OPENCL formatted? When using read_imagef(), does xyzw
>> > correspond to RGBA respectively, or to YUV? Would I have to account for
>> > different formats? If so, how do I check the format of the input?
>>
>> See libavutil/hwcontext_opencl.c and in particular the functions
>> opencl_get_buffer(), opencl_pool_alloc() and opencl_get_plane_format() for
>> the code creating the AV_PIX_FMT_OPENCL images.
>>
>> It tries to support all formats which are representable as OpenCL images,
>> so the component values are dependent on what the format of the underlying
>> image is. What can actually be represented does depend a bit on the
>> implementation - for example, CL_R channel order is needed for all planar
>> YUV images, and CL_RG is needed as well for NV12 and P010 support. The
>> data_type is always UNORM_INT8 or UNORM_INT16 (depending on depth,
>> intermediate depths like 10-bit are treated as UNORM_INT16 and
>> require an MSB-packed format like P010 rather than an LSB-packed format
>> like YUV420P), so it should always be read as a float (float2, float4) in
>> the CL kernels.
>>
>> Given that, if you have kernels which are not dependent on interactions
>> between components then you don't actually need to care about the
>> underlying format - use float4 everywhere and what's actually in xyzw
>> doesn't matter. See the program_opencl examples <
>> http://ffmpeg.org/ffmpeg-filters.html#program_005fopencl-1> for some
>> cases of this, and the unsharp_opencl filter is also close to this (it only
>> cares about luma vs. chroma planes).
>>
>> If on the other hand you do need to know exactly where the components
>> are, then you will need to look at the sw_format of the incoming
>> hw_frames_ctx (it's on the input link when the config_inputs function is
>> called on a filter input pad). If you can't easily support all formats
>> then rejecting unsupported ones here with a suitable error message is fine
>> (there isn't currently any negotiation of that format, so it will be up to
>> the user to get it into the right state). With only one or a small number
>> of formats there, you can know exactly what is in the xyzw components
>> and therefore use them however you like.
>>
>> Hope this helps,
>>
>> - Mark
>> _______________________________________________
>> ffmpeg-devel mailing list
>> ffmpeg-devel at ffmpeg.org
>> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>>
>
> Thanks, that helps.
>
> I need to know exactly what is in xyzw, since I’m trying to implement
> vf_fade, and the color parameter is given as rgba. I managed to get it
> working on yuv files. I’m currently trying to test it on different pixel
> formats. What other file types should I test my filter on? Is there some
> way I can download different video files with different pixel formats to
> test my filter on?
>
> Also, I’ve attempted to implement avgblur with opencl. This is my first
> time submitting a patch, sorry if I format it incorrectly.
>
> - Dylan
>
[master 319e56f87c] lavfi: Add OpenCL avgblur filter
6 files changed, 381 insertions(+)
create mode 100644 libavfilter/opencl/avgblur.cl
create mode 100644 libavfilter/vf_avgblur_opencl.c
diff --git a/configure b/configure
index fe81ba31b5..203737615c 100755
--- a/configure
+++ b/configure
@@ -3205,6 +3205,7 @@ aresample_filter_deps="swresample"
ass_filter_deps="libass"
atempo_filter_deps="avcodec"
atempo_filter_select="rdft"
+avgblur_opencl_filter_deps="opencl"
azmq_filter_deps="libzmq"
blackframe_filter_deps="gpl"
boxblur_filter_deps="gpl"
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 6a6083618d..6bf32ad260 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -138,6 +138,8 @@ OBJS-$(CONFIG_ALPHAMERGE_FILTER) += vf_alphamerge.o
OBJS-$(CONFIG_ASS_FILTER) += vf_subtitles.o
OBJS-$(CONFIG_ATADENOISE_FILTER) += vf_atadenoise.o
OBJS-$(CONFIG_AVGBLUR_FILTER) += vf_avgblur.o
+OBJS-$(CONFIG_AVGBLUR_OPENCL_FILTER) += vf_avgblur_opencl.o opencl.o \
+ opencl/avgblur.o
OBJS-$(CONFIG_BBOX_FILTER) += bbox.o vf_bbox.o
OBJS-$(CONFIG_BENCH_FILTER) += f_bench.o
OBJS-$(CONFIG_BITPLANENOISE_FILTER) += vf_bitplanenoise.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 9adb1090b7..cb04d1b113 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -148,6 +148,7 @@ static void register_all(void)
REGISTER_FILTER(ASS, ass, vf);
REGISTER_FILTER(ATADENOISE, atadenoise, vf);
REGISTER_FILTER(AVGBLUR, avgblur, vf);
+ REGISTER_FILTER(AVGBLUR_OPENCL, avgblur_opencl, vf);
REGISTER_FILTER(BBOX, bbox, vf);
REGISTER_FILTER(BENCH, bench, vf);
REGISTER_FILTER(BITPLANENOISE, bitplanenoise, vf);
diff --git a/libavfilter/opencl/avgblur.cl b/libavfilter/opencl/avgblur.cl
new file mode 100644
index 0000000000..fff655529b
--- /dev/null
+++ b/libavfilter/opencl/avgblur.cl
@@ -0,0 +1,60 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+
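+/* Horizontal pass of a separable box blur: each work-item averages the
+ * source pixels within rad columns of its position on the same row,
+ * clamping the window to the image edges. */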
+__kernel void avgblur_horiz(__write_only image2d_t dst,
+ __read_only image2d_t src,
+ int rad)
+{
+ const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+ CLK_FILTER_NEAREST);
+ int2 loc = (int2)(get_global_id(0), get_global_id(1));
+ int2 size = (int2)(get_global_size(0), get_global_size(1));
+
+ int count = 0;
+ float4 acc = (float4)(0,0,0,0);
+
+ for (int xx = max(0,loc.x-rad); xx < min(loc.x+rad+1,size.x); xx++)
+ {
+ count++;
+ acc += read_imagef(src, sampler, (int2)(xx, loc.y));
+ }
+
+ write_imagef(dst, loc, acc / count);
+}
+
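+/* Vertical pass: the same averaging applied along each column, with an
+ * independent radius radv. */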
+__kernel void avgblur_vert(__write_only image2d_t dst,
+ __read_only image2d_t src,
+ int radv)
+{
+ const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+ CLK_FILTER_NEAREST);
+ int2 loc = (int2)(get_global_id(0), get_global_id(1));
+ int2 size = (int2)(get_global_size(0), get_global_size(1));
+
+ int count = 0;
+ float4 acc = (float4)(0,0,0,0);
+
+ for (int yy = max(0,loc.y-radv); yy < min(loc.y+radv+1,size.y); yy++)
+ {
+ count++;
+ acc += read_imagef(src, sampler, (int2)(loc.x, yy));
+ }
+
+ write_imagef(dst, loc, acc / count);
+}
diff --git a/libavfilter/opencl_source.h b/libavfilter/opencl_source.h
index 23cdfc6ac9..02bc1723b0 100644
--- a/libavfilter/opencl_source.h
+++ b/libavfilter/opencl_source.h
@@ -19,6 +19,7 @@
#ifndef AVFILTER_OPENCL_SOURCE_H
#define AVFILTER_OPENCL_SOURCE_H
+extern const char *ff_opencl_source_avgblur;
extern const char *ff_opencl_source_overlay;
extern const char *ff_opencl_source_unsharp;
diff --git a/libavfilter/vf_avgblur_opencl.c b/libavfilter/vf_avgblur_opencl.c
new file mode 100644
index 0000000000..6e5ae4f32e
--- /dev/null
+++ b/libavfilter/vf_avgblur_opencl.c
@@ -0,0 +1,316 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/common.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/mem.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+
+#include "avfilter.h"
+#include "internal.h"
+#include "opencl.h"
+#include "opencl_source.h"
+#include "video.h"
+
+
+typedef struct AverageBlurOpenCLContext {
+ OpenCLFilterContext ocf;
+
+ int initialised;
+ cl_kernel kernel_horiz;
+ cl_kernel kernel_vert;
+ cl_command_queue command_queue;
+
+ int radius;   /* horizontal radius ("sizeX" option) */
+ int radiusV;  /* vertical radius ("sizeY" option, 0 = same as sizeX) */
+ int planes;   /* bitmask of planes to filter ("planes" option) */
+
+} AverageBlurOpenCLContext;
+
+
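+/* Compile the OpenCL program and create the two kernels and the command
+ * queue; called lazily from the first filter_frame() call. */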
+static int avgblur_opencl_init(AVFilterContext *avctx)
+{
+ AverageBlurOpenCLContext *ctx = avctx->priv;
+ cl_int cle;
+ int err;
+
+ err = ff_opencl_filter_load_program(avctx, &ff_opencl_source_avgblur, 1);
+ if (err < 0)
+ goto fail;
+
+ ctx->command_queue = clCreateCommandQueue(ctx->ocf.hwctx->context,
+ ctx->ocf.hwctx->device_id,
+ 0, &cle);
+ if (!ctx->command_queue) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to create OpenCL "
+ "command queue: %d.\n", cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+
+ ctx->kernel_horiz = clCreateKernel(ctx->ocf.program,"avgblur_horiz", &cle);
+ if (!ctx->kernel_horiz) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to create kernel: %d.\n", cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+
+ ctx->kernel_vert = clCreateKernel(ctx->ocf.program,"avgblur_vert", &cle);
+ if (!ctx->kernel_vert) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to create kernel: %d.\n", cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+
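+ /* sizeY defaults to 0, which means "use the horizontal size". */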
+ if (ctx->radiusV <= 0) {
+ ctx->radiusV = ctx->radius;
+ }
+
+ ctx->initialised = 1;
+ return 0;
+
+fail:
+ if (ctx->command_queue)
+ clReleaseCommandQueue(ctx->command_queue);
+ if (ctx->kernel_horiz)
+ clReleaseKernel(ctx->kernel_horiz);
+ if (ctx->kernel_vert)
+ clReleaseKernel(ctx->kernel_vert);
+ return err;
+}
+
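+/* Run the horizontal and then the vertical pass over every plane of the
+ * input frame. */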
+static int avgblur_opencl_filter_frame(AVFilterLink *inlink, AVFrame *input)
+{
+ AVFilterContext *avctx = inlink->dst;
+ AVFilterLink *outlink = avctx->outputs[0];
+ AverageBlurOpenCLContext *ctx = avctx->priv;
+ AVFrame *output = NULL;
+ cl_int cle;
+ size_t global_work[2];
+ cl_mem src, dst;
+ int err, p;
+
+ av_log(ctx, AV_LOG_DEBUG, "Filter input: %s, %ux%u (%"PRId64").\n",
+ av_get_pix_fmt_name(input->format),
+ input->width, input->height, input->pts);
+
+ if (!input->hw_frames_ctx)
+ return AVERROR(EINVAL);
+
+ if (!ctx->initialised) {
+ err = avgblur_opencl_init(avctx);
+ if (err < 0)
+ goto fail;
+
+ }
+
+ output = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+ if (!output) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }
+
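+ /* Each plane of an AV_PIX_FMT_OPENCL frame is a separate cl_mem image;
+ * planes not selected by the "planes" option are run with radius 0,
+ * which makes the kernels copy them unchanged. */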
+ for (p = 0; p < FF_ARRAY_ELEMS(output->data); p++) {
+ src = (cl_mem) input->data[p];
+ dst = (cl_mem)output->data[p];
+
+ if (!dst)
+ break;
+
+ int radius_x = ctx->radius;
+ int radius_y = ctx->radiusV;
+
+ if (!(ctx->planes & (1 << p))) {
+ radius_x = 0;
+ radius_y = 0;
+ }
+
+ cle = clSetKernelArg(ctx->kernel_horiz, 0, sizeof(cl_mem), &dst);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
+ "destination image argument: %d.\n", cle);
+ goto fail;
+ }
+ cle = clSetKernelArg(ctx->kernel_horiz, 1, sizeof(cl_mem), &src);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
+ "source image argument: %d.\n", cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+ cle = clSetKernelArg(ctx->kernel_horiz, 2, sizeof(cl_int), &radius_x);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
+ "sizeX argument: %d.\n", cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+
+ global_work[0] = output->width;
+ global_work[1] = output->height;
+
+ av_log(avctx, AV_LOG_DEBUG, "Run kernel on plane %d "
+ "(%"SIZE_SPECIFIER"x%"SIZE_SPECIFIER").\n",
+ p, global_work[0], global_work[1]);
+
+ cle = clEnqueueNDRangeKernel(ctx->command_queue, ctx->kernel_horiz, 2, NULL,
+ global_work, NULL,
+ 0, NULL, NULL);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to enqueue kernel: %d.\n",
+ cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+
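+ /* The vertical pass reads from and writes to the output of the
+ * horizontal pass, so dst is passed as both source and destination. */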
+ cle = clSetKernelArg(ctx->kernel_vert, 0, sizeof(cl_mem), &dst);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
+ "destination image argument: %d.\n", cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+ cle = clSetKernelArg(ctx->kernel_vert, 1, sizeof(cl_mem), &dst);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
+ "source image argument: %d.\n", cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+ cle = clSetKernelArg(ctx->kernel_vert, 2, sizeof(cl_int), &radius_y);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
+ "sizeY argument: %d.\n", cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+
+ global_work[0] = output->width;
+ global_work[1] = output->height;
+
+ av_log(avctx, AV_LOG_DEBUG, "Run kernel on plane %d "
+ "(%"SIZE_SPECIFIER"x%"SIZE_SPECIFIER").\n",
+ p, global_work[0], global_work[1]);
+
+ cle = clEnqueueNDRangeKernel(ctx->command_queue, ctx->kernel_vert, 2, NULL,
+ global_work, NULL,
+ 0, NULL, NULL);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to enqueue kernel: %d.\n",
+ cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+
+ }
+
+ cle = clFinish(ctx->command_queue);
+ if (cle != CL_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to finish command queue:
%d.\n",
+ cle);
+ err = AVERROR(EIO);
+ goto fail;
+ }
+
+ err = av_frame_copy_props(output, input);
+ if (err < 0)
+ goto fail;
+
+ av_frame_free(&input);
+
+ av_log(ctx, AV_LOG_DEBUG, "Filter output: %s, %ux%u (%"PRId64").\n",
+ av_get_pix_fmt_name(output->format),
+ output->width, output->height, output->pts);
+
+ return ff_filter_frame(outlink, output);
+
+fail:
+ clFinish(ctx->command_queue);
+ av_frame_free(&input);
+ av_frame_free(&output);
+ return err;
+}
+
+static av_cold void avgblur_opencl_uninit(AVFilterContext *avctx)
+{
+ AverageBlurOpenCLContext *ctx = avctx->priv;
+ cl_int cle;
+
+
+ if (ctx->kernel_horiz) {
+ cle = clReleaseKernel(ctx->kernel_horiz);
+ if (cle != CL_SUCCESS)
+ av_log(avctx, AV_LOG_ERROR, "Failed to release "
+ "kernel: %d.\n", cle);
+ }
+
+ if (ctx->kernel_vert) {
+ cle = clReleaseKernel(ctx->kernel_vert);
+ if (cle != CL_SUCCESS)
+ av_log(avctx, AV_LOG_ERROR, "Failed to release "
+ "kernel: %d.\n", cle);
+ }
+
+ if (ctx->command_queue) {
+ cle = clReleaseCommandQueue(ctx->command_queue);
+ if (cle != CL_SUCCESS)
+ av_log(avctx, AV_LOG_ERROR, "Failed to release "
+ "command queue: %d.\n", cle);
+ }
+
+ ff_opencl_filter_uninit(avctx);
+}
+
+#define OFFSET(x) offsetof(AverageBlurOpenCLContext, x)
+#define FLAGS (AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM)
+static const AVOption avgblur_opencl_options[] = {
+ { "sizeX", "set horizontal size", OFFSET(radius), AV_OPT_TYPE_INT,
{.i64=1}, 1, 1024, FLAGS },
+ { "planes", "set planes to filter", OFFSET(planes), AV_OPT_TYPE_INT,
{.i64=0xF}, 0, 0xF, FLAGS },
+ { "sizeY", "set vertical size", OFFSET(radiusV), AV_OPT_TYPE_INT,
{.i64=0}, 0, 1024, FLAGS },
+ { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(avgblur_opencl);
+
+static const AVFilterPad avgblur_opencl_inputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ .filter_frame = &avgblur_opencl_filter_frame,
+ .config_props = &ff_opencl_filter_config_input,
+ },
+ { NULL }
+};
+
+static const AVFilterPad avgblur_opencl_outputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ .config_props = &ff_opencl_filter_config_output,
+ },
+ { NULL }
+};
+
+AVFilter ff_vf_avgblur_opencl = {
+ .name = "avgblur_opencl",
+ .description = NULL_IF_CONFIG_SMALL("Apply average blur filter"),
+ .priv_size = sizeof(AverageBlurOpenCLContext),
+ .priv_class = &avgblur_opencl_class,
+ .init = &ff_opencl_filter_init,
+ .uninit = &avgblur_opencl_uninit,
+ .query_formats = &ff_opencl_filter_query_formats,
+ .inputs = avgblur_opencl_inputs,
+ .outputs = avgblur_opencl_outputs,
+ .flags_internal = FF_FILTER_FLAG_HWFRAME_AWARE,
+};