[FFmpeg-devel] [PATCH 2/2] lavf/vf_scale_amf: AMF scaler/colorspace converter filter implementation

Alexander Kravchenko akravchenko188 at gmail.com
Tue Jun 19 17:34:12 EEST 2018


Hi Mark.
Thanks for your review and comments.
See my comments and questions below.

> -----Original Message-----
> From: ffmpeg-devel [mailto:ffmpeg-devel-bounces at ffmpeg.org] On Behalf Of Mark Thompson
> Sent: Tuesday, June 19, 2018 2:20 AM
> To: ffmpeg-devel at ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 2/2] lavf/vf_scale_amf: AMF scaler/colorspace converter filter implementation
> 

> 
> > +    { AV_PIX_FMT_NV12,       AMF_SURFACE_NV12 },
> > +    { AV_PIX_FMT_BGR0,       AMF_SURFACE_BGRA },
> > +    { AV_PIX_FMT_RGB0,       AMF_SURFACE_RGBA },
> 
> What is happening to alpha here - should these be RGBA/BGRA?
> 
Alpha is ignored for RGB->NV12/YUV conversions.
For conversions between the RGBA variants the channels are reordered.
RGBA/BGRA should probably also be added as input formats.
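Roughly what I have in mind (a sketch only; the struct layout mirrors the existing format table in the patch, the names here are just illustrative):

    /* Alpha-carrying inputs map onto the same AMF surface formats; the alpha
     * channel is passed through on RGB<->RGB conversions and dropped on
     * RGB->NV12/YUV. */
    static const struct {
        enum AVPixelFormat      av_format;
        enum AMF_SURFACE_FORMAT amf_format;
    } format_map_alpha[] = {
        { AV_PIX_FMT_BGRA, AMF_SURFACE_BGRA },
        { AV_PIX_FMT_RGBA, AMF_SURFACE_RGBA },
    };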


> > +    int err;
> > +    int i;
> > +    static const enum AVPixelFormat input_pix_fmts[] = {
> > +        AV_PIX_FMT_NV12,
> > +        AV_PIX_FMT_0RGB,
> > +        AV_PIX_FMT_BGR0,
> > +        AV_PIX_FMT_RGB0,
> > +        AV_PIX_FMT_GRAY8,
> > +        AV_PIX_FMT_YUV420P,
> > +        AV_PIX_FMT_YUYV422,
> > +        AV_PIX_FMT_NONE,
> > +    };
> > +    static const enum AVPixelFormat output_pix_fmts_default[] = {
> > +        AV_PIX_FMT_D3D11,
> > +        AV_PIX_FMT_DXVA2_VLD,
> > +        AV_PIX_FMT_NONE,
> > +    };
> > +    output_pix_fmts = output_pix_fmts_default;
> > +
> > +    //in case if hw_device_ctx is set to DXVA2 we change order of pixel formats to set DXVA2 be choosen by default
> > +    //The order is ignored if hw_frames_ctx is not NULL on the config_output stage
> > +    if (avctx->hw_device_ctx) {
> > +        device_ctx = (AVHWDeviceContext*)avctx->hw_device_ctx->data;
> > +
> > +        if (device_ctx->type == AV_HWDEVICE_TYPE_DXVA2){
> > +            static const enum AVPixelFormat output_pix_fmts_dxva2[] = {
> > +                AV_PIX_FMT_DXVA2_VLD,
> > +                AV_PIX_FMT_D3D11,
> > +                AV_PIX_FMT_NONE,
> > +            };
> > +            output_pix_fmts = output_pix_fmts_dxva2;
> 
> This feels dubious.  Can you explain exactly what you want to happen here?
> 
> I suspect you might be better exposing only the pixfmt associated with the device if one is provided.  (E.g. if you have software input
> and a D3D11 device then you don't want to expose DXVA2 output at all.)
> 
There are two ways the filter can be initialized:
1) from avctx->hw_device_ctx, available in query_formats (the input frame is in host memory and we use the device given by -filter_hw_device);
2) from avctx->inputs[0]->hw_frames_ctx, not available in query_formats (the input frame is a hw frame and we take the device context from it); this one has the higher priority.

The frames context has the higher priority, but it is not yet set at this stage (query_formats).

So the following scenario is possible:
*) avctx->hw_device_ctx is set to DXVA2 (DX9). In that case we could expose only the DXVA2 output format in query_formats.
*) avctx->inputs[0]->hw_frames_ctx turns out to be D3D11 (DX11), but we only learn that in config_output, and by then we cannot change the output format to D3D11 any more.
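If I restricted the list to match the device as you suggest, query_formats would look roughly like this (a sketch only, error handling omitted):

    static const enum AVPixelFormat out_fmts_d3d11[] = { AV_PIX_FMT_D3D11,     AV_PIX_FMT_NONE };
    static const enum AVPixelFormat out_fmts_dxva2[] = { AV_PIX_FMT_DXVA2_VLD, AV_PIX_FMT_NONE };

    if (avctx->hw_device_ctx) {
        AVHWDeviceContext *device_ctx = (AVHWDeviceContext*)avctx->hw_device_ctx->data;
        /* expose only the pixfmt that matches the provided device */
        output_pix_fmts = (device_ctx->type == AV_HWDEVICE_TYPE_DXVA2) ?
                          out_fmts_dxva2 : out_fmts_d3d11;
    } else {
        output_pix_fmts = output_pix_fmts_default; /* no device given: offer both */
    }

but in the scenario above the D3D11 format would already have been filtered out by the time config_output sees the D3D11 hw_frames_ctx. Reordering keeps both formats available and only changes which one is preferred.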


> > +    AMF_ASSIGN_PROPERTY_BOOL(res, ctx->converter, AMF_VIDEO_CONVERTER_KEEP_ASPECT_RATIO, ctx->keep_aspect_ratio);
> > +    AMFAV_RETURN_IF_FALSE(avctx, res == AMF_OK, AVERROR(ENOMEM),
> > + "AMFConverter-SetProperty() failed with error %d\n", res);
> 
> What effect with this have?  The w/h calculation already deals with aspect ratio preservation (640:-1, etc.) in a common way across all
> scale filters.

This is required if you want to fit, for example, a square video into a rectangular output and fill the empty space with a color. For instance, fitting a 1080x1080 input into a 1920x1080 output keeps the picture at 1080x1080 and leaves 420-pixel bars on the left and right, which are then painted with the fill color.

> 
> > +
> > +    AMF_ASSIGN_PROPERTY_BOOL(res, ctx->converter, AMF_VIDEO_CONVERTER_FILL, ctx->fill);
> > +    AMFAV_RETURN_IF_FALSE(avctx, res == AMF_OK, AVERROR(ENOMEM),
> > + "AMFConverter-SetProperty() failed with error %d\n", res);
> 
> Is it valid not to set this?  I'm guessing from the API, but it looks like if this isn't set when you have an output rectangle then the rest of
> the surface will be left with undefined contents because it's a newly-allocated surface.
> 

This option controls whether the empty space is filled when the input picture does not cover the whole output surface.
That can happen if the user sets AMF_VIDEO_CONVERTER_KEEP_ASPECT_RATIO and/or AMF_VIDEO_CONVERTER_OUTPUT_RECT.
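To make that concrete, the letterbox case would typically enable both together, roughly like this (a sketch reusing the macros from the patch; the fill-color property name is from memory of the AMF header and should be double-checked):

    /* Keep the source aspect ratio and paint the unused part of the newly
     * allocated output surface so it is never left undefined. */
    AMF_ASSIGN_PROPERTY_BOOL(res, ctx->converter, AMF_VIDEO_CONVERTER_KEEP_ASPECT_RATIO, 1);
    AMF_ASSIGN_PROPERTY_BOOL(res, ctx->converter, AMF_VIDEO_CONVERTER_FILL, 1);
    /* ...plus AMF_VIDEO_CONVERTER_FILL_COLOR (an AMFColor) to choose the bar color. */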


> > +
> > +    if(ctx->color_profile != AMF_VIDEO_CONVERTER_COLOR_PROFILE_UNKNOWN) {
> > +        AMF_ASSIGN_PROPERTY_INT64(res, ctx->converter,
> > + AMF_VIDEO_CONVERTER_COLOR_PROFILE, (amf_int32)ctx->color_profile);
> 
> What does this profile do?
This selects the conversion matrix used for RGB<->YUV conversions.

> 
> The input properties should be set from the input AVFrame (see color_range/color_primaries/color_trc/colorspace/chroma_location -
> you'll need all of those to support JPEG vs. normal video).
> 
> If it's setting the output properties then you will also need to set the AVFrame fields correctly rather than copying them from the input
> as you do below (copy_props does this).
> 
Do you mean the conversion matrix should be chosen from the input frame's properties by default, and that a manual override should also update the output frame's properties? The frame->colorspace property?
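Something like this is what I read your suggestion as (a sketch; the _601/_709/_2020 enum names are from my reading of the AMF VideoConverter header):

    /* Pick the AMF conversion matrix from the input frame's colorspace. */
    static int amf_color_profile_from_frame(const AVFrame *frame)
    {
        switch (frame->colorspace) {
        case AVCOL_SPC_SMPTE170M:
        case AVCOL_SPC_BT470BG:
            return AMF_VIDEO_CONVERTER_COLOR_PROFILE_601;
        case AVCOL_SPC_BT709:
            return AMF_VIDEO_CONVERTER_COLOR_PROFILE_709;
        case AVCOL_SPC_BT2020_NCL:
            return AMF_VIDEO_CONVERTER_COLOR_PROFILE_2020;
        default:
            return AMF_VIDEO_CONVERTER_COLOR_PROFILE_UNKNOWN;
        }
    }

The result would then be the default for AMF_VIDEO_CONVERTER_COLOR_PROFILE, with the color_profile option only overriding it.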

> > +
> > +    res = ctx->converter->pVtbl->QueryOutput(ctx->converter, &data_out);
> > +    AMFAV_RETURN_IF_FALSE(avctx, res == AMF_OK, AVERROR(ENOMEM),
> > + "QueryOutput() failed with error %d\n", res);
> 
> Does this have the expected pipelining effect?  (The input is only submitted here, the operation doesn't need to finish until someone
> actually reads the output.)

It puts the processing into the hardware queue and does not wait for it to finish. The next consumer of the frame (an encoder, for example) uses it when it is ready.

> 
> > +
> > +    if (data_out) {
> > +        AMFGuid guid = IID_AMFSurface();
> > +        data_out->pVtbl->QueryInterface(data_out, &guid, (void**)&surface_out); // query for buffer interface
> > +        data_out->pVtbl->Release(data_out);
> > +    }
> > +
> > +    out = amf_amfsurface_to_avframe(avctx, surface_out);
> 
> How many frames is the following component allowed to hold on to?  If arbitrarily many, good.  If it's limited, can this limit be
> configured?  (See extra_hw_frames.)
> 

Arbitrarily many.



