[FFmpeg-devel] [PATCH 1/2] libavcodec: Add support for QSV screen capture plugin
bilyak.alexander at gmail.com
Mon Aug 14 16:25:36 EEST 2017
I am not satisfied with this approach either.
But I thought it would be better this way, since we already have
well-written code for the encoder/decoder, surface management, etc. Sharing
these functions between lavc and lavd would be a big mess and produce a
lot of avpriv_ functions (I've tried). And I don't know whether copying the
same code twice (at least for surface management and initialization) would be
better than adding a pseudo-decoder.
So I sent this patch just to show the idea; of course it is not the final
version. I was curious whether a pseudo-decoder would be acceptable or not.
Also, your remarks 1 and 2 are true. But as I said: this is just a concept.
I have no idea whether there could be any need to pass an external device in.
If you want a screen capture device, what would you wish to do with its
hardware context? Settings like mouse capture (which is not working at all
in QSV, lol) or specifying the desired window/area could be set via the usual
text options.
If someone REALLY wants to provide an external device context (or get it
out of the created context), we could add some callback parameter with a
pointer to the context memory (I've seen something like this in VLC a long
time ago). This is a bad, BAD approach, but as you mentioned, lavf won't
let us do this properly.
As far as I understood, this "decoder" from Intel makes a copy of the
backbuffer to system memory. As it consumes a lot less CPU than GDI (about
half) and Intel Media Performance shows me some small load on the MFX, I
THINK that the GPU is responsible for the copying and converting. But these
are just my thoughts, that's all, as no info is provided in the documentation.
Many thanks for reviewing the code,
2017-08-14 14:26 GMT+02:00 Mark Thompson <sw at jkqxz.net>:
> On 11/08/17 10:10, Alexander Bilyak wrote:
> > Intel QSV SDK provide screen capture plugin starting from API ver 1.17
> > as runtime loadable plugin for QSV decoder.
> > * add API version selection while initialization of QSV context
> > (default is still 1.1 for usual encoding/decoding)
> > ---
> > configure | 2 +
> > libavcodec/Makefile | 1 +
> > libavcodec/allcodecs.c | 1 +
> > libavcodec/qsv.c | 6 +-
> > libavcodec/qsv_internal.h | 2 +-
> > libavcodec/qsvdec.c | 12 ++-
> > libavcodec/qsvdec.h | 5 +
> > libavcodec/qsvdec_screen.c | 250 ++++++++++++++++++++++++++++++
> > libavcodec/qsvenc.c | 3 +-
> > 9 files changed, 272 insertions(+), 10 deletions(-)
> > create mode 100644 libavcodec/qsvdec_screen.c
> I'm not convinced that adding this as a hacked-up pseudo-decoder is really
> the best approach.
> It would, I think, be straightforward to put this in lavd completely
> standalone. The common code you are actually using there is:
> * Session initialisation - this should be trivial, since you have no
> device or external frames anyway.
> * The actual decode function - this contains a lot of additional
> trickiness (packets, asynchronicity, queueing) which you don't want. A
> simpler form which just fetches one frame would feel better. This should
> also be able to avoid the second copy to the output packet.
> Some other thoughts:
> * If this is only available in a higher API version then you will need a
> configure test for those headers.
> * Does this only support NV12 capture? In many cases RGB is more useful
> (or at least some YUV 4:4:4, which doesn't do nasty things to thin
> coloured lines).
> * Is having an externally-provided device (hw_device_ctx) ever useful?
> The lavd implementation doesn't have any way to pass a device in (since
> lavf can't).
> * Do you happen to know how it actually works? (Presumably it's reading
> surfaces used for scanout on the GPU side somehow; who does the copy and
> colour conversion?)
> - Mark
> ffmpeg-devel mailing list
> ffmpeg-devel at ffmpeg.org