[FFmpeg-devel] [PATCH 1/5] lavc : yami : add libyami decoder/encoder

Chao Liu yijinliu at gmail.com
Wed Aug 17 07:44:52 EEST 2016


On Tue, Aug 16, 2016 at 11:27 AM, Mark Thompson <sw at jkqxz.net> wrote:

> On 16/08/16 03:44, Jun Zhao wrote:
> >
> >
> > On 2016/8/16 10:14, Chao Liu wrote:
> >> On Mon, Aug 15, 2016 at 6:00 PM, Jun Zhao <mypopydev at gmail.com> wrote:
> >>
> >>>
> >>>
> >>> On 2016/8/16 1:48, Jean-Baptiste Kempf wrote:
> >>>> On 15 Aug, Hendrik Leppkes wrote:
> >>>>>> On Mon, Aug 15, 2016 at 10:22 AM, Jun Zhao <mypopydev at gmail.com> wrote:
> >>>>>>>> add libyami decoder/encoder/vpp in ffmpeg; for build steps, please
> >>>>>>>> refer to: https://github.com/01org/ffmpeg_libyami/wiki/Build
> >>>>>>>>
> >>>>>>
> >>>>>> We've had patches for yami before, and they were not applied because
> >>>>>> many developers did not agree with adding more wrappers for the same
> >>>>>> hardware decoders which we already support.
> >>>>>> Please refer to the discussion in this thread:
> >>>>>> https://ffmpeg.org/pipermail/ffmpeg-devel/2015-January/167388.html
> >>>>>>
> >>>>>> The concerns and reasons brought up there should not really have
> >>> changed.
> >>>> I still object very strongly against yami.
> >>>>
> >>>> It is a library that does not bring much that we could not do
> >>>> ourselves, it duplicates a lot of our code, it is the wrong level of
> >>>> abstraction for libavcodec, it is using a bad license and there is no
> >>>> guarantee of maintainership in the future.
> >>>
> >>> I understand the worry after reading the above thread. For Intel GPU
> >>> hardware-accelerated decode/encode, there are now three options in
> >>> ffmpeg:
> >>>
> >>> 1. ffmpeg and QSV (Media SDK)
> >>> 2. ffmpeg VAAPI hwaccel decoder / native VAAPI encoder
> >>> 3. ffmpeg and libyami
> >>>
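Of the three options above, the first two can be exercised directly from the ffmpeg command line. A sketch for reference (file names and the DRM device path are placeholders; the option names are those used by ffmpeg builds with QSV and VAAPI support enabled):

```shell
# Sketch only: the two non-yami paths as ffmpeg command lines.
# in.mp4/out.mp4 and the render node path are placeholders.

# 1. QSV (Media SDK) decode + encode:
QSV_CMD="ffmpeg -hwaccel qsv -c:v h264_qsv -i in.mp4 -c:v h264_qsv out.mp4"

# 2. VAAPI hwaccel decode + native VAAPI encode (frames stay on the GPU):
VAAPI_CMD="ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
-hwaccel_output_format vaapi -i in.mp4 -c:v h264_vaapi out.mp4"

# Print rather than run, so the sketch does not depend on local hardware:
echo "$QSV_CMD"
echo "$VAAPI_CMD"
```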
> >> Sorry for this little diversion: what are the differences between QSV
> >> and VAAPI?
> >> My understanding is that QSV has better performance, while VAAPI
> >> supports more decoders/encoders. Is that correct?
> >> It would be nice to have some data showing the speed of these
> >> hardware-accelerated decoders/encoders.
> >
> > You are right that QSV has better performance, but libyami has more
> > decoders/encoders than the VAAPI hwaccel decoders/encoders. :)
> >
> > According to our profiling, the speed ranking is: QSV > ffmpeg with
> > libyami > VAAPI hwaccel decoder with native VAAPI encoder.
>
> In a single ffmpeg process I believe that result, but I'm not sure that
> it's the question you really want to ask.
>
> The lavc VAAPI hwaccel/encoder are both single-threaded, and while they
> overlap operations internally where possible the single-threadedness of
> ffmpeg (the program) itself means that they will not achieve the maximum
> performance.  If you really want to compare the single-transcode
> performance like this then you will want to make a test program which does
> the threading outside lavc.
>
>
> In any case, I don't believe that the single generic transcode setup is a
> use that many people are interested in (beyond testing to observe that
> hardware encoders kindof suck relative to libx264, then using that instead).
>
> To my mind, the cases where it is interesting to use VAAPI (or really any
> hardware encoder on a normal PC-like system) are:
>
> * You want to do /lots/ of simultaneous transcodes in some sort of server
> setup (often with some simple transformation, like a scale or codec
> change), and want to maximise the number you can do while maintaining some
> minimum level of throughput on each one.  You can benchmark this case for
> VAAPI by running lots of instances of ffmpeg, and I expect that the libyami
> numbers will be precisely equivalent because libyami is using VAAPI anyway
> and the hardware is identical.
>
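The multi-instance benchmark described above can be skeletoned as a small script: launch N independent transcodes in the background and time the whole batch. Here `sleep 1` stands in for the real ffmpeg command line, so the skeleton runs anywhere:

```shell
# Sketch of the parallel-instances benchmark: N background jobs, one wait.
# "sleep 1" is a placeholder for the real ffmpeg invocation.
N=4
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    # Real run would be something like:
    #   ffmpeg -hwaccel vaapi ... -i "in_$i.mp4" -c:v h264_vaapi "out_$i.mp4" &
    sleep 1 &
    i=$((i + 1))
done
wait   # block until all instances have finished
end=$(date +%s)
echo "N=$N elapsed=$((end - start))s"
```

Since the jobs run in parallel, the batch should take roughly as long as one job, not N times as long.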
Our use case is similar to this one. In one process, we have multiple
threads that decode the input video streams, process the decoded frames and
encode them.
To process the frames efficiently, we would like the decoded frames to be
in a format like yuv420p, which has a separate luminance plane.
We would like to use whatever hardware acceleration is available. So far,
we have only tried QSV. It works, though with some problems: no VP8
support, and it is only available on relatively new Intel CPUs.
I just took a look at the vaapi hwaccel, and I am curious why its pixel
format has to be AV_PIX_FMT_VAAPI. Jun's patch does support other pixel
formats like yuv420p.
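(A note on that question: AV_PIX_FMT_VAAPI frames are opaque GPU surfaces, so to get yuv420p-like data on the CPU one downloads them through a filter. A sketch, with the file names as placeholders:)

```shell
# Sketch: the hwdownload filter copies opaque VAAPI surfaces into a
# CPU-readable format (nv12 here); a software format filter can then
# convert to yuv420p. "in.mp4"/"out.mp4" are placeholders.
DL_CMD="ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i in.mp4 \
-vf hwdownload,format=nv12,format=yuv420p -c:v libx264 out.mp4"
echo "$DL_CMD"
```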

>
> * You want to do other things with the surfaces on your GPU.  Here, using
> VAAPI directly is good because the DRM objects are easily exposed so you
> can move surfaces to and from whatever other stuff you want to use (OpenCL,
> DRI2 in X11, etc.).
>
> * You want to minimise CPU/power use when doing one or a small number of
> live encodes/decodes (for example, video calling or screen recording).
> Here performance is not really the issue - any of these solutions suffices
> but we should try to avoid it being too hard to use.
>
> So, what do you think libyami brings to any of these cases?  I don't
> really see anything beyond the additional codec support* - have I missed
> something?


> libyami also (I believe, correct me if I'm wrong) has Intel-specificity -
> this is significant given that mesa/gallium has very recently gained VAAPI
> encode support on AMD VCE (though I think it doesn't currently work well
> with lavc, I'm going to look into that soon).
>
> I haven't done any detailed review of the patches; I'm happy to do so if
> people are generally in favour of having the library.
>
> Thanks,
>
> - Mark
>
>
> * Which is fixable.  Wrt VP8, I wrote a bit of code but abandoned it
> because I don't know of anyone who actually cares about it.  Do you have
> some useful case for it?  If so, I'd be happy to implement it.  I am
> already intending to do VP9 encode when I have hardware available; VP9
> decode apparently already works though I don't have hardware myself.
>
We would like to have that, and would appreciate it if you could make it
happen. BTW, how much faster could it be compared to libvpx? I know it
depends on the GPU; I just want a rough idea.
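One way to get a rough per-machine answer is to time a software encode against a hardware encode of the same input, discarding the output. A sketch; `vp9_vaapi` is a hypothetical encoder name here, since lavc had no VP9 VAAPI encoder at the time of this thread, and `in.webm` is a placeholder:

```shell
# Sketch: compare software vs. hypothetical hardware VP9 encode speed.
# -benchmark makes ffmpeg print timing; -f null - discards the output.
SW_CMD="ffmpeg -benchmark -i in.webm -c:v libvpx-vp9 -f null -"
HW_CMD="ffmpeg -benchmark -hwaccel vaapi -hwaccel_output_format vaapi \
-i in.webm -c:v vp9_vaapi -f null -"
echo "$SW_CMD"
echo "$HW_CMD"
```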

>
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>

