[Ffmpeg-devel-irc] ffmpeg-devel.log.20160917

burek burek021 at gmail.com
Sun Sep 18 03:05:02 EEST 2016


[02:19:37 CEST] <cone-399> ffmpeg 03Steven Liu 07master:3ea28f3f79ed: doc/muxers: add flv muxer document into doc/muxers
[10:08:57 CEST] <kinnu323> Hey.. I am trying to set up an FFmpeg dev environment and I got this error while doing `build`. Can someone help me with this?
[10:09:54 CEST] <kinnu323> Error : make: execvp: ./version.sh: Permission denied
[10:09:54 CEST] <kinnu323> GEN	libavutil/ffversion.h
[10:09:55 CEST] <kinnu323> /bin/sh: 1: ./version.sh: Permission denied
[10:09:55 CEST] <kinnu323> make: *** [libavutil/ffversion.h] Error 126
[10:29:36 CEST] <ubitux> permission denied on execution? did you mess up the file permissions? or maybe you're trying to run it on a non-executable mount?
[10:45:33 CEST] <cone-195> ffmpeg 03Carl Eugen Hoyos 07master:44bcb636c1a7: lavc/libvpxenc: Avoid vp8 transparency encoding with auto-alt-ref.
[12:41:46 CEST] <kinnu323> Yes, it is on a mounted device! Now it's running.. Thanks :)
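(For anyone hitting the same error: the build scripts in the source tree need their execute bits, so a likely fix is `chmod +x version.sh` from the source root, or moving the tree off a filesystem mounted noexec, e.g. remounting with `mount -o remount,exec <mountpoint>`. Both commands here are illustrative, not from the log.)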
[13:35:05 CEST] <durandal_17> nice, integer overflows in unsharp mplayer crap filter
[13:53:52 CEST] <cone-747> ffmpeg 03Steven Liu 07master:27714b462d1b: lavf/http: deprecate user-agent option
[13:58:28 CEST] <cone-747> ffmpeg 03Paul B Mahol 07master:d790887d1c6e: avfilter/vf_unsharp: check if scalebits is too high
[13:58:29 CEST] <cone-747> ffmpeg 03Paul B Mahol 07master:4096bb176b39: avfilter/vf_unsharp: limit matrix size in either direction to 23
[14:10:47 CEST] <cone-747> ffmpeg 03Philip Langdale 07master:8a066697023e: avcodec/cuvid: Fully re-initialize the parser after a flush.
[14:10:48 CEST] <cone-747> ffmpeg 03Philip Langdale 07master:ee88dcb2b0fe: avcodec/cuvid: Check for non 420 chroma formats - they aren't supported
[14:34:07 CEST] <durandal_17> ubitux: paletteuse should use framesync directly, because it should stop processing if first stream reached eof
[14:36:01 CEST] <atomnuker> will push the truehd encoder as soon as fate finishes here
[14:43:19 CEST] <cone-747> ffmpeg 03Jai Luthra 07master:15b86f480a9c: mlpenc: Working MLP/TrueHD encoder
[14:43:20 CEST] <cone-747> ffmpeg 03Rostislav Pehlivanov 07master:d4b36be1229a: Changelog: update with TrueHD and MLP encoders
[14:45:22 CEST] <JEEB> najs
[15:20:23 CEST] <ubitux> durandal_17: yeah i guess; but this is related to the "new" option, so i let it up to you to do that :)
[15:22:25 CEST] <durandal_17> ubitux: it's also related to normal operation: when the first stream changes params and you then want the second stream to give input again, otherwise it would wait forever
[15:23:26 CEST] <ubitux> why would you want input again?
[15:24:05 CEST] <durandal_17> ffmpeg at least forgets about the second input frame when reinitializing framesync/dualinput
[15:28:42 CEST] <ubitux> the filter should have the palette in memory
[15:29:41 CEST] <durandal_17> ubitux: yes, but dualinput expects new one
[15:47:01 CEST] <zedd1234> Hi! I'm new and would like to start contributing to Outreachy, could anyone help me out? Thanks
[15:49:13 CEST] <Chloe> Hi zedd1234
[15:49:16 CEST] <Chloe> ping durandal_17
[15:49:42 CEST] <zedd1234> Hi
[15:49:59 CEST] <durandal_17> atomnuker: bump lavc minor when adding new encoders?
[15:51:48 CEST] <Chloe> zedd1234: I guess you've seen https://trac.ffmpeg.org/wiki/SponsoringPrograms/Outreachy/2016-12#QualificationTasks
[15:52:30 CEST] <zedd1234> Yeas
[15:52:36 CEST] <zedd1234> Yes*
[15:52:47 CEST] <Chloe> Do you have anything in mind?
[15:54:20 CEST] <zedd1234> I'm not sure what XPM decoder/encoder means, but Improve Selftest coverage looks interesting.
[15:55:28 CEST] <Chloe> https://en.wikipedia.org/wiki/X_PixMap this is XPM
[15:56:01 CEST] <durandal_17> for coverage you need to build and install ffmpeg and also fetch the samples
[15:57:50 CEST] <zedd1234> Ohh Thanks a lot I'll get on with this
[15:58:26 CEST] <durandal_17> you could also create a new ffmpeg video source filter using the libqrencode lib
[15:59:09 CEST] <durandal_17> or if that is too easy, qrscanner
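(A rough sketch of the core libqrencode call such a source filter would wrap; everything outside the qrencode.h API below is illustrative, not an actual filter:)

    #include <qrencode.h>
    #include <stdio.h>

    int main(void)
    {
        /* version 0 = pick smallest symbol, level M error correction, 8-bit mode */
        QRcode *qr = QRcode_encodeString("https://ffmpeg.org", 0, QR_ECLEVEL_M, QR_MODE_8, 1);
        if (!qr)
            return 1;

        /* qr->data is width*width bytes; bit 0 of each byte is the module (1 = dark).
         * A video source filter would scale this grid into the output frame instead. */
        for (int y = 0; y < qr->width; y++) {
            for (int x = 0; x < qr->width; x++)
                putchar((qr->data[y * qr->width + x] & 1) ? '#' : ' ');
            putchar('\n');
        }

        QRcode_free(qr);
        return 0;
    }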
[15:59:55 CEST] <atomnuker> durandal_17: not lavc micro?
[16:00:19 CEST] <durandal_17> atomnuker: micro is for very small things
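(For context: those bumps are the LIBAVCODEC_VERSION_* defines in libavcodec/version.h; roughly, major for API/ABI breaks, minor for new features such as new encoders, micro for very small changes. The numbers below are illustrative only:)

    #define LIBAVCODEC_VERSION_MAJOR  57   /* API/ABI breaks */
    #define LIBAVCODEC_VERSION_MINOR  57   /* new features, e.g. new encoders */
    #define LIBAVCODEC_VERSION_MICRO 100   /* very small changes */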
[16:09:25 CEST] <cone-747> ffmpeg 03Paul B Mahol 07master:0e7d2c60e99f: avfilter/vf_overlay: support J formats too
[16:09:26 CEST] <cone-747> ffmpeg 03Paul B Mahol 07master:97f50d1c624d: avfilter/vf_overlay: add YUVA422P to alpha_pix_fmts
[16:36:25 CEST] <cone-747> ffmpeg 03Rostislav Pehlivanov 07master:38c3fc940447: lavc: bump minor (after adding TrueHD and MLP encoders)
[16:52:17 CEST] <JEEB> atomnuker: has the truehd encoder's output been tested against any hw decoders btw?
[16:52:30 CEST] <JEEB> or well, any non-lavc decoders :)
[16:54:49 CEST] <atomnuker> nope
[16:56:31 CEST] <JEEB> gotcha
[16:59:08 CEST] <atomnuker> still has some problems and introduces pops on very high volume transients
[16:59:41 CEST] <JEEB> ah
[16:59:44 CEST] <atomnuker> (not overflows though, still looking for a reason it does that)
[17:01:44 CEST] <atomnuker> it has to happen somewhere during filter application (since the decoder doesn't report a failed lossless check)
[19:18:55 CEST] <cone-747> ffmpeg 03Paul B Mahol 07master:22bdba7a93ba: doc/filters: add two lut2 examples
[20:18:50 CEST] <cone-747> ffmpeg 03Michael Niedermayer 07master:a88092317076: avformat/http: Fix #ifdef FF_API_HTTP_USER_AGENT
[21:05:11 CEST] <durandal117> somebody broke dvd_subtitle
[21:05:45 CEST] <durandal117> next policy: each regression introduced -> +24hour ban
[21:10:50 CEST] <durandal117> Input stream #0:3 (subtitle): 1054 packets read (3665668 bytes); 2 frames decoded;
[21:28:35 CEST] <rcombs> introduce regression -> you have to add a test for it before doing anything else
[21:29:16 CEST] <JEEB> yup
[21:29:53 CEST] <Chloe> Are timestamps fixed now then?
[21:31:07 CEST] <durandal117> Chloe: timestamps?
[21:31:21 CEST] <Chloe> yeah there was an issue with null timestamps, no?
[21:31:51 CEST] <JEEB> do you mean the ffmpeg.c stuff with nonzero starting timestamps?
[21:32:16 CEST] <JEEB> I know the movenc.c check was changed that it's less obvious now but technically INT_MAX is still incorrect there
[21:32:34 CEST] <JEEB> if I remember the check correctly
[21:32:48 CEST] <durandal117> JEEB: now itsoffset works with movenc
[21:32:57 CEST] <JEEB> oh
[21:33:03 CEST] <JEEB> when did it get fixed?
[21:33:11 CEST] <JEEB> because I tested last evening
[21:33:27 CEST] <durandal117> what revision?
[21:33:49 CEST] <JEEB> 51000b99
[21:33:53 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=commit;h=51000b994514e64a6c5039e179f20c9e24f87c45
[21:34:09 CEST] <JEEB> because this stopped the check getting triggered in movenc.c in my test case I think?
[21:34:28 CEST] <JEEB> there's still the frame duplication logic in ffmpeg.c
[21:34:56 CEST] <JEEB> which seems to think that it has to duplicate frames until that 300 second mark which is the value of itsoffset
[21:35:05 CEST] <durandal117> JEEB: how did you test?
[21:35:24 CEST] <JEEB> same as yesterday?
[21:35:29 CEST] <JEEB> or well, the day before
[21:35:38 CEST] <JEEB> I think I posted the lavfi input test
[21:35:54 CEST] <JEEB> http://up-cat.net/p/56fcb551
[21:37:03 CEST] <JEEB> lines 82-84
[21:37:11 CEST] <durandal117> JEEB: test with a normal file, not lavfi
[21:37:19 CEST] <JEEB> ok, I did that too I'm pretty sure
[21:37:27 CEST] <JEEB> that's just the one that doesn't depend on a specific input thing
[21:37:35 CEST] <JEEB> let me upgrade my VM and I will re-test
[21:44:54 CEST] <JEEB> ok, fetching
[21:49:05 CEST] <JEEB> durandal117: btw why would lavfi be any different?
[21:49:22 CEST] <JEEB> well, I know it could in theory, but it should give you pretty good input timestamps
[21:53:17 CEST] <JEEB> durandal117: hmm, first test looks OK. let me test more
[21:54:31 CEST] <durandal117> itsoffset just doesn't play well with lavfi; I didn't explore it further
[21:54:38 CEST] <JEEB> ok
[21:54:47 CEST] <BtbN> Is there any player for Windows that supports HEVC acceleration via DXVA?
[21:54:52 CEST] <JEEB> mpv
[21:55:38 CEST] <JEEB> --hwdec=dxva2-copy gets you even the opengl renderer quality (and at least on my 960 thing it's not even slow with the copy)
[21:56:15 CEST] <BtbN> hm, kind of looking for a more VLC/MPC-HC like player with a nicely integrated UI
[21:56:29 CEST] <BtbN> But mpv will do for testing
[21:56:35 CEST] <JEEB> MPC-HC with LAV from the last 1.5 years should do as well I think
[21:56:42 CEST] <BtbN> Only Software-Decoding
[21:57:19 CEST] <JEEB> no, it also has DXVA2 for HEVC
[21:57:34 CEST] <BtbN> Doesn't work for me for some reason then
[21:57:37 CEST] <JEEB> I mean, LAV was one of the first things to utilize the dxva2 hevc stuff in lavc
[21:57:59 CEST] <BtbN> Yes, ffmpeg/lavc does. But it still needs some application support. Which MPC-HC seems to be lacking
[21:58:22 CEST] <JEEB> and I'm trying to tell you that LAV Video has supported that for quite a while now :P
[21:59:20 CEST] <BtbN> Well, I'm currently playing a HEVC file with MPC-HC, and it clearly tells me it's software-decoding.
[21:59:35 CEST] <JEEB> did you enable DXVA2 for HEVC in LAV Video?
[21:59:50 CEST] <JEEB> did you enable DXVA2 in LAV Video at all?
[21:59:54 CEST] <BtbN> Where do I do that? The settings for that seem quite minimal.
[22:00:15 CEST] <JEEB> and you say you want a GUI...
[22:00:34 CEST] <JEEB> anyways, if it's separate then go to LAV Video's properties and change it there
[22:00:53 CEST] <JEEB> if it's the stuff that came with MPC-HC only ("internal") then there's buttons in the internal filters part of options
[22:01:01 CEST] <BtbN> The only config it seems to have for that is selecting the Enhanced Video Renderer, which adds a green checkmark in front of DXVA.
[22:01:30 CEST] <JEEB> are you using the internal or non-internal LAV Video?
[22:01:31 CEST] <BtbN> ah, found it
[22:01:45 CEST] <BtbN> HEVC is indeed unchecked
[22:02:34 CEST] <BtbN> Oh, I can just check it...
[22:02:35 CEST] <JEEB> durandal117: tested with a file and I still get timestamps beginning with zero :<
[22:02:38 CEST] <BtbN> took that for a "status indicator"
[22:03:15 CEST] <BtbN> yes, works fine now.
[22:03:20 CEST] <BtbN> Wonder why it was unchecked by default.
[22:03:53 CEST] <JEEB> stuff like haswell and broadwell have their own sw decoders in the DXVA2 interface, among other things
[22:04:13 CEST] <durandal117> JEEB: command?
[22:04:54 CEST] <JEEB> yeah, seems like this command does the same thing as lavfi :/
[22:05:24 CEST] <JEEB> durandal117: let me post it fully so we can dissect whether I'm doing anything completely dumb
[22:06:56 CEST] <durandal117> JEEB: you should get start:some value for each stream
[22:08:38 CEST] <JEEB> durandal117: it seems like it starts muxing audio with a nonzero start timestamp, and then starts getting video... which it then decides to duplicate a lot until the 300s point
[22:08:43 CEST] <JEEB> let me post it
[22:09:29 CEST] <JEEB> also I can make the sample available for testing purposes
[22:10:13 CEST] <JEEB> http://up-cat.net/p/3eb5221d
[22:10:34 CEST] <JEEB> you can simplify it a lot by removing libx264 and switching to mpeg4 etc
[22:11:01 CEST] <JEEB> lines 205 to 210
[22:11:17 CEST] <JEEB> before that it was encoding/muxing audio
[22:13:43 CEST] <BtbN> philipl, what do you think about exposing the deinterlacer in cuvid?
[22:15:20 CEST] <JEEB> durandal117: and as far as I can see that's some duplication logic @ ffmpeg.c
[22:17:20 CEST] <durandal117> JEEB: and what ffmpeg -i output says?
[22:17:41 CEST] <JEEB> durandal117: that's the beginning of that output, no?
[22:17:44 CEST] <JEEB> I haven't cut it at all
[22:18:07 CEST] <JEEB> Duration: 00:39:30.04, start: -0.007000, bitrate: 3217 kb/s
[22:18:17 CEST] <JEEB> because ffmpeg.c applies itsoffset afterwards
[22:18:53 CEST] <JEEB> durandal117: want the sample for local testing? although this seems like the same thing as the lavfi input thing
[22:19:09 CEST] <JEEB> where ffmpeg.c's frame duplication logic goes wee-wee
[22:19:11 CEST] <durandal117> JEEB: of output
[22:19:40 CEST] <JEEB> first fragment is:
[22:19:40 CEST] <JEEB>                 fragment_dts = 0
[22:19:41 CEST] <JEEB>                 fragment_duration = 10010000
[22:20:10 CEST] <durandal117> no, just ffmpeg -i output after muxing
[22:20:33 CEST] <JEEB> http://up-cat.net/p/4fce9a1c
[22:21:03 CEST] <JEEB> which matches the start timestamp of the ISMV fragments
[22:23:46 CEST] <durandal117> JEEB: [ismv @ 0x3e0f980] Track 1 starts with a nonzero dts 2999786667, while the moov already has been written. Set the delay_moov flag to handle this case.
[22:24:26 CEST] <JEEB> that's the audio track, and moov doesn't make sense in this part
[22:24:43 CEST] <JEEB> it's a fragmented ISOBMFF fork (ISMV)
[22:24:56 CEST] <JEEB> but just to test it, let's see
[22:25:16 CEST] <JEEB> durandal117: as an extra thing you can note it muxing the audio just fine before that message
[22:26:33 CEST] <JEEB> what the fuck
[22:26:34 CEST] <mifritscher> hi
[22:26:35 CEST] <JEEB> just happened
[22:27:03 CEST] <JEEB> durandal117: I don't think that flag works with ISMV at all
[22:27:24 CEST] <JEEB> I'm getting *very* weird stuff with a non-seekable output (which ISMV should be) if I try utilizing it
[22:27:38 CEST] <JEEB> suddenly I have *zero* fragment timestamp boxes
[22:27:40 CEST] <JEEB> which is not correct
[22:28:10 CEST] <JEEB> let's see if I let it write it to a file
[22:31:19 CEST] <JEEB> durandal117: if I use that flag it creates a single long segment in the beginning so that's not what was meant, either
[22:31:59 CEST] <JEEB> and I still get those 8991 duplicates
[22:34:24 CEST] <JEEB> durandal117: the duplication seems to be done in do_video_out @ ffmpeg.c
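(For reference, the flag suggested by that log message is the muxer option `-movflags +delay_moov`; as JEEB finds above, with a fragmented ISMV output it just produces one long initial fragment rather than fixing the offset. The command syntax here is illustrative.)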
[22:36:48 CEST] <durandal117> JEEB: looks like only -c copy works
[22:37:07 CEST] <JEEB> that makes sense since it can't duplicate coded pictures
[22:39:03 CEST] <mifritscher> I'm the guy regarding resurrecting ffserver on the mailing list. reynaldo, I was asked to contact you about this?
[22:40:50 CEST] <JEEB> durandal117: -vsync vfr seems to help
[22:42:26 CEST] <JEEB> durandal117: with ISMV the mov/mp4 etc decoder doesn't read the ISMV timestamps so the start continues to be zero if I read it with ffmpeg, but if I use boxdumper with tfxd box support I can see it being correct
[22:42:58 CEST] <JEEB> now the real problem is if I can offset this by unix timestamps :)
[22:43:16 CEST] <JEEB> (which is what the live streaming server wants)
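(For readers without the paste: the shape of the command being tested is roughly `ffmpeg -itsoffset 300 -i input.mkv -c:v libx264 -c:a aac -vsync vfr -f ismv out.ismv`. File names and codec choices here are placeholders, the real command is in the pastebin above; per JEEB, -vsync vfr is what stops ffmpeg.c from duplicating frames up to the offset.)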
[22:50:35 CEST] <JEEB> durandal117: ok, with -itsoffset 1474145231  things start totally falling down
[22:53:37 CEST] <durandal117> JEEB: like?
[22:55:19 CEST] <JEEB> my pastebin provider will hate me for these but I'm POST'ing it atm
[22:55:35 CEST] <JEEB> seems to just have a lot of frame dropping because some PTS/DTS calculations go awry
[22:59:29 CEST] <JEEB> had to cut the stderr a bit http://up-cat.net/p/fb4ee897
[23:00:53 CEST] <JEEB> it decoded a whopping 11088 frames of which most were dropped, audio seemed to have a similar case although it got more packets through
[23:01:40 CEST] <JEEB> note: ffmpeg.c requires -t to include the itsoffset if you want to limit transcoding, while avconv.c counts it from the start point
[23:01:52 CEST] <JEEB> all these funky differences :)
[23:08:23 CEST] <philipl> BtbN: I think it makes sense.
[23:08:45 CEST] <BtbN> the question is, does it do frame doubling?
[23:09:00 CEST] <philipl> There's no easy way for mpv (for example) to deinterlace after the fact if you're using interop
[23:09:50 CEST] <philipl> I don't know. But even if not, I think the option is still worth exposing.
[23:11:04 CEST] <BtbN> it looks horrible
[23:11:18 CEST] <BtbN> which might just be because it's 25 fps...
[23:11:32 CEST] <nevcairiel> decoders currently cant output two frames per input
[23:11:51 CEST] <BtbN> Yeah, I think cuvid can't either, so it's not frame doubling
[23:12:04 CEST] <nevcairiel> cuvid as an API can do that
[23:12:12 CEST] <nevcairiel> you have to ask it for the second one
[23:12:35 CEST] <BtbN> hm, all I see is setting the deinterlace mode to Weave/Bob/Adaptive
[23:13:25 CEST] <philipl> If the fields are being passed in one by one, you can get a full frame back for each.
[23:13:29 CEST] <nevcairiel> you have to set CUVIDPROCPARAMS.second_field to extract the second one
[23:13:55 CEST] <nevcairiel> 0/1 for both frames
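(A rough sketch of what that looks like against the cuvid API; error handling and the surrounding decoder state are omitted, and the decoder handle / picture index are assumed to come from the usual display callback:)

    #include <nvcuvid.h>

    /* Map both output frames produced from one interlaced input picture. */
    static void map_both_fields(CUvideodecoder dec, int pic_idx, int top_field_first)
    {
        for (int field = 0; field < 2; field++) {
            CUVIDPROCPARAMS vpp = { 0 };
            CUdeviceptr devptr  = 0;
            unsigned int pitch  = 0;

            vpp.progressive_frame = 0;               /* interlaced content */
            vpp.top_field_first   = top_field_first;
            vpp.second_field      = field;           /* 0 = first output frame, 1 = second */

            if (cuvidMapVideoFrame(dec, pic_idx, &devptr, &pitch, &vpp) != CUDA_SUCCESS)
                return;

            /* ... copy back or interop the deinterlaced frame at devptr here ... */

            cuvidUnmapVideoFrame(dec, devptr);
        }
    }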
[23:14:38 CEST] <philipl> also, if you know that you're going to get two back, you can manage your state carefully and refuse to consume the next input until after you pull out the second frame.
[23:14:51 CEST] <philipl> I put ridiculous logic into the crystalhd decoder to do this.
[23:15:44 CEST] <JEEB> lol
[23:16:23 CEST] <BtbN> could also do stupid stuff like 4 plane NV12
[23:16:33 CEST] <BtbN> with one frame in plane 1 and 2, and another one in 3 and 4
[23:16:42 CEST] <philipl> heh. Terrifying but functional.
[23:16:52 CEST] <nevcairiel> that's just silly, you can't support that without a bunch of hackery everywhere
[23:17:16 CEST] <BtbN> Stuff that doesn't support it will see it as one normal NV12 frame.
[23:17:40 CEST] <BtbN> And players/code aware of that specialty can extract the second frame
[23:17:43 CEST] <nevcairiel> you can stop thinking about that anyway, i would veto it away =p
[23:18:10 CEST] <philipl> So back to my first question. Is it feeding in individual fields or a field-pair on the input side?
[23:18:19 CEST] <nevcairiel> we actually have a new decoder API to allow m:n input/output patterns, but if you use it, any user wanting to use the decoder also needs to switch to the new API
[23:18:22 CEST] <philipl> If it's individual fields, then you don't have a problem with output frames
[23:18:27 CEST] <BtbN> it's just feeding frames
[23:18:30 CEST] <BtbN> and each frame has two fields
[23:18:43 CEST] <BtbN> so you have to process the same frame twice
[23:18:43 CEST] <philipl> BtbN: for everything? even mpeg2?
[23:18:51 CEST] <philipl> that's how the parser works?
[23:19:27 CEST] <BtbN> Well, it always encodes one whole frame
[23:19:36 CEST] <BtbN> so it will come out as such
[23:19:45 CEST] <philipl> You're specifically talking transcode right?
[23:19:53 CEST] <BtbN> no, pure decoding
[23:20:26 CEST] <philipl> Ok. Well, then no easy answers.
[23:20:35 CEST] <BtbN> lavc just isn't prepared for an encoder outputting more than one frame per invocation
[23:20:39 CEST] <BtbN> *decoder
[23:20:59 CEST] <philipl> New api is ready?
[23:21:00 CEST] <BtbN> So the caller would have to call it twice, once with the frame, and once with some dummy data
[23:21:05 CEST] <nevcairiel> actually it is, but like I said, if you use it, any API consumer has to use the new  API
[23:21:17 CEST] <BtbN> Does ffmpeg.c do that?
[23:21:19 CEST] <philipl> I'm sure wm4 can't wait.
[23:21:21 CEST] <nevcairiel> no
[23:21:41 CEST] <BtbN> I wouldn't mind implementing the deinterlacing that way
[23:21:51 CEST] <BtbN> The non-deinterlacing path would still work with the old api
[23:22:06 CEST] <nevcairiel> you cant make a decoder support both APIs, i dont think
[23:22:28 CEST] <wm4> I've sent patches for ffmpeg.c
[23:22:40 CEST] <philipl> Technically we already have many decoders coming out of cuvid.c
[23:22:48 CEST] <wm4> but they don't output timestamps correctly in certain obscure corner cases tested by FATE
[23:22:56 CEST] <wm4> (for decoding)
[23:23:02 CEST] <nevcairiel> philipl: duplicating it because of that is stupid though 
[23:23:06 CEST] <wm4> because ffmpeg wants to set and get DTS values on flush packets
[23:23:11 CEST] <philipl> nevcairiel: to be sure.
[23:25:04 CEST] <BtbN> Deinterlacing with one frame per two fields is horrible though
[23:25:22 CEST] <BtbN> https://btbn.de/images/test/deint/out_cuvid_adaptive.mkv
[23:35:32 CEST] <BtbN> philipl, https://btbn.de/images/test/deint/ the 3 deint modes, by plain stupid replacing Weave with Bob/Adaptive
[23:38:29 CEST] <philipl> BtbN: but you're getting that regardless of the deint mode you choose.
[23:38:37 CEST] <philipl> to go back to your original question.
[23:38:42 CEST] <philipl> So you're doomed anyway.
[23:38:59 CEST] <BtbN> hm?
[23:39:05 CEST] <nevcairiel> with weave you get both original fields in the first frame
[23:39:28 CEST] <BtbN> yes, it only exists for comparison's sake
[23:40:26 CEST] <philipl> nevcairiel: true, but that assumes you have a deinterlacer that can act on a weaved frame and the ability to invoke it.
[23:41:03 CEST] <nevcairiel> thats how all our deinterlacers work, or all our software decoders
[23:41:14 CEST] <BtbN> it works fine for cuvid as well
[23:41:25 CEST] <BtbN> cuvid + yadif does what it's supposed to do
[23:42:45 CEST] <philipl> We don't have a deinterlacer that works on the interop path
[23:44:17 CEST] <durandal117> is there any dvdsub master here?
[23:47:33 CEST] <philipl> So yeah, if you're prepared to copy back you can use a standard deinterlacer.
[23:49:53 CEST] <BtbN> Hm, wouldn't it be possible to emulate the old API on top of the new one?
[23:53:14 CEST] <nevcairiel> apparently you can give a decoder both APIs
[23:53:20 CEST] <nevcairiel> but it would probably be very ugly
[23:53:39 CEST] <BtbN> Yeah, I don't see anything that would prevent that
[23:54:49 CEST] <BtbN> Do I understand the new API right, that for a decoder one would implement send_packet and receive_frame?
[23:55:10 CEST] <BtbN> And then the API caller, well, sends a packet and receives a frame?
[23:55:24 CEST] <BtbN> or multiple frames, that is
[23:55:41 CEST] <nevcairiel> you would implement send_packet and receive_frame
[23:55:45 CEST] <nevcairiel> yes
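(For reference, the caller side of that API is the avcodec_send_packet()/avcodec_receive_frame() pair, which naturally handles a decoder returning more than one frame per packet; a minimal decode loop, with setup and error handling trimmed, looks roughly like this:)

    #include <libavcodec/avcodec.h>

    /* Feed one packet and drain every frame the decoder has ready for it.
     * avctx, pkt and frame are assumed to be set up by the caller; pkt == NULL flushes. */
    static int decode_packet(AVCodecContext *avctx, const AVPacket *pkt, AVFrame *frame)
    {
        int ret = avcodec_send_packet(avctx, pkt);
        if (ret < 0)
            return ret;

        while (1) {
            ret = avcodec_receive_frame(avctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;               /* needs more input / fully drained */
            if (ret < 0)
                return ret;             /* real error */

            /* ... use frame (e.g. both deinterlaced fields arrive as separate frames) ... */
            av_frame_unref(frame);
        }
    }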
[23:56:55 CEST] <BtbN> from a quick grep, cuvid would actually be the first decoder to implement that.
[23:57:52 CEST] <BtbN> And I don't see why not. As long as decode is still implemented, nothing should break.
[23:59:13 CEST] <nevcairiel> if you can avoid a lot of code duplication in cuvid, otherwise it would seem rather ugly
[23:59:56 CEST] <BtbN> It should be trivial to emulate the old API on top of the new one, as it's known that it will always be one frame per packet for the Weave case
[00:00:00 CEST] --- Sun Sep 18 2016

