[Ffmpeg-devel-irc] ffmpeg-devel.log.20150808

burek burek021 at gmail.com
Sun Aug 9 02:05:02 CEST 2015


[00:11:34 CEST] <jamrial> BBB: think you could add (or feel like adding) vp9 support to tests/checkasm sometime? it would make benching and testing new versions of the functions much easier
[00:11:39 CEST] <jamrial> benching ipred and mc right now is a pita for example
[00:11:55 CEST] <BBB> uh, ok, I guess
[00:12:01 CEST] <BBB> I havent looked much at it tbh
[00:12:15 CEST] <BBB> I have my own (private, never cleaned up) version of an asm testing tool thats probably very similar
[00:12:23 CEST] <BBB> I use it for all kinds of things, including ffvp9 optimizations
[00:12:57 CEST] <BBB> Ill see what I can do to add tests to checkasm
[00:13:04 CEST] <BBB> where does the code live?
[00:13:18 CEST] <jamrial> tests/checkasm folder
[00:15:48 CEST] <Gramner> writing checkasm unit tests is pretty easy as long as you know what the valid input ranges are (for stuff like pixel data it's fairly obvious)
[00:16:06 CEST] <BBB> hm, yeah, that looks fairly similar
[00:16:35 CEST] <Gramner> the tests are also run as part of FATE
[00:16:43 CEST] <BBB> I guess one difference is that mine doesnt have to know about cputypes, it just automatically calls the cpu flag masking function with random flags (~0>>n) and checks the resulting cpu functions to see if it changed from the previous call or the c function
[00:17:19 CEST] <Gramner> you don't need to know anything about cpu types for writing the actual tests
[00:17:33 CEST] <Gramner> the code that calls the tests handles that
[00:18:03 CEST] <BBB> I know, but checkasm.c has some stuff about cpu types, which looked weird/out-of-place
[00:18:03 CEST] <BBB> can you force running one type of test?
[00:18:03 CEST] <BBB> mine has string recognition of testnames
[00:18:09 CEST] <BBB> ok
[00:19:01 CEST] <Gramner> it runs all tests, but you can specify a function for benchmarking purposes
[00:19:27 CEST] <BBB> so if I want to run just the 10bit version of intra_pred_8x8_chroma h264 planar, how do I do that?
[00:19:52 CEST] <BBB> (ideally just the sse2 version, not the mmx version)
[00:20:02 CEST] <BBB> (although admittedly mine doesnt allow you to specify that yet :-p)
[00:21:51 CEST] <jamrial> apparently that's not currently possible. checkasm only accepts a seed value as argument or --bench
[00:22:24 CEST] <Gramner> i don't really see any point in skipping tests, it runs pretty fast aside from benchmarks
[00:22:52 CEST] <BBB> so, lets say you have a 32x32 idct
[00:22:54 CEST] <BBB> thats quite slow
[00:23:14 CEST] <BBB> so you dont want to run other tests while tweaking that simd, or run that while tweaking other simd (e.g. the idct16x16)
[00:24:00 CEST] <Gramner> run tests/checkasm/checkasm and see for yourself, it finishes pretty much instantly
[00:24:16 CEST] <Gramner> benchmarks take more time, but you can specify those using --bench=<function name>
[00:24:37 CEST] <Gramner> (using just --bench runs all benchmarks)
[00:24:49 CEST] <Gramner> compile by running make checkasm
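
For reference, the workflow described above boils down to roughly the following; the seed argument mentioned earlier can be passed to reproduce a specific run:

    make checkasm
    tests/checkasm/checkasm                       # run all correctness tests (optionally pass a seed)
    tests/checkasm/checkasm --bench               # run every benchmark
    tests/checkasm/checkasm --bench=<function>    # benchmark a single function
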
[00:27:22 CEST] <Gramner> now it could perhaps be useful to do more iterations etc. and in that case it might make more sense to be able to specify tests, but the code is really new so it hasn't been tweaked a lot yet
[00:29:18 CEST] <BBB> so what I have in mind is this (rough copypaste of some personal scripts here)
[00:29:20 CEST] <BBB> http://pastebin.com/qwgPf8uq
[00:30:25 CEST] <BBB> the way I run that is something along the lines of testasm -n 20 mc 0 0 4 4 0 0 1 1 1 1
[00:30:51 CEST] <BBB> and then it runs just the mc function for width=64, filter=0 (sharp), put (not avg), and subpel interpolation in both directions
[00:31:20 CEST] <BBB> so if I make changes to that function, I can see the results on just that function immediately without having to run the other 2*2*2*5*4-1
[00:31:26 CEST] <BBB> *number_of_optimizations
[00:35:22 CEST] Action: BBB gonna go home, brb
[00:35:29 CEST] <BBB> jamrial: Ill try to write some tests
[00:35:46 CEST] <jamrial> ok, thanks
[01:28:40 CEST] <BBB> Gramner: if you think its interesting, Ill try to port some of these features to checkasm, I think its useful
[01:28:48 CEST] <BBB> (Im not sure anyone else would use it)
[01:30:39 CEST] <Gramner> that'd be nice. both correctness verification and benchmarking are useful for sure. and it's a lot more useful when committed instead of just being various scripts that someone has locally
[01:31:48 CEST] <Gramner> on x86, checkasm also verifies that you don't use the upper half of 32-bit integers and that all clobbered callee-saved registers are saved and restored properly
[01:31:51 CEST] <BBB> yeah :-p
[01:31:53 CEST] <BBB> sorry about that
[01:32:34 CEST] <BBB> so, the 32bit integer thing is… kinda weird, because I think we use ptrdiff_t everywhere
[01:32:45 CEST] <BBB> (so we fix that at a different level than x264)
[01:32:53 CEST] <BBB> I mean this is mostly to fix int stride not being sign-extended, right?
[01:32:57 CEST] <Gramner> x264 does the same, just with intptr_t
[01:33:19 CEST] <Gramner> but sometimes an int width or whatever is added to a pointer
[01:33:51 CEST] <BBB> oh I see
[01:33:56 CEST] <Gramner> I caught some of those after writing checkasm tests
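
A rough illustration of the class of bug being discussed, with made-up function names: on x86-64 the upper 32 bits of a register holding a plain int argument are not guaranteed to be sign-extended, so asm that adds it to a pointer without an explicit sign extension can compute a bogus address, which is exactly the case checkasm provokes by filling the upper register halves with garbage:

    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical DSP prototypes, for illustration only */
    void copy_block_bad (uint8_t *dst, const uint8_t *src, int       stride, int h);
    void copy_block_good(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, int h);

    /* With the int prototype, asm doing "add srcq, strideq" without first
     * sign-extending the 32-bit argument may read from the wrong address
     * whenever the caller left garbage in the register's upper half. */
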
[01:33:59 CEST] <BBB> yeah sounds useful
[01:34:01 CEST] <BBB> cool
[01:34:14 CEST] <BBB> do you also do xmm clobber testing?
[01:34:17 CEST] <Gramner> yes
[01:34:29 CEST] <BBB> cool
[01:34:41 CEST] <BBB> (all kind of stuff mine didnt do :-p)
[01:37:58 CEST] <cone-899> ffmpeg 03Niklesh 07master:ecc806a224d7: movtextdec: Fix memory leaks by freeing mem allocs correctly
[01:38:17 CEST] <philipl> BtbN: Did that Turkish guy ping you about temporal layer encoding?
[02:25:25 CEST] <wm4> we really need some sort of async decoding API in lavc
[02:25:40 CEST] <wm4> or reject async hw decoders in lavc and put them in a different lib
[02:26:08 CEST] <wm4> it's just so damn impractical and clunky to wrap an async decoder in a sync API
[02:29:54 CEST] <philipl> Yeah.
[02:30:41 CEST] <wm4> the fucking corner cases
[02:32:29 CEST] <baptiste> isn't it just a matter of "wait for callback to return" ?
[02:32:41 CEST] <wm4> no
[02:33:02 CEST] <wm4> you want buffering between input and output
[02:33:27 CEST] <wm4> the sync API wants a full lock-stepped operation with one output frame per input frame (possibly with a delay)
[02:33:38 CEST] <nevcairiel> async as in N:M relationship between input/output, not async as in threads and callbacks
[02:33:49 CEST] <wm4> a hw decoder can request input packets and output decoded frames any time it wants
[02:34:01 CEST] <wm4> N:M makes it easier
[02:34:10 CEST] <wm4> but full async would be helpful too
[02:34:19 CEST] <nevcairiel> although once you have the N:M api, adding a frame-ready threaded callback is probably not hard
[02:34:40 CEST] <wm4> with N:M you can simplify buffering a lot
[02:34:50 CEST] <wm4> and don't need to do stupid shit to ensure the lock-stepped operation
[02:36:37 CEST] <baptiste> interesting, what's the interest of having an N:M relationship when the lowest possible delay is usually a goal ?
[02:37:51 CEST] <nevcairiel> but this is what would allow lowest delay
[02:38:15 CEST] <nevcairiel> without it, you have to potentially buffer output frames because you cannot return 2 or 3 right away
[02:38:33 CEST] <baptiste> but you can wait
[02:38:45 CEST] <nevcairiel> but then you have delay
[02:38:51 CEST] <nevcairiel> waiting is delay
[02:39:41 CEST] <nevcairiel> and there are also codecs where this is required to function properly ..  say encoding interlaced hevc, for every input frame it would produce two output packets .. which is something that's impossible right now
[02:39:42 CEST] <baptiste> yes, I mean delay in frame not in time
[02:39:47 CEST] <wm4> I can't wait to deprecate the old decoding API (thinking ahead here)
[02:40:23 CEST] <nevcairiel> N:M wouldnt add delay, it could still work 1:1 if thats the best for a specific codec
[02:40:28 CEST] <nevcairiel> but the point is, it doesnt have to :)
[02:40:36 CEST] <rcombs> wm4: wouldn't proper async be significantly faster as well
[02:40:39 CEST] <baptiste> if you do one frame in, one frame out with delay, how can you do better than that ?
[02:40:47 CEST] <wm4> rcombs: depends
[02:41:00 CEST] <wm4> I believe with mmal I already add a lot of buffering
[02:41:04 CEST] <wm4> so it doesn't help that much
[02:41:14 CEST] <wm4> but the mmaldec.c logic is also complex and fickle
[02:41:27 CEST] <nevcairiel> the codec could completely control how it wants data delivered and how it sends frames out
[02:41:30 CEST] <nevcairiel> it can request more input
[02:41:33 CEST] <nevcairiel> or push more output
[02:41:49 CEST] <baptiste> I understand, hence my question of what is the interest
[02:41:51 CEST] <nevcairiel> you dont have to buffer any output frames, or delay their output until you get more input
[02:42:14 CEST] <nevcairiel> because many external decoders work that way
[02:42:21 CEST] <nevcairiel> and wrapping them in ffmpeg is a big pain
[02:42:54 CEST] <baptiste> I get that, but what is the ultimate interest of behaving like that, I'm trying to understand
[02:42:56 CEST] <nevcairiel> the ffmpeg h264 decoder can buffer a bunch of frames at times to compensate, while other h264 decoders will just output them in one go when they are ready (it happens)
[02:43:04 CEST] <baptiste> most settop boxes have very accurate timing
[02:43:13 CEST] <nevcairiel> getting rid of the damn buffering isnt good enough of a reason?
[02:43:19 CEST] <nevcairiel> buffering fully decoded frames in memory isnt fun
[02:43:25 CEST] <baptiste> never fun
[02:43:30 CEST] <baptiste> but don't ask for more then :)
[02:43:42 CEST] <baptiste> ask one frame, output one frame
[02:43:55 CEST] <nevcairiel> but thats not how advanced codecs work
[02:44:05 CEST] <nevcairiel> they are not intra codecs that just decode their input
[02:44:31 CEST] <baptiste> wouldn't most hw decoders in the world be h.264 decoders in settop boxes ?
[02:45:13 CEST] <wm4> no, phones
[02:45:27 CEST] <nevcairiel> many hardware APIs I have seen work in a M:N fashion, they can output more than one output frame when the situation allows for that
[02:45:42 CEST] <baptiste> possibly yeah, but I don't think it would be intra codecs
[02:45:42 CEST] <philipl> Is it enough to extend the existing API to allow the decode/encode to return that the packet was not consumed and should be resubmitted?
[02:46:05 CEST] <nevcairiel> philipl: if we're going to break API, might as well do it properly
[02:46:46 CEST] <baptiste> philipl, you can just wait in that case
[02:48:08 CEST] <baptiste> so far Im not understanding the value in working in a M:N fashion, there must be some hardware consumption value or something like that when decoding 5 frames at once is more energy efficient
[02:48:46 CEST] <nevcairiel> its not about that
[02:49:10 CEST] <wm4> it's also much easier to use
[02:49:25 CEST] <nevcairiel> its just that h264 for example decodes frames like that, at some point it wants to return more than one without needing more input,  to reduce the decoder-internal buffers
[02:49:35 CEST] <baptiste> wm4, not according to lavc, apparently :)
[02:49:41 CEST] <wm4> ?
[02:49:45 CEST] <nevcairiel> the h264 decoder in avcodec has some elaborate delay buffering to compensate for the 1:1 api
[02:50:17 CEST] <baptiste> huh ?
[02:50:31 CEST] <baptiste> if delay is 2 frames, it will wait 2 frames then 1-1
[02:50:51 CEST] <baptiste> the first delay is inevitable
[02:51:36 CEST] <baptiste> the only example where I can see an example like you mention would be b frame packing like old divx was doing for the windows api
[02:51:47 CEST] <baptiste> where in effect you had 2 frames at once
[02:53:29 CEST] <baptiste> also some decoders back in the day could be fed partial packets and would "auto parse" but that was phased out IIRC
[02:56:31 CEST] <peloverde> vpnein still has packed bitstream... errr superframes
[02:56:50 CEST] <nevcairiel> there are also situations where the coding parameters change, where you might want to flush out any frames from before the change before applying the change and resuming decoding
[02:57:05 CEST] <nevcairiel> you cant do that without buffering input or output right now
[02:57:17 CEST] <rcombs> what does M:N refer to?
[02:57:29 CEST] <nevcairiel> M input packets produce N output frames
[02:57:36 CEST] <nevcairiel> right now, its 1:1
[02:57:54 CEST] <wm4> basically decoupling input and output
[02:58:12 CEST] <nevcairiel> well, technically its 1:N, since a decoder can decide not to output a frame
[02:58:12 CEST] <wm4> like a video filter
[02:58:26 CEST] <nevcairiel> but a decoder cannot decide to output two frames
[02:58:40 CEST] <rcombs> nevcairiel: and you can send null packets to get more frames out, but you don't know when that's necessary, yeah?
[02:58:43 CEST] <wm4> not outputting a frame is also sketchy
[02:58:45 CEST] <baptiste> well you could call decode_video without input to flush
[02:59:04 CEST] <wm4> but it sort of works because delaying also works like this
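
For context, the lock-stepped draining being alluded to looks roughly like this with the decode API of the time (a minimal sketch, assuming an already-opened AVCodecContext *avctx and an allocated AVFrame *frame; error handling trimmed):

    #include <libavcodec/avcodec.h>

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;     /* a NULL packet asks a CODEC_CAP_DELAY decoder to drain */
    pkt.size = 0;

    int got_frame = 1;
    while (got_frame) {
        if (avcodec_decode_video2(avctx, frame, &got_frame, &pkt) < 0)
            break;
        if (got_frame) {
            /* consume one delayed frame */
        }
    }
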
[02:59:09 CEST] <nevcairiel> for one that would be something the user code has to control, and how would it ever know when to do that
[02:59:20 CEST] <baptiste> a codec flag
[02:59:26 CEST] <nevcairiel> thats dumb
[02:59:38 CEST] <baptiste> huh, why is it dumb ?
[02:59:45 CEST] <nevcairiel> whats wrong with a clean api
[02:59:45 CEST] <baptiste> we have codec caps since ages
[03:00:00 CEST] <baptiste> sure, what's a clean api ?
[03:00:14 CEST] <nevcairiel> more obscure flags and special behavior is not going to make it any easier to use
[03:00:32 CEST] <baptiste> depends, what's easier to use ?
[03:00:44 CEST] <wm4> baptiste: a clean and simple API
[03:00:52 CEST] <baptiste> sure, do you have more details ?
[03:02:07 CEST] <wm4> I thought libav had a "blueprint" wiki page, but apparently not
[03:02:12 CEST] <nevcairiel> for this particular case, two decode functions that work in tandem, one that submits an AVPacket, and one that retrieves decoded frames .. so you can do: while (more input) { submit(data); while(get(frame)) ...; }
[03:03:02 CEST] <baptiste> from a user standpoint ?
[03:04:38 CEST] <baptiste> if so, thats not a lot different than calling decode_video(null)
[03:06:11 CEST] <nevcairiel> its just much clearer api this way, no special behavior on null, no weird *got_frame parameter anymore, just two functions which take one parameter each and return an appropriate error/success
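
A rough sketch of the decoupled submit/retrieve loop being proposed here; the function names are hypothetical (nothing like this existed in lavc at the time) and the end-of-stream signalling is an assumption:

    AVPacket pkt;
    AVFrame *frame = av_frame_alloc();
    while (read_packet(&pkt) >= 0) {           /* read_packet() is a stand-in for demuxing */
        submit_packet(avctx, &pkt);            /* feed exactly one packet */
        while (receive_frame(avctx, frame) >= 0) {
            /* zero, one or several frames may come out per submitted packet */
        }
    }
    submit_packet(avctx, NULL);                /* assumed: NULL signals end of stream */
    while (receive_frame(avctx, frame) >= 0) {
        /* drain whatever the decoder still holds */
    }
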
[03:06:47 CEST] <wm4> also for mmal it's significantly simpler on the implementation side (and I'd expect the same for similar APIs)
[03:06:55 CEST] <baptiste> I mean you want new names and new prototypes, sure, we can do :)
[03:07:09 CEST] <nevcairiel> shoe-horning things onto the old API is what makes the API so convoluted and complicated to use sometimes
[03:07:18 CEST] <baptiste> the api is really simple
[03:07:30 CEST] <wm4> too simple
[03:07:30 CEST] <nevcairiel> it looks that way at first
[03:07:37 CEST] <nevcairiel> and then you figure out all the special cases
[03:07:39 CEST] <wm4> so you have to deal with weird corner cases
[03:08:26 CEST] <nevcairiel> both wm4 and myself do have our own decently sized projects that make use of all the libraries, so we do deal with these things regularly
[03:08:50 CEST] <baptiste> trust me I have a big project using the libs :)
[03:08:59 CEST] <nevcairiel> some other devs only see ffmpeg.c unfortunately =p
[03:10:15 CEST] <nevcairiel> anyway time to try sleeping in this heat
[03:10:29 CEST] <wm4> hm, does anyone know if ffmpeg.c uses a separate decoder AVCodecContext?
[03:10:30 CEST] <baptiste> Im interested in hearing about the corner cases though
[03:10:34 CEST] <wm4> or does it use the stream's?
[03:10:58 CEST] <nevcairiel> i think it creates a copy
[03:11:10 CEST] <wm4> it does something weird to the extradata here (I hate life)
[03:11:10 CEST] <Daemon404> which is Most Correct
[03:13:18 CEST] <wm4> crystalhd.c has some weird code to "backup" the extradata
[03:13:25 CEST] <wm4> I guess I need this too
[03:13:40 CEST] <baptiste> yeah that messing with AVStream avctx was always weird
[03:13:43 CEST] <nevcairiel> it does this because of the annexb converter, which converts the extradata
[03:13:53 CEST] <wm4> nevcairiel: I know, but why does it matter
[03:14:02 CEST] <wm4> if the decoder context is really a copy...
[03:14:03 CEST] <wm4> oh wait
[03:14:06 CEST] <nevcairiel> but didnt that qsv guy add a parameter to the converter to not process the extradata
[03:14:10 CEST] <wm4> maybe it's libavformat/utils.c?
[03:14:30 CEST] Action: Daemon404 popcorn.h264
[03:14:40 CEST] <wm4> doesn't make sense either
[03:14:50 CEST] <wm4> because forced codecs don't influence libavformat
[03:15:23 CEST] <baptiste> all right guys, have a good one, good luck with your decoder, wm4 
[03:15:56 CEST] <nevcairiel> ffmpeg.c definitely copies the input context
[03:16:14 CEST] <nevcairiel> the problem is that find_stream_info would already invoke all the code and then convert the extradata
[03:16:22 CEST] <nevcairiel> so when the real decoder is opened, it will never see the original
[03:18:03 CEST] <wm4> ok avformat_find_stream_info is opening this decoder
[03:18:13 CEST] <wm4> I don't get why
[03:18:34 CEST] <wm4> maybe this AVOption shit magically causes it (instead of only being interpreted by ffmpeg.c)?
[03:19:27 CEST] <nevcairiel> ffmpeg.c overrides the decoder choices before calling that function
[03:20:25 CEST] <nevcairiel> hm or not
[03:20:28 CEST] <nevcairiel> no idea how it would
[03:20:32 CEST] <nevcairiel> those dont have a codec id
[03:22:44 CEST] <nevcairiel> oh there it is
[03:22:52 CEST] <nevcairiel> yes it can override the codecs before running that function
[03:23:01 CEST] <nevcairiel> but its a bit crude, only one per codec type
[03:23:06 CEST] <nevcairiel> ie. one video codec
[03:23:09 CEST] <nevcairiel> not per-stream
[03:23:22 CEST] <wm4> I bet it's a ffmpeg "improvement"
[03:23:32 CEST] <wm4> it didn't happen in Libav back then
[03:23:42 CEST] <wm4> though in the end it doesn't matter too much
[03:23:59 CEST] <wm4> just that you don't really want to invoke the hw decoder to "analyze" your streams
[03:24:09 CEST] <wm4> what if they're e.g. using an unsupported profile
[03:24:11 CEST] <nevcairiel> i suppose its generally not too bad that it actually probes with the codec you want
[03:42:07 CEST] <wm4> I wonder if I should send my shitty mmal patches for review, or if I should just push them
[03:53:38 CEST] <atomnuker> wm4: send them in to the ML and wait a few days before merging them if there are no problems
[05:05:06 CEST] <philipl> wm4: Oh man, that backup extradata shit.
[09:54:02 CEST] <cone-692> ffmpeg 03Ludmila Glinskih 07master:8ec89681af26: tests/api/api-h264-test: structure changes to avoid duplicate code
[10:15:08 CEST] <michaelni> kierank, are the api-band/seek-test ready to be committed or still need a review or changes ? i think the last patch revisions have no replies on the ML
[11:04:54 CEST] <cone-692> ffmpeg 03Carl Eugen Hoyos 07master:176698260ca7: configure: mpegvideo depends on mpeg_er.
[12:12:24 CEST] <cone-692> ffmpeg 03Carl Eugen Hoyos 07master:7e9cd9962709: lavc: The h263 encoder (also) depends on h263data.o
[13:23:44 CEST] <BBB> JEEB: do you think youll have time to look at the #libav logs?
[13:26:51 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:c382d9e8cbee: swscale: Add sws_alloc_set_opts()
[13:26:52 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:d0e0757e9a96: swscale: Implement alphablendaway for planar 4:4:4 formats
[13:26:53 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:41e733c1ef20: avfilter/graphparser: Do not ignore scale_sws_opts if args == NULL
[13:26:54 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:165fb7eba80c: cmdutils: Export all sws options using a AVDictionary like the other subsystems do
[13:26:55 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:e755954a84c9: ffplay: pass all sws options to the filter graph
[13:38:10 CEST] <wm4> BBB: what are you looking for? (not sure if I have complete logs myself)
[13:42:52 CEST] <BBB> #libav logs from as early as possible (probably march 2011) to now
[13:45:13 CEST] <wm4> mine definitely don't go that far back.. maybe half-way
[14:11:56 CEST] <BBB> wm4: anything is better than nothing, how far back do yours go?
[14:12:47 CEST] <JEEB> BBB: I did work until late and then I've been on the move this whole day in southwest finland
[14:12:58 CEST] <JEEB> so maybe later but haven't had any time so far
[14:13:11 CEST] <BBB> JEEB: thanks!
[14:25:50 CEST] <BtbN> philipl, yes, i told him i don't have any compatible hardware.
[16:31:18 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:d3d776ccf94c: avfilter/vf_scale: apply generic options after flags.
[16:31:19 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:6dbaeed6b7b7: ffmpeg: switch swscale option handling to AVDictionary similar to what the other subsystems use
[16:31:20 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:408c9cf0e21d: cmdutils: Fix overriding flags on the command line.
[17:05:34 CEST] <cone-692> ffmpeg 03Michael Niedermayer 07master:5edab1d207d2: cmdutils: remove sws_opts usage, simplify code
[17:14:51 CEST] <philipl> BtbN: nvenc appears to support temporal layers, but requires the app to specify which layer each frame will go into.
[17:43:27 CEST] <BtbN> philipl, hm, that'd require quite a bit of refactoring. And i'm not sure if it's worth it. If I understand that feature right, all it does is make it possible to play streams at a lower framerate on machines too slow for the real one?
[17:43:34 CEST] <BtbN> Seems like an odd hack to me
[17:49:15 CEST] <philipl> BtbN: Yeah. It would require a lot of logic in our code to control which frames go in which layer
[17:50:12 CEST] <philipl> I suppose it might be used for slow-mo sections in a video too? (Hi ubitux)
[17:50:37 CEST] <philipl> This Turkish guy seems to think it will be important for broadcasters sending 120fps streams to 'old' 60fps-only TVs
[17:51:04 CEST] <BtbN> I highly doubt that.
[17:51:19 CEST] <JEEB> lol
[18:38:38 CEST] <philipl> BtbN: I am also skeptical.
[19:02:41 CEST] <BBB> so… mr filter experts. if I asked you about the ten most exciting audio and video filters… which ones would you claim they are?
[19:03:34 CEST] <wm4> svp4lyfe
[19:05:23 CEST] <BBB> ?
[19:06:48 CEST] <wm4> motion interpolation seems to be one of the most requested features
[19:06:56 CEST] <wm4> although I don't quite know why
[19:07:00 CEST] <wm4> it adds terrible artifacts
[19:15:17 CEST] <Compn> BBB : recoloring filter and the one that can edit photos and recombine them i forgot its name right now. 
[19:15:53 CEST] <wm4> all of the postproc filters!
[19:16:25 CEST] <Compn> BBB : for audio, i'd like to see karaoke and reversekaraoke , to isolate dialog or to exclude dialog on center 
[19:16:50 CEST] <BBB> but which ones are the most exciting filters that we already have?
[19:18:21 CEST] <Compn> everyone says yadif deinterlacer
[19:28:47 CEST] <ubitux> BBB: depends
[19:28:55 CEST] <ubitux> i like lut3d related filters and ebur128 :D
[19:29:03 CEST] <ubitux> but obviously i'm biased
[19:29:26 CEST] <wm4> I bet most low-profile libavfilter uses are for vf_yadif
[19:29:28 CEST] <ubitux> palette stuff maybe too
[19:29:32 CEST] <wm4> ebur might be used a lot too
[19:29:39 CEST] <wm4> (from my impressions)
[19:30:00 CEST] <ubitux> BBB: but to be honest the main drag in writing new filters currently is the inability to keep a lot of context
[19:30:04 CEST] <ubitux> or said differently, no seeking
[19:30:15 CEST] <ubitux> you see that in thumbnail filter typically
[19:30:28 CEST] <ubitux> the filter is going to pick just one frame out of N
[19:30:49 CEST] <ubitux> technically it doesn't need to keep them all in memory if it could query back one
[19:30:56 CEST] <ubitux> (it based its selection on histograms)
[19:31:08 CEST] <ubitux> but since filters are stream based, it has to cache them all
[19:32:03 CEST] <wm4> seeking really wouldn't fit in well with libavfilter's model
[19:32:14 CEST] <ubitux> maybe
[19:32:24 CEST] <wm4> maybe a more general filter model (think dshow/gstreamer) would be more appropriate
[19:32:36 CEST] <wm4> but it'd end up being messed into libavfilter, and the result would be terrible
[19:32:51 CEST] <ubitux> well technically the seeking wouldn't be in lavfi
[19:33:01 CEST] <ubitux> lavfi needs a model to request a frame from the user
[19:33:11 CEST] <BBB> so yadif and ebur
[19:33:13 CEST] <BBB> thats it
[19:33:28 CEST] <ubitux> BBB: well, we have many cool filters
[19:33:41 CEST] <BBB> tell me a few cool ones that you think are particularly useful
[19:33:54 CEST] <ubitux> any color filter
[19:34:11 CEST] <ubitux> hue, eq, color*, lut, lut3d/haldclut, ...
[19:34:32 CEST] <ubitux> palettegen/paletteuse for hq gif
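
The usual two-pass workflow for that gif case looks roughly like this (file names are placeholders):

    ffmpeg -i input.mkv -vf palettegen palette.png
    ffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif
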
[19:34:40 CEST] <BBB> hqdn3d?
[19:34:45 CEST] <ubitux> hqx/xbr for scaling pixel based material 
[19:34:49 CEST] <Compn> dont bring that up around ubitux :P
[19:34:49 CEST] <BBB> or, whats the best denoise filter?
[19:34:54 CEST] <Compn> er around wm4
[19:34:59 CEST] <ubitux> we have various denoisers
[19:35:01 CEST] Action: Compn confusing things up
[19:35:20 CEST] <ubitux> owdenoise, dctdnoiz, fftfilt maybe
[19:35:28 CEST] <BBB> I know we have 20
[19:35:31 CEST] <BBB> which is the best?
[19:35:34 CEST] <ubitux> try it
[19:35:38 CEST] <BBB> ...
[19:35:40 CEST] <ubitux> i'm currently writing another one
[19:35:44 CEST] <BBB> which one do we recommend to novice users?
[19:35:49 CEST] <BBB> the ones without eyes
[19:36:05 CEST] <ubitux> i don't recommend any because i don't know which is the best
[19:36:09 CEST] <Compn> hqdn3d is cheap 
[19:36:14 CEST] <Compn> owdenoise is slower iirc
[19:36:25 CEST] <Compn> which usually means better
[19:36:33 CEST] <Compn> sorry i have also not tested these in years
[19:36:34 CEST] <ubitux> BBB: fieldmatch/decimate or pullup are used quite a bit
[19:36:39 CEST] <ubitux> (unrelated to denoising)
[19:36:56 CEST] <ubitux> testsrc is used a lot for testing ;)
[19:37:32 CEST] <ubitux> the usual scale, drawtext, crop filters are used a lot too
[19:37:42 CEST] <ubitux> ass/subtitles filter for burning subtitles too
[19:38:06 CEST] <ubitux> you can add pad and overlay filter in the scale/drawtext/crop list
[19:39:32 CEST] <ubitux> ah, and [t]blend can be very useful
[19:39:47 CEST] <ubitux> anyway, i'll end up listing them all
[19:40:27 CEST] <nevcairiel> the only one i have ever used is yadif because deint is mandatory with todays displays
[19:40:34 CEST] <BBB> ok thats better
[19:40:38 CEST] <BBB> so what about audio filters?
[19:40:45 CEST] <BBB> give me 5 awesome audio filters
[19:40:48 CEST] <BBB> I already have ebur128
[19:41:26 CEST] <nevcairiel> biquad is pretty powerful, but you need a degree in audio dsp to figure out how to use it properly :)
[19:43:31 CEST] <ubitux> BBB: resampling obviously, pan/amerge filter to play with channels, aevalsrc for synth (similar to geq video one) or sine
[19:44:02 CEST] <ubitux> i often use asplit when i need to mix it with showspectrum/showwaves/showcqt
[19:44:58 CEST] <ubitux> BBB: http://trac.ffmpeg.org/wiki/FancyFilteringExamples
[19:45:49 CEST] <ubitux> ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,xbr=3
[19:45:50 CEST] <ubitux> :D
[19:46:04 CEST] <BBB> ok that should do
[19:46:27 CEST] <BBB> most awesome codecs of the past 5 years? I selected prores, vp9, h264 high bitdepth, hevc, g2m, jpeg2k
[19:46:34 CEST] <ubitux> gif
[19:46:39 CEST] <wm4> ubitux: it died after a few seconds
[19:46:41 CEST] <BBB> and for audio wma lossless and atrac3+
[19:46:52 CEST] <ubitux> wm4: it stabilized* yeah
[19:51:05 CEST] <ubitux> BBB: didn't you forget opus?
[19:51:15 CEST] <BBB> probably yes
[19:51:32 CEST] <ubitux> ffv1 got some attention recently too
[19:53:41 CEST] <BBB> well ffv1 is like gif
[19:53:46 CEST] <BBB> it exists
[19:53:48 CEST] <BBB> we fixed some stuff
[19:53:50 CEST] <BBB> yay
[19:53:52 CEST] <BBB> next
[19:55:35 CEST] <ubitux> it was improvements more than fixing
[19:55:40 CEST] <ubitux> (for ffv1)
[19:55:53 CEST] <ubitux> also standardization process
[19:56:04 CEST] <BBB> ok, ffv1.N?
[19:56:13 CEST] <BBB> 3?
[19:56:15 CEST] <BBB> 4?
[19:56:20 CEST] <ubitux> 1.3 or 1.4, dericed or michaelni should know better
[19:57:16 CEST] <ubitux> in the past 5 years i think we saw the rise (and fall?) of utvideo?
[19:57:25 CEST] Action: ubitux tickles Daemon404
[19:58:28 CEST] <BBB> 1.3 I guess
[19:58:31 CEST] <atomnuker> how is atrac3+ awesome?
[19:58:34 CEST] <BBB> changelog mentions only 1.3
[20:02:58 CEST] <durandal_1707> Nobody tested atadenoise, deband, dynaudnorm, astats, and afade
[20:03:30 CEST] <durandal_1707> afade is used a lot when googling
[20:04:15 CEST] <ubitux> ah yeah, fade filters
[20:04:22 CEST] <ubitux> ted uses fade filters typically
[20:05:08 CEST] <durandal_1707> and do not forget compand
[20:05:26 CEST] <ubitux> atempo is quite nice btw
[20:05:47 CEST] <ubitux> but again we're going to end up listing them all
[20:06:33 CEST] <durandal_1707> noise filter is also useful for testing
[20:07:06 CEST] <durandal_1707> karaoke can be done with pan
[20:07:20 CEST] <ubitux> (and ass filter @_@)
[20:07:49 CEST] <Daemon404> [18:57] <@ubitux> in the past 5 years i think we saw the rise (and fall?) of utvideo?
[20:07:52 CEST] <Daemon404> [18:57]  * ubitux tickles Daemon404
[20:07:54 CEST] <Daemon404> fall yes
[20:07:59 CEST] <Daemon404> so much so that nobody has harassed me to add 10bit support
[20:08:04 CEST] <Daemon404> while upstream has had it for like 1-2 years
[20:08:13 CEST] <Daemon404> but... eh
[20:08:25 CEST] <Daemon404> i think it is still widely used in anime land
[20:08:33 CEST] <Daemon404> i dont know of anything replacing it
[20:08:39 CEST] <Daemon404> /storytime
[20:09:06 CEST] <ubitux> i guess huffyuv with the support of the new pix fmt, or ffv1 could be a good replacement
[20:09:26 CEST] <JEEB> I think Ut Video might actually be more used in video editing than animoo world
[20:09:29 CEST] <Daemon404> iirc the reason people used utvideo over huffy was tiled threads
[20:09:31 CEST] <Daemon404> er, striped.
[20:09:40 CEST] <JEEB> because the damn thing has plugins for every damn media framework
[20:09:48 CEST] <JEEB> VFW, QT, MF, DS
[20:10:14 CEST] <Daemon404> >implying anything uses MF 
[20:10:34 CEST] <JEEB> indeed
[20:10:40 CEST] <JEEB> but VFW and QT is already a lot
[20:10:42 CEST] <Daemon404> clearly it needs gst.
[20:10:50 CEST] <JEEB> gst has it through lavc
[20:10:59 CEST] <Daemon404> lavc doesnt have 10bit
[20:11:10 CEST] <JEEB> very few are using that it seems
[20:11:22 CEST] <JEEB> although it is called "Ut Video Pro"
[20:11:24 CEST] <JEEB> or so
[20:11:26 CEST] <Daemon404> the only people doing 10bit lossless i know are using FFV1.
[20:11:33 CEST] <JEEB> so I'd see some corporate'y folk using it
[20:11:35 CEST] <JEEB> just for the name
[20:12:48 CEST] <ubitux> huffyuv supports > 8bit now
[20:13:08 CEST] <Daemon404> but not low-latency threading (tiled or striped)
[20:13:13 CEST] <Daemon404> which is what ut video was aimed at
[20:13:15 CEST] <Daemon404> for NLE
[20:14:03 CEST] <ubitux> ok
[20:26:41 CEST] <jamrial> ubitux, BBB: regarding fancy audio filters, someone wrote this aevalsrc example some time ago http://pastebin.com/XzfPqWcG
[20:29:20 CEST] <ubitux> ah yeah, i remember; it was a port
[20:29:46 CEST] <ubitux> ffplay -f lavfi ...
[20:30:03 CEST] <ubitux> it's pretty cool
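
The linked paste is fairly elaborate; a much simpler aevalsrc invocation, along the lines of the sine example in the filter documentation, is:

    ffplay -f lavfi "aevalsrc=sin(440*2*PI*t):s=8000"
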
[20:31:33 CEST] <wm4> I only see relatively inelegant hacks
[20:31:38 CEST] <wm4> that would be cool if done properly
[20:31:45 CEST] <wm4> *nag nag*
[20:49:14 CEST] <kierank> durandal_1707: ping
[20:50:42 CEST] <durandal_1707> kierank: pong
[20:52:52 CEST] <kierank> durandal_1707: do you remember why the numbers in smptebars don't match
[20:52:55 CEST] <kierank> like for example blue
[20:53:43 CEST] <durandal_1707> perhaps colorspace conversion is done?
[20:54:03 CEST] <durandal_1707> what value blue should be?
[20:54:15 CEST] <wm4> wasn't there something about this being wrong because it outputs rgb instead of yuv?
[20:54:40 CEST] <durandal_1707> that was, no more true
[20:54:47 CEST] <kierank> durandal_1707: in the code itself there are two different values
[20:55:35 CEST] <wm4> does this code set the output colorspace?
[20:56:03 CEST] <wm4>  av_frame_set_colorspace(picref, AVCOL_SPC_BT709);
[20:56:51 CEST] <wm4> libavfilter has no proper concept of colorspaces, so maybe the AVFrame-based passing of it somehow fails at some point
[21:01:43 CEST] <durandal_1707> kierank: one is 75% blue
[21:03:27 CEST] <kierank>  838 static const uint8_t   blue[4] = {  32, 240, 118, 255 };
[21:03:34 CEST] <kierank> the spec doesn't use those numbers
[21:04:01 CEST] <kierank> oh i see
[21:04:17 CEST] <kierank> ignore that then
[21:04:21 CEST] <kierank> but there's still a bug i need to find
[21:04:29 CEST] <kierank> will have to do it when i have access to scope
[21:05:52 CEST] <durandal_1707> what scope you want to use?
[21:06:14 CEST] <kierank> I have a vectorscope (sd only) at work
[21:06:23 CEST] <kierank> and dericed said hd fails on his scope too
[21:13:41 CEST] <durandal_1707> you mean pure blue fail or everything is shifted?
[21:14:44 CEST] <durandal_1707> smptebars and smptehdbars have different colorspace set
[21:16:44 CEST] <kierank> everything fails
[21:17:00 CEST] <kierank> it shouldn't matter because the scope is yuv all the way
[21:36:34 CEST] <kierank> durandal_170: so I think smptebars should all be 75%
[21:36:38 CEST] <kierank> that might be why it fails
[22:02:54 CEST] <cone-557> ffmpeg 03Andreas Cadhalpun 07master:2e9c8be8342c: avcodec: add missing FF_API_CODEC_ID guard
[22:02:55 CEST] <cone-557> ffmpeg 03Andreas Cadhalpun 07master:8bd74aafe85c: avfilter: remove obsolete function declarations
[22:02:56 CEST] <cone-557> ffmpeg 03Andreas Cadhalpun 07master:9126ae4b6ba3: use avfilter_pad_get_{type,name} accessor functions
[22:02:57 CEST] <cone-557> ffmpeg 03Andreas Cadhalpun 07master:e66a43f69455: graphdump: include internal.h for AVFilterPad
[22:06:52 CEST] <philipl> I've got some wmv3 files here that are reporting '-99' as their profile numeric value.
[22:07:09 CEST] <philipl> That's, obviously, causing the vdpau profile verification to fail. but the files play fine
[22:12:20 CEST] <durandal_170> kierank: do you have 8bit yuv file with correct colors?
[22:16:20 CEST] <durandal_170> also do you mean both smptehdbars and smptebars are wrong?
[22:36:50 CEST] <nevcairiel> -99 is the value for unknown profile phh
[22:36:56 CEST] <nevcairiel> philipl: 
[22:37:01 CEST] <philipl> EH.
[22:37:15 CEST] <philipl> Yet, mediainfo accurately reports the profile
[22:37:40 CEST] <philipl> So we're ballsing that up
[22:37:59 CEST] <nevcairiel> profiles are often not viewed as crucial data, more as informative
[22:38:07 CEST] <nevcairiel> so not every codec sets it very reliably
[22:38:20 CEST] <philipl> No doubt, but at least one and possibly two problems here.
[22:38:37 CEST] <philipl> the vdpau init code tries to map the reported avctx->profile to known supported profiles on the vdpau side
[22:38:51 CEST] <philipl> The vc1dec code appears to check profile values
[22:40:05 CEST] <philipl> So, where does avctx->profile get set?
[22:40:26 CEST] <philipl> In this particular case, the profile is read and stored in the private vc1 context and that's how the internal decoder decisions are made
[22:40:33 CEST] <philipl> Is avctx->profile from the demuxer?
[22:41:15 CEST] <philipl> Ok, so the bug is vc1dec doesn't set avctx->profile
[22:41:55 CEST] <nevcairiel> i think it does
[22:42:14 CEST] <nevcairiel> but maybe not in the wmv3 case
[22:42:21 CEST] <philipl> Not in any case.
[22:42:31 CEST] <nevcairiel> avctx->profile = v->profile;
[22:42:34 CEST] <nevcairiel> seems to =p
[22:42:37 CEST] <philipl> hrm
[22:43:16 CEST] <philipl> Certainly one of my test files is wmv
[22:43:20 CEST] <philipl> one is m2ts
[22:43:22 CEST] <philipl> both don't work
[22:43:42 CEST] <philipl> the m2ts one is a vc-1 bluray extract
[22:43:45 CEST] <philipl> so that's not wmv3 for sure
[22:44:49 CEST] <nevcairiel> i dont even do profile checks for vc1, its universally advanced profile really =p
[22:44:56 CEST] <nevcairiel> only have to exclude wmv3 complex
[22:46:46 CEST] <philipl> and yet here we are.
[22:49:17 CEST] <philipl> Looks like the vdpau init code runs before the profile is extracted from the bitstream
[22:54:19 CEST] <ubitux> who is rtogni?
[22:56:07 CEST] <iive> Roberto Togni?
[22:56:34 CEST] <iive> rxt
[22:56:48 CEST] <ubitux> he seemed to have replaced avcodec_alloc_frame() with av_frame_alloc() in mplayer just now
[22:57:01 CEST] <ubitux> but... someone probably wants to look deeper here
[22:57:44 CEST] <ubitux> because avcodec_alloc_frame() needs to be freed with avcodec_free_frame(), and av_frame_alloc() with av_frame_free()
[22:57:48 CEST] <wm4> this stuff can be pretty tricky, although refcount not being on by default probably helps compatibility for stuff like mplayer here
[22:57:56 CEST] <ubitux> iirc they're not exactly compatible regarding refcounting
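
In short, the pairing being discussed (a minimal sketch based on the lavu API of that era):

    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return AVERROR(ENOMEM);
    /* ... decode into frame ... */
    av_frame_free(&frame);   /* unreferences and frees; sets frame to NULL */
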
[22:58:18 CEST] <iive> ubitux: write and ask him.
[22:58:21 CEST] <ubitux> well they're probably going to have some surprises anyway
[22:58:34 CEST] <ubitux> iive: no i don't really care, i haven't used mplayer for years now
[22:58:36 CEST] <philipl> wm4: Where does the hwaccel->init() get called in mpv? I'm struggling to find it
[22:58:48 CEST] <ubitux> it just passed in my mbox and i was a bit concerned
[22:59:12 CEST] <iive> ubitux: so you are still subscribed.
[22:59:26 CEST] <iive> just reply to the commit and repeat what you said here.
[22:59:27 CEST] <ubitux> iive: yeah, seems so
[22:59:36 CEST] <wm4> philipl: vd_lavc_hwdec.init is called in init_avctx
[22:59:39 CEST] <Shiz> mplayer still exists?
[22:59:45 CEST] <wm4> philipl: before the decoder is created
[22:59:49 CEST] <iive> rxt is reasonable guy.
[22:59:58 CEST] <philipl> wm4: I'm trying to work out the call chain to the ffmpeg hwaccel->init method
[23:00:16 CEST] <philipl> To work out how to reconcile this profile stuff being out of order
[23:00:19 CEST] <wm4> philipl: ah... I don't know either without doing some digging
[23:00:41 CEST] <wm4> philipl: but it's not called from mpv AFAIK
[23:00:45 CEST] <wm4> (or any API user)
[23:01:06 CEST] <philipl> Yeah, except I can't see it called in ffmpeg.git either except in end_frame(!)
[23:01:26 CEST] <rtogni> avcodec_alloc_frame() is just return av_frame_alloc() (libavcodec/utils.c)
[23:01:32 CEST] <wm4> vdpau.c:264:    return avctx->hwaccel->init(avctx);
[23:01:35 CEST] <wm4> this one?
[23:01:48 CEST] <wm4> (264 is a line number)
[23:01:52 CEST] <ubitux> oh you're on the channel, my bad
[23:01:54 CEST] <rtogni> so if they behaved differently originally, that code is already broken
[23:02:21 CEST] <wm4> I remember when I tried to keep my code compatible to both pre- and post-refcounting
[23:02:25 CEST] <wm4> it wasn't pretty
[23:03:05 CEST] <wm4> philipl: I guess end_frame is where parsing is done, and the frame just needs to be "rendered"?
[23:03:12 CEST] <rtogni> and unless I missed something, they are never freed in mplayer
[23:03:21 CEST] <wm4> "is done" as in was done before calling the function
[23:08:39 CEST] <philipl> wm4: I guess so
[23:08:52 CEST] <philipl> but if it's delayed that long, how can it possibly be running before the vc1_decode_init
[23:08:55 CEST] <philipl> Anyway.
[23:09:28 CEST] <wm4> that's not possible
[23:09:38 CEST] <wm4> unless another decoder was created before it
[23:10:28 CEST] <philipl> I'll keep digging.
[23:15:17 CEST] <philipl> Ok.
[23:15:33 CEST] <philipl> It's because vc1_decode_init calls ff_get_format very early and that triggers the hwaccel init
[23:16:01 CEST] <philipl> That needs to be reversed.
[23:16:06 CEST] <wm4> fun
[23:18:12 CEST] <philipl> Gawd.
[23:44:58 CEST] <philipl> FJKLLKSDFJKL
[23:45:20 CEST] <philipl> vc1_decode_init calls msmpeg4_decode_init which calls h263_decode_init, which calls get_format.
[23:45:23 CEST] <philipl> ARGH
[23:46:13 CEST] <wm4> lol
[23:46:28 CEST] <wm4> yeah, that must be part of the mpegvideo mess
[23:46:58 CEST] <philipl> quite.
[23:47:01 CEST] <philipl> So much pain
[23:47:20 CEST] <philipl> And I obviously can't tell if the vc1 specific init stuff must come after that or if I can move it up
[23:52:47 CEST] <philipl> Seems to have worked, however.
[00:00:00 CEST] --- Sun Aug  9 2015


