[Ffmpeg-devel-irc] ffmpeg-devel.log.20160222

burek burek021 at gmail.com
Tue Feb 23 02:05:02 CET 2016


[00:04:56 CET] <nevcairiel> jkqxz: its really not that hard to track that yourself, you need to setup get_format and get_buffer2 callbacks anyway, so you can just check in those calls if hwaccel is used, return errors otherwise, which will also make the decode call error out
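A minimal standalone sketch of the pattern nevcairiel describes, using hypothetical toy types (not the real libavcodec API): a get_format-style callback scans the offered format list, and returning "none" when the hardware format is unacceptable is what makes the later decode call error out.

```c
#include <assert.h>

/* Toy model (hypothetical types, NOT the real libavcodec API) of gating
 * decoding on hwaccel: the get_format callback walks the offered format
 * list and returns a "none" value if the hardware format is not there,
 * which the caller then turns into a decode error. */
enum ToyPixFmt { TOY_FMT_NONE = -1, TOY_FMT_YUV420P = 0, TOY_FMT_VAAPI = 1 };

/* Accept only the hardware format; reject any software fallback. */
static enum ToyPixFmt get_format_hw_only(const enum ToyPixFmt *fmts)
{
    for (int i = 0; fmts[i] != TOY_FMT_NONE; i++)
        if (fmts[i] == TOY_FMT_VAAPI)
            return TOY_FMT_VAAPI;
    return TOY_FMT_NONE; /* caller makes the decode call error out */
}
```

The same check can be mirrored in a get_buffer2-style callback, so both entry points fail consistently when hwaccel is unavailable.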
[00:10:16 CET] <michaelni> j-b, nevcairiel would the warning i had suggested initially "Hardware accelerated decoding with frame threading does require drivers and hw acceleration APIs to be thread safe and/or requires complex locking to be done by the user application, otherwise it can result in artifacts or crashes. This combination is thus discouraged" solve this ?
[00:10:51 CET] <j-b> michaelni: I'll ask courmisch, yes.
[00:11:18 CET] <iive> this reminds me I have a draft that i haven't sent...
[00:11:27 CET] <nevcairiel> how does more text change anything, still contains the same base elements "can have bugs, discouraged"
[00:12:02 CET] <nevcairiel> courmisch isnt a person known for his spirit of cooperation, so 
[00:12:23 CET] <iive> imho, the problem is how hackishly the threading is implemented.
[00:14:02 CET] <iive> for example, cloning and merging context is something that should be avoided.
[00:14:22 CET] <atomnuker> how come the s302m encoder is still experimental?
[00:14:34 CET] <nevcairiel> iive: thats not really related to hwaccel problems
[00:14:38 CET] <iive> all things for decoding a frame/slice should be in their own (sub)context
[00:14:57 CET] <iive> nevcairiel: it is, because threading is started, before get_format is called
[00:15:23 CET] <nevcairiel> strictly speaking hwaccel could turn itself on in the middle of a file when the format changes
[00:15:31 CET] <nevcairiel> you would always be in threading mode then
[00:15:55 CET] <j-b> nevcairiel: courmisch is _my_ problem...
[00:16:23 CET] <iive> nevcairiel: format change is similar to decoder reinit
[00:16:52 CET] <iive> it could change everything, including resolution
[00:17:06 CET] <nevcairiel> sure, and it does that just fine even in threading mode
[00:17:29 CET] <iive> can you turn threading mode on and off, during decoder work?
[00:17:35 CET] <nevcairiel> no
[00:17:39 CET] <iive> why not?
[00:17:44 CET] <wm4> enabling hwaccel midstream seems like an exceptionally obscure case anyway
[00:17:56 CET] <nevcairiel> because its not designed to be able to do that
[00:18:05 CET] <nevcairiel> it has a fixed size thread pool, and it cant change
[00:18:09 CET] <iive> that's because threading is a hack
[00:18:32 CET] <nevcairiel> but thats unrelated to every thread having its own codec context
[00:18:43 CET] <nevcairiel> you are smushing all things together without logic =p
[00:19:09 CET] <iive> you didn't let me finish...
[00:19:19 CET] <iive> whatever...
[00:19:24 CET] <nevcairiel> this method  allows intra only codecs to multithread with extremely little effort
[00:19:47 CET] <iive> why is the threaded pool fixed size?
[00:19:58 CET] <nevcairiel> because its easier this way
[00:20:06 CET] <nevcairiel> also because of our decode API
[00:20:16 CET] <nevcairiel> its not very flexible about such things
[00:21:36 CET] <nevcairiel> creating an over-engineered perfect solution isnt hard in theory, in practice you have to rewrite everything
[00:21:56 CET] <iive> bingo
[00:22:06 CET] <nevcairiel> well, feel free to start
[00:22:11 CET] <nevcairiel> we'll see you in a decade or two
[00:22:11 CET] <nevcairiel> :D
[00:22:20 CET] <iive> do you know of a project that wants to rewrite everything ffmpeg ?
[00:23:27 CET] <nevcairiel> rewriting everything is not a good goal for anyone, you have to do incremental improvements otherwise nothing is going to get finished ever
[00:24:05 CET] <nevcairiel> people have been planning to revise the decode API for ages now, once such an API may exist one could start thinking about how to use it to make internals more elegant
[00:24:26 CET] <wm4> lol I'll try
[00:25:18 CET] <iive> the decode API is fine.. it can handle much more abuse.
[00:25:43 CET] <nevcairiel> no, its far too rigid
[00:26:04 CET] <nevcairiel> for example you couldnt lower the thread count because there is no way to output the excess frame
[00:26:11 CET] <wm4> lacks the ability to output >1 frames per packet
[00:26:20 CET] <iive> you don't have to output them
[00:26:28 CET] <iive> just don't start new ones...
[00:26:30 CET] <nevcairiel> buffering them for ever is hardly ideal
[00:27:21 CET] <iive> wm4: it allows calling with NULL input?
[00:27:30 CET] <nevcairiel> not in the middle of the stream
[00:27:33 CET] <wm4> that enters flush mode
[00:27:33 CET] <nevcairiel> NULL is only for EOF
[00:27:37 CET] <iive> why not...
[00:29:22 CET] <wm4> hm I don't even know how that'd interact with normal decoding or our threading, but those hw wrappers aren't going to like it
[00:30:15 CET] <nevcairiel> i bet there is a bunch of things that really wont like it, not to mention that it sounds like weird-ass API to call it with NULL all the time
[00:31:10 CET] <wm4> it sounds like even if it works, it'd make threading less efficient
[00:31:16 CET] <iive> well, i did mention that it is abuse.
[00:31:57 CET] <michaelni> the previous call could return a special return code to indicate that there are more frames ready or that could be retrieved without additional input
[00:32:37 CET] <wm4> michaelni: yes
[00:32:48 CET] <nevcairiel> thats also terrible api design =p
[00:33:10 CET] <wm4> just introducing decoupled in/output would be ideal
[00:33:15 CET] <nevcairiel> indeed
[00:33:38 CET] <iive> that also has problems of its own.
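A toy sketch (hypothetical names, not the real libavcodec API that later grew out of this discussion) of what "decoupled in/output" means: input and output are separate calls, so a decoder can emit zero or several frames per packet without special return codes, signalling back-pressure with an EAGAIN-style value instead.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical decoupled decode API: send_packet and receive_frame are
 * independent, connected only by an internal frame queue. */
#define TOY_EAGAIN (-11)
#define TOY_QUEUE_MAX 4

typedef struct ToyDecoder {
    int queue[TOY_QUEUE_MAX]; /* pending decoded "frames" */
    int nb_queued;
} ToyDecoder;

/* Feed one packet; pretend each packet decodes into two frames. */
static int toy_send_packet(ToyDecoder *d, int pkt)
{
    if (d->nb_queued + 2 > TOY_QUEUE_MAX)
        return TOY_EAGAIN; /* caller must drain the output side first */
    d->queue[d->nb_queued++] = pkt * 2;
    d->queue[d->nb_queued++] = pkt * 2 + 1;
    return 0;
}

/* Fetch one frame if any is available. */
static int toy_receive_frame(ToyDecoder *d, int *frame)
{
    if (d->nb_queued == 0)
        return TOY_EAGAIN;
    *frame = d->queue[0];
    memmove(d->queue, d->queue + 1, (--d->nb_queued) * sizeof(int));
    return 0;
}
```

With this shape, lowering the thread count mid-stream is representable: the excess buffered frames simply drain out through receive calls before new input is accepted.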
[00:49:30 CET] <cone-318> ffmpeg 03Mats Peterson 07master:cf85a20d920f: lavc/rawdec: Align AV_PIX_FMT_RGB24 correctly
[00:49:31 CET] <cone-318> ffmpeg 03Josh de Kock 07master:67f8a0be5455: configure&avdevice/jack: Fixed issue #43 JACK indev support on OSX
[00:50:17 CET] <ethe> thanks for your help michaelni :)
[00:50:25 CET] <michaelni> np
[01:41:39 CET] <michaelni> we need backup mentors for GSoC, I mean if you add yourself in a week, that's after we would have been rejected for lack of backup mentors
[01:42:07 CET] <michaelni> 6 out of 10 projects have no backup mentor listed
[01:46:45 CET] <michaelni> they complained about backups in the past IIRC and we had much more backup mentors back then
[01:48:00 CET] <michaelni> is it really needed for me to ask people privately each year one by one ?
[01:50:49 CET] <durandal_1707> put me as backup for swscale and I dunno what else
[01:52:07 CET] <durandal_1707> is truehd project really still active?
[01:52:46 CET] <michaelni> added you for swscale
[01:53:13 CET] <durandal_1707> michaelni: is it me or xyz output looks buggy?
[01:53:25 CET] <jamrial> durandal_1707: the encoder? there's supposedly a wip from some years ago, but nobody touched it since then. It's been a gsoc project but no student ever chose it
[01:53:51 CET] <michaelni> truehd lacks a backup mentor
[01:54:07 CET] <michaelni> how can i test/reproduce the xyz bug ?
[01:54:53 CET] <durandal_1707> from rgba to xyz and back it should be exact same output
[01:56:06 CET] <durandal_1707> well, reasonably the same, not bitexact
[01:57:20 CET] <michaelni> philipl, are you available as mentor or backup mentor ? we still need 5 backup mentors: https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2016
[01:59:12 CET] <durandal_1707> michaelni: add me for mxf too
[02:00:11 CET] <jya> nevcairiel: looking again at this yadif filter, so I send a NULL AVFrame to drain the graph, but is there a way to flush it so it can be fed more AVFrames, or must I recreate one ?
[02:01:22 CET] <jya> I would like to avoid reparsing the string with avfilter_graph_parse2 if it can be avoided 
[02:01:48 CET] <durandal_1707> its fast
[02:03:18 CET] <durandal_1707> michaelni: so what does going from rgba to xyz and back to rgba look like?
[02:04:34 CET] <durandal_1707> ubitux/michaelni: #ffmpeg still lists 2.8 in wall msg
[02:04:40 CET] <jya> durandal_1707: if it's in relation to my question, it does look efficient. But seeing it's just to use a replacement for avpicture_deinterlace and i see it in our code being used in a loop, i don't want to take on the risk to have regression speed-wise
[02:05:45 CET] <michaelni> durandal_1707, lena.png -vf format=rgba,format=xyz12le,format=rgba looks reasonable to the eye
[02:05:59 CET] <durandal_1707> jya: how/when you flush?
[02:07:06 CET] <jya> durandal_1707: our stuff expects one frame (interlaced) in and one out immediately. as yadif won't output a frame immediately, nevcairiel suggested I simply pass a NULL frame to drain the filter.
[02:07:21 CET] <jya> but then i just realised I can't reuse the filter after that
[02:07:32 CET] <michaelni> fixed #ffmpeg topic
[02:07:52 CET] <jya> i was hoping there was a way to flush the filter/graph and be able to restart
[02:09:07 CET] <durandal_1707> jya: it does output one frame immediately
[02:09:28 CET] <durandal_1707> do you mean field mode?
[02:09:29 CET] <jya> didn't see that, on the first frame I got EAGAIN
[02:09:50 CET] <jya> that was the same regardless of the yadif mode (tried all of them 0,1,2,3,4)
[02:10:03 CET] <durandal_1707> that's for first and second frame only IIRC
[02:10:35 CET] <jya> sure, but i want a frame to come out immediately , just really to replace avpicture_deinterlace
[02:10:53 CET] <jya> as there are cases where we only create a filter for a single frame
[02:11:33 CET] <durandal_1707> not possible with yadif as it needs three frames 
[02:11:55 CET] <jya> i just found out that it was used elsewhere in our code, and this time in a loop (still expecting a frame to always come out from first input). but then I get -22 on the 2nd av_buffersrc_add_frame
[02:12:47 CET] <jya> durandal_1707: well, feeding it as 2nd frame NULL, I get a picture out, and it certainly looks deinterlaced to me
[02:12:51 CET] <durandal_1707> it is possible with nnedi, it only needs 1 frame, but because of pts in ask for more
[02:13:24 CET] <atomnuker> michaelni: added myself as a TrueHD encoder backup mentor
[02:14:13 CET] <atomnuker> could also serve as main mentor if the ramiro doesn't show up
[02:14:31 CET] <jya> durandal_1707: sorry, I don't get what you mean with "pts in ask for more"
[02:15:08 CET] <durandal_1707> jya: presentation timestamps for frame
[02:15:33 CET] <jya> i know what pts means, what does "ask for more" means
[02:16:22 CET] <michaelni> atomnuker, thanks alot!
[02:16:23 CET] <jya> oh, maybe you meant "it asks for me"
[02:16:24 CET] <durandal_1707> needs next frame to interpolate pts for second field frame
[02:17:55 CET] <jya> ok, so if feeding EOS to the filter isn't the right approach, what filter should I use to replace and simulate avpicture_deinterlace . The doc stated to use yadif instead. I need something behaving in the exact same fashion, one frame in, one frame out, and no latency
[02:19:08 CET] <durandal_1707> michaelni: in the chromascope filter on the ml, lavfi converts yuv to xyz and it looks different from yuv to rgb to xyz
[02:20:05 CET] <durandal_1707> jya: what that code did?
[02:20:35 CET] <durandal_1707> just did dumb interpolation?
[02:20:41 CET] <jya> yep
[02:21:03 CET] <jya> the aim is to present a thumb image , a screen grab
[02:22:50 CET] <michaelni> durandal_1707, do you have a testcase / commandline that shows the bad/difference/buggy case ?
[02:23:59 CET] <durandal_1707> michaelni: just try yuv input with and without format filter to rgba before chromascope
[02:24:30 CET] Action: jya thinks that just getting the old code of avpicture_deinterlace is going to be the easiest rather than messing with filters
[02:24:40 CET] <durandal_1707> without case will show very different output
[02:29:34 CET] <jya> sigh, and where am I going to find weights for that nnedi filter
[02:29:53 CET] <durandal_1707> it's very slow
[02:30:11 CET] <durandal_1707> you need a dumb bobber
[02:31:48 CET] <jya> durandal_1707: so back to my question, what filter should be used (and is there one) to replace avpicture_deinterlacer ?
[02:32:37 CET] <durandal_1707> not currently, try field filter for fun
[02:33:37 CET] <jya> durandal_1707: and there's no equivalent to avcodec_flush_buffers for graph/filter ?
[02:35:57 CET] <durandal_1707> jya: no, they are stream based, and some filters need multiple frames to output a single frame, but perhaps one could write a deinterlacer that just does cubic? interpolation
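A minimal sketch of the zero-latency "dumb bob"/linear deinterlace idea being discussed: keep the top field and rebuild each bottom-field line as the average of its neighbours, so one frame goes in and one comes out immediately. This is a hypothetical helper for one 8-bit plane, not the removed avpicture_deinterlace code.

```c
#include <assert.h>
#include <stdint.h>

/* Bob-style deinterlace for a single 8-bit plane: even (top-field) lines
 * are copied, odd lines are replaced by the average of the lines above
 * and below; the last line is copied as-is. One frame in, one out. */
static void bob_deinterlace(uint8_t *dst, const uint8_t *src,
                            int width, int height, int stride)
{
    for (int y = 0; y < height; y++) {
        const uint8_t *line = src + y * stride;
        uint8_t *out = dst + y * stride;
        if (y % 2 == 0 || y == height - 1) {
            for (int x = 0; x < width; x++)
                out[x] = line[x];
        } else {
            const uint8_t *above = src + (y - 1) * stride;
            const uint8_t *below = src + (y + 1) * stride;
            for (int x = 0; x < width; x++)
                out[x] = (above[x] + below[x] + 1) >> 1; /* rounded avg */
        }
    }
}
```

The quality is obviously below yadif, but it matches the one-in/one-out, no-latency contract jya needs for thumbnails.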
[02:36:52 CET] <jya> that's unfortunate that you need to recreate the entire filters/graph just to reuse one once it hits EOF
[02:36:54 CET] <durandal_1707> michaelni: tried to reproduce xyz issue?
[02:38:35 CET] <durandal_1707> well, nobody knew that you need such low latency
[02:41:20 CET] <michaelni> durandal_1707, i see the difference
[02:41:46 CET] <jya> durandal_1707: the issue is more related to the removal of avpicture_deinterlace in 3.0, with nothing available to provide the exact same functionality. sure it had been marked as deprecated a long time ago, but it had its uses
[02:45:40 CET] <durandal_1707> well I may write replacement
[02:46:04 CET] <jya> that would be nice thank you.
[02:50:58 CET] <durandal_1707> michaelni: the yuv to xyz doesnt make sense so I would just change negotiation
[02:51:58 CET] <jamrial> atomnuker, kierank: "./ffmpeg -f rawvideo -s 352x288 -pix_fmt yuv420p -i tests/data/vsynth1.yuv -c:v vc2 -s 1920x1080 -r 24000/1001 -pix_fmt yuv422p10 vc2.mov" the output can't be decoded
[02:53:47 CET] <michaelni> durandal_1707, does your code handle gamma in rgb vs xyz correctly ? it looks different from what sws does
[02:54:22 CET] <atomnuker> jamrial: does it work with a framerate of exactly 24 and -strict -1?
[02:55:10 CET] <durandal_1707> michaelni: you mean for xyz input?
[02:55:53 CET] <durandal_1707> afaik its not needed
[02:57:28 CET] <jamrial> atomnuker: yes
[02:57:32 CET] <durandal_1707> its multiplication to all components
[02:57:33 CET] <michaelni> i mean sws does rgb->gamma->matrix->gamma->xyz but your code seems to do just the linear matrix
[02:58:40 CET] <durandal_1707> well the yuv to xyz shows out-of-tongue values
[02:59:19 CET] <durandal_1707> the gamma correction if needed may come later
[03:00:37 CET] <Compn> does anyone have a complete list of pixel formats / colorspaces ?  idont mean in ffmpeg, but all software, combined.
[03:00:45 CET] <Compn> just wondering how many there are.
[03:00:47 CET] <durandal_1707> the rgba/yuv values should be inside triangle
[03:03:16 CET] <durandal_1707> Compn: colorspace or color primaries/system
[03:05:34 CET] <Compn> durandal_1707 : it just seems weird (to me) that there is no definitive list. maybe theres something on wikipedia...
[03:06:43 CET] <durandal_1707> everyone invents own
[03:07:20 CET] <Compn> durandal_1707 : one of my hobbies is collecting this type of information. its why i try to maintain the ultimate fourcc list :)
[03:08:39 CET] <durandal_1707> there are magicyuv fourcc, you know?
[03:09:04 CET] <Compn> you are talking about packed yuv format fourccs ? like nv12, yuy2 ?
[03:09:28 CET] <durandal_1707> or planar
[03:10:30 CET] <Compn> yes i know of them. i've been trying to document all fourccs... http://wiki.multimedia.cx/index.php?title=Category_talk:Video_FourCCs
[03:12:15 CET] <Compn> and trying to get samples, binary decoders, maybe even some RE work done on the simpler codecs- or encourage RE simple codecs as qualification tasks
[03:14:08 CET] <durandal_1707> so you installed magicyuv codec?
[03:15:25 CET] <Compn> i dont think so
[03:15:36 CET] <michaelni> durandal_1707, the triangle assume linear xyz, AV_PIX_FMT_XYZ12 is not linear
[03:18:09 CET] <durandal_1707> michaelni: the xyz mxf sample displayed fine 
[03:21:48 CET] <michaelni> durandal_1707, your code divides by 0
[03:21:58 CET] <michaelni> Assertion sum>0 failed at libavfilter/vf_chromascope.c:988
[03:22:59 CET] <durandal_1707> ahh...
[03:23:27 CET] <michaelni> ill try to figure out what the pixfmt negotiation is doing wrong
[03:50:31 CET] <cone-920> ffmpeg 03Michael Niedermayer 07master:1ec7a7038060: avutil/pixdesc: Make get_color_type() aware of CIE XYZ formats
[04:40:26 CET] <Compn> https://github.com/edanvoye/zoe-lossless-codec
[04:41:09 CET] <jamrial> for avi
[04:50:59 CET] <philipl> michaelni: Sorry, not this year. I'm going to have a busy summer with other things going on.
[05:04:02 CET] <michaelni> philipl, ok, thanks anyway
[10:10:44 CET] <ubitux> "MMX implied by specified flags"
[10:10:54 CET] <ubitux> i don't think we want this when running aarch64 code :p
[10:21:04 CET] <JEEB> :D
[10:55:55 CET] <ubitux> - git describe
[10:55:57 CET] <ubitux> n2.9-dev-3743-g1ec7a70
[10:56:09 CET] <ubitux> should be 3.0
[10:57:26 CET] <nevcairiel> the dev tags are made after the previous release, after 2.8 it wasnt really known that it would be 3.0 instead
[10:57:43 CET] <nevcairiel> git master should be called 3.1-dev now
[10:59:50 CET] <ubitux> is it something wrong on my side?
[11:00:26 CET] <ubitux> oh, i didn't fetch the tags for some reason
[11:00:28 CET] <ubitux> ok
[11:09:06 CET] <ubitux> libavdevice/libavdevice.so: undefined reference to `dispatch_release'
[11:09:08 CET] <ubitux> libavdevice/libavdevice.so: undefined reference to `dispatch_walltime'
[11:09:10 CET] <ubitux> libavdevice/libavdevice.so: undefined reference to `dispatch_semaphore_signal'
[11:09:12 CET] <ubitux> fuck.
[11:16:48 CET] <wm4> sounds like the configure check was actually wrong
[11:23:03 CET] <ubitux> yay, just got the hikey board
[11:23:42 CET] <ubitux> board ~100€, +50€ taxes
[11:25:12 CET] <wbs> ubitux: there's the new pine a64 that you can get for $15 as well :P
[11:25:35 CET] <wbs> (but I also got a dragonboard for slightly less than what you seem to have paid for the hikey one)
[11:26:48 CET] <fritsch> allwinner ...
[11:27:02 CET] <wbs> yeah, that's of course an issue
[11:27:13 CET] <ubitux> wbs: so it's out?
[11:27:19 CET] <ubitux> i thought it was planned for march or sth
[11:27:38 CET] <wbs> but if you just want it as a cheap aarch64 cpu and don't care about the rest it might not be quite as bad. modulo potential gpl infringements and such ofc :P
[11:27:55 CET] <wbs> ubitux: dunno, I just saw that it's possible to order it and earlier orders should ship in march
[11:49:33 CET] <wm4> so I tried to fix autodetection for this videotoolbox patch
[11:49:44 CET] <wm4> but... it's just too much of a fucked up mess
[11:49:49 CET] <wm4> our configure, I mean
[11:51:34 CET] <durandal_170> anyone have some xyz samples?
[13:49:46 CET] <ubitux> in ff8c2c410, why /.5 instead of *2?
[13:50:50 CET] <nevcairiel> probably to match the documentation
[13:58:50 CET] Action: JEEB is trying to wrap his head around what seems to be the current Industry Standard with negative CTS offsets in fragmented ISOBMFF
[14:27:03 CET] <JEEB> ok, has anyone else wanted to stab MS for their special snowflake MSDN-defined boxes?
[14:27:16 CET] <JEEB> like, using the wording "absolute timestamp"
[14:27:29 CET] <JEEB> instead of composition or decoding time stamp
[14:29:32 CET] <J_Darnley> There's so much I would stab MS for but that isn't one reason I have personally encountered yet.
[14:31:51 CET] <JEEB> I'm currently trying to understand the effects of using negative CTS offsets with that piece of shit's tfxd (https://msdn.microsoft.com/en-us/library/ff469354.aspx)
[15:18:10 CET] <ubitux> stupid q: i420==yuv420p right? what do we do with yv12? "yvu420"?
[15:18:25 CET] <wm4> yeah
[15:18:41 CET] <wm4> for yv12 you simply swap the planes and then treat it as yuv420p
[15:19:09 CET] <ubitux> sws seems to use the yv12 naming for both i420 and yv12
[15:19:12 CET] <wm4> so, what's a good way to circumvent get_buffer2, but still get a correctly setup AVFrame? use ff_decode_frame_props?
[15:19:20 CET] <ubitux> which is actually i420 internally
[15:19:21 CET] <wm4> ubitux: yes
[15:19:28 CET] <wm4> not like it matters
[15:19:37 CET] <ubitux> right, ok
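The YV12 <-> I420 relation wm4 describes can be sketched in a few lines: both are the same planar 4:2:0 layout, only the U and V planes are stored in the opposite order, so converting is just swapping the two chroma plane pointers (and strides). Hypothetical struct, standing in for an AVFrame-like object.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for a planar 4:2:0 frame (hypothetical, not AVFrame). */
typedef struct Planar420 {
    uint8_t *plane[3];  /* 0: Y, then the two chroma planes */
    int      stride[3];
} Planar420;

/* Turn YV12 into I420 (or back): swap the chroma plane pointers/strides,
 * then treat the frame as the other format. No pixel data is touched. */
static void swap_chroma_planes(Planar420 *p)
{
    uint8_t *tmp = p->plane[1];
    int tmp_s    = p->stride[1];
    p->plane[1]  = p->plane[2];
    p->stride[1] = p->stride[2];
    p->plane[2]  = tmp;
    p->stride[2] = tmp_s;
}
```

This is why sws can use one internal name for both: after a pointer swap they are indistinguishable.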
[15:19:55 CET] <andrey_utkin> how to trigger a breakpoint on h264_parse? I run "-c copy" and "-c rawvideo" commands on an MP4/H.264 file, using a --enable-debug --disable-stripping statically-linked ffmpeg binary, but my breakpoint is never triggered. Any clue?
[15:21:26 CET] <ubitux> ffmpeg -i <in> -f null -
[15:21:29 CET] <andrey_utkin> or, parser is not used when h264 stream is decoded?
[15:21:50 CET] <ubitux> copy doesn't decode, rawvideo should though
[15:25:16 CET] <andrey_utkin> actually, what I am investigating now, is whether I can distinguish H.264 I-frames from IDR-frames at application level. I'd need to. Maybe by reaching the parser context through the API during decoding or demuxing (whatever is possible), or by patching ffmpeg
[15:26:11 CET] <wm4> I'm still wondering what the avpacket keyframe flag actually means
[15:26:51 CET] <andrey_utkin> h264_parse is triggered on .ts file, on .mp4 it is still not
[15:29:28 CET] <jkqxz> Doesn't it just mean "the muxer should mark this point as seekable to"?  (With no comment on what that actually means for the underlying format.)
[15:29:35 CET] <nevcairiel> Parser only runs when needed, you can always run it manually if you want it for extracting information
[15:32:46 CET] <wm4> jkqxz: that's usually the closest meaning from what I can see
[15:40:23 CET] <andrey_utkin> in context of our previous discussion on #ffmpeg (that only IDRs are valid seek points for H.264), is it so in case of MP4(mov demuxer)+H.264 and MPEGTS(demuxer)+H.264? I would be very grateful for any help in figuring this out from code or knowledge, and not by accident some time later.
[15:41:45 CET] <JEEB> ok, do I understand correctly that the stuff in track->cluster[i].cts is not really the CTS but rather the CTS offset
[15:41:50 CET] <JEEB> because that's how it seems
[16:05:19 CET] <JEEB> also is it just me or is mov_read_trun kind of hard to read if you want to know how it handles the CTS offsets?
[16:05:25 CET] <JEEB> (in lavf/mov.c)
[16:06:37 CET] <JEEB> all it seems to do is dts -= sc->ctts_data[sc->ctts_count].duration;
[16:06:50 CET] <JEEB> under if (flags & MOV_TRUN_SAMPLE_CTS)
[16:35:54 CET] <andrey_utkin> [bounty] could somebody please make up a patch which logs a warning when demuxer or decoder encounters H.264 non-key frame which references any frame prior to latest I-frame? I am happy to pay.
[16:36:14 CET] <andrey_utkin> this patch is not necessarily something subject to upstreaming
[16:47:21 CET] <BBB_> but isnt that allowed?
[16:47:30 CET] <BBB_> @andrey_utkin 
[16:49:00 CET] <andrey_utkin> what is(n't) allowed? publishing/upstreaming of patch? everything is allowed in this regard
[16:53:33 CET] <nevcairiel> such frames are allowed, is what BBB_ means
[16:55:09 CET] <BBB> if you dont want such references, simply force IDR instead of I frames (iirc) in the encoder
[16:55:27 CET] <andrey_utkin> BBB: the files which are processed are encoded elsewhere
[16:59:09 CET] <andrey_utkin> Just to recap: as it was discussed a few hours ago on #ffmpeg, only IDR frames are considered safe seek points by design of H.264. AFAIU H.264 has two kinds of pictures which are mapped to AV_PICTURE_TYPE_I: NAL_IDR_SLICE and some other NAL_SLICE. And referencing is arbitrary and the GOP concept is not strict in H.264, as jkqxz has explained. In my usecase, I need to be sure about slicing the file and not breaking any references. So some in
[17:00:32 CET] <andrey_utkin> and in case of cross-GOP weird refs it could be investigated quickly and resolved that files should be reencoded, or my custom application must be reworked to support that case.
[17:14:09 CET] <BBB> ah I see, you want to slice files
[17:15:09 CET] <BBB> I think in practice you can just parse slice headers and print poc. if after a keyframe, p frames follow with p_poc < i_poc, youve got a stream with issues
[17:15:11 CET] <BBB> (most likely)
[17:15:25 CET] <BBB> this isnt scientiifcally definitive, but it is practically always correct I think
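Before getting to POC parsing, the IDR-vs-plain-I distinction andrey_utkin asks about is visible directly in the bitstream: in an Annex B H.264 stream the NAL unit type is the low 5 bits of the byte after each 00 00 01 start code, with type 5 being an IDR slice (a safe cut point) and type 1 a non-IDR slice. A hedged standalone sketch (hypothetical helper, not ffmpeg's parser; it ignores 4-byte start codes' leading zero and stops at the first slice NAL):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Scan an Annex B buffer for the first slice NAL and return its type:
 * 5 = IDR slice (safe seek/cut point), 1 = non-IDR slice, -1 = none found.
 * nal_unit_type is the low 5 bits of the byte after the start code. */
static int first_slice_nal_type(const uint8_t *buf, size_t size)
{
    for (size_t i = 0; i + 3 < size; i++) {
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
            int type = buf[i + 3] & 0x1F;
            if (type == 1 || type == 5)
                return type;
        }
    }
    return -1;
}
```

Finding type 5 only says the frame is an IDR; BBB's POC check above is still needed to catch streams where frames after a plain (non-IDR) I-frame reference earlier pictures.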
[17:23:37 CET] <jamrial> patch v17
[17:26:13 CET] <andrey_utkin> BBB: how exactly do you propose to parse slice headers? to which degree can I leverage existing ffmpeg (parser) code? What about B-frames (you have just explained the case of P-frames)?
[17:26:31 CET] <BBB> b frames are p frames
[17:26:41 CET] <BBB> for this purpose
[17:27:41 CET] <BBB> to which degree can you leverage existing code... hmmm... it'd take some fiddling
[17:27:53 CET] <BBB> you cannot access it simply via api right now
[17:28:33 CET] <andrey_utkin> thank you for proposing a solution, but still I'd be happy (to fund it) if somebody proficient would implement that.
[17:29:47 CET] <andrey_utkin> I think I'll post such request on ffmpeg-devel...
[17:30:59 CET] <JEEB> cool, I think I just implemented negative CTS offsets in trun
[17:31:14 CET] <JEEB> now to see what else I have to poke
[17:31:53 CET] <cone-564> ffmpeg 03Xiaolei Yu 07master:5a9158947609: swscale/arm: re-enable neon rgbx to nv12 routines
[17:55:16 CET] <cone-564> ffmpeg 03James Almer 07master:26034929d574: checkasm: bench each vf_blend mode once
[17:57:25 CET] <ubitux> tracing the precision of a value in sws...
[17:58:10 CET] <ubitux> so you start with an unknown precision which you up by 13 bits, which is then sent to the asm which will reduce it by 8 bits, then re-up it immediately by 3 bits
[17:59:19 CET] <ubitux> then you have an identical value, which is also of unknown precision, which is in one code path up by 9-bit, but in the other by 3 bits, and in that second case it's sent to the asm where it's changed again
[17:59:32 CET] <ubitux> aaaaaah
[17:59:34 CET] <ubitux> :((
[18:00:11 CET] <nevcairiel> dont let sws mess with your mind
[18:04:17 CET] <cone-564> ffmpeg 03Paul B Mahol 07master:5d93437e4612: avfilter/vf_waveform: add 12bit depth support
[18:25:25 CET] <cone-564> ffmpeg 03Rostislav Pehlivanov 07master:1387f3a0510c: vc2enc: set quantization ceiling to 50
[18:34:02 CET] <BBB> ubitux: I worked on sws at some point in my life, y'know
[18:34:09 CET] <BBB> ubitux: I stopped for a reason, it was eating me
[18:34:15 CET] <JEEB> https://kuroko.fushizen.eu/patches/0001-movenc-support-negative-CTS-offsets.patch enjoy the evil
[18:35:32 CET] <JEEB> this *seems* to create correct stuff in trun
[18:36:36 CET] <J_Darnley> Does that log want to be ERROR?
[18:36:41 CET] <JEEB> no
[18:36:46 CET] <JEEB> I just forgot to change it to DEBUG
[18:37:53 CET] <J_Darnley> Will the user know what "trun" is when they want to be using this option?
[18:38:02 CET] <JEEB> they should
[18:38:19 CET] <JEEB> also this is only trun changed, it possibly would have to be poked somewhere else as well
[18:38:27 CET] <JEEB> but that's something for tomorrow
[18:38:41 CET] <JEEB> for fragmented stuff trun is probably the most important part
[18:39:10 CET] <J_Darnley> In that case I don't have any other comments
[18:39:21 CET] <J_Darnley> (mostly because I don't know what any of this is)
[18:39:34 CET] <JEEB> is the usage of uint for both unsigned and signed with the bit mask sane or insane?
[18:39:44 CET] <JEEB> and changing the print like that
[18:40:01 CET] <JEEB> I blame the spec for doing the flip of unsigned/signed
[18:40:13 CET] <J_Darnley> :)
[18:40:14 CET] <JEEB> if version == 0 => unsigned, else => signed
[18:40:49 CET] <JEEB> also have an actual example of the difference in output
[18:40:50 CET] <JEEB> http://up-cat.net/p/c8f918fc
[18:41:05 CET] <JEEB> that's an AVC stream at 25fps, timescale 10000000, single picture duration 400000
[18:41:52 CET] <JEEB> you get a zero CTS offset by being able to use negative values
[18:41:58 CET] <JEEB> I mean in the first sample
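The version-dependent interpretation JEEB is wrestling with can be shown in a few lines: in an ISOBMFF trun box the sample_composition_time_offset field is unsigned when the box version is 0 and signed (two's complement) when the version is 1 or higher, which is what lets the first sample carry a zero CTS offset. A small standalone sketch (hypothetical helper name, not the mov.c/movenc.c code):

```c
#include <assert.h>
#include <stdint.h>

/* Interpret a raw 32-bit trun sample_composition_time_offset field:
 * version 0 => unsigned, version 1+ => signed (two's complement). */
static int64_t trun_cts_offset(uint32_t raw, int version)
{
    if (version == 0)
        return (int64_t)raw;        /* always non-negative */
    return (int64_t)(int32_t)raw;   /* reinterpret the same bits as signed */
}
```

This is also why a reader that ignores the trun version misparses negative offsets as huge positive ones.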
[18:49:28 CET] <cone-564> ffmpeg 03Muhammad Faiz 07master:bfc61b0fcc77: avfilter: add firequalizer filter
[19:07:59 CET] <cone-564> ffmpeg 03Muhammad Faiz 07master:76377d66b7b8: avfilter/avf_showcqt: remove unneeded headers
[19:23:40 CET] <ethe> what's the stance on deprecated APIs (say, for example, Carbon on OSX)? is it acceptable to create a ticket which asks for the API to be updated?
[19:23:58 CET] <Timothy_Gu> ethe: which part of ffmpeg?
[19:24:03 CET] <ethe> ffplay
[19:25:00 CET] <nevcairiel> carbon might be deprecated, but if the only alternative is ObjC, i wouldnt hold my breath for improvement =p
[19:26:47 CET] <ethe> Is that because it wont be allowed, or no one will do it?
[19:27:16 CET] <Timothy_Gu> the latter I believe
[19:27:49 CET] <Timothy_Gu> also will Apple ever remove Carbon?
[19:28:01 CET] <nevcairiel> apple likes screwing over their users, so probably
[19:28:05 CET] <ethe> eventually
[19:28:07 CET] <ethe> yeah lol
[19:28:26 CET] <ethe> I'll create a ticket then, as it doesnt seem like there'll be any harm in doing so
[19:28:28 CET] <nevcairiel> it will be announced as a big feature somehow improving the overall mac experience
[19:28:51 CET] <Timothy_Gu> where in ffplay is carbon used?
[19:28:51 CET] <nevcairiel> Finally the evil Carbon is gone!
[19:30:43 CET] <ethe> the word "carbon" cant be found in the project according to sublime text, but OSX seems to think it uses carbon
[19:30:52 CET] <ethe> I'll look into it before creating a ticket
[19:31:17 CET] <nevcairiel> might be through SDL
[19:31:24 CET] <nevcairiel> which probably uses carbon to create the window
[19:32:24 CET] <nevcairiel> but thats out of our control
[19:33:27 CET] <ethe> that sounds about right https://github.com/spurious/SDL-mirror/search?p=10&q=carbon&utf8=%E2%9C%93
[19:33:47 CET] <Timothy_Gu> http://forums.libsdl.org/viewtopic.php?t=8325&sid=d2526f6ec18afb7c44681bfc6d523745
[19:34:24 CET] <Timothy_Gu> plus ffplay is still using SDL 1.x
[19:35:04 CET] <nevcairiel> i thought someone talked about updating that
[19:41:30 CET] <ethe> I recently worked a bit with SDL(2), I could take a look but maybe see what other people think first
[20:36:13 CET] <ubitux> do we have any yasm mmx functions with many arguments?
[20:41:40 CET] <ubitux> like, i'd like to do a mmx function which take like 13 args if possible
[20:42:05 CET] <ubitux> and if i'm not asking for troubles wrt calling convention
[20:42:20 CET] <nevcairiel> not sure how thats necessarily related to mmx, but i'm sure we have some yasm function that takes this many
[20:42:25 CET] <nevcairiel> the arguments will just be stored on the stack
[20:42:37 CET] <ubitux> do you have one in mind?
[20:42:38 CET] <J_Darnley> It would work if you ask for them to be loaded.
[20:42:44 CET] <J_Darnley> *don't ask
[20:43:51 CET] <nevcairiel> on 64-bit you could  even load them all, couldnt you
[20:44:10 CET] <ubitux> i need to support 32
[20:44:11 CET] <J_Darnley> oh yes
[20:44:23 CET] <J_Darnley> 32 now?
[20:44:28 CET] <J_Darnley> bit
[20:44:29 CET] <ubitux> 32-bit sorr
[20:45:16 CET] <J_Darnley> yeah, reading the source shows me that 15 arguments are currently supported
[20:45:46 CET] <J_Darnley> I don't know where we might use that many though
[20:45:51 CET] <ubitux> great
[20:45:59 CET] <ubitux> swscale
[20:47:28 CET] <nevcairiel> such functions are relatively rare, but yeah it should be possible
[20:47:48 CET] <ubitux> so any example in mind?
[20:48:44 CET] <J_Darnley> none spring to mind
[20:49:03 CET] <J_Darnley> grep shows some hevc deblock functions too
[20:53:01 CET] <nevcairiel> vf_maskedmerge has 8 params, 3 of which it loads from the stack on 32-bit, you should be able to expand from that to 13
[20:53:16 CET] <nevcairiel> same pattern applies
[20:53:28 CET] <ubitux> - git grep 'cglobal[^,]*,[^0-9]*9,'
[20:53:30 CET] <ubitux> libavcodec/x86/h264_deblock.asm:cglobal h264_loop_filter_strength, 9, 9, 0, bs, nnz, ref, mv, bidir, edges, \
[20:53:36 CET] <ubitux> 9 args mmh, but probably x86?
[20:53:59 CET] <ubitux> git grep 'cglobal[^,]*,[^0-9]*1[0-9],'  nothing >= 10 :(
[20:54:39 CET] <durandal_170> stereo3d asm uses many regs, and stack
[20:55:06 CET] <ubitux> heh, why does h264_loop_filter_strength declare 9 args/locals but have 10?
[20:55:30 CET] <nevcairiel> the 10th is always used from the stack, so it doesnt get any reg
[20:55:43 CET] <nevcairiel> the first number isnt actual number of parameters, its number of parameters to load into regs
[20:56:05 CET] <ubitux> how many reg i'm allowed max?
[20:56:10 CET] <nevcairiel> 7 on 32-bit
[20:56:16 CET] <nevcairiel> and 8 SIMDs
[20:56:26 CET] <ubitux> and 64?
[20:57:12 CET] <ubitux> that h264_loop_filter_strength seems to be built on 32 as well
[20:57:19 CET] <nevcairiel> 15 and 16
[20:57:20 CET] <ubitux> i suppose the >7 are ignored
[20:57:34 CET] <nevcairiel> it automatically cuts down GPRs if you request too many
[20:57:44 CET] <ubitux> ok perfect
[20:57:45 CET] <nevcairiel> not sure why exactly, but probably to make code easier to reuse
[20:57:55 CET] <ubitux> thanks a lot
[20:58:28 CET] <kurosu_> like for xmm regs
[20:59:07 CET] <kurosu_> not sure what's the maximum for x86_64, probably 15 because of RIP/GOT
[20:59:42 CET] <nevcairiel> note that if you need extra stack space, you will lose another register
[20:59:50 CET] <nevcairiel> so only 6 left
[21:00:09 CET] <ubitux> heh
[21:00:31 CET] <ubitux> i forgot how x86 simd was actually pretty annoying compared to arm
[21:00:44 CET] <nevcairiel> x86 is also ancient
[21:01:50 CET] <kurosu_> libswresample/x86/resample.asm:275:cglobal resample_linear_%1, 0, 15, 5, ctx, phase_mask, src, phase_shift, index, frac, <- 15 does seem the maximum
[21:02:16 CET] Action: ubitux confused at first arg being 0
[21:02:23 CET] <kurosu_> none is loaded
[21:02:27 CET] <kurosu_> only manual loads
[21:02:42 CET] <ubitux> i see, ok
[21:02:54 CET] <kurosu_> oh, in that function - I'm just guessing, not that I verified that
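[Editor's note: the cglobal convention discussed above can be sketched roughly as follows. This is a hypothetical declaration, not a real FFmpeg function; the authoritative definition of the macro is in libavutil/x86/x86inc.asm.]

```asm
; Hypothetical sketch of the x86inc cglobal convention.  The three
; numbers after the name are:
;   5 -> how many arguments to load into GPRs (not the total count)
;   7 -> how many GPRs the function may use in total
;   8 -> how many XMM registers it needs
cglobal example_func, 5, 7, 8, dst, src, stride, w, h, extra
    ; dst..h were loaded into registers; "extra" only gets a name,
    ; its value stays in its stack slot (like the 10th argument of
    ; h264_loop_filter_strength) and can be loaded manually:
    mov  r5, extram     ; manual load from the named stack slot
    RET
```

With `0` as the first number (as in resample_linear above), nothing is loaded automatically and every argument is fetched manually this way.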
[21:09:57 CET] <kurosu_> the most troublesome abi stuff for asm is float (eg aac/dts iirc), as even the x64 abis are rather different
[21:10:36 CET] <nevcairiel> if you dont do too much magic, the abstraction should hide everything else from you
[21:10:38 CET] <kurosu_> I bet there's weirder, but x86_32 stuff for dts required some time to get right
[21:11:01 CET] <nevcairiel> speaking of dts, want to expand the synth filter for 64 wide and fixed point? :)
[21:11:58 CET] <kurosu_> is the former that easily found ? when I looked at the previous decoder, I don't think there was even a fate test
[21:12:19 CET] <nevcairiel> we replaced the entire dts decoder, all 4 modes should be tested
[21:12:34 CET] <kurosu_> fixed point will be a mess with those 32x32->64 things - at least the loops are wider and more likely to benefit from sse4 than mlp
[21:12:57 CET] <nevcairiel> synth64 should be rather trivial, the differences in the C code look minimal
[21:13:10 CET] <nevcairiel> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/synth_filter.c;h=1c5dab5c5abb033677774872523c38dd6e590b80;hb=HEAD
[21:13:19 CET] <jamrial_> synth filter fixed mode is necessary to get dts-ma decoding at a decent speed, but yeah, 32x32 -> 64 = annoying
[21:15:03 CET] <jamrial_> nevcairiel: synth64 float is not nearly as useful as synth fixed
[21:15:10 CET] <jamrial_> because it's used by that x96 extension only afaik
[21:15:19 CET] <kurosu_> my point
[21:15:23 CET] <kurosu_> I usually just use the core part anyway
[21:15:29 CET] <nevcairiel> jamrial_: but its infinitely easier to just adapt from the existing synth
[21:15:35 CET] <jamrial_> true
[21:16:00 CET] <ubitux> michaelni: strides can be negative in sws?
[21:16:01 CET] <jamrial_> btw, i wrote lfe1 simd today. will be sending it later
[21:16:30 CET] <michaelni> ubitux, yes
[21:16:38 CET] <ubitux> michaelni: how can i trigger this scenario?
[21:16:39 CET] <jamrial_> the fate suite doesn't test it, but that one weird dts-in-wav file that blocked the decoder for a bit uses it
[21:17:26 CET] <michaelni> ubitux, hmm, iam not sure
[21:17:47 CET] <michaelni> maybe writing a testcase or filter that triggers that could make sense
[21:18:24 CET] <ubitux> mh. ok
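[Editor's note: one common way a negative stride arises in practice is as a vertically flipped view of a plane, which a caller can then hand to swscale. A minimal C sketch, with an invented helper name:]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: build a bottom-up view of a plane by pointing
 * the data pointer at the last row and negating the stride.  Row i of
 * the view (data + i * stride) is then row h-1-i of the original, so
 * any scaler loop must not assume stride > 0 when computing row
 * addresses. */
static void flipped_view(const uint8_t *data, int stride, int h,
                         const uint8_t **out_data, int *out_stride)
{
    *out_data   = data + (ptrdiff_t)(h - 1) * stride;
    *out_stride = -stride;
}
```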
[21:29:07 CET] <ubitux> michaelni: another question, http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libswscale/x86/yuv2rgb_template.c;hb=HEAD#l47 is this check still necessary? 
[21:29:24 CET] <ubitux> assuming the case with depth=4 for a start
[21:29:31 CET] <ubitux> (typically, rgba)
[21:29:45 CET] <ubitux> i feel like the stride will always be enough
[21:30:25 CET] <ubitux> i couldn't trigger this case
[21:32:00 CET] <ubitux> the imgutils seems to make sure the linesize is actually large enough
[21:33:01 CET] <wm4> lol assuming everything just uses ffmpeg stuff
[21:33:12 CET] <wm4> a frame could be allocated by something else
[21:34:58 CET] <ubitux> ok
[21:35:18 CET] <ubitux> so many constraints :(
[21:35:43 CET] <wm4> in theory, you could know how much padding there is with AVFrame, though
[21:35:52 CET] <wm4> or maybe the stride
[21:36:18 CET] <wm4> uh, I guess, if it's about the line width, the stride tells you
[21:37:22 CET] <ubitux> it's checking that stride >= align(w,8)*depth
[21:37:35 CET] <ubitux> but if it's not the case it seems to be doing some kind of magic
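[Editor's note: the guard being described amounts to something like the following paraphrase, not the literal yuv2rgb_template.c code. The idea is that the SIMD path writes whole 8-pixel-aligned spans, so it is only safe when every output row is at least that wide:]

```c
/* Paraphrased sketch of the check discussed above: only take the
 * SIMD path when the output stride covers the 8-pixel-aligned span
 * the MMX loop will store; otherwise the last store of a row could
 * spill into the next row (or off the buffer) when the caller's
 * stride is tighter than the aligned width. */
#define ALIGN8(x) (((x) + 7) & ~7)

static int simd_row_fits(int width, int depth, int dst_stride)
{
    return dst_stride >= ALIGN8(width) * depth;
}
```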
[21:38:38 CET] <wm4> huh, why does it mix mmx code with normal C code, and expect the compiler to know that it can't touch the registers
[21:38:47 CET] <ubitux> faith
[21:39:17 CET] <ubitux> let's hope it's not going to ftree vectorize this loop ;)
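[Editor's note: the hazard wm4 is pointing at is that GCC-style inline asm only respects what the constraint and clobber lists declare. A minimal, x86-only illustration with a made-up function; sloppy MMX blocks that omit such declarations leave the optimizer free to assume its registers were untouched:]

```c
/* Minimal illustration: "+r"(x) tells the compiler the register
 * holding x is both read and written by the asm.  Without such a
 * declaration the compiler may keep using a stale value, which is
 * the risk with hand-mixed MMX and C code. */
static int add_one(int x)
{
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    __asm__ volatile ("addl $1, %0" : "+r"(x));
#else
    x += 1;   /* portable fallback for non-x86 builds */
#endif
    return x;
}
```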
[21:39:39 CET] <ubitux> i'm trying to rewrite this to yasm but it's madness
[21:43:39 CET] <wm4> compile it, disassemble it, and git add it
[21:43:50 CET] <ubitux> that's exactly what i did 
[21:43:56 CET] <ubitux> but there are issues
[21:44:12 CET] <ubitux> typically, there is this struct access
[21:44:35 CET] <ubitux> where there are coefficients specially scaled for this simd code in some other common part of the code
[21:44:54 CET] <ubitux> these fields have a special scaling which is then changed in the asm itself, even though that's the only place where they're used
[21:50:20 CET] <ubitux> http://pastie.org/pastes/10733277/text hi framecrc
[21:52:19 CET] <jamrial_> heh, framecrc uses adler32. crc would never give you that :p
[21:53:24 CET] <jamrial_> nevcairiel: can you reproduce this? http://pastebin.com/NN2A0nGG configure with --cpu=haswell on x86_32
[21:54:10 CET] <jamrial_> seems to be a problem with tree vectorize
[21:55:19 CET] <ubitux> speaking of the devil.. :)
[21:55:34 CET] <jamrial_> yep
[21:56:46 CET] <michaelni> ubitux, i suspect its still needed but i dont know / dont have a testcase
[21:57:22 CET] <ubitux> ok
[21:59:01 CET] <wm4> why not just kill mmx asm
[21:59:37 CET] <ubitux> still better than nothing
[21:59:51 CET] <jamrial_> most of it is inline micro optimizations like cabac at this point
[22:00:22 CET] <ubitux> is cabac really simd?
[22:00:28 CET] <ubitux> i thought it wasn't possible
[22:02:28 CET] <jamrial_> look at x86/cabac.h
[22:02:44 CET] <jamrial_> but no, it's not simd
[22:03:05 CET] <jamrial_> my bad
[22:04:38 CET] <wm4> but this is in swscale
[22:05:56 CET] <ubitux> if someone can make sense to the usage of x86_reg index = -h_size / 2 in that file, it will be helpful
[22:06:04 CET] <ubitux> s/to/of/
[22:09:28 CET] <J_Darnley> I would almost certainly make a .i first then try to understand that.
[22:30:05 CET] <ethe> hmm regarding #5261 I did search "SDL" and #3604 didn't come up
[22:30:06 CET] <ethe> odd
[22:30:40 CET] <ubitux> there is a patchset for sdl2
[22:30:55 CET] <ethe> not yet ;)
[22:31:09 CET] <ubitux> no there is
[22:31:34 CET] <ubitux> https://github.com/cus/ffplay/commits/sdl2
[22:32:30 CET] <ethe> was about to say: It might be a good shout to google first (as in, I should probably check before saying anything)
[22:35:14 CET] <ubitux> btw, i will push the subtitles patchset by the end of the week
[22:35:38 CET] <ubitux> i'll probably ping mid week and apply this week end
[22:36:39 CET] <ubitux> after this we can seriously consider moving the thing to lavu (or consider again using AVFrame), and then work on its integration within lavfi
[22:37:02 CET] <ethe> I'm gonna see if I can extract a patch, and modify it to allow either sdl2 or sdl (can I do this? I'm just thinking about licensing, idk if that is allowed)
[22:37:30 CET] <ubitux> no, supporting both sdl and sdl2 is going to be a maintenance nightmare
[22:37:39 CET] <ubitux> sdl2 is everywhere we care anyway
[22:37:55 CET] <ubitux> cus is Marton btw, the ffplay maintainer
[22:38:17 CET] <ubitux> the patchset was mentioned (posted?) not long ago
[22:38:26 CET] <ethe> oh right >.< I wonder why it wasn't merged then
[22:38:33 CET] <ubitux> ask him :)
[22:39:00 CET] <ubitux> maybe he's wondering about porting the sdl device as well
[22:39:09 CET] <ubitux> so we can drop the sdl1 dependency
[22:39:27 CET] <ubitux> maybe you could work on that
[22:39:29 CET] <ubitux> just mail him
[22:39:59 CET] <ethe> through the mailing list?
[22:40:06 CET] <ubitux> or private
[22:40:36 CET] <ubitux> git log --author=Marton ffplay.c to get the mail
[22:51:15 CET] <nevcairiel> jamrial_: confirmed, happens here as well
[22:53:22 CET] <nevcairiel> someone should come up with a really smart way to handle cabac better :D
[23:59:52 CET] <michaelni> j-b, should i replace the hwaccel-mt warning by something, or you have another suggestion?
[00:00:00 CET] --- Tue Feb 23 2016



More information about the Ffmpeg-devel-irc mailing list