[Ffmpeg-devel-irc] ffmpeg-devel.log.20180603

burek burek021 at gmail.com
Mon Jun 4 03:05:04 EEST 2018

[00:02:51 CEST] <TD-Linux> atomnuker, okay I haven't tried it recently. it seemed to be somewhat hw dependent too, on amd both ways were basically the same
[00:03:04 CEST] <TD-Linux> I actually run webrender right now (which also damages the whole screen)
[00:03:40 CEST] <atomnuker> I thought the whole point of webrender was to use the compositor's compositor interface and let it handle damage tracking and such
[00:04:33 CEST] <atomnuker> so DirectComposition on windows, wl_compositor on linux
[00:05:05 CEST] <TD-Linux> no, that's entirely separate (but also being worked on, I think it's actually used on windows)
[00:05:32 CEST] <TD-Linux> webrender *could* do damage tracking, it just doesn't
[00:10:36 CEST] <atomnuker> welp, everything is awful ¯\_(ツ)_/¯
[12:14:08 CEST] <cone-008> ffmpeg 03Paul B Mahol 07master:d0bf1aa3c5f7: avfilter/avf_showspectrum: improve axes drawing
[12:45:25 CEST] <cone-008> ffmpeg 03Paul B Mahol 07master:9add1786ad4c: avfilter/avf_showspectrum: avoid overwritting text
[12:45:26 CEST] <cone-008> ffmpeg 03Paul B Mahol 07master:49eda27c6e7a: avfilter/avf_showspectrum: also show sample rate and channel layout
[13:00:27 CEST] <atomnuker> jkqxz: can you test vulkan->vaapi mapping if you've got the time?
[15:00:01 CEST] <anill> can anyone help me on this https://stackoverflow.com/questions/50664967/convert-rtp-stream-into-h-264-via-ffmpeg
[15:01:13 CEST] <cone-008> ffmpeg 03Paul B Mahol 07master:983288538669: avfilter/vf_zoompan: do not increase VAR_IN twice, also count from 0
[18:56:03 CEST] <cone-108> ffmpeg 03Paul B Mahol 07master:29e0879b29d1: avfilter/f_drawgraph: fix drawing of first point for non-first metadata key
[19:41:01 CEST] <cone-108> ffmpeg 03Mark Thompson 07master:2bd24d4a37e9: v4l2_m2m: Mark V4L2 M2M decoders as unsuitable for probing
[21:05:22 CEST] <durandal_1707> Compn: when you will announce that MPlayer is officially dead?
[21:48:48 CEST] <durandal_1707> atomnuker: when you gonna post new patches?
[22:39:11 CEST] <kierank> durandal_1707: when will you announce libavfilter needs rewriting from scratch
[22:43:50 CEST] <durandal_1707> kierank: what would you rewrite exactly?
[22:44:00 CEST] <kierank> all of it
[22:44:02 CEST] <kierank> it's useless
[22:44:04 CEST] <kierank> it leaks memory
[22:44:10 CEST] <kierank> it doesn't support any realtime process
[22:44:18 CEST] <kierank> api is understandable to nobody
[22:44:23 CEST] <kierank> framesync module who knows wtf that does
[22:44:23 CEST] <kierank> etc
[22:44:32 CEST] <kierank> it's just a big blob of wtf
[22:47:56 CEST] <durandal_1707> kierank: where it is useless? where it leaks memory? where it does not support realtime process?
[22:48:23 CEST] <kierank> it leaks memory all the time, if you have n streams coming in and one of them is disconnected it will just buffer frames indefinitely
[22:48:25 CEST] <kierank> it's full of hacks
[22:48:30 CEST] <JEEB> realtime requires the filtering framework thinks of time
[22:48:39 CEST] <kierank> the framework just understands frames
[22:48:41 CEST] <JEEB> because if you don't get input you can't just keep waiting
[22:49:50 CEST] <durandal_1707> you are supposed to handle corner cases manually
[22:50:00 CEST] <kierank> with the plethora of documentation sure
[22:50:15 CEST] <kierank> the problem is there's no api for just letting you use the filters in your own filtergraph
[22:50:19 CEST] <kierank> that's the biggest problem
[22:50:26 CEST] <kierank> you have to buy in to all the libavfilter surrounding crap
[22:50:31 CEST] <kierank> (i.e framesync)
[22:50:55 CEST] <JEEB> yeh
[22:51:19 CEST] <JEEB> if I just want to overlay stuff IFF there's stuff there's no way to do that with the filter stuff at the moment
[22:51:44 CEST] <kierank> latency of pipeline is a big problem as well
[22:51:52 CEST] <kierank> it's completely dynamic
[22:52:08 CEST] <kierank> you don't know how many frames are being buffered internally
[22:52:12 CEST] <JEEB> yeh
[22:52:40 CEST] <JEEB> anyways I need to debug some filter chain stuff during the week because something herps a derp somewhere, which doesn't show up in file coding
[22:52:57 CEST] <rcombs> also still no subtitles
[22:53:04 CEST] <durandal_1707> filter can know how many frames are buffered if it's really needed
[22:53:13 CEST] <kierank> across the entire pipeline?
[22:53:22 CEST] <kierank> and it won't ever be dynamic because of framesync?
[22:53:34 CEST] <rcombs> also you need separate filters for software and hardware frames
[22:53:46 CEST] <kierank> my loose understanding of nicolas undocumented masterpiece is that it dynamically adapts to frames arriving
[22:54:52 CEST] <rcombs> I also run into some realtime issues with lavf/segment fwiw
[22:55:12 CEST] <kierank> durandal_1707: I would like to see filter and filtergraph decoupled
[22:55:16 CEST] <kierank> so I can just use filters like codecs
[22:55:20 CEST] <kierank> load them, get frames out and that's it
[22:55:40 CEST] <durandal_1707> number of frames attached to each fifo can easily be obtained from filter graphs
[22:55:41 CEST] <JEEB> yea, it's a dynamic thing that tries to make sure things are in some sync
[22:55:41 CEST] <rcombs> if I have 2 segment muxers running (one for A/V and one for subtitles, which may be sparse), there's no way for the subtitles one to know when the transcode has proceeded far enough to cut a new segment, until a new packet is sent
[22:55:53 CEST] <kierank> durandal_1707: yes but it's dynamic
[22:56:02 CEST] <kierank> that will never fly in a realtime environment
[22:56:17 CEST] <rcombs> so I have to do dumb hacks like assuming the current subtitle segment is finished if the video gets a couple segments ahead of it
[22:56:32 CEST] <rcombs> would be good to have heartbeats in lavf for that
[22:57:26 CEST] <rcombs> I'd say "write an empty packet with the timestamp set accordingly" but empty packets are kinda overloaded as a concept
[22:57:38 CEST] <JEEB> yea
[22:57:59 CEST] <JEEB> reminds me of the two ways people already implemented PMT updates
[22:58:02 CEST] <JEEB> one was callback-based
[22:58:11 CEST] <JEEB> another was AVPackets that came from a specific stream id
[22:58:12 CEST] <JEEB> (zero)
[23:02:25 CEST] <rcombs> also lavfi has a bit too much boilerplate
[23:02:44 CEST] <durandal_1707> rcombs: be more specific
[23:06:06 CEST] <durandal_1707> kierank: regarding framesync, one could add option to consume frames asap and not wait for next frame's pts
[23:06:34 CEST] <rcombs> jkqxz: oh hey do you have an Apollo Lake system
[23:07:52 CEST] <rcombs> I've had a user report an issue there where the graph "hwupload,scale_vaapi=w=[X]:h=[Y]:format=nv12,hwupload" with VAAPI for both dec and enc results in the chroma planes getting interleaved wrong in the output
[23:08:09 CEST] <rcombs> as if it's either getting nv12 but treating it as yuv420p, or getting yuv420p and treating it as nv12
[23:11:39 CEST] <jkqxz> I don't have an Apollo Lake right now, but I can easily find one to test on.
[23:11:42 CEST] <jkqxz> Why hwupload twice there?
[23:13:04 CEST] <jkqxz> If you have VAAPI for both encode and decode there then neither of those hwuploads should be doing anything.
[23:14:21 CEST] <atomnuker> vaapi chroma planes are messed up - if you map yuv420p (not nv12) vaapi to opencl or vulkan U and V are swapped
[23:14:58 CEST] <jkqxz> Oh, is that YV12?
[23:15:13 CEST] <BtbN> I420
[23:15:39 CEST] <BtbN> nvenc does the same. As a reaction, the CUDA pix_fmts have it internally swapped
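[The swap being discussed above comes down to chroma plane layout: I420/yuv420p stores U and V as two separate planes, while NV12 stores a single interleaved UV plane, so reading one layout as the other pairs the chroma bytes up wrong. A minimal sketch with toy buffer sizes, not using any FFmpeg API:]

```c
#include <stddef.h>

/* Interleave the separate U and V planes of an I420 (yuv420p) frame
 * into the single UV plane that NV12 expects.  n is the number of
 * chroma samples per plane.  Reading I420 memory directly as NV12
 * (i.e. skipping this step) would treat the first half of the U plane
 * as alternating U/V bytes, which is the kind of garbling described. */
void i420_chroma_to_nv12(const unsigned char *u, const unsigned char *v,
                         unsigned char *uv, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uv[2 * i]     = u[i]; /* even bytes: U */
        uv[2 * i + 1] = v[i]; /* odd bytes:  V */
    }
}
```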
[23:16:15 CEST] <durandal_1707> kierank: write how that direct api should roughly look like and i will implement it
[23:16:37 CEST] <kierank> av_filter_frame(ctx, in, out)
[23:17:13 CEST] <kierank> and your pipeline code is a high level implementor
[23:17:14 CEST] <JEEB> yeh, 1) fill as many inputs as you have 2) filter
[23:17:14 CEST] <durandal_1707> kierank: that is very trivial, and does not cover multiple inputs/outputs
[23:17:27 CEST] <atomnuker> yeah, I'd also like a direct api
[23:17:29 CEST] <kierank> durandal_1707: sure, but you handle that in your pipeline code
[23:17:37 CEST] <JEEB> for example if you only have the primary input available for overlay, you don't fill it
[23:17:41 CEST] <JEEB> and then fill the primary
[23:17:47 CEST] <JEEB> and then press the butan
[23:17:48 CEST] <rcombs> jkqxz: yeah, I sprinkle hwuploads all over the place because it makes generating the graph text easier (less special-casing depending on what's using hardware and what's not); since they're no-ops I don't think it matters?
[23:18:26 CEST] <durandal_1707> kierank: no comprehendo
[23:18:52 CEST] <rcombs> if the user drops the :format=nv12 then it works fine
[23:19:15 CEST] <rcombs> but some devices require that iirc
[23:19:25 CEST] <durandal_1707> kierank: overlay filter needs 2 input frames
[23:19:33 CEST] <JEEB> durandal_1707: you fill either one or two slots
[23:19:42 CEST] <JEEB> primary and if you have a secondary the secondary
[23:19:47 CEST] <JEEB> then you press butan
[23:19:52 CEST] <jkqxz> Probably needed on not-i965.  That driver has weird magic which copies input frames if it doesn't like the format they start as.
[23:20:04 CEST] <durandal_1707> so it would be av_filter_frame(ctx, **in, **out)
[23:20:16 CEST] <atomnuker> perfect
[23:20:33 CEST] <jkqxz> rcombs:  Do you have a picture of the output?
[23:20:44 CEST] <rcombs> https://us.v-cdn.net/6025034/uploads/editor/ue/de8zfx6frvn8.png
[23:20:53 CEST] <rcombs> vs correct: https://us.v-cdn.net/6025034/uploads/editor/v9/md98p8vih4dy.png
[23:22:06 CEST] <durandal_1707> kierank: is that **frame too alien to you?
[23:22:34 CEST] <JEEB> durandal_1707: wouldn't each filter instance have slots?
[23:22:44 CEST] <JEEB> then you first fill them up as you have them
[23:22:57 CEST] <JEEB> then after you have filled, you push
[23:23:11 CEST] <JEEB> so the ctx would be the filter context or something
[23:27:29 CEST] <durandal_1707> JEEB: how would you know to which input/output frame belongs?
[23:27:53 CEST] <JEEB> well you have just gathered them, no?
[23:28:14 CEST] <JEEB> it's just like you feed primary/secondary in filter chains right now
[23:28:15 CEST] <atomnuker> you expose the filter inputs and outputs array which have descriptions
[23:31:19 CEST] <jkqxz> rcombs:  That's pretty weird.  It's like the chroma samples have been translated horizontally slightly, some to the left and some to the right?  The vertical positioning looks right, as do the actual values.
[23:31:29 CEST] <jkqxz> I'll try to reproduce it tomorrow.
[23:38:40 CEST] <rcombs> thanks
[23:52:06 CEST] <KGB> [13FFV1] 15michaelni pushed 1 new commit to 06master: 02https://git.io/vhlzC
[23:52:06 CEST] <KGB> 13FFV1/06master 1479f5938 15Jérôme Martinez: Version 0 and 1 can have some content after SliceContent...
[00:00:00 CEST] --- Mon Jun  4 2018
