[Ffmpeg-devel-irc] ffmpeg-devel.log.20170814

burek burek021 at gmail.com
Tue Aug 15 03:05:03 EEST 2017


[14:09:35 CEST] <cone-410> ffmpeg 03Yogender Gupta 07master:77c5a54192b2: avfilter/scale_npp: fix passthrough mode
[14:10:21 CEST] <BtbN> I forgot to commit my local changes...
[14:17:39 CEST] <cone-410> ffmpeg 03Timo Rothenpieler 07master:f4ebbda566f7: avfilter/scale_npp: check for buffer allocation failure
[15:05:31 CEST] <JEEB> heh, I love the lack of time_base in AVFrames
[15:06:24 CEST] <wm4> add it
[15:06:32 CEST] <JEEB> just tried using yadif and magically my pts got doubled :) then noticed `link->time_base.den = ctx->inputs[0]->time_base.den * 2;` in vf_yadif
[15:07:01 CEST] <wm4> yes you need to use the output sink timebase
[15:07:09 CEST] <JEEB> ah right, that's a thing
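The pts "doubling" above is just a time-base mismatch: vf_yadif sets `link->time_base.den = ctx->inputs[0]->time_base.den * 2`, so the same instant is counted in smaller ticks. A minimal plain-C sketch (the `Rational`/`rescale` names are invented stand-ins for AVRational and av_rescale_q):

```c
#include <stdint.h>

/* Stand-ins for AVRational and av_rescale_q (names invented for this
 * sketch). Because yadif doubles the output time base's denominator,
 * a pts read against the input time base looks doubled even though the
 * represented time is unchanged. */
typedef struct { int num, den; } Rational;

static int64_t rescale(int64_t ts, Rational from, Rational to)
{
    /* ts * (from.num/from.den) / (to.num/to.den) */
    return ts * from.num * to.den / ((int64_t)from.den * to.num);
}
```

pts 10 in a 1/25 time base equals pts 20 in 1/50; both mean 0.4 s. The number doubles, the time does not, which is why the output sink's time base must be used.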
[15:13:12 CEST] <JEEB> so yea, things that could be in AVFrame: frame_type , time_base
[15:13:25 CEST] <JEEB> maybe I'll get to adding them one day :V
[15:13:39 CEST] <iive> start now :D
[15:29:05 CEST] <JEEB> iive: more likely I will start with trying to make a callback in lavf for "program has been updated"
[15:29:20 CEST] <JEEB> because I am crazy enough to use lavf for mpeg-ts
[15:31:36 CEST] <kierank> JEEB: insane
[15:31:41 CEST] <thardin_> madman
[15:32:14 CEST] <kierank> I will make sure I take you to the local asylum at vdd
[15:32:40 CEST] <JEEB> sounds like the correct action :D
[15:32:46 CEST] <JEEB> at least soon enough
[15:33:18 CEST] <JEEB> but yea, do we have any other examples of registering callbacks or so from lavf?
[15:33:27 CEST] <JEEB> or would I be wading into new territory?
[15:36:30 CEST] <wm4> JEEB: what callback?
[15:37:21 CEST] <JEEB> wm4: a thing for the mpegts demuxer or so, which would enable me to get a callback called if an AVProgram has gotten updated
[15:38:20 CEST] <JEEB> either a pointer in an AVOption (lol) or something else through the avcontext
[15:38:25 CEST] <JEEB> *avformatcontext
[15:38:30 CEST] <nevcairiel> cant you just diff the programs after every call :p
[15:38:42 CEST] <JEEB> nevcairiel: I would guess that is rather inefficient
[15:39:56 CEST] <iive> JEEB: doesn't the program table already have a sequence number?
[15:40:05 CEST] <wm4> JEEB: some update flag or counter would probably be better
[15:40:18 CEST] <wm4> the only callback we have in lavf is the one for opening reference files
[15:40:35 CEST] <wm4> and only because there's absolutely no other way to implement this
[15:42:19 CEST] <JEEB> wm4: yea, that's another way of doing it. although in theory before you get your next AVPacket multiple PMTs might have happened
[15:42:24 CEST] <JEEB> so you'd need an array of ints
[15:42:44 CEST] <JEEB> nullptr if no update, [1337, 1338] if updates happened
[15:43:50 CEST] <JEEB> iive: nope https://www.ffmpeg.org/doxygen/trunk/structAVProgram.html
[15:44:05 CEST] <JEEB> and you just get an array of these
[15:44:12 CEST] <JEEB> which get updated when they get updated
[15:44:19 CEST] <wm4> maybe we should have some sort of event thing
[15:44:33 CEST] <wm4> where you can get not only packets but also events (as structs) from the demuxer
[15:44:40 CEST] <wm4> (still preferable to callbacks)
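The "packets or events as structs" idea could look something like the following toy shape. All names here are invented for illustration; lavf has no such API:

```c
/* Hypothetical demuxer output item: either a packet or an event struct,
 * returned from the same read call, so no callback is needed and PMT
 * updates can't be lost between packets. Names are invented. */
enum DemuxItemType {
    DEMUX_ITEM_PACKET,
    DEMUX_ITEM_PROGRAM_UPDATE
};

typedef struct DemuxItem {
    enum DemuxItemType type;
    union {
        int packet_size;  /* stand-in for an AVPacket */
        int program_id;   /* which AVProgram was updated (e.g. PMT change) */
    } u;
} DemuxItem;

static int is_program_update(const DemuxItem *it)
{
    return it->type == DEMUX_ITEM_PROGRAM_UPDATE;
}
```

The caller would loop over items, handling update events inline between packets, which also solves the "multiple PMTs before the next AVPacket" case since each update is its own item.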
[15:45:04 CEST] <kierank> anyone familiar with h264 in rtp?
[15:45:07 CEST] <JEEB> yea, I'm not preferring callbacks. that's just the first thing that came to mind :P
[15:45:27 CEST] <JEEB> since people generally herp a derp at the idea of new fields in lavf ctx
[15:48:22 CEST] <jkqxz> kierank:  Meaning STAPs and FUs?  Yes.
[15:48:56 CEST] <kierank> jkqxz: so one thing I don't understand in h264 rtp is how long you are meant to buffer
[15:49:33 CEST] <iive> JEEB: i was talking about the dvb standard, it has a sequence number for all the tables that could change
[15:49:47 CEST] <iive> but yeah, having an event signaling things is a good idea.
[15:50:00 CEST] <iive> e.g. imagine being able to handle an extradata change.
[15:51:05 CEST] <kierank> you get a timestamp for a frame but relative to what?
[15:51:41 CEST] <jkqxz> kierank:  There are no general rules.
[15:52:03 CEST] <wbs> kierank: the rtp timestamps are against an undefined random origin
[15:52:04 CEST] <jkqxz> The RTCP lets you match the RTP timestamp to a real clock (and the matching audio clock).
[15:52:28 CEST] <kierank> and I guess you build an estimator of the link delay from the rtcp timestamps
[15:52:56 CEST] <wbs> you don't need to do that, the rtcp packets just say "rtp time X in the video stream corresponds to realtime clock Y"
[15:53:12 CEST] <wbs> so you basically use the latest of those mappings to produce a coherent timeline
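The mapping wbs describes is plain arithmetic: keep the latest sender-report pair and project any RTP timestamp onto the wallclock. A sketch with invented names (video RTP clocks, H.264 included, run at 90000 Hz):

```c
#include <stdint.h>

/* Latest RTCP sender-report mapping: "RTP time X in this stream
 * corresponds to wallclock Y". Names invented for this sketch. */
typedef struct {
    uint32_t rtp_ts;     /* RTP timestamp from the sender report */
    double   wallclock;  /* corresponding NTP/wallclock time, in seconds */
} SrMapping;

/* Project an RTP timestamp onto the wallclock via the latest mapping.
 * The signed 32-bit difference keeps this correct across RTP timestamp
 * wraparound. */
static double rtp_to_wallclock(uint32_t rtp_ts, SrMapping sr, int clock_rate)
{
    int32_t delta = (int32_t)(rtp_ts - sr.rtp_ts);
    return sr.wallclock + (double)delta / clock_rate;
}
```

Applying the same latest mapping to both the video and audio streams is what yields the coherent common timeline.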
[15:59:41 CEST] <wm4> iive: we already support extradata changes
[15:59:51 CEST] <iive> how?
[15:59:54 CEST] <wm4> side data
[16:00:12 CEST] <iive> but that needs actual data?
[16:00:17 CEST] <kierank> wbs: i see
[16:00:24 CEST] <iive> aka packet
[16:00:25 CEST] <wm4> iive: yes
[16:00:53 CEST] <iive> how about a null packet that only has side data?
[16:01:08 CEST] <kierank> wbs: but it's still up to the receiver to come up with an estimator of how much to buffer
[16:01:18 CEST] <kierank> because I would assume for frame timestamp n, an rtcp packet would also contain timestamp n
[16:01:27 CEST] <wm4> iive: API break
[16:02:02 CEST] <iive> avi files do have null packets... how are those handled?
[16:02:54 CEST] <jkqxz> kierank:  Yes, it's completely up to the receiver.  They need to balance their own requirements for real-time-ness vs. possibility of packets turning up later.
[16:03:14 CEST] <kierank> I mean buffer enough to keep enough frames available to display
[16:03:16 CEST] <kierank> i.e the vbv
[16:03:22 CEST] <wm4> iive: discarded
[16:03:30 CEST] <kierank> which is in addition to any packet based buffering
[16:04:34 CEST] <jkqxz> Traditionally RTP is used in IDR/P-only close-to-CBR cases where that is not an issue.
[16:05:12 CEST] <jkqxz> CBR in the "all frames are roughly the same size" sense, not just "within HRD constraints".
[16:05:43 CEST] <kierank> ok, makes sense
[16:09:47 CEST] <kierank> the whole thing about moving live streaming to webrtc isn't going to work then
[16:09:58 CEST] <kierank> without big quality drops if all frames are same size
[16:11:47 CEST] <BtbN> Doesn't WebRTC also only support bad codec modes?
[16:11:52 CEST] <BtbN> Like, h264 baseline only?
[16:14:14 CEST] <jkqxz> There is nothing stopping the receiver using more buffering and adding delay if they want (based on the HRD parameters, I guess).
[16:15:36 CEST] <cone-410> ffmpeg 03Timo Rothenpieler 07master:62b75537db15: avfilter/scale_npp: fix logic used in previous patch
[16:15:59 CEST] <jkqxz> The all-frames-same-size is just the normal RTP use in videoconferencing-type applications (i.e. what webrtc was originally meant for).
[16:16:23 CEST] <BtbN> rabb.it uses WebRTC for live streaming. It's bad.
[16:18:28 CEST] <jkqxz> If enough people start using it then I imagine more support will be hacked in.
[16:18:57 CEST] <JEEB> one of the gaming streaming services renamed webrtc to "ftl protocol"
[16:18:59 CEST] <BtbN> I couldn't even figure out how to stream to a browser using WebRTC
[16:56:21 CEST] <tdjones> atomnuker: Do you have any requirements for my GSoC submission aside from git patches being sent to the mailing list?
[17:00:17 CEST] <atomnuker> tdjones: there was something new this year, can't remember, there's still time
[17:00:40 CEST] <tdjones> Yeah, they want us to publish it online in some manner
[17:01:22 CEST] <tdjones> Didn't know if you wanted it done in any specific way, but just let me know if you think of something
[17:02:37 CEST] <atomnuker> k, for now keep at it and improve the patches, with your improvements the field is open for experimentation
[17:04:03 CEST] <atomnuker> do send what you come up with a week or so before the deadline (which is in 2 weeks) so it can be polished and upstreamed
[17:06:37 CEST] <tdjones> Sure
[17:25:51 CEST] <J_Darnley> How does a decoder signal to avcodec that it needs more data before it can return a frame?  Is it more than just setting got_picture to 0?  Isn't there something about an again return code?
[17:26:50 CEST] <atomnuker> no, just set *got_frame to 0 and return a positive number (the packet size or however many bits you've read)
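A toy model of that contract (not real lavc code; the real decode callback takes an AVCodecContext and AVPacket): a decoder that needs more input consumes the packet, sets `*got_frame` to 0, and returns the bytes read — that is the entire signal.

```c
#include <string.h>

/* Toy decoder that accumulates input until a full frame's worth of
 * bytes has arrived. Purely illustrative; frame_size stands in for
 * whatever "frame is complete" condition a real decoder checks. */
typedef struct {
    unsigned char buf[64];
    int filled;
    int frame_size;   /* bytes needed for one complete frame */
} ToyDecoder;

static int toy_decode(ToyDecoder *d, const unsigned char *pkt, int pkt_size,
                      int *got_frame)
{
    memcpy(d->buf + d->filled, pkt, pkt_size);
    d->filled += pkt_size;

    if (d->filled >= d->frame_size) {
        *got_frame = 1;      /* a complete frame can be returned now */
        d->filled  = 0;
    } else {
        *got_frame = 0;      /* not an error: just need more data */
    }
    return pkt_size;         /* bytes of input consumed */
}
```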
[17:27:28 CEST] <J_Darnley> That is what we're doing.
[17:28:22 CEST] <kierank> J_Darnley: you're allocating the frame and then doing nothing with it
[17:28:24 CEST] <kierank> that's the bug
[17:28:27 CEST] <kierank> it's not stored anywhere
[17:28:34 CEST] <kierank> you need to take a reference to the buffer
[17:28:45 CEST] <kierank> and then output it when you have a complete frame
[17:29:09 CEST] <J_Darnley> That was about to be next question, "what about AVFrames".
[17:30:18 CEST] <wm4> J_Darnley: depends which API
[17:30:27 CEST] <wm4> there are 2
[17:30:31 CEST] <kierank> the normal API
[17:30:43 CEST] <wm4> both are normal
[17:35:30 CEST] <J_Darnley> Okay, I don't know what you mean by that, kierank
[17:35:44 CEST] <kierank> so at the moment we send a fragment to diracdec
[17:35:47 CEST] <J_Darnley> When you say "reference" do you mean ffmpeg's reference counting API
[17:35:49 CEST] <kierank> it's a sequence header
[17:35:57 CEST] <kierank> it allocates a frame
[17:36:02 CEST] <kierank> we don't return the frame
[17:36:04 CEST] <kierank> the frame leaks
[17:36:06 CEST] <J_Darnley> ... or a member in the dirac decoder struct?
[17:36:32 CEST] <kierank> not sure if keeping it in a member of the struct is ok or if it needs to be refcounted
[17:36:36 CEST] <kierank> I think refcounted
[17:36:51 CEST] <kierank> ah *data is a pointer
[17:36:56 CEST] <kierank> so it just leaks unless its stored
[17:37:12 CEST] <kierank> so maybe not refcounted 
[17:37:16 CEST] <kierank> just stored in the struct
[17:37:22 CEST] <J_Darnley> yeah, a void pointer, making it so obvious what's there
[17:37:27 CEST] <atomnuker> you can do the following: call get_buffer on field 1 with the full frame size, decode to it and have a pointer in your context to it
[17:37:41 CEST] <kierank> it's not about field 1
[17:37:47 CEST] <kierank> it's about independently decodable slices
[17:37:52 CEST] <atomnuker> then during field 2 don't call get buffer, write to field 1's avframe and transfer the reference to the current avframe
[17:37:58 CEST] <atomnuker> ah, ok
[17:37:59 CEST] <kierank> atomnuker: already implemented
[17:38:07 CEST] <kierank> by you in fact
[17:38:13 CEST] <J_Darnley> :)
[17:38:30 CEST] <atomnuker> I don't remember what the trimmed decoder does but I don't think it does that
[17:42:52 CEST] <J_Darnley> I see the decoder calling av_frame_ref for something about fields.
[17:43:15 CEST] <kierank> atomnuker: trimmed does exactly that
[17:43:40 CEST] <kierank> J_Darnley: can you check my hypothesis is correct with valgrind
[17:43:43 CEST] <kierank> i think the avframe leaks
[17:43:48 CEST] <J_Darnley> okay
[17:44:11 CEST] <kierank> atomnuker: there is a new vc-2 spec that implements fragments (i.e groups of slices) which we feed to the decoder individually
[17:44:17 CEST] <kierank> hence things have changed quite a bit
[17:45:00 CEST] <J_Darnley> Thanks for backtracing those log messages, valgrind.
[17:45:30 CEST] <J_Darnley> Nothing is leaked.
[17:46:12 CEST] <J_Darnley> Unless that is part of the ~5k "still reachable"
[17:47:54 CEST] <kierank> ok then i dunno why ffmpeg complains about frame size changing
[17:47:57 CEST] <kierank> you'll need to look into that
[17:48:06 CEST] <kierank> in theory it should do the right thing, count the slices until they are done
[17:48:17 CEST] <kierank> if need be break non-fragmented hq mode, I don't care any more
[17:48:18 CEST] <J_Darnley> I agree.
[17:48:56 CEST] <atomnuker> kierank: lol, I'm looking at the code and I can't figure out what happens
[17:49:10 CEST] <kierank> atomnuker: for what case?
[17:49:16 CEST] <atomnuker> interlaced
[17:49:36 CEST] <atomnuker> I can't see where the buffer gets transferred and outputted during the second field
[17:50:04 CEST] <atomnuker> the data avframe doesn't get touched during the second field
[17:50:15 CEST] <atomnuker> how is it outputting stuff during the second field?
[17:50:57 CEST] <atomnuker> ah, nevermind
[17:51:04 CEST] <atomnuker> av_frame_ref(data, s->current_picture)
[17:51:13 CEST] <kierank> "/* Picks the field number based on the parity of the picture number */"
[17:51:14 CEST] <kierank> yeah
[17:51:57 CEST] <atomnuker> clever, it creates a reference in the output avframe from the internal one
[18:32:17 CEST] <J_Darnley> kierank: no more error, but now I am leaking things
[18:34:40 CEST] <kierank> ok
[18:35:24 CEST] <J_Darnley> oh, and no allocated buffer for the planes.
[18:35:46 CEST] <kierank> huh
[18:35:51 CEST] <kierank> that's clearly broke then
[18:35:57 CEST] <J_Darnley> Yes!
[18:41:28 CEST] <J_Darnley> Oh, that was my mistake.  I put the idwt in the wrong place.
[18:42:40 CEST] <J_Darnley> Good news: the chroma is no longer green, we now have a nice 50% gray image
[18:43:38 CEST] <kierank> are you doing the slice unpack
[18:44:06 CEST] <J_Darnley> Probably not.  Did I miss something?
[18:44:25 CEST] <kierank> decode_hq_slice
[18:44:28 CEST] <kierank> actually decoding the coefficients
[18:44:36 CEST] <J_Darnley> Ha.
[18:44:43 CEST] <J_Darnley> Yeah, probably
[18:44:48 CEST] <kierank> do it linearly don't try and thread it
[18:44:55 CEST] <J_Darnley> noted
[19:19:30 CEST] <J_Darnley> kierank: I think I've got it.
[19:20:05 CEST] <kierank> definitely worth fuzzing if it works and doesn't leak
[19:20:07 CEST] <J_Darnley> I see the zoneplate pattern
[19:20:43 CEST] <J_Darnley> Oh, I think it still leaks
[19:21:51 CEST] <J_Darnley> Yeah, I didn't change that bit.
[19:22:20 CEST] <kierank> and multiple frames works?
[19:22:21 CEST] <J_Darnley> To all: how do you use av_frame_ref correctly?
[19:22:44 CEST] <J_Darnley> kierank: I haven't made a sample like that
[19:22:55 CEST] <J_Darnley> Maybe I should do that first
[19:22:55 CEST] <kierank> J_Darnley: the sample on .18 is like that
[19:23:19 CEST] <J_Darnley> Really?
[19:23:35 CEST] <kierank> yes, I think it's 40 frames or something
[19:23:57 CEST] <kierank> ah no
[19:23:58 CEST] <kierank> my bad
[19:24:09 CEST] <kierank> anyway make one work without a leak
[19:24:12 CEST] <kierank> and fuzz crash
[19:24:14 CEST] <kierank> then go onto more
[19:24:18 CEST] <J_Darnley> okay
[19:24:38 CEST] <J_Darnley> Continuing my question...
[19:25:39 CEST] <kierank> you allocate a frame and you get 1 ref to it
[19:25:42 CEST] <J_Darnley> Yes
[19:25:44 CEST] <J_Darnley> I'm using av_frame_ref to make a reference into the dirac decoder struct after I allocate 
[19:25:44 CEST] <kierank> then you ref it to get another one
[19:26:20 CEST] <J_Darnley> I'm using av_frame_ref to make a reference into the dirac decoder struct after I allocate a frame on the first fragment given to the decoder
[19:27:04 CEST] <kierank> so you need to assume that anything you put into *data gets freed afterwards
[19:27:10 CEST] <J_Darnley> Then when I have decoded the picture I am using av_frame_ref again to copy properties back into the output AVFrame
[19:27:21 CEST] <kierank> then you have a refcount of two
[19:27:28 CEST] <kierank> and after decode_frame it has a refcount of 1 so leaks
[19:27:35 CEST] <kierank> I think
[19:27:46 CEST] <kierank> but i thought you'd reuse the frame so maybe it should have a refcount of 1
[19:28:07 CEST] <J_Darnley> I don't know what any of this does
[19:28:14 CEST] <kierank> J_Darnley: refcounting
[19:28:14 CEST] <J_Darnley> I hate garbage collection garbage.
[19:28:19 CEST] <kierank> nonsense
[19:28:28 CEST] <kierank> refcounting was a patch that took ffmpeg into the 21st century
[19:28:43 CEST] <JEEB> true
[19:29:04 CEST] <kierank> ok libavcodec to be more precise
[19:29:08 CEST] Action: J_Darnley prefers the stoneage
[19:29:20 CEST] <durandal_1707> lol
[19:29:45 CEST] <kierank> good luck sharing frames between threads
[19:31:01 CEST] <J_Darnley> So when do I call free or unref or whatever?
[19:31:02 CEST] <kierank> J_Darnley: so yes i am right
[19:31:07 CEST] <kierank> you don't free
[19:31:16 CEST] <kierank> you only ref the frame if you need to keep a copy
[19:31:18 CEST] <atomnuker> yep, refcounting absolutely rocks and saves on so much memcpys
[19:31:33 CEST] <kierank> J_Darnley: so you keep a copy of the frame during the fragments
[19:31:44 CEST] <kierank> then pass it to *data on the way out
[19:31:45 CEST] <kierank> afaik
[19:32:02 CEST] <kierank> J_Darnley: I will convert you to refcounting like I converted atomnuker
[19:32:05 CEST] <kierank> it was a struggle but I won
[19:32:26 CEST] <kierank> J_Darnley: the current problem is we store the frame in *data
[19:32:30 CEST] <kierank> instead of the struct
[19:32:35 CEST] <kierank> and so *data is freed automatically
[19:33:08 CEST] <J_Darnley> Not anymore.  I made a ref using a new AVFrame in the diracdec struct.
[19:33:25 CEST] <J_Darnley> Do I write that address into *data then?
[19:33:32 CEST] <kierank> you don't need to make a ref in the struct
[19:33:38 CEST] <kierank> you go direct
[19:33:43 CEST] <kierank> then put that into *data when the frame is done
[19:35:16 CEST] <J_Darnley> ?  I'm sure *data is being saved into one of the other 3 AVFrame* in the struct on the first fragment.
[19:36:08 CEST] <kierank> huh
[19:36:11 CEST] <kierank> data is the output
[19:36:41 CEST] <kierank> data = frame <-- then ffmpeg does what it wants
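The ownership rules being described can be modeled with a toy refcount (plain C with invented names; av_frame_ref/av_frame_unref behave analogously on AVFrame buffers):

```c
/* Toy refcount model of the AVFrame flow: the decoder keeps one
 * reference in its context across fragments, and hands the caller a
 * second reference via *data when the frame completes. The caller's
 * unref then leaves the decoder's copy alive. Names invented. */
typedef struct { int refcount; } ToyFrame;

static ToyFrame *frame_alloc(ToyFrame *f) { f->refcount = 1; return f; }
static ToyFrame *frame_ref(ToyFrame *f)   { f->refcount++;   return f; }
/* Returns the refcount after unref; 0 means the buffer would be freed. */
static int frame_unref(ToyFrame *f)       { return --f->refcount; }
```

With only the caller-visible reference (frame stored straight into `*data`, nothing in the context), the single ref is consumed when the caller frees the output; keeping the frame alive across fragments is exactly what the extra ref in the struct buys.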
[19:37:16 CEST] <kierank> J_Darnley: you're sure the avframe is leaking from valgrind
[19:37:18 CEST] <kierank> can you patch .18
[19:37:20 CEST] <kierank> and I can look
[19:38:38 CEST] <J_Darnley> Okay
[00:00:00 CEST] --- Tue Aug 15 2017


More information about the Ffmpeg-devel-irc mailing list