[Ffmpeg-devel-irc] ffmpeg-devel.log.20170422

burek burek021 at gmail.com
Sun Apr 23 03:05:04 EEST 2017


[00:02:06 CEST] <alevinsn> jamrial:  thanks for looking into the crash associated with the patch I submitted
[00:02:22 CEST] <alevinsn> I had started to look into it, but I got a bit side-tracked by make fate not working properly on my system
[00:02:30 CEST] <alevinsn> how long is the typical deprecation period?
[00:02:38 CEST] <jamrial> alevinsn: no problem
[00:03:24 CEST] <alevinsn> I ask because I wonder if it is worth it to do anything
[00:03:29 CEST] <alevinsn> but, if the deprecation period is like a year
[00:03:41 CEST] <alevinsn> then it is likely worthwhile to address this sooner
[00:04:24 CEST] <BtbN> next or second-next major bump
[00:04:51 CEST] <alevinsn> someone actually posted about these memory leaks at http://stackoverflow.com/questions/43389411/how-does-one-correctly-avoid-the-avcodec-alloc-context3-leak-from-avformat-new-s
[00:04:55 CEST] <jamrial> it will be removed next year or so
[00:05:18 CEST] <llogan> is Thomas Mundt ever in here?
[00:09:50 CEST] <alevinsn> jamrial:  Comments elsewhere in code seem to indicate that AVStream::codecpar is the replacement for AVStream::codec
[00:10:01 CEST] <alevinsn> but you indicated that it should use AVStream::internal::avctx
[00:10:18 CEST] <alevinsn> I think that's from the header file where AVStream is declared
[00:10:34 CEST] <jamrial> i mean the internal libavformat code needing an AVCodecContext should use st->internal->avctx instead of st->codec
[00:12:50 CEST] <alevinsn> ok, that makes sense, in those cases when it needs to get the AVCodecContext * associated with a AVStream *
[00:12:58 CEST] <alevinsn> but, it shouldn't be exposed to user code
[00:14:08 CEST] <alevinsn> ffmpeg.c uses st->codec in some cases directly
[00:14:11 CEST] <alevinsn> I noticed
[00:14:25 CEST] <jamrial> yes, that needs to be removed eventually
[00:14:28 CEST] <alevinsn> and not wrapped in #if FF_API_LAVF_AVCTX
[00:14:29 CEST] <alevinsn>  #endif
[00:14:50 CEST] <rcombs> because it hasn't been replaced yet
[00:14:59 CEST] <alevinsn> it is possible that some of the cases that are using st->codec could get by with using st->codecpar
[00:15:16 CEST] <rcombs> maybe!
[00:15:18 CEST] <alevinsn> I mean, if it is just some information about how the codec was setup that is needed
[00:17:45 CEST] <alevinsn> this all makes me wonder if the following call in muxing.cpp (under examples) is even necessary
[00:17:55 CEST] <alevinsn>     /* copy the stream parameters to the muxer */
[00:17:55 CEST] <alevinsn>     ret = avcodec_parameters_from_context(ost->st->codecpar, c);
[00:18:32 CEST] <rcombs> looks like it would be
[00:19:02 CEST] <alevinsn> I bet you get that just by the call to avformat_new_stream()
[00:20:10 CEST] <rcombs> how would you?
[00:20:31 CEST] <rcombs> avformat_new_stream doesn't even take an AVCodecContext arg
[00:21:13 CEST] <alevinsn> yes, I see that
[00:22:30 CEST] <alevinsn> to me, that seems like a non-obvious step
[00:22:48 CEST] <alevinsn> that you have to call avcodec_parameters_from_context() separately
[00:22:54 CEST] <alevinsn> that it doesn't happen for you implicitly
[00:23:15 CEST] <alevinsn> the stream was mostly setup right, but now we have to patch it up a bit
[00:23:38 CEST] <rcombs> you create the stream, then you populate it with the details provided by the encoder
[00:23:44 CEST] <alevinsn> maybe avformat_new_stream() ought to take an AVCodecContext * as input instead?
[00:24:18 CEST] <alevinsn> the second input is currently an AVCodec *, although that is an optional parameter
[00:24:50 CEST] <alevinsn> I mean, if the AVStream * doesn't have any meaning without being associated with an AVCodecContext *
[00:24:59 CEST] <alevinsn> why not use that as an input instead of an AVCodec object?
[00:25:38 CEST] <BtbN> wasn't that deprecated in favour of codecpar?
[00:26:00 CEST] <alevinsn> AVStream::codec has been deprecated in favor of AVStream::codecpar
[00:26:01 CEST] <alevinsn> but
[00:26:16 CEST] <alevinsn> there apparently will still be an internal AVCodecContext * associated with the stream anyway
[00:26:35 CEST] <alevinsn> but this is a separate AVCodecContext * than the one used by user code
[00:30:51 CEST] <rcombs> the internal one is an implementation detail and is specific to demuxing
[00:31:15 CEST] <rcombs> and an AVStream* can have meaning without an associated user-facing codec context
[00:31:36 CEST] <rcombs> or even without a filled codecpar
[00:31:54 CEST] <rcombs> e.g. data streams, or attachments
[00:32:39 CEST] <alevinsn> it still requires an AVFormatContext * passed in in those cases though to create it, right?
[00:33:44 CEST] <alevinsn> so, here's the question
[00:34:16 CEST] <alevinsn> for the data streams or attachments situations, in those cases, would the AVCodec * parameter be null?
[00:59:24 CEST] <nevcairiel> The AVCodec parameter is optional at all times anyway, its only used to initialize the AVStream, and once the deprecated parts are removed its even entirely unused
[01:01:56 CEST] <nevcairiel> A stream should be entirely independent of a specific AVCodec or AVCodecContext, at least from the external API. That it internally has an AVCodecContext which gets used during demuxing to figure out codec parameters is just how its implemented
[01:02:19 CEST] <nevcairiel> Unfortunately there are still a few parts left that access the old deprecated ways, but those ought to be fixed
[01:09:29 CEST] <alevinsn> yes, I see how it only uses the AVCodec * input to populate st->codec
[01:09:42 CEST] <alevinsn> But, in those cases that you have an AVCodecContext *
[01:09:51 CEST] <alevinsn> and want to use it to populate codecpars implicitly
[01:10:16 CEST] <alevinsn> maybe it makes sense to have a version of avformat_new_stream() that takes as input an AVCodecContext *
[01:10:21 CEST] <nevcairiel> basically, just ignore st->codec
[01:10:30 CEST] <alevinsn> Yes, I get that
[01:10:30 CEST] <nevcairiel> and no, it makes no sense, since st->codec is just going away
[01:10:34 CEST] <alevinsn> it is for internal use
[01:10:42 CEST] <alevinsn> I'm only talking about for populating codecpars
[01:10:47 CEST] <alevinsn> which is apparently important
[01:11:15 CEST] <alevinsn> rather than requiring an additional call to avcodec_parameters_from_context()
[01:12:04 CEST] <alevinsn> I admit that I have to wonder why codecpars is also needed
[01:12:20 CEST] <alevinsn> if a stream is independent of any codecs
[01:12:25 CEST] <alevinsn> why have codecpars at all?
[01:12:43 CEST] <nevcairiel> muxers still need codec information
[01:13:27 CEST] <nevcairiel> but AVCodec and AVCodecContext is basically a concrete codec implementation, AVCodecParameters is a simple container for just the relevant properties
[01:14:48 CEST] <nevcairiel> the difference ensures a cleaner separation between components
[01:15:03 CEST] <alevinsn> muxing essentially happens as a result of calling av_interleaved_write_frame(), right?
[01:15:11 CEST] <nevcairiel> yes
[01:15:22 CEST] <alevinsn> which takes, as input, an AVFormatContext *
[01:15:29 CEST] <alevinsn> which stores the different streams
[01:15:55 CEST] <alevinsn> why not have an additional array of AVCodecContext objects, one per stream, stored in an AVFormatContext *
[01:16:04 CEST] <alevinsn> if codecpars is only needed for muxing
[01:16:34 CEST] <rcombs> wat
[01:16:39 CEST] <nevcairiel> I concur
[01:16:48 CEST] <rcombs> because that makes negative sense
[01:17:19 CEST] <alevinsn> nevcairiel:  you agree with one of my proposals?  what is this world coming to? :-)
[01:17:29 CEST] <nevcairiel> no, i agree with rcombs "wat"
[01:17:46 CEST] <alevinsn> oh :-( :-)
[01:18:51 CEST] <rcombs> not trying to be mean, I just don't understand what the point of that would be
[01:19:11 CEST] <rcombs> sounds similar to the old mechanism (AVCodecContext on AVStream) except worse
[01:19:24 CEST] <nevcairiel> the stream has all info it needs stored in AVStream, why would there be a separate array somewhere else
[01:19:29 CEST] <alevinsn> well, it could just as well be AVCodecParameters, not AVCodecContext
[01:19:43 CEST] <alevinsn> well, if it is only needed for muxing
[01:19:51 CEST] <alevinsn> and streams have multiple uses beyond just being used for muxing
[01:20:03 CEST] <alevinsn> and codecpars is only used by muxing, it seems like it doesn't belong in AVStream
[01:20:19 CEST] <alevinsn> and belongs in something that is specific to muxing
[01:20:23 CEST] <nevcairiel> codecpars are used everywhere AVStream is used
[01:20:44 CEST] <nevcairiel> and an AVStream never exists standalone, it always belongs to an AVFormatContext
[01:20:52 CEST] <nevcairiel> so moving anything from stream to context changes exactly nothing
[01:20:59 CEST] <nevcairiel> except make things more confusing
[01:22:47 CEST] <alevinsn> ok, perhaps none of my proposals are any good, but I still stick by my statement that the later call to avcodec_parameters_from_context() is non-intuitive
[01:23:22 CEST] <alevinsn> and I think it somewhat requires the developer to understand ffmpeg internals to know why it must be done
[01:23:42 CEST] <nevcairiel> no, he just has to understand the external interface
[01:23:54 CEST] <nevcairiel> you want to mux something? You need AVCodecParameters
[01:23:57 CEST] <nevcairiel> how can you make those?
[01:24:16 CEST] <nevcairiel> well, manually is a way, or .. oh there is this function that can make one from my encoding context, i'll use that! :)
[01:25:16 CEST] <alevinsn> if there were actually a document that described these requirements, then yeah, but as far as I can tell, the documentation is the examples, and it might as well just be magic in that case
[01:27:42 CEST] <alevinsn> guess I could contribute some by writing better documentation :-)
[01:27:50 CEST] <alevinsn> or contributing developer documentation period
[01:28:00 CEST] <alevinsn> since the only documentation really is the header files and the examples
[01:28:17 CEST] <nevcairiel> thats kind of intentional
[01:28:25 CEST] <nevcairiel> the header files contain doxygen comments
[01:28:37 CEST] <nevcairiel> which get turned into a website
[01:28:49 CEST] <nevcairiel> of course those header comments can always be expanded in various areas
[01:30:14 CEST] <alevinsn> looking through the documentation in avformat.h
[01:30:21 CEST] <nevcairiel> ie. https://ffmpeg.org/doxygen/trunk/group__lavf__encoding.html#details
[01:30:27 CEST] <nevcairiel> i'm sure it could be expanded to be clearer
[01:30:38 CEST] <alevinsn> I see that it says that codecpars should be populated
[01:34:41 CEST] <alevinsn> ok, I didn't really get that this documentation existed
[02:29:05 CEST] <Zeranoe> Apparently Decklink is nonfree now?
[02:30:09 CEST] <Zeranoe> Was there some sort of recent evaluation of the license? 
[02:32:57 CEST] <RiCON> Zeranoe: the headers are free, but not the SDK
[02:33:15 CEST] <RiCON> can't get the headers without accepting the SDK EULA, so it's nonfree
[02:34:24 CEST] <Zeranoe> I understand. That's going to upset some people...
[02:35:28 CEST] <RiCON> yeah, decklink should be pressed to add an exclusion or allow downloading the headers without accepting the eula
[02:35:59 CEST] <RiCON> exception*
[02:48:22 CEST] <alevinsn> I wasn't quite sure of the logic with that one
[02:48:49 CEST] <alevinsn> because, now the thought is, that decklink isn't compatible with GPL or LGPL
[02:48:56 CEST] <alevinsn> and I don't think it is as clearcut as that
[02:49:01 CEST] <alevinsn> as it is for other non-free components
[03:19:42 CEST] <chatter29> hey guys
[03:19:46 CEST] <chatter29> allah is doing
[03:19:51 CEST] <chatter29> sun is not doing allah is doing
[03:19:53 CEST] <chatter29> to accept Islam say that i bear witness that there is no deity worthy of worship except Allah and Muhammad peace be upon him is his slave and messenger
[03:20:20 CEST] <rcombs> why don't I have ops in here
[04:45:36 CEST] <alevinsn> does that happen often?  people coming to the channel that clearly have no business being here
[04:45:46 CEST] <alevinsn> and are almost certainly not getting the audience they might like?
[04:46:10 CEST] <alevinsn> re: chatter29
[04:48:44 CEST] <rcombs> that's a spammer
[04:49:23 CEST] <rcombs> that same text gets sent to various channels pretty frequently, usually by someone by the name "chatter" (sometimes with numbers on the end)
[04:50:46 CEST] <alevinsn> how do they even know about this channel?
[04:51:37 CEST] <alevinsn> I guess they can get a list of channels
[04:51:43 CEST] <alevinsn> and go through each
[04:51:50 CEST] <alevinsn> without knowing anything about ffmpeg-devel
[07:42:28 CEST] <Gramner> Chloe it does, but you can't give it the full identifier as a parameter because it will expand completely ("too much") so "%undef %1" makes no sense. "%undef %1%2" however is fine if you give it two parts of the identifier you want to undefine and it will concatenate them
[07:43:53 CEST] <Gramner> x86inc even has a CAT_UNDEF macro that does just that
[07:46:14 CEST] <Gramner> a limit of the nasm/yasm preprocessor is that you can't have fine-grained control over expansion. it's all or nothing
[16:13:05 CEST] <cone-450> ffmpeg 03Paul B Mahol 07master:01729f77dd2a: avfilter: add doubleweave filter
[18:38:33 CEST] <philipl> wm4: so where's the branch with this new new decode api?
[18:38:50 CEST] <philipl> Given that the only two decoders that are using it are cuvid and crystalhd, are they designing in a vacuum?
[18:38:57 CEST] <philipl> BtbN said the new new API has problems
[18:39:26 CEST] <BtbN> https://github.com/jamrial/FFmpeg/tree/mergework
[18:39:36 CEST] <BtbN> It has a fix for cuvid, but I'm not sure if it's correct or good
[18:41:03 CEST] <philipl> I see it.
[18:41:10 CEST] <philipl> Yeah. this isn't going to work for crystalhd.
[18:41:33 CEST] <philipl> As far as I can tell, based on what I tried last night, the hardware doesn't seem to make progress if you aren't polling it.
[18:41:47 CEST] <philipl> So I work out it's full, and then sleep for as long as I want, and it doesn't move forward.
[18:42:16 CEST] <philipl> This seems like a step backwards from what it was.
[18:42:22 CEST] <philipl> It's now not fully decoupled anymore.
[18:48:46 CEST] <BtbN> Yeah, I'm also very confused as to what the whole idea is
[18:49:00 CEST] <BtbN> The API seemed good to me
[18:56:10 CEST] <wm4> philipl: well in theory the new internal API is equivalent to the old API, just different
[19:05:36 CEST] <nevcairiel> I also liked the current new API better
[19:05:59 CEST] <nevcairiel> It allows actual proper decoupling
[19:07:49 CEST] <philipl> right.
[19:08:02 CEST] <philipl> So what are they basing any of this on if they don't have any new style decoders in their tree?
[19:09:18 CEST] <nevcairiel> In the end you can probably do the same things with it either way though
[19:09:23 CEST] <wm4> philipl: yes
[19:09:29 CEST] <wm4> also yes to nevcairiel 
[19:09:49 CEST] <wm4> haven't looked at the code, would it be hard to restore the old send_packet callback?
[19:11:45 CEST] <BtbN> I really don't understand the point of this change
[19:14:45 CEST] <wm4> BtbN: it's supposed to make decoders simpler
[19:14:59 CEST] <wm4> especially audio codecs with multiple sub-frames I suppose (?)
[19:17:38 CEST] <BtbN> can both modes be supported at the same time?
[19:18:48 CEST] <jamrial> philipl: i think this was done to make the autobsf at the decoder level (next commit in queue) simpler
[19:22:44 CEST] <nevcairiel> It's also supposed to make the internal API of codecs and bsfs similar, but shrug
[19:22:59 CEST] <nevcairiel> Anyway in most cases it doesn't really make that much of a difference
[19:23:36 CEST] <nevcairiel> The decoder decides either way when it wants input
[20:09:06 CEST] <philipl> wm4: Can I call ff_decode_get_packet, then check for full?
[20:09:07 CEST] <cone-450> ffmpeg 03Michael Niedermayer 07master:362f6c91e466: avfilter/avf_avectorscope: Assert that format is valid
[20:09:29 CEST] <philipl> I have to compare the packet size to the amount of space left in the tx buffer.
[20:09:51 CEST] <wm4> philipl: once you get a packet with this you have to keep it
[20:10:02 CEST] <philipl> So I'm up shit creek.
[20:10:04 CEST] <wm4> but you could keep a ref in the context and use it the next time you can
[20:10:14 CEST] <philipl> Just makes life more complicated.
[20:10:27 CEST] <wm4> or maybe there could be a ff_decode_poll_packet
[20:10:31 CEST] <wm4> (but seems weird)
[20:10:45 CEST] <philipl> *something something existing api was perfectly fine*
[20:10:57 CEST] <wm4> philipl: maybe you could talk about this to elenril (who introduced this change)
[20:12:13 CEST] <philipl> You have his email handy?
[20:12:25 CEST] <wm4> it's his irc nick
[20:12:32 CEST] <philipl> @libav.org?
[20:12:33 CEST] <wm4> email uh see git commit history
[20:12:35 CEST] <nevcairiel> wouldnt the old api have the same problem
[20:12:37 CEST] <philipl> heh. fair
[20:12:39 CEST] <nevcairiel> once you have a packet you get to keep it
[20:12:40 CEST] <wm4> I mean he is on irc with that nick
[20:12:58 CEST] <wm4> nevcairiel: you see the packet before you reject it in the old API
[20:13:19 CEST] <wm4> my shitty MF wrapper is going to have a similar problem
[20:13:21 CEST] <nevcairiel> seems like a rather specific niche case really though
[20:13:21 CEST] <philipl> nevcairiel: the application tries to give the decoder the packet and it can reject it
[20:13:32 CEST] <nevcairiel> that you can potentially accept some packet
[20:13:35 CEST] <nevcairiel> but not all packets
[20:13:36 CEST] <philipl> Well, cuvid also worked more logically with the old api
[20:14:09 CEST] <nevcairiel> generally, there is no big difference, if you have room you just poll for a new packet, if not, well, dont
[20:14:50 CEST] <philipl> It just seems weird. Having the decoder pull packets just seems the wrong way round.
[20:15:16 CEST] <philipl> wm4: You've looked at what adapting mpv looks like, I assume?
[20:15:54 CEST] <wm4> philipl: the public API doesn't change
[20:16:19 CEST] <nevcairiel> for crystalhd it sounds to me like no api is going to be able to solve the suckiness of that hardware :D
[20:16:51 CEST] <philipl> The existing API works.
[20:16:56 CEST] <philipl> existence proof
[20:16:59 CEST] <philipl> mpv works
[20:17:13 CEST] <philipl> and ffplay now works, after Marton's final patch
[20:17:20 CEST] <nevcairiel> but it doesnt work with every valid calling pattern of the api
[20:17:41 CEST] <nevcairiel> as shown by early ffplay ports, which were valid code
[20:17:43 CEST] <philipl> It works with all the callers that exist in the world :-)
[20:18:18 CEST] <philipl> but yes, obviously it's ridiculous hardware with insane undocumented semantics.
[20:18:44 CEST] <philipl> It's basically only intended to work in a fully decoupled threaded model.
[20:19:00 CEST] <wm4> async?
[20:19:04 CEST] <philipl> yeah.
[20:19:15 CEST] <nevcairiel> i finally converted my code to the new api as well the last couple days, and the general calling pattern is pretty standard: send packet, poll decoded frames until EAGAIN, repeat
[20:19:20 CEST] <philipl> but even the new/new-new API is decoupled on one thread.
[20:19:20 CEST] <wm4> that would be mappable to the current API, but with timeouts
[20:19:33 CEST] <wm4> make a worker thread?
[20:19:37 CEST] <nevcairiel> the api could support threading under the hood
[20:20:01 CEST] <philipl> nevertheless, things were working fine before, without horrible timeouts or threading inside the decoder.
[20:20:07 CEST] <philipl> so I'll view anything else as a step backwards
[20:20:43 CEST] <philipl> And as I said, I think timeouts actively don't work. It really seems like the hardware doesn't make progress if you aren't actively making certain library calls.
[20:20:50 CEST] <philipl> that seems too insane to believe, but it's really what I see.
[20:21:40 CEST] <philipl> If I wait until the buffer is full and then sleep, output does not appear.
[20:21:53 CEST] <wm4> well, whether you poll under the hood or not
[20:22:26 CEST] <philipl> I think I'd have to poll both for queue space and for output frames.
[20:22:33 CEST] <philipl> Do only one and stuff doesn't happen. *sigh*
[20:27:25 CEST] <nevcairiel> sounds like it could definitely need a worker thread to babysit it at all times
[20:32:00 CEST] <philipl> Yay. My double polling worked.
[20:32:24 CEST] <BtbN> Yeah, I'm not a fan of this change at all. It just seems weird. But skipping it would probably mess up future merges?
[20:32:34 CEST] <BtbN> But libav doesn't even have decoders using the new API?
[20:33:14 CEST] <jamrial> BtbN: it would make the autobsf commit merge a pain. basically we'd have to rewrite it
[20:33:38 CEST] <jamrial> then of course, the same once every decoder starts being ported to the new api
[20:33:40 CEST] <nevcairiel> also any future decoder/encoder that use it
[20:33:49 CEST] <BtbN> Can we do both? So each decoder can choose what works best?
[20:33:55 CEST] <nevcairiel> thats terrible
[20:33:55 CEST] <philipl> Well, for all my complaints, I have crystalhd working without massive changes.
[20:34:14 CEST] <jamrial> if it's really no good then it should be undone or replaced on libav as well
[20:34:32 CEST] <philipl> The double-poll realisation also applies to the old-new API and would make it compliant there.
[20:34:47 CEST] <BtbN> It seems super weird to me, and against how most decoders work
[20:35:12 CEST] <nevcairiel> instead of being given a packet as input you just call a function, seems not that different
[20:35:46 CEST] <BtbN> Well, but in the case of cuvid, which profits from buffering a few packets, it causes trouble
[20:35:58 CEST] <nevcairiel> how is that?
[20:36:12 CEST] <philipl> BtbN: You'd have to buffer yourself by calling get_packet in a loop
[20:36:27 CEST] <nevcairiel> you dont have to return a frame if you have none yet
[20:36:34 CEST] <nevcairiel> just bail out early if you still need more input
[20:36:37 CEST] <cone-450> ffmpeg 03Thomas Mundt 07master:207e6debf866: avfilter/interlace: change lowpass_line function prototype
[20:36:43 CEST] <nevcairiel> or if you have excess output, just dont call the function
[20:37:24 CEST] <philipl> https://gist.github.com/philipl/7383e0e69803807f249c90753108c676
[20:37:50 CEST] <philipl> BtbN: for crystalhd, the latency to first output frame means that it naturally buffers hundreds of frames
[20:41:35 CEST] <BtbN> hm, yeah, the cuvid.c patch is probably fine
[20:41:41 CEST] <BtbN> But still, it seems super weird to me
[20:41:47 CEST] <philipl> BtbN: With the patched cuvid code as it exists right now, shouldn't the same thing happen? I assume multiple output_frame calls will happen where no frame is ready, so it will take many input frames before the first output is returned
[21:02:25 CEST] <philipl> jamrial: I'll have a final diff for you in a few hours. Have to go do other things
[21:03:48 CEST] <jamrial> philipl: alright, thanks
[21:23:59 CEST] <alevinsn> what did Nicolas George mean by:  "To ensure ABI compatibility with the fork, which has been dropped."
[21:24:11 CEST] <alevinsn> is the fork in this case libav?
[21:24:21 CEST] <JEEB> yes
[21:25:02 CEST] <alevinsn> so, all those av_frame accessor functions will be eliminated at some point from ffmpeg?
[21:25:06 CEST] <alevinsn> but remain in libav?
[21:27:36 CEST] <nevcairiel> Libav never had them
[21:27:59 CEST] <nevcairiel> We had them so we could move the fields without breaking ABI
[21:28:26 CEST] <nevcairiel> But we chose to remove libav ABI compat so no need to move them anymore
[21:29:12 CEST] <jamrial> or rather, no need to worry about their offsets being different than libav's, hence direct access becomes a possibility again
[21:30:41 CEST] <alevinsn> still, using the accessor functions is perhaps better for ffmpeg by itself?
[21:30:52 CEST] <alevinsn> in case ffmpeg ever makes those properties internal, etc
[21:31:20 CEST] <BtbN> those accessors probably caused quite a bit of API misuse
[21:31:25 CEST] <alevinsn> but, if the thought is that they will always be public, the names will never change, etc
[21:31:31 CEST] <BtbN> By people still using the fields directly, and then getting surprised
[21:31:33 CEST] <alevinsn> then it is more convenient to access directly
[21:32:32 CEST] <alevinsn> Not sure if my last messages went through--currently on a crappy Internet connection
[21:32:35 CEST] <alevinsn> I wrote
[21:32:46 CEST] <alevinsn> still, using the accessor functions is perhaps better for ffmpeg by itself?
[21:32:46 CEST] <alevinsn> <alevinsn> in case ffmpeg ever makes those properties internal, etc
[21:32:46 CEST] <alevinsn> <alevinsn> but, if the thought is that they will always be public, the names will never change, etc
[21:32:46 CEST] <alevinsn> <alevinsn> then it is more convenient to access directly
[21:43:04 CEST] <nevcairiel> These properties are meant to be public, if they ever go internal, they should probably be unavailable entirely
[22:40:44 CEST] <cone-450> ffmpeg 03Michael Niedermayer 07master:fc8cff96ed45: avcodec/h264_cavlc: Fix undefined behavior on qscale overflow
[23:15:29 CEST] <cone-450> ffmpeg 03Marton Balint 07master:c037f2f1ba3a: ffmpeg; check return code of avcodec_send_frame when flushing encoders
[23:31:16 CEST] <cone-450> ffmpeg 03Marton Balint 07release/3.3:ed2ed4ac0f05: ffmpeg; check return code of avcodec_send_frame when flushing encoders
[23:46:22 CEST] <cone-450> ffmpeg 03Aaron Levinson 07master:5b281b476b32: libavutil/thread.h: Fixed g++ build error when ASSERT_LEVEL is greater than 1
[00:00:00 CEST] --- Sun Apr 23 2017


More information about the Ffmpeg-devel-irc mailing list