[Ffmpeg-devel-irc] ffmpeg-devel.log.20170507

burek burek021 at gmail.com
Mon May 8 03:05:04 EEST 2017


[01:04:08 CEST] <cone-671> ffmpeg 03Takayuki 'January June' Suwa 07master:ea93b74074c5: avdevice/alsa: wait until playback buffers are drained before closing
[01:04:09 CEST] <cone-671> ffmpeg 03Ricardo Constantino 07master:c0b3781bf2fe: rtmpproto: send swfverify value as swfurl if latter is unused
[03:59:29 CEST] <cone-671> ffmpeg 03James Almer 07master:fb0f29f9aaaf: avcodec/hevc_sei: actually propagate error codes
[04:14:07 CEST] <cone-671> ffmpeg 03Michael Niedermayer 07master:1121d9270783: avcodec/msmpeg4dec: Correct table depth
[04:14:08 CEST] <cone-671> ffmpeg 03Michael Niedermayer 07master:669419939c1d: avcodec/svq3: Fix multiple runtime error: signed integer overflow: 44161 * 61694 cannot be represented in type 'int'
[04:14:09 CEST] <cone-671> ffmpeg 03Michael Niedermayer 07master:9e88cc94e58e: avcodec/ivi_dsp: Fix multiple left shift of negative value -2
[04:14:10 CEST] <cone-671> ffmpeg 03Michael Niedermayer 07master:e92fb2bea180: avcodec/texturedsp: Fix multiple runtime error: left shift of 255 by 24 places cannot be represented in type 'int'
[04:14:11 CEST] <cone-671> ffmpeg 03Michael Niedermayer 07master:3e56db892600: avcodec/targa_y216dec: Fix width type
[04:54:12 CEST] <cone-671> ffmpeg 03James Almer 07master:4a51aa7ddaeb: configure: add missing avcodec dependencies to filters
[05:31:08 CEST] <cone-671> ffmpeg 03Steven Liu 07master:cc25a887c546: avformat/matroskadec: fix resource leak
[05:33:18 CEST] <atomnuker> bofh_: ping
[07:44:07 CEST] <atomnuker> does our flac muxer support writing attached pics (coverart)?
[07:45:01 CEST] <atomnuker> apparently not
[08:00:02 CEST] <rcombs> atomnuker: I have a patch for that
[08:00:34 CEST] <rcombs> https://gist.github.com/64a70e9aacd8c1f361098c59bafcb3a7
[08:00:52 CEST] <rcombs> I think it was sent back because of some GIF-related problem that I haven't gotten around to fixing
[08:01:28 CEST] <atomnuker> remove gif support then, its useless
[08:02:27 CEST] <rcombs> yeah that's the easy solution, I just haven't bothered
[08:02:36 CEST] <rcombs> lemme go see what all is blocking that patchset atm
[08:03:26 CEST] <rcombs> oh, just that
[08:03:41 CEST] <wm4> gif nicely breaks libavformat's awful abstraction for attached images
[08:04:00 CEST] <atomnuker> lgtm, +2, remof gaf support, skip the ml, commit, we'll fax issuas laterz
[08:04:20 CEST] <wm4> because attached images are "streams", except they're not, so gifs with multiple frames just don't fit in
[08:05:18 CEST] <wm4> (probably better solution: just make them blobs, require user to open them as memory streams with another lavf context)
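
A minimal sketch of the "open the blob as a memory stream with another lavf context" idea above, assuming the attached-picture bytes are already in a buffer; the function names and the fixed 4 KB I/O buffer are made up for illustration, and error-path cleanup is trimmed:

    #include <string.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    struct mem_reader {
        const uint8_t *data;
        size_t size, pos;
    };

    /* read_packet callback for avio_alloc_context(): serve bytes from memory. */
    static int mem_read(void *opaque, uint8_t *dst, int n)
    {
        struct mem_reader *r = opaque;
        size_t left = r->size - r->pos;
        if (!left)
            return AVERROR_EOF;
        if ((size_t)n > left)
            n = (int)left;
        memcpy(dst, r->data + r->pos, n);
        r->pos += n;
        return n;
    }

    /* Open an attached-picture blob with its own lavf context and drain it. */
    int read_attached_pic(const uint8_t *data, size_t size)
    {
        struct mem_reader r  = { data, size, 0 };
        AVFormatContext *fmt = avformat_alloc_context();
        uint8_t *iobuf       = av_malloc(4096);
        AVPacket *pkt        = av_packet_alloc();
        AVIOContext *pb      = NULL;
        int ret;

        if (!fmt || !iobuf || !pkt)
            return AVERROR(ENOMEM);                         /* (cleanup trimmed) */

        pb = avio_alloc_context(iobuf, 4096, 0, &r, mem_read, NULL, NULL);
        if (!pb)
            return AVERROR(ENOMEM);                         /* (cleanup trimmed) */

        fmt->pb = pb;
        ret = avformat_open_input(&fmt, NULL, NULL, NULL);  /* probes png/jpeg/gif/... */
        if (ret < 0)
            return ret;                                     /* (cleanup trimmed) */

        while (av_read_frame(fmt, pkt) >= 0) {              /* multi-frame gifs just work */
            /* ... hand pkt to a decoder ... */
            av_packet_unref(pkt);
        }

        avformat_close_input(&fmt);
        av_freep(&pb->buffer);              /* lavf may have replaced our buffer */
        avio_context_free(&pb);
        av_packet_free(&pkt);
        return 0;
    }
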
[08:09:16 CEST] <atomnuker> rcombs: when you submit the patch to the ML could you bump lavf micro as well? I need it for my cd ripper
[08:09:32 CEST] <rcombs> atomnuker: hah, that is what this is for :P
[08:09:50 CEST] <rcombs> (also includes segment.c breaking on chapters and a cue sheet demuxer)
[10:03:04 CEST] <rcombs> so, lavc's FLAC encoder's end behavior appears to be invalid
[10:04:12 CEST] <rcombs> it changes the block size for the last frame, which is fine, except that it encodes the timestamp as a frame number instead of a sample number, and my understanding is that you can only do that if every preceding frame has the same block size as the current one
[10:05:14 CEST] <wm4> doesn't the timestamp refer to the first sample in a frame
[10:05:22 CEST] <wm4> so how does the length of the last frame matter for timestamps
[10:06:49 CEST] <rcombs> wm4: the timestamp can be either in units of samples or of frames
[10:07:03 CEST] <rcombs> if it's in frames, then you convert to samples by multiplying by the block size
[10:07:14 CEST] <rcombs> this doesn't work if the block size has changed
[10:07:38 CEST] <rcombs> oh wait, there's a thing about this
[10:07:39 CEST] <rcombs> >(In the case of a fixed-blocksize stream, only the last block may be shorter than the stream blocksize; its starting sample number will be calculated as the frame number times the previous frame's blocksize, or zero if it is the first frame)
[10:08:26 CEST] <rcombs> so, what, you need to know a priori that it's the last block?
[10:08:35 CEST] <rcombs> who the fuck came up with that?
[10:08:58 CEST] <rcombs> also: "hey, let's put timestamps in the frame headers", :|
[10:09:00 CEST] <wm4> hm not sure if I get it
[10:28:34 CEST] <nevcairiel> rcombs: cant you just always calc from the previous clocks size
[10:28:44 CEST] <nevcairiel> blocks*
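
In other words (a hedged reading of the spec text quoted above, with a made-up helper name, not lavc's actual code), a frame-number-coded header always maps to a start sample via the stream/previous blocksize, so the reader never needs to know in advance that a block is the last one:

    #include <stdint.h>

    /* Start sample of a frame whose header carries a frame number rather than a
     * sample number, in a fixed-blocksize stream: "frame number times the
     * previous frame's blocksize, or zero if it is the first frame".  The
     * current frame's (possibly shorter, last-block) size never enters it. */
    uint64_t flac_frame_start_sample(uint64_t frame_number, unsigned prev_blocksize)
    {
        return frame_number * (uint64_t)prev_blocksize;   /* 0 for the first frame */
    }
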
[10:31:40 CEST] <alevinsn> wm4:  I started doing the videotoolbox review tonight (before you sent the ping to the ML)--if you are willing to wait for my review
[10:31:44 CEST] <alevinsn> I'll get it done
[10:32:03 CEST] <wm4> ok
[10:32:29 CEST] <alevinsn> there is quite a bit of code, so its slow going
[10:32:55 CEST] <alevinsn> "    // Somewhat tricky because the API user can call av_videotoolbox_default_free()
[10:32:55 CEST] <alevinsn>     // at any time.
[10:32:55 CEST] <alevinsn> "
[10:33:01 CEST] <alevinsn> I didn't understand this comment
[10:33:15 CEST] <alevinsn> av_videotoolbox_default_free() doesn't do anything with hwaccel_priv_data
[10:33:23 CEST] <alevinsn> it only frees hwaccel_context
[10:35:38 CEST] <wm4> because videotoolbox_default_free calls that function
[10:37:36 CEST] <alevinsn> I don't understand why that matters--if it returns null, videotoolbox_default_free() just returns
[10:41:22 CEST] <rcombs> nevcairiel: yeah, that's what I'm doing in flacenc now
[10:41:59 CEST] <wm4> alevinsn: because there is an unusual situation with avctx->internal not being set because the codec context can be closed
[10:42:22 CEST] <rcombs> nevcairiel: but that doesn't work for e.g. flac_read_timestamp
[10:44:42 CEST] <alevinsn> wm4:  okay, but what does that have to do with av_videotoolbox_default_free()?  plus, if that's the case, why is it okay to use hwaccel_priv_data directly in the next function, videotoolbox_buffer_create()?
[10:47:51 CEST] <alevinsn> and then further, if we go to the places that call videotoolbox_get_context(), such as videotoolbox_session_decode_frame()
[10:48:15 CEST] <alevinsn> if it is possible for internal to be null, then what this function is doing looks wrong
[10:50:45 CEST] <wm4> av_videotoolbox_default_free() calls an internal function that's shared
[10:50:56 CEST] <wm4> it's not
[10:50:58 CEST] <wm4> ...
[10:56:20 CEST] <alevinsn> you mean how videotoolbox_default_free() is also called by videotoolbox_uninit()?
[10:58:55 CEST] <wm4> just read the code
[11:01:00 CEST] <alevinsn> I have--but, I still don't understand why the comment is relevant, and if it is relevant, why there aren't issues in the other places
[11:01:40 CEST] <alevinsn> maybe if you could describe the sequence that prompted that check, it would be helpful (for me)
[11:02:18 CEST] <wm4> when videotoolbox_default_free() is called if the codec is closed
[11:10:07 CEST] <alevinsn> so, are you saying there is the possibility that, because av_videotoolbox_default_free() is exposed to the user
[11:10:17 CEST] <alevinsn> if it is called after the codec has been closed
[11:10:25 CEST] <alevinsn> you have that check there to make sure it doesn't crash?
[11:10:33 CEST] <wm4> yes
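
So the guard amounts to something like this (a sketch of the idea only, not the actual videotoolbox.c code; the function name is invented):

    #include <libavcodec/avcodec.h>

    /* Shared uninit path, reachable both from the normal hwaccel teardown and
     * from the public av_videotoolbox_default_free(), which an API user may
     * call after the codec has already been closed.  At that point
     * avctx->internal (and with it hwaccel_priv_data) is gone, so bail out
     * instead of dereferencing it. */
    void videotoolbox_uninit_sketch(AVCodecContext *avctx)
    {
        if (!avctx->internal)      /* codec already closed by the user */
            return;
        /* ... free the VTContext stored in avctx->internal->hwaccel_priv_data ... */
    }
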
[11:12:15 CEST] <alevinsn> OK, I got it, next thing, I know it was doing this in the existing code, but why isn't there a cast to (VTContext *) for hwaccel_priv_data?  hwaccel_priv_data is of type void *
[11:12:20 CEST] <alevinsn> I would think that this would at least produce a warning
[11:12:27 CEST] <alevinsn> a compiler warning
[11:13:08 CEST] <wm4> this is not C++
[11:13:31 CEST] <nevcairiel> yeah C does not need casting for that
[11:13:47 CEST] <alevinsn> but, it is needed for:     CVPixelBufferRef pixbuf = (CVPixelBufferRef)vtctx->frame;
[11:14:09 CEST] <nevcairiel> you dont need casts from void* to other pointers
[11:14:23 CEST] <nevcairiel> but you do need it for other complex type conversions
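
The language rule in question, as a tiny standalone example (generic names, nothing videotoolbox-specific):

    #include <stdlib.h>

    struct vt_ctx { int frame; };

    int main(void)
    {
        void *hwaccel_priv_data = malloc(sizeof(struct vt_ctx));

        /* C converts void * to any object pointer type implicitly -- no cast,
         * and no warning either: */
        struct vt_ctx *vtctx = hwaccel_priv_data;

        /* Conversions between other, unrelated pointer/handle types (like the
         * CVPixelBufferRef line quoted above) do need an explicit cast. */
        free(vtctx);
        return 0;
    }
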
[11:14:50 CEST] <alevinsn> oh--that just looks wrong to me, but I can get used to it
[11:16:13 CEST] <alevinsn> ok, before I head to bed, I had a thought that you might find amusing that I thought I would share
[11:16:51 CEST] <alevinsn> since wm4 is effectively anonymous, wm4 could in theory be anyone... so, if we take that to its logical conclusion....
[11:17:05 CEST] <alevinsn> wm4 is cehoyos masquerading as wm4 :-)
[11:17:54 CEST] <alevinsn> ok, maybe not as amusing as I thought it was
[11:18:42 CEST] <nevcairiel> using a fixed name still identifies you, even if its not your real name
[11:19:29 CEST] <alevinsn> I meant
[11:20:08 CEST] <alevinsn> with the nature of the Internet, it is certainly possible to have multiple aliases
[11:21:19 CEST] <nevcairiel> that could be claimed about anyone
[11:21:34 CEST] <nevcairiel> how do we know you are who you are claiming to be? :d
[11:22:04 CEST] <alevinsn> yes, yes, a "real name" doesn't mean anything as well
[11:23:24 CEST] <alevinsn> but, at least with wm4, and the e-mail address associated with wm4 on the ML, I don't think there is anything to easily tie wm4 to the identity of a specific individual
[11:24:22 CEST] <alevinsn> so, I was suggesting that these arguments between wm4 and cehoyos may be a joke perpetrated by cehoyos entirely... its ridiculous, which is why I thought it was amusing
[11:31:55 CEST] <iive> alevinsn: you mean, this might be a joke by wm4 entirely :P
[11:47:36 CEST] <wm4> does anyone know what the allowed timestamp range is?
[11:47:43 CEST] <wm4> in libavformat
[11:51:12 CEST] <BtbN> Whatever fits into int64 I'd assume?
[11:52:14 CEST] <wm4> then stuff like RELATIVE_TS_BASE could probably not work
[11:52:17 CEST] <nevcairiel> 48 bit i think, because of this relative ts hack
[11:52:21 CEST] <wm4> lol
[11:52:38 CEST] <nevcairiel> (not like 48 bit isnt a lot of timestamps)
[11:53:52 CEST] <JEEB> relative TS base?
[11:53:58 CEST] <JEEB> it feels like I don't want to know about this...
[11:54:32 CEST] <wm4> indeed you don't
[11:54:58 CEST] <nevcairiel> i forget wtf this was even for
[11:58:09 CEST] <wm4> full 64 bit is a bit crazy, because it'd overflow most calculations that do something with timestamps
[11:58:27 CEST] <rcombs> OK now I've gotta know
[11:58:32 CEST] <rcombs> what is this
[11:58:39 CEST] <rcombs> (RELATIVE_TS_BASE)
[12:01:07 CEST] <wm4> see libavformat/utils.c
[12:01:09 CEST] <wm4> try not to cry
[12:02:13 CEST] <rcombs> &wat
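
For the curious: from memory, the hack looks roughly like the following (a paraphrase, not a verified copy of libavformat/utils.c). Streams without a known start offset get their timestamps shifted up near INT64_MAX internally and shifted back before packets reach the user, which is why only roughly 48 bits of timestamp range are actually safe:

    #include <stdint.h>

    #define RELATIVE_TS_BASE (INT64_MAX - (1LL << 48))

    int is_relative(int64_t ts)
    {
        return ts > (RELATIVE_TS_BASE - (1LL << 48));
    }
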
[12:16:25 CEST] <BtbN> hm, adding support for the hw_device_ctx to nvenc.c is harder than expected
[12:16:42 CEST] <BtbN> as it relies on the data_pix_fmt from the context
[12:18:39 CEST] <wm4> what does it do with it? initialization?
[12:19:08 CEST] <BtbN> I'm trying to find out if it really needs it before getting the first frame
[12:19:53 CEST] <wm4> if it really does, using hw_frames_ctx might be reasonable, but only jkqxz knows whether this would be ok API use
[12:21:36 CEST] <BtbN> https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/nvenc.c#L408 here it just gets the cuda context, using hw_device_ctx instead is trivial there.
[12:22:24 CEST] <BtbN> https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/nvenc.c#L1406 and here it gets the width/height and sw_format, which I'm pretty sure can all be gotten from the frame instead.
[12:22:43 CEST] <wm4> certainly
[12:23:00 CEST] <wm4> so it looks like hw_device_ctx support is possible/easy
[12:23:13 CEST] <BtbN> https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/nvenc.c#L1305 but this is the place that's the problem
[12:23:42 CEST] <wm4> ah
[12:23:58 CEST] <BtbN> That data_pix_fmt is used in the capabilities checks, to see if the device supports 10bit/yuv444 and so on
[12:24:26 CEST] <wm4> you could init the encoder on the first frame
[12:24:37 CEST] <wm4> though that seems annoying
[12:24:38 CEST] <BtbN> That would be a huge re-structure
[12:25:04 CEST] <BtbN> It's also used when configuring the encoder, like vui->videoFullRangeFlag = (avctx->color_range == AVCOL_RANGE_JPEG || ctx->data_pix_fmt == AV_PIX_FMT_YUVJ420P || ctx->data_pix_fmt == AV_PIX_FMT_YUVJ422P || ctx->data_pix_fmt == AV_PIX_FMT_YUVJ444P);
[12:25:54 CEST] <wm4> J formats, seriously
[12:26:13 CEST] <wm4> they're deprecated and hw_device_ctx doesn't have colorspace info AFAIK
[12:26:16 CEST] <BtbN> A lot of decoders still output those
[12:26:25 CEST] <wm4> sorry, hw_frames_ctx
[12:26:55 CEST] <wm4> you can check avctx->color_range though, which I suppose is set on init
[12:27:00 CEST] <wm4> (why not the pixfmt??)
[12:27:07 CEST] <wm4> maybe this is a non-issue
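
For reference, the check being discussed, pulled out into a helper (a sketch only; `sw_fmt` stands in for ctx->data_pix_fmt, and whether the YUVJ fallback can actually be dropped is exactly what is being debated here):

    #include <libavcodec/avcodec.h>
    #include <libavutil/pixfmt.h>

    int nvenc_full_range_sketch(const AVCodecContext *avctx, enum AVPixelFormat sw_fmt)
    {
        /* The "proper" signal, set on the codec context at init time: */
        if (avctx->color_range == AVCOL_RANGE_JPEG)
            return 1;
        /* Legacy fallback: many decoders still tag full-range video with the
         * deprecated YUVJ* formats instead of setting color_range. */
        return sw_fmt == AV_PIX_FMT_YUVJ420P ||
               sw_fmt == AV_PIX_FMT_YUVJ422P ||
               sw_fmt == AV_PIX_FMT_YUVJ444P;
    }
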
[12:27:25 CEST] <BtbN> hm, I'd assume I can just do ctx->data_pix_fmt = avctx->sw_pix_fmt;
[12:27:39 CEST] <wm4> yeah
[12:27:54 CEST] <wm4> just update the docs
[12:28:04 CEST] <BtbN> There are docs about that?
[12:28:19 CEST] <wm4> the current docs say about that field: * - encoding: unused.
[12:28:29 CEST] <wm4> (well, doxygen, not docs)
[12:28:34 CEST] <BtbN> ah, those docs
[12:30:32 CEST] <BtbN> Do I just put "set by user" there?
[12:30:50 CEST] <BtbN> Or should I describe it a bit more in detail?
[12:31:43 CEST] <wm4> maybe mention that it's only interpreted if pix_fmt is a hwaccel format
[12:32:43 CEST] <BtbN> "encoding: May be used by some hw encoders to find the underlying sw pixel format when pix_fmt is a hwaccel format."
[12:33:39 CEST] <wm4> sounds good
[12:33:41 CEST] <BtbN> I wonder if ffmpeg.c already sets it correctly when encoding
[12:33:46 CEST] <BtbN> It should?
[12:34:01 CEST] <wm4> searching for it shows nothing lol
[12:34:13 CEST] <wm4> but nothing should stop it from being able to
[12:34:30 CEST] <BtbN> cuvid.c makes use of it, so ffmpeg.c has to touch it somewhere
[12:37:58 CEST] <jkqxz> It's set in the get_format code in lavc.
[12:38:54 CEST] <jkqxz> I'm not sure exactly what you are trying to do here, but hw_device_ctx for encoders is only useful if you are using software frames as input.
[12:39:14 CEST] <BtbN> So far it was used by nvenc.c to get the cuda context
[12:39:26 CEST] <BtbN> And as such was also inter-mangled into some other things
[12:39:51 CEST] <BtbN> primarily only to get the sw pixel format. And I'm now trying to add avctx->hw_device_ctx support to nvenc.c
[12:39:58 CEST] <BtbN> to eventually migrate ffmpeg_cuvid.c
[12:40:03 CEST] <wm4> jkqxz: so he should access hw_frames_ctx?
[12:40:33 CEST] <wm4> the doxygen doesn't really say whether it's set on init
[12:40:45 CEST] <jkqxz> If you have hardware frames as input, then you use hw_frames_ctx (including the device inside it).
[12:41:06 CEST] <jkqxz> If you have software frames as input, use hw_device_ctx to get the device you are meant to use, or if not set you can make one up yourself by whatever means.
[12:41:25 CEST] <jkqxz> Yes, hw_frames_ctx must be set on init.
[12:42:05 CEST] <jkqxz> ("This field should be set before avcodec_open2() is called.")
[12:42:23 CEST] <wm4> oh damn
[12:42:26 CEST] <wm4> blindness
[12:42:38 CEST] <BtbN> But that makes the entire effort pointless
[12:42:47 CEST] <jkqxz> AVCodecContext.sw_pix_fmt shouldn't be relevant, because either it's the sw_pix_fmt of the hw_frames_ctx, or it's the pix_fmt itself because you are using software frames.
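
Put as code, the rule jkqxz is describing looks roughly like this (illustrative only; hypothetical function name, not what was committed to nvenc.c):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    int pick_sw_format_and_device(AVCodecContext *avctx,
                                  enum AVPixelFormat *sw_fmt,
                                  AVBufferRef **device_ref)
    {
        if (avctx->hw_frames_ctx) {
            /* Hardware frames as input: the frames context carries both the
             * software format and the device, and must be set before
             * avcodec_open2(). */
            AVHWFramesContext *frames =
                (AVHWFramesContext *)avctx->hw_frames_ctx->data;
            *sw_fmt     = frames->sw_format;
            *device_ref = frames->device_ref;
        } else {
            /* Software frames as input: pix_fmt *is* the software format; use
             * hw_device_ctx if the user supplied one, else the encoder is free
             * to create a device by whatever means it likes. */
            *sw_fmt     = avctx->pix_fmt;
            *device_ref = avctx->hw_device_ctx;   /* may be NULL */
        }
        return 0;
    }
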
[12:43:09 CEST] <wm4> BtbN: well for sw mode, you still want to read hw_device_ctx to set the device to be used
[12:43:25 CEST] <wm4> makes for a better user API than messing with private options etc.
[12:43:39 CEST] <BtbN> in sw mode nvenc.c just creates its own cuda context
[12:44:37 CEST] <BtbN> But the whole idea was to get rid of the half-initialized hw_frames_ctx ffmpeg_cuvid.c creates, dropping it entirely, as it's essentially not needed at all
[12:44:56 CEST] <BtbN> And I'm pretty sure that can be done without any crazy hackery
[12:45:28 CEST] <jkqxz> The context created by nvenc will be on some device it picks (possibly with a private option), rather than on the one defined by the user with this common mechanism.  That's all you gain by using this.
[12:45:47 CEST] <jkqxz> (Same change for libmfx: <https://git.libav.org/?p=libav.git;a=commit;h=3d197514e613ccd9eab43180c0a7c8b09a307606>.)
[12:46:16 CEST] <jkqxz> (Except it's worse there, because of things opening for exclusive access by default and breaking multiple instances.)
[12:48:14 CEST] <wm4> BtbN: well the user can still init the hw_frames_ctx for decoding, it just isn't particularly useful
[12:48:24 CEST] <wm4> for videotoolbox the situation is similarly awkward
[12:48:41 CEST] <wm4> (maybe worse because neither hw_frames_ctx nor hw_device_ctx really make sense for it)
[12:48:45 CEST] <wm4> at least for decoding
[13:00:19 CEST] <BtbN> wm4, https://github.com/FFmpeg/FFmpeg/compare/master...BtbN:master this looks pretty clean to me.
[13:08:37 CEST] <wm4> BtbN: not sure
[13:09:11 CEST] <wm4> I think hw_device_ctx should be used in sw format mode only?
[13:09:45 CEST] <BtbN> But what was the whole point of the entire effort then? I thought it was to get rid of the ugliness ffmpeg_cuvid.c does
[13:09:58 CEST] <BtbN> It didn't help with that at all
[13:13:59 CEST] <wm4> but what does this have to do with ffmpeg_cuvid.c?
[13:14:20 CEST] <BtbN> It currently half-initializes a hw_frames_ctx, which then gets fully initialized by cuvid.c
[13:14:31 CEST] <BtbN> That's ugly, and it really only needs to create a CUDA context
[13:15:29 CEST] <wm4> you can drop that
[13:15:48 CEST] <BtbN> No I can't, nvenc.c needs the hw_frames_ctx
[13:15:52 CEST] <wm4> output avframes will have a privately created hw_frames_ctx
[13:16:04 CEST] <BtbN> but nvenc needs it on the avctx
[13:16:09 CEST] <BtbN> at init-time
[13:16:12 CEST] <BtbN> to get the cuda context
[13:18:27 CEST] <wm4> encoders are initialized once the first frame was filtered/decoded, right?
[13:18:59 CEST] <BtbN> in ffmpeg.c, yes. But nvenc needs the hw_frames_ctx to be set on init, it being on the frames isn't enough.
[13:19:20 CEST] <atomnuker> they ought to be, given container framesize != first image's framesize
[13:21:50 CEST] <jkqxz> hw_frames_ctx on the AVCodecContext is set from the first frame when the encoder is opened.
[13:22:59 CEST] <wm4> BtbN: so it looks like you're already done
[13:23:16 CEST] <wm4> although being able to set the device in sw mode would be good too
[13:23:54 CEST] <BtbN> jkqxz, where is that done?
[13:24:05 CEST] <jkqxz> I don't understand what sw_pix_fmt is even meaning there.  You already have the hw_frames_ctx in that case, so you already know the sw_pix_fmt.
[13:24:31 CEST] <BtbN> for cases where you don't have a hw_frames_ctx, but only a hw_device_ctx
[13:24:38 CEST] <BtbN> As for CUDA, there is no inherent need for one
[13:24:53 CEST] <jkqxz> <http://git.videolan.org/?p=ffmpeg.git;a=blob;f=ffmpeg.c;h=e798d922779f6fbcf019902cb0dfcd3cd877643d;hb=HEAD#l3430>
[13:25:13 CEST] <jkqxz> But the only such cases are software cases, where the sw_pix_fmt is just the pix_fmt.
[13:25:29 CEST] <jkqxz> (Immediately before avcodec_open2()...)
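
The linked hunk boils down to roughly this pattern (paraphrased; the real ffmpeg.c pulls the frames context off the filtergraph output rather than taking an AVFrame parameter, and the function name here is invented):

    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>
    #include <libavutil/frame.h>

    /* Propagate the incoming frames' hw_frames_ctx onto the encoder context
     * immediately before avcodec_open2(), so hw encoders see it at init time. */
    int open_encoder_with_hw_frames(AVCodecContext *enc_ctx, const AVFrame *frame,
                                    const AVCodec *codec)
    {
        if (frame && frame->hw_frames_ctx) {
            enc_ctx->hw_frames_ctx = av_buffer_ref(frame->hw_frames_ctx);
            if (!enc_ctx->hw_frames_ctx)
                return AVERROR(ENOMEM);
        }
        return avcodec_open2(enc_ctx, codec, NULL);
    }
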
[13:25:47 CEST] <BtbN> right now they are, but due to the new field being available, the whole cuvid/nvenc hwaccel could be migrated to not use an external hw_frames_ctx
[13:27:29 CEST] <wm4> hw_frames_ctx is also needed for filtering
[13:27:38 CEST] <wm4> mpv uses it to get the sw_format (and also screenshots)
[13:27:38 CEST] <jkqxz> There is no "external" hw_frames_ctx, but there is still the one carried in the frames (which is required, because you need the hardware metadata on them).
[13:27:56 CEST] <jkqxz> And you need to pass that one to the encoder.
[13:27:57 CEST] <wm4> oh with external you mean the one in ffmpeg_cuvid.c?
[13:28:03 CEST] <BtbN> yes
[13:28:10 CEST] <wm4> just create the hw_frames_ctx in the decoder
[13:28:12 CEST] <BtbN> cuvid.c conveniently creates one internally, so the frames coming out of it have one
[13:28:14 CEST] <wm4> like it's done now
[13:28:17 CEST] <wm4> (after my patch)
[13:28:48 CEST] <BtbN> hm, this basically means that ffmpeg_cuvid.c can be almost entirely removed now
[13:28:56 CEST] <wm4> if cuvid.c doesn't support custom frame allocation, and if changing the sw_format to something else doesn't make sense, the "external" hw_frames_ctx should be deprecated
[13:29:16 CEST] <wm4> (changing sw_format might make sense if the hw decoder supports multiple formats for the same video)
[13:29:28 CEST] <wm4> (like videotoolbox does, urgh)
[13:29:36 CEST] <BtbN> it does
[13:29:51 CEST] <BtbN> you can for example request an 8bit format for 10bit video, and it will do as you told it
[13:30:09 CEST] <wm4> in that case, only the half-initialized part needs to be deprecated
[13:30:24 CEST] <BtbN> cuvid.c already uses sw_pix_format for that
[13:30:26 CEST] <wm4> (so a user would either have to set a fully initialized one, or none)
[13:30:42 CEST] <wm4> well I don't think the user is supposed to set sw_pix_format
[13:31:00 CEST] <BtbN> But it looks like ffmpeg.c would have to learn about hw_device_ctx first. The InputStream only has a hw_frames_ctx
[13:31:12 CEST] <wm4> and for videotoolbox I made it so that it can select the sw_format with a hw_frames_ctx
[13:31:32 CEST] <wm4> advanced device selection is in Libav
[13:31:41 CEST] <BtbN> cuvid.c insists that the hw_frames_ctx sw_format is identical to the one in sw_pix_fmt
[13:31:47 CEST] <wm4> I think it's 50 commits or so away from the currently merged one
[13:32:05 CEST] <wm4> why does it do that?
[13:32:32 CEST] <BtbN> Because anything else doesn't make sense?
[13:33:17 CEST] <wm4> well what sw_pix_fmt is is not too well defined anyway
[13:33:30 CEST] <wm4> for example with real hwaccels, the hw_frames_ctx sw_format can be different
[13:33:35 CEST] <BtbN> What is it supposed to do if the sw_pix_fmt is NV12, but the externally supplied context is yuv420p?
[13:33:48 CEST] <wm4> e.g. if sw_pix_fmt is yuv420p, the hw_frames_ctx one is usually nv12
[13:33:49 CEST] <BtbN> So it just throws an error
[13:34:12 CEST] <wm4> does cuvid (the nvidia API) support yuv420p decoding?
[13:34:17 CEST] <BtbN> no
[13:34:21 CEST] <wm4> then throw an error
[13:34:31 CEST] <BtbN> It will throw an error whenever they mismatch
[13:34:34 CEST] <BtbN> no matter in what way
[13:35:36 CEST] <wm4> didn't you say it supports decoding 10 bit video to 8 bit surfaces?
[13:36:43 CEST] <BtbN> it does
[13:36:59 CEST] <BtbN> but that works by having 10bit hevc, and setting an 8 bit sw format
[13:37:00 CEST] <wm4> so in that case, if the user sets a hw_frames_ctx, it could set the sw_format to 8 bit
[13:37:08 CEST] <wm4> even though avctx->sw_pix_fmt is 10 bit
[13:37:36 CEST] <BtbN> It will want that to be 8 bit as well
[13:37:58 CEST] <wm4> then set it?
[13:37:58 CEST] <BtbN> which makes sense to me, as it represents the output pixel format. coded input doesn't have a pixel format
[13:38:08 CEST] <wm4> sw_pix_fmt has no meaning
[13:38:20 CEST] <wm4> it only has some meaning within hwaccels and get_format
[13:38:30 CEST] <wm4> but it's not interpreted or used otherwise by anything
[13:38:39 CEST] <wm4> so do whatever is convenient?
[13:41:47 CEST] <BtbN> https://github.com/FFmpeg/FFmpeg/compare/master...BtbN:master this should be enough for support in sw mode
[13:44:13 CEST] <wm4> yeah looks good to me
[13:44:22 CEST] <wm4> maybe the factoring could be improved
[13:44:39 CEST] <wm4> like moving device creation into a function (or just using the hwcontext's device creation)
[13:45:46 CEST] <BtbN> that's a bit complicated as it's interleaved with the capabilities check
[13:46:03 CEST] <BtbN> it will by default use the first device that supports the needed features for the current encode, unless user-specified
[13:46:56 CEST] <wm4> are there really multiple devices usually?
[13:47:19 CEST] <BtbN> I have come across multiple people who have more than one GPU
[13:47:29 CEST] <wm4> yeah, I'd expect one device per GPU
[13:47:30 CEST] <BtbN> Also some with very different GPUs
[13:47:56 CEST] <BtbN> Like, an old Quadro K600 as output-GPU, and a bunch of GPGPU-Cards on top
[13:47:57 CEST] <wm4> in theory you can pass an AVDictionary to device creation, but yeah maybe not worth the effort
[13:48:19 CEST] <BtbN> I'd still need to enumerate all devices
[13:48:25 CEST] <BtbN> And check each one for its nvenc capabilities
[13:48:34 CEST] <wm4> I mean you could do the enumeration in the hwcontext code
[13:48:46 CEST] <BtbN> And pass in a callback to check if it's ok? oO
[13:49:06 CEST] <wm4> no, just come up with a way to signal requirements via the AVDictionary
[13:49:18 CEST] <wm4> but that's probably way more complex than the current code
[13:49:38 CEST] <BtbN> That'd make the hwcontext code need to know about nvenc
[13:49:50 CEST] <BtbN> As querying the device capabilities is nvenc api, not generic CUDA
[13:49:57 CEST] <jkqxz> Yeah, requirements would be nasty.
[13:50:50 CEST] <wm4> still a bit annoying that this nvenc can't be used when doing full-hw transcoding
[13:50:52 CEST] <jkqxz> I assume the default mode is also so that you automagically choose the next card when you run into the two-instances-per-consumer-GPU limit?
[13:51:12 CEST] <wm4> (is it still 2 per GPU?)
[13:51:22 CEST] <BtbN> 2 per nvenc chip on consumer cards
[13:51:25 CEST] <BtbN> which happen to have only one
[13:52:14 CEST] <BtbN> jkqxz, yes, it would indeed do that.
[13:52:20 CEST] <BtbN> Never thought about that, but yes, that would happen
[13:52:54 CEST] <jkqxz> Because that would be useful to some set of people, and isn't really achievable without weird configuration by any other route.
[13:53:49 CEST] <BtbN> nvEncOpenEncodeSessionEx is what fails in that case, which makes nvenc_open_session fail, which makes nvenc_check_device fail, which causes nvenc_setup_device to iterate to the next device
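
That fallback amounts to a loop like the one below (a simplified sketch; probe_device() is a stand-in for nvenc's real per-device check, which opens a CUDA context plus an encode session and queries capabilities):

    #include <libavutil/error.h>

    /* Stand-in for the per-device probe: open a CUDA context and an encode
     * session on device idx (session creation is what trips the
     * two-sessions-per-consumer-GPU limit), then verify capabilities. */
    static int probe_device(int idx)
    {
        (void)idx;
        return 0;   /* pretend every device works in this sketch */
    }

    int pick_encode_device(int device_count, int user_gpu /* -1 = auto */)
    {
        if (user_gpu >= 0)                       /* explicit choice: no fallback */
            return probe_device(user_gpu) >= 0 ? user_gpu : AVERROR(EINVAL);

        for (int i = 0; i < device_count; i++)   /* auto: first device that works */
            if (probe_device(i) >= 0)
                return i;

        return AVERROR(ENOSYS);                  /* nothing usable found */
    }
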
[13:56:15 CEST] <wm4> pretty nasty if you'd like that with full-hw transcoding
[13:56:24 CEST] <wm4> since the encoder is actually opened last
[13:58:45 CEST] <BtbN> With full-hw transcoding it gets a cuda context as input
[13:58:49 CEST] <BtbN> and that logic never runs
[13:59:01 CEST] <BtbN> it just runs a capability check on the passed context, and throws an error if it doesn't work
[14:03:54 CEST] <wm4> yeah
[14:04:04 CEST] <wm4> which is a problem if someone wants that fallback logic
[14:06:42 CEST] <BtbN> it's not possible in a sensible way to do that on full hw transcoding
[14:14:50 CEST] <cone-349> ffmpeg 03Timo Rothenpieler 07master:dad6f44bbd57: avcodec/nvenc: support external context in sw mode
[14:14:50 CEST] <cone-349> ffmpeg 03Timo Rothenpieler 07master:f89a89c5500c: avcodec/nvenc: use frames hwctx when registering a frame
[14:15:17 CEST] <BtbN> Is the git server busy/about to die? It just was sitting there at "remote: -Info-          Update is fast-forward" for almost two minutes
[14:15:48 CEST] <atomnuker> hm, I think I noticed a slowdown too
[14:16:19 CEST] <BtbN> j-b, ^
[15:32:57 CEST] <cone-349> ffmpeg 03Michael Niedermayer 07master:464c4b86ee43: avcodec/mss34dsp: Fix multiple signed integer overflow
[15:32:58 CEST] <cone-349> ffmpeg 03Michael Niedermayer 07master:78bf446852a7: avcodec/ra144: Fix runtime error: left shift of negative value -798
[15:32:59 CEST] <cone-349> ffmpeg 03Michael Niedermayer 07master:2162b862eba5: avcodec/magicyuv: Check len to be supported
[15:52:31 CEST] <Compn> BtbN : dont we run our own git ?
[15:53:20 CEST] <Compn> yeah git.ffmpeg.org goes to our bg host
[15:53:25 CEST] <michaelni> Compn, our git is not slow
[15:53:53 CEST] <Compn> BtbN and atomnuker were the reporters, not me :)
[15:54:08 CEST] <michaelni> deveopers use git at source.ffmpeg.org:ffmpeg
[15:54:31 CEST] <michaelni> source.ffmpeg.org.	600	IN	CNAME	git.videolan.org.
[15:54:34 CEST] <Compn> ah
[15:54:42 CEST] <jamrial> is this about fetch/pull or push? because while the former is fast as usual, the latter started being slow for me ever since the videolan server move
[15:54:43 CEST] <Compn> yeah sorry i was thinking something else :D
[15:55:22 CEST] <Compn> tracert is timing out to source.ffmpeg.org, somewhere in london
[15:55:38 CEST] <Compn> at least for me
[15:55:54 CEST] <Compn> not that that has anything to do with git slowdown
[15:55:56 CEST] Action: Compn runs
[16:10:43 CEST] <philipl> BtbN: so what was the conclusion on whether the hw_frames_ctx initialisation can be removed from ffmpeg_cuvid.c?
[16:11:55 CEST] <wm4> it can
[16:12:11 CEST] <BtbN> well, not without some merge from libav it seems
[16:15:32 CEST] <wm4> Libav brings the generic hwaccel
[16:15:42 CEST] <wm4> it will work with the current cuvid
[16:15:52 CEST] <wm4> (because it uses hw_device_ctx)
[16:16:09 CEST] <wm4> so ffmpeg_cuvid.c can be deleted, and the hwaccel entry in ffmpeg_opt.c slightly updated
[16:16:23 CEST] <philipl> And we think nvenc will work correctly after the last changes?
[16:16:50 CEST] <wm4> yes
[16:21:55 CEST] <philipl> great.
[17:18:02 CEST] <Compn> whos the nvenc maintainer ?
[17:18:03 CEST] Action: Compn runs
[17:42:32 CEST] <philipl> BtbN: So you guys were discussing using avctx->sw_pix_fmt in ff_nvenc_encode_init instead of hw_frames_ctx. Any reason not to do that now? I tried and it works with ffmpeg.c
[17:42:55 CEST] <BtbN> Because it's documented as unused
[17:43:15 CEST] <BtbN> And of course it works, data_pix_fmt being wrong wouldn't cause obvious failures
[17:43:43 CEST] <BtbN> Could only cause wrong colors, and wrong settings when doing yuv444 or 10bit encodes
[17:44:21 CEST] <BtbN> hw_frames_ctx is set anyway, so no need to jump through hoops here
[17:44:38 CEST] <philipl> fair enough
[17:45:24 CEST] <j-b> BtbN: yes, see https://munin.videolan.org/VideoLAN/goldeneye.videolan.org/vmstat.html
[17:45:56 CEST] <j-b> BtbN: or https://munin.videolan.org/VideoLAN/goldeneye.videolan.org/index.html in general
[17:46:41 CEST] <BtbN> yeah, looks very busy the last day or so
[17:47:06 CEST] <j-b> the load is very very high
[17:47:44 CEST] <j-b> and requests quite high too
[17:47:48 CEST] <BtbN> are there some people constantly cloning stuff with no end?
[17:47:54 CEST] <BtbN> that's how it looks to me
[17:48:15 CEST] <j-b> yes
[17:48:17 CEST] <j-b> very often
[17:48:31 CEST] <j-b> usually people start doing that in a loop
[18:03:38 CEST] <j-b> BtbN: I did some magic. I hope it will help. Answer in 10 min.
[18:12:00 CEST] <j-b> BtbN: seems way better.
[18:12:12 CEST] <BtbN> banned some spamming IP(s)?
[18:16:47 CEST] <j-b> BtbN: + cleaning + restarting some services, yes.
[18:25:20 CEST] <atomnuker> BBB: what fails currently that we need to align stuff?
[18:25:29 CEST] <atomnuker> why didn't it happen before?
[18:30:19 CEST] <j-b> BtbN: everything is in place, yes. But it's weird to see that many git requests, aka 3x more than last week.
[18:33:02 CEST] <BtbN> probably a single user with a massive misconfiguration hammering the server
[18:35:13 CEST] <j-b> BtbN: does not seem so.
[18:52:42 CEST] <philipl> Shouldn't we add these new *.version files to .gitignore?
[19:12:25 CEST] <jamrial> they aren't?
[19:14:46 CEST] <BBB> atomnuker: see earlier emails about specifics, but the current patch series is trying to address the more general issue of how to make requirements for alignment specific and explicit
[19:14:56 CEST] <BBB> (right now its implied but undocumented and easy to get wrong)
[19:15:22 CEST] <jamrial> philipl: ah yeah, an upcoming merge adds them
[19:15:43 CEST] <jamrial> philipl: i'll cherry pick it since it's a bit far away from our current point
[19:19:23 CEST] <cone-834> ffmpeg 03Diego Biurrun 07master:fbc304239fe6: build: Ignore generated .version files
[19:33:47 CEST] <cone-834> ffmpeg 03Michael Niedermayer 07master:c04aa148824f: avcodec/g726: Fix runtime error: left shift of negative value -2
[19:33:48 CEST] <cone-834> ffmpeg 03Michael Niedermayer 07master:0ac1c87194a6: avcodec/eamad: Fix runtime error: signed integer overflow: 49674 * 49858 cannot be represented in type 'int'
[19:33:49 CEST] <cone-834> ffmpeg 03Michael Niedermayer 07master:a38e9797cb41: avcodec/s302m: Fix left shift of 8 by 28 places cannot be represented in type 'int'
[19:33:50 CEST] <cone-834> ffmpeg 03Michael Niedermayer 07master:a5e0dbf530d4: avcodec/aacdec_template: Do not decode 2nd PCE if it will lead to failure
[19:33:51 CEST] <cone-834> ffmpeg 03Michael Niedermayer 07master:441026fcb13a: avcodec/xwddec: Check bpp more completely
[19:48:31 CEST] <cone-834> ffmpeg 03Marton Balint 07master:c0443c1af1a7: lavfi/avfiltergraph: only return EOF in avfilter_graph_request_oldest if all sinks EOFed
[20:00:48 CEST] <cone-834> ffmpeg 03wm4 07release/3.3:059db2204046: ffmpeg: check for unconnected outputs
[20:00:49 CEST] <cone-834> ffmpeg 03Marton Balint 07release/3.3:508e410d348e: lavfi/avfiltergraph: only return EOF in avfilter_graph_request_oldest if all sinks EOFed
[21:01:43 CEST] <alevinsn> wm4:  just finished videotoolbox review
[21:22:33 CEST] <durandal_1707> would someone write cmadd SIMD for me?
[21:43:16 CEST] <kierank> durandal_1707: no
[21:44:57 CEST] <durandal_1707> kierank: why?
[21:45:03 CEST] <kierank> i dunno what cmadd is
[21:45:28 CEST] <durandal_1707> complex multiply and add
[23:58:29 CEST] <durandal_1707> jamrial: how should i name function that does complex multiplication and adds result to accumulator?
[23:59:16 CEST] <jamrial> scalarproduct_something?
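
For reference, the scalar version of the operation being discussed (made-up function and type names, with a layout mirroring lavc's FFTComplex; the SIMD would replace a loop like this):

    #include <stddef.h>

    typedef struct cfloat { float re, im; } cfloat;

    /* Complex multiply-accumulate: sum[i] += a[i] * b[i] over complex floats. */
    void cmadd(cfloat *sum, const cfloat *a, const cfloat *b, ptrdiff_t len)
    {
        for (ptrdiff_t i = 0; i < len; i++) {
            sum[i].re += a[i].re * b[i].re - a[i].im * b[i].im;
            sum[i].im += a[i].re * b[i].im + a[i].im * b[i].re;
        }
    }
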
[00:00:00 CEST] --- Mon May  8 2017

