[Ffmpeg-devel-irc] ffmpeg-devel.log.20180111

burek burek021 at gmail.com
Fri Jan 12 03:05:03 EET 2018


[00:37:03 CET] <cone-810> ffmpeg 03Mark Thompson 07master:9b4611a1c1f2: vf_overlay_opencl: Don't leak output frame on error
[00:37:04 CET] <cone-810> ffmpeg 03Mark Thompson 07master:526a87b47124: vf_program_opencl: Add missing error code returns
[01:55:26 CET] <wm4> so we have a 80 column maximum policy or not?
[01:57:19 CET] <BBB> I don't think it's consistently enforced
[01:57:38 CET] <wm4> I'm looking at a 176 column monster condition here
[01:57:40 CET] <wm4> screw that
[01:58:22 CET] <jdarnley> We don't enforce it that much because many of the option structs easily exceed 80
[01:59:20 CET] <jdarnley> but code often follows it
[02:01:42 CET] <atomnuker> yep, that's the best way to deal with it imo
[02:02:01 CET] <atomnuker> some projects enforce it religiously and it's infuriating and makes the code ugly
[02:03:11 CET] <wm4> I find it infuriating if there's a line that spans 3 screens (thus gets broken in my editor) and which tries to do bullshit vertical alignment
[02:04:08 CET] <jdarnley> Make a commit to change that monstrosity
[02:04:35 CET] <jdarnley> even if you turn 1 line into 20
[02:05:39 CET] <atomnuker> an occasional monster is better than suffering on every single function because you're 1 char above the limit
[02:06:40 CET] <wm4> just have a soft limit
[03:45:40 CET] <cone-617> ffmpeg 03Eduard Sinelnikov 07master:7fcbebbeafd1: avformat/aiffdec: AIFF fix in case of ANNO
[03:49:36 CET] <tmm1> wm4: a53_cc in h264 comes in via SEI. would that still be affected by reordering?
[03:50:32 CET] <wm4> AFAIK yes
[08:56:07 CET] <ldts> lrusak: I'll setup to run some tests on your AV_PIX_FMT_DRM_PRIME patch and see how we can integrate it
[08:57:14 CET] <lrusak> thanks
[08:57:33 CET] <lrusak> should be able to get it to work with mpv with some small changes
[08:58:11 CET] <lrusak> but I'd like to be able to switch between using AV_PIX_FMT_DRM_PRIME and not
[08:58:23 CET] <ldts> yes 
[09:01:39 CET] <ldts> out of curiosity what is the performance improvement that you have seen in Kodi?
[09:08:56 CET] <wm4> <lrusak> but I'd like to be able to switch between using AV_PIX_FMT_DRM_PRIME and not <- what do you mean by that?
[09:12:10 CET] <lrusak> the patch is very intrusive and makes it not possible to use the decoder without outputting AV_PIX_FMT_DRM_PRIME
[09:12:25 CET] <lrusak> ldts: the performance is very good in kodi :)
[09:12:43 CET] <ldts> yeah that is what I thought. we need to get this in
[09:14:02 CET] <wm4> you could always build sw output on top of DRM output (within the decoder)
[09:14:20 CET] <wm4> and map or copy the DRM frames to normal AVFrames
[09:17:01 CET] <nevcairiel> wasn't there the problem that drm frames could be in unexpected formats, i.e. tiled or whatever?
[09:18:26 CET] <lrusak> hmm
[09:19:16 CET] <wm4> nevcairiel: in some cases, I guess (like the stuff the rockchip wrapper outputs, but which isn't real v4l)
[09:20:02 CET] <lrusak> the rockchip decoder in ffmpeg can only output AV_PIX_FMT_DRM_PRIME  IIRC
[09:22:16 CET] <lrusak> can't I simply check the requested pix format and ifdef the code?
[09:22:45 CET] <wm4> the frames the rockchip wrapper outputs can probably be mapped using the hwcontext functions, except that won't work for its weird 10 bit format
[09:23:07 CET] <wm4> but there's probably no such problem for v4l (yet)
[09:36:19 CET] <ldts> lrusak: the format for the context is assigned during v4l2_probe_driver (to understand the kernel capabilities and do the necessary translations). why can't it be done at that level (context) instead of hardcoding it in the v4l2_buffer?
[09:38:38 CET] <lrusak> it can sure
[09:44:00 CET] <ldts> then once you know the context format, just switch behaviours at runtime in free_buffer, buf_to_avframe?
[09:46:28 CET] <lrusak> yea
[10:04:51 CET] <ldts> I think during that probe you could check if dmabuf is supported...still about your patch, context_init is not changed so all the buffers are created as V4L2_MEMORY_MAP?
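The map-or-copy fallback wm4 describes above can be sketched against the public lavu hwcontext API. This is an untested sketch, not how any existing decoder does it; the function name `drm_to_sw_frame` is made up for illustration:

```c
/* Try a zero-copy map of an AV_PIX_FMT_DRM_PRIME frame first,
 * fall back to a full copy if the driver can't map it. */
#include <libavutil/frame.h>
#include <libavutil/hwcontext.h>

static int drm_to_sw_frame(const AVFrame *drm_frame, AVFrame **out)
{
    AVFrame *sw = av_frame_alloc();
    int ret;

    if (!sw)
        return AVERROR(ENOMEM);

    /* sw->format left at AV_PIX_FMT_NONE: lavu picks the first
     * software format the hw frames context reports as supported */
    ret = av_hwframe_map(sw, drm_frame, AV_HWFRAME_MAP_READ);
    if (ret < 0) {
        av_frame_unref(sw);
        ret = av_hwframe_transfer_data(sw, drm_frame, 0);
    }
    if (ret < 0) {
        av_frame_free(&sw);
        return ret;
    }

    *out = sw;
    return 0;
}
```

As discussed above, the map path won't work for formats the hwcontext can't describe (e.g. the rockchip wrapper's tiled 10 bit output), which is where the copy fallback matters.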
[11:14:21 CET] <cone-136> ffmpeg 03Carl Eugen Hoyos 07master:af964baf0906: configure: Simplify detection of static x264 on systems without pkg-config.
[14:45:20 CET] <cone-181> ffmpeg 03James Almer 07release/2.8:a1433196b8d5: avformat/libssh: check the user provided a password before trying to use it
[14:45:20 CET] <cone-181> ffmpeg 03James Almer 07release/3.0:ef95789c8c49: avformat/libssh: check the user provided a password before trying to use it
[14:45:20 CET] <cone-181> ffmpeg 03James Almer 07release/3.1:da1132264104: avformat/libssh: check the user provided a password before trying to use it
[14:45:20 CET] <cone-181> ffmpeg 03James Almer 07release/3.2:4fb5f391ae0b: avformat/libssh: check the user provided a password before trying to use it
[14:45:22 CET] <cone-181> ffmpeg 03James Almer 07release/3.3:f85b102c8011: avformat/libssh: check the user provided a password before trying to use it
[14:50:40 CET] <bogdanc> test
[15:01:17 CET] <LongChair> does anyone here know if ffmpeg reads and stores the hdr metadata information somewhere ?
[15:02:17 CET] <LongChair> it looks like the display standard defined by CEA would require that type of information struct :
[15:02:19 CET] <LongChair> https://www.irccloud.com/pastebin/NqHWKE1V/
[15:02:39 CET] <nevcairiel> yes that stuff is parsed from SEI and send as side data
[15:03:40 CET] <LongChair> ok 
[15:04:09 CET] <LongChair> So for a decoder, this would come as part of avctx ? 
[15:04:22 CET] <nevcairiel> part of avframe
[15:05:22 CET] <LongChair> ok so it's coming with avframe side data ?
[15:05:51 CET] <jamrial> yes. see libavutil/mastering_display_metadata.h
[15:09:18 CET] <jamrial> for vp9 you need to look at the container's avstream side data (matroska, mp4), since hdr info is not part of the bitstream like in hevc
[15:24:07 CET] <LongChair> jamrial: hmmm i'll have a few questions just to make sure i got this right, lemme rephrase
[15:24:26 CET] <LongChair> so that struct comes in the stream and is read from the stream by ffmpeg
[15:26:25 CET] <LongChair> from a decoder standpoint when being called with `xxx_receive_frame` function, the frame passed to it already has that data in side_data of the AvFrame
[15:26:48 CET] <jamrial> afaik, yes
[15:29:13 CET] <LongChair> so https://www.ffmpeg.org/doxygen/3.0/mastering__display__metadata_8h_source.html has got part of info of the CEA struct above
[15:29:26 CET] <LongChair> basically display_primaries & luminance
[15:29:45 CET] <LongChair> eotf can be guessed from colorspace properties i suppose
[15:30:06 CET] <JEEB> look at the trunk doxygen
[15:30:06 CET] <LongChair> then remains type & max_fall & min/max_cll
[15:30:15 CET] <JEEB> because I'm pretty sure MaxFALL etc are there
[15:30:23 CET] <JEEB> in the mastering display metadata
[15:30:49 CET] <JEEB> https://www.ffmpeg.org/doxygen/trunk/structAVContentLightMetadata.html
[15:30:53 CET] <LongChair> ok so it has another struct for min/max cll 
[15:30:53 CET] <jamrial> content light metadata is also defined there. as JEEB said, look at the most recent doxy
[15:31:40 CET] <LongChair> ok so if i'm right all that's missing is type & max_fall ? 
[15:32:12 CET] <JEEB> what
[15:32:22 CET] <JEEB> didn't I just link MaxFALL at you?
[15:32:50 CET] <JEEB> also the same header from current master is https://www.ffmpeg.org/doxygen/trunk/mastering__display__metadata_8h_source.html
[15:33:41 CET] <JEEB> never, ever look at some old release to see if something is there or not. always look at master ("trunk" because of hysterical raisins), and then, if you're using an old version, look it up in that version to see if the stuff is there
[15:33:46 CET] <nevcairiel> eotf/type can be gathered from colorspace/transfer function, iirc
[15:33:51 CET] <JEEB> yes
[15:33:56 CET] <nevcairiel> all others are part  of the metadata
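Pulling the fields discussed above off a decoded frame looks roughly like this. A sketch against the master headers (`libavutil/mastering_display_metadata.h`); the function name is made up, and as noted there is no MinCLL field anywhere:

```c
#include <libavutil/frame.h>
#include <libavutil/mastering_display_metadata.h>

static void read_hdr_side_data(const AVFrame *frame)
{
    AVFrameSideData *sd;

    sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
    if (sd) {
        const AVMasteringDisplayMetadata *mdm =
            (const AVMasteringDisplayMetadata *)sd->data;
        if (mdm->has_primaries) {
            /* display_primaries[3][2] and white_point[2]:
             * AVRationals in CIE 1931 xy coordinates */
        }
        if (mdm->has_luminance) {
            /* min_luminance / max_luminance of the mastering
             * display, in cd/m^2 */
        }
    }

    sd = av_frame_get_side_data(frame, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
    if (sd) {
        const AVContentLightMetadata *clm =
            (const AVContentLightMetadata *)sd->data;
        /* MaxCLL / MaxFALL in cd/m^2 */
    }
}
```

The eotf/"type" side of the CEA struct comes from the frame's own `color_trc`/colorspace fields rather than side data, per nevcairiel's comment; for vp9, the same structs arrive as AVStream side data from the container instead.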
[15:36:31 CET] <cone-181> ffmpeg 03James Almer 07master:ef21033c327a: avutil/mastering_display_metadata: fix copyright header wrongly formated as doxy
[15:36:40 CET] <LongChair> nevcairiel: well there are two fields i don't see
[15:36:59 CET] <LongChair> minCLL doesn't seem to be there .. is that like zero all the time ?
[15:37:25 CET] <LongChair> also the type field in CEA struct ... 
[15:38:03 CET] <nevcairiel> can't provide any value that is not communicated from the codec anywhere
[15:38:18 CET] <nevcairiel> should read the CEA spec to find out what type wants
[15:38:41 CET] <JEEB> there's a min_luminance field that is the minimum luminance of the mastering display if that's the minCLL you want :P
[15:38:52 CET] <JEEB> although it probably isn't
[15:38:56 CET] <LongChair> nope, looks to be a separate field
[15:38:58 CET] <JEEB> since MaxCLL is the content light level
[15:39:02 CET] <LongChair> https://www.irccloud.com/pastebin/NqHWKE1V/
[15:39:11 CET] <JEEB> as opposed to the mastering screen
[15:41:33 CET] <LongChair> it's funny that minCLL is not part of the stream metadata
[15:41:41 CET] <LongChair> type could be something different
[15:41:49 CET] <jamrial> neither the content light level SEI in the hevc spec nor the Matroska spec defines minCLL, so as nevcairiel said, we can't provide anything the codec or container doesn't communicate
[15:42:05 CET] <LongChair> yeah got that, i'll assume it's zero then
[15:42:46 CET] <LongChair> only need to figure out what they want as type and should be good :)
[15:42:56 CET] <LongChair> thanks for the info to both of you :)
[15:43:10 CET] Action: JEEB only sees minCLL in the samsung HDR10+ stuff
[15:43:21 CET] <JEEB> or well, discussions related to that
[15:43:35 CET] <JEEB> the non-dynamic metadata seems to only have MaxCLL/MaxFALL which we have
[15:44:45 CET] <nevcairiel> I have CEA-861.3 from 2015, and that doesn't even include MinCLL, it ends after MaxFALL
[15:46:42 CET] <LongChair> funny
[15:47:42 CET] <LongChair> the intel and rockchip drm `HDR_SOURCE_METADATA` property definitions in the linux source code both seem to have that in the struct
[15:49:04 CET] <nevcairiel> well the HDMI spec doesnt include it =p
[15:49:50 CET] <nevcairiel> unless its for some newer thing that this 2015 standard didnt include yet
[15:51:14 CET] <LongChair> oh well :) i'll try without it for now :)
[15:51:22 CET] <LongChair> see what happens
[15:57:36 CET] <philipl> BtbN: fun times
[15:57:58 CET] <BtbN> well, adapting that hlsl one to CUDA should be easy enough
[15:58:04 CET] <BtbN> But they seem to use it as a scaler...?
[15:58:27 CET] <philipl> Technically, deinterlacing is a form of scaling :-)
[15:58:34 CET] <philipl> just in one direction
[15:58:44 CET] <nevcairiel> all those *EDI deinterlacers are just fancy scalers to upscale the fields
[15:58:44 CET] <BtbN> well, one that needs backward and forward references
[15:59:05 CET] <nevcairiel> does it use temporal hints?
[15:59:17 CET] <BtbN> I don't think NEDI does
[15:59:26 CET] <BtbN> But there are some that do
[15:59:39 CET] <nevcairiel> of course, and those are likely less useful as pure scalers
[16:00:01 CET] <BtbN> I wonder how hard a proper Motion-Adaptive deinterlacer would be
[16:00:07 CET] <JEEB> hah, looking up mincll led me to this funny thread http://www.avsforum.com/forum/465-high-dynamic-range-hdr-wide-color-gamut-wcg/2891281-hdr10-samsung-qled-hdr10-summit.html#post54191537 (this and latter comments)
[16:00:08 CET] <BtbN> And if it's even possible
[16:08:29 CET] <philipl> heh. As with all things, it's going to be a mess for a while.
[16:09:10 CET] <philipl> BtbN: Should we bother asking the nvidia guy if they want to volunteer a deinterlacing filter? I assume they'd want to keep theirs proprietary...
[16:11:32 CET] <BtbN> I'll try to come up with a CUDA NEDI one this weekend
[16:13:09 CET] <BtbN> only annoying thing is, it will always be non-free
[16:14:13 CET] <philipl> Yeah. And so hardly ever in the build anyone uses
[16:15:05 CET] <JEEB> what did that crap have interop with? :P
[16:15:23 CET] <BtbN> OpenGL
[16:15:25 CET] <JEEB> so that the filter could possibly be utilized without CUDA being involved
[16:15:37 CET] <BtbN> OpenGL, and, well D3D
[16:15:39 CET] <JEEB> ok, that sounds like what mpv is doing might be viable :P
[16:15:51 CET] <JEEB> which is having GLSL which is then compiled into HLSL on windows
[16:15:57 CET] <atomnuker> what about vulkan, do they still not support their foreign fd extension in their drivers?
[16:16:06 CET] <BtbN> they don't
[16:16:08 CET] <JEEB> (there's a native d3d11 renderer now based on that instead of ANGLE)
[16:16:11 CET] <BtbN> Vulkan is way too new
[16:16:32 CET] <JEEB> so in theory you could utilize something similar
[16:16:56 CET] <JEEB> and not make it nonfree while being usable in that specific scenario, maybe?
[16:17:14 CET] <BtbN> a proper OpenGL filtering infrastructure in ffmpeg would indeed be interesting
[16:17:15 CET] <atomnuker> BtbN: but nvidia has had vulkan drivers since the specs were released? or has it been too new to get a cuda interop still?
[16:17:26 CET] <BtbN> CUDA is rather slow moving
[16:17:27 CET] <JEEB> BtbN: yea, that's the point where it gets iffy with the current infra
[16:17:42 CET] <JEEB> mpv just happened to have various rendering infra
[16:17:52 CET] <BtbN> Pretty much every single OpenGL action would have to be done in threads
[16:17:58 CET] <BtbN> to not mess up contexts
[16:19:16 CET] <JEEB> for example I'd welcome an open source deinterlacer that could be utilized - and compiled to be usable with opengl/vulkan/d3d11 (which is what mpv's thing kind of tries to do with GLSL as the base language)
[16:19:19 CET] <bogdanc> What ide do you guys use?
[16:19:37 CET] <atomnuker> JEEB: no, absolutely not
[16:19:59 CET] <JEEB> atomnuker: I did not necessarily mean in the context of FFmpeg
[16:20:03 CET] <JEEB> we've had our own crap with opengl etc
[16:20:05 CET] <atomnuker> I'm not for having a unified opengl/vulkan/d3d11 again
[16:20:16 CET] <JEEB> lol
[16:20:26 CET] <atomnuker> you'll get a classical buggy jack of all trades, master of none
[16:20:33 CET] <atomnuker> sure it'll be compatible
[16:20:44 CET] <atomnuker> but it'll be less compatible than just regular cpu filters with simd
[16:20:58 CET] <JEEB> well what I really care about are opengl and d3d11 and those seem to work on intel lunix and nvidia windows
[16:21:12 CET] <JEEB> vulkan seems to be the one requiring crap for GLSL shader usage which is bad
[16:21:22 CET] <atomnuker> just compile time
[16:21:24 CET] <JEEB> since in theory we don't want to do GLSL anyways
[16:21:32 CET] <JEEB> (at the vulkan level)
[16:21:49 CET] <nevcairiel> vulkan's shader compiling topic is entirely lol
[16:22:12 CET] <atomnuker> just call glslang and call it a day, it's what everyone who deals with it wants you to do
[16:22:42 CET] <atomnuker> else you get into the land of bad, unstable apis and programs with too many deps to compile comfortably
[16:23:15 CET] <nevcairiel> I like OpenGL, you can just write the shader and it's handled at runtime, why did they have to screw that up so badly
[16:24:14 CET] <atomnuker> to save runtime stalls as you compile new shaders because mobile gpus can't handle too many at once? dunno
[16:52:55 CET] <philipl> opengl based deinterlacing really has to happen in the rendering path doesn't it. ie: it would be an mpv feature. To avoid all that weirdness.
[16:53:46 CET] <JEEB> yup
[16:53:55 CET] <JEEB> but then again you want similar stuff in the encoding path as well
[16:54:14 CET] <JEEB> and both are GPU things. so while I would agree with probably "current FFmpeg is not the place for that, maybe"
[16:54:31 CET] <JEEB> I would say that it'd be really unfortunate if open source deinterlacing stuff would go CUDA :P
[16:54:36 CET] <JEEB> due to its nature
[16:55:09 CET] <philipl> It's less that it has to be CUDA but more that you'd write it in OpenCL for intel/amd and CUDA for nvidia because nvidia has no OpenCL interop
[16:56:04 CET] <philipl> Of course, both intel and amd expose their deinterlacing in a way that's compatible with the ffmpeg filter model so you don't need to write it in the first place.
[16:56:05 CET] <JEEB> philipl: well I did ask what CUDA had interop with
[16:56:18 CET] <JEEB> which came up as d3d11/opengl
[16:56:28 CET] <philipl> JEEB: right. which brings us back here :-)
[16:56:30 CET] <JEEB> and thus my lines came up
[16:57:17 CET] <philipl> good ol' nvidia.
[17:14:54 CET] <bogdanc> so, i think i found some link to a bug posted in the mentored projects (gsoc 2018), but i'm having problems debugging the software
[17:15:08 CET] <bogdanc> would somebody give me a hint as to what IDEs you use?
[17:15:23 CET] <bogdanc> because i'd rather not use gdb debugger
[17:16:12 CET] <bogdanc> do you use eclipse? codeblocks?
[17:16:40 CET] <JEEB> most people just use a text editor, printf debugging, valgrind and gdb/ldb
[17:16:59 CET] <JEEB> not sure how many people try to integrate the FFmpeg build system into an IDE
[17:17:17 CET] <JEEB> IIRC Qt Creator had a nice debugging interface with gdb?
[17:17:24 CET] <bogdanc> i've tried all day and failed so that's why i thought to ask
[17:17:50 CET] <bogdanc> tried all day to import it into an IDE **
[17:17:52 CET] <JEEB> I was able to integrate the L-SMASH build system into Qt Creator so I would guess FFmpeg is possible as well
[17:18:16 CET] <JEEB> automagic imports most likely won't work, you just have to tell the IDE that configuration step is "this" and then you run make for build etc
[17:19:07 CET] <JEEB> personally I just use sublime text with https://github.com/niosus/EasyClangComplete for completion/warnings etc, and then utilize av_log/printf debugging and valgrind + gdb
[17:19:17 CET] <bogdanc> this answers my question, thank you, i wanted to avoid printf / gdb
[17:19:21 CET] <atomnuker> I think everyone hates ldb here though
[17:20:09 CET] <JEEB> bogdanc: personally, as long as you don't strip the debug symbols, gdb is "OK" for basic stuff like setting breakpoints or just seeing where the heck a thing crashed
[17:20:18 CET] <JEEB> for windows I had to build it myself but :welp:
[17:20:24 CET] <JEEB> that didn't take too long
[17:21:59 CET] <bogdanc> i see
[17:22:35 CET] <kierank> i don't think ffmpeg will work in an ide
[17:22:40 CET] <kierank> except windows
[17:23:02 CET] <JEEB> we only support MSVC
[17:23:04 CET] <JEEB> not MSVS
[17:23:34 CET] <JEEB> but yes, since the MSVS debugger can let you just attach to a process like any debugger you could just use MSVS for debugging if you wanted
[17:23:38 CET] <bogdanc> i thought you work on visual studio or something like that
[17:23:59 CET] <JEEB> ahaha, no :) I might build FFmpeg with MSVC once in a blue moon
[17:24:06 CET] <JEEB> but mostly I build on a *nix VM
[17:24:31 CET] <JEEB> and use mingw-w64 cross-compilation toolchains in case I need to build something for windows
[17:24:41 CET] <JEEB> because native windows compilation is both slow and non-rewarding
[17:24:52 CET] <kierank> I don't recommend doing this but I use windows and use a linux vm and share the directory
[17:25:01 CET] <bogdanc> gdb it is ¯\_(ツ)_/¯
[17:26:07 CET] <JEEB> yup
[17:26:11 CET] <JEEB> for actual debugging debugging
[17:27:42 CET] <JEEB> generally printing stuff in tactical places and using valgrind on *nix tends to help a bunch
[17:32:31 CET] <atomnuker> jkqxz: how did -init_hw_device work again
[17:33:09 CET] <atomnuker> you can specify an api but how to specify e.g. a drm/vaapi device?
[17:33:44 CET] <atomnuker> or do you need to add an additional option to specify and init a certain api's device?
[17:36:17 CET] <jkqxz> atomnuker:  RTFM
[17:36:23 CET] <jkqxz> It even has examples!
[17:39:06 CET] <atomnuker> right, "api:device" works
[17:39:32 CET] <atomnuker> but I had to hack ffmpeg_opt to actually set hw_device_ctx
[17:42:28 CET] <jkqxz> Set hw_device_ctx on what?
[17:50:58 CET] <atomnuker> jkqxz: here's the vulkan hwcontext if you want to test/have some comments - https://github.com/atomnuker/FFmpeg/tree/exp_vulkan
[17:51:22 CET] <atomnuker> init the same way as vaapi, e.g. -vulkan_device "device_name"
[17:52:07 CET] <atomnuker> you can hwupload,hwdownload to test how fast it is and to make sure its working and handling frames properly
[17:52:46 CET] <atomnuker> all printouts are AV_LOG_INFO atm so you can see all its doing
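For reference, the command shapes being discussed look roughly like this. The `vulkan` device type and `-vulkan_device` option only exist on atomnuker's branch; the device name and file names are placeholders:

```
# generic form: -init_hw_device type=name:device, then point filters at it
./ffmpeg -init_hw_device vulkan=vk:device_name -filter_hw_device vk \
         -i input.mkv -vf hwupload,hwdownload -f null -

# branch-specific shorthand, analogous to -vaapi_device
./ffmpeg -vulkan_device "device_name" -i input.mkv \
         -vf hwupload,hwdownload -f null -
```

The hwupload,hwdownload round trip exercises frame allocation, transfer and mapping without needing any actual vulkan filter.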
[17:54:04 CET] <nevcairiel> wouldn't it be nicer to have like a generic way to specify devices instead of adding a new $api_device thing for all the things
[17:54:34 CET] <jamrial> atomnuker: looks like you missed one file
[17:54:49 CET] <atomnuker> which one?
[17:58:43 CET] <jamrial> nevermind. github collapses big files
[18:22:00 CET] <durandal_1707> atomnuker: have you tested your audio denoising algo over white noise?
[18:22:42 CET] <atomnuker> yep, it'll filter out anything but speech
[18:36:40 CET] <durandal_1707> atomnuker: useless for music
[18:37:59 CET] <atomnuker> yep
[18:38:34 CET] <atomnuker> but then again cymbals make high frequency white noise, you remove that and you'll remove them too
[18:54:01 CET] <durandal_1707> is ambisonic dead?
[18:56:30 CET] <atomnuker> no, youtube are using it for 360 video
[19:12:42 CET] <KGB> [13FFV1] 15retokromer closed pull request #95: add v4_notes (06master...06patch-2) 02https://git.io/vNfY6
[19:13:36 CET] <durandal_1707> atomnuker: there is no 360 video
[19:19:31 CET] <atomnuker> durandal_1707: there is, sadly - https://www.youtube.com/watch?v=bKV1IS-ATmQ
[19:20:22 CET] <atomnuker> no idea who watches them, I think they only exist so they can literally sell cardboard with some plastic lens
[19:28:11 CET] <BtbN> jkqxz, updating that crap PC only took a whole day. Building opencl enabled ffmpeg master now.
[19:52:20 CET] <BtbN> What exactly do you want tested on nvidia?
[19:57:37 CET] Action: durandal_1707 downloads random 360 video, find audio to be stereo
[20:13:22 CET] <atomnuker> durandal_1707: not sure if ytdl exposes ambisonics
[20:13:45 CET] <atomnuker> I think its only available with opus too
[20:18:48 CET] <atomnuker> updated vulkan branch, rebased framework and wip chromaticaberration filter - https://github.com/atomnuker/FFmpeg/blob/exp_vulkan/libavfilter/vulkan_common.h
[20:55:31 CET] <durandal_1707> atomnuker: there is no really much ambisonics content on yt or anywhere
[21:04:51 CET] <cone-885> ffmpeg 03Martin Vignali 07master:b94cd55155d8: avfilter/x86/vf_interlace : add AVX2 version
[21:14:36 CET] <bogdanc> (as a noob to this program so far) i've finally found where the compiler flags are using "grep -Irnw . -e "gcc" --exclude=\*.{txt,texi,log,c,h,asm}" xD
[21:15:19 CET] <BBB> config.mk?
[21:17:10 CET] <bogdanc> yeap.
[21:40:20 CET] <kierank> bogdanc: why do you want to look at the compiler flags
[21:40:27 CET] <kierank> seems unrelated to cineform
[21:41:24 CET] <bogdanc> well, i'm having trouble breaking using gdb and i wanted to see if the compiler uses the '-g' flag
[21:41:44 CET] <bogdanc> i not managing to put a breakpoint on main or on anything
[21:42:17 CET] <bogdanc> now i rebuilding the project with custom compiler flags but that was a bad ideea so now i build it back again
[21:42:44 CET] <bogdanc> now i've rebuilt the project *
[21:42:45 CET] <kierank> you need to use ffmpeg_g
[21:42:48 CET] <kierank> with symbols
[21:43:32 CET] <bogdanc> that's a good advice
[21:43:34 CET] <bogdanc> thank you
[21:44:03 CET] <bogdanc> a very useful one actually.
[21:45:58 CET] <bogdanc> <3 it feels so good to finally put a breakpoint
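kierank's advice spelled out: the FFmpeg build produces both a stripped `ffmpeg` and an unstripped `ffmpeg_g` in the source tree, and breakpoints only resolve against the latter. A typical session (input file is a placeholder):

```
$ gdb --args ./ffmpeg_g -i input.mov -f null -
(gdb) break main
(gdb) run
```

The same applies to the other tools (`ffprobe_g`, `ffplay_g`), and `valgrind ./ffmpeg_g ...` wants the unstripped binary for readable stack traces too.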
[21:51:18 CET] <atomnuker> jamrial: are you going to apply the BUFFER_PADDING size increase patch soon?
[21:52:04 CET] <jamrial> was waiting to see if someone else commented or if michael found another case of that define being used in some logic
[21:52:23 CET] <jamrial> but ok, i'll push it later today
[21:55:06 CET] <atomnuker> thanks
[23:19:44 CET] <kierank> atomnuker: https://board.asm32.info/asmbb_how_to_download_compile_and_install_49/
[23:20:57 CET] <atomnuker> okay that is blazing fast really
[23:22:26 CET] <kierank> the templeos of forum software
[23:26:05 CET] <atomnuker> its nice to see such a fast non-static website, though I guess that has to do with the lack of js anywhere
[23:32:07 CET] <wm4> I hate forums
[23:32:26 CET] <wm4> fortunately they're sort of dying
[23:32:35 CET] <wm4> unfortunately because people use social media garbage instead
[23:34:25 CET] <kierank> wm4: you should use twitter
[23:36:33 CET] <wm4> everyone seems to agree that twitter's UI is terrible
[23:37:11 CET] <atomnuker> I like the video they put on the webpage
[23:37:29 CET] <atomnuker> a few mbps hls for a 90x30 video
[23:41:30 CET] <wm4> should have coded the video in asm
[23:42:08 CET] <RiCON> patch to remove the annoying net neutrality popup: https://0x0.st/sNNf.txt
[23:43:05 CET] <nevcairiel> you would think those people would've managed to make something that turns itself off after the date has long passed
[23:46:45 CET] <jkqxz> I'm more surprised that apparently a load of people let random third-party scripts run on websites.
[23:47:38 CET] <wm4> it's the cool trendy thing to do
[23:47:39 CET] <jkqxz> BtbN:  See <http://ffmpeg.org/pipermail/ffmpeg-user/2018-January/038510.html>.  The interesting test is whether this is something to do with Nvidia, or just something weird in that user's setup.
[23:48:11 CET] <BtbN> does it matter what sample I use?
[23:48:16 CET] <jkqxz> No.
[23:49:08 CET] <jkqxz> Or at least, I don't think it should.  I suppose it could.
[23:50:21 CET] <BtbN> what is this black at 1 color? I suspect the ML messes it up?
[23:50:35 CET] <BtbN> is it black at 1?
[23:50:52 CET] <BtbN> seems to be
[23:51:15 CET] <BtbN> I get the exact same error as in the post
[23:51:19 CET] <BtbN> on a GTX1050
[23:51:41 CET] <jkqxz> Aha.
[23:51:55 CET] <nevcairiel> way too many websites just fall apart when you dont
[23:52:40 CET] <jkqxz> ... though I'm not really sure what to do with that information.
[23:52:57 CET] <BtbN> it's not exactly a very useful debug output
[23:52:59 CET] <jkqxz> Are there any Nvidia tools which might tell you whether anything happened?
[23:53:56 CET] <jkqxz> The -36 is CL_INVALID_COMMAND_QUEUE, which doesn't make any sense at all (we enqueued the kernels on it immediately before).
[23:55:01 CET] <BtbN> does the overlay filter use the program_opencl filter internally?
[23:55:02 CET] <jkqxz> Google suggests that clFinish() can return that in some unclear cases possibly involving timeout, but I don't see anything to narrow that down.
[23:56:23 CET] <jkqxz> No, it has some built-in kernels (see lavfi/opencl/overlay.cl) and compiles/runs them.  (The building stuff is common with the program filter.)
[23:56:49 CET] <BtbN> https://bpaste.net/show/332676cb90fa there's this in dmesg every time I run
[23:57:15 CET] <wm4> I should probably try opencl->gl interop
[23:57:17 CET] <wm4> just because
[23:58:13 CET] <nevcairiel> "If the execution of a command is terminated, the command-queue associated with this terminated command, and the associated context (and all other command-queues in this context) may no longer be available. The behavior of OpenCL API calls that use this context (and command-queues associated with this context) are now considered to be implementation- defined. The user registered callback function specified when context is created 
[23:58:13 CET] <nevcairiel> can be used to report appropriate error information."
[23:58:25 CET] <nevcairiel> apparently execution failures can kill your command queue asynchronously
[23:58:40 CET] <BtbN> this looks more like it kills the entire driver
[23:59:20 CET] <jkqxz> nevcairiel:  I think you should still get something a bit more sane.  The error callback function is registered here, and we don't see anything from it.
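The callback jkqxz refers to is the `pfn_notify` argument of `clCreateContext`; a minimal sketch of how such a callback gets registered (helper name is made up, and on Nvidia the driver may apparently kill the queue without ever invoking it):

```c
#include <stdio.h>
#include <CL/cl.h>

/* Called by the implementation with error details for this context. */
static void CL_CALLBACK ctx_notify(const char *errinfo,
                                   const void *private_info,
                                   size_t cb, void *user_data)
{
    fprintf(stderr, "OpenCL context error: %s\n", errinfo);
}

static cl_context create_ctx_with_notify(cl_device_id dev)
{
    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &dev,
                                     ctx_notify, NULL, &err);
    return err == CL_SUCCESS ? ctx : NULL;
}
```

Per the spec text quoted above, once a command is terminated the context and all its queues are implementation-defined, so this callback is the only sanctioned channel for error details.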
[00:00:00 CET] --- Fri Jan 12 2018


More information about the Ffmpeg-devel-irc mailing list