[Ffmpeg-devel-irc] ffmpeg-devel.log.20180529

burek burek021 at gmail.com
Wed May 30 03:05:03 EEST 2018


[01:21:25 CEST] <klaxa> hrmpf, been listening to a stream for 2 hours now and the skipping didn't happen... but i already found more bugs :V
[05:42:15 CEST] <hapyt0wn> Hey I'm interested in detecting changes in powerpoint slides during a video and getting a hook set up
[05:42:21 CEST] <hapyt0wn> anyone know how to do that?
[06:57:16 CEST] <cone-059> ffmpeg 03Vishwanath Dixit 07master:f09635f2a2e6: avformat/utils: function to get the formatted ntp time
[06:57:16 CEST] <cone-059> ffmpeg 03Vishwanath Dixit 07master:5717cd80dcb8: avformat/movenc: creating producer reference time (PRFT) box
[08:26:15 CEST] <j-b> kierank: all of them? hevc?
[08:30:56 CEST] <atomnuker> which arch?
[08:32:58 CEST] <JEEB> hevc is slow on all I think? vp9 is a very nice contrast to it at least on x86_64
[08:35:57 CEST] <nevcairiel> hevc is indeed relatively slow, but that's not just a SIMD thing; it could probably use some general review of performance
[08:36:37 CEST] <cone-059> ffmpeg 03Gyan Doshi 07master:cba167934bb2: doc/ffmpeg: update disposition values
[08:38:15 CEST] <durandal_1707> j-b: all codecs are slow
[08:38:39 CEST] <j-b> durandal_1707: sorry, it was a morning joke ;)
[08:38:53 CEST] <j-b> atomnuker: I would say x64 for hevc 
[08:39:23 CEST] <j-b> kierank: cf the talk at SES industry days...
[08:55:19 CEST] <hanna> atomnuker: chromaticaberration_vulkan
[08:55:24 CEST] <hanna> No such filter: 'chromaticaberration_vulkan' *
[08:56:03 CEST] <hanna> Oh sorry
[08:56:14 CEST] <hanna> [hwupload @ 0x557ec7915e40] A hardware device reference is required to upload frames to.
[08:57:15 CEST] <hanna> this is me trying to run ./ffmpeg -f lavfi -i testsrc=duration=10:size=1920x1080:rate=60 -vf format=rgba,hwupload,chromaticaberration_vulkan,hwdownload,format=rgba /mem/out.mkv
[08:57:36 CEST] <hanna> I guess the -init_hw_device and -filter_hw_device options are relevant but I have no idea how to use them for vulkan
[08:57:55 CEST] <hanna> it's not like any of this is documented
[08:59:35 CEST] <hanna> Okay, after looking at the source cod^W^Wdocumentation, and your other examples, I managed to come up with this:
[08:59:37 CEST] <hanna> ./ffmpeg -init_hw_device vulkan=vk -f lavfi -i testsrc=duration=10:size=1920x1080:rate=60 -filter_hw_device vk -vf format=rgba,hwupload,chromaticaberration_vulkan,hwdownload,format=rgba /mem/out.mkv
[09:00:15 CEST] <hanna> This segfaults
[09:01:31 CEST] <hanna> atomnuker: https://0x0.st/s2WC.txt
[09:01:47 CEST] <hanna> fun
[09:02:31 CEST] <hanna> let me retry with debug symbols
[09:04:11 CEST] <hanna> Huh, I tried building with CFLAGS="-Og -g" but I don't get debug symbols
[09:06:28 CEST] <hanna> Yeah, still no dice
[09:07:31 CEST] <nevcairiel> you need to use ffmpeg_g if you want debug symbols
[09:07:35 CEST] <nevcairiel> ffmpeg is always stripped
[09:09:16 CEST] <hanna> oh
[09:10:23 CEST] <hanna> https://0x0.st/s24z.txt
[09:10:27 CEST] <hanna> still not sure why it's optimizing stuff out
[09:11:12 CEST] <hanna> Might be an upstream issue anyway by the looks of it
[09:11:17 CEST] Action: hanna rebuilds mesa
[09:15:17 CEST] <hanna> I think I see the bug
[09:20:15 CEST] <hanna> atomnuker: Judging by the RADV source code it would appear as though you're supposed to put a VK_STRUCTURE_TYPE_EXTERNAL_IMAGE_FORMAT_PROPERTIES_KHR into the pNext chain of your VkImageFormatProperties2 props
[09:20:32 CEST] <JEEB> hanna: btw if you want to tell FFmpeg to never strip, --disable-stripping
[09:20:46 CEST] <hanna> atomnuker: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkPhysicalDeviceExternalImageFormatInfo.html this documentation appears to confirm that
[09:20:52 CEST] <hanna> To determine the image capabilities compatible with an external memory handle type, add VkPhysicalDeviceExternalImageFormatInfo to the pNext chain of the VkPhysicalDeviceImageFormatInfo2 structure and VkExternalImageFormatProperties to the pNext chain of the VkImageFormatProperties2 structure.
[09:21:04 CEST] <hanna> You never add a VkExternalImageFormatProperties struct to your VkImageFormatProperties2 chain
[09:21:06 CEST] <hanna> Therefore it segfaults
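
For reference, hanna's diagnosis translates into roughly the following C, using the core-1.1 struct names (the KHR-suffixed aliases from the log behave the same). This is a sketch, not atomnuker's actual patch; phys_dev and the dma-buf handle type are assumptions:

    #include <vulkan/vulkan.h>

    /* Both pNext chains must be extended: the input chain gets a
     * VkPhysicalDeviceExternalImageFormatInfo, and the output chain gets
     * a VkExternalImageFormatProperties. Per hanna's reading of the RADV
     * source, omitting the latter is what leads to the segfault. */
    static VkResult query_external_image_caps(VkPhysicalDevice phys_dev)
    {
        VkPhysicalDeviceExternalImageFormatInfo ext_info = {
            .sType      = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_EXTERNAL_IMAGE_FORMAT_INFO,
            .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
        };
        VkPhysicalDeviceImageFormatInfo2 info = {
            .sType  = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGE_FORMAT_INFO_2,
            .pNext  = &ext_info,
            .format = VK_FORMAT_R8G8B8A8_UNORM,   /* placeholder */
            .type   = VK_IMAGE_TYPE_2D,
            .tiling = VK_IMAGE_TILING_OPTIMAL,
            .usage  = VK_IMAGE_USAGE_SAMPLED_BIT,
        };
        VkExternalImageFormatProperties ext_props = {
            .sType = VK_STRUCTURE_TYPE_EXTERNAL_IMAGE_FORMAT_PROPERTIES,
        };
        VkImageFormatProperties2 props = {
            .sType = VK_STRUCTURE_TYPE_IMAGE_FORMAT_PROPERTIES_2,
            .pNext = &ext_props,   /* the chain entry that was missing */
        };
        return vkGetPhysicalDeviceImageFormatProperties2(phys_dev, &info, &props);
    }
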
[09:28:36 CEST] <atomnuker> hanna: force pushed a fix to my repo
[09:28:56 CEST] <atomnuker> anv seems to be more lenient
[09:29:33 CEST] <atomnuker> you can just comment that part out too, it would only be used for exporting dmabufs from vkimages
[09:31:50 CEST] <hanna> Device creation failed: -12.
[09:31:52 CEST] <hanna> Failed to set value 'vulkan=vk' for option 'init_hw_device': Cannot allocate memory
[09:35:01 CEST] <atomnuker> try vulkan=vk:0
[09:53:53 CEST] <kierank> j-b: meh, hevc
[09:58:52 CEST] <merbanan> hanna: g=3
[10:13:15 CEST] <durandal_1707> atomnuker: the quality of anlmeans for speech is passable for me; for music it's bad, but any denoiser is bad in that case. what do you think?
[10:16:08 CEST] <atomnuker> I agree, most music is hard to denoise (though it should still do fine with piano music)
[10:16:14 CEST] <atomnuker> I'll test the patch tonight
[11:36:59 CEST] <hanna> atomnuker: same error
[11:37:05 CEST] <hanna> merbanan: huh?
[11:37:38 CEST] <hanna> atomnuker: Wild stab in the dark: The logic you use to find an appropriate memory type doesn't work for RADV because you made assumptions about available memory types that only work for iGPUs? :p
[11:37:45 CEST] <hanna> And therefore it fails allocating memory?
[11:38:00 CEST] <hanna> Oh, it says "Device creation failed: -12" before that though
[11:38:53 CEST] <merbanan> hanna: g=3 gives more debug info
[11:40:11 CEST] <hanna> I don't know how to set that
[11:40:13 CEST] <hanna> Or where
[11:40:15 CEST] <hanna> Or what this is about
[11:41:02 CEST] <merbanan> when you configure you can add that
[11:41:34 CEST] <hanna> Oh
[11:41:52 CEST] <hanna> What's the exact syntax for that?
[11:41:56 CEST] <hanna> ./configure g=3 ?
[11:43:00 CEST] <merbanan> hmmm,  --enable-debug=3 I think
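
(Putting the hints together: the invocation merbanan is thinking of would be ./configure --enable-debug=3, and adding --disable-stripping per JEEB's note above keeps symbols in the installed ffmpeg binary as well; otherwise use ffmpeg_g as nevcairiel said.)
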
[11:47:07 CEST] <hanna> [AVHWDeviceContext @ 0x55e9a65bdb00] Failed to allocate memory: VK_ERROR_INVALID_EXTERNAL_HANDLE
[11:47:12 CEST] <hanna> this is the first error
[11:47:45 CEST] <hanna> atomnuker: ^
[11:57:36 CEST] <atomnuker> it works for jkqxz, something must be different with your system
[11:58:19 CEST] <atomnuker> just try ./ffmpeg_g -init_hw_device "vulkan=vk:0" -i <something> -f null -
[11:58:30 CEST] <atomnuker> to see if it can init without doing anything
[12:08:45 CEST] <hanna> atomnuker: that works
[12:08:59 CEST] <hanna> I probably don't have some whatever interop extension for whatever
[12:09:16 CEST] <jkqxz> What case are you trying here?
[12:09:43 CEST] <hanna> I have mesa built without support for wayland but with vaapi and with vdpau
[12:09:49 CEST] <hanna> jkqxz: I was running exactly this:
[12:10:23 CEST] <hanna> ./ffmpeg_g -init_hw_device vulkan=vk:0 -f lavfi -i testsrc=duration=10:size=1920x1080:rate=60 -filter_hw_device vk -vf format=rgba,hwupload,chromaticaberration_vulkan,hwdownload,format=rgba -f null -
[12:10:43 CEST] <hanna> log file: https://0x0.st/s246.txt
[12:11:11 CEST] <jkqxz> I'm not sure I ran something like that on radv.
[12:11:27 CEST] <jkqxz> I can check later, not next to the machine right now.
[12:11:51 CEST] <jkqxz> The YUV stuff didn't work, so I mostly concentrated on anv after testing a few others (including Windows).
[15:18:30 CEST] <cone-578> ffmpeg 03Sergey Lavrushkin 07master:bdf1bbdbb4eb: Adds dnn inference module for simple convolutional networks. Reimplements srcnn filter based on it.
[15:44:16 CEST] <gagandeep> kierank: one frame is now giving a respectable output, just need to provide 2nd frame data in global context
[15:44:47 CEST] <gagandeep> there is some blurring in the first frame of each group of 2 frames, but i guess that can be figured out
[15:48:50 CEST] <kierank> gagandeep: can you send screenshot
[15:49:54 CEST] <gagandeep> well the file is a full 1080p video with many frames, so one snap for now
[15:50:21 CEST] <kierank> ok
[15:50:48 CEST] <gagandeep> let me see if ffmpeg can convert without 2nd frame
[15:51:00 CEST] <gagandeep> i was using ffplay
[15:53:24 CEST] <gagandeep> kierank: ffmpeg conversion is causing a bit of problem right now
[15:53:31 CEST] <gagandeep> here is the ffplay frame
[15:53:33 CEST] <gagandeep> screenshot
[15:53:34 CEST] <gagandeep> https://imgur.com/a/f5xkcNA
[15:54:09 CEST] <kierank> gagandeep: what does mountain sample look like
[15:54:31 CEST] <gagandeep> i will need to first bypass the error cfhd is raising with it
[15:54:35 CEST] <gagandeep> so give me some time
[15:54:47 CEST] <gagandeep> i will try to open that sample as well
[15:55:01 CEST] <gagandeep> this sample is the one progressive ip i got from david
[15:55:28 CEST] <gagandeep> by tomorrow i think i can give you converted file of mountain one
[15:55:40 CEST] <gagandeep> need to finish the second frame integration
[15:56:04 CEST] <kierank> ok
[15:56:52 CEST] <gagandeep> also i will send the patch later if i can reduce the amount of code cause it only uses the inverses we had built in cfhd
[15:58:43 CEST] <gagandeep> kierank: does ffmpeg conversion cause problems if frame rate of video doesn't match the number of frames it gets
[15:58:52 CEST] <kierank> shouldn't matter
[15:59:01 CEST] <kierank> if you output to yuv
[15:59:11 CEST] <gagandeep> how would i do that
[15:59:22 CEST] <gagandeep> i have frame data in yuv only
[16:00:52 CEST] <gagandeep> oh, it won't matter if raw video is used while transcoding
[16:02:59 CEST] <gagandeep> and pix_fmt yuv420p is selected
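
(kierank's "output to yuv" presumably means something like ./ffmpeg -i input.mov -pix_fmt yuv420p -f rawvideo out.yuv, with placeholder file names; rawvideo output carries no container timestamps, so the declared frame rate cannot conflict with the number of decoded frames.)
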
[16:03:49 CEST] <gagandeep> ffmpeg output for the ip progressive file is good now
[16:03:58 CEST] <gagandeep> thanks, i was scared for a moment
[16:04:36 CEST] <gagandeep> ./quit
[16:16:57 CEST] <atomnuker> jkqxz: it ran on windows?
[16:36:30 CEST] <jdarnley> A rather generic question for you all.  Can one de-interleave every other value in an array without temporary storage for the whole array?
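
jdarnley's question goes unanswered in the log. For what it's worth the answer is yes: a half-size temporary already avoids "storage for the whole array", and a true O(1)-space version exists via divide-and-conquer with block rotations, at the cost of O(n log n) moves. A sketch (not from the log):

    #include <stddef.h>
    #include <stdio.h>

    /* Reverse a[i..j] inclusive. */
    static void reverse(int *a, size_t i, size_t j)
    {
        while (i < j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++; j--;
        }
    }

    /* Rotate a[0..len) left by k via triple reversal: O(1) extra space. */
    static void rotate_left(int *a, size_t len, size_t k)
    {
        if (!len || !k || k >= len)
            return;
        reverse(a, 0, k - 1);
        reverse(a, k, len - 1);
        reverse(a, 0, len - 1);
    }

    /* Turn x0 y0 x1 y1 ... x(n-1) y(n-1) into x0..x(n-1) y0..y(n-1) in
     * place: de-interleave each half recursively, then rotate the middle
     * block so all x's precede all y's. */
    static void deinterleave(int *a, size_t n)
    {
        size_t m = n / 2;
        if (n < 2)
            return;                       /* 0 or 1 pair is already done */
        deinterleave(a, m);               /* first 2m elems  -> X1 Y1 */
        deinterleave(a + 2 * m, n - m);   /* remaining elems -> X2 Y2 */
        rotate_left(a + m, n, m);         /* Y1 X2 -> X2 Y1 */
    }

    int main(void)
    {
        int a[] = { 10, 20, 11, 21, 12, 22, 13, 23 };
        deinterleave(a, 4);
        for (int i = 0; i < 8; i++)
            printf("%d ", a[i]);          /* 10 11 12 13 20 21 22 23 */
        printf("\n");
        return 0;
    }
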
[16:37:23 CEST] <akravchenko188> hi guys. I have a question. it looks like an HWFrameContext of DXVA2 type does NOT support frame allocation if initial_pool_size=0, i.e. one by one. any reasons for that?
[16:48:45 CEST] <nevcairiel> because that would not be usable with the dxva2 decoder
[16:49:05 CEST] <nevcairiel> it requires a pre-defined pool size
[16:50:52 CEST] <akravchenko188> d3d11va does support single alloc
[16:52:28 CEST] <nevcairiel> not sure any use of dxva2 frames would actually support non-pooled frames, filtering probably also requires all of them from the same pool
[16:52:57 CEST] <nevcairiel> dxva2 is much less flexible than d3d11
[16:53:53 CEST] <akravchenko188> I need it for a filter which outputs dxva2 frames
[16:55:02 CEST] <akravchenko188> so now I have to set initial_pool_size. probably we need to extend hwcontext_dxva2 to allocate single frames
[16:55:21 CEST] <nevcairiel> what can actually properly consume those then, and why not use d3d11?
[16:56:10 CEST] <akravchenko188> I am implementing both dxva2 and d3d11
[16:57:23 CEST] <nevcairiel> maybe skip dxva2 if it's a problem? it's mostly legacy at this point
[16:57:59 CEST] <nevcairiel> the DXVA2 api itself is not really designed for un-pooled frames
[16:58:25 CEST] <akravchenko188> in the case of ffmpeg.exe usage, yes, it is legacy; in the case of library usage it is not
[17:02:17 CEST] <nevcairiel> the only thing we even have that handles dxva2 frames is a decoder that can output them; we don't have any filters or encoders that can consume them. only qsv encoding, sort-of, through deriving into qsv frames, but we could probably implement deriving into d3d11 if that's something that's useful
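
For reference, pinning the pool size up front is just a matter of setting initial_pool_size before initializing the frames context. A sketch against the public lavu API (dimensions, sw_format, and pool size are made-up values; dev_ref is assumed to be an AV_HWDEVICE_TYPE_DXVA2 device from av_hwdevice_ctx_create()):

    #include <libavutil/hwcontext.h>

    static AVBufferRef *make_dxva2_frames(AVBufferRef *dev_ref)
    {
        AVBufferRef *frames_ref = av_hwframe_ctx_alloc(dev_ref);
        AVHWFramesContext *fc;

        if (!frames_ref)
            return NULL;
        fc = (AVHWFramesContext *)frames_ref->data;
        fc->format            = AV_PIX_FMT_DXVA2_VLD;
        fc->sw_format         = AV_PIX_FMT_NV12;
        fc->width             = 1920;
        fc->height            = 1080;
        fc->initial_pool_size = 32;  /* dxva2 wants the whole pool up front */
        if (av_hwframe_ctx_init(frames_ref) < 0)
            av_buffer_unref(&frames_ref);
        return frames_ref;
    }
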
[17:08:24 CEST] <jkqxz> atomnuker:  It did upload/download of RGB on both Intel and AMD on Windows (no YUV support on either of those).  I didn't try to build libshaderc, so no filters.
[17:10:19 CEST] <jkqxz> nevcairiel:  Deriving from D3D11 for libmfx requires the subresource index stuff which wasn't implemented for a long time; apparently Intel have done that now, but I don't know when it will be generally usable.
[17:10:42 CEST] <nevcairiel> not for intel, i meant from dxva2 -> d3d11
[17:17:36 CEST] <jkqxz> akravchenko188:  None of the hwcontext implementations are required to use the allocation from libavutil.  If you have your D3D9 surfaces by some other route you can plug them into a frames context.
[17:18:23 CEST] <jkqxz> nevcairiel:  Oh, right.  That would probably make sense, though I'm not sure what the use-case would actually be.
[17:19:05 CEST] <jkqxz> (I looked at DXVA2 -> D3D11 a while ago as a way to get around the incomplete support in libmfx, but that turned out to be really painful.)
[17:21:44 CEST] <akravchenko188> Does it mean that I could create a d3d surface in amf and attach it to an av_frame for further usage?
[17:25:55 CEST] <akravchenko188> jkqxz: Does it mean that I should create a d3d9 surface in amf, attach it to an av_frame, and pass it to the output of the pipeline?
[17:26:29 CEST] <akravchenko188> In this case I don't need to create an output frames context, right?
[17:27:14 CEST] <jkqxz> Yes, that should work.  I don't know if it's a better route to what you want, though.
[17:27:36 CEST] <jkqxz> You do still need an output frames context, it just wouldn't use the internal allocation.
[17:27:54 CEST] <akravchenko188> Ok
[17:27:58 CEST] <akravchenko188> Thanks
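
Concretely, the external-allocation route jkqxz describes could look like the sketch below: wrap an IDirect3DSurface9 created elsewhere (e.g. by AMF) in an AVFrame tied to an existing dxva2 frames context; for AV_PIX_FMT_DXVA2_VLD, data[3] carries the surface pointer. This is an illustration of the idea, not tested code:

    #define COBJMACROS
    #include <d3d9.h>
    #include <libavutil/frame.h>
    #include <libavutil/hwcontext.h>

    /* Called when the last reference to the wrapping AVFrame is gone. */
    static void release_surface(void *opaque, uint8_t *data)
    {
        IDirect3DSurface9_Release((IDirect3DSurface9 *)data);
    }

    static AVFrame *wrap_surface(AVBufferRef *frames_ref,
                                 IDirect3DSurface9 *surf, int w, int h)
    {
        AVFrame *f = av_frame_alloc();
        if (!f)
            return NULL;
        f->buf[0]        = av_buffer_create((uint8_t *)surf, 0,
                                            release_surface, NULL, 0);
        f->hw_frames_ctx = av_buffer_ref(frames_ref);
        if (!f->buf[0] || !f->hw_frames_ctx) {
            av_frame_free(&f);
            return NULL;
        }
        f->data[3] = (uint8_t *)surf;   /* dxva2 convention */
        f->format  = AV_PIX_FMT_DXVA2_VLD;
        f->width   = w;
        f->height  = h;
        return f;
    }
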
[17:33:45 CEST] <jamrial> should i just push the crypto_bench patch? Thomas Volkert hasn't replied or addressed anything about this tls library for a while now
[22:02:12 CEST] <cone-578> ffmpeg 03Paul B Mahol 07master:73438dbbbc87: avfilter/af_afir: draw IR frequency response
[00:00:00 CEST] --- Wed May 30 2018

