[Ffmpeg-devel-irc] ffmpeg-devel.log.20170831

burek burek021 at gmail.com
Fri Sep 1 03:05:03 EEST 2017


[02:50:12 CEST] <cone-463> ffmpeg 03James Almer 07master:027c682fa079: avfilter/vf_mcdeint: remove usage of deprecated AVCodecContext.me_method
[02:53:37 CEST] <cone-463> ffmpeg 03Martin Vignali 07master:2fcf47e2d175: fate/pixlet : add test for rgb
[03:46:07 CEST] <cone-463> ffmpeg 03James Almer 07master:6e131a7cd970: ffmpeg_opt: add proper deprecation guards to lowres code
[03:57:11 CEST] <rpw> I need more performance from vf_format. The first thing I noticed is that it's not threaded. I'm looking at vf_curves.c for an example of a threaded video filter; however, vf_format.c doesn't have a filter_frame function so I'm not sure where to start.
[04:05:39 CEST] <durandal_170> rpw: format uses swscale or scale
[04:13:09 CEST] <rpw> Thanks. Checking now
[04:17:32 CEST] <kepstin> rpw: the 'format' filter doesn't do anything itself, it just sets allowed formats at a specific point in the stream. Usually (but not always) this means a 'scale' filter is auto-inserted to convert to an allowed format.
[04:19:13 CEST] <rpw> thanks. Is there any WIP on threading swscale?
[04:29:27 CEST] <cone-463> ffmpeg 03James Almer 07master:b34c16a38d3e: fate/flvenc: set bitexact output format flag explicitly
[09:55:29 CEST] <JEEB> > feeding 960 samples to the AAC encoder
[09:55:31 CEST] <JEEB> them artifacts
[10:49:13 CEST] <wm4> I wonder what akamai needs decklink for
[10:57:35 CEST] <JEEB> wm4: I would guess they offer services for customers that include that part
[10:59:24 CEST] <wm4> hm, live streams?
[10:59:32 CEST] <wm4> and stuff like that... could make sense
[11:04:59 CEST] <JEEB> so the audio frame size stuff... should that be handled by avfilter or avcodec? I guess avcodec since it knows the frame size required?
[11:05:31 CEST] <JEEB> and the push/pull mechanism already gives you the way to say "no, feed me more" or request multiple packets one after another without feeding
[11:05:54 CEST] <JEEB> because I'm not sure if all clients should be required to re-invent the audio buffering wheel ^^;
[11:06:08 CEST] <wm4> yeah we could handle that in libavcodec to make the API easier
[11:06:19 CEST] <wm4> but currently you're supposed to feed the frame sizes the encoder expects
[11:06:22 CEST] <JEEB> yea
[11:06:36 CEST] <wm4> libavfilter can do that (there's a function to request a specific number of samples)
[11:06:56 CEST] <JEEB> oh?
[11:07:06 CEST] <JEEB> I tried looking at the audio buffer/sink ones but I must have missed it
[11:07:37 CEST] <wm4> maybe av_buffersink_set_frame_size
[11:07:37 CEST] <JEEB> any grep'able keywords?
[11:07:41 CEST] <JEEB> k
[11:07:50 CEST] <wm4> also av_buffersink_get_samples
[11:07:58 CEST] <wm4> not sure which one you're supposed to use lol
[11:08:32 CEST] <JEEB> lol
[11:09:26 CEST] <JEEB> ffmpeg.c only uses the former
[11:09:28 CEST] <JEEB> so I will go with that
[11:28:41 CEST] <nevcairiel> av_buffersink_get_samples generally works better, because set_frame_size needs to be called pretty early and if the graph pushes samples before you call it, its a bad situation =p
[11:29:26 CEST] <nevcairiel> while get_samples just doesnt care, you could request a different amount every call
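Under either API the library is doing the rebuffering JEEB didn't want every client to reinvent: a sample FIFO that absorbs input frames of arbitrary size and hands out frames of exactly the size the encoder wants (e.g. 1024 samples for AAC). A toy Python sketch of that mechanism — illustrative only, not the libavfilter implementation; the class and method names are made up:

```python
from collections import deque

class SampleFifo:
    """Accumulate samples and emit fixed-size frames on demand."""
    def __init__(self, frame_size):
        self.frame_size = frame_size
        self.buf = deque()

    def push(self, samples):
        self.buf.extend(samples)

    def pull(self):
        """Return one frame of exactly frame_size samples, or None
        if the caller must feed more input first."""
        if len(self.buf) < self.frame_size:
            return None
        return [self.buf.popleft() for _ in range(self.frame_size)]

fifo = SampleFifo(1024)
fifo.push(range(960))        # one 960-sample input frame
assert fifo.pull() is None   # not enough yet -> "no, feed me more"
fifo.push(range(960))        # a second input frame
frame = fifo.pull()          # now a full 1024-sample frame comes out
assert len(frame) == 1024
```

The leftover 896 samples stay buffered for the next pull, which is why a late `set_frame_size` is a problem: samples already pushed downstream in the wrong chunking can't be re-chunked.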
[11:29:35 CEST] <wm4> sounds like a bad implementation rather than an API problem
[11:30:24 CEST] <nevcairiel> avfilter is a bad implementation
[11:30:27 CEST] <nevcairiel> but what can you do
[11:31:55 CEST] <wm4> is directshow better?
[11:34:07 CEST] <nevcairiel> at least its a defined interface anyone can plug their own filters in, but it definitely has its own problems
[11:36:46 CEST] <BtbN> DirectShow is painful
[11:39:26 CEST] <rcombs> keep in mind that the decklink stuff can interface with a bunch of other blackmagic hardware too
[11:42:34 CEST] <durandal_1707> anybody of you know how lens blur works?
[11:43:22 CEST] <durandal_1707> looks like nobody
[11:51:38 CEST] <cone-187> ffmpeg 03Tobias Rapp 07master:b7101151b36c: fate: add tests for some video source filters
[12:10:23 CEST] <atomnuker> durandal_1707: a chromatic aberration filter would be great though
[12:12:21 CEST] <durandal_1707> atomnuker: patch welcome
[12:13:08 CEST] <atomnuker> that I can do
[12:14:02 CEST] <durandal_1707> how?
[12:14:39 CEST] <durandal_1707> im writing a generic video convolver
[12:15:12 CEST] <atomnuker> the filter? its simple to model. how? by writing it today
[12:15:17 CEST] <atomnuker> I had the entire week off
[12:55:39 CEST] <atomnuker> actually this is a lot harder than I thought it would be
[12:55:45 CEST] <atomnuker> much easier in shaders though
[12:58:07 CEST] <atomnuker> and I have all this vulkan boilerplate lying around, and we don't need jit shader compilation
[13:12:39 CEST] <rcombs>  <atomnuker> the filter? its simple to model. how? by writing it today
[13:12:40 CEST] <rcombs>  <atomnuker> actually this is a lot harder than I thought it would be
[13:12:43 CEST] <rcombs> is this a record
[13:12:51 CEST] <rcombs> how long between those lines :P
[13:13:06 CEST] <rcombs> atomnuker: gonna try to sell the tech to kyoani?
[13:14:59 CEST] <atomnuker> 40 minutes BUT I ate a pizza and a chocolate and drank 2 litres of water and I only wrote some boilerplate
[13:15:18 CEST] <atomnuker> to anyone wanting to go to cornwall ever: don't
[13:15:29 CEST] <iive> what's that?
[13:15:52 CEST] <atomnuker> most southwest point of england
[13:16:59 CEST] <atomnuker> complete list of things to do: take the sleeper train from london, smell the sea, vomit, see st. michael's mount, eat fish and chips and return back on the train with sore feet, dehydration and sleep depravation
[13:17:25 CEST] <rcombs> sleeper trains are cool, at least
[13:17:47 CEST] <nevcairiel> cant be all that cool, you sleep through the experience
[13:17:57 CEST] <atomnuker> yep, this one was awesome, its known as the night riviera, one of only 2 sleeper trains in the uk running today
[13:18:19 CEST] <rcombs> nevcairiel: cool in the way that long-haul business-class plane seating is cool
[13:18:32 CEST] <rcombs> i.e. you _can_ sleep through the experience, reasonably comfortably
[13:19:00 CEST] <atomnuker> even better, you have your own coupe
[13:19:12 CEST] <atomnuker> and its much much quieter than a plane
[13:22:34 CEST] <atomnuker> what does the vld in videotoolbox_vld stand for?
[13:23:49 CEST] <kurosu> sleep depravation, that sounds kinky and interesting
[13:25:06 CEST] <iive> i'm more amazed that the train takes a whole day
[13:25:20 CEST] <atomnuker> in a passenger train full of screaming kids in the quiet car? I was just jetlagged
[13:25:26 CEST] <iive> are they still using steam engine?
[13:25:43 CEST] <atomnuker> it takes 8 hours if you take the sleeper train, 5.5 hours if you take the non-sleeper train
[13:26:11 CEST] <atomnuker> iive: only on some journeys
[13:26:38 CEST] <atomnuker> (and not the night riviera)
[13:27:25 CEST] <rcombs> britain is about US-level in terms of high-speed rail, isn't it?
[13:27:28 CEST] <rcombs> (i.e. "no")
[13:28:05 CEST] <kurosu> iirc, they have been mostly discontinued in fr, like <10 lines remaining (if not 4 iirc)
[13:28:23 CEST] <kurosu> (sleeper train)
[13:28:48 CEST] <rcombs> is that because france actually has high-speed rail, so it's fast enough that you don't need to sleep on it
[13:28:55 CEST] <rcombs> or is that just germany
[13:29:31 CEST] <rcombs> meanwhile japan's putting in a 500km/h maglev line
[13:29:36 CEST] <nevcairiel> we still have sleeper trains here, but they often go internationally
[11:29:49 CEST] <nevcairiel> like from northern germany to vienna or something like that
[13:29:54 CEST] <kurosu> not for every direction and you often need to change trains for some of them, because these faster lines have only a few destinations
[13:30:12 CEST] <rcombs> and the hyperloop people somehow expect me to be impressed by 320kph
[13:30:17 CEST] <kurosu> sleeper trains are supposed to be an experience
[13:30:21 CEST] <rcombs> I mean, that's great by US standards
[13:31:00 CEST] <rcombs> but how the hell do you build a whole evacuated-tube system with 1-car trains and still not go as fast as proper trains at 1atm
[13:32:37 CEST] <iive> rcombs: i think that these speeds are only because they don't have big enough track
[13:32:38 CEST] <kurosu> evacuated or vacuumed or? both sounds weird
[13:33:27 CEST] <iive> rcombs: aka, they won't be able to stop in time.
[13:33:58 CEST] <rcombs> well maybe one day they'll build a proper test track and give me some numbers that are actually impressive
[13:34:21 CEST] <rcombs> but in the meantime they just look dumb, announcing speed records that are slower than real trains
[13:34:34 CEST] <rcombs> also what's up with this whole 1-car concept
[13:35:49 CEST] <atomnuker> reminds me how years ago the soviets planned an underground train from moscow to st. petersburg
[13:36:13 CEST] <atomnuker> built in a straight line, i.e. not following the curve of the earth and not being at a constant depth
[13:36:18 CEST] <atomnuker> because then they
[13:36:25 CEST] <rcombs> uh
[13:36:25 CEST] <atomnuker> 'd need no power to run it
[13:36:34 CEST] <rcombs> &that's not how this works
[13:36:36 CEST] <nevcairiel> rcombs: not sure how you can call the maglev a real train though, its a decade away from actually being in service
[13:36:40 CEST] <rcombs> that's not how any of this works
[13:36:43 CEST] <wm4> how much is the curvature of the earth over such a distance?
[13:36:52 CEST] <rcombs> nevcairiel: I mean trains with proper cars
[13:37:15 CEST] <rcombs> as opposed to tiny round pods
[13:37:27 CEST] <atomnuker> probably enough to cancel gravity contribution and make it zero-g
[13:37:54 CEST] <rcombs> nobody tell them about resistive forces
[13:37:57 CEST] <durandal_1707> im trying to do 2d fft with code from fftfilt filter
[13:38:14 CEST] <rcombs> by that logic no train needs power once it's up to speed
[13:38:27 CEST] <atomnuker> rcombs: https://en.wikipedia.org/wiki/Gravity_train
[13:38:31 CEST] <rcombs> nevcairiel: also apparently they're considering having a demonstration track running in 2020
[13:38:34 CEST] <nevcairiel> the problem with such high speeds eventually becomes acceleration and deceleration anyway, you can't really accelerate much faster at some point because of the G forces, and as such the length of the track decides your average speed
[13:38:52 CEST] <rcombs> yeah, acceleration on the maglev is like .1g
[13:39:06 CEST] <rcombs> which is just about nothing
[13:39:28 CEST] <rcombs> atomnuker: "ignoring the effects of friction"
[13:40:41 CEST] <atomnuker> eh, put a small rocket engine at the back
[13:40:55 CEST] <atomnuker> fixed things in ksp
[13:41:03 CEST] <durandal_1707> how should i multiply the re and im stuff after i get the vertical rdft pass?
[13:41:32 CEST] <atomnuker> for a 2d rdft?
[13:44:04 CEST] <iive> rcombs: well, musk now has a boring machine and he could be boring wherever he wants.
[13:45:31 CEST] <wm4> oh no, who gave him this!?
[13:45:45 CEST] <wm4> imagine what things he'd be boring
[13:46:35 CEST] <stevenliu> :D
[13:49:33 CEST] <durandal_1707> atomnuker: yes 2d rdft, doing generic convolver which can do lens blur among other stuff
[14:02:17 CEST] <atomnuker> I think its separable so you should just run it horizontally
[14:06:12 CEST] <durandal_1707> what does that mean? i must multiply the input and kernel in the complex domain
[14:08:33 CEST] <atomnuker> separable means if you apply it separately to the vertical and horizontal it'll produce a transform of the entire 2d input
[14:10:00 CEST] <atomnuker> hm, not sure how that works if your transform does a real -> complex
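Both points — separability as atomnuker describes it, and the real-to-complex case he's unsure about — can be checked numerically. A sketch using numpy's FFT as a stand-in for FFmpeg's (R)DFT:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8))

# separability: 1D FFT over rows, then 1D FFT over columns,
# equals the full 2D FFT
two_pass = np.fft.fft(np.fft.fft(a, axis=1), axis=0)
assert np.allclose(two_pass, np.fft.fft2(a))

# real -> complex case: a real FFT along one axis followed by a full
# complex FFT along the other reproduces the half-spectrum 2D real FFT
r_two_pass = np.fft.fft(np.fft.rfft(a, axis=1), axis=0)
assert np.allclose(r_two_pass, np.fft.rfft2(a))
```

So the real-input transform still separates; only the first pass is real-to-complex, and every later pass must be complex-to-complex — which is the IDFT_C2C point michaelni confirms later in the log.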
[14:25:23 CEST] <durandal_170> atomnuker: see http://www.jhlabs.com/ip/FFT.java
[14:25:42 CEST] <durandal_170> and http://www.jhlabs.com/ip/LensBlurFilter.java
[14:25:55 CEST] <durandal_170> trivial stuff
[14:29:23 CEST] <durandal_170> and my code: https://github.com/richardpl/FFmpeg/blob/adefda3e4b679547971c56d1fd0c3c473ca9ace0/libavfilter/vf_fftfilt.c
[14:31:00 CEST] <atomnuker> oh, its not on a square block, not sure how that'd work then
[14:31:06 CEST] <atomnuker> maybe ask in #daala
[14:31:32 CEST] <durandal_170> atomnuker: i'm doing it on square block
[14:32:15 CEST] <durandal_170> with this http://www.johnloomis.org/ece563/notes/restoration/linblur/h.jpg
[14:32:41 CEST] <durandal_170> one should get nice linear blur with 256x256 input png
[14:36:07 CEST] <durandal_170> for input: https://0x0.st/5g5.png i get this: https://0x0.st/5gR.png
[14:37:48 CEST] <durandal_170> michaelni: can vf_fftfilt.c fft code be reused for generic convolution of 2 images?
[15:42:48 CEST] <durandal_170> atomnuker: no ideas?
[15:45:34 CEST] <atomnuker> no ideas, the code looks fine
[16:28:39 CEST] <cone-910> ffmpeg 03James Almer 07master:1291a6d0ff9a: avcodec/fits: add missing header includes
[16:34:10 CEST] <cone-910> ffmpeg 03Justin Ruggles 07master:1a0d9b503d2e: avformat/concatdec: add fallback for calculating file duration
[17:49:28 CEST] <BtbN> My new Ryzen arrived. Time to torture it
[17:49:51 CEST] <BtbN> Immediate observation: Under full load, it easily runs 7°C cooler.
[18:01:50 CEST] <Fenrirthviti> BtbN: Make sure you update BIOS before going too hard so you don't run into microcode crashes :)
[18:02:18 CEST] <BtbN> I'm not aware of any Microcode-Crashes on Ryzen
[18:02:31 CEST] <BtbN> There was one weird instance with running old 16 bit DOS code that crashed
[18:02:39 CEST] <BtbN> but that's fixed
[18:08:20 CEST] <Fenrirthviti> We see x264 crash all the time on early Ryzen CPUs without bios updates due to it
[18:12:21 CEST] <Fenrirthviti> Seems to be related to high-load scenarios, mostly on the linux side from what I've seen.
[18:31:45 CEST] <Fenrirthviti> BtbN: https://www.digitaltrends.com/computing/ryzen-amd-bios-fix-fma3-crash/ here we go, this is what I was talking about
[18:31:51 CEST] <Fenrirthviti> if you're curious at all
[18:40:42 CEST] <doublya> got a simple question. When I add -vf "filter=nv12" as an ffmpeg CLI flag, where in the source code is the call to sws_scale?
[18:47:38 CEST] <durandal_170> atomnuker: i get completely wrong magnitude/modulus of impulse in output
[18:47:58 CEST] <jkqxz> doublya:  I assume you mean format=nv12?  Here: <http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavfilter/vf_scale.c;h=3329c1234663e06391e7a88c94d0750be0db936c;hb=HEAD#l399>.
[18:48:06 CEST] <durandal_170> it should be completely white frame, but I get stripes
[18:48:37 CEST] <doublya> jkqxz: sorry yes, that's what I mean. Thanks I'll take a look
[18:56:49 CEST] <atomnuker> durandal_170: stripes? stride issue?
[19:00:32 CEST] <durandal_170> atomnuker: https://0x0.st/5EF.png
[19:05:31 CEST] <atomnuker> weird, looks like a stride issue
[19:05:51 CEST] <atomnuker> how about this: do the transform only in one dimension
[19:06:11 CEST] <atomnuker> it should blur but only in one dimension
[19:52:59 CEST] <BtbN> Fenrirthviti, that's an actual hardware fault
[19:53:03 CEST] <BtbN> the fma3 bug is long fixed
[19:53:13 CEST] <atomnuker> jkqxz/wm4: what's the difference between sw_format and hw_format
[19:53:35 CEST] <BtbN> This CPU here seems fine now. I just stressed it with a highly crashy workload for 2 hours. No errors
[19:53:46 CEST] <atomnuker> (as used in AVHWFramesContext)
[19:54:15 CEST] <atomnuker> is it there because some hw apis can do conversion between sw and hw formats?
[19:54:39 CEST] <atomnuker> (when its uploaded/downloaded)
[19:54:57 CEST] <BtbN> hw_format could be CUDA, and sw_format NV12
[19:55:07 CEST] <BtbN> When inside the CUDA frame there is NV12 pixel data in GPU memory
[19:55:21 CEST] <BtbN> And it's similar for most hwaccels
[19:55:46 CEST] <atomnuker> ah, ok, so its what's wrapped/allowed to be wrapped in a hw frame
[19:56:47 CEST] <BtbN> it's what's actually in there
[19:56:53 CEST] <BtbN> there can be pretty much everything
[19:58:05 CEST] <jkqxz> The hw format will be an opaque format (AV_PIX_FMT_FLAG_HWACCEL).  The sw format is something which reasonably represents the data actually contained (though not necessarily actual-layout accurate because tiling).
[19:58:48 CEST] <BtbN> For CUDA it actually is layout accurate. Just in GPU memory.
[19:59:00 CEST] <BtbN> But CUDA is a bit of a special case in terms of hwaccel
[19:59:07 CEST] <jkqxz> CUDA doesn't tile at all?
[19:59:25 CEST] <BtbN> CUDA hwaccel frames are basically just cuMemalloc + cuMemcpy
[19:59:30 CEST] <BtbN> same as you'd do on CPU memory
[19:59:40 CEST] <jkqxz> I guess it needn't in general, but it might be useful for some special cases.
[19:59:51 CEST] <BtbN> NVENC supports tiled input
[19:59:59 CEST] <wm4> typically the sw_format reflects the format of the data you see when you, well, access it
[20:00:08 CEST] <BtbN> But it's not wired up in ffmpeg. And I for sure won't bother with it.
[20:00:15 CEST] <jkqxz> Requiring you to use special allocation and copy functions can hide any manner of absurd layout.
[20:00:23 CEST] <wm4> e.g. when transfering it to CPU memory, or accessing it via shaders on the GPU
[20:00:34 CEST] <BtbN> jkqxz, you can actually do pointer arithmetic on the CUDA pointers.
[20:00:51 CEST] <BtbN> Of course you don't know the physical layout in VRAM, but you don't really do for normal RAM either.
[20:01:26 CEST] <jkqxz> You can?  Wow, I didn't realise that.
[20:02:42 CEST] <wm4> d3d11va supports decoding to an opaque format
[20:02:51 CEST] <wm4> the transfer functions simply fail on it
[20:03:03 CEST] <wm4> (this is pretty much by D3D11 design, not our choice)
[20:06:34 CEST] <durandal_170> atomnuker: now i get this: https://0x0.st/56j.png 
[20:07:08 CEST] <BtbN> jkqxz, http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavutil/hwcontext_cuda.c;h=dfb67bc941e2756a06d2b6dcc9576b0b4ce75ec1;hb=HEAD#l187
[20:07:34 CEST] <BtbN> data is just a CUdevptr
[20:08:41 CEST] <jkqxz> I like how the YUV420P is surprise YV12 :P
[20:09:49 CEST] <BtbN> Yeah, it was decided to disguise it, as YUV420P is super common, and supporting it that way saves _a lot_ of pointless conversions
[20:10:58 CEST] <wm4> jkqxz: I'm convinced ffmpeg actually got the inverse of the common convention
[20:11:37 CEST] <wm4> (if you're referring to the swapped planes)
[20:13:22 CEST] <atomnuker> durandal_170: on a white input?
[20:13:42 CEST] <atomnuker> is this for a vertical or a horizontal only transform?
[20:14:49 CEST] <BtbN> wm4, the weird thing is, for YUV444, ffmpeg and nvenc/cuda agree on the layout.
[20:14:54 CEST] <BtbN> For YUV420 they don't
[20:15:08 CEST] <BtbN> But both nvenc and ffmpeg call the format the same, yuv420/444
[20:15:40 CEST] <durandal_170> atomnuker: nope, horizontal only, doing only inverse of single dot pixel in center of image, it should display magnitude of fft part of impulse
[20:16:31 CEST] <durandal_170> atomnuker: using http://bigwww.epfl.ch/demo/ip/demos/11-FFT/ as reference
[20:17:02 CEST] <durandal_170> i modified code to not do convolve but just show mag
[20:17:53 CEST] <durandal_170> this is strange, because doing fft_2d and then ifft_2d on single image gives same output
[20:18:07 CEST] <wm4> BtbN: ok that seems inconsistent
[20:18:30 CEST] <ubitux> BBB: so want to use intrinsics?
[20:18:46 CEST] <BtbN> wm4, I'm pretty sure the inconsistency is on the nvenc side though
[20:18:47 CEST] <ubitux> (RE: ADM thing)
[20:19:44 CEST] <durandal_170> what?
[20:19:57 CEST] <BtbN> wm4, ok, nevermind. They either renamed it at some point, or I just plain misremembered that: http://git.videolan.org/?p=ffmpeg.git;a=blob;f=compat/nvenc/nvEncodeAPI.h;h=c3a829421282d5f22f82fc285723f13eb660f053;hb=HEAD#l316
[20:20:43 CEST] <BBB> ubitux: wait what?
[20:20:50 CEST] <BBB> missing context
[20:21:54 CEST] <BBB> Im pretty sure Im missing something incredibly obvious but Im not seeing it :-p
[20:22:04 CEST] <BBB> please someone clue me in
[20:22:26 CEST] <ubitux> [PATCH] avfilter: add ADM filter
[20:22:31 CEST] <ubitux> you're apparently co-author?
[20:22:51 CEST] <ubitux> +#include <emmintrin.h>
[20:23:19 CEST] <BBB> ashk43712: ohright we shouldnt use such headers :-p
[20:23:24 CEST] <BBB> no, no intrinsics
[20:23:29 CEST] <ubitux> :)
[20:23:31 CEST] <BBB> probably just a copypaste from the netflix code
[20:23:54 CEST] <BBB> (this is algorithmically based on their opensource code from github)
[20:26:26 CEST] <wm4> I hope it says that in the commit message
[20:35:27 CEST] <ashk43712_> BBB: ok, what's the alternative for emmintrin.h? They use it to calculate "float xi = _mm_cvtss_f32(_mm_rcp_ss(_mm_load_ss(&x)));" which I have very little clue about.
[20:37:07 CEST] <BBB> thats 1/x
[20:37:17 CEST] <BBB> float xi = 1/x;
[20:38:16 CEST] <ashk43712_> oh, cool then. we don't need emmintrin.h.
[20:39:48 CEST] <BBB> 1.0/x
[20:39:50 CEST] <BBB> anyway
[20:39:53 CEST] <BBB> you know what I mean
[20:41:31 CEST] <ashk43712_> yeah, ok.
[20:43:30 CEST] <wm4> the next step would be writing asm for it
[20:43:36 CEST] <wm4> and making it faster than the intrinsic code
[20:43:55 CEST] <wm4> (for bonus points, I guess)
[20:45:18 CEST] <BtbN> why do they use intrinsics to calculate 1/x? oO
[20:48:05 CEST] <ashk43712_> wm4: yes, started working on it.
[21:11:53 CEST] <atomnuker> BtbN: its not even SIMD, its 1/first_float, next floats in the reg are passed as is
[21:13:09 CEST] <atomnuker> its not as exact as a div but its faster than a div
[21:16:30 CEST] <Gramner> it's a reciprocal approximation. faster than a normal division but significantly less accurate (only 12 bits precision)
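The standard trick with RCPSS's ~12-bit estimate is one Newton–Raphson step, x1 = x0·(2 − a·x0), which roughly squares the relative error (doubling the precision in bits). A sketch of the refinement — the hardware estimate is simulated here by truncating a full-precision reciprocal to ~12 mantissa bits, which is an assumption, not the actual RCPSS lookup table:

```python
import struct

def truncate_to_12_bits(x):
    """Zero the low 11 mantissa bits of a float32 value, leaving
    ~12 significant bits (stand-in for a coarse hardware estimate)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & ~0x7FF))[0]

a = 3.7
x0 = truncate_to_12_bits(1.0 / a)   # coarse reciprocal estimate
x1 = x0 * (2.0 - a * x0)            # one Newton-Raphson iteration

err0 = abs(x0 - 1.0 / a) * a        # relative error of the estimate
err1 = abs(x1 - 1.0 / a) * a        # relative error after refinement
assert err1 < err0                  # the iteration tightens the result
```

If the ~12-bit error is acceptable for the filter (as it apparently was for the VMAF/ADM code), plain `1.0 / x` is both simpler and more accurate; the refinement only matters when you keep the fast approximation for speed.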
[21:49:40 CEST] <durandal_170> michaelni: ping
[22:00:29 CEST] <atomnuker> neat, the vulkan hwaccel almost works now
[22:01:11 CEST] <atomnuker> what other filters are easily doable in shaders but difficult in software?
[22:03:44 CEST] <kiroma> There's vulkan hwaccel coming? How fast is it when compared to other methods?
[22:04:36 CEST] <atomnuker> should be quite fast for things doable quickly in shaders
[22:05:59 CEST] <atomnuker> I know most of mjpeg decoding can be done in shaders
[22:06:08 CEST] <kiroma> neat
[22:07:15 CEST] <atomnuker> but I'm mainly doing this for filters where having fractional pixels makes everything much easier
[22:07:50 CEST] <kiroma> I see
[22:08:37 CEST] <atomnuker> (well, not everything, just filters that do require fractional pixels)
[22:09:18 CEST] <michaelni> durandal_170, its probbly possible to reuse it for convolution in principle
[22:14:06 CEST] <atomnuker> schedule is up: https://www.videolan.org/videolan/events/vdd17/
[22:16:23 CEST] <atomnuker> (pinging j-b if he could schedule a technical meeting for ffmpeg sometime too)
[22:16:58 CEST] <durandal_170> michaelni: convolving with identity image produce mirrored output
[22:35:25 CEST] <BBB> atomnuker: did you propose the av1 meeting?
[22:36:04 CEST] <BBB> I would really like to see someone doing a summary of av1 talk, not just an update compared to yesterday, because 99% of us dont know what it did yesterday so the update is semi-meaningless
[22:36:32 CEST] <atomnuker> nope, wasn't me
[22:37:42 CEST] <durandal_170> i propose general convolution meeting, people easily forget such stuff
[22:38:15 CEST] <BBB> TD-Linux: do you know whose talk it is?
[22:40:12 CEST] <durandal_170> michaelni: give me some hint how to fix it, i'm desperate
[22:42:11 CEST] <durandal_170> nobody knows how to do 2d fft with ffmpeg...
[22:42:31 CEST] <atomnuker> few people know how to do it at all
[22:42:46 CEST] <atomnuker> look on the bright side, once you learn you'll be one of them
[22:43:45 CEST] <kiroma> Would it work if I passed both --enable-static and --enable-shared? I'm expecting to have both libraries and static executables.
[22:45:33 CEST] <nevcairiel> On some systems that is possible, on some it isn't.
[22:46:00 CEST] <nevcairiel> Some just can't build shared and static at the same time
[22:48:25 CEST] <durandal_170> ubitux: you were writing 2d fft api?
[22:48:37 CEST] <ubitux> was i?
[22:49:15 CEST] <ubitux> ah you meant https://github.com/ubitux/dct ?
[22:49:30 CEST] <ubitux> i used it for the dct filter but that's all
[22:52:54 CEST] <kiroma> Well, it looks like it just omits the static flag.
[22:55:38 CEST] <durandal_170> michaelni: shouldn't I need IDFT_C2C for this?
[23:18:55 CEST] <michaelni> durandal_170, probably yes
[23:23:21 CEST] <durandal_170> michaelni: wouldn't fft2d(image)-->ifft2d(image) then not return the same image if one doesnt do IDFT_C2C?
[23:30:24 CEST] <michaelni> the image input and final image should be real valued if the data that its convolved with is real valued, The stuff in freq domain is complex valued
[23:30:45 CEST] <michaelni> the multiplication in fftfilt doesnt look correct for convolution
[23:31:35 CEST] <durandal_170> michaelni: you mean my added code or already present one?
[23:31:44 CEST] <michaelni> present one
[23:31:51 CEST] <michaelni> didnt look at your code
[23:32:06 CEST] <durandal_170> my code is based loosely on fftfilt
[23:32:30 CEST] <durandal_170> michaelni: what's not correct?
[23:37:24 CEST] <iive> do you turn each pixel sample into complex real,img pair?
[23:40:58 CEST] <durandal_170> iive: yes
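For reference, frequency-domain convolution is an element-wise complex multiply: ifft2(fft2(img) · fft2(kernel)), with a real-valued result when both inputs are real. Multiplying by the conjugate instead gives correlation, which effectively mirrors the kernel — one plausible cause of the mirrored output durandal_170 reported. A numpy sketch of both cases:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.standard_normal((16, 16))

# delta kernel at the origin: convolution must return the image unchanged
delta = np.zeros((16, 16))
delta[0, 0] = 1.0
out = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(delta)).real
assert np.allclose(out, img)

# shifted delta: circular convolution shifts one way, correlation
# (conjugated kernel spectrum) shifts the other way, i.e. the kernel
# is effectively mirrored
delta2 = np.zeros((16, 16))
delta2[1, 2] = 1.0
conv = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(delta2)).real
corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(delta2))).real
assert np.allclose(conv, np.roll(img, (1, 2), axis=(0, 1)))
assert np.allclose(corr, np.roll(img, (-1, -2), axis=(0, 1)))
```

Note the multiply must be a true complex multiply, (ac − bd) + i(ad + bc), not a pairwise multiply of the re/im parts — getting this wrong in hand-rolled RDFT code produces exactly the kind of garbage magnitudes discussed above.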
[23:49:27 CEST] <Compn> michaelni : i never got that dvdnav-ml password btw
[23:49:35 CEST] <Compn> thanks for your help with it , of course :)
[00:00:00 CEST] --- Fri Sep  1 2017

