burek021 at gmail.com
Fri Mar 23 03:05:03 EET 2018
[00:13:42 CET] <jkqxz> atomnuker: There's a new version. This uses DRM_FORMAT_MOD_INVALID to distinguish between known-linear and unknown, so Vulkan can use the first case (from ESH) and reject the second (from ABH).
[00:24:05 CET] <atomnuker> jkqxz: how new does libdrm need to be to have DRM_FORMAT_MOD_INVALID?
[00:26:59 CET] <jkqxz> Six months or so? <https://cgit.freedesktop.org/mesa/drm/commit/include/drm/drm_fourcc.h?id=7ec689a5406a4c5f468e126007c5aa9d72dd7f59>
[00:27:02 CET] <jkqxz> But it's just a constant, so whatever.
[00:31:05 CET] <atomnuker> damn, that new
[00:32:05 CET] <jkqxz> Yeah, I hadn't actually seen it before. I would have used it earlier had I known it existed, since it's the obvious way to solve this problem.
[00:33:20 CET] <atomnuker> LGTM'd the patch, feel free to apply along with all the others I've lgtm'd (and the gsoc filters you lgtm'd too)
[00:34:08 CET] <jkqxz> Do you have any GSoC students?
[00:36:01 CET] <atomnuker> klaxa once he gets around to submitting his application probably (he's got a week)
[00:39:23 CET] <klaxa> well i plan on checking in with my uni tomorrow, haven't gotten the papers yet
[00:39:33 CET] <klaxa> registered with the google stuff
[00:40:38 CET] <atomnuker> cool
[00:40:48 CET] <jamrial> jkqxz: can't you use av_frame_apply_cropping() in patch 5? some of the code looks similar
[01:01:31 CET] <jkqxz> jamrial: Er, I'm not sure what you mean.
[01:02:37 CET] <jamrial> in patch 5/7, for vf_crop, can't you use the above function to apply the cropping?
[01:02:43 CET] <jamrial> maybe i'm misreading this, though
[01:03:07 CET] <jkqxz> It's modifying the cropping, not applying it.
[01:04:06 CET] <jamrial> ah, figures
[01:04:29 CET] <jkqxz> Hmm. It might be usable by the non-hwaccel case? But that's not what I'm doing here.
[01:07:30 CET] <jkqxz> There is weird magic in both of them, and not written by the same person, so I'm not sure. (And the av_frame one supports keeping some alignment, too.)
[01:09:56 CET] <jamrial> the av_frame one was written by wm4 (or elenril) and basically was split off decode.c
[01:10:11 CET] <jkqxz> I'm thinking of adding a flag to av_hwframe_transfer_data() to apply cropping there as well, but I haven't written it yet. (VDPAU would just fail if it gets that flag.)
[02:04:14 CET] <cone-389> ffmpeg 03Jun Zhao 07master:a4726288f8c1: lavc/dump_extradata_bsf: support dump options.
[02:04:15 CET] <cone-389> ffmpeg 03Jun Zhao 07master:b8e406c01a75: lavc/noise_bsf: support dump options.
[02:04:16 CET] <cone-389> ffmpeg 03Jun Zhao 07master:28aaed773712: lavc/remove_extradata_bsf: support dump options.
[02:19:05 CET] <cone-389> ffmpeg 03James Almer 07master:f14ca600015d: avcodec/avpacket: add av_packet_make_writable()
[03:00:20 CET] <Chloe> michaelni: I updated the avclass patch, it was fixed for the testcase you gave me (and still passes fate)
[03:00:44 CET] <Chloe> sorry i forgot to version it though
[04:15:34 CET] <cone-389> ffmpeg 03James Almer 07master:ead257db560a: cmdutils: print supported codecs in show_help_bsf()
[05:20:44 CET] <cone-389> ffmpeg 03James Almer 07master:ed0e0fe10211: changelog: add missing line for filter_units bsf
[10:56:31 CET] <durandal_1707> kierank: fine to apply gagandeep alpha patch?
[10:56:44 CET] <kierank> yes if it works
[10:56:52 CET] <kierank> and the other one is ok
[10:57:00 CET] <durandal_1707> which one?
[10:59:26 CET] <kierank> the padding one
[11:00:49 CET] <gagandeep> kierank: thanks, i am still learning stuff around here :)
[11:01:51 CET] <gagandeep> and i know i messed up a lot with the email and commit messages, really sorry
[11:02:21 CET] <gagandeep> kierank: i am still not sure if i am doing the fate tests correctly
[11:03:57 CET] <gagandeep> kierank: carl messaged me quite some time later saying that the padding patch broke fate, so i will need help regarding that
[11:18:46 CET] <durandal_1707> gagandeep: have you rsynced with fate repo?
[11:25:27 CET] <Chloe> gagandeep: just run `make fate` after getting the samples with `make fate-rsync`, you need to set --samples in ./configure too
[11:28:32 CET] <wm4> if you dynamically link you also need to make install before running fate
[11:29:42 CET] <nevcairiel> (basically, avoid that)
[11:30:08 CET] <gagandeep> yeah, i did something called make fate-rsync as described on the page, and i have samples now,
[11:30:36 CET] <gagandeep> so running make fate will just process and if it breaks somewhere it will return an error
[11:30:53 CET] <Chloe> yes
[11:30:58 CET] <gagandeep> last time i ran i didn't have samples (that rsync thing)
[11:31:08 CET] <gagandeep> k and what is that config for fat
[11:31:11 CET] <gagandeep> *fate
[11:31:24 CET] <gagandeep> fate.sh <config>
[11:31:46 CET] <nevcairiel> just do "make fate" and it'll run the tests
[11:31:55 CET] <gagandeep> ok
[11:32:11 CET] <gagandeep> thanks
[11:55:57 CET] <durandal_1707> [Parsed_declick_0 @ 0xf86080] Detected clicks in 3 of 2481745 samples. <--- for pure sine wave, WHY?
[11:56:57 CET] <nevcairiel> you wrote that, how would we know
[12:16:23 CET] <jdarnley> Dammit! Why are these transforms so complex! Why does there have to be so much temporary state?
[12:26:30 CET] <Chloe> jdarnley: which transforms are you looking at?
[12:27:02 CET] <jdarnley> Dirac/VC2's horrible wavelet ones.
[12:28:30 CET] <jdarnley> I am trying to adapt the decoder's somewhat elegant transforms that "roll" down the plane for use in the encoder.
[12:29:26 CET] <Mina> Hi, I am applying for FFmpeg as part of GSoC. I am almost done with the qualification task but have some questions about the proposal.
[12:30:10 CET] <Mina> Is this the correct place to ask them?
[12:30:24 CET] <Chloe> Mina: sure
[12:30:50 CET] <Chloe> Which task did you do? Have you contacted the mentor (via email)?
[12:31:22 CET] <Mina> Yes, tho the task is not quite finished.
[12:32:00 CET] <Chloe> well which task?
[12:32:21 CET] <Mina> Super Resolution filter
[12:36:12 CET] <Mina> So, I wanted to know if there is a template or any sort of required structure for the proposal and who should I share the draft with.
[12:41:16 CET] <Chloe> Mina: just a patch to the mailing list
[12:42:29 CET] <Mina> What about the GSoC required proposal? And what if you get multiple patches how will you choose the best one?
[12:46:59 CET] <Chloe> Mina: I'm not entirely sure, I don't think anyone else has submitted a patch yet. Best to just submit to ML and send your mentor an email
[12:49:17 CET] <Mina> I have been in contact with the project mentor and he told me that someone else has already submitted a patch. He also said, and I quote: "the proposal quality will have a big impact in the selection process." And when I asked about the proposal template he told me to ask the ffmpeg list.
[12:50:57 CET] <Chloe> Mina: 'Students must submit their final proposal as a PDF through the website' from the gsoc website
[12:51:04 CET] <Chloe> I assume this means the gsoc website
[12:52:14 CET] <Mina> Yes, that's what I am asking about. Generally this proposal includes an overview of the project, some milestones, some basic info about the applicant, and some info required by the organization. That's why I was asking about a proposal template or requirements.
[12:55:31 CET] <Mina> I think I will just send the patch to the mailing list and ask for further steps then.
[13:10:04 CET] <Chloe> Mina: that sounds like a good idea. I'm not entirely sure about the gsoc side, but I can help on the ffmpeg side of things (if you need any)
[13:12:56 CET] <Mina> Great. Are there any requirements for the patch? Or just the patch and a description for it?
[14:03:43 CET] <wm4> wow I really should have blocked the release back then
[14:18:09 CET] <atomnuker> jdarnley: I'm still kicking myself for losing the source code
[14:18:22 CET] <atomnuker> I had haar working for all levels up to 3
[14:18:36 CET] <atomnuker> and it was as simple as applying the transforms multiple times over the same coefficients
[14:18:46 CET] <atomnuker> and then messing with the coefficient strides
[14:19:00 CET] <atomnuker> it was also very very fast
[14:19:35 CET] <jdarnley> Yes, haar is easy: it doesn't modify any values that it won't leave in the final transformed state.
[14:19:52 CET] <atomnuker> I think I had 9_7 working too, but this was almost 2 years ago
[14:19:54 CET] <jdarnley> I am just missing an hstride in my code but it works for depth 1
[14:21:01 CET] <durandal_1707> atomnuker: have you lost atrac9 code you have working? push it to github
[14:22:51 CET] <atomnuker> no, its already on github
[14:24:44 CET] <atomnuker> I'll work on it tonight, just those damn air files don't decode without errors yet
[14:33:38 CET] <jdarnley> Maybe I will rant a little bit about my troubles...
[14:33:52 CET] <BBB> which one?
[14:33:56 CET] <BBB> mpeg124?
[14:34:09 CET] <jdarnley> Oh you missed it. No. Dirac/vc2 encoder transform
[14:34:34 CET] <jdarnley> [Thu 22 12:28] <+jdarnley> I am trying to adapt the decoder's somewhat elegant transforms that "roll" down the plane for use in the encoder.
[14:35:51 CET] <BBB> that should be easy
[14:36:09 CET] <jdarnley> Not in my opinion.
[14:37:23 CET] <jdarnley> To output one "even" line you need the neighboring 2 "odd" lines to be finished.
[14:37:59 CET] <jdarnley> For 1 "odd" line you need the 2 neighboring "even" lines to be in a "half-way" state.
[14:38:16 CET] <jdarnley> That is: only the horizontal part of the transform is done on them.
[14:39:37 CET] <BBB> oh
[14:39:43 CET] <BBB> you mean forward transform
[14:39:46 CET] <BBB> instead of inverse
[14:39:47 CET] <BBB> right?
[14:39:51 CET] <BBB> right, that is harder, yes
[14:40:26 CET] <jdarnley> I think I mean forward. The one in the encoder.
[14:40:52 CET] <jdarnley> So I think I need an extra buffer, a 3rd buffer, to store a few more lines in.
[14:42:27 CET] <jdarnley> I think I need to rename the 1st temporary buffer I have to "scratch" and then maybe add a second called "partial"
[14:44:16 CET] <atomnuker> are you sure you even need a temporary buffer?
[14:44:41 CET] <jdarnley> Unsure.
[14:44:46 CET] <atomnuker> it should work if you just apply the transform over and over on a per-slice basis
[14:45:08 CET] <jdarnley> No. The transform overlaps on slice boundaries
[14:46:31 CET] <jdarnley> You might be right about the number of buffers though.
[14:47:11 CET] <jdarnley> I might be able to do the horizontal filter in-place on the input plane.
[14:51:13 CET] <jdarnley> Yes, I think that would be okay, at least on the first level, because the v filter only works on the output of the h filter.
[15:01:02 CET] <atomnuker> yes, but you don't have to do anything to make that work
[15:01:08 CET] <atomnuker> the transforms already do that
[15:01:20 CET] <atomnuker> you just need to ensure you have plenty of padding around the edges
[15:04:25 CET] <jdarnley> I'm not sure I understood that.
[15:05:55 CET] <jdarnley> I have made one special case, 5_3 depth 1, work in the past by padding each slice with data so that it could be transformed independently.
[15:06:37 CET] <atomnuker> the transforms in vc2_dwt
[15:06:51 CET] <atomnuker> they already look past what you give them and do the right thing
[15:07:04 CET] <atomnuker> however you need to rip off the intermediary buffer
[15:07:16 CET] <atomnuker> and just leave it as 2 loops in either horizontal or vertical direction
[15:07:26 CET] <atomnuker> then the transforms become in place
[15:07:44 CET] <atomnuker> so you don't need any temporary buffers to hold anything
[15:07:55 CET] <jdarnley> Sure, that works when you have the whole frame.
[15:08:01 CET] <jdarnley> The whole plane
[15:08:53 CET] <atomnuker> no
[15:08:59 CET] <atomnuker> it works when you have part of it
[15:09:14 CET] <atomnuker> the 9_7 and 5_3 look horizontally
[15:09:34 CET] <atomnuker> so you need slice_x + 1 and slice_x - 1 only
[15:10:10 CET] <atomnuker> maybe it needs slice_y + 1 and slice_y - 1 too, but either way its not that much, 2 rows of slices at most plus part of a column
[15:13:28 CET] <jdarnley> That still sounds like I'll need to store a couple of lines somewhere.
[15:14:33 CET] <atomnuker> yep, if you don't have the entire frame you need to buffer
[15:15:34 CET] <jdarnley> And that is what makes it all conceptually hard. I can't picture what that should look like.
[15:17:08 CET] <durandal_1707> [Parsed_declick_0 @ 0x13b4100] Detected clicks in 13855 of 107084530 samples (0.0129384%). <--- 16x real time, not bad
[15:20:42 CET] <atomnuker> jdarnley: simple - you get 1 slice per some api call, you're guaranteed to get them in order, you have a temporary buffer which is 9x9 slices (remember to take the worst case where you have the most lines per slice), and you memcpy them on a per-slice basis (you need to keep slices ref'd)
[15:21:53 CET] <atomnuker> granted, you have to do multiple memcpys per call but I think it pays off in terms of a simpler encoder and less branching (if you hack up the transforms to take overlap from arbitrary memory)
[15:22:30 CET] <atomnuker> or better yet you can ask the API user to allocate 1 full frame
[15:22:37 CET] <atomnuker> and fill it up as it goes along
[15:22:54 CET] <atomnuker> and every time it does you get a call to inform you which slice has been done
[15:23:18 CET] <atomnuker> then all you need to do is put encoding the slice in a queue until all dependencies have been filled
[15:24:33 CET] <atomnuker> (well, not a queue, you'd just receive slice_x and slice_y for the current slice and use them to work out which slice is ready for encoding)
[15:26:46 CET] <atomnuker> that way you could integrate that into lavc too with minimal changes (for a hefty speedup)
[15:34:24 CET] <th3_v0ice> Does anyone have some sample code on how to properly generate an HLS playlist using the FFmpeg API? Just setting the header options doesn't do anything. Thanks!
[15:40:39 CET] <jdarnley> atomnuker: The full plane is already allocated so that's not a problem to expand it.
[15:41:32 CET] <jdarnley> I already copy the image samples into it as a separate step because they need an offset and promoting to dwtcoef (int32_t)
[15:43:42 CET] <jdarnley> I think I could easily do the padding in that step
[15:43:53 CET] <jdarnley> especially after I saw avpriv_mirror in the decoder.
[15:48:28 CET] <cone-909> ffmpeg 03James Almer 07master:59347c247480: avcodec/dxva2_vc1: add missing frame_params callback to vc1_d3d11va2 hwaccel
[15:51:25 CET] <atomnuker> jdarnley: you can modify the transforms to do that too
[15:52:02 CET] <atomnuker> they'll do it a bit more because of overlap but it'll be negligible compared to the speedup you get from in-place
[15:52:53 CET] <atomnuker> I don't suggest you use the decoder transforms
[15:53:01 CET] <atomnuker> or anything that looks remotely like them
[15:53:18 CET] <atomnuker> the decoder's transforms are complicated and for a reason
[15:53:34 CET] <atomnuker> but with the encoder you can expect to know exactly what you'll get so you can simplify them
[15:53:41 CET] <atomnuker> hence why the forward transforms are so simple
[15:53:55 CET] <atomnuker> you don't need to do any padding
[15:54:06 CET] <atomnuker> in any step if you implement what I suggested
[15:54:10 CET] <jdarnley> So you think I've been following a dead-end path for a while?
[15:54:18 CET] <atomnuker> welp
[15:55:20 CET] <atomnuker> I follow more dead-end paths than anyone but you get used to it
[15:56:09 CET] <jdarnley> One more thing...
[15:56:12 CET] <jdarnley> "Shift in one bit that is used for additional precision"
[15:56:41 CET] <jdarnley> Are these shifts required by the standard or a choice the encoder can make?
[16:03:00 CET] <Victor_> Hello everyone!!
[16:06:28 CET] <jdarnley> I think I would rather keep the input data copying separate just because it keeps the complexity of handling AVFrame out of the transform.
[16:06:50 CET] <jdarnley> tig
[16:08:03 CET] <atomnuker> jdarnley: they're specified in the transforms, so normative
[16:09:14 CET] <atomnuker> jdarnley: what difficulty?
[16:09:46 CET] <atomnuker> the transforms have non-convoluted inputs and outputs, so you just need to change how the inputs are read
[16:09:56 CET] <jdarnley> Stride, 8 vs 16 bit data, the diff_offset value
[16:10:03 CET] <atomnuker> yeah, you'd need to template for that
[16:10:06 CET] <atomnuker> diff offset?
[16:10:26 CET] <atomnuker> oh, the overlap
[16:10:44 CET] <jdarnley> No, it gets subtracted from every pixel.
[16:10:55 CET] <jdarnley> Signed to unsigned change?
[16:11:04 CET] <jdarnley> Or unsigned to signed rather
[16:11:05 CET] <atomnuker> right, (its the opposite)
[16:11:51 CET] <atomnuker> no, that wouldn't be difficult I think
[16:12:18 CET] <atomnuker> but you can do that later on once you get your coeff strides and quantization working
[16:13:59 CET] <jdarnley> I don't know about you but I found that bit easier so I have already done it. I have already removed the deinterleave function on this branch.
[16:21:50 CET] <atomnuker> jdarnley: quantization works for all transform depths?
[16:23:26 CET] <jdarnley> Yep, passes fate and everything.
[16:23:38 CET] <jdarnley> I even added more fate tests first.
[16:23:51 CET] <jdarnley> To test haar and 5_3
[18:46:17 CET] <jdarnley> kierank: do you know about that ^ ?
[18:46:35 CET] <kierank> about what?
[18:47:02 CET] <kierank> jdarnley: ?
[18:47:14 CET] <jdarnley> Oh did my message not go through?
[18:47:16 CET] <jdarnley> How many bits can I skip using skip_bits on a GetBitContext?
[18:47:18 CET] <jdarnley> Can I skip many hundreds to thousands?
[18:47:50 CET] <kierank> dunno but you should use the pointers i think
[18:47:53 CET] <kierank> like the existing code
[18:49:08 CET] <jdarnley> Maybe I could
[19:32:12 CET] <cone-954> ffmpeg 03Gagandeep Singh 07master:c64c97b972c7: lavc/cfhd: add alpha decompanding in rgba12
[19:42:30 CET] <durandal_1707> michaelni: what is scalability in mpeg4? and extension?
[19:45:55 CET] <JEEB> tmm1: btw you could check if this gets one-upped by packet you receive http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/h264_parser.c;h=1a9840a62c5428cc09e73a4b5faf350c9de776f8;hb=HEAD#l386
[19:51:50 CET] <tmm1> sure, let me check..
[19:54:25 CET] <tmm1> JEEB: yea looks like its incrementing
[19:55:23 CET] <JEEB> wonder if we could double the time base of the stream and return field-based PTS with field based interlacism
[19:57:08 CET] <michaelni> durandal_1707, mpeg4 scalability is having a stream of lower resolution and a difference stream or so. not commonly used
[19:59:55 CET] <tmm1> JEEB: yea seems like that could work, similar to how vf_yadif works
[20:00:08 CET] <durandal_1707> michaelni: i guess specification how to decode it is available?
[20:00:38 CET] <michaelni> i think so
[20:00:42 CET] <cone-954> ffmpeg 03Courtland Idstrom 07master:65616bc191b1: lavf/movenc: write track title metadata for mov files
[20:07:27 CET] <kierank> michaelni: do you have samples of this?
[20:07:30 CET] <kierank> scalable mp4
[20:07:46 CET] <kierank> if we can't decode scalable mp4 then proves ffmpeg mp4 decoder is broken
[20:08:02 CET] <michaelni> no
[20:09:05 CET] <kierank> ok so academic project
[20:11:03 CET] <durandal_1707> kierank: there is sample on ml
[20:11:45 CET] <kierank> I would be shocked if someone implemented this
[20:11:51 CET] <kierank> must be some proprietary mp4 variant
[20:12:34 CET] <durandal_1707> MNM4 is fourcc
[20:29:59 CET] <BodecsB> durandal_1707: Hi, last time we had a conversation about a live/realtime input selecting
[20:31:07 CET] <durandal_1707> BodecsB: and the conclusion is that there is no such thing in FFmpeg currently
[20:31:33 CET] <BodecsB> yes, and you and someone else suggested writing a client program instead
[20:32:02 CET] <BodecsB> it is very general.
[20:32:19 CET] <BodecsB> Is there any chance to send it as an RFC and include it as a utility?
[20:32:32 CET] <BodecsB> if you and others like it
[20:32:50 CET] <BodecsB> I have written one
[20:33:07 CET] <durandal_1707> does it use internal API like ffserver?
[20:33:12 CET] <BodecsB> no
[20:33:19 CET] <BodecsB> only new API
[20:33:33 CET] <JEEB> BodecsB: did you look at how upipe does it?
[20:33:37 CET] <JEEB> if it was of interest
[20:33:46 CET] <BodecsB> Hi JEEB
[20:34:01 CET] <BodecsB> yes I see them
[20:34:10 CET] <BodecsB> but I thought it could be done easier
[20:34:26 CET] <BodecsB> 800 rows
[20:34:46 CET] <BodecsB> i did not find any requirement for utilities
[20:36:38 CET] <BodecsB> does it exist like this?
[20:36:42 CET] <durandal_1707> well, post it and expect reviews, flames, bikesheds, etc... as usual
[20:37:26 CET] <BodecsB> ok, but how should I send it, because it is not a patch, but a new c file
[20:38:02 CET] <durandal_1707> one can still create patch with git
[20:38:09 CET] <BodecsB> enclose simply the file?
[20:38:29 CET] <durandal_1707> no, learn to git
[20:38:30 CET] <BodecsB> the file is not in the repo
[20:38:35 CET] <durandal_1707> add it
[20:38:45 CET] <durandal_1707> create custom branch and so on
[20:38:55 CET] <durandal_1707> there is docs in ffmpeg source about this
[20:39:10 CET] <BodecsB> yes I did it locally here
[20:39:41 CET] <BodecsB> so create the file in the util subdir and create a patch
[20:40:08 CET] <JEEB> so it's an API client for the public APIs?
[20:40:13 CET] <BodecsB> yes
[20:40:18 CET] <JEEB> maybe look at how the doc/examples
[20:40:22 CET] <JEEB> are structured :)
[20:40:29 CET] <JEEB> could be a new example
[20:40:35 CET] <BodecsB> yes, originally I created it in the examples subdir
[20:40:43 CET] <JEEB> if it gets big enough it could as well be a new open source app :D
[20:40:49 CET] <BodecsB> but concluded that it is rather a utility
[20:41:02 CET] <BodecsB> no, it is very simple
[20:41:27 CET] <BodecsB> it is based on 4 examples
[20:42:24 CET] <cone-954> ffmpeg 03James Almer 07master:4f2ff3a53e17: avcodec/mpeg4_unpack_bframes: make sure the packet is writable when data needs to be changed
[20:43:03 CET] <JEEB> anyways, feel free to commit it and push onto your fork of FFmpeg on github or gitlab
[20:43:06 CET] <JEEB> and link it here
[20:43:44 CET] <JEEB> if it seems cool enough and can be built as an example then making a patch out of it and posting on ffmpeg-devel seems simple enough :)
[20:46:17 CET] <BodecsB> will you please look at it before I submit, here: https://pastebin.com/nyJRDidf, regarding the coding style?
[20:46:37 CET] <BodecsB> should I include the example authors also?
[20:46:51 CET] <JEEB> well to be honest you wouldn't be "submitting" anything :P just having your own WIP repo somewhere is enough
[20:46:56 CET] <JEEB> a fork is a fork
[21:17:06 CET] <cone-954> ffmpeg 03James Almer 07master:016d40011ac2: avcodec/extract_extradata: don't uninitialize the H2645Packet on every processed packet
[22:53:24 CET] <caimlas> hello, I'm trying to figure out how to compile ffmpeg with full nvidia/cuda support. While using --enable-libnpp I'm told "ERROR: libnpp not found" after having already installed the nvidia cuda packages provided by nvidia (on ubuntu 16.04), and I appear to have numerous /usr/local/cuda/lib64/libnpp* files. I've followed the (brief) instructions here: https://developer.nvidia.com/ffmpeg
[22:53:45 CET] <caimlas> here is how I'm trying to configure: https://pastebin.com/ksxYdayg - is anyone able to point me in the right direction to get --enable-libnpp to work?
[23:03:36 CET] <tmm1> you need to adjust your CFLAGS and LDFLAGS to include /usr/local/cuda since that's not a standard path
[23:04:13 CET] <tmm1> see for example https://gist.github.com/Brainiarc7/988473b79fd5c8f0db54b92ebb47387a
[23:04:31 CET] <tmm1> > --extra-cflags="-I/usr/local/cuda/include/" --extra-ldflags=-L/usr/local/cuda/lib64/
[23:05:09 CET] <caimlas> tmm1, looking, thanks.
[23:05:48 CET] <caimlas> yeah, I've got that verbatim.
[23:06:19 CET] <nevcairiel> libnpp is largely useless though. Just use the cuda filters instead
[23:06:53 CET] <caimlas> config.log seems to suggest: /usr/bin/ld: cannot find -lnppi
[23:06:58 CET] <caimlas> which doesn't overtly help me, sadly.
[23:07:07 CET] <caimlas> nevcairiel, yeah, trying to build to someone else's specification, sadly.
[23:07:25 CET] <BodecsB> PKG_CONFIG_PATH="/root/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --enable-gpl --enable-libfreetype --enable-libfdk_aac --enable-libzmq --extra-cflags=-I/root/ffmpeg_sources/cudautils --extra-ldflags=-L/root/ffmpeg_sources/cudautils --enable-libx264 --enable-nonfree --enable-nvenc --disable-shared --enable-cuda --enable-cuvid --extra-cflags=-Ilocal/include --extra-cflags=-I/usr/local/cuda-8.0/include
[23:07:25 CET] <BodecsB> --extra-cflags=-I/root/ffmpeg_sources/nv_sdk/inc --extra-ldflags=-L/root/ffmpeg_sources/nv_sdk/lib --enable-libnpp --extra-ldflags=-L/usr/local/cuda-8.0/lib64/ --disable-libx265 --extra-ldflags="-L/root/ffmpeg_build/lib/" --extra-cflags="-I/root/ffmpeg_build/include" --extra-ldflags="-L/root/ffmpeg_build/lib -ldl"
[23:08:03 CET] <BodecsB> working on a virtual droplet
[23:08:30 CET] <tmm1> i don't see /usr/local/cuda/lib64 anywhere in there
[23:09:23 CET] <BodecsB> https://developer.nvidia.com/cuda-downloads
[23:10:41 CET] <BodecsB> have you read this pdf on nvidia site: FFMPEG WITH NVIDIA ACCELERATION ON UBUNTU LINUX Installation and User Guide
[23:11:51 CET] <cone-954> ffmpeg 03Paul B Mahol 07master:b78d55b2e63e: avfilter/avf_showvolume: add background opacity option
[00:00:00 CET] --- Fri Mar 23 2018