[Ffmpeg-devel-irc] ffmpeg.log.20160901

burek burek021 at gmail.com
Fri Sep 2 03:05:01 EEST 2016


[00:01:30 CEST] <kepstin> heh. if you're gonna do that, the options that matter are probably (smpte rp-202) crop=704:480:8:5, (DV) crop=704:480:8:4, and (centered) crop=704:480:8:3
[00:01:49 CEST] <kepstin> I rather doubt you'll see video using anything other than one of those
[00:02:31 CEST] <kepstin> the annoying thing is that the smpte and dv crops might give opposite fields in the interlacing (tff vs bff)
[00:03:53 CEST] <kepstin> ffmpeg can switch that, but it requires either throwing out a field or offsetting the video vertically and adding a black line :/
[00:21:53 CEST] <gesh> Hi. I have some videos I want to splice together - i.e. I
[00:22:49 CEST] <gesh> 'd like to create a new video which is the concatenation of e.g. vid1 at 01:30-10:24, vid2 at 6:42-17:36, etc.
[00:24:05 CEST] <llogan> hseg: either use the concat demuxer or the concat filter
[00:24:14 CEST] <hseg> How do I do that?
[00:24:48 CEST] <llogan> http://ffmpeg.org/ffmpeg-formats.html#concat
[00:24:55 CEST] <llogan> http://ffmpeg.org/ffmpeg-filters.html#concat
[00:25:02 CEST] <llogan> http://trac.ffmpeg.org/wiki/Concat
[00:26:10 CEST] <iive> the 3rd url is wrong
[00:26:25 CEST] <iive> or page has been moved.
[00:26:45 CEST] <llogan> https://trac.ffmpeg.org/wiki/Concatenate
[00:26:53 CEST] Action: llogan blames old firefox history
[00:27:58 CEST] <hseg> What's the difference between the demuxer and the filter?
[00:28:19 CEST] <llogan> use the demuxer if: 1) your cuts are on or close enough to key frames 2) your two inputs are similar in format and properties 3) if you want to attempt to avoid re-encoding. otherwise use concat filter but it requires re-encoding.
[00:29:21 CEST] <llogan> also, the concat demuxer will require temporary files.
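The demuxer approach llogan describes can be sketched as follows; the segment files, timestamps, and output name are placeholders, and the list file is the temporary file he mentions:

```shell
# Build a concat list file for the demuxer. inpoint/outpoint trim each
# listed segment; with stream copy the cuts snap to keyframes.
cat > mylist.txt <<'EOF'
file 'vid1.mp4'
inpoint 00:01:30
outpoint 00:10:24
file 'vid2.mp4'
inpoint 00:06:42
outpoint 00:17:36
EOF

# Then concatenate with stream copy (no re-encoding):
#   ffmpeg -f concat -safe 0 -i mylist.txt -c copy out.mp4
```

With `-c copy` nothing is re-encoded, which is why this route needs the inputs to share format and properties.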
[00:29:55 CEST] <hseg> OK. So e.g. in order to reconcatenate an mp4 split into m2ts files, I should use the demuxer, but to construct a compilation from a heterogeneous collection of home videos, I should use the filter?
[00:30:07 CEST] <hseg> Also, what are the drawbacks of re-encoding?
[00:31:02 CEST] <hseg> Also, your description contradicts the wiki, which claims the demuxer is the more flexible of the two.
[00:31:47 CEST] <llogan> It depends on what you need to do
[00:32:18 CEST] <llogan> it can be argued that the filter is simpler if you're performing any additional filtering or if you want to re-encode anyway, such as to a different format
[00:35:05 CEST] <hseg> well, given that I'll need to use inpoint and outpoint, I think the demuxer will be more useful in my case.
[00:35:37 CEST] <llogan> you can use inpoint and outpoint with the filter too
[00:35:58 CEST] <gesh> However, if I'll just need to concatenate videos, I think the filter will be of more use.
[00:36:54 CEST] <llogan> if you want to concatenate whole videos that are "the same" and stream copy (remux) then use the demuxer.
[00:37:08 CEST] <llogan> http://ffmpeg.org/ffmpeg.html#Stream-copy
[00:37:30 CEST] <hseg> Wait, what? Now I'm confused.
[00:38:50 CEST] <hseg> My use case is: I want to splice parts of a heterogeneous collection of videos into a single video.
[00:39:37 CEST] <iive> m2ts files are supposed to be mpeg-ts, so you can just concatenate the files themselves
[00:40:02 CEST] <llogan> if you want to "copy and paste" these into a video then use the demuxer. if you need to do other filtering stuff to it, or if you want to re-encode for any reason, such as making it a smaller file size such as if the original encoder was shitty or inefficient, then use the filter
[00:40:10 CEST] <hseg> Right, although that gives a slightly noticeable jump at the file boundaries.
[00:40:10 CEST] <iive> i mean, the blueray probably reads them as a single stream.
[00:41:17 CEST] <iive> the protocol and demuxer methods won't work if input files come from different encodes.
[00:41:56 CEST] <iive> e.g. modern codecs usually have something called extradata: info like resolution, color format, etc. that is common to the whole file.
[00:42:24 CEST] <hseg> If you tell me how to find it, I can give you data on the encodings of the files I want to concatenate.
[00:42:29 CEST] <iive> if you try to splice 2 encodes that have different extradata, it won't work.
[00:43:16 CEST] <hseg> Also, to be clear, I'm referring to https://ffmpeg.org/ffmpeg-formats.html#concat as the concat *protocol* and https://ffmpeg.org/ffmpeg-filters.html#concat as the concat *filter*.
[00:43:21 CEST] <iive> the thing is, if you take one file and cut it in two... these two are likely to have same extradata...
[00:43:56 CEST] <hseg> From the docs, it seems that if I want to do funky stuff like re-encode or splice, I need to use the protocol.
[00:44:20 CEST] <hseg> Whereas if all I want is to concatenate, it's best if I use the filter.
[00:45:05 CEST] <hseg> Wait.... The Trac article and the docs seem to be using differing terminology.
[00:45:40 CEST] <llogan> there is also the concat protocol: http://ffmpeg.org/ffmpeg-protocols.html#concat
[00:46:39 CEST] <hseg> ... And this is why I have bad memories of trying to sort out what I needed. There's so much choice and so little data to inform it.
[00:49:25 CEST] <llogan> you can basically ignore the concat protocol
[00:49:55 CEST] <hseg> Yeah, it seems to be a smarter version of the GNU cat utility.
[00:50:12 CEST] <iive> concat protocol is like merging the files using `cat *.m2ts > all.m2ts`
[00:50:38 CEST] <iive> but it might work in your case.
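iive's comparison can be made concrete; the segment names are placeholders, and dummy files stand in for real MPEG-TS data so the byte-level behaviour is visible:

```shell
# The concat protocol is plain byte-level concatenation, equivalent to cat(1).
# It only makes sense for formats that can be spliced at the byte level,
# such as MPEG-TS. Dummy files stand in for real .m2ts segments here.
printf 'segment-one' > part1.m2ts
printf 'segment-two' > part2.m2ts

cat part1.m2ts part2.m2ts > all.m2ts

# The ffmpeg equivalent, which can also remux to MP4 in the same step:
#   ffmpeg -i "concat:part1.m2ts|part2.m2ts" -c copy out.mp4
```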
[00:51:34 CEST] <hseg> I have two cases, iive: One is to concatenate M2TS files to an MP4, the other is to splice a heterogeneous collection of files into a single video.
[00:52:16 CEST] <iive> hetero->different... that one is for the filter.
[00:54:19 CEST] <hseg> Does the filter support saying, e.g. take vid1 at times 04:00-35:21, vid2 at times 02:43-03:42, vid1 at times 01:10-03:41, ...?
[00:54:31 CEST] <hseg> i.e. splicing only parts of the inputs?
[00:55:11 CEST] <Jnorthrup> MP4Box -cat is pretty deluxe
[00:57:30 CEST] <llogan> hseg: yes. either use -ss and -t, or the trim/atrim filters
[00:59:52 CEST] <hseg> These can be set per-input, right? What is the difference between the three options?
[00:59:55 CEST] <Guest15129> almost had it... ERROR: libfaac not found
[00:59:57 CEST] <Guest15129> lol
[01:00:01 CEST] <Guest15129> two hours
[01:00:08 CEST] <Guest15129> will try again tomorrow
[01:00:45 CEST] <Guest15129> have a great day
[01:01:37 CEST] <llogan> Guest15129: don't use libfaac. it sucks and is going to be removed from FFmpeg soon
[01:02:06 CEST] <Guest15129> ahah, i will remove the library from my script when i do tomorrow
[01:02:24 CEST] <Guest15129> will see if it will build
[01:02:47 CEST] <llogan> if the native FFmpeg AAC encoder sounds good enough just use that or use libfdk_aac
[01:03:24 CEST] <Guest15129> i proly will not need any fancy stuff, plan to use google speech API for voice recognitions
[01:03:42 CEST] <Jnorthrup> encode celp :)
[01:04:28 CEST] <hseg> llogan: So, what is the difference between the three options you listed?
[01:04:37 CEST] <hseg> And how do I set them per-input?
[01:05:12 CEST] <llogan> http://ffmpeg.org/ffmpeg-filters.html#atrim
[01:05:16 CEST] <llogan> http://ffmpeg.org/ffmpeg-filters.html#trim
[01:05:26 CEST] <llogan> http://ffmpeg.org/ffmpeg.html
[01:05:57 CEST] <gesh> Oh, wait. atrim is for audio streams and trim is for video streams?
[01:06:41 CEST] <hseg> OK, and why would I want to use atrim/trim vs -ss/-t?
[01:06:50 CEST] <llogan> ffmpeg -ss 4 -t 2 -i input0 -ss 6 -t 00:01:23 -i input1 ... or -i input0 -i input1 -filter_complex "[0:v]trim=start=4:end=9[clip0];[1:v]trim..."
[01:07:27 CEST] <llogan> (a)trim may be more accurate, but slower
[01:08:10 CEST] <hseg> OK. -ss/-t seems easier to use.
[01:08:59 CEST] <hseg> So it seems I want to make sure all files have the same number of streams, then use the following command line:
[01:11:03 CEST] <hseg> ffmpeg -ss s0 -t t0 -i i0 -ss s1 -t t1 -i i1 ... -filter_complex '[0:v:0] [0:a:0] [1:v:0] [1:a:0] ... concat=n=...:v=1:a=1 [v] [a]' -map '[v]' -map '[a]' ... out.mkv
[01:11:21 CEST] <hseg> At least, this is what I infer from Trac
[01:11:50 CEST] <llogan> worth a try.
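Spelled out with the timestamps from earlier in the discussion, hseg's sketch might look like this (file names are placeholders; note `-t` takes a duration, so 01:30 to 10:24 becomes 08:54):

```shell
# Two trimmed inputs spliced into one re-encoded output via the concat
# filter; each -ss/-t pair applies to the input that follows it.
ffmpeg -ss 00:01:30 -t 00:08:54 -i vid1.mp4 \
       -ss 00:06:42 -t 00:10:54 -i vid2.mp4 \
       -filter_complex \
         "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[v][a]" \
       -map "[v]" -map "[a]" out.mkv
```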
[01:13:38 CEST] <hseg> OK. One final question for now: How do I get the list of streams in each file?
[01:13:50 CEST] <DHE> ffprobe is my preference
[01:13:59 CEST] <hseg> I only need to know the stream types, not the metadata.
[01:16:27 CEST] <llogan> ffprobe -loglevel error -show_entries stream=index,codec_type input.foo
[01:19:15 CEST] <hseg> That's a bit too terse for me. Raw ffprobe gives me http://sprunge.us/USYa, but all I need is http://sprunge.us/BbHF
[01:19:52 CEST] <hseg> That way, I can see A) What encoding each stream has and B) whether the container has any extra structure
[01:20:39 CEST] <llogan> then use -show_streams instead
[01:21:20 CEST] <llogan> or -hide_banner if you don't like the build info
[01:22:18 CEST] <hseg> -hide_banner is near enough to what I want to work.
[01:22:42 CEST] <llogan> in that case you can just use ffmpeg instead of ffprobe if you prefer
[01:26:07 CEST] <hseg> OK, thanks.
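The ffprobe variants discussed above, side by side (the input name is a placeholder):

```shell
# Just stream indexes and types:
ffprobe -loglevel error -show_entries stream=index,codec_type input.foo

# Full per-stream detail:
ffprobe -loglevel error -show_streams input.foo

# The default human-readable summary, minus the build info banner:
ffprobe -hide_banner input.foo

# ffmpeg prints the same summary (then errors out for lack of an output):
ffmpeg -hide_banner -i input.foo
```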
[08:50:49 CEST] <luc4> Hello! Building the version of ffmpeg that is bundled in chromium I'm getting this kind of error on arm: "bad instruction ldrhcs r0,[e2],#2". I'm not able to find much info about this or about the ldrhcs instruction but I found this: http://review.cyanogenmod.org/#/c/83285/, but I'm not sure what the patch is. Any idea if a patch for this error was published to ffmpeg maybe or not? Any idea what that instruction means? It seems a load register
[08:50:51 CEST] <luc4> but I can't find much info...
[08:59:59 CEST] <Guest15129> notification: I did, after much effort, manage to install ffmpeg onto the raspberry pi model 2b. I sincerely hope that debian will decide to make the switch away from lying to users in their packages. I think a LOT of people will benefit from debian making this decision, because today was wasted figuring out this problem rather than the mission I initially embarked upon. That said, THANK YOU to the guys in this IRC for your suggestions. I will likely employ
[08:59:59 CEST] <Guest15129> a different distribution of linux when my new raspberry pi comes in the mail.
[09:00:09 CEST] <Guest15129> goodnight.
[09:47:24 CEST] <_robin_> hello
[09:48:37 CEST] <_robin_> i'm facing an issue when capturing video, black screen appears
[09:49:00 CEST] <_robin_> running under virtualBox , ubuntu 14.04
[10:01:28 CEST] <_z5gr_> hello
[10:02:08 CEST] <_z5gr_> i'm trying to capture screen in ubuntu 14.04 which is running under virtual box, showing just black screen
[10:02:14 CEST] <_z5gr_> my command
[10:02:46 CEST] <_z5gr_> ffmpeg -f x11grab -s 362x176 -framerate 30 -i :0.0 -pix_fmt yuv420p -vcodec libx264 /home/user/Videos/output.mkv
[10:47:16 CEST] <relaxed> _z5gr_: that should be -video_size 362x176
[10:47:43 CEST] <relaxed> pastebin.com your command and console output for more help
[10:47:55 CEST] <_z5gr_> tried that
[10:48:00 CEST] <_z5gr_> didn't work
[10:48:03 CEST] <_z5gr_> ok sure
[10:50:37 CEST] <_z5gr_> http://pastebin.com/MnGN8BSL
[10:53:38 CEST] <relaxed> _z5gr_: your ffmpeg version is fairly old. Try https://www.johnvansickle.com/ffmpeg/
[10:55:39 CEST] <_z5gr_> relaxed: i have another version cloned i.e ffmpeg version git-2016-08-28-a37e6dd Copyright
[10:56:14 CEST] <termos> is there any SCTE-35 support in FFmpeg?
[10:57:43 CEST] <relaxed> _z5gr_: on pastebin it said 2.4.3-1ubuntu1~trusty6. Show me the output from the updated version
[10:58:05 CEST] <_z5gr_> ok
[11:00:23 CEST] <_z5gr_> http://pastebin.com/y3rMP0gV
[11:00:33 CEST] <_z5gr_> relaxed : http://pastebin.com/y3rMP0gV
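For reference, _z5gr_'s command with relaxed's `-video_size` correction applied, as a sketch (display, capture size, and output path are placeholders; a black capture under VirtualBox can also mean the grab is reading a different X display than the one shown):

```shell
# x11grab screen capture: input options go before -i, output options after.
ffmpeg -f x11grab -video_size 362x176 -framerate 30 -i :0.0 \
       -c:v libx264 -pix_fmt yuv420p /home/user/Videos/output.mkv
```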
[11:01:32 CEST] <Spring> I think it was durandal_1707 who suggested I use -maxrate <value> -bufsize <maxrate * 10> to set a max bitrate limit on CRF yet I haven't found it to be very effective at capping the bitrate
[11:03:33 CEST] <Spring> eg, I have a CRF 20 source that I'm encoding to CRF 24 with 20M as the desired bitrate cap, yet it goes to 30-36Mbps in the output
[11:03:44 CEST] <Spring> If I use a CRF 23 source the results are closer to expected but is there anything else I can do to cap it properly?
[11:09:31 CEST] <nennenja> How is ffmpeg different from CCCP?
[11:10:02 CEST] <Spring> ah, no it was JEEB
[11:33:17 CEST] <Spring> so turns out setting bufsize to *2 rather than *10 resolved this
[11:36:18 CEST] <Spring> what is puzzling is the filesize is higher than another video I have with a higher average bitrate
[12:15:16 CEST] <Spring> so that was easily explained and obvious in hindsight, the lengths were slightly different
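What Spring ended up with can be sketched as follows (file names and values are placeholders): `-maxrate` sets the cap and `-bufsize` controls how strictly it is enforced, so the 2x buffer that worked here caps much harder than the original 10x one:

```shell
# CRF encode with a VBV bitrate cap: quality-targeted, but peaks are
# constrained by maxrate/bufsize.
ffmpeg -i in.mkv -c:v libx264 -crf 24 \
       -maxrate 20M -bufsize 40M \
       -c:a copy out.mkv
```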
[12:27:28 CEST] <luc4> Hello! When building the version of ffmpeg that is bundled in chromium I'm getting this kind of error on arm: "bad instruction ldrhcs r0,[e2],#2". I'm not able to find much info about this or about the ldrhcs instruction but I found this which seems the same: http://review.cyanogenmod.org/#/c/83285/, but I'm not sure what the patch is. Any idea if a patch for this error was published to ffmpeg maybe or not? Any idea what that instruction means?
[12:27:30 CEST] <luc4> It seems a load register instruction but can't find more info...
[12:44:21 CEST] <xeche> hello guys. I'm having some issues with the FFMPEG API, specifically AVCodecContext and all the settings it exposes. I'm currently using libx265 for encoding. It looks like the options in AVCodecContext are generally just ignored.
[12:46:13 CEST] <xeche> So, I've taken to use the av_set_opt on the context->priv_data. But that also seems shakey, as it's order dependent
[12:48:09 CEST] <xeche> If I switch the order of two parameters when setting priv_data key/values, the setting is ignored and reverts to the documented default of libx265
[14:55:25 CEST] <nonex86> offtopic, guys, can anyone experienced in hardware overclocking? especially in intel xmp? i have several questions about it...
[14:55:37 CEST] <nonex86> *is anyone
[16:53:26 CEST] <kepstin> nonex86: it's not really relevant to this channel, but intel XMP is basically "put XMP memory into a Z-series chipset board, enable XMP in the bios, tada your ram is now faster"
[17:31:44 CEST] <kk45> Hi, quick question: I have to convert a 1080p mkv to a 720p mkv, what command? My command: http://pastebin.com/5U2FdGba
[17:36:07 CEST] <c_14> codec? -c:v libx264 probably and add -c:a copy
[17:37:09 CEST] <kk45> ok,
[17:38:20 CEST] <kk45> http://pastebin.com/bXaLHHaV it is a good code, yes?
[17:38:46 CEST] <xeche> anyone know how to set priv_data options for multiple settings?
[17:38:48 CEST] <c_14> ffmpeg -i /home/han/code32.mkv -c:v libx264 -c:a copy -s hd720 /han/code3.mkv
[17:39:11 CEST] <xeche> libx265 requires the x265-params switch and then a key value list separated by :
[17:39:34 CEST] <xeche> i'm doing this with av_set_opt, but it doesn't seem to be respected for multiple options?
[17:41:12 CEST] <xeche> i.e. av_opt_set(codecContext->priv_data, "x265-params", "keyint=15:keyint-min=10:no-scenecut", 0);
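For comparison, the same settings on the ffmpeg command line, where the whole colon-separated list is handed to the encoder in a single `-x265-params` option and the ordering problem does not arise (file names are placeholders):

```shell
# libx265 private options passed through in one x265-params string.
ffmpeg -i in.mkv -c:v libx265 \
       -x265-params "keyint=15:keyint-min=10:no-scenecut=1" \
       out.mkv
```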
[17:41:57 CEST] <kk45> Thanks help is working
[18:57:13 CEST] <kyleogrg> i'm doing two-pass vp9, and on both passes i have a slow deinterlace filter.  could i speed up the first pass by using a fast filter?
[18:59:09 CEST] <soulshock1> I would imagine you need to run the same filters on both passes, to get good results
[19:02:34 CEST] <kyleogrg> soulshock1: that's what i supposed...
[19:03:40 CEST] <manusitara> Hi
[19:05:47 CEST] <manusitara> How can we find particular macroblock count from video frames?
[19:06:04 CEST] <kyleogrg> what crf might be considered visually lossless for vp9?
[19:13:27 CEST] <kepstin> kyleogrg: depends how good your vision is? I don't know anyone who has actually done any publically documented tests, so you might just have to test for yourself.
[19:15:20 CEST] <kyleogrg> kepstin: yeah, i was looking for a rule of thumb, but i'll have to experiment.
[19:15:41 CEST] <kepstin> if you've got the space for it, it might make sense to deinterlace to some fast lossless codec then do the multipass encode based on that. Hard to say whether that would be faster or slower overall.
[19:16:00 CEST] <kyleogrg> really....
[19:16:54 CEST] <kyleogrg> you mean because it's faster to decode a simple codec twice than a more complex one?
[19:19:02 CEST] <kyleogrg> i could see that if i was doing a really really slow filter, because every filter has to be applied twice in two-pass, so the slowness is doubled
[19:25:45 CEST] <kepstin> yeah, that was primarily referring to the speed of the deinterlacer. If it's slow enough to be the limit on the encoding speed, doing it separately from the actual encode might be an improvement with a 2-pass encode.
[19:26:05 CEST] <kepstin> (something like ffmpeg's nnedi filter might be slow enough for that to be worth it)
[19:30:55 CEST] <kyleogrg> kepstin: well, i've been using yadif with mcdeint.  is there another deinterlacing filter that does a better job?
[19:31:59 CEST] <kepstin> of the deinterlacers in ffmpeg, nnedi is probably the best. But it's *really* slow, since it's a single-threaded cpu implementation of an algorithm that runs best on gpus :/
[19:33:51 CEST] <kyleogrg> ah
[19:33:56 CEST] <kyleogrg> never tried it
[19:36:48 CEST] <kyleogrg> never heard of it either.  i wonder if it's new to ffmpeg.
[19:38:05 CEST] <kepstin> hmm, I think it was added in 3.0? I forget, it might have been earlier
[19:50:20 CEST] <kyleogrg> is lanczos the best scale filter?
[19:56:25 CEST] <kepstin> you ask 10 people, you're gonna get 10 different answers for "best scale filter"
[19:56:44 CEST] <kepstin> iirc, ffmpeg's default is a cubic algorithm of some sort that is pretty decent.
[20:00:40 CEST] <kyleogrg> kepstin: it seems like lanczos is extremely common anyway, so i was wondering why
[20:01:25 CEST] <kyleogrg> btw, i'm scaling deinterlaced footage
[20:03:43 CEST] <kepstin> lanczos is generally considered to be a pretty good upscaling algorithm, I think. It can add a bit of ringing sometimes, which will often look similar to "edge enhancement"
[20:04:13 CEST] <kyleogrg> okay, what about scaling from 720x480 interlaced to 640x480 progressive?
[20:04:19 CEST] <kepstin> and of course implementations vary
[20:05:14 CEST] <kepstin> kyleogrg: scaling like that? you'd have a very tough time telling any scalers apart. Like, bilinear would probably look ok :/
[20:07:26 CEST] <fritsch> kyleogrg: use yadif for deinterlacing
[20:07:31 CEST] <fritsch> then lanczos for scaling
[20:08:33 CEST] <kyleogrg> fritsch: do you think that's the best for downscaling, and scaling from 720x480 to 640x480?
[20:09:17 CEST] <kepstin> it certainly is very unlikely to be worse than any other filter at that kind of change. I'm sure it'll give good results.
[20:10:01 CEST] <kyleogrg> kepstin: downscaling too?
[20:10:41 CEST] <kyleogrg> i'm deinterlacing a source video and encoding it into four different resolutions.  the largest is 640x480, then they get smaller and smaller
[20:10:46 CEST] <kyleogrg> so i need to downscale
[20:11:28 CEST] <kepstin> kyleogrg: really, though, "best" is a matter of personal preference here. Why don't you try a couple and see which look best for the results you want?
[20:11:58 CEST] <kyleogrg> kepstin: yeah
[20:12:17 CEST] <kepstin> i bet you'd get acceptable, but slightly different, results from any of bilinear, bicubic, or lanczos.
[20:14:21 CEST] <kepstin> for downscaling from really big to really small sizes, the "area" scaler (which I think is a box filter) will often give a smoother image (less artifacts)
[20:14:57 CEST] <kepstin> that would be if you're scaling to 50% original size or smaller
[20:16:18 CEST] <kyleogrg> yeah, i'm going youtube-esque, with 480p, 360p, 240p, 144p
[20:16:38 CEST] <kyleogrg> so maybe i shouldn't use the same filter for every one
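Putting the scaler advice together, the four-rung ladder might be sketched like this (lanczos above 50% of the source size, area at or below it, per kepstin; input name, sizes, and encoder settings are placeholders):

```shell
# One pass, four scaled outputs; the source is assumed already deinterlaced.
ffmpeg -i deinterlaced.mkv \
  -vf scale=640:480:flags=lanczos out_480p.mkv \
  -vf scale=480:360:flags=lanczos out_360p.mkv \
  -vf scale=320:240:flags=area    out_240p.mkv \
  -vf scale=192:144:flags=area    out_144p.mkv
```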
[20:19:19 CEST] <Spring> it's surprising how few people need to view videos for bandwidth to skyrocket
[20:22:02 CEST] <Spring> like, for twenty 50MB clips, a mere twenty people viewing them all would consume 20GB of bandwidth.
[20:30:24 CEST] <Spring> for self-hosting and posting on a popular site with hundreds and thousands of views every day...
[20:30:47 CEST] <kyleogrg> yes...
[20:31:04 CEST] <kyleogrg> high efficiency codecs like vp9 help
[20:31:26 CEST] <Spring> well, I honestly have no idea how its practical
[20:31:31 CEST] <Spring> or how other sites do it
[20:31:59 CEST] <Spring> flickr host videos but they don't allow for direct links, even though many still images can be 10MB+
[20:32:27 CEST] <kyleogrg> idk, my project isn't for a big site
[20:33:08 CEST] <Spring> oh I wasn't meaning for your posts, just reminded me of my own one :p
[20:33:26 CEST] <Spring> *own problem
[20:35:46 CEST] <kepstin> i find youtube's use of vp9 kind of interesting. They're balancing the fact that it requires a lot of resources to encode vs. the bandwidth savings, and only running vp9 encodes for videos with more than X views (where X appears to be around 200-300)
[20:36:25 CEST] <kyleogrg> i know, vp9 is super slow.  it's mind boggling how google can use it on millions of videos
[20:36:46 CEST] <kepstin> most of the videos on youtube have nowhere near 200 views :)
[20:36:50 CEST] <Threads> cutting corners no doubt
[20:37:10 CEST] <kyleogrg> yeah, but i'm sure there are tons of videos that have enough views
[20:37:27 CEST] <kyleogrg> at least compared to my resources, it's very impressive...
[20:37:38 CEST] <kepstin> they probably have people (or maybe algorithms) calculating exactly what threshold saves them the most money on encoding vs. serving :/
[20:37:59 CEST] <kepstin> and google does, of course, have a lot of both compute and bandwidth available...
[20:39:47 CEST] <kyleogrg> are vp9 and hevc going to be supported by the major video editing softwares?
[20:40:11 CEST] <kyleogrg> if i encode a large library of videos, i want to know if they will be useable for editing
[20:40:51 CEST] <kepstin> probably hevc will be better supported by commercial software, overall
[20:41:13 CEST] <Spring> well, from what I've read Google doesn't pay for bandwidth costs in the traditional way
[20:41:38 CEST] <kepstin> open-soure video editing tools usually just use ffmpeg to decode, so they'd probably handle either without issue...
[20:41:39 CEST] <llogan> kyleogrg: if not then encode to a supported lossless intermediate.
[20:42:26 CEST] <llogan> such as ut video if using premiere or whatever (of course you have to install ut video first for premiere)
[20:42:45 CEST] <Spring> also someone mentioned Google wanted to use Vpx as a bargaining chip against the patented codecs
[20:43:16 CEST] <Spring> so it's in their interests to use it on the most popular video site
[20:43:33 CEST] <llogan> ...and the lossless would possibly be easier to edit if your machine is an old jalopy graybeard
[20:44:15 CEST] <kyleogrg> llogan: yeah, lossless is something i'm considering.  speaking of, i haven't yet tried vp9 lossless
[20:44:44 CEST] <llogan> some are going to decode easier than others
[20:44:46 CEST] <kyleogrg> Spring: yup, i think places like youtube are trying to go towards the more open codecs
[20:46:51 CEST] <kyleogrg> if you're scaling to exactly half the resolution, can you use a filter that simply discards every other pixel?  like going from 640x480 to 320x240?  or am i looking at it too simply?
[20:49:31 CEST] <llogan> sounds like nearest neighbor scaling
[20:53:40 CEST] <kepstin> kyleogrg: that's nearest neighbor, yeah. And it'll look pretty bad because of aliasing issues
[20:54:00 CEST] <kepstin> (e.g.  imagine a 1-pixel wide line moving sideways 1 pixel per frame; it'll appear and disappear on alternating frames)
[20:54:22 CEST] <kyleogrg> kepstin: ah, that's right
[20:54:36 CEST] <llogan> you can use it for pixel art
[20:54:50 CEST] <llogan> scale=320:-1:flags=neighbor
[20:55:33 CEST] <kepstin> for downscaling by 1/2 i'd recommend using the 'area' scaler, which will generate each pixel by averaging the 2x2 square of pixels that cover the same area.
[20:55:43 CEST] <kepstin> fairly simple, with decent results.
[20:56:02 CEST] <kyleogrg> kepstin: would that be better than bicubic for 240p?
[20:56:31 CEST] <kepstin> yes
[20:58:31 CEST] <kepstin> (many of the sample-based "bi-whatever" filters have aliasing issues when downscaling to 1/2 or smaller)
[20:58:52 CEST] <kyleogrg> ah okay
[21:07:41 CEST] <kyleogrg> where can i find an example of nnedi in ffmpeg
[21:09:28 CEST] <kepstin> not really much to it; it's just a filter, and has only one required parameter which the docs say how to set.
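A sketch of that one required parameter in use (the nnedi3 weights file ships separately and its path here is a placeholder; `deint=all` deinterlaces every frame rather than only flagged ones):

```shell
# nnedi deinterlace; weights= is the filter's one required option.
ffmpeg -i interlaced.mkv \
       -vf nnedi=weights=nnedi3_weights.bin:deint=all \
       out.mkv
```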
[21:19:54 CEST] <kyleogrg> okay
[21:33:38 CEST] <abach> hi all
[21:34:09 CEST] <abach> I have troubles by installing ffmpeg on my raspberry PI2
[21:35:38 CEST] <llogan> what kind of troubles? which distro?
[21:37:00 CEST] <abach> i have Raspbian, based on Debian, and when I launch sudo ./configure --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree, I get "ERROR: libx264 not found" as the error message
[21:39:55 CEST] <llogan> did you install libx264? why are you using sudo? you don't need --enable-nonfree
[21:40:16 CEST] <llogan> might be easer for you to install ffmpeg from jessie-backports repo
[21:41:30 CEST] <abach> when I try to enter sudo apt-get install libx264 it returns that this package doesn't exist
[21:41:44 CEST] <llogan> you need libx264-devel (libx264-dev on Debian/Raspbian)
[21:42:23 CEST] <abach> and, on my raspberry, I don't have a graphical interface, so I don't know how to activate the backports
[21:42:53 CEST] <llogan> ask in #raspbian or #debian or whatever regarding that
[21:42:56 CEST] <kyleogrg> how can i use yadifmod in ffmpeg?
[21:43:28 CEST] <llogan> abach: for raspberry consider adding "--enable-omx --enable-omx-rpi"
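The sequence llogan is pointing abach toward, as a sketch (run from an FFmpeg source checkout; the package name follows Debian/Raspbian conventions, and sudo is only needed for the install steps, not for configure or make):

```shell
# Install the x264 development headers first:
sudo apt-get install libx264-dev

# Configure and build as a normal user:
./configure --arch=armel --target-os=linux --enable-gpl \
            --enable-libx264 --enable-omx --enable-omx-rpi
make -j4
sudo make install
```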
[21:44:45 CEST] <llogan> kyleogrg: modify the yadif filter and recompile. or possibly use it via avs script.
[21:44:55 CEST] <abach> llogan: I'm trying
[21:45:10 CEST] <kyleogrg> llogan: is qtgmc supposed to be about the best deinterlacer?
[21:45:29 CEST] <llogan> i don't know. i've never used that
[21:45:30 CEST] <abach> llogan : I also have ERROR: libx264 not found
[21:45:31 CEST] <abach> If you think configure made a mistake, make sure you are using the latest
[21:45:31 CEST] <abach> version from Git.  If the latest version fails, report the problem to the
[21:45:31 CEST] <abach> ffmpeg-user at ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
[21:45:31 CEST] <abach> Include the log file "config.log" produced by configure as this will help
[21:45:31 CEST] <abach> solve the problem
[21:45:51 CEST] <llogan> use a pastebin site to paste more than a few lines
[21:46:02 CEST] <abach> and I launched sudo ./configure --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --enable-omx --enable-omx-rpi
[21:46:09 CEST] <llogan> why are you using sudo? did you install libx264-devel?
[21:46:22 CEST] <llogan> you don't need --enable-nonfree
[21:46:30 CEST] <llogan> did you read anything i typed previously?
[21:46:34 CEST] <kepstin> kyleogrg: qtgmc looks like an avisynth script that wraps around nnedi and a few other filters. you could probably replicate some of its effects with ffmpeg filters, but not exactly/all.
[21:46:45 CEST] <llogan> abach: maybe someone else can help you.
[21:47:23 CEST] <abach> llogan : thx for you help. I'm trying to find
[21:47:26 CEST] <kyleogrg> kepstin: okay
[21:49:02 CEST] <kyleogrg> thanks for the help everyone
[21:49:05 CEST] <kyleogrg> taking off
[21:49:58 CEST] Action: kepstin is generally more interested in detelecine rather than deinterlacing, given the SD content he works with :/
[21:50:22 CEST] <JEEB> fieldmatch/decimate
[21:51:45 CEST] <kepstin> that's pretty much the best ffmpeg has, yeah, but I find decimate has some issues when you're dealing with 12fps content (e.g. animation) where it picks the wrong frame to delete
[21:52:36 CEST] <JEEB> you can tweak the cycle etc
[21:52:59 CEST] <JEEB> if you need to start doing IVTC per-scene or so then I recommend going towards vapoursynth or so
[21:53:08 CEST] <kepstin> yeah :/
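The fieldmatch/decimate pair JEEB names is the usual lavfi inverse-telecine chain; a sketch (input name is a placeholder; the yadif step only cleans frames fieldmatch still flags as combed):

```shell
# Inverse telecine: match fields, deinterlace leftover combed frames,
# then drop the duplicate frame per cycle (29.97i telecined -> 23.976p).
ffmpeg -i telecined.mkv \
       -vf fieldmatch,yadif=deint=interlaced,decimate \
       out.mkv
```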
[21:53:16 CEST] <durandal_1707> i will kick you all
[21:53:55 CEST] <abach> I found the solution : I activated the backports on my debian
[21:54:09 CEST] <JEEB> well, yeah - because lavfi has such capable preview that lets you pick the moments where to cut and then simply create parts that you filter with somewhat different settings between scene
[21:54:22 CEST] <JEEB> I mean, I use lavfi a *lot* , don't get me wrong
[21:54:46 CEST] <JEEB> but it just isn't in my toolbox for the stuff where I need to invest time into working with something with a live preview
[21:56:07 CEST] <durandal_1707> you manually get frame numbers you need to process?
[21:57:24 CEST] <JEEB> it's not a problem in lavfi per se, it's just that it has nothing like avsp(mod) or vapoursynth editor :P so yes, you can use one of those to get the frame numbers, but at that point you might as well use those for the actual filtering
[21:57:49 CEST] <kepstin> one of my friends has actually done some work on adding an 'enable' option to the 'detelecine' filter so you could do manual frame selection applying a detelecine pattern, but I guess he hasn't tried to get it upstream :/
[21:58:15 CEST] <durandal_1707> then write a lavfi editor
[21:58:17 CEST] <JEEB> someone could stick the indexing from LWLibavVideoSource into an app and then create a live preview for lavfi chains and has a nice way of modifying lavfi chains
[21:58:30 CEST] <JEEB> but as alternatives that already work exist
[21:58:34 CEST] <JEEB> why the fuck would you :P
[21:59:00 CEST] <JEEB> it's not that lavfi is bad, it's just that other things are simpler to use for this kind of stuff.
[21:59:17 CEST] <JEEB> for general simple filtering ffmpeg cli works and works well with lavfi
[21:59:48 CEST] <kepstin> I haven't used lavfi via api before, how well does it handle seeking? from looking at some of the filters, it seems like that could be problematic.
[22:00:18 CEST] <JEEB> no idea, but by default lavf/lavc don't keep indexes like most of the source filters do
[22:00:24 CEST] <durandal_1707> you feed it frames you want to filter
[22:00:55 CEST] <durandal_1707> but you cant access random frame from inside random filter
[22:00:59 CEST] <kepstin> hmm, that doesn't really work for stateful filters :/
[22:01:13 CEST] <kepstin> I guess you could seek early, run some extra frames through, and hope it converges
[22:01:23 CEST] <kepstin> might work for some filters.
[22:02:46 CEST] <kepstin> and obviously anything that's using expressions with e.g. frame count or whatever would be completely off. I suppose time based ones might be ok, if they can handle the time jumps at all :/
[00:00:00 CEST] --- Fri Sep  2 2016


More information about the Ffmpeg-devel-irc mailing list