[Ffmpeg-devel-irc] ffmpeg.log.20161005

burek burek021 at gmail.com
Thu Oct 6 03:05:01 EEST 2016


[00:30:03 CEST] <ozette> furq: thanks for your -skip_frame nokey tip by the way, that really really helped, processing now takes about 1 minute to 7 minutes where it took 1.5 to 3 hours first
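For context, -skip_frame is a decoder option: placed before -i it tells the decoder to drop everything except keyframes, which is why frame-sampling jobs like ozette's get so much faster. A minimal sketch of that kind of invocation (hypothetical file names, not ozette's actual command):

    ffmpeg -skip_frame nokey -i input.mp4 -vsync vfr thumbs_%04d.png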
[02:45:16 CEST] <stah0121> can anyone help me properly compile ffplay on its own ?
[02:45:17 CEST] <stah0121> followed the compilation guide for ffmpeg and tried a handful of different things, but I still can't get the ffplay binary to be generated
[02:46:27 CEST] <stah0121> everything else compiles just fine
[02:53:45 CEST] <klaxa> stah0121: well what kind of errors are you getting?
[02:53:50 CEST] <klaxa> and what have you tried
[02:54:08 CEST] <klaxa> if necessary (it will be) use pastebin to paste large chunks of text
[02:55:08 CEST] <stah0121> I'm currently doing 'gcc ffplay.c -o ffplay'
[02:55:19 CEST] <klaxa> oh
[02:55:35 CEST] <klaxa> you should maybe use the build system?
[02:55:53 CEST] <stah0121> I've googled for instructions and haven't come up with anything, so I went basic haha
[02:56:44 CEST] <stah0121> I've tried a ton of variations for the configure script and subsequent make commands but haven't been able to get ffplay to come out of it
[02:56:52 CEST] <klaxa> huh
[02:56:57 CEST] <klaxa> it's built by default though
[02:57:22 CEST] <stah0121> I know, I can see in the makefile its part of the core binaries
[02:57:30 CEST] <stah0121> so I'm pretty stumped
[02:57:40 CEST] <klaxa> if you *only* want ffplay (and especially care that ffmpeg, ffprobe and ffserver are not built) you can use the --disable-ff<component> flags
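A sketch of the flags klaxa is describing, assuming a source tree from that era (ffserver still existed then):

    ./configure --disable-ffmpeg --disable-ffprobe --disable-ffserver
    make && make install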
[02:58:01 CEST] <stah0121> $ ls ~/bin/
[02:58:02 CEST] <stah0121> ffmpeg  ffprobe  ffserver
[02:58:02 CEST] <stah0121> [xxx at xxx ffmpeg]$
[02:58:03 CEST] <klaxa> but that won't make ffplay magically appear out of thin air
[02:58:24 CEST] <klaxa> you set the path for configure and ran make install?
[02:58:53 CEST] <stah0121> yep. the compilation guide is pretty solid. I just followed the steps on there
[02:59:31 CEST] <stah0121> if I have everything else built, I should be able to compile ffplay by itself, but I'm thinking the commands for compile aren't exactly intuitive
[03:00:26 CEST] <klaxa> hmm
[03:00:54 CEST] <klaxa> can you pastebin your config.log?
[03:01:26 CEST] <stah0121> sure .. give me a couple min (never used pastebin before and food almost ready :) )
[03:03:14 CEST] <stah0121> hm, weird
[03:03:55 CEST] <stah0121> so I tried to git-clone the repo and only did a couple of the steps in the install guide in a custom directory .. and that attempt created a config.log .. but when I followed the compile guide to the letter, it didn't generate a config.log
[03:04:15 CEST] <stah0121> the config.log file I do have is about 12,000 lines long, so I'll just look for a few relevant sections and pastebin that
[03:08:31 CEST] <stah0121> okay I think this should work -- http://pastebin.com/vf6E3Ndb
[03:09:14 CEST] <klaxa> well what the hell it already says ffplay in there
[03:09:36 CEST] <klaxa> oh, you also only copied the lowest part, well it's exactly the one i was interested in
[03:09:58 CEST] <stah0121> hah nice
[03:10:21 CEST] <stah0121> but yeah, very strange stuff.
[03:10:38 CEST] <klaxa> what happens if you run make again?
[03:11:40 CEST] <stah0121> make: Nothing to be done for `all'.
[03:12:00 CEST] <klaxa> are you in the correct directory?
[03:12:08 CEST] <klaxa> one sec
[03:13:06 CEST] <DHE> if you want ffplay, you'll need the SDL devel package installed
[03:13:47 CEST] <stah0121> okay wtf .. the custom git-clone directory lets me run 'make' manually and says 'nothing to be done' ... the ~/ffmpeg_sources directory doesn't even let me run 'make'
[03:14:39 CEST] <klaxa> DHE: but would that be recognized at ./configure time?
[03:14:46 CEST] <klaxa> and ffplay would not be built
[03:15:24 CEST] <klaxa> huh, it gets built on my server
[03:15:25 CEST] <DHE> that's configure's job - see what is (and isn't) available on the system and build only what would succeed
[03:15:48 CEST] <stah0121> so this is the package that I installed per the compilation guide -- 'libsdl1.2-dev'
[03:15:53 CEST] <stah0121> that should be the dev right ?
[03:15:57 CEST] <DHE> looks right
[03:17:09 CEST] <stah0121> I'm on ubuntu 14.04 LTS btw
[03:18:48 CEST] <stah0121> be back in a min
[03:24:07 CEST] <stah0121> back
[03:29:05 CEST] <stah0121> this is the compile guide I'm using -- https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
[03:30:04 CEST] <DHE> do you have a program called sdl-config installed?
[03:30:25 CEST] <stah0121> ]$ which sdl-config
[03:31:12 CEST] <stah0121> '/usr/bin/sdl-config'
[03:31:18 CEST] <stah0121> ^^ that's the path it generated
[03:31:36 CEST] <klaxa> is there an ffplay in ~/ffmpeg_build/ ?
[03:31:52 CEST] <klaxa> wait
[03:31:55 CEST] <klaxa> that wouldn't make sense
[03:32:25 CEST] <stah0121> just did a search from root, only ffplay files are the .c files and the .texi files
[03:35:56 CEST] <klaxa> well make doesn't really log, does it?
[03:36:14 CEST] <stah0121> I don't think so
[03:36:18 CEST] <stah0121> if it does, probably not by default
[03:37:53 CEST] <stah0121> does this mean anything to anyone ?
[03:37:55 CEST] <stah0121> ]$ cat config.mak | grep -i "ffplay*"
[03:37:55 CEST] <stah0121> CFLAGS-ffplay=
[03:37:55 CEST] <stah0121> LIBS-ffplay=
[03:37:55 CEST] <stah0121> !CONFIG_FFPLAY=yes
[03:38:15 CEST] <stah0121> do I need to add a CFLAGS-ffplay flag or something ?
[03:43:04 CEST] <DHE> can you pastebin the whole damned config.log ?
[03:43:51 CEST] <stah0121> sure
[03:47:27 CEST] <stah0121> http://pastebin.com/CsZsiHev
[03:49:20 CEST] <DHE> it's looking for SDL 2.0, not 1.2
[03:53:39 CEST] <stah0121> hm, yeah that's probably a problem
[04:00:19 CEST] <stah0121> installed SDL 2 dev package and recompiling now
[04:15:45 CEST] <stah0121> well that fixed it
[04:16:06 CEST] <stah0121> thanks DHE, much appreciated
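For anyone hitting the same thing: at this point FFmpeg's configure looks for SDL 2.0 rather than 1.2 and quietly leaves ffplay out if it is missing, so the fix is roughly the following (Ubuntu package name and the compile-guide paths from the discussion above assumed):

    sudo apt-get install libsdl2-dev
    # then re-run ./configure with the same options as before and rebuild
    cd ~/ffmpeg_sources/ffmpeg && ./configure ... && make && make install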
[04:28:55 CEST] <moneylotion> anyone have a bitrate or rf recommendation for blueray rip?
[05:32:03 CEST] <radia> Does anyone know if there is a special command in ffmpeg that calculates all that is necessary to make a webm so it will be in good quality based on the .mp4
[05:47:02 CEST] <strongcoffee> radia, CRF mode using VP9 will do that.
[07:01:23 CEST] <teratorn> hi all, I'm trying to pull out the 100th frame of a testsrc2 video stream, and duplicate this frame at 30fps for 5 seconds... this is what I have so far (not working, only makes a video with one frame in it): ffmpeg -f lavfi -i testsrc -filter_complex "[0:v]trim=start_frame=100:end_frame=101[v0];[v0]fps=fps=30[v1];[v1]trim=duration=5[v2]" -map "[v2]" test_out.mp4
[07:02:32 CEST] <teratorn> this cmd, actually: ffmpeg -f lavfi -i testsrc2 -filter_complex "[0:v]trim=start_frame=100:end_frame=101[v0];[v0]fps=fps=30[v1];[v1]trim=duration=5[v2]" -map "[v2]" out.mp4
[07:03:27 CEST] <teratorn> any clues? :)
[08:29:21 CEST] <c_14> teratorn: you'll probably want the loop filter in there somewhere
[08:30:17 CEST] <teratorn> c_14: thanks, I was thinking I needed to investigate that one already, so now I will :)
[08:32:37 CEST] <c_14> Either after, or in place of the fps filter
[08:32:45 CEST] <c_14> Also, you don't need separate filterchains for that
[08:32:56 CEST] <c_14> you can just queue the filters by separating them with commas
[08:38:38 CEST] <teratorn> c_14: riiight
[08:39:02 CEST] <c_14> ie trim=start_frame:end_frame,fps,loop,trim
[08:43:51 CEST] <teratorn> c_14: then I concat that to the end or the orignal stream?
[08:44:00 CEST] <teratorn> to produce the pause effect?
[08:44:10 CEST] <teratorn> s/or/of/
[08:44:27 CEST] <c_14> hmm?
[08:44:37 CEST] <c_14> What are you trying to accomplish?
[08:44:51 CEST] <teratorn> extend the duration of a video by fabricating copies of the last frame at 30fps for 5 seconds
[08:45:46 CEST] <teratorn> oh wait
[08:45:51 CEST] <teratorn> just the damn loop filter can do it...
[08:45:58 CEST] <c_14> I think you could just use -vf fps=30,loop=150:start=100
[08:46:36 CEST] <teratorn> why start=100 ?
[08:47:11 CEST] <c_14> Because the 100th frame is the frame you want looped, no?
[08:48:00 CEST] <teratorn> oh, well no, the last frame, but I already calculate what number it is elsewhere, so I got it...
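Putting c_14's suggestion together into a single command, a sketch along these lines might do it (the loop filter's size parameter needs to be set to 1 so a single frame is repeated; 150 repeats at 30 fps gives the 5-second freeze; the frame number and file names are placeholders):

    ffmpeg -i input.mp4 -vf "fps=30,loop=loop=150:size=1:start=100" output.mp4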
[08:48:29 CEST] <kiwi_banal> On OX_X El Capitan is it possible with "ffmpeg -f avfoundation ..." to capture a particular window as with gdigrab on a Microsoft Windows system?
[08:48:41 CEST] <kiwi_banal> err OS_X
[08:48:58 CEST] <teratorn> which could also be accomplished by duplicating the last frame once and adding 5 seconds to its pts, I guess
[08:50:05 CEST] <kiwi_banal> I'm looking to screencast say just a terminal window... without needing to maximise it to fullscreen
[08:51:49 CEST] <teratorn> c_14: well it's sleep time, I'll tinker with this more tomorrow... thanks for your help, g'night
[12:29:53 CEST] <szulak> Hello, I am looking for any information on how I could merge two audio tracks in a video file programmatically. Could anyone share some info with me?
[12:31:20 CEST] <bencoh> you'd have to define "merge"
[12:32:27 CEST] <szulak> I have track 0 which is a song, and track 1 which is a mic - I would like to "merge" them into one track, and when played - hear both of these
[12:35:30 CEST] <Spring> szulak, do you mean have two separate audio tracks playing simultaneously or permanently combine the tracks?
[12:35:56 CEST] <szulak> permanently combine the tracks
[12:36:59 CEST] <Spring> this seems like what you're after: https://trac.ffmpeg.org/wiki/AudioChannelManipulation#a2stereostereo
[12:38:07 CEST] <Spring> the video stream would need to also be included in the mapping
[12:38:35 CEST] <furq> you can just use amix
[12:38:42 CEST] <furq> https://ffmpeg.org/ffmpeg-filters.html#amix
[12:38:51 CEST] <szulak> alright, but how about doing it programmatically? (without calling the ffmpeg binary)
[12:39:17 CEST] <furq> https://ffmpeg.org/doxygen/trunk/filtering_audio_8c-example.html
[12:40:29 CEST] <szulak> thank you, that's what I was looking for :)
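For reference, the command-line equivalent of furq's amix suggestion might look like this (assuming the song and the mic are the first and second audio streams of the same file; file names are placeholders):

    ffmpeg -i input.mkv -filter_complex "[0:a:0][0:a:1]amix=inputs=2[aout]" -map 0:v -map "[aout]" -c:v copy output.mkv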
[13:04:12 CEST] <n4zarh> any1 knows how to encode raw pcm data (from android-phone speaker) to pcm-mulaw frame by frame?
[13:07:09 CEST] <nonex86> n4zarh: check /doc/examples/transcode_aac.c
[13:07:22 CEST] <nonex86> n4zarh: its good place for start
[13:12:49 CEST] <n4zarh> yeah, but my problem seems to start with line: (*frame)->nb_samples = frame_size;
[13:13:16 CEST] <n4zarh> since as far as I understand alaw and mulaw codec contexts have frame_size==0
[13:17:00 CEST] <n4zarh> and I just don't know where to find value for nb_samples for those two encoders
[13:24:50 CEST] <nonex86> n4zarh: on an initialized alaw/mulaw encoder, what is the value of AVCodecContext->frame_size?
[13:25:05 CEST] <nonex86> n4zarh: are you sure its zero?
[13:25:47 CEST] <n4zarh> codec inited: pcm_mulaw
[13:25:47 CEST] <n4zarh> channels 1, framesize 0, samplefmt 1, buffersize -22
[13:25:56 CEST] <n4zarh> its from logcat
[13:26:47 CEST] <n4zarh> don't mind buffersize, tried to use av_samples_get_buffer_size from decoding_encoding.c example, I guess it won't be needed (might be wrong)
[13:29:18 CEST] <n4zarh> I read somewhere in documentation that frame_size might be 0 if codec has flag set with variable frame size
[13:29:50 CEST] <nonex86> n4zarh: http://stackoverflow.com/questions/37134003/ffmpeg-encoding-pcm-16-audio-data-allocation-error/39274304#39274304
[13:31:01 CEST] <n4zarh> oh.
[13:31:09 CEST] <nonex86> :)
[13:33:54 CEST] <n4zarh> so, if I get it right, if I get 640B frame as input, I should pass 640 to nb_samples?
[13:34:16 CEST] <n4zarh> sorry for asking stupid questions, I never needed stuff like this before
[13:37:18 CEST] <nonex86> n4zarh: guess it depends on the number of channels and the sample format, isn't it? :)
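To make that arithmetic concrete: assuming the 640-byte buffer is signed 16-bit mono PCM (sample format 1, i.e. AV_SAMPLE_FMT_S16, matching the logcat output above), that is 640 / (2 bytes per sample * 1 channel) = 320 samples, so nb_samples would be 320, and the resulting G.711 mu-law packet would be 320 bytes (one byte per sample).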
[13:52:50 CEST] <trfl> say you've got an h264 video with a keyframe every 10sec and you're seeking to somewhere in the middle of two keyframes... is it possible to create a .ts file or an rtmp stream which would instruct clients to skip the first 5sec when decoding?
[14:09:07 CEST] <furq> trfl: the output will start on a keyframe regardless
[14:12:16 CEST] <kepstin> trfl: with an rtmp stream, you could theoretically make a custom flash player that doesn't display the first part of the stream, maybe
[14:22:05 CEST] <CodecDev> Hi, I need quick help with an ffmpeg hang issue in a specific scenario on a windows machine.
[14:23:01 CEST] <CodecDev> I am running multiple instances of ffmpeg transcoding in parallel on windows. Out of 5 instances, 1-2 are getting hung in the middle of transcoding. And they never resume.
[14:23:36 CEST] <CodecDev> my source is 4k 10Mbps and target bitrates are 800, 1600, 2500, 3500, 5000.
[14:23:52 CEST] <CodecDev> has anyone seen similar behaviour before?
[14:24:39 CEST] <furq> no, but you should probably be doing all that with one ffmpeg instance so you don't have to decode a 4k stream five times
[14:25:16 CEST] <furq> particularly if the source is hevc
[14:25:29 CEST] <CodecDev> Yes that's true, but the 800k stream would have to wait until the 5000k stream transcode completes.
[14:25:50 CEST] <furq> how are you running multiple instances in parallel then
[14:26:25 CEST] <CodecDev> But interesting thing: if I make multiple copies of ffmpeg and use a different binary for each command, it never hangs !!!
[14:27:19 CEST] <CodecDev> I am suspecting there could be some shared resource which is leading to a deadlock among multiple instances of the same ffmpeg binary ?? Please correct me if I sound wrong here.
[14:29:46 CEST] <CodecDev> I am running one command for each output bitrate. 1 in 1 out transcode.
[14:31:08 CEST] <nonex86> when you said "hangs" what do you mean?
[14:32:01 CEST] <nonex86> never used several instances of the ffmpeg cli on windows, but I'm heavily using ffmpeg for decoding multiple streams in my software on windows without any problem
[14:32:20 CEST] <nonex86> lets say its about 16 fullhd h264 streams
[14:32:27 CEST] <CodecDev> here are my complete commands:  http://pastebin.com/mwCweJdq
[14:32:34 CEST] <furq> i still don't understand why you need to run one command per stream
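What furq is suggesting is a single ffmpeg run with several output files, so the 4k source is decoded once and every rendition is encoded from the same decoded frames. A rough sketch with made-up settings (not the pastebinned options):

    ffmpeg -i source_4k.mp4 \
        -c:v libx264 -b:v 800k  -c:a aac out_800k.mp4 \
        -c:v libx264 -b:v 1600k -c:a aac out_1600k.mp4 \
        -c:v libx264 -b:v 2500k -c:a aac out_2500k.mp4 \
        -c:v libx264 -b:v 3500k -c:a aac out_3500k.mp4 \
        -c:v libx264 -b:v 5000k -c:a aac out_5000k.mp4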
[14:32:51 CEST] <nonex86> i doubt ffmpeg have any global shared kernel objects that may lead to deadlock or something
[14:33:00 CEST] <nonex86> this makes no sense
[14:33:17 CEST] <nonex86> deadlock between the processes i mean
[14:33:41 CEST] <nonex86> what do you mean by saying "hangs"?
[14:33:48 CEST] <CodecDev> what I mean by "hangs" is, ffmpeg waits indefinitely.. cpu usage 0%
[14:34:00 CEST] <nonex86> you can try to find out exact place
[14:34:05 CEST] <nonex86> where it hangs
[14:34:10 CEST] <nonex86> download process explorer
[14:34:20 CEST] <nonex86> and check call stack of the "hanged" process
[14:35:02 CEST] <nonex86> on thread tab
[14:35:28 CEST] <CodecDev> these ffmpeg builds don't have symbols.. downloaded from https://ffmpeg.zeranoe.com/builds/
[14:35:55 CEST] <nonex86> can you build it yourself with symbols enabled?
[14:36:05 CEST] <CodecDev> I can try that.
[14:36:41 CEST] <CodecDev> But as I mentioned earlier, when I make a different copy of ffmpeg and use one copy for each command, I do not see this hang behaviour.
[14:37:17 CEST] <nonex86> i have my build of ffmpeg 3.0 with libx264 compiled with ms vc from visual studio 2013
[14:37:22 CEST] <CodecDev> only if I use the same ffmpeg binary for all 5 commands is this issue happening.
[14:37:32 CEST] <nonex86> really strange, yes
[14:37:52 CEST] <CodecDev> can u share that build ?
[14:37:54 CEST] <furq> if this is for streaming then don't use nal-hrd
[14:37:57 CEST] <furq> it's just a waste of bits
[14:37:58 CEST] <nonex86> so if you cant build it yourself i can share with you
[14:38:01 CEST] <CodecDev> I can quickly try using that.
[14:38:04 CEST] <furq> nal-hrd cbr, that is
[14:38:11 CEST] <nonex86> sure
[14:39:02 CEST] <CodecDev> thank you furq. I will follow that.
[14:46:10 CEST] <trfl> alright thanks furq, kepstin - good to know it's essentially not possible :)
[14:47:07 CEST] <furq> i mean it might be possible, but afaik the ffmpeg cli won't let you start a cut on a non-keyframe
[14:47:20 CEST] <furq> it'll just seek to the nearest
[14:49:09 CEST] <furq> i'm pretty sure most players will discard everything before the first IDR frame in a stream anyway
[14:50:29 CEST] <nonex86> vlc is not :)
[14:50:39 CEST] <furq> most good players, then
[14:51:22 CEST] <nonex86> good remark ;)
[14:51:36 CEST] <CodecDev> Just wondering, without an IDR how is VLC able to understand the stream characteristics??
[14:52:00 CEST] <nonex86> h264? sps+pps from codec_private field
[14:52:10 CEST] <nonex86> you dont need idr for this
[14:52:12 CEST] <furq> it's not that good. most of one player probably wouldn't work at all
[14:52:23 CEST] <nonex86> you can just take extra_data
[14:52:27 CEST] <nonex86> extract sps/pps
[14:52:36 CEST] <nonex86> and got your stream properties
[15:06:47 CEST] <geeky> what is the command to convert .avi to .mpg files that will work on a dvd video player
[15:07:48 CEST] <geeky> ffmpeg -i 1.avi 1.mpg generates a file that doesn't play
[15:14:48 CEST] <CodecDev> are u able to play 1.mpg on vlc?
[15:18:58 CEST] <relaxed> CodecDev: something like, ffmpeg -i input -target ntsc-dvd -q:v 3 output.mpg
[15:27:52 CEST] <geeky> CodecDev: yes
[15:28:04 CEST] <geeky> relaxed: ok thanks
[16:55:08 CEST] <szulak> I have a simple question about licensing - I've got FFmpeg binaries (built with "--enable-version3", aka LGPL 3) - can I distribute this binary (a single ffmpeg.exe) file with my application and call it from my application?
[16:55:53 CEST] <szulak> I am not linking my application with the FFmpeg libraries - instead just calling its binary to perform some work on a video file
[16:56:46 CEST] <furq> szulak: https://www.ffmpeg.org/legal.html
[16:57:06 CEST] <furq> it's debatable whether you need to do that if you're calling the binary, but it's best to be on the safe side
[16:57:15 CEST] <szulak> "The following is a checklist for LGPL compliance when linking against the FFmpeg libraries."
[16:57:28 CEST] <szulak> but I am not linking with FFmpeg, that's why I am confused :(
[16:57:48 CEST] <furq> the same thing applies
[17:01:56 CEST] <hawken> Hi.. I'm having issues with getting this bug handled...
[17:02:00 CEST] <hawken> https://trac.ffmpeg.org/ticket/5472#comment:12
[17:02:29 CEST] <hawken> Apparently he wants the output file, but he doesn't want output files
[17:03:07 CEST] <hawken> And my example isn't okay for him. He wants the original bug reporter
[17:03:30 CEST] <hawken> I should open a dupe bug so he has to accept it..
[17:04:04 CEST] <hawken> oh wait, then it will be a dupe. so, closed.
[17:04:21 CEST] <hawken> I'm really stuck between two chairs here
[17:04:51 CEST] <furq> i am glad i've never had dealings with carl-eugen hoyos
[17:10:02 CEST] <hawken> I don't want to get emotional about this but oh man...
[17:44:58 CEST] <Spring> would I be correct in saying that ffmpeg obeys the encoding quality in the following order: 1. qmin/qmax, 2. max bitrate cap, 3. CRF ?
[17:46:08 CEST] <Spring> as it seemed that way when encoding with libvpx, since qmax would override both CRF and any max bitrate cap, while the max bitrate would override CRF.
[17:46:25 CEST] <furq> i imagine it depends on the codec
[17:47:08 CEST] <Spring> any idea if h.264 behaves similarly?
[17:47:25 CEST] <Spring> haven't done any tests with it using qmax to know
[17:48:13 CEST] <Spring> in fact I left it out entirely since it does a good enough job without it
[17:50:30 CEST] <kepstin> Spring: the libx264 encoder uses crf, but a max bitrate cap (implemented via vbv) can override the bitrate crf picks, then the -crf_max option can override that, and you should basically never set qmin/qmax.
[17:50:53 CEST] <furq> looks like crf takes precedence with libx264
[17:51:00 CEST] <furq> over -b and -q
[17:51:24 CEST] <kepstin> (any stuff you see saying to set qmin/qmax with libvpx is probably referring to old versions of libvpx/ffmpeg where the default qmin/qmax values were poorly chosen)
[17:52:42 CEST] <Spring> furq, -b and -q being the quantization options?
[17:52:53 CEST] <furq> b is bitrate
[17:53:13 CEST] <kepstin> I'm not even sure what -q maps to with libx264; the way to do constant quantizer is with -qp.
[17:53:17 CEST] <furq> actually it looks like -q is ignored
[17:53:32 CEST] <kepstin> (imo, -q should have been mapped to crf with libx264...)
[17:54:09 CEST] <furq> i can imagine people would think it mapped to -qp though
[17:54:17 CEST] <furq> i can imagine that because one of those people was me, just now
[17:54:52 CEST] <Spring> In my case I'm capping the CRF bitrate using -maxrate + bufsize, which works. I should try some tests later with qp or whichever it is.
[17:55:00 CEST] <kepstin> Spring: using -crf, -b (bitrate) and -qp are three mutually exclusive ways to select the bitrate control algorithm x264 uses.
[17:55:53 CEST] <furq> don't use qp for that
[17:56:17 CEST] <kepstin> you can't use any bitrate controls with -qp, since qp enables constant quantizer mode
[17:56:40 CEST] <kepstin> the -qp option is mostly useful if either you want to use lossless mode, or you already know why you'd want to use it.
[17:56:44 CEST] <Spring> so it's an entirely different mode in libx264 then
[17:58:16 CEST] <kepstin> there's basically 3 modes in x264 - crf mode (constant quality), average bitrate target (set with -b:v), and constant quantizer
[17:58:25 CEST] <kepstin> and you can use vbv stuff with crf mode and abr mode.
[17:59:38 CEST] <Spring> for libvpx I've found it benefits from defining qmax, even with the latest versions
[18:00:50 CEST] <Spring> thanks for the detailed explanations of libx264
[18:01:32 CEST] <furq> libvpx's rate control is pretty bad
[18:01:44 CEST] <furq> i still wouldn't expect qmin/qmax to do anything but who knows
[18:28:51 CEST] <Spring> I also have in my notes that -ss and -to before -i can be wildly inaccurate. I think at that time I was testing libvpx exclusively so it's probably different for libx264.
[18:29:40 CEST] <furq> no, that depends solely on the input format
[18:29:58 CEST] <furq> and also whether you're stream copying
[18:30:53 CEST] <Spring> I was always transcoding both video/audio. My sources were h.264 in MKVs, mostly.
[18:31:23 CEST] <Spring> still, I've been told it's frame accurate placing it after -i as it's processed on the decoding level.
[18:32:31 CEST] <klaxa> yes, see: https://trac.ffmpeg.org/wiki/Seeking
[18:33:03 CEST] <klaxa> you can even combine -ss before and after -i to skip in the demuxer first and then in the decoder
[18:34:10 CEST] <Spring> that last bit flew over my head. So it can be beneficial duplicating the -ss both before/after -i?
[18:34:51 CEST] <klaxa> yes, see the page i posted, it explains it pretty well i think
[18:35:13 CEST] <klaxa> oh actually, let me read that again
[18:35:46 CEST] <c_14> It better, took me enough effort to update.
[18:36:37 CEST] <Spring> I see how it works, you need two different -ss values, one to get there fast the other for accuracy (if I'm understanding it correctly)
[18:36:43 CEST] <klaxa> when transcoding you can just use -ss in front of -i and it does what i explained but with only one command (since 2.1)
[18:37:02 CEST] <klaxa> >As of FFmpeg 2.1, combined seeking is still possible but I have yet to find a valid use case for it since -ss as an input option is now both fast and accurate.
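A sketch of the fast-and-accurate input seek being described (times and file names are placeholders): with -ss before -i, ffmpeg seeks in the demuxer to the nearest preceding keyframe and then, when transcoding, decodes and discards up to the requested point, so the cut ends up frame accurate:

    ffmpeg -ss 00:03:00 -i input.mkv -t 30 -c:v libx264 -c:a aac output.mp4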
[18:37:25 CEST] <Spring> mmm, I'll stick with the -ss after -i method. Seems fast to me anyway.
[21:43:00 CEST] <stephenwithav> how do I build graph2dot from the source code?  (I'm using https://hub.docker.com/r/nachochip/ffmpeg-build/~/dockerfile/ to build.)
[22:02:46 CEST] <thebigbean> I've got some old PAL video, 720x576, from 1990 that should be 4:3. The pixels are not square and it is messing up image stabilizing. Is there a good way to resample the video to use square pixels?
[22:03:07 CEST] <furq> https://ffmpeg.org/ffmpeg-filters.html#setdar_002c-setsar
[22:04:27 CEST] <thebigbean> thx furq but I did try -vf setdar=4:3, the thing is that the headers don't get passed right and it becomes 5:4
[22:04:57 CEST] <furq> you probably want setsar
[22:05:38 CEST] <thebigbean> what is the sar for PAL that is from the 1990s.
[22:06:02 CEST] <furq> probably 12:11
[22:07:16 CEST] <thebigbean> Is there any way to test what the sar should be?
[22:07:18 CEST] <furq> either use setdar=15/11 or setsar=1,setdar=4:3
[22:07:29 CEST] <furq> or 4/3, either works
[22:11:15 CEST] <thebigbean> So you saying I should set both sar and dar like -vf setsar=1,setdar=4:3 ?
[22:11:29 CEST] <furq> if you want square pixels, sure
[22:11:36 CEST] <furq> i normally just use setdar
[22:12:44 CEST] <thebigbean> I do want square pixels. The reason is that VirtualDub's Deshaker is good but it really hates pixels that are not square. The rotation gets weird and jelly-like.
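Note that setdar/setsar only change the aspect-ratio flags; if the goal is a frame whose pixels really are square, one way would be to rescale 720x576 to 768x576 (4:3 at the same height) and mark the SAR as 1:1. A sketch, untested and not what furq suggested above:

    ffmpeg -i input.avi -vf "scale=768:576,setsar=1" -c:a copy output.avi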
[22:13:11 CEST] <furq> ffmpeg has a deshake filter
[22:13:14 CEST] <furq> i've not used it though
[22:13:43 CEST] <furq> https://ffmpeg.org/ffmpeg-filters.html#vidstabdetect-1
[22:14:16 CEST] <thebigbean> it's ok-ish but sadly VirtualDub's Deshaker is freaking good. I have 150 +- frames in my temporal image restoration for the edges etc
[22:14:21 CEST] <furq> fair enough
[22:58:06 CEST] <stephenwithav> is there a better source for learning about filters than that documentation?
[23:10:09 CEST] <llogan> stephenwithav: not really. there are some articles on the wiki.
[23:10:28 CEST] <llogan> but i'm not sure what you mean by "better"
[23:11:27 CEST] <llogan> as for graph2dot: make tools/graph2dot
[23:11:42 CEST] <llogan> i guess
[23:18:33 CEST] <stephenwithav> thanks.  make alltools worked
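Once built, graph2dot reads a filtergraph description on stdin and emits a Graphviz description of it; the usage example from the FFmpeg documentation is along these lines:

    echo nullsrc,scale=640:360,nullsink | tools/graph2dot -o graph.tmp
    dot -Tpng graph.tmp -o graph.png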
[23:20:06 CEST] <stephenwithav> by better, that's hard to quantify.  reading through the wiki now.  more examples with pictures of the results would be nice.
[23:21:44 CEST] <BtbN> well, reading the source is a better source.
[23:24:06 CEST] <stephenwithav> agreed.  I'm trying to understand the overlay filter with multiple inputs, how it works.  once I figure that out, I think I'll be okay.
[23:31:15 CEST] <llogan> overlay accepts two inputs and outputs one output. if you need more you can do something like: [0:v][1:v]overlay[bg];[bg][2:v]overlay
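A sketch of llogan's chained overlay as a full command (input files and overlay positions are placeholders):

    ffmpeg -i background.mp4 -i logo1.png -i logo2.png \
        -filter_complex "[0:v][1:v]overlay=10:10[bg];[bg][2:v]overlay=W-w-10:10" \
        output.mp4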
[00:00:00 CEST] --- Thu Oct  6 2016

