[Ffmpeg-devel-irc] ffmpeg.log.20150630

burek burek021 at gmail.com
Wed Jul 1 02:05:01 CEST 2015


[00:00:47 CEST] <Nolski> llogan: sorry I'm a bit new to ffmpeg. What's the difference between using the concat protocol vs the demuxer. (or what is the syntax to the demuxer?)
[00:09:40 CEST] <kepstin-laptop> Nolski, concat protocol is for when the container type supports file concatenation to join segments (e.g. mpeg-ts). Concat demuxer is for when containers cannot be concatenated (e.g. mp4), but the video stream inside them could be (same codec, parameters, whatever). Concat filter is for when neither of those work and you need to fully decode the video.
[00:25:41 CEST] <Nolski> kepstin-laptop: Oh boy, so my videos are all guaranteed to be the same format, but I don't know what that format will be (very likely either webm or mp4). Do I have to choose my command depending on the file type?
[00:27:32 CEST] <Mavrik> Are they guaranteed to have the same resolution, framerate, aspect ratio?
[00:36:25 CEST] <kepstin-laptop> Nolski, the other thing to consider - are you re-encoding the video anyways? if so, you could just always use the filter method
[00:36:41 CEST] <kepstin-laptop> the other two methods are only really useful if in some cases you might not want to re-encode.
[00:36:50 CEST] <Nolski> Mavrik: yes
[00:37:09 CEST] <Nolski> kepstin-laptop: I wouldn't mind re-encoding
[00:37:30 CEST] <Nolski> I might try the filter method. It does look pretty scary haha
[01:13:22 CEST] <Nolski> Hmm.. It's saying too many inputs for concat. Am I doing this wrong? https://pastebin.mozilla.org/8838081
[01:14:40 CEST] <c_14> Nolski: well, you should obviously have n=3
[01:14:46 CEST] <c_14> since you have 3 inputs
[01:14:58 CEST] <Nolski> Oooh I thought n=3 meant the amount of tracks
[01:15:05 CEST] <Nolski> because in the example there were 2 audio and 1 video
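The three approaches kepstin-laptop outlines look roughly like this; filenames here are placeholders, and note (per c_14) that `n` in the concat filter counts input segments, not streams:

```shell
# Concat protocol: containers that support byte-wise concatenation (e.g. MPEG-TS)
ffmpeg -i "concat:seg1.ts|seg2.ts|seg3.ts" -c copy joined.ts

# Concat demuxer: same codec/parameters, but the container can't be byte-concatenated (e.g. MP4)
printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > list.txt
ffmpeg -f concat -i list.txt -c copy joined.mp4

# Concat filter: full decode/re-encode; n = number of input segments
ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 \
  -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" joined.mp4
```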
[04:12:37 CEST] <brewtany> hi
[04:12:55 CEST] <brewtany> how can ffmpeg change mpegts start_time?
[04:18:17 CEST] <brewtany> to offset the dts/pts
[05:18:38 CEST] <pandb> i'm experimenting with generating videos from image sequences. what's the preferred format for such input images?
[05:19:39 CEST] <pandb> right now i'm using imagemagick to convert random images from the web to 8-bit rgb pngs that are all 640x480
[05:53:08 CEST] <grepper> I'm no expert on ffmpeg, but png sounds good because it is lossless.
[06:50:31 CEST] <grepper> unless you are talking about naming, in which case  something like a printf %06d format, ie. 000001.png 000002.png etc
[07:22:33 CEST] <pandb> yeah, png seems to work well
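grepper's `%06d` naming convention can be generated with printf, and the resulting sequence fed to ffmpeg's image2 demuxer; the source filenames and framerate below are illustrative:

```shell
# Produce the zero-padded names the image2 demuxer expects
i=1
for f in frame_a.png frame_b.png; do   # hypothetical source files
  name=$(printf '%06d.png' "$i")       # 000001.png, 000002.png, ...
  echo "$f -> $name"
  i=$((i + 1))
done

# Then encode the numbered sequence at 25 fps
ffmpeg -framerate 25 -i %06d.png -pix_fmt yuv420p out.mp4
```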
[07:23:45 CEST] <pandb> my current problem is that i know how to generate a video from a sequence of images. I also know how to broadcast a video to an rtmp server(twitch.tv in this case).
[07:24:15 CEST] <pandb> but what I can't figure out is how to do that all in one step
[07:24:56 CEST] <chungy> I'm not positive on it, but I think ffmpeg would be able to work with input images of varying dimensions. (btw, you can also use ffmpeg to rescale to 640x480 PNGs)
[07:25:07 CEST] <pandb> i.e. broadcast video to rtmp that's generated on-the-fly by a bunch of png files in a directory somewhere
[07:25:25 CEST] <chungy> It's just both methods combined into one command.
[07:25:41 CEST] <chungy> I'd just give the image sequence as the input, and the rtmp server as the output
[07:26:02 CEST] <pandb> chungy, i've been trying different variations on that to no avail
[07:26:21 CEST] <chungy> well,
[07:26:24 CEST] <pandb> i've even been able to broadcast my X display successfully
[07:26:35 CEST] <pandb> um sure
[07:26:42 CEST] <pandb> might take a second
[08:17:48 CEST] <pandb> fflogger (and anyone else interested), here is some console output related to my problem of streaming a video generated by still images to an RTMP server: http://pastebin.com/LBHX2EqE
[08:25:23 CEST] <pandb> oh also chungy, too
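chungy's suggestion of doing both steps at once would look something like this sketch; the framerate, encoder settings, and stream URL are placeholders (Twitch ingests expect FLV):

```shell
# Read the PNG sequence at a fixed rate, encode to H.264, push to RTMP in one step.
# -re paces the input in real time, which live servers generally expect.
ffmpeg -re -framerate 30 -i %06d.png \
  -c:v libx264 -preset veryfast -pix_fmt yuv420p -g 60 \
  -f flv rtmp://live.twitch.tv/app/STREAM_KEY
```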
[09:47:58 CEST] <brewtany> hi, how to change start time of mpegts files?
[10:53:38 CEST] <PeterNT> Hello everybody, hope someone can help me: I use ffmpeg for Windows and I'm trying to transcode using the h264_qsv codec. Without QuickSync everything works, but with qsv enabled I get an error after some time: Non-monotonous DTS in output stream 0:0; previous: XX, current: 0; changing to XX. This may result in incorrect timestamps in the output file.
[10:53:51 CEST] <PeterNT> any idea what i can do?
[11:30:54 CEST] <pkeuter> hi there
[11:32:33 CEST] <pkeuter> question: when I stream an input device to a file, that file gets locked and im unable to do anything with it. Is there any way I can use the file while it is recording?
[11:38:25 CEST] <pandb> pkeuter: can you make a temporary copy of it?
[11:40:22 CEST] <nashgul> hi, i've a question: why is the 'colorkey' filter not available on ffmpeg-linux?
[11:46:54 CEST] <pkeuter> pandb: that sounds like a good idea. Just copy the recording every n seconds. thanks!
[11:49:13 CEST] <durandal_170> nashgul: what is ffmpeg-linux?
[11:49:25 CEST] <nashgul> durandal_170: ffmpeg on linux
[11:51:49 CEST] <nashgul> i want to overlay two inputs, and replace a color from one input with transparency
[11:52:38 CEST] <durandal_170> what is?
[11:53:10 CEST] <durandal_170> Operating system name
[11:53:15 CEST] <nashgul> archlinux
[11:54:24 CEST] <durandal_170> filter is only in git version currently
[11:54:42 CEST] <nashgul> libav-git?
[11:55:16 CEST] <nashgul> or ffmpeg-git?
[11:55:19 CEST] <durandal_170> ffmpeg-git
[11:55:23 CEST] <nashgul> ok, thanks
[12:02:45 CEST] <BtbN> it's not exactly finished yet though
[12:03:36 CEST] <nashgul> well, it's too late, i've removed ffmpeg, and now i'm packing ffmpeg-git  :-D
[12:07:02 CEST] <chungy> arch packages the stable version of ffmpeg. anything only in the master branch won't be in the arch package :P
[12:10:06 CEST] <nashgul> well, no problem, if ffmpeg-git does not work fine i always can come back to stable version, arch is very effective in that case
[12:10:58 CEST] <chungy> yup. or do custom compiles and keep the git version in your home directory only. Either way works. :)
[12:11:26 CEST] <chungy> (tbh, might be beneficial if ffmpeg-git installed into /opt or somewhere and kept it non-conflicting... but heh)
[12:11:34 CEST] <nashgul> chungy: yeah, a good choice
[12:12:02 CEST] <chungy> keeping it separate just would reduce the chance of other packages breaking because the libraries underneath changed.
[12:13:26 CEST] <nashgul> chungy: i had to replace x265 with x265-hg, i don't know how that change affects ffmpeg
[12:14:15 CEST] <nashgul> maybe it only matters when encoding h265
[12:14:34 CEST] <chungy> I wonder if anyone makes a docker images available for ffmpeg. would be awesome for testing :)
[12:21:26 CEST] <nashgul> well, for the moment it works fine  :-D
[12:21:42 CEST] <nashgul> durandal_170: thanks a lot for the info
[12:56:49 CEST] <nashgul> i can't find the filter or the option for making the output black & white only..
[13:00:41 CEST] <durandal_170> Hardmix with blend filter?
[13:00:56 CEST] <nashgul> ummm
[13:02:28 CEST] <nashgul> durandal_170: the blend filter requires 2 videos, i have only one
[13:02:56 CEST] <durandal_170> some complicated geq expression
[13:03:01 CEST] <nashgul> maybe
[13:04:18 CEST] <durandal_170> nashgul: for blend that would be some kind of grayscale gradient
[13:05:39 CEST] <nashgul> really, what i want to do is: apply the showwaves filter to an audio input, then remove the black pixels by making them transparent, and overlay it with the original video source. the problem is showwaves creates a grayscale video, i need that video B/W
[13:05:43 CEST] <chungy> what about -vf hue=s=0 ?
[13:05:51 CEST] <durandal_170> and that one could be created with geq too
[13:05:53 CEST] <nashgul> i'll try it
[13:07:22 CEST] <theeboat> Does anybody know if its possible to write the timecode on a video file as the current time on the machine capturing the stream?
[13:07:49 CEST] <durandal_170> in docs
[13:08:26 CEST] <chungy> This might be a weird and complex question, but does anyone know how to apply CRT and/or VCR-like effects to a video?
[13:09:03 CEST] <chungy> mainly introducing scanlines and subduing the colors to something like NTSC. Bonus for adding a curved-screen effect.
[13:09:31 CEST] <theeboat> when I have done experiments with the examples in the docs I have only got incremental timecodes starting from 00:00. unless theres a specific one which you have got in mind....
[13:11:33 CEST] <nashgul> theeboat: something like this: ffmpeg -i input.file -filter_complex "drawtext=textfile=/file/which/contains/timemachine:reload=1" output.file
[13:11:37 CEST] <nashgul> ??
[13:12:11 CEST] <nashgul> also you will need a simple command that writes the time to that file every second
[13:12:35 CEST] <theeboat> no sorry. i dont mean to burn the timestamp onto the video.
[13:13:21 CEST] <nashgul> theeboat: my command does not show the timestamp, it shows the local time on your machine
[13:16:13 CEST] <theeboat> im under the impression drawtext overlays the time on the video? our editors use Adobe Premiere. we record football games and our editors want to be able to refer to specific timestamps
[13:17:14 CEST] <nashgul> yes
[13:17:51 CEST] <nashgul> drawtext draws text over the video
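For reference, drawtext can also expand the local wall-clock time itself, so a separate text file plus reload is not strictly needed; a sketch (escaping rules and font availability vary by build, so a fontfile option may be required):

```shell
# Burn the machine's current local time into each frame.
# %X is the strftime locale time format, which avoids escaping colons.
ffmpeg -i input.mp4 \
  -vf "drawtext=text='%{localtime\:%X}':x=10:y=10:fontcolor=white" \
  output.mp4
```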
[13:18:49 CEST] <theeboat> yes, thats not what I need ffmpeg to be able to do. I need the timecode of the video to be the actual timestamp of when the video is captured
[13:19:42 CEST] <theeboat> for instance now is 11:19 GMT if i was to start recording now i want ffmpeg to write the timecode as 11:19:00
[13:19:53 CEST] <nashgul> time(0)
[13:21:18 CEST] <nashgul> https://www.ffmpeg.org/ffmpeg-utils.html#toc-Expression-Evaluation  --> time(0)
[13:23:27 CEST] <nashgul> theeboat: write in a text file or write over the video?
[13:25:15 CEST] <theeboat> neither. when you open a video file you have a timecode as a reference point. if i wanted to skip to 30 seconds into the video i would go to 00:30. What I want is this time code to be the time of when the video began capturing.
[13:26:04 CEST] <theeboat> an example would be if i was to start recording at 19:30:00 and i wanted to skip two minutes of the recording i would go to 19:32:00 rather than 00:02:00
[13:27:47 CEST] <nashgul> theeboat: how do you know the time when the record begins?
[13:28:07 CEST] <theeboat> the timestamp on the server
[13:28:19 CEST] <theeboat> in linux typing the command date
[13:28:22 CEST] <theeboat> would show this
[13:28:26 CEST] <nashgul> ok ok
[13:28:51 CEST] <nashgul> you can make a simple bash script to calculate the offset time
[13:29:48 CEST] <nashgul> if the video begins at 18:05:43 and you want to play from 19:00:00 ->  19:00:00 - 18:05:43
[13:30:07 CEST] <nashgul> = 00:54:17
[13:30:38 CEST] <nashgul> -ss 00:54:17 ...
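nashgul's offset arithmetic can be scripted; a sketch using GNU date, with the two times below as placeholders:

```shell
# Compute the -ss offset between recording start and the wanted wall-clock time.
start="18:05:43"          # when the recording began (hypothetical)
want="19:00:00"           # wall-clock point the editor asks for
s=$(( $(date -u -d "1970-01-01 $want" +%s) - $(date -u -d "1970-01-01 $start" +%s) ))
offset=$(printf '%02d:%02d:%02d' $((s/3600)) $((s%3600/60)) $((s%60)))
echo "$offset"            # 00:54:17
# ffmpeg -ss "$offset" -i recording.mp4 ...
```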
[13:31:49 CEST] <theeboat> is it not possible to write this as the timecode within the video file?
[13:33:06 CEST] <theeboat> basically the reason behind this is that when we capture the videos its possible the cameraman may unplug the camera. ffmpeg will not fill this with blank frames, it will just jump from when the video stopped to when the cameraman plugged it back in
[13:37:38 CEST] <nashgul> -timestamp HH:MM:SS.xxx output.file ?
[13:45:31 CEST] <davidpeach> hello people
[13:46:50 CEST] <davidpeach> I am currently streaming mp4 output through a node pipe to the browser, but the result is choppy - it loads a bit, plays that bit really fast, waits for the next piece, plays that really fast, then on and on. Any ideas? Thanks in advance.
[13:50:10 CEST] <davidpeach> ill add code to pastebin
[13:51:25 CEST] <davidpeach> http://pastebin.com/3iEY7kEN
[13:51:35 CEST] <davidpeach> thank you to whoever answers my query
[14:09:24 CEST] <davidpeach> hi - can anybody help my query please?
[14:49:13 CEST] <hadrien> Hello, is seeking supposed to work with the m2ts container?
[14:50:56 CEST] <hadrien> More precisely: ffmpeg -ss 00:13:00.000 -i 00001.m2ts -to 00:14:00.000 -c:v copy -an -sn out.mkv just never stops, when it should stop after approximately 1 min
[14:55:45 CEST] <relaxed> hadrien: try with -ss and -to before the -i
[14:57:24 CEST] <hadrien> Error: Option to (record or transcode stop time) cannot be applied to input file 00001.m2ts
[14:58:12 CEST] <relaxed> hmm, try with both after the -i
[14:58:24 CEST] <hadrien> That's what I tried.
[14:58:41 CEST] <hadrien> What's strange is that this exact syntax works on my mp4 and mkvs, but not on this m2ts.
[14:58:47 CEST] <hadrien> -ss -i -t works
[14:59:11 CEST] <relaxed> and you tried with both before the -i too?
[14:59:42 CEST] <hadrien> Yeah
[14:59:51 CEST] <hadrien> The error i reported was from this one
[15:01:05 CEST] <relaxed> tsmuxer is probably the tool to use, then. http://forum.doom9.org/showthread.php?t=168539
[15:02:19 CEST] <hadrien> It was just to test high422 10bit, anyway. But it seems strange for such a common option to fail like this
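Per hadrien's note that the -ss/-i/-t form works, a sketch of the one-minute extract: -t takes a duration, so it avoids the "-to cannot be applied to input" error hit above:

```shell
# Seek to 13:00 on the input side, then copy one minute of video.
ffmpeg -ss 00:13:00 -i 00001.m2ts -t 00:01:00 -c:v copy -an -sn out.mkv
```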
[15:10:37 CEST] <davidpeach> relaxed : I wonder if you could help me please?
[15:13:18 CEST] <davidpeach> relaxed : with ffmpeg stream
[15:16:08 CEST] <relaxed> davidpeach: I would try streaming flv instead of mp4
[15:16:16 CEST] <ffmpeguser> Hi Guys
[15:16:24 CEST] <ffmpeguser> someone can help me
[15:16:57 CEST] <hadrien> Ask your question
[15:17:42 CEST] <ffmpeguser> when i do an ffmpeg -i input.wmv -vcodec libx264 output.mp4 i get the error Unknown encoder 'libx264'
[15:18:02 CEST] <ffmpeguser> ffmpeg version 2.6.git Copyright (c) 2000-2015 the FFmpeg developers   built with gcc 4.9.2 (Debian 4.9.2-10)   configuration: --enable-gpl --enable-libx264   WARNING: library configuration mismatch   avutil      configuration: --enable-shared   avcodec     configuration: --enable-shared   avformat    configuration: --enable-shared   avdevice    configuration: --enable-shared   avfilter    configuration: --enable-shared   swscale  
[15:18:49 CEST] <hadrien> Maybe you don't have libx264 installed?
[15:19:05 CEST] <ffmpeguser> thanks hadrien
[15:19:23 CEST] <ffmpeguser> how to check if libx264 is installed ?
[15:19:52 CEST] <hadrien> Not a debian guy, but something like apt-get install libx264 should install it if it's not
[15:20:12 CEST] <ffmpeguser> i did several times
[15:20:19 CEST] <ffmpeguser> thus i guess it's installed
[15:20:44 CEST] <relaxed> dpkg -l "libx264*" | grep ^ii
[15:21:05 CEST] <ffmpeguser> ii  libx264-146:amd64 3:0.146.2538+git121396c-dmo2 amd64        x264 video coding library ii  libx264-dev:amd64 3:0.146.2538+git121396c-dmo2 amd64        Development files for libx264
[15:21:59 CEST] <relaxed> ffmpeguser: Might be easier to use my static build, http://johnvansickle.com/ffmpeg/
[15:23:10 CEST] <hadrien> Don't you have to install some libavcodec-extra-something on Debian and derivatives?
[15:23:18 CEST] <Eduardo_1> it must appear in: ffmpeg2 -codecs | grep x264
[15:24:22 CEST] <Eduardo_1> sorry without the "2", i have a lot of  ffmpegs to test stuff: ffmpeg -codecs | grep x264
[15:25:20 CEST] <theeboat> isnt it ffmpeg -encoders | grep libx264
[15:26:01 CEST] <ffmpeguser>   postproc    configuration: --prefix=/usr --extra-cflags='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security ' --extra-ldflags='-Wl,-z,relro' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-nonfree --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb 
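The "library configuration mismatch" warning in ffmpeguser's banner suggests the ffmpeg binary is loading distro-built shared libav* libraries that lack libx264, even though the binary itself was configured with --enable-libx264; two quick checks (a sketch):

```shell
# Which shared libavcodec does the binary actually load at runtime?
ldd "$(which ffmpeg)" | grep libavcodec

# Does this ffmpeg expose the encoder at all?
ffmpeg -hide_banner -encoders | grep libx264
```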
[15:27:55 CEST] <claz> is there any reason why this command "ffmpeg -i src.mp4 -vf fps=1 frames/%d.jpg" cuts frames at 0.466, 1.466, 2.466 etc?
[15:28:06 CEST] <claz> i mean, why the 0.466 offset? can i set it to 0?
[15:29:47 CEST] <davidpeach> relaxed: thanks for the reply. My issue is that I need it to work with HTML5, not reliant on Flash, which I believe flv is.
[15:31:14 CEST] <relaxed> davidpeach: you're trying to stream mp4 while it's encoding?
[15:31:58 CEST] <ffmpeguser> looks like the codec is installed properly
[15:32:03 CEST] <davidpeach> yer. It is streaming, just a bit choppy
[15:32:16 CEST] <davidpeach> relaxed: well, very choppy
[15:34:31 CEST] <relaxed> I don't think the mp4 container is a good choice for what you're trying to do. webm is streamable and works with html5
[15:34:33 CEST] <davidpeach> relaxed: any ideas, at all? Or links to look at? Ive been trying to get it to work for a few days now
[15:34:50 CEST] <davidpeach> relaxed: webm, mmm
[15:35:23 CEST] <davidpeach> relaxed: so i use the -f option to specify container?
[15:35:55 CEST] <relaxed> yes, -f webm
[15:36:21 CEST] <relaxed> https://trac.ffmpeg.org/wiki/Encode/VP8
[15:38:36 CEST] <relaxed> well, I would try vp9 first https://trac.ffmpeg.org/wiki/Encode/VP9
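A sketch of relaxed's WebM suggestion adapted to davidpeach's pipe setup; the bitrate and realtime deadline are illustrative choices:

```shell
# Live-encode to WebM/VP8 and write to stdout for the node pipe.
# -deadline realtime keeps libvpx fast enough for on-the-fly streaming.
ffmpeg -i input -c:v libvpx -b:v 1M -deadline realtime \
  -c:a libvorbis -f webm -
```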
[15:38:58 CEST] <hadrien> VP9 encoding is slower than the coming of gimp 2.10
[15:39:08 CEST] <hadrien> I wouldn't recommend it for streaming
[15:39:34 CEST] <davidpeach> relaxed: is mp4 a no-go? Basically I need it cross browser, and webm needs separate things installing for both safari and ie9
[15:39:49 CEST] <davidpeach> relaxed: I appreciate your help btw
[15:47:52 CEST] <relaxed> davidpeach: maybe https://www.ffmpeg.org/ffmpeg-formats.html#hls-1
[15:51:52 CEST] <davidpeach> relaxed: thank you. How is that implemented please? at the moment I am piping it with '-' at the end for the output, then node js feeds it to the browser. Can your suggestion be added easily?
[15:51:54 CEST] <relaxed> I don't think there's one choice that will work with everything
[15:51:59 CEST] <Eduardo_1> html5 does not work well with streaming. HLS support only exists in mobile browsers, on Android 4.x and iOS; on desktop, only in Safari on Mac OS X
[15:52:14 CEST] <JEEBsv> Eduardo_1: stuff that supports MSE has good chances for streaming
[15:52:17 CEST] <JEEBsv> with DASH or something similar
[15:52:22 CEST] <ffmpeguser> still no succes
[15:52:54 CEST] <JEEBsv> I think recent IE, Chrome and Firefox support it and most probably there are flash based solutions for the rest
[15:53:20 CEST] <JEEBsv> also with MSE in theory you could implement HLS without the browser itself supporting it
[15:53:56 CEST] <Eduardo_1> yes, but there's still not wide support
[15:54:26 CEST] <davidpeach> this is a tough one
[15:54:59 CEST] <Eduardo_1> still, using hls is a good option
[15:55:22 CEST] <Eduardo_1> you could use a flash based player, with hls support, and in mobile browsers do a fallback to html5
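A minimal sketch of the HLS output relaxed linked; the segment length is illustrative, and older builds may need -strict experimental for the native aac encoder:

```shell
# Segment into 4-second .ts chunks plus an .m3u8 playlist for HLS players.
ffmpeg -i input.mp4 -c:v libx264 -c:a aac \
  -f hls -hls_time 4 -hls_list_size 0 stream.m3u8
```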
[16:56:16 CEST] <pgunnars> whatsup fellas
[16:57:13 CEST] <pgunnars> im trying to insert a custom header into the streams i stream. ffmpeg -headers="" doesnt seem to be a valid call, please to be helped
[16:58:28 CEST] <pgunnars> custom http header*
[16:58:32 CEST] <pgunnars> to allow cross domain shit
[17:48:43 CEST] <pgunnars> can some1 point me to an example of the -headers option actually working?
[17:51:45 CEST] <zhanshan> hi
[17:51:46 CEST] <zhanshan> how can I change the keyframe-setting (key frames are exactly 10 seconds apart)?
[17:52:17 CEST] <zhanshan> my command is: $ ~/bin/ffmpeg -f image2 -pattern_type glob -i '*.jpg' -i '' -pix_fmt yuv420p -c:v libx264 -crf 18 -c:a flac '.mkv'
[17:53:59 CEST] <pgunnars> ffmpeg -i input -headers $'Access-Control-Allow-Origin: *\r\n' output, the headers option isnt picked up by anything, any1 know why?
[17:54:36 CEST] <Brian> Has anyone managed to compile a static build of ffmpeg on OS X? I am in the middle of figuring it out, but it seems impossible with static libraries missing, i.e. fontconfig.a
[17:56:28 CEST] <pgunnars> does anyone have any answears or is everyone just lost
[17:56:29 CEST] <Eduardo_1> @zhanshan try :  -force_key_frames expr:gte\(t,n_forced*10\)
[17:57:23 CEST] <Brian> or busy
[17:57:25 CEST] <Eduardo_1> @pgunnars don't know what you are doing, but the header Access-Control-Allow-Origin goes in the webserver configuration
[17:59:29 CEST] <pgunnars> ?
[17:59:34 CEST] <pgunnars> Eduardo_1: please explain further
[18:04:11 CEST] <pgunnars> where is that
[18:04:12 CEST] <pgunnars> what is love
[18:06:26 CEST] <pgunnars> Eduardo_1: I'm working with ffserver, not just ffmpeg. I am sending the headers to a server feed
[18:14:59 CEST] <pgunnars> don't leave me hanging bro
[18:17:09 CEST] <Eduardo_1> sorry, i don't know how to use ffserver; i do know that the header Access-Control-Allow-Origin goes in the configuration of the webserver, in case you're using nginx or apache
[18:19:40 CEST] <pgunnars> Eduardo_1: I'm using ffserver
[18:19:50 CEST] <pgunnars> :(
[18:25:44 CEST] <pgunnars> how is google not coming up with anything
[18:26:30 CEST] <Eduardo_1> it should be on the documentation: https://www.ffmpeg.org/ffserver.html
[18:27:48 CEST] <pgunnars> except it isnt
[18:27:54 CEST] <pgunnars> no mention of http header
[18:30:07 CEST] <zhanshan> Eduardo_1 wow, what a special command! isn't there something normal like -g 3 or so?
[18:30:25 CEST] <zhanshan> thanks
[18:48:43 CEST] <zhanshan> Eduardo_1 does it make sense to change the keyframes?
[18:49:02 CEST] <zhanshan> the file will be much bigger probably?
[18:50:01 CEST] <Eduardo_1> @zhanshan that option doesn't make every keyframe 10 seconds apart, but it makes sure that every 10 seconds there is a keyframe. it's difficult (or impossible) to make every keyframe exactly 10 seconds apart, because codecs need keyframes on scene changes and decoders need keyframes at a steady rate
[18:50:36 CEST] <Eduardo_1> it depends on what do you want to do
[18:50:57 CEST] <zhanshan> ah maybe I didn't explain right
[18:51:10 CEST] <Eduardo_1> keyframes every 2 or 3 seconds ensure that a file is seekable when playing
[18:51:27 CEST] <Eduardo_1> i assumed that you wanted a keyframe every 10 seconds
[18:51:32 CEST] <zhanshan> on my command the output was "key frames are exactly 10 seconds apart"
[18:51:34 CEST] <Eduardo_1> maybe for segmentation
[18:51:37 CEST] <zhanshan> but I wanted more I guess
[18:51:51 CEST] <zhanshan> there were some playback issues with the sound
[18:51:54 CEST] <Eduardo_1> why 10 seconds?
[18:52:01 CEST] <zhanshan> I don't know if that's why?
[18:52:49 CEST] <zhanshan> yah, when seeking, mpv player needed some seconds to synchronize the sound (maybe depending on the keyframe?)
[18:52:54 CEST] <Eduardo_1> yes
[18:53:05 CEST] <Eduardo_1> more keyframes, more "seekable"
[18:53:07 CEST] <zhanshan> it's kinda annoying
[18:53:33 CEST] <zhanshan> so why did that happen on the command above and how can I make ffmpeg create keyframes more often?
[18:53:45 CEST] <Eduardo_1> usually players seek to the closest keyframe, if there's no keyframe it takes a while to find one
[18:53:48 CEST] <zhanshan> which keyframe interval makes sense?
[18:54:22 CEST] <Eduardo_1> like i said, 2 or 3 seconds is a good amount of time to have a keyframe
[18:55:05 CEST] <Eduardo_1> the codec will insert a keyframe if  needed, for example on scene change
[18:55:11 CEST] <zhanshan> so within the given command above how can I increase the amount?
[18:55:31 CEST] <Eduardo_1> what encoder are you using?
[18:55:36 CEST] <Eduardo_1> libx264?
[18:55:58 CEST] <zhanshan> yes
[18:57:00 CEST] <Eduardo_1> -force_key_frames expr:gte\(t,n_forced*10\), 10 is the number of seconds, you can change that
[18:57:50 CEST] <zhanshan> alright I'm gonna try
[18:57:52 CEST] <Eduardo_1> also you can try: -x264opts "keyint=90"
[18:58:22 CEST] <Eduardo_1> keyint, keyframe interval, 90 frames; if your video is about 30 fps, that's about 3 seconds
[18:58:35 CEST] <zhanshan> or 50 if using 25 fps, for a two-second interval
[18:58:42 CEST] <zhanshan> yeah
[18:58:42 CEST] <Eduardo_1> right
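Putting Eduardo_1's two suggestions into zhanshan's command for the 25 fps, two-second case discussed above (input pattern as in the original command):

```shell
# Force a keyframe at least every 2 seconds (expression form)...
ffmpeg -framerate 25 -f image2 -pattern_type glob -i '*.jpg' \
  -c:v libx264 -crf 18 -force_key_frames 'expr:gte(t,n_forced*2)' out.mkv

# ...or cap the GOP length in frames (25 fps * 2 s = 50 frames)
ffmpeg -framerate 25 -f image2 -pattern_type glob -i '*.jpg' \
  -c:v libx264 -crf 18 -x264opts keyint=50 out.mkv
```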
[19:36:38 CEST] <zhanshan> Eduardo_1, it's strange: if I seek forward in mpv player while playing the resulting file there's no delay; only if I seek backwards is there a delay of at most 2s
[20:30:27 CEST] <durandal_170> dericed: ping
[23:17:01 CEST] <Nolski> Hey all, I have a question. I am using ffmpeg to do a lot of video editing, and the final product may contain sections that are just text on a background. is there any way for me to render a background of a blank color to create a base video to work on?
[23:18:43 CEST] <kepstin-laptop> Nolski, sure, use the "color" filter
[23:19:15 CEST] <Nolski> but there is no input video
[23:19:18 CEST] <Nolski> kepstin-laptop:
[23:19:28 CEST] <kepstin-laptop> color is an input filter, you don't need an input video
[23:19:34 CEST] <kepstin-laptop> er, a source filter
[23:20:20 CEST] <kepstin-laptop> you can do e.g. 'ffmpeg -filter_complex color=c=white output.mkv' to generate an infinitely long blank white video :)
[23:20:36 CEST] <kepstin-laptop> (there are options to set resolution, length, etc.)
[23:20:43 CEST] <Nolski> ...This tool is so freaking awesome
[23:20:48 CEST] <Nolski> Thanks kepstin-laptop
[23:33:39 CEST] <llogan> Nolski: then use drawtext to make your text. it can even read from a file.
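Combining kepstin-laptop's color source (with the size/duration options mentioned) and llogan's drawtext tip; the text, dimensions, and duration are placeholders, and some builds need an explicit fontfile for drawtext:

```shell
# 10 seconds of white 1280x720 video with centered text.
ffmpeg -f lavfi -i color=c=white:s=1280x720:d=10:r=25 \
  -vf "drawtext=text='Hello':fontsize=48:fontcolor=black:x=(w-text_w)/2:y=(h-text_h)/2" \
  -pix_fmt yuv420p title.mkv
```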
[00:00:00 CEST] --- Wed Jul  1 2015


More information about the Ffmpeg-devel-irc mailing list