[Ffmpeg-devel-irc] ffmpeg.log.20180425

burek burek021 at gmail.com
Thu Apr 26 03:05:02 EEST 2018


[00:18:37 CEST] <c_14> can't you use avcodec_descriptor_get_by_name or something?
[00:18:57 CEST] <c_14> or avcodec_descriptor_get
[00:19:24 CEST] <c_14> Those'll return the descriptor without having to open the codec
[00:22:56 CEST] <c_14> oliverdain: ^
[00:30:32 CEST] <oliverdain> avcodec_descriptor_get: yup, had just found that and got it working. Thanks c_14!!
[01:52:04 CEST] <frecklealex> Hey folks. Question: I have a 16Mbps bitrate mp4 file which I input into ffmpeg with `ffmpeg -i start.mp4 finish.mp4` and it seems like the final bitrate is somewhere around 800Kbps. What is ffmpeg's default behavior? Does it figure out an optimal bitrate for me, ending up with that final number?
[01:53:57 CEST] <frecklealex> I couldn't find the default behavior in the docs, so asking here now.
[01:55:20 CEST] <klaxa> it picks some "sane" defaults
[01:55:21 CEST] <atomnuker> no, you need to set the bitrate and all settings yourself, this is just the defaults
[01:55:30 CEST] <furq> the default behaviour with mp4 is x264 at crf 23
[01:56:02 CEST] <furq> which i'd probably describe as "generally fine"
[01:56:03 CEST] <furq> but not really good
[02:04:06 CEST] <frecklealex> "not really good" in terms of final quality, or what specifically? I'm unfortunately not super familiar with this space, so might be asking something really obvious.
[02:05:48 CEST] <klaxa> just run ffmpeg -i start.mp4 -t 10 finish.mp4 and see for yourself
[02:05:54 CEST] <klaxa> that'll encode the first 10 seconds
[02:06:24 CEST] <klaxa> if you want to use a short segment from somewhere else use: ffmpeg -i start.mp4 -ss 1:20 -t 10 finish.mp4
[02:06:40 CEST] <klaxa> that'll use 10 seconds at ~1 minute and 20 seconds
[02:06:59 CEST] <klaxa> and yes, in terms of final quality
[02:07:26 CEST] <klaxa> if you look for it you will most likely be able to see that some parts of the encode show artifacts (parts that look "wrong")
[02:07:50 CEST] <klaxa> if you increase the quality they will start to become less visible
[02:08:36 CEST] <klaxa> increasing the quality means decreasing the crf value, so: ffmpeg -i start.mp4 -crf 20 finish.mp4 would be better than the default quality
[02:18:08 CEST] <frecklealex> Gotcha, thanks for explaining. Our videos are mostly static frames being shown on the screen for 5-10 seconds, with very little movement except for fades in and out, so I've perceived no reduction in quality.
[02:19:39 CEST] <frecklealex> Basically a powerpoint that ended up in an mp4
[02:19:43 CEST] <klaxa> yeah those are pretty "easy" to encode, iirc x264 even has a tune setting for still frames
[02:20:02 CEST] <klaxa> my uni made me do something similar...
[02:20:17 CEST] <klaxa> audio + slides as jpeg with timecodes in custom .ogg
[02:20:35 CEST] <klaxa> converted that to mkv x264 with chapters
[02:23:12 CEST] <frecklealex> Heh sounds clever. I've realized that our "slideshows" were starting off as these giant 300MB files because of folks setting the bitrate as high as they could
[02:23:25 CEST] <frecklealex> Even Cloudinary was saying "Yeah.. I'm not accepting files that large"
[02:24:50 CEST] <klaxa> well they had to ship their own java player for playback
[02:25:00 CEST] <klaxa> and i don't like java poisoning my computers
[02:25:10 CEST] <klaxa> they made the spec open, but their lib closed
[02:28:32 CEST] <klaxa> not really the way to go as a university imo, but whatever
[02:28:43 CEST] <klaxa> i think they moved to something "html5-compatible" now
[02:28:54 CEST] <klaxa> gotta support those iphones
[02:40:03 CEST] <furq> frecklealex: you probably want to check out -tune stillimage and maybe lower the framerate
[02:40:24 CEST] <furq> but preset and crf are the two most important x264 settings
[02:41:00 CEST] <furq> it's generally worth spending some time tooling around with those and seeing what kind of results you get
[02:42:24 CEST] <frecklealex> furq: thanks for that, will check it out
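
A sketch of how furq's suggestions could combine for a slideshow-style source; the preset, CRF, and frame rate here are illustrative placeholders, not values from the conversation:

    # placeholder settings: tune/preset/crf/frame rate should be tested against the actual source
    ffmpeg -i start.mp4 -c:v libx264 -preset slow -crf 20 -tune stillimage -r 10 -c:a copy finish.mp4

Lowering the output frame rate with -r is only worthwhile if the fades still look acceptable afterwards.
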
[05:24:08 CEST] <stephen> What up folks
[05:24:39 CEST] <stephen> Anyone around who remembers my data in a video convo a few days back?
[05:55:54 CEST] <bubbaFFMPEG> I think I'm having palette problems. I'm trying to make an animated gif from an mp4; I've tried many ways and failed. I get "Error initializing filter 'fps' with args '1'", and googling hasn't resolved the issue.
[05:57:07 CEST] <furq> bubbaFFMPEG: pastebin the full command
[05:59:35 CEST] <bubbaFFMPEG> This is the most recent attempt.
[05:59:36 CEST] <bubbaFFMPEG> https://pastebin.com/VH0u9fwx
[06:03:21 CEST] <bubbaFFMPEG> I was able to make a gif, but it looked horrible, so I looked into making it look reasonable, and I guess that means making a palette first. I can't seem to do that part.
[06:03:48 CEST] <bubbaFFMPEG> like described here: https://superuser.com/questions/556029/how-do-i-convert-a-video-to-gif-using-ffmpeg-with-reasonable-quality
[06:07:37 CEST] <bubbaFFMPEG> Here was my first attempt to follow those instructions, I changed some values, removed the time stuff.
[06:07:40 CEST] <bubbaFFMPEG> https://pastebin.com/3rQFQ0aa
[06:07:47 CEST] <furq> palettegen creates an output stream, so palette.png in that command should be the output file
[06:07:55 CEST] <furq> and then you'd do a second pass with the video and the palette and -vf paletteuse
[06:08:13 CEST] <furq> but you can do it in one command like this: http://vpaste.net/o8QPH
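
A one-command variant along those lines (fps and scale values are illustrative) generates and applies the palette in a single filtergraph:

    # split feeds one copy of the scaled video to palettegen and one to paletteuse
    ffmpeg -i input.mp4 -filter_complex "[0:v]fps=10,scale=320:-1:flags=lanczos,split[a][b];[a]palettegen[p];[b][p]paletteuse" output.gif
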
[06:08:28 CEST] <furq> also you should really be using ffmpeg, not avconv
[06:10:00 CEST] <bubbaFFMPEG> Unrecognized option 'lavfi'.
[06:10:30 CEST] <bubbaFFMPEG> Yeah, I hate avconv, and I've tried to apt-get install ffmpeg, but it refuses
[06:10:56 CEST] <furq> if you're on an old debian/ubuntu then just use https://www.johnvansickle.com/ffmpeg/
[06:11:19 CEST] <furq> -lavfi is an alias for -filter_complex so use that for now
[06:11:43 CEST] <furq> also i assume this is a short enough video for it all to be buffered
[06:11:48 CEST] <furq> but if not then you'd need to do it in two passes
[06:11:51 CEST] <bubbaFFMPEG> I think I'm going to be upgrading to the new LTS or whatever ubuntu is coming out this month, but for now I can put up with avconv.
[06:12:17 CEST] <bubbaFFMPEG> Yeah, its short, its like 30 seconds
[06:12:52 CEST] <furq> well yeah if it throws buffer queue overflow errors then pretty much just run the example commands for palettegen and paletteuse
[06:13:45 CEST] <bubbaFFMPEG> what are the example commands? how do I use those?
[06:14:12 CEST] <furq> !filter palettegen @bubbaFFMPEG
[06:14:12 CEST] <nfobot> bubbaFFMPEG: http://ffmpeg.org/ffmpeg-filters.html#palettegen-1
[06:14:15 CEST] <furq> !filter paletteuse @bubbaFFMPEG
[06:14:15 CEST] <nfobot> bubbaFFMPEG: http://ffmpeg.org/ffmpeg-filters.html#paletteuse
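
The two-pass approach described in those docs looks roughly like this (filenames and filter values are placeholders):

    # pass 1: generate the palette
    ffmpeg -i input.mp4 -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" -y palette.png
    # pass 2: apply it
    ffmpeg -i input.mp4 -i palette.png -filter_complex "[0:v]fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" -y output.gif
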
[06:18:59 CEST] <bubbaFFMPEG> I tried the palettegen example and it failed.
[06:19:01 CEST] <bubbaFFMPEG> https://pastebin.com/Zc9t7Bhx
[06:19:16 CEST] <furq> oh wow
[06:19:27 CEST] <furq> yeah i guess that's just down to having an ancient libav
[06:19:34 CEST] <furq> just use those ffmpeg static builds i linked
[06:20:11 CEST] <bubbaFFMPEG> Ugh, well, I better back up data, so I can make disk space while im at it :(
[06:20:59 CEST] <furq> it's only about 20MB
[06:23:16 CEST] <bubbaFFMPEG> I just tried a website called ezgif, and it did a nice job, and it used ffmpeg and imagemagick, just like I am.
[06:32:45 CEST] <bubbaFFMPEG> I downloaded and uncompressed the static 32-bit build, but I can't figure out how to install it; there doesn't seem to be an install script
[06:32:58 CEST] <furq> just put everything in bin/ in /usr/local/bin/
[06:33:05 CEST] <furq> or ~/bin if you prefer
[06:34:23 CEST] <furq> actually i guess there's no bin/ in the archive, so just put ffmpeg in /usr/local/bin
[06:35:18 CEST] <bubbaFFMPEG> Doh, I tried to uncompress it, but I ran out of disk space. it's a 126MB download.
[06:42:03 CEST] <bubbaFFMPEG> It looks like i had a lot of duplicate video on my primary disk, so I deleted a whole bunch of files. I sure hope they were all duplicates!
[06:54:36 CEST] <bubbaFFMPEG> furq [Parsed_palettegen_2 @ 0xc58b880]
[06:56:39 CEST] <bubbaFFMPEG> furq: it seems to have made a palette, im grinding out a gif to see if it's working well.
[06:58:19 CEST] <bubbaFFMPEG> furq: Yep, it's working nicely. I think it was the palette issue hanging me up.
[06:59:41 CEST] <bubbaFFMPEG> furq: Thank you, I think I can move on to making gifs now.
[09:24:04 CEST] <GrayShade> hi. do -preset and -crf interact? that is, can I use e.g. -preset ultrafast to pick a crf, then use a slower one to get a smaller file?
[09:26:27 CEST] <TheAMM> Yes.
[09:26:49 CEST] <TheAMM> Or wait, what, use -preset to pick a crf?
[09:27:18 CEST] <TheAMM> "-crf 20 -preset ultrafast" will create a bigger file than "-crf 20 -preset veryslow"
[09:29:10 CEST] <furq> no
[09:29:12 CEST] <GrayShade> TheAMM: does -crf 20 -preset ultrafast look the same as -crf 20 -preset veryslow, disregarding the file size?
[09:29:16 CEST] <furq> no
[09:29:35 CEST] <GrayShade> oh
[09:29:53 CEST] <furq> ultrafast might produce a smaller file than veryslow at the same crf
[09:30:30 CEST] <furq> you really need to test with the exact same settings
[09:30:37 CEST] <GrayShade> I see, okay
[09:30:39 CEST] <furq> and tuning etc
[09:30:41 CEST] <GrayShade> thanks
[09:30:56 CEST] <TheAMM> ?
[09:31:19 CEST] <JEEB> the CRF value is 100% dependent on the settings
[09:31:26 CEST] <TheAMM> I've seen the presets described as being about speed and compression
[09:31:28 CEST] <JEEB> what CRF can "see"
[09:31:29 CEST] <JEEB> yes
[09:31:46 CEST] <JEEB> they also make the internal algorithms that are relevant to CRF differ
[09:31:58 CEST] <JEEB> so the same CRF value does not produce the same quality for different presets
[09:32:08 CEST] <GrayShade> I have some crappy phone video I want to make smaller than 16 mbps. I guess I'll have to play with the settings for a bit
[09:32:17 CEST] <JEEB> for example if you go from slow to veryslow or placebo in x264 it's highly likely that the file size for the same CRF will go *up*
[09:32:22 CEST] <JEEB> because the algorithms "see" more things
[09:32:34 CEST] <JEEB> and thus the result is different
[09:32:55 CEST] <JEEB> so you a) pick the preset b) iterate over ~2500 frames or something to find the highest CRF value that still looks good
[09:32:57 CEST] <TheAMM> Fuzz in terms of the h264 encoding guide
[09:33:00 CEST] <furq> crf is supposed to be constant quality regardless of source, not regardless of other settings
[09:33:15 CEST] <TheAMM> Not compression as in data packs better, but lossy compression as in more detailed data is packed
[09:33:39 CEST] <GrayShade> and woah, the noise magically goes away
[09:33:47 CEST] <JEEB> furq: lol I got scolded hard by pengvado when I called CRF "constant quality"
[09:33:50 CEST] <JEEB> since it's not
[09:33:53 CEST] <JEEB> it's the closest we have to such
[09:34:03 CEST] <furq> yeah i was careful to add "supposed to be"
[09:34:03 CEST] <JEEB> but it's "just" constant rate factor
[09:34:28 CEST] <JEEB> and when you iterate over CRF values, generally start somewhere high where you have at least chance of seeing artifacts :P
[09:34:32 CEST] <JEEB> like 23
[09:34:35 CEST] <JEEB> (which is the default CRF value)
[09:34:51 CEST] <furq> obviously if the source is five minutes of confetti raining down over the sea, it's maybe not going to hold up
[09:37:35 CEST] <fella> furq: < furq> ultrafast might produce a smaller file than veryslow at the same crf -- I can hardly imagine in what cases that might happen?
[09:37:56 CEST] <furq> maybe superfast rather than ultrafast, since ultrafast is so much different
[09:38:21 CEST] <furq> i rarely go lower than slow, but i've definitely had slow encodes turn out noticeably smaller than veryslow
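
A sketch of the kind of test encode JEEB describes, re-run with different -crf values while every other setting stays fixed (filename, seek offset, and frame count are placeholders):

    # encode ~2500 frames starting 2 minutes in, video only
    ffmpeg -ss 120 -i input.mp4 -frames:v 2500 -c:v libx264 -preset slow -crf 23 -an test-crf23.mp4
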
[09:47:55 CEST] <alone-y> furq, may i ask?
[09:48:20 CEST] <alone-y> thank you for the filter_complex idea. it's working perfectly for avoiding the 8192-character bat file limit
[09:54:25 CEST] <mlok> hey anybody know a similar command to -q:a 1 ?
[09:54:30 CEST] <mlok> *flag
[09:54:39 CEST] <mlok> or another way to tune this setting more
[10:13:46 CEST] <alone-y> can i do if rgb(x,y)<>255 then z=z+1?
[11:18:22 CEST] <faLUCE> Hello. I'm trying to make a video from a single image + an mp3.   I tried:   ffmpeg -loop 1 -r 1 -i de-bellis.jpg -i leonardo-leo-dixit-dominus.mp3 -acodec copy -vcodec libx264 leonardo-leo-dixit-dominus.mp4      but it goes into an endless loop. what's wrong?
[11:19:46 CEST] <durandal_1707> faLUCE: add -shortest
[11:22:13 CEST] <faLUCE> durandal_1707: at which point?
[11:23:03 CEST] <durandal_1707> faLUCE: where output options are
[11:23:37 CEST] <faLUCE> thanks durandal_1707
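
With -shortest placed among the output options, the command above would become something like:

    ffmpeg -loop 1 -r 1 -i de-bellis.jpg -i leonardo-leo-dixit-dominus.mp3 -acodec copy -vcodec libx264 -shortest leonardo-leo-dixit-dominus.mp4
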
[12:47:12 CEST] <shfil> hi, I want to use "avio_context_free" but on our CI (ubuntu, fedora, MSVC) I get "error: 'avio_context_free' was not declared in this scope". On macOS and Arch it works. Do you know what the culprit could be?
[12:47:32 CEST] <shfil> for example: https://travis-ci.org/rwengine/openrw/builds/370138895
[12:48:01 CEST] <shfil> this is the pull request: https://github.com/rwengine/openrw/pull/420/files
[13:48:06 CEST] <shfil> it looks like I should rewrite FindFFmpeg.cmake
[14:00:48 CEST] <hay> hi all... when I try to do ffprobe -v debug -i udp://239.0.1.135:3135 for a UDP multicast stream, ffmpeg (and ffprobe) stops at "[udp @ 0x2f7c3e0] end receive buffer size reported is 131072"... it doesn't hang, but I can see with tcpdump that the stream is arriving... but I am unable to use that input in ffmpeg... what am I doing wrong? thanks
[14:02:03 CEST] <JEEB> if it's source-specific add ?sources=SOURCE_IP
[14:02:11 CEST] <JEEB> also always use -timeout with UDP
[14:02:20 CEST] <JEEB> -timeout 5000000 for 5 seconds IIRC
[14:02:25 CEST] <JEEB> (before -i)
[14:02:30 CEST] <JEEB> also ffprobe doesn't need -i
[14:03:05 CEST] <JEEB> it's just ffprobe -v verbose -timeout 5000000 -sources SOURCE_IP udp://XXXX:PORT
[14:03:07 CEST] <JEEB> that should work
[14:03:19 CEST] <JEEB> it should die after five seconds if there's no data (according to ffprobe)
[14:17:13 CEST] <hay> JEEB, thanks... I will try it in a few minutes when I'm back at the computer :)
[14:18:25 CEST] <hay> should I use the source IP address as SOURCE_IP and not the multicast address, right?
[14:23:06 CEST] <hay> with -sources and timeout added it just finishes immediately... output: https://pastebin.com/45yMdhKp
[14:25:52 CEST] <JEEB> hay: SOURCE_IP is the multicast source
[14:26:09 CEST] <JEEB> in the udp:// you put the MULTICAST address and port
[14:30:07 CEST] <hay> thanks, it works :)
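
For reference, the source-specific URL form suggested earlier, with the multicast address from above (the source IP is a placeholder), could look like:

    ffprobe -v verbose -timeout 5000000 "udp://239.0.1.135:3135?sources=192.0.2.10"  # 192.0.2.10 = multicast source, placeholder
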
[17:42:46 CEST] <Li> is it possible to extract audio based on an h:mm:ss time interval with -ss and -to?
[17:43:04 CEST] <Li> any example would be highly appreciated
[17:55:54 CEST] <Li> good
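
One way to do that (filenames and times are placeholders): seek with -ss, stop with -to, drop the video with -vn, and copy the audio stream:

    # assumes the source audio is AAC so it fits in .m4a; otherwise pick a matching container or re-encode
    ffmpeg -i input.mp4 -ss 0:01:30 -to 0:02:45 -vn -c:a copy clip.m4a
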
[18:41:36 CEST] <marcus_> Hi, I'm using ffmpeg to extract frames and combine using tile. Is it possible to optimize this command and make it faster? ffmpeg -i https://localhost:5001/original.mp4 -vf select='not(mod(n\,62))',yadif,scale=480:270:force_original_aspect_ratio=increase,crop=480:270,tile=1x50 -q:v 5 -nostats -loglevel 0 -frames:v 1 scrub.jpg
[18:54:46 CEST] <ChocolateArmpits> marcus_, tile is a pretty hefty filter
[18:54:49 CEST] <ChocolateArmpits> it seems
[18:55:02 CEST] <ChocolateArmpits> try benching it with the bench filter to see how much processing time it takes up
[18:58:51 CEST] <marcus_> ChocolateArmpits: I'm far from an expert on ffmpeg. Can you give me an example on what you mean?
[18:59:06 CEST] <marcus_> ChocolateArmpits: Is it possible to tile with some other filter maybe?
[18:59:36 CEST] <ChocolateArmpits> marcus_, overlay, but you'll have to multiply the video stream 50 times for that. Can't be in any way more efficient
[19:00:15 CEST] <ChocolateArmpits> not to mention it won't lay out the frames correctly, so there'll have to be even more select filters
[19:00:41 CEST] <ChocolateArmpits> bench filter is used to test processing time taken for a single frame
[19:01:06 CEST] <ChocolateArmpits> place bench=start at the position you want to start checking performance from and bench=stop where you want the performance checking to end
[19:01:33 CEST] <ChocolateArmpits> place these in the filtergraph, that is; it will print processing time information
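
For example, to time just the tile step in the filtergraph from above (input path is a placeholder), bench=start/bench=stop could wrap it like this:

    ffmpeg -i input.mp4 -vf "select='not(mod(n\,62))',yadif,scale=480:270:force_original_aspect_ratio=increase,crop=480:270,bench=start,tile=1x50,bench=stop" -frames:v 1 scrub.jpg
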
[19:02:11 CEST] <ChocolateArmpits> tile is also not threaded leading to potentially even slower performance
[19:03:11 CEST] <durandal_1707> tile just does memcpy
[19:05:46 CEST] <ChocolateArmpits> replacing yadif with pp=lb will speed it up, but that's not where the real performance issue is
[19:07:52 CEST] <ChocolateArmpits> dropping color should speed up tile I think, if you're not concerned with only having a black and white image
[19:09:25 CEST] <marcus_> I need color for this feature so that is not an option
[19:09:35 CEST] <ChocolateArmpits> scale filter flag set to area or neighbor should speed up the operation too, in exchange for decreased quality (mostly with neighbor)
[19:11:00 CEST] <ChocolateArmpits> marcus_, so what's the actual speed? is it impossible to run them concurrently if multiple videos need to be processed?
[19:11:01 CEST] <marcus_> How would you approach this problem to make it fast and effective on even very large movies? Is seek an option?
[19:11:11 CEST] <durandal_1707> the biggest source of cycles is before the select filter: you decode a bunch of frames which you then discard
[19:12:55 CEST] <marcus_> I have an mp4 movie 228MB that takes about 25 seconds to create a 50 frame tile
[19:13:40 CEST] <marcus_> Running on aws with the source on s3 using accelerated endpoint
[19:17:08 CEST] <ChocolateArmpits> you could try grabbing only keyframes using a concat file with inpoint/outpoint for the number of frames needed, but you'd have to determine their positions first using ffprobe
[19:19:34 CEST] <ChocolateArmpits> so it's more or less counting time offsets for whatever number of frames you need based on the file duration, then searching those time offsets backwards and forwards for any keyframes and grabbing their times. Add the file start time to each exact keyframe offset and put it in a concat file, with the outpoint having the timestamp of the next frame after the keyframe
[19:20:08 CEST] <ChocolateArmpits> this sounds, but not sure if it'll work out completely smooth
[19:20:13 CEST] <ChocolateArmpits> this sounds possible to do*
[19:20:29 CEST] <marcus_> Ok, I see
[19:20:49 CEST] <furq> maybe -skip_frame nokey
[19:21:10 CEST] <furq> although you'd need to do ffprobe -skip_frame nokey -count_frames to get an accurate keyframe count
[19:21:23 CEST] <furq> and doing that plus your ffmpeg command might end up taking more time than it does now
[19:22:20 CEST] <ChocolateArmpits> hmmm, an example in the tile filter docs does suggest using -skip_frame nokey
[19:22:49 CEST] <ChocolateArmpits> https://ffmpeg.org/ffmpeg-filters.html#Examples-101
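
The example from that page looks roughly like this (it tiles 8x8 keyframes per output image):

    ffmpeg -skip_frame nokey -i file.avi -vf 'scale=128:72,tile=8x8' -an -vsync 0 keyframes%03d.png
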
[19:24:17 CEST] <ChocolateArmpits> the only problem is that if the gop duration isn't fixed, it won't be possible to do frame selection the way it's done in the initial command; you'll have to pick using timestamps
[19:25:41 CEST] <ChocolateArmpits> and it'll still have to decode all keyframes, though that's less work than initially planned
[19:25:53 CEST] <ChocolateArmpits> I would probably explore this option first, before doing concat file
[19:27:39 CEST] <marcus_> Ok, I will give -skip_frame a try
[19:27:42 CEST] <marcus_> Today I do "-select_streams", "v:0", "-show_entries", "stream=nb_frames"
[19:27:58 CEST] <marcus_> to select the number of frames using ffprobe
[19:28:18 CEST] <marcus_> Will this give me the wrong number of frames?
[19:28:22 CEST] <furq> you can try just adding -skip_frame nokey to that, but i suspect you'll need -count_frames as well
[19:28:28 CEST] <furq> i did last time i tried this
[19:29:35 CEST] <ChocolateArmpits> for ffprobe you may want to use read_intervals; you can then input the times where the file should be read. it's pretty much seeking, so there's no need to collect frames before or between the times you are interested in
[19:29:56 CEST] <ChocolateArmpits> you can also list lots of times in succession
[19:30:16 CEST] <ChocolateArmpits> so no need to rerun ffprobe for each potential offset of interest
[19:43:40 CEST] <marcus_> Adding -skip_frame nokey to my command results in just one frame and the other 49 are blank?
[19:54:24 CEST] <mlok> .win 3
[20:06:41 CEST] <marcus_> With -skip_frame nokey it seems to run faster but I get a lot of blank frames
[20:07:12 CEST] <ChocolateArmpits> marcus_, did you try with -vsync 0 as suggested?
[20:11:57 CEST] <marcus_> ChocolateArmpits: yes, but the result is the same
[22:21:16 CEST] <GamleGaz> should srcSliceY and srcSliceH be the width and height of the frame when converting yuv420p->RGB24 using sws_scale?
[23:16:09 CEST] <b0bby__> hello
[23:17:20 CEST] <b0bby__> with the ffmpeg command, how do you get ffmpeg to output a new line rather than a carriage return when outputting file progress
[23:17:23 CEST] <b0bby__> ?
[23:31:21 CEST] <b0bby__> hello?
[23:32:07 CEST] <kepstin> b0bby__: the main stats output is hardcoded to do \r, so you'd have to edit the source and recompile to change that
[23:32:35 CEST] <kepstin> b0bby__: why do you want to change it? maybe there's some other things to try.
[23:33:22 CEST] <b0bby__> kepstin: is there a way to output only the current frame ffmpeg is on
[23:33:34 CEST] <b0bby__> kepstin: Like no other output
[23:33:49 CEST] <kepstin> b0bby__: what do you want to use this output for?
[23:34:15 CEST] <b0bby__> kepstin: Keep track of how complete ffmpeg is
[23:34:52 CEST] <kepstin> as a person? just watch the output on the terminal, that should work fine with \r...
[23:35:29 CEST] <kepstin> it will just keep updating the same line on the terminal (assuming it's wide enough that you don't get wordwrapping)
[23:35:30 CEST] <b0bby__> kepstin: I'm not watching it. A program is. I'm redirecting output to a log
[23:35:40 CEST] <tdr> pipe the output through a text filter?  (sed /awk ... )
[23:36:00 CEST] <tdr> or tr
[23:36:00 CEST] <b0bby__> tdr: How
[23:36:11 CEST] <kepstin> b0bby__: ok, so if you're doing this programmatically, you shouldn't be using that output - instead, you probably want to use -vstats and -vstats_file to a pipe or file, and read that
[23:36:12 CEST] <b0bby__> ?
[23:36:21 CEST] <tdr> b0bby__, are you on linux or iwndows or ?
[23:36:27 CEST] <b0bby__> tdr: linux
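
One way to do what tdr suggests on Linux is to normalize the carriage returns on stderr (paths are placeholders):

    ffmpeg -i input.mp4 output.mp4 2>&1 | tr '\r' '\n' > progress.log  # \r-updated stats become one line each
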
[23:36:56 CEST] <kepstin> b0bby__: the vstats output is a defined program-readable format, separated with \n
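
A minimal sketch of the -vstats_file approach (paths are placeholders); each encoded video frame appends one newline-terminated line that a program can parse:

    ffmpeg -i input.mp4 -vstats_file /tmp/vstats.log output.mp4 &
    tail -f /tmp/vstats.log  # lines contain frame number, q, frame size, output size, time, bitrate
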
[23:38:18 CEST] <b0bby__> kepstin: what happens if it is only half complete
[23:38:26 CEST] <kepstin> not sure what you mean
[23:38:46 CEST] <kepstin> ffmpeg doesn't know what percentage complete it is
[23:39:06 CEST] <b0bby__> kepstin: but it does know what frame it is on and the timestamp of that frame
[23:39:31 CEST] <kepstin> yes, but it doesn't know how long the video will be when it's done
[23:39:46 CEST] <b0bby__> kepstin: I don't care about the time
[23:39:53 CEST] <b0bby__> kepstin: just the percent
[23:40:13 CEST] <kepstin> but unless you know how long the video will be when it's done, you can't know how far through the video you are
[23:40:23 CEST] <kepstin> so there's no way to calculate a percent
[23:40:51 CEST] <kepstin> If you (as the person running ffmpeg) know how long the output is, you can calculate the percent yourself from the current time
[23:41:15 CEST] <b0bby__> 1. Find the length of the original video 2. Monitor ffmpeg for output on current frame. 3. current/original length 4. profit
[23:42:57 CEST] <kepstin> that works as long as the output video will be the same length as the input video, yep.
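
A rough sketch of that percentage calculation in shell, assuming the last vstats line starts with a "frame=" field and the output will have the same frame count as the input (names are placeholders; nb_frames isn't populated for every container):

    total=$(ffprobe -v error -select_streams v:0 -show_entries stream=nb_frames -of csv=p=0 input.mp4)
    current=$(tail -n 1 /tmp/vstats.log | awk '{print $2}')  # second field of "frame= N q= ..."
    awk -v c="$current" -v t="$total" 'BEGIN { printf "%.1f%%\n", 100 * c / t }'
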
[23:44:12 CEST] <furq> i already have an awk script that does that somewhere
[23:44:16 CEST] <b0bby__> kepstin: So when transcoding (which is what I'm doing) ffmpeg is going to add extra time to the file because it feels like it?
[23:44:39 CEST] <kepstin> not unless you use filters that change the video length
[23:44:39 CEST] <b0bby__> furq: please can I get that awk script
[23:44:57 CEST] <furq> http://vpaste.net/iraOS
[23:47:06 CEST] <furq> i don't really recommend using this for the reasons that have already been said
[23:47:13 CEST] <furq> but it does basically work
[23:48:28 CEST] <furq> bye
[23:50:28 CEST] <kerio> IRA OS huh
[23:51:00 CEST] <furq> your tone is antagonistic and you're making me very angry
[23:51:12 CEST] <kerio> tiocfaidh ár lá
[00:00:00 CEST] --- Thu Apr 26 2018


More information about the Ffmpeg-devel-irc mailing list