[Ffmpeg-devel-irc] ffmpeg.log.20160125

burek burek021 at gmail.com
Tue Jan 26 02:05:01 CET 2016


[01:11:15 CET] <Snaggle> Using ffmpeg-git to make a movie from a series of tiffs. The command and ffmpeg output is at http://pastebin.com/yeLiCQBY  But when I try to play the local file in Firefox, I get an error that the video is corrupt and unplayable.
[01:12:11 CET] <J_Darnley> What do I want to bet on?
[01:13:25 CET] <J_Darnley> So many reasons.  yuv444?  vp9?  no audio stream?
[01:13:31 CET] <furq> vp9 and no audio should be fine
[01:13:44 CET] <furq> i'm pretty sure yuv444p isn't supported by browsers
[01:14:02 CET] <furq> Snaggle: -pix_fmt yuv420p
[01:14:45 CET] <J_Darnley> Well, the browser in this case isn't as rubbish as it could be.
[01:14:45 CET] <furq> http://stackoverflow.com/a/21510586/1516484
[01:14:47 CET] <furq> nice answer J_Darnley
[01:15:24 CET] <furq> that's not even the worst answer on the page
[01:15:42 CET] <J_Darnley> A GUI?  What kind of pleb do you think I am?
[01:16:13 CET] <furq> the kind of pleb who doesn't want to run three separate tools to demux a pgc?
[01:17:19 CET] <furq> handbrake is actually quite good now provided you have a relatively clean source
[01:20:05 CET] <Snaggle> thank you.  pix_fmt did it.
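
For reference, a minimal sketch of a command along those lines; the real input pattern, framerate and codec are in Snaggle's pastebin, so the names here are placeholders:

    ffmpeg -framerate 24 -i frame_%05d.tiff -c:v libvpx-vp9 -pix_fmt yuv420p out.webm

The -pix_fmt yuv420p is the part that matters for browser playback, since 4:4:4 video is what Firefox refuses to play.
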
[01:50:36 CET] <andrey_utkin> any idea how to cope with NOPTS from mkv files? Want to concat a couple of _streamed_ ones (mkv saved to seekable files works fine).  Simple testcase: https://gist.github.com/andrey-utkin/d24f2fa744bd18a2067a Output: https://gist.github.com/andrey-utkin/f9b72c0ae711716500c8
[02:29:12 CET] <andrey_utkin> http://trac.ffmpeg.org/ticket/5186 help is appreciated
[02:44:08 CET] <J_Darnley> Is catting mkv supposed to work?
[02:59:22 CET] <andrey_utkin> J_Darnley: why not?
[02:59:50 CET] <andrey_utkin> J_Darnley: or you mean cat(1)?
[03:01:13 CET] <J_Darnley> Yeah, isn't that what concat does?
[03:06:49 CET] <furq> you're thinking of the concat protocol
[03:07:04 CET] <furq> -f concat is the concat demuxer
[03:08:46 CET] <J_Darnley> oh my bad
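
To make the distinction concrete, a hedged pair of examples with placeholder filenames: the concat protocol is a byte-level cat and only suits formats that can be joined that way (MPEG-TS, for instance), while the concat demuxer reads a list file and handles formats such as mkv or mp4:

    ffmpeg -i "concat:one.ts|two.ts" -c copy joined.ts

    printf "file 'one.mkv'\nfile 'two.mkv'\n" > list.txt
    ffmpeg -f concat -i list.txt -c copy joined.mkv
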
[04:48:14 CET] <Chagall> so i have an audio file with mono and 5.1ch audio
[04:48:30 CET] <Chagall> i encode it with opus but every time mediainfo says it is 6 channels
[04:48:40 CET] <Chagall> regardless of which one i select, and the ID seems right
[04:51:05 CET] <Chagall> nevermind, seems a container issue
[05:20:18 CET] <tldr`> http://news.softpedia.com/news/zero-day-ffmpeg-vulnerability-lets-anyone-steal-files-from-remote-machines-498880.shtml
[05:20:21 CET] <tldr`> has this been fixed?
[05:20:54 CET] <furq> did you read to the end of the article
[06:35:17 CET] <tldr`> now i did
[07:39:57 CET] <jleclanche> I'm trying to concat 4 mp4 files together so I followed the wiki and used -f concat, but it says Requested output format 'concat' is not a suitable output format. any idea why?
[07:45:21 CET] <chungy> You need to use -f concat with input, not output.
[08:01:48 CET] <jleclanche> chungy: oh of course
[08:01:57 CET] <jleclanche> I keep forgetting how argument ordering works in ffmpeg
[08:02:33 CET] <jleclanche> hmm, Invalid data found when processing input
[08:29:52 CET] <jleclanche> Aight well I tried several of the concat ways but all of them result in a corrupt file :( `ffmpeg -f concat -i list -c copy output.mp4` gives me something fine for the first chunk, but then completely corrupt for the last 3 (spams "aac: Number of bands (49) exceeds limit (40)." when testing it with mpv and crashes)
[08:46:23 CET] <durandal_1707> you can't use copy if input params change
[09:04:33 CET] <jleclanche> durandal_1707: input params don't change, that I know of
[09:04:41 CET] <jleclanche> chungy: what do you want pastebinned?
[09:06:51 CET] <chungy> The command you're using and the output
[09:15:40 CET] <jleclanche> chungy: results of ffmpeg -f concat -i list -c copy output.mp4: http://sprunge.us/YTeR
[09:17:05 CET] <chungy> that looks like a successful run...
[09:18:01 CET] <jleclanche> chungy: mpv output playing the file: http://sprunge.us/dhPd
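
If the four inputs really do carry differently configured AAC, which the "Number of bands exceeds limit" spam suggests, one hedged workaround is to keep the video stream-copied but re-encode only the audio while concatenating (the list file name matches the one used above):

    ffmpeg -f concat -i list -c:v copy -c:a aac -b:a 192k output.mp4
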
[11:45:20 CET] <andrey_utkin> Please recommend some multimedia container format which supports h264 & flac, is streamable, and is not matroska (because i have this bug with matroska http://trac.ffmpeg.org/ticket/5186)
[11:47:40 CET] <JEEB> I can't recommend it for anything else than ffmpeg-to-ffmpeg but there's NUT
[11:47:53 CET] <JEEB> it's usually used for raw video and audio piping
[11:48:03 CET] <JEEB> for compressed formats it's usually not used
[11:48:15 CET] <JEEB> but it can do them
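
A minimal sketch of the ffmpeg-to-ffmpeg piping JEEB describes, with h264 and flac in NUT; the file names are placeholders:

    ffmpeg -i input.mkv -c:v libx264 -c:a flac -f nut - | ffmpeg -f nut -i - -c copy output.mkv

NUT is designed to be streamable, so it does not depend on seeking back in the output the way a finalized mkv does.
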
[11:52:19 CET] <explodes> When FFMpeg decodes my video track as scrambled images, what is a common mistake I may be making?
[11:52:35 CET] <explodes> The audio track is also very high-pitched.
[11:53:09 CET] <explodes> In some videos I get a line of output saying something about how mis-alignment may cause slowdown
[11:53:11 CET] <andrey_utkin> JEEB: yes this is for ffmpeg-to-ffmpeg, thanks, will try
[11:53:28 CET] <andrey_utkin> any ideas what's that matroska bug and how to overcome?
[11:53:44 CET] <BtbN> andrey_utkin, just stream-copy the mkvs to a new mkv before concating them, should work fine after that.
[11:54:34 CET] <andrey_utkin> BtbN: i want to avoid intermediate files, for max performance and for simplicity
[11:55:03 CET] <BtbN> Well, that doesn't seem to work too well.
[11:55:30 CET] <andrey_utkin> yep but i'm sure it is supposed to work well somehow
[11:55:39 CET] <andrey_utkin> that way
[11:55:44 CET] <BtbN> It's supposed to work well with non-broken files
[11:57:33 CET] <andrey_utkin> well, then ffmpeg produces broken matroska files when in streaming mode
[11:59:33 CET] <BtbN> It does that for quite a few files if it's unable to write the "lead-out", which is the case if you write to stdout, because it can't seek in the output file.
[11:59:56 CET] <BtbN> Just stream-copying to a local mkv file should fix that.
[12:01:00 CET] <andrey_utkin> in my use case, it'll be doing it all the time, so "quite a few files" is the wrong way to put it
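
For completeness, a sketch of the remux step BtbN describes, which gives the muxer a seekable output so it can finish the file properly before concatenation (filenames are placeholders):

    ffmpeg -i streamed.mkv -c copy -map 0 fixed.mkv

The concat demuxer list would then point at the fixed files instead of the streamed originals.
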
[12:27:36 CET] <andrey_utkin> Thanks JEEB, nut seems to do what I ultimately want, but with some weirdness in logs: https://gist.github.com/andrey-utkin/5c61d91acbd8a32acef1 (script: https://gist.github.com/andrey-utkin/d1d204884d49b8f1a9c3)
[12:27:58 CET] <andrey_utkin> see "invalid dropping"
[12:37:27 CET] <explodes> video doesn't get scrambled when I change the output size.. interesting indeed
[12:58:34 CET] <andrey_utkin> created a ticket for nut concat warnings issue http://trac.ffmpeg.org/ticket/5189
[13:00:08 CET] <explodes> Sorry for keeping on asking dumb random questions, but I'm not very smart with ffmpeg or c for that matter. I'm trying to build ffmpeg for android, and I get some compiler errors. A lot of the errors say "version.h" isn't found
[13:00:19 CET] <explodes> I'm  building 2.7.2
[13:00:48 CET] <andrey_utkin> explodes: how do you build it?
[13:01:06 CET] <andrey_utkin> and what output it gives you?
[13:01:18 CET] <explodes> let me show you, give me a sec
[13:01:32 CET] <andrey_utkin> share full input and output via pastebin
[13:02:15 CET] <explodes_> build output: http://pastebin.com/LwPbBW3W
[13:02:50 CET] <explodes_> build script: http://pastebin.com/KdYxjcqS
[13:04:01 CET] <andrey_utkin> where is the ffmpeg source tree from?
[13:04:12 CET] <andrey_utkin> is it tarball of release downloaded, or a git checkout?
[13:04:46 CET] <explodes> tarball
[13:04:54 CET] <andrey_utkin> from where?
[13:05:23 CET] <explodes> from here: https://www.ffmpeg.org/download.html
[13:06:49 CET] <andrey_utkin> i have just downloaded http://ffmpeg.org/releases/ffmpeg-2.8.5.tar.bz2 , extracted it, and I see there are version.h in all lib subdirs.
[13:07:06 CET] <andrey_utkin> i guess 2.7.2 tarball should have them, too
[13:07:37 CET] <andrey_utkin> so try again
[13:07:53 CET] <andrey_utkin> BTW why not the latest release?
[13:07:58 CET] <explodes> weird, dunno why they disappeared
[13:08:20 CET] <explodes> no real reason, just haven't upgraded and done all of the testing. I'm going to try with 2.8.5 now, though.
[13:08:42 CET] <explodes> that tarball was downloaded last september
[13:11:33 CET] <explodes> dunno how that happened, maybe something happened in git somewhere, but 2.8.5 is building fine so far
[13:29:05 CET] <explodes> yep. there was a .gitignore that explicitly ignored all version.h files.
[13:29:32 CET] <andrey_utkin> explodes: if you use git, just make ffmpeg a submodule
[13:32:46 CET] <explodes> will do.
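
A hedged sketch of the submodule approach, assuming the Android project already lives in git; the URL is the official FFmpeg repository:

    git submodule add https://git.ffmpeg.org/ffmpeg.git ffmpeg
    git submodule update --init

A proper checkout or release tarball carries the per-library version.h files, whereas a tree copied through a .gitignore rule that excludes them will not.
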
[13:33:13 CET] <explodes> but first, now that my ffmpeg is built, it's back to debugging SIGSEGV's and very high-pitched audio streams
[14:18:56 CET] <t4nk183> Hey, would anyone be able to help me with my encoding settings? I am trying to create two pass encoding settings but can't seem to get it to run correctly. The individual lines work but not as a two pass setup using Windows Form C#. My arguments - http://pastebin.com/vbYEaRCR. Any help at all would be great thanks!
[14:19:50 CET] <DHE> and what errors are you getting?
[14:20:58 CET] <t4nk183> I tried && to join the passes together but I am given invalid arguments
[14:24:58 CET] <DHE> that's a unix shell-ism. does windows do that
[14:25:44 CET] <J_Darnley> cmd supports it too
[14:26:22 CET] <J_Darnley> t4nk183: who says "invalid arguments"?  ffmpeg or c#?
[14:26:45 CET] <t4nk183> ffmpeg
[14:26:48 CET] <DHE> (this is why you should paste everything, not just your commandline)
[14:32:52 CET] <J_Darnley> Are you going to give us the whole log?
[14:35:04 CET] <t4nk183> Yes sorry it's on a separate machine. I'll grab it now.
[14:35:35 CET] <dorp> Could I screen-capture a playing video stream of 100fps, with gdigrab? Is there an internal limit to the captured framerate? I would like to learn and read further about the nature of 'gdigrab', is there a recommended resource other than the brief usage within the docs?
[14:36:14 CET] <J_Darnley> The source code?  It is the ultimate reference.
[14:36:36 CET] <J_Darnley> I don't see why there would be an arbitrary limit though.
[14:36:43 CET] <waressearcher2> dorp: https://trac.ffmpeg.org/wiki/Capture/Desktop
[14:37:46 CET] <dorp> waressearcher2: Thanks, but these are merely usage examples with brief commentary
[14:38:18 CET] <t4nk183> The entire log: http://pastebin.com/C4D6Nc7N
[14:38:53 CET] <J_Darnley> You can't use && to chain commands without going through the shell
[14:38:54 CET] <DHE> dorp: 100 is a bit extreme. you may experience a lot of deviation and drift because encoding real-time at 100fps might be too much for your system.
[14:39:35 CET] <DHE> yeah, && is a shell thing. along with running it in a shell you'd have to specify "ffmpeg" at the start again since it's a second command to be run
[14:40:16 CET] <dorp> DHE: How about high framerate, with a small resolution? 100fps with 100x100
[14:40:52 CET] <t4nk183> Ok and is there a windows equivalent?
[14:41:08 CET] <J_Darnley> Change how you run ffmpeg
[14:41:26 CET] <J_Darnley> Either spawn/fork ffmpeg twice
[14:41:30 CET] <DHE> dorp: that's a bit more reasonable, as long as it's a 100x100 capture window and not something silly like 1920x1080 being downsampled to 100x100
[14:41:36 CET] <J_Darnley> Or use a function which uses the shell
[14:41:45 CET] <dorp> DHE: When I use such a framerate, the command line indicates: 'dups'... clearly these dups are not related to the source, because even if the source is static, it wouldn't indicate any dups at lower framerates. So wouldn't that suggest that there is another limitation somewhere?
[14:42:15 CET] <J_Darnley> If you want to use && then you need the C# equivalent of system()
[14:43:47 CET] <jkqxz> dorp:  Are you sure the output that you are capturing from is actually rendering at 100Hz?  If it's only rendering at 60Hz then half of your frames are disappearing before they get rendered, so you can't capture them.
[14:45:05 CET] <dorp> jkqxz: But how would that explain the 'dups' as an indication of the capture rate? Given a static/blank screen at 10fps, there are no dups. Same blank screen at 50fps, there are dups
[14:45:34 CET] <J_Darnley> dups doesn't tell you anything about the content
[14:45:50 CET] <jkqxz> Because the capture driver knows that the frame hasn't been rerendered since the last time it ran, so there is definitely a dup.
[14:48:36 CET] <t4nk183> What do you mean by the C# equivalent of system()?
[14:48:36 CET] <dorp> jkqxz: Let me try to expand on what I've attempted, I've taken a video stream with motion/camera-panning, cropped it to a small resolution, and played it at 100fps rate, which ran very smoothly and clearly
[14:50:00 CET] <dorp> jkqxz: When I try to screen-capture said playing video stream, while adjusting the x/y and the video_size appropriately, there are lots and lots of dups, and the export doesn't represent the motion of the played video stream
[14:50:34 CET] <DHE> t4nk183: in C there is a function called system() that calls out to run the command you specify
[14:51:16 CET] <DHE> the idea being you'd run system("ffmpeg -i input.mp4 ... .... output.mp4");  and the easiest way to do it in C would be to run system() twice. must be something similar for C# ?
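
Since && is interpreted by the shell, one hedged option on Windows is to hand the whole chained line to cmd.exe (from C# that means launching "cmd" with a /C argument, or simply launching ffmpeg twice). The encoder settings below are placeholders; the real ones are in the pastebin above:

    cmd /C "ffmpeg -y -i input.mp4 -c:v libx264 -b:v 2000k -pass 1 -an -f null NUL && ffmpeg -i input.mp4 -c:v libx264 -b:v 2000k -pass 2 -c:a aac output.mp4"

Note that the second command repeats the full ffmpeg invocation, which is what DHE points out above.
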
[14:51:58 CET] <dorp> jkqxz: So I'm trying to understand where's the limitation?
[14:54:02 CET] <dorp> Playing the same sample, at 23.976fps, 1920x1080 ... would result with a much better capture-export, so I came to the conclusion that the limitation is specific to the high framerate capturing?
[14:54:44 CET] <jkqxz> dorp:  You've also set the framerate argument to gdigrab?  (Along with offset_x, offset_y and video_size.)
[14:55:07 CET] <dorp> jkqxz: Yes, ./ffmpeg -f gdigrab -framerate 100 ...
[15:04:59 CET] <jkqxz> Looking at the source (libavdevice/gdigrab.c), it looks like it always blindly copies, so the problem is probably somewhere else.
[15:05:45 CET] <jkqxz> Given that it could be timing, have you tried with -vsync 0?
[15:07:04 CET] <dorp> jkqxz: I've tried all the -vsync options within the docs, -1, 0, 1, 2
[15:07:40 CET] <dorp> The nature of the exports was very different, most notably the elimination of all the duplicated frames
[15:09:07 CET] <dorp> But the captured motion of the 23.976fps 1080p sample, is still much better than the captured motion of the 100fps 100x100 sample
[15:09:31 CET] <dorp> So I just wanted to know what's the consideration/limitation behind that
[15:14:33 CET] <jkqxz> I don't know.  Probably somewhere in the interaction between rendering and capture, but that's going to be a Windows thing rather than ffmpeg.  The screen you're capturing really is a 100Hz output with a 100Hz monitor here, right?
[15:16:19 CET] <dorp> jkqxz: I don't know if the screen matches the exact properties, all I know is that by playing the source, and playing the export, there's a big gap
[15:16:46 CET] <dorp> jkqxz: (I'm pretty much a beginner that would like to understand the concepts behind screen capturing)
[15:19:54 CET] <jkqxz> The rendering will likely run at whatever the framerate of the output is (hence the output and monitor being important), and that gets pasted into a framebuffer somewhere.  The capture just reads out of that framebuffer at its own specified rate, with that rate not synchronised to the input.
[15:21:06 CET] <jkqxz> As such, nasty things can happen if those rates don't quite match, because the capture can easily read the same frame twice or miss a frame.
[15:22:58 CET] <jkqxz> It also changes the duration of frames.  Suppose you have a 1/3Hz output and a 1/2Hz capture (somewhat similar to your case).  Frames are output at times 0, 3, 6, 9 ..., and are captured at times 0, 2, 4, 6, ....
[15:24:13 CET] <dorp> But isn't that the purpose of duplicating frames? For adjusting the timing?
[15:24:21 CET] <jkqxz> Then the even frames (at 0, 6, ...) will appear twice in the capture, but the odd frames (3, 9, ...) will only appear once.  Since it then plays back at uniform rate, the captured video will look very stuttery despite the output not being.
[15:27:14 CET] <jkqxz> The duplication doesn't help, because it still doesn't know the original clock that the output was running on before it captured.  It's just making sure that the output is uniform from its point of view, but it doesn't have enough information to help here.
[15:28:06 CET] <dorp> I assume that all the issues that you mention, are not specific to ffmpeg or gdigrab, right?
[15:29:05 CET] <dorp> Do you think that 'hardware' capture devices are likely to handle such issues better? Or it's likely to be more of the same?
[15:29:26 CET] <jkqxz> Correct.  Something cleverer than gdigrab might be able to fix it up by knowing the output clock.
[15:30:33 CET] <jkqxz> A hardware device will do better, because the capture there is purely derived from the output clock (it will just capture frames as they are given to it).
[15:32:19 CET] <jkqxz> Can you try modifying the capture framerate until you don't get stuttery video?  If it looks smooth at 60, then that's what your output is and therefore where your problem lies.
[15:34:18 CET] <dorp> jkqxz: I was under the assumption that there is no 'precise' number, because the display rate/interval is not required to be precise, and it could lag/shift?
[15:40:17 CET] <jkqxz> Not very much.  Clock error there is on the order of tens of ppm.  (So at worst 1 frame off after 100000, so a drop/dup at most every ~15 minutes at 100Hz.)
[15:40:42 CET] <jkqxz> Differences in software latencies on a loaded system are much larger.
[15:41:34 CET] <dorp> I see. So it's definitely worth trying a range of framerates
[15:42:01 CET] <jkqxz> Maybe that should be 10000 -> ~2 minutes.  Still, sufficiently low to not be very significant.
[15:44:59 CET] <dorp> BTW, is there a better sample to evaluate motion-stutter, than camera-panning?
[15:46:33 CET] <jkqxz> Make a 1000-frame video out of images of the numbers 1..1000?
[15:47:58 CET] <dorp> That would be interesting
[15:57:54 CET] <jkqxz> for i in $(seq -f "%03g" 0 999) ; do convert -size 100x100 xc:white -pointsize 48 -fill black -draw "text 10,64 '$i'" test-$i.png ; done
[15:58:05 CET] <jkqxz> ffmpeg -framerate 100 -i 'test-%03d.png' -c:v libx264 -r 100 -pix_fmt yuv420p numbers.mp4
[15:58:24 CET] <jkqxz> ffmpeg ftw :)
[16:14:27 CET] <dorp> jkqxz: Thanks a lot, I'll bbs and share the result
[17:36:30 CET] <dorp_> jkqxz: Hmmm... I've used the sequence of frames for creating a 25fps stream, I played it, and captured it with -framerate 25 .. and within ~15 frames, I have 2 dups, and 2 missing frames (without -vsync)
[17:36:51 CET] <andrey_utkin> JEEB: (sorry to disturb you) the same issue happens with nut when I specify x264: https://gist.github.com/andrey-utkin/633090e183e82b3ae5a4 , script: https://gist.github.com/andrey-utkin/1900b6098e09806dc1e0
[17:37:15 CET] <dorp_> It seems reasonable? Maybe there's something wrong with my setup?
[17:49:54 CET] <jkqxz> dorp_:  The real framerate might not be divisible by 25, then?  I'm still expecting you can find a framerate (30 or 60?) that will play perfectly.
[18:11:37 CET] <andrey_utkin> does anybody know why the h264 stream has NOPTS in the first packet? matroska, nut etc. defeat the purpose of this script: https://gist.github.com/andrey-utkin/208b18869e68cc83cc4d and only MPEG TS works: https://gist.github.com/andrey-utkin/208b18869e68cc83cc4d
[18:11:49 CET] <wvuu> hello
[18:12:00 CET] <wvuu> how to run ffplay with multiple threads?
[18:12:25 CET] <wvuu> this option keeps mutating, worse than flu bacteria.
[18:20:49 CET] <digidog> ok after long research and with the help of some of you here I came to the conclusion that qtgmc 50p via vapoursynth will be my best option for deinterlacing old DV PAL material for integration into an up-to-date cut. but I am totally lost on how to start scripting this. does anyone have examples around, or links for scripts and how to fire them? is it like with shell scripts and ffmpeg, like a script injection in the command?
[18:21:35 CET] <ChocolateArmpits> digidog: hey digi, have you tried any scripts yet, did you come up with a base vpy script ?
[18:21:50 CET] <digidog> ChocolateArmpits: hey! :)
[18:22:42 CET] <ChocolateArmpits> I have written a batch script that writes a vpy script that performs quite a few actions, like connecting videos, adding subtitles, masking etc
[18:22:54 CET] <ChocolateArmpits> so I can help you with the process
[18:23:20 CET] <digidog> ChocolateArmpits: well honestly, ... maybe I was a little bit fast with asking *hides* I used to work with avisynth back in the day a little, but honestly I am a little bit lost in the wild. I think I should rather take a nap, read some more and test out some examples before asking :)
[18:23:49 CET] <digidog> ChocolateArmpits: oh thats interesting o.O
[18:23:51 CET] <ChocolateArmpits> digidog: ok but did you get vapoursynth running, did you test a script with your video ?
[18:24:06 CET] <ChocolateArmpits> even simple i/o
[18:25:10 CET] <digidog> ChocolateArmpits: errm, yes I did, 2 days ago ... but don't ask me ... I had been awake 24 hours that day -.O ... let me check and come back 30 min later? I don't want to waste your time while being unprepared :/
[18:25:27 CET] <digidog> ChocolateArmpits++
[18:25:28 CET] <ChocolateArmpits> You should get the Vapoursynth Editor https://bitbucket.org/mystery_keeper/vapoursynth-editor/overview
[18:25:32 CET] <digidog> thanks a million for chiming in!
[18:25:42 CET] <ChocolateArmpits> it has a scripting window and you can test and preview frames
[18:25:49 CET] <digidog> ChocolateArmpits: ah good to know!
[18:25:53 CET] Action: digidog grabs the link
[18:26:02 CET] <ChocolateArmpits> http://forum.doom9.org/showthread.php?p=1689404#post1689404 here's how to build it, I think
[18:26:15 CET] <ChocolateArmpits> it makes scripting easier
[18:29:38 CET] <digidog> ChocolateArmpits: this is awesome start! thank you!
[18:30:11 CET] <digidog> ChocolateArmpits: I will try to get in and come back later if I need a side kick :)
[18:30:27 CET] <digidog> ChocolateArmpits++
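
One way to wire a finished script into ffmpeg is to pipe it through vspipe, shown here as a hedged sketch: it assumes a deinterlace.vpy that loads the DV source and calls QTGMC in 50p mode (the script itself is not shown), and that VapourSynth's vspipe is installed:

    vspipe --y4m deinterlace.vpy - | ffmpeg -i - -c:v libx264 -crf 18 deinterlaced.mkv

The y4m pipe carries no audio, so the audio would have to be taken from the original DV file as a second input or muxed in afterwards.
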
[18:33:40 CET] <digidog> didn't I want to provide an additional Debian build instruction for vapoursynth? *scratcheshead* I found a snippet here in the folder ... jesus, I must have been really tired that night ...
[18:37:25 CET] Action: digidog filters the chatlog for what was discussed, and with whom, over the past days ...
[18:48:35 CET] <digidog> ah yes found it ... I have talked with Myrsloik about it :)
[18:49:44 CET] <durandal_1707> off topic is off topic
[18:53:37 CET] <digidog> Myrsloik: should I provide this Debian install info in markdown or what would be the best format ?
[18:54:06 CET] <durandal_1707> qtgmc uses eedi for deinterlacing IIRC
[20:58:13 CET] <Tzimmo> c_14: J_Darnley: I created ticket #5196 with a small sample. I can upload a larger (large!) sample if needed.
[21:03:04 CET] <furq> durandal_1707: http://avisynth.nl/index.php/QTGMC#Interpolation
[21:07:07 CET] <durandal_1707> furq: I ported nnedi3 to lavfi
[21:08:02 CET] <J_Darnley> Now we just need mvtools and an easy way to have filters call other filters!
[21:09:00 CET] <J_Darnley> I cannot type worth a damn today
[21:16:58 CET] <furq> well a better deinterlace filter would be much appreciated
[21:17:07 CET] <furq> for starters it would mean that i'd never have to use avisynth again
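
Once the nnedi port is available in lavfi, a hedged example of using it as a double-rate deinterlacer might look like the following; it needs the separately downloaded nnedi3_weights.bin, and the option names should be checked against ffmpeg -h filter=nnedi since this is written from memory:

    ffmpeg -i interlaced.mkv -vf nnedi=weights=nnedi3_weights.bin:field=af -c:v libx264 -crf 18 -c:a copy progressive.mkv
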
[21:45:12 CET] <YamakasY> mhh nice, cannot convert my flv to mp4
[21:45:32 CET] <YamakasY> [mp4 @ 0x32553a0] Could not find tag for codec nellymoser in stream #1, codec not currently supported in container
[21:47:15 CET] <dystopia> what does -y before -i actually do?
[21:51:13 CET] <kepstin> dystopia: the position of the '-y' doesn't actually matter, I think
[21:51:54 CET] <kepstin> YamakasY: pretty obvious message there; your audio is in the nellymoser codec, and you can't store that in mp4. you'll have to reencode the audio (e.g. to aac)
[21:54:53 CET] <J_Darnley> Yes, some options are global.
[21:55:09 CET] <J_Darnley> Like -y, -n, and -loglevel
[21:56:09 CET] <YamakasY> kepstin: I did and what a damn bad quality I get back :(
[21:56:47 CET] <YamakasY> sound is great, video is artifacted
[21:57:06 CET] <kepstin> YamakasY: then you might need to set some encoding options, like -crf to set a quality level (the default is sort of medium-low, try around 16)
[21:57:25 CET] <YamakasY> kepstin: yeah ok, but this is the way ?
[21:57:39 CET] <YamakasY> everyone wants to see his pr0n in good quality! :P
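
A hedged sketch of the re-encode kepstin describes, with the quality knob spelled out; lower -crf means higher quality, the libx264 default is 23, and the filenames are placeholders:

    ffmpeg -i input.flv -c:v libx264 -crf 18 -preset slow -c:a aac -b:a 160k output.mp4
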
[22:10:46 CET] <YamakasY> kepstin: heh, now the quality is ok but the sound doesn't line up
[23:38:17 CET] <dorp_> I've been trying to screen-capture a playing video which consists of frames of 1,2,3,4,5... each frame is a digit, at the rate of 25fps. I've been trying to capture it at various rates from 25 and up to 50... and it seems that no matter what, in the scope of 4 seconds, 1 to 4 frames will be missing. Using -vsync would affect the timing and duplicate frames, but frames will still be missing
[23:39:19 CET] <dorp_> Is it to be expected that within 100 frames, 1-4 frames to be missing via screen capture? Or I'm likely to be doing or having something wrong?
[23:40:54 CET] <c_14> capture at your screen's refresh rate
[23:42:22 CET] <kepstin> well, depending on the capture driver in use, you might not be able to hit the requested input fps.
[23:45:33 CET] <kepstin> one common mistake is to use the '-r' input option rather than '-framerate' - but I think most capture drivers default to around 30fps which should be enough
[23:45:53 CET] <kepstin> that said, when I was testing gdigrab on windows 7, I was lucky to get 12fps on my laptop :)
[23:47:41 CET] <dorp_> c_14: My refresh rate seems to be 60hz, which means to try -f gdigrab -framerate 60 ...? should I expect to see 100% of the frames, and if not- what is to be expected?
[23:48:50 CET] <c_14> Assuming your capture device can actually capture at that framerate, I think so yes.
[23:49:02 CET] <c_14> And you can encode all the frames realtime
[23:49:04 CET] <kepstin> dorp_: with gdigrab, it's probably too slow to capture the framerate you're requesting, depending on the windows version you're using. You should also check the video encoder - you need to use something fast enough
[23:50:59 CET] <kepstin> dorp_: when you run the ffmpeg command, the status line should say the fps that it's actually encoding at.
[23:51:23 CET] <dorp_> kepstin: I don't have any drops, my test resolution is only 100x100, and I use Windows 10 with gdigrab, basically: ffmpeg -f gdigrab -framerate 25 -video_size 100x100 -i desktop  (with x264 -qp 0 ultrafast), my test machine is offline, I'm not sure how much the whole console output would be helpful
[23:53:08 CET] <kepstin> well, next time you run it, check the status line to see the real fps that the encode is running at
[23:53:20 CET] <kepstin> and check the cpu and disk usage to try to figure out the bottleneck
[23:54:02 CET] <kepstin> if neither cpu or disk is overloaded and it's not managing 25fps, it might just be limitations of the api the capture driver is using.
[23:55:35 CET] <dorp_> kepstin: fps= shows 60-61, no disk/cpu bottleneck, ~6% cpu for ffmpeg ... the resolution is trivial: 100x100
[23:56:01 CET] <dorp_> drop=0, but there are a lot of dups
[23:56:22 CET] <kepstin> you shouldn't be getting dup frames unless you did something wrong in the command line
[23:56:29 CET] <kepstin> please pastebin your complete command line
[23:57:36 CET] <dorp_> kepstin: I've copied it by hand: http://codepad.org/wepSt13e
[23:58:35 CET] <kepstin> ok, that's really weird. you shouldn't be getting any 'dup' frames with that command line as far as I understand the ffmpeg sync code.
[23:58:49 CET] <dorp_> If I use a lower -framerate, then I wouldn't be having lots of dups, and maybe none at all
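
Pulling the thread together, a hedged sketch of capturing at the display's refresh rate rather than the clip's frame rate; the offsets and size are placeholders, and the lossless x264 settings are kept from the command above:

    ffmpeg -f gdigrab -framerate 60 -offset_x 0 -offset_y 0 -video_size 100x100 -i desktop -c:v libx264 -preset ultrafast -qp 0 capture.mkv

Whether the missing frames disappear then depends on the rendering path discussed earlier; gdigrab itself just reads the framebuffer at whatever rate it is asked to.
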
[00:00:00 CET] --- Tue Jan 26 2016


More information about the Ffmpeg-devel-irc mailing list