[Ffmpeg-devel-irc] ffmpeg.log.20140505

burek burek021 at gmail.com
Tue May 6 02:05:01 CEST 2014


[00:21] <NeutrinoPower> hi, when importing big images, kdenlive crashes and gives me this message: "sws: filterSize 536 is too large, try less extreme scaling or increase MAX_FILTER_SIZE and recompile \n  sws: initFilter failed \n  GL error 0x501 at resource_pool.cpp:274"
[00:21] <NeutrinoPower> do I have to recompile ffmpeg with another value for MAX_FILTER_SIZE, or a part of kdenlive?
[00:23] <NeutrinoPower> and what is the easiest way to do that on Arch Linux in the PKGBUILD of ffmpeg-git?
[01:31] <ergZay> what's ffmpeg_g vs ffmpeg?
[01:39] <watsonkp> ffmpeg_g is the unstripped binary with debug symbols, for debugging
[01:43] <NeutrinoPower> wtf, I downloaded 151MB from libvpx-git
[02:10] <AGSPhoenix> Hey guys, is there any way to get ffmpeg to do frame blending when converting between frame rates? I have a 990 fps video that I'd like to convert to 30 fps, blending 33 frames from the input into one frame in the output.
[02:16] <CapsAdmin> is there a way to make ffmpeg read more than one frame of audio per av_read_frame call?
[02:32] <NeutrinoPower> wow, ffmpeg-git is 87MB big but libvpx-git is bigger
[02:38] <ergZay> NeutrinoPower: libvpx-git isn't ffmpeg so how's this relevant?
[02:38] <ergZay> and neither is kdenlive
[03:07] <NeutrinoPower> ok, changing the "MAX_FILTER_SIZE" parameter doesn't prevent kdenlive from crashing with "GL error 0x501 at resource_pool.cpp:274"
[03:14] <xreal> How can I delay audio by 1 second?
[03:15] <AGSPhoenix> perhaps with the concat filter? Not sure if that would work though
[03:15] <AGSPhoenix> I'm no expert
[03:17] <AGSPhoenix> https://ffmpeg.org/ffmpeg-filters.html#concat
[03:18] <AGSPhoenix> You might need to demux the audio stream and process it separately
[03:18] <AGSPhoenix> Actually, that's probably a better idea
[03:20] <klaxa> see -itsoffset instead maybe?
[03:29] <xreal> klaxa: but how can I say "audio only" ?
[03:29] <klaxa> have you read the help-entry for -itsoffset?
[03:29] <klaxa> you apply it before one of the streams (or was it after?)
[03:30] <klaxa> wait...
[03:30] <klaxa> was it before the codecs?
[03:30] <klaxa> seems to only work for inputs, just specify the input twice, using video from 0, audio from 1 and delay audio by using -itsoffset
[03:31] <klaxa> copy codecs and you should be fine
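A minimal sketch of the two-input -itsoffset approach klaxa describes (the filenames and the 1-second offset are assumptions):

    ffmpeg -i input.mp4 -itsoffset 1 -i input.mp4 -map 0:v -map 1:a -c copy output.mp4

The same file is opened twice; video is mapped from the first (undelayed) input and audio from the second, whose timestamps -itsoffset shifts forward by one second, while -c copy avoids re-encoding either stream.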
[03:31] <xreal> okay
[03:31] <xreal> klaxa: let me try
[03:32] <xreal> works, thanks.
[03:36] <xreal> klaxa: is this a "hardcoded" delay with digital silence or just metadata?
[03:36] <AGSPhoenix> Pretty sure it's metadata
[03:37] <AGSPhoenix> According to the docs, it just changes the time the audio should play at
[03:37] <AGSPhoenix> hence, Input TimeStamp OFFSET
[03:42] <xreal> klaxa: is this a "hardcoded" delay with digital silence or just metadata?
[03:43] <klaxa> AGSPhoenix said it's setting a delay on the timestamps, while i agree that it is sort of metadata, it's not metadata in the sense of file-information metadata
[03:43] <klaxa> it's part of the bitstream afaik
[03:44] <xreal> I've just loaded it in Adobe Premiere: Audio starts at 0 seconds, but I've set a delay of 5 seconds. mplayer plays it with a delay of 5 seconds.
[03:44] <xreal> So it's only muted using metadata.
[03:45] <AGSPhoenix> It might just be easier to generate 1 second of silence, dump the audio and video streams into separate files, and use ffmpeg's concat filter to append the two audio files.
[03:46] <AGSPhoenix> Depends on the audio format and whether or not Premiere can handle raw streams
[03:46] <AGSPhoenix> Well, I suppose you could reencapsulate them separately
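For reference, a sketch of the silence-then-concat approach AGSPhoenix outlines (the filenames, FLAC as the intermediate, and 48 kHz stereo are all assumptions):

    ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -t 1 silence.flac
    ffmpeg -i input.mkv -vn -c:a flac audio.flac
    ffmpeg -i silence.flac -i audio.flac -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1" delayed.flac

Unlike the -itsoffset route, this bakes real digital silence into the stream, so players that ignore timestamp offsets still hear the delay.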
[03:46] <xreal> AGSPhoenix: Premiere can handle nearly anything :)
[03:46] <AGSPhoenix> Except audio with a delay, apparently
[03:47] <xreal> AGSPhoenix: It doesn't parse some ffmpeg-only metadata :(
[03:47] <xreal> AGSPhoenix: About concat, let me explain what I am working with.
[03:48] <xreal> I've created a demo video with a frame-accurate timecode and LTC data on one audio channel. I want to measure the delay introduced by several audio codecs. Muxing external mp3 gives a big delay, since ffmpeg can't handle them correctly.
[03:48] <xreal> I wanted to force a delay to check, if everything works as expected.
[03:50] <AGSPhoenix> So don't use mp3? Dump the audio track to flac, and append it to a one second clip of silence
[03:51] <xreal> AGSPhoenix: I would, if I could :) Target format needs to be DivX/XviD with MP3.
[03:52] <AGSPhoenix> okay, dump to flac, append, then encode?
[03:52] <AGSPhoenix> Not sure if this LTC data is vulnerable to degradation from reencoding
[03:59] <xreal> AGSPhoenix: it's pretty good, it even works for MP3 :)
[04:00] <AGSPhoenix> Everything is working?
[04:00] <Lamboote> Hello guys. Does the ffmpeg command syntax change?
[04:01] <AGSPhoenix> Not sure what you mean Lamboote. Maybe try rephrasing your question?
[04:03] <Lamboote> Before I used -vcodec, but now I've seen a command with -c:v
[04:03] <AGSPhoenix> Oh I see
[04:04] <AGSPhoenix> Yes, there are some new options you can use like -c:v instead of -vcodec
[04:04] <AGSPhoenix> The old options should still work
[04:05] <Lamboote> The new options are good.
[04:05] <Lamboote> May I ask you a question?
[04:05] <AGSPhoenix> yes, of course
[04:07] <Lamboote> what is the difference between ffmpeg and x264?
[04:07] <Lamboote> Sorry for english .
[04:08] <AGSPhoenix> Your english is no problem. It is good enough.
[04:08] <AGSPhoenix> x264 makes H.264 video streams
[04:09] <AGSPhoenix> ffmpeg can make lots of types of streams, and combine them. It can also make H.264 streams with x264
[04:10] <AGSPhoenix> Simply, ffmpeg does more
[04:11] <Lamboote> Ok . Thanks
[04:11] <AGSPhoenix> :)
[04:13] <Lamboote> do you have experience converting animation (anime) with ffmpeg and x264?
[04:14] <Lamboote> I need the best parameters for a good result.
[04:14] <AGSPhoenix> Converting anime can be hard. The video changes a lot. What are you trying to convert for? Are you trying to watch on a portable video player?
[04:15] <Lamboote> No, I'm not.
[04:16] <AGSPhoenix> What do you need to convert to?
[04:17] <Lamboote> I want to make hardsubbed anime, and watch it on a laptop or PC or even tablets or a smartphone or maybe an HD player.
[04:18] <Lamboote> My problem is the result doesn't satisfy me.
[04:20] <AGSPhoenix> Lamboote, how are you converting? Paste the command you use
[04:21] <c_14> Lamboote: tried looking at https://trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo ?
[04:23] <Lamboote> I know how to burn subtitles onto video.
[04:24] <Lamboote> I want the best quality with a reduced file size
[04:24] <c_14> paste the command you're using and we'll look at it
[04:25] <xreal> oh damn... using MP3 is totally out of sync. It starts at 2 frames delay and after 4 seconds, it's only 1 frame delay. Then after 8 seconds, it's back to 2 frames.
[04:25] <Lamboote> I am online on my smartphone. My laptop is not here
[04:25] <Lamboote> Sorry
[04:26] <Lamboote> I'll come back in a few hours and send the command
[04:26] <AGSPhoenix> Lamboote, there's not much we can do without that information except send you to guides
[04:26] <c_14> What I can do is give you this: https://trac.ffmpeg.org/wiki/x264EncodingGuide . Start with a -crf around 20 and adjust until you get the quality/size you want.
[04:26] <Lamboote> Yes i know
[04:27] <Lamboote> Are you always online ?
[04:27] <AGSPhoenix> I am not, I am waiting for help myself
[04:27] <AGSPhoenix> But I will be here for a few hours
[04:27] <Lamboote> c_14
[04:28] <c_14> I'm usually online, but even when I'm not there's usually someone who can help. At least as long as questions don't get too technical.
[04:30] <Lamboote> My source size is around 330~350 MB. I want to make a hardsub from the source with a file size under 100 MB (80~90 is good for me).
[04:30] <Lamboote> With good quality
[04:30] <c_14> If you know the output filesize you want, look at the 2-pass section on the x264 guide I linked. That will give you the best quality for that specific filesize.
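As a sketch of the two-pass workflow c_14 points to: assuming a roughly 24-minute episode and a 90 MB target, the budget is about 90 × 8192 / 1440 ≈ 512 kb/s total, leaving ~384 kb/s for video after a 128 kb/s audio track (all numbers and filenames are assumptions):

    ffmpeg -y -i input.mkv -c:v libx264 -b:v 384k -pass 1 -an -f null /dev/null
    ffmpeg -i input.mkv -c:v libx264 -b:v 384k -pass 2 -c:a aac -b:a 128k output.mp4

The first pass only gathers statistics, which is why its audio and output are discarded; older builds may need -strict experimental for the native aac encoder.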
[04:31] <AGSPhoenix> If that quality is not good enough, you will need to make the file bigger
[04:33] <Lamboote> How much quality loss in percent is normal?
[04:33] <AGSPhoenix> You can choose the quality loss
[04:33] <AGSPhoenix> You should try different amounts until you find something you like
[04:34] <Lamboote> Ok, I understand.
[04:34] <Lamboote> Ok, I will come back if I have a problem.
[04:35] <AGSPhoenix> Ok
[04:35] <Lamboote> Thank you
[04:35] <AGSPhoenix> :)
[04:37] <AGSPhoenix> Since my question is a few pages up now, I'll ask again: Can ffmpeg take an input video and blend multiple frames together so that each output frame is a blend of multiple input frames?
[04:38] <AGSPhoenix> For example, taking a 90 fps video and outputting a 30 fps video where each frame is a blend of 3 input frames
[05:13] <blippyp> AGSPhoenix: why would you lower it from 90fps to 30fps?
[05:15] <AGSPhoenix> blippyp, that was an example. My real input is 990 fps, and I'm looking into removing some software limitations to get it higher
[05:15] <blippyp> that is crazy fps
[05:15] <blippyp> but either way - let me see if I got your question right
[05:16] <blippyp> you have three videos and you just want to overlay each one on top of the other right?
[05:16] <AGSPhoenix> No
[05:16] <AGSPhoenix> I have an input video at 990 fps, and I want to blend 33 frames from the input into one frame of output
[05:17] <AGSPhoenix> I could probably cobble something together with the blend filter, but I was hoping for an easier solution
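For the record, ffmpeg releases newer than the 2014 builds discussed here grew filters that do exactly this; assuming a build with tmix and framestep available, something like:

    ffmpeg -i input.mkv -vf "tmix=frames=33,framestep=33" -c:v libx264 output.mkv

tmix averages a sliding window of the last 33 frames and framestep then keeps every 33rd result, turning 990 fps into 30 fps in one pass.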
[05:17] <blippyp> okay - so you want to take frame 1,2,3 blend them into a new #1, then you want to take frame 4,5,6 and blend those into a new #2?
[05:17] <AGSPhoenix> Yep
[05:18] <blippyp> let me think
[05:18] <blippyp> I honestly don't think you can do that
[05:18] <blippyp> not without some serious craziness
[05:19] <blippyp> why do you need to blend them? you know that would make one fuzzy video
[05:20] <blippyp> at those speeds, I don't even think you'd notice it to be honest
[05:20] <AGSPhoenix> The input comes from a 3D renderer with no motion blur support, so I'm rendering at a very high framerate then blending to make motion blur in post
[05:21] <blippyp> so why not just cut the frame rate to 30 fps and blur the frames?
[05:21] <AGSPhoenix> I'm not sure I understand, how would I blur them?
[05:21] <blippyp> something like this
[05:22] <blippyp> ffmpeg -i video.mkv -vf "boxblur" -r 30 -c:v libx264 -c:a ac3 out.mkv
[05:22] <blippyp> boxblur also has options to specify how much
[05:22] <AGSPhoenix> No, that will just make a blurry image, not simulate motion blur
[05:23] <AGSPhoenix> I want this: http://en.wikipedia.org/wiki/File:London_bus_and_telephone_box_on_Haymarket.jpg
[05:23] <blippyp> I see - so you need that really good motion blur - a dynamic one... not even just a simulated one (because it would always be in one direction)???
[05:23] <blippyp> if I understand that right
[05:24] <AGSPhoenix> Correct. The scene is very finely detailed, and just doing a general blur wouldn't look right
[05:24] <blippyp> I think you'd have to work with that frame by frame
[05:25] <blippyp> I don't think there's a filter for what you want
[05:25] <AGSPhoenix> At this point, I'm considering just dumping each frame to PNG, blending them with ImageMagick, and working with that
[05:25] <AGSPhoenix> ...which is what you just said
[05:25] <blippyp> yeah - that's probably what I would do as well
[05:25] <blippyp> very slow - I know
[05:25] <blippyp> I assume you're at least scripting it... ;)
[05:25] <blippyp> haha
[05:26] <AGSPhoenix> But I can't figure out how to do it reasonably
[05:26] <blippyp> you mean you can't get a good blend with imagemagick?
[05:26] <blippyp> or it's too slow
[05:26] <AGSPhoenix> no, gimme a sec to type
[05:27] <AGSPhoenix> I need to feed 33 images at a time from a list of thousands into a command that outputs a file with a sequentially increasing filename
[05:27] <AGSPhoenix> And I have no idea how to manage both of those bits
[05:28] <blippyp> but you have your command right for imagemagick?
[05:28] <AGSPhoenix> Yeah
[05:28] <blippyp> k
[05:28] <blippyp> so you're having problems with the scripting then?
[05:28] <AGSPhoenix> Yeah
[05:29] <blippyp> k give me a second to play a bit
[05:29] <AGSPhoenix> I tried doing it with xargs, but I must be missing something, because I can't get it to work right
[05:29] <blippyp> can you paste your imagemagick command so I don't have to play with that part at least
[05:29] <AGSPhoenix> convert {} -evaluate-sequence mean processed/out.png
[05:29] <blippyp> thx
[05:30] <blippyp> that's not specifying any files though?
[05:30] <blippyp> give me an example of it specifying three of the files
[05:30] <AGSPhoenix> sorry, {} is xargs for 'files go here'
[05:30] <AGSPhoenix> convert 1.png 2.png 3.png -evaluate-sequence mean processed/out.png
[05:30] <blippyp> ah
[05:30] <blippyp> I'm with ya
[05:32] <blippyp> actually this should be easy
[05:32] <blippyp> convert 1.png 2.png 3.png -evaluate-sequence mean processed/out.png
[05:32] <blippyp> oops
[05:32] <blippyp> you can turn that into a simple script like:
[05:32] <blippyp> convert $1 $2 $3 -evaluate-sequence mean processed/$1
[05:33] <blippyp> I assume you follow that
[05:33] <AGSPhoenix> sorta
[05:33] <AGSPhoenix> but it still has a few problems
[05:33] <blippyp> these will be off - they won't be '1,2,3,4,5,6', it will be 1,4,7,10 etc...
[05:33] <blippyp> right
[05:33] <blippyp> there's a fix for that
[05:33] <blippyp> they're still 'sequential' - ffmpeg can deal with that
[05:34] <AGSPhoenix> Also, how do I feed the filenames into the script 3 (or 33, or whatever) at a time?
[05:34] <blippyp> ah
[05:34] <AGSPhoenix> Hmm... $*?
[05:34] <blippyp> no - we'll come up with a fix for that
[05:35] <blippyp> you'll need a separate script for that part which will call that other command I just gave you
[05:35] <blippyp> give me a bit - to figure it out
[05:36] <AGSPhoenix> I'm beginning to smell the faint aroma of spaghetti code (or pipeline in this case), but okay
[05:36] <blippyp> no - this should be small and simple - but I don't know it off the top of my head - it will just take me a few minutes to weed out the details
[05:37] <blippyp> someone better at bash could probably help you quicker (I'm a programmer, but I hate programming in bash - it gives me headaches)
[05:39] <blippyp> how were you using xargs?
[05:41] <AGSPhoenix> oh sorry
[05:41] <AGSPhoenix> uh
[05:41] <AGSPhoenix> ls *.png | xargs -p -L33 -i convert {} -evaluate-sequence mean processed/out.png
[05:41] <AGSPhoenix> but it didn't work
[05:41] <relaxed> what are you trying to do?
[05:42] <AGSPhoenix> Take 33 files from stdin, feed them to convert to be blended, and output them
[05:42] <blippyp> AGSPhoenix: making a master file list then looping through them 3 at a time with a for loop - one command - I think I got it, just let me try it
[05:42] <blippyp> oh - you want to do 33 files at a time?
[05:42] <relaxed> ffmpeg can't do what you want?
[05:42] <blippyp> how?
[05:43] <AGSPhoenix> blippyp, any amount preferably
[05:43] <AGSPhoenix> relaxed, not that we can tell
[05:43] <relaxed> what is your goal?
[05:43] <AGSPhoenix> Well, aside from some horrific blend filter mess
[05:43] <AGSPhoenix> AGSPhoenix> Since my question is a few pages up now, I'll ask again: Can ffmpeg take an input video and blend multiple frames together so that each output frame is a blend of multiple input frames?
[05:43] <AGSPhoenix> <AGSPhoenix> For example, taking a 90 fps video and outputting a 30 fps video where each frame is a blend of 3 input frames
[05:45] <relaxed> oh, I'm not sure about that. convert can do it?
[05:46] <AGSPhoenix> Yeah, with lots of scripting and a few hundred GB of drive space
[05:46] <blippyp> frame by frame - he could use ffmpeg just as well - I have no idea which one would be faster or better for it though... I imagine they both pretty much use the same technique if not even the same basic library to do it....
[05:47] <blippyp> no - not lots of scripting - you'll see
[05:47] <blippyp> I wish I knew xargs better - there's probably an easier way
[05:49] <relaxed> ffmpeg's blend takes two frames
[05:50] <blippyp> you can blend as many as you want actually
[05:50] <blippyp> you just have to chain the layers right
[05:50] <blippyp> it would be much harder with ffmpeg though I think with the amount of frames he wants to do it with
[05:50] <AGSPhoenix> I could just go with a power of two
[05:51] <AGSPhoenix> Oh wait, I misread, disregard that
[05:51] <blippyp> no - convert is what you want I think
[05:51] <blippyp> you've already tested it and are happy with the output - and it's easy to work with.... I wouldn't change it
[05:51] <blippyp> I doubt you'd get much of a speed increase by using anything else
[05:51] <AGSPhoenix> I don't care about speed
[05:52] <AGSPhoenix> Well, as long as it takes less than 3 days
[05:52] <AGSPhoenix> That'd be pushing my patience
[05:52] <blippyp> depends on how many frames you have
[05:52] <AGSPhoenix> Could be a lot
[05:52] <AGSPhoenix> but we'll see
[05:52] <blippyp> I once did this kind of thing with convert - only I changed each frame of a video into a cartoon using an ImageMagick script I found online
[05:52] <blippyp> using three computers it took like a week for like 60 mins of video
[05:53] <AGSPhoenix> Good god
[05:53] <blippyp> but you're just doing simple overlays - should be MUCH faster
[05:53] <relaxed> isn't there a multithreaded convert?
[05:53] <blippyp> not sure
[05:53] <AGSPhoenix> Here, most of the time will be waiting for the process to start and load the files, not the blending
[05:54] <relaxed> I think it's a fork called GraphicsMagick
[05:58] <relaxed> If you know the command to blend three into one with convert I could probably script the rest.
[05:59] <AGSPhoenix> I'd prefer an arbitrary amount, instead of just 3, but yeah
[05:59] <relaxed> then use ffmpeg on the blended frames
[05:59] <AGSPhoenix> blippyp is already working on it, I think
[06:30] <blippyp> this looks pretty cool actually
[06:30] <blippyp> just one more step is needed
[06:30] <AGSPhoenix> oh?
[06:31] <blippyp> the number formatting on my output needs to be more specific - let me find an example for that then it should be ready...
[06:33] <AGSPhoenix> brb
[06:33] <blippyp> no hurry
[06:41] <blippyp> http://sprunge.us/acBX
[06:41] <blippyp> that will make your files
[06:42] <blippyp> run it like 'cmd filelist numberofframes'
[06:42] <blippyp> so run: ls pathtoimages/*.png > filelist
[06:42] <blippyp> then cmd filelist 33
[06:42] <blippyp> for 33 frames at a time
[06:42] <blippyp> it's not very fast though
[06:43] <blippyp> blame that on convert - not on the script... ;)
[06:43] <blippyp> now let me find that command to merge them easily for you (since they're not in sequential order like 1,2,3)
[06:44] <blippyp> I'm stupid
[06:44] <blippyp> http://sprunge.us/adKB
[06:45] <blippyp> there all done
[06:45] <blippyp> that will produce the files you want....
[06:45] <blippyp> alter the script as you want - when joining with ffmpeg you would need to use out%20d.png as the source name... look in the script and you'll easily see where to change that if you need to
[06:46] <blippyp> so ffmpeg would be: ffmpeg -i processed/out%20d.png -i audio.wav -c:v libx264 -crf 17 -c:a ac3 final.mkv or something like that....
[06:47] <AGSPhoenix> back
[06:48] <blippyp> sorry it took so long - I've been on a C++ programming sprint lately and haven't touched bash in a few weeks now...  :(
[06:49] <blippyp> I'm sure there's better ways of doing it - but I think that does what you want....
[06:50] <AGSPhoenix> let me test it
[06:50] <AGSPhoenix> Just gotta finish eating
[06:50] <blippyp> k - I don't know what the limits are - I use 3 frames at a time
[06:51] <blippyp> that would depend on convert i guess
[06:51] <blippyp> or your character limit at the prompt
[06:51] <AGSPhoenix> according to xargs, 24240
[06:51] <blippyp> so if you wanted to do LOTS of frames at a time I would keep the src files as short as possible
[06:51] <AGSPhoenix> Yeah
[06:52] <blippyp> that's quite a bit - but still a limit to consider...  ;)
[06:55] <AGSPhoenix> interesting. It starts to grind to a halt as it goes longer and longer
[06:55] <AGSPhoenix> I just changed the convert to an echo to test
[06:55] <blippyp> I know it's killing my system to do it
[06:56] <AGSPhoenix> I'm at 15,000 images, and each loop is taking about half a second
[06:56] <blippyp> that's weird
[06:56] <blippyp> maybe it's the fixed width of 20 that's killing it - that is rather high
[06:57] <blippyp> you can probably shrink that...  ;)
[06:57] <blippyp> depends on how many images you have
[06:57] <AGSPhoenix> uh
[06:57] <AGSPhoenix> I think you made a mistake
[06:57] <blippyp> if you have more than one computer to devote to the task you could break up the files into chunks easily enough
[06:57] <blippyp> how so?
[06:58] <AGSPhoenix> $lines never gets erased
[06:58] <blippyp> it was - maybe I deleted that line
[06:58] <blippyp> yeah I must have
[06:59] <blippyp> ouch - that would definitely kill things way more
[06:59] <AGSPhoenix> Oh god my scrollback buffer
[06:59] <blippyp> just put a lines="" after the convert command
[06:59] <blippyp> I know I did put that in at one point.... haha
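The pastes above have since expired; here is a minimal bash sketch of what the grouping script did, with the missing batch reset applied (the group size, file layout, and output naming are assumptions):

    #!/bin/bash
    # usage: ./blend.sh filelist 33   (filelist made with: ls frames/*.png > filelist)
    group=${2:-33}          # input frames blended per output image
    count=0; out=0; batch=()
    while read -r f; do
        batch+=("$f")
        if (( ++count == group )); then
            # average the current group into one sequentially numbered frame
            convert "${batch[@]}" -evaluate-sequence mean \
                "$(printf 'processed/out%06d.png' $(( ++out )))"
            batch=(); count=0
        fi
    done < "$1"

Any leftover frames that don't fill a final group are simply dropped, and the sequential out000001.png numbering is what lets ffmpeg rejoin the results with an image2 pattern.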
[07:00] <AGSPhoenix> Ah there we go
[07:00] <AGSPhoenix> runs in 7s on my 16k image test set
[07:00] <AGSPhoenix> echoing, anyway
[07:01] <blippyp> I'm re-running it now as well
[07:01] <blippyp> this is pretty slow though - I would use as many computers on this as you can
[07:02] <AGSPhoenix> Hmm...
[07:02] <blippyp> something else wrong?
[07:02] <AGSPhoenix> What I really need is a program like SrcDemo2, but more flexible
[07:03] <AGSPhoenix> It uses Dokan to present a folder that TGAs get written to, and it does the same thing your script does
[07:03] <blippyp> ffmpeg does the overlay WAY faster
[07:03] <AGSPhoenix> Of course
[07:04] <AGSPhoenix> It needs to be fast
[07:04] <AGSPhoenix> Actually, scratch what I said earlier, what I really need is an ffmpeg filter to do this
[07:05] <blippyp> well, we're kind of limited by the tools - especially if you want to join 33 frames at a time - you could build a filter for that with ffmpeg - but I honestly don't know if it would be much faster
[07:05] <blippyp> yeah
[07:05] <AGSPhoenix> Since that's where my problem started before getting yak-tipped into begging for scripts to control imagemagick
[07:05] <blippyp> imagemagick is great - but it's very slow
[07:05] <AGSPhoenix> How could it not be faster though?
[07:06] <AGSPhoenix> You'd eliminate the time it takes to start each instance of convert
[07:06] <blippyp> I don't know - I'm not an expert - but because you'd be running so many frames through (in such short sequences) you might be limited by the hard drive....
[07:07] <blippyp> because you'd still have to run a bunch of sequences through ffmpeg - just to build 1 image
[07:07] <AGSPhoenix> Well, my original source is a bunch of .avis with UTvideo
[07:07] <blippyp> then you'd have to join those images into a movie
[07:07] <blippyp> unless there's some filter already for this - but I haven't come across it
[07:07] <AGSPhoenix> I just used ffmpeg to dump each frame to PNGs so I could work with them via IM
[07:08] <blippyp> let me try with ffmpeg - I'll compare the speeds
[07:08] <AGSPhoenix> how are you testing it?
[07:08] <blippyp> what do you mean?
[07:09] <AGSPhoenix> how are you testing ffmpeg's blending speed?
[07:09] <blippyp> I'm going to time both
[07:09] <blippyp> all I have to do is modify the convert line in the script and it will do the same thing
[07:09] <blippyp> but I'm not going to get it to do 33 frames at a time - I have no desire to build the filter for that
[07:09] <blippyp> I'll do three
[07:11] <AGSPhoenix> As a side note, the reason I have 33 samples per frame is because the rendering system has an arbitrary limit of 999 frames per second, and 30*33 is 990
[07:11] <blippyp> so you don't care about dropping the frames to 90 or 60 first?
[07:12] <blippyp> it would definitely make your life much easier in the long run
[07:12] <AGSPhoenix> for tests yeah, but the final needs to handle much more
[07:12] <blippyp> ah
[07:12] <AGSPhoenix> Ideally, 256 samples per output frame, but maybe less if I can get away with it
[07:16] <AGSPhoenix> wait wait wait wait
[07:16] <AGSPhoenix> blippyp
[07:17] <blippyp> yeah
[07:17] <AGSPhoenix> When you say you're testing with filters
[07:17] <AGSPhoenix> Are you saying that ffmpeg can do basically the same thing IM is doing here?
[07:17] <blippyp> yes
[07:17] <blippyp> exactly
[07:18] <AGSPhoenix> Hmm... ffmpeg can do multiple outputs in a single run
[07:18] <AGSPhoenix> I have a terrible, horrible idea
[07:18] <blippyp> yes - but I'm only doing one
[07:18] <blippyp> with three inputs
[07:22] <blippyp> oh I think this is going to be much faster
[07:23] <blippyp> oh wait - no I screwed it up - still working on it
[07:36] <blippyp> okay - I'm done
[07:36] <blippyp> I do think it's faster - let me actually time them now
[07:37] <AGSPhoenix> Praise be!
[07:37] <blippyp> ya, I'm pretty sure this is faster
[07:37] <blippyp> I ran a test of 100 frames
[07:37] <blippyp> my system is slow so bear with me...  ;)
[07:37] <blippyp> the convert is going to take forever I think
[07:39] <blippyp> using ffmpeg - realtime was 29.804s
[07:41] <blippyp> not sure about this
[07:42] <blippyp> using convert - realtime was 1m8.596s
[07:42] <blippyp> guess so
[07:43] <AGSPhoenix> Oh damn
[07:43] <blippyp> over twice as fast with ffmpeg
[07:44] <blippyp> http://sprunge.us/YciY - ffmpeg version
[07:45] <blippyp> as you can see, there is more manual labor needed the more frames you wish to overlap at once
[07:46] <blippyp> you can also use different methods of overlaying if you wish - there are like a dozen different ones and you can choose percentages of overlap and all kinds of stuff, so you might want to play with the blend method
[07:51] <blippyp> oh, and although it doesn't show it in that script - when I compared using the convert method I also switched that to a fixed width of 6, so they were run identically as far as I know
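The ffmpeg-version paste has also expired; a sketch of the three-input idea using the blend filter (filenames assumed). Note that cascading pairwise averages weights the inputs 0.25/0.25/0.5 rather than computing a true three-way mean:

    ffmpeg -i 1.png -i 2.png -i 3.png -filter_complex \
        "[0:v][1:v]blend=all_mode=average[ab];[ab][2:v]blend=all_mode=average" \
        -frames:v 1 out.png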
[07:55] <AGSPhoenix> Well I'll tell you this, even the old convert version is fast enough that it finished in 34 minutes with my test dataset
[07:55] <quup> Trying to rescale images into 2x2 pixels for some analysis, but I get this error: sws: filterSize 708 is too large, try less extreme scaling or increase MAX_FILTER_SIZE and recompile
[07:56] <AGSPhoenix> Now I need to go render more
[07:56] <quup> using this line: -c:v libx264 -profile:v baseline -an -vf scale="-1:2"
[07:56] <Mavrik> ugh
[07:56] <quup> do I really need to recompile?
[07:56] <quup> (images=videos)
[07:57] <AGSPhoenix> possibly stupid question: Why not use scale=2:2?
[07:58] <blippyp> good question...  ;)
[07:58] <AGSPhoenix> Maintaining aspect ratio at that size isn't going to do much
[07:58] <blippyp> I still think it's too small though
[07:58] <blippyp> I've seen the same issue before
[07:58] <AGSPhoenix> Oh I get it
[07:58] <quup> AGSPhoenix: heh, no reason, just got started and that error hit me in the face
[07:58] <blippyp> I just used a bigger scale
[07:58] <quup> https://trac.ffmpeg.org/ticket/1852 seems like that is the issue
[07:59] <AGSPhoenix> It's trying to sample 708 pixels into one in the output, and that's too many
[07:59] <AGSPhoenix> I think
[07:59] <blippyp> no you're right
[07:59] <blippyp> it might work with 2x2 - it's worth a shot - and it's what he wants anyways
[07:59] <AGSPhoenix> If not, maybe chain multiple scales together?
[07:59] <blippyp> it's probably trying to do 0.0000322x2 instead as it is or something like that
[08:00] <quup> 2:2 fails too, and so does 10:10, but 20:20 works
[08:00] <AGSPhoenix> called it
[08:00] <blippyp> try 20x20
[08:00] <blippyp> then re-scale it again
[08:00] <AGSPhoenix> Scale to 20x20, then scale to 2x2
[08:00] <AGSPhoenix> damn!
[08:00] <blippyp> haha
[08:00] <blippyp> I called it!
[08:00] <blippyp> haha
[08:00] <blippyp> :P
[08:00] Action: AGSPhoenix blows raspberry
[08:01] <Mavrik> hmm
[08:01] <Mavrik> will x264 even encode 2x2 video
[08:01] <blippyp> I doubt it
[08:01] <Mavrik> isn't smallest unit something like 16x16
[08:01] <Mavrik> it'll just probably pad out the frame to 16x16 tho
[08:01] <quup> AGSPhoenix: double scale works well, thanks
[08:02] <AGSPhoenix> Aw yeah!
[08:02] <quup> -vf "scale=20:20,scale=2:2"
[08:02] <AGSPhoenix> *awesome rock solo*
[08:04] <quup> Mavrik: libx264 seems to be happy as long as it's divisible by 2
[08:04] <quup> (width that is)
[08:05] <blippyp> AGSPhoenix: I just looked at that image you posted (didn't even notice it until now)
[08:05] <Mavrik> quup, mhm, it pads the frame to 16 multiplier
[08:06] <AGSPhoenix> ...oh, the bus?
[08:06] <blippyp> to get that kind of outcome using the ffmpeg method would take LOTS of filtering - I'd just go with the convert method - it's slower, but MUCH easier in the end.... You can run as many frames as you want easily that way....
[08:06] <blippyp> ya - the bus
[08:07] <AGSPhoenix> Yeah, I know
[08:07] <blippyp> also - I'm not guaranteeing it will even look the same...  ;)
[08:07] <blippyp> I think it will - but I don't know
[08:07] <AGSPhoenix> I've half a mind to smash my head into the brick wall that is C++ to try and write a frameblend filter
[08:07] <blippyp> at least similar
[08:08] <blippyp> good luck with that...  ;)
[08:08] <blippyp> look into frei0r
[08:08] <blippyp> you can enable that with ffmpeg - they might even already have a filter for it
[08:08] <blippyp> there's another common library for ffmpeg too, but I forget it at the moment
[08:09] <blippyp> it might give you an idea for how to build what you want at least - it's all open source (or at least you can see the source)
[08:09] <blippyp> I don't know what license it's under
[08:10] <blippyp> but I do believe they have an overlay filter - so you can easily use that as a base
[08:10] <AGSPhoenix> Yeah, I'll look into it later. For now though, I've got something that works, and I need to get my renderer working above 1000 fps.
[08:10] <AGSPhoenix> Also, I need to sleep
[08:10] <AGSPhoenix> Ugh, it's 2 AM
[08:10] <blippyp> yup
[08:10] <blippyp> well good luck with that...  ;)
[08:10] <AGSPhoenix> Thanks for your help
[08:11] <blippyp> np
[08:11] <blippyp> it was fun :)
[08:11] <AGSPhoenix> The best work is
[08:11] <AGSPhoenix> Night
[08:11] <blippyp> ooh - the BEST!
[08:11] <blippyp> haha
[08:11] <blippyp> night
[08:11] Action: blippyp pats his back for a job well done!
[08:12] <blippyp> haha
[13:52] <holgersson> Hi all!
[13:53] <holgersson> I have a large mkv-file here and want to recode it to a way smaller one. The video stream is all black/white/grey only - can I improve compression at that point?
[13:54] <holgersson> In summary my main problem is that I don't know the proper keywords for searching for howtos etc.
[13:54] <holgersson> Could you give me some, please?
[13:56] <xharx> I have trouble recording from VHS. I use the command line: ffmpeg -f v4l2 -i /dev/video1 -f alsa -i hw:3 -vcodec libx264 -preset ultrafast -qp 30 -aspect:v 4:3 -acodec libmp3lame -ar 44100 -b:a 256k ~/DickePladde/nix2.mkv, and I get an error like in the paste: http://pastebin.com/7G0h7y8U How can I prevent this?
[15:53] <Lokie> hello. Are there any guides for encoding with ffmpeg (using libvpx) from variable frame rate sources?
[15:53] <Lokie> also about: -sub_charenc
[15:54] <Lokie> documentation that is
[15:54] <JEEB> I have no idea if the rate control supports VFR
[15:54] <JEEB> with libvpx that is
[15:54] <JEEB> otherwise encoding it into a container that can deal with timestamps should make it nice and workable
[15:54] <Lokie> i tried it using the normal flags, it came out the proper length but the video was slowing down and speeding up on random places
[15:55] <Lokie> can mp4 do that or just mkv?
[15:56] <JEEB> mp4, matroska, quite a few others base upon time stamps instead of a "frame rate" field
[15:56] <Lokie> ok, do u know of a guide i can use?
[15:56] <Lokie> explaining and maybe giving some examples
[15:57] <ubitux> http://trac.ffmpeg.org/wiki/vpxEncodingGuide
[15:57] <ubitux> Lokie: what's the matter with -sub_charenc?
[15:57] <Lokie> i started from that ubitux
[15:57] <Lokie> i don't recall it having anything about vfr but will check
[15:57] <Lokie> nor search helped
[15:58] <JEEB> VFR should just work if the timestamps in the input are correct :P
[15:58] <JEEB> and use an output container that handles them right
[15:58] <Lokie> doubt i will JEEB, i just have the source
[15:58] <Lokie> though there must be some way to create them right?
[15:58] <JEEB> what?
[15:58] <JEEB> you are not making a whole lot of sense here
[15:58] <Lokie> let me try again then
[15:59] <Lokie> i have only the input file, no idea if it has the timestamps you are referring to
[15:59] <Lokie> but i am wondering if it doesn't have them if there is some way to "extract" them from the input file
[15:59] <JEEB> if it's VFR and plays correctly, and if possible is not a 120fps hacky AVI file, the timestamps should live through the encoding chain just fine
[15:59] <Lokie> i see
[15:59] <JEEB> unless libvpx's wrapper does something homicidally stupid
[16:00] <Lokie> would such information appear in the output of ffprobe?
[16:00] <Lokie> i could pastebin it
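Per-frame timestamps do show up in ffprobe; a hedged one-liner to inspect them (the field was pkt_pts_time in builds of this era, pts_time in newer ones; the filename is an assumption):

    ffprobe -v error -select_streams v:0 -show_entries frame=pkt_pts_time -of csv input.mkv

If the gaps between consecutive values vary, the source is VFR.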
[16:01] <Lokie> ubitux i was receiving errors about the encoding of the srt subtitles and it said: maybe include -sub_charenc
[16:01] <Lokie> i googled a bit but found no documentation about that
[16:01] <ubitux> sub_charenc encoding (decoding,subtitles)
[16:01] <ubitux> Set the input subtitles character encoding.
[16:02] <Lokie> tried throwing it around the ffmpeg command using utf-8 and the output of file -bi
[16:02] <ubitux> http://ffmpeg.org/ffmpeg-all.html
[16:02] <ubitux> it's used to specify the input charset of your file
[16:02] <ubitux> because it's not in utf-8
[16:03] <Lokie> used the output of file -bi; that didn't work either, though I think it might have to do with the command line I used
[16:04] <RoyK> hi all. anyone that knows how I can use ffmpeg/avconv to garble audio files for (somehow) anonymizing them?
[16:04] <Lokie> ./ffmpeg -i file.mkv -c:v libvpx -crf 9 -b:v 3.5M -c:a libvorbis -vf -sub_charenc iso-8859-1 subtitles="file.srt" "/opt/no/file.webm"
[16:04] <ubitux> Lokie: aah in the -vf sub !
[16:04] <ubitux> well
[16:04] <Lokie> that's the line i used ubitux, RoyK that line has nothing to do with what u asked :P
[16:04] <ubitux> subtitles=file.srt:charenc=iso-8859-1
[16:05] <Lokie> oh
[16:05] <ubitux> -sub_charenc was if you were using -i file.srt
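Applied to Lokie's command above, ubitux's correction would look something like this (bitrate, crf, and filenames carried over from that line, so treat them as assumptions):

    ffmpeg -i file.mkv -c:v libvpx -crf 9 -b:v 3.5M -c:a libvorbis \
        -vf "subtitles=file.srt:charenc=iso-8859-1" /opt/no/file.webm

whereas -sub_charenc would sit before a subtitle input, e.g. ffmpeg -sub_charenc iso-8859-1 -i file.srt ...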
[16:06] <Lokie> i see
[16:06] <Lokie> i got hit with a :  Neither PlayResX nor PlayResY defined. Assuming 384x288
[16:07] <ubitux> yeah well, ignore that shit ;)
[16:07] <Lokie> i see google says it might make my subtitles too small
[16:07] <ubitux> mmh
[16:07] <JEEB> (basically your srt gets converted to ASS for rendering, and the converted ASS doesn't have those fields that usually mean how big stuff is rendered)
[16:08] <Lokie> stackoverflow has a possible resolution http://stackoverflow.com/questions/17750036/ffmpeg-didnt-burn-srt-subtitle-on-mkv-properly
[16:08] <JEEB> basically your video size will now be a grid of 384x288 for the subtitles
[16:08] <Lokie> yea
[16:08] <JEEB> and the font scaling etc depend on that
[16:08] <Lokie> though i specifically used the flag for srt
[16:08] <JEEB> generally I wouldn't think it would make things too small, too big possibly?
[16:08] <Lokie> mm
[16:08] <JEEB> Lokie, the rendering thing uses ASS internally
[16:09] <JEEB> srt gets converted to basic ASS, that gets fed to the rendering thingamabob
[16:09] <Lokie> how come i never received that warning with other srts that i doubt included that information
[16:09] <Lokie> can be luck ofc
[16:09] <JEEB> different versions? not using hardsubbing?
[16:09] <JEEB> etc etc
[16:10] <Lokie> i am using the same command since moment 0 :P but they just might have it included
[16:10] <Lokie> that's the logical answer
[16:10] <JEEB> srt doesn't have that info
[16:10] <Lokie> ::confused:: then
[16:10] <JEEB> so it might be a change in how the SRT gets converted to ASS, or something else
[16:10] <JEEB> loldunno
[16:10] <ubitux> i'm trying ffplay -f lavfi color=s=1280x800 -vf subtitles=$HOME/fate-samples/sub/SubRip_capability_tester.srt
[16:10] <ubitux> with different size
[16:10] <ubitux> and it renders mostly fine
[16:11] <JEEB> yeah, I'd only guess fonts get somewhat big (possibly) with the grid being smaller than the video resolution
[16:11] <Lokie> i used the first paragraph of : http://trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo
[16:11] <ubitux> we should really do something about all these subtitles…
[16:11] <ubitux> but no one wants to :(
[16:16] <Lokie> well, will try and see how the subs come out once/if I get my hands on a more powerful vps
[16:17] <Lokie> (home pc is windows)
[18:09] <Maverick|MSG> are there any known issues when using image2pipe with targa images?  It works fine with piped jpegs.
[18:09] <Maverick|MSG> http://pastebin.com/GSBUEbEh
[18:10] <Maverick|MSG> I've got image001.tga, image002.tga, image003.tga... all being piped into ffmpeg but it spits out a bunch of errors
[18:11] <sacarasc> Maverick|MSG: Does it work if you don't pipe them, but use them as files?
[18:11] <Maverick|MSG> it does, yes
[18:11] <Maverick|MSG> using image2 (rather than image2pipe)
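A common cause is that the image2pipe demuxer can't probe TGA the way it can JPEG (TGA lacks a leading magic number), so the decoder has to be named explicitly; a sketch, with the frame rate and output encoder as assumptions:

    cat image*.tga | ffmpeg -f image2pipe -c:v targa -framerate 30 -i - -c:v libx264 out.mkv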
[21:48] <GTAXL> How do I record a live HTTP dynamic stream that consists of f4f files?
[21:55] <blippyp> I'm not sure I'm understanding - I expect that you have a link for one of these?
[21:55] <blippyp> If so - post it so I can try
[00:00] --- Tue May  6 2014

