[Ffmpeg-devel-irc] ffmpeg.log.20140920

burek burek021 at gmail.com
Sun Sep 21 02:05:01 CEST 2014


[00:28] <goulard> Sokolio: as it turns out I was doing something very dumb
[00:28] <goulard> overwriting part of my packet
[00:28] <goulard> so as it turns out... it's ok to have one ADTS packet per PES
[00:29] <Sokolio> Ah it always comes down to this, believe me
[00:29] <Sokolio> Good luck
[00:30] <goulard> lol
[00:30] <goulard> thanks for the help
[00:30] <Sokolio> I was more a rubber ducky
[00:30] <Sokolio> than actual help, but thanks
[00:30] <goulard> haha
[11:57] <grepwood> How can I compute a deprecated variable AVCODEC_MAX_AUDIO_FRAME_SIZE?
[12:04] <santa1> Hi, trying to capture video from logitech c920 camera using 'ffmpeg -r 24 -s 1920x1080 -f v4l2 -vcodec h264 -i /dev/video1 -copyinkf -vcodec copy output.mp4' which captures well, but playback of captured video is faster (like in cartoons). Any input to overcome the faster playback?
[12:05] <grepwood> santa1, your framerate is spoofed?
[12:06] <santa1> grepwood: what do you mean?
[12:06] <santa1> It is pal standard 24fps, right?
[12:06] <grepwood> if your video is captured at X frames per second, and then you write in the stream header that the framerate is 2X, then your player will play it at 2X rather than X
[12:07] <grepwood> maybe you need to capture with -r 12?
[12:07] <santa1> grepwood: in that case how can one capture either to comply with NTSC or PAL standard?
[12:08] <grepwood> you can if your hardware supports it and it was configured for it
[12:08] <grepwood> I'm not exactly an expert on cameras, I don't know how you would accomplish that
[12:09] <grepwood> sorry :(
[12:09] <santa1> grepwood: Thanks for the useful input. Appreciate that! :D
[12:10] <grepwood> np :)
[12:12] <santa1> grepwood: Does one have to have the screen capture at the same rate as the video to synchronize?
[12:12] <santa1> I have specified -r 1 for the screencapture and -r 24 for the video.
[12:13] <grepwood> that could end up funny :p
[12:14] <santa1> grepwood: What I noticed is the same captured video from webcam is faster in mplayer, but normal when played in vlc! hmmm!
[12:15] <grepwood> plot thickens
[12:19] <relaxed> pal is 25 fps
[12:20] <relaxed> santa1: try with "-re" before the input
[12:25] <relaxed> er, scratch that
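As grepwood describes above, playback that is too fast usually means the frame rate written into the output does not match the rate the camera actually delivered. One way to tackle that for a v4l2 capture is to tell the v4l2 demuxer the rate up front, before -i. The sketch below reuses santa1's device path and 24 fps request and assumes the c920 can really deliver its native H.264 stream at that rate, so treat it as a starting point rather than a known-good command:

    ffmpeg -f v4l2 -framerate 24 -video_size 1920x1080 -input_format h264 -i /dev/video1 -c:v copy output.mkv

If the camera cannot actually sustain the requested rate (for example in low light), the stamped rate and the real rate will still disagree and the fast-motion effect can remain.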
[12:27] <Animedude5555> Anybody here? I have a question about dithering in FFMPEG.
[12:28] <Animedude5555> How does arithmetic dithering work? There's bayer dithering and error diffusion, but the first time I have ever heard of arithmetic dithering was here in FFMPEG.
[12:29] <Animedude5555> Can somebody explain the algorithm to me? I can't find anything about "arithmetic dithering" on Wikipedia, or any other website at all whatsoever. Did you guys just recently invent the algorithm or something? Please tell me how it works. I've been looking for a decent dithering algorithm to use in a graphics program I'm writing, and this looks like it might be the answer to my problem.
[12:30] <Animedude5555> There's two forms, an "addition" and an "xor" form. I'd love to have somebody here tell me just how this works.
[12:30] <relaxed> Animedude5555: maybe "man ffmpeg-scaler" will give you some hints
[12:31] <santa1> relaxed: Thanks, -re before the input didn't change anything, nor did the -bf values (tried with 0 to disable as well as -1 for auto) :-(
[12:32] <Animedude5555> What's "man ffmpeg-scaler"? If that is a document included only in the Linux version then it's useless to me, because I don't have that. I have the Windows version.
[12:32] <relaxed> Animedude5555: https://www.ffmpeg.org/ffmpeg-scaler.html
[12:33] <Animedude5555> a_dither says "arithmetic dither, based using addition". That's not at all helpful. What the heck is "arithmetic dithering"?
[12:33] <relaxed> santa1: pastebin the output of ffmpeg -i on your output.mp4
[12:34] <Animedude5555> Still, I don't know what "arithmetic dither" is. Can somebody here tell me? Or is it some sort of secret?
[12:35] <Animedude5555> Maybe FFMPEG trade secret?
[12:36] <Animedude5555> Anybody? Anybody know what "arithmetic dither" is? <Relaxed> are you still here? Hello?
[12:36] <JEEB> just go read the fine source code :)
[12:36] <JEEB> or find the author and try to pry that info from him :P
[12:38] <Animedude5555> It should be well documented in the software's online documentation, for those like me who may wish to implement it in their own software, especially since it is NOT DOCUMENTED ANYWHERE ON THE NET AT ALL (not even on the all-knowing Wikipedia). This leads me to think it is an invention of those working on FFMPEG, and a relatively recent invention too. But seriously SOMEBODY should have...
[12:38] <Animedude5555> ...reverse engineered it by now and posted it up online in some kind of unofficial documentation (at the very least).
[12:39] <JEEB> there's no need to reverse engineer anything
[12:39] <JEEB> it's right there in the source code, this is not proprietary software
[12:39] <JEEB> there might or might not be comments, too!
[12:39] <JEEB> I do agree that more documentation is a good thing, though
[12:40] <Animedude5555> Why is it not included on this wikipedia page? https://en.wikipedia.org/wiki/Dither
[12:40] <JEEB> possibly because it's something under another name compared to the wikipedia page's author's word selection
[12:40] <Animedude5555> It has absolutely EVERY kind of dithering, except FFMPEG's so called "arithmetic dithering".
[12:41] <JEEB> I just don't know, you go look at what it actually does or the comments in the source code, and you might find it out
[12:41] <JEEB> it's not the part of libav* that I touch :P
[12:42] <Animedude5555> Is "arithmetic dithering" a brand new type of dithering, never before seen in the world? In other words, is it literally a brand new invention of FFMPEG's developers over just the last couple of months?
[12:42] <JEEB> as I said
[12:42] <JEEB> it might just be something called a different way in that article
[12:42] <JEEB> it might be something not in that article
[12:42] <JEEB> I have no idea, those are just possibilities
[12:42] <JEEB> I mean, go check how Wikipedia confuses people with the aspect ratio article
[12:43] <JEEB> because many video formats and FFmpeg use SAR for the single sample's aspect ratio
[12:43] <JEEB> yet Wikipedia means something completely different with SAR
[12:43] <JEEB> or well, the author of that article
[12:44] <JEEB> because wikipedia is not a single entity and all that
[12:46] <relaxed> Animedude5555: http://pippin.gimp.org/a_dither/
[12:46] <relaxed> secrets found in the source
[12:47] <relaxed> see libswscale/output.c
[12:47] <JEEB> yes, which is why taking a look at the source is generally a Very Good Idea
[12:47] <JEEB> you can find links to references etc
[12:48] <Animedude5555> On the page you linked to he gives a function called "dither". It doesn't say which type of dither is being implemented in his example code. It does have a number of types of dithers listed near the top, but doesn't say which one is being used in his sample code. It could be any one of them.
[12:49] <Animedude5555> Which means I may still be missing an explanation to "arithmetic dithering".
[12:50] <JEEB> I will have to get a bit rude at this point and point you towards The Fine Source
[12:50] <JEEB> that way you should have no questions whatsoever
[12:51] <JEEB> there should be an a_dither implemented in there :P
[12:51] <Animedude5555> It's implemented in C. I'm good at VB, but not C. I read the FFMPEG C code and my head was spinning.
[12:52] <Animedude5555> If somebody just made a simple pseudocode example that could be translated easily into any programming language, that would be what I need.
[12:52] <JEEB> good luck getting spoonfed like that
[12:59] <Animedude5555> In the page you linked to, it has several parameters to the dither function. They are input, x, y, c, pattern, levels. Nowhere does it explain what the different parameters represent. X and y are obviously the coordinates of a pixel. Input probably means the brightness level of the current pixel (but it is not explicitly stated anywhere on that webpage). I have no clue what the parameter "c"...
[12:59] <Animedude5555> ...represents, nor what "pattern" represents, nor what "levels" represents. Apparently these last 2 are the constants "4" and "4". However I'm still confused about what "c" is. Where do I get this value, prior to passing it to the "dither" function? The page has a lot of stuff, but it is presented in a way that has almost NO EXPLANATION of what it is actually doing. This makes taking the...
[12:59] <Animedude5555> ...code as it is presented, and actually implementing it in anything, EXTREMELY DIFFICULT.
[13:00] <JEEB> feel free to go and poke the author, that's all I can think of :P
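For what it's worth, the "arithmetic" in a_dither/x_dither simply means the dither threshold is computed from the pixel position with a small arithmetic formula, instead of being looked up in a stored Bayer matrix or diffused to neighbouring pixels. The C sketch below is only an illustration of that idea: the constants are written from memory and may not match the A_DITHER/X_DITHER macros in libswscale/output.c exactly, x and y stand for the pixel position, and levels is the number of output levels per channel; FFmpeg feeds its own loop indices into the real macros.

    #include <stdint.h>

    /* "Arithmetic" ordered dither: the threshold mask is computed on the fly
     * from the pixel position. Constants are illustrative; check the A_DITHER
     * and X_DITHER macros in libswscale/output.c for FFmpeg's actual values. */
    uint8_t a_dither_mask(int x, int y)
    {
        return ((x + y * 236) * 119) & 0xff;            /* "addition" variant */
    }

    uint8_t x_dither_mask(int x, int y)
    {
        return (((x ^ (y * 237)) * 181) & 0x1ff) >> 1;  /* "xor" variant */
    }

    /* Quantize an 8-bit sample to 'levels' output levels, using the mask as a
     * per-pixel threshold. Dropping the mask gives plain nearest-color. */
    uint8_t quantize(uint8_t sample, int x, int y, int levels)
    {
        int v = (sample * (levels - 1) + a_dither_mask(x, y)) / 255;
        return (uint8_t)(v > levels - 1 ? levels - 1 : v);
    }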
[13:26] <santa1> relaxed: please find the ffmpeg output (http://pastebin.com/DefacsjV) of the file which plays faster in mplayer but normal in vlc. There is also a bit of intermittent random pause in the video which avidemux says is related with b-frames.
[13:28] <santa1> relaxed: got to go. Be back in another four hours time. Thanks for your support.
[13:59] <Animedude5555> Got a problem here. No matter what I set -sws_dither to, it always uses error diffusion.
[14:00] <Animedude5555> I'm trying to convert a series of BMP files to an animated gif.
[14:00] <Animedude5555> My commandline is:
[14:00] <Animedude5555> ffmpeg -f image2 -r 9.5 -i InputFrames\Frame%%04d.bmp -sws_dither none output.gif
[14:00] <Animedude5555> I also tried changing "none" to "0". It still doesn't work!
[14:02] <Animedude5555> It appears to always force error diffusion!
[14:03] <Animedude5555> Is the Windows version hard-coded to use error diffusion?!
[14:05] <relaxed> maybe -option sws_dither=none
[14:06] <relaxed> nope
[14:08] <Animedude5555> -option does not work.
[14:10] <Animedude5555> The people at Zeranoe appear to have compiled it for Windows, and it appears they may have changed some of the source code prior to compiling for Windows. So the Windows version is not just the main branch of the software, recompiled to run on Windows. It appears to be an entirely different branch of the software. I am assuming this based on the fact that using it based on its documentation does...
[14:10] <Animedude5555> ...NOT always work (as in this case with the dither).
[14:11] <Animedude5555> In effect, it is a "mod" of FFMPEG, not actually FFMPEG.
[14:12] <Animedude5555> Can you do a favor for me? If you have access to the Linux source, and a Windows compiler, can you please compile me a "pure" Windows version?
[14:16] <rcombs> uh... no, it's straight out of git
[14:18] <rcombs> your problem is that there is no "none" option for `-sws_dither`
[14:18] <Animedude5555> Then why is it not working with dithering?
[14:19] <Animedude5555> I also tried it with "bayer" and "a_dither" and "x_dither". It always looks exactly the same.
[14:20] <Animedude5555> The files output are identical (you can check the CRCs in programs that let you calculate the CRC of files).
[14:21] <Animedude5555> And those 3 things (bayer, etc) ARE valid, just look in the help file that is output by "-h full".
[14:21] <relaxed> -sws_flags lanczos -sws_dither a_dither
[14:22] <Animedude5555> But I don't want to resize it. Lanczos is for resizing.
[14:22] <Animedude5555> I want to change the dithering mechanism for conversion to gif.
[14:22] <Animedude5555> I don't want to change the dithering mechanism for resizing.
[14:23] <Animedude5555> I want to keep the same size, just convert to gif (for making an animated gif).
[14:23] <Animedude5555> Does it require I use the -sws_flags commandline switch, before it will recognize the -sws_dither commandline switch?
[14:23] <relaxed> I don't think so
[14:24] <relaxed> just -sws_dither a_dither works here
[14:24] <Animedude5555> So I am going to be FORCED to resize the image, if I want to select the dithering type?
[14:24] <Animedude5555> Try -sws_dither bayer
[14:24] <Animedude5555> You will quickly see that it doesn't work.
[14:25] <Animedude5555> It will process the image, and output a file, giving NO ERRORS AT ALL, but the output file will look as if the default dithering (error diffusion) has been used, when with a "bayer" dither, it should have an obvious repeating pattern appearance to the image.
[14:26] <Animedude5555> You can immediately tell that it is still using error diffusion, despite having selected "bayer" as the dithering type, because there will be no such repeating pattern.
[14:27] <Animedude5555> Do you have a fix for this? Is this a bug in the program?
[14:29] <Animedude5555> Hello?
[14:29] <Animedude5555> Are you still there?
[14:32] <relaxed> Animedude5555: ffmpeg -i input -vf scale=w=iw:h=ih:sws_dither=a_dither out.gif
[14:33] <relaxed> you're welcome
[14:34] <Animedude5555> What does "-vf" do? According to the internally generated help file with "-h full" the command line switch "-sws_dither" is supposed to be valid, such that I don't need -vf (followed by a bunch of stuff that I don't even know what it does).
[14:34] <Animedude5555> But strangely enough, -sws_dither does not work.
[14:34] <Animedude5555> Why?
[14:34] <Animedude5555> Did some programmer make an error when typing out his C code for FFMPEG?
[14:34] <relaxed> you need the scale filter
[14:36] <relaxed> -s WxH -sws_dither a_dither would probably work too
[14:36] <Animedude5555> The internally generated help file doesn't say I need to use a scale filter. It says that "-sws_dither" is a valid commandline switch. Is it not working as a stand-alone commandline switch, due to an error introduced into the FFMPEG software by one of the programmers?
[14:36] <JEEB> sws_dither as it reads is an swscale option
[14:37] <JEEB> swscale does not get included in your encoding chain by default
[14:37] <Animedude5555> Huh? What does that mean?
[14:37] <XHFHX> Hi there. I currently pipe an HD upscaled video with ffmpeg to ffmbc to create an xdcamhd file. I now want the final file to have 8 mono tracks instead of one stereo track. How can this be achieved? My current command looks like this: ffmpeg\bin\ffmpeg.exe -i ffmpeg\oasis1.mp4 -vf "scale=1920:1080" -f avi pipe: | ffmbc\ffmbc.exe -i pipe: -target xdcamhd422 -vtag xd5c test2.mov
[14:37] <JEEB> Animedude5555, instead of being a separate filter the dithering is within swscale, which is the do-it-all library that does colorspace conversions and resizing, among other things
[14:38] <JEEB> and -sws_dither is a valid swscale option
[14:38] <JEEB> if you have swscale actually working in your encoding chain
[14:38] <JEEB> by default it isn't plugged in there
[14:38] <JEEB> adding a scaling filter does that, unsurprisingly
[14:38] <JEEB> even if it scales to your input width and height
[14:39] <Animedude5555> But it appears that -sws_dither is being run internally during the conversion to gif, because a dither is most certainly being applied to the output.
[14:39] <Animedude5555> So I thought that meant that the -sws_dither commandline switch was automatically available to the user, when performing a conversion to a GIF file.
[14:40] <JEEB> could be something else, or in any case your option was taken in but it had no effect because while you were doing stuff swscale was not called :P
[14:40] <JEEB> or something like that
[14:40] <JEEB> good luck and have fun with swscale :P
[14:40] <JEEB> also it could even be the gif encoder doing extra dithering if needed
[14:41] <JEEB> I just don't know
[14:41] <Animedude5555> Can you pass this message onto the dev team for me? "Please include in the next version of FFMPEG, an option to set the dithering used on output files, particularly for GIF files."
[14:41] <JEEB> no, you do it yourself if you think something is wrong
[14:42] <JEEB> the trac is there for it
[14:42] <Animedude5555> Are you not part of the development team yourself?
[14:42] <JEEB> I have done some code for libav* but I keep the fuck away from swscale
[14:42] <relaxed> Animedude5555: I showed you the option to set dithering
[14:42] <JEEB> the dithering functionality is in swscale, and you need to use swscale for it to become available
[14:43] <JEEB> so relaxed's command line does exactly that
[14:43] <JEEB> you add a scale filter (which is swscale)
[14:43] <JEEB> and then set the scale size to input width and height, and add the sws_dither option there
[14:43] <JEEB> of course you could file a feature request for a separate dithering filter
[14:44] <JEEB> so you would do -vf dither_shit=type=a_dither
[14:44] <Animedude5555> I'm trying to avoid rescaling the image, using a command that specifies dither type only, and absolutely nothing for width or height.
[14:44] <Animedude5555> Is there such a command?
[14:44] <relaxed> -vf scale=w=iw:h=ih:sws_dither=a_dither
[14:44] <JEEB> well good luck then, since it's in swscale
[14:44] <JEEB> you have to scale
[14:45] <JEEB> iw and ih set the output to be the same as input, though
[14:45] <JEEB> for width and height
[14:45] <relaxed> you don't actually scale in the above command
[14:45] <relaxed> it passes through
[14:45] <JEEB> yes, most sane scalers just skip the scaling part for that
[14:45] <oomkiller> mediainfo file 1: http://pastebin.com/Cwz614tG , mediainfo file 2: http://pastebin.com/DNYfNe9W , my command    ffmpeg -fflags +genpts -i VTS_01_1.VOB -i VTS_01_2.VOB -ss 00:00:00 -to 01:06:19 -vcodec hevc -x265-params crf=20 -sn -acodec ac3 -map 0:0 -map 0:1 -map 0:2 e01.mkv            the error I get:   Data stream encoding not supported yet (only streamcopy) . Can someone help me?
[14:46] <oomkiller> if I add -dn it works but then I only get one audio stream
[14:46] <Animedude5555> But then I'm still having to specify that output width = input width, and that still means I'm filling in height and width parameters. Is there a way to get swscale to just skip asking me about height and width, and accept the desired dithering as the only argument that I plan to provide to it?
[14:47] <relaxed> Animedude5555: "JEEB : iw and ih set the output to be the same as input, though"
[14:47] <relaxed> can you read?
[14:48] <Animedude5555> I just found out that there is. I tried this "-vf scale=sws_dither=bayer" and it seems to work.
[14:49] <Animedude5555> This way it isn't even filling in a "width argument" and "height argument" with the image's own width and height. It skips to the important part, of telling it the type of dithering you want to do.
[14:49] <JEEB> it is filling it in inside
[14:49] <relaxed> it's the same exact thing
[14:49] <JEEB> it just happens to be the default
[14:49] <JEEB> which of course is good since it makes the line shorter
[14:49] <Animedude5555> Ok.
[14:50] <JEEB> I of course have no idea if it is the default or not, but it sounds like that from your comments
[14:50] <JEEB> and if it is, great
[14:50] <JEEB> also for gif you probably want to set the output pix_fmt to rgb8
[14:52] <Animedude5555> Not sure why a chain of "=" signs does what it does anyway, but it works. It would seem that scale=sws_dither=bayer would be the same as saying separately "scale=bayer" and "sws_dither=bayer", but apparently the second "=" sign has a different meaning than the first. This is kind of confusing, but seems to be very important when it comes to applying filters. Can someone explain it to me?
[14:53] <JEEB> welcome to the libavfilter syntax :P
[14:53] <JEEB> another part of libav* that I shall never touch
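To spell out the syntax point: in libavfilter the first "=" attaches an option string to a filter and ":" separates name=value pairs, so scale=sws_dither=bayer means "the scale filter with its sws_dither option set to bayer", not two separate assignments. Put together with JEEB's rgb8 hint, the conversion would look roughly like the line below; this is assembled from the suggestions in this thread rather than tested here, and the -pix_fmt rgb8 part may be unnecessary depending on what the gif encoder picks by default:

    ffmpeg -f image2 -r 9.5 -i InputFrames\Frame%%04d.bmp -vf "scale=w=iw:h=ih:sws_dither=a_dither" -pix_fmt rgb8 output.gif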
[14:53] <rcombs> oomkiller: post the full output of the ffmpeg command that fails
[14:53] <rcombs> (in a pastebin, please)
[14:55] <oomkiller> rcombs: http://pastebin.com/TVW5R4TQ
[14:56] <Animedude5555> it also appears to work to group them in more logical units like -vf scale="sws_dither=bayer" where you are basically saying to take the command "sws_dither=bayer" and pass it to the filter "scale".
[14:57] <rcombs> Animedude5555: except those quotes are removed by your shell
[14:57] <rcombs> oomkiller: you specified -map 0:0
[14:57] <rcombs> oomkiller: that stream is a data stream
[14:58] <Animedude5555> I'm typing in the Windows command prompt, not the Linux shell.
[14:58] <oomkiller> rcombs: oh lol yes you're right
[14:59] <rcombs> well, cmd.exe runs on insanity and crack, so good luck there
[14:59] <Animedude5555> I think that the text in the command prompt in Windows is passed literally to the program being run, and it is up to the program to handle the quote marks.
[14:59] <oomkiller> rcombs: thx
[14:59] <rcombs> oomkiller: also, you might want to do `-map 0:#0x1e0` and similar instead
[15:00] <rcombs> oomkiller: the stream indexes (0, 1, 2, ...) in VOB files are arbitrarily defined, and may change between releases, whereas the IDs (#0x1e0, #0x80, ...) are defined by the file and won't change
[15:00] <oomkiller> rcombs: ah ok I will do, thx
[15:01] <rcombs> it shouldn't make an actual difference in your output here, but I prefer to do it in VOBs and similar
[15:03] <oomkiller> rcombs: why do I need the # ?
[15:03] <XHFHX> Hi there. Can someone help me with this error message? http://pastebin.com/MFTfKuvs
[15:05] <Animedude5555> Is it possible to force no dithering?
[15:06] <Animedude5555> I want it to use "nearest color" only in my animated GIF. That's what I've been trying to achieve. However I can't figure out how. Now that I figured out how to specify various types of dithering, I'd like to be able to specify the type of dithering that is in fact NO DITHERING. Is there a way to do this?
[15:08] <rcombs> oomkiller: specifies that you want a stream ID and not an index
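Folding rcombs' advice back into oomkiller's original command gives something along these lines; 0x1e0 and 0x80 are the IDs rcombs mentioned (typically the first video and first AC-3 stream in a DVD program stream), but the real IDs for these particular VOBs should be read off the ffmpeg -i output, and the second audio stream's ID is left out because it is not known from the log:

    ffmpeg -fflags +genpts -i VTS_01_1.VOB -i VTS_01_2.VOB -ss 00:00:00 -to 01:06:19 -map 0:#0x1e0 -map 0:#0x80 -vcodec hevc -x265-params crf=20 -acodec ac3 -sn e01.mkv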
[15:08] <rcombs> XHFHX: I think it's pretty cut-and-dry
[15:09] <rcombs> XHFHX: you need to specify a frame rate with -r before -target
[15:09] <XHFHX> yeah, it seems so. but which frame rate should I choose? I'm not that much into video conversion
[15:09] <rcombs> though I don't quite understand why you're piping ffmpeg to ffmpeg
[15:09] <XHFHX> piping ffmpeg to ffmbc
[15:09] <Animedude5555> I want it to use "nearest color" only in my animated GIF. That's what I've been trying to achieve. However I can't figure out how. Now that I figured out how to specify various types of dithering, I'd like to be able to specify the type of dithering that is in fact NO DITHERING. Is there a way to do this?
[15:11] <rcombs> XHFHX: well, ffmbc is not ffmpeg, so it's not directly supported here
[15:11] <rcombs> XHFHX: but the frame rate you should use generally depends on what you intend to use the output for
[15:11] <XHFHX> so the output, not what the input is?
[15:12] <XHFHX> because i don't understand when i pipe a single file i dont have to enter the framerate, but when i pipe something ffmpeg has created i have to set the framerate
[15:12] <Animedude5555> Can you help me set FFMPEG for nearest-color mode? I want to disable dithering on my GIF output.
[15:12] <rcombs> XHFHX: apparently (it's an ffmbc error, and I don't know that software, so I'm just guessing reading the error message)
[15:12] <Animedude5555> Can  you see my posts?
[15:15] <XHFHX> ok, thanks rcombs: i now set -r pal and try some tests with different files how this works out! :)
[15:15] <rcombs> XHFHX: cool!
[15:16] <XHFHX> btw, is there a reason why ffmbc isn't combined with ffmpeg?
[15:18] <rcombs> it's by different people
[15:18] <rcombs> probably just uses ffmpeg's libs
[15:21] <Animedude5555> Is there a way to make it so FFMPEG does not use any dithering when converting to gif?
[15:21] <Animedude5555> Please let me know.
[15:21] <Animedude5555> I want to set it to just use "nearest color".
[15:24] <vlatkozelka> hi , if i want to scale and overlay an icon to a video that would be -filter_complex " [1:v] scale=20:20 ; [0:v][1:v] overlay=10:10 " right ?
[15:25] <vlatkozelka> the overlay is working perfectly  but the scaling isnt , the icon comes out in its native size
[15:25] <rcombs> vlatkozelka: you're not using the output of the scale filter
[15:25] <vlatkozelka> i dont understand that sorry
[15:25] <rcombs> you're just taking the same input as the scale's input
[15:26] <rcombs> you want something like this: " [1:v] scale=20:20 [scaled] ; [0:v][scaled] overlay=10:10 "
[15:26] <vlatkozelka> ah
[15:26] <vlatkozelka> scaled is like a variable name ?
[15:27] <rcombs> yeah, similar
[15:27] <vlatkozelka> explains all these examples ive been reading
[15:27] <vlatkozelka> didnt know u can do that
[15:27] <rcombs> now you know! :D
[15:28] <vlatkozelka> thx alot that worked perfectly :)
[15:28] <vlatkozelka> with a video too
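For anyone searching later, the full form of the command rcombs describes looks like this; main.mp4 and icon.png are placeholder names, and the [scaled] label simply names the scale filter's output pad so that overlay consumes the resized icon instead of the untouched [1:v]:

    ffmpeg -i main.mp4 -i icon.png -filter_complex "[1:v] scale=20:20 [scaled] ; [0:v][scaled] overlay=10:10" out.mp4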
[16:50] <vlatkozelka> i have video1 and video2 , im scaling video2 and overlaying it onto video1 ... video2 is a short 5 sec video , but when it ends it doesnt disappear , how do i do that ?
[16:51] <c_14> https://ffmpeg.org/ffmpeg-filters.html#overlay-1
[16:51] <c_14> look at eof_action
[16:51] <vlatkozelka> command im using : -filter_complex "[1:v] scale=50:50 [sc] ; [0:v][sc] overlay=10:10"
[16:52] <vlatkozelka> ok
[16:52] <vlatkozelka> btw where does -loop go ?
[16:52] <vlatkozelka> if i want a gif to loop
[16:53] <oomkiller> I'm getting an error with concatenating two files: http://pastebin.com/BeapSpqn , I guess the problem is the difference in the two video streams. how can I fix that?
[16:54] <vlatkozelka> thanks c_14 :)
[16:54] <vlatkozelka> added :eof_action=pass
[16:55] <c_14> vlatkozelka: I think it's output
[16:55] <vlatkozelka> ah
[16:55] <vlatkozelka> so if i want to loop a gif i could use eof_action:repeat
[16:55] <vlatkozelka> ill give it a try
[16:55] <c_14> vlatkozelka: nah, repeat won't work
[16:55] <c_14> That just repeats the last frame.
[16:55] <c_14> It's the default anyway.
[16:56] <vlatkozelka> yeah
[16:56] <vlatkozelka> thats what happened indeed
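The variant vlatkozelka ended up with, for reference: eof_action=pass makes the overlay disappear (the main video shows through) once the short second input ends, whereas repeat, the default, keeps showing its last frame:

    -filter_complex "[1:v] scale=50:50 [sc] ; [0:v][sc] overlay=10:10:eof_action=pass"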
[16:57] <c_14> oomkiller: You probably at the very least have to get the SARs to match up for both streams; you also might need to have the sizes be equal, I'm not sure if/how x265 handles size changes in the middle of the stream.
[17:00] <oomkiller> c_14: I'll try to convert each file to x265 first and then try to concatenate them.
[17:01] <c_14> I'd probably just prepend 2 scale filters into the beginning of the filterchain to give the streams square pixels.
[17:38] <eldome> hello all, guys. Is there someone i can ask about linux versions? (the one i installed from jon severinsson's launchpad)
[17:45] <c_14> Just ask, if someone can help you they will.
[17:46] <eldome> straightforward, so ;) Ok
[17:47] <eldome> is there any chance to see the rotate filter back?
[17:48] <c_14> hmm?
[17:49] <eldome> i need to rotate an overlaid set of frames. Need to obtain an arbitrary rotation
[17:49] <eldome> i cannot run a command withe the rotate filter on the latest version
[17:49] <eldome> and actually an ffmpeg -filters command doesn't list the 'rotate' filter
[17:50] <c_14> Can you pastebin the output of ffmpeg -version?
[17:50] <eldome> sure
[17:50] <eldome> just a sec
[17:51] <eldome> ffmpeg version 1.2.6-7:1.2.6-1~trusty1 built on Apr 26 2014 18:52:58 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1) configuration: --arch=amd64 --disable-stripping --enable-avresample --enable-pthreads --enable-runtime-cpudetect --extra-version='7:1.2.6-1~trusty1' --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --enable-bzlib --enable-libdc1394 --enable-libfreetype --enable-frei0r --enable-gnutls --enable-libgsm --enable-libmp3lame --enable-lib
[17:52] <eldome> sorry, cut away something... do you need the complete output or is this enough?
[17:52] <c_14> That's enough.
[17:54] <eldome> (just saw the pastebin thingie. sorry, kinda n00b: cannot use it :| )
[17:54] <c_14> The rotate filter was added to master 3 (or 4) months after that version of FFmpeg was split from master.
[17:55] <c_14> You'll need a newer version of ffmpeg.
[17:55] <c_14> (Which is always a good idea anyways)
[17:55] <eldome> that's a step.... thank you. Well i also found newer versions
[17:55] <c_14> You can probably just use a static build.
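Once a build that has the filter is in place, arbitrary rotation looks roughly like the line below; the angle, fill colour and file names are placeholders, and rotating an overlaid set of frames works the same way with the rotate stage inserted into the larger filtergraph:

    ffmpeg -i in.mp4 -vf "rotate=a=30*PI/180:c=black" out.mp4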
[17:56] <eldome> a static, right
[17:56] <eldome> but the same command produced a horribly wrong result
[17:56] <eldome> dropping almost the whole set of frames
[17:57] <eldome> can i bother you with the command and the results?
[17:57] <eldome> (on pastebin, i swear :D )
[17:57] <c_14> Ye, sure.
[17:59] <eldome> http://pastebin.com/DYeAJm8W
[17:59] <eldome> here's the command
[18:00] <eldome> http://pastebin.com/wMJCCTAC
[18:00] <eldome> and there's the output
[18:01] <eldome> there always is an I/O error on the base frame (but it seems to not affect the reading of that frame)
[18:01] <eldome> and it always processes 2 frames, dropping the remaining 173
[18:01] <eldome> (the sets are both made of 175 640x640 pngs)
[18:03] <c_14> Ok, there's a few things I notice right off the bat.
[18:04] <c_14> wait
[18:04] <eldome> waiting and thanks in advance :)
[18:05] <c_14> For the sake of my sanity I'm going to convert that filtergraph into a complex filtergraph with multiple inputs.
[18:06] <eldome> do whatever you want... if you pop up with a solution, i'll adopt your way and build a commemorative monument to your person ;)
[18:08] <c_14> You do know that x264 ignores -qscale, right?
[18:09] <eldome> i know... there is some dirt left over from old tries. I guess it shouldn't impact the whole thing
[18:10] <c_14> Yeah, just noticing things while I go over the command.
[18:10] <eldome> fair enough :)
[18:10] <oomkiller> c_14: converting it first to x265 before concatenating, didn't work. How does this work with 2 scale filters you mentioned?
[18:11] <c_14> eldome: http://ix.io/epo < try that
[18:13] <c_14> oomkiller: [input]scale=iw*sar:ih[out] < something like that for each of the streams somewhere at the beginning of the filtergraph
[18:13] <c_14> you might need to append a setsar=1 after each of those (before the [out]), not sure though
[18:16] <eldome> Output error: No such filter: ''
[18:17] <eldome> (On a side note, the command i pasted works on a windows version: ffmpeg version N-61570-gaa86ccc built on Mar 17 2014 22:06:38 with gcc 4.8.2 (GCC) )
[18:18] <oomkiller> c_14: well that gives me the error: Too many inputs specified for the "scale" filter. my command right now: http://pastebin.com/JTzxR1hY
[18:21] <eldome> (added the second set, removed the commas after [1] and [2], but... same old result: 173 frames dropped :( )
[18:21] <eldome> (Oh well, 347 frames dropped, actually :O )
[18:23] <c_14> oomkiller: You have to run the scale filter as a separate filterchain with the [input] being the first video stream and again with [input] being the second video stream. Make sure [out] is a different name both times and then take those names and give them as input to the concat filter
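Written out, that suggestion would make the filtergraph look roughly like this; the audio stream specifiers and the trailing setsar=1 are assumptions (the setsar may not be needed, as c_14 says), and the encode options stay whatever was being used before:

    -filter_complex "[0:v]scale=iw*sar:ih,setsar=1[v0];[1:v]scale=iw*sar:ih,setsar=1[v1];[v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]"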
[18:25] <c_14> eldome: The [2] should be a [1]
[18:25] <c_14> and yes, the commas shouldn't have been there
[18:25] <c_14> I forgot to get rid of them
[18:26] <c_14> The now modified command works for me.
[18:27] <c_14> It produces output at least.
[18:29] <eldome> it does, but it's a video with 2 frames
[18:29] <eldome> 0 sec duration
[18:29] <eldome> (it always did, sorry for being unclear)
[18:36] <c_14> Ah, ok.
[18:36] <c_14> But it worked correctly with that Windows build?
[18:37] <eldome> yes it did
[18:37] <eldome> (searching for that version's source to compile on linux, btw)
[20:06] <JaredBusch> any guides for setup on CentOS7 ? My Google skills are failing.
[20:21] <c_14> https://trac.ffmpeg.org/wiki/CompilationGuide/Centos maybe?
[22:54] <Animedude5555> I still have a question about dithering in FFMPEG.
[22:54] <Animedude5555> I know now how to set the dithering with -vf scale=sws_dither=NameOfDitherType
[22:55] <Animedude5555> However, I still can't figure out how to set it to do no dithering. I want to set it just to use "nearest color".
[23:03] <relaxed> Animedude5555: Try -vf scale=sws_dither=0
[23:04] <Animedude5555> Tried it, but it seems to automatically use ED dithering anyway.
[23:06] <relaxed> then it might not be possible, file a feature request and maybe they'll add the option
[00:00] --- Sun Sep 21 2014

