[Ffmpeg-devel-irc] ffmpeg.log.20140105

burek burek021 at gmail.com
Mon Jan 6 02:05:01 CET 2014

[00:01] <rolf> I have a problem with an HD movie which I get converted to .mp4 under linux but not under Windows 7 64 Bit. This is the problem: http://pastebin.com/izDWUCMW
[00:09] <Mavrik> rolf, I'm rather sure that vo_aacenc audio encoder does not support 5.1 audio
[00:09] <Mavrik> you probably have different versions of ffmpeg and they choose different audio streams to transcode
[00:09] <Mavrik> since your input has 3 different audio streams, two stereo and one 5.1
[00:10] <rolf> i do not need this 5.1, I only need stereo
[00:11] <Mavrik> well then use one of the stereo inputs as output
[00:11] <Mavrik> or tell ffmpeg to remix audio to stereo from 5.1
[00:11] <Mavrik> depending what you want
[00:11] <Mavrik> see documentation on "-map" parameter.
[00:12] <rolf> -map   under windows? I am at the moment on my linux, is the output here the same?
[00:14] <rolf> ok, ffmpeg -help under linux tells:   Advanced options:
[00:14] <rolf> -map file.stream[:syncfile.syncstream]  set input stream mapping   , I will try to understand what to do.
[00:23] <BeWilled> how do I improve the quality of the jpg file output using this command -filter:v scale=150:-1 -vframes 1 -codec:v mjpeg
[00:23] <BeWilled> right now I am getting a 4 kb jpeg file
[00:23] <relaxed> BeWilled: -q:v 1
[00:24] <relaxed> ^^ should give you the highest quality
[00:24] <relaxed> the -q:v scale goes from 1 to 31
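Putting relaxed's suggestion together with BeWilled's filter chain, a minimal sketch (testsrc only stands in for the real video, and the filenames are illustrative):

```shell
# create a short synthetic clip to stand in for the real input
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=2:size=640x480:rate=25 clip.mp4
# grab one frame, scaled to 150px wide, at the highest MJPEG quality
# (-q:v 1; the scale runs from 1 = best to 31 = worst)
ffmpeg -loglevel error -y -i clip.mp4 -filter:v scale=150:-1 -vframes 1 -codec:v mjpeg -q:v 1 out.jpg
```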
[00:31] <rolf> Mavrik, thank you very much, it is running on windows now. I have these options:   -map 0:1  -ar 48000
[00:32] <rolf> but I have an AMD PhenomII X4 and ffmpeg is using only one core, is there an option to use all 4?
[00:32] <relaxed> -threads 4
[00:32] <relaxed> although a recent ffmpeg should use them all by default
[00:33] <BeWilled> thank you relaxed
[00:33] <rolf> relaxed, thank you, I will try
[00:34] <relaxed> you're welcome
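For completeness, the threading flag relaxed mentioned looks like this (a sketch with a synthetic lavfi input standing in for rolf's file; recent ffmpeg builds pick a sensible thread count automatically):

```shell
# explicitly ask the encoder to use 4 threads
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -threads 4 -c:v libx264 threaded.mp4
```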
[00:43] <BeWilled> need help with this error while extracting images from flv http://pastebin.com/KAxvxfhZ
[00:44] <relaxed> I see no error. Always provide the command in your pastebin.
[00:45] <BeWilled> muxing overhead -100.000000%, it's -filter:v scale=250:-1 -vframes 1 -codec:v mjpeg -q:v 1
[00:46] <relaxed> what is the complete command and what's your goal?
[00:47] <BeWilled> ffmpeg -ss 20 -i input.flv -filter:v scale=250:-1 -vframes 1 -codec:v mjpeg -q:v 1 output.jpg
[00:47] <rolf> with -map 0:1 the generated .mp4 has only the german stereo sound.  I do not understand how to set correct options for -map .
[00:48] <BeWilled> I'm successfully extracting jpgs from flv files in a batch process... but seem to be getting this error on a specific file
[00:49] <rolf> or is this done with two steps? first to get the audio and then the rest?
[00:49] <relaxed> you only want one frame? does that command output an image?
[00:50] <BeWilled> yes
[00:50] <relaxed> rolf: pastebin ffmpeg -i yourinput
[00:50] <relaxed> BeWilled: there were two questions there
[00:50] <relaxed> hurry, because I'm heading out for pints
[00:50] <BeWilled> yes, and yes
[00:50] <relaxed> great, then what's the problem?
[00:51] <BeWilled> muxing overhead -100.000000%
[00:51] <relaxed> why are you hung up on that?
[00:52] <relaxed> there's no error- move on
[00:52] <relaxed> do you know how to check the exit status of a command?
[00:52] <relaxed> to see if there was an error?
[00:53] <BeWilled> no
[00:53] <relaxed> echo $?
[00:53] <relaxed> if it's zero there was no error
[00:54] <BeWilled> ok
[00:54] <relaxed> or:  ffmpeg -i .... || echo "Houston, we had a problem"
[00:54] <relaxed> ffmpeg makes it very clear when there's actually an error
[00:55] <relaxed> And for more precise seeking you may want to move the -ss after the input. It will be a little slower though.
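The two points relaxed makes here, checking the exit status and placing -ss after the input for frame-accurate seeking, can be sketched as follows (testsrc stands in for the real file; filenames are illustrative):

```shell
# synthetic input standing in for the real video
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=3:size=320x240:rate=25 in.mp4
# -ss placed AFTER -i seeks by decoding up to the timestamp: slower, but precise
ffmpeg -loglevel error -y -i in.mp4 -ss 1 -vframes 1 -q:v 1 out.jpg
echo $?   # 0 means ffmpeg exited cleanly, whatever the "muxing overhead" line said
# or inline, relaxed's one-liner style:
ffmpeg -loglevel error -y -i in.mp4 -ss 1 -vframes 1 -q:v 1 out2.jpg || echo "Houston, we had a problem"
```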
[00:55] <relaxed> rolf: waiting for pastebin..
[00:56] <relaxed> nevermind scrolled up. you want the french audio?
[00:59] <rolf> yes , one moment
[01:00] <rolf> http://pastebin.com/YUaTBCMY
[01:00] <relaxed> ok, what is your goal?
[01:01] <rolf> to get video and audio combined in one .mp4, audio only german stereo
[01:03] <relaxed> you want german?
[01:03] <rolf> I think I want to have Stream #0:0 and Stream #0:1
[01:03] <rolf> yes german
[01:04] <relaxed> ffmpeg -i input -map 0:0 -map 0:1 -c copy -f mpegts output.ts
[01:29] <rolfgerman> http://pastebin.com/PZqMQcj1      I think I solved my problem, it is on step 2, tomorrow I will see if the .mp4 is OK.   Thank you very much, relaxed, for your help.
[01:46] <average> is it possible to draw an arrow with ffmpeg on a video ?
[01:46] <average> I know drawtext is possible and I'm using it
[01:47] <sacarasc> You can overlay images, I don't know how myself, but I know it can be done.
[01:48] <sacarasc> Using the overlay filter, I think.
[01:48] <average> I could do it with imagemagick: separating video and audio, superimposing the image over frames in the video, reassembling the video, then merging video/audio again, but that would be tedious
[01:48] <average> well, yeah, the overlay filter, but that would require all this disassemble/reassemble thing. I don't mind it that much, I did it for a sox-based noise-reduction, it's fine but.. it's a bit tedious
[01:49] <average> I was hoping that since ffmpeg has drawtext it might also have some sort of "drawarrow" somewhere..
[01:49] <average> of course that is just a hope.. and I don't have anything that leads me to believe that such a thing actually exists
[01:49] <konflict> Hi, I'm currently trying to solve a timecode sync issue. I'm piping two SDI streams from a BlackMagic Decklink into ffmpeg to capture two DNXHD mxf files. I need to get identical timecodes on both clips, based on either system time or some external source, for later "frame precise" editing. So far I wasn't able to find any solution. Could anyone help?
[01:51] <average> konflict: what's a timecode ?
[01:51] <average> konflict: sorry I'm like.. clueless on that topic..
[01:55] <average> konflict: is timecode that piece of information that lets the video player know how big the video is and stuff like that ?
[01:55] <average> like the current second etc
[01:55] <average> ?
[01:55] <konflict> No worries, thanks for the feedback in the first place. Well, it's basically time information for each individual frame of the video. It's significant, say, when you want to sync video and audio from separate sources; if both of them have timecode you can sync them perfectly.
[01:56] <average> how can you check if both sources have timecode ?
[01:57] <konflict> well most of the NLEs like premiere, final cut or media composer support this feature as it's crucial for multicam editing
[01:57] <konflict> in my scenario my sources don't have any, but I want to add real time as my reference timecode
[01:57] <average> also, have you seen this ? http://stackoverflow.com/questions/18253340/keep-timecode-in-ffmpeg/18259493#18259493
[01:57] <konflict> if that makes sense :)
[01:58] <average> options -copyts or -copytb <mode> with <mode> one of { 1 , 0 , -1 }
[01:58] <average> what you wrote makes sense
[01:59] <average> moreover, I also need it for one of my things because it seems that somewhere across the post-processing I mess up my timecodes and the video player does not know how long the video is anymore
[02:00] <average> and doesn't know how to seek properly in the video
[02:00] <average> so I guess that's timecode-related too right ?
[02:01] <konflict> well I will certainly give this a try. You see, even without timecode, syncing two camera feeds would be a piece of cake. Just find a reference point like a light flash or something audible. But I want to capture between 4 and 12 streams for a multicam edit, and that would be very time consuming and, what's worse, probably inaccurate
[02:02] <average> konflict:  then maybe you will have to inject some sort of external signal so that you can check for it in post-processing ?
[02:02] <average> for example a red-dot somewhere
[02:03] <average> konflict: but your cameras are filming the same place/area but from different angles. is that correct ?
[02:03] <konflict> exactly
[02:04] <relaxed> average: you could overlay a transparent png of an arrow or try to use drawtext with a unicode arrow, www.alanwood.net/unicode/arrows.html
[02:04] <average> and you want to join all the cameras in one single video in some sort of 3x4 grid (if you have 12 streams for example ) ?
[02:05] <relaxed> https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
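relaxed's transparent-png idea in runnable form; the "arrow" here is just a synthesized colored box standing in for real arrow artwork, and all filenames are made up:

```shell
# stand-in arrow image (a plain red 48x16 box; real use would be a transparent png)
ffmpeg -loglevel error -y -f lavfi -i color=red:size=48x16 -frames:v 1 arrow.png
# stand-in base video
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=2:size=320x240:rate=25 base.mp4
# burn the image in at x=10,y=10 with the overlay filter, no manual demux/remux needed
ffmpeg -loglevel error -y -i base.mp4 -i arrow.png \
       -filter_complex "overlay=10:10" -c:v libx264 out.mp4
```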
[02:05] <average> konflict: are you working on a live stream or something that you have time to post-process on ?
[02:05] <konflict> not really, I want to keep them as they are in full resolution and just edit cuts between them
[02:06] <konflict> well, I'm trying to shoot in a small studio, then post process, and then distribute
[02:06] <average> oh I see, like a football match for example, where different angles are needed in different moments of the game..
[02:07] <konflict> yeah, thats exactly it
[02:07] <average> don't your cameras start recording all at the same time ?
[02:07] <average> if you have them starting all at the same time then 01:20:31 will mean the same thing for all of them
[02:08] <average> but I suspect they don't start at the same time for some reason
[02:08] <average> and you don't know the relative differences between them.. or do you ?
[02:08] <average> or you want to close some of them sometimes and they de-sync for that reason
[02:09] <konflict> well I'm capturing with ffmpeg and every "capture" starts at a slight offset to the previous one. It's not much, the difference is seconds, maybe only frames, but it's still there and it's noticeable
[02:11] <average> ok, let's suppose you have the videos already stored somewhere     1.avi , 2.avi , 3.avi .
[02:12] <average> you can put a bulb, led, whatever, somewhere in the set, as you said
[02:12] <konflict> so this way if I specify -timecode 00:00:00,00 the starting point will be the same for all recorded clips, but it won't represent the same point in real time when recorded
[02:12] <average> yeah, I understood
[02:12] <average> thanks for the explanation btw
[02:12] <average> here's what I think
[02:12] <average> you talked about a light, that's a very good idea
[02:12] <average> now what's left, is identifying that light
[02:14] <average> maybe you don't want to identify light, maybe you throw some object in the image to sync http://www.imagemagick.org/script/command-line-options.php#subimage-search
[02:14] <konflict> Yes, you are right, because this is certainly one way. I'm not really an editor, but afaik most pro editing stations allow you to specify a reference point for each individual clip and it will line up on the timeline automatically
[02:14] <average> you can search for that object with compare -subimage-search from ImageMagick
[02:16] <konflict> interesting, thank you for the link...
[02:17] <average> but in order to search for light, you can just check the frame luminosity or brightness http://www.imagemagick.org/discourse-server/viewtopic.php?f=6&t=24046
[02:18] <average> here's another one that finds the average gray-level http://www.wizards-toolkit.org/discourse-server/viewtopic.php?f=1&t=14073#p48395
[02:18] <average> which is also for the purpose of brightness level
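Since this log is ffmpeg-centric, the same brightness check can be done without ImageMagick at all, using ffmpeg's signalstats filter (a swapped-in alternative to the approach linked above; the white lavfi source is only a stand-in for real footage). A sudden jump in YAVG between frames would mark the light flash:

```shell
# print the average luma (YAVG) of each frame; white frames report Y around 235
ffmpeg -hide_banner -f lavfi -i color=white:size=64x64:rate=25:duration=0.2 \
       -vf "signalstats,metadata=print:key=lavfi.signalstats.YAVG" -f null - 2>&1 | grep YAVG
```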
[02:19] <konflict> thank you sir :)
[02:20] <average> no problem. you will have to take frames and so forth, but, maybe it will work :) I wish you the best of luck :)
[02:21] <konflict> even though I'm still not sure if that's going to work, I guess it wouldn't be so difficult a feature to add. I mean, for example, take the -timecode information from system time, or extract it from the audio input, which is by the way a typical way of distributing timecode
[02:21] <konflict> anyway, thank you for the input. Definitely interesting ideas
[02:21] <konflict> good luck with your project too
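konflict's idea of seeding -timecode from system time can be sketched like this. This is only a hedged approximation: a lavfi source stands in for the real -f decklink capture, and stamping each capture with wall-clock time narrows the offset between clips but does not remove the startup latency itself:

```shell
# derive a start timecode from the system clock (HH:MM:SS:FF, frames set to 00)
TC="$(date +%H:%M:%S):00"
# real capture would replace the lavfi input with the Decklink device
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -timecode "$TC" -c:v libx264 capture.mov
```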
[02:31] <average> "Dejavu can memorize audio by listening to it once and fingerprinting it. Then by playing a song and recording microphone input, Dejavu attempts to match the audio against the fingerprints held in the database, returning the song being played"
[02:31] <average> https://github.com/worldveil/dejavu#dejavu
[02:31] <average> konflict: this is the method you mentioned just now. of finding things through audio input
[02:31] <average> you can use dejavu
[02:31] <average> it's written in python
[02:32] <average> you'll probably have to adapt it to your needs as it's made for a much more general purpose I think (it's made for things like soundhound.com or midomi.com or shazam)
[02:36] <konflict> Thank you so much for keeping me in mind. This is actually a very nice project and I think this could work, provided one has coding skills, which I lack almost completely. This is certainly beyond my level :)
[02:36] <konflict> But hey, thank you!
[03:10] <average> konflict: hope is not lost my friend, you can still do it. here is codecademy, where you can learn some python for the greater good :) http://www.codecademy.com/tracks/python
[03:12] <Fusl> is ffmpeg buffering stdin?
[03:12] <Fusl> pipe:0?
[03:14] <Fusl> i'm reading pipe:0 with -re and trying to avoid overflows in node.js at the same time, but after a while the input gets out of sync and it takes ffmpeg a minute to hear a sound we injected into the stream :/
[06:58] <jimi_> I have a movie with audio. I want to keep the first N second of video and audio, but then after N seconds to end of file, I want only video, how can I do this?
[07:10] <DeadSix27> jimi_: i'd cut the audio to N seconds, then mux it with the full video; the video should continue to play even though the sound has ended
[07:11] <DeadSix27> at least that's what happens when i merge a shorter audio and a longer video all the time
[07:12] <DeadSix27> but that of course only works when it's supposed to be at the beginning
[07:13] <jimi_> DeadSix27, yes, i want to keep seconds 0-N, and then replace N-EOF with another audio track, all over the same underlying video
[07:16] <DeadSix27> ah dunno how to do that
[07:16] <DeadSix27> with ffmpeg i mean.
[07:16] <jimi_> DeadSix27, i'll google a bit, just hadn't found anything
[07:17] <DeadSix27> maybe you can cut 2 audios, and then align them both together and then mux them
[07:17] <DeadSix27> the only thing i don't know then is how to align them both together
[07:18] <DeadSix27> maybe this helps: https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join,%20merge)%20media%20files
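jimi_'s goal (original audio for seconds 0..N, a different track after N, same video throughout) can also be done in one pass with atrim plus the concat filter, rather than the cut-and-mux route discussed above. A sketch with N=2 and synthetic stand-ins for both the movie and the replacement audio:

```shell
# stand-in movie with its own audio, and a stand-in replacement audio track
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=4:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=4 -c:v libx264 -c:a aac -shortest movie.mp4
ffmpeg -loglevel error -y -f lavfi -i sine=frequency=880:duration=4 -c:a aac other.m4a
# first 2s of the original audio, then the other track from 2s on, video copied
ffmpeg -loglevel error -y -i movie.mp4 -i other.m4a -filter_complex \
  "[0:a]atrim=start=0:end=2,asetpts=PTS-STARTPTS[a1];[1:a]atrim=start=2,asetpts=PTS-STARTPTS[a2];[a1][a2]concat=n=2:v=0:a=1[a]" \
  -map 0:v -map "[a]" -c:v copy -c:a aac out.mp4
```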
[08:35] <Zeranoe> Is the -b:a for fdk-aac per channel? Meaning, if I set the bitrate to 128k, will that be for the total, or per channel
[08:40] <Zeranoe> I see that yes it is per channel
[11:56] <haarp> why does ffmpeg in example -1- of http://pastebin.com/Li3dHQk8 display wrong stats? the time is counting up way too fast, although the stream is obviously realtime. it doesn't happen in example -2-, where it can recognize a duration. is this a bug? should i file it?
[13:58] <nano-> Isn't the following supposed to work? ffmpeg -i test.wav -ac 2 -c:a tta test.tta (input is a s16 3 second wav)
[13:58] <nano-> I get:
[13:58] <nano-> [NULL @ 0x7febdb02ca00] Unable to find a suitable output format for 'test.tta'
[13:58] <nano-> test.tta: Invalid argument
[14:00] <spaam> nano-: too old a version of ffmpeg?
[14:01] <nano-> ffmpeg version 2.1.git-a044a18 Copyright (c) 2000-2013 the FFmpeg developers
[14:02] <JEEB> tta _encoding_ ?
[14:02] <nano-> 20131206
[14:02] <nano-> yes. encoding.
[14:03] <JEEB> ok, there is a ttaenc... but is that lavf or lavc
[14:03] <JEEB> ok
[14:03] <JEEB> lavc
[14:03] <JEEB> anyway, by that I would guess that there is no tta muxer
[14:04] <JEEB> try muxing into mkv
[14:04] <nano-> Tried. That works. But I don't think that's what I want, and VLC didn't play it (which isn't the target anyway so that doesn't matter).
[14:05] <JEEB> then you can extract that bit stream if you want to
[14:05] <JEEB> or, if tta files are just the raw bit stream
[14:05] <JEEB> you could test -f rawvideo out.tta
[14:05] <JEEB> yes, that format sounds completely wrong but that generally means "just output the data"
[14:06] <nano-> Humm.. Output file #0 does not contain any stream
[14:07] <JEEB> oh, it might not work when encoding...
[14:07] <JEEB> in any case, there is no tta muxer
[14:07] <JEEB> just checked the source tree
[14:07] <JEEB> there's a demuxer, decoder and even an encoder
[14:07] <JEEB> I'm surprised paul actually implemented encoding because having Yet Another Lossless Audio Encoder is kind of getting out of hand
[14:08] <JEEB> not that you gain much if anything compared to flac or wavpack
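The workaround JEEB lands on, encoding with the TTA encoder but using Matroska as the container since lavf has no raw .tta muxer, looks like this (the sine input is just a stand-in for nano-'s 3-second s16 wav):

```shell
# generate a 3-second s16 wav test file
ffmpeg -loglevel error -y -f lavfi -i sine=frequency=440:duration=3 -c:a pcm_s16le test.wav
# encode with the TTA encoder, muxed into Matroska audio (.mka) instead of raw .tta
ffmpeg -loglevel error -y -i test.wav -c:a tta test.mka
```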
[14:08] <nano-> Yep, perhaps I should just drop it instead of putting effort into generating test files :)
[14:08] <JEEB> if you need test files
[14:08] <JEEB> there are such on the FATE server
[14:08] <JEEB> that's used for testing ffmpeg
[14:10] <nano-> Thanks for your help.
[14:11] <JEEB> np
[14:12] <spaam> nano-: beer later today?
[14:13] <nano-> spaam: Nah, going for a first run of the latest Pandemic expansion.
[14:14] <spaam> nano-: okey :)
[14:14] <nano-> http://boardgamegeek.com/boardgameexpansion/137136/pandemic-in-the-lab
[14:29] <average> hello ?
[14:30] <sacarasc> Greetings.
[14:30] <average> do you know someone who makes music ?
[14:31] <average> I'm making a screencast and I need some music to put in the intro
[14:31] <average> I'll give props/credits to the artist
[14:31] <average> but I need a 12-15 second intro that sounds good for it
[14:31] <sacarasc> I do, but as it's 05:30 for them currently, I don't think they're around.
[14:32] <average> sacarasc: hook me up
[14:32] <average> sacarasc: and I'll talk to them when they're available
[14:33] <sacarasc> Join us in #DigitalGunfire.
[00:00] --- Mon Jan  6 2014
