[Ffmpeg-devel-irc] ffmpeg.log.20130526

burek burek021 at gmail.com
Mon May 27 02:05:01 CEST 2013


[01:15] <djahandarie> Anyone have a guess why I would get this error "[format @ 0x1fb8540] Format name too long Output pad "default" for the filter "Parsed_ass_0" of type "ass" not connected to any destination Error opening filters!" when trying to hardsub a video?
[01:15] <djahandarie> Here is the full output: http://pastebin.com/BC8icJ2c
[01:15] <djahandarie> I've used an identical command line on another mkv/ass combo and it has worked just fine in the past
[01:18] <klaxa> djahandarie: is the subs.ass file present? (you can also pass the video file for subs), also consider using fdk-aac for aac encoding
[01:18] <djahandarie> Yes, it's present. I didn't know you can pass the video file, I'll try that.
[01:20] <djahandarie> It just fails entirely if I try passing it the mkv.
[01:20] <djahandarie> But yeah, the ass file is fine and it seems to parse that without issue when I directly pass it.
[01:21] <djahandarie> Otherwise I wouldn't get "[Parsed_ass_0 @ 0x17007c0] Added subtitle file: 'subs.ass' (4 styles, 1409 events)" I assume.
[01:23] <djahandarie> Seems like if I remove the -vf entirely it still breaks.
[01:24] <djahandarie> Even just ../ffmpeg/ffmpeg -i ookami.mkv ookami2.mkv breaks.
[01:24] <djahandarie> What in the world is this Format name too long message about?
[01:24] <klaxa> hmm... lemme grab that rip
[01:24] <klaxa> might take a while though
[01:24] <klaxa> is it good though?
[01:24] <klaxa> i mean story wise
[01:25] <djahandarie> The movie? I hear it's good, haven't watched it yet. Watching it with a friend on Monday, was trying to get it into a TV-friendly format.
[01:26] <djahandarie> Maybe it's just getting angry since there are so many streams due to the embedded fonts. Not sure what else it could be.
[01:26] <klaxa> not really
[01:26] <klaxa> btw, you might want to add those fonts to your fontpath, otherwise ffmpeg might not find them and render the fonts in whatever
[01:27] <djahandarie> Ah, yeah, indeed.
[01:27] <Darkman> hi there
[01:28] <klaxa> djahandarie: i wrote a crappy script like a year ago or something https://gist.github.com/klaxa/5651164
[01:28] <klaxa> totally inefficient, but does the job
[01:29] <klaxa> you have to do like... "ls \[Commie* > mkvlist.lst" though
[01:29] <djahandarie> klaxa, hold on, no need to download the rip (unless you want to watch it).
[01:29] <djahandarie> I think my ffmpeg is totally bricked.
[01:29] <klaxa> then execute the script and if you have mkvtoolnix installed it will extract the fonts to ~/.fonts/
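klaxa's extraction approach can be sketched in a few lines of shell (a rough sketch only: it assumes mkvtoolnix is installed, and the input name, output names, and `.ttf` extension are illustrative — real attachments keep their own names and may be .otf etc.):

```shell
#!/bin/sh
# Rough sketch: extract embedded font attachments from an mkv into ~/.fonts/
# (assumes mkvtoolnix; "input.mkv" and the .ttf extension are illustrative).
mkdir -p ~/.fonts
mkvmerge -i input.mkv |
    sed -n 's/^Attachment ID \([0-9]*\):.*/\1/p' |
    while read -r id; do
        mkvextract attachments input.mkv "$id:$HOME/.fonts/attachment_$id.ttf"
    done
# Refresh the fontconfig cache so libass can find the new fonts.
fc-cache ~/.fonts
```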
[01:29] <klaxa> maybe recompile?
[01:30] <Darkman> i'm looking for someone who would like to write a script ;)
[01:30] <djahandarie> Yeah, I think some libraries were swapped since I last compiled.
[01:31] <klaxa> Darkman: if you elaborate what you need, one might be able to do so :)
[01:34] <Darkman> it's for a small project, i want some script/foo/bar/whatsoever and all the necessary stuff to throw in a bunch of videos and get back html5 video stuff in different versions / quality. Which exactly would be something that someone with more knowledge than me should define ;)

[01:34] <Darkman> sounds much, eh? ;)
[01:37] <klaxa> the only hard part i see right now is encoding videos only to lower qualities, i.e. if you have some 320x240 video you don't want to encode to 720p or something
[01:37] <klaxa> that would require some parsing, but nothing out of the ordinary
[01:38] <Darkman> ffprobe or something like that should provide the infos, at least from what i found so far
[01:39] <klaxa> yes you still have to parse it and check it with preset values, etc.
[01:39] <Darkman> yep
[01:39] <klaxa> and then there's different aspect ratios
[01:40] <Darkman> yeah, so its a bit of work ;)
[01:57] <djahandarie> klaxa, wow. That script was insanely helpful. Glad I came!
[01:58] <djahandarie> I have no idea why my (re-)compiled ffmpeg isn't working, but I just switched back to my package manager's.
[01:58] <klaxa> if you feel like getting tons of fonts, do "ls *mkv > mkvlist.lst" and run the script again
[01:58] <klaxa> however, it's pretty shit and if i would feel like it i would rewrite it
[01:59] <klaxa> however, i don't feel like it :V
[01:59] <djahandarie> lol
[02:00] <klaxa> oh... it puts the fonts in a folder called "fonts" in the current directory? hmm...
[02:00] <djahandarie> Yeah, it does.
[02:00] <klaxa> that's not what i want, but okay
[02:42] <aristarchus> does anyone know how to improve the resolution of an ffmpeg capture?
[02:44] <klaxa> you can increase the resolution by using -s or -vf scale
[02:47] <aristarchus> klaxa: thanks
[02:48] <klaxa> however, it does not magically increase quality
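As a hedged example of klaxa's two options (file names and target size are illustrative; upscaling only interpolates pixels, it cannot add detail):

```shell
# Upscale with the scale filter...
ffmpeg -i in.mp4 -vf scale=1280:720 -c:a copy out.mp4
# ...or with the -s output shorthand, which maps to the same filter.
ffmpeg -i in.mp4 -s 1280x720 -c:a copy out.mp4
```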
[02:48] <aristarchus> ok
[02:49] <aristarchus> i'm getting very pixelated results right now
[02:50] <aristarchus> is -s supposed to match the resolution of my screen?
[02:50] <aristarchus> what if i just want to capture a portion of the screen?
[02:50] <klaxa> ah you are talking about x11grab?
[02:51] <aristarchus> ya
[02:51] <aristarchus> sorry, i should have clarified
[02:51] <klaxa> use -f x11grab <input options, i.e. -s 320x240 -r 30> -i <x-server + offset, i.e. :0.0+400,500>
[02:51] <klaxa> hmm
[02:51] <klaxa> i think that should actually be :0.0+400,+500
[02:51] <klaxa> let me check with some older scripts
[02:52] <klaxa> hmm no according to them it's :0.0+400,500
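Putting klaxa's pieces together, a full x11grab invocation might look like this (a sketch only; the encoder settings and output name are illustrative, not from the chat):

```shell
# Grab a 320x240 region at offset +400,500 on display :0.0 at 30 fps,
# and encode it with x264 settings suited to live screen capture.
ffmpeg -f x11grab -s 320x240 -r 30 -i :0.0+400,500 \
       -c:v libx264 -preset ultrafast -crf 18 capture.mkv
```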
[02:57] <aristarchus> hmm i'm still getting reduced quality
[03:00] <aristarchus> wait...
[03:00] <aristarchus> -sameq fixed it
[03:14] <acovrig> Is there a flag (like -formats) that will list what I can give to -acodec?
[03:15] <acovrig> because mp3,wav,libmp3lame don't work (I would prefer libmp3lame or mp3).
[03:33] <acovrig> can I use acodec to output to wav?
[03:42] <klaxa> yes, use -c:a pcm_s16le or i think you can just skip that and specify wav as the container
[03:43] <klaxa> if you only need audio that is, if it's within another container use pcm_s16le or whatever you feel is appropriate (see ffmpeg -codecs)
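Both variants klaxa mentions, as hedged examples (file names are illustrative):

```shell
# Audio-only output: the wav muxer picks a default PCM codec by itself...
ffmpeg -i input.mkv -vn out.wav
# ...or request 16-bit little-endian PCM explicitly.
ffmpeg -i input.mkv -vn -c:a pcm_s16le out.wav
```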
[03:48] <acovrig> klaxa: avconv -codecs lists 'mp3' yet avconv -acodec mp3 doesn't work, why?
[03:48] <klaxa> oh you are using avconv...
[03:49] <klaxa> also, try using -c:a libmp3lame and try your luck in #libav
[03:50] <acovrig> I can use either one (ffmpeg,avconv) yet both don't work with mp3 or libmp3lame
[03:51] <acovrig> yet I have libavcodec53 installed
[03:57] <klaxa> acovrig: are you sure that when running "ffmpeg" you aren't actually running avconv? anywho, is it compiled with libmp3lame? it's not included in libavcodec
[04:02] <acovrig> klaxa: I ran this same script on a diff system and remember compiling ffmpeg or mencoder instead of using the repositories, but that was on a ubuntu 10 system, this is an ubuntu 13; can I see what ffmpeg was compiled with?
[04:02] <acovrig> When I run ffmpeg it says "This program is only provided for compatibility and will be removed in a future release. Please use avconv instead."
[04:02] <klaxa> avconv is generally not supported in here
[04:03] <klaxa> you can grab a static binary or compile from source
[04:07] <acovrig> klaxa: thanks, I just realized that ffmpeg shows Libav developers, thanks: I think I'll build ffmpeg myself per the link
[04:19] <davidvorick> :q
[13:39] <luc4> Hi! I'm trying to transcode with this command line but I'm getting an error with presets: http://paste.kde.org/750980/. This seems to work with 1.2. Maybe 0.8 is not supporting presets?
[13:49] <JEEB> luc4, preset, not pre
[13:49] <anew> just learning about ffmpeg, are other plugins needed to show video on a site or is only ffmpeg enough ?
[13:49] <JEEB> also that is a libav binary, if you didn't notice that yet :)
[13:54] <luc4> JEEB: thanks, now I get this: http://paste.kde.org/751010/. I'm not requesting a specific bitrate for aac. I made this go away by -acodec copy. Is this ok?
[13:56] <anew> jeeb is ffmpeg the only thing i need ?
[13:57] <arp> hello
[13:59] <arp> I need a bit of complicated help :)
[13:59] <anew> what happened
[14:00] <arp> well, nothing happend, but... I'll elaborate :)
[14:00] <arp> I built a dashcam out of a laptop and a webcam
[14:00] <arp> I use ffmpeg to capture from the cam
[14:01] <arp> now I got a gps device and would like to put the gps data onto the captured video.... live.... I have no idea how to do that...
[14:03] <luc4> Hi! I have a video which seems to be 16:9 but opens up in players as 4:3. Maybe something is written in some metadata which makes it open up that way? Can I change this with ffmpeg?
[14:04] <JEEB> luc4, seemingly the default bit rate was too high for your type of input "[aac @ 0x8cf2800] Too many bits per frame requested" :P
[14:04] <JEEB> and yes, your input was AAC so you could just copy it over
[14:06] <JEEB> luc4, also your -vprofile high isn't doing anything. If you want to set a specific H.264 profile, use -profile:v profile_name , but if you are OK with high profile you shouldn't specifically set it, since libx264 will just auto-set the profile depending on your x264 settings
[14:07] <luc4> JEEB: oh... ok, thanks
[14:08] <JEEB> luc4, basically if you don't set a profile and get "main" instead of "high" selected, it just means that your other settings regarding x264 don't make the stream need the high profile
[14:09] <JEEB> luc4, also you could switch acodec/vcodec to c:a and c:v
[14:10] <JEEB> and I think -strict experimental is the same as -strict -2 , but more readable
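JEEB's suggestions combined into one command might look like this (a sketch under the assumptions above; the file names and CRF value are illustrative, not a recommendation for any particular material):

```shell
# Modern-style options: -c:v/-c:a instead of -vcodec/-acodec,
# no explicit -profile:v (libx264 auto-selects the profile),
# and the existing AAC track copied instead of re-encoded.
ffmpeg -i input.mp4 -c:v libx264 -crf 22 -preset medium -c:a copy out.mp4
```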
[14:11] <arp> JEEB, do you also know how to put changing text onto a live capture? :)
[14:12] <luc4> JEEB: thanks
[14:13] <luc4> JEEB: anyway, -vprofile high was coming from this: https://www.virag.si/2012/01/web-video-encoding-tutorial-with-ffmpeg-0-9/
[14:18] <JEEB> luc4, I have no idea why they have it like that, you should only set a profile with libx264 when you want to specifically limit the encoder
[14:18] <JEEB> also they don't even mention vbv :V
[14:18] <JEEB> ooh
[14:18] <JEEB> they actually do
[14:18] <JEEB> arp, unfortunately I don't :)
[14:19] <luc4> JEEB: thanks. Also, do you have any idea why a video which seems to be like 16:9 opens up in players like it was 4:5? Some metadata might be wrong maybe?
[14:20] <Mavrik> luc4, see what your SAR is :)
[14:22] <luc4> Mavrik: ffprobe can't read that, right?
[14:22] <Mavrik> of course it can
[14:23] <Mavrik> it's written in the video stream part :)
[14:26] <luc4> Mavrik: Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 720x576, 1275 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc
[14:26] <luc4> Mavrik: the resolution is somehow strange for me...
[14:27] <Mavrik> that's PAL.
[14:27] <Mavrik> nothing strange with that
[14:27] <Mavrik> it's almost certainly anamorphic
[14:27] <luc4> Mavrik: yes, the image is distorted.
[14:27] <Mavrik> even though, it's weird it doesn't show SAR. are you using libav or some obsolete ffmpeg?
[14:27] <luc4> Mavrik: I have to set aspect ratio to 16:9 to get a correct image.
[14:28] <luc4> Mavrik: I mean in vlc.
[14:28] <Mavrik> yeah, the SAR flag is wrong then
[14:28] <Mavrik> it was probably lost when someone was fiddling with the video
[14:28] <Mavrik> usually 16:9 DVDs are encoded as anamorphic 720x576 streams
[14:28] <luc4> Mavrik: it comes from a camera directly.
[14:29] <JEEB> and you re-encoded it as AVC?
[14:29] <luc4> JEEB: yes
[14:29] <JEEB> and then you somehow derped up the anamorphic flag most probably
[14:30] <JEEB> depending on how it was set originally in your input :P
[14:30] <luc4> JEEB: the source video was exactly identical to this. Except it was larger and interlaced.
[14:30] <JEEB> larger as in resolution or you are just meaning the file size :P
[14:31] <luc4> JEEB: file size sorry :-)
[14:31] <JEEB> because if the frame size was the same then most probably it had the aspect ratio flag there somewhere
[14:32] <JEEB> anyways, adding the setsar filter there should do the trick? Your needed SAR most probably is 16:11, which is the PAL "16:9" aspect ratio used in DVDs and most SD things
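A hedged sketch of the setsar approach (file names are illustrative; 16:11 is the value JEEB suggests above, and the re-encode is needed because the filter runs on decoded frames):

```shell
# Tag the stream as anamorphic (SAR 16:11) while re-encoding the video;
# the audio track is copied untouched.
ffmpeg -i input.mp4 -vf setsar=16/11 -c:v libx264 -crf 18 -c:a copy out.mp4
```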
[14:33] <luc4> JEEB: ah thanks, I'll see how it works then.
[14:45] <luc4> JEEB: is it necessary to re-encode to set sar?
[14:47] <JEEB> luc4, it shouldn't be in general, but the setsar filter might need re-encoding, I have no idea how to do SAR changing otherwise :s
[14:47] <luc4> JEEB: ok, thanks :-)
[14:49] <JEEB> luc4, since that looked like a mov/mp4 file, I guess you could just use mp4box as well for that job if ffmpeg can't do plain SAR change in the container/whatever by itself
[14:49] <luc4> JEEB: when I use the setsar filter I get this: Stream #0:0(eng): Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 720x576 [SAR 16:11 DAR 20:11], q=-1--1, 12800 tbn, 25 tbc. Does it seem correct to you?
[14:50] <JEEB> yes, seems correct
[14:50] <luc4> JEEB: thanks :-)
[14:50] <JEEB> (in mp4box for whatever reason sar is called par *shrug*)
[15:09] <arp> does anybody know if, when I load a subtitle file, does it load all of it at once, or sequentially?
[15:10] <arp> or with other words... is it possible to change the subtitle file while it is beeing overlaid?
[15:40] <ura> Hi!
[15:41] <ura> Why when I'm encoding with -crf 18 I see in logs q=23 instead of q=18?
[15:41] <JEEB> q is not crf
[15:41] <JEEB> I don't even know if the q values are valid in case of crf encoding anyways :)
[15:42] <JEEB> as long as you see in the line that libx264 outputs that rc=crf and crf=18, you're fine :)
[15:43] <ura> thanks!
[15:43] <ura> can I see this in advance, or I will notice it just in final statistics?
[15:43] <JEEB> when libx264 starts encoding, it should show that line
[15:44] <JEEB> the long line with the x264 version and so forth
[15:44] <ura> yes, sure: rc=crf mbtree=1 crf=18.0 :)
[15:44] <ura> thanks!
[15:44] <JEEB> np
[15:45] <ura> I should not interrupt my encoding... I wanted to try with -t 10 to see it in final statistics, but I interrupted encoding before I saw your answer :) Anyway, when I start over I'm sure that I'm encoding with desired parameters :)
[16:30] <arp> hey, one more question, if anybody is listening :)
[16:31] <arp> on my laptop, which I use as a dashcam and use ffmpeg to capture, the battery is gone. the laptop is connected to the cars power supply. this means, if the power is cut, the laptop is off, without a proper shutdown procedure
[16:32] <arp> when ffmpeg is capturing, I noticed that sometimes the timestamps on the files are corrupt... for example VLC would show that it captured 10 minutes, but if you let it run, it runs through for much longer
[16:32] <arp> I saw that when I use ffmpeg later to create a new file and just copy the videostream, the timecode is corrected. But is there also a way to specify something in the encoding parameter so that the video is not corrupted when the power is suddenly gone?
[16:44] <arp> can I insert some kind of special frame to ensure that its not corrupted?
[16:54] <t4nk364> hi
[16:55] <t4nk364> i stream to ustream using ffmpeg, i use only 50 % of my bandwidth and i get huge lag , any idea what i could try ? i even tried tcp relay and i get no improvement
[16:56] <t4nk364> what flashver parameters are available ? i use 3.0, i can't find any info about it
[17:23] <arp> uh... if I use something to create a videofile which just contains text, ie. the gps data, can I then pipe this as raw data into ffmpeg to use it as overlay?
[17:49] <durandal_1707> arp: i assume you want to render text on video?
[18:02] <arp> yes
[18:02] <arp> but I want the text to be dynamic... coming from a gps device
[18:14] <acovrig> Can I do this with just ffmpeg?: ffmpeg -y -deinterlace -f dv -i - -f flv -vcodec flv -s 350x272 -aspect 1:1 -qscale 3.5 -acodec libmp3lame -ab 32k -ar 22050 "$ofn".flv; mencoder "$ofn".flv -audiofile "$ofn".wav -o "$ofn".mp4 -vf dsize=300:272:2,scale=-8:-8,harddup -oac faac -faacopts mpeg=4:object=2:raw:br=128 -of lavf -lavfopts format=mp4 -ovc x264 -sws 9 -x264encopts nocabac:level_idc=30:bframes=0:bitrate=512:threads=auto:global_hea
[20:38] <t4nk364> can anyone here stream with ffmpeg to ustream without getting a lot of lag ?
[20:39] <t4nk364> i've seen that tcprelay used with ffmpeg is the solution, i tried and i get no difference
[21:02] <Epicanis> This seems like a stupid, lazy question but I'm going to ask anyway: whats the simplest way to determine programmatically (e.g. from a shell script) whether my install of ffmpeg can interpret the audio in a given media file?
[21:04] <Mavrik> hmm
[21:05] <Mavrik> just running a simple short /dev/null copy and checking for errors would be by far the most reliable one
[21:07] <Epicanis> I guess I'm specifically wondering what the simplest and/or most portable way of doing that error check is: is grepping for the specific text the only way?
[21:08] <Epicanis> Wasn't sure if ffmpeg had "return codes" when run or not, for example
[21:08] <Mavrik> um, I don't understand your question
[21:09] <Mavrik> if you want to RELIABLY know if ffmpeg can read a file, run a mock transcode and see return status which is supported on all ffmpeg platforms
[21:09] <Mavrik> parsing text will just lead you to hell
[21:09] <Mavrik> because there is no guarantee ffmpeg will even detect stream of unknown type in file
[21:09] <Mavrik> not to mention that output may vary between versions, platforms and input files
[21:09] <Mavrik> parsing text is almost always a quick way to fail
[21:11] <Darkman> Mavrik: do you have a sample for such a quick mock transcode? i mean, if you check for a "big" file it should be quick...
[21:12] <Epicanis> I'm calling ffmpeg from a script and want to pass arbitrary audio or audio/video files. I'm just trying to figure out what the simplest way is to do the initial "test to ensure ffmpeg can extract audio from this file" is.
[21:12] <Mavrik> ffmpeg -y -i <file> -codec:a pcm_s16le -t 00:00:00.001 -vn -f wav /dev/null should be enough
[21:12] <Mavrik> Epicanis, and you can't read?
[21:12] <Mavrik> yes, you can go parse ffmpeg output
[21:12] <Mavrik> no it won't work reliably. you'll have a hell of problems.
[21:13] <Mavrik> you may parse text if you know what formats do you expect and limit yourself to them.
[21:13] <Epicanis> I only did a cursory google, but haven't turned up the list of return codes...
[21:13] <Mavrik> 0 on success, anything else failure?
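A shell sketch of the exit-status check Mavrik describes (check_audio is a hypothetical helper, and the input file name is illustrative):

```shell
#!/bin/sh
# Returns 0 iff ffmpeg can decode audio from the given file:
# run a tiny mock transcode to /dev/null and rely on the exit status.
check_audio() {
    ffmpeg -y -v error -i "$1" -vn -c:a pcm_s16le -t 0.1 \
           -f wav /dev/null 2>/dev/null
}

if check_audio input.mkv; then
    echo "ffmpeg can decode this file's audio"
else
    echo "ffmpeg cannot decode this file's audio"
fi
```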
[21:13] <Mavrik> Darkman, basically you just need to poke ffmpeg to try to decode a few frames
[21:14] <Darkman> Mavrik: yeah, got that, simply forgot about the -t option
[21:14] <Epicanis> (I definitely would prefer not to try to parse text output...) Is there anything informative in the return coded that I could interpret?
[21:14] <Mavrik> Darkman, since the biggest problem with arbitrary input is that ffmpeg may detect proper codec even if it doesn't have the decoder
[21:14] <Mavrik> you COULD cross-reference that to the "-codecs" output
[21:14] <Epicanis> (beyond "something went wrong", that is...)
[21:15] <Darkman> yep, but that would be.. well.. much work for no guarantee
[21:15] <Mavrik> but that's just waiting for hell to happen when you switch versions or if the decoder name doesn't match the probed codec type (like for example when using FAAD)
[21:16] <Darkman> i'm just playing around with it to get automatic encoding for videos to webformats running, so i ran into that issue, too, but delayed it till the rest is working ;)
[21:17] <Epicanis> I'd ideally like to be able to distinguish "local ffmpeg doesn't support this codec" vs. "this file has no detectable audio track" vs"nice try, wiseguy, but /etc/shadow isn't even valid media"...
[21:18] <Epicanis> Darkman, sounds like you're working on the same sort of thing I am...
[21:18] <Darkman> Epicanis: but thats three different types of errors and you should capture them in different ways
[21:20] <Darkman> Epicanis: like the shadow thing is easy, and not ffmpeg related at all, the "is this a valid media file" can be verified with ffprobe for example (i do that for getting meta data infos out of every file) and the last one is the thing Mavrik mentioned, just try to re-encode a bit of the file and see if that works
[21:21] <arp> durandal? you asked a couple of hours ago if I want to render text on video... yes :)
[21:21] <Epicanis> What I'm asking in part is whether the failure of ffmpeg for various reasons will return different (reliable) error codes that I could interpret for the failure reason, or am I going to have to do multiple separate tests for every anticipatable error condition?
[21:22] <durandal_1707> arp: there is drawtext filter
[21:22] <Darkman> Epicanis: i think the last one
[21:22] <arp> yes, but this only draws static text, as far as i know
[21:23] <durandal_1707> and your text changes with every frame?
[21:24] <arp> no, about once a second
[21:24] <Epicanis> Darkman: ugh. Avoiding that is the reason I hoped someone here knew of a better way :-)
[21:24] <arp> I want to read out a gps device once a second and put the result on the video
[21:25] <durandal_1707> that is just calling drawtext with different arguments
[21:25] <Darkman> Epicanis: its just wild guessing by me, i just play around with ffmpeg at the moment to find the best settings for different resolutions and devices
[21:26] <arp> while it is capturing from a webcam?
[21:26] <durandal_1707> how is text obtained?
[21:26] <arp> at the moment I just have a python script that puts the text into a text file
[21:27] <durandal_1707> usually if you want some feature that is not available in the way you need, you open bug report with feature request....
[21:27] <arp> so every second the file would change
[21:27] <Mavrik> Epicanis, ffmpeg should return different error codes on different conditions
[21:27] <durandal_1707> arp: same file or different?
[21:27] <Mavrik> Epicanis, however I'm not sure how consistent that is
[21:27] <arp> well I just wanted to ask if this already works somehow... I am not an expert with ffmpeg :)
[21:28] <arp> same file
[21:28] <Mavrik> Epicanis, plus, for the conditions you listed, "valid" file differs wildly between usecases
[21:28] <arp> but I can change the script of course if different would be better
[21:28] <Mavrik> for some people (and ffmpeg) a missing stream may not be an error condition at all
[21:28] <Epicanis> I'm working specifically on "legally-free web audio" - right now I'm only using ffmpeg to get metadata and encoding parameters for input audio. Now I'm adding "source file can be anything with audio the local ffmpeg can convert to pcm"...
[21:29] <Epicanis> (instead of "wav or aiff only")
[21:30] <Epicanis> Hence my desire to, as easily as possible, determine whether the currently installed ffmpeg on a given server can handle a given media file so the interface can informatively report the problem when it can't)
[21:31] <durandal_1707> arp: you are lucky, there is flag to reload file after each frame
[21:31] <durandal_1707> so it can be same file
[21:31] <arp> uh, nice
[21:31] <arp> that would probably do the trick
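arp's whole pipeline can be sketched as one command (hedged: the capture device, font path, and file names are illustrative; the flag durandal mentions is drawtext's `reload` option):

```shell
# Overlay the contents of gps.txt on a live webcam capture,
# re-reading the text file before every frame (reload=1).
ffmpeg -f v4l2 -i /dev/video0 \
       -vf "drawtext=textfile=gps.txt:reload=1:x=10:y=10:fontcolor=white:fontfile=/usr/share/fonts/TTF/DejaVuSans.ttf" \
       -c:v libx264 -preset ultrafast dashcam.mkv
```

Note that with reload=1 the text file should be replaced atomically (write to a temporary file, then rename it over gps.txt), otherwise drawtext may read a partially written file.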
[21:32] <Epicanis> In my case, a valid input file is anything containing an audio track that ffmpeg can generate pcm output from.
[21:35] <t4nk364> can anyone help me about ffmpeg streaming to ustream.... i get lag even with 50% of my bandwidth
[21:43] <Epicanis> (Now on something with a real keyboard so I can type...) I thought about trying to parse the output of -codecs and/or -formats, but like Mavrik says, that seems like a really unstable and unreliable way to do it. I'm hoping I can dredge up some info on the return codes that I could use so that in a single ffmpeg call I could tell if the input file was readable, and had valid (audio) media.
[21:46] <Epicanis> Crap. No, if I'm reading ffmpeg.c correctly, main() only returns "0" or "1" ("Like, DUDE! Something went wrong!") Can't use that...
[21:53] <t4nk364> use the error stream output
[21:54] <durandal_1707> Epicanis: then open bug report
[21:55] <Epicanis> I may, but then I'm back to worrying about parsing potentially-unreliable-between-versions text.  I could also relent a bit on my insistence on limiting binaries for what I'm working on (currently needs ffmpeg, opusenc, and oggenc. I was hoping not to need any others...) and also require ffprobe, which DOES appear to give more parseable info (seemingly exactly for the kind of thing I'm trying)
[21:58] <Liberator> what video standards (and apps) include the gps, g-vector, compass, and other sensors in a multi-channel video format?
[21:59] <Mavrik> Epicanis, is there a reason why aren't you just bundling a static ffmpeg build which would also handle ogg encoding?
[21:59] <arp> would you happen to know the syntax to load text for the drawtext filter, durandal?
[21:59] <Mavrik> Epicanis, you'd save yourself a ton of headaches
[22:00] <Epicanis> Mavrik: Initially, it's intended to be a system anyone can fling onto their own web server and use whatever is already installed (on any architecture).  ffmpeg's role overall is pretty simple (read metadata, and extract audio to pcm) in this project.
[22:00] <durandal_1707> arp: there is documentation
[22:00] <durandal_1707> have you looked at it at all?
[22:01] <Liberator> I am concerned with the compressed encoding of the sensor data channels (from phone devices) into an appropriate container
[22:01] <Mavrik> Epicanis, yeah. That's why you should bundle compiled ffmpeg
[22:01] <Epicanis> Eventually, I DO want to have a little repository with ffmpeg (and opusenc and oggenc) compiled statically for various architectures that people could pull from when they don't have authorization to do system-software installs (i.e. people on generic web hosts)
[22:01] <Mavrik> most distros bundle libav or have an obsolete f'ed up ffmpeg
[22:02] <Mavrik> adding a couple MB of binaries will make your app helluva more useful than explaining to people why THEIR exact server craps out on some files when their friend's doesn't
[22:02] <Epicanis> Mavrik: I'm trying to put off having special "x86", "x86_64", "ARMv6", "ARMv7", etc. separate packages for now that *I* have to maintain...
[22:02] <arp> ah, got it
[22:02] <arp> just need to find this reload flag :)
[22:02] <Epicanis> (Also bear in mind this is a collection of PHP scripts, rather than a polished "app")
[22:02] <Mavrik> Epicanis, so you'd rather explain to people using Ubuntu why their ffmpeg keeps dying on your code
[22:02] <Mavrik> because ubuntu symlinks libav for ffmpeg?
[22:03] <Mavrik> instead of invoking a simple ffmpeg binary on x86 and falling back on default for arm?
[22:03] <arp> sweet
[22:03] <arp> it works
[22:03] <arp> thanks!
[22:03] <Mavrik> Epicanis, when I say "most distros package broken ffmpeg" I'm not joking - this channel is full of questions because people try to use obsolete wrong builds
[22:04] <Epicanis> Mavrik: You may be overestimating the current scope of my project :-) (But for the reasons you give, adding a repository of my own with statically-compiled ffmpeg, opusenc, and oggenc for various architectures is in the roadmap, such as it is.)
[22:04] <Epicanis> I actually do know about the broken/ancient ffmpeg issues.
[22:05] <Epicanis> (I'm on Arch, myself, partly to avoid that kind of thing).
[22:06] <Mavrik> Epicanis, no I'm not
[22:06] <Mavrik> if you're expecting to support different architectures with their ffmpeg problems you're biting way more than you can chew
[22:06] <kaizoku__> I am trying to do screen capture of my desktop (for testing purposes I'm recording my browser playing a youtube video), but I'm having problems with the audio.       I tried:  -i hw:0,0         -i plughw:0,0         -i default         But no luck so far.  Any suggestions?
[22:06] <Epicanis> (I also considered just using sox for pcm conversion, but then A)that's another binary I need and B)it doesn't give me ability to easily suck audio out of an audio/video file like ffmpeg)
[22:06] <Mavrik> and using a controlled ffmpeg binary will make your scope more manageable
[22:07] <Mavrik> kaizoku__, did you tell ffmpeg that you want to use alsa and/or pulse?
[22:09] <kaizoku__> Mavrik: yes.   -f alsa
[22:09] <Epicanis> What I'm currently aiming for is the simplest possible web interface that will take an input file (currently only accepts wav because that's what opusenc requires) with audio, present a form for filling in metadata, and then feeds the input audio to opusenc (and oggenc, soon) to provide "legally free" media that people can then post somewhere.
[22:10] <Epicanis> Rather than restrict to a small number of formats, I'd like to expand it to "whatever the locally-available ffmpeg can decode", which can vary from system to system right now (unless as you suggest I "bundle" x86/x86_64/ARMv6/ARMv7/etc. versions myself).
[22:11] <Epicanis> Main reason I am looking for more detailed error output is so the interface can say "your local ffmpeg doesn't understand this file" or "your local ffmpeg recognizes this file but doesn't support (codec X) audio" or whatever.
[22:12] <kaizoku__> Mavrik: http://pastebin.com/xk7QkG8f
[22:13] <Mavrik> no, I suggest you bundle x86 version youself which will cover 90% of use cases and fallback for everything else
[22:13] <Mavrik> which will make sure people CAN decode most stuff
[22:13] <Mavrik> kaizoku__, ok, what are your problems?
[22:14] <t4nk364> where can i find info about the flashver parameter of rtmp.... i can't find anything useful, it must have a list of choices
[22:14] <Mavrik> kaizoku__, also, why aren't you encoding audio? :)
[22:14] <Epicanis> (Can support for aac audio be statically-compiled into ffmpeg?)
[22:15] <Epicanis> (Or does that require an external library like opus?)
[22:15] <Mavrik> pretty much everything that doesn't have to deal with OS (like x11 grabbing) can be statically compiled
[22:16] <Mavrik> there are already prepackaged builds out there for linux/win
[22:17] <Epicanis> I'll have to look into that then. I expect the great majority of anyone who is interested in using what I'm building will mainly be interested in converting flac, wav, aac, and mp3 (plus a handful of other audio formats from common video formats).
[22:18] <Epicanis> I may look at moving up "binary repository" on my priority list if its feasible. I was more worried about formats (like opus) requiring external libraries anyway and making it more of a headache than it'd be worth.
[22:19] <kaizoku__> Mavrik: my problem is that I have no audio.    I'm not encoding the audio you say?    Wouldn't this do the encoding:  -acodec pcm_s16le  ?
[22:20] <Liberator> ffmpeg- the VP8-VP9 / H264-H265 problem is pissing us off. How do we best encode sensor data including gps/vector/compass/light/etc in line with audio and video channels for the snapdragon armv7 and other on-market devices like samsung s4 or htc pro with 8-12mpix cameras?
[22:21] <Mavrik> kaizoku__, that's raw PCM audio
[22:21] <Liberator> we also need to handle random access frame extraction efficiently, what container is best?
[22:21] <Mavrik> taking a lot of space & also some players may have problems with it
[22:22] <Epicanis> Liberator: I can't actually answer the question, but I'm a minor-league geo-metadata fanboy so it's awesome to hear about SOMEONE geotagging something besides TIFF and JPEG files for a change.
[22:23] <Liberator> we did some earlier systems with parallel data streams or a false audio side channel
[22:23] <Liberator> but there are allocations in the standards for a decade for this
[22:23] <Liberator> but no implementations
[22:23] <Mavrik> uh
[22:23] <kaizoku__> Mavrik: I replaced it with libmp3lame   but no change... still no audio.
[22:23] <Mavrik> kaizoku__, ok, now check your alsamixer
[22:24] <Mavrik> and find the hardware address of your audio output ;)
[22:24] <Mavrik> you're probably recording from mic in :)
[22:24] <Mavrik> Liberator, that really has little to do with video formats
[22:24] <Liberator> Mavrik: it is a video standard
[22:24] <Epicanis> Liberator: do you have some pointers to those standards? (I'm trying to collect information about geo-metadata standards that currently exist for a future Hacker Public Radio episode...)
[22:25] <Mavrik> Liberator, yes, and you're not trying to save video are you?
[22:26] <Epicanis> (If you were talking to the Xiph folks, I know they'd be suggesting a text stream muxed into the output rather than putting it in the video stream)
[22:26] <Mavrik> Liberator, the easiest way would probably be to embed a datastream into mp4 container and store data in a third stream
[22:26] <Mavrik> of course, nothing will be able to read that more or less
[22:26] <Liberator> Epicanis: there are some "motion vector" allocations since mp4, also a depth/3d data block, but only a few scattered vector-motion and gps/etc encodings within the frames
[22:26] <Mavrik> you can also encode that as a text subtitle stream and have it be shown with a player
[22:26] <Liberator> yes, the mp4 container has various options, but it requires tighter frame sizes
[22:27] <Mavrik> "tighter frame sizes"? huh?
[22:27] <Liberator> Mavrik: check your phone's vector and other sensor resolution, they are faster than vid frame rate
[22:27] <Epicanis> Do you have a pointer to some documentation on that which I could poke through? (So far aside from exif, the only other official or semi-official geotagging "standard" I've found is the "geo_location" tag for Ogg Vorbis/Opus files)
[22:27] <Mavrik> 1/90000 timebase isn't good enough for you?
[22:27] <kaizoku__> Mavrik: I'm not following.   I start alsamixer, press F4 for Capture Devices... and now what?
[22:28] <Mavrik> Liberator, none of those devices offer granularity over 1/90000 for any reliable measurement
[22:28] <Mavrik> and stop mixing containers and video formats please :P
[22:28] <Liberator> ok, what container options are most suitable?
[22:28] <Mavrik> kaizoku__, you need to find out what the ALSA address (the "hw0,0" part) of your computer audio source would be
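A minimal sketch of what Mavrik describes: take the card/device numbers that `arecord -l` reports for your capture hardware (the `0,0` below is a placeholder), build the `hw:CARD,DEV` address, and hand it to ffmpeg's alsa input.

```shell
# Placeholder numbers -- substitute whatever `arecord -l` lists
# for the source you actually want to record from.
card=0
device=0
alsa_addr="hw:${card},${device}"
# The capture command this address would plug into:
echo "ffmpeg -f alsa -ac 2 -i ${alsa_addr} -acodec libmp3lame out.mp3"
```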
[22:29] <Mavrik> Liberator, for arbitrary text payloads you don't really have many choices
[22:29] <Mavrik> grab a mkv, add a data/subtitle track for your info
[22:29] <Liberator> there *should* be a format, Epicanis, that is based on the collection of sensors typical on the fone devices
[22:29] <Mavrik> and make sure you encode video in intra-only mode for arbitrary frame access (of course, your video will be huge)
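A sketch of the mkv route Mavrik outlines (all filenames are placeholders): mux a text track next to the video and force intra-only H.264 with `-g 1` (GOP size 1, so every frame is an I-frame and can be seeked to directly, at the cost of a much larger file).

```shell
# Hypothetical files: input.mp4 is the recording, sensors.srt is the
# sensor data serialized as a timed-text track.
cmd="ffmpeg -i input.mp4 -i sensors.srt -map 0:v -map 0:a -map 1:s -c:v libx264 -g 1 -c:a copy -c:s srt output.mkv"
echo "$cmd"
```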
[22:30] <Liberator> it's going to be encoded (binary) data, not text, for all these
[22:30] <Mavrik> of course, you'll have to mux it yourself.
[22:30] <Mavrik> other option is MP4 which also allows arbitrary payload streams
[22:30] <Mavrik> again, you'll have to mux it yourself
[22:31] <Mavrik> for example MP4 it seems already has LOCI atom standardized for location
[22:31] <Mavrik> but it seems it's a header atom
[22:32] <Mavrik> kaizoku__, it seems ALSA doesn't create a loopback device for all computers… I suggest using pulseaudio as a source if at all possible
[22:33] <kaizoku__> *luke reaction at I am your father*
[22:33] <kaizoku__> I still have terrible memories from pulse.
[22:34] <Mavrik> yeah well, it tends to work better
[22:34] <Mavrik> e.g. for your case you run pactl list sources to see which devices you have
[22:34] <Mavrik> and then just say "-f pulse -i <name from previous command>"
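The two steps Mavrik gives, spelled out. The source name below is a placeholder in the style `pactl list short sources` prints; use whatever your system actually reports.

```shell
# Step 1 (run on your machine): pactl list short sources
# Step 2: feed one of the reported names to ffmpeg's pulse input.
src="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor"
echo "ffmpeg -f pulse -i ${src} -acodec libmp3lame out.mp3"
```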
[22:35] <Mavrik> Liberator, if you expect an open widely supported format for your usecase, I'm afraid you're not going to find it though
[22:37] <Liberator> prior designs used mp4 channels
[22:37] <Liberator> yes, the mp4 geoloc is only first header, not inline
[22:38] <Liberator> i was expecting to use the hw encoders like the fones have for compression
[22:38] <Mavrik> on Android 4.1+ you can
[22:38] <Liberator> this is a very severe problem, especially for the phone devices
[22:38] <Mavrik> since you're not actually doing anything to the H.264 bitstream
[22:38] <Liberator> url mav?
[22:39] <Liberator> yes, these are accessory channels
[22:39] <Mavrik> Liberator, check the mediacodec APIs
[22:39] <Liberator> the entire point, of course, is getting this packed into a standard format
[22:40] <Mavrik> of course, the other option is to record the mp4 and data separately
[22:40] <Mavrik> and then mux data into the container
[22:40] <Mavrik> which means a post-processing step after recording, but it'll be easier and compatible quite a way back
[22:40] <Liberator> not viable, have done that, need one (mp4) container to store
[22:40] <Mavrik> because?
[22:41] <Liberator> it has to be a merged container, otherwise one or the other gets lost... that was a very common human-factors issue with our last tool
[22:41] <Mavrik> um, what are you talking about?
[22:41] <Liberator> and the frames are not necessarily accurate temporally
[22:41] <Mavrik> your software records data into two separate files then your software merges that into output mp4
[22:41] <Liberator> first, the camera compressor may not be exact frame rate
[22:41] <Mavrik> frames always have exact framerate
[22:41] <Mavrik> they MAY be dropped
[22:41] <Mavrik> but that doesn't matter to you
[22:41] <Mavrik> because you don't care about camera timestamps
[22:41] <Liberator> so binding the sensor data gives a begin and end reference
[22:41] <Mavrik> since your datastream has other set of timestamps
[22:42] <Mavrik> hmm
[22:42] <Liberator> it's the camera and encoder that have temporal problems
[22:42] <Mavrik> I think you really need to check up on how video encoding works first.
[22:42] <Liberator> lol
[22:42] <Mavrik> Since you're mixing things up.
[22:42] <Liberator> which container and encoder formats might be best options?
[22:42] <Liberator> for a parallel data stream
[22:43] <Mavrik> the ones I've written about 200 lines up.
[22:43] <kaizoku__> Mavrik: would it still be possible to use alsa, by creating a loop device (have to see how can I create one)
[22:43] <Mavrik> kaizoku__, yes
[22:43] <Mavrik> google it, I bet someone did that already
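An untested sketch of the loopback route being discussed: the `snd-aloop` kernel module creates a "Loopback" ALSA card where audio played to `hw:Loopback,0` can be captured from `hw:Loopback,1`.

```shell
# Load the loopback module first (run manually): sudo modprobe snd-aloop
# Then record from the loopback card's capture side:
loop_capture="hw:Loopback,1"
echo "ffmpeg -f alsa -i ${loop_capture} -acodec libmp3lame out.mp3"
```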
[22:59] <kaizoku__> reading about sync problems with alsa... will try installing pulse and see if it works -_-
[23:02] <kaizoku__> brb
[23:05] <RhesusMinus> I converted a 4K video clip and it turned from 6.5 GB to 70 MB with no loss of quality. However, it changed the colors slightly so they are a little darker. What gives?
[23:06] <Liberator> what is the best multi-part or multi-channel media container format to use for this situation?
[23:07] <RhesusMinus> Comparison: http://i.imgur.com/gUxZHYF.jpg
[23:07] <t4nk364> mp4 ?
[23:07] <t4nk364> nobody can help me ?
[23:08] <RhesusMinus> (Actually 6.95 GB => 79.6 MB.)
[23:08] <t4nk364> o_O
[23:08] <t4nk364> it's impossible
[23:08] <RhesusMinus> No...
[23:09] <RhesusMinus> I had to save it as a plain AVI file from Premiere Pro because it wouldn't let me set the high resolution (4K) for any other format.
[23:09] <RhesusMinus> So it was likely entirely uncompressed.
[23:20] <Mavrik> RhesusMinus, what did you convert it to?
[23:21] <RhesusMinus> Mavrik: To .avi. Let me show you the full command.
[23:21] <Mavrik> .avi isn't a video format. it's a container
[23:21] <Mavrik> add full output as well
[23:21] <RhesusMinus> Mavrik: ffmpeg.exe -i test.avi -q:a 0 -q:v 0 test_out.avi
[23:22] <t4nk364> o_O
[23:22] <Mavrik> O.o
[23:22] <Mavrik> dude.
[23:23] <t4nk364> lol ?
[23:23] <Mavrik> never. ever. never. not even in 100 years. encode quality-sensitive files with default parameters.
[23:23] <Mavrik> now.
[23:24] <RhesusMinus> What?
[23:24] <RhesusMinus> "never. ever. never. not even in 100 years. encode quality-sensitive files with default parameters."
[23:24] <RhesusMinus> The whole damn point is to not have to enter manual shit.
[23:24] <Mavrik> ok.
[23:25] <Mavrik> I'm done.
[23:25] <RhesusMinus> ...
[23:25] <RhesusMinus> You're "done"?
[23:25] <RhesusMinus> You haven't even started helping yet.
[23:25] <RhesusMinus> I asked a simple question.
[23:26] <RhesusMinus> You either are too good to help or don't know the answer.
[23:26] <kaizoku__> Mavrik: I think I now have pulseaudio installed and working (at least smplayer plays audio when using the pulseaudio output device).         But ffmpeg says it doesn't recognize pulse:     Unknown input format: 'pulse'
[23:26] <kaizoku__> ffmpeg -f pulse -ac 2 -i alsa_input.pci-0000_00_1b.0.analog-stereo  -f x11grab -r 25 -s "$screen_res" -i :0.0 -acodec libmp3lame -ab 192k -vcodec libx264 -preset ultrafast -threads 0 output.mkv
[23:26] <Mavrik> kaizoku__, ugh
[23:26] <Mavrik> you have a build without pulse support
[23:26] <kaizoku__> :'(
[23:27] <RhesusMinus> I converted a 4K video clip and it turned from 6.95 GB to 79.6 MB with no loss of quality. However, it changed the colors slightly so they are a little darker. Comparison: http://i.imgur.com/gUxZHYF.jpg Command used: "ffmpeg.exe -i test.avi -q:a 0 -q:v 0 test_out.avi" What gives?
[23:27] <Mavrik> kaizoku__, are you using this build: http://dl.dropboxusercontent.com/u/24633983/ffmpeg/index.html
[23:28] <kaizoku__> Mavrik: I'm using the package from debian multimedia.
[23:28] <RhesusMinus> I converted a 4K video clip and it turned from 6.95 GB to 79.6 MB with no loss of quality. However, it changed the colors slightly so they are a little darker. Comparison: http://i.imgur.com/gUxZHYF.jpg Command used: "ffmpeg.exe -i test.avi -q:a 0 -q:v 0 test_out.avi" What gives?
[23:28] Last message repeated 1 time(s).
[23:28] <Mavrik> hmm, that's from 1991 probably :/
[23:28] <RhesusMinus> I converted a 4K video clip and it turned from 6.95 GB to 79.6 MB with no loss of quality. However, it changed the colors slightly so they are a little darker. Comparison: http://i.imgur.com/gUxZHYF.jpg Command used: "ffmpeg.exe -i test.avi -q:a 0 -q:v 0 test_out.avi" What gives?
[23:29] Last message repeated 1 time(s).
[23:29] <Mavrik> *sigh* Yay for color space conversion ;)
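For a case like this, "don't use defaults" means choosing the codec and quality explicitly; a hedged sketch (not the only right settings): `-crf 18` is near-transparent for libx264, and naming `-pix_fmt` makes any chroma/colorspace conversion a deliberate choice rather than a silent default.

```shell
# Explicit codec, quality and pixel format instead of ffmpeg's defaults.
cmd="ffmpeg -i test.avi -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p test_out.mp4"
echo "$cmd"
```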
[23:30] <Darkman> does repeating a question help to solve it these days? ;9
[23:30] <Darkman> ;)
[23:31] <t4nk364> no :(
[23:31] <Darkman> ah, good to know ;)
[23:31] <t4nk364> sigh
[23:43] <kaizoku__> Mavrik: SUCCESS!  But not a great one yet.   When ffmpeg is capturing I had to go to pavucontrol and in the recording tab select "monitor of built-in audio analog stereo".          The only problem is that the audio and video are not synced :(
[23:44] <kaizoku__> I used     ffmpeg -f alsa -ac 2 -i pulse   .....
[23:45] <kaizoku__> Any suggestions on how I can fix the sync issue.      I will always have to reencode the video after I'm done.     I just don't know yet if the desync is constant or not.
[23:47] <Mavrik> hmmm, try one of the -async or vsync parameters
[23:47] <Mavrik> also, encoder not being able to catch up with grabbing can cause desync
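One concrete thing to try from Mavrik's suggestion: add `-async 1`, which corrects the initial audio offset without stretching timestamps. The device names and screen size below are placeholders based on the command kaizoku__ posted earlier.

```shell
# kaizoku__'s capture command with -async 1 added at the end.
cmd="ffmpeg -f pulse -i default -f x11grab -r 25 -s 1920x1080 -i :0.0 -acodec libmp3lame -ab 192k -vcodec libx264 -preset ultrafast -async 1 output.mkv"
echo "$cmd"
```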
[23:51] <matu> Hi, i have a problem using ffmpeg, here is my command :  ffmpeg -i s01e04_lhopital_fr_3_P3_1z7.mp4 -ss 00:08:24 -t 00:08:27 image-%d.jpeg
[23:51] <matu> here is the error
[23:51] <matu> http://pastebin.com/WwDPH7yT
[23:53] <matu> no image is extracted and i have to stop the program using CTRL+C otherwise it would make the computer very slow
[23:53] <Mavrik> mhm
[23:53] <Mavrik> matu, yeah, you're not using ffmpeg
[23:53] <matu> oO
[23:53] <Mavrik> also, put ss before i parameter.
[23:54] <matu> i tried using -ss time - i file but it does not work
[23:54] <ubitux> just use ffmpeg and you'll be fine
[23:54] <Mavrik> grab a non-ancient ffmpeg binary to replace your libav (like from here: http://dl.dropboxusercontent.com/u/24633983/ffmpeg/index.html ), use that and then we can see what's goin on :)
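Mavrik's reordering, written out. With `-ss` before `-i`, ffmpeg seeks in the input instead of decoding everything up to 00:08:24. Note that `-t` is a duration, not an end time: if matu wanted frames from 00:08:24 to 00:08:27, the original `-t 00:08:27` asked for roughly 8.5 minutes of JPEGs, which would also explain the machine slowing down; `-t 3` is the likely intent.

```shell
# Fast input seek, then extract 3 seconds' worth of frames.
cmd="ffmpeg -ss 00:08:24 -i s01e04_lhopital_fr_3_P3_1z7.mp4 -t 3 image-%d.jpeg"
echo "$cmd"
```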
[23:55] Action: Mavrik draws another line next to the symbols of Ubuntu and a huge stick on the wall.
[23:57] <matu> ok i am downloading the file from the url you posted right now
[23:57] <matu> i am very tired >_<
[00:00] --- Mon May 27 2013

