[Ffmpeg-devel-irc] ffmpeg.log.20121208

burek burek021 at gmail.com
Sun Dec 9 02:05:01 CET 2012


[01:10] <elkng> I have 2 files, "1.avi" and "1.srt"; is it possible to render the subtitles into the video using ffmpeg?
[01:14] <llogan> elkng: https://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20burn%20subtitles%20into%20the%20video
[01:19] <elkng> it says: "With recent versions (starting Nov 29th 2012) of FFmpeg, you can also simply use the subtitles filter: ffmpeg -i video.avi -vf subtitles=subtitle.srt out.avi"
[01:19] <elkng> that means there's no need to use "--enable-libass"?
[01:19] <ubitux> you need --enable-libass
[01:20] <ubitux> it's using the same code
[01:20] <ubitux> except that it can decode any supported text subtitle format
[01:20] <ubitux> https://www.ffmpeg.org/ffmpeg.html#subtitles
[01:21] <elkng> you angry ? so much text
[01:22] <ubitux> so much text? oO
[01:22] <ubitux> that section is pretty small
[03:50] <NoSu> is there any reason to use ffmpeg-mt, or has it been folded into '-threads 0'?
[03:53] <NoSu> we have an ffserver that streams 4 cams at up to 320x240; if we raise this to 640x480 the CPU (3.0GHz P4-HT) craps a brick. If I switch to a newer mobo and CPU (AMD Athlon 64 X2 3800+ / 2.0GHz), will it handle these 4 live streams a little better?
[03:53] <JEEBsv> NoSu: ffmpeg-mt was merged into ffmpeg between march and june/july of 2011
[03:53] <NoSu> thank you JEEBsv
[04:03] <NoSu> does ffserver take advantage of multithreading?
[04:18] <ubitux> ffserver uses fork() iirc
[06:39] <roboman2444> is it possible to have two audio inputs?
[06:39] <roboman2444> err
[06:39] <roboman2444> for a flv rtmp output that is
[06:39] <roboman2444> im trying to screencast to ustream, which i am successful at
[06:40] <roboman2444> but i want both my system audio and my mic being captured
[07:12] <Aristide> roboman2444: ffmpeg -i "file.avi" -i "file01.mp3" -i "file02.wav" -i "lol.wav" "output.avi" ?
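A command like the one above only selects inputs; by default ffmpeg picks a single audio stream for the output rather than mixing them. A sketch of what roboman2444 is after, mixing system audio and a mic into one flv/rtmp output, assuming a Linux box with x11grab, PulseAudio capture devices, and a build that has the amix filter; the device names and stream URL are placeholders:

    # inputs: 0 = screen, 1 = system audio, 2 = microphone (names are placeholders)
    ffmpeg -f x11grab -video_size 1280x720 -i :0.0 \
           -f pulse -i SYSTEM_MONITOR_DEVICE \
           -f pulse -i MIC_DEVICE \
           -filter_complex "[1:a][2:a]amix=inputs=2[a]" \
           -map 0:v -map "[a]" \
           -c:v libx264 -c:a libmp3lame -ar 44100 \
           -f flv "rtmp://SERVER/APP/STREAM_KEY"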
[10:56] <Samus_Aran> how can I prevent ffmpeg from complaining that it doesn't know the output format, when all I'm doing is dumping the audio with -acodec copy? there should be no question of what the output format is when it's copying it
[10:56] <Samus_Aran> this is in a script where I don't know the audio format
[10:58] <sacarasc> You either give the output file an extension or use -f blah as an output option.
[10:59] <Samus_Aran> ffmpeg isn't smart enough to set the file extension when it already knows the format?  -.-
[11:00] <sacarasc> You can put multiple different audio codecs into different containers.
[11:00] <sacarasc> Pretty much all of them can go into matroska, for example.
[11:00] <Samus_Aran> there's only one
[11:00] <Samus_Aran> the one it is copying with -acodec copy
[11:01] <Samus_Aran> how can I extract just the format, to use as the file extension?
[11:01] <sacarasc> ffmpeg -i foo.bar and some awk magic.
[11:05] <Samus_Aran> the awk magic I chose was: 2>&1|awk -F'[: ,]' '/    Stream.*: Audio: / {print $10}'
[11:05] <Samus_Aran> cheers
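For scripts, ffprobe can pull the codec name directly, which is sturdier than parsing ffmpeg's banner with awk; a sketch assuming a reasonably recent ffprobe (file names are placeholders, and the codec name only sometimes doubles as a sane extension):

    # grab the first audio stream's codec name, e.g. "mp3" or "aac"
    ext=$(ffprobe -v error -select_streams a:0 \
          -show_entries stream=codec_name \
          -of default=noprint_wrappers=1:nokey=1 input.avi)
    ffmpeg -i input.avi -vn -acodec copy "audio.$ext"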
[13:10] <mpfundstein> hello
[14:19] <Lexx69> Hey everyone, I've got ~11k small files that I need to convert. I've got a script set up and everything's working fine, but the console windows that pop up take focus, so I can't do anything else on the PC it's running on. Is there a way to get one ffmpeg window to work through a batch file of commands, or alternatively to run ffmpeg without it creating a console window?
[14:58] <Lexx69> Should have added that I'm on a Windows system
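One way around the focus-stealing is to run everything sequentially in a single console: loop inside one .bat instead of spawning a window per file. A sketch for cmd.exe, assuming the inputs sit in one folder; -nostdin keeps ffmpeg from grabbing keyboard input:

    rem convert.bat: runs all conversions one after another in this console
    for %%f in (*.avi) do ffmpeg -nostdin -i "%%f" "out\%%~nf.mp4"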
[18:13] <funyun> can i use ffmpeg to encode multiple videos into one? the videos are all the same AR and shot with the same camera..
[18:13] <JEEB> check out the concat demuxer (not protocol)
[18:15] <ubitux> or concat filter
[18:15] <funyun> JEEB: awesome. thank you :)
[18:17] <undercash> i thought files had to be mpeg for this to work
[18:25] <ubitux> undercash: with concat protocol yes
[18:25] <ubitux> we are talking about concat demuxer, and concat filter
[18:25] <undercash> ok sorry
[18:28] <dar_ek> Hello, is there a possibility to generate an RTMP stream with ffserver/ffmpeg? I've compiled ffmpeg with librtmp, but I don't know how to configure ffserver to use this transport...
[18:29] <JEEB> dar_ek, I think ffmpeg now has its own rtmp(e) stuff?
[18:29] <JEEB> at least for output
[18:30] <dar_ek> hmmm, maybe ffmpeg, but I'd rather use ffserver, to control connections and transfer..
[18:30] <dar_ek> normally ffserver uses http transport, or possibly rtsp
[18:31] <dar_ek> I can't find any ffserver option to use rtmp transport.
[18:32] <JEEB> there probably aren't any :P You can just send rtmp(e) to a streaming server from ffmpeg, but I don't think ffserver can be the thingy that serves people who want to see the stream...
[18:34] <dar_ek> pity
[18:34] <dar_ek> I need to generate live stream
[18:35] <dar_ek> h264 in flv (or mpegts) is ok for the web
[18:35] <dar_ek> on pc
[18:36] <dar_ek> but smartphones don't recognize it...
[18:38] <dar_ek> Q: What's the best transport/container/codec setup for live streams to smartphones (Android/iPhone)?
[18:59] <undercash> i do live stream with ffmpeg
[18:59] <undercash> working really nice
[19:00] <dar_ek> hi, with what codecs, protocols (container)?
[19:00] <undercash> flv
[19:01] <dar_ek> ok, I do, too. but I have problem with smartphones
[19:01] <undercash> http://pastebin.com/zP0QXNES
[19:01] <undercash> somebody from here helped me
[19:02] <undercash> dunno about smartphones.. maybe if you use an iOS-compatible stream, it would be fine?
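For iOS, the usual answer is HTTP Live Streaming; a sketch using the segment muxer to emit an HLS playlist, assuming a build with libx264, libfdk_aac, and the segment muxer (input, paths, and encoder settings are placeholders):

    ffmpeg -i INPUT \
           -c:v libx264 -profile:v baseline -c:a libfdk_aac -b:a 96k -ar 44100 \
           -f segment -segment_time 10 -segment_list live.m3u8 seg%03d.ts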
[19:03] <Yulth> Hi everyone!
[19:03] <dar_ek> hmm, thx, i'll try this
[19:03] <dar_ek> you used an rtmp transport
[19:03] <undercash> i am on lucid so yea i had to compile lib-rtmp
[19:04] <undercash> but on precise, it is in the repos
[19:04] <undercash> it's in the ffmpeg install guide
[19:04] <dar_ek> uhm, I compiled it too
[19:05] <dar_ek> but for now, I'm trying to use rtmp via ffserver
[19:05] <undercash> i did, to stream to ustream; their rtmp server wouldn't accept my stream, unlike justin
[19:05] <undercash> lib rtmp fixed that
[19:05] <undercash> can't help sorry
[19:05] <dar_ek> no worries, thx very much
[19:05] <dar_ek> maybe I'll give up on using ffserver
[19:06] <dar_ek> and will generate the stream directly from ffmpeg like you
[19:07] <Yulth> Am I executing this command-line wrong? It doesn't work:     ffmpeg1 -i - -acodec -f aac libfdk_aac -profile:a aac_he -ab 40k -ar 44100 -ac 2 -
[19:07] <funyun> JEEB, ubitux: do you mean ffmpeg -f concat -i "movie1.mpg:movie2.mpg"? because when i try that, i get "movie1.mpg:movie2.mpg: Protocol not found"
[19:09] <ubitux> -f is to specify a demuxer in this case
[19:10] <ubitux> 'concat:movie1.mpg|movie2.mpg' would be a protocol
[19:11] <ubitux> concat demuxer: https://www.ffmpeg.org/ffmpeg.html#concat-1
[19:11] <ubitux> concat protocol: https://www.ffmpeg.org/ffmpeg.html#concat
[19:11] <ubitux> concat filter: https://www.ffmpeg.org/ffmpeg.html#concat-2
[19:12] <funyun> ubitux: so i need a filter for mpg files?
[19:12] <ubitux> for mpeg, concat protocol is fine
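For reference, the concat demuxer form mentioned above reads a plain list file instead of a pseudo-URL; a minimal sketch:

    # mylist.txt contains one line per input:
    #   file 'movie1.mpg'
    #   file 'movie2.mpg'
    ffmpeg -f concat -i mylist.txt -c copy joined.mpg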
[19:16] <funyun> ubitux: so the final syntax would be "ffmpeg -i 'concat:movie1.mpg|movie2.mpg' -vcodec libx264 -preset veryslow -crf 18 -threads 0 final.mp4"?
[19:17] <ubitux> you want to re-encode?
[19:17] <funyun> ubitux: yes
[19:17] <ubitux> should be ok at first sight
[19:17] <funyun> ubitux: alright. thanks so much. :)
[19:17] <ubitux> why -threads 0?
[19:18] <funyun> ubitux: i read that was best?
[19:18] <ubitux> how so?
[19:18] <funyun> or fastest
[19:18] <ubitux> i thought it was automatic
[19:18] <funyun> no clue. is there a better option?
[19:18] <ubitux> what happens if you don't specify anything?
[19:19] <funyun> ubitux: that works also. i was just looking for the fastest method
[19:19] <Yulth> In this pastebin I've attached one command and the error message it shows. Could somebody help me please on why that command doesn't work? Thanks!   http://pastebin.com/Z922xWF2
[19:19] <ubitux> funyun: doesn't it use the threading anyway?
[19:19] <ubitux> 0 is likely an alias for "auto", which should be the default
[19:19] <funyun> ubitux: yes, i believe so
[19:20] <funyun> i think i read default was 4
[19:20] <funyun> i could be wrong tho
[19:20] <ubitux> Yulth: remove -f aac
[19:20] <ubitux> funyun: look at your cpu usage you'll see
[19:21] <Yulth> ubitux: ok solved, I had to change -f aac to -f adts
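So the working version of the earlier pipe command would look something like this (assumes a build with libfdk_aac enabled):

    # mp3 in on stdin, ADTS-framed HE-AAC out on stdout
    ffmpeg -i - -acodec libfdk_aac -profile:a aac_he -ab 40k -ar 44100 -ac 2 -f adts -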
[19:26] <funyun> ubitux: without -threads 0, ffmpeg is using 583% CPU
[19:26] <ubitux> then that's a good sign it works ;)
[20:38] <Yulth> Well, I have a really hard problem (at least for my knowledge): Is there any way to feed an ffmpeg process through its STDIN at the correct bytes/sec speed to obtain a real-time audio stream on its STDOUT? I mean: on-the-fly mp3 to aac conversion and, at the same time, sending the AAC stream through a TCP connection
[20:44] <klaxa> Yulth: see the -re flag
[20:45] <klaxa> or use another player to produce output with the correct amount of bytes/sec
[20:51] <notacatnoradog> hi, does this channel also support avconv?
[20:52] <ubitux> no
[20:53] <ubitux> ask the fork instead, #libav
[20:53] <notacatnoradog> alright, thank you ubitux
[20:53] <ubitux> any reason you want to use avconv instead of ffmpeg?
[20:54] <notacatnoradog> ubitux: just what's in my distro
[20:54] <ubitux> ok
[20:54] <ubitux> debian-like?
[20:54] <notacatnoradog> ubitux: is there a reason I should make an effort to use ffmpeg intsead?  Yes, debian
[20:57] <ubitux> ffmpeg has something like ~30 more formats, ~50 more codecs, ~50 more filters, thousands of additional bug & security fixes, and various other features
[20:57] <ubitux> but that's not an objective statement ;)
[20:59] <ubitux> notacatnoradog: for example for the faststart issue you have with avconv, you can use -movflags +faststart
[20:59] <ubitux> which is not available with the fork
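Usage sketch for the flag just mentioned; it remuxes with no re-encode, moving the moov atom to the front so playback can start before the whole file has downloaded:

    ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4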
[20:59] <notacatnoradog> ubitux: ah, yes I noticed that :)
[20:59] <notacatnoradog> well, I noticed those options didn't work on avconv anyway
[21:00] <ubitux> note that it might choke on your device because of the h264 level selected
[21:11] <Yulth> klaxa: I understand, but the question is: how do I feed ffmpeg so it produces the correct number of bytes/sec for the aac format, for example?
[21:12] <klaxa> well where does your STDIN stream come from? a file?
[21:12] <klaxa> another stream?
[21:16] <Yulth> klaxa: my stdin comes from a PHP script that internally invokes ffmpeg. The php script has to read an mp3 file at the correct speed and feed ffmpeg, in order to allow ffmpeg to produce the correct number of bytes/sec (AAC format)
[21:17] <klaxa> so it's a file?
[21:17] <Yulth> yes
[21:17] <klaxa> can't you just pass the file directly and use the -re flag?
[21:17] <klaxa> this way ffmpeg encodes at realtime speed
[21:18] <klaxa> as if it was an audio stream
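A sketch of that suggestion: point ffmpeg at the file itself and let -re pace the read at native speed, writing ADTS AAC to stdout for the script to relay (file name and bitrate are placeholders, libfdk_aac assumed):

    ffmpeg -re -i input.mp3 -acodec libfdk_aac -ab 40k -ar 44100 -ac 2 -f adts -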
[21:18] <Yulth> very interesting
[19:19] <Yulth> let me try this option :) Thanks a lot!!
[21:20] <klaxa> :)
[21:51] <notacatnoradog> thanks for the help and advice ubitux and saste
[22:36] <klaxa> is there a way to easily determine groups of pictures?
[22:36] <klaxa> h264 codec that is
[22:37] <klaxa> if i have it in a matroska container i can list I-Frames, is that enough?
[22:37] <klaxa> is every I-Frame starting a new GOP?
[22:40] <Mavrik> if I read the standard correctly... yes
[22:40] <Mavrik> you may only have one I-frame in a GOP
[22:40] <Mavrik> however frames in a GOP may refer to frames in other GOPs
[22:41] <klaxa> well... that would be... problematic
[22:42] <Mavrik> so you need to look for IDR frames :)
[22:42] <klaxa> oh what exactly are IDR frames?
[22:42] <klaxa> are those grouping up completely independent blocks of frames?
[22:44] <klaxa> ah i read a few sentences about it and it seems to clear the set of pictures available for reference
[22:45] <Mavrik> klaxa: those are just I-frames marked as "IDR"
[22:45] <klaxa> however, does that ensure that i can encode a small portion between two i-frames marked IDR without having the rest of the video?
[22:45] <Mavrik> klaxa: the standard states that there shall never be a frame after an IDR frame that would reference a frame before IDR
[22:46] <Mavrik> thus allowing seeking, cutting etc. :)
[22:46] <klaxa> can a frame before an IDR frame reference a frame after an IDR frame though?
[22:46] <Mavrik> nope
[22:46] <klaxa> so it really cuts the video in totally independent parts, correct?
[22:46] <Mavrik> IDR acts as a wall for references so you can start decoding stream on that point and have all the info
[22:46] <Mavrik> yep
[22:46] <klaxa> nice, exactly what i wanted, now how the hell do i find those?
[22:48] <Mavrik> IDR frames have NAL unit type 5
[22:49] <klaxa> i see... looks like i'll have to do more research than i thought
[22:50] <Mavrik> it's easier if you tell us what you want to achieve :)
[22:52] <klaxa> right, i want to encode via a botnet of some kind: each client is assigned a short clip that is independent from the rest, encodes it, uploads it; if there are still unassigned pieces left, it fetches a new one, encodes that one, uploads it, and so on
[22:53] <klaxa> so you have an arbitrary number of clients encoding for you
[22:53] <klaxa> and in the end you concat all files and have a re-encode
[22:53] <klaxa> it should be faster than re-encoding on one machine
[22:53] <klaxa> right?
[22:53] <Mavrik> I understand
[22:54] <Mavrik> klaxa: the thing is... consider the network latency and download speeds first
[22:54] <Mavrik> and if you go through with it, yeah, you'll need to cut on IDR frame boundaries
[22:55] <Mavrik> or... you can decode frames on a single machine (which isn't all that expensive) and pass raw frames to encoders
[22:55] <Mavrik> depending on how fast your network infrastructure is
[22:55] <klaxa> network infrastructure would be consumer level :P
[22:55] <klaxa> kind of crowdsourcing for encoding
[22:56] <Mavrik> ah
[22:56] <klaxa> i believe that if you have a lot of clients doing the encoding, it could speed up encoding time significantly
[22:56] <Mavrik> klaxa: upload/download times can easily kill all your advantage here if you don't have gigabit-grade infrastructure, you know :\
[22:56] <Mavrik> 100Mbit links would work though
[22:56] <Mavrik> just make sure you check your math about that first :)
[22:57] <klaxa> yeah...
[22:58] <klaxa> now that i think about it... i guess even with two clients it would still be faster than one computer doing all the work, right?
[22:58] <klaxa> i mean including downloading and uploading
[22:58] <klaxa> for the coordination there'd be servers with high bandwidth
[22:58] <klaxa> but encoding would be done on consumer level pc, with consumer level internet connection
[22:59] <klaxa> because servers with good CPUs are too expensive for this project, and quite a few people have strong cpus at home :P
[23:00] <Mavrik> :P
[23:00] <Mavrik> I think you'll have to try it... we usually just ended up buying more CPU power :)
[23:01] <klaxa> heh
[23:01] <klaxa> i think it would scale well though
[23:02] <klaxa> if you have 100 clients each encoding 5-second clips (or whatever the distance between idr frames is), i think it would speed up encoding quite a bit; up- and downloading is part of the process anyway
[23:02] <klaxa> so uploading would be sped up too
[23:03] <Mavrik> remember, quality will be lower probably
[23:03] <Mavrik> so you'd want to keep clips as long as possible
[23:03] <Mavrik> (like cut a 2-hour video into 4 pieces)
[23:03] <klaxa> why would the quality decrease?
[23:04] <Mavrik> because encoders track encoded bitrate to keep within limits over the movie
[23:05] <Mavrik> and by passing a 5-minute video you'll see more weird peaks in bitrate
[23:05] <klaxa> i see...
[23:06] <klaxa> bitrate based encoding is stupid with h264 right?
[23:07] <klaxa> at least i thought so until now
[23:07] <Mavrik> it is
[23:07] <Mavrik> unless you need to keep a constant bitrate
[23:07] <klaxa> and if i were to use constant rate factor for short clips the bitrates would spike too much
[23:07] <klaxa> is there an average bitrate mode that would make sense?
[23:09] <Mavrik> hmm, I think not... but I suggest you test just how bad such effect is :)
[23:10] <klaxa> yeah i'll try... first i have to learn how to get the timecodes of idr frames :P
[23:10] <klaxa> are there any end-user h264 frame parsers?
[23:10] <klaxa> i queried the arch repo with "h264" and it only showed decoders and encoders; google didn't list anything useful either
[23:11] <Mavrik> hmm, not that I'd know of
[23:11] <Mavrik> I usually just wrote my own; finding the NAL startcode and then the NAL type is rather easy
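Short of writing a parser, a reasonably recent ffprobe can list per-frame flags, which is close enough for finding cut points; for h264, key_frame is set on IDR frames (entry names vary across versions; pkt_pts_time is called pts_time in newer builds):

    ffprobe -v error -select_streams v:0 -show_frames \
            -show_entries frame=key_frame,pkt_pts_time,pict_type \
            -of compact input.mkv | grep key_frame=1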
[23:13] <klaxa> are there timecodes in h264? because when i used mkvextract to extract the raw h264, mplayer wasn't able to play it at the correct framerate; if i extracted it with ffmpeg, it wouldn't recognize the h264 correctly and interpreted it as DV video; ffplay would just fail
[23:14] <Mavrik> uuum, I think you need to store it in Annex B format
[23:16] <klaxa> so it is possible, but not necessary?
[23:19] <klaxa> hmm every frame has a NAL Unit, right? so if i just count the NAL Units i'd get the framecount? i guess that would be enough
[23:19] <Mavrik> klaxa: no, every frame is in a NAL unit
[23:19] <Mavrik> there are other types of NAL units
[23:19] <Mavrik> some of them carrying just metadata
[23:20] <klaxa> ah, but if i find a NAL Unit i can check by the NAL unit type whether or not it is a frame?
[23:21] <Mavrik> yeah
[23:21] <Mavrik> sometimes some frames go into several NAL units though.
[23:22] <klaxa> is that noted in the NAL unit type though?
[23:22] <Mavrik> klaxa: I would suggest passing muxed files around
[23:22] <Mavrik> it'll be easier for you
[23:22] <klaxa> i still have to know where to cut the video
[23:23] <klaxa> that's the reason i asked about the h264 specs (btw, thanks for your patience to explain all this to me)
[23:25] <Mavrik> klaxa: well you can find the first NAL with IDR type (which could be followed by more if the frame is fragmented) and cut there
[23:25] <Mavrik> klaxa: also, software like ffmpeg can usually cut at the right places for you :)
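For the cutting side, the segment muxer can split a file at keyframe boundaries without re-encoding; a sketch (file names and segment length are placeholders):

    ffmpeg -i input.mkv -map 0 -c copy -f segment -segment_time 300 part%03d.mkv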
[23:26] <klaxa> yeah i noticed that, however it seems like the timecodes are off when doing so (the video starts at a positive non-zero timestamp) and it might not cut right before the next cut point
[23:29] <klaxa> i think rather than discussing possibilities and concerns i'll try out a few things, thanks for your time and patience
[23:30] <Mavrik> yeah :)
[00:00] --- Sun Dec  9 2012


More information about the Ffmpeg-devel-irc mailing list