[Ffmpeg-devel-irc] ffmpeg.log.20141010

burek burek021 at gmail.com
Sat Oct 11 02:05:01 CEST 2014


[00:25] <JodaZ> is there some option to best effort burn in subtitles?
[00:25] <JodaZ> like if they are in a format ffmpeg can handle, do it, otherwise don't fail
[00:50] <JodaZ> hmm, chrome doesn't seem to display embedded webvtt in webm's is that known?
[00:52] <c_14> http://www.iandevlin.com/blog/2012/06/html5/google-chrome-supports-webvtt-subtitles ?
[01:25] <JodaZ> c_14, so far the idea was to use them as external files referenced in a video track tag, being able to embed them into webm's directly is new (also new to ffmpeg)
[01:33] <mediocregopher_> I'm trying to stream webm, using an ffserver which is version 1.0.9 and a client which is 2.4, but I keep getting the error "Cannot allocate memory"
[01:33] <mediocregopher_> both have been built with --enable-libvpx
[01:33] <mediocregopher_> here is the config: http://gobin.io/ejct and here is the output from the client: http://gobin.io/Pfrk
[01:34] <mediocregopher_> if anyone has any tips I'd really appreciate it, I'm totally lost
[01:58] <voip_> decode_band_types: Input buffer exhausted before END element found
[01:58] <voip_> Error while decoding stream #0:0: Invalid data found when processing input
[03:09] <ukkonen> Can anyone help me with the compilation of ffmpeg 2.4.2 ?
[03:23] <ukkonen> Is literally everybody afk or why don't I see any messages at all?
[04:19] <delet> how to download in best quality available?
[06:55] <zenny> Hi, trying to remove a static background from a moving foreground, yet no luck. I used <ffmpeg -report -y -i "FORGROUND.mp4" -i "BACKGROUND.mp4" -filter_complex "[1:v]format=yuva444p,lut=c3=128[video2withAlpha],[0:v][video2withAlpha]blend=all_mode=difference[out]" -map "[out]" "OUTPUT.mp4">, but no go. any inputs? Thanks!
[07:47] <zenny> ??
[08:43] <ghartz> hello
[11:33] <hungnv> hi everyone, I use the scale filter "scale=iw*min(640/iw\,360/ih):ih*min(640/iw\,360/ih), pad=640:360:(640-iw*min(640/iw\,360/ih))/2:(360-ih*min(640/iw\,360/ih))/2" in the command line: ffmpeg -i bpgoc.ts -filter:v "scale=iw*min(640/iw\,360/ih):ih*min(640/iw\,360/ih), pad=640:360:(640-iw*min(640/iw\,360/ih))/2:(360-ih*min(640/iw\,360/ih))/2" out.ts and it works great, then I pass the same string to the scale filter in my application and it throws this error: Input picture width (854) is greater than stride (640), please help!
[11:50] <ubitux> hungnv: you can use print() to debug eval
[11:56] <Chuck_> hi all. I have a question about container streams in ffmpeg, and can't find the answer in the docs
[11:58] <Chuck_> I have an mpeg2 file that contains 2 streams that are seen as separate 'tracks'. when i operate on it using ffmpeg, the streams get merged into a single 'track'. cli output: http://pastebin.com/Wt0jzZh0
[11:58] <Chuck_> can anyone explain what is happening here?
[12:42] <hungnv> ubitux, I don't know how :-D. the output looks like this http://pastebin.com/CqFKRuDv when I run my application
[12:43] <hungnv> 854 is width of input video, I don't know why it produces error like this
[12:44] <ubitux> because it's calling pad with a smaller value maybe
[12:45] <hungnv> ubitux, the filter string applied to this application is simple, I stole ffmpeg/doc/example/transcoding.c, change filter_desc = null to "scale=iw*min(640/iw\\,360/ih):ih*min(640/iw\\,360/ih), pad=640:360:(640-iw*min(640/iw\\,360/ih))/2:(360-ih*min(640/iw\\,360/ih))/2";
[12:46] <hungnv> is it the right way?
[12:47] <ubitux> i suppose you're talking about filtering_video.c
[12:47] <hungnv> no, transcoding.c
[12:47] <ubitux> there is no filter_desc in transcoding.c
[12:47] <hungnv> filter_spec, sorry
[12:47] <ubitux> anyway, try to reproduce with ffmpeg first
[12:48] <hungnv> ffmpeg runs file
[12:48] <hungnv> fine*
[12:48] <ubitux> with that -vf ?
[12:48] <hungnv> yes, with this command ffmpeg -i bpgoc.ts -filter:v "scale=iw*min(640/iw\,360/ih):ih*min(640/iw\,360/ih), pad=640:360:(640-iw*min(640/iw\,360/ih))/2:(360-ih*min(640/iw\,360/ih))/2" out.ts
[12:49] <hungnv> same input file, same numbers
[12:49] <ubitux> i don't know
[12:50] <hungnv> hic
[13:01] <hungnv> ubitux, "you can use print() to debug eval" <<< how can I do this?
[13:42] <hungnv> ubitux, got it working! Thank you sir!
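The min()-based letterbox arithmetic in hungnv's filter string can be checked by hand outside ffmpeg. A minimal shell sketch of the same computation (an 854x480 source fitted into a 640x360 frame is an assumed example, not taken from the log):

```shell
# Check the min()-based letterbox math from the filter string by hand.
# Assumed example: an 854x480 source fitted into a 640x360 target frame.
IW=854; IH=480; TW=640; TH=360

# scale factor r = min(TW/IW, TH/IH); awk handles the floating point,
# int() truncates like the filter's integer evaluation does
set -- $(awk -v iw="$IW" -v ih="$IH" -v tw="$TW" -v th="$TH" \
  'BEGIN { r = tw/iw; if (th/ih < r) r = th/ih;
           printf "%d %d", int(iw*r), int(ih*r) }')
SW=$1; SH=$2

# pad offsets centre the scaled picture inside the target frame
PX=$(( (TW - SW) / 2 )); PY=$(( (TH - SH) / 2 ))
echo "scale=${SW}:${SH},pad=${TW}:${TH}:${PX}:${PY}"
```

The same intermediate values can be printed by ffmpeg itself by wrapping a sub-expression in print(), which is the debugging technique ubitux suggested above.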
[15:00] <t4nk861> hi
[15:00] <t4nk861> what to add to this command to save recording also on disk, one ffmpeg process ffmpeg -loglevel info -re -rtbufsize 400M -f dshow -i video="UScreenCapture" -f dshow -i audio="virtual-audio-capturer" -c:v libx264 -force_key_frames expr:gte(t,n_forced*2) -b:v 1000k -minrate 1000k -maxrate 1000k -bufsize:v 1000k -preset:v veryfast -pix_fmt yuv420p -tune film  -c:a libmp3lame -q:a 2 -ar 44100 -f flv "rtmp://localhost:1935/app/live_27039
[15:00] <t4nk861> thx for help
[15:01] <c_14> https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs#Teepseudo-muxer
[15:02] <c_14> replace everything after the '-ar 44100 -f' with a tee output as described in the link
[15:04] <t4nk861> thx
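The tee-muxer approach c_14 linked can be sketched as follows. This only builds and prints the output specification; the rtmp URL and recording filename are placeholders, not taken from the original command:

```shell
# Sketch: one ffmpeg process both streams to rtmp and records to disk
# via the tee pseudo-muxer. URL and filename below are assumed examples.
STREAM='rtmp://localhost:1935/app/live'
RECORD='recording.mkv'

# Each tee branch gets its own container format in [f=...] options
TEE_OUT="[f=flv]${STREAM}|[f=matroska]${RECORD}"

# This replaces everything after '-ar 44100' in the original command:
echo "-f tee \"${TEE_OUT}\""
```

Depending on the codecs involved, tee setups often also need explicit `-map` options and `-flags +global_header`, as the linked wiki page describes.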
[15:07] <t4nk861> my second question. On windows 7, using this command FFmpeg after 2 hours (every time it's a similar duration) starts dropping frames all the time. Tried everything I can but can't fix it. Any suggestions?
[15:08] <t4nk861> win xp no problems
[15:14] <t4nk861> here is log just before it happens http://pastebin.com/NKnDTe0G
[15:14] <t4nk861> there is a 10 second break and then the dropping starts again
[15:16] <c_14> I'm going to go ahead and blame dshow.
[15:16] <t4nk861> this 10 seconds silent?
[15:20] <c_14> The fact that it's dropping in the first place.
[15:21] <ghartz> ubitux, ?
[15:21] <tedy> both codec_decode_frame(avctx, frame, pkt, got_frame) and codec_encode_frame(avctx, pkt, frame, got_pkt) are capable of returning immediately with got_frame or got_pkt set to 0. My question is: where do they store the delayed frames so that they can be collected later?
[15:22] <ubitux> ghartz: ?
[15:23] <ghartz> I'm the one with the POWER8 from Online :)
[15:28] <zenny> Hi again with the same issue that I posted some 8 hours ago ;-) : trying to remove a static background from a moving foreground, yet no luck. I used <ffmpeg -report -y -i "FORGROUND.mp4" -i "BACKGROUND.mp4" -filter_complex "[1:v]format=yuva444p,lut=c3=128[video2withAlpha],[0:v][video2withAlpha]blend=all_mode=difference[out]" -map "[out]" "OUTPUT.mp4">, but no go. any inputs? Thanks!
[15:33] <edakiri> Wish to put 1 video above another. overlay=0:in_h gives error: "Error when evaluating the expression 'in_h' for y"
[15:34] <edakiri> I expect it to work according to the documentation.
[15:34] <c_14> edakiri: pretty sure you'll have to pad first
[15:35] <anshul_mahe> I use openSuse and I have installed ffmpeg using make and make install
[15:36] <c_14> edakiri: also, in_h isn't a variable for overlay. there's main_h and overlay_h
[15:36] <anshul_mahe> in pkg-config --list-all there is no library name avformat
[15:37] <c_14> It's called libavformat, where did you install it?
[15:37] <edakiri> Thanks.
[15:38] <anshul_mahe> pkg-config --exists libavformat --print-errors is showing success but it is not listed in --list-all
[15:39] <anshul_mahe> c_14: do you see libavformat with pkg-config --list-all|grep avformat
[15:41] <c_14> yep
[15:44] <anshul_mahe> So What should I check, What is wrong with my pkg-config
[15:45] <c_14> Where did you install ffmpeg?
[15:47] <anshul_mahe> I did make install without any prefix on my system, ffmpeg binary is located  /usr/local/bin/
[15:52] <c_14> try export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH maybe?
[15:55] <anshul_mahe> no help :(
[15:55] <c_14> Hmm, no clue then.
[16:09] <edakiri> For 'select' filter, what are the pict_type values S,SI,SB,BI ? I know types I,P,B.
[16:18] <edakiri> How can I filter to only pass key frames and have them be 1s apart from each other?
[16:19] <edakiri> This selects key frames, but they have the same time between them as in the original: ffmpeg -an -i In.mp4 -filter select=key -an Out.mp4
[16:19] <c_14> select I frames and set fps to 1
[16:20] <edakiri> I don't see how that sets FPS to 1, like c_14 says.
[16:20] <c_14> hmm?
[16:21] <edakiri> Oh, I thought you were a bot interpreting what I typed.  ;-)
[16:22] <c_14> Not last time I checked.
[16:22] <c_14> What I meant was -vf select=key,fps=1
[16:23] <edakiri> Thanks for explaining more detail because '-r1' to the output did not give the desired result.
[16:23] <c_14> -r1 didn't do it?
[16:23] <c_14> Might need to put the fps filter in front of the select filter then
[16:31] <edakiri> c_14: fps did not change the output.
[16:31] <delet> anyone can help me?
[16:32] <edakiri> (visibly)
[16:32] <edakiri> and I tried fps=120 to compare.
[16:53] <batasrki> good morning all
[16:53] <batasrki> I've a fairly simple question
[16:53] <batasrki> Is there a command in ffmpeg that will fix timecodes in a file?
[16:54] <batasrki> basically, we record a file consisting of a live stream and another video file. We think that switching between the two dynamically messes with the timecodes, which results in a corrupted recording
[17:32] <edakiri> c_14: https://paste.debian.net/hidden/15a19ef6/
[17:34] <c_14> And using -vf fps=1,select=key doesn't change anything?
[17:35] <edakiri> I'll try. Here was with select,fps rather than fps,select https://paste.debian.net/hidden/d1e7fb57/
[17:39] <edakiri> c_14: fps=6,select=key changes the speed of replay. It does not put the key frames at that interval from one another.
[17:40] <c_14> wait
[17:40] <c_14> What exactly do you want?
[17:40] <c_14> You want the keyframes from the input in the output and each of them 1s apart?
[17:41] <edakiri> c_14: yes.
[17:42] <edakiri> like a slide show of key frames for preview.
[17:43] <c_14> afaik, either -vf fps=1,select=key or -vf select=key,fps=1 should work
[17:43] <edakiri> Would like to not reencode them, but 'select' is incompatible with '-codec copy'.
[17:44] <edakiri> The first one has an effect, but the wrong one. The second form has no effect for fps.
[17:44] <ChocolateArmpits> Do you want the original quality or the original compression?
[17:44] <edakiri> ChocolateArmpits: both.
[17:45] <ChocolateArmpits> Then save them as an uncompressed image sequence, uncompressed YUY2 stream or a 444 Prores stream
[17:46] <ChocolateArmpits> You can also try parsing the video file with ffprobe and get a frame number list for keyframes. Then with ffmpeg use that frame number list to extract a frame from the file and save it as a png image.
[17:47] <ChocolateArmpits> Two for loops with batch if you're on windows
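ChocolateArmpits' two-step idea starts with asking ffprobe for a list of keyframes. A sketch of what that probe invocation might look like (input.mp4 is a placeholder, the field names match 2014-era ffprobe, and the command is only printed here, not executed):

```shell
# Sketch: list per-frame picture type and timestamp, so keyframe (I-frame)
# positions can be fed to a second extraction loop. input.mp4 is assumed.
PROBE="ffprobe -v error -select_streams v:0 \
-show_entries frame=pict_type,pkt_pts_time -of csv input.mp4"
echo "$PROBE"
```

A second loop over the reported timestamps would then run ffmpeg with `-ss <time> -frames:v 1` per keyframe to save each one as a PNG.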
[17:47] <edakiri> I'm trying to make a new video file containing only the key frames.
[17:47] <ChocolateArmpits> And my suggested method would do just that
[17:47] <ChocolateArmpits> you can replace the png image sequence to any other uncompressed format
[17:48] <edakiri> Yes, I can do it in 2 stages. I think if I understand ffmpeg and ffmpeg works correctly, it can be done in 1 pass.
[17:48] <ChocolateArmpits> Well, c_14 already suggested a method using ffmpeg exclusively, and it didn't work out for you
[17:49] <edakiri> Yes. I think ffmpeg might not be working as it should. I think I understand what the answer should be.
[17:50] <ChocolateArmpits> Is the video stream encoded in h264 ?
[17:51] <edakiri> ChocolateArmpits: yes
[17:51] <edakiri> I suspect time stamps from the original are surviving attached to the frames.
[17:52] <edakiri> and the -r or fps options for ffmpeg are not causing them to be replaced.
[17:52] <edakiri> No, wait. In MPEG4, timestamps are not attached to frames. They are independent.
[17:53] <edakiri> The time stream is independent of both video and audio and both refer to it.
[17:53] <edakiri> but maybe somehow timestamps are getting dragged past the output fps filter.
[17:54] <delet> make: *** [libavfilter/vf_subtitles.o] Error 1
[17:55] <edakiri> delet: which version?
[17:55] <edakiri> delet: a release or unreleased?
[17:56] <delet> last from git
[17:56] <delet> edakiri git clone
[17:57] <edakiri> delet: if you have not tried compiling the release, try it so that you know you can.
[17:57] <c_14> edakiri: 'select=key,setpts=(RTCTIME - RTCSTART) / (TB * 1000000)' <- that works
[17:57] <edakiri> ooooh!
[17:57] <c_14> ffmpeg was keeping the timestamps from the input
[17:57] <c_14> had to override them
[18:03] <edakiri> c_14: that was better, but they seem to not be always 1s apart. Adding another '-r 1' appears to work fully.
[18:07] <edakiri> c_14: maybe what you typed should work by itself because no part depends on which frame # of output it is.
[18:08] Action: edakiri tries with NB_CONSUMED_SAMPLES
[18:12] <anshul_mahe> c_14: I got it, there was an error in leptonica pc file, because of which --list-all was not listing avformat library
[18:14] <Lindrian> Hello. I have a bluray rip (VIDEO_TS etc) that I would like to convert into .mkv. Could anyone help me?
[18:16] <kepstin-laptop> assuming it's not encrypted and there's no BD+ copy protection or anything, the video and audio are just in some mpeg-ts streams. It's simply a matter of having ffmpeg read those directly (with concat if needed)
[18:19] <elliotd123> anyone know if it's possible to have ffmpeg output closed captioning on stdout?
[18:20] <edakiri> elliotd123: for mpeg files, captions are video, not text.
[18:20] <edakiri> i'm not sure about capture from a card.
[18:21] <elliotd123> I see
[18:21] <elliotd123> no we're capturing from multicast streams, but that explains a lot
[18:21] <edakiri> even using CONSUMED_SAMPLES, output is not evenly spaced. select=key,setpts=((PTS-STARTPTS+NB_CONSUMED_SAMPLES)/5)
[18:21] <elliotd123> is there a separate PID them for captions?
[18:21] <elliotd123> *then
[18:23] <elliotd123> (In an mpegts)
[18:23] <Lindrian> kepstin-laptop: is there a guide?
[18:25] <edakiri> c_14: thanks for your help. With another '-r',the result is what I seek.
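For reference, the combination that finally worked for edakiri, c_14's setpts override plus an extra output `-r`, can be put together as one command. This sketch only prints the command; In.mp4 and Out.mp4 are placeholders:

```shell
# Sketch: keep only keyframes and space them 1s apart on output.
# select=key drops non-keyframes; setpts discards the input timestamps
# (which ffmpeg was otherwise preserving); -r 1 fixes the output rate.
VF='select=key,setpts=(RTCTIME-RTCSTART)/(TB*1000000)'
echo "ffmpeg -i In.mp4 -an -vf \"$VF\" -r 1 Out.mp4"
```

As noted above, this requires re-encoding: the select filter is incompatible with `-codec copy`.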
[18:33] <c_14> np
[19:05] <anshul_mahe> is there any function in libavutil to convert a pts to hh:mm:ss format
[19:05] <anshul_mahe> is there any structure where hh:mm:ss are stored? I am using the library for development
[19:14] <anshul_mahe> i got it, its av_timecode_get_smpte_from_framenum
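Beside the libavutil timecode API anshul_mahe found, the split itself is plain integer arithmetic once the pts has been rescaled to whole seconds. A minimal shell sketch (3725 is an assumed example value; this does not use the libavutil API):

```shell
# Sketch: pts-in-seconds -> hh:mm:ss with plain integer arithmetic.
# 3725 is an assumed example value (1h 2m 5s).
PTS_SECONDS=3725
H=$(( PTS_SECONDS / 3600 ))
M=$(( (PTS_SECONDS % 3600) / 60 ))
S=$(( PTS_SECONDS % 60 ))
TC=$(printf '%02d:%02d:%02d' "$H" "$M" "$S")
echo "$TC"
```

In library code the rescaling step itself would be done with the stream's time_base (e.g. via av_rescale_q) before applying this split.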
[19:24] <Lindrian> Bumping my issue
[19:25] <Lindrian> Anyone know how I can use ffmpeg to convert my bluray files into mkv?
[19:26] <Baked_Cake> what r bluray files called again
[19:26] <Baked_Cake> vob maybe
[19:26] <kepstin-laptop> ffmpeg -i <something in bd directory>.m2ts output.mkv - add encoding parameters and stream selection stuff to taste.
[19:27] <Baked_Cake> o m2ts
[19:27] <Baked_Cake> ya its the same as any other file
[19:27] <Lindrian> kepstin-laptop: I'm unsure of those parameters. I would like to maintain as high quality as possible, both sound and video wise.
[19:27] <Lindrian> Is that possible?
[19:27] <Baked_Cake> u might have to use a program to copy the file off the disc cause copyright protection
[19:28] <kepstin-laptop> Lindrian: in that case, use -c:v copy -c:a copy to just copy the streams rather than reencoding.
[19:28] <Lindrian> Baked_Cake: its already on my harddrive
[19:28] <Lindrian> kepstin-laptop: thank you!
[19:28] <Baked_Cake> damn ur not gonna compress it
[19:28] <Lindrian> Is this correct? "ffmpeg -i /my/bluray/file.m2ts output.mkv -c:v copy -c:a copy"
[19:28] <kepstin-laptop> output filename goes last
[19:29] <Baked_Cake> u dont need the file structure if u put the file in the bin folder
[19:29] <Lindrian> ffmpeg -i /my/bluray/file.m2ts -c:v copy -c:a copy output.mkv <-- like so?
[19:29] <kepstin-laptop> that'll obviously not reduce the file size or anything, and it'll only select one audio stream (the "default" one)
[19:29] <Lindrian> Is it possible to extract all audio-streams?
[19:29] <Lindrian> How do you make it compress it?
[19:30] <Baked_Cake> you have to use the -map to tell it
[19:30] <Lindrian> (Sorry for all the questions)
[19:30] <Baked_Cake> if u do just "ffmpeg -i filename pause" it will spit out all the info on the streams
[19:30] <kepstin-laptop> Lindrian: regarding compressing it, see https://trac.ffmpeg.org/wiki/Encode/H.264 for the video, and maybe https://trac.ffmpeg.org/wiki/Encode/HighQualityAudio for the audio
[19:31] <Lindrian> actually, im fine with a 50gb mkv file. i got plenty of space, cant be bothered with that headache right now haha
[19:31] <Baked_Cake> when u know what streams are there u can do -map 0:0 -map 0:1 etc to keep the streams you want
[19:31] <Lindrian> Baked_Cake: thanks!
[19:32] <Baked_Cake> hv
[19:32] <Baked_Cake> hf*
[19:32] <Lindrian> How do I extract subtitles?
[19:32] <Baked_Cake> extract them or keep them
[19:32] <kepstin-laptop> bluray subtitles are images, you can't easily extract them as text. You can copy them to the output mkv as-is, tho.
[19:33] <Lindrian> Meaning they become hard-coded subs?
[19:33] <kepstin-laptop> no, they remain a separate track.
[19:33] <Lindrian> ah cool
[19:34] <Lindrian> I have a bunch of m2ts-files, how does that work? Which do I use?
[19:34] <kepstin-laptop> use -map to select a subtitle stream, and stick a "-c:s copy" to tell ffmpeg not to try to modify them.
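Putting kepstin-laptop's advice together, a remux that keeps every stream untouched (video, all audio tracks, and the image-based subtitles) might look like this. The sketch only prints the command; the .m2ts filename is a placeholder for whichever file turns out to hold the main feature:

```shell
# Sketch: lossless bluray remux to mkv. -map 0 keeps ALL streams from
# input 0 (not just the defaults); -c ... copy avoids re-encoding.
SRC='BDMV/STREAM/00000.m2ts'   # assumed main-feature file name
CMD="ffmpeg -i $SRC -map 0 -c:v copy -c:a copy -c:s copy output.mkv"
echo "$CMD"
```

Matroska can carry the bluray PGS subtitle pictures as-is, which is why they stay a separate, toggleable track rather than becoming hard-coded.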
[19:34] <kepstin-laptop> Lindrian: watch them with a video player to see which ones you want?
[19:34] <Lindrian> no i mean there is a "STREAM"-folder with a bunch of m2ts files.
[19:35] <Lindrian> They're all small-ish
[19:35] <Lindrian> BDMV/STREAM
[19:35] <kepstin-laptop> the smallish ones are usually stuff like menu backgrounds and things; the main picture will be in bigish ones.
[19:35] <Lindrian> The first one is 30gb, the rest are 200-500mb
[19:35] <Lindrian> Ah.
[19:35] <Baked_Cake> probly commercials and stuff
[19:35] <kepstin-laptop> but just watch them in a video player to see
[19:35] <kepstin-laptop> they're just mpeg ts streams, you can play them as-is
[19:36] <Lindrian> They're on my headless server.
[19:36] <Lindrian> So it's not super easy to watch them
[19:38] <Baked_Cake> commercials, producer scenes at the titling, and maybe the title menu itself
[19:38] <Baked_Cake> nothing worth keeping usually
[19:39] <kepstin-laptop> unless you collect movie trailers or whatnot.
[19:39] <Baked_Cake> ya
[19:40] <Lindrian> hehe
[19:40] <kepstin-laptop> but there's no real easy way to automatically find out what each of the m2ts files is for, since it's managed by the java code in the menu program, mostly...
[19:41] <Lindrian> fascinating
[19:44] <Baked_Cake> coalgirls and utw should take a page out of ur book and just convert to mkv
[19:44] <Baked_Cake> that way i could do a source encode every time i have to d/l their bloat
[19:44] <kepstin-laptop> well, the coalgirls practice of converting pcm audio to flac does save a little bit of space :)
[19:52] <Baked_Cake> i def prefer 5.1 flac over the ac3
[19:52] <Baked_Cake> i dont think ive seen very many groups that have 5.1 in aac
[20:02] <zenny> Hi, asking again for some inputs ;-) . I am trying to remove a static background from a moving foreground, yet no luck. I used <ffmpeg -report -y -i "FORGROUND.mp4" -i "BACKGROUND.mp4" -filter_complex "[1:v]format=yuva444p,lut=c3=128[video2withAlpha],[0:v][video2withAlpha]blend=all_mode=difference[out]" -map "[out]" "OUTPUT.mp4">, It does not show any errors, yet it does not remove the background. :-(
[20:08] <ChocolateArmpits> zenny, you should probably try out Blender's Compositing -> Difference Keying
[20:10] <zenny> ChocolateArmpits: Thanks, but I am trying to do that in command line with ffmpeg. Is that something impossible with ffmpeg? Just wondering!
[20:10] <kepstin-laptop> yeah, if what you're trying to do is turn the areas of a video that match a static background transparent, that can't be done with ffmpeg.
[20:10] <kepstin-laptop> at least, not very easily.
[20:12] <zenny> kepstin-laptop: yep, that is what I am trying to achieve. The background didn't change at all and have both background with/out moving parts.
[20:13] <zenny> @kepstin-laptop: Indeed, thanks to your advice that make me learn a lot about ffmpeg! You are genius! :D
[20:14] <zenny> *dinner time* ;-)
[20:22] <batasrki> hey all
[20:22] <batasrki> I'm wondering if anyone has tried taking in two RTMP sources and combining them through ffmpeg to output one stream
[20:23] <ChocolateArmpits> zenny, if you want it fast and dirty you could probably try Avisynth and its Overlay filter set to Difference. Because avs script files can be loaded into ffmpeg, you could write a script that echoes all the needed lines with arguments in place to an avs script and then load that
[20:23] <batasrki> I'm noticing that if I do that using one source for both inputs, ffmpeg seems to process them in order they're laid out in the command rather than in parallel and separate from each other
[20:24] <ChocolateArmpits> batasrki, what do you mean by one stream? Composite them together?
[20:25] <batasrki> ChocolateArmpits: yeah
[20:57] <batasrki> hello?
[21:37] <voip_> hello guys
[21:38] <voip_> i have problem
[21:38] <voip_> [aac @ 0x2dab320] TYPE_FIL: Input buffer exhausted before END element found
[21:38] <voip_> Error while decoding stream #0:0: Invalid data found when processing input
[21:38] <voip_> [mpegts @ 0x2d99aa0] PES packet size mismatch
[21:38] <voip_> [aac @ 0x2dab320] Number of bands (29) exceeds limit (28).
[21:38] <voip_> Error while decoding stream #0:0: Invalid data found when processing input
[21:38] <voip_> [h264 @ 0x3659420] corrupted macroblock 64 38 (total_coeff=16)trate=1491.3kbits/s
[21:38] <voip_> [h264 @ 0x3659420] error while decoding MB 64 38
[23:38] <gcl5cp> is there a official guide to install ffmpeg in ubuntu 14.04?
[23:38] <c_14> https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
[23:41] <gcl5cp> ok, last time I tried it, it broke audacity, I don't remember why. thanks c_14
[23:53] <delet> [mp4 @ 0x2100a00] pts has no value
[23:53] <delet>     Last message repeated 147 times
[23:53] <delet> [mp4 @ 0x2100a00] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 447453 >= 447420
[23:53] <delet> av_interleaved_write_frame(): Invalid argument
[23:53] <delet> ??
[00:00] --- Sat Oct 11 2014

