[Ffmpeg-devel-irc] ffmpeg.log.20140911

burek burek021 at gmail.com
Fri Sep 12 02:05:01 CEST 2014


[00:18] <c_14> FFmpeg does not currently support motion interpolation.
[00:32] <DoctorTrombone> Greetings. How would I go about merging several audio input streams (some stereo, some mono) from my input file into one stereo output file?
[00:33] <Mavrik> DoctorTrombone, you'll probably have to use amix filter to mix them together
[00:33] <DoctorTrombone> Ah, that sounds tricky. Could you go into a little detail on that?
[00:35] <Mavrik> this might be a good starting point: https://trac.ffmpeg.org/wiki/AudioChannelManipulation
[00:35] <DoctorTrombone> Thanks!
[00:35] <Mavrik> DoctorTrombone, but basically, see how audiofilters work
[00:36] <Mavrik> DoctorTrombone, and then just tell amix filter (documentation on ffmpeg-filters page) to merge them together into one
[00:36] <DoctorTrombone> I'm confused, I thought you said this page was the one I wanted.
[00:36] <Mavrik> Well you want it if you want to do anything useful.
[00:36] <DoctorTrombone> I don't understand.
[00:38] <c_14> There's more than one place with documentation.
[00:39] <DoctorTrombone> c_14: I don't follow.
[00:39] <c_14> There's the wiki and there are the man pages.
[00:39] <DoctorTrombone> c_14: and I read both
[00:39] <DoctorTrombone> There's a reason I came to the IRC channel.
[00:40] <c_14> You have one input file with multiple audio streams that you want to merge into one Stereo stream?
[00:41] <DoctorTrombone> Yeah, my input file has around six audio streams.
[00:42] <c_14> And most streams are stereo but some are mono?
[00:42] <DoctorTrombone> c_14: That is correct, yes.
[00:44] <llogan> you'll need to provide the complete ffmpeg console output of "ffmpeg -i input", and a description of where each channel of each input stream should end up in the output.
[00:45] <DoctorTrombone> What?
[00:45] <llogan> we can't give an example without knowing what you want to do
[00:45] <DoctorTrombone> llogan: I just want to mix them all down to a stereo output file, with stereo tracks in stereo, and mono tracks on both channels.
[00:45] <DoctorTrombone> llogan: That might be true, but I explained what I wanted to do, didn't I?
[00:45] <llogan> now you did
[00:46] <c_14> And they all have the same sample rate and the same format?
[00:46] <c_14> And, pastebin the output.
[00:46] <DoctorTrombone> llogan: I did before, you may have missed it.
[00:46] <DoctorTrombone> c_14: Yes, same rate and format.
[00:47] <DoctorTrombone> http://pastebin.com/7uDPb736
[00:48] <DoctorTrombone> The video track is basically garbage, and I don't want one of the audio tracks but I can't tell which one at the moment.
[00:50] <DoctorTrombone> Honestly I'd probably use Sox if it understood .bik, but its format support is pretty limited.
[00:51] <c_14> -i file -filter_complex '[0:1][0:2][0:3][0:4][0:5][0:6][0:7]amerge=inputs=7,pan=stereo:c0<c0+c2+c4+c5+c6+c8+c9:c1<c1+c3+c4c+5+c7+c8+c10[aout]' -map '[aout]'
[00:51] <c_14> That should be it, or at least close enough.
[00:51] <DoctorTrombone> c_14: That looks brilliant.
[00:51] <DoctorTrombone> let me try it out
[00:53] <DoctorTrombone> It didn't work http://pastebin.com/idKPegAj
[00:55] <c_14> you messed up when copying what I typed
[00:55] <c_14> +c4+c5+c7
[00:55] <c_14> not +c4c+5+c7
[00:55] <DoctorTrombone> c_14: I used copy-paste, that's rather odd.
[00:56] <c_14> Hmm, if my syntax is so deprecated, why is it used on both the wiki and the manpages...
[00:57] <llogan> ignore that
[00:57] <DoctorTrombone> c_14: Thanks so much :) FFmpeg was pretty easy to understand, but edge cases are killers for any powerful tool.
[00:57] <c_14> np
[00:57] <c_14> llogan: I know it shouldn't break anything, I'm just wondering.
[00:58] <llogan> you may be able to use -ac 2 instead of pan.
[00:58] <DoctorTrombone> llogan: you mean after the whole merge inputs bit?
[00:58] <llogan> not sure if that will magically/lazily work
[00:58] <llogan> yes
[00:59] <DoctorTrombone> It worked fine, but having something like that for future reference is good.
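[For reference, the full command from this exchange with the copy-paste typo corrected; the file names are hypothetical, and the pan weights assume DoctorTrombone's layout (audio streams 0:1-0:7, giving channels c0..c10 after amerge, with the mono streams' channels sent to both sides):

    ffmpeg -i input.bik -filter_complex '[0:1][0:2][0:3][0:4][0:5][0:6][0:7]amerge=inputs=7,pan=stereo:c0<c0+c2+c4+c5+c6+c8+c9:c1<c1+c3+c4+c5+c7+c8+c10[aout]' -map '[aout]' output.wav

llogan's simpler variant, which lets ffmpeg downmix whatever amerge produces and worked for DoctorTrombone:

    ffmpeg -i input.bik -filter_complex '[0:1][0:2][0:3][0:4][0:5][0:6][0:7]amerge=inputs=7[aout]' -map '[aout]' -ac 2 output.wav]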
[01:00] <valder> Here's a really naive question, but if I read video in from any movie file (i.e. mp4, mov, etc.) can I assume the data is in RGB color space? Are all finished movie files in RGB? Can they be in YUV or some other format?
[01:00] <DoctorTrombone> Mm, that works as well. Thanks a bunch folks, and good day to you all.
[01:00] <Mavrik> valder, no, almost no video formats are in RGB
[01:01] <Mavrik> RGB is a terrible color space for video storage, the vast majority will use YUV as their preferred color space
[01:01] <valder> ok.. I thought YUV was camera only.
[01:01] <valder> thanks
[01:03] <valder> So I have a routine that seems to be writing video files out.. however, I'm not sure if it is properly handling all types of files.  Would anybody mind doing a review?
[01:04] <valder> Here's the pastebin of the relevant code: http://pastebin.com/Q2eZLxKB
[01:06] <Mavrik> valder, you should probably keep timebase as AVRational and make sure you convert it properly when combining frames
[01:06] <Mavrik> or your video won't play well
[01:07] <Mavrik> also remember, a lot of video formats have YUV420 enforced, so make sure your input videos use that color space or encoding will fail
[01:08] <valder> Mavrik: how do I go about verifying that? (YUV420) the assumption is that the user can supply any video file.  If it isn't supported (?) I should be able to tell and reject the file right? (how do I do that?)
[01:08] <Mavrik> input codec context has pix_fmt that tells you that
[01:08] <Mavrik> you can also use swscale to convert it
[01:09] <valder> right. so I commented out swscale b/c I didn't know if that was actually doing anything.
[01:10] <valder> I guess I should leave it in. :)
[01:10] <Mavrik> also you seem to be missing the write_frame function :P
[01:10] <valder> oh let me paste the entire code
[01:10] <Mavrik> also stuff like that is horribly uncool: numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
[01:11] <Mavrik> you're presuming that your frames are always in RGB24 (which I doubt they will be :) )
[01:12] <valder> http://pastebin.com/LZurNs7C
[01:13] <valder> Mavrik: please tell me the "cool" way to do it. :)  I found an example that did it that way and followed it.
[01:13] <Mavrik> ah, well if you're grabbing frames from an existing video
[01:13] <Mavrik> you'll have their color space passed as pix_fmt in the codec context
[01:13] <Mavrik> then use that when allocating data right? :)
[01:14] <valder> so for each input video, I want to inspect the pix_fmt value to determine if I need to convert it or not.  Right now I don't think I'm doing that correctly(or at all).
[01:15] <valder> so I think my code is geared for a specific video file type.  If I try to mix in another one, it will crash (is that what you expect?)
[01:15] <valder> that might be why I'm having issues, I'm making the assumption the input videos are all of the same format
[01:15] <Mavrik> yes
[01:16] <Mavrik> you're also making the assumption that all videos use same PTS/DTS and timebase :)
[01:17] <valder> So what do I need to do to streamline it all?
[01:17] <valder> 1) on opening an input file I need to check pix_fmt
[01:17] <valder> 1a) if it's not the right pix_fmt I need to convert it
[01:18] <valder> 2) check the PTS/DTS and timebase (frames per second?) and adjust reading in of the frames accordingly, say I want a target of 24fps.
[01:18] <valder> I should then be able to mix in the files right?
[02:08] <valder> Mavrik?
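[A minimal C sketch of steps 1 and 1a from valder's checklist above, under a 2014-era libav* API and assuming libswscale is linked in (valder had commented it out). The helper name is hypothetical and error handling is omitted; it checks the decoder's pix_fmt and converts to YUV420P only when needed:

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    /* Return a YUV420P version of 'frame' (hypothetical helper).
     * If the decoder already outputs YUV420P, no conversion is done. */
    static AVFrame *ensure_yuv420p(AVCodecContext *dec_ctx, AVFrame *frame)
    {
        if (dec_ctx->pix_fmt == AV_PIX_FMT_YUV420P)
            return frame;

        struct SwsContext *sws = sws_getContext(
            dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
            dec_ctx->width, dec_ctx->height, AV_PIX_FMT_YUV420P,
            SWS_BILINEAR, NULL, NULL, NULL);

        AVFrame *out = av_frame_alloc();
        out->width  = dec_ctx->width;
        out->height = dec_ctx->height;
        out->format = AV_PIX_FMT_YUV420P;
        av_frame_get_buffer(out, 32);        /* allocate the data planes */

        sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                  0, dec_ctx->height, out->data, out->linesize);
        sws_freeContext(sws);
        return out;
    }]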
[03:04] <active8> I lost the note on the command I used to slow down a video and keep the music pitch the same. It was not the filter PTS method - that produced crap sound. I think it was all ffmpeg - changing fps and rate, but may have been done otherwise, though I really don't recall stripping out the sound, slowing vid and aud separately and remuxing. Pretty sure it was all done with one ffmpeg command. Could someone please tell me how it's done? I managed to get 1/4 speed of flying fretboard fingers with good sound and no pitch shift.
[03:15] <vasilydernis> hey guys, i have a large file of say 700 seconds and i need to reduce it down to 600 seconds in length using 10 equal segments.. is this possible to do using ffmpeg in 1 command?
[03:15] <vasilydernis> or do i have to split the larger file using -ss and -t and then concat the segments?
[03:16] <vas123> i tried using select filter to no avail :(
[03:19] <c_14> Create a filter complex with 10 trim filters and concat it.
[03:20] <vas123> c_14: could you point me to where i can read up more on this?
[03:22] <c_14> https://ffmpeg.org/ffmpeg-filters.html#trim
[03:22] <c_14> https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join,%20merge)%20media%20files
[03:22] <active8> Example of the vid is youtube rush la villa strangiato acoustic - it's the intro
[03:24] <c_14> active8: https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video
[03:26] <vas123> c_14: so something like this? http://video.stackexchange.com/questions/10396/how-to-concatenate-clips-from-the-same-video-with-ffmpeg
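[A sketch of the trim/concat approach c_14 describes, shown with just two of the ten segments (file names and cut points are hypothetical). Note the setpts/asetpts after each trim/atrim to reset timestamps before concat:

    ffmpeg -i input.mp4 -filter_complex "[0:v]trim=0:60,setpts=PTS-STARTPTS[v0];[0:a]atrim=0:60,asetpts=PTS-STARTPTS[a0];[0:v]trim=70:130,setpts=PTS-STARTPTS[v1];[0:a]atrim=70:130,asetpts=PTS-STARTPTS[a1];[v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4

For the full job, repeat the trim/atrim pairs for all ten segments and set concat=n=10.]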
[03:26] <active8> as I said, I did not use PTS, and PTS produces crappy scratchy audio. I'm sure it was done with a resampling at a diff rate and a change in FPS. If not, if I DID use PTS, then there was more to it than at the link you posted because the audio is good. Pretty sure whatever I did, I learned it here. At least it was refined from help I got here.
[03:27] <c_14> Remember what month/day it was?
[03:27] <c_14> The channel is publicly logged.
[03:27] <c_14> Heck, if you know the month and your nick, you can just grep.
[03:29] <c_14> Oh, and if you find it ping me and I'll add it to that wiki page.
[03:30] <active8> ok. I guess the file timestamp might be a clue
[03:30] <active8> !welcome
[03:30] <active8> !topic
[03:31] <c_14> You can usually get the topic with /topic
[03:33] <active8> Ok some channels have "!" commands. Where's the log?
[03:34] <c_14> http://ffmpeg.gusari.org/irclogs/
[03:38] <Kirito> Whenever I use -threads 0 running ffmpeg, I've noticed that the progress indicator seems to glitch. What I mean is the bitrate and output size stop updating, but the time progress continues going up
[03:39] <Kirito> But the video itself seems to encode just fine
[03:39] <Kirito> Any idea why this may be..?
[03:39] <c_14> What version you running?
[03:39] <Kirito> ffmpeg version N-53307-g5a65fea
[03:39] <Kirito> Debian Wheezy build
[03:40] <Kirito> Actually it's not specific to -threads 0, it just seems to be a recent issue I've noticed
[03:41] <Kirito> Or rather it's only doing it when encoding with mkv as the container
[03:43] <Kirito> for i in *; do ffmpeg -threads 8 -i "$i" -map 0 -vcodec libx264 -tune animation -crf 18 -preset slow -c:a copy -c:s copy -c:d copy -c:t copy "./8bit/$i"; done --- is the command being run. Perhaps it has something to do with all the data I'm copying here, since ffmpeg complains that it can't recognize the codec of the font attachments in the mkv container I'm encoding?
[03:45] <c_14> Shouldn't be.
[03:45] <c_14> Try pressing d a few times to see if that refreshes the display.
[03:47] <Kirito> Nope, it just prints "error parsing debug value0 // debug=0"
[03:53] <active8> c_14 the log is from the end of day 2-18-2014 to 2-19-2014 with my nick, active8. At 00:15 logan pasted the filter bit from the wiki. That's what I used last night and the sound is horrible. The one diff is that back then, I clipped out just the intro part - I didn't slowmo the whole vid. Could it be I used a copy codec or q:v 0 ? It's an mp4 and, correct me if I'm wrong, that would involve both a re-encoding of h.264 and AAC audio to put the remaining frames back together, resulting in info loss.
[03:59] <c_14> What encoding settings?
[04:04] <c_14> Kirito: no clue
[04:05] <Kirito> Not a big issue anyway, I'll try and track down what specifically causes it later and submit a bug report to Debian's tracker I guess, thanks
[04:08] <active8> c_14: you asking me about encoding settings?
[04:08] <active8> or Kirito?
[04:09] <active8> I used no specific settings
[04:09] <c_14> active8: you
[04:09] <active8> mp4 to mp4 letting ffmpeg use the default - not sure if that is -c:v mpeg4 or libx264
[04:10] <c_14> And the audio sounds bad?
[04:11] <active8> i am getting closer. I clipped the intro and ran (wait one while I clean it up)
[04:11] <active8> ffmpeg -i quitar-intro.mp4 -filter_complex "[0:v]setpts=2*PTS[v];[0:a]atempo=0.5[a]" -map "[v]" -map "[a]" -q:v 1 -q:a 1 guitar-intro-half-speed.mp4
[04:12] <c_14> I think a higher q is better quality...
[04:12] <c_14> Not sure though.
[04:17] <active8> I think lower is higher quality. I remember testing that, but ffmpeg reports 0 is not right - that it should be 1-5, yet I recall a page where it was run up to -q:v 10 (maybe a different encoder supports higher values and the example was for a different container. AVI, IIRC). So after running the above, it sounds like what I did in Feb. Running it again to get 1/4 speed results in the same quality also - which was not so good. Either I should spec the encoder to libx264, or I need to run filter_complex like it is in the trac wiki on the line above the last line (IOW -filter_complex "[0:v]setpts=2*PTS[v];[0:a]atempo=0.5[a],[0:a]atempo=0.5[a]") but I'm not sure if that syntax will work
[04:18] <active8> q isn't quality, BTW it's quantization and maybe that's why lower is better - not gunna puzzle that through right now
[04:18] <active8> 8)
[04:18] <c_14> >Effective range for -q:a is around 0.1-10. This VBR is experimental and likely to get even worse results than the CBR.
[04:19] <active8> ffmpeg said 1-5 when I tried 0 (which has worked in the past, IIRC; -qscale 0 worked, too, according to my notes/snips)
[04:20] <active8> trac wiki on doubling the audio slowdown/speedup: ffmpeg -i input.mkv -filter:a "atempo=2.0,atempo=2.0" -vn output.mkv
[04:20] <active8> What I'm thinking for complex_filter: -filter_complex "[0:v]setpts=2*PTS[v];[0:a]atempo=0.5[a],[0:a]atempo=0.5[a]"
[04:21] <active8> think if it can be done, either it's as-is or s/,/;/
[04:21] <active8> or
[04:21] <active8> -filter_complex "[0:v]setpts=2*PTS[v];[0:a]atempo=0.5,atempo=0.5[a]"
[04:28] <active8> c_14: awesome!  I get better sound with: ffmpeg -i quitar-intro.mp4 -filter_complex "[0:v]setpts=4*PTS[v];[0:a]atempo=0.5,atempo=0.5[a]" -map "[v]" -map "[a]" -q:v 1 -q:a 1 guitar-intro-quarter-speed-complex2.mp4
[04:29] <active8> now maybe adding -c:v libx264 -c:a AAC might get even better. Is that the right codec and should it be right before the outfile name or before the filter?
[04:31] <c_14> shouldn't matter. I usually put it before the output file though.
[04:33] <active8> vlc says the audio is MPEG AAC Audio (mp4a)
[04:33] <active8> ffmpeg says there are two:
[04:35] <active8> DEA.L. aac  AAC (Advanced Audio Coding) (encoders: aac libfdk_aac )
[04:36] <active8> and D.A.L. aac_latm  AAC LATM (Advanced Audio Coding LATM syntax)
[04:37] <active8> So the question is what's the diff between aac and libfdk_aac, which I can find somewhere at trac or ffmpeg (the green pages), but not sure if there isn't a terse, simple explanation 8)
[04:38] <c_14> Use libfdk_aac
[04:38] <c_14> https://trac.ffmpeg.org/wiki/Encode/AAC
[04:39] <active8> and thanks, c_14 for pointing me to the irc logs. Just knowing I used the atempo filter was enough to clue me in to the possibility that I set the quantization. 8)
[04:39] <active8> triple thanks!
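[Putting the thread together, a version of active8's final command with explicit encoders. The CRF and audio bitrate here are illustrative rather than from the log, and libfdk_aac is only present in builds configured with --enable-libfdk-aac:

    ffmpeg -i quitar-intro.mp4 -filter_complex "[0:v]setpts=4*PTS[v];[0:a]atempo=0.5,atempo=0.5[a]" -map "[v]" -map "[a]" -c:v libx264 -crf 18 -c:a libfdk_aac -b:a 192k guitar-intro-quarter-speed.mp4

Chaining two atempo=0.5 stages is the documented way to get 0.25x speed, since atempo only accepts factors between 0.5 and 2.0.]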
[04:40] <ac_slater> hey all. I know this might not be a good place to ask (I'll post to the mailing list too). I have a .yuv file created via ffmpeg. I want to read it via libavformat and avio. Any clues?
[04:40] <ac_slater> I didn't see any examples of such a thing. I guess I could calculate the offsets and read the data myself via avio_file_map
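[One approach, assuming the .yuv file is headerless raw video: the rawvideo demuxer can read it if you tell it the geometry, since the file itself carries none. The size, pixel format and rate below are hypothetical and must match how the file was written. From the command line:

    ffmpeg -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 25 -i input.yuv output.mp4

Through libavformat, the same demuxer options can be passed as a dictionary:

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "pixel_format", "yuv420p", 0);
    av_dict_set(&opts, "video_size", "1280x720", 0);
    av_dict_set(&opts, "framerate", "25", 0);
    AVFormatContext *ic = NULL;
    avformat_open_input(&ic, "input.yuv", av_find_input_format("rawvideo"), &opts);
    av_dict_free(&opts); /* leftover entries were not recognized */]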
[05:20] <dahat> I'm developing an app to play basic videos with FFmpeg as a backend... and currently it accepts a local path into avformat_open_input(), now I am trying to expand it to target RTMP files, currently I can do the following at the command line, how can I apply the same args to avformat_open_input()? ffmpeg -rtmp_app <AppName> -rtmp_conn <AuthToken> -rtmp_playpath <PlayPath> -i rtmp://streaming.example.com/
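[Untested, but the usual way to hand per-protocol options like these to avformat_open_input() is the options dictionary, which libavformat passes down when it opens the URL. The string values here are placeholders standing in for <AppName>, <AuthToken> and <PlayPath>:

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "rtmp_app",      "<AppName>",   0);
    av_dict_set(&opts, "rtmp_conn",     "<AuthToken>", 0);
    av_dict_set(&opts, "rtmp_playpath", "<PlayPath>",  0);
    AVFormatContext *ctx = NULL;
    int ret = avformat_open_input(&ctx, "rtmp://streaming.example.com/", NULL, &opts);
    /* any entries still left in opts were not consumed/recognized */
    av_dict_free(&opts);]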
[06:12] <toeshred> Hello everyone, I am trying to change the default audio stream. One is English with commentary, and the other is just English, no commentary. I used the -map option to reorder so the 'no commentary' stream comes first, but the second audio stream always plays by default still, and has (default) next to it when I check the streams. I thought changing the stream order would also change which audio stream plays first, but I guess it doesn't. Is there a way to do this?
[08:00] <valder> anybody around that can help me debug a segfault I'm getting in my code?  I'm coding a native decoder/encoder for Android.  It seems to work, as it generates a video however after it creates the video, the program segfaults as it returns to the java method that called it.
[08:01] <valder> Here's the pastebin if it helps. http://pastebin.com/NeYE41hP
[08:31] <dahat> I'm developing an app to play basic videos with FFMPEG as a backend... and currently it accepts a local path into avformat_open_input(), now I am trying to expand it to target RTMP files, currently I can do the following at the command line, how can I apply the same args to avformat_open_input()? ffmpeg -rtmp_app <AppName> -rtmp_conn <AuthToken> -rtmp_playpath <PlayPath> -i rtmp://streaming.example.com/
[08:34] <valder> dahat: would you mind reviewing what I wrote to see if there are any issues with it?   my program keeps crashing when the render method returns back to the calling JNI method.
[08:35] <dahat> valder: I can always look... though I cannot guarantee any degree of competence with my review... coding against ffmpeg is something that is very new to me
[08:36] <valder> dahat: ditto. :)
[08:37] <dahat> I feel your pain... while there are a number of samples and bits of documentation out there... the general expectation seems to be 'figure it out yourself... then update the wiki or code with what you found'... which assumes quite a bit
[08:43] <toeshred> Hello everyone, I am trying to change the default Audio stream. One is english with commentary, and the other is just english no commentary. I used the -map option to reorder so the 'no commentary' stream comes first, but the second audio stream always plays by default still, and has (default) next to it when I check the streams. How do I change what is "(default)"?
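[The (default) flag is a stream disposition, not a function of stream order. Newer ffmpeg builds (this may postdate the build in the log) expose a -disposition option for exactly this; a sketch with hypothetical file names:

    ffmpeg -i input.mkv -map 0 -c copy -disposition:a:0 default -disposition:a:1 0 output.mkv

For Matroska files, mkvpropedit can also flip the default-track flag in place without remuxing.]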
[08:50] <valder> dahat: anything you can share would be great.  I'm pretty sure I've made a few mistakes.
[10:23] <termos> would it make sense to run image blurring on YUV420P frames, or is it better to convert them to RGB?
[10:37] <rivarun> c_14: on interpolation -- thanks, fwiw i found slowmoVideo which seems to do the trick
[13:03] <anshul_mahe> is there any bad effect of joining a B-frame enabled video stream and a non-B-frame enabled stream
[13:17] <anshul_mahe> before concat I am using -vcodec libx264 -bf 0 in one video and in another video -vcodec libx264
[13:21] <termos> I am trying to create an AVFrame as described here http://stackoverflow.com/questions/12831761/how-to-resize-a-picture-using-ffmpegs-sws-scale but I end up with an AVFrame that has width,height = 0,0 and some strange linesize[] values. Any ideas?
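[The likely cause: av_frame_alloc() only allocates the struct with zeroed fields; width, height and format have to be set by hand before the data buffers can be allocated. A minimal sketch (the target size is hypothetical):

    AVFrame *frame = av_frame_alloc();
    frame->width  = 640;                 /* destination size for sws_scale */
    frame->height = 360;
    frame->format = AV_PIX_FMT_YUV420P;
    if (av_frame_get_buffer(frame, 32) < 0) {  /* fills data[] and linesize[] */
        /* allocation failed */
    }]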
[13:57] <anshul_mahe> how to use --stitchable option in libx264
[14:01] <anshul_mahe> i am using x264 version 0.142.2
[14:03] <anshul_mahe> i got it - the -x264opts option
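[For reference, a sketch of passing the flag through -x264opts (file names hypothetical; assumes an x264 build new enough to know the option, which 0.142 should be):

    ffmpeg -i input.mp4 -c:v libx264 -x264opts stitchable=1 -c:a copy output.mp4]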
[14:13] <albertid_> Hi, where can I find information about the variables/constants in ffmpeg filters? Like iw and ih. Is there a variable that holds the length of the video?
[14:37] <ubitux> albertid_: these constants are filters specific
[14:37] <ubitux> look at the filters documentation, they are documented
[14:37] <ubitux> (or at least they should be)
[14:43] <albertid_> ubitux, I did. I was looking for a way to tell fade-out to start at "(length of video) - 1s", but I guess there is no way
[14:44] <ubitux> the length of the video is never known
[14:44] <ubitux> filters are stream based
[14:44] <ubitux> you can extract an approximation at best (or a wrong value at worst) from the container and use that
[14:46] <ubitux> ffprobe -v error -of flat -show_entries format=duration input.mov
[14:46] <ubitux> ffprobe -v error -of flat -show_entries stream=duration input.mov
[14:58] <albertid_> ubitux, thx
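[A shell sketch of the workaround ubitux describes: read the (approximate) container duration with ffprobe, then start a one-second fade-out one second before the end. File names are hypothetical, and this inherits the caveat that the container duration may be wrong:

    d=$(ffprobe -v error -of default=nw=1:nk=1 -show_entries format=duration input.mov)
    ffmpeg -i input.mov -vf "fade=t=out:st=$(echo "$d - 1" | bc):d=1" output.mov]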
[15:25] <techfreak> sup guys
[15:25] <techfreak> https://gist.github.com/Garont/2c57b5bdba72d286d148
[15:25] <techfreak> anyone can help me?
[15:52] <techfreak> fucking hate ffmpeg
[15:53] <sacarasc> Why?
[15:54] <sacarasc> Looks like your file is broken... Claims to be mono, but isn't.
[15:54] <ubitux> also 1.0
[15:54] <sacarasc> That too.
[15:55] <techfreak> and how to fix that?
[15:55] <relaxed> techfreak: http://johnvansickle.com/ffmpeg/
[15:55] <sacarasc> You could try with a more up to date ffmpeg...
[16:27] <nehaljwani> How do I use the pad filter?
[16:27] <techfreak> relaxed: thanks, works fine
[16:27] <nehaljwani> Suppose I have a video of res 1920x1440
[16:27] <nehaljwani> and I want to add padding at bottom to make it 1920x1080
[16:27] <nehaljwani> what should the params to the pad filter be?
[16:28] <c_14> nehaljwani: You do know that 1440 > 1080, right?
[16:28] <nehaljwani> oops
[16:28] <sacarasc> http://ffmpeg.org/ffmpeg-all.html#Examples-80 shows examples of how to use the pad filter.
[16:28] <nehaljwani> I should get some rest :-/
[16:29] <nehaljwani> what is the maximum 4:3 res that can fit in 1920x1080
[16:29] <relaxed> techfreak: you're welcome
[16:30] <sacarasc> 1440x1080?
[16:32] <nehaljwani> now I want to add padding so that the video is centralized.. what param should I give to pad?
[16:33] <c_14> you can use x=iw/2 and y=ih/2
[16:33] <c_14> I think that was it anyways
[16:33] <nehaljwani> what is iw/2 ?
[16:33] <c_14> input width divided by 2
[16:33] <c_14> https://ffmpeg.org/ffmpeg-filters.html#pad
[16:34] <c_14> aaaah, wait
[16:34] <c_14> (ow-iw)/2
[16:34] <c_14> that was it
[16:35] <nehaljwani> exactly.
[16:35] <nehaljwani> now I understand the parameters
[16:35] <nehaljwani> stupid googling didn't help
[16:35] <nehaljwani> thanks c_14
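[Putting the thread together: to pillarbox a 4:3 source into 1920x1080, scale to 1440x1080 and center it with pad, where (ow-iw)/2 and (oh-ih)/2 are the output-minus-input offsets discussed above (file names hypothetical):

    ffmpeg -i input.mp4 -vf "scale=1440:1080,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" output.mp4]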
[16:40] <nehaljwani> why did the Debian guys fork ffmpeg and make avconv?
[16:43] <nehaljwani> looks like it's a personal issue +1 to this :D
[17:11] <pa> if i have a movie, say in mp4, mkv or some other nice container, and i get a byte offset that is the exact offset of a keyframe, how do i find which keyframe that is / tell avconv to start transcoding exactly from that keyframe?
[17:11] <sacarasc> avconv is part of libav, a fork of ffmpeg. Their channel is #libav. Or did you mean the libav* libraries?
[17:12] <sacarasc> Wait, I brought up libav.
[17:12] Action: sacarasc fails at think.
[17:13] <pa> ah i thought that ffmpeg changed to libav completely.. so it's only a fork
[17:13] <pa> weird
[18:28] <t4nk438> i try to apply ``afade=t=in:st=0:d=3`` to an input audio stream but nothing changes. What could be the matter?
[18:36] <t4nk438> sorry :*| http://pastebin.com/DV8qqrXa why the afade filter doesn't work for this.
[18:37] <t4nk438> http://pastebin.com/8HYcgixD
[18:39] <c_14> Because if you don't name the arguments, the order is assumed to be type:start_sample:nb_samples:start_time:duration:curve:
[18:39] <c_14> ignore that trailing ':'
[18:40] <c_14> And 0 is not a valid number of samples to apply the effect for.
[18:40] <c_14> Which is what it says on line 22
[18:40] <c_14> and 20
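[In other words, naming the options avoids the positional trap; a sketch with hypothetical file names:

    ffmpeg -i input.wav -af "afade=t=in:st=0:d=3" output.wav

Unnamed, afade=in:0:3 would be read positionally as type=in, start_sample=0, nb_samples=3, i.e. a 3-sample fade rather than 3 seconds.]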
[20:13] <t4nk438> hi, is it a bug or normal behaviour? ``afade=t=out`` applied to the result of ``amovie`` makes it completely silent, ``afade=t=in`` applied to the result of ``amovie`` makes nothing. See http://pastebin.com/FupMtC4E The filters only apply as expected if input file is supplied via ``-i`` option. When i use ``amovie`` instead, things get corrupted.
[21:09] <mychele> hi everybody. I'm trying to understand if I can do this thing with ffmpeg and ffserver. I would like to install ffmpeg on a raspberry pi (actually I'm doing that, it's compiling) and use it as an input for an ffserver feed. So far so good. Then I would like to stream this feed to devices with iOS and Android, but I don't clearly understand what I can do. Thank you very much.
[21:37] <loadbang> When converting a directory of JPEG files into a video, is the only way to do this to rename all the JPEGs in a number sequence? Wondering if there is a way to do everything in date order.
[21:39] <llogan> loadbang: see the glob pattern
[21:40] <llogan> http://ffmpeg.org/ffmpeg-formats.html#image2-1
[21:41] <llogan> ffmpeg -pattern_type glob -i "*.jpg" output
[21:49] <loadbang> llogan: *.jpg: No such file or directory
[21:50] <llogan> i don't know if it works in Windows
[21:51] <loadbang> oh no, OS X for some reason put “ instead of "
[21:52] <llogan> you could use single quotes instead if you prefer
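[Note the glob pattern matches by name, not by date. One sketch for date order (untested; assumes GNU tools and file names without spaces or quotes): sort by modification time into a concat-demuxer list, giving each image a display duration:

    for f in $(ls -tr *.jpg); do printf "file '%s'\nduration 0.04\n" "$f"; done > list.txt
    ffmpeg -f concat -i list.txt -c:v libx264 -pix_fmt yuv420p output.mp4]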
[22:01] <dahat> Q: I'm developing an app to play basic videos with FFMPEG as a backend... and currently it accepts a local path into avformat_open_input(), now I am trying to expand it to target RTMP files, currently I can do the following at the command line, how can I apply the same args to avformat_open_input()? ffmpeg -rtmp_app <AppName> -rtmp_conn <AuthToken> -rtmp_playpath <PlayPath> -i rtmp://streaming.example.com/
[22:08] <shout-user93> hi
[23:04] <_dunno_> 'error while opening encoder' <-- 'ffmpeg -ss 00:01:23.45 -i INFILE.MPEG2 -t 01:23:45.00 -vf yadif,crop=692:432:12:72,scale=352:288 -c:a libmp3lame -b:a 128k -c:v mpeg4 -b:v 1000k -vtag DX50 OUTFILE.AVI'
[23:09] <_dunno_> error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height
[23:10] <_dunno_> fflogger thanks for the reminder - but there is not much to paste - see above
[23:13] <_dunno_> seems to be that scale part - but it's a multiple of 16 - and also within mpeg4p2 specifications, isn't it?
[23:35] <_dunno_> as long as the scale values are bigger than the input's original size it works - but i cannot downscale
[00:00] --- Fri Sep 12 2014

