[Ffmpeg-devel-irc] ffmpeg.log.20140909

burek burek021 at gmail.com
Wed Sep 10 02:05:01 CEST 2014


[00:00] <onyx> but when I do that, it doesn't add the new thumbnails, it replaces the existing ones
[00:02] <onyx> ideas?
[00:05] <c_14> use -start_number and lots of shell magic
[00:06] <onyx> example?
[00:11] <c_14> START_NUM=$(ls *.png | tail -n1 | sed -r 's/^.*([[:digit:]]+).*$/\1/')
[00:11] <c_14> or something
[00:12] <c_14> It might be better to get the command to start logging and then see what's actually going wrong...
[00:12] <cbsrobot> c=1; while (true); do out=$(printf "%06d.jpg" $c); ((c++)); ffmpeg -i in -vframes 1 $out; sleep 15; done;
[00:13] <cbsrobot> or similar
[00:13] <onyx> cbsrobot: what does that do?
[00:13] <cbsrobot> onyx run: c=1; while (true); do out=$(printf "%06d.jpg" $c); ((c++)); echo $out; sleep 15; done;
[00:13] <cbsrobot> and you'll see
[00:14] <onyx> run it by itself or in the ffmpeg command?
[00:15] <cbsrobot> just itself
[00:15] <cbsrobot> and wait for at least 15 seconds !
[00:17] <onyx> it is outputting 000001.jpg, 000002.jpg...
[00:18] <cbsrobot> that's your thumbnails
[00:19] <circ-user-9V9BU> hi everyone - i've got an ffmpeg build script that compiles v1.1.1 well, and succeeds without errors to build the latest stable release. when i run the updated ffmpeg with no parameter it fails with ffmpeg: symbol lookup error: ffmpeg: undefined symbol: vpx_codec_vp9_cx_algo. i've attempted various different builds of libvpx to no avail. i'm currently attempting to build off the git repo in case that helps, but if anybody is familia
[00:21] <onyx> cbsrobot: sorry I dont get it, I dont see any thumbnails on the server
[00:22] <cbsrobot> well then run the former command I sent and replace the arguments to match your requirements
[00:25] <geekdotneo> build off the git repo directly gave me the same result. is anyone here familiar with vpx_codec_vp9_cx_algo issues?
[00:26] <onyx> cbsrobot: could you modify my command to work that way? http://pastebin.com/cRWfj5Zm
[00:39] <c_14> onyx: just take what he posted and replace 'in' with 'http://livestream-url
[00:46] <onyx> ok one sec
[00:49] <onyx> c=1: Invalid argument
[00:52] <onyx> it is printing 0001.jpg etc... on terminal
[00:52] <onyx> but nothing shows up on the server
[00:52] <c_14> What's your current command?
[00:53] <onyx> http://pastebin.com/8piayHVb
[00:59] <onyx> from what I can see the command only prints the image name but is not taking the thumbnail
[01:02] <c_14> ehhh, the other one
[01:02] <c_14> c=1; while (true); do out=$(printf "%06d.jpg" $c); ((c++)); ffmpeg -i http://livestream_url -vframes 1 $out; sleep 15; done;
[01:05] <onyx> ok one sec
[01:07] <onyx> ok this is working
[01:08] <onyx> but how can I set it to only run from 5am till 8pm?
[01:19] <c_14> replace '(true)' with '[[ "$(date +"%T")" > '05:00:00' ]] && [[ "$(date +"%T")" < '20:00:00' ]]'
[01:20] <c_14> you'll need to start the command at 5am every day though
[01:20] <c_14> hmm
[01:20] <c_14> Do you want to restart the numbering every day?
[01:21] <c_14> Or do you want it to keep increasing until it hits overflow?
[01:21] <onyx> the goal is to have 5 days of times lapse
[01:21] <onyx> so I figure it will work like this
[01:22] <onyx> from 5 till 8pm thumbnails created
[01:22] <onyx> then around 9pm the time lapse video is created
[01:22] <onyx> 10 pm all .jpg files are deleted
[01:22] <onyx> repeat...
[01:22] <c_14> Right, then you can just throw that command into cron to be executed at 5 am
[01:23] <onyx> yep
[01:23] <c_14> And if you're going to start it at 5, just make it '[[ "$(date +"%T")" < '20:00:00' ]]'
[01:23] <onyx> ok
[01:23] <c_14> There's no point checking if the time is greater than 5am if you start the program at 5 am.
[01:23] <onyx> yep
[01:24] <c_14> That is, unless somebody starts messing around with your system clock.
[01:25] <onyx> mmm
[01:25] <onyx> c=1; while '[[ "$(date +"%T")" < '20:00:00' ]]'; do out=$(printf "%06d.jpg" $c);
[01:26] <onyx> [[ "$(date +"%T")" < 20:00:00 ]]: command not found
[01:26] <c_14> get rid of the single quotes
[01:26] <c_14> the ones around the [[]] part
[01:27] <onyx> ok running now
[01:27] <onyx> oh how do I set the path to the images folder?
[01:28] <c_14> put the path before the %06d
[01:28] <onyx> do out=$(printf "/path/to/images/folder/%06d.jpg"
[01:28] <onyx> like that?
[01:28] <c_14> ye
[01:28] <onyx> cool
[01:28] <cbsrobot> and rather use png than jpg !
[01:29] <onyx> why is that?
[01:29] <cbsrobot> jpeg is lossy
[01:29] <onyx> oh the image quality will be better with png then huh
[01:37] <onyx> ok cron setup...
[01:43] <onyx> it's running... let's hope it doesn't stop randomly...
[01:43] <onyx> off to gym, I'll report later =)
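Pulling the pieces of the exchange above together, a minimal sketch of the capture loop being discussed might look like the following; the livestream URL, output path and 15-second interval are placeholders taken from the conversation, not a tested setup:

    c=1
    # grab one frame every 15 seconds until 8pm, numbering files 000001.png, 000002.png, ...
    while [[ "$(date +"%T")" < '20:00:00' ]]; do
        out=$(printf "/path/to/images/folder/%06d.png" "$c")
        ((c++))
        ffmpeg -i http://livestream_url -vframes 1 "$out"
        sleep 15
    done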
[02:23] <gcl5cp> how do i pass x264 option (analyse=0x1:0) to ffmpeg?
[02:25] <joules> ffmpeg -h encoder=h264
[02:29] <joules> I suppose with -x264opts but never heard of analyse.
[02:29] <joules> is that specific to x264?
[02:38] <gcl5cp> x264opts doesn't work since it uses ':' as the separator; i tried analyse="0x1:0"
[02:38] <c_14> escape it
[02:38] <c_14> -x264opts "0x1\:0"
[02:38] <c_14> or something
[02:38] <c_14> assuming it uses \ as an escaper...
[02:41] <gcl5cp> still separate
[02:41] <c_14> Use more backslashes.
[02:41] <c_14> If at first you don't succeed, apply backslashes.
[02:41] <c_14> But if 2 don't work pastebin your command and output.
[02:42] <gcl5cp> ok: [libx264 @ 0xa8bc5c0] bad option '0': '1'
[02:43] <c_14> What's the command?
[02:45] <gcl5cp> deblock is another option uses ':'. command>  -c:v libx264 -tune stillimage -crf 30 -r %s -x264opts keyint=900:me=dia:ref=1:analyse=0x1\\:0:me_range=4:chroma_me=0:bframes=1:b_pyramid=0:b_adapt=0
[02:45] <gcl5cp> deblock=1:-3:-3
[02:48] <c_14> Hmm, same error when you have the same command with only one backslash?
[02:49] <gcl5cp> yes. here another psy_rd=2.00:0.70
[02:49] <c_14> Try 3 backslashes, then 4, then 5. If that doesn't work, I'll boot my other computer and start experimenting.
[02:51] <gcl5cp> i trying to set "pass1", to avoid stats file generated
[02:52] <gcl5cp> 7*\ doesn't work
[02:53] <gcl5cp> bad option '0': '1'
[03:00] <c_14> >options that use ":" as a separator themselves, use "," instead. They accept it as well since long ago but this is kept undocumented for some reason.
[03:00] <c_14> Right, documentation is helpful.
[03:01] <c_14> So just replace the ':' that separate options with ',' and forget all the escaping.
[03:02] <gcl5cp> wooo, so simple. thank c_14 very mucho
[03:04] <c_14> np
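As a sketch of the comma form the documentation excerpt describes, applied to the colon-containing values from gcl5cp's command (not verified against a particular x264 build):

    ffmpeg -i input -c:v libx264 -tune stillimage -crf 30 \
        -x264opts keyint=900:me=dia:ref=1:analyse=0x1,0:me_range=4:chroma_me=0:bframes=1:b_pyramid=0:b_adapt=0:deblock=1,-3,-3:psy_rd=2.00,0.70 \
        output.mp4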
[03:11] <onyx> hey c_14 is it possible to add a time stamp to this thumbnails?
[03:11] <c_14> Use the drawtext filter.
[03:12] <c_14> -vf drawtext='%{localtime:[strftime foo]'
[03:12] <c_14> See strftime(3) for time formats.
[03:13] <c_14> You can also set the font to use, the size + color. See the wonderful docs for more.
[03:17] <gcl5cp> to those who are bored, here's a question.
[03:17] <gcl5cp> what is the best x264 setting for a slideshow (no motion)? for fast encoding, lossless detail and high compression.
[03:18] <gcl5cp> and -tune stillimage is not the answer.
[03:29] <gcl5cp> is there a example script/command to do thumbnails with time stamp? onyx c_14
[03:30] <onyx> gcl5cp: im reading this http://einar.slaskete.net/2011/09/05/adding-time-stamp-overlay-to-video-stream-using-ffmpeg/
[03:31] <c_14> onyx: that entry is rather outdated
[03:31] <c_14> The method used is deprecated.
[03:31] <onyx> =(
[03:32] <c_14> well, most of it is fine, you just have to replace the %T with %{localtime: %T}
[03:32] <c_14> if you want to use the %T strftime sequence
[03:33] <c_14> gcl5cp: do you want one thumbnail from a video or lots of thumbnails from a video?
[03:34] <onyx> so "drawtext=fontfile=/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans-Bold.ttf: \text='\%{localtime: %T}': fontcolor=white at 0.8: x=7: y=460"
[03:34] <onyx> ?
[03:35] <gcl5cp> you've awakened my interest in thumbnails/screenshots. i think a script should use ffmpeg + imagemagick
[03:35] <c_14> onyx: should work
[03:36] <c_14> gcl5cp: what would you need imagemagick for?
[03:36] <gcl5cp> to compose all in one
[03:38] <gcl5cp> i am talking about "Movie thumbnails/screenshots creator"
[03:38] <c_14> You mean those things that make pages with a couple of thumbnails on them?
[03:39] <gcl5cp> http://moviethumbnail.sourceforge.net/
[03:40] <c_14> ye, those things
[03:41] <c_14> I used to use something similar, except that it was a 5000 line bash script...
[03:41] <c_14> That one used FFmpeg + imagemagick.
[04:16] <gcl5cp> 5000? why? c_14
[04:18] <gcl5cp> in python should be 500-900
[07:22] <joules_> !debian
[07:22] <joules_> what you guys think of lightworks?
[07:22] <joules_> I likes it.
[07:23] <joules_> Although I'm not sure I get full use of all my cores/hthreads encoding h264
[10:06] <luke_l> I have a question. When using ffplay to play a video over http and seeking, ffplay sends redundant requests with an increasing start range and no end range, and the video buffers a lot.
[10:06] <luke_l> the issue is reported at https://github.com/yixia/VitamioBundle/issues/202.
[10:07] <luke_l> I have question about http download of ffmpeg. Is there someone have time to discuss?
[10:07] <relaxed> file a bug report https://trac.ffmpeg.org/
[10:08] <relaxed> ask your question and if someone knows they'll answer
[10:17] <relaxed> luke_l: ^^
[10:19] <luke_l> don't know how to @relaxed. hello relaxed, do you have experience with ffmpeg's http.c?
[10:20] <luke_l> relaxed: do you have time to see https://github.com/yixia/VitamioBundle/issues/202 this issue?
[10:21] <Olive6767> Hi, I'm splitting a video using the following cmd: ffmpeg.exe -i in.mp4 -codec copy -f segment -segment_time 180 -reset_timestamps 1 out%02d.mp4 , the output numbering starts at 00. Is there any way I can make it start at 01?
[10:22] <relaxed> luke_l: sorry, I can't help you with that.
[10:22] <luke_l> Olive6767 a simple way is using shell to rename all the segments.
[10:22] <luke_l> relaxed: thanks all the same.
[10:23] <Olive6767> luke_l: sure, but no way to do it directly in my ffmpeg cmd?
[10:23] <relaxed> Olive6767: look at ffmpeg -h muxer=hls
[10:24] <luke_l> relaxed: awesome answer.
[10:24] <Olive6767> relaxed: -start_number doesn't seem to work, I think it's for pics output only
[10:24] <relaxed> Olive6767: oops, I meant ffmpeg -h muxer=segment
[10:26] <Olive6767> relaxed: -segment_start_number ;-) thx :)
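For reference, Olive6767's original command with the option found above would look roughly like this (untested sketch):

    ffmpeg -i in.mp4 -codec copy -f segment -segment_time 180 -segment_start_number 1 -reset_timestamps 1 out%02d.mp4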
[10:32] <joules_> damn mcdeint at medium is definitely better.
[10:33] <joules_> yadif->mcdeint->fps this is going to take a while.
[10:37] <relaxed> joules_: kind of old, but http://guru.multimedia.cx/deinterlacing-filters/
[10:39] <benbro> I have an mkv video that was recorded at 15 frames per second but ffprobe gives me 30k
[10:39] <benbro> what does 30k fps mean?
[10:39] <benbro> can I re-encode it and tell it it's 15 fps?
[10:40] <relaxed> how does it playback using ffplay?
[10:40] <Mavrik> benbro, 30kfps?
[10:40] <benbro> relaxed: I'll try now with ffplay. in vlc the audio lags behind the video and the lag increases over time
[10:45] <benbro> ffplay plays with a lag
[10:46] <benbro> http://dpaste.com/24SB0MH
[10:47] <benbro> this is the output of ffprobe
[10:47] <benbro> I know that the recorded frame rate is 15 fps. can I fix it?
[10:47] <joules_> relaxed: cool, mcdeint does a good job with yadif, but even with yuv4 and pcm_s16le it's ~18fps on a 12core from ramdisk to a ssd scratch disk.
[10:47] <joules_> be a massive file.
[10:50] <joules_> wouldn't mind it faster actually - "ffmpeg -threads 16 -i /tmp/vhs.avi -vf yadif=1:1,mcdeint=0:1,crop=678:576:14:0,scale=720:576,setdar=dar=4/3,fps=25 -f avi -c:v yuv4 -c:a pcm_s16le /mnt/disk/tmp/vhs.avi"
[10:51] <Mavrik> benbro, ffplay -r 15 file.mkv should do the trick
[10:52] <benbro> Mavrik: trying
[10:52] <benbro> Mavrik:  Failed to set value '15' for option 'r': Option not found
[10:53] <benbro> Mavrik: under windows
[10:53] <Mavrik> it seems that's only supported by ffmpeg then.
[10:53] <benbro> Mavrik: how can I convert (copy streams) with 15 fmps?
[10:53] <benbro> fps
[10:54] <joules_> don't transcode in a lightning storm.
[10:54] <joules_> benbro: what are you doing?
[10:55] <benbro> joules_: I capture the screen in 15 fps but ffprobe tells me it's 30k fps
[10:55] <benbro> I'm trying to "fix" the fps
[10:57] <joules_> 30k! high speed capture!
[10:58] <benbro> joules_: in the terminal while capturing I saw 15 fps
[10:58] <joules_> does it play fine?
[10:58] <joules_> or 2x
[10:58] <benbro> joules_: the audio lags behind the video
[10:58] <benbro> and the lag increases over time
[10:59] <benbro> but the audio and video seem to be fine separately
[10:59] <benbro> doesn't feel like 2x
[11:01] <joules_> benbro: so @ 30fps it plays fine?
[11:02] <benbro> joules_: it plays fine but the audio lag increases over time
[11:03] <joules_> can you pb the ffprobe.
[11:07] <joules_> benbro: does -vf fps=30000/1001 fix it?
[11:09] <benbro> joules_:  what's the complete command?
[11:10] <joules_> benbro: if it's muxed (video+audio) incorrectly then it has to be done again.
[11:10] <benbro> joules_: video+audio
[11:10] <benbro> so it can't be fixed?
[11:11] <joules_> benbro: the video is obviously playing too fast for the audio; a symptom of going out of sync over time is when the fps is 30000/1001, not 30.
[11:12] <benbro> joules_: ok. can I fix it in the command line?
[11:15] <joules_> not sure
[11:16] <joules_> benbro: ffmpeg -i <input> -vf fps=30000/1001 <output> but I'm not sure what that does to the audio.
[11:18] <joules_> maybe -c:a copy *shrug*
[11:36] <joules_> benbro: ffplay -vf setpts="0.999*PTS" <file> ?
[11:40] <benbro> joules_: trying
[11:42] <benbro> joules_: the audio is not in sync. not sure what it does
[11:59] <joules_> !pts
[11:59] <joules_> !setpts
[12:09] <krullie> I'm looking for an overview of what all the entries in the ffprobe output represent. Most of them are self-explanatory but others aren't.
[13:50] <benbro> I have a video with audio sampled at 41KHz but it thinks it is 48KHz and it makes the audio lag behind the video
[13:50] <benbro> how can I fix it?
[14:24] <benbro> joules_: when stretching the video by 1.09 the audio and video are in sync
[14:26] <c_14> benbro: you can try setting -ar as an input option. Might override the detection.
[14:26] <benbro> c_14: when converting?
[14:26] <benbro> I already have video and audio out of sync. now I need to somehow fix it
[14:27] <benbro> and found out that the audio is ~1.09 slower (constant)
[14:27] <c_14> I was referring to what you said earlier about the audio being sampled at 41KHz but being detected as 48KHz.
[14:28] <benbro> c_14: that's what I thought, but 48/41 is ~1.17 and my factor is 1.09
[14:29] <benbro> not sure what can give me a factor of 1.09
[14:29] <c_14> 44.1KHz
[14:29] <c_14> probably
[14:29] <c_14> 44.1 and 48KHz are the two most common sampling rat.s
[14:29] <c_14> *rates.
[14:31] <benbro> checking
[14:32] <benbro> c_14: you are right
[14:32] <benbro> how can I fix the file?
[14:33] <c_14> Hmm, what's the audio codec? What are you doing with the file?
[14:34] <c_14> you can try `ffmpeg -ar 44100 input -ar 44100 -c copy output'
[14:34] <c_14> I don't think ffmpeg can actually modify the sampling rate with codec copy, but it might be able to fix the metadata so it's detected correctly.
[14:35] <benbro> http://dpaste.com/3J4TYB9
[14:37] <c_14> Ok, try what I said and then ffprobe the output to see if it's detected correctly. If that doesn't work, try `ffmpeg -ar 44100 -i capture.mkv -c:v copy -c:a pcm_s16le -ar 44100 out.mkv'
[14:39] <benbro> thanks
[14:43] <benbro> the second command gives me "option sample rate not found"
[14:43] <c_14> -ar:a maybe?
[14:44] <benbro> same error
[14:44] <benbro> windows
[14:44] <c_14> What if you remove the output -ar
[14:44] <c_14> So it's just ffmpeg -ar 44100 -i capture.mkv -c:v copy -c:a pcm_s16le out.mkv
[14:45] <benbro> Option sample_rate not found
[14:47] <c_14> ffmpeg -i capture.mkv -c:v copy -c:a pcm_s16le -af aresample=44100 out.mkv
[14:51] <benbro> that does something :)
[14:52] <c_14> Let's hope it does the correct thing.
[14:53] <benbro> that's less important
[14:56] <benbro> now the video freezes
[14:56] <benbro> it's out of sync
[14:56] <benbro> syncing manually works but the 44.1/48 factor is incorrect
[14:59] <benbro> thanks
[16:05] <Fjorgynn> killall windows
[17:19] <wintershade> hey guys! a quick question. ffmpeg with -c:v mpeg4 gives me this error: "closed gop with scene change detection are not supported yet, set threshold to 1000000000". I used -flags +cgop. do I need to disable this, or is there something else I can do to keep cgop? thanks!
[17:27] <JEEB> wintershade, I don't think the mpeg-4 part 2 encoder has been poked in years so I guess it's just a never-completely-finished feature :P
[17:29] <JEEB> wintershade, and I don't think you'll find anyone liking mpeg-4 part 2 enough to start poking that thing. part 10 came and conquered the market relatively quickly after 2003
[17:29] <JEEB> so I guess you could hire someone to make it better but otherwise I just recommend you move to libx264 and MPEG-4 Part 10
[17:34] <wintershade> JEEB: heh, I see. I'm aiming to make some legacy-compatible videos. I mentioned it yesterday, they're meant to work on my parents' and my gf's legacy home theater systems.
[17:34] <wintershade> JEEB: if it was me, I'd be using theora all the way :P
[17:35] <JEEB> even fosstards have moved away from that :P
[17:36] <wintershade> JEEB: not mine...
[17:36] <wintershade> JEEB: you mean from theora? I kinda like it.
[17:37] <iive> wintershade: why do you need closed gop? are you going to edit the output?
[17:38] <wintershade> iive: nope, I'm just looking at the documentation, and the legacy divx players apparently require cgop.
[17:39] <iive> aha.
[17:40] <wintershade> they are apparently pretty conservative about +cgop-qpel-gmc
[17:40] <wintershade> some support +gmc, but most of them apparently don't.
[17:40] <wintershade> *didn't.
[17:41] <iive> closed gop is related to frame reordering.
[17:42] <iive> it is something that exists in mpeg2, so it should be quite simple to support.
[17:42] <wintershade> iive: yup, however if I enable it in ffmpeg's built-in mpeg4 encoder, I get an error.
[17:42] <iive> actually, it shouldn't need anything to be supported, as it is encoder feature.
[17:42] <iive> well, the above is not exactly error.
[17:43] <wintershade> iive: ...which ffmpeg's mpeg4 encoder doesn't have. error or not, it won't encode with +cgop
[17:44] <iive> it says that it sets the threshold to +inf, it basically disables scene detection. it should continue...
[17:44] <wintershade> iive: well, it does not.
[17:44] <iive> gop means group of pictures. every gop starts with I-frame (keyframe).
[17:45] <iive> when you have B-frames and frame reordering, you can have a frame or two that use a reference frame from before the I frame. this is called an open gop.
[17:45] <iive> closed gop basically means that all frames after the I frame may only use references from the I frame onward.
[17:46] <wintershade> iive: I know all that. I've read the docs :)
[17:46] <iive> i'm not even sure why it is not compatible with scene detection.
[17:47] <iive> other than that scene detection could spam a lot of keyframes in certain cases
[17:48] <wintershade> iive: ...hey, at least I get to fill a 4.3 dvd with bits :D
[17:48] <wintershade> useless bits, but still.
[17:48] <iive> :)
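Since the error text itself points at the scene-change threshold, a hedged sketch of what wintershade could try is below; whether this actually lets +cgop through on the encoder version in question is not settled in the conversation, and the file names are placeholders:

    ffmpeg -i input.mp4 -c:v mpeg4 -flags +cgop -sc_threshold 1000000000 out.avi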
[17:49] <iive> isn't xvid easier to use and with more useful presets?
[17:49] <wintershade> iive: ...presets? where?
[17:49] <iive> or maybe profiles... i'm sure it had options to set divx compatibility...
[17:49] <wintershade> iive: I'm actually tinkering with both libxvid and mpeg4 as c:v, so any good idea could be worth it.
[17:51] <iive> libxvid does have its own option called profile, that takes options like dxnhtntsc, dxnhtpal
[17:52] <wintershade> iive: hey, that's right! I totally forgot about those. but... how do I load them into ffmpeg? and where can I see a list of those?
[17:55] <iive> hum... don't see it in the ffmpeg libxvid options... and code :(
[17:58] <wintershade> iive: I know... I found them in mencoder, but I can try to reconstruct them... I suppose.
[18:06] <wintershade> I'm off, thanks everyone!
[18:37] <LiohAu> I need help for a project that involves stereoscopic videos, does anybody here have knowledge of that?
[18:37] <jonascj_> LiohAu: generally it is better to just ask your question
[18:38] <LiohAu> well it's hard to explain because I'm really at the beginning and I know almost nothing :D
[18:39] <jonascj_> LiohAu: I assume you still have some problem or something you need help figuring out. Whatever that is, you should be asking that :)
[18:39] <LiohAu> But to explain my problem more: I want to use two CMOS camera modules (like these ones : https://www.sparkfun.com/products/11745 ) and I would like to generate a stereoscopic h264 video that I can stream
[18:40] <LiohAu> And I don't know where to start
[18:41] <jonascj_> I have some videos from my gopro camera and I would like to "compress" them such that they still look fine and sound okay, but does not take up 4GB/hour of video. This is what ffprobe says about the videos: http://paste.linuxassist.net/view/4093d674 . Any suggestion on what I could do apart from cutting back on the fps (50fps is the camera minimum at 720p, but I only need 25fps). Resolution 720p seems okay also. So could I go for another encoding?
[18:42] <LiohAu> I don't even know whether I should use a hardware encoder instead of a software one
[18:42] <jonascj_> LiohAu: Read about stereoscopic formats. Some put right/left frame besides one another, some put them above one another, and some maybe deliver two different files etc. What is your target - some specific TV-set, vlc on a computer or?
[18:43] <LiohAu> The goal is to make an FPV system, so I would like to display the video using the oculus rift SDK
[18:43] <jonascj_> LiohAu: and what are your requirements, live 4K @ 120Hz streamed, or just some 720p which you record one day, then combine together the next and stream the third?
[18:44] <jonascj_> okay so a stereoscopic live feed from some remote-controlled quadcopter, your dog or similar?
[18:44] <LiohAu> requirements = oculus rift resolution so 1080p and they say 90hz
[18:45] <LiohAu> jonascj_: any moving object
[18:45] <jonascj_> LiohAu: your cameras are not 1080p :)
[18:45] <LiohAu> yes, I wont buy these ones
[18:45] <jonascj_> oh okay
[18:45] <LiohAu> it was a sample
[18:45] <jonascj_> LiohAu: how will you get the video to your oculus rifts - streamed via network or?
[18:46] <jonascj_> does the moving object carry a computer which will be able to stream it, will the cameras them selves be able to stream over a network or?
[18:46] <LiohAu> the goal is to stream it via network yes, so the camera will be connected to a real PC with a wifi connection
[18:47] <jonascj_> so the moving object carries with it a pc which can stream the camrea output over wifi to some receiver?
[18:47] <LiohAu> yes
[18:48] <LiohAu> I read that I can achieve low latency encoding with h264
[18:48] <jonascj_> I know close to nothing about the performance of ffmpeg, but I'd say you could achieve combining the two camera feeds into a single frame-side-by-side stereoscopic video at 90Hz live on most modern laptops.
[18:49] <jonascj_> but now at least you know that you should ask people: "what specs do I need to combine two 1080p streams into a stereoscopic side-by-side feed at 90Hz (live)?"
[18:50] <jonascj_> This will probably be easy with no requirements on the latency / delay, but if you require very low latency to control this moving object based on the feed then that will most likely be your headache :)
[18:50] <LiohAu> http://www.ampltd.com/pc104/h264/h264-hd2000.php < I was looking at boards like this one, don't you think it would be easier?
[18:53] <jonascj_> LiohAu: you will still have a headache on the receiving side - streaming almost always involves some buffering and how to get that to a minimum when you want something truly live
[18:54] <jonascj_> but that board looks interesting on the producing side (the moving object). Don't know if it can combine the feeds to stereo though. You might still need to do this on the receiving side
[18:54] <LiohAu> oh? don't you think it would be easier to combine before sending?
[18:54] <jonascj_> LiohAu: but can the board combine it?
[18:55] <LiohAu> I dont think so :(
[18:55] <jonascj_> then you will have to have another piece of equipment on the moving object, or combine them when they arrive.
[18:55] <LiohAu> but I can get the two feeds from the boards, and combine them on the PC of the moving object
[18:56] <jonascj_> But one thing is encoding the video, another much cheaper operation (i believe) is to put corresponding frames side by side. That should be a fast operation I'd say.
[18:56] <jonascj_> LiohAu: then all you would use the board for is to encode the video to h264. If you need something like a laptop on the moving object at any rate, chances are you might as well capture and encode on the laptop as well
[18:57] <jonascj_> I don't know about that though- it is a performance issue. Hardware encoding will most likely always beat software encoding, but it might not be an issue for you.
[18:58] <jonascj_> I should think your biggest issue is the network streaming. Using ffmpeg to encode and combine + ffserver to distribute the stream + vlc to view the stream will most likely give you seconds of delay. At least that is the best I got when I tried ffmpeg for capture + ffserver to distribute the stream + vlc to view, for a webcam :P
[19:00] <jonascj_> but then again I don't really know what I am doing - but I really think the actual network streaming is going to be your biggest problem - how not to get unacceptable latency from that.
[19:03] <jonascj_> or rather the software involved in that - e.g. you probably cannot rely on existing software to feed the video to your oculus rift sdk based receiver.
[19:07] <jonascj_> On my http://paste.linuxassist.net/view/4093d674 gopro videos, could I go from 50fps to 25fps and bitrate 20001kb/s to 5000kb/s?
[19:08] <LiohAu> there is videoconferencing software that achieves low latency, I believe
[19:09] <jonascj_> LiohAu: sure, but can that interface with your oculus rift?
[19:10] <LiohAu> I guess I have to develop a lot of things
[19:10] <jonascj_> Also, low latency is relative; for a video conference you could probably do with 0.5sec or something similar. If you want to remote control your quadcopter flying around, 0.5sec might not cut it.
[19:10] <LiohAu> anyway ffserver uses RTP to transport the media, as videoconferencing software does
[19:10] <jonascj_> I don't know the latency of skype video call - it is probably quite low
[19:28] <jonascj_> LiohAu: maybe something already exists made for this - streaming to oculus rift from two cameras
[19:32] <LiohAu> jonascj_: maybe I have to look at the oculus developers forums
[19:41] <jonascj_> LiohAu: that would also be a good place to ask - or in some fpv radio control communities (quadcopters, model air planes etc.)
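For the frame-side-by-side idea jonascj_ describes, a rough sketch with placeholder inputs and a placeholder receiver address; the hstack filter used here only exists in newer FFmpeg builds (older ones would need a pad+overlay chain), so treat it as an illustration of the layout, not a drop-in command:

    ffmpeg -i left_camera_feed -i right_camera_feed \
        -filter_complex "[0:v][1:v]hstack=inputs=2[sbs]" \
        -map "[sbs]" -c:v libx264 -preset ultrafast -tune zerolatency \
        -f mpegts udp://receiver_address:1234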
[20:13] <webadpro> Hello all. I would like to convert an rtsp stream to an rtmp stream
[20:13] <webadpro> Anyone know of an easy command to simply copy the stream and not save it? this is for broadcasting live video
[20:14] <c_14> ffmpeg -i rtsp://url -codec copy -map 0 rtmp://url
[20:15] <webadpro> ffmpeg -i "rtsp://192.168.1.60" -codec copy -f flv  rtmp://192.168.1.59:1935/live/room1
[20:15] <webadpro> Would this make sense?
[20:15] <c_14> sure
[20:16] <webadpro> it starts then I get an error
[20:16] <webadpro> sorry, i thought it wasn't that long
[20:17] <tuukka> Hello, has anyone noticed that ffprobe leaks memory... at least 2.2.7 does
[20:17] <c_14> webadpro: for the error
[20:17] <c_14> tuukka: valgrind?
[20:17] <webadpro> http://pastebin.com/ppXwFMrV
[20:17] <tuukka> yes valgrind
[20:18] <c_14> Hmm, can you pastebin the output and check with git head?
[20:19] <tuukka> c_14: I will, I have to compile a more recent x265. But it seems it's an allocated MUTEX
[20:19] <webadpro> c_14: also it has 160GB of free space, at first i thought that was the issue.
[20:23] <c_14> webadpro: Is there an rtmp server listening on that ip/port?
[20:23] <webadpro> Yes there is.
[20:24] <webadpro> because the following works fine, but without sound
[20:24] <webadpro> ffmpeg -rtsp_transport tcp -i "rtsp://192.168.1.60" -f flv -r 25 -s 640x480 -an "rtmp://24.122.174.118:1935/live/room1"
[20:24] <webadpro> But this one seems to save everything and takes space...
[20:25] <webadpro> ignore the IP of the rtmp.. the right one is the .59
[20:26] <webadpro> hey ignore this... adding the rtsp_transport seemed to have fixed it
[20:26] <webadpro> time to test :)
[20:26] <webadpro> Thanks a lot
[20:29] <webadpro> can you tell me if that -copy command uses space?
[20:32] <c_14> It might need some ram, but nothing on the hard drive.
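Combining the stream-copy form from above with the -rtsp_transport tcp flag that webadpro says fixed it, the working command presumably ends up roughly as follows (IPs as given in the conversation):

    ffmpeg -rtsp_transport tcp -i "rtsp://192.168.1.60" -codec copy -f flv "rtmp://192.168.1.59:1935/live/room1"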
[20:46] <c_14> tuukka: Does that leak happen with every input file you have? How much is being leaked?
[20:51] <jonascj_> Should I specify a preset along with "ffmpeg -i in.mp4 -c:v libx264 -b:v 5000k -r 24 out.mp4"? Or will the presets (fast, medium, slow, veryslow, etc.) just contain presets for the framerate, bitrate etc.?
[20:52] <c_14> The presets govern the filesize vs encoding speed (and, when using average/constant bitrate encoding, the quality) of the resulting file. The default afaik is medium.
[20:52] <pmarty> does ffplay/ffmpeg support offloading parts of decoding process to gpu through VA API for playback? i see in my config.log *_vaapi_hwaccel lines.
[20:53] <c_14> see the -hwaccel option.
[20:53] <pmarty> it doesn't work for ffplay
[20:53] <pmarty> version 2.3.3
[20:54] <c_14> https://trac.ffmpeg.org/ticket/3359
[20:55] <tuukka> c_14: Here is the pastebin http://pastebin.com/59HFP1Hg
[20:56] <tuukka> c_14: avformat_find_stream_info was the main reason I started valgrind
[20:56] <jonascj_> c_14: so specifying a preset like slow might give me better quality for the same settings compared to medium which is the default?
[20:57] <c_14> jonascj_: not might, will. You just might not be able to see it depending on other factors.
[20:57] <pmarty> c_14: i see. but the ffmpeg tool already supports that, right? and it's possible to use it as sort of playback tool :)
[20:57] <pmarty> although documentation mentions only vdpau
[20:59] <jonascj_> c_14: okay, I'll have to try it out. -b:v 5000 and -r 24 cut my gopro mp4's from 4GB/hour to 1GB/hour. Since it is just some lecture I would like to go lower. Might be I should try to go from 720p to 480p
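A sketch of the kind of command jonascj_ is describing; the bitrate, framerate and 480p target are the values mentioned above, not recommendations, and the filenames are placeholders:

    ffmpeg -i gopro_in.mp4 -c:v libx264 -preset slow -b:v 5000k -r 24 -vf scale=-2:480 -c:a copy gopro_out.mp4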
[21:05] <c_14> tuukka: I just ran a valgrind with ffprobe on the same file and I only lost 80 bytes leaked. Of the 537 bytes you leaked, only 80 were leaked in FFmpeg code so that part matches. The remaining bytes that were leaked were leaked in either libjack code or in libgnutls code. Not knowing the internals of either library, I don't know how the internal functions handle memory and if they should clean themselves up or if
[21:05] <c_14> they expect their caller to clean them up.
[21:08] <c_14> pmarty: ffmpeg supports va-api accelerated decoding, but you can't use it as a playback tool directly. You could use ffmpeg's va-api decoding to decode the video and then output through a pipe as rawvideo to another program which will then play the decoded rawvideo stream
[21:08] <tuukka> c_14: but this avformat_find_stream_info leak is present (although it's only 40 bytes) in my app..
[21:10] <pmarty> c_14: but ffmpeg does support output devices meant for playback like sdl (works for me) or xv (it's not present in my build for some reason)
[21:13] <c_14> tuukka: Yes, both avformat_network_init and avformat_find_stream_info appear to be leaking 40 bytes each. This is probably a bug and should be reported on trac. You might want to report the issue with a build not linked against either libgnutls or libjack though so that the other leaks don't obscure the leaks in FFmpeg code.
[21:16] <tuukka> c_14: Yeah... I'll track this down a bit and file a bug
[21:16] <pmarty> "ffmpeg -hwaccel auto -i trailer_1080p.mov -f sdl out"  this works but it's kinda slow
[21:17] <pmarty> -hwaccel vaapi is missing
[21:18] <webadpro> c_14: By RAM, will it fill up... or do you mean it uses 100MB of RAM simply to process?
[21:18] <jonascj_> Any mac users? I am trying to figure out if it is easy to install ffmpeg on mac or not. https://www.ffmpeg.org/download.html#build-mac seems to indicate that there are prebuilt binaries available. Can those be installed / used without the need for complicated installation procedures?
[21:18] <tuukka> c_14: How do I disable jack?
[21:18] <c_14> webadpro: simply to process
[21:19] <tuukka> c_14: There is no option for that
[21:19] <c_14> tuukka: --disable-outdev=jack or something
[21:19] <webadpro> c_14: Perfect. than thats just fine. :)
[21:19] <c_14> tuukka: hmmm, wait
[21:20] <c_14> --disable-indev=jack
[21:21] <webadpro> c_14: what does -f flv -r 25 do... what does the 25 mean
[21:21] <c_14> 25 frames per second
[21:22] <c_14> ie it sets the framerate to 25 fps
[21:22] <webadpro> right... is that a normal value
[21:22] <webadpro> or should I have it to 29?
[21:23] <c_14> Honestly, unless you need to follow certain constraints or something's broken you really don't have to mess with the framerate that often.
[21:23] <webadpro> ok
[21:23] <c_14> Ie if you want to output PAL you'll need 25 fps, NTSC uses 30 etc
[21:23] <webadpro> should I remove it
[21:24] <webadpro> and only have -f flv
[21:24] <c_14> I'd remove it, but keeping it shouldn't break anything.
[21:33] <fajung> i followed the whole compilation guide for ubuntu including the MulticoreWare x265, but when I try to set the PATH it returns ERROR: x265 not found | http://pastebin.com/tF45RdYJ
[21:38] <c_14> ehh
[21:38] <c_14> You never installed x265?
[21:39] <fajung> me?
[21:39] <c_14> cd into the x265/build/linux folder and make install
[21:39] <c_14> fajung: yes, you
[21:40] <c_14> And make sure you set the prefix for x265 to $HOME/ffmpeg_build
[21:42] <fajung> sorry to ask but how do I set the prefix ($HOME/ffmpeg_build) in the make install (for x265)?
[21:43] <c_14> the ./make-Makefiles.bash should pop up an ncurses thingymagiggy iirc
[21:43] <sfan5> -DCMAKE_INSTALL_PREFIX=<dir>
[21:43] <sfan5> should work
[21:43] <c_14> or that
[21:44] <sfan5> like this: cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=$PRFX $SRC/libx265/source
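Assuming the directory layout from the Ubuntu compilation guide being followed ($HOME/ffmpeg_sources for sources, $HOME/ffmpeg_build as the prefix), the install step being described would look something like:

    cd ~/ffmpeg_sources/x265/build/linux
    cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" ../../source
    make
    make install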
[21:53] <kingsob_> I am getting a bunch of undefined reference errors when building x264. I don't really understand how building x264 has anything to do with ffmpeg tho..  http://pastebin.com/uvAtLApb
[22:34] <relaxed> kingsob_: --disable-lavf
[22:36] <kingsob_> I added --disable-cli and it seems to compile fine now.. but I'll try --disable-lavf .. wouldn't mind having the x264 binary
[22:37] <sacarasc> Cyclical dependencies are fun. Compile thing A, compile thing B with thing A support, compile thing A with thing B support, compile thing B again just to make sure it all works. \o/
[22:39] <kingsob_> would --disable-lavf disable ffmpeg support for x264?
[22:39] <relaxed> yes
[22:39] <kingsob_> when I compiled x264, then ffmpeg, everything worked fine.. but now when I go to recompile x264 it blows up with above errors
[22:40] <kingsob_> ideally it shouldn't be trying to compile x264 again, so I think there is an issue with my chef setup, but ideally it would still succeed, not blow up like it currently does
[22:41] <relaxed> I think they added lavf autodetect to configure
[22:43] <relaxed> if you're using ffmpeg there's no reason to compile x264 with lavf support.
[22:43] <kingsob_> ahh
[22:43] <kingsob_> I see what you're saying now
[22:44] <kingsob_> I think my previous question was backwards
[22:44] <kingsob_> would --disable-lavf disable x264 support for ffmpeg?
[22:44] <kingsob_> sounds like answer is no  :)
[22:45] <relaxed> correct
[22:46] <kingsob_> perfect, and I suppose that explains why --disable-cli solved the problem as well, since there is no x264 cli at all, so no dependency on ffmpeg..
[22:46] <kingsob_> thanks for your help!
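A sketch of the x264 build relaxed is suggesting, with the prefix assumed to match the usual guide layout rather than taken from the conversation:

    cd ~/ffmpeg_sources/x264
    ./configure --prefix="$HOME/ffmpeg_build" --enable-static --disable-lavf
    make
    make install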
[22:57] <onyx> c_14: my attempt to use drawtext to put the time on the thumbnails failed
[22:58] <c_14> What went wrong?
[22:58] <onyx> it didn't work no matter what I tried
[22:58] <onyx> hold on
[22:58] <onyx> let me show you the command
[22:59] <onyx> http://pastebin.com/iDdRUpQ3
[23:02] <c_14> I think you're suffering from excessive backslashing. try -vf 'drawtext=fontfile=/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans-Bold.ttf:text=%{localtime:%T}:fontcolor=white@0.8:x=7:y=460'
[23:02] <c_14> Replace everything after -vframes 1 and before $out with that
[23:03] <onyx> [AVFilterGraph @ 0x2e95040] No such filter: 'drawtext'
[23:03] <onyx> Error opening filters!
[23:05] <c_14> Can you pastebin the output of `ffmpeg -version' ?
[23:06] <onyx> http://pastebin.com/x3rd4Yci
[23:08] <c_14> Right, you'll have to get (or compile) a copy of FFmpeg with the --enable-libfreetype (and preferably also the --enable-libfontconfig) option[s].
[23:08] <c_14> ie: `./configure --enable-libfreetype --enable-libfontconfig'
[23:09] <onyx> so I go into the ffmpeg folder and run that?
[23:09] <c_14> yep, then make and install as normal
[23:10] <onyx> wait I do have the static version
[23:11] <onyx> http://pastebin.com/tK2itX5p
[23:12] <c_14> Then just use that one.
[23:12] <onyx> ok let me try that
[23:15] <onyx> [Parsed_drawtext_0 @ 0x36c1ec0] Could not load font "%T}": cannot open resource
[23:16] <onyx> Parsed_drawtext_0 @ 0x36c1ec0] Unterminated %{} near '{localtime'
[23:16] <onyx> Im getting those 2 errors now
[23:17] <c_14> put a \ before the :%T
[23:17] <c_14> ie %{localtime\:%T}
[23:18] <onyx> Parsed_drawtext_0 @ 0x3c5f560] Could not load font "%T}": cannot open resource
[23:18] <onyx> [Parsed_drawtext_0 @ 0x3c5f560] Unterminated %{} near '{localtime'
[23:19] <c_14> yeah, let me test that.
[23:24] <onyx> any luck?
[23:27] <c_14> right
[23:28] <c_14> try -vf "drawtext=fontfile=/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans-Bold.ttf:text='%{localtime\:%T}':fontcolor=white@0.8:x=7:y=460"
[23:28] <onyx> ok
[23:30] <onyx> ok seems to be working
[23:30] <onyx> one sec let me download the images...
[23:33] <onyx> atta boy
[23:33] <onyx> works beautifully!
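Put back into the capture loop from earlier, the working invocation would be roughly as follows (font path, overlay position and stream URL as used above):

    ffmpeg -i http://livestream_url -vframes 1 \
        -vf "drawtext=fontfile=/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans-Bold.ttf:text='%{localtime\:%T}':fontcolor=white@0.8:x=7:y=460" \
        "$out"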
[23:34] <joules> whats a container that supports raw video?
[23:35] <joules> I had -f avi ..which 3 hours later completely ignored my -v:c and -a:c options.
[23:36] <c_14> Matroska does.
[23:36] <joules> currently just doing a lossless mka..yeh
[23:36] <joules> but not sure the video editor can handle this format. :/
[23:37] <Suchiman> -v:c ? isn't it -c:v
[23:37] <Suchiman> -codec:video
[23:37] <c_14> it is/they are
[23:37] <joules> yes I haven't got my glasses on and just woke up.
[23:37] <c_14> ie -c:a and -c:v
[23:38] <joules> maybe .mov ?
[23:38] <Suchiman> so since people are active, i might try to ask again ;)
[23:38] <joules> no one helps me
[23:38] <joules> shoot. ;d
[23:39] <c_14> joules: why are you using rawvideo?
[23:40] <joules> don't ask.
[23:42] <Suchiman> i've used directShow to capture a video (lagarith compressed) and audio (pcm) stream from a grabber device into an mkv. if i stream copy the audio into a wav, it gets longer than the original mkv; if i remux that audio with the video, the audio is desynced. if i first transcode it into mp4 (h264, aac), the duration is now 10 seconds longer, but a/v is still in sync. if i
[23:42] <Suchiman> stream copy the audio now into an external file.... it again gets ~5 seconds longer. remux again and it desyncs... what is going on ;)
[23:42] <c_14> Suchiman: might be the sampling rate
[23:43] <c_14> ie ffmpeg detecting the wrong sampling rate
[23:43] <c_14> joules: of the muxers I thought might support rawvideo, I was able to successfully throw rawvideo into avi and matroska.
[23:43] <c_14> Successfully as in I was able to play it again afterwards.
[23:44] <joules> didn't work for avi
[23:44] <c_14> Suchiman: what does ffprobe say about the sampling rate in the original file and then in the one that is longer?
[23:44] <c_14> joules: what ffmpeg version?
[23:46] <joules> it enforced the avi default of "mpeg4 (Simple Profile)"
[23:46] <relaxed> avi should support rawvideo
[23:46] <c_14> What command did you use?
[23:47] <joules> "ffmpeg -i /tmp/vhs.avi -vf yadif=1:1,mcdeint=0:1,crop=678:576:14:0,scale=720:576,setdar=dar=4/3,fps=25 -f avi -c:v rawvideo -c:a copy /mnt/disk/tmp/vhs.avi"
[23:47] <Suchiman> c_14: http://pastebin.com/8Rm82GNp
[23:50] <c_14> joules: hmm, that should have worked. Before ffmpeg starts encoding, it lists the OUtput metadata and the stream mapping, what does that say?
[23:52] <joules> c_14: it was weird, I was checking it. came back and the whole thing was fuxed.
[23:52] <joules> Yes, it says what it should... testing with the same command (minus the deints). Should finish soon.
[23:53] <joules> "Stream #0:0 -> #0:0 (mpeg2video (native) -> rawvideo (native))
[23:55] <c_14> Suchiman: hmm, the source audio might be 44.1KHz instead of 48KHz, try adding a -af asetpts=0.91875*PTS
[23:56] <Suchiman> c_14: i'm not sure how ffmpeg does work internally but why doesn't this corrupt audio already when transcoding to mp4?
[23:58] <c_14> Presumably because it uses video timestamps to help align the audio timestamps.
[23:59] <Suchiman> c_14: and in which step should i use this cmd? both mkv and mp4 are in sync, desync only happens when de/re muxing, should i use asetpts when stream copying from the mkv into wav?
[23:59] <c_14> ie it knows that audio sample x occurs during frame y or something.
[00:00] --- Wed Sep 10 2014


More information about the Ffmpeg-devel-irc mailing list