[Ffmpeg-devel-irc] ffmpeg.log.20141231

burek burek021 at gmail.com
Thu Jan 1 02:05:01 CET 2015


[02:06] <schiho> Hi guys
[02:08] <schiho> I have attached an SDI input to my computer, the video input is "Rawvideo (UYVY422)". What is the difference to yuv420p?
[02:14] <c_14> the subsampling
[02:16] <troy_s> schiho: in a YCbCr 422 your luma channel is 1:1
[02:17] <troy_s> schiho: while your Cb / Cr (blue/yellow and red/green axes) are half res
[02:17] <troy_s> schiho: In 420, the Cb and Cr are quarter res. So if your source were 1920x1080, in 420 your Y' is 1920x1080, and your Cb/Cr is 960x540
[02:18] <schiho> ohh
[02:18] <schiho> didn't know that the resolution changes in this manner
[02:18] <troy_s> schiho: In 422, your Cb / Cr would be 960x1080
[02:19] <troy_s> It's not resolution per se. It is chroma subsampling.
[02:19] <schiho> i see
[02:19] <troy_s> The general agreement is that our perceptual systems (subject to sociological stuffs for sure) are more sensitive to luminance than chroma
[02:19] <schiho> Is there much difference in processing overhead in between UYVY422 and YUV420p?
[02:19] <troy_s> So video attempts to leverage that to compress the signal, providing more data for luminance than for chrominance.
[02:20] <troy_s> Not really. It is literally a scale.
[02:20] <troy_s> And playback might be different than say, online needs (by online in reference to post production, not internet)
[02:20] <schiho> i see what you mean, online, offline editing like
[02:20] <troy_s> So you may choose a faster and less quality scaling technique for playback because it might not matter as much, versus say, some post production pipe that has decided to use a subsampled format.
[02:20] <troy_s> Correct.
[02:21] <troy_s> The scaling of those Cb / Cr planes has a _tremendous_ impact on the quality of the bits, as does the quantization level.
[02:21] <troy_s> And in fact, you can also get false colors and plenty of other strange bits depending on the technique one uses to go from the merged YCbCr planes back to RGB.
[02:21] <troy_s> Glenn Chan has a decent exploration of that IIRC.
[02:22] <troy_s> schiho: http://www.glennchan.info/articles/technical/chroma/chroma1.htm
[02:22] <schiho> link saved
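
To put numbers on the subsampling above: for a 1920x1080 source at 8 bits per sample, the per-frame sizes work out to

    4:2:2 (e.g. UYVY): Y = 1920x1080, Cb = Cr = 960x1080  ->  1920*1080*2   = 4,147,200 bytes per frame
    4:2:0 (yuv420p):   Y = 1920x1080, Cb = Cr = 960x540   ->  1920*1080*1.5 = 3,110,400 bytes per frame

so uncompressed 4:2:2 carries roughly a third more data than 4:2:0 for the same frame.
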
[02:23] <schiho> Maybe i should elaborate further, i am reading from my camera the raw video input and i am streaming it over udp to the network after encoding
[02:23] <schiho> I already read all the live-streaming and low-latency pages from ffmpeg
[02:24] <schiho> There are some suspicious things, where i couldn't find anything
[02:24] <schiho> 1) When i change the gop size to 1, so that only keyframes are sent, shouldn't the encoding process be faster?
[02:24] <troy_s> schiho: Well first, it isn't raw.
[02:24] <troy_s> :)
[02:25] <troy_s> It's a YCbCr conversion based off of the (likely) Bayer sensor.
[02:25] <troy_s> And while it is pure speculation, there's too much going on there for any sort of guess to be useful.
[02:25] <troy_s> What camera?
[02:25] <schiho> Blackmagic Cinema Camera + Blackmagic Decklink Studio 2k PCI Card
[02:26] <schiho> So i am connected over SDI
[02:26] <troy_s> Remember that your camera has plenty of onboard chips to try and accelerate the blasted conversions, and those are nothing more than algorithms baked into a chip. So if the camera was designed primarily to use Foo chip to do A, and you choose B, you can see that you may very well take a performance hit.
[02:26] <troy_s> Right. So that is a different scenario. That is a YCbCr signal coming out the pipe then?
[02:27] <troy_s> And why aren't you shooting raw?
[02:27] <troy_s> ( :) )
[02:27] <schiho> this is just for playback, we are of course shooting raw :)
[02:27] <troy_s> Ok. So your issue is?
[02:28] <troy_s> It is sluggy?
[02:28] <troy_s> (I don't personally own a BMCC but I know of a couple of friends that own a pair each.)
[02:28] <schiho> my problem is pure latency
[02:28] <troy_s> How are you streaming it?
[02:28] <troy_s> And is there a bad latency direct off of the SDI?
[02:29] <schiho> [BMC CAMERA]---->[BLACKMAGIC SDI CARD]----->[FFMPEG to UDP with x264 (tune:zerolatency, preset:ultrafast)]------>ffplay
[02:29] <troy_s> I'd bet heavy on network issues then.
[02:30] <schiho> you mean the bandwidth limit? i wouldn't think so
[02:30] <schiho> i am currently streaming locally
[02:30] <troy_s> (Although granted, going from 420 to 422 would increase your data relatively significantly as you can see. 420 has quarter the resolution on two planes, whereas 422 has half the pixel count on the two. That's a reasonable overhead.)
[02:30] <schiho> for test purposes
[02:31] <troy_s> Just network stuffs.
[02:31] <troy_s> What sorts of latency are you seeing?
[02:31] <schiho> it's delayed between 0.7 and 0.9 seconds
[02:31] <troy_s> And this is causing havoc for your audio playback I take it?
[02:31] <schiho> my ffmpeg option for audio is -an
[02:32] <schiho> so there shouldn't be audio?
[02:32] <troy_s> What is the issue for latency then?
[02:32] <relaxed> what is your command?
[02:32] <troy_s> If it is operating, I seem to recall 30-40hz being the threshold, and I'd seriously doubt that you can hit that with network.
[02:33] <schiho> -f dshow -i video=\"Blackmagic WDM Capture\" -pix_fmt yuv420p -vf scale=1280:720  -r 25 -vcodec libx264 -preset ultrafast -tune zerolatency -g 1 -f mpegts udp://192.168.0.10:1234"
[02:33] <schiho> ignore the backslashes
[02:33] <troy_s> Well that's going to take software scaling in there.
[02:34] <troy_s> Have you tried dumping raw?
[02:34] <troy_s> as in just copying the YCbCr stream?
[02:34] <schiho> you mean "dumpring raw" --->"sending raw over network" ?
[02:34] <troy_s> That's going to A) scale it B) encode it
[02:34] <troy_s> Yes. Just transmit the YCbCr across the network direct from camera to destination. No scaling. No encoding.
[02:34] <relaxed> do you think -g 1 is wise?
[02:35] <schiho> i was just messing around with gop size, as i had a theory that it would decrease latency, as the encoder won't wait for another frame?
[02:36] <troy_s> schiho: You are going to get latency subject to FFMPEG's ability to run. You are wiser to get a baseline of "what happens if I simply transmit the raw YCbCr from the camera to the destination"
[02:36] <troy_s> schiho: The -g 1 is, again, an encoding parameter. Not going to be faster if you are re-encoding.
[02:37] <troy_s> schiho: So try copy first.
[02:37] <schiho> allright i will use vcodec copy
[02:37] <schiho> let me try this out
[02:37] <troy_s> schiho: Start there.
[02:37] <schiho> (just a side note: at the end i want to stream to my mobile phone :) )
[02:38] <troy_s> schiho: Sure. Wireless and bandwidth might be a slug.
[02:38] <troy_s> schiho: Can you control what is dumped off of that BMCC YCbCr out? Is it via the SDI out?
[02:39] <schiho> yeah, one sec i will check it out
[02:40] <schiho> no not really, i can just set the SDI-Mode to "HD" or "4K"
[02:40] <troy_s> HD is a good start. :)
[02:41] <troy_s> 4k is 4x the information.
[02:41] <troy_s> (as a rough starting point.)
[02:41] <schiho> :)
[02:41] <troy_s> See if you can get 2k working smoothly then build upwards.
[02:41] <schiho> 4k is not in my interest for now
[02:41] <schiho> and dumping now a raw video i get "circular buffer" error
[02:41] <troy_s> I can't remember which camera has the better latitude.
[02:41] <schiho>  -f dshow -i video=\"Blackmagic WDM Capture\" -pix_fmt yuv420p -vcodec copy -f mpegts udp://192.168.0.10:1234"
[02:42] <troy_s> ?
[02:42] <troy_s> Uh... the -f isn't needed is it? you are copying.
[02:42] <schiho> ffplay udp://192.168.0.10:1234
[02:42] <schiho> you mean the -f mpegts
[02:43] <troy_s> Yes.
[02:43] <troy_s> (basically we don't want FFMPEG doing anything)
[02:43] <schiho> just removing "-f mpegts" isn't working
[02:43] <schiho> "Unable to find a suitable output format for 'udp://192.168.0.10:1234'
[02:45] <troy_s> relaxed: Idea?
[02:51] <troy_s> schiho: No clue on how to do a -vcodec copy UDP.
[02:52] <troy_s> Hum. Maybe you have to transcode for UDP?
[02:53] <schiho> hmmm yes, me neither :) however, maybe we can concentrate on other questions too. So you wouldn't expect that the encoding takes too long?
[02:53] <troy_s> Well the encoding is going to probably (speculating) take up the vast bulk of time.
[02:53] <troy_s> try a -f mpegts at the start instead of dshow?
[02:54] <schiho> i already did a test with a raw-video from disk, for a 90 seconds video it took me 17 seconds to encode
[02:54] <troy_s> Well we have to try and get rid of that reencode.
[02:54] <troy_s> And just transmit the data I'd think.
[02:55] <troy_s> But I have NO clue if that is even possible when streaming.
[02:55] <schiho> but -f mpegts at the beginning won't work i thought the "-f dshow" defines the input?
[02:55] <troy_s> Oh probably.
[02:55] <schiho> or forces the input
[02:56] <troy_s> (duh)
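
One way to attempt the raw-copy baseline discussed above, sketched from the commands already in this conversation: mpegts cannot carry rawvideo, but the nut muxer can, so swapping the output format may be enough. This is untested here, and 1080p25 UYVY implies roughly 800 Mbit/s on the wire, so it is only realistic as a local test.

    ffmpeg -f dshow -i video="Blackmagic WDM Capture" -vcodec copy -f nut udp://192.168.0.10:1234
    ffplay -f nut udp://192.168.0.10:1234
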
[02:56] <troy_s> What are you receiving with?
[02:56] <troy_s> Not FFPLAY right?
[02:56] <schiho> sure, with ffplay i am receiving
[02:56] <troy_s> That has built-in latency as I understand it
[02:56] <schiho> ffplay udp://192.168.0.10:1234
[02:56] <troy_s> can you try it with mplayer and the benchmark option?
[02:57] <troy_s> as in mplayer -benchmark methinks?
[02:57] <schiho> will do that, (sidenote: i am on windows )
[02:58] <troy_s> Should still be fine.
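
If mplayer is used as the receiver, troy_s's suggestion would look something like the line below; whether -benchmark is meaningful for a live UDP stream (it mainly disables display timing) is worth checking against the mplayer docs.

    mplayer -benchmark -nosound udp://192.168.0.10:1234
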
[02:58] <troy_s> (Although an interesting point might be to stick an el-cheapo Linux headless box in there for transmitting to see if you can scrape down the latency that way too.)
[02:58] <troy_s> (Even a super small micro styled box would probably work better than a full blown Windows instance.)
[03:00] <troy_s> (And I think your pix_fmt is in the wrong place, and it wouldn't apply with copy anyway.)
[03:01] <schiho> i can remove it, it should automatically detect it
[03:01] <schiho> i mean it seems to work
[03:01] <schiho> frame=  117 fps= 25 q=-1.0 size=  510799kB time=00:00:04.68 bitrate=894059.8kbits/s
[03:01] <schiho> but i cannot replay it
[03:02] <troy_s> ?
[03:02] <schiho> the pasted line is output from ffmpeg
[03:02] <troy_s> So did you get it working?
[03:02] <schiho> no i get the circular buffer error on ffplay
[03:03] <troy_s> Can you ditch ffplay for the time being? I'm reasonably certain I remember reading about latency issues with it.
[03:03] <schiho> sure
[03:03] <schiho> i will use mplayer
[03:04] <troy_s> Please. That circular buffer is some threaded loopy thing with FFPLAY and I am deep enough over my head.
[03:04] <schiho> :) that's a good hint... i shouldn't use it for debugging now
[03:05] <pinkette> why does the apple h264 encoder look like crap
[03:05] <troy_s> schiho: Win?
[03:05] <schiho> @troy_s give me a minute i have to restart my computer
[03:05] <schiho> brb
[03:18] <t4nk655> i am back
[03:19] <schiho> so now i am back :)
[03:20] <schiho> you won't believe it, but the latency of mplayer is worse than vlc and ffplay
[03:32] <pinkette> would upgrading from an amd phenom 2 x4 to an amd fx-8300 x8 make a big difference running ffmpeg?
[03:37] <schiho> Does anyone know if the option "threads" in ffmpeg uses frame based or slice based threading?
[03:37] <c_14> ffmpeg -h encoder=[encoder]
[03:37] <c_14> Check the threading capabilities output
[03:46] <schiho> threading/capabilities> no
[03:47] <schiho> so libx264 has no threading options?
[03:47] <c_14> libx264 has threading
[03:48] <schiho> in my output "ffmpeg -h encoder=libx264"
[03:48] <schiho> it says "Threading capabilities: no"
[03:49] <schiho> http://pastebin.com/J1WjLQyQ
[03:51] <c_14> according to the manpage it does slice and frame threading
[03:52] <c_14> depending on the -thread_type option
[03:52] <c_14> https://ffmpeg.org/ffmpeg-codecs.html#Options-19
[03:52] <c_14> The help should state that though...
[04:00] <schiho> Hmm... there is barely a speed gain with and without threads
[04:00] <c_14> The default is -threads 0 which is automatic
[04:02] <schiho> Ok now i see the difference, -threads 1 is 1/4 slower
[04:02] <schiho> where did you read the default? what is the default for thread_type ? slice or frame?
[04:07] <c_14> according to this, frame
[04:07] <c_14> http://mewiki.project357.com/wiki/X264_Settings#threads
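
For reference, the options discussed here would be combined roughly as follows on the command line from earlier; how -thread_type interacts with libx264's internal threading is something to benchmark rather than assume.

    ffmpeg -f dshow -i video="Blackmagic WDM Capture" -vcodec libx264 -preset ultrafast -tune zerolatency -threads 4 -thread_type slice -f mpegts udp://192.168.0.10:1234
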
[04:08] <pinkette> what would you say would be a stronger computer?   2014 $2000.00 computer   or   year 2000  supercomputer
[04:08] <schiho> i see, for me slice has better performance
[04:09] <c_14> Also, not sure why CODEC_CAP_SLICE_THREADS and CODEC_CAP_FRAME_THREADS were never added to the capabilities for libx264...
[04:09] <c_14> Might have to ask someone.
[04:09] <c_14> pinkette: depends on the usecase probably
[04:09] <c_14> For a desktop workload the 2014 one.
[04:09] <schiho> yes, maybe you should do that
[04:11] <schiho> I am wondering what ffmpeg does when you stream with libx264, is it sending an image as soon as it is finished, or is it waiting for more images and sending them as a bunch? If it is sending the images 1 by 1, the threads option with frame threading will not have any effect, or even a worse effect
[04:14] <c_14> I don't know much about the buffering logic, sorry.
[04:15] <c_14> That is, besides the -bufsize option
[04:17] <schiho> hmmm
[04:17] <schiho> so i will now sleep a bit, thank you c_14 very much for your knowledge! and troy_s you as well, thank you very much for providing me with so much information
[04:18] <c_14> np
[04:26] <schiho> ok i cannot sleep :) i will try the -bufsize
[04:28] <schiho> the -bufsize option will limit the size of one image? is this correct?
[04:29] <c_14> https://trac.ffmpeg.org/wiki/EncodingForStreamingSites
[04:29] <troy_s> schiho: Sorry
[04:29] <troy_s> schiho: Was dinner
[04:29] <troy_s> schiho: Did you make any progress?
[04:30] <troy_s> pinkette: It should.
[04:30] <schiho> not with the raw streaming, but i've sent an email to the mailing list with the problem. I searched online and there is nothing describing rawvideo streaming
[04:31] <troy_s> schiho: I'm not sure that you can perhaps.
[04:31] <troy_s> schiho: But if you can figure out a way to simply transmit and catch the data and then decode, you surely will reduce your latency.
[04:31] <troy_s> schiho: I'm reasonably sure there must be a method to do that via UDP
[04:32] <schiho> yeah should be
[04:34] <schiho> c_14: i already looked into that document.. But the description from another document is a bit different : "With a single VBV, every single frame is capped to the same maximum size. this means that the server can instantly send all frames after encoding them"
[04:35] <c_14> Where'd you find that?
[04:36] <schiho> http://x264dev.multimedia.cx/
[04:36] <troy_s> schiho: I'd also try mpeg2 based because that encode shouldn't take long.
[04:36] <schiho> i am not sure if he is talking about the same buffer, but the idea of limiting the frame buffer seems straightforward to me
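
The VBV idea from that post maps onto ffmpeg's generic rate-control options roughly as below; the numbers are purely illustrative (4000 kbit/s at 25 fps with a one-frame buffer caps each frame near 160 kbit).

    ffmpeg -f dshow -i video="Blackmagic WDM Capture" -vcodec libx264 -preset ultrafast -tune zerolatency -b:v 4000k -maxrate 4000k -bufsize 160k -f mpegts udp://192.168.0.10:1234
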
[04:37] <troy_s> schiho: And did you use the -benchmark to get quantitative results?
[04:38] <schiho> troy_s: yes i read about mpeg2 and results here: http://www.waitwut.info/blog/2013/06/09/desktop-streaming-with-ffmpeg-for-lower-latency/
[04:39] <troy_s> schiho: The thing is, what is that YCbCr coming out there? is it not already in a wrapper?
[04:40] <troy_s> I'd have thought that there was a codec, but perhaps it is some plain planar YCbCr
[04:40] <schiho> allright i dont know what i did but the latency is now very low
[04:41] <troy_s> schiho: What is your command?
[04:41] <troy_s> Also, Dark Shikari was one of the more knowledgeable folks around the codec parts for a long while... terse too. http://x264dev.multimedia.cx/archives/249
[04:44] <pinkette> does ffmpeg support x265 ?
[04:44] <c_14> If you build it with support, yes.
[04:45] <troy_s> Erf. I really should look at the color code again and see if the silly sRGB thing was lifted.
[04:45] <pinkette> what about the one where you download
[04:46] <c_14> `ffmpeg -codecs | grep 265'
[04:46] <c_14> Check if it has DE
[04:47] <schiho> troy_s: i didn't really change anything, i thought maybe it's because of the slice option of threading, but it's not
[04:47] <c_14> and/or check the configuration line if it was built with --enable-libx265
[04:48] <schiho> troy_s: but with thread_type slice it's a remarkable amount better than without
[04:49] <schiho> Delay in Seconds mPlayer with slice: 0.23,  0.229,  0.288,  0.23
[04:49] <troy_s> schiho: Because slice encodes a single frame across cores.
[04:49] <schiho> Delay in Seconds mPlayer no Slice: 0.406, 0.289, 0.241, 0.287
[04:49] <troy_s> schiho: Which puts all threads into encoding a single frame. Great for encoding latency, at a tradeoff of quality probably.
[04:49] <schiho> yes and i suppose the default is frame
[04:50] <troy_s> schiho: That about does it then. Doubt you will get it lower.
[04:50] <schiho> exactly
[04:50] <troy_s> schiho: What codec?
[04:50] <troy_s> schiho: Is that x264?
[04:50] <schiho> libx264
[04:50] <troy_s> schiho: With the zerolatency option?
[04:50] <schiho> yes
[04:50] <troy_s> I'm sure that is close to as good as it will get.
[04:50] <schiho> that makes me happy troy! :)
[04:50] <troy_s> The _only_ better option would be zero encoding if it is possible, and transmit that
[04:51] <troy_s> But again, that's just me speaking as a YCbCr codec nerd, not as an ffmpeg expert. I'm not sure it is even possible.
[04:51] <schiho> yeah i should definitely try this out
[04:51] <troy_s> schiho: The good news is that with what you have now, you have your mobile streaming working now.
[04:51] <troy_s> schiho: Also, I would seriously consider running a headless Linux box if this is an important thing. You can almost certainly shave some latency off that.
[04:52] <troy_s> .25 latency isn't horrible though.
[04:52] <schiho> what do you mean exactly by headless linux, you mean virtual box?
[04:52] <troy_s> I'd say that is very manageable.
[04:52] <schiho> http://pastebin.com/bd4rrzDK  => you can see here where i began
[04:52] <troy_s> No. I mean if this is a 'thing' i'd heavily consider a Brix or NUC box and run it headless.
[04:52] <troy_s> What processor are you running now?
[04:52] <troy_s> Because threads will make a hell of a difference for you.
[04:52] <schiho> intel core i7
[04:53] <troy_s> How many threads are you running with that command?
[04:53] <troy_s> It should be 8
[04:53] <schiho> it's the default, and i think default is automatic according to the number of cpus
[04:53] <troy_s> set it manually just to be sure.
[04:54] <troy_s> If you have a quad core i7, roll with 8 (although sometimes for reasons that are known only to schedulers, total +1 or some other dark alchemy may be better.)
[04:54] <schiho> yes i set it manually now, but for some reason i cannot go more than 7
[04:54] <troy_s> ?
[04:54] <troy_s> That's odd.
[04:55] <troy_s> Quad core with HT should work with 8.
[04:55] <troy_s> Is FFMPEG barfing out an error?
[04:57] <schiho> nope nothing
[04:57] <troy_s> What is your line?
[04:57] <schiho> and i am using 12% of my overall cpu
[04:57] <schiho> -f dshow -i video=\"Blackmagic WDM Capture\" -pix_fmt yuv420p  -r 25 -vcodec libx264 -preset ultrafast -tune zerolatency -threads 16 -thread_type slice -f mpegts udp://192.168.0.10:1234"
[04:57] <troy_s> Yes... that should be higher if the threads are more optimized.
[04:57] <troy_s> 16?
[04:58] <troy_s> What happens with 8?
[04:58] <schiho> jep, this is the maximum recommended
[04:58] <troy_s> You will add scheduling overhead with too many
[04:58] <troy_s> (You lose time flipping jobs.)
[04:58] <troy_s> So I'd try 8 and 4.
[04:58] <troy_s> and test the benchmarks.
[04:59] <schiho> okay you know what, i thought i was killing the ffmpeg process every time i was starting a new one, but rubbish ffmpeg.exe instances were still in the process list
[04:59] <troy_s> That's no good.
[04:59] <schiho> so i killed now every instance of ffmpeg exe
[05:00] <troy_s> (You _really_ need that Linux install. Have a spare SSD kicking around?)
[05:00] <troy_s> Best 60$ you can probably spend right now. :)
[05:00] <schiho> :) i will get one tomorrow, it's 05.00 am here atm
[05:00] <troy_s> You get my PMs?
[05:01] <schiho> ah yeah got it
[05:01] <schiho> i didn't see it
[05:01] <schiho> however, i am not sure if ffmpeg is starting hidden threads, such that i cannot see them under processes in the task manager
[05:02] <pinkette> is it just me or h263 is horrible
[05:02] <pinkette> what is best h263  encoder
[05:03] <troy_s> schiho: The threads will be under the instance, not multiple instances.
[05:03] <schiho> yes, i am watching them now under the ressource monitor
[05:04] <schiho> when i explicitly set 1 thread, under windows it appears with 4; setting 2, it appears with 10
[05:05] <schiho> i am sure ffmpeg is starting threads for other tasks as well with every instance, maybe for communication
[05:05] <schiho> oh i've realized that the blackmagic camera has a 512gb ssd inside :)
[05:10] <troy_s> schiho: Decent camera. Form factor is junk. The lack of off-speed recording is junk. But really solid latitude.
[05:11] <troy_s> schiho: (On the smaller sensor version. The larger sensor suffers from the CMOSIS bugginess)
[05:15] <schiho> and the camera gets hot really fast.... i mean for me it's good, when i am freezing i put my hands on it
[05:16] <schiho> and i already replaced the display and now the battery is dying
[05:17] <schiho> troy_s: allright mate, i will sleep now, and hopefully install linux on my machine tomorrow
[05:17] <schiho> troy_s: thank you for your help, i will come around this channel tomorrow night
[05:19] <troy_s> schiho: Good stuff.
[05:19] <troy_s> schiho: Feel free to hit me via email
[05:19] <troy_s> schiho: I'll pm you.
[05:20] <troy_s> schiho: You should have it. I'd be interested to see if you can shave some ms off of that with a Linux install on a small 120 SSD.
[05:23] <schiho> troy_s: Great ;)
[05:24] <pinkette> is it normal that x265 is very slow ?
[05:37] <klaxa> yes
[13:38] <mr_lou> I'm trying to create a blu-ray folder with tsMuxeR (not the GUI version). I can run the same command line 5 times in a row without changing anything. It'll fail 4 times but be successful 1 time. The error is generally just "Segmentation fault (core dumped)", but there's also been a "moov atom not found" every now and then.
[15:06] <Subsentient> I need to force a constant bitrate converting m4a to mp3.
[15:07] <Subsentient> It won't listen to anything I try including -minrate or -maxrate
[15:07] <Subsentient> It starts off at some insanely high bitrate and then goes down to like 32
[15:15] <Subsentient> iive: Nobody?
[15:15] <Subsentient> Nobody knows how to do something that simple?
[15:16] <iive> Subsentient:  what is the audio encoder that you are using?
[15:16] <Subsentient> libmp3lame
[15:16] <Subsentient> ffmpeg -i Zealot\ -\ 04.\ Undercat\ \(feat.\ Zealot\).m4a -b:a 320k -minrate 320k -maxrate 320k -bufsize 2048k -codec:a libmp3lame Zealot\ -\ 04.\ Undercat\ \(feat.\ Zealot\).mp3
[15:16] <Subsentient> that's the full command
[15:17] <Subsentient> iive: It deteriorates down to like 50kbps halfway through conversion
[15:17] <Subsentient>  
[15:18] <iive> i don't think mp3lame even looks at minrate/maxrate
[15:19] <iive> try -b:a 128k
[15:19] <Subsentient> iive: Still deteriorates
[15:25] <iive> also, keep in mind that the position of the option does matter.
[15:26] <Subsentient> iive: I know it does. Does my command line look correct?
[15:26] <Subsentient> iive: Show me how you'd convert an m4a to mp3 with a constant bitrate of 320kbps.
[15:27] <iive> for me `ffmpeg -i some.mp4 -vn -b:a 320k c:a libmp3lame test.mp3` does work
[15:27] <iive> ops, -c:a
[15:28] <iive> and i've tried 64,128,192 and they all seem to work.
[15:30] <Subsentient> iive: m4a, not mp4
[15:30] <iive> the input shouldn't matter.
[15:30] <iive> i actually used .ts
[15:31] <iive> m4a is just short for mp4 audio
[15:32] <iive> i added -vn to disable the video.
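
Assembled from iive's corrected form, a plain conversion for the m4a case would look like this (input.m4a and output.mp3 are placeholders); -b:a alone normally keeps lame at the requested bitrate.

    ffmpeg -i input.m4a -vn -c:a libmp3lame -b:a 320k output.mp3
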
[15:38] <Subsentient> iive: I solved it by converting the m4a to flac and then the flac to mp3
[15:39] <iive> i tried with an m4a i found, I cannot reproduce this problem.
[15:40] <Subsentient> iive: Thanks for your help.
[15:41] <Subsentient> I get cranky.
[15:41] <Subsentient> I never have luck with ffmpeg
[15:41] <Subsentient> it always does weird stuff to me.
[15:41] <iive> are you running it on windows?
[15:52] <Subsentient> iive: No, Linux.
[15:52] <Subsentient> i686
[16:37] <crschmidt> Hi. I'm trying to convert a video to a time-lapse style preview image. I know I can do this using the image2 muxer and another tool (like `convert`), but I'd like to do it in a single ffmpeg command if possible. The problem I'm having is that I can't seem to 'speed up' the gif output; the frame delay is always consistent with the frames that are being dropped. so if I use (eg) -vf "select='not(mod(n,60))'", the frame delay is set to 1s, whe
[16:37] <crschmidt> Is there any way to do this in a single command?
[16:38] <c_14> try changing the output fps?
[16:41] <crschmidt> c_14: using -r? Adding -r combined with the -vf above does not change the frame delay of the resulting .gif
[16:43] <c_14> Why not just use setpts?
[16:46] <crschmidt> c_14: Because I have no clue what I'm doing :) Looks like the right answer; thanks!
[16:48] <crschmidt> for posterity: ./ffmpeg  -i flying.MP4  -filter:v "setpts=0.01*PTS"  -t 5 -r 5 output.gif seems to give me approximately what I was looking for
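
A variant that keeps the frame-dropping select from the earlier attempt and rewrites the timestamps so the kept frames play back-to-back; the 60 and 25 are just the values from this conversation and would be tuned per source.

    ffmpeg -i flying.MP4 -vf "select='not(mod(n,60))',setpts=N/(25*TB)" output.gif
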
[16:51] <raytiley_> need help googling... is there a way to export the volume data for a video file as a text file... something like generating a thumbnail for every second, but with audio data instead?
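
One way to get per-interval volume data out of ffmpeg alone, assuming a reasonably recent build and a 44.1 kHz source (both assumptions): asetnsamples groups the audio into one-second frames, astats computes levels per frame, and ametadata writes them to a text file. The filter names and options should be checked against the local ffmpeg's documentation.

    ffmpeg -i input.mp4 -vn -af "asetnsamples=44100,astats=metadata=1:reset=1,ametadata=print:file=levels.txt" -f null -
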
[18:23] <BadHorsie> So I have a couple of raspberry pis with cameras and I'm wondering what's good for watching them. Just for testing I have a cron taking pictures every minute and rsync'ing them to my laptop, of course faaaar from ideal haha...
[18:24] <BadHorsie> So, knowing nothing on the subject as you can see, I told myself, "why not go to ffmpeg and ask them what's cool and new and have something to learn?"
[18:24] <c_14> use-case ?
[18:25] <BadHorsie> Some initial searches (given I know close to nothing I am probably not coming up with the right keywords even haha) have stuff about rtsp/rtmp...
[18:26] <BadHorsie> I would like to see them on a browser maybe... Like a security system I guess
[18:26] <BadHorsie> They are masking-tape'd to a couple of telescopes
[18:27] <c_14> You can take them all and combine them as separate streams, or take them and lay them out so they're all next to each other, you can output to a file or something networky
[18:28] <BadHorsie> Can I do that with ffmpeg or what software/techniques should I be investigating?
[18:28] <c_14> I only listed things that I know ffmpeg is capable of.
[18:29] <BadHorsie> Nice
[18:29] <BadHorsie> Can you expand a bit on the "networky" ? What would you recommend ?
[18:31] <c_14> For looking at with a browser?
[18:33] <BadHorsie> I... guess... I don't really know much so I'm biased by my lack of experience... Is there a better alternative? I was guessing I could use WebGL/Flash or maybe some other sort of software
[18:34] <BadHorsie> Eventually I would like to add some analysis to the frames, like opencv or something like that... Just to see if I'm looking at anything useful (not a cloud)
[18:35] <BadHorsie> But that would be on my laptop maybe, not the raspberry pi
[18:35] <c_14> You can, but at that level you usually have to find a flash streamer, web server etc. I'd probably use something hls-like. Either with a webserver that supports hls or just a file I can open.
[18:36] <c_14> You can use opencv with ffmpeg, but that usually requires some form of programming.
[18:37] <BadHorsie> Yeah I don't mind programming... I have fooled around with processing images before, it's the codecs, transports, video techniques that I know nothing about haha
[18:38] <c_14> The easiest start point for you would probably be just to grab all the individual streams from the rpis and stream them to one central location for processing/viewing
[18:46] <BadHorsie> Hmm mlike ffserver on the rpi and ffmpeg client on the laptop ?
[18:48] <c_14> not really
[18:48] <c_14> basically just a bunch of ffmpeg processes sending video to an ffmpeg process on the laptop
[18:50] <BadHorsie> Aight lemme read on it, thanks so much for the hints, happy new year!
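
A minimal sketch of the pipeline c_14 describes, assuming the Pi cameras appear as v4l2 devices and LAPTOP_IP stands in for the laptop's address: each Pi pushes an encoded stream to its own port, and the laptop views or records each one. Software x264 may be too heavy for a Pi of that era, so a hardware encoder or raspivid piping might have to replace the encoding step, but the shape of the pipeline stays the same.

    # on each raspberry pi, one port per camera
    ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://LAPTOP_IP:5001
    # on the laptop
    ffplay udp://LAPTOP_IP:5001
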
[20:09] <ac_slater> hey all. I'm interested in demuxing/decoding in 'realtime' via libavformat, etc (ie - not the command line utility). I can't figure out how the command line utility implements the `-re` flag.
[20:10] <ac_slater> any clue?
[20:14] <c_14> Check ffmpeg.c for all occurrences of rate_emu
[20:20] <ac_slater> c_14: interesting
[20:21] <ac_slater> I see now
[00:00] --- Thu Jan  1 2015

