[Ffmpeg-devel-irc] ffmpeg.log.20150313

burek burek021 at gmail.com
Sat Mar 14 02:05:01 CET 2015


[01:30:09 CET] <andrebq> How can I extract one frame every 1 minute from video A (which have 10min) and save it to another video that should have 10 sec (1 sec for every frame from the original video)
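(A hedged sketch of one way to do this, with placeholder filenames: the fps filter keeps one frame per 60 seconds and setpts re-times the kept frames to one per second, so a 10-minute input yields about 10 frames played back over 10 seconds.)
    ffmpeg -i input.mp4 -vf "fps=1/60,setpts=N/TB" -an -r 1 output.mp4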
[01:43:34 CET] <pinPoint> so anyone play around with prores?
[02:22:42 CET] <pentanol> pinPoint prores?
[02:22:56 CET] <pinPoint> yes
[02:23:15 CET] <pinPoint> converting, encoding, to prores 422,444+
[02:26:43 CET] <pentanol> pinPoint ah, that's another codec; no, I didn't use it.
[02:43:40 CET] <freshbird> hi guys,how to get the number of P or B frame with ffmpeg
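(A hedged way to count picture types with ffprobe, placeholder filename; it decodes every frame, so it can be slow on long files.)
    ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv input.mp4 | sort | uniq -c
Each csv line holds the picture type of one video frame, so the uniq -c summary gives the I/P/B counts.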
[03:00:37 CET] <solrize_> hi, i ran apt-get install avconv on debian 7 and it installed a version supposedly compiled in september 2014 which sounds pretty current, but it's making bad ogg files i think due to this bug: http://comments.gmane.org/gmane.comp.video.ffmpeg.user/35419
[03:00:45 CET] <solrize_> supposedly fixed in more recent versions
[03:01:03 CET] <JEEBsv> solrize_: compilation date tells you nothing
[03:01:04 CET] <solrize_> i tried building from the git repo but the result doesn't seem to have any codecs?  do i have to install those separately?
[03:02:00 CET] <JEEBsv> you need to install the -dev packages for the libraries and enable them in the configure
[03:02:13 CET] <JEEBsv> as in libvorbis etc
[03:02:22 CET] <solrize_> hmm ok i'll try that
[03:02:36 CET] <JEEBsv> decoders for everything and some encoders are included in lavc
[03:03:09 CET] <JEEBsv> so you generally only need to enable ext encoder libraries that you need
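(A hedged sketch of the configure step JEEBsv describes, assuming the libvorbis/libtheora -dev packages and pkg-config are installed; neither library requires --enable-gpl.)
    ./configure --enable-libvorbis --enable-libtheora
    make -j4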
[03:03:42 CET] <solrize_> i just installed libvorbis-dev and libtheora-dev, but it occurs to me, maybe those are the libs where those bugs are, so i should rebuild them from repos too?
[03:04:23 CET] <solrize_> also the ffmpeg repo doesn't seem to have avconv  not sure if i should really care... the debian command line ffmpeg says to use avconv though
[03:08:22 CET] <relaxed> solrize_: http://johnvansickle.com/ffmpeg/
[03:08:54 CET] <relaxed> solrize_: avconv is from the FFmpeg fork libav.org
[03:09:03 CET] <solrize_> oh hmm
[03:09:56 CET] <solrize_> currently trying to configure ffmpeg to rebuild with ogg libs
[03:14:10 CET] <solrize_> compiling.... got message WARNING: pkg-config not found, library detection may fail.
[03:14:10 CET] <solrize_>   from configure like last time
[03:14:10 CET] <solrize_> i wonder if that's the prob?
[03:14:57 CET] <relaxed> install pkg-config
[03:15:19 CET] <solrize_> oh that's a standard apt thing, i didn't realize that... thanks, cool
[03:15:44 CET] <solrize_> reconfiguring and recompiling
[03:20:08 CET] <solrize_> cool, converting away, this will take a while.... i notice it's transcoding at a lower bit rate than the libav version
[03:20:28 CET] <solrize_> well now it's climbed up a bit, probably will average out
[03:21:47 CET] <solrize_> is the -threads option supposed to use multiple cores on the cpu?
[03:21:52 CET] <relaxed> https://trac.ffmpeg.org/wiki/TheoraVorbisEncodingGuide
[03:23:09 CET] <solrize_> hmm thanks.  i just ran "ffmpeg -i filename.mp4 filename.ogg" and it's munching away
[03:23:38 CET] <solrize_> seems to be using libvorbis
[03:31:05 CET] <solrize_> i'm not getting those error messages on playback any more, so thanks!  the video is jumpy but i think that's because the default compression is too much... i'll try to set a higher bitrate
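(The guide relaxed linked uses quality-targeted settings rather than the defaults; a hedged example along those lines, with placeholder filenames. -q:v for libtheora runs 0-10 and -q:a for libvorbis roughly 0-10, higher meaning better quality.)
    ffmpeg -i filename.mp4 -c:v libtheora -q:v 7 -c:a libvorbis -q:a 5 filename.ogv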
[03:43:39 CET] <solrize_> hmm, reconfigured/compiled with --enable-pthreads and running with -threads 8 and got no speedup at all
[03:57:38 CET] <solrize_> https://en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide  wow there's a wikibook about ffmpeg
[05:47:11 CET] <koz_desktop> What does it mean if I get this error message: Past duration 0.999992 too large
[12:12:42 CET] <rtsplease> Hello there
[12:15:44 CET] <rtsplease> Apologies if this shouldn't go here, it's a question about libav code usage. I'm having a very old issue and I can't find the solution. Decoding RTSP streams, the UDP buffer size is insufficient by default... and cannot be modified via URL. Can this be done from libav code? Or should I recompile ffmpeg changing the UDP_MAX_PKT_SIZE ?
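(Possibly worth checking before rebuilding: newer libavformat builds expose a buffer_size private option on the RTSP demuxer, visible via "ffmpeg -h demuxer=rtsp" if present, and the same option can be passed from code in the AVDictionary given to avformat_open_input. A hedged command-line sketch with a placeholder URL:)
    ffmpeg -buffer_size 2000000 -rtsp_transport udp -i rtsp://camera.example/stream -c copy out.mkv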
[13:07:08 CET] <hay207> hi guys i want to mux an audio only file with a video only file, of type webm, how to do so?
[13:07:44 CET] <hay207> ffmpeg -i audio.webm -i video.webm -map 0:a:0 -map 1:v:0  -acodec copy -vcodec copy -shortest  merge.webm
[13:07:50 CET] <hay207> results in
[13:08:06 CET] <hay207> Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
[13:09:31 CET] <hay207> here is pastebin output
[13:10:05 CET] <hay207> http://pastebin.com/q4YGKPCv
[13:13:42 CET] <hay207> i guess it's because only vp8 is supported? and my video is vp9
[13:25:45 CET] <hay207> any help?
[13:48:21 CET] <loster> Hi
[13:48:23 CET] <relaxed> hay207: that version is way too old.
[13:48:47 CET] <relaxed> hay207: use http://johnvansickle.com/ffmpeg/
[13:49:13 CET] <loster> Can anyone please tell me how I can convert all the mp4 files in one of my directories (which has nested directories) to 3gp so that I can use them on my phone
[13:50:51 CET] <loster> I am using ubuntu
[13:50:53 CET] <loster> anyone?
[13:51:43 CET] <relaxed> loster: find . -name "*.mp4" -print0 | xargs -0 bash -c 'for i; do ffmpeg -i "$i" -q:v 3 "${i%.*}".3gp; done' -
[13:52:20 CET] <relaxed> run that in the top level dir
[13:55:49 CET] <footer> thanks relaxed
[15:15:25 CET] <hay207> thanks relaxed
[16:04:39 CET] <pomaranc> how do you use the -timestamp option? it just says it's not an output option
[16:47:59 CET] <whackie> I'm trying to convert AIFF to MBWF (RF64 + bext) - anyone here who could share some insight?
[17:11:20 CET] <durandal_1707> whackie: use option -rf64 always
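(durandal_1707's flag spelled out as a full command, hedged; filenames are placeholders, and the WAV muxer's -write_bext flag is assumed to be available for writing the BWF bext chunk.)
    ffmpeg -i input.aiff -c:a pcm_s24le -write_bext 1 -rf64 always output.wav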
[17:11:43 CET] <jgh-> is there a way of ensuring that multiple outputs are synchronized?  I.e. I'm doing a transcode of a single source to multiple layers and sending to RTMP, but they seem to be out of sync with each other so switching layers is awkward.
[17:12:40 CET] <whackie> durandal_1707: just found what you said, thanks for the confirm :)
[17:17:49 CET] <jonascj> if I do "ffmpeg -i input.mkv -vf scale=-1:720 -acodec copy output.mkv" what determines the bitrate?
[17:25:52 CET] <Mavrik> jonascj, whatever ffmpeg binary has been drinking that morning :P
[17:26:10 CET] <Mavrik> it defaults to... something (which changes with versions) and it's probably terrible :)
[17:26:53 CET] <Mavrik> jonascj, I suggest you add explicit "-codec:v libx264 -crf 23"
[17:26:58 CET] <Mavrik> and tweak crf for quality :)
[17:30:18 CET] <jonascj> Mavrik: I have a 5GB h264 video with a duration of 120 minutes. Combining audio and video that gives 5*1024^3*8/(120*60*1000) = 5900 kbit/s. I should never be able to get more than that
[17:30:47 CET] <Mavrik> uh
[17:30:48 CET] <jonascj> but running the command I just write I get "bitrate = 1248 kbit/s" and similar during processing.
[17:30:59 CET] <Mavrik> and? :)
[17:31:15 CET] <Mavrik> I don't understand the issue.
[17:31:39 CET] <Mavrik> Or what your goal is :)
[17:32:21 CET] <jonascj> My issue is that (combining audio and video into one bitrate) my input file is ~6000kbit/s, and when doing "ffmpeg -i input.mkv -acodec copy -vf scale=-1:720 output.mkv" I get "bitrate = 1242kbit/s" or similar. What happened to the rest of the data?
[17:32:46 CET] <Mavrik> jonascj, nothing, the video track got scaled down and recompressed at worse quality.
[17:33:22 CET] <jonascj> of course going from 1920*1080 to 1336*720 should give a factor of two on its own (roughly half the amount of pixels), but then 1242kbit/s is less than half of 6000kbit/s
[17:33:37 CET] <Mavrik> Video compression isn't linear.
[17:34:02 CET] <Mavrik> It's lossy and bitrate when using CRF quality control follows the video CONTENT not static factors :)
[17:34:16 CET] <Mavrik> If you look at the video itself you'll see that you lost detail.
[17:34:35 CET] <Mavrik> 1300-1500 is a good bitrate for medium quality 720p though
[17:34:49 CET] <jonascj> so should I feel happy about downscaling a 6000kbit/s video and getting ~1100kbit/s? How can I lose less detail / data?
[17:35:11 CET] <Mavrik> jonascj, use lower -crf parameter
[17:35:15 CET] <jonascj> of course data is lost when I go from 1920*1080 to 720*1336, but other than that
[17:35:18 CET] <Mavrik> lower the crf number, better quality and higher bitrate
[17:36:00 CET] <Mavrik> note that if original video was encoded with worse encoder you get some compression for "free"
[17:38:08 CET] <jonascj> aright, thanks!
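(Mavrik's suggestion as a complete command, hedged, with placeholder filenames; -2 instead of -1 in scale keeps the computed width divisible by 2, which libx264 requires, and a lower CRF such as 20 trades a higher bitrate for less visible loss.)
    ffmpeg -i input.mkv -vf scale=-2:720 -c:v libx264 -crf 20 -c:a copy output.mkv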
[17:42:45 CET] <jim__> Hi.
[17:43:10 CET] <jim__> Can someone tell me please, how can I find "tbr" by using ffmpeg libraries?
[17:43:32 CET] <jim__> tbr is guessed from the video stream and is the value users want to see when they look for the video frame rate
[17:43:59 CET] <jim__> Using m_codecContext->time_base.den and m_codecContext->time_base.num doesn't give an accurate frame rate
[17:50:17 CET] <Jonas__> pts and dts?
[17:50:58 CET] <Jonas__> I know avprobe can -show_packets for a file to dump out pts and dts for audio and video frames
[17:51:13 CET] <Jonas__> it's part of libav but I assume ffmpeg has an equivalent
[17:52:00 CET] <Jonas__> jim__, highlight for notice
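(For reference, the "tbr" that ffmpeg prints corresponds to AVStream->r_frame_rate, and AVStream->avg_frame_rate holds the averaged rate; the same fields can be inspected from the shell. A hedged example with a placeholder filename:)
    ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate -of default=noprint_wrappers=1 input.mp4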
[18:07:18 CET] <itctech> How would I go about using movflags inside of ffserver configs?
[18:10:57 CET] <pzich> with -movflags?
[18:11:46 CET] <itctech> Yes. Do I need to specify AVOptionVideo in my ffserver.conf or should I supply this option during my call to ffmpeg?
[18:12:52 CET] <itctech> It would be like AVOptionVideo -movflags +faststart
[18:14:08 CET] <pzich> I'm seeing it specified on the command line instead of the config. I haven't used it in an ffserver config so I don't know if that will work
[18:16:33 CET] <itctech> I'm trying to livestream from a dv camcorder but provide it in aac+xvid. I may be going about this the entirely wrong way though.
[19:46:31 CET] <debianuser> Hello. How can I change the aspect ratio of an .avi file without reencoding it? `ffmpeg -i aspectbad.avi -map 0 -c copy -aspect 16:9 aspectgood.avi` looks like changing both SAR and DAR resulting in file looking exactly same as it was: http://pastebin.com/VwMKND5a Are there any other options?
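(Hedged workaround: with -c copy the -aspect flag only changes container-level display aspect, and many players ignore aspect hints stored in AVI; remuxing into a container that carries display aspect natively, such as Matroska, may behave better. Otherwise re-encoding with -vf setdar=16/9 is the fallback.)
    ffmpeg -i aspectbad.avi -map 0 -c copy -aspect 16:9 aspectgood.mkv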
[20:10:02 CET] <itctech> Is there a way to see exactly what options ffserver is sending to ffmpeg?
[21:26:48 CET] <d3fault> extremely noob question (hopefully): why do sample size and sample type/correlate not correlate? why do people put sample size 16 in a signed int. why not an unsigned short (which has a size of 16)? etc. i guess i'm asking what the difference between the two are
[21:27:01 CET] <d3fault> errr, sample type/encoding*
[21:57:43 CET] <kepstin-laptop> d3fault: well, the difference is that one is signed, the other isn't...
[21:58:18 CET] <kepstin-laptop> an audio signal in a signed int is usually centered at 0, and alternates between positive and negative numbers to represent the signal.
[21:58:28 CET] <d3fault> kepstin-laptop, i understand what data types are in general (i code c++), i just don't get why if the sample size is 16 bits, you'd choose a 32 bit container for it...
[21:58:50 CET] <kepstin-laptop> oh, I see, you were talking about int vs short
[21:58:59 CET] <d3fault> as an example
[21:59:03 CET] <d3fault> but char vs float also come to mind
[21:59:28 CET] <kepstin-laptop> you'd use an int (32bit) if you wanted to do some calculations on it that might overflow the 16bit number, then you could cleanly clip it or scale it afterwards back to 16bit
[22:00:18 CET] <d3fault> weird. should i just use 16 bit sample sizes + signed int sample types and not worry about it xD?
[22:01:43 CET] <kepstin-laptop> an interesting thing about the 'float' type is that you normally use a nominal range of -1..0..+1 to represent the signal, which makes some types of transforms easier to understand.
[22:02:40 CET] <kepstin-laptop> if you're not doing any manipulation to the signal, there's no reason to use a different sample size/format than what you get from the audio file decoder
[22:03:05 CET] <d3fault> i suck at math, so i'm lucky if i can understand any transforms xD
[22:03:14 CET] <d3fault> kepstin-laptop, i'm synthesizing my audio :-P
[22:03:19 CET] <d3fault> so i get to choose
[22:04:16 CET] <kepstin-laptop> if you're synthesizing, it probably makes sense to use float unless you have a good reason otherwise, less to worry about with dynamic range or clipping.
[22:04:53 CET] <d3fault> ok. and then what for sample size? 32-bits for same reason?
[22:06:02 CET] <kepstin-laptop> 'float' C type is usually a 32-bit "single precision" floating point number.
[22:06:40 CET] <d3fault> correct. but that's what lead me here to begin with. i _CAN_ choose an 8 bit sample size with a float sample type (and i'm leik "wtf?")
[22:06:46 CET] <d3fault> led*
[22:08:00 CET] <kepstin-laptop> I don't know of any 8-bit floating point number types...
[22:08:28 CET] <d3fault> I don't know of any 16-bit signed integer types...
[22:08:56 CET] <kepstin-laptop> IEEE 754 defines one, but most C implementations don't expose it, I think.
[22:09:33 CET] <d3fault> ^a real man. cites IEEE specs from memory
[22:09:41 CET] <kepstin-laptop> I have the wikipedia page open ;)
[22:09:45 CET] <d3fault> xD
[22:10:20 CET] <kepstin-laptop> But yeah, you don't normally specify the bit size and number format separately, since they're kind of tied together in most cases
[22:10:37 CET] <kepstin-laptop> so you'd be using 32bit floating point samples, or 16bit signed integer samples, or whatever.
[22:11:07 CET] <kepstin-laptop> (and when saving to a file, you have to specify the endian, too...)
[22:11:27 CET] <d3fault> so shouldn't they be combined into one!?!?
[22:12:44 CET] <kepstin-laptop> d3fault: they usually are; my understanding is that in ffmpeg it's a list like "u8, s16, s32, flt, dbl", etc. which define the size of the type and how the number is formatted.
[22:13:07 CET] <kepstin-laptop> and then when saving to a file, you see stuff like 's16le' to say signed 16 bit integer, little endian
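(The combined names kepstin-laptop describes can be listed directly, assuming a reasonably recent build; the output shows each format name, u8, s16, s32, flt, dbl and their planar variants, together with its bit depth.)
    ffmpeg -sample_fmts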
[22:13:31 CET] <d3fault> tru tru, i do usually see them combined. but in Qt Multimedia they are separate =o
[22:13:41 CET] <kepstin-laptop> some tools do take the format and size separately, but they'll obviously reject any combination that's not supported.
[22:14:49 CET] <kepstin-laptop> so it would probably take the format and size as separate parameters, combine them and check what underlying type to use, then report success or failure....
[22:14:56 CET] <d3fault> i see. yea i like ffmpeg/etc's way of doing it: make you choose from a list of supported combinations. qt multimedia gives me too much freedom and confuses me xD
[22:15:38 CET] <d3fault> but yes, it does have an "isSupportedFormat" check like you mention
[22:16:09 CET] <d3fault> thx kepstin-laptop, i understand slightly better now.... and am gonna use floats for more range
[22:17:03 CET] <kepstin-laptop> yeah - keep in mind that with float, you normally want to treat -1.0 to 1.0 as the max possible range. You can go over or under for temp calculations, but the output should stick to that range normally.
[22:17:46 CET] <kepstin-laptop> since when you're exporting to a final audio file, it'll probably use s16 samples, which would clip during conversion if you go outside the range.
[22:18:21 CET] <kepstin-laptop> (one common way to work around that is to apply a dynamic range compressor that "crushes" everything that goes outside the valid range before converting down)
[22:19:18 CET] <d3fault> hmm yea i'll be serializing the pre-synth'd data (flex sensors attached to back of my fingers), so that's not an issue for me
[22:21:44 CET] <kepstin-laptop> huh, flex sensors on the fingers, eh? sounds like a neat audio control interface.
[22:22:23 CET] <kepstin-laptop> would want to use it to direct a synth rather than be the audio input directly, I can't flex my fingers back and forth 440 times per second ;)
[22:22:25 CET] <d3fault> and keyboard/mouse control too. i call it "project go outside" rofl. i love coding, but hate sitting in a chair
[22:23:11 CET] <d3fault> kepstin-laptop, i'm just going to modify the frequency of a sin wave for the time being... but i mean the possibilities are endless (drum machines, pianos, etc)
[22:25:11 CET] <kepstin-laptop> d3fault: if you have something you want to show off, feel free to stop by #musicbrainz, we've got some folks that find this sort of thing interesting.
[22:26:59 CET] <d3fault> heh i love musicbrainz. well everything is available on d3fault.net, but is also in extremely alpha state. so much to code, so little time
[22:30:36 CET] <pinPoint> d3fault: what is the website about?
[22:33:59 CET] <d3fault> the making of the wearable musical computer operating system, i guess. i have tons of projects (check the "Archive" link), most of which are not music/hardware related, and most all of which have barely any progress xD. everything is released under the DPL (LGPL 3 with GPL fallback removed). why? for fun/life/lulz/etc
[22:34:53 CET] <pinPoint> wearable musical computer OS?
[22:35:10 CET] <pinPoint> like a watch/clothing?
[22:35:15 CET] <d3fault> i don't have a fancy name for it. maybe just "d3fault" ?
[22:35:41 CET] <d3fault> pinPoint, flex sensors attached to back of my fingers
[22:36:55 CET] <d3fault> and for now i will use a 1.8" LCD dangling in front of my face mounted on a bent/shaped hanger... but ideally i
[22:37:05 CET] <d3fault> *i'd upgrade to glasses/contacts/eye-implants
[22:38:08 CET] <pinPoint> well for me, I gotta learn theory, playing piano which is what i'm doing now
[22:38:40 CET] <d3fault> yea i like piano... i'll surely have a piano mode... but the thing i don't like about piano... is that you have to sit down to play it
[22:38:50 CET] <pinPoint> its a pain in the butt
[22:38:59 CET] <d3fault> hah!
[22:39:59 CET] <pinPoint> i dislike my brain working like this
[22:40:25 CET] <pinPoint> my fingers and brain all over the piano.. I'm doing Hanon exercises
[22:40:25 CET] <d3fault> hack it
[22:41:02 CET] <pinPoint> i'm only in about 1.5months, eventually my brain is going to have to break and submit to my will
[22:41:06 CET] <pinPoint> brb, halo match
[22:42:13 CET] <d3fault> yea my brain can't multithread the piano keys for shit, i need to practice dat hardcore :(.... but i can't commit to a normal piano, i'll do it on this finger synth once i finish building it
[22:50:57 CET] <ac_slater_> hey all. I'm working on muxing streams via libavformat. I'm trying to determine how av_write_frame actually does its job. As in, what happens if I write a bunch of AVPackets with proper timestamps at the same time?
[22:51:53 CET] <ac_slater_> I guess the question is ... do I have to feed the stream with packets at some rate?
[22:53:54 CET] <pinPoint> d3fault: i'm doing my piano on a novation controller connected to my osx->logic pro
[22:54:19 CET] <pinPoint> and using the default yamaha piano sequence
[22:58:20 CET] <Mavrik> ac_slater_, that's very dependent on the underlying protocol and muxer, but av_write_frame will write all data out directly if possible
[22:58:48 CET] <d3fault> pinPoint, xD that's all jolly well and good, but i neeeeeed mobility (been sitting in chair too long). almost every instrument requires you to be still (exceptions: flutes, etc)
[22:58:58 CET] <ac_slater_> Mavrik: interesting. My mux target is MPEGTS
[23:00:48 CET] <ac_slater_> Mavrik: I guess my real question is actually feeding the muxer. As in, do I have to feed it at the rate of my data or does the muxer 'queue' up packets via PTS/DTS and write them appropriately? ie - I have no idea how muxers work... I could read the code I suppose
[23:01:31 CET] <Mavrik> ac_slater_, um, they literally just write through the data as you call it
[23:01:40 CET] <Mavrik> they don't care about the rate
[23:01:53 CET] <Mavrik> you say "write", it'll morph the data and write it directly to file
[23:02:22 CET] <Mavrik> also, av_write_frame won't do interleaving, so it'll just write packets as you give them to it
[23:02:33 CET] <Mavrik> av_interleaved_write_frame will make sure they're interleaved by streams
[23:03:00 CET] <Mavrik> IIRC none of the muxers will actually reorder the packets, so if your order isn't in DTS order, you'll get a "non-monotonic timestamps" error
[23:03:24 CET] <ac_slater_> Mavrik: very interesting
[23:06:25 CET] <ac_slater_> for some reason  I was thinking the 'interleaving' write call kinda meant you can feed the muxer out of order packets and it would do magic
[23:06:37 CET] <ac_slater_> it's all good though, I actually dont need any of that.
[23:07:00 CET] <ac_slater_> I really just need ffmpeg to take in my already-encoded stream data, mux it as MPEGTS and dump out to a pipe
[23:07:05 CET] <ac_slater_> Mavrik: does that sound crazy?
[23:07:12 CET] <ac_slater_> (ie - something libavformat can handle)
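(The CLI equivalent of the pipeline ac_slater_ describes, remuxing already-encoded data to MPEG-TS on stdout, is roughly the following, hedged; a raw H.264 elementary stream carries no timestamps, so an input -framerate hint may be needed, and the consumer name is a placeholder.)
    ffmpeg -framerate 30 -i input.h264 -c copy -f mpegts pipe:1 | your_consumer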
[00:00:00 CET] --- Sat Mar 14 2015


More information about the Ffmpeg-devel-irc mailing list