[Ffmpeg-devel-irc] ffmpeg.log.20141214

burek burek021 at gmail.com
Mon Dec 15 02:05:01 CET 2014


[00:22] <Lambertini> hi ppl, I'm trying to make a video stream with ffserver; I start broadcasting but the video does not appear, and when I stop ffserver only the last second shows up. Can anyone help me?
[00:24] <wyatt8740> so the video doesn't start until you kill ffserver, and you only get a second of it then?
[00:32] <bencc> I've encoded mp4 with ffmpeg and when streaming it to chrome I'm getting
[00:32] <bencc> Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH
[00:33] <bencc> could this be something with ffmpeg or only communication between chrome and the server?
[00:33] <wyatt8740> do other browsers work?
[00:37] <bencc> wyatt8740: yes. FF can play the whole file
[00:37] <bencc> Chrome fails every time after a few minutes
[00:48] <Lambertini> hi ppl, I'm trying to make a video stream with ffserver; I start broadcasting but the video does not appear, and when I stop ffserver only the last second shows up. Can anyone help me?
[00:54] Action: wyatt8740 repeats
[00:54] <wyatt8740> so the video doesn't start until you kill ffserver, and you only get a second of it then?
[01:25] <c_14> bencc: I'm guessing server and chrome. The server is sending a content length and chrome is complaining when the content exceeds this length (probably).
[01:28] <bencc> c_14: maybe because of bad chunks. I'll check. thanks
[03:01] <tomato> When converting a video to an MP3, how can I make sure the MP3 has the highest possible bitrate?
[03:02] <tomato> Is there an "automatic" parameter that is able to set it to the highest bitrate possible?
[03:02] <c_14> -b:a INT_MAX
[03:02] <c_14> But you don't want that.
[03:03] <c_14> https://trac.ffmpeg.org/wiki/Encode/MP3
[03:03] <c_14> Just pick a -V you're comfortable with.
[03:03] <c_14> eh
[03:03] <c_14> -q:a
[03:03] <tomato> c_14: I've looked at that page before, but does "-q:a" really do the job?
[03:04] <c_14> yes
[03:04] <c_14> -q:a 0 will be basically as good as you can get with mp3
[03:04] <c_14> If you want something better, use FLAC
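For reference, a command along these lines (a sketch; filenames are hypothetical) extracts audio at LAME's highest VBR quality:

```shell
# -vn drops the video stream; -q:a 0 is LAME's best VBR setting
# (roughly 220-260 kbps average).
ffmpeg -i input.mp4 -vn -c:a libmp3lame -q:a 0 output.mp3
```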
[03:05] <tomato> c_14: why is FLAC better?
[03:06] <aetas> lossless
[03:06] <tomato> aetas: is it faster too?
[03:06] <aetas> faster at?
[03:07] <tomato> aetas: will the conversions be any faster?
[03:10] <aetas> there are a few compression settings to it that affect the time, so it's hard to answer, but generally speaking, precision always takes longer to some degree
[03:11] <aetas> I use it for any encodings that I may have to reprocess before I take them down to aac
[03:12] <aetas> you can always try it on default, and then play with the -compression_level a bit to see if it fits for you
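A sketch of that (filenames hypothetical): `-compression_level` only trades encode time for file size, since FLAC is lossless either way.

```shell
# 0 = fastest/largest, 12 = slowest/smallest; the default is 5.
ffmpeg -i input.wav -c:a flac -compression_level 8 output.flac
```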
[03:13] <Nosomy> >converting a video to an MP3
[03:14] <Nosomy> why not copy the audio stream of the "video"?
[03:14] <aetas> he's asking about the audio transcoding, not the video part of it
[03:15] <aetas> I'm assuming he already has that figured out or hasn't gotten to it yet
[03:15] <tomato> MP3 is probably the best format, as it's compatible with most devices and at the same time the file size is small
[03:15] <aetas> it's more about what he wants to use it for
[03:15] <aetas> and we don't really know yet
[03:16] <aetas> why am I talking to you like you're someone else
[03:58] <BtbN> tomato, actually, most devices that play mp3 also support aac
[03:58] <BtbN> even rather cheap mp3 players
[03:58] <BtbN> Mostly thanks to iTunes
[04:01] <rieve> trac doesn't work?
[04:02] <BtbN> works for me
[04:02] <rieve> Adding tickets too? I'll try it again, and if it doesn't work I'll post a pastebin here.
[04:03] <BtbN> I won't open a ticket just to test that
[04:05] <rieve> ok now it worked
[04:06] <rieve> Can you try to reproduce https://trac.ffmpeg.org/ticket/4186 ?
[06:11] <_pr0t0type_> Hey guys, is there a place I can go to online that details the specific file format for PCM?
[06:11] <_pr0t0type_> or LPCM?
[06:11] <_pr0t0type_> files
[06:20] <kepstin-laptop> pcm (raw audio files) are just binary raw audio data, there's no explicit format. Just binary encoded samples one after the other
[06:21] <kepstin-laptop> in order to read them, you need to provide more info, like what format the numbers are encoded in, how many channels (they usually interleave samples from different channels), sampling rate, etc.
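So reading a raw PCM file means passing those parameters explicitly; a sketch (the sample format, rate, and channel count here are assumptions):

```shell
# Tell ffmpeg the data is signed 16-bit little-endian, 44.1 kHz stereo,
# then wrap it in a WAV header so ordinary players can open it.
ffmpeg -f s16le -ar 44100 -ac 2 -i raw.pcm output.wav
```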
[06:33] <waressearcher2> I use this command to make a 10-second video fade out: "ffmpeg -i 1.avi -vcodec mpeg4 -vb 5000k -vf "fade=out" -y -f avi 2.avi", but the output video fades out in 1 second followed by 9 seconds of black screen. How do I make it fade out over all 10 seconds?
[06:34] <pzich> waressearcher2: https://www.ffmpeg.org/ffmpeg-filters.html#fade
[06:34] <waressearcher2> right, links are good
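From that page: `fade=out` defaults to a 25-frame fade starting at frame 0, which explains the 1-second fade followed by black. A sketch of the fix (assuming a 10-second clip):

```shell
# Fade out across the whole clip: start time 0, duration 10 seconds.
ffmpeg -i 1.avi -c:v mpeg4 -b:v 5000k -vf "fade=t=out:st=0:d=10" -y 2.avi
```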
[07:04] <Nosomy> is there no psy-rd setting in ffmpeg?
[07:04] <Nosomy> only the alpha value?
[07:06] <Nosomy> eg, >ffmpeg ... -vcodec libx264  -x264opts psy-rd=0.5:keyint=300 .... <-ok
[07:06] <Nosomy> eg, >ffmpeg ... -vcodec libx264  -x264opts psy-rd=0.5:0.1:keyint=300 .... <-not ok
[07:15] <Nosomy> how do I declare --psy-rd 0.5:0.1 in ffmpeg?
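Inside `-x264opts` the colon already separates options, which is presumably why `psy-rd=0.5:0.1` is rejected. If I recall x264's parameter parser correctly, it also accepts a comma between the two psy-rd values, so a sketch would be:

```shell
# Comma keeps both psy-rd values inside one option; colon separates options.
ffmpeg -i in.mkv -c:v libx264 -x264opts "psy-rd=0.5,0.1:keyint=300" out.mkv
```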
[08:24] <waressearcher2> I use this command to fade in: "ffmpeg -i ../1.wav -ss 584.8 -t 20 -af afade=t=in:st=10:d=10 -ac 2 -ar 44100 -ab 320k -y -f ac3 2_2.ac3" but it doesn't work
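A guess at the problem: with `-ss` placed after `-i`, decoded frames keep their original timestamps (around 584 s here), so `afade`'s `st=10` may never match. Moving `-ss` before the input resets filter timestamps to zero; a sketch:

```shell
# Input-side seeking makes filter time start at 0, so the fade-in
# covers t=10..20 s of the 20-second excerpt.
ffmpeg -ss 584.8 -i ../1.wav -t 20 -af "afade=t=in:st=10:d=10" \
       -ac 2 -ar 44100 -b:a 320k -y -f ac3 2_2.ac3
```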
[09:54] <AlecTaylor> hi
[09:54] <AlecTaylor> Trying to think of a fun use-case scenario involving audio steganography& maybe music social-networking with secret subtext? - Any other/extended suggestions? :)
[12:56] <waressearcher2> about these options "-af afade=in" and "-vf fade=out": I was a bit frustrated using them, because when you use "fade in" for audio or video, your audio goes from silent to normal and video goes from black screen to normal video, and with "fade out" your audio goes from normal to silent and video goes from normal to black,
[12:56] <waressearcher2>  if you think about it, "fade in" should go "into" fading, so audio should go from normal and sort of "fade in" to silence, and video should go from normal and "fade into" black; and "fade out" is like going out of somewhere, so "fading out" of darkness or silence to normal. That confuses me
[12:59] <pzich> ffmpeg did not establish the idea of a fade in and fade out, those terms have been around since long before. don't think about it as going into our out of the fade, but of a clip having an in point and an out point, and you fading the clip in and out
[13:27] <DrSlony> waressearcher2: no.
[13:28] <waressearcher2> DrSlony: no what ?
[13:28] <DrSlony> your blasphemy.
[13:32] <waressearcher2> what ?
[14:25] <waressearcher2> I use this command to join audio and video: "ffmpeg -i in.avi -i in.ac3 -c:v copy -c:a copy -map 0:v:0 -map 1:a:0 -y -f avi out.avi". Is it correct? I think the audio is longer than the video, so when playing out.avi the video just freezes at the end. How do I make the output length match the shortest stream?
[14:26] <pzich> use -shortest
[14:26] <waressearcher2> the audio is 4 seconds longer
[14:26] <pzich> like http://superuser.com/q/332078
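Applied to the command above, that looks like:

```shell
# -shortest stops the output when the shorter input stream (the video) ends,
# instead of freezing on the last frame while the audio runs on.
ffmpeg -i in.avi -i in.ac3 -c:v copy -c:a copy -map 0:v:0 -map 1:a:0 \
       -shortest -y -f avi out.avi
```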
[14:43] <wbchatinterface> hi, I want to ask: is there any way to specify the frame order while re-encoding video? (for example: 1,2,3,3,3,3,7,8,9,9,9,9,10,11,12,13,14...200012,200012,200012,200013,200014,7,7,7,2,40121,200020,200021,200022...)
[14:44] <waressearcher2> wbchatinterface: the only way might be to convert the video to a bunch of png images and then reassemble them into a video in the preferred order
[14:45] <DottorLeo> hi!
[14:46] <waressearcher2> DottorLeo: hi, so you are from Italy ?
[14:48] <DottorLeo> I need a hand with the command line to convert an AIF to OGG. Usually I encode via a GUI (FREEAC) but that program doesn't decode AIF. I want to do AIF->OGG with the additional quality setting -q 8.5
[14:48] <DottorLeo> waresseracher2 yes ;)
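A sketch for the AIF->OGG request (assuming an ffmpeg build with libvorbis; for that encoder `-q:a` is the Vorbis quality scale, -1 to 10, and fractional values such as 8.5 are accepted):

```shell
# Decode the AIFF and encode to Ogg Vorbis at quality 8.5.
ffmpeg -i input.aif -c:a libvorbis -q:a 8.5 output.ogg
```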
[14:56] <wbchatinterface> if a person replaces all similar frames in static scenes with the preceding I-frame, will it improve compression when re-encoding?
[15:02] <wbchatinterface> ok, ty
[15:21] <aleb> Hi, where can I find what "50 tbr, 12800 tbn, 100 tbc" means when I run ffmpeg -i x.mp4?
[15:23] <pzich> by googling 'ffmpeg tbr tbn tbc' and opening the first result: http://stackoverflow.com/a/3199582
[16:01] <DrSlony> ^ i can verify, this google trick actually works
[16:11] <t4nk469> hello! is there any way to change colors (brightness, saturation etc.) in the output video using ffmpeg with C++ (not CLI)?
[16:11] <Mavrik> um
[16:12] <Mavrik> that's a strange question
[16:12] <Mavrik> what do you mean "with C++"
[16:12] <t4nk469> I mean using ffmpeg for developers
[16:12] <pzich> libffmpeg?
[16:13] <t4nk469> yes
[16:13] <Mavrik> t4nk469, what ffmpeg for developers?
[16:13] <Mavrik> if you're calling out to libav* libraries, then you need to use the same filters ffmpeg executable uses
[16:13] <Mavrik> if not... then you need to see what your bindings library does
[16:13] <Mavrik> there's no official "ffmpeg for C++"
[16:14] <t4nk469> well, I mean the already-compiled libraries which can be used for programming
[16:14] <t4nk469> so, does ffmpeg allow color changing by itself? or are additional libs needed?
[16:15] <Mavrik> t4nk469, the color filters are part of libavfilter yes.
[16:16] <Mavrik> there's actually several of them, see ffmpeg-filters documentation on homepage
[16:16] <t4nk469> Mavrik: thank you))
[16:43] <gaussblurinc_> hi!
[16:43] <gaussblurinc_> I have two videos; how do I convert one video using the other video's options?
[16:47] <techtopia> hello
[16:47] <techtopia> if I'm encoding a file with two audio sources, how can I pick the second audio source for the encode?
[16:49] <pzich> techtopia: https://trac.ffmpeg.org/wiki/How%20to%20use%20-map%20option
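From that page, picking the second audio stream looks roughly like this (the codecs here are placeholders):

```shell
# Stream specifiers are 0-based: 0:a:1 is the *second* audio stream
# of the first input; 0:v keeps the video.
ffmpeg -i input.mkv -map 0:v -map 0:a:1 -c:v libx264 -c:a aac output.mkv
```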
[17:00] <gaussblurinc_> pzich: can you help me with my problem? (encoding video with options/codecs from another video)
[17:01] <pzich> gaussblurinc_: you could use ffprobe to get the resolution, framerate, codec and bitrate, then pass those in. I don't know if there's anything beyond that
[17:01] <pzich> I'm pretty sure there's not a way to just pass it a file to look at
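A sketch of the ffprobe step (field names per ffprobe's `-show_entries` syntax; the file name is hypothetical):

```shell
# Print the first video stream's codec, resolution, frame rate, and bit rate
# as key=value lines that can be copied into the encoding command by hand.
ffprobe -v error -select_streams v:0 \
        -show_entries stream=codec_name,width,height,r_frame_rate,bit_rate \
        -of default=noprint_wrappers=1 reference.mp4
```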
[17:03] <gaussblurinc_> pzich: oh, I did it. could you, please, look at command to convert file? wait a minute
[17:08] <gaussblurinc_> pzich: here: http://pastebin.com/zfesmE4r
[17:08] <gaussblurinc_> pzich: am I right?
[17:11] <pzich> I think it should be 128k
[17:19] <gaussblurinc_> 128k for sound?
[17:22] <Nitori> -ab 128k
[17:22] <ralphcor> How do I limit the number of cores ffmpeg uses on Linux?  I'm happy for it to take ages if it means a more quiet machine that lets me get on in the mean time.  Google suggests -threads but that looks to be an old option.
[17:27] <DrSlony> ralphcor: nice, cpulimit, taskset
[17:29] <Nitori> ralphcor, I usually use cpulimit (not an ffmpeg option, but an external tool) for something like that.
[17:29] <DrSlony> i usually clean my heatsink and fans :)
[17:30] <ralphcor> nice will still have it use all the cores if nothing else wants them.  cpulimit means more overhead in monitoring and sending STOP and CONT.  taskset looks a good fit, thanks.  I'll give it a go.
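The options mentioned combine into something like this (Linux-only sketch; the core numbers and filenames are examples):

```shell
# Restrict ffmpeg to CPUs 0 and 1; -threads 2 keeps the worker count
# consistent with the two cores it is allowed to run on.
taskset -c 0,1 ffmpeg -i input.mp4 -threads 2 -c:v libx264 output.mp4
```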
[19:15] <t4nk505> hey, I want to use overlay (webcam) but have too many frame drops (real time buffer too full). With local recordings there are fewer problems
[19:15] <t4nk505> i use this command
[19:15] <t4nk505> ffmpeg -loglevel info -re -rtbufsize 100000k -f dshow -i video="UScreenCapture" -f dshow -i video="screen-capture-recorder" -filter_complex "[0:v]setpts=PTS-STARTPTS[background];[1:v]setpts=PTS-STARTPTS,scale=-1:120[foreground];[background][foreground]overlay=main_w-overlay_w+1:main_h-overlay_h" -f dshow -i audio="virtual-audio-capturer" -c:v libx264 -force_key_frames expr:gte(t,n_forced*2) -b:v 1000k -minrate 1000k -maxrate 1000k -bu
[19:16] <t4nk505> -preset:v ultrafast -pix_fmt yuv420p -tune film  -c:a libmp3lame -q:a 2 -ar 44100 -f flv "rtmp://localhost:1935/app/live_50837324_SypWjj7g2x5Lezd5PbCSPG7HtjTlz4"
[19:17] <t4nk505> without overlay all is ok
[19:17] <t4nk505> this is not CPU problem, less than 60% usage
[19:18] <t4nk505> thanks for help
[19:46] <techtopia> is there some way to update the x264 revision ffmpeg is using?
[19:54] <techtopia> it's running an old version of x264
[19:57] <c_14> static or dynamic build of ffmpeg? same or different major version of x264?
[19:58] <c_14> The easiest is to probably just update x264 and rebuild ffmpeg.
[19:58] <techtopia> i don't have a compiler or anything
[19:58] <techtopia> what would i need to rebuild it?
[19:59] <techtopia> i could set up a nix vm i guess
[20:01] <c_14> https://trac.ffmpeg.org/wiki/CompilationGuide
[20:03] <techtopia> thanks
[20:03] <t4nk505> techtopia windows or linux?
[20:11] <sergio-br2> Hi
[20:12] <sergio-br2> I have the ffmpeg source here and am trying to compile, but I'm getting errors
[20:12] <sergio-br2> like this
[20:12] <sergio-br2> libswscale/x86/yuv2rgb.c:93:28: error: ‘yuva420_rgb32_mmx’ undeclared (first use in this function)
[20:12] <sergio-br2> I can't find this yuva420_rgb32_mmx in any other place
[20:17] <xavery> why does calling avformat_find_stream_info() on the same AVFormatContext lead to a segmentation fault? should I recreate the AVFormatContext before calling it next time on the same input file?
[20:20] <xavery> basically, I'm using the code from avio_reading.c, but I added another call to avformat_find_stream_info() after the first one. is this the expected behaviour?
[20:25] <Mavrik> xavery, I would suggest checking out GDB to see what causes the segfault.
[20:25] <sergio-br2> I'm using 2.4.4 source
[20:26] <DrSlony> sergio-br2 why not 2.5 or git?
[20:27] <sergio-br2> because I can compile only 2.4.4 here
[20:27] <Mavrik> sergio-br2, that sounds like you either didn't clean it properly or you don't have libswscale enabled
[20:27] <DrSlony> sergio-br2 it sounds like you can't :P
[20:28] <sergio-br2> I have libswscale enabled
[20:28] <sergio-br2> *yup, because I can't compile 2.5 or git here, heh
[20:28] <DrSlony> why not?
[20:28] <sergio-br2> I can compile 2.4.4 without SSE2 enabled
[20:29] <sergio-br2> for 2.5 I don't remember
[20:29] <sergio-br2> so when I enable SSE2 in 2.4.4, I can't compile
[20:30] <DrSlony> if your cpu supports sse2, try 2.5 again and file a proper bug report with full logs
[20:30] <sergio-br2> the thing is, how can this yuva420_bgr32_mmx and other variables work if there is no declaration before?
[20:30] <DrSlony> well, git not 2.5
[20:30] <DrSlony> in git, the only reference to that is ffmpeg/libswscale/x86/yuv2rgb.c:93:
[20:31] <Mavrik> sergio-br2, that's the ASM code
[20:31] <DrSlony> #if HAVE_7REGS && CONFIG_SWSCALE_ALPHA
[20:31] <sergio-br2> yup, in the return
[20:31] <DrSlony>                     return yuva420_rgb32_mmx;
[20:31] <DrSlony> #endif
[20:31] <Mavrik> sergio-br2, do you have mmx disabled or something silly like that?
[20:31] <Mavrik> what platform are you compiling for?
[20:31] <sergio-br2> no, I have mmx enabled too
[20:32] <sergio-br2> I'm compiling in linux
[20:32] <DrSlony> what is 7REGS?
[20:32] <Mavrik> DrSlony, probably x64 :)
[20:33] <sergio-br2> so, it's ASM code and I don't need to declare it before?
[20:34] <sergio-br2> Do I need some flag for this?
[20:36] <sergio-br2> the truth is that this is the libretro port... so I don't know if some flag or whatever is missing in the makefile
[20:36] <Mavrik> hmm
[20:36] <Mavrik> sergio-br2, you'll probably have to look at the source itself and see why the function prototype isn't declared
[20:37] <sergio-br2> hum, this yuva420_bgr32_mmx is not a variable? it's a function?
[20:38] <Mavrik> sergio-br2, I don't know, did you check the source yet?
[20:38] <sergio-br2> yes, i checked only libswscale/x86/yuv2rgb.c
[20:40] <sergio-br2> I have one doubt
[20:40] <sergio-br2> amd64 builds need the x86 folders too, right?
[20:41] <sergio-br2> and need the mmx stuff enabled if you want sse2, right?
[20:49] <xavery> Mavrik, the segfault occurs in libavformat/utils.c:2559 - 'if (st->info->found_decoder >= 0 && avctx->pix_fmt == AV_PIX_FMT_NONE)', and the stacktrace points directly to the second call of avformat_find_stream_info(). I'm using a custom-compiled ffmpeg 2.5, but the very same thing happens with the libraries from my package manager.
[20:53] <xavery> of course, if I recreate the AVFormatContext and the AVIOContext before making the second call, everything works okay. I was just wondering if recreating them is the right thing.
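Recreating the contexts per probe is the usual pattern; a minimal, untested C sketch against the 2.x API (as used by avio_reading.c; the function name is hypothetical):

```c
#include <libavformat/avformat.h>

/* Probe a file with a fresh AVFormatContext each time, instead of
 * calling avformat_find_stream_info() twice on the same context.
 * (FFmpeg 2.x still requires av_register_all() once at startup.) */
static int probe_file(const char *path)
{
    AVFormatContext *fmt = NULL;
    int ret = avformat_open_input(&fmt, path, NULL, NULL);
    if (ret < 0)
        return ret;
    ret = avformat_find_stream_info(fmt, NULL);
    avformat_close_input(&fmt);  /* frees the context and sets fmt = NULL */
    return ret;
}
```

This needs the FFmpeg development headers and libraries to build, so it is offered only as a structural sketch.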
[20:57] <Zucca> I'm out of ideas and I don't know the ffmpeg CLI completely. So: I have several random video files. They have the same video codec and audio codec, but the resolution (and length) is random. I'm trying to merge those videos into one. This is what I've been running, but glitches occur when the resolution of an input video is different: ffmpeg  -f concat -i <(find /path/to/videos/ -name '*.MP4' -printf "file '%p'\n") -acodec
[20:57] <Zucca> copy -vf scale=hd480 -vcodec libx264 -r pal -crf 22 output.mkv
[20:57] <Zucca> I'm on linux bash.
[20:58] <Zucca> I guess I need to use -filter_complex? But how does that work with quite random video files?
[20:59] <jarainf> You'd prolly be best off with simply encoding the videos a first time and then concatenating them
[21:00] <Zucca> :/ I'd hoped I could get that done with a single command. But yeah, I could just run a pre-concatenation command to create temp files, then concatenate them.
[21:00] <Mavrik> Zucca, you cannot create a valid video stream that changes resolution.
[21:00] <Mavrik> Zucca, you will have to reencode to the same one.
[21:00] <Zucca> Mavrik: I have scale filter there.
[21:01] <Mavrik> ah, I see.
[21:01] <Mavrik> your line got cut off.
[21:01] <Mavrik> Zucca, you'll have to use concat filter probably, not demuxer
[21:01] <Zucca> It still "glitches". Resolution is changed.
[21:01] <Mavrik> https://trac.ffmpeg.org/wiki/Concatenate#differentcodec
[21:02] <Zucca> Mavrik: Thanks. :)
[21:02] <Mavrik> you'll probably want to normalize fps and stuff like that as well :)
[21:03] <Zucca> Yeah. After I get this working I'll try to create a bash script or alias to automate all this.
[21:04] <Zucca> It's happened more than once or twice that I've returned from vacation with several random videos that need to be merged into one. And I think ffmpeg is the answer.
[21:05] <Mavrik> yeah, the issue with merging random videos together is that codecs really don't like different frame configurations :/
[21:05] <techtopia> so I set up the environment to compile ffmpeg, c_14
[21:05] <Mavrik> you're getting glitching because -f concat will just send all the videos as a single video to the H.264 decoder, and it'll try to decode it as a single video with the same settings :)
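Following the linked wiki page, a two-input sketch (file names and frame rate are assumptions; extend the pattern per file): normalize each video with scale/setsar/fps, then join with the concat *filter*. Note the concat filter forces a re-encode, so the audio can no longer be stream-copied here.

```shell
# Scale both inputs to the same size and fps before concatenating, so the
# encoder sees one uniform stream; setsar avoids aspect-ratio mismatch errors.
ffmpeg -i a.MP4 -i b.MP4 -filter_complex \
  "[0:v]scale=hd480,setsar=1,fps=25[v0]; \
   [1:v]scale=hd480,setsar=1,fps=25[v1]; \
   [v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" -c:v libx264 -crf 22 -c:a aac output.mkv
```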
[21:06] <techtopia> when i run configure from the mingw bash shell
[21:06] <techtopia> it just sits there, doing nothing
[21:08] <techtopia> http://i.imgur.com/Kf5gdm6.jpg
[21:08] <techtopia> any ideas?
[21:08] <Mavrik> techtopia, msys is very slow.
[21:08] <Mavrik> give it time.
[21:09] <techtopia> oh, so it is doing something
[21:09] <techtopia> so how does it know to use the latest x264, as that is set up a directory above and not in the ffmpeg dir
[21:10] <Mavrik> techtopia, it doesn't, did you compile and install x264? :P
[21:10] <techtopia> no just downloaded the git for it and the git for ffmpeg
[21:10] <techtopia> and ran compile from the ffmpeg dir
[21:11] <techtopia> so lets say i compile x264 and have an x264.exe binary
[21:12] <techtopia> i put that in the ffmpeg source folder, and then run configure?
[21:13] <Mavrik> no, you put libx264.dll or libx264.a into a library search path in which configure script will look for it
[21:13] <Mavrik> "make install" should do that for you and default to /usr/local/
[21:14] <Mavrik> you can't link to an exe file ;)
[21:14] <techtopia> this is all above my head
[21:16] <Mavrik> time to learn how compiling C programs works then.
[21:16] <techtopia> right, so x264 is compiling now; when it's done I will end up with a libx264.dll?
[21:16] <techtopia> how do I put that into a "library search path"
[21:17] <jleclanche> I have two mp4 files of the same video in two different languages. Is there a way I can rip out the audio from one, and add it as a separate audio channel to the other?
[21:17] <techtopia> yes,
[21:17] <c_14> jleclanche: channel or stream
[21:17] <Mavrik> techtopia, how bout you open the compile guide for ffmpeg on it's wiki? :)
[21:17] <c_14> There's a big difference.
[21:17] <techtopia> demux one of them, and then remux the audio into the other
[21:17] <techtopia> im on it mavrik
[21:17] <jleclanche> c_14: whats the difference?
[21:17] <techtopia> i went through it all before coming back here
[21:17] <Mavrik> techtopia, that's about all you have to do :)
[21:18] <techtopia> but nothing is explicit on what to do
[21:18] <techtopia> it's all very vague and assumes i know things i do not
[21:18] <Mavrik> well, usually people who don't know how to compile stuff use the pre-built ffmpeg :)
[21:19] <techtopia> but the pre built ffmpeg is using x264 which is over a month out of date
[21:20] <Mavrik> techtopia, last x264 change was 2 months ago
[21:20] <techtopia> no it was november 12th
[21:20] <Mavrik> [Sun, 12 Oct 2014 18:01:53 +0100 (21:01 +0400)]
[21:21] <c_14> jleclanche: a channel is something like mono/stereo/5.1 as in mono is 1 channel, stereo is 2, 5.1 is 6. A stream is a "stream" of audio consisting of multiple (at least one) channel. You probably want a stream.
[21:21] <techtopia> Current x264 version r2491 released on 13-Nov-2014. Download: http://download.videolan.org/pub/x264/binaries/win32/x264-r2491-24e4fed.exe
[21:21] <c_14> ffmpeg -i one-audio.mp4 -i other-audio.mp4 -map 0 -map 1:a -c copy out.mp4
[21:22] <techtopia> current ffmpeg is using x264 core 142 r2479
[21:22] <techtopia> it's outdated
[21:25] <Mavrik> techtopia, whatever you're lookin' at... it's wrong :)
[21:26] <techtopia> thats from the media info of an encode i just did with latest ffmpeg from here http://ffmpeg.zeranoe.com/builds/win32/static/ffmpeg-20141212-git-10ef8f0-win32-static.7z
[21:26] <techtopia> released 2014-12-12
[21:28] <techtopia> I need it to be using the latest x264 or I will have to go back to staxrip
[21:28] <techtopia> but would rather use ffmpeg
[21:29] <Mavrik> then follow the guide
[21:30] <Mavrik> even though you're making a colossal waste of time, since the last 30 or so changes of x264 have practically no effect on quality and only a marginal effect on speed
[21:31] <techtopia> it's a rule set i have to follow
[21:31] <techtopia> cannot use an x264 more than 30 days after the release of a new one
[21:32] <Mavrik> even if the stable release is more than 3 months old? :P
[21:32] <techtopia> yes
[21:32] <Mavrik> you need to bitchslap the person giving you that rule.
[21:32] <techtopia> i would but i can't heh
[21:32] <Mavrik> (or charge a lot of money :P )
[21:32] <Mavrik> anyway, ./configure && make install for x264
[21:33] <Mavrik> and then configure and make ffmpeg
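Condensed into a sketch (the prefix and configure flags are typical examples, not a guaranteed recipe for MSYS/MinGW):

```shell
# Build and install x264 where ffmpeg's configure can find it,
# then build ffmpeg with libx264 support enabled.
cd x264 && ./configure --prefix=/usr/local --enable-shared && make && make install
cd ../ffmpeg && ./configure --enable-gpl --enable-libx264 && make && make install
```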
[22:13] <vanya> Hello!
[22:14] <vanya> I'm trying to install HOG3D feature extractor. I'm facing some trouble and the error says some ffmpeg files are deprecated.
[22:14] <vanya> Does anyone know what I can do about it?
[22:14] <c_14> probably related
[22:15] <c_14> that or pastebin all output
[22:45] <iive> vanya: the link from stackoverflow above is the explanation for the deprecation message.
[22:45] <iive> read it.
[23:27] <bencc1> do main and high profile cost more cpu on the broadcaster, receiver or both?
[23:27] <bencc1> I'm streaming live webcam and wonder which h264 profile I should use
[23:55] <Mavrik> bencc1, main and high are each more CPU expensive for encoder and decoder
[23:56] <iive> and use more memory (references)
[23:56] <Mavrik> mhm
[23:56] <JEEB> uhh
[23:56] <JEEB> references are a level thing
[23:56] <JEEB> profiles are just features
[23:57] <iive> however almost all HD embedded chips I've seen support high4
[23:57] <JEEB> also you can in theory make high profile video without CABAC (albeit it doesn't make much sense), thus I don't really like putting the _profile_ as the reason for being more computationally hard
[23:57] <Mavrik> JEEB, IIRC baseline and main do have different maximum referenced picture count
[23:58] <JEEB> possibly yes, but those are limited by the levels rather than the profile
[23:58] <Mavrik> mhm
[23:58] <Mavrik> but that doesn't really change the point: baseline was made in the first place to save on hardware costs
[23:58] <JEEB> of course, and you probably mean constrained baseline because just baseline has some really weird features
[23:58] <JEEB> (it isn't a subset of main)
[23:59] <JEEB> profiles limit features, levels tell the decoder how much memory it needs to decode the stream (aka refs etc)
[23:59] <Mavrik> yes, I do mean constrained baseline... the list of hardware and players supporting baseline was surprisingly short last time I checked
[00:00] --- Mon Dec 15 2014


More information about the Ffmpeg-devel-irc mailing list