[Ffmpeg-devel-irc] ffmpeg.log.20140817

burek burek021 at gmail.com
Mon Aug 18 02:05:01 CEST 2014


[01:08] <FirstContact> hi all this is a question about the MOOV atom, and its relation to the MDAT atom in mp4 files.  My question is: does anyone here know where the actual data is stored that contains the location of the MDAT atom?  or how it's stored or any info at all about it?
[01:09] <FirstContact> I have a bad mp4 file and I think I see the location of the MDAT atom but don't know how to tell the file that it is located there
[01:10] <FirstContact> I've been looking at the specifications but not seeing where the offsets are actually stored
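For what it's worth, the short answer to FirstContact's question: ISO BMFF stores no single pointer to mdat. The sample tables inside moov (the stco or co64 boxes in each track's stbl) hold absolute file offsets of each chunk, and those offsets normally land inside mdat; repairing a file means fixing those entries. A minimal sketch of walking the top-level boxes to find moov and mdat (box layout per the ISO base media file format; the helper below is illustrative, not part of ffmpeg):

```python
import struct

def walk_boxes(data, offset=0, end=None):
    """Yield (box_type, box_offset, box_size) for each top-level MP4 box."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack(">I", data[offset:offset + 4])
        box_type = data[offset + 4:offset + 8].decode("latin-1")
        if size == 1:    # 64-bit "largesize" follows the type field
            size, = struct.unpack(">Q", data[offset + 8:offset + 16])
        elif size == 0:  # box extends to the end of the file
            size = end - offset
        yield box_type, offset, size
        offset += size
```

Recursing the same way into moov, then trak/mdia/minf/stbl, reaches the stco/co64 chunk-offset tables.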
[03:41] <jrgill> Scaling from 1280 x 720 to 854 x 480, would you setsar 1:1 or setdar 16:9?
[03:42] <jrgill> (originally 1:1 and 16:9 respectively)
[04:01] <c_14> Nah, why?
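For the record: 854x480 with square pixels is not exactly 16:9 (it's 427:240, about 0.08% wide), so the usual choice is setsar 1:1 and accepting the tiny error; setdar 16:9 would instead slightly stretch the pixels to force the display ratio. A quick check of the arithmetic (plain Python, just illustrating DAR = SAR x width/height):

```python
from fractions import Fraction

def display_aspect(width, height, sar=Fraction(1, 1)):
    """Display aspect ratio: DAR = SAR * (width / height)."""
    return sar * Fraction(width, height)

src = display_aspect(1280, 720)   # exactly 16:9 with square pixels
dst = display_aspect(854, 480)    # 427:240, ~0.08% wider than 16:9
print(src, dst, float(dst / src) - 1.0)
```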
[06:32] <mgalgs> is there a way to specify a default transition between all input sources? I want to create a video from a sequence of images and cross fade between *every* image...
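There is no single "default transition" switch in the image demuxer; one workaround is to generate the blended in-between frames yourself (e.g. with PIL) before handing the sequence to ffmpeg. The weight for a linear crossfade is just a ramp; a sketch (the function name is mine, not an ffmpeg API):

```python
def crossfade_alpha(frame, fade_frames):
    """Blend weight for the incoming image during an overlap of
    fade_frames frames: 0.0 on the first frame, 1.0 on the last.
    Blended pixel = (1 - alpha) * old_image + alpha * new_image."""
    if fade_frames <= 1:
        return 1.0
    return min(max(frame / float(fade_frames - 1), 0.0), 1.0)
```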
[07:02] <Technicus> Hello, does anyone here have experience setting up a video streaming service?
[07:02] <Technicus> I'm looking for suggestions for setting up a service that receives RTMP and broadcasts video.
[08:22] <t4nk376> I am trying to take videos uploaded to my server and encode the video as MP4 and the audio as MP3, but I'm having trouble figuring out how to do it with ffmpeg
[09:45] <camper> greetings, I'm using stopmotion, a piece of software that depends on ffmpeg to create video from a sequence of stills, and it uses a command line relying on the image file demuxer (avconv -r 12 -b 1800 -i "$IMAGEPATH/%06d.jpg" "$VIDEOFILE"); i want an expression that captures more files, at least jpg files with capital letters in the extension
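One way around the demuxer's case-sensitive %06d.jpg pattern is to stage the stills into a clean, uniformly named sequence first (newer ffmpeg also has -pattern_type glob, but avconv may not). A sketch; the function name and layout are mine:

```python
import os
import shutil

def stage_frames(src_dir, dst_dir, exts=(".jpg", ".jpeg")):
    """Copy stills into dst_dir as a %06d.jpg sequence, matching
    extensions case-insensitively (so .JPG and .Jpeg are included)."""
    frames = sorted(
        name for name in os.listdir(src_dir)
        if os.path.splitext(name)[1].lower() in exts
    )
    for index, name in enumerate(frames):
        shutil.copy(os.path.join(src_dir, name),
                    os.path.join(dst_dir, "%06d.jpg" % index))
    return len(frames)
```

The staged directory can then be fed to the unchanged avconv/ffmpeg command.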
[10:05] <camper> sorry, lost power
[10:09] <ochorocho> hey ... i want to stream videos using ffmpeg. i want the videos to be converted to mp4 or webm on the fly. Is there a way to do this using ffmpeg and php?
[10:47] <Mavrik> ochorocho, you'll need a proper streaming server for what you want
[10:47] <Mavrik> like Wowza
[10:51] <ochorocho> Mavrik: ok ... hm. so id better go with mplayer or vlc?
[10:57] <Mavrik> no.
[10:57] <Mavrik> because none of those do what you want.
[10:57] <Mavrik> (If I understood correctly.)
[10:57] <Mavrik> Preencode videos to mp4 and webm and then serve them later.
[11:00] <ochorocho> Mavrik: basically i dont want to cache the videos, i want them served in the right format for html5.
[11:01] <Mavrik> And Im telling you you cant encode MP4 while serving it without a proper streaming server.
[11:02] <Mavrik> Wowza licenses are relatively cheap and it has a live transcoder by default
[11:02] <ochorocho> ok ...
[11:02] <ochorocho> Mavrik: thank you.
[11:14] <DopeLabs> you need a segmenter
[11:16] <ochorocho> DopeLabs: talkin to me? ... can you give me an example? i read about segmenters but did not get it...
[11:16] <DopeLabs> reading above
[11:16] <DopeLabs> yea
[11:17] <ochorocho> :-)
[11:17] <DopeLabs> so whats stopping you from just.. converting them to mp4
[11:18] <ochorocho> i dont want to cache all the files in my library ...
[11:18] <DopeLabs> what are you doing with the uploads now
[11:19] <Mavrik> DopeLabs, usually the way more sensible decision for that use case is just to stream via RTMP / HLS
[11:19] <Mavrik> since those containers aren't streaming hostile and you get way lower latency
[11:19] <Mavrik> segmented MP4 is kinda problematic support-wise... plus you get a full segment latency
[11:20] <DopeLabs> (yes i use a streaming server for all this hooha)
[11:20] <Mavrik> mhm
[11:22] <ochorocho> DopeLabs: currently i do nothing with the uploads. just want to serve the vids in a convenient way without filling up my harddisk too much.
[11:22] <gilbahat> Hi, I have built ffmpeg with opencl and I am trying to run accelerated decode/encode task, but GPU consumption is always on 0%. I have tried to set platform_idx and device_idx explicitly and still nothing.
[11:23] <DopeLabs> so when the upload completes, have it fire off ffmpeg to convert them to mp4 and delete the original file (if you're worried about space, etc)
[11:24] <ochorocho> DopeLabs: the videos on the server i use are all my originals i want to keep.... its my media library
[11:25] <gilbahat> for the same reason, it doesn't appear to use vdpau either. is there any way to ascertain that opencl or vdpau is being used by ffmpeg?
[11:25] <DopeLabs> what format is most of it currently?
[11:25] <ochorocho> DopeLabs: mov
[11:25] <Mavrik> ochorocho, why don't you use a streamable container then?
[11:26] <DopeLabs> mov's are always fine for me (and what he said)
[11:26] <DopeLabs> (heh... upload all that shit to youtube and let them deal with it... its free!)
[11:29] <ochorocho> DopeLabs: ;-) ...
[11:31] <DopeLabs> i have about 4k audio recordings of dj sets.. between 1 and ~4 hours each.. i already did a mass upload to soundcloud. and now im working my way through them all again.. this time using ffmpeg and showwaves/showspectrum to create videos from the audio recordings, then upload to yt...
[11:33] <DopeLabs> gonna take a little longer than soundcloud did, which i basically just fed a script the whole directory and pushed ~900GB to soundcloud lol
[11:37] <Chaz6> Want
[11:37] <DopeLabs> ?
[11:38] <Chaz6> What's your soundcloud name?
[11:38] <DopeLabs>  /dubstepfm
[11:38] <DopeLabs> same with youtube
[11:39] <DopeLabs> now i just need to figure out a way to get ffmpeg to spit out real-time volume dB levels
[11:40] <DopeLabs> from a *cast audio stream
[11:41] <DopeLabs> so i can then have fun with pipes and xaos and make audio reactive real time fractal generation
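For the curious: ffmpeg's ebur128 and volumedetect filters already print running levels to stderr, which can be piped and parsed. The dB conversion itself is one line of math; a sketch for float PCM samples (the helper name is mine):

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples in [-1.0, 1.0], in dB relative to
    full scale (0 dBFS for a full-scale square wave)."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms) if rms > 0 else float("-inf")
```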
[11:52] <gilbahat> turns out there's an undocumented parameter called x264opts
[11:52] <gilbahat> GREAT
[11:53] <DopeLabs> if its undocumented.. how did you find out about it
[11:53] <gilbahat> random web search, some forum post - the usual way to find undocumented stuff
[11:54] <DopeLabs> so its totally mentioned in the man pages
[11:54] <gilbahat> which man page? just grepped ffmpeg manpage for it and found nada
[11:54] <DopeLabs> man ffmpeg-all
[11:55] <DopeLabs> just search for opts and its like the 2nd or 3rd hit
[11:55] <gilbahat> hrm. didn't know this thing existed. tried man ffmpeg, etc
[11:55] <DopeLabs> yea -all has... well.. all of it
[11:57] <gilbahat> could've been worse I guess, at least it's there somewhere. now I have to guess why it's not making things any faster :P
[11:57] <DopeLabs> what do you want to go faster?
[11:57] <DopeLabs> and what's the command you're using
[11:58] <gilbahat> transcoding a file
[11:58] <DopeLabs> well generally -preset ultrafast
[11:58] <gilbahat> http://pastebin.com/f4KD3nTV
[11:59] <gilbahat> I mean, I am trying this specific variant with x264opts opencl and without it, and noting by debug log that opencl does get used, and still not running any faster
[11:59] <gilbahat> ymmv on opencl x264, true, but I was hoping it will be faster.
[12:00] <DopeLabs> you're trying to use opencl for what exactly
[12:00] <JEEB> x264's opencl is more of a POC
[12:00] <gilbahat> encoding
[12:00] <JEEB> it runs a worse version of lookahead ME (IIRC)
[12:00] <JEEB> and can be faster under certain presets
[12:01] <DopeLabs> as far as i know.. the only hw x264 support ffmpeg has is for decoding
[12:01] <JEEB> x264 has opencl-based stuff in there
[12:01] <JEEB> not something most people would want to use, of course
[12:01] <JEEB> but to use lookahead ME you b
[12:02] <JEEB> basically need slower presets to begin with
[12:02] <DopeLabs> ^
[12:02] <JEEB> :V
[12:02] <JEEB> so yes, if you have quick enough GPU and fast enough memory lanes between RAM and VRAM it can be quicker
[12:03] <JEEB> but you're already able to make stuff faster without it, not to mention that the algorithm running on the GPU is worse off :P
[12:05] <gilbahat> I see. so right now, if i'm working on an ffmpeg-based transcoding service, I'm better off running it on CPU nodes rather than GPU nodes (edge cases notwithstanding)
[12:05] <JEEB> definitely
[12:05] <Chaz6> One thing missing from cloud services are fpga nodes
[12:05] <DopeLabs> for x264 encoding at least
[12:05] <Chaz6> I see $$$
[12:05] <JEEB> you only want CPU and RAM basically
[12:05] <gilbahat> Chaz6 - I fully agree
[12:07] <gilbahat> JEEB: currently it doesn't look like much RAM is needed, unless you aimed to avoid reading/writing input/output files to disk
[12:07] <DopeLabs> yea its basically all cpu bound
[12:07] <JEEB> slower presets in x264 buffer more pictures, and if you have o9k cores at some point it becomes useful to run multiple instances
[12:08] <JEEB> of course, base it all against your own usage scenario
[12:09] <gilbahat> I guess it's worth tracking x264 though, this could change if their OpenCL support matures, right?
[12:09] <JEEB> not much possibility for that
[12:09] <JEEB> for effective encoding GPU stuff just isn't useful :P
[12:10] <JEEB> it's mostly a Proof Of Concept created by MCW
[12:10] <JEEB> and then after months of fixing it got merged into mainline x264
[12:10] <DopeLabs> um well
[12:10] <DopeLabs> nvm..
[12:11] <gilbahat> but they are supposed to have dedicated encoder units. if that isn't useful, that's dead silicon, not something you'd expect nvidia/amd/intel to have...
[12:11] <JEEB> yes, specific ASICs are more useful
[12:11] <JEEB> but those are effectively black boxes :P
[12:12] <JEEB> and currently the best ASICs are @ intel basically, so no dGPU needed there, either
[12:12] <gilbahat> JEEB: no, I mean that modern nvidia GPUs have a dedicated encoder unit in them
[12:12] <JEEB> yes, I know
[12:12] <JEEB> that's ASICs
[12:12] <gilbahat> ah, I thought you mean discrete/standalone ASICs
[12:12] <JEEB> basically they noticed that no-one but idiots were drinking their kool-aid regarding doing encoding with GPGPU
[12:12] <DopeLabs> heh.. asic.. i remember that term from btc mining lol
[12:13] <JEEB> so they actually put something more useful in there
[12:13] <JEEB> that said, even with the haswell intel encoders, depending on your needs they don't give you the bang for the buck you'd get from x264
[12:13] <gilbahat> but that won't be covered by OpenCL or any other 'general purpose' interface because it's proprietary stuff I guess
[12:14] <JEEB> you never know what interface they stick their shit into :P I mean, it was bad enough with "CUDA decoding" when nvidia put their decoding ASICs under that API
[12:15] <DopeLabs> i would rather use gpu based for x264 encoding than cpu..its much faster / less expensive on the resources
[12:15] <JEEB> DopeLabs, GPU hardware is good for some calculations and algorithms, efficient modern video encoding is _not_ one of those
[12:15] <JEEB> which is exactly why makers switched to ASICs
[12:16] <JEEB> instead of trying to do it with GPGPU
[12:16] <DopeLabs> well it's def better with gpu than cpu (on my machines at least)
[12:16] <JEEB> you have no fucking idea what you're talking about
[12:17] <DopeLabs> im pretty sure i know how to measure system resource usage between an encoder thats using cpu and one thats using gpu thanks
[12:17] <JEEB> ...
[12:17] <JEEB> jesus christ, of course offloading does _that_
[12:17] <JEEB> now pretty please read what exactly your solution is doing
[12:18] <JEEB> for fuck's sake
[12:18] <JEEB> if it's actually using GPGPU for encoding it's most probably useless as fuck as far as compression goes - if it's ASIC-based it's less retarded, but still worse bang for the buck than x264 unless your CPU is fucking underpowered
[12:19] <JEEB> also as I said a few times, actually currently the best goddamn ASICs are at Intel, so you don't even need to get a fucking dGPU for that :V
[12:20] <Chaz6> on slight detour, the logitech c920 1nc c930e have onboard h264 encoders, but only the c920 encoder is accessible from directshow/v4l2 x.x
[12:20] <Chaz6> *and
[12:20] <Chaz6> fucking logitech
[12:20] <JEEB> regarding whatever else that application you're using is doing on the GPU, some of that might actually be useful - as in stuff regarding actual image manipulation (scaling/filtering/whatever), and possibly decoding the video with the integrated decoding ASIC
[12:21] <JEEB> but if we limit ourselves to encoding, GPUs are a fucking waste in general. If you've got one of those models with ASICs and can actually use it, that /can/ be of use, but is meant for a subset of things that x264 is meant for (low-latency, no need for hard compression etc)
[12:22] <DopeLabs> its just a screen capture/streamer app.. much faster and less expensive than the cpu encoders (at least on my particular system)
[12:22] <JEEB> please at least look into things and read what I've said :P Maybe after that you might understand a bit better
[12:22] <JEEB> and/or put your thoughts into words better
[12:22] <DopeLabs> i understand fine
[12:23] <gilbahat> JEEB: but you see, intel tied the thing to desktop CPUs mainly and it's not available on cloud services
[12:23] <JEEB> gilbahat, aye
[12:23] <JEEB> DopeLabs, ok, so do you know if your solution is using the actual GPU hardware or a separate ASIC?
[12:24] <DopeLabs> well as its a macbook pro from a few years ago...
[12:25] Action: JEEB sighs
[12:25] <DopeLabs> its actually older than i thought it was lol
[12:26] <JEEB> so yes, my recommendation stands: please learn more about these things and read what I noted to you, so you can move away from misleading/wrong on the internet arguments
[12:26] <JEEB> it's just tiring to lecture someone for the umpteenth time on what is actually useful in hardware encoding and what is not :V
[12:26] <DopeLabs> im not arguing anything.. im not here to get any advice about it... i already know which works better (for me)
[12:27] <gilbahat> thank you very much, you've helped set things straight with this
[12:27] <JEEB> you might actually want to look into some things regarding that. as in, what's it using and how
[12:27] <JEEB> and how much worse the compression efficiency is, etc
[12:28] <JEEB> because it can be very much misleading to note things like "<DopeLabs> i would rather use gpu based for x264 encoding than cpu..its much faster / less expensive on the resources"
[12:28] <DopeLabs> (for me)
[12:28] <DopeLabs> on that particular laptop.. over there
[12:29] <JEEB> well, even for you, you don't have the information needed to actually note why it is and if it really is, other than you lose cpu usage
[12:29] <JEEB> if you learn about these things you can actually put your ideas into proper words :P
[12:29] <JEEB> which are much less misleading
[12:29] <DopeLabs> do i really need any more information about it?
[12:29] <JEEB> yes
[12:30] <JEEB> because then you know what kind of solution you want and what solutions you don't want
[12:30] <DopeLabs> its not going to change the way i encode on that laptop at all
[12:31] <JEEB> even for a herp derp person like you this should be a fucking obvious case: there's vendor a) giving you GPGPU based encoding , and there's vendor b) giving you ASIC based encoding
[12:31] <JEEB> right now you are oblivious and neither of those market their solution as such
[12:31] <JEEB> because the "GPGPU" marketing word is much more effective
[12:31] <DopeLabs> i already know which is better...
[12:31] <JEEB> thus, if you don't learn, how do you know which of them is actually which it is
[12:31] <JEEB> but you don't!
[12:31] <DopeLabs> but i do
[12:31] <JEEB> you don't want to learn more
[12:32] <JEEB> as I said, neither of them will market it
[12:32] <JEEB> if they use the actual general computing shit on the GPU versus some encoding-specific ASIC
[12:32] <gilbahat> is there any reason either x264 or ffmpeg never implemented nvenc or QSV support?
[12:32] <DopeLabs> you dont have to tell me that asic's are more efficient
[12:32] <JEEB> nvenc had some crap regarding the licensing
[12:33] <JEEB> DopeLabs, yes but how the fuck would you learn which would be which
[12:33] <JEEB> buy both!?
[12:33] <JEEB> if you don't want to do fucking research
[12:33] <DopeLabs> im not buying either one
[12:33] <JEEB> :V
[12:33] <JEEB> seriously, I'm just trying to tell you that actually learning about this shit can be fucking useful
[12:33] Action: Mavrik hugs JEEB.
[12:34] <DopeLabs> and again.. i have
[12:34] <JEEB> then why the fuck can I see you being ignorant here and spewing shit that can be easily quoted so that it makes no fucking sense
[12:34] <JEEB> I can quote that one line of yours again if you want
[12:34] <DopeLabs> i went through all this shit when transitioning from gpu to asic based hashers for btc's
[12:34] <DopeLabs> sure
[12:34] <DopeLabs> quote it up
[12:35] <DopeLabs> i also said.. for this particular laptop, and for (me)
[12:35] <JEEB> <DopeLabs> i would rather use gpu based for x264 encoding than cpu..its much faster / less expensive on the resources
[12:35] <JEEB> yes, you said that later
[12:35] <JEEB> that specific line is as it is
[12:35] <DopeLabs> everyones setup is different
[12:35] <JEEB> :V
[12:35] <JEEB> anyways, for fuck's sake stop being ignorant and actually learn about what your solution uses and why it is useful to you
[12:35] <JEEB> that way your arguments will become better
[12:36] <JEEB> and if you choose to not be ignorant you can actually do better selections in the future
[12:36] <DopeLabs> so when i have ableton live, vdmx, and a bunch of other shit going on, yes i'll take the gpu/graphics card encoding solution rather than the cpu based one so i can have the resources i need thanks
[12:36] <JEEB> as I said, neither vendor A or vendor B will tell you which hardware encoding APIs they will use in their marketing shit
[12:37] <JEEB> and if you have no interest towards "what's inside", you will happily take the one that's worse :P
[12:37] <JEEB> but I'm pretty sure I've said enough here and I hope you've learned something
[12:37] <DopeLabs> spose it depends on what your application is and how you're using it to be able to define which is 'worse'
[12:38] <JEEB> no, with the scenario I just put to you there is clear worse and clear better
[12:38] <DopeLabs> likewise
[12:38] <JEEB> vendor a's shit will suck because doing the encoding on the actual general computational hardware makes no fucking sense
[12:39] <JEEB> vendor b's shit will still suck for heavy compression needing things, but from the two it's the better one
[12:40] <JEEB> I specifically gave you two alternatives where they both do what you seemingly wanted -- offload the encoding off from the CPU part of the CPU or whatever else you're going to use
[12:40] <JEEB> gilbahat, if x264 implemented either of those it would no longer be x264
[12:40] <JEEB> because those are black box solutions
[12:40] <JEEB> and I thought there was some sort of QSV support already in? I remember talking with the intel guy
[12:41] <JEEB> in Libav/FFmpeg that is
[12:41] <JEEB> nvenc had some licensing-related issues
[12:48] <DopeLabs> mmm yt supporting 60fps now/soon
[12:50] <gilbahat> JEEB: it's not uncommon to yield to a hardware implementation, but that's a philosophical argument that's not the least bit interesting. ffmpeg bumped into licensing issues you say?
[12:51] <JEEB> I think the implementation of NVenc needed some stuff that was in the SDK that wasn't usable
[12:51] <JEEB> I don't remember the exact details
[12:51] <JEEB> -devel IRC logs should have some stuff
[13:11] <anshul_mahe> how do I call avcodec_decode_video2 so that i only get information in frame side data, with no decoding
[13:15] <anshul_mahe> I do not want to decompress any frame, just extract the raw closed captions in the user data
[13:16] <JEEB> then you probably want to only use lavf, not lavc. although you will have to parse the actual video stream by yourself in that case, lavf will only give you packets methinks
[14:38] <anshul_mahe> thanks jEEB
[14:51] <Fjorgynn> fish
[15:29] <cdecl_xyz> hello, is this the right place to ask questions about swscale ?
[15:29] <Fjorgynn> what is swscale
[15:29] <cdecl_xyz> libswscale
[15:29] <Fjorgynn> aha
[15:31] <cdecl_xyz> i have questions regarding libswscale and how to do proper colorspace conversion (full range -> smpte range). is this the right place?
[15:33] <Fjorgynn> just ask
[15:36] <cdecl_xyz> in sws_getContext, sws_setColorSpaceDetails gets called with ff_yuv2rgb_coeffs[SWS_CS_DEFAULT] for input and output color space range. my question is, if I want the color space output to be SWS_CS_ITU709, do i also have to change the input color space? I assume yes. But then I would have to know which space the source is in right? So the user must set it or is this stored in certain containers?
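On the swscale question: yes, both sides must be set. sws_setColorSpaceDetails takes separate inverse (input) and forward (output) coefficient tables plus srcRange/dstRange flags, and the source colorspace is per-stream metadata (it may be carried in the container or bitstream, and is often absent, in which case callers typically guess from resolution). The difference between SWS_CS_DEFAULT (BT.601) and SWS_CS_ITU709 is just the luma weights; a sketch of the underlying arithmetic using the standard Kr/Kb constants (not the ffmpeg API):

```python
# Luma weights that distinguish the BT.601 (SWS_CS_DEFAULT) and
# BT.709 (SWS_CS_ITU709) matrices.  Kg falls out of Kr + Kg + Kb = 1.
COEFFS = {
    "bt601": (0.299, 0.114),    # (Kr, Kb)
    "bt709": (0.2126, 0.0722),
}

def luma(r, g, b, standard="bt709"):
    """Y' for gamma-encoded R'G'B' components in [0, 1]."""
    kr, kb = COEFFS[standard]
    kg = 1.0 - kr - kb
    return kr * r + kg * g + kb * b

def full_to_limited(y):
    """Map full-range luma [0, 1] onto limited/SMPTE range 16-235 (8-bit)."""
    return 16.0 + 219.0 * y
```

The full-to-limited mapping is what the srcRange/dstRange flags toggle; the COEFFS table is what switching SWS_CS_DEFAULT to SWS_CS_ITU709 changes.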
[18:06] <pfloyd> my tv doesn't support 1080p at 50Hz and I'm trying to play back a 25fps PAL HD bluray rip on my popcorn hour. The result is very jerky playback, especially when panning. I'm assuming it's frame skipping/dup'ing and not TV judder since normal NTSC content is ok. can I convert with ffmpeg from 25fps PAL to 24p to smooth the playback?
[18:12] <c_14> You can try one of the commandlines from 'https://ffmpeg.org/pipermail/ffmpeg-user/2013-July/016098.html'
[18:12] <c_14> Change codec options and input/output filenames/formats to whatever you need.
[18:13] <Mavrik> pfloyd, what you just wrote makes absolutely no sense
[18:13] <pfloyd> but the video does need to be re-encoded (i.e. some lossiness) then, right?
[18:14] <Mavrik> why would 24fps video which isn't synced to your display framerate work better than a 25fps progressive video?
[18:14] <Mavrik> you have another issue probably.
[18:14] <pfloyd> Mavrik: probably. all I know is 25p content plays like complete shit on my PCH+TV and my TV does not support 50 Hz.  regular NTSC HD content plays as I'd expect, with the normal judder due to lcd/led tv technology
[18:15] <Mavrik> 25Hz support? do you have an american TV?
[18:15] <pfloyd> yes it is an american tv and I'm trying to play pal content
[18:16] <Mavrik> hmm
[18:16] <Mavrik> has to be one shitty tv
[18:16] <Mavrik> but yeah, you have to reencode for that IIRC
[18:17] <pfloyd> *shrug* probably is "shitty" but it's good enough for me. thank you c_14
[18:18] <Mavrik> note that while cheap and easy, c_14's command line will make movies slightly weirder due to them not having proper telecine
[18:19] <Mavrik> but then again, most PAL content was authored that way, so *shrug* :)
[18:21] <pfloyd> yeah that's true. but at least it won't look like utter shite haha
[18:22] <pfloyd> my friend's plasma has no such issues, but I'm guessing it's because it can output at 1080p @ 50Hz ?
[18:22] <Mavrik> probably, depends on what TV does
[18:22] <pfloyd> i'll have to have him check next time I'm over there
[18:22] <Mavrik> mine can do 24p/25p as well and then does some weird "movie" postprocessing stuff on image when that kind of input is detected
[18:23] <Mavrik> your image might be mauled by the player as well, since it sounds like it's forcing 50Hz output for 25p video
[18:23] <Mavrik> (or is your video 1080i at 50?)
[18:37] <pfloyd> no it has the option to force whatever output I want. it looks like it's coming out at 60 Hz
[18:37] <pfloyd> if I put it on auto, it tries to do 50 Hz and my tv throws a "not supported" message when it tries to play
[18:39] <pfloyd> and while he has an older model, his is a popcorn hour also so I think it's the tv :/ but I just bought it a few years back no way is $wife gonna let me get something else hahahh
[18:41] <Mavrik> mhm
[18:42] <Mavrik> I'm just asking because my player has the option to either sync output to video or just output everything at 1080 at 60 or whatever is set
[18:45] <pfloyd> yeah same here
[18:46] <pfloyd> but yeah I figured the crux of the problem was "cheap tv" hahah
[18:49] <pfloyd> Mavrik: one other question. it's been a while since I've used ffmpeg. my command line is:   ffmpeg -y -threads 0 -i input.mkv -filter:v setpts=25025/24000*PTS -vcodec libx264 -preset slow -crf 21 -acodec ac3 -ac 6 -ab 1500k -ar 48k -filter:a atempo=0.959040959040959 -f matroska -r 24000/1001 output.mkv  but the output size is quite a bit smaller (646MB vs. 2.2 GB). I'd like to keep the video quality high and thought -crf 21 and -preset slow  would be good, but ...
[18:49] <pfloyd> ... wow 1/3 the size?
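For reference, the two magic numbers in that command line are reciprocals of the 25 -> 24000/1001 speed change: video timestamps are stretched by 25025/24000 (= 1001/960), so the audio must be slowed by 24000/25025 ≈ 0.959040959... A quick check of the arithmetic:

```python
from fractions import Fraction

pal = Fraction(25)                  # source: 25 fps PAL
ntsc_film = Fraction(24000, 1001)   # target: ~23.976 fps

setpts_factor = pal / ntsc_film     # stretch video timestamps
atempo_factor = ntsc_film / pal     # slow audio by the reciprocal

print(setpts_factor)                # 1001/960, i.e. 25025/24000
print(float(atempo_factor))         # ~0.959040959040959
```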
[18:58] <pfloyd> I guess I can find out the input video bitrate and use -b <that> and -qscale 0  ?
[18:58] <spaam> using -crf 21 and slow and complaining that the file size is too small? :S
[19:01] <pfloyd> well those settings used to work without a huge size drop but I haven't used ffmpeg in about 4-5 years heh
[19:02] <pfloyd> but then again, the input files I was dealing with were already very compressed back then (ipod/iphone videos)
[19:02] <pfloyd> so that makes sense.  so what should I do to make it automatic without having to pre-determine the bitrate and then pass that to -b?
[21:39] <cdecl_xyz> in sws_getContext, sws_setColorSpaceDetails gets called with ff_yuv2rgb_coeffs[SWS_CS_DEFAULT] for input and output color space range. my question is, if I want the color space output to be SWS_CS_ITU709, do i also have to change the input color space? I assume yes. But then I would have to know which space the source is in right? So the user must set it or is this stored in certain containers?
[00:00] --- Mon Aug 18 2014

