[Ffmpeg-devel-irc] ffmpeg.log.20140725

burek burek021 at gmail.com
Sat Jul 26 02:05:01 CEST 2014


[01:26] <kippi> hey
[01:26] <kippi> should this work? use these two filters? -vf "select='gt(scene, 0.01)',showinfo" -af silencedetect=n=-10dB:d=20
[01:30] <c_14> it should
[01:30] <Hello71> !tias
[01:30] <c_14> Those are 3 filters though.
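For reference, a hedged sketch of how kippi's two options (three filters total) combine in one pass; `input.mp4` is a placeholder and the null muxer is used so nothing is written:

```shell
# Run scene-change detection and silence detection in a single pass,
# discarding the decoded output via the null muxer.
# showinfo and silencedetect both report their findings on stderr.
ffmpeg -i input.mp4 \
    -vf "select='gt(scene,0.01)',showinfo" \
    -af "silencedetect=n=-10dB:d=20" \
    -f null -
```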
[02:31] <troy_s> michaelni: Please reconsider the YCbCr conversion bug.
[02:32] <troy_s> michaelni: The PNG values are 100% correct, assuming whatever viewer you use dumps them as 1:1 RGB.
[02:32] <troy_s> michaelni: They had to be encoded from raw data as gamma 1.0 because otherwise automatic color transforms will occur.
[02:33] <michaelni> troy_s, i checked in multiple viewers the pngs are identical except +-1
[02:33] <troy_s> (And given that the values are YCbCr and not RGB, this would be entirely problematic)
[02:33] <michaelni> try gimp
[02:33] <troy_s> Yes but the +- 1 is the issue.
[02:33] <troy_s> The rounding shouldn't be happening from what I can see, unless bit shifting is mangling?
[02:35] <michaelni> if its a +-1 rounding issue, why dont you mention this ?
[02:35] <michaelni> also why is the difference png totally wrong ?
[02:35] <michaelni> it shows differences in areas that are 100% identical
[02:36] <michaelni> its really hard to debug something with such low quality information
[02:38] <michaelni> troy_s, also if you argue gamma=1 is correct why does only one of the 2 pngs have gamma=1 ?
[02:38] <troy_s> michaelni: Those are only tags.
[02:38] <troy_s> michaelni: You can ignore them.
[02:38] <troy_s> michaelni: GIMP for example, should.
[02:39] <michaelni> but they affect how the png is displayed in some software
[02:39] <troy_s> michaelni: So will plain RGB in some software as it tries to color manage!
[02:39] <troy_s> michaelni: There is no format that I can give you aside from a float EXR that will interpret the values 110% correct
[02:40] <troy_s> And if I do that, then I am at the whim of the software reading the EXR correctly as well as the fact we add more conversions in
[02:40] <troy_s> (Unsigned integer to float)
[02:40] <troy_s> So A) The quality of information is absolutely as precise as I can assert to being correct.
[02:41] <troy_s> B) there is no raw data wrapper that I can do better than PNG in terms of accessibility. Tags may or may not be abided by for some software, but the raw channel values are accurate in those files.
[02:42] <troy_s> C) The pure transform of the RGB image into YCbCr is as high quality as possible based off of a 32 bit float EXR with correct SMPTE ranges and passed through a matrix +offset transform in float, then rounded.
[02:43] <troy_s> So in terms of data values, we have exactly six to examine, all of which are 1:1. RGB (YCbCr) in the high quality source, and RGB in the dest.
[02:44] <troy_s> (Nine if you wish to roll the input RGB image through the transform logic in FFMPEG.)
[02:47] <michaelni> troy_s, lets start again from square 1, what bug is this about ?
[02:47] <michaelni> you opened 2 bug reports
[02:47] <troy_s> 1) RGB to YCbCr transforms using 709 (two scenarios) yield incorrect results.
[02:47] <troy_s> (Likely due to rounding.)
[02:47] <troy_s> Fair?
[02:48] <troy_s> (The initial testbed is 8 bit 4:4:4)
[02:48] <troy_s> I discarded the first bug on your advice to reduce the first round of issues to single points.
[02:49] <troy_s> (The longer target here is to try and fix YCbCr to RGB and vice versa at a higher level, but again I took your advice to break it down further.)
[02:50] <troy_s> (FF still struggles with the transforms, despite being 1000x better since the 601 hard coded values were dropped.)
[02:50] <michaelni> you opened 3 bug reports, one you closed, what is the difference between the other 2 ?
[02:50] <troy_s> One is for studio range, the other full range.
[02:51] <troy_s> (The first changed from testing the high level, to reducing it to very repeatable steps that are strictly FF's domain)
[02:52] <troy_s> (The errors will accumulate and make it impossible for us to figure out exactly where FF can be repaired or improved.)
[03:05] <michaelni> troy_s, should i delete the difference pngs from the tickets ? they dont represent the difference, so no one else wastes time looking at them
[03:07] <michaelni> troy_s, or is the other diff png correct ? (i checked just the one from 3801)
[03:07] <troy_s> They both are
[03:07] <troy_s> They show which blocks are different in the SMPTE patterns
[03:08] <troy_s> In the case of studio range tests, which slivers of the grad are deviating.
[03:08] <troy_s> (It is a binary diff basically)
[03:08] <troy_s> (Both of them)
[03:08] <troy_s> michaelni: Cooking, but will answer any and all questions as I can.
[03:10] <michaelni> well, the one from 3801 certainly is not correct in the sense that multiple areas which are error free are displayed in different colors; i would have expected a color-difference PNG to be 50% gray where there is no error, not various different colors
[03:11] <michaelni> also the png is described by: "A visual difference between the 8 bit SMPTE input versus theoretical and FFMPEG YCbCr output."
[03:11] <troy_s> Anything red is a difference
[03:11] <troy_s> Yes.
[03:11] <michaelni> yeah iam starting to realize this now
[03:12] <troy_s> From 8 bit correctly encoded PNG
[03:12] <troy_s> Converted to YCbCr theoreticals
[03:14] <michaelni> actual color difference would have been significantly more useful
[03:14] <troy_s> michaelni: Sorry. That should be possible via compare in IM
[03:14] <troy_s> michaelni: But I had trouble enough asserting no channel mangling.
[03:15] <troy_s> michaelni: My apologies. I will work toward better compares for the next round.
[03:15] <michaelni> i can do it in gimp, i am just a bit unhappy about having to reverse engineer what the bug is about
[03:16] <troy_s> michaelni: My issue. I thought I was communicating effectively.
[03:16] <troy_s> michaelni: I could have just dumped the two RGB channel images, but that too probably would have been suboptimal.
[03:22] <michaelni> troy_s, can you add the expressions you used to generate the correct ycbcr data to the tickets ?
[03:30] <troy_s> michaelni: Absolutely. After dinner.
[03:30] <michaelni> ok, thanks, no hurry
[03:30] <troy_s> michaelni: The basic YCbCr transform is a matrix and offsets
[03:30] <troy_s> (I even calculated the coefficients off of the original primaries and white points)
[03:31] <troy_s> michaelni: (_And_ emailed Poynton about them, which he replied)
[03:47] <troy_s> michaelni: The ODS is worth looking at
[03:48] <troy_s> michaelni: (in the original report) as it shows a perfect theoretical breakdown line by line, generated off of nothing but the coefficients themselves as input data (only coefficients and RGB values as input)
[03:49] <troy_s> michaelni: It generates the YPbPr (0..1 luma and -0.5 to 0.5 chroma)
[03:49] <troy_s> michaelni: And builds off of that to get to both studio and full range data
[03:49] Action: michaelni feels like it will be easier to reverse engineer the equation from the data
[03:49] <troy_s> (I included 601 and 240M in it as toggles as well, so that the other cases can be tested.)
[03:50] <troy_s> It isn't
[03:50] <troy_s> The equation for YPbPr is dead simple actually
[03:50] <michaelni> no, you can write it down ?
[03:50] <troy_s> Yep
[03:50] <troy_s> But it really is a matrix
[03:50] <troy_s> (With three offsets)
[03:50] <michaelni> you can write it as matrix i dont mind at all
[03:51] <troy_s> Sure. That matrix is in the ODS if you care.
[03:51] <troy_s> (IIRC)
[03:51] <michaelni> i dont think so
[03:51] <michaelni> IIRC theres some idealized thing there
[03:51] <michaelni> but this is about rounding +-1
[03:51] <michaelni> so we need to get Z256 -> Z256 not R->R
[03:52] <michaelni> theres more than one way to round, not to mention dither and noise shaping
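The transform troy_s describes (a matrix plus offsets, then quantization) can be sketched numerically. This is an illustration only, using the standard BT.709 luma coefficients and one conventional round-half-up quantization; it is not necessarily the rounding swscale performs, which is exactly the +-1 under dispute:

```shell
# BT.709 RGB -> YPbPr -> studio-range 8-bit YCbCr (illustrative sketch).
# Kr/Kb are the BT.709 luma coefficients; the rounding here is plain
# round-half-up, which is only one of several possible choices.
awk 'BEGIN {
  Kr = 0.2126; Kb = 0.0722; Kg = 1 - Kr - Kb
  R = 0.75; G = 0.5; B = 0.25          # example RGB triplet in 0..1
  Y  = Kr*R + Kg*G + Kb*B              # luma, 0..1
  Pb = (B - Y) / (2*(1 - Kb))          # chroma, -0.5..0.5
  Pr = (R - Y) / (2*(1 - Kr))
  # Studio range: Y maps to 16..235, Cb/Cr to 16..240 (centered at 128).
  printf "Y=%d Cb=%d Cr=%d\n", int(219*Y + 16 + 0.5), \
         int(224*Pb + 128 + 0.5), int(224*Pr + 128 + 0.5)
}'
```

Full range would instead scale Y by 255 with no offset and Cb/Cr by 255 around 128, which is where the two open tickets diverge.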
[03:53] <troy_s> michaelni: Right then. Let me check my files. I can get an ODS quick
[03:54] <troy_s> michaelni: I have a hunch that those minor precision errors are accumulating and making the drift quite significant at the tail end.
[03:54] <troy_s> (FFMBC obviously had the same issues, but I chose to bring it upstream because FFMPEG now has a much more robust chain in place)
[03:55] <troy_s> (Aside from the sRGB hard coded for XYZ transforms, that no one will care about until 2020 is all over.)
[04:14] <troy_s> michaelni: Who is coding the colorimetry aspect in swscale for XYZ?
[04:14] <troy_s> michaelni: I would love to chat with them.
[04:16] <troy_s> (I don't believe it is the AV folks is it?)
[04:25] <michaelni> i think the xyz code had 3 or 4 contributors
[04:26] <Hello71> oh hey, an op. haven't seen one of those in here in... uh.
[04:26] <troy_s> michaelni: Hrm. Anyone lead point on it currently?
[04:27] <michaelni> i think none of them touched the code after their contribution
[04:27] <troy_s> Drat.
[04:27] <troy_s> That's a hugely important chunk of code WRT 2020 and other formats that are upon us.
[04:28] <troy_s> (For example, transforming from 2020 to a standard sRGB display, or a wide gamut, etc. (Likely means a kick in the ass to ffplay to gut or fix the OpenGL code, not that I think that it is even possible thanks to chipset vendors bogging it up.)
[04:29] <troy_s> michaelni: If anyone who is familiar with the ffmpeg base better than I is capable and interested, I am more than willing to help carve it out. My colorimetry side should be able to at least see breakages.
[04:32] <michaelni> troy_s, iam happy to help with answering questions about swscale, also if you want to take over maintainership of the xyz code thats welcome as well
[04:33] <troy_s> michaelni: Might take me a while to try and get familiar with it, plus god knows how many horrible questions (of the stupid sort) I'd end up nagging you with.
[04:33] <michaelni> yes iam a bit afraid of that ;)
[04:34] <michaelni> but i think your knowledge about that colorimetry & xyz stuff is probably better than mine
[04:35] <michaelni> and i have too many things i already maintain so i cant give xyz and surrounding code as much attention as it should get
[04:37] <michaelni> and git grep xyz libswscale/     as well as     git grep XYZ libswscale/
[04:37] <michaelni> should explain how it all works
[04:40] <troy_s> Hrm. git grep...
[04:40] <troy_s> michaelni: First thing first
[04:41] <troy_s> michaelni: How difficult is it to add a -primaries option to swscale?
[04:41] <troy_s> michaelni: I say primaries rather than coefficients because A) we can glean the luma coefficients from the primaries, and B) the primaries would be forward looking and cover every possible variation on 2020.
[04:43] <michaelni> "adding" options is easy, doing something with them may or may not be easy
[04:43] <michaelni> see libswscale/options.c
[04:43] <troy_s> michaelni: I think there is enough sort of hint work in place (I need not deal with the bitshift nightmare, just worry about the numerical float values.)
[05:25] <michaelni> troy_s, btw about the rgb->ycbcr, see rgb24ToUV_c & rgb24ToY_c & chrRangeToJpeg_c() & lumRangeToJpeg_c()
[05:25] <troy_s> Erf.
[05:25] <troy_s> Let me decode that.
[05:26] <troy_s> What am I looking at?
[05:26] <michaelni> troy_s, also you can use -cpuflags 0 to disable all asm/SIMD to make sure it uses the C code (which is easier for debugging)
[05:26] <troy_s> Why are the two named differently?
[05:26] <troy_s> Oh... so I can just pass that on the CLI?
[05:26] <michaelni> -cpuflags 0, yes
[05:26] <troy_s> My problem is that my ability to look at the gnarly bitshifts is very opaque to me
[05:27] <troy_s> I can handle float values and even unsigned ints at times, but the shifting makes my head go bonkers.
[05:27] <michaelni> well its fixed point no magic really, float would be too slow and too unpredictable for regression tests
[05:27] <troy_s> michaelni: Two questions then, A) If there is a rounding issue it is in one of those four functions for studio and full respectively and B) where are the coefficient tables located now?
[05:28] <troy_s> michaelni: Speaking of which, having a regression test that tests the YCbCr chain would be excellent. An image or even just values to known values (like the 25 odd in the SMPTE test)
[05:30] <michaelni> input_rgb2yuv_table
[05:30] <rcombs> ghuuuuuuuu, just tried to use ffmpeg for a testing application on Ubuntu, failing to remember that it's actually avconv
[05:30] <rcombs> if I wanted avconv, I'd have asked for it
[05:31] <troy_s> rcombs: Nightmare.
[05:31] <rcombs> yeah
[05:35] Action: michaelni falls asleep, troy_s ill look tomorrow if you had more questions
[05:36] <troy_s> michaelni: I'll attach the ODS for the calculations as per your request.
[05:36] <troy_s> michaelni: And we will go from there.
[05:36] <troy_s> michaelni: Hopefully someone might have some tips on how to sort out the rounding issue.
[05:36] <troy_s> (or whatever it is)
[05:36] <troy_s> Then I'll tackle 601 and 240M just to make sure it's working correctly.
[05:36] <troy_s> And then onto decoding. Yuck.
[06:25] <t4nk949> Hello
[13:45] <vklimkov> looking for a person familiar with ffmpeg on android. ping if interested
[13:47] <Mavrik> I strongly suggest you ask a concrete question.
[13:58] <vklimkov> Mavrik: it's not about a question. suggesting a kind of job
[18:40] <NeedFFMpegHelp> Hi, I need help with running FFMpeg with my Osprey card.
[18:40] <NeedFFMpegHelp> http://pastebin.com/wrnANGkR
[18:40] <NeedFFMpegHelp> ^^ That is my output
[18:40] <NeedFFMpegHelp> ^^ And the command
[18:51] <NeedFFMpegHelp> Does anyone have any ideas?
[18:54] <sfan5> "Input/output error" isn't really helpful, but that's not your fault
[18:58] <NeedFFMpegHelp> sfan5, Ok, is there something I can do to get a more helpful message?
[18:58] <sfan5> probably not
[18:59] <NeedFFMpegHelp> sfan5 - Thank you... If it were you, what would be the next step?
[18:59] <sfan5> google the problem if not already done
[18:59] <NeedFFMpegHelp> lol
[18:59] <NeedFFMpegHelp> I see.
[19:00] <NeedFFMpegHelp> So, it appears to be a crossbar issue, where FFMpeg doesn't support crossbar devices.
[19:01] <NeedFFMpegHelp> I was hoping that someone would have a work around, or if FFMpeg plans on adding support for it?
[19:04] <NeedFFMpegHelp> As there are people who agree that it would add a great deal of value to the project: http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=889
[20:02] <llogan> NeedFFMpegHelp: submit a feature request ticket on the bug tracker if there isn't one already.
[20:42] <MarcelvanLeeuwen> Hi!
[20:43] <MarcelvanLeeuwen> is it possible to encode a dts-ma to dts?
[20:43] <BigArah> What's Up #FFMPEG!?
[20:43] <BigArah> relaxed you up in here?
[20:44] <c_14> MarcelvanLeeuwen: ffmpeg -i file -c:a dts -c:v copy outfile
[20:44] <MarcelvanLeeuwen> okay thanks going to test
[20:45] <BigArah> ffmpeg -i /path/file.mp4 2>&1 | grep "Duration";
[20:45] <BigArah> how would I be able to save that variable and use it again?
[20:47] <llogan> BigArah: https://trac.ffmpeg.org/wiki/FFprobeTips
[20:48] <llogan> shows a "better" way to get the duration. how you use the duration is up to you.
[20:48] <BigArah> ok, I can run that from php then and pass it back
[20:48] <BigArah> $duration = shell_exec(ffprobe...);
[20:49] <BigArah> How do you feel about that llogan ?
[20:50] <llogan> i know nothing of PHP, but the example shows eval usage which may be what you are looking for.
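The FFprobeTips approach llogan links amounts to something like the following; the filename is a placeholder, and the `-of` flags assume a reasonably current ffprobe:

```shell
# Print only the container duration, in seconds, with nothing else --
# far easier to capture from a script than grepping ffmpeg's stderr.
ffprobe -v error -show_entries format=duration \
    -of default=noprint_wrappers=1:nokey=1 input.mp4
```

From PHP that single clean line is what `shell_exec()` would return.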
[20:50] <BigArah> yeah
[20:50] <BigArah> thanks man
[20:50] <BigArah> I appreciate that
[20:52] <NeedFFMpegHelp> llogan: thanks for the reply... I'll do that. What do you think the likelihood is of that feature being added?
[20:53] <llogan> NeedFFMpegHelp: hard to say. depends on developer interest. unless you supply a patch to ffmpeg-devel mailing list to implement it.
[20:54] <llogan> or you can try placing a bounty. we have a bountysource account if you prefer using that service.
[20:54] <NeedFFMpegHelp> Does that bounty source account accept BitCoin?
[20:55] <llogan> i think so
[20:55] <trn> Hello, would anyone be willing to help me with what is probably an extremely noobish issue?
[20:55] <NeedFFMpegHelp> Very nice.. Can you give me the details? I'd like to look into that.
[20:55] <trn> I've complete info with question and all relevant info in pastebin at http://goo.gl/m3p9Gs
[20:56] <llogan> NeedFFMpegHelp: i think you just make a ticket on the FFmpeg bug tracker, then wait about 15 minutes, then it will appear on bountysource. you then tell it how much of a bounty you're willing to spend to get it implemented.
[20:56] <trn> Introduction info at the top, nearly unmolested output (only removed irrelevant repeating data so it's 1000 lines instead of 100000), and I hope I've worded the question correctly. :)
[20:57] <trn> I'm going to be using the ffmpeg libraries, but want to know what I'm doing in the cli before converting to C.  (I've also never used ffmpeg before, so please excuse my ignorance.)
[20:57] <NeedFFMpegHelp> llogan: Thank you, I see that BountySource takes BitCoin, so that is perfect. Thanks so much.
[20:58] <trn> Also I hope it's enough info, this is just a few lines of an isolated troublesome function in a prototype application that's several thousand lines of code running across several hundred machines.
[20:58] <llogan> NeedFFMpegHelp: let me know how you like or dislike it. i don't think anyone has used it yet, but the FFmpeg account hasn't been there for very long.
[20:59] <trn> So before I embarass myself on the mailing list, anyone want to take a look?
[21:01] <llogan> trn: that's a lot of info. if you ask on mailing list you can omit everything above line 728.
[21:02] <llogan> except for a few sentences explaining the issue
[21:02] <trn> Everyone who asked questions on the list that I looked at was almost always asked for more information, so I'm just trying to be complete.
[21:03] <llogan> yes, i understand, and users often do the opposite.
[21:03] <Primer> Is ffmpeg now avconv?
[21:03] <sacarasc> No.
[21:03] <sacarasc> avconv is from a fork.
[21:03] <sacarasc> ffmpeg uses ffmpeg.
[21:03] <Primer> So why are distros now using avconv?
[21:04] <trn> And relevant output is difficult to isolate when they ask for "complete" output when I have multiple ffmpeg processes running and sending output and accepting input via different network and IPC mechanisms.
[21:04] <Primer> Sorry if this is a sore subject
[21:04] <sacarasc> Because they don't ship ffmpeg, but a form called libav.
[21:04] <sacarasc> *fork
[21:04] <trn> llogan: 999 - 1020 is my issue :)
[21:04] <Primer> but I needed to splice a video and have been doing that with ffmpeg for a long time, but my newly installed Linux mint 17 has no ffmpeg
[21:04] <llogan> because the Debian maintainer (who was formerly a FFmpeg developer) switched to the fork, and then forced his decision to the users.
[21:05] <Primer> wow
[21:05] <llogan> You can download a static build. http://ffmpeg.org/download.html
[21:05] <llogan> or compile http://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
[21:05] <sacarasc> Or build yourself: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
[21:05] <Primer> lovely, thanks
[21:06] <Primer> hopefully this will work, whereas avconv is failing at this: avconv --ss 00:05:32 -t 30 -i file.mov clip.mov
[21:07] <Primer> it seeks to the correct location, but the duration isn't enforced, it just goes to the end
[21:07] <llogan> trn: i can't really look at it now, but if you post on ML maybe I (or someone else) can. make sure to provide your details in the message itself instead of a link to pastebin
[21:07] <llogan> also see https://trac.ffmpeg.org/wiki/MailingListEtiquette if you're feeling nervous about the ML.
[21:08] <Primer> nope, it also just goes to the end
[21:08] <Primer> So any thoughts? ffmpeg -y -ss 00:07:06 -t 30 -i REC_0004.MOV -c:v copy -c:a copy clip.mov
[21:08] <trn> llogan: I will do so.  I'm more nervous about posting on the mailing list because this is part of a large application and I don't really want an archived record forever available of what an idiot I am. :)
[21:09] <Primer> http://pastie.org/9420798
[21:09] <llogan> it's not idiotic. and if you don't want a record just use a fake name.
[21:10] <llogan> Primer: can you provide the input?
[21:10] <Primer> I did
[21:10] <Primer> err
[21:10] <Primer> you mean the file?
[21:10] <Primer> it's huge :)
[21:11] <Primer> -rwxr-xr-x 1 daniel daniel 1.1G Jul 25 11:18 REC_0004.MOV
[21:11] <trn> llogan: Thanks, but if my boss knows I wasted 2 days when all I needed was "-set_magic_bs 1" or "#define MAGIC_BS 1" I'm going to lose prestige in the office.
[21:11] <Primer> If I were to upload that from here, my co-workers would be pissed
[21:12] <trn> I'll try to do something tho.
[21:12] <llogan> Primer: is the issue only with this particular input?
[21:12] <BigArah> hey llogan I got it to work with shell_exec() and the old grep command passing it as a value
[21:12] <BigArah> thanks for your help
[21:13] <Primer> llogan: I've not made clips from videos in a very long time. I suppose I can test with some other input.
[21:13] <Primer> http://mobius-actioncam.com/ is what made the video, if you're curious
[21:13] <trn> Also one unrelated quick question before I go read the source code....  is the ffmpeg-bundled NUT container handling considered inferior or superior to linking mplayer/ffmpeg git libnut?
[21:14] <trn> Because I'm using NUT rather extensively to pipe around arbitrary multimedia data both IPC and around a cluster.
[21:17] <NeedFFMpegHelp> !
[21:19] <trn> My only other option for intra-application cluster transport and IPC is MPEG-2 Part 1 transport streams, which seems a lot more complex.
[21:19] <Primer> llogan: I'll upload the source file when I get a chance and get back to you. Thanks for your interest in this matter.
[21:19] <trn> Because some parts of the application may cut, splice, and later concatenate streams without parsing them, and NUT passed all the tests.
[22:16] <FrEaKmAn_> hi all..
[22:16] <sfan5> hello
[22:16] <FrEaKmAn_> if I want to convert multiple files to mp4
[22:16] <FrEaKmAn_> and then concat those files
[22:16] <FrEaKmAn_> is it good to have same framerate?
[22:16] <FrEaKmAn_> convert to same fm?
[22:16] <sacarasc> I think it won't work without the same frame rate.
[22:17] <FrEaKmAn_> ok..
[22:18] <FrEaKmAn_> so for vcodec I will use libx264 and for audio aac
[22:18] <FrEaKmAn_> but what I should define for framerate?
[22:19] <sacarasc> What different ones do you have?
[22:19] <FrEaKmAn_> I don't know.. users upload them
[22:19] <FrEaKmAn_> and then I convert and concat
[22:25] <sacarasc> Do you know where they might be sourcing from? Will all the videos they want combined be from the same source?
[22:25] <trn> FrEaKmAn_: I am working under similar circumstances and we deal with an internal 'standard' of 720p 25fps and convert everything to that, upscaling smaller-resolution streams, downscaling larger ones, and then sync'ing the video to the audio.
[22:26] <FrEaKmAn_> trn: how do you sync video with audio?
[22:26] <FrEaKmAn_> sacarasc: not really
[22:26] <trn> FrEaKmAn_: Usually we get 24, 29.97, and 60 in the wild for input.
[22:27] <trn> FrEaKmAn_: Right now by letting vsync handle it.
[22:27] <FrEaKmAn_> what do you use for parameter?
[22:28] <trn> 25 fps just seemed like a good intermediate.  I'm in no position to tell you really, I just started using the ffmpeg stuff this week.
[22:28] <trn> And we're using the libav* ffmpeg libraries directly.
[22:28] <FrEaKmAn_> ok
[22:31] <trn> I know the async option is deprecated, but I would *assume* something like: -async 1 -vsync 2 -r:v 25 for the output would work.
[22:33] <trn> Where you could probably then concat the produced outputs.
[22:34] <trn> I would assume you could just use multiple -i $in1 -i $in2 etc and do it all in one swoop, unless you have some situation that makes that impractical.
[22:36] <FrEaKmAn_> trn: thanks
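One hedged way to do the normalize-then-concatenate step in a single command is the concat filter; the filenames, the 25 fps target, and the 1280x720 size below are assumptions carried over from trn's description, not a tested recipe:

```shell
# Normalize both inputs to a common frame rate and size, then concatenate.
# -strict -2 enables the then-experimental native AAC encoder in 2014-era builds.
ffmpeg -i in1.mp4 -i in2.mp4 -filter_complex \
  "[0:v]fps=25,scale=1280:720[v0];[1:v]fps=25,scale=1280:720[v1];
   [v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" -c:v libx264 -c:a aac -strict -2 out.mp4
```

The concat filter wants matching frame rates and dimensions per segment, which is why the fps/scale normalization comes first.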
[22:53] <zybi1> hi
[22:54] <zybi1> how to re-encode a 70 mbit mkv-file to a about 35 mbit mov file please?
[22:59] <trn> Quick licensing question ... If I'm linking to ffmpeg libraries on an application that runs only on my server and the servers are under control of our organization, and ffmpeg is compiled --enable-gpl --enable-nonfree, but we are not distributing anything, the server software dynamically linking to the compiled ffmpeg can remain proprietary, correct?
[23:00] <trn> Because there is no distribution of the server source in binary or source format, only server output, which I assume can't be reasonably considered a derivative work :)
[23:04] <JEEB> well, due to the --enable-nonfree flag you wouldn't be able to distribute binaries of that configuration anyways, as it bars you from doing it (licenses being incompatible)
[23:06] <JEEB> with regards to other points, follow the license(s) involved. IANAL. Generally if you are not distributing any binaries and not dealing with the A* gnu licenses, you should be OK.
[23:06] <trn> That was what I thought too. :)
[23:10] <Primer> llogan: I was able to limit the recording time by specifying -t after -i
[23:12] <llogan> Primer: i'm not sure why before -i did not work as expected, although the behavior of -t differs depending on location
[23:12] <Primer> right
[23:12] <Primer> the docs is what lead me to try -t after -i :)
[23:12] <Primer> See? PEOPLE SOMETIMES READ THE DOCS!
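A sketch of the two `-t` placements being compared; filenames are from Primer's paste, and the behavioral notes are hedged since seeking semantics with stream copy varied across 2014-era builds:

```shell
# What finally worked for Primer: -t after -i, applied as an output
# option, limiting the output to 30 seconds.
ffmpeg -y -i REC_0004.MOV -ss 00:07:06 -t 30 -c:v copy -c:a copy clip.mov

# The other placement: -ss before -i does a fast keyframe-based input
# seek; combined with stream copy this could behave unexpectedly.
ffmpeg -y -ss 00:07:06 -i REC_0004.MOV -t 30 -c:v copy -c:a copy clip2.mov
```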
[23:13] <llogan> zybi1: https://trac.ffmpeg.org/wiki/Encode/H.264#twopass this should give you a general idea
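Following the linked guide, a hedged two-pass sketch for zybi1's roughly 35 Mbit target; the codec and audio-copy choices are assumptions, and whether the source audio can be copied into .mov depends on what codec it is:

```shell
# Pass 1: analyze only, discard the output; pass 2: encode at ~35 Mb/s video.
ffmpeg -y -i input.mkv -c:v libx264 -b:v 35M -pass 1 -an -f null /dev/null && \
ffmpeg -i input.mkv -c:v libx264 -b:v 35M -pass 2 -c:a copy output.mov
```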
[23:13] <YaMoonSun> I downloaded an .mp4 and I want to extract the .aac codec from it and contain it via .m4a or make a precise encode via .mp3 - I keep running into errors. http://i.imgur.com/7gdVVhU.png
[23:15] <llogan> ffmpeg -i input -map 0:a -codec:a copy output.m4a OR ffmpeg -i input -vn -codec:a copy output.m4a OR ffmpeg -i input -c:a libmp3lame -q:a 4 output.mp3
[23:17] <YaMoonSun> I used the -strict -2 or whatever and it proceeded to give me an error. Did I just have the command wrong, or?
[23:17] <YaMoonSun> Tyvm btw
[23:18] <zybi1> thanks llogan
[23:19] <llogan> YaMoonSun: option placement matters. you're attempting to apply -strict to the input (to the decoder).
[23:21] <YaMoonSun> I thought placing commands prior to -i would affect the entire encode and not just the input - Every time I read out whole directory paths I give myself a headache. I need to start replacing them with variables to simplify this.
[23:22] <llogan> ffmpeg [options for input] -i input [options for output] output
[23:22] <YaMoonSun> So the strict command goes where?
[23:23] <YaMoonSun> Why can I not open This Is A File.mp4, but I can open thisisafile.mp4?
[23:24] <llogan> -strict, in your case, is an output option.
[23:24] <llogan> i don't know why you can't open the file. you haven't provided any context
[23:24] <llogan> but you don't need to use strict because your input is already AAC, so you don't need to re-encode
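Putting llogan's points together, two hedged examples; the quoting also shows one likely fix for the "This Is A File.mp4" problem, since filenames with spaces must be quoted on the shell:

```shell
# Stream copy: no re-encode happens, so -strict is not needed at all.
ffmpeg -i "This Is A File.mp4" -vn -c:a copy output.m4a

# If re-encoding with the (then-experimental) native AAC encoder,
# -strict belongs after -i, as an output option:
ffmpeg -i "This Is A File.mp4" -vn -c:a aac -strict -2 output_reenc.m4a
```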
[23:25] <YaMoonSun> So my entire command was wrong for the function that I was attempting. I was trying to encode when I should have been trying to extract.
[23:25] <YaMoonSun> Damn
[23:26] <trn> llogan: Maybe you can help me without reading my 'novel'. :)
[23:27] <trn> Is there some good explanation anywhere of PTS/DTS/timebase/etc.  And how that applies to either container format and/or data packets in the streams in the container.
[23:28] <trn> It took forever to wrap my head around the fact that both the stream and the container can specify aspect ratios and they might not always be the same. :)  I'm a slow learner.
[23:30] <trn> But now for the player side of things, I determine screen aspect ratio, and can properly zoom into a 16:9 stream inside a 4:3 container on a 16:9 output to avoid having big black borders ...
[23:30] <trn> since it would otherwise show both letterboxed and pillarboxed.
[23:37] <YaMoonSun> That .m4a command worked nice, but the .mp3 reduced the bitrate from 192kbps to 141kbps
[23:45] <c_14> YaMoonSun: judging by your question, you probably want to use -c:a copy instead of -acodec aac
[23:46] <c_14> Also, when converting to mp3 you'll probably want to define a bitrate or quality.
[23:47] <YaMoonSun> Cheers - I'm trying to figure out how to edit with Audacity without losing quality now =/
[23:48] <sacarasc> If the output is lossy anywhere along the line, you've already lost quality.
[23:48] <c_14> You'll always lose quality when encoding from a lossless codec to a lossless codec.
[23:48] <c_14> s/lossless/lossy
[23:49] <c_14> No idea why I said lossless there...
[23:49] <trn> Question on prototyping using the ffmpeg cli vs. using the libraries directly.
[23:49] <trn> In my application I'm reading from buffered sockets and using O_NDELAY.
[23:50] <trn> Is there any way to replicate O_NDELAY behavior when using a Unix named pipe as ffmpeg input?
[23:51] <YaMoonSun> The m4a is flawless (considering the source), but unable to be opened in audacity, so I don't know what to do if I want to crop parts of the audio out without losing quality.
[23:52] <c_14> You can crop directly during the extraction from the mp4.
[23:52] <c_14> Or while converting to mp3.
[23:53] <YaMoonSun> Within ffmpeg? =0
[23:53] <c_14> https://trac.ffmpeg.org/wiki/Seeking%20with%20FFmpeg
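A hedged example of the trim-during-extraction c_14 describes; the timestamps are placeholders:

```shell
# Start at 0:30, keep 60 seconds, copying the AAC stream untouched.
# With -c:a copy the cut lands on packet boundaries, so it is close
# but not sample-exact; re-encoding would be needed for exact cuts.
ffmpeg -ss 00:00:30 -i input.mp4 -t 60 -vn -c:a copy clip.m4a
```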
[23:55] <YaMoonSun> I don't know if I should be excited or scared. The program is amazing, but pretty complex. Going to take weeks to get the commands memorized.
[00:00] --- Sat Jul 26 2014