burek021 at gmail.com
Mon Feb 16 02:05:01 CET 2015
[01:43] <zetaPRIME> Can CLI ffmpeg be fed a set of PNG frames directly through stdin without writing them to disk first?
[01:45] <zetaPRIME> I'm trying to make a C#+monogame application that can encode rendered video in a way that works crossplatform and doesn't force my application to be under a particular license
[01:48] <zetaPRIME> and as far as I can tell there are no libraries that can encode video that aren't full GPL themselves
[01:50] <JEEB> well if you are fine with what libavcodec itself supports there's a relatively wide range of stuff that's under LGPL
[01:50] <JEEB> it's mostly third party libs that are GPL, like libx264
[01:50] <zetaPRIME> so the best I've come up with is creating a png frameset and mixing the audio track with ffmpeg-sharp or something, then piping both into CLI
[01:51] <zetaPRIME> I was thinking openh264 since it's mentioned on the supported codec list
[01:51] <JEEB> uhh
[01:51] <JEEB> I really wouldn't go that way
[01:51] <JEEB> I mean, it depends on what exactly you want
[01:51] <JEEB> because there's two use cases I can think of
[01:51] <JEEB> one is lossless capture of the game
[01:52] <JEEB> another is capture for live streaming
[01:52] <JEEB> for number one there's plenty of alternatives within libavcodec
[01:52] <zetaPRIME> it's not actually a game
[01:52] <zetaPRIME> it's a sprite-based animation suite
[01:52] <JEEB> so you most probably want to output the stuff lossless from the suite
[01:53] <JEEB> in that case there's plenty of stuff in base libavcodec
[01:53] <JEEB> ut video being one of those that has input plugins for most video editors for most OSs
[01:53] <zetaPRIME> ut?
[01:53] <JEEB> initials of the original developer
[01:53] <JEEB> I made the LGPL implementation
[01:53] <JEEB> in libavcodec
[01:54] <JEEB> original was GPL
[01:54] <zetaPRIME> well, codec use can be changed later if I'm going to be using ffmpeg
[01:55] <JEEB> anyways, I think C interop should be relatively simple from C#
[01:55] <zetaPRIME> the problem is giving it video, since ffmpeg-sharp doesn't do video encoding at all and it was last active in january 2012
[01:55] <JEEB> so wrapping lavf/lavc
[01:55] <JEEB> should be quite simple
[01:55] <JEEB> there are C examples on how to use the APIs in the examples dir
[01:55] <zetaPRIME> I don't know how to do managed-to-native interaction at all
[01:55] <JEEB> this is the time to learn then
[01:56] <JEEB> it should be relatively simple, or at least I'd be surprised if it was hard :P
[01:56] <zetaPRIME> well...
[01:56] <JEEB> it gets less simple with C++ where there are different ABIs etc
[01:56] <JEEB> while in C it should just work, more or less
[01:56] <zetaPRIME> I can generate a set of png-format frames without issue
[01:56] <zetaPRIME> the problem is just getting them into ffmpeg
[01:57] <zetaPRIME> I could write to disk then give it the fileset, but I'd rather not depend that much on disk I/O
[01:58] <JEEB> http://blogs.microsoft.co.il/sasha/2012/01/24/managed-unmanaged-interoperability-in-the-2010s-part-1-pinvoke-managed-to-unmanaged/
[01:58] <zetaPRIME> thus the initial question; can it be fed png data directly through stdin by any means?
[01:58] <JEEB> I know you can feed ffmpeg stuff through stdin, but I'm definitely not sure about multiple pngs :P
[01:58] <JEEB> http://stackoverflow.com/questions/9521916/wrapping-c-for-use-in-c-sharp/9555506#9555506 basically this series could be quite useful I think?
[01:59] <JEEB> I've not done dotNET development too much since 2003 or so
[01:59] <zetaPRIME> and pinvoke conceptually seems like an absolute minefield for crossplatform use
[01:59] <zetaPRIME> like if I didn't care about crossplatform I could just use directshow and NAudio and call it a day, but I don't know if that would even work in wine+mono
[01:59] <JEEB> well I'm just saying that it shouldn't be too hard and there should definitely be tools for doing it with mono or whatever
[01:59] <m4t> zetaPRIME: if you're on linux and can't do stdin, try a fifo
[02:00] <m4t> eg mkfifo myfifo
[02:00] <JEEB> you should be the one better versed in this shit, not me
[02:00] <m4t> ./pngprog > myfifo
[02:00] <m4t> ffmpeg -i myfifo
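[editor's note] One detail m4t's sketch leaves implicit: opening a fifo blocks until both a reader and a writer have attached, so the producer must run in the background (or ffmpeg must be started first). A guarded sketch, with `./pngprog` standing in for whatever emits the concatenated PNG stream, and the image2pipe demuxer assumed as one way to describe that stream:

```shell
mkfifo frames.fifo
# the writer would block on open until ffmpeg opens the read side,
# hence the trailing "&"; guarded so this is a no-op where the tools are absent
if command -v ffmpeg >/dev/null 2>&1 && [ -x ./pngprog ]; then
  ./pngprog > frames.fifo &
  ffmpeg -f image2pipe -c:v png -i frames.fifo out.avi
fi
rm frames.fifo
```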
[02:00] <JEEB> http://www.mono-project.com/docs/advanced/pinvoke/
[02:00] <zetaPRIME> uhhhh
[02:00] <JEEB> mono has this too it seems
[02:01] <JEEB> it seems like the docs @ mono-project.com are pretty good :P
[02:02] <zetaPRIME> m4t: ...not what I'm asking? I just need to know whether CLI ffmpeg can take any of the inputs that I'm capable of producing directly via stdin =/
[02:02] <JEEB> it can take shit in from stdin, yes
[02:02] <JEEB> if it can take in the data you require from there, you would have to test
[02:02] <JEEB> write a few pngs down and cat them to ffmpeg :P
[02:03] <JEEB> see if the result is according to what you wanted
[02:03] <JEEB> ffmpeg -i - -f png -c:v utvideo out.avi
[02:03] <JEEB> for example
[02:03] <JEEB> there was a multiple picture format as well, but I do not remember that without going to the documentation at 3am
[02:04] <JEEB> also ugh
[02:04] <JEEB> -f before -i
[02:04] <JEEB> because it's for input, not output
[02:04] <JEEB> my bad
[02:07] <zetaPRIME> ...problem there is that I don't actually have a nonwindows system to test on
[02:07] <zetaPRIME> http://stackoverflow.com/questions/13294919/can-you-stream-images-to-ffmpeg-to-construct-a-video-instead-of-saving-them-t though it looks like this is doing almost exactly what I want to do
[02:08] <JEEB> well I'm pretty damn sure you can cat things on windows as well
[02:08] <JEEB> if with nothing else then a cygwin or mingw cat binary will do it
[02:08] <JEEB> http://superuser.com/a/434876
[02:08] <JEEB> there, I googled it for you
[02:10] <JEEB> and yeah, image2pipe seemingly is the multiple pictures format
[02:10] <JEEB> you will have to set -c png or so for png, though, I guess?
[02:11] <JEEB> also you definitely don't want to use -c:v mpeg4, but instead -c:v utvideo or whatever other lossless format you like
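[editor's note] Pulling JEEB's pieces together (the -f-before--i fix, image2pipe, -c:v png, a lossless codec), the suggested test looks roughly like this; the frame filenames and output container are placeholders, and the block is guarded so it is inert where ffmpeg or the frames are absent:

```shell
# image2pipe and -c:v png describe the *input*, so they must come before -i;
# "-i -" reads the concatenated PNGs from stdin, utvideo keeps it lossless
if command -v ffmpeg >/dev/null 2>&1 && ls frame_*.png >/dev/null 2>&1; then
  cat frame_*.png | ffmpeg -f image2pipe -c:v png -i - -c:v utvideo out.avi
fi
```

From C#, the same stdin would be reachable through `Process.StandardInput.BaseStream` on the spawned ffmpeg process.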
[02:11] <zetaPRIME> hmm. well, actually trying to implement this is going to have to wait until there's something actually rendering, though that shouldn't take too long once I have my most basic design elements laid out in code
[02:11] <JEEB> or heck, if you don't distribute the ffmpeg binary yourself you might even get libx264 and friends :P
[02:11] <JEEB> (or you can distribute an LGPL one and then the user can switch to a GPL one if he/she wants)
[02:12] <zetaPRIME> or users can just replace the binary on their own--yeah
[02:12] <JEEB> you just parse -codecs:v
[02:12] <JEEB> and enable formats that you know are good enough and available
[02:12] <zetaPRIME> ...also, I'm guessing stdio in c# is accessible as a Stream, in which case I can just use rendertarget2d.tostream directly~
[02:12] <JEEB> except I guess :v doesn't work with -codecs
[02:13] <JEEB> oh well, I always try that and end up getting the full listing :P
[02:13] <JEEB> because it just makes sense to be able to limit the output to specific types like you can do with -c
[02:14] <zetaPRIME> ...now I just need to figure out how to do audio mixing in a way that ffmpeg-sharp can work with
[02:14] <JEEB> oh right, audio :P
[02:14] <JEEB> unless you want to do NUT muxing yourself
[02:14] <JEEB> and feeding that from stdin
[02:15] <JEEB> you either do separate file output, or you start poking the APIs in that case :P
[02:15] <zetaPRIME> can ffmpeg just take an existing video container and insert an audio track after the fact?
[02:15] <JEEB> yes
[02:15] <JEEB> ffmpeg -i input1 -i input2 -map 0:v -map 1:a out.file
[02:15] <JEEB> maps all video streams from input nr0 and all audio streams from input nr1
[02:15] <JEEB> and outputs them as out.file
[02:16] <JEEB> of course you really don't want to do it like this without specifying output video/audio format :P
[02:16] <JEEB> because by default you re-encode to whatever is the FFmpeg default
[02:16] <JEEB> which is generally not what you want
[02:16] <JEEB> -c copy should do stream copy for all tracks
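[editor's note] Spelled out with the stream copy included (filenames are hypothetical, and the guard keeps the sketch inert when the inputs are missing):

```shell
# video from input 0, audio from input 1, both bitstreams copied untouched
if command -v ffmpeg >/dev/null 2>&1 && [ -f video.avi ] && [ -f audio.wav ]; then
  ffmpeg -i video.avi -i audio.wav -map 0:v -map 1:a -c copy muxed.avi
fi
```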
[02:17] <zetaPRIME> separate audio is perfectly fine for disk-first if I just ffmpeg-sharp it to an mp3 or ogg or what havest thou
[02:17] <JEEB> you will want raw pcm or something lossless if output is a lossless intermediate
[02:18] <JEEB> or aac or vorbis or opus if you want non-lossless
[02:18] <zetaPRIME> audio being lossless is significantly less important than video, I'd say
[02:19] <JEEB> yes, but I mean that if you want to do a lossless intermediate then both should be lossless
[02:19] <JEEB> which is what you generally want when moving the thing to a video editor or whatever
[02:20] <zetaPRIME> well, mp3 320-44k should be "good enough" in my experience
[02:22] <zetaPRIME> though I suppose I could stdin a second time for the "ziptogether"
[02:23] <JEEB> lossless audio is such a small thing after lossless video so I see no reason to not do it for the intermediate output mode or whatever
[02:23] <JEEB> if someone wants to do a "final encode" of the thing right there, right now, then there the thing can be coded lossy
[02:27] <zetaPRIME> yeah, I just don't want to have to write lossless twice in a row
[02:28] <JEEB> well, you generally want two modes like that :P
[02:29] <JEEB> one for an intermediate (for editing/input for multiple encodes), and one for "this is all I want"
[02:31] <zetaPRIME> the final-output mode definitely has to be mp4 h264, because youtube
[02:33] <JEEB> no
[02:33] <JEEB> youtube actually takes pretty much everything and /always/ re-encodes
[02:33] <JEEB> so in theory you want to input to it the best thing you can :P and while for bandwidth-limited people it will be exactly what you say, that's not the optimum thing
[02:36] <zetaPRIME> well, mp4 h264 is definitely the least prone to youtube's occasional gray-video-syndrome from what I've heard
[02:42] <JEEB> it's true that youtube doesn't often update its FFmpeg, but I still contend that most stuff you can throw from current FFmpeg at it will work
[02:42] <JEEB> also libx264 can do lossless as well so not like it's the worst thing to pick :P
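[editor's note] For reference, x264's lossless mode is just a quantizer of zero. This assumes an ffmpeg build with libx264 enabled (which, per the discussion above, makes the binary GPL); input and output names are placeholders:

```shell
# -qp 0 switches libx264 into lossless mode; Matroska holds H.264 fine
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.avi ]; then
  ffmpeg -i in.avi -c:v libx264 -qp 0 -preset veryfast lossless.mkv
fi
```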
[07:50] <zetaPRIME> so it looks like I'll have to do all my actual pcm mixing in my own code on the CPU, and in order to actually play this live I'll have to render a buffer ahead and thoroughly abuse monogame's creation of SoundEffect objects from a RIFF-formatted datastream
[07:55] <JamJams> Hey is R3D supported in ffmpeg? I see it in the list however I get errors when I try to import an R3D file from my Camera.
[14:02] <Muthu> hello
[14:02] <Muthu> can anyone help me do segmentation for iPhone using ffmpeg?
[14:04] <Muthu> please help me
[16:53] <rsully> to use ffmpeg with any decent options (libfdk, etc) on ubuntu do I have to compile it myself?
[16:55] <JEEBsv> yes
[16:55] <JEEBsv> ffmpeg with fdk-aac linked is not distributable
[16:56] <JEEBsv> same goes for faac and aacplus libraries
[16:56] <JEEBsv> since all of them are license-wise incompatible with (l)gpl
[16:57] <rsully> yep :/
[16:57] <rsully> is there a script that walks you through compiling and getting dependencies? or should I just follow the wiki guide
[17:19] <rsully> is ffmpeg-snapshot master or the current stable release?
[17:35] <__jack__> rsully: building ffmpeg is (really) easy
[18:01] <wintellect_> What's the best way to get ffmpeg into LinuxMint?
[19:00] <blez> how can I stream some video to html5 friendly webm?
[19:32] <shevy> is there a long option for -y available? overwrite output files
[19:32] <shevy> or perhaps put differently, how do I find out both short and long names to commandline arguments?
[19:46] <__jack__> does not exist, said the man
[19:47] <JEEB> if there was it'd be --yes :P
[19:47] <JEEB> because that's what -y means
[20:32] <shevy> lol
[20:32] <shevy> ffmpeg -i foo.avi --yes --no
[21:11] <Thaconut> Hi! I'm trying to use this script (https://gist.github.com/Taconut/f719bc7ac2b84fa335c7) to compile FFMPEG programmatically, but I've run across a problem: it's taking up way too much space! Since I'm only using a small part of FFMPEG, most of the content there isn't required. How can I tweak this script to only include the following functionality?:
[21:11] <Thaconut> Converting audio/webm and audio/mp4 to audio/mp3
[21:11] <Thaconut> Getting the length in seconds of a song with ffprobe
[21:12] <Thaconut> Injecting ID3 tags into mp3s
[21:12] <shevy> hehe
[21:12] <shevy> quite a long list
[21:12] <Thaconut> Yeah
[21:13] <Thaconut> But FFMPEG is like a swiss army knife, and I only need the scissors :P
[21:13] <Thaconut> Also I meant audio/mpeg not audio/mp3
[21:13] <Mavrik> well select only what you need in configure then.
[21:14] <Mavrik> maybe don't use a copy-pasted script you don't understand ?
[21:14] <shevy> Thaconut, when you do: "ffmpeg -i name_of_song.mp3" you get a long text result
[21:14] <BtbN> that script doesn't even set where it wants to put stuff
[21:14] <shevy> one line has this entry: "Duration: 00:07:04.93, start: 0.025057, bitrate: 128 kb/s"
[21:14] <Thaconut> I know
[21:14] <Thaconut> I'm using ffmpeg already
[21:15] <shevy> the Duration part tells you how many seconds that is. Now you can use grep or a programming language to scan for these :::: entries there
[21:15] <Thaconut> But my host doesn't allow me to just install it
[21:15] <Thaconut> I already did that
[21:15] <Thaconut> Regex worked fine
[21:15] <Thaconut> But I need to add that functionality when `make`ing ffmpeg
[21:15] <shevy> you want to modify the source of ffmpeg?
[21:15] <Thaconut> No
[21:16] <Thaconut> I just want to compile it with limited functionality in hopes that I can save space
[21:16] <Mavrik> btw, for stream info, use ffprobe with JSON or other structured output
[21:16] <Mavrik> don't parse human readable strings -_-
[21:16] <Thaconut> Too late :/
[21:17] <Thaconut> But I will change that if I can get FFMPEG working on my production server.
[21:17] <Thaconut> Thanks
[21:17] <Thaconut> It seems as though configure allows me to enable/disable certain functionality
[21:18] <Thaconut> But I'm not sure what I need to enable to get these features
[21:18] <shevy> the power of GNU configure
[21:18] <shevy> how do you know that "these features" are available
[21:18] <Thaconut> Because I'm using them with a complete installation of FFMPEG on my development server
[21:18] <Thaconut> and it works beautifully
[21:19] <shevy> if you just do "ffmpeg" then you will see the compiled options
[21:19] <shevy> configuration: --prefix=/usr --enable-gpl --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-version3 --enable-x11grab --extra-libs='"-ldl"' --disable-libopenjpeg
[21:19] <shevy> oops sorry, that was too much, my bad
[21:20] <Thaconut> So I'm guessing I need libmp3lame
[21:20] <Mavrik> Thaconut, use "./configure --help" for all options, "ffmpeg -codecs" to see list of codecs, "ffmpeg -formats" to see list of formats
[21:20] <Mavrik> and use those :)
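[editor's note] A starting point for such a minimal build, assuming the mov demuxer covers the mp4 input and matroska covers webm; the exact option set usually takes a round or two of trial and error against `./configure --help`. ffprobe and the mp3 muxer's ID3v2 writing are enabled by default, so the duration and tagging use cases need nothing extra:

```shell
# run from the FFmpeg source tree (guarded so it is a no-op elsewhere):
# strip everything, then add back only the needed components by name
if [ -x ./configure ]; then
  ./configure \
    --disable-everything \
    --enable-small \
    --enable-libmp3lame \
    --enable-demuxer=mov,matroska \
    --enable-decoder=aac,opus,vorbis \
    --enable-encoder=libmp3lame \
    --enable-muxer=mp3 \
    --enable-protocol=file,pipe \
    --enable-filter=aresample
fi
```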
[21:20] <Thaconut> Cool! Thanks!
[21:20] <Thaconut> That makes things easier
[21:20] <shevy> libmp3lame is lame aka http://sourceforge.net/projects/lame/files/lame/3.99/lame-3.99.5.tar.gz
[21:20] <Harzilein> shevy: if you mean that the line was truncated, it appeared up to "--disable-libopenjpeg", so i guess if that's not too much and you don't regularly paste 200 column lines, most support channels should be fine :)
[21:21] <ubitux> Thaconut: eval $(ffprobe -v error -of flat=s=_ -show_entries format=duration in.mp3); echo $format_duration
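[editor's note] ubitux's one-liner uses the flat writer, which is handy for eval; for anything bigger, the JSON output Mavrik suggested pairs better with a real parser (`in.mp3` is a placeholder):

```shell
# -of is shorthand for -print_format; the result is a JSON object
# with a "format" section containing "duration"
if command -v ffprobe >/dev/null 2>&1 && [ -f in.mp3 ]; then
  ffprobe -v error -of json -show_entries format=duration in.mp3
fi
```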
[21:21] <shevy> I mean I think it was too much, or at least on my IRC client, it appears as if I pasted like 5 lines in a row kind of :\
[21:21] <Thaconut> It looks nice and tidy on mine
[21:21] <shevy> I have bad eyes so I use huge fonts!
[21:21] <Harzilein> :D
[21:22] <Thaconut> Can I convert to an mp3 without lame?
[21:23] <shevy> hmm
[21:23] <relaxed> no
[21:23] <Thaconut> And I'm assuming that means I'm going to have to compile lame as well?
[21:23] <relaxed> Thaconut: did you see http://johnvansickle.com/ffmpeg/ ?
[21:24] <Thaconut> Wow!
[21:24] <Thaconut> You just made my life infinitely easier!
[21:24] <Thaconut> That's awesome! Thanks!
[21:24] <relaxed> you're welcome
[22:05] <ahop> Hi! Can ffmpeg convert .3gp to .mp3 ?
[22:05] <ahop> Or do I need another tool like lame , etc. ?
[22:05] <c_14> If ffmpeg can read the 3gp file, and has lame support built in, yes.
[22:05] <ahop> Is there an easy CLI command to convert .3gp to .mp3 ?
[22:06] <c_14> https://trac.ffmpeg.org/wiki/Encode/MP3
[22:06] <BtbN> Isn't 3gp a video format?
[22:06] <c_14> look at the VBR encoding example
[22:06] <c_14> 3gp is just a container iirc
[22:06] <ahop> BtbN in fact I have .3ga but similar to 3gp
[22:08] <ahop> I tried something like : ffmpeg -i input.wav -codec:a libmp3lame -qscale:a 2 output.mp3
[22:08] <ahop> but "Unrecognized option 'codec:a'"
[22:08] <Mavrik> you're using ancient ffmpeg.
[22:08] <Mavrik> if at all.
[22:09] <c_14> Try with a static build
[22:09] <c_14> http://johnvansickle.com/ffmpeg/
[22:09] <ahop> is there a single .exe file version ?
[22:11] <ahop> if I want to have a single file version I should take this one ? : http://ffmpeg.zeranoe.com/builds/ **Static** ?
[22:11] <c_14> ye
[22:12] <ahop> ok
[22:16] <ahop> ffmpeg -i *.wav -codec:a libmp3lame -qscale:a 2 (inputname).mp3
[22:16] <ahop> How can I do that ^^ (ie convert all .wav files in the folder)
[22:18] <ahop> is it possible to do it with ffmpeg itself, or do I need to use a windows batch (i hate batch!)
[22:20] <c_14> Just use a batch file (even if you hate it) or a scripting language of some sort.
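[editor's note] A POSIX-shell version of that loop, for reference (cmd.exe's rough equivalent is `for %%f in (*.wav) do …`); the `${f%.wav}` expansion strips the extension:

```shell
for f in *.wav; do
  [ -f "$f" ] || continue            # skip the literal pattern when no .wav exists
  if command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg -i "$f" -codec:a libmp3lame -qscale:a 2 "${f%.wav}.mp3"
  fi
done
```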
[22:21] <ahop> ok :)
[22:23] <ahop> it works! thanks a lot
[23:06] <acosonic> Hi everyone, does anyone know whether there's a player like splayer or eyecareplayer that filters brightness while playing videos?
[23:11] <pzich> acosonic: I'm not sure exactly what you mean, but you could certainly apply a filter to adjust the brightness/contrast, or apply curves or levels
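[editor's note] For what it's worth, ffplay (bundled with ffmpeg) can apply such a filter at playback time; the eq filter's `brightness` option ranges from -1.0 to 1.0 with 0 as neutral (the filename is a placeholder):

```shell
# darken playback slightly without touching the file itself
if command -v ffplay >/dev/null 2>&1 && [ -f video.mp4 ]; then
  ffplay -vf eq=brightness=-0.15 video.mp4
fi
```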
[00:00] --- Mon Feb 16 2015
More information about the Ffmpeg-devel-irc mailing list