[Ffmpeg-devel-irc] ffmpeg.log.20140422

burek burek021 at gmail.com
Wed Apr 23 02:05:01 CEST 2014

[00:25] <xreal> JEEB: ?
[04:21] <ItsMeLenny> hello
[04:22] <ItsMeLenny> how would one change the colour balance? particularly i want to multiply the red and blue channels by 2
[04:23] <ItsMeLenny> in ffmpeg
[04:28] <klaxa> ItsMeLenny: maybe look at this? http://ffmpeg.org/ffmpeg-filters.html#colorchannelmixer
[04:29] <ItsMeLenny> klaxa, i have that open in a text file, i never know how to adjust on 3 tone levels :P
[04:29] <klaxa> neither do i, sorry :P
[04:30] <ItsMeLenny> :P
[05:37] <ItsMeLenny> klaxa, tried a few different values, dont think i got close :P
[06:38] <sor_> is there a way to force ffplay to constantly update the video file?
[08:49] <K4T> hi
[08:49] <stalintrotsky> Hi, I have a question about converting parts of a video into a series of images
[08:49] <stalintrotsky> IS there a way to make subtitles from a subtitle stream appear in these images?
[08:51] <K4T> Can someone tell me how to pass x264 parameter "--stitchable" to ffmpeg? I tried following commands:
[08:51] <K4T> ffmpeg.exe -i "teledysk.vob" -c:v libx264 -x264-params stitchable:keyint=75 output.mp4
[08:51] <K4T> ffmpeg.exe -i "teledysk.vob" -c:v libx264 -x264-params stitchable=1:keyint=75 output.mp4
[08:52] <K4T> but none of them work
[08:52] <K4T> correct way using x264.exe is: x264.exe --stitchable --keyint 75 -o "teledysk.264" "teledysk.flv"
[08:52] <K4T> like u can see there is no =VALUE after --stitchable in x264
[09:04] <blippyp_> stalintrotsky: ffmpeg -ss 23 -i input.mkv -vf "ass=subtitles.ass" -t 145 images/output%04d.png
[09:12] <K4T> anyone?
[09:12] <blippyp_> I'm looking right now K4T ;)
[09:13] <blippyp_> I thought there was a way to pass an encoder's parameters directly to it, but I guess I was wrong - I can't find it...
[09:13] <K4T> now I`m sad ;[
[09:14] <blippyp_> be patient, someone with more knowledge about it may still answer
[09:14] <stalintrotsky> blippyp_, how would I use the subtitles from inside an mkv file I'm using?
[09:14] <stalintrotsky> Thanks for responding, by the way
[09:15] <blippyp_> and if they don't, come back again later and ask again - (like 8am-8pm) as there are more people awake... ;)
[09:15] <blippyp_> stalintrotsky: you have to export them to a .ass file first - one sec
[09:15] <stalintrotsky> oh
[09:15] <stalintrotsky> I think I can handle that
[09:15] <stalintrotsky> Thanks blippy
[09:15] <stalintrotsky> I was just wondering if there was a more convenient way
[09:16] <jonascj> Hi all. What approach would you guys take to streaming images from a webcam to a webpage? How is this done for example: http://www.fishcam.com/
[09:16] <blippyp_> ffmpeg -i input.mkv -vn -an -c:s copy subs.ass
[09:16] <blippyp_> np
[09:16] <stalintrotsky> thanks
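Putting blippyp_'s two steps together, a rough sketch - the file names, the stream index, and the start/duration values are all hypothetical placeholders:

```shell
# 1) Copy the first subtitle track out of the MKV unchanged
#    (assumes it is already an ASS/SSA stream; -map 0:s:0 picks it).
ffmpeg -i input.mkv -map 0:s:0 -c:s copy subs.ass

# 2) Burn the subtitles in while dumping one PNG per frame;
#    -ss (start, seconds) and -t (duration) are placeholder values.
ffmpeg -ss 23 -i input.mkv -vf "ass=subs.ass" -t 145 images/output%04d.png
```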
[09:16] <blippyp_> jonascj: ffserver
[09:17] <blippyp_> was that you I was talking to the other day about that? with mjpeg?
[09:17] <blippyp_> it worked for me (using mjpeg)
[09:18] <jonascj> blippyp_: yeah, I got ffserver and ffmpeg working :) Now I just need to figure out if I should use mjpeg, mp4 with html5 or similar. I was just wondering if you guys had any suggestions.
[09:18] <blippyp_> but swf had much better video quality imo (only tried the two)
[09:18] <blippyp_> I'd go with swf - it worked very nicely from my tests
[09:19] <jonascj> i.e. just in a browser?
[09:19] <blippyp_> yup
[09:19] <blippyp_> hold on, let me find my files
[09:19] <jonascj> My browsers just keep trying to download the mjpeg picture. If I use it as source in an <img>-tag they just display the first frame they receive
[09:19] <blippyp_> I saved them as an example...
[09:19] <jonascj> blippyp_: no need to - I just wanted some input :)
[09:20] <blippyp_> I wouldn't use mjpeg - it really looked bad compared to the swf video
[09:20] <blippyp_> here is what I did:
[09:21] <blippyp_> I used your ffserver.conf file - but modified the <Stream webcam.mjpeg> to <Stream webcam.swf>
[09:21] <blippyp_> and also changed the format in the Stream section to swf instead of mjpeg
[09:22] <blippyp_> I left everything else the same (although you might want to 'tweak' some of those settings better later)
[09:22] <blippyp_> I started ffserver with: ffserver -f ./ffserver.conf
[09:22] <jonascj> yeah :)
[09:22] <blippyp_> then ran ffmpeg with: ffmpeg -f v4l2 -pix_fmt yuvj422p -i /dev/video0 -f alsa -i plughw:0 -af volume=1 http://localhost:8090/webcam.ffm
[09:23] <blippyp_> my html page was very simple
[09:23] <jonascj> http://thecheat.colding.com:8090/webcam.swf is what I have now using swf. Some settings needs to be tweaked :P
[09:23] <blippyp_> <html><body><object width="320" height="240" data="http://localhost:8090/webcam.swf"></object></body></html>
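For reference, a minimal ffserver.conf fragment matching the setup blippyp_ describes might look like the following; the port, buffer size, resolution, and rates here are placeholder values, not taken from the conversation:

```
# Hypothetical ffserver.conf sketch for an swf webcam stream
Port 8090
BindAddress 0.0.0.0

<Feed webcam.ffm>
File /tmp/webcam.ffm
FileMaxSize 20M
</Feed>

<Stream webcam.swf>
Feed webcam.ffm
Format swf
VideoSize 320x240
VideoFrameRate 15
VideoBitRate 512
NoAudio
</Stream>
```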
[09:24] <blippyp_> maybe, but your initial settings worked fine for me - but I connected locally - so across the internet in a different country or whatever might need some tweaking - you'd have to test it
[09:24] <jonascj> of course my swf stream now is full height because I haven't wrapped it in <img>.
[09:24] <blippyp_> but it worked great - I really liked it
[09:24] <blippyp_> mjpeg always started at the beginning when I opened it
[09:25] <jonascj> the mjpeg picture looks great for me but the swf looks horrible :)
[09:25] <blippyp_> the swf was more 'real' time - it stay with what I was currently doing
[09:25] <jonascj> but that must be a matter of settings.
[09:25] <blippyp_> really?
[09:25] <blippyp_> weird?
[09:25] <jonascj> http://thecheat.colding.com:8090/webcam.swf
[09:25] <blippyp_> checking it out
[09:25] <jonascj> compared to http://thecheat.colding.com:8090/webcam.mjpeg
[09:27] <jonascj> But I'll just have to fiddle the settings I think. Maybe enable some more libraries in my ffmpeg
[09:27] <jonascj> right now it is compiled with "./configure --enable-libv4l2". Maybe having some other options would be good
[09:28] <jonascj> blippyp_: thank you for your input - I'll try to make something round the .swf format or maybe .mp4 which is supported in html5 and newer browsers.
[09:29] <blippyp_> yeah - not sure why - but if you tweak your settings it would alter how the video is displayed...
[09:29] <stalintrotsky> blippy, is there a filter for ssa subs?
[09:29] <blippyp_> like I said, because I did mine locally, it may have been much better for me - you'd have to test it - but I can certainly agree with you that your mjpeg does look better
[09:30] <blippyp_> if you wanted to stay 'with the times' mp4 or webm would likely be the way to go
[09:30] <blippyp_> ssa? no - just ass format - unless you can find an external filter for it
[09:31] <blippyp_> jonascj: no problem, it was fun - I learned a few things from it - and I'm just as happy as you - I meant to figure out how to use ffserver again anyways, so it was a pleasure...  ;)
[09:32] <blippyp_> that fishcam is a cool idea....
[09:32] <blippyp_> I love fish
[09:40] <K4T> where can I download FFMPEG build with libfdk_aac enabled?
[09:40] <blippyp_> what os are you using?
[09:40] <K4T> windows
[09:40] <K4T> pls dont tell me that I have to compile it ;[
[09:40] <blippyp_> I'm guessing you'd have to re-compile it yourself
[09:40] <blippyp_> sorry
[09:41] <K4T> nooooooooooooooooooooooooooo :<
[09:41] <blippyp_> if you can get on Arch, it's enabled by default
[09:41] <blippyp_> either way - it's probably much easier to encode under any linux distro
[09:42] <K4T> is it hard to compile it on Windows?
[09:42] <K4T> or maybe is there any other encoder which is built into ffmpeg on windows?
[09:42] <blippyp_> I just re-compiled mine to enable the frei0r plugins (was very disappointed with them so far though)
[09:42] <blippyp_> I doubt it's any harder on windows than linux - but you need the right software and libraries
[09:43] <K4T> hm, so you can compile it on Windows, yes? You have environment ready, yes?
[09:43] <blippyp_> what do you mean by other encoder which is built into ffmpeg?
[09:43] <blippyp_> yes you can compile on windows
[09:44] <K4T> I thought you could compile it now because you have everything prepared to do this :P I think I misunderstood you.
[09:44] <blippyp_> there was someone on here the past couple of days who was doing exactly that - if you stick around and keep an eye out and ask, you'll likely find someone who is familiar enough with doing it on windows who can help give you pointers
[09:45] <blippyp_> it's not that I have 'everything' prepared - but much of it
[09:45] <K4T> I have to encode audio to AAC, is it possible with current ffmpeg build on windows?
[09:45] <blippyp_> and the extra libraries are very easy to get with a package manager...  ;)
[09:45] <blippyp_> it should be
[09:46] <blippyp_> but I think it's still experimental - I use ac3 (mostly because I couldn't be bothered by the extra typing required for the aac)
[09:48] <K4T> I will try to compile :<
[09:49] <blippyp_> it's not that hard - just takes a little time and effort at first... ;)
[09:50] <K4T> https://trac.ffmpeg.org/wiki/CrossCompilingForWindows this looks good and easy
[09:52] <K4T> which Linux distro should I choose to compile ffmpeg for windows?
[09:53] <blippyp_> that probably doesn't matter - you might get away with using a livecd, and then you wouldn't need to install anything - Just burn it on DVD or put it on a USB and boot into it...  ;)
[09:54] <K4T> I will use Ubuntu on VBox
[09:54] <K4T> thank you for the advice
[09:54] <blippyp_> no problem - hope it works out for ya :)
[09:54] <K4T> same :D
[09:54] <blippyp_> ubuntu is probably a good choice for you tbh
[10:27] <brontosaurusrex> K4T, you can also use something lighter, like #!
[10:31] <K4T> like what?
[10:32] <brontosaurusrex> crunchbang <  a lite debian respin
[10:32] <K4T> I have installed Ubuntu on VBox right now, trying to compile ffmpeg
[10:32] <K4T> but I think it will be a long journey for me...
[10:32] <brontosaurusrex> i used to compile ffmpeg there all the time (only the *nix version though) and it was working fine
[10:35] <brontosaurusrex> nothing wrong with ubuntu either anyway ...
[10:53] <K4T> brontosaurusrex, I discovered that I can use libvo_aacenc to encode audio with AAC with official ffmpeg build. Hope it will be enough.
[12:47] <K4T> I`m trying to do 2-pass encoding with ffmpeg using the following command: ffmpeg.exe -y -i "movie.vob" -c:v libx264 -pass 1 -an -f mp4 NUL && ffmpeg.exe -i "movie.vob" -c:v libx264 -pass 2 "output.mp4"
[12:48] <K4T> but after the 1st pass I`m getting this error: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[12:49] <droetker2> I am trying to find a way, to download an ism/manifest. ISMDownloader can download the highest quality, but the highest quality, for some reason is bad. I want to download "QualityLevel Index="1" and mux into mkv. How?
[12:49] <K4T> here is full log: http://pastebin.com/dtavXfF2
[12:49] <droetker2> Any ismdownloader available, that allows me to pick the quality level?
[12:51] <JEEB> K4T, [libx264 @ 00000000047b1a00] constant rate-factor is incompatible with 2pass.
[12:51] <JEEB> this is what derps your second pass :P
[12:51] <JEEB> you didn't set any rate control -> CRF 23 is the default
[12:51] <K4T> but I didn't set a constant rate factor in my command
[12:51] <JEEB> and CRF doesn't need multiple passes
[12:51] <JEEB> yes, it's the default
[12:51] <K4T> ow, so what should I do to remove CRF?
[12:51] <JEEB> saner than setting some random bit rate
[12:52] <JEEB> just set a rate control mode
[12:52] <JEEB> if you need a specific size
[12:52] <JEEB> basically set a -b:v
[12:52] <JEEB> video bit rate
[12:53] <K4T> thank you, I`m trying
[12:53] <JEEB> but yeah, you only need multipass if you want the best results hitting a specific file size (average bit rate)
[12:53] <JEEB> CRF is what you want to use if you only care about getting some level of quality
[12:54] <blippyp_> K4T: are you limited in your file size? otherwise you should just use a crf (no second pass)
[12:54] <K4T> I`m just trying to reconstruct encoding setting from VIMEO video file which is prepared for mobile devices
[12:55] <K4T> settings*
[12:55] <blippyp_> what's the resolution of the video?
[12:55] <K4T> 480x270px
[12:56] <JEEB> anyways, rate control doesn't really have much to do with things related to that :)
[12:56] <JEEB> it's just that vimeo wants to make sure that no output is bigger than X
[12:57] <JEEB> even if it means that quality can get derped (it really depends on the source clip)
[12:57] <blippyp_> what's the maximum size limit? and how long is the video?
[12:58] <K4T> blippyp_, it doesn't matter because I will have to encode many different video files, I just need the encoding settings
[12:58] <K4T> I think JEEB helped me
[12:58] <blippyp_> K4T: determine your bitrate (in kbit/s) by this: MaxFileSizeInMB*8192/TimeOfVideoInSeconds
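blippyp_'s rule of thumb (target size in MB times 8192, divided by duration in seconds, giving kbit/s) can be checked with a quick shell snippet; the 25 MB / 120 s figures below are made up for illustration:

```shell
# bitrate_kbps = MaxFileSizeMB * 8192 / TimeOfVideoSeconds
max_size_mb=25   # hypothetical target file size, MB
duration_s=120   # hypothetical clip length, seconds
bitrate_kbps=$(awk -v s="$max_size_mb" -v t="$duration_s" \
  'BEGIN { printf "%d", s * 8192 / t }')
echo "-b:v ${bitrate_kbps}k"   # prints: -b:v 1706k
```

The resulting value is what JEEB suggests passing as -b:v for the two-pass run.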
[12:59] <K4T> ok
[12:59] <JEEB> generally if you want to support most mobile devices, -profile:v mainline , -level:v 3 and then keep refs within the level
[12:59] <JEEB> and the more stuff you want to support the more lax your settings can be
[12:59] <JEEB> newer devices support pretty much anything you can poke them with, up to high profile H.264, level 4 or more
[12:59] <JEEB> and not mainline, baseline profile
[13:00] <JEEB> and if you don't care about hitting an exact file size, just use crf, use the highest crf value that still looks good
[13:00] <JEEB> start from 23 or so, and go down if it looks bad, or go up if it looks good
[13:00] <JEEB> you can test with a few thousand frames or so, not the whole clip
[13:01] <K4T> thank you for the advice, it is very important to me
[13:17] <Mavrik> hmm, I think you can just put Main on pretty much everything right now, haven't seen a device that wouldn't do it for awhile
[13:21] <JEEB> yeah, it all depends on what you're aiming for
[13:30] <Jeroi> hello
[13:31] <Jeroi> I have problems using ffmpeg libs with Qt Creator Qt5 MinGW 4.8.1 on Windows
[13:31] <Jeroi> complains that errno_t doesn't name a type
[13:32] <Jeroi> size_t was not declared in this scope
[13:33] <Jeroi> and similar errors
[13:33] <Jeroi> I have include to pro file mingw/include folder
[13:35] <K4T> is it possible to change video resolution, but force width to be equal some const value so height will be automatic calculated by ffmpeg?
[13:38] <blippyp_> K4T -vf "scale=-1:360"
[13:39] <blippyp_> -1 on either w or h keeps the aspect ratio
[13:39] <K4T> yes, but this is problematic when video height is not div by 2 ;/
[13:39] <JEEB> the scale video filter examples contain a way to do that right :P
[13:39] <K4T> [libx264 @ 000000000432eb00] height not divisible by 2 (480x203)
[13:39] <JEEB> for things that need mod2
[13:39] <JEEB> or whatever other mod
[13:40] <JEEB> http://ffmpeg.org/ffmpeg-all.html#Examples-61
[13:40] <JEEB> "Increase the size, making the size a multiple of the chroma subsample values"
[13:41] <JEEB> this is what you need, the chroma subsampling (hsub/vsub) usage
[13:42] <K4T> what kind of black magic is this? :O
[13:46] <K4T> should I resize in 2nd pass too?
[13:47] <JEEB> of course
[13:47] <JEEB> otherwise the pictures you'd be coding wouldn't be the same with both passes
[13:48] <K4T> scale=480:trunc(ow/a/2)*2 - this is the solution for a height that has to be divisible by 2
[13:48] <K4T> JEEB, thank you one more time
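K4T's expression in a complete command, as a sketch - the input and output file names are placeholders:

```shell
# Fix the width at 480; the height follows the input aspect ratio but is
# truncated down to an even number, since libx264 needs mod-2 dimensions.
ffmpeg -i input.vob -vf "scale=480:trunc(ow/a/2)*2" -c:v libx264 output.mp4
```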
[14:39] <axelri> Hi! I've decoded a png to an AVFrame and want the final video to display that picture during several frames in the video file. My current solution is to do an "avcodec_encode_video2" for every frame I want to display, and then write the allocated packet to file. This seems inefficient though, since I'm encoding the same picture multiple times. Can I write the same packet multiple times to the file, or will that mess up the video timing?
[15:00] <K4T> heh, that pad filter is so confusing ;/
[15:02] <klaxa> in how far?
[15:02] <klaxa> i think the example makes it pretty clear
[15:03] <dannyzb> when i encode a file ( mp4 output ) to 2 outputs : 1 standard MP4 and one HLS format , but same video track , does it only encode once ?
[15:04] <dannyzb> ie. same encoding parameters in different containers for output
[15:04] <JEEB> it encodes twice
[15:04] <dannyzb> how do i avoid dual encoding?
[15:04] <JEEB> if you are encoding the same kind of tracks you might want to separate encoding and final muxing
[15:05] <dannyzb> how do i do that?
[15:05] <JEEB> you could use mkv or mpeg-ts or nut to output something first into stdout, and then you grab that with another ffmpeg or so, and then you -c copy there to those two outputs
[15:06] <JEEB> ffmpeg -i INPUT ENCODER_SETTINGS -f matroska - | ffmpeg -i - -c copy out.mp4 -c copy WHATEVER_YOU_DO_WITH_HLS
[15:06] <JEEB> something like that?
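JEEB's pipeline spelled out a little more fully; the codecs, file names, and HLS options are placeholder choices, not something confirmed in the conversation:

```shell
# Encode once, mux twice: the first ffmpeg writes matroska to stdout and
# the second stream-copies the already-encoded tracks into both containers.
# (Builds from this era may need "-strict experimental" for the native AAC
# encoder, or libfdk_aac/libvo_aacenc instead.)
ffmpeg -i input.vob -c:v libx264 -c:a aac -f matroska - \
  | ffmpeg -f matroska -i - \
      -c copy -movflags +faststart out.mp4 \
      -c copy -f hls out.m3u8
```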
[15:07] <dannyzb> JEEB ; would that use a lot of ram ?
[15:08] <JEEB> I don't see it using much more, of course you'd be running two ffmpeg processes now, but you are not running two encoders any more
[15:08] <JEEB> and the stdin stuff is generally buffered
[15:08] <klaxa> oh, you can use '-' for output now?
[15:08] <JEEB> I think you've always been able to do that as long as you set the format?
[15:08] <JEEB> there's no extension so ffmpeg can't guess
[15:08] <klaxa> i think i had some problems with it and used pipe:1
[15:09] <dannyzb> JEEB: perfect , thanks !
[15:09] <JEEB> pipe:1 should be the same
[15:09] <dannyzb> uuhhh now i remembered something
[15:09] <klaxa> maybe i used it incorrectly back then
[15:09] <dannyzb> i have one input , and i output 3 times : 1 highres mp4 , 1 highres HLS , 1 lowres MP4
[15:09] <JEEB> also the other ffmpeg should probably set -f before -i as well, to make stuff go faster
[15:10] <dannyzb> can i split the output from the source to go to both the pipe and a different output?
[15:10] <JEEB> dannyzb, well then you will have to do two encodes with the first process and have that output one to stdout and the other somewher else :P
[15:10] <dannyzb> yeaaa cool
[15:10] <dannyzb> one last thing , i've seen online ffmpeg doesn't use separate threads for 2 outputs .. is that true ?
[15:10] <dannyzb> that would make a serious waste of resources
[15:11] <JEEB> it does use quite a few threads but you'd have to ask someone who knows lavf for that :P
[15:11] <dannyzb> first i need somebody who can tell me what lavf is ;O)
[15:11] <JEEB> shorthand for libavformat
[15:11] <JEEB> the muxing/demuxing/input/output thing
[15:11] <JEEB> although I guess it's more correct to say AVIO
[15:12] <JEEB> or so
[15:12] <JEEB> for the input/output part
[15:12] <JEEB> since you give an AVIOContext to lavf if you want to open some custom IO of your own
[15:12] <dannyzb> is uuhuhh i have an idea
[15:12] <dannyzb> what if i make a double pipe
[15:12] <dannyzb> ie .. input + copy first pipe - second pipe gets data as input so HDD is only read once
[15:12] <dannyzb> then the third pipe is what you said
[15:13] <JEEB> you are not making any sense :P
[15:14] <dannyzb> ffmpeg -i file -f copy | -i - -o #firstoutput -o #secondoutput -f matroska | #output HLS+mp4#
[15:15] <dannyzb> #secondoutput = -
[15:15] <JEEB> I don't really see that making any more sense than just doing the encoding in the first process
[15:15] <dannyzb> it doesn't for MP4+HLS
[15:15] <dannyzb> it does when i encode two different files
[15:15] <dannyzb> 2 output streams to 3 files would read twice
[15:16] <JEEB> what
[15:16] <JEEB> you're still not making sense
[15:16] <JEEB> you've got one input, you encode that for two outputs
[15:16] <JEEB> one goes into pipe to get muxed for two things
[15:16] <JEEB> the other goes straight to output
[15:16] <JEEB> done
[15:16] <dannyzb> ah right ..
[15:16] <dannyzb> and it would multithread perfectly
[15:17] <dannyzb> one thread per output format
[15:17] <JEEB> the fuck I know, but I'm pretty sure if you will get bottlenecks they'd be somewhere else
[15:17] <JEEB> than just because a thread not being somewhere
[15:17] <dannyzb> JEEB: not sure , what i prefer about a pipe is i create a second ffmpeg process
[15:17] <JEEB> I don't know about lavf or the IOs
[15:17] <JEEB> but I'm pretty sure that if what I just described fails
[15:17] <JEEB> it fails because of something else
[15:18] <dannyzb> JEEB : I read everywhere online ffmpeg wastes a lot of resources for splitting output
[15:18] <dannyzb> separating output to 2 separate ffmpeg processes negates that
[15:18] <JEEB> I have no idea, ask actual lavf/AVIO knowing people
[15:18] <JEEB> but don't assume
[15:18] <dannyzb> what channel is that?
[15:19] <JEEB> if you really want you can poke the -devel channel, but I think you're just worrying about things that don't exist
[15:21] <dannyzb> JEEB: when 80% of my HDD/CPU is going to conversions i just wanna make sure (:
[15:21] <JEEB> that sounds like stuff is properly threaded just fine :P
[15:22] <dannyzb> well
[15:22] <JEEB> also you're running two libx264 encodes
[15:22] <dannyzb> everything online says otherwise but it's from 2011/2012 when separate outputs were just added .. it's likely not relevant anymore
[15:22] <JEEB> as well as most probably resizing for the "low quality" stream
[15:22] <JEEB> those are more probably doing any kind of bottlenecking if anything is
[15:23] <dannyzb> JEEB: yea , thing is , i found out decode takes considerably more resources than the encode
[15:23] <JEEB> then your input is either consuming to read or consuming to decode
[15:23] <dannyzb> JEEB : SATA2 Raid10 with nothing running besides 1 encode
[15:24] <dannyzb> i wouldn't call it a read bottleneck
[15:24] <dannyzb> so it's decode
[15:24] <JEEB> parsing a lot of shit can take cpu time and friends just fine :P
[15:24] <JEEB> more likely then that it's the decoding, but still
[15:24] <JEEB> assumption is the mother of fuck-ups
[15:24] <dannyzb> maybe it's the resize?
[15:25] <JEEB> resize can very well be a bottleneck as well
[15:25] <JEEB> since swscale is not exactly great
[15:25] <JEEB> anyways, herp derp
[15:36] <dannyzb> JEEB : basically , i noticed that even though i have identical output formats , the higher resolution the source file - the longer it takes to encode
[15:36] <dannyzb> is there a way to make it input faster ?
[15:40] <c_14> I'm pretty sure it's not that the input is slower, but that ffmpeg has to work with more pixels. Which in turn takes longer.
[15:42] <dannyzb> c_14 : the output is the same resolution - how would more pixels slow it down ? interpolation ?
[15:43] <c_14> Pretty much, it can't just delete pixels because that would destroy the picture.
[15:44] <JEEB> you were wondering about threading and you are so oblivious to how this shit actually works? color me surprised :P
[15:44] <JEEB> s/wondering/oh so sure that your problem was the lack of threading in the IO/
[15:44] <JEEB> anyways, your input has to be fully decoded
[15:44] <JEEB> then it gets scaled and possibly converted to another pixel format
[15:44] <JEEB> then it gets fed to the encoder
[15:45] <dannyzb> if for some reason it does any of those things twice , that would be a massive waste of CPU .. either that or running any of those on the same thread as output
[15:45] <dannyzb> thats why i was interested (:
[15:46] <JEEB> so both your outputs scale/change pixfmts?
[15:46] <JEEB> and the scaling is the same?
[15:47] <JEEB> if yes, then it might be doing that twice, once for each encode (if you have the setup you noted before)
[15:47] <dannyzb> -x264opts crf=$crf:vbv-maxrate=800:vbv-bufsize=270000 -vcodec libx264 -movflags +faststart -preset veryfast -tune film -y /var/www/media/video/hd/{$hash}.mp4
[15:47] <dannyzb> two outputs with these settings , second one with a different CRF and resolution
[15:47] Action: JEEB facepalms
[15:47] <Hello71> {$hash}
[15:48] <JEEB> yeah, sure -- ok, you have made it clear that you're ignoring what I write and you are just oblivious to shit
[15:48] <JEEB> thanks for confirming that I'm OK to use my time in a more productive way
[15:48] Action: JEEB is off
[15:48] <dannyzb> JEEB: dude wtf .. resolution is different and pixfmt how do i know .. thought thats the encoder job
[15:50] <dannyzb> and if you actually read my response you would see i said i use different resolutions for each output
[15:54] <dannyzb> JEEB: ignoring the bickering -- thanks for the help man
[16:40] <dannyzb> JEEB: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs#Teepseudo-muxer
[16:40] <dannyzb> apparently piping was integrated into ffmpeg internally
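With the tee muxer from that wiki page, the single-encode/two-container case collapses into one process; the file names and HLS choice below are placeholders:

```shell
# One encode, two muxers: tee duplicates the encoded packets into both
# outputs, so no second ffmpeg process or pipe is needed.
ffmpeg -i input.vob -map 0 -c:v libx264 -c:a aac \
  -f tee "out.mp4|[f=hls]out.m3u8"
```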
[19:05] <t4nk594> hi, im trying to cross-compile ffmpeg from an arch linux machine for win64. i compiled opus/vpx/ogg in a separate directory (with this https://trac.ffmpeg.org/wiki/UbuntuCompilationGuide guide) but executing configure fails with: ERROR: opus not found
[21:24] <dericed> is there a preferred way to convert PAL to NTSC. I'm using -r:v ntsc to duplicate frames, but would a pulldown process look better?
[21:31] <blippyp_> dericed: http://lists.ffmpeg.org/pipermail/ffmpeg-user/2013-July/016098.html
[21:36] <blippyp_> jonascj: did you get your camera working the way you want yet?
[21:48] <droetker> I want to download a ism/manifest. The manifest, has several qualitylevels, where 0 is the best quality. I want the second best quality though. How do I convert/raw dump the "quality level 1" of the ism/manifest
[22:17] <sor_> is there a way to force ffplay to constantly update the video file?
[00:00] --- Wed Apr 23 2014
