[Ffmpeg-devel-irc] ffmpeg.log.20121026

burek burek021 at gmail.com
Sat Oct 27 02:05:01 CEST 2012


[01:11] <oG`LoKi> why when I -vf "transpose=1" to a video and output it to the same format it's half the size in bytes ?
[01:11] <oG`LoKi> i rotated it 90 degrees but it ends up less than half the size of the original
[01:12] <llogan> oG`LoKi: because the default settings apply a bitrate of 200 kilobits/s which is fairly low most of the time
[01:12] <llogan> libx264 default settings are different and more sane
[01:12] <oG`LoKi> whats the - option for that?
[01:13] <llogan> it depends on your desired output format
[01:13] <oG`LoKi> http://pastebin.com/FsLD6jgF
[01:14] <oG`LoKi> it was a .mov
[01:14] <oG`LoKi> and i converted it to a .mov
[01:14] <llogan> so the problem is that the output quality is too low?
[01:15] <oG`LoKi> the video actually looks just as good as the original
[01:15] <oG`LoKi> i was just wondering how that is
[01:15] <llogan> you're using x264 to encode which is the best H.264 encoder out there.
[01:16] <llogan> using the transpose filter requires ffmpeg to re-encode, and by default (lib)x264 is used as the encoder for .mov output
[01:16] <llogan> ...and the default settings for libx264 are pretty good.
[01:16] <llogan> see the CRF section of https://ffmpeg.org/trac/ffmpeg/wiki/x264EncodingGuide
[01:16] <llogan> if you want to monkey with it more
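llogan's suggestion from the CRF guide could look something like this (filenames are hypothetical; CRF 23 is the libx264 default, and lower values mean higher quality):

```shell
# Rotate 90 degrees clockwise and control quality explicitly via CRF
ffmpeg -i input.mov -vf "transpose=1" -vcodec libx264 -crf 18 output.mov
```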
[01:16] <oG`LoKi> ok just kind of confused at how the file is less than half the size
[01:17] <oG`LoKi> than that of the original
[01:18] <llogan> you must consider the fact that whoever made the original may not have known what they were doing, and additionally not all encoders are equal.
[01:18] <oG`LoKi> apple iphone made it
[01:18] <oG`LoKi> lol
[01:20] <oG`LoKi> i had the phone upright and when i copied the file to my computer it ended up sideways so i need to rotate it. I'm fine with what ffmpeg produced. Just wondering if i did something wrong.
[01:21] <llogan> the only thing i recommend is to add "-acodec copy" so you can copy the audio instead of re-encode it
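A sketch of that command (input/output names hypothetical):

```shell
# The transpose filter forces a video re-encode, but audio can be passed through
ffmpeg -i input.mov -vf "transpose=1" -acodec copy output.mov
```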
[01:21] <oG`LoKi> ahhh yea it did take a long time to encode a 15 second clip.
[01:22] <llogan> encoding takes time when you're using an "efficient" encoder
[01:24] <chrisballinger> hello
[01:24] <chrisballinger> Does anyone here have any experience hooking up Core Audio to avcodec_encode_audio2
[01:25] <durandal_1707> what is Core Audio?
[01:25] <chrisballinger> Mac OS X / iOS audio API
[01:25] <wm4> hooking up to do what?
[01:26] <chrisballinger> I'm trying to write a streaming audio/video app, and have video (mostly) working
[01:26] <chrisballinger> but the audio part is harder for me to figure out because there's way less documentation
[01:27] <chrisballinger> My input data looks like this: AudioBuffer audioBuffer = audioBufferList.mBuffers[y]; uint16_t *audio_frame = (uint16_t*)audioBuffer.mData; UInt32 mNumberChannels = audioBuffer.mNumberChannels; UInt32 mDataByteSize = audioBuffer.mDataByteSize;
[01:28] <chrisballinger> mDataByteSize is 2048
[01:28] <chrisballinger> but when I'm filling up the samples buffer it seems like the c->frame_size is wrong (1152)
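The mismatch chrisballinger describes is common: MP2-family encoders consume a fixed c->frame_size (here 1152) samples per avcodec_encode_audio2() call, which rarely matches Core Audio's buffer size (2048 bytes = 1024 16-bit samples). A hedged sketch of the usual fix, accumulating input and emitting exact-size frames (the FrameAccumulator type and emit callback are made up; plain C, no ffmpeg calls shown):

```c
#include <stdint.h>
#include <string.h>

#define FRAME_SIZE 1152  /* c->frame_size for MP2-family encoders */

/* Collect arbitrarily sized input chunks, emit fixed-size frames. */
typedef struct {
    int16_t buf[FRAME_SIZE];
    int filled;              /* samples currently buffered */
} FrameAccumulator;

/* Feed n mono samples; emit() fires once per complete 1152-sample frame
 * (this is where an AVFrame would be filled and avcodec_encode_audio2 called). */
static void accumulate(FrameAccumulator *a, const int16_t *in, int n,
                       void (*emit)(const int16_t *frame)) {
    while (n > 0) {
        int take = FRAME_SIZE - a->filled;
        if (take > n)
            take = n;
        memcpy(a->buf + a->filled, in, (size_t)take * sizeof(int16_t));
        a->filled += take;
        in += take;
        n  -= take;
        if (a->filled == FRAME_SIZE) {
            emit(a->buf);
            a->filled = 0;
        }
    }
}
```

With 1024-sample Core Audio buffers, the first complete frame is only emitted during the second callback, with 896 samples left buffered.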
[01:32] <chrisballinger> does anyone here have experience encoding live data with ffmpeg?
[07:32] <hendry> how can i mix/align a .m4a audio track onto a raw .mkv?
[08:44] <cbsrobot> hendry: itsoffset and map
[09:10] <hendry> cbsrobot: not sure how you list the "channels" IIUC for the map
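cbsrobot's suggestion might be spelled out like this (filenames and offset are hypothetical; -itsoffset applies to the input that follows it, -map 0:v takes video from the first input and -map 1:a audio from the second):

```shell
# Mux the .m4a track onto the .mkv, shifting the audio by 0.5 seconds
ffmpeg -i video.mkv -itsoffset 0.5 -i audio.m4a \
    -map 0:v -map 1:a -vcodec copy -acodec copy out.mkv
```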
[13:04] <natrixnatrix89> when I run ffmpeg on a capture device and don't set time limit with -t, I can stop it by pressing q at any time I want..
[13:04] <natrixnatrix89> Would it be possible to achieve the same thing with cron? that it starts ffmpeg.. and then stops that process the next day at specified time by sending "q"?
[13:05] <natrixnatrix89> I could use -t and set specific duration.. but if ffmpeg is recording a live stream.. and if it is intermittent.. then -t would be for the resulting video not the real time elapsed recording.. So I'm wondering if there's any way to stop ffmpeg at a specific time.. or by specific exec command from php for example..
[13:43] <ne2k> how do I specify when reading or writing a raw video file whether it is interlaced or not? I've taken a raw file and split it into two files, each containing alternate fields, but when I play them back (using ffplay) the rate is doubled. I don't know how to tell it to treat it as a non-interlaced, 25frames per second file, rather than an interlaced 50fields per second file
[13:44] <Tjoppen> try -r 25 before -i
[13:52] <ne2k> Tjoppen: ffplay doesn't recognize -r
[13:53] <ne2k> nor -i for that matter
[13:53] <Tjoppen> transcode then. or remux to a container
[13:54] <Tjoppen> like ffmpeg -r 25 -i foo%d.jpg out.mov
[13:54] <Tjoppen> and -vcodec copy of course
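Applied to headerless raw YUV rather than JPEGs, Tjoppen's remux might look like this (resolution and pixel format are hypothetical; rawvideo input requires both, plus the frame rate, since the file itself carries none of that metadata):

```shell
# Tell ffmpeg how to interpret the raw file, then remux into .mov at 25 fps
ffmpeg -f rawvideo -pix_fmt yuyv422 -s 720x576 -r 25 -i in.yuv -vcodec copy out.mov
```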
[14:08] <natrixnatrix89> when I run ffmpeg on a capture device and don't set time limit with -t, I can stop it by pressing q at any time I want..
[14:08] <natrixnatrix89> Would it be possible to achieve the same thing with cron? that it starts ffmpeg.. and then stops that process the next day at specified time by sending "q"?
[14:08] <natrixnatrix89> I could use -t and set specific duration.. but if ffmpeg is recording a live stream.. and if the stream is intermittent.. then -t would be for the resulting video not the real time elapsed recording.. So I'm wondering if there's any way to stop ffmpeg at a specific time.. or by specific exec command from php for example..
[14:17] <Tjoppen> natrixnatrix89: send SIGTERM to it
[14:17] <Tjoppen> just like pressing ^C
[14:17] <Tjoppen> ffmpeg has a signal handler that makes it write footers etc. (I think)
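Tjoppen's approach as a crontab sketch (device, paths, and times are made up): ffmpeg traps SIGTERM and finalizes the output file, just like pressing q or ^C.

```shell
# Stop the previous day's recording cleanly at 08:59, start a fresh one at 09:00
59 8 * * * kill -TERM "$(cat /tmp/ffmpeg.pid)"
0 9 * * * ffmpeg -f v4l2 -i /dev/video0 /recordings/out.mkv & echo $! > /tmp/ffmpeg.pid
```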
[14:24] <natrixnatrix89> hmm. you're right..
[14:24] <natrixnatrix89> so basically pressing 'q' = ctrl+c
[15:06] <ne2k> Tjoppen: is it not possible to simply specify that a yuv file is to be interpreted at a specific frame rate and interlace/non-interlace setting?
[15:06] <ne2k> Tjoppen: if I have to put it in a container to do this, can you suggest how I go about containering raw YUV video?
[15:08] <ne2k> should I look at y4m?
[15:12] <Tjoppen> again, try mov
[15:19] <ne2k> Tjoppen: sorry, didn't pick up that last time
[15:21] <ne2k> Tjoppen: I still can't work out how to force it to consider the file as either interlaced or deinterlaced
[15:21] <Tjoppen> pass
[15:22] <ne2k> Tjoppen: presumably there is nothing in a rawvideo file that says whether or not it is interlaced
[15:23] <Tjoppen> rawvideo is.. raw video :)
[15:23] <wm4> are there any swscale conversions for which color ranges (when using sws_setColorspaceDetails()) don't work correctly yet?
[15:26] <ne2k> Tjoppen: exactly. so I can't work out how it knows when playing it back or transcoding it whether it's 50 fields per second interlaced or 25 frames per second progressive
[15:27] <ne2k> does it guess?
[15:28] <ne2k> http://ffmpeg.org/pipermail/ffmpeg-user/2011-May/000868.html this is a thread I've found that seems relevant but doesn't really provide answers
[15:31] <Tjoppen> wait.. rawvideo should require specifying resolution and pixel format
[15:31] <ne2k> Tjoppen: it does.
[15:32] <Tjoppen> how it knows is that you specify it
[15:32] <ne2k> Tjoppen: and you can also specify which field is first in interlacing using -top. but you don't seem to be able to specify that it is /not/ interlaced
[15:32] <Tjoppen> *shrugs*
[15:32] <ne2k> Tjoppen: resolution and pixel format do not include interlaced/progressive information
[15:32] <ne2k> afaik
[16:26] <ne2k> I'm banging my head here. I'm trying to use yuvcorrect and yuyv2y4m to force the type to be progressive, and it just isn't working -- the file is still being considered as interlaced when I play it
[16:27] <ne2k> unless I've done something really stupid earlier in the process
[16:32] <ne2k> I think I might have done something stupid earlier in the process. or made a stupid assumption
[16:42] <wm4> I made a quick and dirty test, which seems to show that swscale limit conversions seem to be broken? http://bpaste.net/show/MaUnvL0dunAGeDUiQOuD/
[16:44] <wm4> the output basically converts a single (gray) pixel value using different colorspaces and ranges
[16:45] <wm4> most obviously, the destination range is ignored if the destination is rgb
[16:47] <swedish-chef> hi everyone, I'm having some issues with corrupted h.264 files after ffmpeg crashed
[16:47] <ne2k> no, actually, I don't think I've been stupid. I think I just can't get this damn thing to work
[16:47] <swedish-chef> basically I use ffmpeg for screen recording on my application, so it's a child process
[16:48] <swedish-chef> when the parent process crashed, it gets killed as well, and the resulting file becomes corrupted
[16:49] <swedish-chef> so my questions are 1) is there an easy way to recover the file? 2) are there any settings that I can use (apart from recording MPEG1 file) to prevent this kind of issue from happening when the application crashes?
[18:27] <ne2k> ok. I've worked out what the problem is. the problem is that when I take an interlaced mpeg2 file and convert it to rawvideo yuv, it gets deinterlaced. I do not want this. I want it interlaced. if I output yuv4mpegpipe it doesn't do this, but yuv4mpegpipe only supports planar formats -- I want simple yuyv or similar. how can I stop it deinterlacing when converting to rawvideo?
[18:47] <dericed> ne2k: why do you want interlaced raw video? For an mpeg2 stream in broadcast I can understand needing odd lines first then even lines for display, but do you really need data for odd lines stored in the bitstream prior to the even ones? If so, I guess you could use yadif to output one frame per field and have a half height, double frame rate raw stream that would use interlaced order in the bitstream
[18:48] <ne2k> dericed: I am experimenting with part frame encoding for very low latency. when I get my hands on a piece of hardware it will give me the lines from composite video in order (i.e. interlaced) -- i'm just trying to do some experiments before I get that hardware using offline processing
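dericed's field-per-frame idea might be tried with yadif's field-rate mode (filenames are hypothetical; mode 1 emits one output frame per input field, doubling the frame rate, though yadif interpolates back to full height rather than leaving fields at half height):

```shell
ffmpeg -i interlaced.vob -vf yadif=1 -f rawvideo -pix_fmt yuyv422 out.yuv
```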
[18:49] <cellofellow> I'm trying to screen capture with x11grab. The size of my screen is 1366x768, but when I set it to that size, I get a BadMatch X Error.
[18:50] <cellofellow> If I set it significantly smaller (not a few pixels smaller, but like 1280x720), it does work.
[18:50] <cellofellow> But I want to capture the whole screen.
[18:52] <cellofellow> eBCbRVyQ
[18:52] <cellofellow> oops
[18:52] <cellofellow> http://pastebin.com/eBCbRVyQ
[18:52] <ne2k> nice password ;-)
[18:53] <cellofellow> what, the URL?
[18:56] <cellofellow> figured it out, nvm. It was shifted 24 frames sideways
[18:57] <cellofellow> s/frames/pixels/
[18:58] <ne2k> dericed: how would I make yadif do that? I can't work out what the options mean! http://avisynth.org.ru/yadif/yadif.html
[18:59] <ne2k> dericed: I don't really see how a deinterlacing filter would help. I just want NOT to deinterlace. the source is interlaced to start with
[18:59] <ne2k> it's just that ffmpeg is deinterlacing it for some reason
[19:01] <relaxed> by default?
[19:01] <ne2k> relaxed: I'll paste some output, give me two ticks
[19:01] <relaxed> and the command
[19:03] <ne2k> how do I just get it to output whether a file is being treated as interlaced?
[19:03] <ne2k> original file shows as 50fps in vlc. it came straight off a DVD
[19:04] <relaxed> it should give a hint on the video stream line in the output
[19:06] <ne2k> http://pastebin.com/zagKi0Zz
[19:06] <ne2k> "Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 50.00 (50/1)" this is weird
[19:08] <relaxed> The input and output video streams appear to have the same framerate.
[19:09] <ne2k> it does, doesn't it
[19:09] <ne2k> I must be doing something stupid wrong, then
[19:09] <ne2k> will come back to it tomorrow. thanks for your help
[19:10] <ne2k> been working on this too long without a break! need space to let my head recharge
[19:10] <ne2k> ooh, it's the weekend! perfect
[19:21] <mudkipz> I have a command for livestreaming a region of my desktop using x11grab and alsa to an rtmp server.
[19:22] <mudkipz> If I use libmp3lame as the encoder for the audio then it works great for both Ustream and Livestream. However if I switch to AAC (I've tried various different ones) then it breaks when I stream to livestream but still works when I stream to ustream.
[19:23] <mudkipz> I don't know if I'm doing anything wrong. I'm going to pastebin my commands for both.
[19:26] <mudkipz> http://pastebin.com/2Rv5jXjw
[19:27] <mudkipz> The reason for switching to AAC is because apparently it's required for mobile device support.
[19:31] <mudkipz> With this version of the command it only spits out a single 'ALSA buffer xrun' before crashing. If I mess with it (by adding -re options) I can get it to try to stream giving me really low bitrates constantly dropping interspersed with 'ALSA buffer xruns' for about 3 seconds, the channel itself never actually goes live.
[19:35] <mudkipz> oh wait, I had those backwards, the one that says livestream should say ustream and vice versa.
[19:36] <mudkipz> In the pastebin I mean.
[19:38] <mudkipz> I fixed the pastebin and added my output. http://pastebin.com/Wv3XP1cH
[19:38] <mudkipz> I've been at this for some time now and am at my wits end. Any feedback would be appreciated.
[19:58] <mudkipz> Oh my god I just figured it out. Apparently livestream was dropping the stream because the bitrate was too high in the first couple frames.
[19:59] <JEEB> use vbv young padawan
[19:59] <mudkipz> Strange that it doesn't happen with libmp3lame though. I think it may be a server issue and not clientside
[19:59] <JEEB> if you are doing something over limited bandwidth, use vbv maxrate and bufsize
[20:01] <mudkipz> umm, how exactly do I do that? I thought the -b:a and -b:v should handle it.
[20:01] <wakko222> is there any way to make a 5.1 channel mp4 from a mpeg2 file?
[20:03] <JEEB> mudkipz, nah -- that's (unless the format/encoder is really dumb) just setting the end resulting average bit rate
[20:04] <JEEB> -maxrate and -bufsize set the vbv parameters
[20:04] <JEEB> maxrate is the maximum average bandwidth over bufsize, aka the minimum transfer speed needed to watch that thing without any buffering after the initial buffering
[20:04] <JEEB> and bufsize is the size of the buffer in which maxrate is calculated
[20:05] <JEEB> you can set the bufsize by time if you want by setting it to <maxrate>*<amount of seconds of buffering done>
[20:05] <JEEB> there, that was the quick n' dirty herp derp into maxrate/bufsize
[20:06] <JEEB> different encoders will keep to your set buffer size and maxrate differently, libx264 is currently the one that keeps to your limits the best
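Put together, JEEB's advice might look like this (rates, sizes, capture geometry, and the RTMP URL are all hypothetical; here bufsize = maxrate × 3 seconds of buffering):

```shell
ffmpeg -f x11grab -s 1280x720 -i :0.0 -f alsa -i default \
    -vcodec libx264 -b:v 1000k -maxrate 1500k -bufsize 4500k \
    -acodec libmp3lame -b:a 128k -f flv rtmp://live.example.com/app/streamkey
```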
[20:06] <mudkipz> hmm, I'm not gonna lie, this is kind of over my head right now, but that just means I have some reading up to do.
[20:10] <JEEB> mudkipz, basically you have a buffer (which is what the player side of things should initially buffer as well), and then you have the maximum (average) rate within that given buffer. Thus, if a player initially buffers bufsize amount of bits, and then the client has at least the bandwidth of maxrate (and some for the container etc.) you should not have to buffer again and the transfer should be smooth
[20:10] <JEEB> it really isn't any harder than that
[20:11] <smellynosery> Hi - how can I output to a fifo?
[20:11] <mudkipz> Okay, I think I understand.
[20:11] <mudkipz> Thanks for your help!
[20:12] <JEEB> You would be surprised how many people either a) don't know / use vbv at all even when doing streaming , and b) try to calculate if vbv is actually working without a vbv-aware bit rate viewer :D
[20:13] <mudkipz> Hahaha, well considering until now I was one of those people I can't say I'm too surprised.
[20:27] <smellynosery> Can I set the video/audio PIDs when using the output format of mpegts?
[20:29] <mudkipz> Thanks again JEEB, using maxrate and bufsize is giving me way better results!
[20:29] <chrisballinger> Does anyone here have experience streaming video FROM mobile devices?
[20:36] <CR0W> Hi. Where could I get documentation about the sws_flags option? There's nothing in the manual.
[20:38] <JEEB> CR0W, it sets swscale flags manually
[20:38] <JEEB> http://ffmpeg.org/doxygen/trunk/swscale_8h-source.html
[20:38] <JEEB> you can see their explanation here
[20:38] <JEEB> or well... as good as it gets ^^;
[20:40] <JEEB> you generally do not touch it, as the swscale defaults tend to be sane'ish
[20:40] <JEEB> but f.ex. if you want that your code path gives same results throughout architectures
[20:40] <JEEB> you want to set the bitexact flag, for example
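A sketch of forcing reproducible scaler output across architectures (filenames hypothetical; the scaling algorithm is named explicitly alongside the bitexact flag):

```shell
ffmpeg -i in.mkv -vf scale=320:240 -sws_flags bicubic+bitexact out.mkv
```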
[20:42] <CR0W> JEEB okay, but there are macros there, what are the arguments for this option?
[20:43] <CR0W> JEEB I mean, eg. there's SWS_BICUBIC, what will I give in the cmdline for this, BICUBIC, or bicubic or what?
[20:43] <relaxed> CR0W: look at -sws_flags in ffmpeg -h
[20:44] <CR0W> JEEB okay, I know. Are you still here? If I don't set the cpu options, will it default to whatever my cpu supports?
[20:44] <JEEB> yes
[20:44] <CR0W> OK. And what algo does it use by default, bilinear?
[20:44] <JEEB> no idea, but you usually set that with the scale filter's settings, no?
[20:44] <JEEB> also the options are listed in libswscale/options.c it seems
[20:45] <JEEB> and probably what relaxed said
[20:45] <CR0W> relaxed I don't have sws_flags in ffmpeg -h.
[20:45] <relaxed> ffmpeg -h full
[20:46] <CR0W> relaxed OK, I see it, thanks.
[20:46] <JEEB> anyways, in most cases you really don't have to set sws_flags
[20:46] <JEEB> but it depends on the exact needs
[20:47] <CR0W> JEEB I'm often using ffmpeg to reencode my videos to my device which has a small screen (320x240) and I thought I'll use the best scaling available :)
[20:47] <JEEB> uhh
[20:47] <JEEB> wouldn't that be setting stuff in the scale filter?
[20:47] <JEEB> which would then internally set the sws flags
[20:47] <JEEB> or am I completely incorrect?
[20:47] <CR0W> I don't understand.
[20:47] <relaxed> -sws_flags sets the scaling algo
[20:48] <JEEB> oh, so the actual scaler is selected there manually always?
[20:48] <CR0W> I use -vf scale to scale the output.
[20:48] <JEEB> and not a setting of vf scale
[20:48] <CR0W> JEEB vf scale doesn't have an algo setting, only dimmensions.
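For CR0W's use case, downscaling to a 320x240 device screen, this is how the algorithm would be chosen on the command line (filenames hypothetical; lanczos is a commonly favored choice for downscaling):

```shell
ffmpeg -i input.avi -vf scale=320:240 -sws_flags lanczos output.mp4
```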
[20:48] <JEEB> ok, in that case yes
[20:48] <relaxed> It would make sense to move it to the sale filter but this is ffmpeg.
[20:49] <JEEB> relaxed, inorite
[20:49] <relaxed> scale*
[20:49] <CR0W> It would be cool to add it to the manual.
[20:49] <JEEB> I actually said what I said because it's done like that in the x264cli's scaling filter that uses swscale
[20:49] <JEEB> it lets you set the algo
[20:49] <relaxed> also, the requirement for '-h full' is not helpful at all.
[20:50] <JEEB> also swscale flags always remind me of how I forgot to add the bitexact flag in my encoder tests and suddenly they were failing on everything but the standard code path, which was IA32/x86_64 asm'd path
[20:50] <CR0W> What does biexact do?
[20:50] <JEEB> just makes sure that the results are the same on all architectures
[20:50] <CR0W> Is that based of bilinear?
[20:51] <JEEB> https://dl.dropbox.com/u/175558/screenshots/vbindiff.png <- I only noticed my mistake when I found out that the vsynth->RGB output differed by one on the usual x86 asm path and the arm path
[20:51] <CR0W> JEEB Um so that's for when I'd make a script and give it to someone who uses e.g. arm, to make sure he gets the same result?
[20:51] <JEEB> yes, if you need exactly the same results
[20:52] <JEEB> because the optimized code paths can be one-off
[20:52] <relaxed> doesn't fate do this for us?
[20:52] <JEEB> nope, you still need to set the flag
[20:52] <relaxed> I mean. doesn't fate test for this?
[20:52] <JEEB> nope
[20:52] <JEEB> if I forgot to add that swscale flag into my test
[20:52] <JEEB> it will gladly work
[20:53] <JEEB> until it gets run on something else :P
[20:54] <CR0W> How does sinc differ from lanczos? Wow, I need to upgrade my knowledge on resampling
[20:55] <JEEB> sinc in theory is one of the algorithms that are liked by theorists. But before selecting something you'd really want to see how it works in swscale :P
[20:56] <CR0W> Yeah I know. I just thought lanczos is another name for the sinc function.
[21:59] <iluminator105>  mplayer tv:// -tv driver=v4l2:width=640:height=480:device=/dev/video1 -fps 30 how would i capture this
[22:52] <chrisbal> hey guys, can anyone here help me with avcodec_encode_audio2?
[00:00] --- Sat Oct 27 2012

