[Ffmpeg-devel-irc] ffmpeg.log.20121010

burek burek021 at gmail.com
Thu Oct 11 02:05:01 CEST 2012


[00:49] <bparker> I'm having a big problem with ffmpeg/libavformat not saving (or maybe corrupting?) options I'm setting in the muxer.... the problem is described and shown here: http://dpaste.com/811746/
[00:50] <bparker> like the metadata I set, as well as the bitrate, time base etc. are all removed/corrupted on the final output file, but dumping the context itself all shows the correct info
[00:50] <bparker> and I have no idea why this is happening :(
[00:50] <bparker> I'm using libavformat directly in my own application
[00:50] <bparker> and latest version (1.0) of ffmpeg
[00:51] <bparker> on x64 arch linux
[00:53] <ubitux> it's missing your cmd line
[00:53] <bparker> huh?
[00:53] <ubitux> for the encoded_by, -map_metadata is your friend
[00:53] <bparker> I'm not using the command line program
[00:53] <bparker> I'm using the libavformat library itself
[00:53] <ubitux> oh, ok sorry
[00:53] <bparker> in my own application
[00:53] <ubitux> then it's better with the code :)
[00:54] <bparker> well yes... but I was hoping there was something I obviously overlooked or that someone might know to check something, before having to create an entire (smaller) test program that reproduces this, since I can't post the full code
[00:55] <ubitux> at least paste the ffmpeg related code :p
[00:55] <ubitux> it's hard to guess what you are doing
[00:56] <bparker> ok
[00:56] <ubitux> it's a bit late for me to start reading some other code though so...
[00:56] <ubitux> gl ;)
[00:56] Action: ubitux &
[00:56] <ubitux> (hint: you have a dedicated ml for API usage)
[00:56] <ubitux> (might get more help there)
[01:03] <lake> hi guys, I am using silencedetect filter and it works well thus far.
[01:03] <lake> It generates output like this: https://gist.github.com/3862017
[01:04] <lake> I am trying to figure out the best way to remove those silent parts from my input.
[01:04] <lake> I want to automate it since I have about a hundred mpeg files I need to use it on.
[01:05] <lake> before i go writing a ruby or bash script, i figured i would ask if anyone had advice
[01:12] <ubitux> lake: i just sent a patch to simplify this
[01:12] <ubitux> but for your needs right now, just split the string :p
[01:13] <ubitux> lake: at some point, you will be able to exploit results like in the ffprobe example at the end: http://ffmpeg.org/pipermail/ffmpeg-devel/2012-October/132180.html
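
For automating this along the lines lake describes, one approach is to run the filter once, capture its log, and extract the reported boundaries; the sketch below is hedged (file names, threshold and minimum duration are placeholders, and the grep pattern assumes the "silence_start: N" / "silence_end: N" lines shown in the gist above):

    # Run silencedetect once and keep the log; -f null - discards the decoded output.
    ffmpeg -i input.mpg -af silencedetect=noise=-30dB:d=2 -f null - 2> silence.log
    # Pull out the detected boundaries; pairing each silence_end with the next
    # silence_start gives the non-silent segments to extract with -ss/-t.
    grep -oE 'silence_(start|end): [0-9.]+' silence.log
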
[01:13] <lake> ubitux: can you please link me to your patch. i would be more than happy to try it out and comment if it helps
[01:13] <ubitux> :)
[01:13] <bparker> ubitux: http://pastebin.com/F073SBxv
[01:13] <bparker> there's the code
[01:14] <lake> ubitux: btw, thanks so much for your email months ago. you were quick to respond to me! thanks!
[01:14] <ubitux> what mail? :p
[01:15] <ubitux> i send quite a bunch of mails everyday i'm sorry i don't remember :p
[01:15] <ubitux> bparker: did you base your code on the files in doc/examples?
[01:16] <bparker> yea
[01:16] <lake> ubitux: i don't expect you to. i'm just excited to see you here. Re: [ffmpeg] Silencedetect filter
[01:16] <lake> from July
[01:16] <lake> lol
[01:17] <ubitux> i don't remember :(
[01:18] <lake> no worries, really, i was just asking for more information and you helped. thanks.
[01:19] <ubitux> okay :)
[01:19] <ubitux> bparker: ok so one issue at a time...
[01:19] <ubitux> are you able to keep the encoded_by with the ffmpeg cmd line?
[01:20] <ubitux> if so, check how it's done in ffmpeg.c/ffmpeg_opt.c
[01:21] <bparker> I tried this: ffmpeg -i ~/121009-1AAA.mp4 -metadata encoded_by=test -vcodec copy test.mp4
[01:21] <ubitux> about the encoding settings, same thing, compare with ffmpeg tool, and try to play with libx264 AVOptions maybe (x264opts for instance)
[01:21] <bparker> but test.mp4 does not have encoded_by tag
[01:21] <bparker> I've been playing with them for days on end now ><
[01:22] <bparker> really out of ideas as to what to try
[01:22] <ubitux> there might be a bug with the metadata then
[01:22] <ubitux> or just unsupported feature
[01:22] <ubitux> like the mp4 muxer just ignoring them.
[01:23] <ubitux> anyway, it's 1:22 o clock here, and i have to wake up in 5½h
[01:23] <ubitux> so i'm leaving for real now :)
[01:23] <bparker> mov ignores it also, but then I tried flv and it worked.
[01:23] <bparker> sigh
[01:23] <bparker> ok
[01:24] <bparker> well thanks
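
The per-muxer behaviour bparker describes above (mp4 and mov dropping the tag, flv keeping it) can be reproduced from the shell with a small loop; this is only a rough sketch with placeholder file names:

    # Write the same tag into several containers, then check which muxers keep it.
    for ext in mp4 mov flv mkv; do
        ffmpeg -y -i in.mp4 -metadata encoded_by=test -vcodec copy -acodec copy "out.$ext"
        ffprobe "out.$ext" 2>&1 | grep -i encoded_by
    done
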
[04:43] <pzich> http://www.reddit.com/r/AskReddit/comments/117q87/c/c6k1tou
[08:05] <lfiebach> Hi, what can be wrong if ./configure for ffmpeg says libv4l2 not found ?
[08:14] <ubitux> lfiebach: because you added --enable-libv4l2 switch without having the libv4l2 library available?
[08:14] <ubitux> note: this library is a wrapper, you don't actually need it most of the time
[08:15] <ubitux> it just helps with some particular devices
[08:19] <lfiebach> ok, what do i need for webcam support ? Only indevs ?
[08:20] <lfiebach> --enable-libv4l is enabled but the sysroot is different so i think ffmpeg did not find it
[08:28] <lfiebach> ubitux:
[08:47] <ubitux> lfiebach: yes indevs should be enough
[08:47] <ubitux> grep V4L2 config.h to confirm it is enabled
[08:47] <ubitux> you should have: #define CONFIG_V4L2_INDEV 1
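
Once CONFIG_V4L2_INDEV is enabled, a quick capture test looks roughly like this (the device path and duration are assumptions; adjust them to the actual webcam):

    # Grab 10 seconds from the first video4linux2 device into a file.
    ffmpeg -f video4linux2 -i /dev/video0 -t 10 webcam-test.mkv
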
[08:48] <ashka> hello
[08:49] <ashka> I'm trying to loop a video for a precise amount of time
[08:49] <ashka> (I mean, not x times, just something like precisely 1h)
[08:49] <ashka> I'd like to cut out a part out of the end of the original video as well.
[08:49] <ashka> I've found -loop but it doesn't exist
[08:49] <ashka> I'm using ffmpeg version 0.8.3-6:0.8.3-7
[08:50] <ubitux> you sure it's ffmpeg?
[08:50] <ubitux> we are in ffmpeg 1.0
[08:50] <ashka> that's the latest version available in my repos
[08:50] <ashka> it's sid
[08:51] <ubitux> that's not ffmpeg then, it's a fork, but well.
[08:51] <ashka> I should compile ffmpeg 1.0 and come back then ?
[08:51] <ubitux> let's try to solve your problem instead :)
[08:51] <ubitux> so anyway, what is this loop thing, i don't understand
[08:52] <ubitux> you want to store loop info in your output file?
[08:52] <ashka> I have a video which is a musical thing
[08:52] <ashka> it's very short, something like 30 sec
[08:52] <ashka> and I'd like to loop it for a certain amount of time, like 1 hour
[08:52] <ubitux> oh ok.
[08:53] <ubitux> mmh let me think..
[08:54] <ubitux> i'm not sure that's supported but i'm gonna test something
[08:54] <ubitux> give me a while
[08:54] <ashka> sure, thanks for your help
[08:59] <ubitux> mmh i'm kind of able to do it with a loop count, now looking for a way to do it with a duration
[09:00] <ashka> ubitux: don't give yourself a headache huh
[09:01] <ashka> the original has a fixed time
[09:01] <ashka> I can guess the loop count to get the right duration
[09:02] <ubitux> mmh it doesn't work :(
[09:03] <ubitux> a cmd line like ffmpeg -f lavfi -i movie=loopme.mkv:loop=0 -t 60 -y out.mkv was supposed to work, but it doesn't unfortunately
[09:04] <ubitux> i'm opening a bug
[09:10] <ashka> well at least I made you discover a bug
[09:11] <ubitux> the ticket if you want to watch it: https://ffmpeg.org/trac/ffmpeg/ticket/1799
[09:11] <ubitux> now i don't see much of a solution
[09:11] <ashka> do you have the line to loop it x times ?
[09:11] <ubitux> except concatenating manually several times with the concat filter or something
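
That manual concat-filter workaround can look roughly like this; it is only a sketch (three repetitions and the 90-second cut are illustrative numbers, and the clip is assumed to have exactly one video and one audio stream):

    ffmpeg -i in.webm -i in.webm -i in.webm \
           -filter_complex '[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]' \
           -map '[v]' -map '[a]' -t 90 out.mkv
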
[09:12] <ubitux> ashka: i thought it was working, but it isn't :p
[09:12] <ashka> even for a non-given duration?
[09:12] <ubitux> yes
[09:12] <ubitux> it's the same issue
[09:12] <ashka> oh well.
[09:12] <ashka> thanks for your help
[09:12] <ubitux> sorry :)
[09:13] <ubitux> i'm trying to find another trick.
[09:14] <ubitux> maybe with a pipe..
[09:14] <ubitux> ashka: what's the input format?
[09:15] <ashka> WebM
[09:15] <ashka> but I could convert it
[09:16] <brontosaurusrex> ashka, how about playback side? mplayer can loop
[09:16] <ashka> I'm open to any solution
[09:16] <ashka> yet, the machine has no X server
[09:17] <brontosaurusrex> uhmm, explain where the playback will happen, elaborate, i'll be back in 10, smoke time ....
[09:18] <ashka> I want to get the new video into a file
[09:18] <ubitux> ashka: i'm going to try to add the feature somehow, do you have some time? :)
[09:18] <ashka> looped video* actually
[09:18] <ashka> ubitux: if you've got motivation, well yes
[09:18] <ashka> I've got plenty
[09:19] <ubitux> perfect, then give me something like 1 hour to see if that's possible easily
[09:19] <lake> I have videos recorded at 640x480. When I use dvdauthor it complains about "unknown mpeg2 aspect ratio"
[09:20] <lake> my capture script looks like this: https://gist.github.com/3863702
[09:21] <ubitux> ashka: while i'm trying to patch it, you should start cloning the repository and build ffmpeg
[09:21] <ubitux> so you can apply the patch later
[09:21] <ashka> I have just cloned it
[09:21] <ashka> thanks for your time
[09:32] <divVerent> again one of those "why doesn't this work" questions... https://gist.github.com/3863736
[09:32] <divVerent> reordering the filter_complex commands doesn't fix it
[09:32] <ubitux> i don't think -filter_complex options can be stacked
[09:33] <divVerent> they can't? damn
[09:33] <divVerent> oddly, I get the same error when I reverse the order of the options
[09:33] <divVerent> so it apparently does store them both
[09:33] <ubitux> -filter_complex '[0:0]null[VIDEO_IN]; [VIDEO_IN]null[VIDEO_OUT]'
[09:33] <divVerent> ubitux: yes, that works
[09:33] <divVerent> but that's hard to edit ;)
[09:33] <divVerent> I prefer the command line to stay orderly... so if possible, I'd like to avoid that
[09:33] <ubitux> calling -filter_complex again will override the string
[09:33] <ubitux> (i believe)
[09:33] <divVerent> ubitux: nope ;)
[09:33] <divVerent> when I reverse them
[09:33] <divVerent> it still complains about [VIDEO_IN]null[VIDEO_OUT]
[09:34] <divVerent> there can be more than one graph, just how to connect them
[09:34] <ubitux> it might be parsed, then overridden
[09:34] <divVerent> why would anyone code that ;)
[09:34] <ubitux> cause you're not supposed to need multiple -filter_complex?
[09:34] <divVerent> what is the nb_filtergraphs variable good for then?
[09:34] <ubitux> ok i think i've the loop patch but i'm unable to test it :(
[09:35] <ubitux> divVerent: ah dunno, then i might be wrong
[09:35] <ubitux> maybe for multiple outputs?
[09:35] <ubitux> (output files)
[09:35] <divVerent> apparently, the feature to have multiple graphs is intended
[09:35] <divVerent> and I am trying to figure out how to use it
[09:36] <divVerent> yes, possibly the graphs have to be independent
[09:36] <divVerent> that'd be an annoying limitation though
[09:40] <ashka> oh btw ubitux
[09:40] <ashka> what is it with avconv ?
[09:40] <ashka> ffmpeg says that avconv should be used instead
[09:44] <ubitux> it's the fork i was talking about
[09:44] <ubitux> http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html
[09:44] <ashka> oh okay
[09:45] <ubitux> tl;dr: ^F packaging on that page
[09:45] <ashka> btw, are you using a fork of the git repo for your patch ?
[09:45] <ashka> I could push your changes into my local repo to build ffmpeg
[09:45] <ubitux> i'll likely send you a patch to git am, if i succeed
[09:46] <ubitux> that's not yet guaranteed :(
[09:51] <ubitux> oh i got it working.
[09:51] <ubitux> but it will only work with mpeg files (so you have to re-encode or remux it)
[09:54] <ubitux> ashka: do you have a working ffmpeg git/master?
[09:57] <ubitux> ashka: anyway, with git/master: wget 'http://b.pkh.me/0001-lavf-file-WIP-loop.patch'; git am 0001-lavf-file-WIP-loop.patch
[09:57] <ubitux> and then something like ./ffmpeg -fileloop 1 -i loopme.mpg -t 3600 -y out.mkv
[09:58] <ubitux> (assuming you first ran ffmpeg -i loopme.webm loopme.mpg)
[09:58] <ubitux> i'll submit this patch later
[10:54] <ubitux> ashka: so? :)
[10:55] <ashka> ubitux: it just finished compiling
[10:55] <ashka> currently trying it out
[10:56] <ashka> [matroska,webm @ 0x1da3340] Unknown entry 0x18538067\n[matroska,webm @ 0x1da3340] Unknown entry 0x1A45DFA3
[10:56] <ashka> got a whole lot of these all of a sudden
[10:56] <ashka> it's spamming
[10:56] <ashka> not sure if it's still writing output
[10:56] <ubitux> what's your cmd line?
[10:56] <ashka> no, output is stuck
[10:56] <ashka> ffmpeg -fileloop 1 -i in.webm -t 3600 out.mkv
[10:57] <ashka> oh nvm
[10:57] <ashka> I didn't see the line assuming blah
[10:57] <ashka> my bad
[10:59] <ubitux> explanation: mpeg streams are concatenatable, so i can just restart sending packets from the beginning
[10:59] <ubitux> it's not possible with mkv
[11:00] <ubitux> ideally we should fix the movie=...:loop=0 thing, but hopefully the patch should be a temporary workaround for your needs :p
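
Since MPEG program streams are byte-concatenatable, another hedged workaround is to repeat the remuxed clip at the file level and then trim the result with a stream copy (120 repetitions of a roughly 30-second clip is just an illustrative way to reach one hour):

    # Build a long source by repeating the clip, then cut it without re-encoding.
    for i in $(seq 1 120); do cat loopme.mpg; done > long.mpg
    ffmpeg -i long.mpg -t 3600 -c copy -y out.mpg
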
[11:00] <ubitux> btw, you might even be able to ffmpeg -fileloop 1 -i loopme.mpg -t 3600 -c copy -y out.mpg
[11:01] <ashka> oh really ?
[11:01] <ashka> should be way faster
[11:01] <ashka> I'll try
[11:01] <ashka> hmm nope
[11:01] <ashka> av_interleaved_write_frame(): Invalid argument
[11:01] <ubitux> ok :(
[11:01] <ashka> ([matroska @ 0x222dca0] Can't write packet with unknown timestamp)
[11:01] <ubitux> note the mpg → mpg (input and output)
[11:01] <ashka> oh
[11:02] <ashka> [mpeg @ 0x2e38ca0] packet too large, ignoring buffer limits to mux it\n[mpeg @ 0x2e38ca0] buffer underflow i=0 bufi=44059 size=44461
[11:02] <ashka> still, is that okay ?
[11:02] <ubitux> dunno i didn't try
[11:02] <ashka> yet it looks like it's copying anyway
[11:02] <ashka> I'll wait for it to be done
[11:20] <ashka> hmm
[11:20] <ashka> ubitux: it worked
[11:20] <ashka> I have a little additional question
[11:21] <ashka> can I cut out a few frames every time at the beginning of the original ?
[11:21] <ashka> since it's a music thing I need to make it so it syncs
[11:21] <ubitux> in multiple steps :p
[11:22] <ubitux> ./ffmpeg -i in.webm -ss 12 loopme.mpg
[11:22] <ubitux> to skip 12 seconds
[11:22] <ashka> oh
[11:22] <ashka> how can I skip 15 frames ?
[11:22] <ubitux> mmh a bit more painful
[11:22] <ashka> hmm
[11:22] <ashka> maybe a precise amount of ms
[11:23] <ubitux> is it only video?
[11:23] <divVerent> hm... I suppose I am compiling ffmpeg wrong... with virtually identical options, my mplayer based libavcodec encoding code outperforms ffmpeg's own conversion by far... just looking for ideas where to look
[11:23] <ashka> video + audio
[11:23] <ubitux> then use a precise ts
[11:23] <ubitux> -ss 12.345
[11:23] <ashka> okay, I'll try
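
To skip a fixed number of frames rather than a time, one way is to convert the frame count to seconds first; this sketch assumes a 25 fps input, which is only a guess:

    # 15 frames at 25 fps = 0.6 s; pass that to -ss while remuxing to mpg.
    ss=$(awk 'BEGIN { printf "%.3f", 15 / 25 }')
    ffmpeg -i in.webm -ss "$ss" loopme.mpg
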
[11:23] <divVerent> ah, okay... the x264 parameters differ, just why... ;)
[11:23] <JEEB> haha, yeah -- ffmpeg has problems if you just want to do trim(0,100) or something like that, because you have to set stuff by times
[11:24] <JEEB> divVerent, ffmpeg should nowadays use stuff pretty close to x264's defaults by default
[11:24] <divVerent> exactly
[11:24] <divVerent> that's why I wonder
[11:24] <divVerent> so does my mplayer based encoding code (NOT mencoder)
[11:24] <divVerent> I found one differing x264 option... took it out now
[11:24] <JEEB> I think one of the only things it does differently from command line x264 is that libx264 doesn't limit refs by level
[11:25] <JEEB> but that's comparing x264cli and libx264
[11:25] <divVerent> tune=animation... okay, that brings ffmpeg up to 54fps, still with mplayer I get like up
[11:25] <divVerent> yes, and I am comparing two different libavcodec/libavformat frontends
[11:25] <divVerent> which are compiled against the same library
[11:26] <divVerent> CPU usage is 200% (I have two cores) in both cases
[11:26] <divVerent> AH... I see ONE difference. The aac codec...
[11:26] <divVerent> wonder if libfdk_aac is slow ;)
[11:26] <divVerent> nope, that's not it
[11:27] <divVerent> is there any way to "Profile" ffmpeg in a simple way?
[11:27] <divVerent> like, to get simple output like "25% time in decoding, 30% in filtering, 50% in encoding, -5% in lying"? ;)
[11:27] <ubitux> someone was asking on the devel channel yesterday i think :p
[11:28] <divVerent> HA! found the cause
[11:28] <divVerent> ffmpeg not at fault ;)
[11:28] <divVerent> stupid notebook... suddenly reduced clock speed
[11:28] <ubitux> you have a -benchmark(_all?) option btw
[11:28] <divVerent> still, such profiling would be nice
[11:29] <divVerent> BTW, style question regarding ffmpeg options: be lazy or add stream suffix always? ;)
[11:29] <divVerent> for options that only the v codec but not the a codec knows
[11:29] <divVerent> e.g. -tune:v animation
[11:30] <ubitux> no idea :)
[11:30] <divVerent> and thanks, -benchmark_all is nice but weirdly inaccurate
[11:30] <divVerent> probably needs processing the output (summing up by the various step names)
[11:31] <Mavrik> divVerent: you'll have to use a tool like gprof on an ffmpeg build with debug information
[11:31] <Mavrik> for profiling
[11:32] <divVerent> right, I didn't want the large cannons though ;)
[11:32] <divVerent> just the little info useful for tuning options, like, in which of the major steps most time is spent
[11:32] <divVerent> so I know if I e.g. have to change the swscale parameters or the x264 codec ones
[11:32] <divVerent> gprof slows down the run a lot, which makes it somewhat unattractive
[11:33] <divVerent> (okay, actually, compiling with -pg does </nitpick>)
[11:34] <Mavrik> yeah well, that's because it's collecting data about function calls :)
[11:34] <divVerent> sure, I know what gprof does... it's just not the tool of choice in many cases
[11:34] <divVerent> in fact, I'd be highly surprised if it even works right when linking against a non-profiling libx264
[11:38] <ashka> ubitux: works fine :) thanks a lot for the workaround
[11:38] <Mavrik> divVerent: it works, just the x264 data is missing :)
[11:38] <ubitux> ashka: great, i'll submit a patch tonight, it might get upstream later
[11:38] <Mavrik> anyway, as you noticed, if you want fast benchmarking you won't get accurate results
[11:39] <divVerent> sure
[11:41] <divVerent> why is the hall of shame page down "until it is updated"... C&D?
[12:11] <ashka> oh btw ubitux
[12:11] <ashka> this is minor, but you might be able to fix it
[12:12] <ashka> a video of 10h will last 10:00:00.01
[12:12] <ashka> totally minor
[12:14] <divVerent> ubitux: my current (mostly autogenerated) filter chain:
[12:14] <divVerent> -filter_complex '[SUBVIDEO_IN]scale=max(480\,floor(320*dar/2+.5)*2):max(floor(480/dar/2+.5)*2\,320)[scaled]; [scaled] setsar=1:1 [VIDEO_OUT];[0:0][0:1]overlay[SUBVIDEO_IN]'
[12:51] <divVerent>     .long_name         = NULL_IF_CONFIG_SMALL("FFM (FFserver live feed)"),
[12:51] <divVerent> is this format considered stable in ffmpeg?
[12:51] <divVerent> i.e. can I "safely" use this as interchange format from a program to ffmpeg?
[12:51] <divVerent> it is probably the simplest somewhat feature complete format we have
[12:52] <divVerent> the one catch probably is that it depends on some enums in ffmpeg headers, especially AVCodecID and that pixel format enum
[12:53] <divVerent> (in my application, I'd only want to send rawvideo and PCM audio, and let ffmpeg do the encoding)
[12:53] <burek> divVerent, did you check format 'nut'
[12:54] <burek> ffm is ffserver-specific format
[12:54] <divVerent> nut is quite complex
[12:54] <divVerent> I want something I can generate from like 100 lines of code
[12:54] <divVerent> or is there a spec of nut, and is what ffmpeg's muxer does way over the top?
[12:55] <divVerent> I mean, probably nut is far from that complex, when you restrict it to the particular use case (rawvideo/pcm)
[12:55] <burek> check the source code :)
[12:55] <divVerent> the source code is exactly what doesn't help me here :)
[12:55] <divVerent> nutenc.c is quite complex still
[12:56] <divVerent> a spec would help here, obviously
[12:56] <burek> did you try
[12:56] <burek> :)
[12:57] <divVerent>  I actually did some months ago
[12:57] <divVerent> but found nothing useful
[12:57] <divVerent> ah, now I see
[12:57] <divVerent> the spec is hidden in mplayer sources
[12:57] <burek> wtf
[12:58] <divVerent> http://code.google.com/p/mplayer-mirror/source/browse/trunk/DOCS/tech/mpcf.txt?r=11131 looks like what I had wanted
[12:58] <burek> fflogger doesn't like nut format apparently :)
[12:58] <divVerent> hehe
[13:00] <divVerent> one thing about nut format I don't get though
[13:00] <divVerent> is it allowed if the timestamps are "messy"? ;)
[13:00] <divVerent> like, can one happily encode half a second of video, then half a second of audio, etc.
[13:01] <divVerent> or do timestamps have to be monotonic across all streams (like e.g. ogg requires IIRC)
[13:02] <burek> http://wiki.xiph.org/Nut_Container
[13:57] <divVerent> is it a bug or a feature that I can't extract subtitles from an ogm file to ass directly
[13:57] <divVerent> but can when going via mkv?
[13:57] <divVerent> ogm uses CODEC_ID_TEXT subtitles
[13:57] <divVerent> trying to convert this to ass says that there is no decoder for the codec, which is true
[13:57] <divVerent> but using the "copy" codec to plug into mkv, then going from mkv to ass works
[13:59] <divVerent> i.e. "works as intended", or "to the tracker"?
[13:59] <divVerent> https://gist.github.com/3865174 - the shell script part in question
[14:01] <divVerent> $t here is the codec_type
[14:05] <divVerent> the mkv file claims to have the subtitles in "subrip" format, which is what I would also expect here
[14:07] <ubitux> i don't understand the question/problem
[14:07] <ubitux> what are you trying to do?
[14:07] <divVerent> I want to export ogm subtitles as .ass
[14:07] <divVerent> so I can use them with the vf_ass filter
[14:10] <divVerent> haha, I now see why my hack works
[14:10] <divVerent>     {"S_TEXT/UTF8"      , AV_CODEC_ID_SUBRIP},
[14:10] <divVerent>     {"S_TEXT/UTF8"      , AV_CODEC_ID_TEXT},
[14:10] <divVerent>     {"S_TEXT/UTF8"      , AV_CODEC_ID_SRT},
[14:10] <divVerent> I start with AV_CODEC_ID_TEXT, which I plug into mkv via -codec copy
[14:10] <divVerent> it becomes S_TEXT/UTF8 in the mkv
[14:10] <divVerent> now, when READING this mkv file again, it becomes AV_CODEC_ID_SUBRIP
[14:10] <divVerent> which can be converted to .ass fine
[14:12] <divVerent> so... doesn't that mean that the ogm would have chances to work, if the demuxer decided on AV_CODEC_ID_SUBRIP instead of AV_CODEC_ID_TEXT?
[14:13] <divVerent> if yes, this sounds like trac material
[14:27] <ubitux> divVerent: and ffmpeg -i in.ogm out.ass doesn't work?
[14:27] <divVerent> exactly
[14:28] <divVerent> to do it with two short commands: ffmpeg -i in.ogm -codec copy -map 0 temp.mkv && ffmpeg -i temp.mkv out.ass
[14:28] <divVerent> works fine
[14:28] <divVerent> oops, actually the latter may not work
[14:28] <divVerent> needs -vn -an probably ;)
[14:28] <divVerent> but you get the idea
[14:37] <ubitux> can i have a sample?
[14:37] <divVerent> don't have one at a place from where I can upload... but my guess is that any ogm with subs will work
[14:38] <divVerent> hm... maybe I can make one quickly somehow
[14:41] <divVerent> I just encoded real crap quality... don't care ;)
[14:42] <relaxed> divVerent: -map 0:s would copy just the subs
[14:42] <divVerent> damn... ffmpeg refuses to write the ogm file I want... need ogmmerge then ;)
[14:43] <divVerent> it also refuses to plug srt INTO ogm
[14:43] <ubitux> :)
[14:46] <divVerent> http://ompldr.org/vZnRxbw/out2.ogm
[14:46] <divVerent> test file for this
[14:46] <ubitux> thx
[14:47] <ubitux> nice video test source
[14:47] <divVerent> hehe
[14:47] <divVerent> it's a test image generator I am working on
[14:47] <ubitux> maybe we could improve our -f lavfi -i testsrc :)
[14:48] <divVerent> probably not. Different applications need different test images.
[14:48] <divVerent> haha, that one also uses a LCD hack
[14:48] <divVerent> to get simple digit rendering code ;)
[14:48] <ubitux> :)
[14:50] <divVerent> my filter BTW is a dynamically loadable filter for some mplayer fork... it PROBABLY should be easy to port to other code bases
[14:50] <divVerent> given it basically works on raw yuv444p output
[14:50] <divVerent> in planes
[14:50] <divVerent> the background is BTW a nice test for telecine/detelecine filters ;)
[14:50] <ubitux> Dialogue: 0,0:00:00.50,0:00:02.50,Default,Hello, world!
[14:50] <divVerent> mplayer's -vf filmdint horribly fails it, -vf pullup works
[14:50] <ubitux> ok got it.
[14:51] <divVerent> but only via mkv, right?
[14:51] <divVerent> these are the exact times I set
[14:51] <ubitux> just a quick hack
[14:51] <ubitux> -            st->codec->codec_id = AV_CODEC_ID_TEXT;
[14:51] <ubitux> +            st->codec->codec_id = AV_CODEC_ID_SUBRIP;
[14:51] <divVerent> hehe, I see
[14:51] <ubitux> in libavformat/oggparseogm.c
[14:51] <ubitux> i'm looking at making a text decoder
[14:51] <ubitux> i thought we had one..
[14:52] <ubitux> divVerent: do you know the markup of subtitles in ogg?
[14:52] <ubitux> no markup at all? that's really plain text?
[14:52] <divVerent> don't know
[14:52] <divVerent> never seen them have any markup
[14:52] <divVerent> they always look plain
[14:52] <ubitux> not even <i> and crap like that?
[14:52] <divVerent> haha, now I see the difference between CODEC_ID_SRT and CODEC_ID_SUBRIP
[14:52] <divVerent> I am not aware of any
[14:52] <ubitux> ok
[14:53] <ubitux> yeah the SUBRIP is to work around a problem
[14:53] <ubitux> originally the packets included the timestamps
[14:53] <divVerent> right
[14:53] <divVerent> like in SRT files
[14:53] <ubitux> and it was a pain for the mkv demuxer for example
[14:53] <ubitux> (it had to write the ts in the payload)
[14:53] <divVerent> and SUBRIP is the timestamp-less version
[14:53] <divVerent> which uses pts
[14:53] <ubitux> yes, that's actually the "codec"
[14:53] <ubitux> the srt demuxer should be fixed
[14:53] <ubitux> to output subrip packets
[14:54] <divVerent> so is CODEC_ID_SRT still in use?
[14:54] <divVerent> ah, THERE it still is used ;)
[14:54] <ubitux> and we could get rid of CODEC_ID_SRT
[14:54] <ubitux> :)
[14:54] <divVerent> right
[14:54] <ubitux> i need to do a lot of work on the subtitles
[14:54] <divVerent> personally, I think the right way to handle ogm is to use the SUBRIP format
[14:54] <ubitux> a long work in progress :)
[14:54] <divVerent> because you almost always embed srt files into ogm
[14:54] <ubitux> really?
[14:54] <divVerent> this is just how these are made with ogmmerge
[14:54] <divVerent> it wants srt input
[14:54] <divVerent> and just sticks them in with no conversion of markup if any
[14:54] <ubitux> what happens if you merge a file with markup?
[14:54] <divVerent> it doesn't care
[14:54] <divVerent> it just plugs it in
[14:54] <ubitux> great..
[14:55] <ubitux> well then i guess the patch i propose could be pushed
[14:55] <ubitux> if you can wait until tonight i'll submit it
[14:55] <divVerent> ogm isn't very common any more
[14:55] <ubitux> (or you can submit the patch right now)
[14:55] <divVerent> and I doubt this has ever been specified
[14:55] <divVerent> given ogm was created as a hack to make an "avi replacement that can embed subtitles"
[14:55] <divVerent> based on ogg
[14:55] <ubitux> what about ogg?
[14:56] <divVerent> at least ffmpeg's ogg demuxer has no subtitle support
[14:56] <divVerent> not sure if the container supports them
[14:56] <divVerent> of course, one can always cause the ogm specific code to be invoked ;)
[14:56] <divVerent> as ogm basically is a superset
[14:56] <divVerent> with FOURCCs and such crap
[14:56] <divVerent> xiph.org probably should know if this ever was intended...
[14:56] <ubitux> i mean, does ogg define the way to store text subtitles?
[14:57] <ubitux> anyway, i'll submit tonight for comments
[14:57] <ubitux> we'll see
[14:57] <divVerent> right
[14:58] <divVerent> if anyone has a complaint, they will say so ;)
[14:58] <ubitux> rhaa i need to find some time for all the subtitles thing :(
[14:58] <divVerent> this MAY break players that use lavf to decode subtitles (hint: mplayer) if for some reason they handle ogm/text but not ogm/srt
[14:58] <divVerent> can't imagine why though
[14:58] <ubitux> why would it break?
[14:58] <ubitux> the demuxer is changing the codec
[14:58] <divVerent> exactly
[14:58] <ubitux> so mplayer will be aware of it
[14:59] <divVerent> if a player for some reason only supports CODEC_ID_TEXT but not CODEC_ID_SUBRIP
[14:59] <divVerent> then it will break
[14:59] <divVerent> in fact, it looks like it WILL break in mplayer2 at least
[14:59] <divVerent> but wonder how it plays srt-in-mkv then
[15:00] <divVerent> https://gist.github.com/3865504 - this code section makes me think that
[15:00] <ubitux> oh that sucks.
[15:01] <divVerent> just, IF that is the case, shouldn't mkv playback with such subs already be broken
[15:01] <ubitux> i remember seeing some patches in mplayer indeed
[15:01] <ubitux> but not mplayer2
[15:01] <ubitux> since it's still mostly based on libav by default
[15:02] <ubitux> but mplayer2 has its own demuxer by default so..
[15:02] <divVerent> it IS broken
[15:02] <ubitux> mkv* demuxer
[15:02] <divVerent> in mplayer2
[15:02] <divVerent> but only with -demuxer lavf
[15:02] <divVerent> because it has its own mkv demuxer
[15:02] <divVerent> and thus by default doesn't hit this issue
[15:02] <ubitux> the mkv demuxer is pretty nice in mplayer2 :p
[15:02] <divVerent> okay, go ahead then
[15:02] <ubitux> so no reason to fallback on lavf :D
[15:02] <divVerent> it's easy to fix in mplayer2
[15:02] <ubitux> no it's not
[15:02] <divVerent> sure it is
[15:02] <ubitux> because libav has no SUBRIP codec id
[15:02] <divVerent> they just have to support CODEC_ID_SUBRIP too
[15:02] <ubitux> have fun.
[15:02] <divVerent> haha
[15:03] <divVerent> even then, should this really stop ffmpeg?
[15:03] <divVerent> okay, mplayer-svn then can add CODEC_ID_SUBRIP to that list ;)
[15:03] <ubitux> it's already done i believe
[15:03] <divVerent> and mplayer2 uses libav anyway so they won't ever see the patch
[15:03] <ubitux> (in mplayer)
[15:03] <ubitux> mplayer2 is supposed to be buildable against the two
[15:03] <ubitux> so you need some conditional crap
[15:03] <divVerent> I just wonder one thing
[15:03] <ubitux> you might want to discuss this with uau :)
[15:03] <divVerent> the alternative would be better CODEC_ID_TEXT support, BUT...
[15:04] <divVerent> in case of ogm, the proper type is actually SUBRIP I am pretty sure
[15:04] <ubitux> we could introduce a codec id text decoder
[15:04] <ubitux> but it's different
[15:04] <divVerent> I think I once saw <i> tags on the screen with mplayer years ago
[15:04] <divVerent> in ogm files
[15:04] <ubitux> it would mean raw text
[15:04] <divVerent> so apparently someone did it
[15:04] <divVerent> right, in case of ogm, the actual source is typically srt though
[15:04] <divVerent> and ogm encodes no "more exact" info
[15:04] <ubitux> maybe we could assume text is subrip in all/most of the cases
[15:05] <ubitux> because most muxers will end up muxing crap at some point
[15:05] <ubitux> under the "text" name
[15:05] <divVerent> is the markup even a subrip feature
[15:05] <divVerent> or is that just an extension by many players and then used by srt scripts?
[15:05] <ubitux> it's supposed to be
[15:05] <ubitux> i don't know much the history
[15:05] <ubitux> but it's associated with it at least
[15:06] <divVerent> basically, in my opinion two things are needed to fully resolve all of this ;)
[15:06] <divVerent> 1. the ogm demuxer change (it is correct for typical ogmmerge usage, and ogmmerge IS the one reference ogm muxer)
[15:06] <divVerent> 2. adding proper CODEC_ID_TEXT support wouldn't be bad either ;)
[15:06] <ubitux> ok so far
[15:07] <divVerent> as CODEC_ID_TEXT can still come out of other sources, even mkv
[15:07] <ubitux> it's pretty easy to write actually
[15:07] <divVerent> sure, probably copy-paste the srt decoder file and remove all the parsing ;)
[15:08] <ubitux> yes
[15:09] <ubitux> ok i'm going to write it asap
[15:09] <ubitux> hopefully submitted tonight
[15:09] <ubitux> so many pending patches today..
[15:10] <ubitux> lavfi meta inject, loop in file protocol, webm regression, ogg/text/subrip, and now text decoder...
[15:10] <ubitux> quite a productive day
[15:13] <divVerent> and I have replaced my mplayer encoding use with ffmpeg for a change... just wondering whether it'd be a good or bad idea to release these horrible shell scripts ;)
[15:13] <divVerent> which do language based stream selection, hardsubbing (both of DVD and ASS subs) and still support custom filter options by the caller
[15:13] <ubitux> :D
[15:14] <divVerent> is there BTW an easier way to do this:
[15:14] <divVerent>                 scale=\
[15:14] <divVerent>                         $mode($w\\,floor($h*dar/$div+.5)*$div):\
[15:14] Action: ubitux thinks he's going to support all the mpl2 vplayer and crap in one row..
[15:14] <divVerent>                         $mode(floor($w/dar/$div+.5)*$div\\,$h)\
[15:14] <divVerent> I basically want to scale with 1:1 pixel aspect so that in both dimensions it is >= 480x320 (iPhone half res)
[15:15] <divVerent> $mode is max here :P
[15:15] <divVerent> so the general idea is, width = larger of (original width, target height * DAR)
[15:15] <divVerent> and height = larger of (original height, target width / DAR)
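
Put together, that "cover 480x320 while keeping square pixels" idea can be written as a single scale+setsar chain; this is only a sketch using the expressions already shown above (commas inside max() are escaped so the filtergraph parser does not split on them, and dimensions are rounded to even values):

    ffmpeg -i in.mkv -vf "scale=max(480\,floor(320*dar/2+.5)*2):max(floor(480/dar/2+.5)*2\,320),setsar=1" out.mkv
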
[15:15] <ubitux> did you look at the different variables in libavfilter/vf_scale, and the function in eval?
[15:15] <divVerent> yes
[15:15] <ubitux> i don't see any avg()
[15:15] <divVerent> but I saw no easier way
[15:15] <ubitux> :(
[15:16] <divVerent> the $div crap is also needed...
[15:16] <divVerent> mainly because x264 refuses odd dimensions
[15:16] <divVerent> rint() is missing BTW
[15:17] <ubitux> it's pretty easy to add functions in eval
[15:17] <divVerent> oh, BTW, the reason why I do such weird scaling... the iPhone basically has a zoomed out view (default, image is letterboxed) and zoomed in view (i.e. cropped to screen aspect, scaled as large as possible)
[15:17] <divVerent> and I optimize for the zoomed in view here
[15:18] <divVerent> I just don't like having to use an expression evaluator for this... there should really rather be a way to say "fit into 480x320" or "crop and center to 480x320"
[15:19] <ubitux> i agree with this
[15:19] <ubitux> like the rescale in imagemagick? :)
[15:19] <divVerent> yes
[15:19] <divVerent> I was just looking up imagemagick's syntax for that ;)
[15:19] <ubitux> i often wonder about this
[15:19] <divVerent> 480x320^
[15:19] <ubitux> -resize WxH
[15:19] <divVerent> would be imagemagick's name for what I want
[15:19] <divVerent> and 480x320 would be the letterboxing version
[15:20] <divVerent> 480x320! is the aspect-breaking version
[15:20] <ubitux> maybe it would be possible to have a keep-aspect-ratio thing
[15:20] <ubitux> anyway, i won't do it, but feel free to send a patch :)
[15:20] <divVerent> yes, basically I was thinking of adding flags for this
[15:20] <divVerent> two modes for that, obviously... just like imagemagick
[15:21] <divVerent> of course
[15:21] <divVerent> if the expression evaluator could work with complex numbers...
[15:21] <divVerent> then we could do -filter:v "scale=aspect_letterboxed(w,h,480,320)"
[15:21] <divVerent> and scale would work with a single expression returning w*i+h ;)
[15:22] <ubitux> you can define custom functions
[15:22] <divVerent> sure
[15:22] <ubitux> with eval
[15:22] <divVerent> but with the current way, you'd need two expressions still
[15:22] <divVerent> like, width_letterboxed(w,h,480/320):height_letterboxed(w,h,480/320)
[15:22] <divVerent> s!/!,!
[15:22] <divVerent> g
[15:22] <ubitux> oh, right.
[15:22] <divVerent> that'
[15:22] <ubitux> that's because of the nasty format :)
[15:22] <divVerent> s how I got to complex numbers
[15:23] <ubitux> args[strcspn(s,":")]='x'
[15:23] <ubitux> here you go \o/
[15:23] <uau> divVerent: there's no good spec for what markup subrip "should" support AFAIK
[15:23] <uau> so CODEC_ID_SUBRIP would still be ambiguous
[15:23] <divVerent> ubitux: the x is really the smallest issue ;)
[15:24] <ubitux> mmh i'm stupid yeah.
[15:24] <divVerent> also, width_letterboxed() even contains one ;)
[15:25] <uau> also there are files with markup that could not conform to any sane spec
[15:26] <ubitux> divVerent: i don't think that's really a problem to define two local functions in vf scale named lboxw() and lboxh() :p
[15:26] <uau> like relying on the behavior of some players where libass tags are interpreted too (because srt support is implemented with an ASS renderer, and the implementation fails to properly quote things that can be interpreted as ASS tags)
[15:29] <creep> hi
[15:37] <divVerent> ubitux: it is not
[15:37] <divVerent> but -vf scale=lboxw(DAR,320,240):lboxh(DAR,320,240)
[15:37] <divVerent> is still a lot more verbose than
[15:37] <divVerent> -vf scale=320:240:lbox
[15:37] <ubitux> yup better syntax :)
[15:38] <divVerent> also, lboxw is not sufficient alone ;) also need to round to codec specific multiples
[15:38] <divVerent> so...
[15:38] <divVerent> -vf scale=round(lboxw(DAR,320,240),8):round(lboxh(DAR,320,240),8)
[15:38] <divVerent> vs
[15:39] <divVerent> -vf scale=320:240:lbox:round=8
[15:39] <divVerent> also, this is stupid anyway
[15:39] <divVerent> real men would use
[15:39] <divVerent> -vf scale=DAR 320 240 lboxw 8 round DAR 320 240 lboxh 8 round
[15:39] <divVerent> and then wonder... why not...
[15:39] <ubitux> you can add another filter
[15:39] <ubitux> using the same internals as scale
[15:40] <ubitux> with a different syntax
[15:40] <ubitux> i don't remember how scale syntax was extended
[15:40] <divVerent> -vf scale=8 DAR 320 240 4 dupn lboxw exch round lboxh exch round
[15:40] <divVerent> ;)
[15:40] <divVerent> it's longer, but shows you know your RPN ;)
[15:40] <ubitux> the current vf scale syntax parsing is quite hacky atm
[15:40] <divVerent> yes, especially the comma abuse
[15:41] <divVerent> using comma as separators both inside and outside is stupid
[15:41] <divVerent> but... I know no better idea
[15:41] <ubitux> comma? what comma?
[15:41] <divVerent> semicolon is also already a separator in filter chains
[15:41] <divVerent> in function args
[15:41] <ubitux> oh in the eval
[15:41] <divVerent> you actually can't do -vf scale=func(x,y):func(x,y)
[15:41] <ubitux> yeah but i wasn't talking about that
[15:41] <divVerent> but need -vf "scale=func(x\\,y):func(x\\,y)"
[15:41] <ubitux> look at the sws flags parsing
[15:41] <divVerent> mplayer has the same issue :P
[15:41] <ubitux> or interl=1 thing
[15:42] <divVerent> ah, I see
[15:42] <ubitux> maybe you can just add another hack like strstr(args,"ratiorules=")
[15:42] <divVerent> hehe
[15:42] <divVerent> hack-on-hack-on-hack... ;)
[15:42] <ubitux> :)
[15:43] <divVerent> my favorite solution for -vf scale is still using complex numbers as an alternate interface
[15:43] <divVerent> if only one expression is given, real part is w and imaginary part is h ;)
[15:43] <ubitux> my favorite solution would be to have it in swscale if the api allows it
[15:43] <ubitux> (through sws flags)
[15:43] <divVerent> hehe
[15:43] <divVerent> swscale that huge mess ;)
[15:43] <ubitux> i'm not sure if you can change the specified sizes with swscale
[15:43] <divVerent> I recently found lots of nasty bugs/features in swscale
[15:43] <ubitux> maybe michaelni can tell
[15:44] <divVerent> my favorite one: it loves writing between the image row end and the next row
[15:44] <divVerent> i.e. it writes into the stride spacing
[15:44] <divVerent> I know why it does that, it makes for faster SIMD code
[15:44] <divVerent> and I also know the workaround - make sure your width is 16 bytes aligned
[15:44] <divVerent> the main issue is that this fact is nowhere documented
[15:45] <divVerent> and at any time, someone could write a SIMD scaler that works in 32 bytes blocks
[15:45] <divVerent> ffmpeg.c is MOSTLY unaffected by this issue
[15:46] <divVerent> except that if the block size of swscale is ever raised, reads beyond the allocation may happen, which would need slightly larger av_malloc calls where images are allocated
[15:47] <ubitux> did you notice some valgrind issues?
[15:47] <divVerent> no
[15:48] <divVerent> I had abused libswscale to scale part of an image to part of another image
[15:48] <ubitux> then there is no problem ;)
[15:48] <divVerent> (convert, actually)
[15:48] <divVerent> there is, it does read beyond the stride
[15:48] <divVerent> and write
[15:48] <divVerent> but NORMALLY this is no issue
[15:51] <divVerent> I can produce a valgrind log with a "tightly allocated" image, but the thing is that this is no bug, it's just missing documentation ;)
[15:52] <divVerent> it's sensible that libswscale behaves the way it does
[15:53] <divVerent> ideally, there should be a macro in swscale.h that defines the alignment libswscale wants, and a comment explaining that writes may happen in blocks of that size
[16:40] <ashka> hmm
[16:40] <ashka> ^ nevermind that
[16:47] <tuxhat> hey
[16:48] <tuxhat> can someone give me a good idea on codecs for screencasting with ffmpeg
[16:50] <zap0> how do they relate?
[17:55] <Spideru> Hi. Is there a way - from ffplay - to see when ffmpeg is streaming an rtp channel? Thank you
[18:58] <burek> Spideru, can you rephrase your question please?
[18:59] <Spideru> burek: ok, thank you. If i start ffplay (with an SDP file) and then connect an rtp stream with ffmpeg, everything works well. Then, if I stop ffmpeg, I can't tell that from ffplay. How can I do that?
[19:00] <Spideru> I need to know from ffmpeg and ffplay if the stream is not working
[19:00] <burek> what does this mean: and then connect an rtp stream with ffmpeg
[19:01] <burek> can you show some sample command lines?
[19:01] <burek> (please use pastebin if commands are too big)
[19:01] <Spideru> yes thank you
[19:03] <Spideru> burek: http://pastebin.com/eTg9MW1e
[19:03] <Spideru> the stream is perfect and finally the patch for SDP generation is up :)
[19:04] <Spideru> but I would like to know if something stops working
[19:04] <burek> Spideru, if I get this right, you are using ffmpeg to feed some unknown rtp server (?) and then you use ffplay to connect to the server?
[19:04] <Spideru> I use ffmpeg to feed ffplay server
[19:05] <burek> what is ffplay server?
[19:05] <Spideru> :| just realized now that I'm using it in the wrong way
[19:05] <burek> ffplay is just a player
[19:05] <burek> your streaming server is located at 192.168.1.95
[19:06] <burek> and ffmpeg is just a stream source
[19:06] <Spideru> Well, what can I use to wait for and receive the ffmpeg stream?
[19:06] <burek> I don't understand :/
[19:07] <Spideru> Ok, I try to explain it better
[19:08] <Spideru> I would like to transfer a live audio stream from a client to a server. What can I use to do that?
[19:08] <Spideru> Now I'm using ffmpeg as client, and ffplay as server. But ffplay is not a server
[19:09] <Spideru> So, what I should use instead of ffplay?
[19:09] <Spideru> and, ffmpeg is the right choice as client side?
[19:10] <burek> ffmpeg - as a source
[19:10] <burek> ffserver - as a server/broadcaster
[19:10] <burek> ffplay/vlc/winamp/... - as a player
[19:10] <Spideru> ahh so -> ffmpeg ----> ffserver <---- ffplay ?
[19:10] <burek> yes
[19:11] <Spideru> \o/ thank you!
[19:11] <burek> :) :beer:
[19:11] <Spideru> of course
[19:11] <Spideru> where are you from?
[19:11] <burek> still earth :)
[19:11] <burek> serbia :)
[19:11] <Spideru> I can offer you a pizza if you'll come to Italy :)
[19:12] <burek> I've tried your pizzas and they are good :)
[19:12] <Spideru> I know, and my fat too
[19:14] <burek> :)
[19:17] <Spideru> burek: What kind of protocol should I use? RTP or RTMP?
[19:18] <Spideru> I'll prepare two pizzas instead of one
[19:19] <burek> :)
[19:19] <burek> i use ffm between ffmpeg-ffserver and flv between ffserver-media players
[19:19] <burek> so it's more compatible for streaming
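
A minimal sketch of that chain; the host name, port, feed name (feed1.ffm) and stream name (live.flv) are assumptions and have to match what ffserver.conf declares:

    ffserver -f /etc/ffserver.conf &                    # server/broadcaster
    ffmpeg -i input.mp3 http://server:8090/feed1.ffm    # source pushing into the ffm feed
    ffplay http://server:8090/live.flv                  # any player pulling the flv stream
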
[19:23] <Spideru> ffm? Never heard about it before
[19:23] <Spideru> of course excluding youporn
[19:24] <burek> read a little bit about it
[19:25] <Spideru> Yes, I'll read all the doc, of course.
[19:26] <Spideru> and thank you
[19:26] <Spideru> your help saved me a lot of time
[19:28] <burek> you're welcome :)
[19:36] <Spideru> 2.7.2 The audio and video lose sync after a while.
[19:36] <Spideru> Yes, they do.
[19:36] <Spideru> Wonderful
[20:31] <wallerdev> does the latest ffmpeg release support apple prores 422 decoding? it seems to give me a black video output
[20:59] <wallerdev> built from git head and am seeing the video now, so that's good
[21:34] <Jan-> hi
[21:35] <Jan-> We need to uncompress an MPEG-2 file (derived from a DVD) to an uncompressed AVI. We've got usable video (using -vcodec rawvideo), but what would the equivalent be for audio? I assume we just need to specify 16 bit PCM somehow?
[21:36] <relaxed> -acodec pcm_s16le
[21:36] <relaxed> ffmpeg -codecs | less
[21:37] <Jan-> I'm aware of the pcm codecs but I wasn't aware which one was most suitable.
[21:37] <Jan-> There's no documentation other than that they exist.
[21:38] <relaxed> signed 16bit little endian is the most common
[21:38] <Jan-> hmm
[21:39] <Jan-> sound works
[21:39] <Jan-> picture less well
[21:39] <Jan-> I guess "Rawvideo" just gives us raw 8 bit rgb
[21:39] <Jan-> is there something else we might try that a nonlinear editor would deal with better?
[21:40] <relaxed> which one are you using?
[21:40] <Jan-> premiere
[21:40] <Jan-> but it hardly matters
[21:40] <Jan-> most of 'em would read an uncompressed AVI... usually.
[21:42] Action: Jan- tries v210
[21:43] <Jan-> bingo :)
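
For reference, the combination that ended up working here looks roughly like this (hypothetical file names; v210 is 10-bit uncompressed 4:2:2 video, pcm_s16le is plain 16-bit little-endian audio):

    ffmpeg -i VTS_01_1.VOB -vcodec v210 -acodec pcm_s16le output.avi
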
[21:44] <Jan-> it always starts with "error decoding stream #0.1"
[21:44] <Jan-> is that bad?
[21:44] <relaxed> It can't be good.
[21:45] <relaxed> is the exit status 0?
[21:46] <Jan-> it didn't exit
[21:46] <Jan-> it just said error decoding stream and continued.
[21:46] <Jan-> To be fair weirdness is sort of expected, this is a VOB off a dvd.
[21:46] <Jan-> the issue is that we need to extract a chunk of it that crosses a VOB boundary, and trying to get a nice clean decode of every single frame on each side of the join is tough.
[21:47] <relaxed> you could use mplayer's -dumpstream to read the dvd
[21:47] <Jan-> wanted to avoid having to do that if possible
[21:48] <Jan-> om nom nom all ur disk space r belong 2 mplayer etc
[21:48] <Jan-> but I guess we're already having to convert the entire preceding vob.
[22:09] <lake> what does -vcodec copy mean?
[22:10] <lake> the command in question is this: ffmpeg -i input-file.m2ts -ab 256k -vcodec copy -acodec aac output-file.mp4
[22:10] <tmatth> lake: "vcodec copy" means "use the same video codec in the output as the input"
[22:11] <relaxed> copy the video stream
[22:11] <lake> tmatth: does that result in loss of quality?
[22:11] <llogan> lake: simply put, it means to "copy and paste" the video stream from input to output. it does not re-encode.
[22:11] <llogan> no loss o' quality
[22:12] <relaxed> cp llogan pasteeater
[22:12] <lake> would this result in any loss of quality: ffmpeg -i input-file.m2ts -ab 256k -vcodec x264 -acodec aac output-file.mp4
[22:12] <lake> i assume it would result in a reduced file size
[22:13] <llogan> you probably mean "libx264", not "x264"
[22:13] <llogan> and yes, it would reduce quality with the default settings.
[22:13] <lake> llogan: yes, sorry, just trying to wrap my mind around it
[22:14] <llogan> since you are re-encoding to a lossy format..but you may not notice a difference with high enough quality settings/bitrate
[22:14] <lake> so, we are talking about transcoding vs compressing then?
[22:14] <llogan> of course x264 can encode lossless as well, but it doesn't mean that there will be no loss (such as going from rgb to yuv).
[22:15] <lake> sorry, i'm a noob, but finding that ffmpeg is amazing
[22:18] <llogan> transcoding refers to re-encoding while using information from the input such as motion vectors. re-encoding means to decode the stream into individual pixels and encode with no additional information from the input
[22:18] <llogan> at least that's how i interpret it
[22:19] <llogan> most of the time those terms are used as if they are the same process
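
The difference being discussed, written out as two hedged command variants (placeholder file names; -crf 18 is just an example setting, and in builds of this era the native aac encoder may additionally need -strict experimental):

    # Stream copy: the video bits are passed through untouched, so no quality loss.
    ffmpeg -i input-file.m2ts -vcodec copy -acodec aac -ab 256k output-copy.mp4
    # Re-encode with libx264: smaller files are possible, quality is controlled by -crf.
    ffmpeg -i input-file.m2ts -vcodec libx264 -crf 18 -acodec aac -ab 256k output-x264.mp4
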
[22:20] <wallerdev> is there a way to specify the library path for libx264 when building from source?
[22:20] <wallerdev> seems to be not finding it or using the wrong one
[22:24] <llogan> wallerdev: did you try --extra-cflags and --extra-ldflags?
[22:25] <wallerdev> no i didnt
[22:25] <llogan> some examples here: http://ffmpeg.org/pipermail/ffmpeg-user/2012-August/008552.html
[22:26] <wallerdev> would it just be something like --extra-ldflags "-L/usr/local/lib" or somethin
[22:26] <llogan> i don't know how "good" those examples are
[22:26] <wallerdev> ill check that out thanks
[22:26] <relaxed> --extra-cflags="-I/path/to/prefix/include" --extra-ldflags="-L/path/to/prefix/lib"
[22:27] <llogan> that's better
[22:31] <wallerdev> well hopefully it works, otherwise something is just messed up with my system haha :)
[22:32] <llogan> burek, relaxed: interested in making a mini guide on trac wiki on how to make static builds?
[22:53] <ubitux> ashka: i don't think i'll submit the patch now, it still has some issues
[22:53] <ashka> oh okay
[22:53] <ashka> well the workaround is okay for me so it's great
[22:54] <ubitux> :)
[22:54] <mykul> hi, i have a nub question for you all: how can I use ffmpeg to combine (multiplex) audio files into one
[22:55] <llogan> when did noob become nub? nubs seem to outnumber noobs now.
[22:56] <mykul> haha
[22:56] <mykul> i like the way it sounds, nuhb
[22:56] <mykul> nublet
[22:59] <mykul> it's cuter
[23:00] <Spideru> burek: how can I tell to ffm (of ffserver) that I don't want video inside feed?
[23:00] <llogan> mykul: you mean to simply add several audio streams into one container, or to concat them all into one continuous stream?
[23:01] <ubitux> mykul: look at concat, amerge, pan and amix filters
[23:01] <ubitux> depending on your needs
[23:01] <mykul> llogan, not concat, but have them combined so that they play at once.  is that what adding to one container means?
[23:02] <mykul> thanks ubitux, i'm drowning in options :D
[23:03] <Spideru> Found it thank you
[23:05] <llogan> mykul: not exactly what i had in mind. i guess you want "option 3" that i didn't think to list. as ubitux mentioned, see amerge and amix.
[23:06] <mykul> cool, thanks llogan, i am armed with some language now.
[23:06] <ubitux> https://www.ffmpeg.org/ffmpeg.html#amerge
[23:14] <mykul> amix worked great
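
A minimal sketch of the amix approach that worked here (file names are placeholders; amerge would instead keep the inputs as separate channels of one stream):

    ffmpeg -i a.wav -i b.wav -filter_complex 'amix=inputs=2:duration=longest' mixed.wav
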
[00:00] --- Thu Oct 11 2012

