[Ffmpeg-devel-irc] ffmpeg-devel.log.20121123

burek burek021 at gmail.com
Sat Nov 24 02:05:02 CET 2012


[00:05] <kierank> durandal_1707: yes
[00:08] <durandal_1707> well our wrap encoder does not make use of that feature
[00:12] <Skyler_> x264 should do 10-bit RGB yes
[00:13] <durandal_1707> and it accepts it in what format?
[00:14] <Skyler_> ??  10-bit RGB
[00:14] <Skyler_> x264 doesn't do colorspace conversion
[00:15] <Skyler_> packed 48-bit/64-bit RGB/BGR/RGBA
[00:15] <durandal_1707> and how 10bit values are stored, in short?
[00:15] <Skyler_> yes, same as YUV
[00:22] <durandal_1707> so it cant accept planar rgb?
[00:39] <Skyler_> I don't think it does, but it could probably be improved pretty easily to do that
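To illustrate the exchange above about packed 48-bit RGB input: a minimal sketch (invented names, not x264's actual API) of how a 10-bit sample is stored LSB-aligned in a full 16-bit word, the same convention high-bit-depth YUV uses, so a packed RGB pixel at 10 bits per component occupies 48 bits:

```c
#include <stdint.h>

/* Hypothetical illustration: in packed 48-bit RGB each component occupies
 * a full 16-bit word, with the 10-bit value (0..1023) in the low bits --
 * the same LSB-aligned convention as high-bit-depth YUV. */
typedef struct {
    uint16_t r, g, b;   /* each holds a 0..1023 sample */
} RGB48Pixel;

static RGB48Pixel pack_rgb10(unsigned r, unsigned g, unsigned b)
{
    RGB48Pixel p = {
        (uint16_t)(r & 0x3FF),
        (uint16_t)(g & 0x3FF),
        (uint16_t)(b & 0x3FF),
    };
    return p;
}
```

RGBA at 10 bits would simply add a fourth 16-bit word, giving the 64-bit variant mentioned above.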
[01:24] <cone-522> ffmpeg.git 03Michael Niedermayer 0742dde253ec71: dcadec: fix av_log level
[01:24] <cone-522> ffmpeg.git 03Michael Niedermayer 07801a2a1df05d: mpeg12: fix av_log level and context
[01:24] <cone-522> ffmpeg.git 03Michael Niedermayer 072fc0cbd9a62b: truemotion2: Fix av_log level and context
[01:24] <cone-522> ffmpeg.git 03Michael Niedermayer 073616afced6f3: rmdec: fix av_log level and context
[01:42] <Compn> sigh
[01:49] <michaelni> Compn, ?
[02:42] <cone-522> ffmpeg.git 03Michael Niedermayer 070560b28f1279: ffv1dec: remove incorrect assert()
[03:46] <cone-522> ffmpeg.git 03Michael Niedermayer 07e9c372362cb7: id3v2: restructure compressed and unsync code
[03:54] <StevenLiu> ffmpeg -y -ss 00:00:10.00 -i input.mkv -strict experimental  -acodec aac -vcodec libx264 -r 15 -s 1280x720 -b:v 2000k  -b:a 88k -ar 44100  -map 0:0 -map 0:1 -copyts -bsf h264_mp4toannexb -f mpegts -v debug -debug_ts -t 10 output-2.ts
[03:54] <StevenLiu> output-2.ts's pts starts from 0; perhaps -copyts has no effect?
[03:55] <StevenLiu> who knows how to copy the source mkv's pts to output-2.ts?
[03:56] <StevenLiu> when using -copyts, the ffmpeg_opt.c ts_offset is a negative number
[03:56] <StevenLiu> is it a computation error?
[04:49] <burek> is this possible with ffmpeg
[04:49] <burek> [03:04:07] <Verte> Just doing research at the moment. I have a situation where I'll have three (possibly more later) video files/streams coming in, where I want to take all three streams and create a single output stream that periodically switches between the inputs with a fade or something. If I can have the other two streams appear with a picture-in-picture type arrangement that would be icing on the cake, but not necessary.
[04:49] <burek> [03:04:07] <Verte> My initial research says ffmpeg should do the trick, properly set up. Just checking that I'm not barking up the wrong tree before I start.
[04:51] <Compn> ffmpeg can do cctv of 4 streams i think
[04:51] <Compn> but i dont think it can do fade in/out and switching of streams, since theres no api for changing filters mid-file
[04:52] <Compn> i mean you could just program a script that starts ffmpeg on 4cctv, then halts ffmpeg, then starts a new ffmpeg on cctv1 then halt, then ffmpeg cctv2 then halt, then ffmpeg cctv3 then back to the 4 pip cctv...
[04:53] <Compn> some simple frontend to switch between cameras
[04:53] <Compn> sounds like a lot of work :)
[04:54] <burek> hmh
[04:54] <burek> maybe to find something that would switch between /dev/videoX nodes or something
[04:54] <burek> but that is also problematic, due to possible different formats
[04:55] <burek> it would be cool if ffmpeg could, like vlc, just redirect 3/4 outputs to /dev/null while 1/4 would always be "grabbed" out of the decoder's output (in raw mode) and sent to output encoder
[04:56] <burek> but i guess there are also problems with that..
[04:56] <burek> at least pixel format or something
[09:40] <burek> guys, what do you think about using google docs for documents sharing, like diagrams and stuff, something like this for ex: https://docs.google.com/drawings/d/1hhjtTZXY_yhREMNR0omVRdsAcbQRSB9E26SEUBVmtVU/edit
[09:41] <burek> there is a history tracker, so you can revert edits and share work
[09:50] <ubitux> burek: i hate the idea of using a centralized service from a shady company to manage the work of an organization
[10:24] <burek> ubitux, are there any better alternatives?
[10:37] <durandal_1707> Daemon404: ping
[10:40] <durandal_1707> michaelni: is there already unscaled rgb<->gbrp in sws?
[10:43] <nevcairiel> at least in one direction there is (gbrp->rgb), not sure about the other way
[11:35] <ubitux> isn't -aspect supposed to work with codec copy?
[11:35] <ubitux> (mov → mov)
[11:38] <TimNich> ubitux: "ost->st->sample_aspect_ratio" set for stream copy in ffmpeg.c
[11:39] <TimNich> but maybe the tape atom is not copied
[11:39] <TimNich> s/tape/tapt
[12:13] <ubitux> new api subtitles check
[12:14] <ubitux> subrip decoder with new api check
[12:14] <ubitux> ass enc & mux now.
[13:13] <durandal_1707> how do you do 32 -> 24 bit conversion for audio?
[13:34] <kierank> >> 8 ?
[13:36] <Compn> lol
[13:36] <Compn> kierank : i think he means float or planar and which ones
[13:36] <Compn> or maybe just the command line
[13:37] <Compn> j-b : did you see kostya's blog about g2m* ? :)
[13:49] <cone-735> ffmpeg.git 03Mans Rullgard 07c262649291e7: build: add rules to generate preprocessed source files
[13:49] <cone-735> ffmpeg.git 03Mans Rullgard 075e39bb073a1d: mpegvideo: simplify dxy calculation in hpel_motion()
[13:49] <cone-735> ffmpeg.git 03Mans Rullgard 074a606c830ae6: av_memcpy_backptr: optimise some special cases
[13:49] <cone-735> ffmpeg.git 03Michael Niedermayer 0725ca8aef54b6: Merge commit '4a606c830ae664013cea33800094d4d0f4ec62da'
[13:49] <durandal_1707> kierank: that conversion is not using any dithering
[14:02] <cone-735> ffmpeg.git 03Mans Rullgard 07457cc333b424: configure: properly support DEC/Compaq compiler
[14:02] <cone-735> ffmpeg.git 03Michael Niedermayer 07d28467b62efe: Merge commit '457cc333b424994ecf80a82369325771e0397fd9'
[14:07] <cone-735> ffmpeg.git 03Mans Rullgard 0733db40f8d38d: configure: sort cpuflags section by architecture
[14:07] <cone-735> ffmpeg.git 03Michael Niedermayer 077ca97b6b3c92: Merge remote-tracking branch 'qatar/master'
[14:11] <nevcairiel> all conversions in swresample (or avresample) for that matter dont use dithering afaik
[14:12] <ubitux> swr has dithering iirc
[14:12] <nevcairiel> for resampling
[14:12] <nevcairiel> but not for sample conversion, iirc
[14:52] <michaelni> nevcairiel, it works without resampling, just tried: ./ffmpeg -i tests/data/asynth-44100-2.wav -acodec pcm_u8 -af aresample=dither_method=1:dither_scale=100 test100.wav
[14:54] <durandal_1707> michaelni: and how that example would do 32 to 24 ?
[14:55] <durandal_1707> which is not triggered by default
[15:48] <michaelni> durandal_1707, it could be done by adjusting dither_scale but that isnt convenient to the user at all
[15:56] <cone-735> ffmpeg.git 03Michael Niedermayer 075da885b84d0a: dv: use av_assert
[16:22] <kierank> durandal_1707: well it would have to be based on knowledge of the underlying bit depth
[16:23] <kierank> because all the 20/24 bit stuff i have is presented in 32-bit
[16:23] <kierank> i want that conversion to not alter the data
[16:25] <nevcairiel> I had ruggles test that once, and apparently a 24-bit-in-S32 -> float -> S32 conversion did not alter the data at all, so that was a bonus
[16:25] <nevcairiel> at least in avresample that is
[16:26] <durandal_1707> kierank: there is bits_per_coded_sample
[16:26] <kierank> oh that's true
[16:27] <durandal_1707> and i'm really interested in real 32 to 24 bit conversion
[16:27] <kierank> it's not in avframe, right?
[16:27] <nevcairiel> its only in codec context, and you cant communicate that to lavfi or swr/avr iirc
[16:28] <nevcairiel> also the right one technically is bits_per_raw_sample
[16:28] <kierank> when i have 20/24-bit audio in 32-bit it would also have to shift to the native bitdepth and then dither down
[16:28] <kierank> which is doable
[16:29] <nevcairiel> if the lower bits are already all 0, the dither should just be a NOP, at least those dither algorithms i know
[16:29] <durandal_1707> 24 bit in 32bit would ideally not be dithered when doing the conversion, because the bits_per_*_sample info should be used
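The ">> 8" conversion under discussion, with TPDF dither added in the bits being discarded, might look like this. This is an illustrative sketch only, not the swresample/avresample implementation, and the dither shaping and scaling there differ:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative 32-bit -> 24-bit sample reduction with triangular (TPDF)
 * dither: add noise spanning the 8 discarded LSBs, then shift down.
 * The result is an S24 value carried in an int32_t. */
static int32_t s32_to_s24_tpdf(int32_t in)
{
    /* TPDF noise in [-255, 255]: difference of two uniform 8-bit values. */
    int dither = (rand() & 0xFF) - (rand() & 0xFF);
    int64_t v = (int64_t)in + dither;
    if (v > INT32_MAX) v = INT32_MAX;   /* clamp before the shift */
    if (v < INT32_MIN) v = INT32_MIN;
    return (int32_t)(v >> 8);           /* arithmetic shift drops 8 LSBs */
}
```

Note that even for 24-bit data padded into S32 (low 8 bits all zero), this naive version can still nudge the output by one LSB, so the "dither is a NOP on already-quantized data" property depends on the exact dither algorithm used.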
[16:31] <j-b> Compn: I did not
[16:33] <durandal_1707> more stuff should be exported in AVFrame
[16:34] <durandal_1707> the current design is so terrible
[16:46] <cone-735> ffmpeg.git 03Michael Niedermayer 0756540bb3b5a1: h263dec: switch 2 asserts to av_assert
[16:46] <cone-735> ffmpeg.git 03Michael Niedermayer 078328df74f3dc: motion_est: use av_assert* instead of assert
[18:17] <cone-735> ffmpeg.git 03Michael Niedermayer 07533a8b2a7d3f: x86/mpegvideoenc_template: use av_assert
[18:17] <cone-735> ffmpeg.git 03Michael Niedermayer 07c322f19855df: vf_mandelbrot: give all av_log a context
[18:17] <cone-735> ffmpeg.git 03Michael Niedermayer 070efcf16a3e6e: replace av_log(0, by av_log(NULL,
[18:32] <cone-735> ffmpeg.git 03Paul B Mahol 076f9ca8cbe05a: fate: add AST demuxer test
[18:32] <cone-735> ffmpeg.git 03Paul B Mahol 07a8ebbf87be86: fate: add ADPCM AFC decoder test
[18:34] <Compn> durandal_1707 : how do you RE those adpcm codecs? ever thought about writing down the process for future devels ?
[18:34] Action: Compn afk
[18:34] <wm4> more importantly, why are there so many of these codecs
[18:35] <durandal_1707> wm4: because it is extremely trivial to write them
[18:36] <durandal_1707> Compn: i did not RE this one
[18:50] <cone-735> ffmpeg.git 03Stefano Sabatini 079a7256e8e07a: ffprobe: free dictionary in opt_show_entries()
[19:19] <ubitux> ok so
[19:19] <ubitux> i need to change the format of SSA packets
[19:19] <ubitux> currently they have the timing in the payload, exactly the same issue as SRT packets
[19:20] <ubitux> for srt, we solved the problem by adding a new codec without timings: "SUBRIP" codec id
[19:20] <ubitux> for ass, should i introduce an "ASS" one?
[19:20] <ubitux> or should i simply use a strcmp "Dialogue:"?
[19:21] <ubitux> contrary to srt, it should be solid: you can't have a style called "Dialogue:"
[19:22] <j0sh> speaking of subtitles, have you thought about supporting cues/styling, eg as in webvtt?
[19:23] <j0sh> just wondering if there were any thoughts put into that direction
[19:24] <ubitux> we support basic styling right now, we should be able to consider more advanced parsing when the new api is in place
[19:24] <j0sh> oh cool is there a draft/rfc of the new api somewhere?
[19:24] <ubitux> the new api is basically done, but i need to change the mess with timing first
[19:25] <ubitux> j0sh: i have a WIP/mess commit if you want an idea
[19:25] <ubitux> https://github.com/ubitux/FFmpeg/compare/master...astsub
[19:25] <j0sh> awesome, thanks
[19:25] <ubitux> look at the lavc/srtdec.c for how decoders should use it
[19:26] <ubitux> lavu/subtitles.h
[19:26] <ubitux> i need to add doxy, but really it's a WIP
[19:27] <ubitux> basically the idea is to queue "chunks"
[19:27] <j0sh> cool, alright
[19:27] <ubitux> which have types, that can be a basic style, a chunk of raw text etc
[19:28] <j0sh> are the chunks structured as a tree?
[19:28] <ubitux> no, it's a dynamic array
[19:28] <ubitux> [italic=1] [text="hello"] [bold=1] [text="foobar"] [bold=RESET] ...
[19:29] <ubitux> that list can be constructed as flat, or as nested, and when nested you have a function to flatten it
[19:29] <ubitux> (that's exactly what happens in the case of subrip)
[19:30] <j0sh> i wonder if it wouldn't be cleaner as a tree (that was my first thought after seeing *AST)
[19:30] <j0sh> to handle nested styles, etc
[19:30] <j0sh> kinda like the html DOM
[19:31] <ubitux> nested to flat is easy, flat to nested might be slightly more tricky but it should be doable
[19:32] <j0sh> is this for the internal representation? if so, why ascii and not enums?
[19:33] <ubitux> it will be exposed
[19:33] <ubitux> since that's how user will browse the decoded subtitles (if they want to)
[19:33] <j0sh> i guess most of these tags should fit in a 64bit word anyway
[19:33] <ubitux> and thus having the ascii makes it possible to reorder the list in a meaningful manner, insert some in the middle, etc
[19:34] <j0sh> hm yeah
[19:36] <ubitux> anyway, yes it might make sense to have a tree for nested ones
[19:36] <ubitux> (which should help the encoder in case of a nested format)
[19:37] <ubitux> but i'm still unsure about how to mix the two
[19:37] <ubitux> well anyway, the priority is to clean up the timing mess
[19:38] <j0sh> i suppose flat/sequential styles could just be expressed as sibling nodes
[19:39] <j0sh> but you can still build a tree using a dynamic array
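The flat chunk list being discussed could be sketched roughly as below. All names are invented for illustration; the actual WIP API lives in lavu/subtitles.h in the linked branch and may look quite different:

```c
#include <string.h>

/* Hypothetical sketch of the "flat chunk list": a decoded subtitle line
 * is a dynamic array of typed chunks, e.g.
 *   [italic=1] [text="hello"] [bold=1] [text="foobar"] [bold=RESET] */
enum ChunkType { CHUNK_TEXT, CHUNK_BOLD, CHUNK_ITALIC };

typedef struct {
    enum ChunkType type;
    const char *text;   /* valid for CHUNK_TEXT */
    int value;          /* 1 = on, 0 = RESET, for style chunks */
} SubChunk;

/* Render the flat list to an ASS-style tagged string into buf. */
static void render_chunks(const SubChunk *c, int n, char *buf, size_t size)
{
    buf[0] = '\0';
    for (int i = 0; i < n; i++) {
        const char *s = "";
        switch (c[i].type) {
        case CHUNK_TEXT:   s = c[i].text;                          break;
        case CHUNK_BOLD:   s = c[i].value ? "{\\b1}" : "{\\b0}";   break;
        case CHUNK_ITALIC: s = c[i].value ? "{\\i1}" : "{\\i0}";   break;
        }
        strncat(buf, s, size - strlen(buf) - 1);
    }
}
```

A tree representation for nested styles would replace the flat array with child pointers per chunk; flattening such a tree back into this sibling list is a simple pre-order walk.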
[19:40] <j0sh> what's the issue with timing?
[19:40] <j0sh> demuxers not stripping timing info from the payload?
[19:40] <ubitux> yes
[19:41] <ubitux> initially the timing info was part of the payload
[19:41] <ubitux> which makes a lot of timing manipulation impossible
[19:41] <ubitux> and forces demuxers like matroska into all kinds of hacks
[19:42] <ubitux> (printing the timing in the payload)
[19:42] <ubitux> we solved the problem with srt/subrip
[19:42] <ubitux> now it still exists for ass/ssa
[19:43] <ubitux> (and other formats as well, but that's not that important right now since they are not muxed outside their standalone format)
[19:45] <j0sh> i noticed the changes in srt, yeah
[19:58] <Compn> durandal_1707 : ah , well maybe next one write it down :P
[20:00] <Compn> wm4 : who knows. theres a lot of them :)
[20:10] <durandal_1707> Compn: about what adpcm codec are you talking about?
[20:26] <ubitux> michaelni: lavf/assdec.c, i'm unable to call read_seek2() with ffmpeg
[20:27] <ubitux> i tried -ss as input and output option, i tried a transcode and codec copy, it's never called
[20:27] <ubitux> how am i supposed to trigger it?
[20:29] <Compn> durandal_1707 : dialogic one
[20:33] <durandal_1707> Compn: ok i though about afc
[20:47] <michaelni> ubitux, ffmpeg would need to call avformat_seek_file() for that to work
[20:47] <ubitux> mmh
[20:47] <ubitux> seems only ffplay does
[20:48] <nevcairiel> read_seek2 is not really implemented, and the API behind it was never finished
[20:48] <nevcairiel> which is kinda sad, better seeking would be good
[20:48] <michaelni> its not implemented but the API should be fine
[20:49] <michaelni> or what is missing ?
[20:50] <nevcairiel> considering nothing really implements read_seek2, it might as well change or vanish one day without being missed, is all i mean =p
[20:50] <nevcairiel> its just not "stable" in such a way that someone bothered to actually use it
[20:51] <ubitux> why ffmpeg doesn't use avformat_seek_file btw?
[20:51] <michaelni> ubitux, i suspect theres no reason ...
[20:51] <michaelni> except that noone changed it to it
[20:51] <nevcairiel> its really just an alias for av_read_seek at this point, with different argument semantics, so i guess there never was a big point
[20:52] <wm4> I just wish there was a way to detect whether seeking would end beyond the end of the file (which apparently isn't possible to detect with your normal API right now)
[20:53] <michaelni> the big point is that no demuxers implement it because no user apps use it, and no user apps (ffmpeg included) use it because no demuxers implement it
[20:53] <michaelni> chicken and egg problem :)
[20:54] <wm4> oh, mplayer (and forks) behave completely differently with demux_lavf and demux_mkv when seeking past the end of the file
[20:55] <wm4> in part that's because demux_lavf seeks backwards when the seek forward fails (wat), but not only
[20:58] <ubitux> AVSEEK_FLAG_FRAME @_@
[20:59] <michaelni> wm4, in general, trying to seek backward if forward failed makes sense instead of total failure, but it can sometimes lead to unintended consequences; the new API would avoid this
[21:01] <ubitux> does AVSEEK_FLAG_BACKWARD make sense with the new api?
[21:02] <michaelni> ubitux, it should not be needed
[21:03] <ubitux> ok
[21:05] <wm4> michaelni: I'm not sure what this total failure would be; either the demuxer keeps demuxing from the old position, or the file has ended
[21:06] <michaelni> wm4, consider you have a file of 90minutes you are at 45min and seek to 91min
[21:07] <michaelni> its better there to seek to something around 89-90 min than to fail
[21:07] <wm4> if it fails at that position, it should not change the position at all; seeking to 89 min would be pointless
[21:08] <michaelni> the same issue exists at the start, if you deny it's an issue at the end
[21:09] <michaelni> you are at 45min and seek to -1min, if that fails thats just bad
[21:09] <ubitux> nice, seems to work with avformat_seek_file
[21:09] <michaelni> seeking to 0min is better
[21:09] <wm4> michaelni: if the demuxer can't seek, it should just not seek, instead of jumping to a random time
[21:10] <michaelni> wm4, you miss the point really
[21:10] <michaelni> there a problem in the API and no change to the failing behavior can fix it
[21:10] <michaelni> the new API fixes it
[21:11] <michaelni> consider you are at 0min and seek to -1 and that ends at 1min, thats bad
[21:11] <wm4> what is the new api? the old new api, or something newer?
[21:12] <michaelni> consider the same seek to 0min (that cant be done for whatever reason) ending at 1min; if you were at 45min before, theres nothing bad here
[21:12] <michaelni> the old API just allows you to specify a target and a direction, so the API does not know where you were before or what exactly you want
[21:13] <michaelni> the new API allows specifying an exact range that is acceptable to you, and a target
[21:13] <michaelni> so you can specify that you want to seek forward (or backward) from your current position and at the same time where it should end
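The range contract being described, which matches avformat_seek_file()'s (min_ts, ts, max_ts) arguments, can be sketched over a plain keyframe list. This illustrates the semantics only and is not demuxer code:

```c
#include <stdint.h>
#include <stdlib.h>

/* Pick the keyframe closest to target ts that lies inside [min_ts, max_ts];
 * return its index, or -1 if the acceptable range contains no seek point.
 * The caller states the target *and* the whole range it will accept,
 * so "forward only from my current position" is just min_ts = current. */
static int pick_keyframe(const int64_t *kf, int n,
                         int64_t min_ts, int64_t ts, int64_t max_ts)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (kf[i] < min_ts || kf[i] > max_ts)
            continue;                       /* outside acceptable range */
        if (best < 0 || llabs(kf[i] - ts) < llabs(kf[best] - ts))
            best = i;                       /* closer to the target */
    }
    return best;
}
```

Under this contract the "-1min" case above simply becomes min_ts = 0: the seek lands at the earliest acceptable point instead of failing, and the unwanted "seek to 0 but end up past where I was" case returns -1 instead of jumping.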
[21:14] <wm4> I think it would be pretty much better if you could just query the next/previous seek points from the demuxer
[21:14] <wm4> rather than having a complicated seek API that somehow tries to handle every intent possible
[21:18] <michaelni> wm4 you already can, its av_read_frame(), the keyframe flag tells you the seek points in each stream more or less
[21:19] <wm4> that's great if that is so
[23:05] <ubitux> angry fate is angry
[23:33] <ubitux> michaelni: what is the skipping you are talking about in the seek patch?
[23:37] <michaelni> ubitux, when you want to seek to 123 and theres a keyframe at 124 and one at 99
[23:37] <michaelni> then if you seek to 99 and discard video until 123 then you get an exact seek
[23:37] <michaelni> but if you seek to 124 you dont
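The exact-seek-by-discarding idea in that example can be sketched as below. It is an assumed illustration over plain timestamp arrays, not ffmpeg's actual seek code:

```c
#include <stdint.h>

/* Exact seek sketch: pts[] holds the stream's frame timestamps and
 * keyframe[] flags which of them are seek points.  Return the index to
 * start decoding from (the last keyframe at or before target) and report
 * how many frames must be decoded and discarded to land on target. */
static int exact_seek(const int64_t *pts, const int *keyframe, int n,
                      int64_t target, int *ndiscard)
{
    int start = 0;
    for (int i = 0; i < n; i++)
        if (keyframe[i] && pts[i] <= target)
            start = i;                     /* last usable seek point */
    *ndiscard = 0;
    for (int i = start; i < n && pts[i] < target; i++)
        (*ndiscard)++;                     /* frames decoded then dropped */
    return start;
}
```

With keyframes at 99 and 124 and a target of 123, this starts at 99 and discards everything before 123; seeking directly to 124 would skip past the requested position.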
[23:38] Action: michaelni is angry at fate clients not running rsync before git pull
[23:38] <michaelni> or running it after git pull but before testing new revisions
[23:39] <nevcairiel> i run rsync, then run like 6 fate clients in sequence, it has happened that there was a sample change in the middle of that .. wasnt there like a rule of samples on rsync for at least a few hours before pushing the fate change? :d
[23:40] <ubitux> michaelni: oh ok i see
[23:41] <ubitux> nevcairiel: i remember the shared original script was doing something like that indeed
[23:42] Action: ubitux doesn't care about the server bandwidth and does a git pull + rsync + fate run for each instance
[23:43] <nevcairiel> the default fate script in the repository does the git pull,  but rsync is still left for yourself to handle
[23:44] Action: ubitux never realized fate-run.sh had a update function
[23:44] <michaelni> i am doing a git pull and, if it pulls new data, then a rsync; not that theres a reason to skip rsync probably
[23:44] <ubitux> http://lucy.pkh.me/run.sh
[23:44] <ubitux> :p
[23:45] <nevcairiel> and then fate.sh does another git pull :D
[23:46] <michaelni> it seems the ppc & icc boxes run rsync via a separate cronjob or something 
[23:46] <nevcairiel> depending on your config, possibly even in another directory =p
[23:47] <ubitux> nevcairiel: yes, but actually to the local repository
[23:49] <cone-68> ffmpeg.git 03Michael Niedermayer 0709456d0df134: riff: ignore ff_read_riff_info() failure.
[00:00] --- Sat Nov 24 2012

