[Ffmpeg-devel-irc] ffmpeg-devel.log.20130420

burek burek021 at gmail.com
Sun Apr 21 02:05:02 CEST 2013


[00:13] <durandal_1707> michaelni: i need frame->nb_samples to follow exactly same pattern as described in specification
[00:15] <ubitux> michaelni: are you in the middle of a merge?
[00:19] <durandal_1707> probably, because stack is full
[00:20] <durandal_1707> ... more than 10 entries....
[00:28] <cehoyos> Hi! Is there a bug in ticket #2484 or do I miss something?
[00:28] <ubitux> cehoyos: options not transmitted
[00:28] <cehoyos> options not transmitted?
[00:29] <ubitux> known limitations to our playlist-like demuxer
[00:29] <ubitux> oh wait
[00:29] <cehoyos> Sorry, I don't understand...
[00:29] <ubitux> wrong bug
[00:29] <ubitux> sorry forget what i said
[00:29] <cehoyos> Is that #2485 you are talking about?
[00:29] <ubitux> yeah
[00:29] <cehoyos> Since I don't understand #2485, it would be great if you could comment
[00:29] <ubitux> keyboard derp
[00:30] <ubitux> yeah i'll do eventually
[00:30] <cehoyos> Is is a duplicate of another ticket?
[00:30] <cehoyos> s/Is is/Is it
[00:30] <ubitux> not really
[00:30] <cehoyos> But it is reproducible?
[00:31] <ubitux> if there is a related ticket, it's hls options thing
[00:31] <cehoyos> Will the fix be the same?
[00:31] <ubitux> anyway, it's just a guy from my previous company, so i'm a bit lazy to get active on the ticket
[00:31] <ubitux> that's more a design limitation, maybe a hack can be added but well..
[00:31] <cehoyos> I see. Am I correct that #2484 is a bug, or do I miss something?
[00:32] <cehoyos> (My question is mostly: was atempo never tested?)
[00:32] <ubitux> i never used it, can't help you
[00:33] <cehoyos> I wonder if the most common use-case isn't PAL-NTSC conversion, and it appears that it can't do that.
[00:33] <cehoyos> (But as said: I may absolutely misunderstand something.)
[00:34] <durandal_1707> the guy who wrote it is still active
[00:34] <cehoyos> Can video filters buffer frames?
[00:34] <durandal_1707> sure
[00:34] <cehoyos> So is the comment in ticket #1430 wrong?
[00:34] <durandal_1707> i read that vid. thing, and i decided to stay away from it
[00:35] <cehoyos> You have a point...
[00:36] <durandal_1707> but don't push it...
[00:38] <michaelni> ubitux, no I am not in the middle of a merge
[00:39] <ubitux> ok i can push then
[00:39] <cone-429> ffmpeg.git 03Clément Bœsch 07master:0f1250b7e52f: lavc/gif: make possible to disable offsetting.
[00:39] <cone-429> ffmpeg.git 03Clément Bœsch 07master:e1b35bdde2fc: lavc/gif: add flag to enable transparency detection between frames.
[00:39] <cone-429> ffmpeg.git 03Clément Bœsch 07master:5927ebab513d: doc/general: animated GIF are now compressed.
[00:39] <cone-429> ffmpeg.git 03Clément Bœsch 07master:7004cad36df2: Changelog: notify GIF improvements.
[00:39] <cone-429> ffmpeg.git 03Clément Bœsch 07master:f5ede48fbb41: lavc/gif: miscellaneous cosmetics.
[00:39] <cone-429> ffmpeg.git 03Clément Bœsch 07master:a16c20569db6: lavf/gifdec: add loop support.
[00:39] <cone-429> ffmpeg.git 03Clément Bœsch 07master:67cc31d6c74b: lavf/gif: add final_delay option.
[00:39] <ubitux> here we go, i'm done with gif
[00:40] <durandal_1707> but 2pass
[00:40] <ubitux> whatever
[00:40] <ubitux> it's good enough for me
[00:40] <cehoyos> Did you mention somewhere (Changelog?) that the gif muxer does not support rawvideo anymore?
[00:40] <ubitux> nope..
[00:40] <durandal_1707> i seriously doubt anyone used that encoder ....
[00:41] <ubitux> my gif sample went from 3M to < 400kB
[00:41] <ubitux> and that's not aggressive optimization
[00:41] <ubitux> using the rawvideo muxer previously was definitely a bad idea :P
[00:42] <cehoyos> What used the rawvideo muxer previously?
[00:42] <durandal_1707>  /* better than nothing gif encoder */ :)))))
[00:43] <durandal_1707> cehoyos: gif muxer
[00:51] <durandal_1707> ugh, i must add more pixel formats to stereo3d so i can dump down3dright
[00:53] <durandal_1707> ubitux: funny. but does ffmpeg -i in.gif -codec copy out.gif work?
[00:55] <ubitux> mmh good question
[00:56] <ubitux> might need some adjustments :)
[00:56] <ubitux> it doesn't seem to work previously anyway
[00:57] <durandal_1707> the only case it would be useful is to change delay thing...
[01:09] <durandal_1707> ubitux: gonna add yourself to (c)?
[01:12] <ubitux> i don't really care
[01:12] <ubitux> should i?
[01:12] <durandal_1707> dunno...
[01:14] <ubitux> btw
[01:14] <ubitux> what do i do about timeline?
[01:14] <ubitux> i need that feature :(
[01:16] <durandal_1707> outvote?
[01:17] <ubitux> well, i just have nicolas that isn't really excited about that
[01:18] <ubitux> i got reviews from stefano, so well the code is ready
[01:18] <ubitux> but... no explicit lgtm or anything
[01:18] <ubitux> durandal_1707: not interested in the feature?
[01:19] <durandal_1707> feature - yes, implementation - so, so
[01:19] <ubitux> what would you prefer?
[01:20] <durandal_1707> that everyone is happy
[01:20] <ubitux> of course
[01:23] <durandal_1707> nevcairiel: what is this BLZ0 fourcc?
[02:45] <cone-429> ffmpeg.git 03Michael Niedermayer 07master:8ebfd7c49e44: h264: remove unused variable
[02:45] <cone-429> ffmpeg.git 03Michael Niedermayer 07master:a0fbc28c3881: vc1dec: Fix non pullup tff
[02:45] <cone-429> ffmpeg.git 03Michael Niedermayer 07master:6c9d28a2294d: vc1dec: Fix tff == 0 handling in init_block_index()
[02:58] <Compn> highgod is writing a vp8 gpu decoder, thats pretty awesome :)
[02:58] <Compn> i wonder for what video card?
[03:13] <kierank> amd i think
[03:13] <kierank> amd seem to be sending a ton of money to that company
[03:58] <highgod> Hi, I want to ask a question: how can I set a value in HAVE_LIST? For example, I want to add a CL_CL_H entry to detect whether cl/cl.h exists
[03:59] <highgod> @Compn:about the vp8 decoder,hehe, the performance is very low, and the result is not correct
[04:01] <iive> you've implemented vp8 decoder in opencl?
[04:02] <highgod> we want to, but it is so hard to use GPU to implement
[04:04] <highgod> and we are doing the work, it is very hard
[04:05] <iive> why is it hard? GPUs are not good when code needs to branch a lot?
[04:06] <oneal> Yes, GPUs aren't good at code with many branches
[04:07] <oneal> this is limited by the GPU architecture.
[04:08] <highgod> GPUs are not suitable for data that has strong correlations
[04:09] <highgod> So, can anyone give me some help with the configure thing? hehe, I am not familiar with configure, hehe. I referenced openjpeg, but still can't set the value
[04:11] <drv> add a check_header line down by check_header direct.h, then add an entry in HAVE_LIST
[04:16] <highgod> OK, I will try it, thanks
[04:17] <Skyler_> eesh, doing a video decoder on a gpu sounds very difficult
[04:19] <highgod> @drv:sorry, I can't find the direct.h file
[04:20] <iive> maybe a hybrid one would be more feasible. doing MC, quant/idct/pp look like tasks that could easily be done in parallel.
[04:21] <drv> highgod: sorry, that is a line in 'configure' :)
[04:21] <drv> as is the HAVE_LIST variable
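The change drv describes can be sketched roughly as below. This is a hypothetical fragment, not a tested patch: `check_header` and `HAVE_LIST` are real parts of ffmpeg's `configure`, but the exact token spelling (configure sanitizes a header path by replacing non-alphanumeric characters with underscores, so `CL/cl.h` should become something like `CL_cl_h`, with `HAVE_CL_CL_H` defined in config.h) and the placement should be verified against the current tree.

```shell
# Hypothetical sketch of the 'configure' edits discussed above.

# 1) Near the existing "check_header direct.h" line, probe for the header:
check_header CL/cl.h

# 2) Add the sanitized token to the HAVE_LIST variable so the result of the
#    probe ends up as HAVE_CL_CL_H in config.h:
HAVE_LIST="
    ...existing entries...
    CL_cl_h
"
```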
[04:21] <highgod> @iive:what is pp?
[04:22] <iive> post processing
[04:22] <iive> like loop filter
[04:22] <Skyler_> iive: idct+predict can at best be done in wavefront order, it's very hard to get enough parallelism (because of the dependencies with intra pred)
[04:22] <oneal> Yes, MC, quant/idct can be done in parallel.
[04:23] <Compn> what about using the driver api itself
[04:23] <Compn> without going into opencl
[04:23] <Compn> not even sure ati has open api on windows...
[04:23] <Compn> just to offload the mdct or cabac
[04:23] <Compn> (or whatever vp8 uses)
[04:23] <Compn> that shares with h264
[04:23] <oneal> but the GPU only gives a speedup when there is enough data to be processed.
[04:23] <Skyler_> um... both nvidia and amd use ASIC decoders that do the whole process basically
[04:24] <Skyler_> and are programmed by microcode on the graphics card
[04:24] <Skyler_> vp8 doesn't use the same idct as h264, nor the same entropy coder
[04:24] <Skyler_> nor the same prediction (very /similar/, but not the same)
[04:24] <Skyler_> nor the same MC, nor the same deblock...
[04:24] <oneal> in the pipeline of a decoder, there is a strictly serial procedure.
[04:25] <Compn> Skyler_ : darn, i thought something was similar :\
[04:25] <highgod> @drv:OK, find it, thanks
[04:25] <Skyler_> they're similar, but not in ways that are really useful
[04:27] <oneal> I have tried to optimize the inter-prediction and MC, but there is a dependency between the MCs within a frame.
[04:27] <highgod> @drv: it works, thanks
[04:29] <oneal> the design I have done is that all blocks to be processed by inter prediction are handled in one function (like an OpenCL kernel).
[04:30] <leo2013> hello
[04:30] <leo2013> ?
[04:31] <Compn> yo
[04:31] <leo2013> I met one problem in vp8
[04:31] <oneal> we assumed the inter-predicted blocks depend only on the reference block in the reference frame. in fact, there is also a dependency on the adjacent MBs.
[04:32] <leo2013> My problem is with the edge processing.
[04:33] <leo2013> for an input image whose size is 1920*1080
[04:33] <leo2013> the 1080 can't be divided evenly by 16
[04:33] <leo2013> so it would be 67.5 blocks.
[04:33] <leo2013> and for the last block, row 68,
[04:33] <leo2013> we have only half a block in dst.
[04:34] <leo2013> but the edge processing will fill it using 16*16
[04:34] <oneal> in this case, it should be padded to a multiple of 16
[04:34] <leo2013> so there is an overflow of more than 8 lines.
[04:34] <leo2013> I checked that mc_func[][]
[04:35] <leo2013> in one mode this function only copies the data to dst.
[04:35] <Skyler_> cropping is done after decoding
[04:35] <Skyler_> e.g. the buffer is allowed to be 1920x1088; the decoder should not have to special-case that internally
[04:35] <Skyler_> (h264 and all similar formats have the same kind of thing)
[04:36] <leo2013> but the height of dst is 1080.
[04:36] <Skyler_> then there's something wrong; you should be allocating a frame of the size of the decoded video, not the size of the displayed video
[04:38] <weixuan> the dst comes from the decoded video, not the displayed video; its height is 1080
[04:39] <Skyler_> um... you decide in your decoder what frame size to allocate, unless I'm totally missing something
[04:39] <Skyler_> because that's how every decoder works
[04:39] <Skyler_> look at how vp8.c does cropping, or how h264.c does cropping
[04:40] <leo2013> #define PUT_PIXELS(WIDTH) \
[04:40] <leo2013> static void put_vp8_pixels ## WIDTH ##_c(uint8_t *dst, ptrdiff_t dststride, uint8_t *src, ptrdiff_t srcstride, int h, int x, int y) { \
[04:40] <leo2013>     int i; \
[04:40] <leo2013>     for (i = 0; i < h; i++, dst+= dststride, src+= srcstride) { \
[04:40] <leo2013>         memcpy(dst, src, WIDTH); \
[04:40] <leo2013>     } \
[04:40] <leo2013> }
[04:40] <Skyler_> ?
[04:40] <leo2013> in ffmpeg,vp8 will use this func.
[04:40] <Skyler_> yes...?
[04:40] <leo2013> yes.
[04:40] <Skyler_> that's a C implementation of the 00 MC position
[04:40] <leo2013> yes.
[04:40] <leo2013> that the c implementation for 00
[04:40] <Skyler_> but what does this have to do with post-decode cropping?
[04:41] <leo2013> that'll overflow for the last line.
[04:41] <Skyler_> erm, no it won't
[04:41] <leo2013> if the last block row can't be divided evenly.
[04:41] <Skyler_> No, again, it won't.
[04:41] <leo2013> why?
[04:42] <leo2013> dst height is 1080, and the block after padded is 1088
[04:42] <Skyler_> dst height is 1088
[04:42] <Skyler_> Your video has a coded size of 1920x1088.
[04:42] <Skyler_> Your video has a DISPLAY size of 1920x1080.
[04:43] <Skyler_> Your video could, in theory, have a display size of 532x362, with a coded size of 1920x1088 (this is possible with h264, for example)
[04:43] <leo2013> no, the coded size is 1984*1088
[04:43] <leo2013> so it's 124 blocks horizontally.
[04:44] <Skyler_> Um....
[04:44] <Skyler_> how did it go from 1920 to 1984?
[04:45] <leo2013> I used the av_log to check the real width.
[04:46] <leo2013> s->mb_width
[04:46] <weixuan> 1984 is the linesize of the y plane
[04:47] <weixuan> and it is the decoded video's linesize
[04:47] <Skyler_> Linesize is not width!
[04:47] <weixuan> yes
[04:47] <Skyler_> They are two different things.
[04:47] <leo2013> yes.
[04:48] <leo2013> but it's calculated in mc_func
[04:48] <leo2013> in the func: mc_func[0][0](dst, linesize, src + y_off * linesize + x_off, linesize, block_h, 0, 0);
[04:48] <leo2013> just like you see, for block 68, the block_h is 16
[04:48] <Skyler_> ???? that function has nothing to do with what you're talking about
[04:48] <Skyler_> that's the height of a macroblock, which has nothing to do with cropping frames
[04:49] <leo2013> where does the other data (more than 8 lines) come from?
[04:49] <iive> i actually don't remember how MC handled the down/right edges. does it emu_edge/pad on the visible image boundary or on MB boundary? 
[04:49] <Skyler_> You're confusing the coded and display resolution again.
[04:49] <Skyler_> iive: the spec itself does decoding to the coded size
[04:49] <leo2013> the other data(more than 8 lines) will come from ref-frames.
[04:50] <Skyler_> Your decoder literally does not know it is possible to create frames that are not divisible by 16.
[04:50] <Skyler_> If your decoder knows this, your decoder is written incorrectly.
[04:50] <Skyler_> Cropping is done after decoding.
[04:50] <iive> fair enough.
[04:50] <Skyler_> The actual specification dictates that decoding is done on the coded frame, not the display frame.
[04:50] <iive> i assume that encoders should pad the extra pixels before encoding.
[04:51] <Skyler_> actually.  I'm 98% sure of that.  Need to confirm it's true for VP8, I'd assume it is, but
[04:51] <leo2013> yes, but before the cropping, you need the loopfilter.
[04:51] <leo2013> the loopfilter uses the filled data from dst in the last block row, right?
[04:51] <Skyler_> Yes, loopfilter is part of decoding.  It is not something that happens after decoding.
[04:51] <Skyler_> This is why it is called a loopfilter.
[04:52] <Skyler_> okay, confirmed, the code in MC does this:
[04:52] <Skyler_>     int width = 16*s->mb_width, height = 16*s->mb_height;
[04:52] <Skyler_> VP8 operates the same way as H.264 then.
[04:52] <Skyler_> From the perspective of the decoder, all decoding is done on a plane of macroblocks whose width and height are divisible by 16 pixels.
[04:53] <iive> things are much simpler this way :)
[04:53] <Skyler_> After the decoder is done, it says "oh, and by the way, you should only display 1920x1080 of this, not 1920x1088 of this."
[04:53] <Skyler_> But for the decoder, decoding a 1920x1080 and 1920x1088 stream is exactly, precisely, identical.
[04:53] <Skyler_> In fact, if the decoder doesn't do it that way, its output won't be correct.
[04:54] <leo2013> yes, so the width and the height will be the cropping results. but for the last block row, it depends on the overflow parts, right?
[04:54] <Skyler_> What do you mean?
[04:54] <Skyler_> Yes, the decoder will use parts of the frame that are not displayed.  This is by design, and the decoder does not know about this or care.
[04:54] <leo2013> we decoded the last block line.
[04:55] <leo2013> in fact, the last block row is only 8 lines for the input (1080p)
[04:55] <iive> only half of the last block line would be shown.
[04:55] <leo2013> what about the overflowing 8 lines in dst[0]?
[04:55] <Skyler_> *sigh* how many times do I have to repeat myself before you will read what I am saying?
[04:55] <leo2013> yes.you're right.
[04:55] <iive> but it is decoded as a whole
[04:55] <leo2013> but how to fill it?
[04:55] <Skyler_> The same way you fill any other chunk of memory, by writing to it?
[04:56] <iive> it is not decoder problem.
[04:56] <leo2013> only filled the last 8 lines according to 8 lines in dst[0]?
[04:57] <Skyler_> *sigh*
[04:57] <iive> leo2013: imagine this. you get 1920x1080 before encoding. you expand it to 1920x1088 and encode it.
[04:57] <leo2013> we don't have more than 8 lines of data to fill it.
[04:57] <Skyler_> What do you mean?
[04:57] <Skyler_> Your reference frame is 1920x1088.
[04:57] <Skyler_> Your current frame is 1920x1088.
[04:57] <Skyler_> All your frames are 1920x1088.
[04:57] <Skyler_> If they're not, your code is incorrect and will not decode VP8 correctly.
[04:57] <iive> you always decode 1920x1088 and then cut the last 8 lines.
[04:57] <leo2013> I want to know where the data comes from for the last 8 lines?
[04:58] <Skyler_> Um, you create it.  You're the decoder.
[04:58] <Skyler_> Your job is to decode the video
[04:58] <Skyler_> which means decoding the pixels coded in the video file...
[04:58] <iive> leo2013: the data is padded (filled) before encoding.
[04:58] <Skyler_> which includes those last 8 lines.
[04:58] <Skyler_> for a total of 1920x1088.
[05:01] <oneal> leo2013: the encoded video stream contains 1920x1088, not the 1920x1080.
[05:02] <Skyler_> exactly !
[05:03] <iive> Skyler_: btw does x264 repeat the edge pixels, when padding like this?
[05:04] <Skyler_> yup
[05:04] <Skyler_> I tried some other approaches (e.g. mirroring), but inter prediction makes mirroring a lot worse than repeating
[05:04] <Skyler_> in pure intra mode it might help?  I'm not sure if I tried that (I think JPEG generally does repeating)
[05:05] <iive> ok, i'm off
[05:06] <leo2013> you mean my reference frame is 1920*1088 too?
[05:06] <Skyler_> correct
[05:07] <leo2013> and the key frame is 1920*1088 to fill the reference frame.
[05:07] <oneal> Skyler, what's the meaning of "I tried some other approaches (e.g. mirroring), but inter prediction makes mirroring a lot worse than repeating"?
[05:08] <Skyler_> oh, it's not related to what you're doing, but the encoder gets to decide how to fill those padded pixels (to pad up to divisible-by-16)
[05:08] <Skyler_> the easy way is padding, but in theory there could be something smarter
[05:08] <leo2013> so the padded data (8 lines) comes from the key frame, which is more than 1080 lines.
[05:08] <Skyler_> JPEG encoders typically use a technique called mirroring where they "mirror" the pixels over the edge, instead of repeating the last line over and over
[05:08] <Skyler_> but I found this was pretty useless for video.
[05:09] <oneal> oh, I understand what's you mean
[05:09] <Skyler_> so basically, the encoder gets 1920x1080, and needs to turn it into 1920x1088
[05:09] <Skyler_> in the way that takes the fewest possible extra bits.
[05:09] <Skyler_> the decoder then decodes it to 1920x1088, and throws away the last 8 lines when it's done, to get 1920x1080 again.
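The coded-vs-display distinction Skyler_ keeps repeating can be illustrated with a tiny sketch. This is not ffmpeg API; `coded_dim` is a hypothetical helper showing only the arithmetic: a 16x16-macroblock codec rounds each display dimension up to the next multiple of 16, decodes everything at that size, and crops afterwards.

```c
/* Illustrative sketch, not ffmpeg code: round a display dimension up to
 * the coded dimension, i.e. the next multiple of the 16-pixel macroblock
 * size. Decoding always operates on the coded dimensions; cropping to
 * the display size happens only after the frame is fully decoded. */
static int coded_dim(int display_dim)
{
    return (display_dim + 15) & ~15;
}
```

For 1080p this gives 1088: 68 macroblock rows are decoded, and the last 8 lines simply aren't displayed.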
[05:23] <leo2013> In emulated_edge_mc,for(; y<end_y; y++){
[05:23] <leo2013>         memcpy(buf, src, w*sizeof(pixel));
[05:23] <leo2013>         src += linesize;
[05:23] <leo2013>         buf += linesize;
[05:23] <leo2013>     }
[05:25] <leo2013> the end_y can be 21, and the src starts from line 1071, so when copying src to buf, it'll go past 1088.
[05:47] <Skyler_> leo2013:     end_y = FFMIN(block_h, h-src_y);
[05:47] <leo2013> Another problem I met in vp8 is the edge processing in inter_predict, especially for the last block row in emulated_edge_mc. There is a possibility of overflow when filling td->edge_emu_buffer.
[05:48] <leo2013> yes.
[05:48] <leo2013>  end_y = FFMIN(block_h, h-src_y);
[05:48] <leo2013> this min value can be 21
[05:48] <leo2013> 21 for Y
[05:49] <leo2013> 8+8+5
[05:50] <Skyler_> What is block_h?
[05:50] <leo2013> block_h is 16
[05:51] <leo2013> h-src_y
[05:51] <Skyler_> What is the maximum value of MIN(16,x)?
[05:55] <leo2013> block_h + subpel_idx[1][my]
[05:55] <leo2013> but what if the block_h is 21?
[05:55] <weixuan> block_h + subpel_idx[1][my]
[05:56] <weixuan> and the block_h = 16 and the subpel_idx[1][my] = 5
[05:56] <weixuan> so the final value of the block_h = 21
[05:57] <weixuan> so it will overflow the range of the src
[05:57] <leo2013> static const uint8_t subpel_idx[3][8] = {
[05:57] <leo2013>     { 0, 1, 2, 1, 2, 1, 2, 1 }, // nr. of left extra pixels,
[05:57] <leo2013>     { 0, 3, 5, 3, 5, 3, 5, 3 }, // nr. of extra pixels required
[05:57] <leo2013>     { 0, 2, 3, 2, 3, 2, 3, 2 }, // nr. of right extra pixels
[05:57] <leo2013> };
[05:57] <Compn> you guys have to talk to BBB-
[05:57] <Compn> hes the vp8 guy :)
[05:58] <Compn> i am not, and its time for me to sleep. night and good luck :)
[05:58] <Compn> probably michaelni knows something about it as well 
[05:58] <leo2013> extra pixels required
[05:58] <Skyler_> except, no.
[05:58] <Compn> and of course Skyler_ :)
[05:58] <Skyler_> end_y = FFMIN(block_h, h-src_y); 
[05:58] <Skyler_> put simply:
[05:59] <weixuan> good night
[05:59] <Skyler_> The loop you pointed to copies the pixels that do not need to be emulated.
[05:59] <Skyler_> The MIN() statement says "we don't copy more pixels than we need rows of pixels (obviously)"
[05:59] <Skyler_> that's the h-src_y part
[05:59] <Skyler_> er, correction
[05:59] <Skyler_> that's the block_h part
[05:59] <Skyler_> h-src_y is the constraint on the number of rows available.
[05:59] <Skyler_> But the inner workings of emulated_edge_mc aren't actually important; it's probably easier to understand _what_ it does than how it works.
[06:00] <Skyler_> All it does is "if a pixel is off the frame, calculate it to be equal to the closest pixel that is on the frame".  That's it.
[06:00] <Skyler_> It's a bit obfuscated, yes <.<
[06:02] <leo2013> I found several blocks that fit that condition. for pixels past the bottom of src, what's the nearest value that could be gotten?
[06:03] <Skyler_> The phrasing I used is equivalent to "any pixel past the edge is equivalent to what you'd get if you padded the edge by repeating the last line over and over"
[06:03] <Skyler_> So this generally leaves you with two options in a codec:
[06:04] <Skyler_> 1) pad the edges of the frame with the last line/column, repeated ~16 times or however many is necessary to guarantee correct behavior.  Don't implement emulated_edge_mc.
[06:04] <Skyler_> Advantage: faster MC, no extra special case
[06:04] <Skyler_> Disadvantage: requires, well, padding every fram
[06:04] <Skyler_> *frame
[06:04] <Skyler_> 2) emulated_edge_mc, which eliminates the need for padding.
[06:04] <Skyler_> I think technically you need about 19 lines of padding because of subpel.
[06:05] <Skyler_> I think vp8 used to support both, but 1) was eventually removed due to not being faster?  I'm not sure.
[06:05] <Skyler_> But which one you do is totally your choice.
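What Skyler_ describes ("if a pixel is off the frame, calculate it to be equal to the closest pixel that is on the frame") reduces to coordinate clamping. The sketch below uses a hypothetical per-pixel helper, not the actual emulated_edge_mc, which copies whole blocks for speed; the semantics are the same, and equivalent to having padded the edges by repeating the last row/column.

```c
/* Minimal sketch of the semantics of emulated_edge_mc (hypothetical
 * helper, not the ffmpeg implementation): any read outside the w x h
 * frame is served by the nearest in-frame pixel, i.e. coordinates are
 * clamped. This matches option 1 (edge padding by repetition) exactly. */
static unsigned char edge_pixel(const unsigned char *plane, int linesize,
                                int w, int h, int x, int y)
{
    if (x < 0)       x = 0;
    else if (x >= w) x = w - 1;
    if (y < 0)       y = 0;
    else if (y >= h) y = h - 1;
    return plane[y * linesize + x];
}
```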
[09:53] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:68d8238cca52: hpeldsp: Add half-pel functions (currently copies of dsputil)
[09:53] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:23de9e91df1c: Merge commit '68d8238cca52e50e8cc81bf2edcaf8088c52d4c0'
[10:08] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:3bd062bf7f81: vp3: Use hpeldsp instead of dsputil for half-pel functions
[10:08] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:cb7ecb75635d: vp56: Use hpeldsp instead of dsputil for half-pel functions
[10:08] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:8f992dc8c7c5: indeo3: Use hpeldsp instead of dsputil for half-pel functions
[10:08] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:0f0a11d5768e: bink: Use hpeldsp instead of dsputil for half-pel functions
[10:08] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:8071264f2196: interplayvideo: Use hpeldsp instead of dsputil for half-pel functions
[10:08] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:3c6621708b93: Merge commit '8071264f2196d71ff49c3944c33f8d3d83f548f1'
[10:22] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:c10470035e60: mimic: Use hpeldsp instead of dsputil for half-pel functions
[10:23] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:6caa44aa7df0: svq1: Use hpeldsp instead of dsputil for half-pel functions
[10:23] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:f4fed5a2f97e: mpegvideo: Use hpeldsp instead of dsputil for half-pel functions
[10:23] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:2f6bc5f7c193: svq3: Use hpeldsp instead of dsputil for half-pel functions
[10:23] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:ab4ba6b74d9e: Merge commit '2f6bc5f7c193477c2ebc0acce8f2d5551445e129'
[12:50] <saste> ubitux: comments on interleave?
[13:05] <ubitux> saste: ok, will do in a moment
[13:05] <ubitux> thanks for the timeline reviews btw
[13:28] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:1277dc07fbe6: svq1enc: Use hpeldsp instead of dsputil for half-pel functions
[13:28] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:28bc406c84b0: mjpeg: Use hpeldsp instead of dsputil for half-pel functions
[13:28] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:3fee9fa02232: Merge commit '28bc406c84b04a5f1458b90ff52ddbec73e46202'
[13:44] <durandal_1707> michaelni: so any ideas for s302m?
[13:46] <saste> ubitux: ffplay IN -vf "select='if(gt(random(0), 0.2), 1, 2)':n=2 [tmp], edgedetect, [tmp] interleave"
[13:48] <durandal_1707> michaelni: if not, i will push it with experimental flag and be done with it... (hoping others can fix it...)
[13:50] <michaelni> durandal_1707, I am not sure how to solve it best, a s302 muxer would be an option instead of an encoder, or a private option that contains the frame rate but that would need a change in ffmpeg*.c to set it, i also don't mind if you push with the flag ...
[13:51] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:8db00081a37d: x86: hpeldsp: Move half-pel assembly from dsputil to hpeldsp
[13:51] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:d0aa60da1022: Merge commit '8db00081a37d5b7e23918ee500bb16bc59b57197'
[13:53] <durandal_1707> michaelni: a s302m muxer is useless, how would you then put it in ts?
[13:54] <michaelni> the s302 muxer would have to pass the stuff into a ts muxer, not sure how that would look
[13:54] <durandal_1707> what's wrong with the internal approach where, when the variable frame size flag is set, the interleave thing sets the frame size that s302m expects?
[13:55] <durandal_1707> or additional flag could be introduced
[13:55] <michaelni> what interleave code exactly do you speak of ?
[13:56] <durandal_1707> whatever code that sets frame->nb_samples when its 0 in init
[13:57] <michaelni> my concern is just that complexity should not be moved into the user application
[13:57] <michaelni> that is, the encoder or avcodec or avformat should do the work
[13:58] <michaelni> needing every user app to have code to get the interleaving right would be annoying, thats why i asked which code you meant ...
[13:59] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:47e5a98174eb: ppc: hpeldsp: Move half-pel assembly from dsputil to hpeldsp
[13:59] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:055e5c8e01c5: Merge commit '47e5a98174eb9c07ad17be71df129719d60ec8b7'
[13:59] <durandal_1707> so the variable frame flag just means the user app can set whatever it wants?
[14:03] <durandal_1707> saste: buffer queue overflows don't happen with interleave anymore?
[14:04] <michaelni> durandal_1707, about var-frame within limits id say yes
[14:05] <durandal_1707> michaelni: and it can't be changed after being set in init, because for some fps it can't be the same for every frame, so it goes 1601,1602,1601,1602 or similar
[14:05] <durandal_1707> so the only way it can currently be handled is from the user app
[14:07] <durandal_1707> hmm, it can be changed ..., from a decoder, but from an encoder and without the flag?
[14:08] <durandal_1707> it simply can't, or am i missing something?
[14:08] <durandal_1707> so it's an API limitation
[14:09] <durandal_1707> i could do internal buffering ...
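The alternating frame sizes behind this discussion fall out of the arithmetic: at 30000/1001 fps and 48 kHz there are 48000*1001/30000 = 1601.6 samples per video frame, so per-frame counts must alternate between 1601 and 1602, summing to exactly 8008 every 5 frames. A sketch of the cadence (hypothetical helper, not a proposed API):

```c
/* Sketch of the 1601/1602 cadence for 48 kHz audio at 30000/1001 fps:
 * the number of samples belonging to frame n is the difference of the
 * cumulative sample counts floor((n+1)*48000*1001/30000) and
 * floor(n*48000*1001/30000), so no drift ever accumulates. */
static int samples_for_frame(long n)
{
    long long next = (long long)(n + 1) * 48000 * 1001 / 30000;
    long long cur  = (long long)n * 48000 * 1001 / 30000;
    return (int)(next - cur);
}
```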
[14:10] <ubitux> saste: ok, will try soon, but first: since i'm lazy, do you have an out-of-the-box working command injection test?
[14:10] <ubitux> (i've restored the enable command, and now for all filters)
[14:10] <ubitux> (supporting the timeline of course)
[14:14] <michaelni> durandal_1707, yes internal buffering may be an option 
[14:21] <saste> ubitux: an out-of-the-box working command injection test??
[14:21] <saste> I use sendcmd to test process_command() callbacks
[14:22] <ubitux> yeah it's ok i've finally made one
[14:25] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:7384b7a71338: arm: hpeldsp: Move half-pel assembly from dsputil to hpeldsp
[14:25] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:c4010972c4a7: Merge commit '7384b7a71338d960e421d6dc3d77da09b0a442cb'
[14:26] <durandal_1707> kierank: could you provide useful file with s302m codec for coverage test? (the one i found is 0 only)
[14:27] <kierank> not sure if i can make any s302m public
[14:27] <kierank> i think vlc have some
[14:29] <kierank> you have the spec folder access, right?
[14:29] <durandal_1707> where?
[14:29] <kierank> oh apparently not
[14:29] <kierank> do you have dropbox
[14:31] <kierank> https://dl.dropboxusercontent.com/u/2701213/Specs/SMPTE/SMPTE%20Standards/s302m-2007.pdf
[14:32] <kierank> whatever you end up doing in ffmpeg, i'll probably have to make a small fork of s302m because i need to set the frame size manually
[14:34] <durandal_1707> kierank: why?
[14:36] <durandal_1707> in that case i'll just leave the code as is (it will obviously need changes in ffmpeg ....)
[14:36] <durandal_1707> and where are those vlc's s302m samples?
[14:39] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:bfb41b5039e3: bfin: hpeldsp: Move half-pel assembly from dsputil to hpeldsp
[14:39] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:c5a11ab6d106: Merge commit 'bfb41b5039e36b7f873d6ea7d24b31bf3e1a8075'
[14:55] <durandal_1707> hmm, what about w3fdif deinterlacer from FFmbc?
[14:55] <kierank> durandal_1707: because 302m is used for low latency and so i get frames with 1601, 1602, 1601 etc. I will need to pass whether i am 1601 or 1602 straight from hardware
[14:56] <kierank> durandal_1707: in vlc samples repo i guess
[14:57] <durandal_1707> sorry, i dont know where that is
[15:01] <kierank> http://streams.videolan.org/ somewhere but i can't seem to find it
[15:09] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:278bd2054ca6: sh4: hpeldsp: Move half-pel assembly from dsputil to hpeldsp
[15:09] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:4bdec0e71edc: Merge commit '278bd2054ca61ab70dfe38f1774409cda2da5359'
[15:13] <durandal_1707> kierank: well, a private option to not use internal buffering at all could be added, leaving it to the calling code
[15:20] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:78ce568e43a7: sparc: hpeldsp: Move vis half-pel assembly from dsputil to hpeldsp
[15:20] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:fdb1f7eb7a80: Merge commit '78ce568e43a7f3993c33100aa8f5d56c4c4bd493'
[15:27] <ubitux> saste: hey the blur effect is awesome :)
[15:27] <ubitux> i believe you can do the same with enable now though :)
[15:27] <durandal_1707> how do you repeat the Nth frame X times?
[15:30] <ubitux> ./ffplay ~/samples/matrixbench_mpeg2.mpg -vf "boxblur=enable='if(gt(random(0), 0.2), 0, 1)'"
[15:30] <ubitux> hehe
[15:30] <ubitux> durandal_1707: from the cmdline?
[15:30] <durandal_1707> how that can be useful?
[15:30] <durandal_1707> ubitux: with lavfi
[15:31] <ubitux> maybe we could adjust copy to re-inject the same frame
[15:31] <ubitux> re-inject each frame copied N times
[15:31] <ubitux> (and use enable to make the copy only once)
[15:31] <durandal_1707> FFmbc has repeatframe
[15:32] <ubitux> (durandal_1707: the boxblur thing makes some kind of wave effect)
[15:32] <ubitux> (so that's awesome)
[15:33] <ubitux> durandal_1707: how can this dup thing be useful?
[15:34] <durandal_1707> dup thing?
[15:34] <ubitux> repeatframe
[15:34] <ubitux> what you are talking about
[15:34] <durandal_1707> filter from FFmbc
[15:34] <ubitux> yes but what is the purpose?
[15:35] <durandal_1707> to repeat frame
[15:35] <ubitux> what for?
[15:36] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:2957d29f0531: alpha: hpeldsp: Move half-pel assembly from dsputil to hpeldsp
[15:36] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:6ec26157b96b: Merge commit '2957d29f0531ccd8a6f4378293424dfd92db3044'
[15:42] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:54cd5e4f92de: dsputil: Remove hpel functions (moved to hpeldsp)
[15:42] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:d2d2c309e8f3: Merge commit '54cd5e4f92de6bd0fb8e24069153b0156c8136bc'
[15:42] <durandal_1707> ubitux: to repeat single image with blend filter....
[15:50] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:c443117f25e0: dsputil: Remove dct_bits
[15:50] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:c3624cfe7638: Merge commit 'c443117f25e034c1e3ef35334b61b937e1e208ff'
[15:54] <ubitux> http://lucy.pkh.me/youtube-free.webm youtube and my ISP, yepee.
[15:55] <ubitux> 0kb/sec regularly ffs
[15:55] <ubitux> (i can in practice download up to 1.6MBytes/sec)
[15:59] <Compn> ubitux : dang
[16:00] <Compn> ubitux : throttling ?
[16:00] <ubitux> it's been like this for about 2 years
[16:00] <ubitux> it's particularly painful nowadays
[16:01] <ubitux> Compn: it's a ping pong about who will pay between google & my isp
[16:01] <Compn> too bad you cant use youtube over https
[16:01] <Compn> the video urls still seem to be http
[16:01] <Compn> html over https tho
[16:01] <ubitux> it's the same with https
[16:02] <Compn> hows the speed on googlevideo or vimeo ?
[16:03] <ubitux> it's fine most of the time for vimeo
[16:03] <ubitux> (googlevideo still exists?)
[16:03] <Compn> yeah they were going to kill it, but then didnt
[16:03] <Compn> but the last video i tried to watch there was missing or unavailable
[16:03] <Compn> they were thinking about migrating it to youtube, but i dunno current status
[16:04] <Compn> they pestered me about moving my single video to youtube, which i finally did
[16:04] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:619e0da19119: dsputil: Remove unused 32-bit functions
[16:04] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:f6dcd844ee65: Merge commit '619e0da19119bcd683f135fe9a164f37c0ca70d1'
[16:06] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:c9f5fcd08c3a: dsputil: Merge 9-10 bpp functions for get_pixels and draw_edge
[16:06] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:bf66016e4c03: Merge commit 'c9f5fcd08c3a33bfb1b473705c792ab051e7428d'
[16:20] <cone-132> ffmpeg.git 03Ronald S. Bultje 07master:d4d186d185df: dsputil: Remove non-8bpp draw_edge
[16:20] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:9ae56b85b664: Merge commit 'd4d186d185df98492d8935a87c5b5cf369db9748'
[16:24] <cone-132> ffmpeg.git 03Martin Storsjö 07master:287c8db39e71: cosmetics: bfin: Fix indentation in the dsputil init function
[16:24] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:0e3d2b2c8d02: Merge commit '287c8db39e71af7047e551bbfd1264d771cccbc9'
[16:28] <saste> btw, will there be SOCIS this year?
[16:29] <saste> in that case we are going to propose something very space-oriented (like the star mapper proposed by michaelni)
[16:29] <j-b> saste: ping
[16:29] <saste> j-b: pong?
[16:32] <cone-132> ffmpeg.git 03Martin Storsjö 07master:a60136ee570c: vc1: Remove now unused variables
[16:32] <cone-132> ffmpeg.git 03Martin Storsjö 07master:b71a0507b01e: x86: Remove unused inline asm instruction defines
[16:32] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:0dd25e46996d: Merge remote-tracking branch 'qatar/master'
[16:59] <BBB-> ?
[17:06] <Compn> just gpu devs looking for vp8 help 
[17:11] <kierank> durandal_1707: who's  Darryl Wallace?
[17:13] <BBB-> hm I'll respond to the ML again
[17:14] <BBB-> next time they're on IRC, please ask them for samples. I don't like the thought of a bug at all, better to confirm quickly or just confirm that it's a bug on their end
[17:16] <Compn> sounds like it was a bug on their end with edge emulation and display res
[19:24] <durandal_1707> kierank: the guy who added pull request on ffmpeg github mirror
[19:24] <kierank> ah
[19:27] <durandal_1707> michaelni: how fast is subsampled yuv to unsubsampled yuv conversion in sws?
[19:37] <xlinkz0> can i ask library usage questions here? #ffmpeg seems to be pretty dead
[19:39] <michaelni> durandal_1707, someone would have to benchmark 
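One way to benchmark it empirically, as suggested (a sketch; `input.mp4` is an assumed local yuv420p file): let ffmpeg decode, upsample to yuv444p through swscale, and discard the output, then compare the reported times against a baseline run without the format conversion:

```shell
# Sketch: rough swscale chroma-upsampling benchmark
# ("input.mp4" is an assumed yuv420p file)
ffmpeg -benchmark -i input.mp4 -pix_fmt yuv444p -f null -   # with 420p -> 444p conversion
ffmpeg -benchmark -i input.mp4 -f null -                    # baseline, no conversion
```

The difference between the two `-benchmark` reports approximates the cost of the subsampled-to-unsubsampled conversion for that input.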
[19:40] <funman> win 15
[19:49] <cone-132> ffmpeg.git 03Paul B Mahol 07master:a56fd051ee64: lavfi/stereo3d: support more formats for non-anaglyph outputs
[19:57] <cone-132> ffmpeg.git 03highgod0401 07master:fdad04e75628: avfilter/deshake_kernel: fix reset value bug of deshake kernel
[21:08] <michaelni> j-b, "http://download.videolan.org/pub/contrib/c99-to-c89/" seems dead (https://ffmpeg.org/trac/ffmpeg/ticket/2487)
[21:09] <michaelni> s/dead/404/
[21:11] <funman> there's https://github.com/mstorsjo/c99-to-c89
[21:11] <funman> dunno if we should make a tarball
[21:12] <funman> ah apparently it was binaries
[21:12] <michaelni> yep, the link from github.com/mstorsjo/c99-to-c89 also points to videolan for binaries
[21:12] <michaelni> same 404
[21:13] <funman> i can't find that folder anywhere else on the ftp, i guess it has been deleted..
[21:13] <funman> j-b: was it you?
[21:13] <j-b> no, but I think it was a weird old script
[21:14] <j-b> I'll re-put that in place
[21:15] <cehoyos> j-b: Hi, is there a reason why the c99-wrap.exe binaries disappeared from http://download.videolan.org/pub/contrib/c99-to-c89/
[21:15] <cehoyos> ?
[21:16] <cehoyos> nevcairiel: Could we provide the binaries you use on ffmpeg.org?
[21:16] <funman> cehoyos: you just missed the conversation ^_^
[21:16] <cehoyos> (Iirc, this is the third download location that disappeared)
[21:16] <funman> 21:14 <@j-b> I'll re-put that in place
[21:16] <funman> 21:14 -!- cehoyos [~cehoyos at chello080108089202.30.11.tuwien.teleweb.at] has joined #ffmpeg-devel
[21:16] <cehoyos> Enlighten me please
[21:16] <cehoyos> ok, thank you!
[21:17] <cehoyos> Ah, you mean "just" as in "really just" =-)
[21:36] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:4824aea7afd4: avcodec/mpegvideo: change asserts to av_asserts
[22:09] <cone-132> ffmpeg.git 03Marton Balint 07master:40693ba3ac3f: ffplay: simplify aspect ratio calculation
[22:09] <cone-132> ffmpeg.git 03Marton Balint 07master:d148339d19c6: ffplay: use AV_NOPTS_VALUE video frame pts instead of using 0
[22:09] <cone-132> ffmpeg.git 03Marton Balint 07master:b8facbeecb66: ffplay: only do early frame drop if video queue is not empty
[22:09] <cone-132> ffmpeg.git 03Michael Niedermayer 07master:eda61abc846b: Merge remote-tracking branch 'cus/stable'
[22:15] <cehoyos> ubitux: I never tried to reproduce ticket 2446, could you test and close the ticket if it is fixed?
[23:03] <ubitux> cehoyos: sure, will do
[23:06] <cehoyos> Merci
[23:51] <ubitux> hey btw
[23:51] <ubitux> just came accross this: http://cr.i3wm.org/
[23:51] <ubitux> it looks awesome for contributions
[00:00] --- Sun Apr 21 2013


More information about the Ffmpeg-devel-irc mailing list