[Ffmpeg-devel-irc] ffmpeg.log.20180222

burek burek021 at gmail.com
Fri Feb 23 03:05:01 EET 2018


[00:42:58 CET] <analogical> anyone know if the roku player can stream files from an SMB share??
[01:24:26 CET] <mvmv> Does ffmpeg support taking images directly from the web as an input method? If so, are special flags documented and available? A test run as with the following command yeild a zero byte output file:
[01:24:32 CET] <mvmv> ffmpeg -loop 1 -i https://upload.wikimedia.org/wikipedia/commons/c/c6/2013_Porsche_911_Carrera_4S_%28991%29_%289626546987%29.jpg out.mp4
[01:26:31 CET] <furq> huh
[01:26:37 CET] <furq> that gets oom killed on both systems i tried it on
[01:29:10 CET] <mvmv> No errors on Amazon Linux...
[01:30:12 CET] <furq> it doesn't seem to be buffering the input image properly
[01:30:23 CET] <furq> but also it's near enough to 4k so x264 will need a ton of memory
[01:31:28 CET] <furq> dmesg | grep killed
[01:31:40 CET] <furq> if it's getting oom killed it should show up there
[01:31:47 CET] <shtomik> Hi guys, I'm using sample from examples/transcoding.c, but after transcoding I get spoiled sound, but sometimes error: more samples than frame size (avcodec_encode_audio2), but in this example I have a filter graph, that convert audio sample fits, channel layout and sample rate, what am I doing wrong? how to change packet size, if there is required? or I'm asking not correct questions? Thanks!
[01:32:25 CET] <furq> shtomik: https://www.ffmpeg.org/doxygen/trunk/doc_2examples_2resampling_audio_8c-example.html
[01:36:45 CET] <shtomik> furq: Okay, thanks, and what about filters ??? They don't work on that case? So sorry about my English.
[01:40:49 CET] <shtomik> <furq> ;)
[01:40:52 CET] <mvmv> furq: No errors found there either, but tested some additional images with different results, so seems to be memory related, thanks for the lead.
[03:45:12 CET] <convert> haha google took over ffmpeg and removed ffserver thats great
[03:45:38 CET] <convert> youtube pwns ffmpeg now
[03:46:08 CET] <convert> you should print a "welcome to opensource" banner when ffmpeg runs
[03:48:55 CET] <convert> follow the money behind whoever wrote that piece on the ffmpeg wiki!
[03:51:41 CET] <convert> "easier to maintain" is because google replaces competent opensource developers with developers who are technically not competent to deal with the challenges of software development or refuse on incentive to do what is required by the software but instead serve interests of the kingdom of the  antichrist
[03:52:11 CET] <convert> the api requirements are for a hypothetical google market, not for the users of ffmpeg
[03:52:41 CET] <convert> the actual users are irrelevant compared to the model that is funded by the "investors in opensource"
[03:58:26 CET] <klaxa> lol what
[03:59:02 CET] <klaxa> you are free to fund developers for ffserver, but it's going to be hard to find some
[03:59:23 CET] <klaxa> there are numerous better ways to do streaming in 2018
[03:59:51 CET] <convert> all those have dependencies which google feeds on
[04:00:02 CET] <convert> the only dependency earlier for ffserver was ffmpeg
[04:00:14 CET] <convert> it is the kingdom in action
[04:01:03 CET] <klaxa> what are these other dependencies?
[04:01:10 CET] <klaxa> i'm thinking of nginx
[04:01:25 CET] <convert> oh i was not thinking of that
[04:01:30 CET] <klaxa> with hls you are even free to use any webserver you want
[04:01:39 CET] <convert> afk..
[04:05:48 CET] <furq> you people have vivid imaginations
[04:06:23 CET] <furq> ffserver was removed because it was a piece of shit that never worked properly and was unmaintained for years
[04:06:41 CET] <convert> to be honest, i never used it myself
[04:06:46 CET] <furq> lol
[04:06:49 CET] <convert> hehe
[04:06:55 CET] <furq> i am truly sorry for your loss
[04:24:52 CET] <kepstin> ... wow, google apparently recommends using cq target quality of '31' with vp9 on 1080p content?
[04:25:06 CET] <kepstin> that's definitely in the range where i'm still seeing noticable artifacts
[04:25:19 CET] <kepstin> but well, I guess that kinda matches what I see on youtube :/
[04:25:25 CET] <kepstin> (from https://developers.google.com/media/vp9/settings/vod/ )
[09:11:58 CET] <bodqhrohro> Looks like the `mix` filter is missing in my ffmpeg installation. Can I use something instead?
[09:12:42 CET] <bodqhrohro> I need to merge an input stream with some geq generated noise streams
[09:19:39 CET] <pmjdebruijn> bodqhrohro: you can get static binaries from: https://www.ffmpeg.org/download.html
[12:50:07 CET] <bodqhrohro> The latest static build tells there is no filter 'mix' as well
[12:50:51 CET] <durandal_1707> bodqhrohro: you need devel version
[12:51:41 CET] <bodqhrohro> durandal_1707: is it so fresh? Or just disabled in non-devel version?
[12:52:10 CET] <durandal_1707> is very fresh
[12:53:11 CET] <bodqhrohro> So there was no way to mix several video streams before?
[12:53:54 CET] <durandal_1707> bodqhrohro: only 2 with blend filter
[12:54:58 CET] <SortaCore> this is odd, I have the same timing info on both src and destination, but the dts/pts is constantly coming up with non-monotonic DTS/PTS
[12:55:24 CET] <bodqhrohro> durandal_1707: okay, the blend filter looks even more powerful
[13:00:00 CET] <durandal_1707> bodqhrohro: it can mix only 2 frames, mix doesnt have that limits but is much simpler
[13:04:06 CET] <bodqhrohro> durandal_1707: but nothing prevents me from mixing them sequentially in pairs? Or mix works faster for several streams than a chain of blends?
[13:04:50 CET] <durandal_1707> bodqhrohro: you cant have same results....
[13:08:15 CET] <bodqhrohro> Okay, thanks, I'll firstly play with blend than only try mix if blend will be not sufficient. Requiring the freshest features would make my script harder to run for others
[13:15:54 CET] <relaxed> bodqhrohro: my git builds have the mix filter, https://www.johnvansickle.com/ffmpeg/
[15:30:28 CET] <zerodefect> In the overlay filter, I can give the overlay pin an ARGB frame. When it is overlayed on YUV, it doesn't look like the filter takes the main pin's color characteristics (ex: BT.709) into consideration. Should it?
[16:07:00 CET] <lyncher> hi. I'm implementing a filter in libavfilter which requires reading from a source with URLContext
[16:07:36 CET] <lyncher> when I'm compiling ffmpeg with my changes I get link errors of ffurl_alloc, ffurl_connect and ffurl_read
[16:07:53 CET] <lyncher> it seems that URLContext is an internal class of libavformat
[16:08:15 CET] <lyncher> how can I access a source and pass a rw_timeout?
[16:08:40 CET] <lyncher> I want to avoid blocking if there's no data at input
[16:09:00 CET] <lyncher> I've tried AVIOContext with no success.....
[16:27:45 CET] <durandal_1707> lyncher: you cant use internal ff* symbols from another library
[16:28:59 CET] <Romano> Can I create .m4s files with the ffmpeg utility?
[16:29:40 CET] <Romano> I'm trying to convert a video file into an .m4s file
[16:30:19 CET] <Romano> I know ffmpeg can create MPEG-DASH segments that are in the m4s format
[16:30:43 CET] <Romano> But is there a way to use the ffmpeg utility to convert a file to m4s?
[16:38:00 CET] <Romano> Can I convert a video file into a .m4s file?
[16:38:14 CET] <greatguy> Está aqui alguém caralho»
[16:38:29 CET] <greatguy> Nunca mais chegam as 17 horas
[17:09:40 CET] <SortaCore> Romano: that's kind of a dead format, although it's related to mp4
[17:18:59 CET] <shtomik> Hi to all guys, tell me, please, how to add filter to my code for asetnsamples, for change frame size ? av_opt_set_bin(buffersink_ctx, "asetnsamples",(uint8_t*)&enc_ctx->frame_size, sizeof(enc_ctx->frame_size),AV_OPT_SEARCH_CHILDREN); or where can I read about all functions and filters? Thanks!
[17:19:39 CET] <shtomik> My top example is incorrect.
[17:22:12 CET] <Romano> MPEG-Dash uses it in its segments
[17:22:51 CET] <Romano> I want to be able to create such files to use in the creation of personalized mpeg-dash segments
[17:25:51 CET] <Nacht> Romano: Segment supports m4s I believe
[17:26:05 CET] <Nacht> Romano: https://www.ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment
[17:26:40 CET] <Nacht> Or just use DASH:
[17:26:41 CET] <Nacht> https://www.ffmpeg.org/ffmpeg-formats.html#dash-2
[17:29:26 CET] <Romano> Maybe I wasn't very specific
[17:29:46 CET] <Romano> I'm not trying to segment a file into same size segments
[17:29:53 CET] <Romano> Like what dash uses
[17:31:57 CET] <Romano> I want to have separated chunks that are .m4s files to use in an mpd. This way I can change segments dynamically
[17:49:37 CET] <gh0st3d> Hey guys, trying to use ffmpeg in an AWS Lambda funtion. I've got a binary that works fine when I use png files, but I'm trying to use jpegs and it's giving me issues with the shared library libjpeg.so.8 ... Anyone know of a compiled binary that includes libjpeg or a way I can work around this?
[17:52:14 CET] <saml> gh0st3d, there's staticbuild. did you try that?
[17:52:35 CET] <saml> https://www.johnvansickle.com/ffmpeg/
[17:52:51 CET] <saml> that includes           libopenjpeg: 2.3.0
[17:53:16 CET] <kepstin> openjpeg is not relevant, that's a jpeg2000 library, not jpeg
[17:53:48 CET] <gh0st3d> Yeah I tried that a little while ago and realized that as well
[17:55:19 CET] <kepstin> but yeah, you either have to include a copy of libjpeg.so.8 in your deployment package (and have the ffmpeg configured to find it somehow, I dunno how they set up lib paths - might need a wrapper shell script)
[17:55:30 CET] <kepstin> or just build your own ffmpeg with it statically linked
[17:56:44 CET] <gh0st3d> If the lambda environment has libjpeg.so.62, is that something I can try changing ffmpeg to use instead of the so.8? Apologies if that's a dumb question, this stuff is a bit over my head so I'm trying to understand it all
[17:56:57 CET] <gh0st3d> I'll look into the lib path thing
[17:57:43 CET] <kepstin> you'd have to recompile ffmpeg to use the different libjpeg api version
[17:58:16 CET] <gh0st3d> Gotcha. Ok thank you for that info!
[17:58:27 CET] <kepstin> hmm, I'm actually kind of confused now, I thought ffmpeg had internal jpeg decoding/encoding
[17:58:33 CET] <kepstin> why is it using libjpeg at all
[17:59:37 CET] <gh0st3d> Ah wait. It looks like it may be phantomjs failing to find libjpeg, and the error from ffmpeg is actually saying there's no jpeg inputs like it's expecting
[17:59:49 CET] <gh0st3d> The errors were back to back, confused me a bit.
[18:00:06 CET] <gh0st3d> That probably makes more sense.
[18:08:22 CET] <zerodefect> @Kepstin: In the overlay filter, I can give the overlay pin an ARGB frame. When it is overlayed on YUV, it doesn't look like the filter takes the main pin's color characteristics (ex: BT.709) into consideration. Should it?
[18:10:55 CET] <kepstin> zerodefect: the overlay filter requires all inputs to be in the same pixel format - if they aren't, some automatically inserted 'scale' filters will be used to do the conversion. If it's not doing what you want, you should manually convert pixel formats.
[18:12:33 CET] <zerodefect> @kepstin: So do color ranges, characteristics, etc apply to RGB/ARGB too or is that more of a YUV thing?
[18:13:34 CET] <kepstin> the colour characteristics basically just affect how to convert between RGB and YUV formats.
[18:13:56 CET] <kepstin> (although there's some complications with gamma, but that's usually close enough to ignore)
[18:14:57 CET] <zerodefect> so would the same algorithm be used to convert from RGB to YUV SD (BT.601) and RGB to YUV for HD (BT.709)
[18:15:17 CET] <kepstin> no, the calculation is different to convert between rgb and yuv for bt.601 vs bt.709
[18:15:47 CET] <kepstin> assuming by rgb you mean srgb
[18:16:56 CET] <zerodefect> At the moment, I'm using a decoded png with I imagine is not srgb, right?
[18:17:03 CET] <zerodefect> *with=which
[18:17:19 CET] <kepstin> unless stated otherwise, png images are usually srgb
[18:18:13 CET] <zerodefect> Ok. Intersting. So wiki says that sRGB uses BT.709 primaries.  Correct?
[18:18:46 CET] <kepstin> I think that by default, if you use overlay with argb and yuv, ffmpeg will convert both inputs to rgb to do the overlay, then possibly convert back to yuv afterwards.
[18:19:07 CET] <kepstin> you can change that by explicitly inserting format filters, scale filters, etc.
[18:19:22 CET] <kepstin> (use -v verbose to see the auto-inserted conversions)
[18:20:13 CET] <zerodefect> So if I was overlay sRGB onto YUV BT.709, no colorspace conversion would be necessary...only if blending on BT.601 or any other standard
[18:21:13 CET] <kepstin> well, no, it needs to do conversions somewhere.
[18:21:23 CET] <kepstin> to do the blend, both inputs have to be the same format
[18:21:39 CET] <kepstin> so either the rgb has to be converted to yuv, or the yuv has to be converted to rgb
[18:21:55 CET] <kepstin> and in either case, if it uses the wrong colourspace, you could get a mismatch
[18:22:22 CET] <kepstin> like, for example, if it converts the rgb png to yuv bt.601 then overlays over your bt.709 frame.
[18:23:09 CET] <zerodefect> Sorry, when I said 'colorspace', I wasn't clear. I was trying to deduce when need for 'colorspace' filter was required.
[18:23:15 CET] <zerodefect> Yeah, that makes sense.
[18:23:47 CET] <zerodefect> I appreciate that need to be made into a similar format.
[18:23:59 CET] <zerodefect> This world is very hairy! You've taught me some things here...so thank you!!
[18:24:08 CET] <kepstin> unfortunately, a lot of the ffmpeg code predates bt.709 in common usage, so you might get implicit conversions that guess or default to "wrong" values or otherwise don't know what colourspace their input is supposed to be.
[18:24:28 CET] <kepstin> I think there's still work being done now to make this better.
[18:25:06 CET] <zerodefect> Thanks for heads up. Presumably, if I set these explicitly, it will work as anticpated.
[18:26:15 CET] <zerodefect> One other thing. Does 'full range' 0 to 255 and limited range  16 to 236 apply to RGB formats too?
[18:49:35 CET] <diverdude> hi, can ffmpeq read .seq files and windows movie maker files?
[19:02:44 CET] <daddesio> I don't think Windows Movie Maker .mswmm has been reverse-engineered yet...
[19:04:33 CET] <durandal_1707> daddesio: do you have such files?
[19:04:35 CET] <diverdude> daddesio: what about .seq files?
[19:05:16 CET] <daddesio> durandal_1707: no, I just googled "windows movie maker file extension".
[19:06:47 CET] <diverdude> daddesio: but i dont think .seq files have anything to do with windows movie maker
[19:07:45 CET] <daddesio> diverdude: do you know which type of seq files you have?
[19:09:11 CET] <diverdude> no, not yet....I have asked for a sample so I am waiting for it. Are there several different .seq file types?
[19:09:40 CET] <daddesio> I don't even know what .seq is :P
[19:28:29 CET] <saml> so, -filter_complex fps=40   and -r 40 as output option yields different frames
[19:28:52 CET] <saml> fps drops evenly. -r seems to drop towards the end
[19:41:38 CET] <shtomik__> wtf, why when I use transcode.c example, after transcoding I have a sound with noise ? transcoding to mp3 or aac in mp4. Filters: resample and and format, doesn't change the situation... So sorry for my English ...
[19:42:45 CET] <JEEB> make sure that in the filter path you are getting the right sample format out, are setting the correct one as input, and that you are telling the filter chain to give you as many samples as the audio encoder requires
[19:43:09 CET] <JEEB> (and of course that you tell the encoder that you'll be feeding it sample format X)
[19:48:14 CET] <shtomik__> In my program output I have: Stream #0:1: Audio: pcm_f32le, 44100 Hz, stereo, flt, 2822 kb/s, and output: Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 128 kb/s, but ffprobe output: Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 143 kb/s (default)
[19:48:27 CET] <shtomik__> Bitrate changes?
[19:53:20 CET] <shtomik__> > and that you are telling the filter chain to give you as many samples as the audio encoder requires
[19:53:20 CET] <shtomik__> is it about asetnsamples ?
[19:59:49 CET] <lyncher> (libav) I'm using async:tcp://... to read from a source. how can I tell if there's data available to read in async?
[20:02:01 CET] <JEEB> shtomik__: av_buffersink_get_samples
[20:02:09 CET] <JEEB> or set_frame_size
[20:03:01 CET] <JEEB> but usually the encoder will just derp at you if the frame size is wrong
[20:16:10 CET] <shtomik__> JEEB: av_buffersink_set_frame_size? But where am I need to set the frame size? Sorry, for now I don't understand...
[20:17:59 CET] <shtomik__> JEEB: I think so that my problem with count of samples (frame size), I set the format for encoder and in filter for in and out
[20:29:57 CET] <shtomik__> JEEB: Now I debug and find out that I have 1024 frame size for aac codec, and 512 nb_samples, is it my fault? I need a 512 frame size?
[20:57:44 CET] <saml> how do I select frames every second (or at whatever interval)?
[20:58:19 CET] <saml> https://trac.ffmpeg.org/wiki/Create%20a%20thumbnail%20image%20every%20X%20seconds%20of%20the%20video  by reading manual
[21:03:47 CET] <Filarius> I need strange... i want to push raw data thought rtmp server what used to show video on website (like twitch)
[21:05:51 CET] <Filarius> is it even possible ? Looks like I found example to send raw data, but not sure if its right, or I need to do something else to make website copy my stream on output channel
[21:06:40 CET] <Filarius> ffmpeg -re -f rawvideo -pix_fmt yuv420p -s:v 480x320 -r 1 -i data.h264 -f flv "rtmp://some_url"
[21:06:45 CET] <EpicEraser> Hey, I've been struggling with this annoying problem. I'm trying to create a generic pipeline that makes sure that the video and audio streams end at the same time and fixes the framerate at roughly the same time (duplicating video frames if necessary). I have accomplished this (more or less) with -r and -vsync cfr .
[21:07:17 CET] <kepstin> Filarius: do you have raw video or h264 video?
[21:07:29 CET] <EpicEraser> Now I want to use a complex filter graph with the fps filter instead of -r, because of certain filters I want to apply. But I can't seem to get the same behavior.
[21:07:49 CET] <Filarius> kepstin: I have just some binary data, not video at all
[21:07:51 CET] <kepstin> Filarius: or, i mean to say: raw uncompressed video or raw h264 video?
[21:08:02 CET] <kepstin> Filarius: uh, what do you want to do with that, then?
[21:09:36 CET] <Filarius> kepstin: actually i'm looking way to use social network website as proxy, as I have ISP who do not limit traffic to this website, including video (i know, its pretty stupid idea, but I want to give it try :D )
[21:09:53 CET] <Filarius> so I trying to use video streaming option
[21:10:13 CET] <kepstin> Filarius: well, streaming video sites have generally very strict limits on supported video codecs and options permitted
[21:10:26 CET] <kepstin> so you're not gonna put arbitrary data through there.
[21:11:05 CET] <Filarius> maybe I can fake whats my data is h264 ?
[21:11:35 CET] <kepstin> probably pretty hard, because if their decoder can't decode it, they'll probably kick your stream off.
[21:11:59 CET] <Filarius> sigh
[21:12:14 CET] <kepstin> you might as well just turn your data into a picture then encode it with a real video codec.
[21:12:23 CET] <kerio> the issue is not getting the data past the decoder
[21:12:31 CET] <kerio> the issue is getting the data past the encoder
[21:12:56 CET] <kepstin> be pretty low data rate, particularly sicne you have to make it survive lossy transcodes
[21:13:29 CET] <EpicEraser> It's an interesting idea though. Smuggling data :D
[21:13:44 CET] <Filarius> actually I already have "encoder" what wrap data into true H264, but it CPU heavy - I need to make DCT/IDCT trasforms on my side then I pass pcitures to ffmpeg and its convert raw yuv420 into h264
[21:14:00 CET] <kepstin> i mean, you could certainly write a custom application using libavformat that sends arbitrary data inside avpackets that claim to contain h264.
[21:14:26 CET] <Filarius> oh
[21:15:33 CET] <kerio> parse your data as yuv420p and encode it losslessly
[21:15:46 CET] <Filarius> hm, never had to do something like this, most my hard - this app what convert raw data into true h264, but its made with overhead to fight with lossy compression
[21:16:04 CET] <kepstin> I wouldn't expect most streaming services to allow lossless h264 video, and the bitrate would probably be too high anyways
[21:16:20 CET] <kerio> you choose the bitrate
[21:16:31 CET] <kepstin> you can't choose bitrate and use lossless encoding...
[21:16:34 CET] <Filarius> btw its works nice with Youtube and can store 7 bits on 8x8 of screen
[21:17:03 CET] <kerio> kepstin: ask yourself
[21:17:11 CET] <kepstin> Filarius: if you have an app that makes yuv frames of data that can survive encoding, then you can just pipe those yuv frames to ffmpeg input.
[21:17:17 CET] <kerio> what is the framerate of random data parsed as yuv420p raw video?
[21:17:36 CET] <kepstin> kerio: most streaming services have framerate limits, and will drop your stream if you go too far outside.
[21:17:47 CET] <kerio> hm
[21:17:54 CET] <kerio> send P-frames that just say "equal to before"?
[21:18:07 CET] <kepstin> I suppose you could just send duplicate frames, yeah
[21:18:12 CET] <kerio> anyway we're all failing to consider the true answer
[21:18:13 CET] <kepstin> the encoder will already handle that well
[21:18:17 CET] <kerio> BIGASS QR CODES
[21:18:27 CET] <Filarius> well, if it will be proxy - then bitrate is just as big as my internet, and its nothing problem whatever if it running slower of faster FPS (just need be sure what rtmp server will not make it problem here to use it as proxy)
[21:18:44 CET] <kepstin> fancy qr codes sounds a lot like what Filarius already has, encoding data into images.
[21:18:56 CET] <kerio> Filarius: you're not going to have a good experience btw
[21:19:03 CET] <kepstin> I mean, you'll want some ecc and to design patterns that are encoding-resistant
[21:19:03 CET] <kerio> you're looking at like 15 seconds of RTT
[21:19:42 CET] <Filarius> dude, again, I have something much better than QR already, and its can transport data after lossy compression without data corruption (if its made with options for this lossy copression(
[21:19:55 CET] <kerio> so send that
[21:20:07 CET] <kerio> or buy one of THESE https://www.youtube.com/watch?v=TUS0Zv2APjU
[21:20:18 CET] <Filarius> its too CPU heavy to allow more than 1 Mbps
[21:20:29 CET] <kerio> send less?
[21:20:35 CET] <Filarius> I need MORE :D
[21:21:08 CET] <kerio> use a faster computer
[21:21:25 CET] <Filarius> gimmimoney for that
[21:22:06 CET] <EpicEraser> Assuming your target doesn't re-encode, just create the h.264 streams manually. Dump random stuff in there but make it parse as h.264
[21:22:12 CET] <Filarius> i have 4 core 4 Ghz, and its still slow
[21:22:31 CET] <kerio> EpicEraser: the issue is facebook reencoding the stream
[21:22:35 CET] <kerio> Filarius: https://www.youtube.com/watch?v=LH-i8IvYIcg
[21:22:43 CET] <EpicEraser> I think Twitch doesn't by default
[21:23:12 CET] <kerio> is there a cellphone company with unlimited twitch? :o
[21:23:23 CET] <EpicEraser> Oh that's what this is about?
[21:23:27 CET] <Filarius> twitch - do not reencode source stream, yep, also i'm not about facebook, and youtube do not allow source stream to be able to watch
[21:23:43 CET] Action: kerio /whoises Filarius 
[21:24:00 CET] <kerio> free vk?
[21:24:30 CET] <EpicEraser> Something something net neutrality...
[21:25:16 CET] <saml> how do you encode video so that it'll have a big timestamp  millisecond or second resolution?
[21:25:26 CET] <Filarius> btw if u curious you can check result of that my application do https://www.youtube.com/watch?v=eKFO37JZ38Q (yep, this noise is data what can be extracted without errors)
[21:26:22 CET] <EpicEraser> Anyway, has anyone dealth with my fps thing before? Basically stretching the last frame of the video out over the remainder of the audio using filters...
[21:27:57 CET] <Filarius> kerio, sorry, what about free vk ? I did not get it
[21:28:33 CET] <kerio> i'm trying to figure out which social network is unmetered
[21:28:37 CET] <kerio> on your mobile connection
[21:28:43 CET] <kepstin> EpicEraser: hmm. so you have a video and you want to duplicate (only) the last frame after the video ends?
[21:29:01 CET] <Filarius> vk too, I have free traffic for telegram, but telegram bots can not speak to each-other
[21:29:14 CET] <Filarius> just checked it today
[21:29:31 CET] <EpicEraser> Basically I have inputs that may have video that ends before audio
[21:29:58 CET] <EpicEraser> I want my output video and audio to start and end at the same times, without stretching or losing av sync
[21:30:14 CET] <EpicEraser> Reason for this is that the outputs may be used for things like HLS
[21:30:52 CET] <Filarius> kerio: I had idea to just use file sending, but my friend send what creating/deletiong  many files is too suspicious
[21:30:53 CET] <kepstin> hmm. someone should rewrite the loop filter to let you use a negative number in the start parameter to count from the end of the video :)
[21:31:49 CET] <kepstin> EpicEraser: I've actually wanted a filter to do that for a while, I might look into it after my fps filter rewrite patch is merged.
[21:31:54 CET] <kepstin> but that'll be a while away
[21:32:34 CET] <kepstin> in the mean time, you can hack it by using the "overlay" filter to overlay your video on top of a blank video, with the eof_action=repeat option to let it duplicate the last frame
[21:32:53 CET] <kepstin> (you can use the color filter to make an endless blank video to use as a base)
[21:33:10 CET] <EpicEraser> Yeah I've seen that hack on StackOverflow
[21:33:27 CET] <EpicEraser> What's the reason for the fps rewrite?
[21:33:43 CET] <EpicEraser> Also, I think the framesync options should totally be their own filter
[21:33:51 CET] <kepstin> the current fps filter has issues with excessive memory usage when filling in large timestamp gaps.
[21:34:20 CET] <EpicEraser> That is good to know.
[21:34:56 CET] <EpicEraser> Would you recommend using the fps filter over -r in general?
[21:35:07 CET] <kepstin> (I've had some broken webcam videos with multi-hour-long timestamp gaps in them, and it would queue up 100s of thousands of frames all at once)
[21:35:22 CET] <kepstin> using the -r output option simply adds an fps filter on the end of your filter chain
[21:35:38 CET] <kepstin> it's not a separate implementation
[21:35:45 CET] <EpicEraser> Interesting... I had no idea!
[21:35:53 CET] <EpicEraser> Should have probably read the source huh ._.
[21:36:13 CET] <kepstin> I think this is mentioned in the ffmpeg man page somewhere? not sure.
[21:36:28 CET] <EpicEraser> It's not, unfortunately
[21:37:14 CET] <EpicEraser> Then what I'm confused about is how "-lavfi fps=25 -vsync cfr" and "-r 25 -vsync cfr" produce vastly different frame counts for my current video
[21:37:14 CET] <kepstin> huh. the -s option mentions that it just adds a scale filter
[21:37:22 CET] <kepstin> but the -r option doesn't say anything about that
[21:38:37 CET] Action: kepstin notes that adding a 'repeatlast' option to his rewritten fps filter would be something like 4 lines of code ;)
[21:38:49 CET] <kepstin> (well, maybe a little more than that)
[21:40:13 CET] <EpicEraser> Well it would need to accept multiple inputs right?
[21:40:56 CET] <EpicEraser> I feel like separating out the framesync into its own filter would simplify outher filters
[21:41:03 CET] <EpicEraser> other even
[21:41:48 CET] <kepstin> not sure what you mean, framesync is a framework (library, basically) that filters use in order to be able to get a set of matching frames together without having to deal with all the inputs themselves
[21:42:06 CET] <kepstin> so making it a separate filter removes the benefit is has of reducing the input handling code in the filter itself.
[21:42:52 CET] <EpicEraser> I see, again, wasn't aware
[21:43:22 CET] <kepstin> but sure, it is certainly possible to make a "pass-through" framesync filter that has N inputs and N outputs and just runs the framesync code on the inputs, and writes the frames it produces to the corresponding outputs.
[21:43:28 CET] <EpicEraser> I'm going from the docs, where there are now 3ish filters that accept the same set of "framesync" parameters
[21:44:14 CET] <EpicEraser> Thanks for all the info by the way, learning a lot
[21:44:22 CET] <EpicEraser> Really appreciate it
[21:45:47 CET] <EpicEraser> Do you happen to have any idea how "-lavfi fps=25 -vsync cfr" and "-r 25 -vsync cfr" produce vastly different frame counts for the video I'm looking at?
[21:46:33 CET] <kepstin> EpicEraser: no idea, would need more context on the command.
[21:47:04 CET] <EpicEraser> ffmpeg -y -i x.mov -pix_fmt yuv420p -c:a aac -c:v h264 -lavfi fps=25 -vsync cfr -max_muxing_queue_size 400 y.mov
[21:47:15 CET] <EpicEraser> It has some weird timing properties, hence the queue size thing
[21:47:44 CET] <EpicEraser> It's a "worst-case input" example for timing behavior
[21:52:01 CET] <kepstin> ... hmm. I dunno whether to add this to my fps patchset or not. https://gist.githubusercontent.com/kepstin/aeeda174171fb7413490d4df0cc86695/raw/fps-repeat-last.diff :)
[22:00:15 CET] <EpicEraser> Wouldn't that repeat the last frame infinitely?
[22:00:33 CET] <EpicEraser> Also, cool ^_^
[22:01:25 CET] <kepstin> technically not infinitely, but I'm not gonna wait around for a 64bit integer to overflow :)
[22:04:17 CET] <EpicEraser> repeatlast repeats the video stream until the audio stream ends normally right?
[22:04:20 CET] <EpicEraser> Not infinitely
[22:06:15 CET] <kepstin> EpicEraser: no, the video filters know nothing about the audio track
[22:06:45 CET] <kepstin> on a multi-input filter with framesync, repeatlast extends the shorter video input to the length of the longer video input
[22:07:17 CET] <EpicEraser> The framesync repeatlast is supposed to synchronize multiple inputs
[22:07:43 CET] <kepstin> multiple inputs *to the filter*, which must all be of the same type (audio/video)
[22:08:09 CET] <EpicEraser> If set to 1, force the filter to extend the last frame of secondary streams until the end of the primary stream
[22:08:10 CET] <kepstin> to make video and audio the same length, the normal recommended thing to do is make the shorter one infinitely long, then use the "-shortest" ffmpeg option to cut it when the other stream ends.
[22:08:27 CET] <EpicEraser> Oh, good to know again!
[22:08:46 CET] <kepstin> EpicEraser: on a filter with multiple inputs, that option extends the secondary input video to match the primary input video.
[22:08:52 CET] <EpicEraser> Kind of a scary option though, in case of multiple audio inputs
[22:09:21 CET] <kepstin> EpicEraser: the framesync options only deal with frames going into *the specific filter* that the options are on.
[22:09:30 CET] <kepstin> they don't know about anything else.
[22:09:37 CET] <EpicEraser> Yeah
[22:11:05 CET] <kepstin> my work involves writing a lot of scripts/tools that generate ffmpeg commands, and I usually end up having infinite-length video and audio streams, and using trim/atrim filters or the -t output option to cut them at the same spot.
[22:12:06 CET] <EpicEraser> That's very informative
[22:12:38 CET] <EpicEraser> How do you normally make a video infinitely long?
[22:12:45 CET] <EpicEraser> Overlay with empty video?
[22:18:28 CET] <kepstin> depends on the particular use case, but often I end up concatenating a blank video (generated with the 'color' filter) to the end.
[22:20:15 CET] <EpicEraser> My use cases involve basically reproducing what a player would do. Since there are no more frames, the last frame stays there. Hence repeating the frame.
[22:29:51 CET] <newbie|3> hi all
[22:33:17 CET] <newbie|3> is there someone here who can support me with 2pass libxvid encoding?
[22:35:47 CET] <saml> i suppport for free
[22:42:01 CET] <Filarius> i wonder if there one can help me build application to post my own data in h264 codec packets with libavcodec and his friends. I'm familiar with programming, just nothing about c/c++
[22:45:33 CET] <saml> what's protocol? http?
[22:46:49 CET] <Filarius> rtmp server, I need to make it share my own data, but I can do it if it will think I send him real h264 stream
[22:48:34 CET] <Filarius> sure, I need to do encoder and decoder, just I do not know where to start. I can try to learn how to make in with c++, but its going a little too complicated for me. I had some expirience with python and c# as hobby
[22:49:14 CET] <Fenrirthviti> could just look at the nginx-rtmp module to start?
[22:52:01 CET] <Filarius> Fenrirthviti, look, lets say I have some binary data (think about terabyte archive), and Twitch (gaming video streaming), and I need to make it stream my data as it will be just video stream.
[22:54:41 CET] <Filarius> meanwhile this data is nothing about being video
[22:57:55 CET] <Fenrirthviti> I'm really confused at what you're trying to do
[22:59:31 CET] <saml> does twitch give you an rtmp endpoint?
[23:02:11 CET] <saml> ffmpeg -re -i yolo-huge-video.mp4 -acodec copy -vcodec copy -f flv rtmp://live-jfk.twitch.tv/app/{stream_key}
[23:03:46 CET] <Fenrirthviti> twitch is only rtmp ingest
[23:22:06 CET] <saml> run_vmaf from netflix is so fast. but libvmaf filter is so slow
[23:36:48 CET] <Filarius> saml, Fenrirthviti, I have ISP what have no limits on some social network websites, and I'm looking way how I can use video stream to use RTMP server as kind of proxy. One guy here said I must dig LibAVFormat and make h264-legit stream, but with my own data instread of compressed video frames
[23:39:13 CET] <Fenrirthviti> Oh.
[23:39:37 CET] <Fenrirthviti> That's a bit outside what I feel comfortable helping with.
[23:39:48 CET] <Fenrirthviti> As that kind of reeks as "illegal as fuck"
[23:39:52 CET] <shtomik__> Hi guys, can you help me?
[23:40:27 CET] <furq> Filarius: i don't see why you'd need libavformat for that
[23:40:35 CET] <kerio> Fenrirthviti: how's that illegal
[23:40:51 CET] <shtomik__> I think that my problem with count of samples (frame size), I am using transcodeng.c example and after encoding my sound with noise....
[23:40:56 CET] <Fenrirthviti> well, illegal is probably too strong
[23:41:01 CET] <furq> you'd just need to create a valid h264 (or aac) stream and then mux it and send it with ffmpeg
[23:41:05 CET] <shtomik__> Now I debug and find out that I have 1024 frame size for aac codec, and 512 nb_samples, is it my fault? I need a 512 frame size?
[23:41:07 CET] <kerio> it probably violates a tos and a half
[23:41:09 CET] <furq> but yeah that's way out of the scope of anything we can help with
[23:41:12 CET] <Fenrirthviti> but circumventing restrictions seems like a really heavy grey area.
[23:41:20 CET] <Fenrirthviti> and not something I'm comfortable helping with.
[23:41:38 CET] <furq> you really just need to read up on the h264 (or aac) specs and figure it out
[23:41:49 CET] <Filarius> furq, libavformat - well, its just what that guy said, even I can guess its not name I must use
[23:42:02 CET] <kerio> does anyone know of any tool that can losslessly resample a h264 stream?
[23:42:50 CET] <shtomik__> furq: Hi ;) Can you help me, please?
[23:43:25 CET] <furq> aac is always 1024 frame size
[23:43:30 CET] <furq> you presumably need 1024 samples
[23:43:51 CET] <shtomik__> asetnsample filter?
[23:43:59 CET] <shtomik__> but that don\t work too ;(
[23:44:20 CET] <furq> you could try running the same command line with ffmpeg -v debug
[23:44:31 CET] <furq> the debug messages will tell you if ffmpeg has auto-inserted any filters
[23:44:34 CET] <shtomik__> I'm using libav
[23:44:38 CET] <furq> s/command line/filterchain/
[23:44:46 CET] <Filarius> furq, sounds like i must take x264 codec source code and find place where compressed frames packed into stream. Bad for me I'm not so familar with C/C++ and software what used to compile such things (but I had some expirience with Python and C#)
[23:45:01 CET] <furq> there's no need to look at x264
[23:45:05 CET] <furq> like 99% of that is irrelevant to you
[23:46:00 CET] <furq> http://www.itu.int/rec/T-REC-H.264/en
[23:46:05 CET] <furq> something like that would probably be more useful
[23:47:04 CET] <furq> i assume that lavf and any rtmp server won't actually attempt to decode the stream
[23:47:27 CET] <furq> so presumably as long as it looks vaguely like a real stream it should work
[23:47:44 CET] <furq> obviously you then have the issue that you need a decoder on the other end
[23:47:54 CET] <furq> and you also have the issue of no error resiliency
[23:48:00 CET] <shtomik__> furq: I'm using code from transcoding.c, after transcoding pcm to aac, I have a trouble with sound(noise...), I need 1024 nb_samples? Buffer?
[23:48:11 CET] <furq> i've never resampled with the api
[23:48:17 CET] <Filarius> sure I need to implement both "encoder" and "decoder"
[23:48:28 CET] <furq> my suggestion was to take the filterchain you're using and try running it in an ffmpeg command with -v debug
[23:48:43 CET] <furq> if ffmpeg has to auto-insert filters to make it work then the debug messages will tell you
[23:49:00 CET] <furq> beyond that i wouldn't really know
[23:49:37 CET] <Filarius> furq, khm, do I undertand you right -I need to read and understand 800 pages of H264 specification ?
[23:52:02 CET] <Filarius> changing x264 source code now looks much better idea
[23:55:40 CET] <furq> Filarius: presumably just sections 7, 9 and maybe annex b
[23:55:42 CET] <furq> but you do you
[23:55:53 CET] <furq> if you do want to just read some source then openh264 will be much simpler
[23:58:11 CET] <Filarius> I remember there was simple implimentation of h264 somewhere.. I used that part of algorithmfor IDCT/DCT for better data storing in h264 video stream
[23:58:54 CET] <alexp> openh264 is presumably that simpler implementation
[23:59:32 CET] <alexpigment> it's about as simple as i've seen while still technically working (technically is the key word)
[23:59:33 CET] <alexpigment> ;)
[00:00:00 CET] --- Fri Feb 23 2018


More information about the Ffmpeg-devel-irc mailing list