[Ffmpeg-devel-irc] ffmpeg.log.20170305

burek burek021 at gmail.com
Mon Mar 6 03:05:01 EET 2017


[01:13:42 CET] <faLUCE> do you know what exactly is gop_size ? Has it anything to do with key frames?
[01:32:13 CET] <furq> faLUCE: https://en.wikipedia.org/wiki/Group_of_pictures
[01:52:48 CET] <thebombzen_> faLUCE: it has everything to do with keyframes
[01:53:18 CET] <thebombzen_> a group of pictures is the list of all frames that appear after a keyframe and depend on it
[01:53:55 CET] <thebombzen_> so if you have a keyframe every 250 frames, the keyframe and the 249 frames after it are the "group of pictures"
[01:54:10 CET] <thebombzen_> in other words gop_size is exactly the distance from one keyframe to the next
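As a hedged illustration of the point above: in the ffmpeg CLI the GOP length maps to the `-g` option (the file names here are placeholders, not from the discussion):

```shell
# Sketch: cap the GOP length at 250 frames, i.e. force a keyframe
# at least every 250 frames when encoding with x264.
ffmpeg -i input.mp4 -c:v libx264 -g 250 out.mp4
```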
[12:35:35 CET] <bencc1> I capture the screen with -preset ultrafast to mkv
[12:35:58 CET] <bencc1> after that I transcode to mp4 with -preset medium or slow and higher crf
[12:36:40 CET] <bencc1> is it reasonable to capture and transcode to mp4 with -preset medium in real time?
[12:36:45 CET] <BtbN> you are not going to improve the quality with a reencode
[12:37:22 CET] <bencc1> right. I'm not trying to improve the quality but capture fast and later decrease the file size
[12:37:40 CET] <bencc1> but I wonder if I can do it in one step in real time
[12:37:50 CET] <bencc1> or is transcoding usually slow
[12:37:53 CET] <BtbN> yeah, that works. Could also just capture lossless, if you don't care about the size
[12:38:10 CET] <bencc1> I do care about the size
[12:38:23 CET] <BtbN> The intermediate size?
[12:38:43 CET] <BtbN> Also entirely depends on your CPU. A fast CPU can easily do the slow preset in realtime without much effort
[12:39:00 CET] <bencc1> possible to transcode in real time with "-preset medium -crf 23" ?
[12:39:14 CET] <BtbN> Like I just said, entirely up to your CPU and the load on it.
[12:39:29 CET] <BtbN> no other way to find out than to just try it
[12:39:44 CET] <bencc1> https://ark.intel.com/products/33929/Intel-Xeon-Processor-L5420-12M-Cache-2_50-GHz-1333-MHz-FSB
[12:39:52 CET] <bencc1> that's the cpu. a bit old but dedicated
[12:40:01 CET] <bencc1> multiple threads can help?
[12:40:37 CET] <bencc1> I can also use E3-1230v2 which is newer and faster
[12:40:54 CET] <bencc1> 1 thread == 1 core?
[12:42:01 CET] <BtbN> not strictly, but in general
[12:42:39 CET] <BtbN> Just don't restrict ffmpeg to one thread or something, and you are fine.
[12:43:23 CET] <bencc1> ok. I'll try it now without limit
[12:43:38 CET] <bencc1> later I'll need to capture several videos at the same time
[12:43:51 CET] <bencc1> so I'll try to limit to 1 or 2 threads and see if ffmpeg can keep up
[12:44:08 CET] <BtbN> It entirely depends on the system load and CPU speed
[12:45:03 CET] <bencc1> ram and disk io are less important than cpu in this case?
[13:01:31 CET] <bencc1> BtbN: encoding to mp4 directly during capture seems to work
[13:02:06 CET] <BtbN> don't use mp4 for live recordings
[13:02:22 CET] <BtbN> use mkv, flv or mpegts or something else that doesn't need a global header
[13:03:08 CET] <bencc1> what's the problem with global header?
[13:03:17 CET] <bencc1> in case it fail in the middle?
[13:03:36 CET] <BtbN> If an mp4 file is not finalized, its entire contents become useless
[13:04:52 CET] <bencc1> good point. I'll use mkv and then just convert to mp4 with copy for audio and video and faststart
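A minimal sketch of the two-step workflow being discussed (display, frame rate, and file names are assumptions, not taken from the log):

```shell
# 1) record straight to Matroska, which stays readable if the capture dies
ffmpeg -f x11grab -framerate 25 -i :0.0 -c:v libx264 -preset medium -crf 23 capture.mkv

# 2) afterwards, remux (no re-encode) to MP4 with the moov atom up front
ffmpeg -i capture.mkv -c copy -movflags +faststart out.mp4
```

The second step only rewrites the container, so it is fast and lossless; `-movflags +faststart` moves the index to the front for progressive web playback.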
[13:06:41 CET] <bencc1> I'm getting many "Non-monotonous DTS in output stream 0:1; ..." during capture
[13:06:44 CET] <bencc1> is this an issue?
[13:06:55 CET] <bencc1> does it mean that the cpu can't keep up?
[14:04:11 CET] <DHE> bencc1: the DTS should be arriving in strictly increasing order. it hasn't been. could be indicative of wraparound or an error in the source
[14:04:29 CET] <holdie> Hi guys, when I play a file with ffplay and I try to go fullscreen by pressing f it does not always work, whereas double clicking on the video seems to work okay all the time. Has anyone noticed this? (I'm on Linux and just compiled ffmpeg from git)
[15:10:06 CET] <bencc1> DHE: the source is x11grab and pulseaudio
[16:21:42 CET] <Meins> Hello
[16:22:11 CET] <Meins> is someone online who can help me with multicast streaming with ffmpeg?
[16:22:22 CET] <JEEB> ffmpeg-all.html
[16:24:22 CET] <Meins> Hi Jeeb
[16:32:58 CET] <ZeroWalker> hi, can someone elaborate on the data and linesize fields of AVFrame? what does it want for an RGBA input?
[16:35:30 CET] <JEEB> https://ffmpeg.org/doxygen/trunk/structAVFrame.html
[16:35:59 CET] <farfel> hello: I built ffmpeg with openjpeg: how can I verify that it is in fact using the library, rather than built in codec for j2k ?
[16:36:10 CET] <JEEB> data are just pointers to uint8_t memory blocks and linesize is the full amount of those until the start of the next line
[16:36:27 CET] <JEEB> for example if you align your stuff to a certain alignment the next line will not start right away
[16:37:07 CET] <JEEB> and of course depending on situations the linesize can be different between planes
[16:46:18 CET] <farfel> aha!!!! figured it out:   -c:v  libopenjpeg
[16:52:50 CET] <JEEB> farfel: yes - that sets the exact decoder for that format
[17:00:29 CET] <ZeroWalker> but, hmm, so for RGBA, would data[0] just be the entire pixel array, and linesize[0] be width*4?
[17:01:57 CET] <JEEB> linesize depends on the bytewise alignment
[17:02:15 CET] <JEEB> I'm pretty sure you want some sort of alignment and I think av_malloc by default gives you some
[17:02:52 CET] <JEEB> although don't quote me on this, I haven't really created AVFrames manually recently :P
[17:03:10 CET] <ZeroWalker> is there another way to do it?
[17:03:43 CET] <JEEB> I mostly deal with both input and output going through the framework so no real back-to-avframe kind of thing
[17:03:59 CET] <JEEB> but yes, you seem to have grasped the general gist
[17:04:22 CET] <JEEB> (your example just happened to not take into account any alignment optimizations, which would add padding between things)
[17:07:09 CET] <farfel> Thanks, JEEB
[17:07:35 CET] <ZeroWalker> but, wouldn't padding be there by default if it's rgba? as it's 32bit and the width/height is probably always going to be a multiple of 16 i think
[17:08:22 CET] <JEEB> anyways, since you're creating the AVFrame you control it
[17:08:44 CET] <JEEB> so you can play around and see if anything complains about lack of alignment
[17:09:26 CET] <ZeroWalker> but, so, should i just point data[0] to the start of the pixel array, and linesize[0] width*4?;o
[17:10:16 CET] <JEEB> I think that's it. you could also try looking at how a created RGBA AVFrame looks when decoded from f.ex. AVCodec
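One way to sidestep the alignment question entirely (a hedged sketch, not something from the discussion above) is to describe the frame and let libavutil allocate the buffers, then read back whatever linesize it chose:

```c
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>

/* Allocate an RGBA AVFrame and let FFmpeg pick the buffer alignment.
 * After av_frame_get_buffer(), frame->linesize[0] may be larger than
 * width * 4, so pixel rows must be copied in respecting that stride. */
static AVFrame *alloc_rgba_frame(int width, int height)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return NULL;
    frame->format = AV_PIX_FMT_RGBA;
    frame->width  = width;
    frame->height = height;
    /* align = 0 lets libavutil choose a suitable alignment itself */
    if (av_frame_get_buffer(frame, 0) < 0) {
        av_frame_free(&frame);
        return NULL;
    }
    return frame;
}
```

If the source pixels are a tightly packed `width * 4` array, they should be copied row by row into `frame->data[0]`, advancing the destination by `frame->linesize[0]` per row.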
[17:11:33 CET] <ZeroWalker> ah, well i will try this to begin with, basically have nothing set up as i have just used pipe before, and now i try to do it in code
[17:12:22 CET] <ZeroWalker> http://sprunge.us/MBjT
[17:12:28 CET] <ZeroWalker> does this look about right?
[17:14:00 CET] <JEEB> wasn't there a function to allocate AVPackets?
[17:14:07 CET] <JEEB> just like AVFrames
[17:14:12 CET] <JEEB> although E_NO_IDEA
[17:14:37 CET] <JEEB> it's been a while since I've used the API and the last time I just blasted avcodec and swscale'd decoded pictures :)
[17:15:17 CET] <ZeroWalker> i thought AVPackets were an out parameter in that function ;o
[17:15:23 CET] <ZeroWalker> meaning i would receive the packet
[17:15:35 CET] <JEEB> right
[17:16:05 CET] Action: JEEB has done most of his work inside avcodec/avformat :D
[17:16:09 CET] <JEEB> not using the APIs
[17:16:18 CET] <ZeroWalker> :D
[17:44:18 CET] <bencc1> how can I check how many cores ffmpeg use to transcode?
[18:09:40 CET] <fritsch> bencc1: top - then press H
[19:22:26 CET] <DHE> bencc1: keep in mind each codec - decoder and encoder - may run in multi-threaded mode. so you may end up with a surprisingly high number.
[19:32:14 CET] <bencc1> DHE: I see more than 10 processes
[19:32:24 CET] <bencc1> but the total cpu % is not too high
[20:17:34 CET] <ZeroWalker> i seem to get AVERROR_EXTERNAL when trying to encode2 or send_packet, but i can't find much information to go on
[20:28:32 CET] <DHE> you mean send_frame ?
[20:29:33 CET] <DHE> ultimately it means a (non-specific) error was received from a 3rd party library like x264 and it's being relayed up to you. any logs generated?
[20:32:56 CET] <ZeroWalker> yeah think so
[20:33:22 CET] <ZeroWalker> no logs sadly, trying to get the thing working, just encoding a frame  (RGBA) with libx264
[20:35:23 CET] <ZeroWalker> here is the code i am trying out with: http://sprunge.us/YTcb
[21:03:58 CET] <ZeroWalker> no ideas? the code is weird as i try different stuff, but it's the encode2 part that fails (or send_frame if i use that).
[21:04:26 CET] <ZeroWalker> i use Zeranoe's libs and dlls
[21:18:07 CET] <DHE> well if you're feeding it RGB I think you need to use the libx264rgb codec "variant"
[21:20:23 CET] <ZeroWalker> i would like it to be converted to yv12 by ffmpeg
[21:20:51 CET] <DHE> I believe you'll have to use a filter or the swscale code directly to do so
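A sketch of the swscale route DHE mentions (the function name and the caller-supplied pointers/strides here are assumptions for illustration):

```c
#include <stdint.h>
#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>

/* Convert one packed RGBA image to planar YUV420P with libswscale.
 * dst_data/dst_linesize are assumed to already point at allocated
 * YUV420P planes (e.g. from av_frame_get_buffer). Returns 0 on success. */
static int rgba_to_yuv420p(const uint8_t *rgba, int rgba_stride,
                           uint8_t *const dst_data[4],
                           const int dst_linesize[4],
                           int width, int height)
{
    struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                            width, height, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;
    const uint8_t *src_data[1] = { rgba };
    const int src_linesize[1]  = { rgba_stride };
    sws_scale(sws, src_data, src_linesize, 0, height, dst_data, dst_linesize);
    sws_freeContext(sws);
    return 0;
}
```

This keeps the encoder on plain `libx264` fed with YUV420P, instead of the `libx264rgb` variant.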
[21:21:43 CET] <ZeroWalker> hmm, okay let's go with libx264rgb to start with, just wanna learn how to even do anything ;d
[21:22:01 CET] <ZeroWalker> hmm, in the rgb version, "if (avcodec_open2(context, enc_x264, 0) < 0)" is the part that fails
[22:45:02 CET] <Emil> Hi
[22:45:10 CET] <durandal_1707> hi
[22:45:16 CET] <Emil> So I compiled ffmpeg with libx264 for raspi 3
[22:45:34 CET] <Emil> And now I'm trying to stream from the raspicam module
[22:45:48 CET] <Emil> Does anyone have a readymade config or could help with getting it working?
[22:46:03 CET] <Emil> I followed this guide: http://engineer2you.blogspot.fi/2016/10/rasbperry-pi-ffmpeg-install-and-stream.html
[22:47:53 CET] <Emil> Or what's the preferred way to stream a website friendly stream? Personally I would prefer wrapping the native h264 to mp4 but mjpeg would be fine, too
[22:50:13 CET] <Emil> durandal_1707: any pointers?
[22:50:55 CET] <durandal_1707> just avoid ffserver
[22:51:20 CET] <Emil> durandal_1707: what's the preferred way to stream to multiple?
[22:51:26 CET] <Emil> If ffserver should be avoided
[22:51:51 CET] <RossW>  oh? Why so? (I was planning to try using ffserver this week as a one-to-many solution!)
[22:52:25 CET] <Emil> I heard (read of on the ffmpeg website) that ffserver support will be dropped
[22:52:44 CET] <Emil> but what is the replacement? What should we use to actually stream to multiple?
[22:52:59 CET] <durandal_1707> it's gonna go away, so for the long term use something else
[22:53:15 CET] <RossW> well that sucks. What do you suggest?
[22:53:24 CET] <RossW> or what do THEY suggest?
[22:53:54 CET] <durandal_1707> vlc or smthing else
[22:54:03 CET] <Emil> vlc is horrible
[22:56:31 CET] <JEEB> so what is the use case with whatever you mean "stream to multiple"
[22:56:36 CET] <JEEB> as in, what does it mean
[22:56:55 CET] <JEEB> you encode once and push it to multiple servers?
[22:57:02 CET] <Emil> JEEB: http live video
[22:57:07 CET] <Emil> website friendly
[22:57:15 CET] <Emil> JEEB: anyone can connect and view the stream
[22:57:36 CET] <JEEB> that's still generic as <beep>. what's the "multiple" there?
[22:57:46 CET] <Emil> I don't quite understand?
[22:57:52 CET] <Emil> Multiple as in more than one
[22:58:03 CET] <Emil> obviously the resources of my server are limiting actual users
[22:58:04 CET] <JEEB> more than one server? client?
[22:58:07 CET] <Emil> but that's beside the point
[22:58:35 CET] <Emil> JEEB: one raspberry pi that streams video which many clients on lan can connect to
[22:58:43 CET] <Emil> and view the stream on their web browsers
[22:59:11 CET] <JEEB> look into nginx-rtmp, that takes in RTMP and outputs HLS and DASH
[22:59:44 CET] <Emil> That seems silly
[22:59:53 CET] <JEEB> why?
[23:00:00 CET] <JEEB> you transcode once, and then the server handles packaging
[23:00:21 CET] <JEEB> RTMP as ingest can sound bad, but it seems to work. other servers take in fragmented ISOBMFF or whatever
[23:01:11 CET] <JEEB> that way you can also completely separate transcoding and serving, so that load from the first thing doesn't affect the latter
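For reference, a minimal nginx-rtmp setup along the lines JEEB describes might look like the following config fragment (paths, ports, and stream names are hypothetical; treat this as a sketch, not a tested configuration):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # repackage the incoming RTMP stream as HLS for browsers
            hls on;
            hls_path /tmp/hls;
        }
    }
}

# The Pi then encodes once and pushes to the server, e.g.:
#   ffmpeg -i <source> -c:v libx264 -f flv rtmp://server/live/stream
```

nginx serves the generated HLS playlist and segments over plain HTTP, so any number of LAN clients can watch without adding load on the encoding box.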
[23:20:44 CET] <jcay> hello... can the concat function work with resizing (-s) at the same time?
[23:21:16 CET] <bencc1> there is also https://github.com/kaltura/nginx-vod-module
[23:21:55 CET] <bencc1> I think that Facebook live and Youtube live receive RTMP as well
[23:22:10 CET] <JEEB> as long as you're OK with AGPLv3 the kaltura thing is OK as well
[23:22:53 CET] <BtbN> everything uses rtmp for streaming
[23:23:36 CET] <JEEB> yes, it's a common thing for ingest these days
[23:26:50 CET] <bencc1> JEEB: is there something wrong with AGPLv3 as long as you treat the packing server as black box?
[23:27:25 CET] <bencc1> nginx-rtmp has the same features as kaltura's module?
[23:28:10 CET] <thebombzen> jcay: depends on how you're concatenating
[23:28:17 CET] <JEEB> the thing of having to give the source code of at least your whole nginx build can be an effort
[23:28:41 CET] <JEEB> but if you cut it at that nginx then I guess it's not too bad as long as anything else doesn't get touched
[23:28:49 CET] <thebombzen> jcay: if you're using the concat filter, then yes. but remember that you can only resize if you transcode
[23:30:38 CET] <thebombzen> out of curiosity, is there any practical advantage to using the -s and -pix_fmt options, rather than just appending -vf scale and -vf format to the filterchain?
[23:30:52 CET] <JEEB> no
[23:31:19 CET] <thebombzen> it sounds like the only reason you'd want to is for several video streams
[23:31:23 CET] <JEEB> I think there's just enough of those old guides that use the "short-hands"
[23:31:38 CET] <JEEB> well for several video streams you can always do -vf:v:0
[23:31:40 CET] <thebombzen> but -vf:v should work for several streams right
[23:31:44 CET] <JEEB> yes
[23:32:12 CET] <thebombzen> you could view -s as syntactic sugar perhaps if you don't already have a filterchain
[23:32:16 CET] <thebombzen> cause it's short
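A sketch of the equivalence discussed above (file names and parameters are placeholders): these two invocations should request the same scaling and pixel-format conversion.

```shell
# shorthand options
ffmpeg -i in.mp4 -s 1280x720 -pix_fmt yuv420p out.mp4

# explicit filterchain form
ffmpeg -i in.mp4 -vf "scale=1280:720,format=yuv420p" out.mp4
```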
[23:47:10 CET] <jcay> thebombzen: was the question to me?
[23:47:25 CET] <thebombzen> no you asked the question to us
[23:47:30 CET] <thebombzen> and I answered it
[23:47:58 CET] <jcay> you asked if there is an advantage in using -s
[23:49:48 CET] <thebombzen> no I asked the question to everyone here, and it was answered
[23:50:12 CET] <thebombzen> you mentioned -s which prompted me to think about the option, and therefore ask the question. but it wasn't related to your question
[23:51:20 CET] <jcay> oh I see, sorry :)
[23:53:35 CET] <jcay> it seemed to me to be a better option for tonight to downscale every movie and then concatenate, too tired to learn more
[00:00:00 CET] --- Mon Mar  6 2017


More information about the Ffmpeg-devel-irc mailing list