[Ffmpeg-devel-irc] ffmpeg.log.20190304

burek burek021 at gmail.com
Tue Mar 5 03:05:01 EET 2019


[08:25:44 CET] <L29Ah> [ffmpeg] filter: Channel 1 clipping 10 times. Please reduce gain.
[08:25:45 CET] <L29Ah> huh what? i only have negative gains (using mpv --af=equalizer=f=87:t=h:width=5:g=-10,equalizer=f=300:t=h:width=700:g=-7,equalizer=f=1000:t=h:width=2000:g=-5)
[18:03:21 CET] <logicWEB> hello :-) I'm encoding video from a bunch of files using the `-i Frame%05d.png` style syntax. I find that if I have only that input, then `-ss 20:00` seeks to 20 minutes worth of frames in pretty much instantly, but if I have two sources (e.g., a second source for audio), then the image source (is that implicitly using `image2`?) seems to insist on actually reading & decoding every single frame up to the seek point
[18:04:07 CET] <logicWEB> `ffmpeg -framerate 30000/1001 -i Frame%05d.png -ss 20:00 ...` <-- starts up right away
[18:04:25 CET] <logicWEB> `ffmpeg -framerate 30000/1001 -i Frame%05d.png -i Source.m2ts -map 0:v -map 1:4 -ss 20:00 ...` <-- takes a long time to start up
[18:04:38 CET] <logicWEB> is this a bug? should I consider reporting it as such?
[18:06:36 CET] <logicWEB> I read about input seeking vs. output seeking, but a) I want my seek to apply identically to both inputs so the audio lines up, and b) if I only have the one input, the seek runs quickly
[18:07:12 CET] <logicWEB> I'm a bit worried about `ffmpeg -s TIMESTAMP -i Frame%05d.png -s TIMESTAMP -i AudioSource.m2ts ...`, but is that actually the correct way to do it, specifying the same timestamp multiple times?
[18:07:37 CET] <logicWEB> (worried because I'm unfamiliar with it and am unsure about whether it will actually do exactly what I want in terms of lining up the two disparate sources)
[18:08:02 CET] <logicWEB> er, I did a brain fart on that last command line, read my `-s` as `-ss` :-P
[18:14:01 CET] <logicWEB> I'm just speculating here, but is it that the input options set up an object describing the input as a whole, and then an output seek `-ss` after that goes to that object and says, as an action "seek to this time", and if your only input is image frames, then that action goes to the `image2` format handler, which says, "oh hey, I know how to do that, I'll just not bother reading X frames"
[18:14:27 CET] <logicWEB> but if you have multiple inputs and are picking inputs, then the _overall_ input object is a multiplexer, and the multiplexer handles that seek action by pulling that many seconds of input from each source and discarding?
[18:14:28 CET] <kepstin> logicWEB: you should be using -ss as an input option (before the -i), separately on each input
[18:15:01 CET] <kepstin> logicWEB: i'm surprised that -ss after -i with image2 demuxer is fast, it really shouldn't be (there's no special case for that)
[18:15:40 CET] <kepstin> note that `ffmpeg -s TIMESTAMP -i Frame%05d.png -s TIMESTAMP -i AudioSource.m2ts` only seeks one of the two inputs
[18:16:11 CET] <kepstin> in that case the ... well, `-ss` not `-s`, will cause seeking only in the `-i AudioSource.m2ts`
[18:16:29 CET] <kepstin> note that seeking in m2ts files is probably slow in either case, since there's no seek index.
[18:16:31 CET] <logicWEB> kepstin: and if I specify the same exact fractional second time as `-ss` to two different inputs, I don't ever have to worry about them both handling it 100% accurately? I seem to recall in the past there was some issue about how input seeking had the potential to be based on an estimate or otherwise inaccurate, but then there's a footnote on one page saying that as of a particular version that's no longer the case?
[18:16:50 CET] <kepstin> logicWEB: unless you travel 10 years back in time, it's not an issue.
[18:16:56 CET] <logicWEB> okay, good to know :-)
[18:17:23 CET] <logicWEB> why would `ffmpeg -ss TIMESTAMP -i Frame%05d -ss TIMESTAMP -i AudioSource.m2ts` only seek the audio source? what is the syntax to make it seek both?
[18:17:36 CET] <kepstin> oh, i just misread that
[18:17:38 CET] <kepstin> that's correct
[18:18:41 CET] <logicWEB> cool beans then, I'll try that out the next time I'm encoding with multiple inputs. in the time since I asked the question, the slow output seek finished so now it's already encoding :-P
[18:19:33 CET] <logicWEB> for what it's worth, I almost always specify output seeking out of habit, and it's a habit I haven't broken because it's never been slow when there's only one input. I have a _suspicion_ that when the video source isn't being multiplexed, the seek operation goes directly to the input format handler and the input format handler doesn't _have_ to implement it by decoding & discarding the time you want to skip
[18:19:54 CET] <logicWEB> I'm just guessing about the cause, but I'm 100% certain that when I specify an MKV file as input and _output_ seek it, it starts up right away, I do it all the time without thinking :-P
[18:20:00 CET] <kepstin> but yes - if you use '-ss' before `-i`, it asks the demuxer to use file format specific knowledge to seek, and then ffmpeg internally handles making sure it's accurate to the exact frame. If you use `-ss` after all `-i`, then ffmpeg decodes all videos, runs through the filter chain, and then discards up until it gets an output timestamp matching `-ss`
[18:20:53 CET] <kepstin> note the difference between input timestamp and output timestamp - if you have filters that modify timestamps (like setpts) this can be a noticeable change :)
[18:21:04 CET] <logicWEB> ah yes that is a good point, thanks :-)
[18:22:29 CET] <logicWEB> the project I'm working on right now is trying to make a movie on DVD look as nice as possible, and the authors of this DVD frequently made micro-adjustments to input framerate and their software handled it by cutting or duplicating individual fields, so there is random interpolation _all over the place_, it's a huge mess, so I extracted every frame of the movie and then I'm putting it back together
[18:22:39 CET] <kepstin> and there's no special case for speeding up output `-ss` with one input file, since it has to look at output timestamps anyways.
[18:22:41 CET] <logicWEB> and to my pleasant surprise, passing 170,000 frames of input and combining it with the audio track from the original MPEG-2 stream actually lined up exactly
[18:23:35 CET] <kepstin> (although.. maybe there is if you don't specify any filtering? I haven't actually checked that, to be honest)
[18:24:33 CET] <kepstin> logicWEB: huh, that's really weird. was it not a standard telecine pattern?
[18:24:35 CET] <logicWEB> well in any case, whether there is or isn't, I'm clearly not doing it the way it's _meant_ to be done, and I'll make a conscious effort going forward to put all my `-ss`es before my `-i`s
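kepstin's input-vs-output seeking distinction, in command form (filenames and stream indices are taken from the commands quoted above; the encoder settings are placeholders):

```shell
# -ss BEFORE each -i is an input option: each demuxer seeks on its own,
# and ffmpeg then trims to the exact frame, so both inputs stay aligned.
ffmpeg \
  -ss 20:00 -framerate 30000/1001 -i Frame%05d.png \
  -ss 20:00 -i Source.m2ts \
  -map 0:v -map 1:4 \
  -c:v libx264 -crf 18 -c:a copy out.mkv

# -ss AFTER all -i is an output option: every input is decoded and run
# through the filter chain, and frames are discarded until the output
# timestamp reaches the seek point.
ffmpeg -framerate 30000/1001 -i Frame%05d.png -i Source.m2ts \
  -map 0:v -map 1:4 -ss 20:00 -c:v libx264 -crf 18 -c:a copy out.mkv
```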
[18:25:33 CET] <kepstin> logicWEB: having periodic field duplications is one way of handling ntsc film to pal (24 frames per second to 50 fields per second), for example
[18:25:35 CET] <logicWEB> it's different patterns all over the place :-P one that showed up in quite a few animations was AA AB BB BC BC CC, but then here and there you'll get a spurt of AA AB BB BC CC CD or AA AB AB BB BC BC CC CD CD DD
[18:25:52 CET] <logicWEB> and in some places AA BC CB DD
[18:25:58 CET] <kepstin> ouch, that's just nasty.
[18:26:02 CET] <logicWEB> er AA BC CD DD
[18:26:22 CET] <kepstin> like, only a single field of `B`? :(
[18:27:11 CET] <kepstin> but yeah, that sort of thing does sound like someone did edits on a telecined film using equipment for interlaced video.
[18:27:20 CET] <logicWEB> yes. there was one part also that did AB AB CC DD, like, just had a random interpolated frame that was half of two other frames, repeated twice, and then jumping straight to the following frame, making it impossible to reconstruct the original
[18:28:14 CET] <logicWEB> the other thing I did with this project is pass every single frame through an edge-preserving upscaler, it's a bit of an experiment, but I think it has mostly improved the quality, except in the credits which are combed all to hell and hurt the eyes to look at
[18:28:25 CET] <logicWEB> sigh :-P
[18:28:47 CET] <logicWEB> I shake my fist at whoever thought up the idea of passing interlaced frames as though they were progressive to the MPEG-2 encoder
[18:29:06 CET] <kepstin> oh, yeah, credit rolls are fun. A lot of the time the credit text itself is 60i overlaid over telecined 24fps
[18:29:10 CET] <kepstin> and that's just impossible.
[18:29:38 CET] <logicWEB> really gives you the impression that the people doing that editing don't _really_ understand the video format
[18:30:14 CET] <logicWEB> anyway thanks very much for your time :-)
[18:30:16 CET] <kepstin> i mean, if you're actually viewing the 60i video realtime on an interlaced tv, it looked good enough.
[18:30:32 CET] <kepstin> but if it got partly detelecined by the dvd encoder software,...
[18:31:00 CET] <logicWEB> right but interlaced TVs have been in the minority since ... I dunno, 15 years ago now? 20? :-P
[18:31:04 CET] <JEEB> kepstin: there was an avisynth thing into which you would feed the telecine pattern, and it would deint to 60/1.001Hz, then double that to 120/1.001 as that can be divided by 24/1.001
[18:31:34 CET] <JEEB> and it would both inverse telecine and apply the interlaced credits nicely
[18:31:45 CET] <JEEB> it was lol slow, but nice
[18:31:55 CET] <logicWEB> that does sound nice :-)
[18:32:01 CET] <kepstin> JEEB: my impression was that the 120/1.001 thing was mostly a workaround for avi not supporting vfr natively, so you picked an LCM of 30, 60, 24 as the framerate
[18:32:10 CET] <JEEB> kepstin: that's different
[18:32:15 CET] <JEEB> 120/1.001 in AVI was a VFR thing
[18:32:17 CET] <JEEB> "null frames"
[18:32:25 CET] <JEEB> this was an avisynth script
[18:32:34 CET] <JEEB> avisynth is what came before vapoursynth
[18:32:38 CET] <JEEB> frame server :P
[18:32:38 CET] <logicWEB> of course there's not really much you can do when the interlaced frame has been encoded as progressive and the frequency of the comb is too high for accurate representation at the bitrate it's encoding at, causing the detail (especially in the red channel) to bleed across scans
[18:33:50 CET] <kepstin> logicWEB: sometimes it helps to make sure you force swscale to do an 'interlaced' conversion from 4:2:0 to 4:2:2 or 4:4:4
[18:34:15 CET] <JEEB> kepstin: http://up-cat.net/p/ab797e4f
[18:34:20 CET] <kepstin> depends on the exact circumstances tho
[18:34:31 CET] <logicWEB> does that help with decoding existing video that has already been mangled, or just getting better output when encoding?
[18:34:46 CET] <JEEB> that's from one of my encode scripts from ye olden days
[18:35:06 CET] <logicWEB> in this project, I've had frames where the frame I want is the bottom field of frame X plus the top field of frame X + 1, but when I reconstruct it, I find that detail from the top of X and the bottom of X + 1 has bled into my reconstructed frame
[18:35:10 CET] <kepstin> logicWEB: in some cases it can help with decoding poorly encoded video back to fields.
[18:35:18 CET] <kepstin> but depends on exactly how the video was encoded
[18:35:27 CET] <kepstin> just another thing to throw into your toolbox to try out :)
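The interlaced chroma conversion kepstin mentions can be forced through the scale filter's `interl` flag; a sketch with hypothetical filenames (whether it actually recovers clean fields depends on how the source was encoded):

```shell
# Upsample 4:2:0 -> 4:2:2 treating each frame as two interleaved fields,
# so chroma from the top field is not smeared into the bottom field.
# ffv1 here is just a lossless intermediate for further field surgery.
ffmpeg -i dvd_source.mpg \
  -vf "scale=iw:ih:interl=1,format=yuv422p" \
  -c:v ffv1 fields_intermediate.mkv
```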
[18:35:29 CET] <JEEB> the last time I used it in anger was against an interlaced scrolling text that told you to not pass the content over the interwebs and that copyright is a thing
[18:35:31 CET] <logicWEB> I think the answer to that question for this particular project is "badly" :-D
[18:36:11 CET] <kepstin> JEEB: ah, I see - the idea is to get the deinterlaced 60i scroll overlaid on 24p video without any judder?
[18:36:22 CET] <JEEB> kepstin: yes. that's why it gets doubled to 120Hz
[18:36:29 CET] <JEEB> as that is modulo 24
[18:36:52 CET] <JEEB> oh how convenient
[18:37:01 CET] <JEEB> I actually had been testing it by encoding that part separately
[18:40:04 CET] <JEEB> kepstin: https://megumin.fushizen.eu/random/ivtc_txt60mc_sample.mkv
[18:40:06 CET] <JEEB> this is how the result looked
[18:40:15 CET] <JEEB> which was pretty good I guess
[18:41:10 CET] <JEEB> (given that it was indeed mismatching rates between the parts)
[18:41:33 CET] <JEEB> now, of course just ignoring the overlay would have been 100% as valid
[18:41:52 CET] <JEEB> since people would be watching the video after all, not the overlay :P
[18:46:52 CET] <brimestone> How would one get FFMpeg with libssh on macOS?
[18:47:21 CET] <JEEB> you build it?
[18:47:42 CET] <JEEB> or you utilize something else to give you access to that storage over ssh (if that's really the best alternative for you)
[18:51:16 CET] <brimestone> But, building FFMpeg with libssh is better for me..
[18:51:29 CET] <JEEB> sure, if it works for ye that's fine too
[18:52:35 CET] <brimestone> unfortunately brew doesn't do --options anymore, which means I have to build it from the ffmpeg source.. unless you know where I can download a compiled one (with --enable-libssh)
[18:53:05 CET] <JEEB> it's not too hard to build unless whichever libssh FFmpeg utilizes is hard to build
[18:53:19 CET] <JEEB> I actually kept 100% out of homebrew when building my FFmpeg on mac
[18:53:30 CET] <JEEB> had to build some tools first, but that's just details ;)
[18:53:49 CET] <JEEB> https://kuroko.fushizen.eu/docs/buildan_mpv_macos.txt
[18:53:54 CET] <brimestone> I'm always wary about building it myself.. I may exclude stuff that is unknowingly important
[18:54:24 CET] <JEEB> for decoding FFmpeg contains 99% of all stuff in itself
[18:54:35 CET] <JEEB> for encoding you probably know what formats you will be encoding in
[18:54:36 CET] <JEEB> if in any
[18:55:11 CET] <brimestone> I do.. well, assuming 3dlut is built in to avfilter and I dont have to enable anything
[18:55:34 CET] <JEEB> 3dlut is a filter not a video or audio format
[18:55:46 CET] <JEEB> for example AVC/H.264 is a video format, and you need libx264 to *encode* with it
[18:55:57 CET] <brimestone> Yeah, so should be built in yeah?
[18:56:00 CET] <JEEB> for decoding the AVC/H.264 decoder is internal and enabled by default
[18:57:09 CET] <JEEB> brimestone: yes, lut3d is a filter that comes with FFmpeg itself most likely since it got enabled with my FFmpeg config, which is rather minimal
[18:57:33 CET] <JEEB> or the correct wording is "does not depend on external libraries that you'd have to build first"
[18:58:41 CET] <brimestone> right.. thanks.
[18:59:34 CET] <JEEB> brimestone: or alternatively you could look up how you could modify that homebrew recipe
[18:59:47 CET] <JEEB> and add the option and dependency yourself
[18:59:58 CET] <brimestone> look up?
[19:00:10 CET] <JEEB> figure out
[19:00:18 CET] <brimestone> Oh, got it. :)
[19:00:31 CET] <JEEB> since IIRC homebrew supported modifying the rules of making the packages
[19:01:49 CET] <brimestone> I was using http as the source protocol and have several computers transcode segments, but I can only make it work for byte-range 0-100000; the 100000-200000 one won't work..
[19:02:39 CET] <brimestone> So I tried to do -ss and -t but it causes the server to load the whole file and not just the byte-range that -ss -t requires.
[19:06:50 CET] <brimestone> JEEB, maybe something will stand out? https://gist.github.com/brimestoned/f625c5205a4ad20b5140a9a0cbbd8f5b
[19:07:59 CET] <JEEB> maybe the lack of initial structures at all?
[19:08:04 CET] <kepstin> brimestone: with mov files, ffmpeg needs to read some initialization data for the codecs that's either at the start or end of the file
[19:08:14 CET] <JEEB> mov is not a format you can just randomly access in
[19:08:17 CET] <JEEB> (or mp4)
[19:08:32 CET] <kepstin> brimestone: you can't decode mov from an arbitrary point, you need to use -ss and then ffmpeg will make multiple requests
[19:09:19 CET] <kepstin> ffmpeg will make different requests to read the file headers, seek index, and then it'll make a request at the byte range where the video it wants to decode is.
[19:10:25 CET] <brimestone> Yes, I tried that.. but its actually causing the server to load the entire file (17GB+) instead of doing range reads
[19:10:47 CET] <kepstin> did you put -ss before -i ?
[19:10:53 CET] <brimestone> Yes...
[19:10:57 CET] <brimestone> Should it be after?
[19:11:06 CET] <brimestone> -ss is before -i and -t is after -i
[19:12:54 CET] <kepstin> so in that case, ffmpeg *should* be doing a byterange request with the start point of where it's reading video from, but it'll say it's reading until the end (it doesn't know exactly where it's gonna stop at that point - it'll signal the stop by disconnecting from the server)
[19:15:09 CET] <brimestone> Yes! That's what I expected it to do. So I thought I would do an ffprobe and find the actual byte offsets of where the keyframes are and split it "properly", but still same issue
[19:15:44 CET] <kepstin> do you have a log of the actual https requests ffmpeg is making with -ss?
[19:16:14 CET] <kepstin> but yes, this cannot work unless ffmpeg is allowed to do its own seeking, because of the design of the mp4 format.
[19:16:16 CET] <brimestone> I can create it from the debug log. Let me re-create it now
[19:16:37 CET] <kepstin> er, mov, but basically the same thing
[19:16:56 CET] <brimestone> But the source is not mp4, its a ProRes mov and all of the frames are k frames
[19:17:08 CET] <brimestone> ohh..
[19:17:22 CET] <brimestone> Let me re-create the problem and post the logs on the gist.
[19:20:35 CET] <brimestone> Got the log. https://gist.github.com/brimestoned/f625c5205a4ad20b5140a9a0cbbd8f5b
[19:22:37 CET] <kepstin> with -ss 10, it probably just decided that the seek point was close enough to the start of the file that it wasn't worth closing the connection and re-opening at the seek point?
[19:22:46 CET] <brimestone> Then eventually, it failed.  moov atom not found | Statistics: 0 bytes read, 3 seeks
[19:23:13 CET] <kepstin> hmm. strange. i would have expected it to try seeking to the end of the file to find the moov
[19:23:40 CET] <kepstin> but if you're http streaming mov or mp4 files, you should really use qt-faststart or -movflags faststart to move the moov to the start of the file
[19:24:08 CET] <brimestone> Well its not streaming.. its just statically serving that file..
[19:24:17 CET] <kepstin> same thing
[19:24:39 CET] <kepstin> if you're decoding/playing it while it downloads, then it's streaming :)
[19:24:52 CET] <brimestone> Which got me looking at whether libssh would make this easier.. but it turns out compiling it just gives me never-ending errors
[19:25:37 CET] <kepstin> anyways, it sounds like the solution to your problem is to make your mov file streaming friendly by remuxing it to move the moov to the start of the file, then ffmpeg with -ss should behave nicely.
[19:26:19 CET] <kepstin> but it is strange that ffmpeg is unable to seek to the end of the file to read the moov
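The remux kepstin suggests is a stream copy (no re-encode), so it is fast; either of these works (filenames hypothetical):

```shell
# Rewrite the file with the moov atom moved to the front,
# copying all streams untouched
ffmpeg -i input.mov -c copy -movflags +faststart output.mov

# or the standalone helper shipped in ffmpeg's tools/ directory
qt-faststart input.mov output.mov
```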
[19:27:17 CET] <brimestone> haha.. chicken and the egg. What I'm trying to do is build a distributed transcoder.. where the server hosts a source file and n transcoder machines would each transcode a segment of the file and return it to the server, then the server would concat it all together
[19:28:38 CET] <kepstin> if you want to do that, then the source file has to be in a format that can be read in a segmented fashion. The usual thing to do is to have one server convert the file to segments first (low cpu, high io operation), and then encode those in parallel (high cpu, low io), and then merge them again when done on one box (low cpu, high io)
[19:29:32 CET] <kepstin> audio is usually faster to encode and is tricky to segment (due to encoder delays and frame sizes), so I recommend having the audio encoded separately
[19:29:41 CET] <kepstin> (in full)
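The split / encode-in-parallel / merge workflow kepstin outlines might look roughly like this (segment length, codecs, and filenames are all placeholders; the segment muxer splits on keyframes, which is harmless for all-intra ProRes):

```shell
# 1) split video into ~10 s segments, stream copy (low cpu, high io)
ffmpeg -i source.mov -map 0:v -c copy -f segment -segment_time 10 seg_%04d.mov

# 2) encode the audio once, in full, to avoid segment-boundary artifacts
ffmpeg -i source.mov -map 0:a -c:a aac audio.m4a

# 3) ...workers encode seg_*.mov to enc_*.mp4 in parallel...

# 4) merge with the concat demuxer, then mux the audio back in
printf "file 'enc_%04d.mp4'\n" 0 1 2 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -i audio.m4a -c copy final.mp4
```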
[19:29:47 CET] <brimestone> interesting..
[19:30:05 CET] <brimestone> Yes, thats also on my list of things to do... if I can't get the libssh working..
[19:30:53 CET] <brimestone> I could share the file to the transcoders via smb, but thought, libssh + certificates would be easy to deploy
[19:31:06 CET] <kepstin> also, strange to be doing this on mac os, I don't know where you'd find a bunch of big high cpu servers running mac os to do transcoding on ;)
[19:31:17 CET] <kepstin> most linux ffmpeg build should have working libssh, i'd expect?
[19:33:22 CET] <brimestone> well, initially I was planning to use ubuntu to do this, but not sure how to handle "ProRes" license.
[19:33:49 CET] <brimestone> By doing it on the Mac, then I get Apple ProRes license..
[19:34:24 CET] <brimestone> Then, thought about adding Docker + Kubernetes or Swarm, but not sure yet.
[19:34:45 CET] <kepstin> ffmpeg uses an internal decoder, not the system one, fwiw - but you'd have to talk to your lawyer about whether that changes anything
[19:45:02 CET] <brimestone> kepstin: I'm now a little confused.. should I try to do this in  a "distributed" fashion? Or should I just stick with a single machine arch.
[19:46:45 CET] <kepstin> completely depends on your requirements. ffmpeg should behave similarly enough on different systems that it won't really make a difference.
[20:37:02 CET] <brimestone> Thats odd, my machine at home failed to install libssh
[20:53:26 CET] <brimestone> But on my laptop at work, it just worked!
[21:59:01 CET] <kevinnn> Does anyone have a nice example of how to parse an RTSP stream manually with libav?
[21:59:29 CET] <kevinnn> specifically I noticed that ffmpeg/libav do not handle i-frames well
[22:00:15 CET] <kevinnn> And I cannot use libav's built in rtsp client because I am on windows and I cannot figure out how to get ffmpeg to compile with win threads
[22:00:46 CET] <JEEB> umm, all of my mingw-w64 toolchains and I think even MSVC by default hit the default windows thread wrapper
[22:01:00 CET] <JEEB> and I don't have issues with I-frames anywhere (UDP or TCP)
[22:01:15 CET] <JEEB> not that I'm using rtsp, but if you're not using FFmpeg's rtsp protocol either...
[22:06:05 CET] <kevinnn> https://pastebin.com/qKciKE5v
[22:06:08 CET] <kevinnn> JEEB
[22:06:22 CET] <kevinnn> sorry to post my code but that is how I am decoding each packet
[22:06:30 CET] <kevinnn> and the screen just does not look right
[22:06:35 CET] <kevinnn> sorry it is a bit of a mess
[22:07:41 CET] <mozzarella> guys
[22:08:05 CET] <mozzarella> how can I set the speed of the output to be 240 times that of the original
[22:08:23 CET] <mozzarella> original/input
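mozzarella's 240x speed-up is usually done with `setpts`; a sketch with hypothetical filenames, assuming the audio can simply be dropped (atempo cannot reach 240x):

```shell
# Compress video timestamps 240x and drop the audio; -r caps the output
# frame rate so the now-surplus frames are discarded rather than kept.
ffmpeg -i input.mp4 -vf "setpts=PTS/240" -an -r 30 timelapse.mp4
```

At 240x, an hour of input becomes 3600 / 240 = 15 seconds of output.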
[22:15:20 CET] <JEEB> kevinnn: are you sure whatever you're doing with the I/O etc is not making you lose packets?
[22:15:34 CET] <JEEB> if you try to read the stream with something like mpv, does it work?
[22:15:42 CET] <JEEB> https://mpv.srsfckn.biz/
[22:15:46 CET] <JEEB> since you were on windows
[22:15:51 CET] <JEEB> mpv "rtsp://blah"
[22:16:22 CET] <JEEB> basically want to compare with something else using FFmpeg to decode video :P
[22:16:46 CET] <kevinnn> we have tested it on VLC on windows and it worked okay there, but that was on a linux based system
[22:17:56 CET] <JEEB> VLC utilizes FFmpeg's decoder when hardware decoding is disabled
[22:18:13 CET] <JEEB> mpv just utilizes FFmpeg for pretty much all bits of it :P
[22:18:46 CET] <JEEB> anyways, if swdec works then FFmpeg's H.264 decoder shouldn't have issues with your H.264 stream and the issue is somewhere else
[22:19:02 CET] <JEEB> either receiving or depacketizing
[22:19:29 CET] <JEEB> (and receiving issues can also be due to the sender and/or your network)
[22:20:18 CET] <JEEB> I know that UDP rtsp tends to break with a *lot* of network stuff, while TCP works. so whenever you compare different reception of an RTSP stream you should check if it's using TCP or UDP RTSP
[22:22:14 CET] <kevinnn> okay I am going to download and install mpv real quick and check to make sure it can stream correctly on my windows computer. But I am fairly certain it will work
[22:22:19 CET] <kevinnn> also we are using UDP
[22:22:30 CET] <kevinnn> so it must be depacketizing...
[22:22:37 CET] <kevinnn> which is in the code I posted
[22:22:46 CET] <JEEB> no install needed for mpv, it's just extract and call from command line
[22:22:46 CET] <kevinnn> does it appear incorrect?
[22:23:26 CET] <JEEB> I didn't check your decoding code too hard, but unless you're re-using AVFrames etc it should be OK
[22:23:40 CET] <JEEB> also for the record, mpv defaults to TCP RTSP, you will have to set a separate parameter to udp
[22:23:44 CET] <JEEB> *for UDP
[22:23:48 CET] <JEEB> while the FFmpeg default is UDP
[22:24:04 CET] <JEEB> so always check what the client attempts to connect with
[22:24:05 CET] <kepstin> a common cause of receive issues on udp is that the application is running too slowly, and the os receive buffer fills so packets are lost.
[22:24:29 CET] <kepstin> also your code has some strange variable naming, lots of "x264 decoder" stuff, but the x264 library is an encoder only :)
[22:26:38 CET] <kevinnn> JEEB: okay how do I get mpv to use UDP?
[22:26:51 CET] <kevinnn> can't find it in the options
[22:27:11 CET] <forgon> I want to compress a video I created, but do not know which options I have. Could someone point me to a good link?
[22:27:11 CET] <kevinnn> oh I found it!
[22:27:14 CET] <JEEB> https://mpv.io/manual/master/#options-rtsp-transport
[22:27:20 CET] <JEEB> yes, this thing :P
[22:27:39 CET] <kevinnn> okay I think it is working...
[22:27:44 CET] <kevinnn> but there is no GUI
[22:27:53 CET] <forgon> I chose libx264 as video codec.
[22:27:53 CET] <kevinnn> it just says it is playing
[22:28:19 CET] <JEEB> mpv is very minimal, you just get on-screen controls on the video surface
[22:28:42 CET] <JEEB> for more detailed logs see --log-file=mpv_sucks.log and then the log file it writes
[22:28:45 CET] <kevinnn> onscreen controls?
[22:29:04 CET] <kevinnn> so it is normal not to have visual output?
[22:29:25 CET] <JEEB> no, if you have video it should be there
[22:29:27 CET] <JEEB> https://user-images.githubusercontent.com/680386/47966850-2d349f00-e057-11e8-9997-42bfa1d68777.png
[22:29:40 CET] <JEEB> that's the amount of controls you get by default on-screen :P
[22:30:14 CET] <kevinnn> but no GUI even pops
[22:30:21 CET] <kepstin> forgon: there's a convenient ffmpeg wiki page for that, here: https://trac.ffmpeg.org/wiki/Encode/H.264
[22:30:30 CET] <pink_mist> did you forget to build mpv with lua support?
[22:30:33 CET] <JEEB> kevinnn: then no data is coming out or something else?
[22:30:46 CET] <kevinnn> build? I didn't build mpv
[22:30:47 CET] <kepstin> forgon: you generally want to use CRF mode, and the only options you might want to change there are usually "-crf" and "-preset"
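The CRF mode kepstin recommends comes down to a command like this (input/output names hypothetical; 23 is the libx264 default, lower means higher quality and larger files, and a slower preset gives better compression for the same quality):

```shell
# Constant-quality H.264: pick quality with -crf, speed/size with -preset
ffmpeg -i input.mkv -c:v libx264 -crf 23 -preset medium -c:a copy output.mp4
```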
[22:30:50 CET] <JEEB> pink_mist: he's using the lachs0r's builds and he specifically switched to UDP for RTSP
[22:30:54 CET] <pink_mist> ah
[22:33:23 CET] <JEEB> kevinnn: anyways, rtsp has worked for me on a local network during testing (albeit TCP only because UDP would just get packets dropped by the network equipment)
[22:33:42 CET] <JEEB> what about `ffprobe -v verbose rtsp://STUFF`
[22:37:08 CET] <kevinnn> JEEB: for some reason mpv does not work with my rtsp stream... I am going to do some digging to figureout why..
[22:38:19 CET] <JEEB> that's why  I noted you'd also check with ffprobe
[22:49:58 CET] <forgon> While reading about codecs, I constantly run across x265 as an alternative to x264. Is there a reason not to use it?
[22:50:52 CET] <JEEB> its encoding is less optimized (in both psychovisuals and speed), and it encodes into a format that is less supported by hardware (but you might be OK with that)
[22:51:21 CET] <JEEB> if your use case is to just encode at a very low bit rate where things look really crap, then HEVC (which is what x265 encodes) might be worth it
[22:51:29 CET] <JEEB> but only if you put the extra time into it
[22:52:14 CET] <pink_mist> I've had issues software decoding x265 stuff too :/
[22:52:31 CET] <pink_mist> as in, stutters because my cpu can't keep up :/
[23:05:48 CET] <another> depends on cpu, load and resolution
[23:22:44 CET] <pink_mist> my cpu was a 3.7GHz Core i3-6100 with 2 cores (4 threads), and the resolution was a 4k video played back on a 1080p screen, nothing else was causing any load
[23:23:52 CET] <brimestone> hey guys, how would I approach this (speed). Source file is (ProRes 4K 444 10Bit LogC) and I want to create a 480x270 with 3dlut filtered. Should I scale the video to 480x270 then apply the lut?
[23:29:35 CET] <kepstin> scaling then applying the lut will be faster than applying the lut then scaling.
[23:32:22 CET] <kepstin> (if the resolutions were closer, it would be harder to say - iirc the lut3d filter needs rgb data, and I assume you're gonna want to convert to yuv420p after, so there's multiple conversions involved)
[23:32:53 CET] <kepstin> i could be wrong about the rgb thing :)
[23:34:34 CET] <kepstin> just checked, it is rgb.
[23:40:46 CET] <kepstin> brimestone: in the end, your filter chain's probably gonna look like 4K yuv444p10le -> scale -> 480x270 rgb (of some sort, probably "bgr0" - should match bit depth of lut) -> lut3d -> scale -> 480x270 yuv420p
[23:42:22 CET] <brimestone> it looks good, just need to figure out how to make it fast.
[23:42:33 CET] <brimestone> Only getting about 0.0983xRT
[23:42:40 CET] <brimestone> ~2.1 fps
[23:43:20 CET] <kepstin> are you doing video encoding as well?
[23:43:44 CET] <kepstin> I'd expect the slowest parts of an ffmpeg command doing that filter combination to be the video decoding on input and encoding on output
[23:43:58 CET] <kepstin> after that, the downscale and conversion from 4k to 480x270
[23:45:12 CET] <kepstin> it would be worth running with no output - replace your output file with "-f null /dev/null" - to see if the output is the bottleneck
[23:45:19 CET] <kepstin> (and remove and -c options)
[23:45:25 CET] <brimestone> Im basically doing this. https://gist.github.com/brimestoned/9cc865bd12d66586b0d4f5ff7b35f543
[23:45:40 CET] <kepstin> also try dropping the filter chain and see how fast it can manage decoding only
[23:45:52 CET] <kepstin> ok, so i see your issue
[23:45:57 CET] <kepstin> you're not downscaling :)
[23:46:09 CET] <kepstin> each -vf option *replaces the entire filter chain*
[23:46:29 CET] <kepstin> you need to use `-vf scale=480:270,lut32=/path/to/file`
[23:46:40 CET] <brimestone> Oh... let me try that.
[23:46:42 CET] <kepstin> fixing my typo of course
[23:47:13 CET] <kepstin> right now it's slow because libx264 is encoding 4k video (possibly 10bit even)
[23:48:36 CET] <kepstin> i'd expect libx264 to be complaining that you're exceeding the level limits on that encode too, and you should have noticed that the output video resolution (next time, please also include the console output of the ffmpeg command in your paste) was incorrect.
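Putting kepstin's fix together, the single filter chain (scale first, then the LUT) would look something like this (paths and encoder settings are placeholders):

```shell
# scale BEFORE lut3d so the LUT runs on 480x270 pixels instead of 4K;
# format=yuv420p converts back out of the RGB format lut3d operates in
ffmpeg -i source_4k_prores.mov \
  -vf "scale=480:270,lut3d=file=grade.cube,format=yuv420p" \
  -c:v libx264 -crf 18 preview.mp4
```

480x270 is exactly 1/8 of 3840x2160, so the 16:9 aspect ratio is preserved.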
[23:50:45 CET] <brimestone> wow.. its faster!
[23:54:06 CET] <kevinnn> Hi all, does "[udp @ 00395860] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)" jeopardize performance?
[23:54:16 CET] <kevinnn> and if it does how can I get rid of it?
[23:54:43 CET] <kepstin> kevinnn: it means that your build can't run the receive in a separate thread, which means it has a higher chance of dropping packets
[23:54:55 CET] <kepstin> and the answer is to find or build an ffmpeg with working threads.
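Short of rebuilding with thread support, one partial mitigation (not a substitute for the threaded circular buffer) is enlarging the OS socket receive buffer via the udp protocol's `buffer_size` option; a sketch with a hypothetical address:

```shell
# Request a ~2 MiB SO_RCVBUF on the receiving UDP socket
ffmpeg -i "udp://@:5000?buffer_size=2097152" -c copy capture.ts
```

The OS may clamp this below the requested size (e.g. net.core.rmem_max on Linux); 2097152 is 2 * 1024 * 1024 bytes.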
[23:55:21 CET] <kevinnn> I am on windows and it is a crazy pain to get it to build with pthread support
[23:55:29 CET] <kevinnn> and pre-built ones i can use?
[23:55:33 CET] <JEEB> that UDP protocol feature is specifically implemented with only native pthreads (or wrappers thereof)
[23:55:49 CET] <JEEB> also man, are you still on about this stuff? are you that guy I helped before?
[23:56:10 CET] <linext> ffmpeg has been working very well for recording security cam video stream.  thanks for the help
[23:56:28 CET] <kevinnn> yes I am that guy :) and I appreciate the continued support
[23:57:06 CET] <kevinnn> right but I am a linux guy. and getting pthread to compile nicely with ffmpeg on windows is a nightmare
[23:57:15 CET] <kevinnn> one I have yet to get right yet
[23:57:17 CET] <JEEB> uhh
[23:57:22 CET] <JEEB> on linux pthreads is what you use
[23:57:27 CET] <JEEB> so linux is not your problem
[23:57:32 CET] <JEEB> not sure why you mention it
[23:57:37 CET] <linext> the -c copy argument was very helpful for reducing the CPU usage from 25% to 5%
[23:58:07 CET] <kevinnn> I mention it because compiling this stuff on windows is a pain in comparison to how things are built in linux
[23:58:17 CET] <kevinnn> and integrated into visual studio
[23:58:34 CET] <kevinnn> are there any pre-built binaries available with pthread built in?
[23:58:43 CET] <JEEB> I'd say it's pretty much the same :P integration for visual studio I have no idea and I have no interest of. as long as the libraries and headers are around it gets found and that's as much as I care
[23:58:54 CET] <JEEB> kevinnn: I think if you want that feature the best alternative would be to have it implemented in the UDP protocol
[23:59:08 CET] <JEEB> which is either you coding it, or putting up a bounty on it
[23:59:34 CET] <JEEB> because last time you tried whatever pthreads wrapper it crashed IIRC
[23:59:40 CET] <JEEB> so I'm not going to waste time with that
[00:00:00 CET] --- Tue Mar  5 2019