[Ffmpeg-devel-irc] ffmpeg.log.20180530

burek burek021 at gmail.com
Thu May 31 03:05:01 EEST 2018


[01:19:31 CEST] <Zexaron> Hello, I was looking for those examples or explanation how someone can interface with ffmpeg API, if there is such a thing, basically, dumping frames off a 3D application's render output and making a video.
[01:19:52 CEST] <JEEB> yes there most definitely are APIs
[01:20:04 CEST] <JEEB> the ffmpeg.c application is just a single API client that doesn't even utilize all of the features
[01:20:47 CEST] <Zexaron> To recap it's the Dolphin emulator https://github.com/dolphin-emu/dolphin/blob/master/Source/Core/VideoCommon/AVIDump.cpp  - people here said that using AVI is definitely not right since it has no VFR support; then it may be an issue with how PTS is being applied, and then a problem with "ticks" or something
[01:22:10 CEST] <Zexaron> I'm quite a rookie "programmer" but I'm willing to learn and take a look into this, since the others aren't interested and are busy with other stuff. It's a bit hard to know all of their problems, because I don't know it in depth; I never used this ffmpeg movie recording which they call "framedumping"
[01:22:12 CEST] <JEEB> time base is something like 1/1000, and it tells you the size of a single tick. then each raw frame should have a PTS and duration on that time base
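(For illustration - a minimal sketch of the above using FFmpeg's C API, not Dolphin's actual code; the millisecond capture clock is a hypothetical stand-in:)

    /* With a 1/1000 time base one tick is a millisecond; each raw frame
     * gets its PTS (and duration) counted in those ticks. */
    #include <libavutil/frame.h>
    #include <libavutil/mathematics.h>

    static void stamp_frame(AVFrame *frame, AVRational enc_time_base,
                            int64_t capture_time_ms /* hypothetical clock */)
    {
        /* rescale the millisecond clock into the encoder's time base */
        frame->pts = av_rescale_q(capture_time_ms,
                                  (AVRational){1, 1000}, enc_time_base);
    }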
[01:22:58 CEST] <Zexaron> Here's some examples of framedumping issues I found on the forums from the past. The developers know the code is crap tho, just nobody fixes it; however they don't seem to know that AVI is not VFR lol
[01:23:17 CEST] <JEEB> well you can hack up AVI for VFR with null frames
[01:23:55 CEST] <JEEB> AVI basically has a static frame rate, and then you put frames within that in
[01:24:05 CEST] <JEEB> and a frame can be "null"
[01:24:12 CEST] <JEEB> aka "just show the previous one"
[01:24:16 CEST] <Zexaron> Here's an example from degasus: https://forums.dolphin-emu.org/Thread-enable-dual-core-dump-frames-disaster?pid=416986#pid416986
[01:24:43 CEST] <JEEB> but generally the thing with screen capture is that you really need to keep the image in VRAM for as long as possible
[01:26:24 CEST] <JEEB> grab RGB -> scale down if needed (preferably in a shader) -> convert to YCbCr if needed (preferably in a shader) -> pass image to either a VRAM based (proprietary CPU/GPU ASIC encoder black box APIs) or RAM-based (yer usual software encoding) encoder while not trying to stall the rendering pipeline
[01:27:06 CEST] <Zexaron> Because it's an emulator it's that much harder; they also use Tool-Assisted-Speedrun (TAS) input controls recording, separate from framedumping, and they have desync issues and don't know what causes them https://forums.dolphin-emu.org/Thread-where-does-dolphin-save-my-recordings?highlight=framedump
[01:27:52 CEST] <Zexaron> So the workaround is, record the TAS Inputs when playing, then replay TAS Movie and framedump (ffmpeg movie) ... so it takes 2 steps instead of just all at once
[01:29:25 CEST] <Zexaron> JEEB: Oh they also want things to be pristine, lossless even, that's why they have FFV1 option too, but I'm already cooking up a plan, H264 hardware encoding could be attempted as it was suggested here too, because emulator mostly needs CPU and that would also fix the slowdowns
[01:29:48 CEST] <JEEB> nvidia does support lossless coding, even in 4:4:4 nowadays
[01:29:58 CEST] <JEEB> which is funny because their ASIC doesn't support *decoding* that
[01:29:58 CEST] <Zexaron> Well I haven't contributed much yet, that's why I'm saying "they", since I'm a newcomer
[01:30:27 CEST] <Zexaron> Ah wait, no 4:4:4 on GPUs ... isn't that like a big deal ?!?
[01:31:07 CEST] <JEEB> if you planned to utilize the GPUs for "pristine" things I'd guess they don't want a downsampling of chroma
[01:31:26 CEST] <JEEB> as I noted, and tested - nvidia can do 4:4:4 lossless on their *encoder* ASIC
[01:31:33 CEST] <JEEB> what they lack is the *decoder* part
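(For reference, a lossless 4:4:4 NVENC encode like JEEB describes can be requested from the ffmpeg CLI roughly as follows, assuming an NVENC-enabled build; supported presets and pixel formats vary by GPU generation and driver:)

    ffmpeg -i input.mkv -c:v h264_nvenc -preset lossless -pix_fmt yuv444p output.mkv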
[01:31:36 CEST] <Zexaron> That makes "GPU Acceleration" such a ... 2 steps back one step forward deal, well I have an RX480 so I can only test AMD stuff, probably won't bother with nvidia, that would have to be covered by someone else
[01:32:06 CEST] <JEEB> well the main part is to get the capture part optimized and as non-blocking of the rendering loop as possible
[01:32:39 CEST] <JEEB> you can look at examples of how to do correct colorspace conversions and scaling in shaders for video from, say, libplacebo or mpv
[01:32:58 CEST] <Zexaron> The AVIDump.cpp code is old; they also use the XVID fourcc for "compatibility", not sure with what, probably AVI tools. I didn't do much with AVI tools historically back in the day
[01:33:20 CEST] <kepstin> my impression is that dolphin is mostly single-threaded anyways, or at least relatively few threads? so if cpu scheduler does its job, a cpu-based encoder on a multicore cpu would probably be fine.
[01:34:06 CEST] <JEEB> the main thing is to get most of the stuff done quickly on the GPU that's possible, and then stuff passed on to the CPU (if required) as optimized as possible so that the render loop doesn't get stuck
[01:34:24 CEST] <JEEB> at least in realtime capture
[01:34:37 CEST] <Zexaron> kepstin: thanks for reminding me, Dolphin has Dual-Core mode, enabled by default; it does introduce some bugs in games but is generally fine, and singlecore isn't perfect either (sometimes regressions) ... they said that GPU/CPU threads are async when dual-core is enabled
[01:35:58 CEST] <Zexaron> And supposedly Dual-Core (which doesn't mean dual core inside the emulated machine, but on the host HW it runs on) is also said to mess with framedumping, AFAIK.
[01:38:28 CEST] <kepstin> so yeah, as little as possible should happen in the emulation/rendering threads - once you have the frame data, pass it over to a separate thread that handles encoding, writing files, etc.
[01:38:46 CEST] <JEEB> well the first thing should be something that does as much as possible on the GPU
[01:38:58 CEST] <JEEB> scaling, colorspace conversions
[01:39:13 CEST] <JEEB> then that passes the data off to the thing that would call FFmpeg's APIs
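(A rough sketch of that hand-off, assuming pthreads and a fixed-size queue of AVFrame pointers: the render thread only enqueues, a separate worker dequeues and talks to the encoder, so the render loop never blocks on encoding:)

    #include <pthread.h>
    #include <libavutil/frame.h>

    #define QUEUE_SIZE 8

    typedef struct FrameQueue {
        AVFrame *frames[QUEUE_SIZE];
        int head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t not_empty;
    } FrameQueue;

    /* Render-thread side: O(1); drops the frame when the queue is full
     * rather than stalling the rendering pipeline. Caller frees on 0. */
    static int queue_push(FrameQueue *q, AVFrame *f)
    {
        int ok = 0;
        pthread_mutex_lock(&q->lock);
        if (q->count < QUEUE_SIZE) {
            q->frames[q->tail] = f;
            q->tail = (q->tail + 1) % QUEUE_SIZE;
            q->count++;
            ok = 1;
            pthread_cond_signal(&q->not_empty);
        }
        pthread_mutex_unlock(&q->lock);
        return ok;
    }

    /* Worker side: blocks here, not in the renderer, then hands the frame
     * to avcodec_send_frame()/avcodec_receive_packet(). */
    static AVFrame *queue_pop(FrameQueue *q)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->not_empty, &q->lock);
        AVFrame *f = q->frames[q->head];
        q->head = (q->head + 1) % QUEUE_SIZE;
        q->count--;
        pthread_mutex_unlock(&q->lock);
        return f;
    }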
[01:39:47 CEST] <Zexaron> JEEB: One of the main things I've seen is that the video doesn't look right when frames aren't stable, and they don't need realtime. I think the problem was also in the file not being reliable and half broken; that would explain the "XVID for compatibility", as they probably had problems with 3rd party AVI tools ... so other people yesterday pointed out it may not be bandwidth or CPU speed but the PTS timecodes and the use of ticks, this
[01:39:47 CEST] <Zexaron> could have a bigger effect, maybe just this would improve most things.
[01:40:20 CEST] <JEEB> yes, you definitely want to keep your PTS and duration of frames throughout the whole chain
[01:40:38 CEST] <Zexaron> I mean, the file gets saved to disk offline, probably doesn't need to be realtime, it can have some delay before writing stuff, it's not getting piped, not a livestream.
[01:41:05 CEST] <furq> well you have a finite amount of buffer so it needs to at least average out to realtime in the long run
[01:41:50 CEST] <furq> no cpu should be breaking a sweat encoding ffv1 at 480p60
[01:41:59 CEST] <furq> but i guess some people want to capture at ridiculous resolutions
[01:42:06 CEST] <Zexaron> oh, sure, but let me just ask: which buffer? VRAM can't be used as a buffer?
[01:42:21 CEST] <Zexaron> Not that I don't know, but just so I'm sure.
[01:42:40 CEST] <furq> wherever you're buffering frames
[01:42:50 CEST] <furq> probably not vram because they've already been displayed
[01:43:27 CEST] <Zexaron> Yeah, it didn't make any sense to me why lossless would be causing slowdowns, except if they didn't have fast enough storage and misdiagnosed the problem.
[01:43:49 CEST] <JEEB> well your image would first be in VRAM
[01:44:03 CEST] <Zexaron> Because I did deal with video transcoding for a long time, including ffmpeg of course, so I know stuff from this side well (just not the deep programming stuff)
[01:44:15 CEST] <JEEB> whatever buffer you used for rendering, or even before it if you want the linear RGB stuff
[01:44:37 CEST] <JEEB> and then, since GPUs are actually good with image processing (as opposed to decoding or encoding modern video formats)
[01:44:57 CEST] <JEEB> you do as much of the wanted processing (can be none or close to none by default depending on the use case)
[01:45:03 CEST] <JEEB> in VRAM with the GPU
[01:45:14 CEST] <JEEB> scaling and colorspace conversion, if required
[01:45:38 CEST] <JEEB> this should already be done outside of the render loop, as I mentioned
[01:45:51 CEST] <JEEB> as in, you most likely pass on the source buffer to another "worker"
[01:46:09 CEST] <JEEB> that then has a queue or something so it doesn't block things
[01:46:48 CEST] <JEEB> then that makes a buffer that is something that can be straight fed into an encoder, and that data then gets passed into RAM (if a CPU-based encoder is utilized)
[01:47:08 CEST] <JEEB> and that should then of course be a separate "worker", calling the FFmpeg APIs
[01:47:19 CEST] <kepstin> in the ideal case with a hardware encoder, you don't do any copies from vram to system ram until you have the encoded video to save to disk.
[01:47:19 CEST] <Zexaron> Thanks for that. It would definitely be a custom implementation obviously; it's quite a juggling act. Some stuff they do with rendering has to be on the host CPU just because there are issues with VRAM-RAM bandwidth/latency on modern PCs
[01:48:43 CEST] <JEEB> kepstin: yes if you feed it to a black box graphics hardware compatible ASIC encoder, then you will want to keep the image in VRAM all the way until you get compressed bit stream, in which case the encoder's API probably won't even let you get a VRAM buffer :D
[01:49:22 CEST] <Zexaron> so there's a number of buffers, but I guess for framedumping we just need the last one. I've read about it and talked about it, but I'm not sure right now where the final one is; it surely has to be on the GPU, no? If it's just a copy of that image inside VRAM, without having to do any RAM round trip, then it should be good
[01:50:32 CEST] <Zexaron> I may also not be explaining it 100% accurately right now, but it's something like that.
[01:50:51 CEST] <JEEB> well, you need the buffer that you consider the thing that should be the source for your capture. in most cases it is the final render
[01:51:05 CEST] <JEEB> as I mentioned up there in the log, some might also want the linear RGB version
[01:51:35 CEST] <JEEB> so that the conversion to gamma or PQ or HLG or whatever can be done by the capture thing. if anyone wants to use "HDR" shaders and capture HDR, for example :P
[01:52:00 CEST] <JEEB> but that is out of the scope for basic design of such a capture setup
[01:52:38 CEST] <JEEB> anyways, almost 3am. need sleep :P
[01:52:49 CEST] <Zexaron> I actually didn't see the reason why they would want 4:2:2 ... at first, but then it clicked: you mean because online video services are all 4:2:2 too, and if they upload 4:4:4 it wouldn't make sense, right?
[01:54:00 CEST] <Zexaron> JEEB: heh, it's very hard if Dolphin would ever do HDRWCG, but there's been a discussion just in the last few days https://forums.dolphin-emu.org/Thread-feature-request-hdr-support
[01:55:09 CEST] <Zexaron> yeah, later
[03:14:21 CEST] <Zexaron> Hmm, I was looking if ffmpeg provides .libs for including in crossplatform projects
[03:17:04 CEST] <Zexaron> ffmpeg is used as an external
[03:17:15 CEST] <Zexaron> was looking to update the version
[03:18:46 CEST] <Zexaron> Ffmpeg isn't built as a project, only some external dependencies are used
[03:20:16 CEST] <Zexaron> I guess it's only for windows, okay so all I need is to download shared or dev build from zeranoe
[03:20:24 CEST] <Zexaron> https://github.com/dolphin-emu/dolphin/tree/master/Externals/ffmpeg
[03:34:06 CEST] <Zexaron> yeah, dev linking
[03:34:12 CEST] <Zexaron> got it
[04:39:21 CEST] <Zexaron> hmmm, zeranoe's readme for the dev build doesn't say dev, it says shared; not a big deal but well, still a bug ;)
[04:42:20 CEST] <Zexaron> maybe I'm wrong tho, not sure
[04:42:49 CEST] <Zexaron> but anyway, the libs from zeranoe are a completely different kind than the ones existing here, not sure what kind of step I need to make
[04:43:59 CEST] <Zexaron> They seem to be much larger in the repo, while zeranoe's are just a few hundred KB https://github.com/dolphin-emu/dolphin/tree/master/Externals/ffmpeg/lib
[05:15:05 CEST] <Zexaron> oh now I get it, it's very selective about what kind of codecs it includes ... a custom job I guess, since the full blown DLL for avcodec is huge
[05:21:16 CEST] <Zexaron> Ah oh well https://stackoverflow.com/questions/11701635/use-ffmpeg-in-visual-studio
[08:41:31 CEST] <null_> hello? would anyone be able to contribute to this superuser question? https://superuser.com/questions/1326835/ffmpeg-arguments-for-optimizing-video-stream
[08:56:52 CEST] <d-safinaskar> I want to capture X using 4 fps. I use this command:   ffmpeg -f x11grab -framerate 4 -s 1920x1080 -i $DISPLAY -c:v libx264 -pix_fmt yuv444p -preset ultrafast -qp 0 ~/2018-05-30-03-06.mkv
[08:57:10 CEST] <d-safinaskar> and I get lots of messages "Past duration 0.999825 too large"
[08:57:14 CEST] <d-safinaskar> what does this mean?
[08:59:37 CEST] <d-safinaskar> usually i don't see such a message
[08:59:56 CEST] <d-safinaskar> but when i suspend my laptop and then power on it again, the message appears
[09:00:55 CEST] <d-safinaskar> it seems that when the laptop powers on, ffmpeg tries to catch up on all the time when it was unable to capture the screen because the laptop slept
[09:01:21 CEST] <d-safinaskar> so, it seems, ffmpeg tries to capture at a very fast speed, it cannot, and thus this error message
[09:01:44 CEST] <d-safinaskar> but i want ffmpeg not to try to catch up on anything
[09:02:02 CEST] <d-safinaskar> when resuming, ffmpeg should simply continue to record at 4 fps
[09:02:05 CEST] <d-safinaskar> how to do this?
[09:07:09 CEST] <d-safinaskar> also, i know about -loglevel, i want to understand what is happening
[09:33:05 CEST] <lee__> hello
[09:33:29 CEST] <lee__> i have a question
[09:33:50 CEST] <lee__> i want to build ffmpeg for android on windows os
[09:34:19 CEST] <lee__> i modified the configure file, and made a .sh file
[09:34:52 CEST] <lee__> but during the build process: no such file or directory
[09:35:00 CEST] <lee__> how can i fix it?
[09:36:23 CEST] <lee__> C compiler test failed <== err
[09:42:11 CEST] <JEEB> I build for android just fine on *nix so I don't see why it wouldn't work for you
[09:42:33 CEST] <JEEB> also configure times on windows are lolhueg so I'd at least recommend running the stuff under WSL
[09:57:16 CEST] <lee__> @JEEB you're saying I should try another OS? not windows?
[10:04:56 CEST] <JEEB> well of course you're most likely also just doing things wrong, but what I'm saying is that after making a toolchain and doing normal cross-compilation it WorkedForMe
[12:19:34 CEST] <ariyasu> im using -ss to mark the start time of where i want to process a video "ffmpeg -ss 00:01:24 -i in.mkv -c copy out.mkv"
[12:20:05 CEST] <ariyasu> it works but it gives this error "[matroska @ 0000000000512840] Non-monotonous DTS in output stream 0:0; previous: -424, current: -440; changing to -424. This may result in incorrect timestamps in the output file."
[12:21:02 CEST] <ariyasu> then i go to process the out.mkv to cut the end off it with "ffmpeg -t 310 -i out.mkv -c copy done.mkv" but now it won't work and gives the error
[12:21:32 CEST] <ariyasu> "[matroska @ 0000000002cfa4c0] Can't write packet with unknown timestamp av_interleaved_write_frame(): Invalid argument"
[12:21:48 CEST] <ariyasu> is there anything I can do, other than using mkvmerge to cut?
[12:55:47 CEST] <PaulHere> Hey, how can you access the sps (Sequence parameter set) from h264 using libavcodec?
[12:56:32 CEST] <JEEB> not sure you get that level of access through the API. you can get the NALs from the API in extradata/AVPackets but there is no way to tell it "hi, I would like the SPS"
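(If you do need the SPS bytes, a hedged sketch of what JEEB describes - digging them out of the NALs yourself. This assumes Annex-B data with 00 00 01 start codes in extradata or packets; mp4-style "avcC" extradata instead stores the SPS at structured offsets:)

    #include <stdint.h>
    #include <stddef.h>

    /* Scan a buffer for the first SPS NAL (nal_unit_type == 7) and return
     * a pointer to its payload; NULL if none is found. */
    static const uint8_t *find_h264_sps(const uint8_t *buf, size_t size,
                                        size_t *sps_size_out)
    {
        for (size_t i = 0; i + 3 < size; i++) {
            if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1 &&
                (buf[i + 3] & 0x1f) == 7) {
                size_t start = i + 3, end = size;
                /* the NAL ends at the next start-code prefix */
                for (size_t j = start + 1; j + 2 < size; j++)
                    if (buf[j] == 0 && buf[j + 1] == 0 && buf[j + 2] <= 1) {
                        end = j;
                        break;
                    }
                *sps_size_out = end - start;
                return buf + start;
            }
        }
        return NULL;
    }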
[13:04:00 CEST] <Zexaron> Hello
[13:04:18 CEST] <Zexaron> does the ffmpeg github use forward slashes in branch names on purpose?
[13:04:44 CEST] <BtbN> github is just a mirror.
[13:05:06 CEST] <Zexaron> I have some git bash issues on windows, or I'm not properly using the fwslashes and doublequotes; maybe it needs special treatment
[13:05:34 CEST] <BtbN> The release branches do have slashes in them, if that's what you mean?
[13:06:10 CEST] <Zexaron> git merge doesn't require a fwslash after the remote name anymore so that's good, that saved me, but git checkout -b newbranch --track upstream/release/4.0 doesn't work (fatal: upstream/release/4.0 is not a commit, cannot create branch)
[13:07:17 CEST] <Zexaron> yeah, the fwslash also makes git create a folder called release and put a 4.0 file into it; I'm wondering if that's set up like that on purpose, versus the fwslash just being used as a visual separator
[13:07:37 CEST] <JEEB> it's just a visual separator, a lot of projects seem to do release/*
[13:07:56 CEST] <JEEB> some might use it for more detailed workings of git, but *shrug*
[13:08:22 CEST] <Zexaron> it does not create a file "release/4.0" since / is an invalid filename character
[13:08:24 CEST] <BtbN> works fine for me for all intended purposes.
[13:09:09 CEST] <Zexaron> JEEB: yeah, if it's just visual then it's not the most optimal choice; it messes with paths and bash commands
[13:09:37 CEST] <Zexaron> You guys using linux?
[13:12:15 CEST] <Zexaron> well, yeah it's just a mirror. I guess you can't try because it's not set up. Well, I only need a mirror because I need to build libs for another project
[13:14:01 CEST] <Zexaron> yeah but the fwslashes are also in the main repo ... ah, git is also used there, I thought it was something else
[13:15:32 CEST] <Zexaron> well someone try then: git checkout -b test01 --track remote/release/4.0
[13:19:23 CEST] <JEEB> worked for me
[13:19:27 CEST] <JEEB> git checkout -b testbranch --track origin/release/4.0
[13:19:27 CEST] <JEEB> Branch 'testbranch' set up to track remote branch 'release/4.0' from 'origin'.
[13:19:28 CEST] <JEEB> Switched to a new branch 'testbranch'
[13:23:35 CEST] <Zexaron> You on windows or ?
[13:23:43 CEST] <JEEB> *nix myself
[13:24:01 CEST] <JEEB> I stopped building my windows binaries on windows after the configure got ridiculously long
[13:24:14 CEST] <Zexaron> well I have the remote set to single-branch, maybe that's why it doesn't work with that
[13:24:17 CEST] <JEEB> (which was partially MS, and partially FFmpeg doing changes to the script)
[13:24:42 CEST] <Zexaron> Do you have origin set to * or single-branch like master for example ?
[13:25:01 CEST] <JEEB> whatever the default is when you clone
[13:25:37 CEST] <JEEB> I seem to get all tags and branches at least (unless the tags lack a mention in any branches)
[13:25:59 CEST] <Zexaron> if you didn't specify --single-branch when doing a clone then you have /* which means it'll clone all branches, and fetch / merge would then work a bit differently
[13:26:29 CEST] <JEEB> basically the only thing where I've had issues with the directory structure in git is when someone made a branch called the same as an origin you have
[13:26:37 CEST] <JEEB> and you checked out that branch locally
[13:38:25 CEST] <Zexaron> oh, well, I'm reading up on git checkout and tracking. Could be because I use single-branch and don't have any of the extra branches cloned. Well, I did create a release/4.0 branch myself and merged from the remote release/4.0 (hopefully that is the same as cloning)
[15:11:53 CEST] <th3_v0ice> Hi. I am using FFmpeg C API and I have been setting the number of threads for the encoder and decoder by setting the AVCodecContext->thread_count but my CPU usage never goes above 50% even if i set some large number. Why? CPU has 8 cores with 2 threads each.
[15:13:01 CEST] <Mavrik> Just because you set those fields it doesn't mean that encoder/decoder are multithreaded and they'll be able to use them
[15:14:55 CEST] <th3_v0ice> Mavrik, then what should I do?
[15:16:04 CEST] <Mavrik> explain what you're transcoding a bit more.
[15:18:02 CEST] <th3_v0ice> I am just doing simple decode of the input video and encoding it with x264 encoder.
[15:22:02 CEST] <th3_v0ice> My base code is trancoding.c from FFmpeg's github
[15:23:34 CEST] <kepstin> th3_v0ice: depending on the settings and video size you're using, x264 may simply not be able to split up the work enough to use all the threads.
[15:25:13 CEST] <kepstin> in general, you get better usage of threads for larger image sizes and slower presets.
[15:28:41 CEST] <th3_v0ice> kepstin: Well the input video is 1920x1080, the output is 480x270, preset is medium. So the problem is that it just can't split the workload?
[15:28:59 CEST] <kepstin> 480x270 is a very tiny video, so that's probably the issue, yes.
[15:29:57 CEST] <th3_v0ice> Hmmm, is there any rule of thumb by which I can determine the optimal number of threads for a resolution?
[15:31:04 CEST] <th3_v0ice> You are right, changing the resolution to 1080p is using 100% CPU.
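(For reference, the fields being discussed - a minimal sketch; whether they are honored is up to each codec, and libx264 manages its own thread pool:)

    #include <libavcodec/avcodec.h>

    static void configure_threads(AVCodecContext *ctx)
    {
        ctx->thread_count = 0;  /* 0 = let the codec pick a sane default */
        ctx->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE;
        /* Even so, a 480x270 frame may simply not split into enough work
         * to keep 16 hardware threads busy. */
    }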
[15:56:59 CEST] <Zexaron> Hey, so I can send contributions via git push or email while using a github mirror repo too?
[15:57:36 CEST] <JEEB> yea, but I recommend you use git.videolan.org's repo as your upstream
[15:57:43 CEST] <JEEB> that way you always are sure it's up-to-date
[15:58:32 CEST] <Zexaron> I didn't mean to primarily contribute at this time, although I wanted to in the past. Right now I had to set up the ffmpeg source because I need to build it myself to get LIBs; Zeranoe does not provide the kind of LIBs that get integrated into the program
[15:58:51 CEST] <Zexaron> But I want to future-proof it as if I were setting it up for contribution
[15:59:21 CEST] <JEEB> basically the github repo is a mirror so in theory it should have everything the same, yes
[15:59:31 CEST] <JEEB> it just might be that at some point it gets desynchronized
[15:59:37 CEST] <JEEB> as in, nothing runs the synch script
[15:59:41 CEST] <Zexaron> yeah, I would need to do a reclone then. Anyway, I was doing some configs, learning git along the way; I need to do a reclone anyway
[15:59:52 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=summary
[15:59:55 CEST] <JEEB> this is the actual thing
[16:00:14 CEST] <Zexaron> however there's another link git:source.ffmpeg.org or something, what about that?
[16:00:50 CEST] <JEEB> that is supposedly another mirror, or at least at one point it was just a redirect to videolan
[16:01:15 CEST] <JEEB> given that when pushing contributions in, you can guess which end point I'm using :P
[16:03:50 CEST] <JEEB> for contributing you generally also then want to have your own github remote which is your own fork
[16:04:24 CEST] <JEEB> i have the read-only videolan one as origin, then my own public repo as 'github'
[16:05:06 CEST] <Zexaron> seems like http source.ffmpeg.org redirects to videolan
[16:05:29 CEST] <JEEB> yea that's what I used to know it did
[16:05:44 CEST] <JEEB> then someone mentioned that "the git UI on videolan is better than the source.ffmpeg.org one"
[16:05:48 CEST] <JEEB> and I was slightly confuzzled
[16:10:20 CEST] <Zexaron> I figured out the tracking from earlier, it was because I didn't have full remote history because I used --single-branch clone, so it's a false alarm, no syntax issues
[16:15:17 CEST] <Zexaron> But wait, how do I then send a patch from downstream, not from local files?
[16:15:26 CEST] <Zexaron> i mean github
[16:15:36 CEST] <Zexaron> I have my github forks as downstream
[16:16:12 CEST] <Zexaron> at least in the other repos which use PR's on github, I might find other namings more fitting for other repos
[16:16:15 CEST] <JEEB> github is just to have a remote to conveniently link to people if you want people to check something out. actual contributions get sent to the mailing list with (usually) git send-email. if git send-email is not possible, you can also attach the patch files to e-mail
[16:17:10 CEST] <JEEB> so when I'm in some branch of my own I can just make sure my master is up-to-date and then `git format-patch -o dir_name/ master..HEAD`
[16:17:19 CEST] <JEEB> and that creates me a set of patches
[16:17:37 CEST] <JEEB> (same way of X..Y also works for git send-email)
[16:17:52 CEST] <JEEB> (and there's --dry-run of course so you can check if it does things sanely)
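(Putting those pieces together, the flow JEEB describes looks roughly like this - dir_name/ and the branch range are placeholders, and the list address is the one mentioned below:)

    git format-patch -o dir_name/ master..HEAD
    git send-email --dry-run --to=ffmpeg-devel@ffmpeg.org master..HEAD
    git send-email --to=ffmpeg-devel@ffmpeg.org master..HEAD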
[16:18:20 CEST] <th3_v0ice> kepstin: Thanks for helping me!
[16:18:42 CEST] <Zexaron> It's quite a different flow so I'll have to specifically learn that stuff; dry run will help, yes. But I'm digressing, I was actually trying to build it with VS2017 and get libs, and the guide says that it may be outdated
[16:19:35 CEST] <JEEB> generally as long as you tell configure that you're trying to do MSVC building and you have cl/link properly in your PATH it /should/ work
[16:19:46 CEST] <JEEB> it's just that native configuration on windows takes like 10min+
[16:19:49 CEST] <JEEB> it's just hilarious
[16:19:52 CEST] <Zexaron> According to this Zeranoe doesn't provide the 3rd option, that's what I was juggling with yesterday evening for hours hehe https://stackoverflow.com/questions/11701635/use-ffmpeg-in-visual-studio
[16:20:22 CEST] <JEEB> the end result of sending the e-mails looks something like this https://patchwork.ffmpeg.org/patch/9114/
[16:21:42 CEST] <Zexaron> yeah, I'll stash all that for when I get to figuring it out more. I've wanted to do some stuff for a long time but just didn't commit to it
[16:22:21 CEST] <JEEB> but yea, if it's something you're not sure about just push to a branch in a github repo of yours (or gitlab or whatever) on ffmpeg-devel
[16:22:31 CEST] <JEEB> *and link on ffmpeg-devel
[16:22:35 CEST] <JEEB> "does this look sane"
[16:22:46 CEST] <JEEB> and if you get positive/send it on the ML replies
[16:22:55 CEST] <JEEB> then you can just push it on the ML
[16:23:02 CEST] <JEEB> (ML = mailing list)
[16:23:38 CEST] <JEEB> also just fyi, mingw-w64 binaries work nicely with MSVC as well. esp. shared ones
[16:23:39 CEST] <Zexaron> But does the mailing list address need to be set up in git?
[16:23:44 CEST] <Zexaron> or is that all automatic
[16:24:19 CEST] <JEEB> it's not automatic, git send-email asks it always. and if you can't get that to work you can just put a bunch of patches out with git format-patch and then send them attached to an e-mail
[16:24:47 CEST] <Zexaron> oh ok
[16:24:54 CEST] <JEEB> as in, git send-email always asks for the mailing list address (ffmpeg-devel at ffmpeg.org)
[16:25:21 CEST] <JEEB> but as noted, as far as I know attached patches are OK too
[16:25:26 CEST] <JEEB> which you can get with format-patch
[16:25:35 CEST] <JEEB> (it retains commit authorship info and commit message etc)
[16:27:39 CEST] <Zexaron> kind of a backup
[16:27:41 CEST] <Zexaron> option
[16:28:08 CEST] <Zexaron> why wouldn't the normal way work? But it uses the same email
[16:28:24 CEST] <Zexaron> so if the email server goes down, wouldn't they both stop working
[16:29:13 CEST] <JEEB> yes, just like if github goes down you can't do some things where the workflow uses that
[16:29:33 CEST] <JEEB> most likely if the e-mail server went down people would just start flicking links to their trees on #ffmpeg-devel
[16:36:33 CEST] <Zexaron> On the other hand, about what to include: not everything seems to be used, so when I build ffmpeg do I have to do extra configuration, or simply cherrypick the .lib files from my results? Not sure tho; avcodec.lib is only 9 MB while the DLL was almost 50 (but not sure if lib-dll sizes can be compared; talking about zeranoe's shared dll)
[16:36:49 CEST] <Zexaron> https://github.com/dolphin-emu/dolphin/tree/master/Externals/ffmpeg/lib
[16:37:03 CEST] <JEEB> what you are supposed to do is you set --prefix during configuration
[16:37:12 CEST] <JEEB> then after you "make" you also "make install"
[16:37:21 CEST] <JEEB> that puts what you've built into a proper directory structure
[16:37:31 CEST] <JEEB> FFmpeg cannot be utilized properly straight from the source tree
[16:38:08 CEST] <JEEB> I recommend doing most of the development with relatively straightforward configuration (aka "don't start trying to do a minimal build from the beginning")
[16:38:31 CEST] <JEEB> and then when your code works you can start trying to optimize
[16:38:45 CEST] <JEEB> also if you want to retain debug info --disable-stripping is probably what you want
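(A rough example of that configure/install flow for the MSVC case discussed above - the prefix path is arbitrary:)

    ./configure --toolchain=msvc --prefix=/c/ffmpeg-build --disable-stripping
    make
    make install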
[16:39:26 CEST] <Zexaron> In this case I only need to update the dolphin repo's ffmpeg to 4.0 - this is separate from my ffmpeg contributing
[16:39:48 CEST] <Zexaron> completely unrelated, actually
[16:40:46 CEST] <Zexaron> if you knew that no prob, I just forgot to mention that clearly
[16:43:59 CEST] <Zexaron> Would be great if I could figure out what configs were used to build those libs, which components were enabled or not
[16:48:51 CEST] <lyncher> hi. is it possible to convert a h264 video with CEA-608 captions to a raw video keeping the captions?
[16:52:47 CEST] <kepstin> lyncher: if by "raw" you mean decoded, e.g. yuv video - it doesn't have anywhere to store captions, so you'd have to extract them to a separate stream/file
[16:54:26 CEST] <lyncher> keptsin: thank you
[16:59:51 CEST] <JEEB> kepstin: CEA-608 were born in analog video
[16:59:55 CEST] <JEEB> out of view area :D
[17:00:04 CEST] <JEEB> libzvbi I think can read that stuff?
[17:00:24 CEST] <JEEB> I think the VANC stuff is specific to SDI as well
[17:01:20 CEST] <Zexaron> I need YASM, last updated 2014 ... will it work with VS2017 ?
[17:01:36 CEST] <Zexaron> Do I need YASM or libs only
[17:01:38 CEST] <lyncher> I was trying to send a H264 with CEA-608 captions over NDI
[17:01:56 CEST] <lyncher> but NDI only supports raw formats
[17:02:22 CEST] <lyncher> which means that captions have to be dropped before sending the video over NDI
[17:02:47 CEST] <kepstin> NDI really should have a sideband for stuff like captions :/
[18:09:59 CEST] <ltunner> .
[18:10:17 CEST] <ltunner> hello
[18:10:18 CEST] <ltunner> hi
[19:10:00 CEST] <sd1074> Can ffmpeg be used to record videos from 3-4 cameras with accurate timecodes, assuming Linux with the preempt_rt patch? What's the accuracy of the timecodes I can expect? What would it depend on?
[19:10:58 CEST] <sd1074> I would need to synchronize the videos based on the time codes
[19:11:10 CEST] <JEEB> does the source give you timestamps/-codes?
[19:11:25 CEST] <JEEB> or is it something that has to be received from wallclock
[19:12:35 CEST] <sd1074> The system is still being designed, we have the flexibility
[19:12:57 CEST] <sd1074> If we use wallclock, how bad is it going to be?
[19:14:20 CEST] <JEEB> anyways, expect to write your own API client on top of FFmpeg's libraries' APIs
[19:14:46 CEST] <JEEB> so that you block the least amount possible in the capture step and move things forward in the chain
[19:15:00 CEST] <JEEB> as for wallclock, please no. that's the final fallback
[19:15:20 CEST] <TarquinWJ> Hi, I am trying to create a dashed WebM file from a source MP4 file, so that I get the header and the chunks as separate files. It looks like it should work, according to https://www.ffmpeg.org/ffmpeg-formats.html#webm_005fchunk
[19:15:20 CEST] <TarquinWJ> My command looks like this:
[19:15:20 CEST] <TarquinWJ> ffmpeg.exe -i source.mp4 -map 0:v -c:v libvpx-vp9 -pix_fmt yuv420p -b:v 2M -r 30 -f webm_chunk -keyint_min 90 -g 90 -header init.hdr -chunk_start_index 1 chunk-%d.chk
[19:15:20 CEST] <TarquinWJ> FFMpeg gives this error:
[19:15:20 CEST] <TarquinWJ> > Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
[19:15:22 CEST] <TarquinWJ> > Error initializing output stream 0:0 --
[19:15:23 CEST] <TarquinWJ> I have searched for this error, and only got other people saying they have the same problem
[19:15:24 CEST] <JEEB> preferably all the sources should have some sort of time source that is matched against something
[19:15:33 CEST] <TarquinWJ> any ideas what I need to do to make it work?
[19:16:13 CEST] <JEEB> TarquinWJ: if you need DASH webm an akamai person just added support for that in the dash muxer
[19:16:24 CEST] <JEEB> you get an MPEG-DASH manifest as well with that
[19:16:33 CEST] <TarquinWJ> JEEB: sounds awesome, how do I do it?
[19:17:00 CEST] <JEEB> get latest master FFmpeg, https://www.ffmpeg.org/ffmpeg-all.html#dash-2
[19:17:17 CEST] <JEEB> then set -dash_segment_type webm
[19:18:32 CEST] <JEEB> also no idea how bad the webm_chunk muxer is
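(Roughly, adapting TarquinWJ's command to the dash muxer JEEB points at - exact options per the linked docs, and this needs a recent-enough build:)

    ffmpeg -i source.mp4 -map 0:v -c:v libvpx-vp9 -b:v 2M -keyint_min 90 -g 90 -f dash -dash_segment_type webm manifest.mpd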
[19:19:00 CEST] <sd1074> JEEB, thanks for your advice. There has to be some mechanism to sync the camera clocks then, right? What's the name of that feature? And would it matter if I use a GigE interface?
[19:19:57 CEST] <JEEB> sd1074: FFmpeg's libraries handle the synchronization as long as the input timestamps that you feed it are on the same time line
[19:20:04 CEST] Action: TarquinWJ tries to grok
[19:20:17 CEST] <JEEB> if they are all over the place then of course you have to come up with ways of synchronizing between them
[19:20:43 CEST] <JEEB> in the worst case you'll have to utilize the wall clock of the capturing machine, but if there's any way to get any sort of clock for each frame, that'd be preferable
[19:23:59 CEST] <sd1074> as far as I understand, accurate timestamps in the latter case boil down to using appropriate camera hardware which has internal clocks and a clock synchronization mechanism. Could you give me an idea how to search for this type of cam? I need to understand the cost trade-offs
[19:24:41 CEST] <TarquinWJ> JEEB: how long ago was this added? I compiled ffmpeg only a couple of weeks ago, is it out of date already?
[19:25:29 CEST] <JEEB> it was pushed into the tree quite recently, yes
[19:25:47 CEST] <JEEB> sd1074: unfortunately I haven't handled such setups yet
[19:27:28 CEST] <TarquinWJ> shweet, progress is progress
[19:28:14 CEST] <sd1074> JEEB, ok, thanks anyway. Technically if the timestamps from cameras are correct, then we won't need to write our own API, right?
[19:28:28 CEST] <JEEB> ffmpeg.c is very static and blocking
[19:28:38 CEST] <JEEB> so I would recommend writing your own API client
[19:28:54 CEST] <JEEB> ffmpeg.c is the command line app that people know from FFmpeg, and it's just an API client
[19:29:00 CEST] <JEEB> the core of FFmpeg are the libraries
[19:29:03 CEST] <JEEB> and the APIs
[19:35:54 CEST] <TarquinWJ> JEEB: could you perhaps tell me if that is in stable (as in, will I get it if I just compile again like I did before?)
[19:37:03 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=shortlog
[19:37:07 CEST] <JEEB> that's the log for master
[19:37:12 CEST] <JEEB> it's the "dashenc" stuff there
[19:37:16 CEST] <JEEB> so if you build master now, you get it
[19:38:02 CEST] <TarquinWJ> nice one, thanks
[19:38:35 CEST] <sd1074> JEEB, ok, thanks
[20:44:33 CEST] <ariyasu> Frame rate                               : 29.970 (29970/1000) FPS
[20:44:33 CEST] <ariyasu> Original frame rate                      : 29.970 (30000/1001) FPS
[20:44:43 CEST] <ariyasu> is there any difference between these reported framerates ?
[20:52:08 CEST] <kepstin> ariyasu: yes. 30000/1001 is the exact, correct value for ntsc. 29970/1000 is a rounded approximation (but is pretty close)
[21:03:01 CEST] <ariyasu> thank you kepstin
[21:11:30 CEST] <Vise> Hello
[21:12:04 CEST] <Vise> I need some help on transcoding a video, I'm sure my question isn't difficult and I'm missing something stupid
[21:12:35 CEST] <Vise> I use filter_complex for the video part but the audio is missing from the output file
[21:13:10 CEST] <Vise> ffmpeg -i INTRO.AVI -c:v libx264 -c:a libmp3lame -pix_fmt yuv420p -filter_complex "[0:v]SeveralFilters[v]" -map "[v]" -movflags +faststart INTRO.mp4
[21:13:37 CEST] <DHE> add "-map 0:a"
[21:13:37 CEST] <Vise> Any solution to get the audio transcoded properly?
[21:14:00 CEST] <DHE> when you use -map, automatic stream selection is disabled and you never explicitly mapped an audio stream
[21:14:19 CEST] <Vise> Ok it makes sense but I could not figure out how my syntax was wrong, that was easy XD
[21:14:28 CEST] <Vise> Thanks DHE
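(So the fixed command from this exchange reads:)

    ffmpeg -i INTRO.AVI -c:v libx264 -c:a libmp3lame -pix_fmt yuv420p -filter_complex "[0:v]SeveralFilters[v]" -map "[v]" -map 0:a -movflags +faststart INTRO.mp4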
[21:16:32 CEST] <Zexaron> does anyone know the full size of avcodec.lib
[21:16:33 CEST] <TarquinWJ> JEEB: you earned your cookie today, many, many thanks, looks like the new webm dash format nailed it for me :)
[21:17:55 CEST] <Zexaron> setting up VS2017 to build ffmpeg is going to take more time, since I'm always on slow-and-steady to make sure I don't mess up my freshly installed win10; after a ton of PC maintenance and hardware breakdowns I don't want to deal with backing up and restoring images anymore
[21:18:10 CEST] <Zexaron> so I might not be done today, just for me to get an idea
[21:19:12 CEST] <Zexaron> a 9 MB avcodec.lib added 2 MB of extra .exe size (I calculated that when I didn't include it in the Dolphin project), but the size of the full avcodec.dll is 47 MB ... so does that mean the full lib is like over 300 MB in size, or what?
[21:19:58 CEST] <kepstin> Zexaron: the size of the lib depends a lot on the features and libraries included, and also whether debug symbols, etc. are enabled (most builds strip those out)
[21:21:23 CEST] <Zexaron> yeah, the 9MB one is stripped. I have no contact with the guy who did that a year ago, and no info on what features he included, but we're upgrading this framedumping system here so I might need more stuff than he included, https://github.com/dolphin-emu/dolphin/blob/master/Source/Core/VideoCommon/AVIDump.cpp
[21:22:24 CEST] <Zexaron> I imagined I would want to first update ffmpeg to 4.0 in a separate PR, but the tricky thing is, it's hard for them to accept 50 megs while nothing would use it yet https://github.com/dolphin-emu/dolphin/tree/master/Externals/ffmpeg/lib
[21:23:16 CEST] <Zexaron> Unless I do the ffmpeg update and the new system all in one PR (or patch as you guys call it), but I've learned huge changes like that all at once are really unwelcome
[21:23:55 CEST] <Zexaron> That said, I know I would use more stuff, since I will play with hwaccel
[21:25:12 CEST] <lyncher> kepstin & JEEB: following our previous conversation about NDI
[21:25:49 CEST] <lyncher> I think I found the source of the missing captions over NDI
[21:26:26 CEST] <lyncher> ffmpeg's NDI integration is using NDIlib_video_frame_t
[21:26:35 CEST] <lyncher> which is now deprecated in NDI's SDK
[21:26:59 CEST] <lyncher> the current NDI struct to represent a video frame is: NDIlib_video_frame_v2_t
[21:27:19 CEST] <lyncher> and one of the fields is:
[21:27:20 CEST] <lyncher> const char* p_metadata;		// Present in >= v2.5
[21:27:42 CEST] <lyncher> Per frame metadata for this frame. This is a NULL terminated UTF8 string that should be
[21:27:48 CEST] <lyncher> in XML format. If you do not want any metadata then you may specify NULL here.
[21:28:31 CEST] <lyncher> which means that it is possible to encode CEA-608/708 (or any other frame-based captioning format) in NDI metadata
[21:28:41 CEST] <lyncher> and receive it in the other end
[23:06:50 CEST] <menon> hello
[23:06:56 CEST] <menon> i would like to cut video with ffmpeg
[23:07:10 CEST] <menon> or with any other linux program really
[23:07:24 CEST] <menon> i would like to input the input file, output file, beginning time and end time
[23:07:41 CEST] <menon> instead i have to input codec settings and not very well formatted arguments
[23:07:47 CEST] <menon> and still my video is cut at different times
[23:07:50 CEST] <menon> and frames are missing
[23:08:10 CEST] <menon> i dont want to use other scripts just to convert it to webm and then back, and other workarounds
[23:08:20 CEST] <menon> my question is: can ffmpeg cut videos at the right time
[23:08:35 CEST] <menon> or is it always just 10 minutes of trying commands and seeing if it can or can not
[23:08:46 CEST] <kepstin> menon: it can, but it will usually require re-encoding the video, which means setting codec parameters, etc.
[23:08:52 CEST] <menon> like how most threads on this topic ended up for me
[23:09:00 CEST] <kepstin> menon: if you have a specific example, we can help you set up the command.
[23:09:13 CEST] <menon> cant it detect the codec
[23:09:25 CEST] <menon> isnt there an alternative that can cut videos
[23:09:35 CEST] <kepstin> it detects the input, but you have to specify the output.
[23:09:35 CEST] <menon> like some webm converter my friend sent me and i lost
[23:09:46 CEST] <menon> it made video frames right and enabled cutting
[23:09:54 CEST] <menon> all i want is cut my video at the time i enter
[23:09:57 CEST] <menon> not at some other time
[23:10:11 CEST] <menon> i see
[23:10:21 CEST] <menon> well i dont really mind about the format of the output
[23:10:32 CEST] <menon> but i want the simplest barest command possible
[23:10:34 CEST] <menon> to do what i need
[23:10:53 CEST] <menon> if i knew the command after years of doing this i would have saved it already
[23:11:00 CEST] <menon> but i never really figured it out
[23:11:11 CEST] <menon> even tho i did try multiple times to play around with it
[23:11:27 CEST] <klaxa> ffmpeg -i input.mp4 -ss 00:00:04.000 -to 00:01:04.000 output.mp4
[23:11:29 CEST] <kepstin> for most purposes, something like "ffmpeg -ss 1:30 -i inputfile -t 4:00 output.mkv" will do something reasonable. The argument to -ss is the start time, and the argument to -t is the *length* of the section.
[23:11:42 CEST] <menon> klaxa: completely ignoring my problem i see
[23:11:50 CEST] <klaxa> i... am?
[23:12:01 CEST] <menon> it doesnt cut at right time or frames are missing
[23:12:22 CEST] <kepstin> menon: you asked for a simple bare command to do what you want, and klaxa provided an example that will do exact frame-accurate cutting.
[23:12:24 CEST] <menon> or it takes a lot of trying to get a mile long command to cut a video with 4 arguments i want to use
[23:12:38 CEST] <menon> that command doesnt work for what i want
[23:12:39 CEST] <klaxa> what command did you use
[23:12:40 CEST] <menon> i want the frames to show
[23:12:46 CEST] <menon> i want it to cut where i want
[23:12:53 CEST] <klaxa> the exact command you used
[23:12:53 CEST] <klaxa> copy pasted from your command line
[23:12:55 CEST] <klaxa> not "the same command"
[23:13:42 CEST] <menon> i have -c copy in it
[23:13:45 CEST] <klaxa> remove it
[23:13:49 CEST] <kepstin> menon: that's the problem.
[23:13:50 CEST] <klaxa> it will cut at the right frames
[23:14:06 CEST] <kepstin> menon: you can't cut on exact frames with "-c copy", because of how video codecs work.
[23:14:08 CEST] <menon> before i try
[23:14:10 CEST] <klaxa> if you used either kepstin's or my command you would have seen that it would have worked
[23:14:12 CEST] <menon> will it cut at the right time?
[23:14:19 CEST] <menon> or will it cut a few seconds off wherever it wants
[23:14:22 CEST] <menon> instead of the time
[23:14:30 CEST] <klaxa> it will cut at the right frames
[23:14:35 CEST] <klaxa> it will take longer without -c copy
[23:14:38 CEST] <klaxa> but it will be frame accurate
[23:14:51 CEST] <menon> in other words
[23:14:58 CEST] <menon> will the video be frozen at the beginning
[23:15:01 CEST] <kepstin> menon: by default ffmpeg will cut at the exact times specified. If you use -c copy, then it can only cut at keyframes, which might not be where you want.
[23:15:01 CEST] <klaxa> no
[23:15:38 CEST] <menon> kepstin: but from what i understand there are 3 frame types, and only keyframes hold enough information on their own to render a frame
[23:15:46 CEST] <menon> rest are differentials from the keyframe
[23:15:51 CEST] <menon> or not?
[23:15:54 CEST] <klaxa> correct
[23:16:03 CEST] <klaxa> or from other differential frames
[23:16:13 CEST] <menon> so if i remove -c copy it will reencode?
[23:16:17 CEST] <klaxa> yes
[23:16:21 CEST] <menon> oh ok
[23:16:27 CEST] <menon> i was sure i tried before
[23:16:38 CEST] <menon> thanks
[23:17:05 CEST] <klaxa> maybe your ffmpeg is a bit old and -ss as an input is not frame accurate? not sure if it was inaccurate in the first place, but i know it's definitely frame accurate nowadays
[23:17:18 CEST] <klaxa> and as an output option as well because it will decode the input instead of seeking
[23:17:19 CEST] <menon> idk its new now
[23:17:33 CEST] <menon> im on a fresh install writing commands and stuff
[23:17:34 CEST] <kepstin> -ss as an input option has been frame accurate for a long time
[23:17:53 CEST] <menon> there might be conflicting answers on this topic if you search
[23:17:55 CEST] <klaxa> yeah replace "a bit" with "very very"
[23:17:58 CEST] <menon> and as you mention older versions
[23:18:08 CEST] <menon> the problem could have been anywhere
[23:18:11 CEST] <menon> if this works im fine with that
[23:18:30 CEST] <kepstin> yeah, there's a lot of poor information about how to use ffmpeg online, particularly if it's old :/
[00:00:00 CEST] --- Thu May 31 2018


