[Ffmpeg-devel-irc] ffmpeg.log.20180823

burek burek021 at gmail.com
Fri Aug 24 03:05:01 EEST 2018


[00:27:12 CEST] <BenMcLean> Hi there folks. I am trying to do a bit of work with my brand new Blu Ray box set of The Hobbit 3D Extended Trilogy
[00:27:46 CEST] <BenMcLean> and was wondering if I might be able to get some technical advice from people who know a lot about media conversion
[00:28:06 CEST] <Cracki> use bittorrent
[00:28:37 CEST] <BenMcLean> Here's what I'm trying to do: I want to get the 3D version of the Hobbit movies into a format that I can watch on my Samsung Gear VR headset in 3D with the director's commentary
[00:28:55 CEST] <BenMcLean> Torrents 1. don't have extended in 3D and 2. don't have commentary
[00:28:59 CEST] <kepstin> I don't think the ffmpeg h264 decoder can even handle mvc video?
[00:30:07 CEST] <BenMcLean> The normal process people are following to convert 3D blu ray to VR ready files is to use MakeMKV and then BD3D2MK3D
[00:30:37 CEST] <BenMcLean> and I could be wrong but I believe BD3D2MK3D is ffmpeg-based
[00:31:22 CEST] <BenMcLean> Here's my problem: That method works when MakeMKV spits out one MKV file with two video streams, but in the case of the Hobbit movie I'm working with, it is making two separate MKV files, one for each eye
[00:31:53 CEST] <kepstin> ffmpeg has filters that can combine those two separate files into common formats like alternating frame or side-by-side, etc.
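A minimal sketch of that side-by-side route using the hstack filter, assuming the two per-eye MKVs have identical dimensions (the file names here are placeholders, not the actual rips):
    ffmpeg -i left_eye.mkv -i right_eye.mkv -filter_complex "[0:v][1:v]hstack=inputs=2[v]" \
           -map "[v]" -map 0:a -c:v libx264 -crf 18 -c:a copy sbs_output.mkv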
[00:32:22 CEST] <BenMcLean> kepstin that's what I'm thinking might be the right path to pursue for this project, rather than going through BD3D2MK3D
[00:32:57 CEST] <BenMcLean> But another oddity is that the 3D version of the movie is split up on to two separate discs, and the 3D version doesn't have commentary, only the 2D version does
[00:33:21 CEST] <BenMcLean> So my thought is that I should be able to splice the commentary from the 2D version into the 3D files I make somehow
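A hedged sketch of that splice, done as a pure remux once a 3D file exists: take everything from the 3D file and add one extra audio stream from the 2D disc's MKV. The commentary stream index (a:1 here) is a guess and would need confirming with ffprobe first.
    ffmpeg -i hobbit_3d_sbs.mkv -i hobbit_2d.mkv -map 0 -map 1:a:1 -c copy hobbit_3d_plus_commentary.mkv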
[00:33:50 CEST] <BenMcLean> but then one final weird thing is that the second disc made THREE MKV files, not two. That's really weird!
[00:45:39 CEST] <BenMcLean> I can't seem to find good settings recommendations for Gear VR or Daydream videos anywhere online. Sites always want to talk about 360 videos, not 3D videos
[00:53:15 CEST] <BenMcLean> This command gives an error:
[00:53:17 CEST] <BenMcLean> ffmpeg -i left.mkv -vf "[in] pad=2*iw:ih [left]; movie=right.mkv [right];[left][right] overlay=main_w/2:0 [out]" -i commentary.mkv -map 0:v -map 0:a -map 0:s -map 1:a:2 -t 30 output.mkv
[00:53:47 CEST] <BenMcLean> The error is: Option vf (set video filters) cannot be applied to input url The_Hobbit_An_Unexpected_Journey_Extended_t00.mkv -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to
[00:54:05 CEST] <BenMcLean> er, that t00 one is what I called "commentary.mkv"
[00:58:38 CEST] <c_14> filters go after _all_ input files, and before the output file you want to apply it to
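A small sketch of that ordering with placeholder names: input options go before the -i they apply to, while filter and other output options go after all the inputs and before the output file.
    ffmpeg -i main.mkv -i commentary.mkv -vf "pad=2*iw:ih" -map 0:v -map 0:a -map 1:a -t 30 output.mkv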
[01:01:05 CEST] <BenMcLean> oh i see
[01:02:03 CEST] <BenMcLean> i told ffmpeg 30 seconds but it seems to keep on going and going at 29.99 seconds for some reason when encoding
[01:03:37 CEST] <ariyasu> -t will never be exact
[01:03:43 CEST] <ariyasu> it works to the nearest i-frame
[01:04:12 CEST] <ariyasu> also i find it better to use -t before the input instead of the output
[01:04:36 CEST] <c_14> that's a mess if you have multiple inputs though
[01:04:47 CEST] <BenMcLean> What does "sps_id 1 out of range" mean?
[01:04:58 CEST] <ariyasu> true
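The difference between the two placements, as a hedged two-line sketch with placeholder names:
    ffmpeg -t 30 -i input.mkv -c copy clip.mkv   # input option: stop reading this input after ~30 s
    ffmpeg -i input.mkv -t 30 -c copy clip.mkv   # output option: stop writing the output after ~30 s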
[01:09:23 CEST] <BenMcLean> I found this but it's not very instructive https://www.ffmpeg.org/doxygen/3.2/structPPS.html
[01:13:18 CEST] <BenMcLean> yeah no, it just keeps encoding and encoding and never stops
[01:14:41 CEST] <BenMcLean> If I say 5 seconds, it just goes to 4.99 seconds and then never stops encoding without the time advancing
[01:14:53 CEST] <BenMcLean> here's the command I used:
[01:14:54 CEST] <BenMcLean> ffmpeg -i The_Hobbit_An_Unexpected_Journey_Extended_Edition_Part_1_t02.mkv -i The_Hobbit_An_Unexpected_Journey_Extended_t00.mkv -vf "[in] pad=2*iw:ih [left]; movie=The_Hobbit_An_Unexpected_Journey_Extended_Edition_Part_1_t03.mkv [right];[left][right] overlay=main_w/2:0 [out]" -map 0:v -map 0:a -map 0:s -map 1:a:2 -c:v h264 -c:a copy -c:s copy -t 5 output.mkv
[01:20:35 CEST] <BenMcLean> Yeah, i come back 10 minutes later and it's still going, still at 4.99 seconds for a 5 second clip.
[01:25:44 CEST] <BenMcLean> maybe i should be doing this in stages instead, like pull out the commentary, stick it in a container first
[02:05:57 CEST] <BenMcLean> If ffmpeg gets stuck in an infinite loop isn't that a problem!?
[03:11:12 CEST] <Guddu> I have audio and video in MXF format. I want to join those two into a MOV file without any loss. Can this be done using ffmpeg?
[03:12:25 CEST] <DHE> one input file or two?
[03:12:50 CEST] <Guddu> DHE, Thanks for your response. Those are 2 separate files.
[03:13:02 CEST] <DHE> ffmpeg -i file1 -i file2 -c copy ouput.mov
[03:13:41 CEST] <Guddu> DHE, -c copy is to specify that the output format is mov?
[03:13:48 CEST] <furq> -c copy is to specify not to reencode
[03:13:50 CEST] <DHE> codec: copy
[03:14:02 CEST] <DHE> (which isn't really a codec)
[03:14:13 CEST] <furq> also if it's mxf then there's a good chance it has every audio channel as a separate stream
[03:14:18 CEST] <furq> in which case you'll need to mix them together
[03:14:37 CEST] <DHE> if that's the case then it will require transcoding to merge them...
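If the channels really do turn out to be separate mono streams, a hedged sketch of merging them (assuming six mono PCM streams in audio.mxf; the audio is re-wrapped as PCM so it stays lossless, and the video is still copied):
    ffmpeg -i video.mxf -i audio.mxf \
           -filter_complex "[1:a:0][1:a:1][1:a:2][1:a:3][1:a:4][1:a:5]amerge=inputs=6[a]" \
           -map 0:v -map "[a]" -c:v copy -c:a pcm_s24le output.mov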
[03:14:55 CEST] <JTa> o/ all..
[03:15:08 CEST] <furq> unless it's pcm
[03:15:09 CEST] <Guddu> furq, Yes. That mxf file has 1 Audio Stream with 6 channels.
[03:15:23 CEST] <Guddu> Its PCM
[03:15:24 CEST] <furq> if that's how ffprobe describes it then great
[03:15:30 CEST] <JTa> hey, if there are any socal peeps here please pm me.  I got a question for you.  I'm the guy who organizes most of the blender user meetings here in socal
[03:15:44 CEST] <Guddu> furq, I saw that in mediainfo tool.
[03:16:29 CEST] <Guddu> I am generating this mov file to subsequently be able to load it into my Blu Ray creator program. I am using Leawo for that.
[03:17:17 CEST] <JTa> BTW, I am also working on my addon again in blender that exposes the complete command line parameters to blender's VSE to use ffmpeg externally like most of us do, lol...
[03:17:36 CEST] <JTa> but if you are socal like me please PM me real quick, I have a question for you...
[03:29:10 CEST] <Guddu> DHE, furq So that command is ok? ffmpeg -i file1 -i file2 -c copy ouput.mov
[03:29:25 CEST] <Guddu> Also can i convert it to 1920*1080? Currently it is 2048*858
[03:32:09 CEST] <DHE> well then you're transcoding. and video transcoding tends to be pretty hard on the CPU
[03:32:16 CEST] <DHE> also worth noting that there's an aspect ratio change there
[03:32:49 CEST] <mitigate> klaxa, I finally looked up nginx - you told me in feb it would do what ffserver could do- but i find afaict you cant use html5 + webm as you could with ffserver - you need special video players on the client
[03:33:06 CEST] <Guddu> DHE I have the server available for whole night. So CPU usage is not an issue. Its all for me today.
[03:33:22 CEST] <Guddu> Is this command ok? ffmpeg -i file1 -i file2 -s1920*1080 -c copy ouput.mov
[03:33:41 CEST] <DHE> -s and -c copy are not compatible. you can't resize without transcoding
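If the resize is really wanted at this stage, a hedged sketch that re-encodes only the video, letterboxing 2048x858 into 1920x1080 instead of stretching it (the codec and quality settings are just placeholders):
    ffmpeg -i video.mxf -i audio.mxf -map 0:v -map 1:a \
           -vf "scale=1920:-2,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" \
           -c:v libx264 -crf 18 -c:a copy output.mov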
[03:33:51 CEST] <mitigate> and you need special modules on nginx. double fail. afaict it will not be possible to get the functionality with the new apis (which are also designed to be an ever moving target)
[03:44:55 CEST] <furq> mitigate: you can do webm livestreaming with icecast to some degree
[03:44:59 CEST] <furq> i couldn't vouch for how well it works
[03:46:40 CEST] <Guddu> DHE, I will skip the -s for now and let the Blu Ray program handle it. But if i generate using this command then will the Handbrake tool recognize the generated MOV file?
[03:46:52 CEST] <Guddu> ffmpeg -i file1 -i file2  -c copy ouput.mov
[03:51:17 CEST] <mitigate> but the simplicity that is ffmpeg is gone - because these third parties invest and donate to ffmpeg to destroy it
[03:52:05 CEST] <mitigate> the developer that removed ffserver isnt fit to be a programmer or write new apis if he couldnt provide the functionality
[03:52:56 CEST] <mitigate> "i cant maintain it" gtfo programming!
[03:54:24 CEST] <atomnuker> you do realize there's a better ffserver that was written for the gsoc?
[03:54:39 CEST] <mitigate> no i only saw the proposal
[03:55:01 CEST] <mitigate> i'll check again later..
[03:55:03 CEST] <atomnuker> well its there and it works quite well, better yet it uses fully standard things
[03:55:49 CEST] <atomnuker> (also I pushed and committed to remove ffserver, sorry, had to, was holding us back)
[03:56:16 CEST] <mitigate> i'm sorry - but i still have a tough time wrapping my head around it
[03:56:23 CEST] <mitigate> and i'm not terribly invested in it
[03:57:22 CEST] <furq> i'm honestly amazed anyone ever got anything running with ffserver
[03:57:51 CEST] <furq> but i'm pretty sure that for years it had been in a state of doing the minimum possible to move it off deprecated apis
[03:58:01 CEST] <furq> and not actually fixing the fact that it never worked well
[04:01:37 CEST] <mitigate> ah klaxa wrote it!
[04:02:25 CEST] <mitigate> it needs lua and microhttpd - lua stops the show on this box
[04:06:31 CEST] <mitigate> but i'll fix that and try it and maybe report again later...
[04:08:06 CEST] <furq> are you on one of those rpm distros
[04:08:41 CEST] <mitigate> no, roll my own...
[04:10:30 CEST] <Guddu> DHE, I generate the output with -c copy but the resulting mov does not load in Handbrake.
[04:10:38 CEST] <Guddu> Should I try another codec?
[04:16:52 CEST] <feedbackmonitor> Hi, can ffmpeg be used to re-encode footage recorded at 24 fps to output to 60 fps?
[04:17:13 CEST] <furq> sure
[04:17:34 CEST] <furq> do you want to duplicate frames, blend frames, or use motion interpolation
[04:17:40 CEST] <feedbackmonitor> furq, Here is my footage ingredients: http://pastebin.centos.org/1587131/
[04:17:57 CEST] <feedbackmonitor> furq, Whatever looks nicest
[04:18:08 CEST] <furq> motion interpolation will look nicest but it's slow
[04:18:13 CEST] <feedbackmonitor> Audio I can tack on after the fact
[04:18:30 CEST] <feedbackmonitor> furq, I don't care about slow, I care about looking great
[04:18:42 CEST] <furq> https://ffmpeg.org/ffmpeg-filters.html#minterpolate
[04:18:44 CEST] <furq> that's what you want then
[04:18:59 CEST] <furq> just -vf minterpolate=60 should give decent results
[04:19:02 CEST] <furq> you can probably tune it more
[04:19:24 CEST] <feedbackmonitor> can you please provide a code example (but I will read it as I am making a lot of errors that need to be rectified)
[04:19:36 CEST] <feedbackmonitor> furq, ahh
[04:20:27 CEST] <feedbackmonitor> furq, Thanks
[04:20:45 CEST] <feedbackmonitor> I need to read the ffmpeg manual, will it have that information you mentioned?
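A hedged, fuller version of the minterpolate suggestion above (input/output names and the x264 settings are placeholders; mi_mode=mci is the motion-compensated mode the filter documents):
    ffmpeg -i input.mov -vf "minterpolate=fps=60:mi_mode=mci" -c:v libx264 -crf 18 -preset slow -c:a copy output_60fps.mov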
[08:14:43 CEST] <NoEgo> I've got some DVDs I ripped to MKV, and they have a container frame rate of 29.976 and a stream frame rate of 23.976. I've been using '-vf fieldmatch,yadif=deint=interlaced,decimate' to IVTC based on some advice I found online. The resulting file is 23.976 fps, but the file duration has changed (06:44 > 05:23) and the audio is obviously way out of sync. Am I mistaken in thinking these files are telecined? Is there a surefire test? I am now
[08:14:43 CEST] <NoEgo> using the 'idet' filter to check and am getting very low TFF (13) and BFF (26) numbers. I was previously using the Hybrid GUI to test for interlacing/telecine and for encoding; Hybrid reports these files as telecined, but it is also now encoding everything at 20 FPS regardless of the settings, so I am not sure I can trust it anymore.
[09:04:11 CEST] <furq_> NoEgo: you should just be able to framestep through the file and see if it's telecined
[09:04:27 CEST] <furq_> obviously in a player that doesn't automatically deinterlace
[09:05:02 CEST] <furq> two frames out of every five will be interlaced
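A hedged way to back that up with numbers is to run the idet filter over a chunk of the file and read the single/multi-frame detection counts it prints at the end:
    ffmpeg -i input.mkv -vf idet -frames:v 2000 -an -f null -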
[09:19:48 CEST] <ldlework> furq: are -pix_fmt and -vcodec coupled?
[09:20:03 CEST] <ldlework> like are only certain pix_fmt's compatible with certain vcodecs?
[09:21:24 CEST] <furq> yeah
[09:21:31 CEST] <furq> -h encoder=foo will list the available pixel formats
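For example, to see what libx264 accepts (any encoder name can be substituted):
    ffmpeg -h encoder=libx264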
[09:21:37 CEST] <ldlework> neat
[09:23:08 CEST] <ldlework> i suppose they are all meaningless to me anyway
[09:23:17 CEST] <JEEB> which /41
[09:40:02 CEST] <NoEgo> furq: Stepping through manually these look progressive. Does that generally mean I would be safe to just encode these with "-r 24000/1001"? Thanks for the help.
[09:40:44 CEST] <furq> are you getting regular duplicate frames
[09:42:17 CEST] <NoEgo> Yeah, it seems 2:2 in the file I am currently testing
[09:42:47 CEST] <NoEgo> every frame repeats twice
[09:43:07 CEST] <furq> that doesn't sound right
[09:43:16 CEST] <furq> are you sure your player isn't deinterlacing
[09:43:33 CEST] <NoEgo> I am using SMplayer with deinterlacing set to none
[09:44:09 CEST] <NoEgo> wait
[09:44:17 CEST] <NoEgo> Now I am not seeing any repeating
[09:44:29 CEST] <NoEgo> That may have just been a very slow scene, my apologies
[09:44:50 CEST] <furq> is this a film or a tv show
[09:45:27 CEST] <NoEgo> Animated film short. It's a Tom and Jerry DVD
[09:45:53 CEST] <furq> i was sort of hoping you wouldn't say animation
[09:46:11 CEST] <furq> since that tends to be a pain
[09:46:32 CEST] <NoEgo> lol, yeah.
[09:46:40 CEST] <furq> if you're not getting any dup frames and not seeing any interlacing then i would just encode it 30p
[09:47:10 CEST] <furq> that'd be weird though given i'm pretty sure that source should be telecined
[09:47:15 CEST] <NoEgo> I am thinking this is definitely soft-telecined
[09:47:42 CEST] <NoEgo> I can force it at 24000/1001 and everything seems to shake out ok
[09:48:26 CEST] <furq> oh right i always forget soft telecine exists
[09:48:38 CEST] <furq> i assume there's some good way to deal with that but i couldn't tell you what it is
[09:50:12 CEST] <NoEgo> Yeah, all the other tricks I've found so far have come with their own headaches. Thanks for the help
[10:12:40 CEST] <linux8659> Hi, I'm wondering if there is a way to write an ffmpeg command that automatically merges all the files in the pwd?
[10:13:00 CEST] Last message repeated 1 time(s).
[10:13:05 CEST] <linux8659> with the command I use I need to rename all the files first for convenience: ffmpeg -i "concat:1.mp3|2.mp3|3.mp3|4.mp3|5.mp3|6.mp3|7.mp3|8.mp3|9.mp3|10.mp3" -c copy out.mp3
[10:13:13 CEST] <linux8659> sorry.. thanks
[10:31:49 CEST] <linux8659>  I got the answer on #bash :  ffmpeg -i "concat:$(files=(*.mp3); printf %s "$files"; printf '|%s' "${files[@]:1}")" -c copy out.mp3
[10:34:42 CEST] <furq> that's probably not going to work well
[10:35:04 CEST] <furq> that more or less does the same thing as cat *.mp3 > out.mp3
[10:36:18 CEST] <furq> you probably want something like `for f in *.mp3; do printf "file %q\n" "$f"; done | ffmpeg -f concat -protocol_whitelist file,pipe -safe 0 -i - -c copy out.mp3`
[10:40:18 CEST] <linux8659> furq ok, they both work for me; you say yours is better?
[10:41:25 CEST] <furq> like i said, the concat protocol is more or less just cat
[10:41:40 CEST] <furq> so it'll leave a bunch of file headers, id3 tags etc in the middle of the stream
[10:41:44 CEST] <furq> which will cause issues in some players
[10:42:24 CEST] <linux8659> THANKS I'll use it then, this trick will save me SO MUCH time, even if I understand almost nothing of the command!!!
[10:42:39 CEST] <furq> https://trac.ffmpeg.org/wiki/Concatenate#demuxer
[10:42:54 CEST] <furq> it's more or less just that, except it pipes the concat list in
[10:45:01 CEST] <linux8659> furq THANKS
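An equivalent hedged variant from that wiki page that writes the list to a file instead of piping it (assuming none of the file names contain single quotes):
    printf "file '%s'\n" *.mp3 > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp3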
[11:16:33 CEST] <th3_v0ice> Is it possible to remove the audio stream delay while opening the AVCodecContext for encoding? I tried setting the initial_padding to zero, but this doesnt seem to affect anything.
[11:16:54 CEST] <th3_v0ice> Timestamps are shifted by -initial_padding and I want to avoid this.
[11:57:25 CEST] <ldlework> furq: what did I break? this only records greyness now,
[11:57:44 CEST] <ldlework> ffcast % ffmpeg -f x11grab -i :0.0 -video_size %s -framerate 60 -vcodec libx264 -pix_fmt yuv420p -b:v 2000 -minrate 500k -maxrate 2500k ~/www/caps//045623082018.mp4
[11:58:20 CEST] <ldlework> this was working so good before gar
[13:12:42 CEST] <relaxed> ldlework: -video_size and -framerate go before the input
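A hedged reordering of that command (the ffcast wrapper and its %s size placeholder are left exactly as before, and the output name is a placeholder); note that -b:v 2000 means 2000 bits/s and was probably meant to be 2000k, and -maxrate only takes effect together with a -bufsize:
    ffcast % ffmpeg -f x11grab -video_size %s -framerate 60 -i :0.0 -c:v libx264 -pix_fmt yuv420p -b:v 2000k -minrate 500k -maxrate 2500k -bufsize 5M output.mp4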
[13:14:30 CEST] <ldlework> relaxed do you ever use slop?
[13:22:53 CEST] <ldlework> nm
[14:13:09 CEST] <Kam_> Hello, I'm (cross) compiling FFmpeg with --disable-everything, but added a few things.  one of my configurations fails to build on the current git master revision (it works with a revision from the beginning of August) (maybe triggered by --enable-muxer=webm ?) and it might be caused by the commits regarding av1 on 17th August.  here is the error message: https://pastebin.com/HEcsE7tZ  can someone look into this, please?  I do
[14:13:09 CEST] <Kam_>  not have av1 enabled in my configuration and do not intend to have it.
[14:18:13 CEST] <DHE> Kam_: --disable-everything is not usually recommended as it disables a number of core features as well. maybe you just want "--disable-muxers --enable-muxers=avi,pcm*,mpegts,h26*,mov,matroska" or such?
[14:19:45 CEST] <Kam_> DHE, my configuration is fine, I guess this is a regression.  the linker tries to reference 'ff_isom_write_av1c', 'ff_av1_filter_obus_buf' etc. which look like they are part of AV1
[14:20:21 CEST] <DHE> yes, a quick source check says that you need the mov muxer enabled as well
[14:20:34 CEST] <Kam_> but I do not want to have AV1
[14:20:51 CEST] <Kam_> what do I need the mov muxer for?
[14:21:05 CEST] <DHE> it's the quickest solution to pull in the needed code
[14:21:41 CEST] <Kam_> I have an even more minimalistic configuration; that one builds.
[14:21:42 CEST] <DHE> even if you don't use av1 as an encoder, the matroska muxer still handles av1 as a codec it can write. the helper functions for it are what's missing
[14:22:06 CEST] <JEEB> but it still shouldn't fail at compilation if the configuration is valid
[14:22:33 CEST] <JEEB> so that is a boog I would guess in that sense
[14:22:48 CEST] <Kam_> this configuration builds with git master from beginning of August.
[14:23:07 CEST] <JEEB> basically either have that symbol available even without enabling some things, or proper ifdefs are needed in matroskaenc
[14:23:08 CEST] <DHE> sounds like Kam_ wants to be able to compile codec-specific handling straight out of muxers as well
[14:23:21 CEST] <JEEB> we already do that in various cases methinks
[14:23:29 CEST] <JEEB> or we have the stuff that's 100% needed always available
[14:23:37 CEST] <JEEB> it's a mishap due to an AV1-specific parser file
[14:23:44 CEST] <DHE> https://github.com/ffmpeg/ffmpeg/commit/de1b44c2 offending commit
[14:25:13 CEST] <JEEB> I'm at $dayjob but might look into it after I get home
[14:25:55 CEST] <JEEB> I would guess that we still want av1 parsing so we can put AV1 into matroska even if AV1 itself isn't enabled
[14:25:59 CEST] <DHE> there's that, but I would also just suggest a git revert for this specific user's issue
[14:26:33 CEST] <JEEB> most likely movenc already depends on this parser, but not matroskaenc
[14:27:33 CEST] <ldlework> Has anyone played around with PyAV? I'm trying to do basic screen recording but I am pretty lost.
[14:27:49 CEST] <JEEB> yup, `OBJS-$(CONFIG_MOV_MUXER)                 += movenc.o av1.o avc.o hevc.o vpcc.o`
[14:27:55 CEST] <JEEB> so MOV_MUXER has av1.o
[14:28:15 CEST] <DHE> that's the right fix as is...
[14:28:30 CEST] <JEEB> yes, but the matroska muxer probably misses it
[14:28:34 CEST] <JEEB> while using the exports
[14:28:42 CEST] <JEEB> s/exports/symbols/
[14:28:54 CEST] <DHE> yes, it's an error in the commit which should have added the Makefile deps
[14:28:59 CEST] <JEEB> yes
[14:29:15 CEST] <JEEB> anyways, $dayjob
[14:29:17 CEST] <DHE> but the question being asked here is, for a "minimal" config, should it be possible to ./configure that patch right out?
[14:29:25 CEST] <DHE> as you said, #ifdef the AV1 stuff away
[14:29:35 CEST] <JEEB> no
[14:29:43 CEST] <JEEB> we already have AVC/HEVC parsers in there as deps
[14:29:46 CEST] <JEEB> and VPx
[14:29:57 CEST] <JEEB> in some cases if there's a dependency on a decoder or so
[14:30:00 CEST] <JEEB> that'd make sense
[14:30:06 CEST] <JEEB> for just the parser, nope
[14:30:53 CEST] <Kam_> a component always comes in a "complete" fashion?
[14:31:16 CEST] <DHE> matroska support means support for all codecs matroska can handle
[14:31:17 CEST] <Kam_> so, webm muxer is always able to handle AV1?
[14:31:31 CEST] <JEEB> in general, there IIRC are some exceptions
[14:31:36 CEST] <Kam_> yeah, I see.  then this makes sense to me.
[14:31:59 CEST] <JEEB> ok, libavformat generally seems to not have it
[14:32:09 CEST] <JEEB> looking at  git grep "#ifdef" -- libavformat/
[14:32:26 CEST] <JEEB> there's some OS specific checks but not feature checks
[14:33:25 CEST] <JEEB> and ok, it is better than I thought in libavcodec as well :D
[14:33:48 CEST] <JEEB> so yes, if you enable some feature then it should get fully enabled
[14:34:07 CEST] <Kam_> could I -- in theory -- encode AV1 with a different program and feed it to FFmpeg to mux it in matroska?
[14:34:33 CEST] <Kam_> .. without FFmpeg having a AV1 encoder
[14:35:24 CEST] <JEEB> yes
[14:36:01 CEST] <JEEB> with the API that is, the command line app would also need the required input modules enabled
[14:36:04 CEST] <JEEB> like the IVF demuxer
[14:36:37 CEST] <JEEB> so you'd receive encoded data and put that buffer into AVPacket
[14:36:55 CEST] <JEEB> then feed that AVPacket to an instance of matroskaenc
[14:39:33 CEST] <Kam_> good, then it sounds like the dependency is missing in Makefile, and not to #ifdef it
[14:39:55 CEST] <JEEB> yup
[14:40:08 CEST] <termos> Setting -rtmp_flashver from ffmpeg command line works great, but it is not set when I do av_opt_set(format, "rtmp_flashver", "FMLE test version", 0); Am I doing something wrong here?
[14:54:48 CEST] <Kam_> is WebM also able to contain an AV1 stream?
[14:56:48 CEST] <termos> Figured out I have to pass the AVDictionary to the avio_open2 function when I open the RTMP stream
[14:58:10 CEST] <Kam_> or is there even a difference between WebM muxer and Matroska? (there is no WebM demuxer, I have to use the Matroska demuxer for webm)
[14:58:17 CEST] <DHE> Kam_: you might also copy pre-encoded AV1 from one .mkv to another, etc.
[15:01:51 CEST] <th3_v0ice> Can anyone help with audio stream delay? Any info is more than welcome.
[15:01:59 CEST] <Kam_> DHE: I'm pulling in the WebM muxer in my configuration, not Matroska.  If AV1 is not part of WebM, then #ifdef'ing it down might also be an idea additionally.
[15:03:33 CEST] <JEEB> Kam_: it'll be in both webm profile and matroska profile in the end
[15:03:49 CEST] <JEEB> after all, webm is just a subset of matroska, a profile by GOOG
[15:12:59 CEST] <zerodefect> Using the C-API, is there a way to get the frame type from the x264 encoder in the AVPacket (once encoded)? I've seen there is a way to tell if it's a keyframe, but that doesn't differentiate between, say, B/P frames.
[15:13:29 CEST] <Kam_> I added 'av1.o' to OBJS-$(CONFIG_MATROSKA_MUXER) and OBJS-$(CONFIG_WEBM_MUXER) in libavformat/Makefile and it works -- thanks! :)
[15:28:58 CEST] <DHE> zerodefect: keyframes are indicated as such. otherwise you will have to guess that P-frames have pts==dts and B-frames do not
[15:29:10 CEST] <DHE> even so that's not quite 100% but it's most of the way there
[15:30:08 CEST] <zerodefect> Oh ok. Thanks for tip. Can I ask when that won't hold true? Seems easier than parsing NAL units :)
[15:38:33 CEST] <DHE> a keyframe and an I-Frame are not necessarily the same thing with H264. I and IDR have different semantics
[15:41:25 CEST] <termos> I've seen this way of getting frame types, is it not correct for H264? https://github.com/FFmpeg/FFmpeg/blob/master/fftools/ffmpeg.c#L741-L744
[15:41:29 CEST] <zerodefect> Ok thanks. I'll do some research - I need to understand more. You've given me a lot to go on.
[15:42:54 CEST] <zerodefect> Thanks @termos, nifty. I'll try that out.
[16:27:00 CEST] <certaindestiny> Hi all, we have a Blackmagic DeckLink card installed in this server and are able to transcode video. So far so good; however, we would like to check the bitrate of the video every couple of seconds so we can generate graphs etc. of the video being sent
[16:27:06 CEST] <certaindestiny> I tried this with ffprobe -f decklink -i <input> but it is not working
[16:27:24 CEST] <certaindestiny> input/output error
[16:28:46 CEST] <certaindestiny> Anybody any idea how to achieve this
[16:36:27 CEST] <Nacht> certaindestiny: Might be worth looking at this example
[16:36:29 CEST] <Nacht> https://github.com/zeroepoch/plotbitrate
[16:44:45 CEST] <certaindestiny> Nacht, Thanks for the response, We however do not control the way the stream gets started as this is done by third party software. We are simply looking to query the decklink card for its current bitrate
[16:46:55 CEST] <kepstin> i thought the decklink cards just gave you raw video, did they add hardware encoders to them?
[16:49:32 CEST] <kepstin> if you want to find the bitrate of the encoded video, you'd probably want to analyze the stream being output by the third party software...
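A hedged sketch of getting bitrate-over-time out of whatever encoded stream the third-party software emits (a file or a network capture of it), by dumping per-packet timestamps and sizes and summing them per second afterwards:
    ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,size -of csv=p=0 stream.ts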
[16:52:32 CEST] <JEEB> pretty sure the avdevices don't take in non-raw
[16:52:44 CEST] <JEEB> so if it's input via decklink avdevice then it's raw
[17:55:48 CEST] <leif> If avstream_duration() returns -9223372036854775808, does that mean something special to ffmpeg?
[17:56:07 CEST] <leif> (Like, I don't know how a stream can have a negative duration per se.)
[17:56:50 CEST] <leif> OHHH....that's the AV_NOPTS_VALUE...facepalm.
[17:57:01 CEST] <leif> I wonder why a file would have that though...
[18:05:29 CEST] <JEEB> leif: there are various formats where the duration is not known beforehand
[20:02:48 CEST] <dscastro> hello, i'm wondering why i'm losing quality when getting a rtsp feed from a security camera.
[20:05:01 CEST] <dscastro> this is the command i ended up with:
[20:05:04 CEST] <dscastro> ffmpeg -rtsp_transport http -i 'rtsp://${RTSPURL}' \
[20:05:04 CEST] <dscastro> -map 0 -r 20 -g 40 -bufsize 512K -maxrate 2M -crf 15 -vf scale=1920:1080 -vf scale=1280:720 \
[20:05:04 CEST] <dscastro> -f segment -segment_time 10 -segment_format mp4 -strftime 1 "capture-%03d-%Y-%m-%d_%H-%M-%S.mp4"
[20:11:54 CEST] <ChocolateArmpits> dscastro, why have you specified -vf scale two times ?
[20:15:24 CEST] <dscastro> ChocolateArmpits: could be a lack of understanding, but as far as i could se, ffmpeg is smart enough to detect the best scale for an input, isn't it?
[20:15:32 CEST] <dscastro> *see
[20:15:55 CEST] <ChocolateArmpits> Other than that, your bufsize and maxrate may be inadequate for the crf you picked. The crf you picked would by itself generate higher bitrate video, but due to ratecontrol constraints it probably never gets utilized fully
[20:16:40 CEST] <ChocolateArmpits> dscastro, you're supposed to specify -vf only once and then have a chain of filters that process input frames
[20:17:52 CEST] <dscastro> ChocolateArmpits: you think this is the root of the poor quality?
[20:18:09 CEST] <ChocolateArmpits> What's the output resolution?
[20:19:11 CEST] <dscastro> ChocolateArmpits: this is a standard 1Mpx camera, so i assume this will be 1280x720
[20:19:34 CEST] <ChocolateArmpits> dscastro, can you post your console output?
[20:22:26 CEST] <ChocolateArmpits> The console output should have input parameters listed like resolution
[20:22:55 CEST] <dscastro> ChocolateArmpits: https://paste.fedoraproject.org/paste/ScVoxuakuE3eFEeCsVnNXA
[20:24:08 CEST] <ChocolateArmpits> dscastro, the input resolution is only 352x288. Are you sure you have configured the camera correctly or have picked the right stream?
[20:25:27 CEST] <dscastro> ChocolateArmpits: ohh.. you are right
[20:25:45 CEST] <dscastro> i haven't paid attention to it.
[20:25:53 CEST] <dscastro> let me see
[20:29:35 CEST] <dscastro> ChocolateArmpits: you were right, i picked up the wrong stream url
[20:30:31 CEST] <dscastro> ChocolateArmpits: would you mind pointing me to the right direction about buffer_size, maxrate , crf and etc?
[20:31:33 CEST] <dscastro> i have an early beta that grabs rtsp streams and puts them in an s3 bucket for storage, and even does a "near real-time" stream
[20:38:18 CEST] <ChocolateArmpits> dscastro, you can read this https://slhck.info/video/2017/03/01/rate-control.html
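Putting the two points together (a single -vf, and a VBV cap that is not wildly below what the chosen CRF would produce), a hedged rework of the earlier command; the crf/maxrate/bufsize numbers are guesses to be tuned per that article, and the output name pattern is simplified:
    ffmpeg -rtsp_transport http -i "rtsp://${RTSPURL}" \
           -map 0 -vf "scale=1280:720" -r 20 -g 40 \
           -c:v libx264 -crf 21 -maxrate 4M -bufsize 8M \
           -f segment -segment_time 10 -segment_format mp4 -strftime 1 "capture-%Y-%m-%d_%H-%M-%S.mp4"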
[20:47:16 CEST] <dscastro> ChocolateArmpits: tks !
[20:48:26 CEST] <ChocolateArmpits> np
[22:05:38 CEST] <feedbackmonitor> Hello, I had some video footage at 24 fps which was 4.3 GB and then output it to 60 fps, but that file is 315.4 MB. The command I used was  ffmpeg -i P1170226.MOV -c:v libx264 -preset veryslow -crf 24  -r 60  -profile:v high -level 4.1 -tune film  -acodec copy   output60fps.MOV
[22:05:38 CEST] <feedbackmonitor>    My concern is quality of the output.
[22:06:52 CEST] <feedbackmonitor> Because that will be used for a 'master project' where it is set at 60 fps for the majority of the footage
[22:07:26 CEST] <BtbN> Why would you convert 24 fps to 60? There's nothing to gain by doing that.
[22:07:43 CEST] <furq> well your source was 50mbit 1080p so the output isn't going to be that high
[22:07:48 CEST] <furq> you probably still want to set crf though
[22:08:19 CEST] <feedbackmonitor> BtbN, In the video editor I have, when I input the 24 fps into the 60 fps project, the audio /video is out of sync, so there is that
[22:08:51 CEST] <BtbN> If you want this only for editing, you can probably use some lossless intermediate codec
[22:09:31 CEST] <feedbackmonitor> My project is recorded in h264, what do you advise?
[22:09:45 CEST] <furq> what video editor are you using
[22:09:49 CEST] <feedbackmonitor> Blender
[22:10:19 CEST] <furq> blender supports ffv1, so try that
[22:10:56 CEST] <furq> actually it doesn't say if it supports lossless h264 but if it supports ffv1, it's probably using ffh264 to decode
[22:11:04 CEST] <furq> so if you're making a file with a ton of dup frames, try that first
[22:11:17 CEST] <furq> add -qp 0 to the command you have now
[22:11:44 CEST] <furq> also bear in mind both of those will be way bigger than 50mbit
[22:12:19 CEST] <feedbackmonitor> furq, Where do I add the -qp 0?
[22:12:28 CEST] <furq> as an output option
[22:12:38 CEST] <furq> get rid of -crf 24 and put -qp 0 there
[22:13:26 CEST] <feedbackmonitor> furq, I can still output to mov format?
[22:13:35 CEST] <furq> yes with lossless h264
[22:13:45 CEST] <furq> you'd need mkv or nut for ffv1
[22:13:55 CEST] <furq> blender supports mkv so that's fine
[22:14:34 CEST] <BtbN> mov isn't exactly a good format for that stuff anyway
[22:15:24 CEST] <feedbackmonitor> BtbN, That is what my camera records at
[22:15:36 CEST] <BtbN> so?
[22:15:43 CEST] <feedbackmonitor> BtbN, Nothing I can do until I buy a new camera
[22:15:54 CEST] <BtbN> And you need to output to mov again because?
[22:16:15 CEST] <feedbackmonitor> BtbN, This is not final output, it is just to have working files
[22:16:32 CEST] <feedbackmonitor> BtbN, for the master project and then I can do final output to something else
[22:16:40 CEST] <BtbN> And why do you use mov for that?
[22:17:07 CEST] <feedbackmonitor> BtbN, Because that is what my camera records at, unless you think I should mix formats in the video project?
[22:17:24 CEST] <furq> it makes no difference what the files you import into blender are
[22:18:20 CEST] <feedbackmonitor> furq, http://pastebin.centos.org/1601886/
[22:18:45 CEST] <furq> get rid of -profile, -level and -tune
[22:19:37 CEST] <feedbackmonitor> furq, thanks
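A hedged, consolidated version of that advice, keeping the intermediate lossless for editing (file names taken from the messages above; -preset could be dropped to something much faster since -qp 0 is lossless either way, and "-c:v ffv1" with an .mkv output is the FFV1 alternative mentioned earlier):
    ffmpeg -i P1170226.MOV -r 60 -c:v libx264 -preset veryslow -qp 0 -c:a copy output60fps.mov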
[23:17:43 CEST] <_Mike_C_> Hello, could anyone point me to an example that would show me how to programmatically set up h264_nvenc and hevc_nvenc?
[00:00:00 CEST] --- Fri Aug 24 2018

