[Ffmpeg-devel-irc] ffmpeg.log.20170614

burek burek021 at gmail.com
Thu Jun 15 03:05:02 EEST 2017


[04:39:30 CEST] <ac_slater> hey guys. I'm attempting to mux video and data into an mpegts container via the APIs. Is it weird if my data stream's PTS starts at 1 and increments by 1 for every "data packet"? I guess I don't understand whether both streams' PTS values should be interleaved or not
[04:52:17 CEST] <ac_slater> Is there a good example or anything for handling remuxing of video data with b-frames?
[04:52:24 CEST] <ac_slater> ie - DTS != PTS
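A quick way to see how PTS and DTS diverge in a stream with b-frames is to dump per-packet timestamps with ffprobe (a diagnostic sketch; the input name is a placeholder):

```shell
# With b-frames present, dts stays monotonic while pts is
# reordered to presentation order, so dts != pts on some packets.
ffprobe -hide_banner -select_streams v:0 \
        -show_entries packet=pts,dts,flags \
        -of csv input.ts
```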
[07:30:51 CEST] <lullabybunny> Hello, I am very new to the usage of ffmpeg, and I am trying to do a little troubleshooting. I have the Zeranoe ffmpeg shared build and I'm trying to do some QSV troubleshooting. Basically the program that uses ffmpeg is saying that my QSV codecs aren't implemented, so I'm trying to use the command line to confirm they do in fact work. The problem is all the guides I have read on command-line usage basically assume I know what I'm doing
[07:30:53 CEST] <lullabybunny> Can someone help?
[08:02:04 CEST] <lullabybunny> is anyone around, really need some help
[09:00:02 CEST] <lullabybunny> hello, i have an i5 4590 and i am getting the error "Error initializing an internal mfx session: unsupported (-3)". this was supposedly fixed in ffmpeg 2.8. I am currently using 3.3.1
[09:00:13 CEST] <lullabybunny> can someone please advise
[09:03:02 CEST] <lullabybunny> https://puu.sh/wjzPG/11ff3a3e6e.png
[09:23:56 CEST] <amey> Hi all. I am working on audio fingerprinting for a GSoC project. I want to decode audio to PCM, downmix it to mono, and downsample it to 5512 Hz. Following tutorials on the web I tried the code below; since I don't know how to test whether it is correct, it would be a great help if any of you could review it. Also, after swr_convert the output is in uint8_t form, but I want it in float. How can I do that?
[09:23:56 CEST] <amey> https://pastebin.com/dP9W9qw7
[12:23:37 CEST] <pihpah> /usr/bin/ffmpeg -i "$INPUT" -map $VIDEO_STREAM -map $AUDIO_STREAM -c:v libx264 $audio_ops -sn -movflags faststart -strict -2 $crf_ops $filter_ops "$output"
[12:24:10 CEST] <pihpah> But for some reason other streams are being copied too; how can I copy only the video and audio streams I specify?
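For the record, `-map` already restricts the output to exactly the selected streams, so extra streams in the output usually mean the `$VIDEO_STREAM`/`$AUDIO_STREAM` variables expanded to something unexpected. A minimal sketch with explicit specifiers (the stream indices are assumptions):

```shell
# Copy only the first video and first audio stream of input 0;
# using -map disables ffmpeg's default stream selection entirely.
ffmpeg -i "$INPUT" -map 0:v:0 -map 0:a:0 \
       -c:v libx264 -c:a copy \
       -movflags +faststart output.mp4
```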
[13:49:44 CEST] <pihpah> Anyone?
[13:51:07 CEST] <pihpah> He who helps me will see my girlfriend's ass, and she is hot!
[14:07:35 CEST] <Jonuz> https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRwpeNkZH2rmtHxnny-EOZ0O7hIL83_UvBr0Z1AQTyHf_ky1cwmHSHBuHI
[16:10:25 CEST] <alexpigment> hey guys, I'm looking at doing multiple transcodes with ffmpeg like this: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs#Differentparalleloutputs
[16:11:07 CEST] <alexpigment> unfortunately, my CMD window on Windows is giving me the error "Unable to find a suitable format for '\'"
[16:11:20 CEST] <alexpigment> is there a way to chain these together into one line rather than multiple lines like the example?
[16:11:36 CEST] <relaxed> yes, use one line and omit the backslash
[16:11:53 CEST] <alexpigment> oh gotcha
[16:12:00 CEST] <alexpigment> thanks
[16:17:19 CEST] <alexpigment> secondly, I'm needing to transcode 1080p into MPEG-1, but I get tons of these "buffer underflow" / "ignoring buffer limits to mux it" messages
[16:17:35 CEST] <alexpigment> i tried setting a bufsize and maxrate to the same as the target bitrate, but it's still the same thing
[16:17:43 CEST] <alexpigment> is there a way to silence these warnings?
[16:18:05 CEST] <DHE> mpeg1? what bitrate?
[16:18:09 CEST] <alexpigment> 24mbps
[16:18:29 CEST] <alexpigment> i realize it's out of spec, but it's unimportant for my case
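A hedged sketch of the VBV/mux settings under discussion (the values are guesses, and 24 Mbit/s remains beyond MPEG-1's nominal limits, so some muxer warnings may persist):

```shell
# -maxrate/-bufsize constrain the encoder's VBV model;
# -muxrate raises the MPEG-PS mux rate, which is what the
# "buffer underflow" messages are actually complaining about.
ffmpeg -i input.mp4 -c:v mpeg1video \
       -b:v 24M -maxrate 24M -bufsize 8M \
       -muxrate 30M output.mpg
```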
[16:18:38 CEST] <DHE> you are using a capital M for your bitrate, right?
[16:18:45 CEST] <alexpigment> yes
[16:18:46 CEST] <alexpigment> well
[16:18:52 CEST] <alexpigment> i'm actually just using 24000000
[16:18:53 CEST] <alexpigment> but yes
[16:18:58 CEST] <alexpigment> Mbps is what i'm intending
[16:23:09 CEST] <dorvan> hi all
[16:23:51 CEST] <zack_s_> I want to concatenate two videos without re-encoding
[16:23:51 CEST] <zack_s_> ffmpeg.exe -f concat -i "video_1.mp4" -i "video_2.mp4" -c:v copy "output.mp4"
[16:24:01 CEST] <dorvan> i have a problem transcoding a live view from an ip camera (h264) to a file to be read in (near) realtime
[16:24:27 CEST] <zack_s_> however I get the following output: video_1.mp4: Invalid data found when processing input
[16:27:30 CEST] <kerio> DHE: maybe it's one bit every 41 seconds instead
[17:31:03 CEST] <FishPencil> Is there a correct/better/best way to normalize audio?
[17:34:03 CEST] <zerodefect> IMX/D10 normally has a height of 608 (720x608), what is the best way to handle this and keep it 576 using the C-API?  I don't want to throw the data away because I may use it at a later stage (the raw ancillary data is quite useful).   Is there a way to move the data pointers to the pertinent pixels?
[17:43:52 CEST] <durandal_1707> FishPencil: normalize means 2 pass processing
[17:47:40 CEST] <slowWLAN> is there an easy subtitle format that is text-only?
[17:48:18 CEST] <zerodefect> SRT
[17:48:26 CEST] <FishPencil> durandal_1707: Right
[17:50:19 CEST] <slowWLAN> https://matroska.org/technical/specs/subtitles/srt.html
[17:50:33 CEST] <slowWLAN> zerodefect, thx. seems like what i've been looking for :)
[17:51:09 CEST] <zerodefect> No problem.
[18:04:45 CEST] <kepstin> zerodefect: yeah, if you just adjust the data pointers in the avframe, and adjust the height field appropriately, it should do what you want. This is actually basically what the crop filter does, fwiw.
[18:06:15 CEST] <zerodefect> Terrific, thanks @kepstin.  I'll give that a go.
[18:08:23 CEST] <zack_s_> how can I concatenate multiple videos without specifing a text file?
[18:08:40 CEST] <zerodefect> So I have an AVFrame which is a reference to the original IMX/D10 AVFrame.  Presumably if I change the data pointers (pixel data pointers) on my shallow copy, it won't affect the AVBufferRef pointers?
[18:08:49 CEST] <zerodefect> Just checking that there are no gotchas
[18:09:13 CEST] <kepstin> zerodefect: yes, the pointers and lengths for the underlying buffers are stored separately
[18:09:21 CEST] <zerodefect> Thanks :)
[18:09:52 CEST] <kepstin> zack_s_: start with https://ffmpeg.org/faq.html#How-can-I-concatenate-video-files_003f it shows a few different ways
[18:10:34 CEST] <zerodefect> Kepstin, am I one of the few asking C-API related questions?  You seem to be one of the few able to answer them :) !
[18:10:52 CEST] <kepstin> zerodefect: I don't even do very much ffmpeg programming :/
[18:11:15 CEST] <kepstin> and yeah, you're one of only a few people. Most of the help in this channel is for the cli tool
[18:11:44 CEST] <zerodefect> Ha. You have skillz (notice the 'z'!).  Am I in the best place for my Q's?
[18:12:42 CEST] <kepstin> zack_s_: but if you're talking about a file, you're probably looking at the concat demuxer - and there isn't really a better way to handle that. If you're on linux and using a modern shell you can probably fake it by using special substitutions, but the file is usually easier.
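The shell trick kepstin alludes to can be sketched with bash process substitution (file names are placeholders; absolute paths avoid the concat demuxer resolving entries relative to the list file):

```shell
# Build the concat list on the fly instead of keeping a text file around
ffmpeg -f concat -safe 0 \
       -i <(printf "file '%s'\n" "$PWD"/video_1.mp4 "$PWD"/video_2.mp4) \
       -c copy output.mp4
```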
[18:13:43 CEST] <kepstin> zerodefect: I don't really think there's anywhere better. The dev channel is more for talk of developing ffmpeg itself rather than applications using ffmpeg.
[18:15:08 CEST] <zerodefect> Yeah, I've observed the dev channel once or twice after you mentioned it to me.
[18:16:42 CEST] <zack_s_> kepstin: I need to concatenate mp4 files without re-encoding
[18:17:14 CEST] <kepstin> zack_s_: ok, then the concat demuxer is the best option, and the easiest way to use it is with a file.
[18:17:16 CEST] <zack_s_> what does "yuv4mpegpipe " do? and the transcoding step is almost lossless? what does almost mean?
[18:18:21 CEST] <zack_s_> when I do it as described above it doesn't work; when I do it with the text file it works: ffmpeg -f concat -safe 0 -i concate-file.txt -c copy output.mp4
[18:18:27 CEST] <zack_s_> is this a bug in ffmpeg?
[18:18:38 CEST] <zack_s_> I mean, doing it with a text file or not should not make a difference
[18:20:21 CEST] <kepstin> zack_s_: the different options do different amounts of demuxing and decoding/encoding - for what you asked, the "concat" demuxer (which uses an input text file) will match your requirements best.
[18:21:08 CEST] <kepstin> zack_s_: pretty much the only other way to do it would be to convert all your input mp4 files to mpegts (using -c copy), concatenate those, then convert back to mp4 - which seems a lot more annoying to do :)
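That mpegts round trip would look roughly like this (assuming H.264 video and AAC audio, hence the two bitstream filters):

```shell
# MP4 -> TS (annex-b), concatenate the TS files, then back to MP4
ffmpeg -i video_1.mp4 -c copy -bsf:v h264_mp4toannexb part1.ts
ffmpeg -i video_2.mp4 -c copy -bsf:v h264_mp4toannexb part2.ts
ffmpeg -i "concat:part1.ts|part2.ts" -c copy -bsf:a aac_adtstoasc output.mp4
```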
[18:26:19 CEST] <zack_s_> kepstin: and the other concat option, what does it do?
[18:26:33 CEST] <zack_s_> ffmpeg.exe -f concat -i "video_1.mp4" -i "video_2.mp4" -c:v copy "output.mp4"
[18:27:20 CEST] <kepstin> zack_s_: that won't concatenate anything, that'll just give an error. It'll try to open the file "video_1.mp4" with the concat demuxer, but the concat demuxer can only read text files, so you get an error.
[18:28:17 CEST] <zack_s_> kepstin: but somebody said, it should work like this: https://superuser.com/a/718041
[18:28:53 CEST] <kepstin> zack_s_: they're wrong :/
[18:29:04 CEST] <zack_s_> okay
[18:29:07 CEST] <zack_s_> got it now
[18:29:15 CEST] <zack_s_> Anyway, when using the text file I got the following warning: [mp4 @ 0000000002650560] Non-monotonous DTS in output stream 0:0; previous: -400, current: -400; changing to -399. This may result in incorrect timestamps in the output file.
[18:29:24 CEST] <zack_s_> is this critical, what does it mean?
[18:31:21 CEST] <zack_s_> kepstin: ?
[18:31:57 CEST] <kepstin> zack_s_: it may or may not be an issue. If the audio/video sync is fine in the output file, then don't worry about it.
[18:32:41 CEST] <zack_s_> kepstin: okay, thx
[18:42:49 CEST] <alexpigment> is there any reason why people disable SSE optimizations when building ffmpeg?
[18:43:03 CEST] <alexpigment> is it just for size considerations? or is there a potential for problems?
[18:48:00 CEST] <durandal_1707> alexpigment: very low IQ
[18:48:43 CEST] <alexpigment> haha
[18:48:49 CEST] <alexpigment> ok, so there are no downsides?
[18:49:00 CEST] <alexpigment> i just figured I'd ask since there are options to disable them
[18:50:18 CEST] <durandal_1707> same with audiophiles and various nonsense and alternative facts spreading propaganda
[18:51:44 CEST] <alexpigment> well, i'm a bit of an audiophile myself, but i'm not a FLAC-head (although I guess I would be if compatibility was higher and mobile devices had more storage)
[18:51:55 CEST] <alexpigment> but yes, I get your point ;)
[19:21:14 CEST] <dorvan> i have problems with lossless rtsp stream acquisition: ffmpeg -y -re -i XXX -movflags isml+frag_keyframe -f ismv -loglevel warning -c:v copy -preset ultrafast -tune zerolatency -crf 0 YYY   ...i can partially solve it, but it's not a good result.
[20:05:37 CEST] <Tatsh> new DVDAs, Rush - Fly By Night and Rush - Moving Pictures
[20:05:41 CEST] <Tatsh> sound amazing
[20:06:12 CEST] <durandal_1707> share it with us
[20:06:20 CEST] <Tatsh> that's teh illEGAL
[20:06:31 CEST] <durandal_1707> nope
[20:06:43 CEST] <Tatsh> of course i rip these and convert to FLAC; no loss whatsoever :)
[20:06:55 CEST] <durandal_1707> thats illegal
[20:07:00 CEST] <Tatsh> my xbox one can then play these in VLC, but VLC has to downsample to 48K
[20:07:22 CEST] <Tatsh> human ears can't hear 96K
[20:07:26 CEST] <Tatsh> whatever..
[20:45:37 CEST] <Tatsh> very interesting; the bass guitar goes straight to LFE :)
[21:33:01 CEST] <Kirito> Whenever I extract frames from a video using ffmpeg and go to re-encode those frames after, the resulting file is always longer than the original despite the files being encoded with the exact same frame rates. Is there any way I can accurately avoid/prevent this from happening?
[21:33:49 CEST] <Kirito> I mean you can use setpts to try and guess/"hack" the length to match but that is literally just a horrible hack and can still result in audio sync issues
[21:34:21 CEST] <Kirito> I'm not sure if it's because the original source video has a variable frame rate (?) or what
[21:35:39 CEST] <styler2go> Hi, i am trying to generate a green video file but i need it to be 16:9, can someone help with what i need to add? i currently have: ffmpeg -f lavfi -i color=color=green -t 5166 red.mp4
[21:37:15 CEST] <Tatsh> styler2go, that's in the metadata or you can set the resolution
[21:37:21 CEST] <Tatsh> -aspect 16:9
[21:37:42 CEST] <Tatsh> or you can use -s, -s 160x90 for example
[21:37:50 CEST] <styler2go> nice, thank you
[21:37:52 CEST] <Tatsh> i'm not sure what will happen if you don't specify -aspect
[21:37:56 CEST] <Tatsh> you should specify that regardless
[21:37:57 CEST] <furq> don't do either of those things
[21:38:09 CEST] <Tatsh> okay listen to furq
[21:38:13 CEST] <furq> -f lavfi -i color=color=green:s=160x90
[21:38:25 CEST] <Tatsh> shouldn't he set the metadata for the mp4 file?
[21:38:37 CEST] <furq> you don't need to set aspect if the source aspect is 16:9
[21:38:47 CEST] <furq> -aspect is for anamorphic
[21:39:03 CEST] <styler2go> well currently the source is nothing
[21:39:08 CEST] <Tatsh> i use -aspect where i have a number of videos coming from 720x480 but they are 4:3
[21:39:17 CEST] <furq> well yeah that's anamorphic
[21:39:25 CEST] <Tatsh> styler2go, then you just need to give it a size
[21:39:30 CEST] <styler2go> anyway, it worked. thanks a lot
[21:40:28 CEST] <furq> you also probably shouldn't use -aspect if you're encoding
[21:40:32 CEST] <furq> it's better to use -vf setsar/setdar
[21:40:51 CEST] <furq> with that said, everything i've ever used has respected dar metadata
[21:40:55 CEST] <styler2go> Hmm.. i need it to be in 1920x1080.. but it's pretty slow if i try to create a file with that resolution.. can i speed it up? it's just green anyway
[21:41:03 CEST] <Tatsh> -vf setdar=16:9 ?
[21:41:09 CEST] <furq> 16/9
[21:41:10 CEST] <furq> but yeah
[21:41:11 CEST] <Tatsh> ok
[21:41:16 CEST] <Tatsh> will update my encoding scripts
[21:41:48 CEST] <furq> i've totally used -aspect when encoding and have no complaints
[21:41:58 CEST] <furq> but it's presumably better to do that at the stream level than at the container level
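A sketch of the stream-level approach for Tatsh's 720x480-but-4:3 case (input/output names assumed):

```shell
# setdar tags the display aspect ratio on the video stream itself,
# rather than relying on container-level -aspect metadata
ffmpeg -i input.mp4 -vf setdar=4/3 -c:v libx264 -c:a copy output.mp4
```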
[21:42:29 CEST] <Tatsh> styler2go, it's one colour; are you streaming this?
[21:42:40 CEST] <Tatsh> wondering if you can use a simple upscaler and it won't look bad
[21:42:48 CEST] <styler2go> no i need it as a dummy file with the exact length of 5166s
[21:43:15 CEST] <styler2go> pretty much just a greenscreen file to mask out some stuff
[21:43:25 CEST] <Tatsh> -f lavfi -i color=color=green:s=1920x1080
[21:43:37 CEST] <Tatsh> stretch it out to the DAR in whatever app you are using
[21:43:42 CEST] <furq> maybe -preset superfast will help
[21:43:47 CEST] <furq> also that's already the correct dar
[21:43:48 CEST] <Tatsh> it will have enough pixels to match
[21:43:52 CEST] <styler2go> Tatsh, well this works but it's slow
[21:44:01 CEST] <styler2go> in 160x90 it was like 2 seconds
[21:44:03 CEST] <Tatsh> but you are only generating this once
[21:44:18 CEST] <Tatsh> how slow do you mean?
[21:44:20 CEST] <furq> i doubt you'll get any faster than that
[21:44:32 CEST] <styler2go> 292 fps
[21:44:45 CEST] <furq> yeah that's pretty quick for 1080p x264
[21:44:48 CEST] <furq> even if it is one frame
[21:44:57 CEST] <furq> like i said, try using a faster preset
[21:45:03 CEST] <furq> or a higher crf
[21:45:07 CEST] <furq> it's not like you'll notice any difference
[21:45:09 CEST] <styler2go> hmm.. what if i first generate 160x90 and then upscale it? would that be fast?
[21:45:13 CEST] <furq> no
[21:45:20 CEST] <styler2go> i just tried superfast, no change
[21:45:30 CEST] <Kirito> No one on encoding individual frames? ¯\_(ツ)_/¯
[21:45:37 CEST] <Kirito> I don't understand why it doesn't Just Work
[21:45:38 CEST] <styler2go> ok, last question: can i somehow set the framerate for this generated green?
[21:45:52 CEST] <furq> s=1920x1080:r=25
[21:45:56 CEST] <styler2go> nice
[21:45:59 CEST] <furq> Kirito: how are you extracting the frames
[21:46:08 CEST] <furq> styler2go: you probably want to set the fps to 1 or something
[21:46:12 CEST] <furq> that'll speed things up
[21:46:18 CEST] <styler2go> ok at 1080p60 we're at 296 fps... that's good
[21:46:26 CEST] <styler2go> i think i need the 60fps
[21:46:31 CEST] <styler2go> the program i will use is pretty dumb
[21:46:31 CEST] <Tatsh> Kirito, are you hand editing every frame and trying to reconstruct the video?
[21:46:45 CEST] <Kirito> furq: https://gist.github.com/FujiMakoto/6c28eaf8581c1ca452af362125d40d20
[21:46:49 CEST] <Kirito> Tatsh: essentially yes
[21:46:52 CEST] <Tatsh> by hand editing i mean, using a photo editor
[21:47:12 CEST] <furq> why are you setting -r in the first ffmpeg invocation
[21:47:52 CEST] <Kirito> Tatsh: essentially, yes; in this case batch editing with Photoshop
[21:47:53 CEST] <furq> that whole -ignorefps bit doesn't do anything unless you're trying to drop/dup frames
[21:47:56 CEST] <Tatsh> furq, it would make sense if he chooses to encode to a simple framerate like 25
[21:48:06 CEST] <furq> he's not encoding a video
[21:48:11 CEST] <furq> a directory of pngs doesn't have a framerate
[21:48:27 CEST] <furq> all -r does there is drop frames to maintain the same duration
[21:48:27 CEST] <Kirito> the source video does
[21:48:34 CEST] <furq> so?
[21:48:40 CEST] <Kirito> I want them to be consistent?
[21:48:43 CEST] <Kirito> How do I do this?
[21:48:48 CEST] <furq> line 19
[21:48:55 CEST] <kepstin> it makes sense to use a constant framerate output (using -r output option or fps filter) when converting to a directory of images, because then you can just use the '-framerate' input option when turning them back into a video to make it the correct timing.
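The constant-framerate round trip kepstin describes, as a sketch (25 fps and the file names are assumptions):

```shell
# Force a constant rate while dumping frames...
ffmpeg -i input.mp4 -r 25 frames/%06d.png
# ...then declare the same rate when reassembling them
ffmpeg -framerate 25 -i frames/%06d.png -c:v huffyuv rebuilt.avi
```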
[21:49:01 CEST] <Tatsh> if you're getting frames extracted from 29.97 or 23.976 how can you have .976 frames?
[21:49:16 CEST] <furq> you know that videos don't have to be .000 seconds long, right
[21:49:56 CEST] <furq> kepstin: surely you actually just want every frame, then use -r when rejoining
[21:49:58 CEST] <Tatsh> no but if you extract exactly 1 second of a 29.97 video, you'll have 30 separate images
[21:50:09 CEST] <furq> he's not extracting exactly one second
[21:50:14 CEST] <furq> he's extracting every frame
[21:50:16 CEST] <styler2go> Hmm.. So there's no way to speed up that video generation? what's the bottleneck?
[21:50:28 CEST] <kepstin> furq: but then if the original video was variable framerate the timing could get screwed up.
[21:50:31 CEST] <Tatsh> you'll have lower quality if you do anything else styler2go
[21:50:33 CEST] <furq> sure
[21:50:38 CEST] <Tatsh> possibly weird artifacts in the video
[21:50:39 CEST] <furq> if it's vfr then -r makes some sense
[21:50:50 CEST] <furq> although if it's vfr then this is generally going to turn out badly
[21:50:51 CEST] <styler2go> Can there be artifacts if there is only green?
[21:51:02 CEST] <Tatsh> depends on the quality level probably
[21:51:04 CEST] <furq> if it's cfr then running ffprobe is a waste of time
[21:51:22 CEST] <Tatsh> i imagine if you do -crf 40 on that video you'll see garbage somewhere
[21:51:32 CEST] <furq> i'd be surprised if you noticed any difference
[21:51:33 CEST] <styler2go> i'll just tryi it
[21:51:35 CEST] <furq> you can probably just use -qp 51
[21:51:45 CEST] <furq> i'd expect -qp to be faster
[21:51:47 CEST] <Tatsh> the algorithm is to look for differences and compress them but since there are no differences you get no benefit
[21:51:54 CEST] <styler2go> ffmpeg -f lavfi -i color=color=green:s=1920x1080:r=60 -t 5166 -aspect 16:9 -preset superfast -crf 40 red.mp4
[21:52:05 CEST] <furq> styler2go: use -qp instead of -crf
[21:52:07 CEST] <styler2go> that's my current line. can i use a different -f which is faster maybe?
[21:52:11 CEST] <styler2go> -qp 40?
[21:52:13 CEST] <Tatsh> i get 9x on that
[21:52:15 CEST] <furq> you might as well use 50
[21:52:30 CEST] <furq> you're not going to get macroblocking on a single colour
[21:52:37 CEST] <Kirito> So what commands should I run when extracting frames / re-encoding to prevent a 2 minute 18 second video from turning into a ~2 minute 40 second one on re-encode?
[21:52:37 CEST] <Tatsh> it's not particularly fast
[21:52:43 CEST] <Tatsh> styler2go, do you have hardware encoding?
[21:52:44 CEST] <styler2go> Unrecognized option 'qp'. Error splitting the argument list: Option not found
[21:53:06 CEST] <furq> er
[21:53:08 CEST] <styler2go> Tatsh compiled it myself with cuda support so probably yes
[21:53:20 CEST] <kepstin> Kirito: is your source video constant framerate? If so, when turning it back from images to video, use the "-framerate" input option with the framerate from the original video.
[21:53:22 CEST] <Tatsh> hmm even with hardware i only get 9x
[21:53:23 CEST] <furq> -qp should work
[21:53:27 CEST] <furq> are you sure it's using libx264
[21:53:30 CEST] <Tatsh> it's trying to compress but it can't find anything
[21:53:50 CEST] <Tatsh> ffmpeg -f lavfi -i color=color=green:s=1920x1080:r=60 -t 5166 -aspect 16:9 -c:v h264_nvenc -rc constqp -qp 51  red.mp4
[21:53:54 CEST] <Tatsh> 9.4x
[21:53:59 CEST] <styler2go> My current line is: ffmpeg -f lavfi -i color=color=green:s=1920x1080:r=60 -t 5166 -aspect 16:9 -preset superfast -qp 40 red.mp4
[21:54:18 CEST] <Tatsh> looks good to me though
[21:54:20 CEST] <kepstin> Kirito: if your video is not constant framerate, use the "-r" *output* option (after -i) when turning the video into images, then use that same number on the "-framerate" input option when turning the images back to video
[21:54:23 CEST] <styler2go> Tatsh: Unrecognized option 'rc'.
[21:54:25 CEST] <Tatsh> but it's not fast; it will 5166/9 secnods
[21:54:28 CEST] <Tatsh> seconds*
[21:54:34 CEST] <Tatsh> you can wait :P
[21:54:39 CEST] <imperito> I'm trying to use ffmpeg to create a live stream of an image file. I thought something like "ffmpeg -loop 1 image.png out.m3u8" would work, but it terminates after a short time
[21:54:48 CEST] <Tatsh> styler2go, -rc is only for h264_nvenc
[21:55:03 CEST] <imperito> Can anybody tell me what I need to add or change?
[21:55:39 CEST] <Kirito> kepstin: I am, and I believe it should be constant framerate, but I'll need to check and be sure I guess. This is the command I'm using to re-encode: ffmpeg -threads 16 -f image2 -i "D:\Video\final\%06d.png" -framerate 29.97 -c:v huffyuv -r 29.97 "final.avi"
[21:55:48 CEST] <styler2go> so mine doesn't have nvenc anymore.. might have deleted my compiled one
[21:55:58 CEST] <Tatsh> styler2go, won't save much time anyway
[21:56:17 CEST] <styler2go> are there pre-compiled ffmpegs with nvenc?
[21:56:26 CEST] <styler2go> Tatsh i am just curios right now
[21:56:30 CEST] <Tatsh> no that's not allowed
[21:56:36 CEST] <kepstin> Kirito: you have a bunch of options in the wrong place. Input options go before the input! and remove the -r option on the re-encode.
[21:56:39 CEST] <styler2go> hmm
[21:56:49 CEST] <styler2go> well i am bored anyway.. why not compile ffmpeg lol
[21:56:54 CEST] <Kirito> kepstin: right, okay, thanks @_@
[21:56:58 CEST] <Tatsh> well, it's allowed i think
[21:57:05 CEST] <Tatsh> but you may not find any builds
[21:57:29 CEST] <Tatsh> legally distributed builds will be lacking things like fdk-aac and any other things that are patented
[21:57:35 CEST] <kepstin> Kirito: also note that the exact value for the ntsc framerate is "30000/1001", using 29.97 is an approximation. Probably close enough for short videos.
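How close the approximation is can be checked with a little arithmetic: 30000/1001 ≈ 29.97003 fps, and at an error of 0.03/1001 fps the approximation only drifts by a whole frame after roughly 33367 seconds (over nine hours):

```shell
awk 'BEGIN {
  exact  = 30000 / 1001      # exact NTSC rate
  approx = 29.97             # the common approximation
  printf "exact = %.5f fps\n", exact
  printf "error = %.7f fps\n", exact - approx
  # seconds until the approximation is one whole frame off
  printf "one frame of drift after %.0f s\n", 1 / (exact - approx)
}'
```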
[21:57:54 CEST] <Kirito> ahh, thanks
[21:58:12 CEST] <styler2go> https://p.styler2go.de/54932/ but it looks like i have nvenc enabled
[21:59:58 CEST] <Tatsh> but do you have the hardware?
[22:00:12 CEST] <Tatsh> i've got everything except HEVC on my GTX 980
[22:00:24 CEST] <styler2go> i got an 980ti
[22:00:29 CEST] <styler2go> so it should be the same as yours
[22:00:54 CEST] <Tatsh> for i in encoders decoders filters; do echo $i:; ffmpeg -hide_banner -${i} | egrep -i "npp|cuvid|nvenc|cuda|vaapi|vdpau|vda|dxva2|nvdec|qsv"; done
[22:01:03 CEST] <Tatsh> bash ^
[22:01:06 CEST] <styler2go> i am on windows
[22:01:24 CEST] <Tatsh> not sure the equivalent on windows; but you can use powershell's grep perhaps
[22:01:26 CEST] <Kirito> I think only the 10xx series GPUs have HEVC decoding capabilities
[22:01:26 CEST] <Tatsh> either way
[22:01:37 CEST] <Tatsh> ffmpeg -hide_banner -encoders
[22:01:52 CEST] <Tatsh> ffmpeg -hide_banner -encoders | grep nvenc in powershell
[22:02:00 CEST] <styler2go> Unrecognized option 'hide_banners'.
[22:02:01 CEST] <styler2go> lol
[22:02:08 CEST] <styler2go> oh ok, misread
[22:02:09 CEST] <Tatsh> it's -hide_banner no s
[22:02:25 CEST] <Tatsh> on windows you might have findstr in cmd
[22:02:29 CEST] <styler2go> https://p.styler2go.de/1973572/ it does have some
[22:02:34 CEST] <styler2go> different name tho
[22:02:36 CEST] <Tatsh> ffmpeg -hide_banner -encoders | findstr nvenc
[22:02:51 CEST] <Tatsh> yup
[22:02:56 CEST] <Tatsh> so you even have HEVC
[22:02:58 CEST] <styler2go> wow
[22:03:04 CEST] <styler2go> nvenc gives me 450 fps
[22:03:10 CEST] <styler2go> instead of 290
[22:03:19 CEST] <BtbN> the encoder being listed in -encoders does not mean it works.
[22:03:31 CEST] <BtbN> There is no hardware feature detection done before you actually use it.
[22:03:38 CEST] <Tatsh> that's true
[22:03:43 CEST] <Tatsh> the hevc one fails for me
[22:03:51 CEST] <styler2go> But it still doesn't work with rc option
[22:03:52 CEST] <furq> yeah i have hevc_nvenc and i don't have an nvidia card
[22:03:52 CEST] <Tatsh> some of the decoders fail too
[22:04:12 CEST] <Tatsh> styler2go, show the output for
[22:04:20 CEST] <Tatsh> ffmpeg -h encoder=nvenc_h264
[22:04:29 CEST] <BtbN> h264_nvenc
[22:04:54 CEST] <furq> they both work
[22:05:01 CEST] <styler2go> https://p.styler2go.de/9797058/ only those
[22:05:02 CEST] <BtbN> all but that are deprecated
[22:05:09 CEST] <Tatsh> hmm strange styler2go
[22:05:16 CEST] <Tatsh> use -preset or -profile
[22:05:31 CEST] <BtbN> you must be using a very old ffmpeg version
[22:05:31 CEST] <styler2go> the hevc one is way slower for me
[22:05:33 CEST] <Tatsh> or perhaps you can use -cbr 33k
[22:05:43 CEST] <styler2go> BtbN ffmpeg version N-76757-g1c3e43a Copyright (c) 2000-2015 the FFmpeg developers
[22:05:45 CEST] <Tatsh> the approximate bitrate needed to preserve static green
[22:05:48 CEST] <Tatsh> 33k
[22:05:48 CEST] <BtbN> yes.
[22:05:52 CEST] <BtbN> Update that, it's ancient.
[22:06:04 CEST] <styler2go> i don't want to set up all those build tools again :/
[22:06:11 CEST] <BtbN> then download a binary
[22:06:23 CEST] <Kirito> kepstin: original file duration: 00:02:18.15, output duration: 00:02:18.13
[22:06:25 CEST] <Kirito> \o/
[22:06:47 CEST] <Kirito> yay doing things the wrong way forever and not realizing it. Helps to actually know how these things work and not just do what some guy on Google said you should
[22:06:52 CEST] <styler2go> Tatsh: trying to set -cbr says: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[22:07:09 CEST] <Tatsh> yea update your ffmpeg before pursuing further
[22:07:11 CEST] <styler2go> BtbN theres no windows binary with nvenc sadly
[22:07:19 CEST] <styler2go> humm
[22:07:24 CEST] <furq> the zeranoe builds have nvenc
[22:07:34 CEST] <furq> all builds have nvenc now
[22:07:37 CEST] <styler2go> what
[22:07:39 CEST] <styler2go> really?
[22:07:41 CEST] <furq> yeah
[22:07:44 CEST] <styler2go> wooow
[22:08:04 CEST] <furq> the nvenc headers have been part of the ffmpeg source for a while
[22:08:14 CEST] <furq> you don't need to fuck about with the sdk any more
[22:08:32 CEST] <styler2go> but does it have mp3 now?
[22:08:37 CEST] <styler2go> i am not sure if it had in the past
[22:08:52 CEST] <styler2go> but there was a reason i compiled with enable-nonfree
[22:08:57 CEST] <Tatsh> if you need mp3 you can use lame
[22:09:02 CEST] <furq> you only need nonfree for fdk-aac
[22:09:13 CEST] <furq> as far as audio encoders go
[22:09:18 CEST] <furq> and the builtin aac encoder is acceptable now
[22:09:22 CEST] <Tatsh> if you're streaming mp4 you're streaming AAC 99% of the time
[22:09:38 CEST] <styler2go> cool, thanks
[22:09:55 CEST] <furq> fdk is only worth it if you need he-aac
[22:09:57 CEST] <furq> and you probably don't
[22:10:31 CEST] <furq> it'd be worth it if you were encoding aac for archival, but why would you ever do that
[22:10:37 CEST] <furq> for streaming, who gives a fuck
[22:13:15 CEST] <styler2go> Tatsh: [nvenc_h264 @ 00000000026d9ee0] Unable to parse option value "33k" as boolean
[22:13:26 CEST] <styler2go> so it's not -cbr
[22:14:11 CEST] <styler2go> ok, it's running now.. 450fps .. i guess i just have to accept that
[22:36:42 CEST] <pgorley> can the h264 decoder output different pixel formats directly (ie without converting the frames after the call to decode)?
[22:38:17 CEST] <JEEB> pgorley: decoders are decoders, they output what there is in coded form
[22:38:55 CEST] <kepstin> well, in theory the decoder could output in planar or non-planar form, i guess
[22:39:25 CEST] <kepstin> but there's no real reason to do that, it's easier to write the decoder to work on one format without internal conversions.
[22:40:03 CEST] <JEEB> planar/non-planar usually is just a thing between HW dec and SW dec. HW prefers NV12 and P010
[22:40:20 CEST] <JEEB> while SW stuff usually has things in planar format
[22:40:45 CEST] <JEEB> the actual data is the sample of course (just in a different form)
[22:40:50 CEST] <JEEB> *same
[22:42:27 CEST] <kepstin> the x264 encoder says it accepts both packed and planar formats, but it actually just converts the packed ones to planar internally before encoding; it's just for the convenience of people using the library
[22:42:51 CEST] <JEEB> I thought x264 internally used NV12
[22:43:22 CEST] <JEEB> http://git.videolan.org/?p=x264.git;a=commit;h=387828eda87988ad821ce30f818837bd4280bded
[22:43:55 CEST] <kepstin> huh, alright then
[22:44:13 CEST] <kepstin> turn my statement around: "it actually just converts the planar ones to packed internally before encoding" :)
[22:46:12 CEST] <imperito> I'm trying to make a live-stream video of an image file. I think I've fixed the issue with ffmpeg terminating after a short time, but I've still got 2 odd behaviors. One is that the "time" displayed by the encoder advances faster than wall clock time. Is this a problem?
[22:46:39 CEST] <kepstin> hmm, maybe not. I haven't looked into the x264 code recently, I wonder if it actually does have different codepaths for packed vs planar.
[22:46:44 CEST] <imperito> the other is that the created video is all black on my iphone
[22:47:57 CEST] <imperito> (the input image is not all black)
[22:49:36 CEST] <pgorley> oh, alright then, thanks
[22:49:39 CEST] <imperito> I'm wondering if there is something I need to do to set a frame rate
[22:49:53 CEST] <imperito> since the input doesn't really have a frame rate
[22:56:22 CEST] <kepstin> imperito: You have an image source? use the "-framerate" input option to set the framerate of the video it generates. I think the default is 25fps.
[22:56:31 CEST] <kepstin> huh, so x264 internally uses the 'NV12' format for 4:2:0 and 'NV16' for 4:2:2, and 'I444' for 4:4:4, in 8-bit at least.
[22:56:56 CEST] <kepstin> NV12 and NV16 are weird, because they have a separate luma plane, then interleaved chroma planes
[22:56:58 CEST] <kepstin> weird
[22:57:07 CEST] <kepstin> I guess it's whatever makes their assembly faster :/
[22:57:41 CEST] <JEEB> basically it puts the data of the two chroma channels closer to each other I guess
[22:57:42 CEST] <imperito> kepstin: I'd assume that 25fps would be satisfactory for live streaming?
[22:59:04 CEST] <kepstin> imperito: depends what you're doing, but it doesn't seem unreasonable.
[22:59:30 CEST] <imperito> the only reason I wondered at the framerate was the time displayed by ffmpeg advancing faster than real time. I wouldn't have expected that
[22:59:56 CEST] <kepstin> imperito: what exactly are you trying to do, what "time" are you looking at?
[23:01:08 CEST] <imperito> I've got a png file which is created and updated by a separate process. I'm trying to make a live video stream of this file using ffmpeg
[23:01:45 CEST] <imperito> The time I'm looking at is the time=xx:xx:xx.xx field of the ffmpeg command line output while it is running
[23:02:05 CEST] <imperito> (which may or may not be important, I suppose)
[23:02:59 CEST] <imperito> The previous problem I had was that the external process was not updating the source image atomically, which was causing ffmpeg to abort
[23:03:30 CEST] <imperito> I changed that and now it seems to run continuously
[23:04:20 CEST] <imperito> though the output is not yet as desired
[23:04:57 CEST] <kepstin> imperito: ok, so ffmpeg by default runs its encoder as fast as possible, since it was designed originally as a batch tool - this means it's usually faster than realtime (the status line should have a speed indicator saying what x speed it's running at)
[23:05:23 CEST] <kepstin> imperito: for live streaming from a source on the HD, you want to add the "-re" option to the ffmpeg command line, this slows ffmpeg down to run at realtime
[23:05:46 CEST] <imperito> anywhere in particular?
[23:06:07 CEST] <kepstin> input option, so before the -i
[23:06:41 CEST] <kepstin> it basically makes ffmpeg pretend it's reading from a webcam or something that provides frames at a fixed speed
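Putting kepstin's advice together as a sketch ("image.png" and the rtmp URL are placeholders; the command is only assembled as a string here, not executed):

```shell
# -re is an input option, so it comes before -i; it throttles reading
# to the input's native frame rate so a live stream runs at realtime
# instead of as-fast-as-possible.
cmd="ffmpeg -re -framerate 25 -loop 1 -i image.png -c:v libx264 -f flv rtmp://example.com/live/stream"
echo "$cmd"
```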
[23:07:16 CEST] <imperito> seems reasonable. It's running that way now, and fps=25 instead of like 57 as before
[23:07:20 CEST] <styler2go> hey, i also need to add a fake soundlane to my generated video. any idea how?
[23:07:53 CEST] <styler2go> my current command looks like that: ffmpeg -f lavfi -i color=color=green:s=1920x1080:r=60 -t 5166 -aspect 16:9 -c:v nvenc_h264 -rc constqp -qp 51 red.mp4
[23:07:57 CEST] <imperito> Though still a black screen on my phone
[23:09:25 CEST] <kepstin> styler2go: "-f lavfi -i sine" or something like that
[23:09:39 CEST] <styler2go> but i already have the input
[23:09:51 CEST] <styler2go> i mean, i already set -i color=
[23:09:57 CEST] <kepstin> you can have multiple inputs...
[23:10:26 CEST] <imperito> I do get a warning about "No pixel format specified, puv444p for H.264 encoding chosen". Could that be the problem?
[23:10:41 CEST] <imperito> er, yuv444p rather
[23:11:13 CEST] <kepstin> imperito: ah, yes, that would do it. Most hardware encoders can only do 4:2:0. That message should also helpfully tell you what option to add to fix the problem, if you read it :)
[23:11:35 CEST] <kepstin> or a nearby message, anyways
[23:12:11 CEST] <imperito> There is a suggested alternative for outdated media players
[23:12:17 CEST] <styler2go> any other ideas?
[23:12:21 CEST] <imperito> I didn't think my phone was that old... :(
[23:13:10 CEST] <kepstin> imperito: well, your phone probably can't do 10bit h264 video either, hardware decoders in portable devices are always some ways behind the times ;)
[23:13:30 CEST] <kepstin> but yeah, use "-pix_fmt yuv420p" as an output option for max compatibility
[23:13:37 CEST] <imperito> I see. Using yuv420p seems to have helped
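The fix kepstin describes, sketched as a full command (filenames and URL are placeholders; the string is assembled but not run here):

```shell
# -pix_fmt yuv420p as an *output* option forces 4:2:0 chroma
# subsampling so hardware decoders (phones, TVs) can play the video,
# instead of the yuv444p that libx264 would otherwise pick for an
# RGB-ish PNG input.
cmd="ffmpeg -re -loop 1 -i image.png -c:v libx264 -pix_fmt yuv420p -f flv rtmp://example.com/live/stream"
echo "$cmd"
```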
[23:16:07 CEST] <styler2go> Does anyone know how i can add a second simulated input? ffmpeg tells me: color=color=green:s=1920x1080:r=60: Invalid argument when i try ffmpeg -f lavfi -i aevalsrc=0 -i color=color=green:s=1920x1080:r=60 -t 5166 -aspect 16:9 -c:v nvenc_h264 -rc constqp -qp 51 red.mp4
[23:16:07 CEST] <imperito> Doesn't seem to be updating, though...
[23:17:24 CEST] <kepstin> styler2go: you need to use -f lavfi again on the second input
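styler2go's command with kepstin's fix applied as a sketch: each lavfi-generated input needs its own -f lavfi immediately before its -i (only the command string is built here, since it needs an NVENC-capable build to actually run):

```shell
# Two virtual inputs: a silent audio source (aevalsrc=0) and a solid
# green video source. Without the second -f lavfi, ffmpeg tries to
# open "color=..." as a filename and fails with "Invalid argument".
cmd="ffmpeg -f lavfi -i aevalsrc=0 -f lavfi -i color=color=green:s=1920x1080:r=60 -t 5166 -aspect 16:9 -c:v nvenc_h264 -rc constqp -qp 51 red.mp4"
echo "$cmd"
```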
[23:17:25 CEST] <imperito> If I do something like "ffmpeg -loop 1 -re image.png", would it constantly re-read image.png?
[23:17:57 CEST] <furq> no
[23:18:18 CEST] <furq> you'd need to do something like while true; do cat image.png; done | ffmpeg -f image2pipe -i -
[23:18:39 CEST] <imperito> OK, I'll try that, thanks
[23:18:42 CEST] <styler2go> well, that worked already, thanks kepstin
[23:18:54 CEST] <furq> and yes, that does suck really badly, but what are you going to do
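furq's loop expanded into a complete sketch ("image.png" and the URL are placeholders; the pipeline is only assembled as a string here, not executed):

```shell
# cat re-reads the file on every iteration, so changes another process
# makes to image.png show up in the stream; -f image2pipe tells ffmpeg
# to parse the concatenated PNGs from stdin, and -re keeps it at
# realtime as discussed above.
cmd='while true; do cat image.png; done | ffmpeg -re -f image2pipe -framerate 25 -i - -c:v libx264 -pix_fmt yuv420p -f flv rtmp://example.com/live/stream'
echo "$cmd"
```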
[23:19:35 CEST] <imperito> Eh, I'm not worried about a pretty command line, this is all on a cloud VM anyway
[23:20:03 CEST] <imperito> Should I still use -re?
[23:20:04 CEST] <furq> i don't just mean the aesthetics of the command line
[23:20:11 CEST] <furq> and if you're streaming this then yes
[23:21:29 CEST] <imperito> Ah, now it seems to be working as intended
[23:25:35 CEST] <imperito> I wonder if I should add a sleep with the cat, so the pipe between the processes tends to be empty rather than full? Would that reduce latency?
[23:27:29 CEST] <imperito> Eh, it looks good enough.
[23:29:11 CEST] <furq> sleeping will make no difference
[23:29:23 CEST] <furq> at least as far as my understanding of unix goes
[23:29:33 CEST] <furq> other than that it'll probably cause shit to break if you sleep for too long
[23:29:49 CEST] <imperito> Let's not do that, then :)
[23:30:37 CEST] <imperito> I'll leave it running like this overnight, hopefully nothing crashes
[23:36:30 CEST] <imperito> Heh, python sitting on half my memory, ffmpeg on half my CPU. If I do more streams I'm going to need a bigger VM
[00:00:00 CEST] --- Thu Jun 15 2017

