[Ffmpeg-devel-irc] ffmpeg.log.20170609

burek burek021 at gmail.com
Sat Jun 10 03:05:01 EEST 2017

[03:35:51 CEST] <Obliterous> Anyone well versed with ffserver available?
[05:06:48 CEST] <waqas> Anyone here up for small paid job for ffmpeg?
[05:07:26 CEST] <johnjay> did someone say money???
[05:07:45 CEST] <johnjay> I have a special script that alerts me anytime someone mentions money
[05:08:13 CEST] <waqas> lol yes
[05:10:07 CEST] <johnjay> well I only used ffmpeg one time for streaming
[05:10:23 CEST] <johnjay> but hey maybe I should learn all the command line switches if it means teh moneyz
[05:10:32 CEST] <johnjay> what kind of job is it?
[05:10:54 CEST] <waqas> Just need to create a slide show that has pan and zoom effect.
[05:11:28 CEST] <johnjay> oh like from a given video?
[05:11:53 CEST] <waqas> so take brand video, text, images and create a slide show
[05:12:31 CEST] <waqas> here is an example video
[05:12:32 CEST] <waqas> https://www.youtube.com/watch?v=9dvmyuJpt6k&t=1s
[05:14:07 CEST] <johnjay> that audio is wicked loud in my speakers
[05:15:11 CEST] <johnjay> that is creepy as hell lol
[05:15:21 CEST] <johnjay> the sound makes it seem like a horror movie
[05:15:29 CEST] <waqas> lol
[05:15:41 CEST] <johnjay> like there's zombies behind those doors when it zooms in all slowly
[05:16:38 CEST] <johnjay> so minus the ghostly horror audio of doom
[05:16:51 CEST] <johnjay> you're talking about that part where the logo fades in and out followed by text fading in and out?
[05:17:23 CEST] <waqas> and images
[05:17:38 CEST] <johnjay> oh. so basically turning images into a slide show
[05:17:47 CEST] <waqas> yes sir
[05:17:56 CEST] <waqas> with pan and zoom effect
[05:18:28 CEST] <johnjay> hmm. idk if there was any zoom in that vid
[05:18:31 CEST] <johnjay> although i only skimmed it.
[05:19:24 CEST] <waqas> images are changing with zoom in and out i think
[05:20:53 CEST] <johnjay> like the 3d effect at the start?
[05:21:15 CEST] <waqas> thats just video
[05:21:26 CEST] <waqas> i will provide the video
[05:21:34 CEST] <johnjay> oh yeah i see the zoom effect now
[05:21:43 CEST] <johnjay> it's what made that part with the door so creepy
[05:21:51 CEST] <johnjay> like someone's gonna jump out of that door any second
[05:21:59 CEST] <SpicySalt> is it just me or x265 is slow?
[05:22:19 CEST] <johnjay> idk why but when it zooms, it's always into something like a door or shower
[05:22:26 CEST] <johnjay> giving a creepy "this is the burglar cam" vibe lol
[05:24:24 CEST] <johnjay> anyways waqas good luck with your project
[05:24:41 CEST] <johnjay> i can't even compile ffmpeg let alone do things with it.
[05:29:52 CEST] <SpicySalt> i have i7 ivy bridge  yet x265 is incredibly slow.  what do i need?
[05:33:37 CEST] <c3r1c3-Win> A ton more cores... Maybe some better settings.
[05:58:03 CEST] <Obliterous> Anyone well versed with ffserver available?
[06:02:02 CEST] <johnjay> why Obliterous
[06:02:04 CEST] <johnjay> what is it you need?
[06:04:13 CEST] <Obliterous> Need to figure out how to embed a stream from ffserver into a webpage and have it actually play.
[06:05:10 CEST] <Obliterous> the documentation is somewhat less than stellar
[06:11:26 CEST] <c3r1c3-Win> Obliterous: that more depends on your video player on the web page.
[06:12:23 CEST] <Obliterous> Recommendations?
[06:12:53 CEST] <c3r1c3-Win> For what? A player? Stream format?
[06:13:05 CEST] <Obliterous> both
[06:13:42 CEST] <c3r1c3-Win> Obliterous: Google for open-source html5 video players. Pick one. The one you pick will decide which streaming formats you can use.
[06:13:48 CEST] <Obliterous> nothing I have right now is set in stone except for the server hardware. :-)
[06:14:44 CEST] <c3r1c3-Win> To answer your question a bit more pointedly: No, I don't have any recommended players. I use JWplayer, and want to move away from them.
[06:15:21 CEST] <Obliterous> Okay. that gives me a hint right there.
[06:18:00 CEST] Last message repeated 1 time(s).
[06:18:53 CEST] <c3r1c3-Win> Nothing wrong with the JW-people (nice crew), but I don't want to pay $300+/yr for a player.
[06:19:02 CEST] <Obliterous> My target goal is to take a few webcams from around my house and stream them so that my wife can watch birds etc.
[06:23:07 CEST] <Obliterous> probably going to either use mp4 or flv & video.js
[06:29:04 CEST] <kepstin> SpicySalt: x265 is incredibly slow, yes. if it's too slow for you, you should probably be using x264 instead.
[06:29:19 CEST] <SpicySalt> x265 has better compression
[06:29:24 CEST] <kepstin> that said, make sure your x265 build has assembly optimizations enabled and working
[06:29:35 CEST] <SpicySalt> kepstin how do i do that
[06:29:41 CEST] <kepstin> if you tweak x265 to run as fast as x264, it no longer has better compression :/
[06:30:06 CEST] <c3r1c3-Win> ^
[06:30:15 CEST] <kepstin> the way x265 works is that if you wait longer than x264, you can get better compression than the slowest x264 preset
[06:30:46 CEST] <kepstin> (well, ignoring placebo because nobody should use placebo)
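kepstin's tradeoff can be sketched as two example invocations (a sketch only; `in.mkv`, the output names, and the crf values are hypothetical): x265 only keeps its compression advantage if you give it a slow preset and the extra time.

```shell
# x264 at a slow-ish preset is the practical fast option
x264_cmd="ffmpeg -i in.mkv -c:v libx264 -preset slower -crf 20 out_x264.mkv"

# x265 needs a slow preset (and far more time) before it
# out-compresses even the slowest x264 preset
x265_cmd="ffmpeg -i in.mkv -c:v libx265 -preset slow -crf 22 out_x265.mkv"

echo "$x264_cmd"
echo "$x265_cmd"
```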
[06:50:48 CEST] <Obliterous> Hrmmmmm
[06:51:04 CEST] <Obliterous> new error now that I'm using video.js
[06:51:42 CEST] <Obliterous> ' Error writing output header for stream 'pine.mp4': Invalid argument'
[07:47:37 CEST] <JohnDoe_71Rus> https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos if i have only 2 sources. is the way set dummy stream?
[08:16:04 CEST] <JohnDoe_71Rus> ok. 2 sources. what is wrong? http://paste.ubuntu.com/24813694/
[08:56:05 CEST] <JohnDoe_71Rus> fixed some error messages. command http://paste.ubuntu.com/24813825/ log http://paste.ubuntu.com/24813827/ no translation (
[09:03:13 CEST] <JohnDoe_71Rus> without -y option no error, but no translation too
[09:16:57 CEST] <Remi73> Hi there, I have a question concerning h264 decoder & hw acceleration. Is it possible to build libavcodec without any software decoding capability (for h264), but with the hardware acceleration capability ?
[09:17:22 CEST] <Remi73> My conclusion after trying different configurations is that it's not possible to use hwaccel with the h264 decoder disabled, is that right?
[14:08:32 CEST] <Tatsh> :) alias ffhwsupported='for i in encoders decoders filters; do echo $i:; ffmpeg -hide_banner -${i} | egrep -i "npp|cuvid|nvenc|cuda|vaapi|vdpau|vda|dxva2|nvdec|qsv"; done'
[14:09:02 CEST] <Tatsh> with nvenc, use constqp and -qp <some number>, like 18
[14:09:17 CEST] <Tatsh> it's not extremely predictable especially across different resolutions
[14:09:34 CEST] <Tatsh> but it is nice to process progressive 480p at 60x
[15:02:29 CEST] <Tatsh> furq, where can i find an nnedi ASIC and use it :P
[15:33:58 CEST] <Fyr> guys, I see that Zeranoe's ffmpeg doesn't support Intel QSV.
[15:34:05 CEST] <Fyr> does it?
[15:34:41 CEST] <Fyr> I have core i7, it would be great if I could use hwaccel here.
[15:36:02 CEST] <furq> i'm pretty sure it does
[15:36:21 CEST] <furq> that whole fuss about xp support being dropped was because zeranoe builds with libmfx
[15:37:38 CEST] <Fyr> furq, how do you make FFMPEG discover Intel CPU?
[15:37:45 CEST] <furq> shrug
[15:37:52 CEST] <furq> i don't have a cpu with quicksync
[15:40:32 CEST] <BtbN> It should just work.
[15:40:39 CEST] <BtbN> Everything it needs comes with the Intel Driver
[15:41:41 CEST] <Fyr> BtbN, standard Linux driver omits it?
[15:42:00 CEST] <BtbN> QSV is mostly a Windows thing
[15:42:19 CEST] <BtbN> There is some horrible wrapper to use it on Linux as well, but there is not really a point
[15:45:20 CEST] <Tatsh> it's too bad nvidia's deinterlace -deint is pretty limited
[15:45:33 CEST] <BtbN> limited?
[15:45:39 CEST] <Tatsh> for content coming from VHS it's tolerable for things to be broadcast immediately
[15:45:53 CEST] <BtbN> It deinterlaces the video, what else do you want it to do?
[15:45:56 CEST] <Tatsh> BtbN, yea it's for really really predictable content
[15:46:04 CEST] <Tatsh> similar to yadif
[15:46:19 CEST] <Tatsh> otherwise, i'm sticking to nnedi
[15:46:32 CEST] <Tatsh> but it is cool that you can crop and resize in the hardware
[15:46:48 CEST] <BtbN> last time I tested nnedi it was ridiculously slow to the point of being useless
[15:46:55 CEST] <Tatsh> not for me; i can wait
[15:46:56 CEST] <Fyr> BtbN, is there a way to combine hwaccels?
[15:47:06 CEST] <BtbN> like, 10 seconds per frame slow
[15:47:08 CEST] <Tatsh> yea
[15:47:10 CEST] <Fyr> for instance, using opencl, h264_nvenc etc.
[15:47:11 CEST] <Tatsh> it's 15 fps for me
[15:47:15 CEST] <Tatsh> for 480p
[15:47:32 CEST] <Tatsh> but that's the difference between really jumpy video and cleaned up video
[15:47:47 CEST] <Tatsh> with typical deinterlacing i get extremely jumpy video and that's not what it's like prior to deinterlacing
[15:48:30 CEST] <BtbN> There is basically no opencl accel in ffmpeg.
[15:48:37 CEST] <Tatsh> nope
[15:49:02 CEST] <BtbN> There are two filters that support it, and due to the way it's implemented its usefulness is limited
[15:49:04 CEST] <Fyr> BtbN, ok, dxva2, cuvid and h264_nvenc.
[15:49:14 CEST] <Tatsh> basically if -deint can't work for your content, use nnedi  :)
[15:49:18 CEST] <Tatsh> just be prepared to wait
[15:49:23 CEST] <Fyr> BtbN, it offloads the CPU, it's very useful.
[15:49:37 CEST] <BtbN> they all use the same decode hardware
[15:49:37 CEST] <Tatsh> i'm getting 20X using my GPU's capabilities
[15:49:40 CEST] <BtbN> there is no point in mixing APIs
[15:49:41 CEST] <Fyr> when I convert via h264_nvenc for my phone, my CPU load ~10%.
[15:49:48 CEST] <Tatsh> with -deint adaptive -crop -resize
[15:49:52 CEST] <Tatsh> no filters her
[15:49:53 CEST] <Fyr> it's very handy.
[15:49:53 CEST] <Tatsh> here*
[15:49:58 CEST] <Tatsh> no need for scale_npp
[15:50:01 CEST] <BtbN> If you are not in any kind of real-time time pressure, there is no point in using a hwaccel
[15:50:05 CEST] <Tatsh> it's really nice
[15:50:09 CEST] <Tatsh> BtbN, but i do like saving time
[15:50:34 CEST] <BtbN> if you are ok with paying with a horrible quality or massive increase in bitrate
[15:51:23 CEST] <Fyr> BtbN, sometimes there is a real-time pressure.
[15:51:27 CEST] Action: JEEB would rather use some lolfast preset than hwaccel
[15:51:29 CEST] <Tatsh> i imagine the function that takes the arguments uses this x format
[15:51:40 CEST] <Fyr> right now, I'm converting all the episodes of Desunoto.
[15:51:40 CEST] <Tatsh> for crop
[15:51:53 CEST] <Tatsh> ffmpeg -y -c:v h264_cuvid -deint adaptive -crop 10x10x20x36 -resize 640x480
[15:52:00 CEST] <Tatsh> adaptive is far better than bob
[15:52:05 CEST] <Tatsh> but it's not as fast
[15:52:41 CEST] <Fyr> Tatsh, hardware decoding is very fast.
[15:52:47 CEST] <BtbN> keep in mind you have to manually specify the input framerate for adaptive deint
[15:52:47 CEST] <Tatsh> yup
[15:52:48 CEST] <Fyr> and really helpful.
[15:52:56 CEST] <Tatsh> BtbN, i do?
[15:52:57 CEST] <BtbN> Decoders in ffmpeg are unable to double the framerate
[15:53:02 CEST] <BtbN> So you have to do so manually
[15:53:03 CEST] <Tatsh> it's encoding to 29.97
[15:53:13 CEST] <BtbN> for 25i content, it's 50 fps
[15:53:37 CEST] <Tatsh> the original content is 29.97 but unfortunately not marked correctly in the metadata
[15:53:44 CEST] <Tatsh>     Stream #0:0: Video: h264 (High), yuv420p(progressive), 720x480 [SAR 8:9 DAR 4:3], 29.97 fps, 29.97 tbr, 1k tbn, 59.94 tbc (default)
[15:53:52 CEST] <BtbN> just pass the correct doubled framerate as input option to cuvid
[15:53:57 CEST] <BtbN> or to ffmpeg itself rather
[15:53:58 CEST] <Tatsh> -r ?
[15:54:02 CEST] <Tatsh> before the -i
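BtbN's point as a sketch (filenames hypothetical): adaptive deinterlacing emits one frame per field, so for 29.97i material the doubled field rate, 60000/1001, goes as an input option, i.e. before `-i`.

```shell
# an input-side -r declares the post-deinterlace (field) rate;
# 60000/1001 = 59.94, double the 29.97 frame rate
field_rate="60000/1001"
cmd="ffmpeg -r $field_rate -c:v h264_cuvid -deint adaptive -i in.mkv -c:v h264_nvenc out.mp4"
echo "$cmd"
```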
[15:54:04 CEST] <Nacht> When using -maxrate. Is that the maximum bitrate you'd like to have ? or is it the maximum difference to that of your set bitrate with b:v ?
[15:54:43 CEST] <DHE> Nacht: it's the bitrate of the entire container (mpegts, right?). including the overhead
[15:54:48 CEST] <Tatsh> BtbN, what is the correct way to capture interlaced content to h264?
[15:54:54 CEST] <Tatsh> my capture card is doing this
[15:55:02 CEST] <Tatsh> i want it to correctly mark this content
[15:55:05 CEST] <BtbN> what?
[15:55:20 CEST] <DHE> using libx264, add parameter -flags ildct  to force interlaced encoding
[15:55:33 CEST] <Tatsh> i did that
[15:55:34 CEST] <Tatsh> -flags +ilme+ildct
[15:55:44 CEST] <Tatsh> and the line is yuv420p(progressive)
[15:55:51 CEST] <Nacht> DHE: I see. So it differs per container ? (TS vs MP4) ?
[15:55:52 CEST] <BtbN> doesn't the card just give it to you correctly?
[15:55:53 CEST] <Tatsh> it doesn't matter really
[15:56:18 CEST] <Tatsh> this is the card
[15:56:19 CEST] <Tatsh>     Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 720x480, 165722 kb/s, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc
[15:56:43 CEST] <DHE> Nacht: only mpegts supports muxrate. it is designed for real-time streaming and has a "null" packet concept which allows it to pad the stream to meet the muxrate
[15:57:51 CEST] <Nacht> DHE: I was talking about maxrate, not muxrate
[15:58:32 CEST] <DHE> oh, oops my bad...
[15:58:41 CEST] <Tatsh> BtbN, when i use -r 30000/1001 before -i after -deint, the video is extremely jumpy
[15:58:51 CEST] <DHE> okay. so maxrate specifies the bitrate at which the bitrate buffer is filled
[15:58:51 CEST] <Nacht> np, I can understand the error :)
[15:59:14 CEST] <Tatsh> but without it, the video looks really good
[15:59:19 CEST] <Tatsh> and the framerate is 30
[15:59:22 CEST] <Tatsh> 29.97
[15:59:42 CEST] <BtbN> if you deinterlace it, the framerate doubles.
[15:59:53 CEST] <Tatsh> something else is setting the framerate back to 29.97
[15:59:56 CEST] <Nacht> Hm, yeah I read something similar on: https://trac.ffmpeg.org/wiki/Limiting%20the%20output%20bitrate
[16:00:03 CEST] <Nacht> I just can't picture yet how that works.
[16:00:06 CEST] <Tatsh> ffmpeg -y -c:v h264_cuvid -deint adaptive -crop 10x10x20x36 -resize 640x480 -i capture/raw2/to-cut-willboro-unknown.mkv -c:v h264_nvenc -rc constqp -qp 25 -pixel_format yuv420p -ss 00:06:00 -profile:v high -level 4.1 -aspect 4/3 out2.mp4
[16:00:12 CEST] <BtbN> 30000/1001 looks like 29.97 to me
[16:00:14 CEST] <DHE> in VBV mode, there's a buffer (of size -bufsize) that all encoded frames are pulled from. if an encoded frame is 80 kilobytes, it drains 640,000 bits from the buffer.  if it's at 30fps and the mitrate is 3 megabits, the buffer will be filled by 100,000 bits.
[16:00:28 CEST] <DHE> s/mitrate/maxrate/
[16:00:30 CEST] <Tatsh> for some reason if i specify -r 30000/1001 the output is different
[16:00:33 CEST] <DHE> (max bitrate)
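DHE's arithmetic checks out (plain shell arithmetic, same numbers as the example above):

```shell
frame_kb=80                      # one encoded frame, in kilobytes
drain=$((frame_kb * 1000 * 8))   # bits drained from the VBV buffer: 640000
fps=30
maxrate=3000000                  # 3 megabits per second
refill=$((maxrate / fps))        # bits refilled per frame interval: 100000
echo "drain=$drain refill=$refill"
```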
[16:00:50 CEST] <Tatsh> like this
[16:01:02 CEST] <Tatsh> ffmpeg -y -c:v h264_cuvid -deint adaptive -crop 10x10x20x36 -resize 640x480 -r 30000/1001 -i ...
[16:01:18 CEST] <Nacht> I see
[16:03:09 CEST] <Nacht> Then another related question, but have you ever seen a videoplayer crashing due to the video having high spikes in bitrate even tho the average is still normal ? (15Mbit vs 305Mbit spikes)
[16:03:48 CEST] <JEEB> I mean, if it's some embedded plastic thing it might just not have the buffer for the spike if it goes over its VBV/HRD capabilities
[16:03:51 CEST] <DHE> I can't say I've heard of that specifically, but that is a HUGE spike...
[16:04:13 CEST] <Nacht> Yeah, it sure is
[16:04:55 CEST] <Nacht> It's a player running on a GearVR, which crashes saying it doesn't have enough resources, and that wouldn't surprise me with those numbers
[16:04:58 CEST] <Tatsh> i'm pretty comfortable with nvenc i can switch over from libx264 now :)
[16:05:05 CEST] <Nacht> Sad thing is, it doesn't always crash. Just occasionally
[16:05:16 CEST] <Tatsh> still have to encode and encode to check quality though
[16:05:40 CEST] <Nacht> I'm experimenting with 2-pass encoding in combination with maxrate/bufsize
[16:05:54 CEST] <DHE> how long does this spike last? is it just one frame? keyframes can be large
[16:05:55 CEST] <BtbN> 2pass only makes sense for cbr
[16:06:24 CEST] <Tatsh> BtbN, if you are trying to stream live how do you maintain the bitrate then?
[16:06:34 CEST] <BtbN> not with 2pass
[16:06:41 CEST] <BtbN> How would that even work with live content?
[16:06:42 CEST] <DHE> I use x264 with a cranked read-ahead when 2-pass isn't possible
[16:06:56 CEST] <Tatsh> that's what i'm thinking
[16:06:59 CEST] <DHE> it's loosely like 2-pass, but only sees a few seconds into the future (at best)
[16:08:31 CEST] <furq> 14:46:48 ( BtbN) last time I tested nnedi it was ridiculously slow to the point of being useless
[16:08:41 CEST] <furq> when people say nnedi they normally mean with vapoursynth
[16:08:41 CEST] <Nacht> We're using a HLS VOD container.
[16:08:50 CEST] <furq> which is still slow but is at least frame multithreaded
[16:09:21 CEST] <Nacht> But judging from your reactions, I should really read more upon the technique of 2 pass
[16:09:37 CEST] <BtbN> you do the entire encode twice
[16:09:44 CEST] <BtbN> so you know in advance where the video will need more bitrate
[16:10:05 CEST] <BtbN> it's only ever useful if you target a specific filesize
[16:10:08 CEST] <Tatsh> so like -deint adaptive is okay but nnedi is still better
[16:10:23 CEST] <Tatsh> not jumpy but still blocky in some areas of this content i have
[16:10:40 CEST] <Nacht> Yeah I was hoping to use 2pass to reduce the giant spikes in bitrate. But hearing what you say wouldn't really help with that, it could even create more spikes
[16:10:50 CEST] <DHE> Nacht: 2-pass helps, but it depends on the situation. using -b and -maxrate at the same value will help a bit.
[16:10:56 CEST] <BtbN> if you don't want spikes, use a small vbv buffer size
[16:10:59 CEST] <DHE> Nacht: select a sane bufsize
[16:11:04 CEST] <furq> Nacht: are you setting -bufsize
[16:11:06 CEST] <Tatsh> https://i.imgtc.com/nOGQoNX.png see the airplane figure on the right
[16:11:10 CEST] <Tatsh> all blocky
[16:11:16 CEST] <Nacht> I was experimenting with both
[16:11:31 CEST] <BtbN> that's not blocky, that just not properly deinterlaced.
[16:11:37 CEST] <DHE> I would start with a bufsize of about 1/2 the bitrate. also make -b and -maxrate identical
[16:11:42 CEST] <Nacht> I luckily found an idle server with lots of power, cause my poor laptop was slowly dying. It's in HEVC as well :/
[16:11:44 CEST] <DHE> see how that turns out
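DHE's starting point as a sketch (bitrate value and filenames hypothetical): `-b:v` and `-maxrate` identical, `-bufsize` at about half the bitrate.

```shell
rate_k=15000                 # target bitrate in kbit/s
buf_k=$((rate_k / 2))        # bufsize ~ half the bitrate
cmd="ffmpeg -i in.mp4 -c:v libx264 -b:v ${rate_k}k -maxrate ${rate_k}k -bufsize ${buf_k}k out.mp4"
echo "$cmd"
```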
[16:11:50 CEST] <DHE> oh... oh dear...
[16:12:04 CEST] <Nacht> Pushing a load of 30 on my server is funny tho :)
[16:12:34 CEST] <Nacht> 2048x2048 HEVC, always fun :)
[16:12:53 CEST] <DHE> pfft. my video encoder server was at load average of 120.00 most of yesterday
[16:12:59 CEST] <Tatsh> if i encode with bob it will be intolerable
[16:14:20 CEST] <Nacht> Jeez, how much cores does it have ?
[16:14:26 CEST] <Tatsh> but at least with bob it is not so blocky
[16:14:54 CEST] <DHE> Nacht: 2 sockets * 20 cores each * 2 (hyperthreaded) = 80 total threads
[16:15:31 CEST] <Nacht> Niiice :D
[16:15:49 CEST] <furq> not bad
[16:15:56 CEST] <furq> you might be able to run nnedi in realtime on that
[16:16:32 CEST] <DHE> maybe. but I have cgroups set up to limit any given process to 1 socket for memory latency reasons
[16:16:42 CEST] <DHE> ...
[16:17:25 CEST] <Tatsh> BtbN, there's no perfect live deinterlace filter
[16:17:28 CEST] <Tatsh> :|
[16:17:33 CEST] <furq> s/live//
[16:17:49 CEST] <Tatsh> nnedi does wonders at the extreme cost of time
[16:17:54 CEST] <Nacht> Another question, but do you lads know why, when I use FFprobe on HEVC, I don't get any codec_picture_numbers when using -show_frames ? Is it's a HEVC thing ?
[16:18:00 CEST] <Tatsh> -deint bob messes up colours for me not sure why
[16:18:11 CEST] <Tatsh> -deint adaptive works well most of the time but i gave an example where it didn't work
[16:18:30 CEST] <Tatsh> yadif is similar in this regard
[16:18:41 CEST] <furq> are you sure that's not bff being treated as tff or something
[16:18:59 CEST] <Tatsh> well with -deint on nvenc i have no way to specify bff or tff
[16:19:03 CEST] <furq> that floating line at the bottom of the plane looks pretty suspect
[16:20:12 CEST] <Tatsh> idet output not so great on this input
[16:20:15 CEST] <Tatsh> [Parsed_idet_0 @ 0x10d2280] Repeated Fields: Neither:  8600 Top:   119 Bottom:   136
[16:20:15 CEST] <Tatsh> [Parsed_idet_0 @ 0x10d2280] Single frame detection: TFF:   562 BFF:  5519 Progressive:    56 Undetermined:  2718
[16:20:15 CEST] <Tatsh> [Parsed_idet_0 @ 0x10d2280] Multi frame detection: TFF:   777 BFF:  7993 Progressive:     0 Undetermined:    85
[16:20:33 CEST] <Tatsh> basically it seems it's BFF
[16:20:36 CEST] <furq> that sure looks like bff
[16:22:46 CEST] <Tatsh> yea and unfortunately cuvid.c is hard-coded to use tff if it encounters an interlaced frame
[16:22:57 CEST] <Tatsh> https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/cuvid.c#L624
[16:23:12 CEST] <BtbN> no it's not.
[16:23:23 CEST] <furq> yeah that's hardcoded to use whatever the frame flags say
[16:23:51 CEST] <furq> i think you can remux and rewrite those? not sure
[16:23:53 CEST] <Tatsh> so is it possible the original video has frames badly marked?
[16:24:28 CEST] <Tatsh> i encoded it with -flags +ilme+ildct
[17:25:54 CEST] <kepstin> Tatsh: you can try using -vf setfield=bff - if that fixes it, then yeah, the frames are incorrectly marked as tff rather than bff.
[17:26:31 CEST] <kepstin> but yeah, I don't think ffmpeg can edit that without re-encoding
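kepstin's suggestion as a sketch (software decode path; filenames hypothetical): `setfield` only relabels the frames, then yadif deinterlaces using the corrected bottom-field-first order.

```shell
cmd="ffmpeg -i in.mkv -vf setfield=bff,yadif -c:v libx264 out.mkv"
echo "$cmd"
```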
[17:31:12 CEST] <furq> isn't the deinterlacing done by cuvid/nvdec/whatever it's called this week
[17:31:19 CEST] <furq> so presumably before it gets to the filterchain
[17:36:23 CEST] <kepstin> no idea. I mean, I suppose you should be able to tell the hwaccel not to deinterlace and do it in software, maybe?
[17:38:54 CEST] <furq> i think he specifically wants nvdec's deinterlacing for speed
[17:38:55 CEST] <Filystyn> so anywone knows the answer?
[17:39:06 CEST] <Filystyn> to my question
[17:39:21 CEST] <furq> yes, but i don't know what the question is
[17:39:46 CEST] <Filystyn> ok
[17:39:50 CEST] <Filystyn> Im going to ask now
[17:39:57 CEST] <Mavrik> We're ready!
[17:41:09 CEST] <Filystyn> When I use the C API ffmpeg function to count the metadata:    printf( "%d", av_dict_count( av_formcont->metadata ) );   it returns 0
[17:41:19 CEST] <Filystyn> Everything would be ok if there were no metadata, but the thing is
[17:41:44 CEST] <Filystyn> this prints the metadata and general info to stderr:   av_dump_format( av_formcont, 0, PLAYFILE, 0 );
[17:42:07 CEST] <Filystyn> on mp3, for example, I see the data, but on my ogg file I don't, like there is some trick to it
[17:42:10 CEST] <Filystyn> i can't see
[17:42:34 CEST] <Filystyn> the second function always sees all the metadata
[17:43:00 CEST] <Filystyn> the first one does not, and the same goes for reading with
[17:43:03 CEST] <Filystyn>   while( ( tag = av_dict_get( av_formcont->metadata,  "", tag,
[17:43:31 CEST] <Filystyn> I did look at the source but I fail to understand what it is doing  ( dump.c )
[17:43:52 CEST] <Filystyn> it uses the function I use and does some extra things I totally don't get
[18:08:57 CEST] <Guest1318> We are using the libav api to encode a series of microscope pictures using ffv1 into an mkv container. Question: Is there any support for mkv segments in the api? Where should i start to look? Thanks!
[18:24:39 CEST] <DHE> if you mean making multiple small segments, you can either do it yourself or use the 'segment' output format to assist you.
[18:26:39 CEST] <Filystyn> where is the fucking support ;-)
[18:29:53 CEST] <Filystyn> ok i got it ;-)
[18:31:37 CEST] <Guest1318> thanks for the support =), by small segments i mean chunks of ~1000 images that i would consider logically connected (and fit into RAM when decoded). From https://matroska.org/technical/diagram/index.html i see that there are segments in the container format, that can be chained (as far as i understood). Where should i start looking in the libav for this?
[18:32:19 CEST] <Guest1318> Or from the other side: what do you (DHE) mean by "you can do it yourself"?
[18:35:00 CEST] <DHE> well if you're getting into the nitty gritty of mkv then that's beyond my level of expertise. I just figured you meant you wanted the video cut into, say, 60 second chunks per file
[18:38:01 CEST] <furq> hardly anything supports multi-segment matroska afaik
[18:38:27 CEST] <furq> you're probably better off just using the segment muxer
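The segment-muxer route furq suggests, as a sketch (names and segment length hypothetical; `-segment_frames` with explicit frame numbers is an alternative to the time-based split shown):

```shell
# cut the ffv1 encode into numbered mkv chunks at ~60 s boundaries
cmd="ffmpeg -i input.mkv -c:v ffv1 -f segment -segment_time 60 chunk_%03d.mkv"
echo "$cmd"
```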
[18:54:52 CEST] <Nobliterous> Anyone know the current method to get ffserver to stream webm? its not accepting libvpx as a valid codec even though its installed
[18:55:50 CEST] <Guest1318> thanks DHE, i really did mean mkv segments, which I would rather not use (thanks furq). The reason I am thinking about this is that I would like to implement some sort of random access that would allow me to decode e.g. images 1000-1100 without decoding the ones before or after. Am I right to assume that the MetaSeek part of the mkv spec could help me there? Is this supported by libav?
[18:57:06 CEST] <DHE> mkv-specifics aside, video playback requires starting at a key frame. seek accuracy is only as coarse/fine as how often keyframes come. so set AVCodecContext->gop_size to how many frames you want your seek accuracy to be (worst case)
[18:57:23 CEST] <DHE> actually you said ffv1, so that's a nonissue
[19:03:40 CEST] <Filystyn> what is this https://ffmpeg.org/doxygen/3.0/structAVFormatContext.html#a58c8c4d0ea974e0fcb0ce06fb1174f9f
[19:03:46 CEST] <Filystyn> what is that number of programs?
[19:05:14 CEST] <DHE> some formats (notably mpegts) support carrying multiple videos at once. eg: an over-the-air broadcast near where I live carries both the HD and SD version of the same channel on the same frequency. ffmpeg will report 2 programs on that feed
[19:05:44 CEST] <Filystyn> ihm
[19:07:09 CEST] <BtbN> OTA broadcasts usually have 10+ channels in one mpegts mux
[19:23:01 CEST] <Guest1318> What i would like to do is fast (frame accurate) seeking in a fully sequential stream of images. Where should i begin , trying to understand how seeking is implemented in ffmpeg (i do not see an obvious example for it in the doc/examples folder)?
[19:29:15 CEST] <DHE> BtbN: depends on the region I guess. around here it's usually MPEG-2 1080i at 15+ megabits. with a QAM throughput of around 19 megabits that's the end of it.
[19:34:54 CEST] <kepstin> Guest1318: you have to remember that video codecs are designed so that you don't have to sequentially decode all of the images in a stream - instead you only have to start at a keyframe. A keyframe is the start of a GOP (group of pictures), you can set the length of a gop with the '-g' option in ffmpeg.
[19:35:19 CEST] <kepstin> Guest1318: Most file formats (including mkv) automatically index the locations of the keyframes, so a player will jump immediately to the nearest keyframe and start decoding there
[19:35:37 CEST] <kepstin> Guest1318: in otherwords you don't need to do anything special, the codecs already do what you want
[19:36:23 CEST] <BtbN> DHE, mpeg2 for 1080? wat?
[19:36:34 CEST] <JEEB> Japan also does MPEG-2 Video 1080i
[19:36:42 CEST] <kepstin> Guest1318: so the only thing you need to do is use an existing player with a frame accurate seek option (e.g. mpv), or make sure to use the ffmpeg seek apis correctly.
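A sketch of frame-accurate extraction with the ffmpeg CLI (timestamp, frame count, and filenames hypothetical): with modern ffmpeg, `-ss` before `-i` jumps to the nearest preceding keyframe and then decodes forward to the exact requested point.

```shell
cmd="ffmpeg -ss 00:00:33.367 -i input.mkv -frames:v 100 -c:v ffv1 cut.mkv"
echo "$cmd"
```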
[19:37:01 CEST] <BtbN> And what kinda small muxes are these, to only contain a single channel? At least on cable/satelite, a single mux is like 50Mbps+
[19:37:03 CEST] <kepstin> satellite providers here in canada are only just finally moving streams from mpeg2 1080i to avc.
[19:37:03 CEST] <DHE> BtbN: over the air? yeah..
[19:37:12 CEST] <DHE> ATSC is 18.8 megabit iirc
[19:37:16 CEST] <Soni> does ffmpeg strip JPEG EXIF?
[19:37:28 CEST] <BtbN> if you tell it to
[19:37:42 CEST] <thebombzen> If you want to force it you could use -map_metadata
[19:37:52 CEST] <thebombzen> but it should by default map EXIF data
[19:38:13 CEST] <Soni> does discord strip EXIF?
[19:39:13 CEST] <thebombzen> Yes and no. Discordapp re-encodes a smaller file size preview of uploaded jpegs that afaik don't have the exif metadata. but if a user clicks "open original" they'll get the original file in their browser undoctoered
[19:39:17 CEST] <thebombzen> undoctored*
[19:39:22 CEST] <Soni> hmm
[19:39:27 CEST] <kepstin> Soni: kind of off topic here, but looks like not https://feedback.discordapp.com/forums/326712-discord-dream-land/suggestions/15192411-strip-exif-data-from-uploaded-images
[19:39:29 CEST] <BtbN> does what strip EXIF?
[19:39:34 CEST] <Soni> after -vf transpose it seems ffmpeg strips EXIF
[19:40:02 CEST] <thebombzen> Soni: what EXIF metadata are you looking for? the data that says how it is rotated?
[19:40:17 CEST] <Soni> no
[19:40:18 CEST] <Soni> any
[19:40:34 CEST] <Soni> I did `exiv2 <output from ffmpeg>` and it comes out clean
[19:40:37 CEST] <thebombzen> what if you use -map_metadata 0 (or whatever input # it is)
[19:40:45 CEST] <Soni> the input has exif but the output doesn't
[19:41:42 CEST] <Soni> with command `ffmpeg -i photo.JPG -vf 'transpose=<something>' photo-rot.JPG`
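thebombzen's suggestion as a one-line sketch (whether ffmpeg's jpeg path actually carries the EXIF through is exactly what's being debugged here, so this is an experiment, not a guaranteed fix):

```shell
cmd="ffmpeg -i photo.JPG -vf transpose=1 -map_metadata 0 photo-rot.JPG"
echo "$cmd"
```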
[19:49:53 CEST] <Nobliterous> Why am I getting this error with an mpeg4 stream? ::: Error writing output header for stream 'pine.mp4': Invalid argument
[19:50:42 CEST] <Soni> anyway glad I don't have location services enabled on my phone anyway
[19:54:42 CEST] Action: Obliterous beats on it with a hammer
[19:58:19 CEST] <BtbN> Nothing4You, is your output seekable?
[19:58:38 CEST] <BtbN> Obliterous, ^
[19:58:39 CEST] <BtbN> ...
[19:58:53 CEST] <Nothing4You> mine isn't
[19:58:55 CEST] <Obliterous> nope. just s tream from a webcam
[19:59:06 CEST] <BtbN> that's your input...
[19:59:53 CEST] <Obliterous> ...
[20:00:11 CEST] <Obliterous> I honestly have no clue.
[20:00:12 CEST] <DHE> have you, say, made a named pipe (mkfifo) and tried outputting to that?
[20:01:55 CEST] <Obliterous> here's the pertinent bits of the ffserver.conf : https://pastebin.com/5L3A6cdx
[20:02:23 CEST] <Obliterous> trying to use video.js
[20:03:18 CEST] <furq> ffserver sucks and nobody uses it
[20:04:04 CEST] <furq> if you want to stream to a browser then use the hls muxer directly to serve from the same machine as ffmpeg, or nginx-rtmp otherwise
[20:07:15 CEST] <Obliterous> as I'm aggregating multiple cam streams onto one page, I'll investigate nginx-rtmp. Care to point me in the proper direction?
[20:07:50 CEST] <furq> https://github.com/arut/nginx-rtmp-module/
[20:11:17 CEST] <Obliterous> I'm going to explore ffserver more before I go down the road of changing webservers
[20:11:34 CEST] <furq> you don't need to change web server
[20:13:27 CEST] <Obliterous> Sure seems that way from the docs
[20:14:34 CEST] <furq> you don't have to use this to serve http
[20:14:49 CEST] <furq> you can presumably build nginx without the http module if you want
[20:15:13 CEST] <furq> hls just dumps a playlist and a bunch of mpegts fragments in a directory
[20:15:19 CEST] <furq> you can serve that directory with whatever you want
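furq's HLS route as a sketch (device path, GOP length, segment settings, and output directory are all hypothetical): ffmpeg writes the playlist plus rolling mpegts fragments into a directory, and any web server can serve it.

```shell
cmd="ffmpeg -i /dev/video0 -c:v libx264 -preset veryfast -g 50 \
-f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments \
/var/www/cams/cam1.m3u8"
echo "$cmd"
```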
[20:16:09 CEST] <furq> even if you do want to serve http with it, though, you can just run it on a different port and reverse proxy
[20:16:21 CEST] <Obliterous> that's more pain than it's worth
[20:16:31 CEST] <furq> well yeah there's not much reason to do that
[20:17:02 CEST] <Obliterous> as I said. I'll finish exploring the options with ffserver before I head down that road.
[20:17:11 CEST] <MrZeus1> Parallelism in C++ using arrays and gcc: https://www.youtube.com/watch?v=Pc8DfEyAxzg
[20:17:22 CEST] <furq> i'm pretty confident you're wasting your time
[20:17:27 CEST] <furq> ffserver is basically useless
[20:17:41 CEST] <furq> it shouldn't even exist any more really
[20:47:02 CEST] <thebombzen> Soni: try -map_metadata 0
[20:47:04 CEST] <thebombzen> see what that does
[20:47:15 CEST] <thebombzen> that intentionally maps metadata from input 0
[20:47:19 CEST] <thebombzen> if that doesn't work, that seems like a bug
[21:09:16 CEST] <Tatsh> BtbN, seen tis option?
[21:09:18 CEST] <Tatsh> this*
[21:09:19 CEST] <Tatsh>   -drop_second_field <boolean>    .D.V.... Drop second field when deinterlacing (default false)
[21:09:39 CEST] <BtbN> why would you want that?
[21:09:40 CEST] <Tatsh> not sure yet what it really means, but it's false by default
[21:09:52 CEST] <BtbN> it drops the second field.
[21:10:15 CEST] <Tatsh> you were saying before that the framerate doubles but i'm not experiencing that
[21:12:14 CEST] <BtbN> well, if you drop the second field, it obviously doesn't anymore.
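[Editor's note] `drop_second_field` is a cuvid decoder option and needs NVIDIA hardware, but the same distinction BtbN is making — one output frame per frame vs. one per field — can be shown with the software yadif filter. Input file names are hypothetical; the yadif mode names are real:

```shell
# mode=send_frame keeps the input frame rate (comparable to dropping
# the second field); mode=send_field deinterlaces each field and
# therefore doubles the output frame rate.
SAME_RATE="ffmpeg -i interlaced.ts -vf yadif=mode=send_frame out_25fps.mkv"
DOUBLE_RATE="ffmpeg -i interlaced.ts -vf yadif=mode=send_field out_50fps.mkv"
echo "$SAME_RATE"
echo "$DOUBLE_RATE"
```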
[21:30:32 CEST] <shamb> Hey, I am attempting to encode an audio file in chunks to aac. When I come to concat them together, the audio gradually goes out of sync. It seems like the concat format is not taking into account the audio priming at the start of each chunk. Is there any way around this?
[21:30:36 CEST] <shamb> thanks!
[21:34:43 CEST] <furq> use the demuxer?
[21:34:55 CEST] <kepstin> yeah, you won't be able to do anything about that without decoding and re-encoding the audio
[21:36:12 CEST] <kepstin> it's not just the priming at the start, but also the padding to full frame sizes at the end, etc.
[21:36:53 CEST] <shamb> That's what I was worried about - I am trying to distribute the audio encoding across servers by chunking - I guess this is not possible
[21:37:55 CEST] <kazuma_> shamb same issue with copy /b ?
[21:37:57 CEST] <kepstin> audio encoding's usually so much faster than video that it's not a problem - you could have 1 server do audio, then n-1 doing video chunks, for example
[21:38:12 CEST] <kazuma_> ffmpeg concat always gives me sync issues, but mkvmerge and copy /b does not
[21:38:14 CEST] <kazuma_> i don't know why
[21:39:05 CEST] <shamb> I have tried mkvmerge but found the same problem - will give it another go though
[21:40:00 CEST] <shamb> kepstin: thanks for the info, I see the argument that audio won't need this, but for our use case, encoding the audio as a whole still takes too long
[21:40:13 CEST] <kepstin> mkvmerge's append mode might work, but I think it does it by timestamp manipulation in the container?, so I'm not sure how well it'll survive a transcode
[21:40:23 CEST] <kazuma_> copy /b file1.aac + file2.aac +file3.aac output.aac
[21:40:27 CEST] <kazuma_> worth a try
[21:41:10 CEST] <kepstin> kazuma_: that should provide equivalent results to the ffmpeg concat format (except that ffmpeg can take aac in containers, rather than just raw aac)
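[Editor's note] The concat demuxer furq hinted at, as kazuma_'s `copy /b` would look in ffmpeg (chunk file names hypothetical). Note kepstin's caveat still applies: with `-c copy` the priming/padding samples in each chunk survive the join:

```shell
# The concat demuxer takes a list file; -safe 0 permits arbitrary
# paths, and -c copy joins the AAC chunks without re-encoding.
cat > list.txt <<'EOF'
file 'chunk1.aac'
file 'chunk2.aac'
file 'chunk3.aac'
EOF
CONCAT_CMD="ffmpeg -f concat -safe 0 -i list.txt -c copy output.aac"
echo "$CONCAT_CMD"
```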
[21:42:32 CEST] <kepstin> of course, if you're only doing a small number of concats, the issue's probably not super noticeable - i'd think a typical error of around 20-40ms? But if you have a lot, that adds up :/
[21:45:17 CEST] <shamb> yep, have taken that into consideration
[21:45:23 CEST] <kepstin> and there'll be audible glitches on the joins no matter what you do :/
[21:46:27 CEST] <kepstin> since without re-encoding you can only cut on frame boundaries, which are ~23ms in aac at 44.1 kHz
[21:47:43 CEST] <BtbN> if you make sure your segments are exactly on frame boundaries, it should be fine
[21:47:47 CEST] <BtbN> But that'll be hard
[21:48:12 CEST] <kepstin> well, you have to make sure you know exactly how many samples the encoder adds as priming to get that right :/
[21:48:33 CEST] <kepstin> and you'd probably want to do overlapping segments, then cut a few frames off the start and end
[21:48:39 CEST] <kepstin> might be ok then?
[21:49:48 CEST] <kepstin> it would be hard work just cutting up the audio to encode for that, depending on whether you can seek with sample accuracy in your input file :)
[21:50:29 CEST] <shamb> but then I still have the issue with concatenating afterwards right?
[21:52:30 CEST] <kepstin> if you know that the encoder uses e.g. 2112 priming samples, then you can do something like encode the audio starting 960 samples early, ending 1024 samples late, then drop the first three frames and last frame before concatenating
[21:52:49 CEST] <kepstin> and it'll probably turn out okish
[21:53:10 CEST] <Soni> thebombzen: well the idea is that I *don't* want metadata
[21:53:31 CEST] <Soni> thebombzen: also ffmpeg fails to use exif rotation
[21:54:19 CEST] <kepstin> shamb: and each segment obviously has to be a multiple of 1024 samples long
[21:54:30 CEST] <kepstin> (plus the extra on the start+end)
[21:54:38 CEST] <thebombzen> Soni: it doesn't "fail" it just doesn't do it unless you tell it to
[21:54:46 CEST] <thebombzen> Soni: -map_metadata -1 refuses to map metadata
[21:54:59 CEST] <thebombzen> that's how you intentionally decline to map metadata
[21:55:43 CEST] <thebombzen> in fact, you could even do ffmpeg -i input.jpg -map_metadata -1 -c copy output.jpg to strip all metadata, but there's probably a better way of doing that than ffmpeg.c
[21:56:15 CEST] <Soni> thebombzen: the flag is called -noautorotate IIRC, the default is to rotate
[21:58:00 CEST] <thebombzen> how is that failing
[21:58:04 CEST] <thebombzen> that sounds like succeeding
[21:58:37 CEST] <thebombzen> or rather, what exactly are you trying to do?
[22:00:05 CEST] Action: kepstin thinks about it a bit more, and would actually suggest starting 1984 samples early if the encoder uses 2112 priming samples, so there's at least 1024 real samples in the overlap window
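[Editor's note] The sample arithmetic behind kepstin's overlap-and-trim suggestion, as a minimal sketch. The 2112 priming samples, 1984-sample lead-in, and 1024-sample AAC frame are the figures from the discussion above; the point is that priming plus lead-in lands exactly on a frame boundary:

```shell
# AAC long frames are 1024 samples; assume 2112 encoder priming
# samples, and start encoding 1984 real samples before the cut point.
FRAME=1024
PRIMING=2112
LEAD=1984
head_samples=$((PRIMING + LEAD))           # 2112 + 1984 = 4096
drop_head_frames=$((head_samples / FRAME)) # 4096 / 1024 = 4 whole frames
# 4096 is an exact multiple of 1024, so dropping the first 4 frames
# removes the priming and the overlap with no partial-frame error,
# and leaves >= 1024 real samples in the overlap window.
echo "$head_samples $drop_head_frames"
```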
[22:15:01 CEST] <alexpigment> hey guys. does anyone know if the intel quick sync encoder is the same for all CPUs within a given generation of chips?
[22:15:32 CEST] <alexpigment> in other words, could i get a $40 Celeron G3930 and expect the same H264 encoding performance as a 7700K?
[22:22:47 CEST] <c3r1c3-Win> alexpigment: Depends and generally No.
[22:24:20 CEST] <alexpigment> so is the graphics card number a more accurate factor in determining speed?
[22:24:49 CEST] <alexpigment> for example, if the 7700K has an HD 630 chip, I could find the cheapest CPU with an HD 630 and get the same performance?
[22:24:53 CEST] <jkqxz> The things which matter are, roughly in order of importance, (a) number of video codec blocks, (b) memory bandwidth, (c) graphics clock speed, and (d) number of graphics execution units.  (c) and (d) are both less, so it will be noticeably slower but not hugely.
[22:25:18 CEST] <alexpigment> gotcha
[22:25:20 CEST] <jkqxz> A 7300 would be the same as the 7700K in all of those, and would give the same performance.
[22:25:35 CEST] <alexpigment> i was under the impression that the GPU speed wasn't applicable, so that's good to know
[22:26:25 CEST] <jkqxz> It doesn't vary by very much, so while it's significant it doesn't end up making very much difference in reality.
[22:28:10 CEST] <alexpigment> ok, thanks for the heads up
[22:28:13 CEST] <jkqxz> In terms of chip cost / total codec throughput, the cheapest ones do totally win.  (Though you have to be doing something pretty weird for that to be the only constraint.)
[22:29:12 CEST] <alexpigment> yeah, i'm really just trying to get a few test systems to measure speed and quality on each generation
[22:29:50 CEST] <alexpigment> but you're right in thinking that there could be other bottlenecks
[22:29:59 CEST] <alexpigment> due to the number of CPU cores
[23:51:33 CEST] <Tatsh> the -crop option for h264_cuvid is not quite the same as crop= filter
[23:51:38 CEST] <Tatsh> of course the syntax is different
[23:51:44 CEST] <Tatsh> the way it works is different too
[23:52:07 CEST] <Tatsh> it appears that if i crop on the top by 2 pixels, this affects the right side by 2 pixels
[23:52:34 CEST] <Tatsh> i wrote a function to convert between the normal syntax and nvenc and i'm not getting good results
[23:52:51 CEST] <Tatsh> it seems like i have to add 2 pixels for each side for what i would use with the normal crop filter
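[Editor's note] The conversion Tatsh describes can be sketched as a small shell function, assuming the (top)x(bottom)x(left)x(right) order from the cuvid.c source BtbN links below. The filter syntax `crop=w:h:x:y` gives output size plus offset, so the input dimensions are also needed to compute the bottom/right amounts:

```shell
# crop=w:h:x:y  ->  cuvid "(top)x(bottom)x(left)x(right)"
# top = y, left = x, bottom = in_h - h - y, right = in_w - w - x
to_cuvid_crop() {
  local in_w=$1 in_h=$2 w=$3 h=$4 x=$5 y=$6
  local top=$y left=$x
  local bottom=$((in_h - h - y))
  local right=$((in_w - w - x))
  echo "${top}x${bottom}x${left}x${right}"
}
# e.g. 1920x1080 input, crop=1920:1040:0:20 (20px off top and bottom):
to_cuvid_crop 1920 1080 1920 1040 0 20   # -> 20x20x0x0
```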
[23:53:10 CEST] <BtbN> how would the top affect the right?
[23:53:44 CEST] <Tatsh> i'm using nnedi for deint and every time i use my crop value i got from mpv, and then i do the math to convert it, i get green sides
[23:54:00 CEST] <Tatsh> in particular, green right and bottom
[23:54:18 CEST] <BtbN> Can't reproduce that.
[23:54:43 CEST] <Tatsh> for right now i'm sticking to nnedi with regular crop as there is no speed difference
[23:55:05 CEST] <BtbN> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/cuvid.c;hb=HEAD#l143
[23:55:10 CEST] <BtbN> that's all the cropping code there is
[23:55:10 CEST] <Tatsh> the built-in deint leaves much to be desired
[23:55:45 CEST] <BtbN> complain to nvidia about that.
[23:57:09 CEST] <Tatsh> yea
[23:57:16 CEST] <Tatsh> they need nnedi implemented in hardware so badly
[23:57:20 CEST] <Tatsh> or qtgmc
[23:57:22 CEST] <Tatsh> i'll take either one
[23:58:33 CEST] <Tatsh> hmm
[23:58:41 CEST] <Tatsh> with ffplay i can preview with nnedi and see the effect
[23:58:49 CEST] <Tatsh> so i guess i should crop that way instead
[00:00:00 CEST] --- Sat Jun 10 2017