burek021 at gmail.com
Tue Jun 27 03:05:01 EEST 2017
[00:38:52 CEST] <h0par> hi
[00:39:15 CEST] <h0par> I'm getting error Bad audio codec 2 (Sorenson-H263). Accepted audio codecs: H264
[00:39:42 CEST] <h0par> command was ... -c:a aac
[00:40:36 CEST] <furq> you might want to paste the full command there
[00:40:37 CEST] <h0par> should I install/update anything or change parameters??
[00:41:11 CEST] <kerio> audio? :o
[00:42:01 CEST] <h0par> https://pastebin.com/dZZqZqmX
[00:42:31 CEST] <furq> add -c:v libx264
[00:42:58 CEST] <h0par> I don't have it
[00:43:07 CEST] <h0par> should I recompile ffmpeg ?
[00:44:19 CEST] <iive> flv1, h264 ... flash supported something else too... but i can't remember what.
[00:44:22 CEST] <furq> probably
[00:44:26 CEST] <furq> sorensen spark
[00:44:38 CEST] <furq> and nellymoser audio iirc
[00:45:00 CEST] <furq> i assume every modern streaming site will laugh in your face if you try to send those codecs though
[00:45:13 CEST] <iive> isn't spark from the range of animated gif?
[00:45:15 CEST] <h0par> that's the case
[00:45:21 CEST] <h0par> fb says it's bad
[00:45:24 CEST] <h0par> (Sorenson-H263)
[00:46:41 CEST] <furq> oh wait sorenson spark is flv1 isn't it
[00:46:53 CEST] <furq> vp6 is the other one
[00:47:14 CEST] <furq> but yeah that's still ffmpeg's default video codec in flv1 for some reason
[00:47:29 CEST] <furq> s/flv1/flv/
[00:47:52 CEST] <furq> also idk how you even got a build of ffmpeg without libx264
[00:49:57 CEST] <iive> well, if you build it yourself, on windows...
[00:50:24 CEST] <h0par> Invalid encoder type 'libx264'
[00:50:39 CEST] <h0par> my bad, it was for audio, now it's fine
[00:51:05 CEST] <h0par> what type of person I must be to compile it myself, on windows..
[00:51:57 CEST] <h0par> now it works, with `-c:a copy -c:v libx264`
[00:52:00 CEST] <furq> kerio: just fyi i am still laughing at the idea of an italian radiohead fan
[00:52:03 CEST] <h0par> thanks
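[Editor's note: the working fix above, expanded into a full sketch. The input name, bitrate, and RTMP URL are placeholders; the point is that RTMP ingests such as Facebook Live expect H.264 video, while the flv muxer otherwise defaults to flv1/Sorenson Spark.]

```shell
# Force H.264 video instead of the flv muxer's default (flv1 /
# Sorenson Spark); the audio was already AAC here, so stream-copy it.
ffmpeg -re -i input.mp4 \
    -c:v libx264 -preset veryfast -b:v 2500k -pix_fmt yuv420p \
    -c:a copy \
    -f flv rtmp://live-api.example.com/rtmp/STREAM_KEY
```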
[02:47:59 CEST] <rk[ghost]> can i use ffmpeg to amplify an audio file?
[02:51:16 CEST] <furq> !filter volume @rk[ghost]
[02:51:16 CEST] <nfobot> rk[ghost]: http://ffmpeg.org/ffmpeg-filters.html#volume
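[Editor's note: a minimal use of the volume filter from the linked docs; file names are placeholders.]

```shell
# Double the amplitude; dB notation also works, e.g. volume=6dB.
ffmpeg -i input.wav -af "volume=2.0" amplified.wav
```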
[03:43:51 CEST] <Wallboy> Hello, I'm trying to encode some low res content (between 320x240 and 640x480) using x264 and outputting at the same resolution. On a 4c/8t machine, I can only get ~30% CPU usage. I'm wondering what the bottleneck is that I'm only getting this CPU usage? I've googled a bit and found out it could be because the decoder is not delivering frames fast enough to the encoder?
[03:45:03 CEST] <Wallboy> But if that's the case, what is the bottleneck at the decoder level then? It can't be CPU. I checked my SSD and noticed nothing over 5% usage there
[03:47:29 CEST] <furq> you can benchmark the decoder with ffmpeg -i foo.mp4 -f null -
[03:48:58 CEST] <Wallboy> 17500 fps
[03:49:10 CEST] <Wallboy> my SD content was getting ~620
[03:50:39 CEST] <Wallboy> my command i'm using is ffmpeg -y -i input.mp4 -c:v libx264 -preset faster -crf 23 output.mp4
[03:51:08 CEST] <Wallboy> my real command is much longer with a lot of filters, but even this simple one I get the same 30% or so CPU usage
[03:51:26 CEST] <furq> you're probably being bottlenecked by the filters then
[03:51:43 CEST] <Wallboy> no no, i took the filters out to get that out of the equation
[03:51:43 CEST] <furq> oh wait nvm i can't read
[03:53:09 CEST] <Wallboy> i am using a year old or so custom build of ffmpeg... i'll try something more official to rule that out
[03:55:43 CEST] <Wallboy> 1000 fps on the latest x64 static nightly build, cpu usage up to 40% now. But my custom build is x86, so that may be the difference... Either way it's still not loading the CPU
[03:56:15 CEST] <furq> does the source have audio
[03:57:01 CEST] <Wallboy> it does
[03:57:08 CEST] <furq> are you using -c:a copy
[03:57:22 CEST] <Wallboy> ffmpeg -y -i input.mp4 -c:v libx264 -preset faster -crf 23 output.mp4
[03:57:29 CEST] <furq> yeah that's encoding the audio
[03:57:34 CEST] <furq> add -c:a copy
[03:58:57 CEST] <Wallboy> you're on to something sir, 2600 fps 75% usage now
[04:00:20 CEST] <Wallboy> why would the default audio encoding be causing that then?
[04:00:49 CEST] <furq> it's singlethreaded
[04:00:50 CEST] <Wallboy> my actual command line filtergraph uses lib-rubberband, so I can't actually use -c:a copy
[04:01:06 CEST] <Wallboy> any suggestions on a multithreaded audio codec?
[04:01:12 CEST] <furq> i don't think there is such a thing
[04:01:40 CEST] <furq> there definitely isn't one in ffmpeg for aac
[04:02:19 CEST] <furq> are you doing a constant speed change
[04:02:26 CEST] <furq> you might be able to do that without reencoding
[04:03:19 CEST] <Wallboy> [aconcat]rubberband=pitch=1.05[apitch]
[04:03:29 CEST] <furq> is that pal to ntsc
[04:03:40 CEST] <furq> if the source is h264 then you don't need to reencode
[04:04:18 CEST] <furq> http://vpaste.net/nojjo
[04:04:23 CEST] <Wallboy> no the video input is h264 encoded
[04:04:38 CEST] <furq> yeah you can just remux the video stream with new timestamps
[04:04:43 CEST] <furq> you do need to reencode the audio though
[04:06:10 CEST] <furq> also i meant pal to film, not pal to ntsc
[04:08:45 CEST] <Wallboy> if i have to re-encode the audio anyway, wouldn't it be the same speed if I separated them and muxed them back together after?
[04:10:35 CEST] <furq> separated them?
[04:10:59 CEST] <furq> this won't be any faster than reencoding if you're bottlenecked by the audio, but you won't lose any video quality
[04:11:12 CEST] <furq> and you can run multiple jobs in parallel
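[Editor's note: one way to sketch the remux-with-new-timestamps approach for a constant speed change (here PAL 25 fps to film 24000/1001 fps; file names and the exact atempo factor are assumptions depending on the source): dump the raw H.264 elementary stream, then read it back at the new rate, re-encoding only the audio.]

```shell
# 1) Extract the video elementary stream untouched.
ffmpeg -i input.mp4 -map 0:v -c copy -bsf:v h264_mp4toannexb raw.h264

# 2) Remux it with new timestamps (24000/1001 fps) and slow the
#    audio to match; only the audio is re-encoded.
ffmpeg -r 24000/1001 -i raw.h264 -i input.mp4 \
    -map 0:v -c:v copy \
    -map 1:a -af "atempo=0.95904" -c:a aac \
    output.mp4
```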
[04:11:13 CEST] <Wallboy> i'm trying to find a better audio codec
[04:11:30 CEST] <furq> your only real choices are aac and libfdk_aac
[04:11:41 CEST] <furq> and you probably don't have a build with fdk because it's not distributable
[04:15:32 CEST] <Wallboy> how is it that I can't even find threads talking about this issue of aac bottlenecking encoding speeds? You would think many would have run into this issue...
[04:17:26 CEST] <furq> most people aren't encoding video at 30x realtime
[04:29:21 CEST] <Wallboy> libmp3lame encodes fast... until i throw in the rubberband filter, then it drops the speed again :/
[04:30:30 CEST] <furq> like i said, just run jobs in parallel
[04:30:37 CEST] <furq> xargs will do it if you're on *nix
[04:31:09 CEST] <Wallboy> windows
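[Editor's note: the xargs pattern mentioned above, sketched for *nix; on Windows, launching several invocations from separate prompts, or a PowerShell loop, achieves the same effect.]

```shell
# Encode up to 4 files at once; since the audio encoder is
# single-threaded, parallel jobs recover the otherwise idle cores.
printf '%s\0' *.mp4 | xargs -0 -P 4 -I {} \
    ffmpeg -y -i {} -c:v libx264 -preset faster -crf 23 -c:a aac "out_{}"
```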
[04:31:25 CEST] <atomnuker> if you need to speed up the aac encoder use -aac_coder fast
[04:36:14 CEST] <Wallboy> when adding that I get: "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height"
[04:36:46 CEST] <Wallboy> and "Coders other than twoloop require -strict -2 and some may be removed in the future"
[04:38:36 CEST] <Wallboy> added -strict -2 and it's encoding now... however we're down to 15% CPU usage :P
[04:41:40 CEST] <Wallboy> trying some other audio codecs, and it's that rubberband filter that seems to be the bottleneck
[04:42:40 CEST] <atomnuker> why are you using an ffmpeg version which is almost a year old?
[04:42:51 CEST] <atomnuker> that option was changed in august last year
[04:42:57 CEST] <Wallboy> because lib-rubberband was built with it
[04:43:32 CEST] <Wallboy> none of the official builds have rubberband
[04:43:44 CEST] <atomnuker> fair enough, not much has changed to the encoder since then
[04:45:29 CEST] <Wallboy> i'm curious how the standalone rubberband utility performs, going to try it
[04:46:14 CEST] <Wallboy> uggh, but I'm concatenating other videos and audio streams in my main ffmpeg command line, and only using rubberband on the main video/audio... that complicates things
[04:50:55 CEST] <Wallboy> bah, this rubberband standalone only accepts wav files
[04:52:33 CEST] <androclu`> hey, all.. i'm using ffmpeg and since upgrading to debian stretch 9.0, i'm getting segmentation faults.
[04:52:49 CEST] <Wallboy> kind of makes you wonder if that's why it's so slow, ffmpeg has to convert it to wav behind the scenes first?
[04:53:01 CEST] <furq> it does have to do that but that's not what's slow
[04:53:04 CEST] <androclu`> i've tried stable release, a static build from someone else, and my own compile. they're all seg-faulting, and i am dead in the water with 3 projects to get out the door
[04:53:37 CEST] <androclu`> the segfault unfortunately gives me absolutely no information about what it didn't like. it just dumps. Any idea how I can pursue it?
[04:53:57 CEST] <furq> gdb?
[04:54:13 CEST] <furq> if the static build is segfaulting after an upgrade then i can only assume it's your libc
[04:54:27 CEST] <furq> that's the only bit that would have changed
[04:54:35 CEST] <androclu`> hmmm...
[04:54:41 CEST] <furq> does ffmpeg from the repos work
[04:54:50 CEST] <androclu`> no, it segfaults, too
[04:54:53 CEST] <furq> fun
[04:54:59 CEST] <furq> i never had any issues with ffmpeg on stretch
[04:55:09 CEST] <androclu`> they all 'work' for a while, and i put it on a script and walk away, and a few minutes later it dies
[04:55:14 CEST] <androclu`> hmm...
[04:55:56 CEST] <androclu`> i was thinking maybe RAM problems, but nothing else is showing a problem
[04:56:20 CEST] <androclu`> or maybe something weird about an audio or video codec.. but then working on several different files gives the same result
[04:56:40 CEST] <furq> i'd have thought you'd get actual lockups/reboots from hardware issues
[04:56:53 CEST] <androclu`> i have to admit, after all these years using linux, i've no idea how to use the debugger (gdb) to debug something like this
[04:56:57 CEST] <furq> but yeah iirc relaxed's static builds come with a debug binary
[04:57:02 CEST] <androclu`> furq: exactly so
[04:57:11 CEST] <furq> run that under gdb and type "bt" when it segfaults
[04:58:47 CEST] <androclu`> furq: Thank you. Sorry, I'm lame tonight. I googled relaxed ffmpeg but don't see a repo or site ..(?)
[04:59:00 CEST] <furq> https://www.johnvansickle.com/ffmpeg/
[04:59:28 CEST] <furq> nvm there's no debug binary
[05:00:08 CEST] <furq> and debian has helpfully stopped making a ffmpeg-dbg package
[05:03:45 CEST] <Wallboy> so it's absolutely that rubberband filter causing it. i just converted the audio to wav and extracted it and ran it through the standalone rubberband and i'm getting the exact same CPU usage
[05:04:10 CEST] <furq> i take it -af atempo isn't high enough quality for you
[05:04:39 CEST] <Wallboy> i have used that in the past before, i may have to try it again though, but rubberband definitely did sound better
[05:06:29 CEST] <Wallboy> i also had to go through a bunch of other filters to make the audio the same length
[05:06:37 CEST] <Wallboy> apitch, atempo, and something else
[05:07:05 CEST] <Wallboy> had to calculate the samplerate as well which can be variable... i just remember the whole thing being annoying to figure out and why i opted for rubberband
[05:07:14 CEST] <Wallboy> variable per video i mean
[05:08:18 CEST] <Wallboy> -af "aresample=45600,atempo=0.95,asetrate=48000"
[05:08:26 CEST] <Wallboy> from an old ffmpeg command i used to use
[05:08:37 CEST] <furq> you shouldn't need to resample for atempo
[05:09:06 CEST] <furq> you do for asetrate
[05:09:38 CEST] <androclu`> furq: since i was successful in making my own compile, do you happen to know off the top of your head what i would have to add on command-line / configure to add debug symbols?
[05:09:49 CEST] <Wallboy> i wanted to change the pitch of the audio, while maintaining the same audio duration
[05:09:55 CEST] <furq> iirc you should get an ffmpeg_debug binary by default when you build yourself
[05:10:06 CEST] <furq> Wallboy: yeah that's asetrate
[05:10:20 CEST] <furq> oh wait nvm
[05:10:32 CEST] <furq> either way atempo by itself doesn't need resampling
[05:13:06 CEST] <androclu`> furq: oh, there it is, duh: enable-debug=LEVEL
[05:21:05 CEST] <Wallboy> trying to get atempo to work again. How do I maintain the same audio length while changing the pitch?
[05:21:58 CEST] <Wallboy> i can't remember what i was doing when i did: -af "aresample=45600,atempo=0.95,asetrate=48000"
[05:22:19 CEST] <Wallboy> i think i was changing the sample rate first, then changing the atempo to change pitch, and then using asetrate to get it back to the correct duration
[05:23:10 CEST] <Wallboy> ya i must have, since i got 45600 from 48000 * 0.95
[05:23:24 CEST] <Wallboy> is there a better way?
[05:23:57 CEST] <Wallboy> why i chose those particular sample rates i don't know
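[Editor's note: the usual atempo/asetrate recipe for this, sketched assuming a 48 kHz source and a +5% pitch shift: raising the sample rate shifts pitch and speed together, then atempo pulls the speed back so the duration is unchanged, and a final resample restores the original rate.]

```shell
#   asetrate=50400  (= 48000 * 1.05): +5% pitch AND +5% speed
#   atempo=0.95238  (= 1/1.05):       undo the speed change
#   aresample=48000:                  restore the original sample rate
ffmpeg -y -i input.mp4 -c:v copy \
    -af "asetrate=50400,atempo=0.95238,aresample=48000" \
    -c:a aac output.mp4
```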
[05:38:57 CEST] <Wallboy> i might just have to accept that rubberband is a slow unoptimized pos and deal with it
[05:42:15 CEST] <Wallboy> if it's going to bottleneck me, i might as well use a slower x264 preset as well... uggh whatever... thanks for all the help though
[09:27:14 CEST] <Fyr> guys, can I burn subtitles without re-encoding?
[09:27:58 CEST] <Fyr> I know that MP4 and MKV support streams and video player are able to play two streams at once.
[09:28:55 CEST] <Fyr> I'm thinking that it's possible to create a stream, like PGS and make it work.
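[Editor's note: truly *burned-in* subtitles always require re-encoding the video; what works without re-encoding is muxing the subtitles as a separate stream, sketched here with placeholder file names.]

```shell
# Add an SRT stream to an MKV without touching audio or video.
ffmpeg -i input.mkv -i subs.srt -map 0 -map 1 \
    -c copy -c:s srt output.mkv
```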
[12:12:00 CEST] <Syl20> Hi
[12:12:54 CEST] <Syl20> I'm trying to compile ffmpeg with --enable-decklink option, receive this error ERROR: DeckLinkAPI.h header not found
[12:13:08 CEST] <Syl20> where to put this file ?
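[Editor's note: the header comes from Blackmagic's Desktop Video SDK, not from ffmpeg; a configure sketch, where the SDK path is a placeholder (at the time, decklink also required --enable-nonfree).]

```shell
# Point configure at the directory containing DeckLinkAPI.h
# (wherever the Blackmagic Desktop Video SDK was unpacked).
./configure --enable-decklink --enable-nonfree \
    --extra-cflags="-I/path/to/BlackmagicSDK/Linux/include"
```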
[13:24:28 CEST] <elmarikon> cheers! I was wondering if there is a filter in ffmpeg to detect freeze/doubled frames. I read it would be possible via opencv (but also not how..:-), so maybe also with the ffmpeg-ocv filter.
[13:24:52 CEST] <elmarikon> So I'm kind of lost here... any suggestions!?
[13:40:03 CEST] <durandal_1707> elmarikon: yes see signalstats filter
[13:49:58 CEST] <elmarikon> i will, thanx...
[14:08:20 CEST] <elmarikon> durandal_1707: sorry but I don't get it... How would I use this filter to detect freeze/doubled frames? Do u have a hint for me, please!?
[14:09:34 CEST] <durandal_1707> elmarikon: see frame metadata
[14:10:03 CEST] <durandal_1707> see that udif,ydif etc are 0
[14:11:21 CEST] <elmarikon> durandal_1707: aaah, now i see it, thanx, I will give it a try.
[14:21:11 CEST] <DHE> elmarikon: if the frames are absolutely identical, decimate may help. if they're not absolutely identical, it may still help but you have to give it thresholds
[14:21:31 CEST] <DHE> or do you want to just detect? decimate will drop them
[14:31:57 CEST] <elmarikon> DHE: I just have to detect them... It's for validation stuff...
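[Editor's note: the signalstats approach above, sketched as a command; frames whose YDIF/UDIF/VDIF metadata values are 0 are identical to the previous frame.]

```shell
# Print per-frame luma-difference stats without producing a file.
ffmpeg -i input.mp4 \
    -vf "signalstats,metadata=print:key=lavfi.signalstats.YDIF" \
    -f null -
```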
[14:34:37 CEST] <kerio> is there a filter to blow up the dynamic range of a frame
[14:35:14 CEST] <kerio> like turn the minimum luma and turn it into 0, the maximum luma into 255
[14:35:21 CEST] <kerio> *take the minimum luma
[14:43:39 CEST] <elmarikon> kerio: that should be possible, converting from one colorspace to another...
[14:46:20 CEST] <meriipu> nvenc: https://developer.nvidia.com/nvidia-video-codec-sdk#NVENCFeatures seems to list Maxwell (GM206) as supporting hevc with yuv444p as the pixel format. My current card is a GTX 950, which to my understanding has that architecture. ffmpeg seems to indicate no support (for yuv444p), though? https://bpaste.net/show/3370cd36befb is there an inconsistency in information somewhere or am I misunderstanding?
[14:49:51 CEST] <BtbN> increase the log level to get a more detailed error.
[14:53:29 CEST] <meriipu> [hevc_nvenc @ 0x2677fe0] YUV444P not supported https://bpaste.net/show/d17bb6b1516a I suppose
[14:55:03 CEST] <kerio> elmarikon: this is some gray16 that i want to blow up into gray8
[15:12:21 CEST] <meriipu> how would I list the supported pixel formats of a specific encoder (if that is even a thing that makes sense to do) ?
[15:13:55 CEST] <Mavrik> I usually just look at the source :(
[15:13:57 CEST] <bencoh> by iterating on .pix_fmts I'd say
[15:14:04 CEST] <BtbN> meriipu, http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/nvenc.c;h=f79b9a502e33c66e62513c56e54e9cec041e9d45;hb=HEAD#l254
[15:14:25 CEST] <BtbN> It's a hard capability check, the nvenc encoder just reports all potentially supported formats, and then errors at runtime.
[15:14:50 CEST] <meriipu> BtbN: that makes sense. Thanks.
[15:14:55 CEST] <BtbN> If you get that error message, the nvidia driver itself reports no support for YUV444
[15:16:16 CEST] <meriipu> oh wait really so it is the driver and not ffmpeg not having implemented it?
[15:20:33 CEST] <BtbN> See the linked code.
[15:20:34 CEST] <meriipu> Mavrik: I suppose these https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/nvenc.c#L39 and then it errors at runtime if it fails, as mentioned?
[15:24:01 CEST] <meriipu> BtbN: I think I misunderstood what you meant by hard capability check and/or assumed too soon what the check did. I am not too strong in C
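[Editor's note: the per-encoder help output lists the advertised pixel formats; for nvenc these are the *potentially* supported ones, and as discussed above the driver can still reject a format at runtime.]

```shell
# Shows "Supported pixel formats: ..." among the encoder's options.
ffmpeg -h encoder=hevc_nvenc
```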
[15:29:17 CEST] <kerio> why isn't the opus encoder called opossum
[15:34:57 CEST] <atomnuker> you can call it celty if you'd like
[15:35:07 CEST] <atomnuker> since it only does celt currently
[15:35:52 CEST] <atomnuker> that's not a bad idea actually, might make a patch
[15:42:45 CEST] <gmoryes> hello
[15:43:53 CEST] <gmoryes> can anybody help
[15:44:29 CEST] <gmoryes> if i do ffmpeg -i file.webm -codec copy -y file_clear1.webm
[15:44:48 CEST] <gmoryes> and ffmpeg -i file.webm -codec copy -y file2.webm
[15:45:17 CEST] <gmoryes> the md5(file_clear1.webm) != md5(file2.webm)
[15:45:37 CEST] <BtbN> It contains the creation date and other stuff that changes per invocation.
[15:46:16 CEST] <gmoryes> They differ in 16 bytes after the first 332 bytes
[15:46:31 CEST] <gmoryes> Where can I read what is in these 16 bytes?
[15:46:55 CEST] <BtbN> With a hex editor of your choice I guess?
[15:47:25 CEST] <gmoryes> yes, with a hex editor, but I don't know what these 16 bytes mean
[15:47:56 CEST] <gmoryes> where can I read, for example, that the first 4 bytes of the 16 are the creation date
[15:48:10 CEST] <gmoryes> the next 4 bytes some other info, and so on
[15:48:16 CEST] <BtbN> You'll have to consult the matroska specification about that
[15:49:34 CEST] <gmoryes> Okay, thanks! Is there any option to ffmpeg so that it will not change these 16 bytes per invocation?
[15:50:03 CEST] <BtbN> you can probably use one of the bitexact flags. But why? As long as the frames don't change, it seems pointless to insist on that.
[15:51:06 CEST] <gmoryes> Because I want to md5(file1.webm) be equal to md5(file2.webm)
[15:51:11 CEST] <BtbN> why?
[15:51:28 CEST] <gmoryes> It is checking that the input file is same
[15:51:29 CEST] <BtbN> If you want to verify the integrity of the video, there's framemd5 for that.
[15:52:18 CEST] <gmoryes> The aim is to avoid re-encoding input_file if we have already encoded it before
[15:52:41 CEST] <gmoryes> You say that framemd5 will help with this task?
[15:52:59 CEST] <BtbN> It will hash the actual frames, not the file with random metadata
[15:53:14 CEST] <gmoryes> oh, thanks very much
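[Editor's note: both suggestions from this exchange, sketched.]

```shell
# Hash the packets/frames instead of the container, so metadata
# like the creation date doesn't change the digest:
ffmpeg -i file.webm -c copy -f framemd5 file.framemd5

# Or keep per-invocation metadata out of the output in the first place:
ffmpeg -i file.webm -codec copy -flags +bitexact -fflags +bitexact -y file_clear1.webm
```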
[16:00:06 CEST] <meriipu> so say I have a filter graph, is there such a thing as the encoded video before and after the filter? Could I save the video to file before and after scaling, for instance, at little additional processing cost (save for the IO), or have I misunderstood something? -vf "hwupload_cuda,scale_npp=w=1440:h=-1:format=yuv444p:interp_algo=lanczos,hwdownload,format=yuv444p"
[16:00:41 CEST] <meriipu> this is for h264_nvenc, by the way.
[16:01:50 CEST] <BtbN> you'll probably need a complex filtergraph, that has a tee filter somewhere that sends to an additional output, and then have two encoders
[16:02:46 CEST] <meriipu> so the graph does work on some unencoded object, only encoding it to something "outputable" at the end?
[16:02:54 CEST] <BtbN> what?
[16:03:14 CEST] <BtbN> filters obviously operate on raw frames
[16:03:18 CEST] <BtbN> can't scale h264
[16:04:34 CEST] <meriipu> so there is quite some overhead in having to encode two files, then?
[16:04:48 CEST] <BtbN> I don't follow
[16:07:07 CEST] <Diego__> Just an easy question and (I suppose) a fast answer. Why, if I don't specify a -t option with a value > 0 when adding an "empty" black color input, does FFmpeg keep encoding forever? (e.g. ffmpeg -i video1.webm -i video2.webm -f lavfi -i color=s=640x480:color=black -i audio1.opus -i audio2.opus ...)
[16:07:24 CEST] <BtbN> It'll keep going
[16:08:08 CEST] <meriipu> so in one case there is duplicating the same output to two destinations, and in the other (the one mentioned above) outputting two versions, one to each destination. Is the additional computation required about twice as much in the second case, or could the scaled version somehow speed up its work by using results from the encoded pre-filter version ?
[16:10:03 CEST] <BtbN> you'll obviously have to encode twice, if you want two different encoded outputs
[16:11:58 CEST] <meriipu> thanks
[16:35:54 CEST] <meriipu> how are codec options like preset or pix_fmt set independently on split streams? following each -map "[vout]" ?
[16:39:09 CEST] <BtbN> They affect the output they are in front of.
[16:39:16 CEST] <furq> Diego__: color=black:d=10
[16:39:35 CEST] <furq> or use -shortest if you're overlaying something onto it
[16:45:25 CEST] <meriipu> so I should use say -map [options] output1 -map [options] output2 rather than tee? https://bpaste.net/show/5fc73a0febc6 the outputs at least have different resolutions, though I am not sure how to specify encoding options with tee.
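[Editor's note: a sketch of the split-filter form, where per-output options simply precede each output file; resolutions and presets here are placeholders.]

```shell
# One decode, two encodes: full size plus a scaled-down version.
ffmpeg -i input.mp4 \
    -filter_complex "[0:v]split=2[full][toscale];[toscale]scale=640:-2[small]" \
    -map "[full]"  -map 0:a -c:v libx264 -preset faster -c:a aac full.mp4 \
    -map "[small]" -map 0:a -c:v libx264 -preset faster -c:a aac small.mp4
```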
[16:50:52 CEST] <BLZbubba> hello there, is there a simple cmd line way to identify the information of media files? I remember back in the "old days" the file utility would show resolution & codecs, but it seems useless these days
[16:51:02 CEST] <furq> ffprobe
[16:53:08 CEST] <BLZbubba> ok cool ty
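[Editor's note: a compact, scriptable ffprobe sketch for codecs and resolution; the file name is a placeholder.]

```shell
ffprobe -v error \
    -show_entries stream=codec_type,codec_name,width,height \
    -of default=noprint_wrappers=1 input.mp4
```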
[16:56:56 CEST] <Diego__> furq: I've used the -shortest option, without the d= option for the black screen, and it's still not working. Adding d=1 or -t 1 seems to do the same. I just want to add that black screen without a duration, as if it were an image
[16:57:20 CEST] <Diego__> to "fill in" the video mosaic
[17:01:14 CEST] <BLZbubba> furq: what does the letter r stand for in "tbr"?
[17:01:18 CEST] <BLZbubba> tbn too
[17:06:47 CEST] <meriipu> what does format -> stream #1:0 on line 7 mean? https://bpaste.net/show/127021f2b2a6
[17:33:54 CEST] <PsyDebug> Hi all! I need to overlay video from some rtsp streams, but ffmpeg goes down every few minutes https://pastebin.com/PEKszVbj What can i try? Thx!
[18:05:51 CEST] <DHE> meriipu: you have 2 output files. #1:0 is output stream 0 (first stream) of output file 1 (second file)
[18:06:22 CEST] <DHE> looks like you're taking video captured from a desktop and outputting a high quality version and low quality version at the same time
[18:24:59 CEST] <meriipu> DHE: why is one line split:output1 -> and the other format -> Stream though?
[18:26:21 CEST] <meriipu> I would have expected split:output1 and split:output2, I do not really understand what is meant by format
[18:26:29 CEST] <kerio> furq: i fixed my hwdec porn problem :o
[18:26:45 CEST] <kerio> by setting hwdec-codecs
[18:26:50 CEST] <kerio> to something much more sensible
[18:26:54 CEST] <kerio> specifically, h264,hevc
[18:27:26 CEST] <furq> i have no memory of this problem but i'm glad that you can now honk one off again
[18:27:32 CEST] <alexpigment> haha
[18:33:01 CEST] <kerio> apparently the default for hwdec-codecs includes vc1, wmv3 and a bunch of other shit
[18:33:10 CEST] <kerio> that has no place whatsoever in the appley world
[18:33:12 CEST] <kerio> i think
[18:41:42 CEST] <DHE> meriipu: it's going into some kind of filterchain. from the ffmpeg standpoint it's a stream that goes "somewhere" and a distinct stream that comes out "somewhere", hence distinct lines
[18:57:25 CEST] <meriipu> DHE: so format is shorthand for something like split:output2 -> filter_chain_stuff [-> Stream #1:0 (h264_nvenc)] ?
[19:03:45 CEST] <RandomCouch> JEEB: Hey! Just wanted to let you know I finally got ffmpeg working with Unity on android ! :D
[19:03:54 CEST] <RandomCouch> all that research was helpful
[19:03:57 CEST] <JEEB> ok
[19:04:10 CEST] <RandomCouch> but now I'm having a small issue
[19:04:19 CEST] <RandomCouch> I'm trying to concat 2 videos together, and it's working but it's taking a very long time
[19:04:32 CEST] <JEEB> are you bit stream copying?
[19:04:47 CEST] <RandomCouch> I'm using the method where you convert both vids to .ts first, using ffmpeg -i vid1.mp4 -qscale:v 1 intermediate1.ts
[19:04:56 CEST] <RandomCouch> and then merge both intermediates and re-encode
[19:05:11 CEST] <RandomCouch> using ffmpeg -f concat -i files.txt -c copy output
[19:06:55 CEST] <RandomCouch> I've tried different methods too, like merging both intermediate videos into an intermediate_all.ts and then reencoding that to mp4
[19:07:01 CEST] <RandomCouch> but that took just as long
[19:07:45 CEST] <RandomCouch> I also tried ffmpeg -i video1.mp4 -i video2.mp4 -c copy output.mp4 but that only gave me back the first video without concatenating
[19:12:04 CEST] <furq> 18:04:47 ( RandomCouch) I'm using the method where you convert both vids to .ts first, using ffmpeg -i vid1.mp4 -qscale:v 1 intermediate1.ts
[19:12:07 CEST] <furq> where did you get this command from
[19:14:47 CEST] <RandomCouch> I was doing some googling and found it in a stackoverflow question
[19:14:54 CEST] <RandomCouch> don't remember where exactly
[19:15:56 CEST] <RandomCouch> but basically it goes like -i vid1.mp4 -qscale:v 1 intermediate1.ts then -i vid2.mp4 -qscale:v 1 intermediate2.ts then -i concat:intermediate1.ts|intermediate2.ts -qscale:v 2 intermediate_all.ts
[19:19:21 CEST] <RandomCouch> the process of re-encoding the .ts file to .mp4 though, is what is taking so long
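[Editor's note: if both inputs already use compatible codecs (e.g. both H.264/AAC), the entire concat can be stream-copied, avoiding the slow re-encode; a sketch with placeholder names.]

```shell
# files.txt contains:
#   file 'intermediate1.ts'
#   file 'intermediate2.ts'
# The aac_adtstoasc bitstream filter is needed when AAC audio goes
# from MPEG-TS back into MP4.
ffmpeg -f concat -safe 0 -i files.txt -c copy -bsf:a aac_adtstoasc output.mp4
```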
[19:32:34 CEST] <alexpigment> Hey guys. I'm running into a crash that seems pretty weird to me. I've got a 1080p60 video that I'm using as the basis of a transcoding test bank. One of the tests was trying to create a 1440x1080 video that was interlaced, so I used "-vf crop=1440:1080,interlace=scan=tff"
[19:32:38 CEST] <alexpigment> this causes a crash
[19:32:54 CEST] <alexpigment> but I can crop or interlace by themselves without issue
[19:33:27 CEST] <alexpigment> and I can subsequently interlace/crop the results of those individual encodes
[19:33:50 CEST] <alexpigment> so effectively, I can do it in two separate passes without issue. if i try to do it at once, ffmpeg crashes
[19:34:19 CEST] <alexpigment> Do any of you guys know of anything about 1080i that would explain this?
[19:37:44 CEST] <alexpigment> Oh, and one other important detail: If I scale to 720x480 before interlacing, this doesn't cause a crash. (e.g. "-vf crop=1440:1080,scale=720:480,interlace=scan=tff")
[19:38:32 CEST] <JEEB> you probably want to use gdb :)
[19:39:14 CEST] <alexpigment> oh, compiling with debugging
[19:39:21 CEST] <JEEB> it does that by default
[19:39:22 CEST] <alexpigment> I'll try that out
[19:39:39 CEST] <JEEB> it will then strip the ffmpeg binary as ffmpeg and I think the ffmpeg_g thing is the debug one?
[19:39:43 CEST] <JEEB> check which is bigger
[19:39:51 CEST] <alexpigment> ok, i've got that already built
[19:39:53 CEST] <alexpigment> i'll just copy it over
[19:40:08 CEST] <alexpigment> just gotta fire up this linux VM ;)
[19:40:21 CEST] <JEEB> gdb ./ffmpeg_g and then I have to recall how to set the command line parameters in gdb again
[19:40:47 CEST] <JEEB> ah right
[19:40:48 CEST] <JEEB> http://www.unknownroad.com/rtfm/gdbtut/gdbuse.html#RUN
[19:40:49 CEST] <alexpigment> well, if you remember, let me know. otherwise, i'll look it up in a minute after I get this file in place
[19:40:50 CEST] <kepstin> my guess is that the way the crop filter works - it just adjusts the start pointer, line length, number of lines but doesn't actually change the size of the memory allocation - is causing the interlace filter to do something wrong.
[19:40:52 CEST] <JEEB> run parameters
[19:41:02 CEST] <alexpigment> awesome, thanks JEEB
[19:41:32 CEST] <JEEB> and yea that tutorial was a nice thing when I had to remind myself the arcane ways of gdb debugging the last time
[19:41:46 CEST] <alexpigment> kepstin: I suppose that makes sense. It didn't happen in a build I had from 2014. I assume it's a new bug, but I wanted to check to make sure I wasn't creating a file that was inherently out of spec
[19:42:10 CEST] <JEEB> well a crash is never good
[19:42:27 CEST] <kepstin> doing it in two passes or with the scale filter means that the frame's reloaded into a new buffer matching the size of the frame, rather than being only part in the middle of a larger frame.
[19:42:29 CEST] <JEEB> and if you can then recreate it with something like the lavfi input so you don't even need a file to recreate it
[19:42:31 CEST] <alexpigment> true, but my own priority on that crash would be less important if the resultant file wasn't realistic in the real world
[19:42:40 CEST] <kepstin> so yeah, I'm guessing it's a bug in the interlace filter
[19:42:47 CEST] <alexpigment> JEEB: true. i'll get on that in a second
[19:45:03 CEST] <alexpigment> hmmm, i don't think i've tried to mess with lavfi in this way I guess. how do I set its properties prior to the filtering?
[19:45:17 CEST] <furq> 18:40:48 ( JEEB) http://www.unknownroad.com/rtfm/gdbtut/gdbuse.html#RUN
[19:45:20 CEST] <alexpigment> like if i do smptebars as in the input, how do i make them 1080p60
[19:45:22 CEST] <furq> you can just do gdb --args ffmpeg -i foo bar
[19:45:33 CEST] <furq> and then r
[19:45:50 CEST] <alexpigment> furq: I was specifically trying to just simulate my crash without a real input first
[19:46:30 CEST] <alexpigment> I've gotta copy this build from my VM which takes a while to load up, so I'm trying to do some other tests while I wait for that
[19:48:22 CEST] <furq> -f lavfi -i testsrc=s=1920x1080:r=60 -vf ...
[19:48:37 CEST] <alexpigment> great, thanks for that
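[Editor's note: putting the pieces of this exchange together, a gdb session sketch; ffmpeg_g is the unstripped binary from the build tree.]

```shell
gdb --args ./ffmpeg_g -f lavfi -i testsrc=s=1920x1080:r=60 \
    -vf "crop=1440:1080,interlace=scan=tff" -f null -
# inside gdb:
#   (gdb) r          # run with the arguments given above
#   (gdb) bt full    # full backtrace after the SIGSEGV
```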
[20:41:59 CEST] <nicolas17> how can I know if the transcoding bottleneck is decoding or encoding?
[20:43:13 CEST] <__jack__> you can decode to /dev/null and see
[20:43:42 CEST] <nicolas17> huh, true
[20:45:30 CEST] <nicolas17> looks like decoding goes at 18fps
[20:46:59 CEST] <shincodex> "max_delay" what is its purpose
[20:47:12 CEST] <shincodex> i tried it and it seems like nothing good comes from it
[20:47:23 CEST] <shincodex> I thought it was good for buffering up in case of corrupt packets
[20:51:25 CEST] <nicolas17> does ffmpeg decode and encode in different threads?
[20:53:01 CEST] <DHE> the encoders and decoders may support internal multi-threading for their operation, but ffmpeg itself is mostly single-threaded
[20:54:22 CEST] <shincodex> h264 is threaded
[20:54:32 CEST] <nicolas17> if I transcode from image2 (JPEG) to null muxer, it goes at 18fps
[20:54:33 CEST] <shincodex> unless i think you turn that option off or on to tell it not to
[20:54:47 CEST] <komanda> hey has anyone heard of some new security vulnerability with .avi files?
[20:55:23 CEST] <nicolas17> if I transcode from image2 (JPEG) to libx264, it goes at 8fps but ffmpeg still uses only one core
[20:55:23 CEST] <DHE> komanda: can you be more specific? like a link or CVE number?
[20:56:00 CEST] <komanda> dont have a link to CVE, just a proof of concept link here:
[20:56:03 CEST] <komanda> https://github.com/neex/ffmpeg-avi-m3u-xbin/blob/master/gen_xbin_avi.py
[20:56:27 CEST] <komanda> basically it can access internal files and make a movie of the file contents
[20:56:53 CEST] <komanda> https://github.com/neex/ffmpeg-avi-m3u-xbin
[20:57:11 CEST] <nicolas17> !
[20:57:27 CEST] <DHE> so, they're taking an arbitrary file on disk and forcibly muxing it into an AVI file?
[20:57:59 CEST] <BtbN> are they embedding an m3u8 into an avi? That's a thing?
[20:59:04 CEST] <nicolas17> DHE: it seems transcoding a specially crafted .avi file may access arbitrary video files from the local filesystem
[20:59:19 CEST] <nicolas17> and end up transcoding that instead
[20:59:44 CEST] <BtbN> that's something that can happen if you open an untrusted m3u8 list
[20:59:50 CEST] <BtbN> But avi?
[21:01:39 CEST] <DHE> could you bypass the protocol whitelists by opening an m3u8 that actually came out of an AVI?
[21:03:35 CEST] <nicolas17> this performance is weird... encoding 640x480 is not that much faster than encoding 2592x1944? :/
[21:07:46 CEST] <nicolas17> ah looks like at high res it uses 150% CPU rather than 100%
[21:08:22 CEST] <nicolas17> anyway, seems I won't speed this up, I'll just run multiple transcoding tasks simultaneously, if I/O allows...
[21:09:08 CEST] <shincodex> so
[21:12:41 CEST] <nicolas17> I wish I could transcode this on a fast VPS instead of my laptop
[21:12:55 CEST] <nicolas17> but uploading the source material will take longer, by several orders of magnitude :x
[21:18:35 CEST] <komanda> would there be a way to build ffmpeg so that m3u8s cannot open local files?
[21:20:41 CEST] <alexpigment> JEEB (or kepstin or furq): I just got back from lunch and ran my crashing command with gdb
[21:21:04 CEST] <alexpigment> the result was "Thread 1 received signal SIGSEGV, Segmentation fault."
[21:21:16 CEST] <furq> did you get a backtrace
[21:21:20 CEST] <alexpigment> "0x0000002b in ?? ()"
[21:21:24 CEST] <JEEB> "bt full" will give you a long backtrace
[21:21:33 CEST] <furq> are you running ffmpeg_g
[21:21:33 CEST] <JEEB> aww :< was that the stripped binary?
[21:21:44 CEST] <furq> yeah ?? means no debug symbols
[21:21:47 CEST] <alexpigment> ahh
[21:21:54 CEST] <alexpigment> looks like this ffmpeg_g isn't going to cut it then
[21:22:26 CEST] <alexpigment> I did get a message that it was reading the debug symbols though
[21:22:30 CEST] <JEEB> oh
[21:22:36 CEST] <JEEB> then check bt and bt full
[21:22:40 CEST] <JEEB> bt is shorter, bt full is longer
[21:23:00 CEST] <JEEB> pastebin 'em and we'll see if it's *any* use
[21:23:19 CEST] <alexpigment> ok 1 sec
[21:23:31 CEST] <furq> you might also want to get libc6-dbg or whatever your distro calls it
[21:24:31 CEST] <alexpigment> (starting to feel that "in over my head because i'm not a developer" feeling :))
[21:24:48 CEST] <alexpigment> sorry, i have to rerun this because it's not letting me exit
[21:24:54 CEST] <JEEB> ?
[21:24:58 CEST] <furq> ^D to exit gdb
[21:25:00 CEST] <JEEB> it should let you be in the gdb terminal
[21:25:10 CEST] <alexpigment> it says to type <return> to continue or <return> to exit
[21:25:19 CEST] <furq> i forget if there's a way to rerun after a segfault
[21:25:19 CEST] <JEEB> write "bt"
[21:25:20 CEST] <alexpigment> typing return (or <return>) does nothing
[21:25:24 CEST] <furq> oh right
[21:25:28 CEST] <JEEB> and then enter
[21:25:31 CEST] <furq> yeah you can get a backtrace now
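The workflow being walked through above, as a sketch (the input file and filter chain are placeholders, not from the log):

```shell
# Run the unstripped build under gdb; ffmpeg_g is the binary that
# keeps debug symbols before `make install` strips them.
gdb --args ./ffmpeg_g -i input.mp4 -vf "crop=1440:1080,interlace" out.mkv
(gdb) run
# ...after the SIGSEGV is reported:
(gdb) bt        # short backtrace
(gdb) bt full   # backtrace with local variables, for the pastebin
```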
[21:25:43 CEST] <alexpigment> jeeb, already did that. one sec
[21:31:51 CEST] <alexpigment> https://pastebin.com/2tQjBFZZ
[21:32:40 CEST] <JEEB> umm, sounds like you're using 32bit gdb with 64bit windows binaries?
[21:33:07 CEST] <alexpigment> hmmm, i should be using 32-bit all the way, but I can rebuild and try again
[21:33:22 CEST] <JEEB> or I'm misreading those messages
[21:33:29 CEST] <JEEB> regarding libx265
[21:33:46 CEST] <JEEB> but yea, that has nada with regards to debug info
[21:34:48 CEST] <alexpigment> yeah, I'm not sure what's up. oddly, it doesn't happen with a purely synthetic source (testsrc), but it happens with two very different real sources (one x264 1080p60 and one ProRes 1080p60)
[21:35:03 CEST] <alexpigment> something about cropping to 1440x1080 and interlacing makes ffmpeg blow up
[21:35:33 CEST] <alexpigment> I guess I'll just log it on trac and someone can get to it when they get to it
[21:35:41 CEST] <JEEB> if it only happens with "actual" sources then you'll probably have to produce a sample that makes it happen
[21:35:46 CEST] <JEEB> and yes, post it on trac with a command line
[21:36:02 CEST] <alexpigment> Yeah, I'll include a sample
[21:36:37 CEST] <nicolas17> how do I skip every other frame?
[21:37:00 CEST] <RandomCouch> are certain video encoders faster in terms of re-encoding?
[21:37:11 CEST] <RandomCouch> for example would it be faster to reencode to mpeg4 than h264
[21:38:03 CEST] <furq> nicolas17: -vf select="not(mod(n\,2))"
[21:39:11 CEST] <nicolas17> I'm currently using -framerate 120 -i "*.jpg" -r 60 output.mp4 and that seems to work but it still reads and decodes every jpeg file, any way to avoid that?
[21:39:39 CEST] <furq> delete every other file?
[21:40:29 CEST] <nicolas17> yeah I may have to do that... the naming structure doesn't make that trivial, but...
[21:40:42 CEST] <furq> or maybe -pattern_type glob -i "*.jpg"
[21:40:49 CEST] <nicolas17> hmm
[21:41:30 CEST] <nicolas17> I think that would work :O
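Both approaches from the exchange above, sketched out (filenames and the trailing-counter naming are assumptions):

```shell
# Decode every jpeg but keep only the even-numbered frames:
ffmpeg -framerate 120 -pattern_type glob -i "*.jpg" \
       -vf "select=not(mod(n\,2))" -vsync vfr -r 60 output.mp4

# Or avoid reading/decoding half the files at all, by globbing only
# names whose frame counter ends in an even digit:
ffmpeg -framerate 60 -pattern_type glob -i "*[02468].jpg" output.mp4
```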
[21:43:35 CEST] <nicolas17> ok, looks like it was slower than I thought
[21:43:38 CEST] <nicolas17> it now says fps=4
[21:43:48 CEST] <nicolas17> so I think previously it was counting the dropped frames when saying fps=8
[21:43:53 CEST] <nicolas17> this will take forever to transcode
[21:44:40 CEST] <nicolas17> but the trick means it won't murder my disk cache so much
[21:54:14 CEST] <alexpigment> JEEB: ok, I logged that issue: https://trac.ffmpeg.org/ticket/6491
[21:54:32 CEST] <alexpigment> not expecting you to look into it, but I just wanted to follow up with you
[21:57:52 CEST] <kepstin> it looks like it's an issue where the interlace filter is incorrectly using linesize instead of width in a couple of places
[21:58:14 CEST] <kepstin> so if linesize > width, it might do some out-of-bounds writes on the dest frame
[21:58:21 CEST] <BtbN> that shouldn't really matter, should it?
[21:58:37 CEST] <kepstin> from a *really* fast review that might be wrong ;)
[21:59:54 CEST] <alexpigment> kepstin: what exactly is linesize?
[22:00:14 CEST] <feliwir> hey, how do i synchronise two seperate files with libav for playback? E.g. i have a video file (vp6) and an audio file that is the audiostream for that video (mp3)
[22:00:36 CEST] <kepstin> "linesize" is the amount you have to add to a pointer in one of the data frames in order to get to the same column on the next line.
[22:01:10 CEST] <alexpigment> i see. are linesize and width usually equal?
[22:01:16 CEST] <BtbN> no
[22:01:26 CEST] <BtbN> You want the lines to be aligned
[22:01:31 CEST] <kepstin> hmm, I might be wrong about that; it looks like the interlace filter is just using av_image_copy_plane, and it is passing the width (cols) correctly, so it *should* be working.
[22:01:50 CEST] <kepstin> someone's gonna have to actually debug the issue :)
[22:02:15 CEST] <nijoakim> Hey all! I am on Debian, trying to stream my desktop to a webm on localhost, (right now, just trying it out with an avi movie). It seems I can connect ffmpeg to ffserver, but when I try to access the stream URL in chromium, just a tiny player with a grayed out play button shows up. Any ideas on how to debug this?
[22:02:39 CEST] <BtbN> you won't find a lot of help with ffserver, it's abandoned.
[22:03:03 CEST] <kepstin> in particular, the way the crop filter works is that it leaves the actual buffer of image data the same, so the linesize is the same, but it adjusts the 'width' field so later stuff only reads part of the image data.
[22:03:03 CEST] <alexpigment> k sounds good. well, it's not a huge deal for me - I already got the files I need by using two separate passes. I was intentionally trying to create every format I could think of for testing, but I imagine that most people aren't ever going to run into this
[22:06:55 CEST] <nijoakim> BtbN: Aha, didn't know that. What do people use to stream on linux these days?
[22:07:14 CEST] <BtbN> nginx-rtmp
[22:07:28 CEST] <nijoakim> okay! Will check it out, thank you
[22:07:29 CEST] <BtbN> Or just output to hls and put any normal http server in front.
[22:07:44 CEST] <nicolas17> or using ffmpeg to send rtmp into someone else's streaming server like youtube :P
[22:08:39 CEST] <furq> if you're streaming to localhost you can probably just use the hls muxer
[22:09:06 CEST] <nijoakim> Okay... I didn't know what hls was until just now. What is the hls muxer?
[22:09:19 CEST] <furq> https://en.wikipedia.org/wiki/HTTP_Live_Streaming
[22:09:21 CEST] <kerio> HLS is a weird thing that apple et al. came up with
[22:09:32 CEST] <nijoakim> Yeah, youtube will probably be my backup plan, but I will try to see if I can get this to work on localhost first.
[22:09:49 CEST] <kerio> it's a specially-formatted m3u playlist that points to a bunch of .ts files
[22:10:05 CEST] <nijoakim> Heh, okay! Yeah, sounds weird enough :)
[22:10:07 CEST] <furq> if you need the server to run on a different machine than the encoder then use nginx-rtmp
[22:10:07 CEST] <kerio> with directives to HLS-compliant players to keep reloading the playlist to add more and more elements
[22:10:20 CEST] <kerio> it adds quite a bit of delay to the video
[22:10:24 CEST] <nijoakim> Ah, I need that (server on different machine)
[22:10:27 CEST] <kerio> but has the advantage of just being http
[22:10:33 CEST] <nijoakim> So, I'll go with nginx-rtmp then
[22:10:46 CEST] <furq> well yeah you'd normally use nginx-rtmp to create hls anyway
[22:10:46 CEST] <nijoakim> Aha
[22:10:52 CEST] <nijoakim> Ah, okay
[22:10:53 CEST] <furq> since rtmp requires flash these days
[22:10:55 CEST] <JEEB> kerio: first we had proper streaming protocols (rtsp) and then people decided they wanted to use standard caching infra for video
[22:11:17 CEST] <nijoakim> Ah, so this will be to a flash stream?
[22:11:18 CEST] <furq> and also that they didn't want to have to bother setting up their firewalls
[22:11:23 CEST] <furq> rtmp is flash, yeah
[22:11:27 CEST] <furq> but nginx-rtmp will remux to hls for you
[22:11:30 CEST] <kerio> well, twitch.tv is at like 8 seconds of delay nowadays
[22:11:32 CEST] <kerio> that's not bad at all
[22:11:33 CEST] <furq> and create the playlist that you need etc
[22:11:36 CEST] <nijoakim> Ahaa
[22:11:54 CEST] <furq> it takes rtmp as ingest and outputs rtmp and optionally hls and mpeg-dash
[22:11:59 CEST] <nijoakim> Okay! I will give it a try
[22:12:16 CEST] <furq> but yeah i think just about everyone in here who has a streaming server is using nginx-rtmp
[22:12:18 CEST] <kerio> and as a bonus, since it's all in nginx, you can tell nginx to also serve the hls files :o
[22:12:22 CEST] <kerio> as http, that is
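For the localhost case furq mentions, writing HLS straight from ffmpeg might look like this (the paths and segment settings are placeholders):

```shell
# Write a rolling m3u8 playlist plus .ts segments into a directory
# that any plain http server can serve.
ffmpeg -i input.mp4 -c:v libx264 -c:a aac \
       -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments \
       /var/www/html/stream.m3u8
```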
[22:12:40 CEST] <nijoakim> And I convert my stream to rtmp with ffmpeg, did I understand that correctly?
[22:12:51 CEST] <kerio> yes
[22:13:01 CEST] <nijoakim> Great! Thanks
[22:13:02 CEST] <kerio> well, somehow produce a stream
[22:13:02 CEST] <furq> -c:v libx264 -c:a aac -f flv rtmp://localhost/app
[22:13:06 CEST] <kerio> ye that
[22:13:17 CEST] <nijoakim> oh, nice
[22:13:19 CEST] <furq> on which note
[22:13:27 CEST] <furq> dear devs, is there any chance of getting the default codecs for flv changed
[22:13:40 CEST] <furq> i don't think anyone is using sorenson spark any more
[22:13:53 CEST] <nicolas17> whatyearisit.jpg
[22:14:15 CEST] <kerio> furq: and also add a default output format of flv if the output starts with "rtmp://"
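Put together, a fuller push command along the lines furq sketches (the URL, bitrates, and GOP size are placeholders, not from the log):

```shell
# Read at native rate (-re) and push an flv-wrapped h264/aac stream
# to an rtmp ingest such as nginx-rtmp.
ffmpeg -re -i input.mp4 \
       -c:v libx264 -preset veryfast -b:v 2500k -g 120 \
       -c:a aac -b:a 128k \
       -f flv rtmp://localhost/app/streamkey
```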
[22:31:18 CEST] <alexpigment> I'm trying to do a QSV encode, and I get the following warning: "[h264_qsv @ 0000000002b6f780] No device available for encoder (device type qsv for codec h264_qsv)."
[22:31:30 CEST] <alexpigment> but it still creates the video, seemingly with QSV
[22:31:45 CEST] <BtbN> qsv has software fallback.
[22:32:11 CEST] <alexpigment> interesting. it's a software implementation of the same encoder?
[22:32:37 CEST] <BtbN> Well, it's obviously an encoder for the same codec.
[22:33:13 CEST] <alexpigment> I guess I was asking for clarification that it wasn't a fallback to x264 (which this doesn't seem to be)
[22:33:37 CEST] <utack> where would a problem with zscale be reported? is that a separate project, or ffmpeg bug?
[22:33:50 CEST] <JEEB> zscale is a filter in FFmpeg, zimg is the library
[22:34:12 CEST] <JEEB> so depends on which is the one that has the boog: zscale or zimg
[22:34:21 CEST] <utack> nvm..it does not seem to be a bug https://github.com/sekrit-twc/zimg/issues/60
[22:34:29 CEST] <JEEB> k
[22:34:36 CEST] <utack> HDR is definitely confusing if you are not working with it a lot
[22:34:48 CEST] <JEEB> yes
[22:34:49 CEST] <utack> the picture looks very different from what mpv makes of it
[22:35:00 CEST] <JEEB> mpv recently switched its tone mapping algo
[22:35:03 CEST] <JEEB> in the opengl renderer
[22:35:15 CEST] <JEEB> if you want to discuss these things with haasn, he's on #mpv most of the time
[22:35:19 CEST] <nicolas17> looks like input I/O is a significant bottleneck here, ugh
[22:35:23 CEST] <utack> so tone mapping clip in my config should probably go?
[22:35:23 CEST] <JEEB> he's the opengl renderer guy and who I have been feeding specs to
[22:35:38 CEST] <utack> i will look into it, thanks
[22:36:34 CEST] <JEEB> but yes, HDR is a lot of "fun" when all the screens can look different while technically being correct
[22:36:50 CEST] <JEEB> because the tone mapping hasn't seen a standard yet, for example
[22:37:37 CEST] <utack> is HDR as messy as it appears to the uneducated observer, or is it a smart design behind it?
[22:37:52 CEST] <JEEB> there's a whole lot of mess in it, just like SDR
[22:38:20 CEST] <JEEB> I prefer HLG to SMPTE ST.2084 tho, since that does remember that people tend to not have a specifically standardized-calibrated setup
[22:38:53 CEST] <utack> for example i do not get the mastering display properties at all. why does that matter, just display the brightest pixel in the stream the brightest your local display can show? or is that too simple thinking?
[22:39:15 CEST] <JEEB> it's to have a constant max/min over the whole clip I guess
[22:39:26 CEST] <JEEB> they did add dynamic HDR metadata recently though :D
[22:39:39 CEST] <JEEB> so you can adjust per-scene! :D
[22:40:23 CEST] <utack> dear god, it sounds terrible
[22:41:08 CEST] <JEEB> anyways, we've often had discussions on all the funky things and how the vocabulary is not always stable between specifications etc
[22:42:01 CEST] <utack> someone is securing their job. "oh you totally can't implement that without me, only i know what the latest spec really means"
[22:42:21 CEST] <nicolas17> lol
[22:42:34 CEST] <nicolas17> sounds like the spec is shit then
[22:42:46 CEST] <utack> on the left is mpv "clip", on the right everything 709 after zscale https://i.imgur.com/4oymAXZ.png
[22:42:53 CEST] <utack> on my non hdr display
[22:43:19 CEST] <JEEB> yea, zscale doesn't do tone mapping IIRC. so if stuff goes above what's possible with SDR, you're out of luck
[22:43:34 CEST] <utack> yeah
[22:43:38 CEST] <JEEB> in a way, not doing tone mapping is not strictly incorrect
[22:43:42 CEST] <JEEB> it just looks like shit
[22:43:44 CEST] <JEEB> :D
[22:44:39 CEST] <JEEB> anyways, if you have interest on HDR, #mpv and #mpv-devel are probably places you might want to lurk in
[22:46:11 CEST] <utack> thanks! i primarily have interest in sane conversion with ffmpeg, but seems like that is not happening, due to the standard's fault :D
[22:48:20 CEST] <Mavrik> Btw, what's the dynamic log gamma thing?
[22:48:35 CEST] <Mavrik> I've been seeing it around with "will support with firmware update" notes
[22:49:26 CEST] <Mavrik> *hybrid
[22:54:56 CEST] <JEEB> Mavrik: another transfer function, made together by NHK and BBC
[22:55:22 CEST] <JEEB> SMPTE ST.2084 is the one made by Dolby ("PQ")
[22:55:48 CEST] <JEEB> even if I didn't dislike dolby, it seems like HLG is generally the saner spec
[22:56:41 CEST] <Mavrik> Ahh.
[23:02:05 CEST] <alexpigment> BtbN: for what it's worth, I tracked this issue with the QSV encoder giving a "No device available" message down to about June 14th. All builds tested (including the one I just compiled) exhibit that error on my machine
[23:02:19 CEST] <alexpigment> I'm going to go ahead and log it on trac (unless this is a known issue here)
[23:13:34 CEST] <BtbN> How does the ocr filter work? Is it supposed to write out the text somewhere? It seems to put it into some metadata. How do I get it?
[23:24:17 CEST] <alphabitcity> Is it possible to have ffmpeg overlay a dynamic image? As in, an image rendered by software that's changing constantly .. Ideally force ffmpeg to keep re-fetching it?
[23:28:36 CEST] <iive> afaik, you can overlay 2 videos if you want
[23:32:01 CEST] <alphabitcity> iive: yea, good thinking. thank you
[23:32:29 CEST] <alphabitcity> Does anyone know if the AMF functions here https://www.ffmpeg.org/doxygen/0.6/group__amffuncs.html are exposed in the ffmpeg CLI?
[23:33:44 CEST] <hanna> JEEB: technically HLG tone-mapping is standardized
[23:33:54 CEST] <hanna> but I don't know if that's only for HDR monitors or also for SDR monitors
[23:34:04 CEST] <JEEB> ah
[23:34:06 CEST] <hanna> actually let's try
[23:35:08 CEST] <hanna> oh right I forgot I don't actually have a HLG test clip
[23:35:22 CEST] <hanna> so I can't tell you what it looks like if I take the HLG curve and insert the SDR parameters
[23:35:43 CEST] <hanna> but in theory, your display's peak white point is part of the HLG OOTF equation
[23:36:05 CEST] <hanna> so if you set that to equal 1 (i.e. the same as the reference white, which is only true for an SDR display), it *should* work
[23:36:12 CEST] <hanna> as in, have a defined result
[23:39:49 CEST] <komanda> Hi, we got a report today of an ffmpeg vulnerability (https://pastebin.com/raw/vwdXuk9m)
[23:40:23 CEST] <komanda> Does anyone know how to prevent ffmpeg from grabbing local files referenced by m3u8 playlists?
[23:42:33 CEST] <JEEB> I thought there was a white list kind of thing already implemented?
[23:43:21 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=commit;h=189ff4219644532bdfa7bab28dfedaee4d6d4021
[23:43:33 CEST] <JEEB> and quite a bit of older stuff, too
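The whitelist JEEB points at can also be set per invocation on the command line; a sketch (the filenames are placeholders):

```shell
# Allow only network protocols when opening an untrusted playlist,
# so a crafted m3u8 cannot pull in local files via the file: protocol.
ffmpeg -protocol_whitelist "http,https,tcp,tls" \
       -i untrusted.m3u8 -c copy out.mp4
```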
[23:44:23 CEST] <iive> komanda: is this a joke?
[23:46:26 CEST] <iive> komanda: you run a program locally and give it the passwd file as argument, that program creates a video containing the content of the given file as image
[23:46:35 CEST] <iive> and you upload the video to some site.
[23:46:52 CEST] <iive> how is that different than uploading your passwd to the site directly?
[23:52:39 CEST] <hanna> iive: no, no, clearly ffmpeg has a bug and should not allow you to choose your own files to upload, you should create a clip store and only allow using clips from this clip store
[23:53:05 CEST] <hanna> JEEB: have I mentioned yet, a HLG test clip would really help
[23:53:11 CEST] <hanna> surely some meme broadcasts have been done in HLG?
[23:53:15 CEST] <hanna> given how much the BBC and NHK love it
[23:53:26 CEST] <JEEB> I have one meme broadcast in HLG
[23:53:42 CEST] <hanna> 22:37 <utack> is HDR as messy as it appears to the uneducated observer, or is it a smart design behind it? <- there's a smart design behind HLG
[23:53:46 CEST] <hanna> PQ is a pile of shit
[23:53:51 CEST] <hanna> that everybody likes because hurr SMPTE has money
[23:54:25 CEST] <hanna> 22:38 <utack> for example i do not get the mastering display properties at all. why does that matter, just display the brightest pixel in the stream the brightest your local display can show? or is that too simple thinking? <- you get clipping artifacts if you just convert and clamp
[23:54:40 CEST] <hanna> unless your device exactly corresponds to the specifications of the mastering device
[23:54:42 CEST] <hanna> or you use HLG
[23:54:44 CEST] <hanna> either one of the two, really
[23:54:56 CEST] <hanna> HLG doesn't clip by design
[23:55:13 CEST] <JEEB> hanna: didn't you have TravelXP_4K_HDR_HLG.ts ?
[23:55:14 CEST] <hanna> at least not for HDR monitors
[23:55:18 CEST] <hanna> JEEB: nope
[23:55:21 CEST] <JEEB> welp
[23:55:33 CEST] <hanna> tone mapping to SDR it would still be good to know the mastering display metadata, but for HLG it's not that important
[23:55:40 CEST] <hanna> because the standard HLG peak is reasonably picked
[23:55:54 CEST] <hanna> it's only important again for PQ because the PQ people went absolutely apeshit and decided to use 10,000 cd/m² as the peak
[23:56:01 CEST] <utack> thanks for the information hanna
[23:56:03 CEST] <hanna> which is completely unreasonable even in SMPTE bizzaro-land
[23:56:29 CEST] <hanna> (the HLG reference peak is 1000 cd/m² which is much more sane)
[23:56:56 CEST] <JEEB> hm? wasn't it much higher than 1000?
[23:57:03 CEST] <hanna> nop
[23:57:15 CEST] <hanna> the signal peak is exactly 12.0 by definition, which is in scene-referred colors
[23:57:23 CEST] <hanna> technically for the OOTF you plug in whatever your peak is
[23:57:37 CEST] <hanna> but in the absence of this information, or when building a reference display, they've standardized peak=1000 cd/m², gamma=1.2
[23:57:50 CEST] <hanna> so you could say the true dynamic range is about 10:1
[23:58:14 CEST] <hanna> at least this is how mpv considers it
[23:58:21 CEST] <JEEB> right
[23:58:23 CEST] <hanna> and a dynamic range of 10:1 is also what typical HDR clips seem to be using
[23:59:19 CEST] <JEEB> oh wow, an ITU-R document that has a part called "common misconceptions on HDR"
[23:59:41 CEST] <hanna> the one wtf thing about HLG is that there's no analytical solution to reverse the EOTF and therefore encode something *to* HLG such that it round-trips on a reference monitor
[23:59:56 CEST] <hanna> but that's a consequence of making HLG backwards-compatible with SDR
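For reference, the HLG OETF being discussed is, per ITU-R BT.2100 (constants quoted from memory, worth checking against the spec itself):

```latex
E' = \begin{cases}
  \sqrt{3E}, & 0 \le E \le \tfrac{1}{12} \\
  a\,\ln(12E - b) + c, & \tfrac{1}{12} < E \le 1
\end{cases}
\qquad
a = 0.17883277,\; b = 1 - 4a,\; c = \tfrac{1}{2} - a\ln(4a)
```

The lower branch is a pure square-root curve, which is what gives HLG its SDR backwards compatibility; the two branches meet at E = 1/12, where E' = 0.5.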
[00:00:00 CEST] --- Tue Jun 27 2017