[Ffmpeg-devel-irc] ffmpeg.log.20180223

burek burek021 at gmail.com
Sat Feb 24 03:05:01 EET 2018


[00:00:39 CET] <furq> http://gentlelogic.blogspot.co.uk/2011/11/exploring-h264-part-2-h264-bitstream.html
[00:00:41 CET] <Filarius> no, it looked like an early "proof of concept" of h264
[00:00:43 CET] <furq> there's plenty of stuff like this as well
[00:00:43 CET] <shtomik__> furq: auto-inserting filter 'auto_resampler_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_1', but how do I convert that to code?
[00:00:55 CET] <Filarius> and located on same old website
[00:01:44 CET] <jkqxz> If whatever service you are using is happy to pass your streams straight through then just put your data directly in SEI messages.  If not then you'll need to encode it in the data, and that will be rather fun because you'll need some fuzziness to account for reencodes on their side.
[00:02:05 CET] <furq> shtomik__: presumably just add aresample to the end of your filterchain
[00:02:30 CET] <davemacdo> Hi all. I was on here a few days ago getting some help (thanks furq) setting up a YouTube stream of a Raspberry Pi desktop. I'm getting closer.
[00:02:57 CET] <davemacdo> I rebuilt ffmpeg with the right options, and the stream is starting, but things break down after a few seconds on YouTube.
[00:04:01 CET] <Filarius> ...i already have a simple application that prepares yuv420 frames for h264, and it uses DCT/IDCT to prepare the data for encoding with better error tolerance, but, hell, it works too slow for my current needs
[00:06:20 CET] <davemacdo> Here's a pastebin of my command and output: https://pastebin.com/brD2vZrt
[00:07:29 CET] <davemacdo> I've experimented with changing -thread_queue_size, -probesize, -b:v, and -framerate values
[00:08:02 CET] <davemacdo> ffmpeg continues to run, but YouTube stops receiving data.
[00:09:28 CET] <furq> does it run at full speed at -framerate 30
[00:09:45 CET] <furq> youtube will probably reject it at such a low framerate
[00:10:19 CET] <davemacdo> I think I've tried 30 and had the same result.
[00:10:23 CET] <davemacdo> I'll try again.
[00:10:35 CET] <davemacdo> Is the probesize error something to worry about?
[00:11:19 CET] <furq> does it still do that at 30fps
[00:11:21 CET] <davemacdo> Nope. Same result with -framerate 30
[00:11:32 CET] <furq> does it actually run at full speed though
[00:11:50 CET] <davemacdo> ffmpeg runs.
[00:12:35 CET] <davemacdo> I can let it run forever regardless of settings and it continues to log each new line to the terminal.
[00:13:02 CET] <davemacdo> I don't see anything that would lead me to believe ffmpeg crashed.
[00:18:42 CET] <davemacdo> In the output from ffmpeg, does it mean anything that frame= 5 on every line? I don't know what the frame value is. Is it incrementing?
[00:19:26 CET] <furq> you might want to make your terminal wider
[00:19:44 CET] <shtomik__> Guys, how many ways are there to set up the aresample filter in code? Only via the filter_spec string? Or is it really possible with av_opt_set_bit too?
[00:21:17 CET] <davemacdo> furq, It's the very first thing on each line.
[00:21:39 CET] <furq> yeah that should all be on one line
[00:22:58 CET] <shtomik__> furq: I set the filter, but the sound has noise ;(
[00:23:12 CET] <davemacdo> furq, it is all on one line.
[00:23:58 CET] <shtomik__> furq: [auto_resampler_0 @ 0x7fe8f7d8bb00] [SWR @ 0x7fe8f8be0e00] Using fltp internally between filters, [auto_resampler_0 @ 0x7fe8f7d8bb00] ch:2 chl:stereo fmt:flt r:44100Hz -> ch:2 chl:stereo fmt:fltp r:44100Hz
[00:24:09 CET] <davemacdo> Each line looks like this: frame=    5 fps=0.0 q=-0.0 size=     110kB time=00:00:03.04 bitrate= 295.1kbit
[00:24:17 CET] <davemacdo> There are fifteen lines like that.
[00:24:25 CET] <shtomik__> wtf, I have the same settings, but my sound has noise
[00:29:15 CET] <davemacdo> Making the terminal window wider puts the frame info all on one line that keeps updating. It's showing a lot of dropped frames.
[00:35:53 CET] <davemacdo> I get the probesize error suggesting I increase it every time. I've tried a probesize of 50000M and it still suggests increasing it.
[00:42:11 CET] <davemacdo> Do I have the probesize flag in the right place?
[00:55:02 CET] <shtomik__> How do I use av_buffersink_set_frame_size? When initializing filters?
[00:55:26 CET] <shtomik__> Can somebody explain it to me, please?
[01:04:13 CET] <davemacdo> Ok. I've moved the -probesize 500M flag to the input arguments. Now, I get exactly 151 frames and 4.99 seconds of streaming each time I run it before it stops. Any ideas?
[01:42:50 CET] <tclassict> hi, does somebody know how to use nvenc for mjpeg, like vaapi_mjpeg?
[01:46:48 CET] <DHE> nvenc only supports h264 and h265 (with specifics varying between models and generations)
[02:47:58 CET] <shtomik__> Guys, how do I add an aresample filter to the filterchain in the transcoding.c example file?
[03:20:39 CET] <DHE> shtomik__: there's a string where the (workhorse) filter and its parameters are provided. this is just like the "-af" or "-vf" in ffmpeg and accepts ,commas, for building a filter pipeline
[03:22:10 CET] <shtomik__> DHE: filter_spec = "anull"; or filter_spec = "aresample=44100, aformat=..."; Is it right? Thanks for your reply!
[03:24:17 CET] <shtomik__> DHE: avfilter_graph_parse_ptr(filter_graph, filter_spec, &inputs, &outputs, NULL); avfilter_graph_config(filter_graph, NULL);
[03:25:30 CET] <DHE> the first call takes the string from filter_spec (which is effectively what's passed to -vf or -af) and the second finishes up
[03:26:18 CET] <DHE> avfilter_graph_parse_ptr(filter_graph, "aformat=channel_formats=5.1, aresample=48000, volume=-3dB", ...);  // or whatever
[03:26:57 CET] <shtomik__> yeah, okay, it's clear... but after transcoding pcm to aac(mp4), I get distorted sound with noise ;(
[03:27:55 CET] <shtomik__> I'm using ffmpeg with -v debug, and saw that: [format_out_0_1 @ 0x7fe8f7c05980] auto-inserting filter 'auto_resampler_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_1'
[03:27:56 CET] <shtomik__> [AVFilterGraph @ 0x7fe8f7f26e60] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
[03:27:56 CET] <shtomik__> [auto_resampler_0 @ 0x7fe8f7d8bb00] [SWR @ 0x7fe8f8be0e00] Using fltp internally between filters
[03:27:56 CET] <shtomik__> [auto_resampler_0 @ 0x7fe8f7d8bb00] ch:2 chl:stereo fmt:flt r:44100Hz -> ch:2 chl:stereo fmt:fltp r:44100Hz
[03:28:54 CET] <shtomik__> But if I use the transcoding.c example for this work, I get incorrect sound ;(
[03:29:45 CET] <shtomik__> The AAC encoder has a frame size of 1024, but I always get 512 nb_samples (is this an error?)
[03:32:20 CET] <shtomik__> DHE: thanks for your reply!
[03:40:41 CET] <shtomik__> DHE: tell me, please, how to set av_buffersink_set_frame_size(buffersink_ctx, 1024); with respect to transcoding.c?
[10:01:00 CET] <Fyr> guys, does FFMPEG support raw image format?
[10:31:02 CET] <Romano> Anyone here?
[10:34:53 CET] <Romano>  ffmpeg -i CompDelivery.mp4 -f segment slices/out%03d.m4s --> Output: "Output file #0 does not contain any stream"
[10:35:24 CET] <Romano> Why can't ffmpeg create m4s files??
[10:36:45 CET] <Fyr> Romano, what is "m4s" file extension?
[10:37:04 CET] <Fyr> I can't find it over the Internet.
[10:37:44 CET] <Romano> Yup, that's the problem, I can't find it either
[10:38:02 CET] <Romano> It's the format MPEG-DASH generates for segmentation
[10:38:07 CET] <Fyr> Romano, what are you trying to achieve?
[10:39:16 CET] <Romano> I'm looking to create m4s files separately from the segmentation process so I can replace some segments with others
[10:43:25 CET] <Fyr> Romano, why are you trying to divide a video into segments? maybe there is a better way. =/
[10:44:13 CET] <Romano> MPEG-DASH uses segments to provide variable bitrate streaming, that's why
[10:46:31 CET] <Fyr> ok, mp4box is your friend.
[10:49:03 CET] <Romano> Can MP4Box convert a file into an m4s?
[10:49:28 CET] <Fyr> looks like
[14:31:57 CET] <H3> Hello, I wonder if it is possible to set the output duration of an audio file to the same as the input (-i effects.wav -c:a libfdk_aac -profile:a aac_he -b:a 64k effects-cbr-64.mp4)
[14:32:54 CET] <JEEB> it should be the same if whatever reading that thing knows how to handle the encoder delay signaling that is happening with mp4
[14:32:58 CET] <JEEB> ffmpeg.c doesn't do it
[14:33:03 CET] <JEEB> since it tries to keep all of the data
[14:33:23 CET] <JEEB> so think of it like this, you have an mp4 with timestamps starting from a negative value
[14:33:37 CET] <JEEB> then you have an edit list that tells the player to start from timestamp zero
[14:33:54 CET] <JEEB> now what ffmpeg.c does is it sees the negative stuff, and looks at WAV
[14:34:08 CET] <JEEB> WAV cannot have negative timestamps so it will just make that negative value zero
[14:34:12 CET] <JEEB> and boom! headshot :P
[14:34:26 CET] <JEEB> you get the full PCM samples instead of the actual original length
[14:34:27 CET] <H3> Okay, what should I do to solve it?
[14:34:53 CET] <JEEB> if you want to use ffmpeg.c for the procedure of dumping audio back to WAV then file a bug
[14:35:01 CET] <JEEB> if you're using something else
[14:35:02 CET] <H3> the effect.wav is actually an audio sprite, with x sounds whose positions need to stay intact
[14:35:11 CET] <Fyr> JEEB, is there a way to set the duration of the output file to one of the input files?
[14:35:31 CET] <JEEB> then file a bug there since whatever you're using isn't taking into account the offset mentioned in the edit list
[14:35:34 CET] <JEEB> :P
[14:35:39 CET] <JEEB> I just know that ffmpeg.c does it wrong as well
[14:36:13 CET] <JEEB> H3: I understand, but modern ffmpeg.c does flag the encoder delay correctly into mp4. so whatever you're using to read that mp4 isn't taking that into account
[14:38:16 CET] <H3> Ohh okay
[14:39:00 CET] <H3> It will be used on browsers
[14:43:29 CET] <H3> Is there a way to avoid the delay? So it just has the exact same time stamps
[14:56:11 CET] <kepstin> H3: I've written some browser audio stuff, and yeah, a *lot* of browsers get the delay on mp3 and aac wrong. If you know the delay you can manually trim it after decoding - or if possible just use vorbis or opus instead, which all browsers get right if they support the codec at all
[14:57:05 CET] <kepstin> I actually ended up using a *javascript mp3 decoder* (emscripten) rather than the browser built-in one, so it could trim the delay correctly.
[14:59:59 CET] <H3> What tool did you use to trim the sound? I've never worked with sound before so I just got Audacity and also got the ffmpeg plugin to trim it myself, but it still gets messed up during the export
[15:02:00 CET] <kepstin> if you're working on an audio effect in a browser game you're probably using web audio api, and the decoder there gives you an array of samples, so it's trivial to edit it after decoding
[15:02:24 CET] <kepstin> but the problem is that browsers are inconsistent, so you don't know whether or not you need to trim and how much
[15:05:26 CET] <kepstin> you can't trim before encoding because the extra data is added *by the encoder* - it's encoder delay before the audio and padding to frame size at the end. The decoder is supposed to remove it, but :/
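The trim kepstin describes can be sketched in Python (an illustration with made-up numbers, not the delay of any particular encoder):

```python
# Sketch of trimming encoder delay and end padding after decoding:
# the encoder prepends some samples of delay and pads the final frame,
# so after decoding you drop the delay from the front and cut the
# buffer back to the original length. The numbers are illustrative.

def trim_decoded(samples, encoder_delay, original_length):
    """Remove encoder delay and end padding from a decoded buffer."""
    return samples[encoder_delay:encoder_delay + original_length]

# Example: 100 real samples, 10 samples of delay, padded out to 128.
decoded = [0] * 10 + list(range(100)) + [0] * 18
trimmed = trim_decoded(decoded, encoder_delay=10, original_length=100)
```

The catch kepstin points out is exactly that `encoder_delay` is not reliably known in the browser, which is why sample-accurate codecs like opus avoid the problem.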
[15:08:20 CET] <H3> Okay, the only reason Im doing all this it because i want to reduce the size of our audio files
[15:08:28 CET] <H3> mybe im going the wrong way about it
[15:09:18 CET] <H3> Today we have a working process that converts our audio sprites to mp4 and webm (for browser support) though fluent-ffmpeg
[15:10:01 CET] <H3> And by using libfdk_aac i noticed i got down the size by several mbs
[15:12:07 CET] <kepstin> really, just use opus where available.
[15:12:28 CET] <kepstin> browsers that support it are sample-accurate in decoding, and the quality is good for the size
[15:12:59 CET] <kepstin> (of course you will need a fallback depending on your browser support requirements)
[15:18:36 CET] <H3> kepstin: great advice, I'll dig into it, thanks!
[15:19:51 CET] <kepstin> if you're feeling like going the crazy route, I have a precompiled asm.js version of mpg123 that has sample-accurate decoding: https://github.com/kepstin/aurora-mpg123.js
[15:20:15 CET] <kepstin> but if you can avoid doing js audio decoding that's probably a better option ;)
[15:24:47 CET] <H3> Ohh nice!
[15:25:37 CET] <H3> I need to learn more about audio, I know too little about all the different formats and such
[15:25:59 CET] <iranen> just use OPUS
[15:52:25 CET] <H3> How do you guys create audio sprites?
[15:55:21 CET] <saml> what is audio sprite
[15:56:17 CET] <H3> Instead of having 10 audio files you have all the sounds in one audio file with like 1 sec of silence in between
[15:56:41 CET] <H3> and a .json file that tells you where each sound is and its duration
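The layout H3 describes might look like this (a hypothetical manifest; the file name and field names are made up for illustration):

```python
import json

# Hypothetical audio-sprite manifest of the kind described above:
# one audio file, plus JSON saying where each sound starts and how
# long it runs, with ~1 s of silence between sounds.
sprite = {
    "src": "effects.opus",
    "sounds": {
        "jump": {"start": 0.0, "duration": 0.8},
        "coin": {"start": 1.8, "duration": 0.4},  # 0.8 + 1.0 s gap
    },
}
manifest = json.dumps(sprite)
```

A player then seeks to `start` and stops after `duration` for each named sound.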
[15:59:34 CET] <saml> oh i'm not sure which container supports such
[16:00:08 CET] <saml> like, you're saying play audio t1~t2  while playing video f1~f2
[16:00:42 CET] <saml> or are you looking to concat 10 audios with silence?
[16:01:00 CET] <JEEB> H3: it doesn't really help with compressed audio since you're not gaining any compression
[16:01:12 CET] <JEEB> might as well just have them separate, and yes - you have to handle encoder delay somehow
[16:01:30 CET] <JEEB> opus might have an effectively hard-coded encoder delay, and thus everything might play it well enough
[16:01:33 CET] <JEEB> in that case, use it
[16:03:10 CET] <saml> how can I play two videos side by side? scale and offset and overlay?
[16:03:54 CET] <Fyr> by using scale, offset and overlay fitlers?
[16:04:12 CET] <saml> yeah
[16:04:33 CET] <saml> or is there a way without reencoding?
[16:04:57 CET] <Fyr> no
[16:05:31 CET] <saml> i'm trying to pick a frame at the same interval from two videos of different framerate
[16:06:02 CET] <saml> fps filter picks different frames if videos have different framerate
[16:06:46 CET] <saml> ffmpeg -i a.mp4 -i b.mp4 -filter_complex '[0] fps=10 [a]; [1] fps=10 [b]; [a][b] psnr' -f null -
[16:07:06 CET] <saml> i thought that will pick matching frames and psnr will work alright
[16:08:00 CET] <saml> but if a.mp4 and b.mp4 have different framerate, fps filter doesn't pick the same frame.  i manually verified by writing result of fps filter to '%04d.jpg'
[16:08:49 CET] <Fyr> I would convert it into PNG files and combine them back into a video.
[16:09:54 CET] <saml> but problem is  ffmpeg -i a.mp4 -filter_complex 'fps=10' '%04d.png'   doesn't select the same frames as b.mp4
[16:10:26 CET] <saml> i tried to use select filter.. but failed as well
[16:10:53 CET] <saml> picking the closest frame on every 10th of a second.
[16:14:14 CET] <Fyr> why do you not convert it into PNGs with the necessary framerate?
[16:36:01 CET] <saml> Fyr, how do i select the necessary frame?  imagine two videos of same duration but with different framerate.   one video is created from the other using -r as output option
[16:36:45 CET] <saml> no i forgot what i wanted to do
[16:36:57 CET] <Fyr> saml, convert the video into PNG, delete unnecessary frames or duplicate the necessary ones.
[16:37:28 CET] <saml> yeah that's what -r option does
[16:37:53 CET] <Fyr> saml, if the -r does it, then the problem is solved.
[16:37:55 CET] <saml> i didn't want to create intermediate video using -r
[16:38:10 CET] <saml> i wanted filter version of -r. but there's no filter that behaves the same as -r
[16:38:47 CET] <saml> fps filter removes frames at a regular interval.  -r removes frames differently.  if i'm going from high framerate to lower
[16:40:13 CET] <Fyr> saml, how differently?
[16:40:22 CET] <Fyr> http://ffmpeg.org/ffmpeg-all.html#toc-Video-Options
[16:40:36 CET] <Fyr> >>As an output option, duplicate or drop input frames to achieve constant output frame rate fps.
[16:51:04 CET] <H3> You guys have an example for converting .wav to .opus?
[16:51:22 CET] <furq> ffmpeg -i foo.wav bar.opus
[16:51:23 CET] <furq> hth
[16:52:53 CET] <H3> ffmpeg -i effects.wav -b:a -compression_level 5 effects.opus
[16:53:16 CET] <H3> gives me:  "Unable to find a suitable output format for '5' 5: Invalid argument"
[16:54:11 CET] <furq> -b:a takes an argument
[16:55:36 CET] <H3> Ohh, missed that. Thanks!
[16:55:45 CET] <furq> also you shouldn't set compression_level unless you're trying to do low-delay stuff
[16:55:51 CET] <furq> it defaults to 10 which is the maximum
[16:57:05 CET] <H3> What's the best approach if i want to get the size down?
[16:57:16 CET] <furq> lower the bitrate
[16:57:17 CET] <H3> lower the bitrate?
[16:57:22 CET] <H3> ok, ty!
[16:57:37 CET] <furq> -b:a is vbr mode for opus, not abr
[16:57:50 CET] <furq> so -b:a 128k will give you something like lame -V5
[16:58:06 CET] <furq> that's the recommended mode to use
[16:58:47 CET] <furq> also obviously "something like" in terms of nominal bitrate, not quality
[16:59:55 CET] <kepstin> like, if you're doing mono sound effect clips, you can probably use around 32K with good results.
[17:16:06 CET] <classic_user> Hi, does somebody know how to use vaapi decode to extract an MJPEG stream?
[17:32:19 CET] <jkqxz> What do you mean by "extract"?  The decoder can be used to decode them.
[17:41:18 CET] <classic_user> ok, h264 input -> -f mjpeg output
[17:42:00 CET] <classic_user> I see the vaapi_mjpeg codec, but my gpu doesn't support it.
[17:42:56 CET] <classic_user> so... my gpu supports h264 decode in hardware, and now I want to encode to mjpeg with a software encoder
[17:43:36 CET] <saml> Fyr, https://i.imgur.com/RHEGRt5.jpg   fps drops frame 02, 05, ...   -r drops frame 08, 11, ... etc
[17:43:44 CET] <saml> framerate filter blends
[17:44:05 CET] <saml> i want a filter that can drop the same frame as -r
[17:44:24 CET] <saml> original was 60fps.   -r 40   fps=40 framerate=40
[17:45:13 CET] <kepstin> wow, the results from -r are really bad, that's uneven frame dropping and it could cause (very slight) a/v sync issues. I thought -r just inserted an fps filter.
[17:46:27 CET] <kepstin> saml: run one ffmpeg with the -r option, outputting raw video, and pipe that into a second ffmpeg, I guess.
[17:46:37 CET] <jkqxz> classic_user:  You should be able to just specify "-hwaccel vaapi" on the input and "-c:v mjpeg" on the output to do that.
[17:47:12 CET] <kepstin> saml: You *might* be able to get closer by changing the rounding mode on the fps filter
[17:47:35 CET] <kepstin> saml: try fps=fps=40:round=up maybe
[17:47:47 CET] <jkqxz> classic_user:  The mjpeg_vaapi encoder requires Braswell/Skylake or later Intel platform.
[17:48:43 CET] <classic_user> I can successfully use this command: ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i input.h264 -r 2 -vf 'deinterlace_vaapi,scale_vaapi=w=320:h=240,hwdownload,format=nv12' -f mjpeg output
[17:53:58 CET] <saml> kepstin, what's raw video? i tried various containers and codecs to pipe into second ffmpeg.. but they all result in bad psnr. i haven't dumped frames and inspected manually
[17:54:14 CET] <saml> i'll try round=up
[17:56:09 CET] <kepstin> saml: try something like "ffmpeg <input stuff> -r 40 -c:v rawvideo -f nut - | ffmpeg -i - <output stuff>"
[17:56:45 CET] <kepstin> if the round=up option doesn't match the behaviour
[18:13:15 CET] <colekas> i'm trying to measure audio using ebur128 of a multicast stream and the input has 2 audios (0:1 and 0:2, 0:0 is video) however, no matter what -map option I put it always seems like the first audio (0:1) is selected... does anyone know if ffmpeg/ebur128 takes a preference based on language or something silly like that?
[18:14:03 CET] <colekas>  ./ffmpeg -probesize 1M -analyzeduration 1M -hide_banner -threads auto -nostats -drc_scale 0 -i udp://blah -map 0:2 -filter_complex ebur128=peak=true -f null -
[18:18:51 CET] <kepstin> colekas: with that -filter_complex command you've only put in one ebur128 filter, and since you haven't specified any inputs it's just grabbed the first available input
[18:19:15 CET] <kepstin> colekas: if you use -af instead of -filter_complex, it will separately filter each audio stream
[18:19:48 CET] <kepstin> colekas: or you can put multiple ebur128 in the -filter_complex command, and specify the input to use for each one.
[18:33:20 CET] <Johnjay> how do i get the average and max dB of audio again?
[18:45:01 CET] <DHE> the volumedetect filter
[19:05:44 CET] <Johnjay> it says mean volume -43.2dB and  max_volume -14.2 dB
[19:10:50 CET] <Johnjay> is sound measured with the power decibel or the amplitude decibel?
[19:10:56 CET] <Johnjay> wikipedia says the latter involves an extra square root
[19:11:26 CET] <Johnjay> hmm it says sound is a field, so the latter is
[19:21:12 CET] <kepstin> Johnjay: note that "volume" doesn't really correspond to loudness as perceived by humans - you might be interested in using the ebur128 filter to get a loudness measurement.
[19:22:46 CET] <Johnjay> hrm ok. i'm trying to adjust the loudness of a file and i've been adjusting it in increments of 1dB in the volume filter
[19:22:54 CET] <Johnjay> er no actually i was mulitiplying it
[19:22:58 CET] <Johnjay> like x1.5 or x2
[19:24:49 CET] Action: kepstin notes that a change of ±6dB in audio is (very close to) multiplying the LPCM signal values by 2 or ½.
[19:25:57 CET] <Johnjay> would that correspond to volume=6dB?
[19:25:58 CET] <kepstin> the "volume" filter in ffmpeg can either take a multiplier or a value in dB (with unit suffix)
[19:26:20 CET] <Johnjay> i.e. volume=2 is similar to volume=6dB
[19:26:29 CET] <kepstin> approximately, yeah.
[19:29:25 CET] <Johnjay> hmm ok.
[19:29:42 CET] <Johnjay> basically i'm making a file for an alarm clock and i need to test different loudness levels
[19:30:07 CET] <Johnjay> what's a good increment of dB to use? 3?
[19:30:58 CET] <kepstin> hmm, 3 is kind of a big step (particularly if you're using headphones)
[19:31:28 CET] <Johnjay> my idea is i want my alarm to be just loud enough to wake me up but not too loud so i need to try a variety of levels
[19:31:48 CET] <kepstin> iirc, usually people need a change of around 1-1.5dB to be able to notice a difference in level.
[19:32:05 CET] <Johnjay> i see
[19:32:05 CET] <kepstin> does this alarm clock not have its own volume control?
[19:32:48 CET] <Johnjay> yeah lol
[19:33:05 CET] <Johnjay> but it's not relevant for this purpose
[19:33:46 CET] <kepstin> well, it is, because how loud your output will be is a combination of the levels in your audio file and the setting of the volume knob :)
[19:35:49 CET] <colekas> kepstin: thanks!!!
[19:40:23 CET] <Johnjay> hmm so on the dB scale used for sound, 20 dB is a whisper, 60 dB is a conversation at 100ft, and 140 is a jet taking off next to you
[19:41:43 CET] <kepstin> Johnjay: that's a dB SPL
[19:41:55 CET] <kepstin> Johnjay: in digital audio you're talking about dBFS, which is a completely different scale
[19:42:08 CET] <Johnjay> oh no. is one that square root thing
[19:42:11 CET] <Johnjay> and other is not?
[19:42:29 CET] <kepstin> no, the relative size of a dB is the same in both, it's just the reference level that is different
[19:42:48 CET] <kepstin> in dB SPL, 0dB is the threshold of human hearing for quietest sound (approximately)
[19:42:58 CET] <kepstin> in dBFS, 0dB is the loudest possible representable sound
[19:43:22 CET] <Johnjay> ok. i think what i'll do is use volume=1dB, volume=2dB, etc up to some number. then test those out and see which ones work the best
[19:43:51 CET] <Johnjay> kepstin: is that why in audacity and ffmpeg it shows db as negative?
[19:43:55 CET] <furq> yes
[19:44:08 CET] <Johnjay> i thought it was something like that
[19:44:09 CET] <Johnjay> thanks
[19:44:27 CET] <furq> dB SPL is a useless metric for digital audio because it depends on the actual playback chain
[19:44:32 CET] <Johnjay> the file i'm working with is natural sounds track i found and it says max vol is -14.3 dB
[19:44:35 CET] <Johnjay> so that's dBFS
[19:44:39 CET] <furq> right
[19:44:55 CET] <kepstin> Johnjay: yes. that means you can increase the volume by up to 14.3dB without introducing clipping.
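kepstin's headroom calculation can be sketched directly (an illustration of the arithmetic, not an ffmpeg API):

```python
# A peak at -14.3 dBFS leaves 14.3 dB of headroom before 0 dBFS;
# as a linear multiplier that is 10 ** (14.3 / 20), about 5.19.

def headroom_db(peak_dbfs):
    """Gain in dB that can be applied before the peak reaches 0 dBFS."""
    return -peak_dbfs

def max_safe_multiplier(peak_dbfs):
    """Largest linear gain that avoids clipping the given peak."""
    return 10 ** (-peak_dbfs / 20)

headroom = headroom_db(-14.3)   # 14.3
```

Anything above that multiplier pushes the loudest samples past full scale and clips them.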
[19:45:21 CET] <Johnjay> so is there no way to infer loudness from a dBFS number? it just depends on the playback equipment?
[19:45:33 CET] <kepstin> Johnjay: like I said, it depends on your volume knob :)
[19:45:52 CET] <Johnjay> well the volume knob is very limited on every pc or phone i've ever used
[19:45:55 CET] <kepstin> and amplifier power, etc.
[19:45:57 CET] <Johnjay> so i assume they make some kind of assumptions
[19:46:24 CET] <Johnjay> e.g. -10dB will sound like a conversation approximately on 50% volume knob
[19:46:40 CET] <furq> yeah there's no kind of standardisation at all like that
[19:47:34 CET] <Johnjay> ok.
[19:47:39 CET] <kepstin> the volume limiter seen on e.g. phones is based on music, and music typically is mastered to be as loud as possible, so they limit it such that the loudest possible sound will be no more than XX dB SPL when played through common type of headphones
[19:48:01 CET] <furq> phone max volume is artificially limited by the EU now iirc
[19:48:03 CET] <furq> idk about overseas
[19:48:06 CET] <Johnjay> right. i mean when you watch youtube vids you don't need a mixer
[19:48:11 CET] <Johnjay> to constantly adjust volume levels
[19:48:23 CET] <kepstin> youtube normalizes the volume when they encode videos
[19:48:31 CET] <furq> do they really
[19:48:37 CET] <kepstin> (or in the player, not totally sure)
[19:48:42 CET] <furq> i've never noticed that
[19:48:46 CET] <bazzy> When I try the following command with mkv as output ext: `ffmpeg -i outtrim.mkv -filter:v "setpts=2.0*PTS" outslo.mkv` then the timestamps play back too fast. mp4 works fine though. I'm not sure what to do I've searched the docs for "timestamp" and tried all config options I could find such as -copyts, copytb, -avoid_negative_ts 1
[19:48:49 CET] <furq> i've definitely had to turn the volume up for some videos
[19:49:02 CET] <Johnjay> normalize, as in make the max sound 0 db?
[19:49:08 CET] <furq> no that's just amplifying
[19:49:20 CET] <Johnjay> oh
[19:49:34 CET] <kepstin> Johnjay: they use some sort of perceptual loudness measurement, and if the video is outside an acceptable range they reduce it
[19:49:49 CET] <kepstin> i don't know if they also increase quiet videos or not
[19:50:04 CET] <Johnjay> if you squeeze down the entire waveform is that compression, not normalization?
[19:50:14 CET] <furq> the terms get conflated a bit
[19:50:30 CET] <furq> that's technically DRC but a lot of things call that normalisation
[19:50:39 CET] <kepstin> the general term "normalization" just means "make everything normal", so it depends on what you mean by normal ;)
[19:50:48 CET] <alexpigment> normalization is usually about "peak normalization", which is not compression at all. it just makes the peaks be a certain level (e.g. 0db) and everything else is scaled proportionally
[19:51:05 CET] <alexpigment> compression is the only way to decrease the dynamic range and make something sound "louder" at any volume
[19:51:21 CET] <kepstin> yeah, compression is also an overloaded term, because it usually refers to non-linear adjustments
[19:51:58 CET] <kepstin> and also to the things that encoders do to reduce file size, which is completely separate ;)
[19:52:01 CET] <alexpigment> well, i agree that the word "compression" without context means several different things :)
[19:52:02 CET] <furq> but yeah i'm pretty sure youtube doesn't touch the dynamic range
[19:52:06 CET] <furq> or if they do then this is recent
[19:52:28 CET] <alexpigment> youtube messing with the dynamic range seems like something that would be hard to automate and hence they won't do it
[19:52:38 CET] <furq> i watch a few channels which are live comedy in front of an audience and they're just peak normalised
[19:52:47 CET] <alexpigment> on the other hand, they might do some sort of dynamic range compression with certain partners..
[19:52:49 CET] <furq> and it's very obvious because the actual mic feed is always too quiet
[19:53:04 CET] <furq> yeah if they do that i assume it's an option for uploaders
[19:53:09 CET] <furq> it would be crazy for them to do that to everything
[19:53:13 CET] <furq> especially given how much music is on there
[19:53:22 CET] <Johnjay> peak normalize sounds different than just clipping everything above a certain level?
[19:53:32 CET] <furq> uh
[19:53:42 CET] <Johnjay> or like, clipping and then normalizing
[19:53:50 CET] <Johnjay> to get rid of the largest spikes
[19:54:05 CET] <furq> it's just amplifying the loudest point to 0dB
[19:54:16 CET] <furq> i might have said that wasn't normalizing before, ignore that
[19:55:16 CET] <alexpigment> johnjay: you're talking about limiting, i believe
[19:55:23 CET] <kepstin> I think youtube just measures integrated loudness (using some algorithm, either replaygain or ebur128 who knows?), and lowers the video playback volume if the loudness is above a threshold.
[19:55:33 CET] <Johnjay> ok so that's normalizing got it
[19:55:35 CET] <furq> yeah that seems likely
[19:55:59 CET] <Johnjay> i use audacity mostly so
[19:56:10 CET] <Johnjay> as long as the terms you define are sort of similar to that i can use them.
[19:56:22 CET] <kepstin> the "normalize" operation in audacity is "scale peaks to 0dBFS", yeah.
[19:57:01 CET] <alexpigment> side rant, i can't even think of using audacity these days. when audition exists (yes, it's not free), audacity feels like a toy. a very janky toy
[19:57:38 CET] <Johnjay> alex, i think it's just adaptation. when you can buy the expensive tool and use it then suddenly you can't do without it
[19:57:41 CET] <alexpigment> then again, i work with audio on both a hobby and professional level, so i realize i'm biased
[19:57:44 CET] <Johnjay> even if you used gum and string before that
[19:58:18 CET] <alexpigment> yeah, i know it's not really relevant. i just always like to point out other options, because most things you pay for are immensely better than audacity
[19:58:28 CET] <kepstin> sometimes I just want a gui tool to, uh, make horrible cut up versions of carly rae jepsen songs, and audacity is fine for that. https://glitch.social/@kepstin/99572831807122922 ;)
[19:58:29 CET] <alexpigment> and people don't realize that
[19:58:38 CET] <Johnjay> that's very much appreciated, even if i dont' have the money atm to buy whatever audition is
[19:58:55 CET] <alexpigment> it's an adobe product fwiw
[19:59:15 CET] <alexpigment> they saw a tool out there that was great - cool edit pro - and they bought it and renamed it
[19:59:48 CET] <Johnjay> kepstin: lol nice
[20:00:13 CET] <Johnjay> alexpigment: yeah the dream of every small business is to be acquired by a major corporation
[20:00:22 CET] <furq> audacity is fine for most people
[20:00:43 CET] <alexpigment> johnjay: that's debatable, but i hear what you're saying
[20:00:51 CET] <alexpigment> i.e. i know what you mean ;)
[20:00:54 CET] <saml>  ffmpeg <input> -r 40 -c:v rawvideo -f nut - | ffmpeg -i - '%04d.jpg'          and  ffmpeg <input> -r 40 '%04d.jpg'       results in different frames
[20:01:00 CET] <Johnjay> alexpigment: it was slightly tongue-in-cheek, lol. but yeah
[20:01:20 CET] <Johnjay> ok thanks for the help, gtg
[20:01:28 CET] <saml> what's a good way to apply -r  and pipe to second  ffmpeg?
[20:01:30 CET] <alexpigment> later jay
[20:02:08 CET] <kepstin> saml: just never use -r, and always use the fps filter, and then your stuff will always be consistent :/
[20:02:38 CET] <alexpigment> kepstin: out of curiosity, what's wrong with using -r?
[20:02:45 CET] <alexpigment> (asking a person who uses -r a lot)
[20:03:13 CET] <kepstin> alexpigment: saml is running into an issue where it seems to be really inconsistent/non-repeatable, and it doesn't pace frames as evenly in the fps filter.
[20:04:03 CET] <kepstin> alexpigment: I have no idea why. I used to be under the impression that the -r option just throw an fps video filter on. But apparently it doesn't do that, and whatever it does isn't as good.
[20:04:03 CET] <alexpigment> but i guess if i have an input video that's, say 59.94 and i use -r to make it 30000/1001, is that problematic?
[20:04:18 CET] <alexpigment> not that i would generally ever do that to a 60fps video, but stil :)
[20:04:22 CET] <Peetz0r> hey! I am trying to record video from a VCR connected to an analog-to-USB v4l2 device, I am running fedora 27 with a 4.15 kernel and ffmpeg 3.3.6. The recording starts fine but when the actual video on the tape starts or ends (there are multiple videos on there) ffmpeg crashes. Even when I press pause on the VCR ffmpeg crashes.
[20:04:35 CET] <kepstin> alexpigment: it may or may not actually drop exactly every second frame. You'd want to check to make sure it's not introducing judder.
[20:05:14 CET] <alexpigment> well, if i just switch to using -vf fps=whatever, it should be a simple substitution?
[20:05:22 CET] <kepstin> alexpigment: yep.
[20:05:30 CET] <alexpigment> well, consider this noted
[20:05:31 CET] <alexpigment> thanks
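[Editor's note: a minimal, self-contained illustration of the fps-filter substitution discussed above. It uses lavfi's testsrc so no input file is needed; the 60000/1001 source rate is an assumed stand-in for a 59.94 capture.]

```shell
# Generate one second of ~59.94 fps test video, then halve the rate
# with the fps filter rather than the output -r option, so frame
# selection is explicit and repeatable. Output is discarded.
ffmpeg -v error -f lavfi -i testsrc=duration=1:rate=60000/1001 \
       -vf fps=30000/1001 -f null -
```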
[20:06:04 CET] Action: kepstin knows way more about the fps filter than he ever wanted to, now that he's rewritten it :/
[20:06:23 CET] <alexpigment> yay for knowledge and difficult labor-intensive work :)
[20:06:28 CET] <kepstin> tricky to rewrite such that the output is still the same as before the rewrite, but I think I've done that.
[20:06:56 CET] <Peetz0r> kepstin: https://paste.sigio.nl/pp8sjoxcq
[20:06:58 CET] <alexpigment> well, if i get some judder with fps, i'm going to knock on your door
[20:07:01 CET] <alexpigment> ;)
[20:07:11 CET] <alexpigment> "hi, is kepstin here? he really fucked my videos up"
[20:07:17 CET] <Peetz0r> (also, ffmpeg doesn't like fpaste very much it seems)
[20:07:23 CET] <kepstin> alexpigment: if you do, make sure you come with the ffmpeg output while using -v debug
[20:07:28 CET] <alexpigment> lol
[20:07:46 CET] <alexpigment> i'll print it out on my dot matrix printer and bring it to you neatly folded along the tear-lines
[20:08:23 CET] <kepstin> Peetz0r: hmm, ffmpeg says that it got "invalid input" from the capture device and exited (not crashed). It might be a driver problem.
[20:09:10 CET] <alexpigment> so there's a thing that VCRs do, and i'm not sure this is related, but they have some copy protection stuff built in, and normal capture cards without some sort of processor in between will freak out when the video signal changes
[20:09:11 CET] <kepstin> that "Dequeued v4l2 buffer contains 414720 bytes, but 829440 were expected." is strange, seems like it got only one field instead of a full frame.
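[Editor's note: the arithmetic behind the one-field guess, assuming a 720x576 PAL frame in a 2-bytes-per-pixel packed format such as YUYV - the pixel format is an assumption, not stated in the log.]

```shell
# A full 720x576 frame at 2 bytes per pixel vs. a single field
# (one field carries half the lines, i.e. 288 of 576).
echo $((720 * 576 * 2))   # full frame: 829440 bytes
echo $((720 * 288 * 2))   # one field:  414720 bytes
```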
[20:09:20 CET] <alexpigment> because it triggers the copy protection stuff
[20:09:24 CET] <Peetz0r> qv4l2 can display the video just fine while pausing and unpausing the VCR
[20:09:38 CET] <kepstin> most of these cheap usb capture devices ignore macrovision, iirc
[20:09:42 CET] <alexpigment> Peetz0r: you don't happen to have a time base corrector or perhaps a DVD recorder around, do you?
[20:09:53 CET] <Peetz0r> only a DVD player :p
[20:09:59 CET] <alexpigment> hmmm
[20:10:00 CET] <alexpigment> ok
[20:10:35 CET] <kepstin> but yeah, if it plays back in qv4l2 fine then it's probably not that
[20:11:12 CET] <kepstin> just some glitch in the capture card where it's returning broken frames or half-frames when re-syncing to the signal on transitions i guess
[20:11:26 CET] <Peetz0r> can I tell ffmpeg to ignore those?
[20:12:52 CET] <kepstin> Peetz0r: hmm. It depends on how exactly the error is getting into ffmpeg. Try adding the option "-max_error_rate 1.0" to the start of your ffmpeg command.
[20:13:24 CET] <Peetz0r> you mean, before -i?
[20:14:10 CET] <kepstin> i don't think it actually matters where it is, but I always put it at the start
[20:14:45 CET] <kepstin> also, make sure you have the tv standard set correctly - it's trying to read PAL right now, so you'll have to change it if you want NTSC.
[20:15:25 CET] <Peetz0r> nope, I'm in Europe
[20:15:34 CET] <Peetz0r> but I could try the different PAL variations
[20:15:43 CET] <kepstin> k. should be fine then, that's probably not the issue
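[Editor's note: a quick way to confirm the -max_error_rate option is accepted without touching the capture hardware - the actual v4l2 device path is machine-specific, so this sketch exercises the option against a synthetic lavfi source instead.]

```shell
# -max_error_rate 1.0 tells ffmpeg to keep going regardless of the
# fraction of errored frames; as a global option it can go before
# the input, as kepstin suggests.
ffmpeg -max_error_rate 1.0 -v error \
       -f lavfi -i testsrc=duration=0.2:rate=25 -f null -
```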
[20:16:24 CET] <Peetz0r> with -max_error_rate 1.0 the same error happens, but less frequently I think
[20:16:52 CET] <Peetz0r> but maybe that's just normal variation?
[20:17:18 CET] <Peetz0r> maybe this only happens some of the time, because it only happens when I pause at even (or odd) fields in a frame?
[20:35:51 CET] <kepstin> Peetz0r: who knows, you'd need someone to look into the hardware or kernel driver to find out exactly what's going wrong.
[20:36:59 CET] <saml> is writing a custom filter hard?
[20:37:16 CET] <saml> you should write a tutorial on how
[20:37:28 CET] <saml> or i should read filter code :P
[20:38:05 CET] <saml> https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_fps.c is this it?
[20:38:28 CET] <saml> this looks difficult. i give up
[20:41:03 CET] Action: kepstin would not recommend using the fps filter - either the old one or his new rewritten one - as a reference.
[20:41:41 CET] <kepstin> Well, i guess it depends on what exactly you want your filter to do
[20:41:57 CET] <kepstin> and even then, don't use the old one as a reference ;)
[20:43:39 CET] <saml> given two videos of different framerate, my filter will do some magic and calculate psnr
[20:44:16 CET] <saml> or my filter will align frames somehow  so that I can use frame-by-frame filters
[20:44:35 CET] <saml> but the more i think of it.. it's a difficult problem
[20:45:19 CET] <saml> for example, if i had 1 second video at 2fps.  that's reference. the entire video is 2 frames.   and I did an encoding either using -r 1  or fps=1  . so the encoded has 1 frame. not sure which of the two frames got dropped.
[20:46:06 CET] <saml> you need to compare the encoded frame to reference's possible candidates
[20:46:41 CET] <kepstin> and pick whichever one is closest, yeah. Gonna be a complex filter with multiple inputs and buffered frames :/
[20:47:17 CET] <kepstin> and probably still not really help with your final goal, since it still won't let you account for the perceptual quality loss caused by reducing video framerate.
[20:47:25 CET] <saml> true
[20:47:44 CET] <saml> not sure what vmaf does. not even sure if it's doing frame-by-frame comparison
[20:47:52 CET] <Peetz0r> kepstin: I'll find a kernel developer and give him my hardware then
[20:47:55 CET] <kepstin> pretty sure vmaf is frame by frame.
[20:48:09 CET] <saml> one thing i notice is that run_vmaf executable is much faster than ffmpeg filter version
[20:48:21 CET] <kepstin> if you figure out how to account for framerate changes, let me know so I can read your phd thesis paper on the subject.
[20:48:22 CET] <Peetz0r> (not a joke btw, one of my friends is kernel developer and sometimes likes a new project)
[20:48:51 CET] <saml> yeah i was gonna say this is a research project, out of my league
[20:50:35 CET] <kepstin> as for computing the quality loss of a video not accounting for framerate changes, you just need to make sure that you're using a consistent, reproducible method of changing framerate
[20:50:38 CET] <saml> it's like you have a choir of 100 people. and few got dropped. and by listening to two mp3 files you have to determine who is absent.
[20:50:46 CET] <kepstin> (in other words, just always use the fps filter)
[20:51:13 CET] <durandal_1707> use minterpolate
[20:51:52 CET] <saml> minterpolate makes room temperature rise, saving heating bill
[20:51:54 CET] <kepstin> hmm, but minterpolate will occasionally give you motion artifacts, won't it?
[20:52:15 CET] <durandal_1707> you fix that manually
[20:52:24 CET] <kepstin> be interesting to see how vmaf scores compare between original video and video with fps reduced, then fps increased with minterpolate
[20:52:41 CET] <alexpigment> i've been meaning to ask about minterpolate
[20:53:05 CET] <alexpigment> i found some videos from archive.org - old MTV captures - but the video capture is dumb and has random dropped frames
[20:53:25 CET] <alexpigment> the dude who uploaded them wouldn't let me help for future captures, and took offense even though he wasn't aware of what i was talking about
[20:53:41 CET] <alexpigment> so i wanted to know if there's a way to make minterpolate happen only on duped frames
[20:54:05 CET] <kepstin> alexpigment: use a filter to drop the duped frame to leave a timestamp gap, then minterpolate should fill it in
[20:54:08 CET] <kepstin> i think
[20:54:20 CET] <durandal_1707> thats currently not possible
[20:54:35 CET] <kepstin> oh, does it need cfr input? :/
[20:54:38 CET] <durandal_1707> filter doesnt behave that way
[20:55:37 CET] <alexpigment> i think i saw somewhere that someone had written an avisynth filter to do it
[20:55:49 CET] <alexpigment> but i haven't messed with it because i don't really have an avisynth workflow yet
[20:57:47 CET] <alexpigment> it would be a nice filter to have directly in ffmpeg - drop duped frames and interpolate only those dropped frames
[20:58:10 CET] <alexpigment> i've seen so many videos out there that have a distracting judder that would only be fixable with that workflow
[20:58:13 CET] <kepstin> just taking a quick read through of the minterpolate filter, it *looks* like it should fill in timestamp gaps fine?
[20:58:32 CET] <alexpigment> well, what is durandal_1707 talking about then?
[20:58:48 CET] <kepstin> but keep in mind that it's a *quick* read through, and a complex filter :)
[20:59:18 CET] <alexpigment> is there a particular dupe dropping filter you'd recommend?
[20:59:23 CET] <alexpigment> i haven't looked lately to see if there are more than one
[20:59:23 CET] <saml> can i think of video as analog signal and i can sample  frames at some interval to have consistent frame selection of two videos?
[20:59:26 CET] <durandal_1707> well, try and report findings
[20:59:35 CET] <kepstin> alexpigment: mpdecimate should be fine to drop dup frames
[20:59:38 CET] <alexpigment> will do. i'll get back to you next week on this 10 minute sample ;)
[20:59:45 CET] <alexpigment> k, thanks kepstin
[20:59:46 CET] <saml> instead of trying to pick frames from existing frames
[21:00:16 CET] <saml> i guess what i'm saying is to play the video and do screen capture at certain frequency
[21:00:31 CET] <kepstin> saml, to do that you need to interpolate between frames.
[21:00:37 CET] <saml> then regardless of framerate of videos, i get to choose matching frames of the two videos
[21:00:37 CET] <alexpigment> kepstin: i assume it only drops pure duplicate frames?
[21:00:42 CET] <alexpigment> or is there a threshold i need to set
[21:00:51 CET] <kepstin> alexpigment: please read the docs.
[21:00:55 CET] <alexpigment> fair enough
[21:01:07 CET] <kepstin> saml: and linear interpolation, like the 'framerate' filter does, looks terrible.
[21:01:10 CET] <saml> so, is there a filter that simulates video playback and screen capture  via interpolation and sampling?
[21:01:57 CET] <durandal_1707> mpv have something like that...
[21:01:59 CET] <saml> yeah probably interpolation introduces too much noise
[21:02:15 CET] <kepstin> saml: you could build a filter that drops non-matching frames (via some threshold, probably compare frame psnr) fairly easily.
[21:02:36 CET] <saml> whoa
[21:03:12 CET] <saml> if psnr(a1,b1) < threshold:  b1++;  or something
[21:03:17 CET] <kepstin> I mean, assuming you're a decent C coder and are familiar with ffmpeg filter writing, the actual logic there isn't hard.
[21:03:24 CET] <kepstin> most of the work's done by the framesync framework.
[21:03:47 CET] <kepstin> you just have to check if they match, if not get the next set of frames to compare, if so, output both frames.
[21:03:48 CET] <gamlegaz> Does anyone have a good resource/link on how to write a seek function for a custom IO context?
[21:04:03 CET] <kepstin> but the 'check if they match' is hard :)
[21:04:38 CET] <saml> why "they"?  I thought you were going to match frame by frame advancing pointers for psnr below threshold
[21:05:11 CET] <saml> similar to merging two sorted lists into one
[21:05:31 CET] <kepstin> saml: sure, but what threshold? if it's too high then it might not pass matching frames because the encoder quality was too low; if the threshold is too low then it might pass through mismatched frames
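[Editor's note: the per-frame PSNR comparison being discussed can be prototyped with ffmpeg's existing psnr filter before writing any C. This sketch feeds the same synthetic source in twice, so the inputs are identical and the reported average PSNR is infinite.]

```shell
# Compare two streams frame by frame with the psnr filter; the
# summary line is logged to stderr at the end of the run. With
# identical inputs the averages come out as "inf".
ffmpeg -v info -f lavfi -i testsrc=duration=1:rate=25 \
       -f lavfi -i testsrc=duration=1:rate=25 \
       -lavfi "[0:v][1:v]psnr" -f null -
```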
[21:05:32 CET] <saml> but yeah, where do I start to learn ffmpeg filter writing?
[21:05:48 CET] <saml> ah that's true :(
[21:06:01 CET] <saml> i guess.. i could use one of those image fingerprint algorithms
[21:06:30 CET] <saml> that's used in reverse image search
[21:06:40 CET] <saml> but first, i need a hello world filter
[21:06:43 CET] <JEEB> look at some simple video or audio filter
[21:06:53 CET] <kepstin> i suspect those would be too loose at matching - if you have a slow pan or zoom or something it might still match prev/next frames
[21:06:54 CET] <JEEB> it's most likely a struct at the end that defines it and has the function pointers
[21:07:09 CET] <JEEB> and then you have the basic init/feed etc functions
[21:07:11 CET] <JEEB> :)
[21:07:31 CET] <JEEB> example http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavfilter/vf_chromakey.c;h=88414783bc8001bb39f7f53e02aa6731dae403ab;hb=HEAD
[21:07:33 CET] <saml> during filter development, you keep recompiling ffmpeg, right? filters aren't dynamic plugins
[21:07:38 CET] <JEEB> yes
[21:07:48 CET] <saml> noice thanks
[21:07:48 CET] <JEEB> although after the initial compilation make should just compile the changed files
[21:07:58 CET] <JEEB> (and re-link libavfilter/ffmpeg in your case)
[21:08:18 CET] <saml> okay you can first write a lua scripting filter then i can just write lua script :P
[21:08:29 CET] <JEEB> sounds like you want vapoursynth
[21:08:34 CET] <JEEB> which lets you play with python
[21:09:15 CET] <alexpigment> kepstin, durandal: it looks like minterpolate after mpdecimate, still interpolates all the frames
[21:09:46 CET] <alexpigment> like basically this noisy VHS source now has a weird haze over it from interpolating all the noise
[21:09:59 CET] <kepstin> alexpigment: make sure your input timestamps are clean, e.g. use the fps filter before the input.
[21:10:00 CET] <alexpigment> not a terrible look, per se, but not natural
[21:10:48 CET] <alexpigment> just -fps 30000/1001 before -i?
[21:10:54 CET] <alexpigment> it says unrecognized option
[21:10:54 CET] <kepstin> alexpigment: should do it
[21:10:59 CET] <alexpigment> maybe i need to update again
[21:10:59 CET] <kepstin> er, no
[21:11:01 CET] <kepstin> fps filter
[21:11:16 CET] <kepstin> -vf fps=XXX,mpdecimate=XXX,minterpolate=XXX
[21:11:27 CET] <alexpigment> oh, first in the filter chain
[21:11:28 CET] <alexpigment> nm
[21:11:57 CET] <alexpigment> testing now
[21:12:01 CET] <kepstin> if your input is mp4 or mkv or something the timestamps will be in a different timebase and rounded slightly, so they'll not be the exact values, and minterpolate will think it has to interpolate slightly.
[21:12:13 CET] <alexpigment> nah, it's just a poorly formatted MPEG-2
[21:12:17 CET] <kepstin> the fps filter will reset to the same timebase, so it should be exact.
[21:12:30 CET] <kepstin> mpeg-ts/ps uses 90K timebase, iirc, so same issue
[21:13:24 CET] <alexpigment> ok, that does seem to be better
[21:13:34 CET] <kepstin> if the timestamps exactly match, there's a fast-path in the filter that simply copies the original frame
[21:13:47 CET] <alexpigment> the interpolated frame kinda stands out now because it's not as noisy, but i may be able to fine tune minterpolate settings
[21:13:50 CET] <kepstin> so it should be much faster too :)
[21:14:02 CET] <alexpigment> yeah, it was ~2x as fast
[21:14:11 CET] <alexpigment> near realtime, actually
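[Editor's note: the full chain arrived at above, as one self-contained sketch. testsrc stands in for the MPEG-2 capture, and the mpdecimate/minterpolate parameters are defaults rather than tuned values.]

```shell
# Normalize timestamps with fps (so unchanged frames hit
# minterpolate's copy fast-path), drop duplicate frames with
# mpdecimate, then have minterpolate fill the gaps back to a
# constant 29.97 fps.
ffmpeg -v error -f lavfi -i testsrc=duration=1:rate=30000/1001 \
       -vf "fps=30000/1001,mpdecimate,minterpolate=fps=30000/1001" \
       -f null -
```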
[21:23:50 CET] <alexpigment> you may have helped me take some very unwatchable video (to my eyes anyway) and make it watchable
[21:23:56 CET] <alexpigment> kepstin, i mean
[21:24:09 CET] <saml> wait what if a vcodec results in "poorer" frames  but when they are played in a video  humans perceive them "better"?
[21:24:14 CET] <alexpigment> it's not as transparent on grainy sources, but it works great
[21:24:19 CET] <saml> then frame by frame comparison won't work
[21:25:18 CET] <saml> i can imagine a smart codec does different stuff to each frame based on target fps.
[21:25:47 CET] <kepstin> saml: x264 already does motion detection and behaves differently in high motion vs low motion scenes
[21:27:33 CET] <saml> like at fps higher than 30, researchers found that inserting all white frame from time to time makes the video look crispy and better.
[21:27:50 CET] <saml> then frame by frame comparison won't work due to those white frames
[21:29:09 CET] <alexpigment> saml: is that true?
[21:29:14 CET] <alexpigment> i have a feeling my eyes would notice that
[21:29:22 CET] <alexpigment> but maybe i've been duped ;)
[21:31:05 CET] <saml> no it's not true. my imagination/example
[21:31:32 CET] <saml> codecs doing different things based on output fps, not only based on scene
[21:42:00 CET] <gunstick> Hi. I have a 5 channel wav file. I want to display 3 of the channels as waveforms. I don't want to do anything with channels 0 and 1. But I don't know how. I use lavfi with asplit. Here is my command line showing channels 0,1,2. What I want is the same thing, showing 2,3,4 instead. ffplay -f lavfi 'amovie=Bla.wav,asplit=4[out1][a][b][c]; [a]showwaves=s=640x240[waves]; [b]showspectrum=s=640x240[spectrum]; [c]showwaves=s=640x240[waves2];
[21:42:00 CET] <gunstick> [waves][waves2][spectrum] vstack=inputs=3[out0]'
[21:42:54 CET] <saml> do you have the wav file i can download?
[21:43:04 CET] <furq> gunstick: asplit just duplicates the input
[21:43:06 CET] <furq> you want channelsplit
[21:43:14 CET] <gunstick> Yes, one moment... uploading file
[21:44:29 CET] <durandal_1707> furq: simpler is to use pan filter
[21:44:59 CET] <furq> is that simpler if he doesn't want to mix them
[21:46:22 CET] <gunstick> http://gkess.homeip.net/~georges/Bla.wav
[21:47:26 CET] <gunstick> so this is a chiptune. the conversion makes channels 1 and 2 silent. so I want to drop them. and then graph the 3 other channels as oscilloscope
[22:08:57 CET] <gunstick> someone tried? I'm now googling for pan filter :-)
[22:09:30 CET] <saml> gunstick,   ffplay -f lavfi 'amovie=Bla.wav,pan=3c|c0=c2|c1=c3|c2=c4,showwaves=split_channels=1:s=640x240 [out0]'
[22:09:34 CET] <saml> I charge 1000 bitcoin
[22:10:04 CET] <gunstick> haha. I will mention you if I do a youtube video :-)
[22:11:13 CET] <alexpigment> is pan=3c just a generic way of saying 3.0?
[22:11:18 CET] <alexpigment> or is it different?
[22:11:22 CET] <gunstick> looks good. thanks!
[22:11:44 CET] <alexpigment> i guess ultimately it doesn't matter if you're just showing waveforms
[22:12:43 CET] <gunstick> and if I want to additionally play the sound.
[22:14:24 CET] <alexpigment> well, the official standard would be to say pan=3.0
[22:14:30 CET] <alexpigment> i'm not sure if it's the same as 3c or not
[22:14:40 CET] <alexpigment> maybe it's interpreted differently
[22:18:06 CET] <gunstick> added "asplit=[out1]" and now plays the sound. Insert meme "I have no idea what I'm doing" :-)
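[Editor's note: a self-contained variant of the pan + showwaves approach from this exchange. A synthetic 5.0-layout source stands in for Bla.wav (two silent channels plus three arbitrary tones mimic the chiptune's layout), and the rendered video is discarded rather than played.]

```shell
# pan keeps input channels 2-4 and showwaves renders them as
# stacked waveforms; -frames:v 1 just grabs one rendered frame.
ffmpeg -v error -f lavfi \
  -i "aevalsrc=0|0|sin(440*2*PI*t)|sin(550*2*PI*t)|sin(660*2*PI*t):c=5.0:d=1" \
  -filter_complex "pan=3c|c0=c2|c1=c3|c2=c4,showwaves=split_channels=1:s=640x240" \
  -frames:v 1 -f null -
```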
[22:43:52 CET] <lyncher> how should I open a lavfi device in libavcodec?
[00:00:00 CET] --- Sat Feb 24 2018


More information about the Ffmpeg-devel-irc mailing list