[Ffmpeg-devel-irc] ffmpeg.log.20171003

burek burek021 at gmail.com
Wed Oct 4 03:05:01 EEST 2017


[00:00:52 CEST] <JEEB> dunno about genius, but he is clearly capable of getting things done/started
[00:01:21 CEST] <Dianaxxyyzz> if we had 100 mans like  Fabrice Bellard I think sotware was like in StarTreck
[00:01:25 CEST] <durandal_1707> lookat code qualuty
[00:01:49 CEST] <JEEB> ^ as good as this English
[00:01:58 CEST] <furq> i've never looked at his code
[00:01:58 CEST] <durandal_1707> lol
[00:02:07 CEST] <furq> i'm going to be really upset if it turns out he's one of those guys who uses 1-space indentation
[00:03:34 CEST] <Dianaxxyyzz> :)
[00:04:06 CEST] <Dianaxxyyzz> does a man like Fabrice have a personal life?
[00:04:15 CEST] <Dianaxxyyzz> time for kids , wife , a dog
[00:04:16 CEST] <Dianaxxyyzz> ?
[00:04:27 CEST] <Dianaxxyyzz> or just time for coding ?
[00:04:28 CEST] <Dianaxxyyzz> :)
[00:05:02 CEST] <durandal_1707> no at all, he is on irc stalking random people all the time
[00:05:53 CEST] <JEEB> furq: indentation usually tends to be the least of the issues
[00:06:04 CEST] <JEEB> something like clang's style nazi app can generally fix that
[00:06:28 CEST] <JEEB> the problem is when you've got 1990s optimization techniques and "totally clear to me" stuff :D
[00:07:50 CEST] <JEEB> furq: when I was doing my google summer of code submission, this is what I was requested to make out of "align to 32bit" http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/utvideoenc.c;h=840742caf713060788fa83eb632e5691a614e687;hb=HEAD#l386
[00:08:00 CEST] <JEEB> this is what I consider the primary issue with readability
[00:08:03 CEST] <Dianaxxyyzz> i made some coding about ffmpeg and rtmpdump , but just because I needed to do
[00:08:05 CEST] <JEEB> (at least I was let to have the comment there)
[00:08:13 CEST] <Dianaxxyyzz> I found complicated to code for them
[00:09:14 CEST] <Dianaxxyyzz> first time i head about ffmpeg i said :wtf we need it for
[00:09:20 CEST] <Dianaxxyyzz> and now it looks a genious tool
[00:09:50 CEST] <JEEB> FFmpeg's libraries are most likely on your TV, and are powering pretty much all video services on the internets
[00:10:07 CEST] <Dianaxxyyzz> yes
[00:10:15 CEST] <Dianaxxyyzz> audo video streaming is the future
[00:10:27 CEST] <durandal_1707> lies, its gstreamer
[00:10:46 CEST] <Dianaxxyyzz> i gund gstreamer too buggy
[00:10:55 CEST] <Dianaxxyyzz> maybe will be good for future
[00:11:11 CEST] <Dianaxxyyzz> but for my needs in 2015 was not goot , just ffmpeg was
[00:11:37 CEST] <Dianaxxyyzz> i had a video streaming palform to android tv boxes
[00:11:57 CEST] <Dianaxxyyzz> i found ffmpeg magnific
[00:12:02 CEST] <Dianaxxyyzz> and wowza too
[00:12:12 CEST] <Dianaxxyyzz> ffmpeg is not good like a steaming server
[00:12:51 CEST] <Dianaxxyyzz> but can be used in way you dnt care about that
[00:13:30 CEST] <JEEB> you can write a media server utilizing FFmpeg's libraries if you want to, but yes - ffmpeg.c is not a media server
[00:14:00 CEST] <Dianaxxyyzz> ffserver is problem
[00:14:04 CEST] <Dianaxxyyzz> ffmpeg is good
[00:14:24 CEST] <Dianaxxyyzz> but maybe ffserver was not improved for too long time
[00:14:45 CEST] <BtbN> ffserver is dead
[00:14:55 CEST] <Dianaxxyyzz> ..that must be the reason
[00:15:21 CEST] <BtbN> A lot of people want it to be gone. But others fight for it, but also don't want to maintain it.
[00:15:39 CEST] <Dianaxxyyzz> i see
[00:16:00 CEST] <JEEB> also ffserver was this weird mutation of not really media server and not really a transcoder
[00:16:06 CEST] <Dianaxxyyzz> is first time i come to this channel , never talked to somebody about ffmpeg , i just write my personal experience
[00:16:08 CEST] <furq> and not really good
[00:16:10 CEST] <JEEB> and it utilized a lot of the internals (and still does as far as I can tell)
[00:18:15 CEST] <Dianaxxyyzz> yes
[00:18:26 CEST] <Dianaxxyyzz> i cound not use ffserver for nothing
[00:18:35 CEST] <Dianaxxyyzz> even if it worked for searming some movies
[00:18:43 CEST] <Dianaxxyyzz> some did not straemed good
[00:18:47 CEST] <Dianaxxyyzz> and a lot of crashes
[00:18:59 CEST] <Dianaxxyyzz> but waza was good
[00:19:10 CEST] <rabbe> ffplay was not included in the static builds.. is that obsolete?
[00:19:11 CEST] <Dianaxxyyzz> and lol wowza worked nice with ffmpeg
[00:19:39 CEST] <Dianaxxyyzz> ffpay is like a demo of ffmpeg playing videos
[00:19:42 CEST] <Dianaxxyyzz> from what i knowws
[00:19:46 CEST] <durandal_1707> rabbe: no,  ffplay is not obsolete
[00:21:50 CEST] <furq> ffplay does exactly what it's supposed to do
[00:21:52 CEST] <furq> which is not much
[00:22:32 CEST] <Dianaxxyyzz> for me was ok the sourec of ffplay , helped me to undestand the code better
[00:22:38 CEST] <Dianaxxyyzz> demuxing etc
[00:22:57 CEST] <Dianaxxyyzz> of course i needed a stripped down version of ffpay
[00:23:31 CEST] <durandal_1707> ffpay - i need that too
[00:24:01 CEST] <Dianaxxyyzz> the problem with ffpay and ffmpeg is that you can not find inside the source some coments about what code do for some "beginers"
[00:24:17 CEST] <Dianaxxyyzz> i searched all internet to try to undestand what is what and what it really do
[00:24:27 CEST] <durandal_1707> ffplay != ffpay
[00:24:37 CEST] <Dianaxxyyzz> but after i got familiar with code was better
[00:25:29 CEST] <Dianaxxyyzz> is god we still ahve the console out of erros ,we cehck the errors text to undetstand what he do
[00:25:30 CEST] <Dianaxxyyzz> :))
[00:25:45 CEST] <Dianaxxyyzz> We undestand what he failed to do :))
[00:25:57 CEST] <Dianaxxyyzz> from errors text
[00:30:53 CEST] <Dianaxxyyzz> all pppls from thsi chat develop for ffmpeg ?
[00:30:54 CEST] <Dianaxxyyzz> :)
[00:31:34 CEST] <vans163> does framerate have a big effect on bandwith?
[00:31:55 CEST] <vans163> 30fps vs 60fps, is not double bandwith is it?
[00:32:07 CEST] <Dianaxxyyzz> it is
[00:32:24 CEST] <Dianaxxyyzz> more fps mode band for streaming needed
[00:32:32 CEST] <Dianaxxyyzz> but
[00:32:38 CEST] <Dianaxxyyzz> depends what you do
[00:32:51 CEST] <Dianaxxyyzz> if you have a live camera
[00:33:08 CEST] <Dianaxxyyzz> is you want to improve streaming , you must lower the fps of camera
[00:33:19 CEST] <Dianaxxyyzz> if you have a mp4 taht you want to stream
[00:33:26 CEST] <Dianaxxyyzz> you must lower the quality
[00:34:05 CEST] <Dianaxxyyzz> depends what you do
[00:34:38 CEST] <Dianaxxyyzz> if you record at 30 fps you can have 1 minute of video = 50 mb
[00:34:48 CEST] <Dianaxxyyzz> if you record at 60 fps you can have 1 minute of video = 80 mb
[00:34:56 CEST] <Dianaxxyyzz> must not exacly be doubled
[00:34:59 CEST] <Dianaxxyyzz> but is increased lol
[00:35:25 CEST] <Dianaxxyyzz> it also depends if camera fims movieng objects or not
[00:36:04 CEST] <Dianaxxyyzz> if you compress on the fly the camera stream , and no objects move , or is night , the band will be lower than if you record and compress a day movie
[00:38:17 CEST] <furq> vans163: depends on a bunch of things, but if you're stuck with baseline it'll likely be a big bump if you double the framerate
[00:38:36 CEST] <vans163> thanks, i see yes stuck baseline likely
[00:38:59 CEST] <ZEEX> capturing a desktop window with GDI
[00:38:59 CEST] <ZEEX>  have to supply the name of the window to the -i parameter
[00:38:59 CEST] <ZEEX> ffmpeg -f gdigrab -i title="theWindow" -framerate ...
[00:38:59 CEST] <ZEEX> is there a way to use wildcard or Regular Expressions
[00:38:59 CEST] <ZEEX> for window name ?
[00:39:05 CEST] <furq> if you were using like 16 bframes on low-motion video then you can get much better results
[00:39:36 CEST] <furq> but that's probably completely out of the question because of latency
[00:41:14 CEST] <Dianaxxyyzz> ZEEX: yes it is a way
[00:41:30 CEST] <vans163> furq: yes out, in the close future it maybe possible to stream VP9
[00:41:45 CEST] <vans163> as long as the hardware decoder supportsit
[00:41:54 CEST] <vans163> VP9 cpu decoding is insanely expensive
[00:41:58 CEST] <ZEEX> :)
[00:41:59 CEST] <Dianaxxyyzz> ZEEX: put the ffmpeg command in a bash script and execute any expression and ffmpeg from that too and use  title="$value"
[00:42:02 CEST] <Dianaxxyyzz> bla bla
[00:42:35 CEST] <ZEEX> wildcards would be fine... ?/*...
[00:42:36 CEST] <furq> vans163: decoding's not that slow
[00:42:40 CEST] <Dianaxxyyzz> man
[00:42:45 CEST] <furq> i'm pretty sure ffvp9 is faster than ffh264
[00:43:02 CEST] <furq> a lot of browsers still use libvpx's decoder though, which sucks
[00:43:29 CEST] <ZEEX> expanded wildcard funcionality like *chrome* would be great...
[00:43:36 CEST] <Dianaxxyyzz> ZEEX: just do this inside a script.sh i have this code : $val= search for my windows , and then in same script ffmpeg -f gdigrab -i title="$val" -framerate ...
[00:43:44 CEST] <Dianaxxyyzz> i gived you the ideea not working code
[00:43:52 CEST] <vans163> furq: wow really? i remember watching 4k youtube videos, nightmare on media boxes
[00:44:07 CEST] <vans163> maybe it wasnt vp9.. but vp8 or h264 in 4k?
[00:44:17 CEST] <Dianaxxyyzz> h264 k4
[00:44:20 CEST] <Dianaxxyyzz> 4k
[00:44:25 CEST] <furq> i don't think you'd ever get served vp8 unless you were on windows xp
[00:44:44 CEST] <furq> maybe h264 if you showed up just after the video was uploaded
[00:44:49 CEST] <ZEEX> this is for Windows... no BASH...
[00:45:01 CEST] <Dianaxxyyzz> android play mp4 container and h264 codecs very good , so all are mp4 h264
[00:45:09 CEST] <vans163> furq: thats interesting i should give vp9 a try then as chrome supports it
[00:45:18 CEST] <furq> chrome on *nix uses the ffmpeg decoder afaik
[00:45:19 CEST] <Dianaxxyyzz> ZEEX: use a .bat
[00:45:29 CEST] <vans163> i didnt realize the cpu usage of the software decoder would be par/lower then h264 cpu decoder (openh264)
[00:45:37 CEST] <vans163> not sure what chrome uses for vp9 probably ffmepg
[00:45:43 CEST] <Dianaxxyyzz> ZEEX: or write a prigram that modify on the fly the ffmpeg parameters
[00:45:45 CEST] <furq> but yeah obviously on a phone or something vp9 is likely to suck unless it's very new
[00:45:46 CEST] <ZEEX> yep... but the $thing do not work... as you may know...
[00:46:22 CEST] <Dianaxxyyzz> write a program taht search for windows  name and put that for ffmpeg param and execute it
[00:49:59 CEST] <lightslategray> Hi. "ffmpeg -i v.webm -vf scale=100:100 -i a.webm outp.webm" fails with "Option vf (set video filters) cannot be applied to input url a.webm ..." what is correct syntax?
[00:51:49 CEST] <Dianaxxyyzz> ffmpeg -i v.webm -i a.webm outp.webm -vf scale=100:100
[00:52:30 CEST] <Dianaxxyyzz> if you put "-vf scale=100:100 "   before any "-i"  , you apply video filtering to that "-i" , and you put it before audio
[00:52:47 CEST] <Dianaxxyyzz> so it try to apply video filters to your audio input
[00:53:06 CEST] <Dianaxxyyzz> jo just move "vf scale=100:100 " after "-i"
[00:53:47 CEST] <Dianaxxyyzz> ffmpeg -i v.webm -i a.webm vf scale=100:100 outp.webm
[00:53:51 CEST] <Dianaxxyyzz> try that
[00:53:59 CEST] <Dianaxxyyzz> bout you di dnot specify any codecs
[00:54:23 CEST] <lightslategray> Dianaxxyyzz: it works. thank you
[00:54:30 CEST] <Dianaxxyyzz> ok
[00:54:33 CEST] <Dianaxxyyzz> np
[00:56:02 CEST] <Dianaxxyyzz> ffmpeg -i v.webm -i a.webm -vf scale=100:100 outp.webm       (i forgot a - )
[00:56:20 CEST] <Dianaxxyyzz> any way the ideea is if you put any filters before "-i" it will aply to taht input
[00:56:33 CEST] <lightslategray> Dianaxxyyzz: yes, I added the hyphen
[00:58:18 CEST] <lightslategray> Dianaxxyyzz: do I understand correctly that -fv is an _output_ processing thing? Or relates to inputs? Or it just king of global?
[00:58:18 CEST] <Dianaxxyyzz> lightslategray: great! :) ffmpeg is so complicated it can not be understand in one day , so just dnt worry sometime you will knwo all !
[00:58:19 CEST] <Dianaxxyyzz> :)
[00:59:00 CEST] <lightslategray> "kind of global", correction
[00:59:24 CEST] <Dianaxxyyzz> so
[00:59:29 CEST] <Dianaxxyyzz> if I write
[00:59:59 CEST] <Dianaxxyyzz> ffmpeg -vf scale=100:100  -i v.mp4   , it try to aply filter before it outputs
[01:00:04 CEST] <Dianaxxyyzz> for xample
[01:00:07 CEST] <Dianaxxyyzz> if i do
[01:01:00 CEST] <Dianaxxyyzz> ffmpeg -read_fps 10  -i v.mp4  it will read just 10 fps per second from input video , (-read_fps do not exists just you do undestand)
[01:01:11 CEST] <Dianaxxyyzz> but if you do it after -i
[01:01:41 CEST] <Dianaxxyyzz> ffmpeg   -i v.mp4  -vf scale=100:100 -o output-scalled.mp4  , then it will aply filter to output video
[01:01:53 CEST] <Dianaxxyyzz> -i just specify a input
[01:02:03 CEST] <Dianaxxyyzz> -i a.mp4 , specify input is a.mp4
[01:02:33 CEST] <Dianaxxyyzz> but if you add anything before -i , it try to aply it to the input whyle it read the input ,
[01:02:49 CEST] <Dianaxxyyzz> and filters can not be aplyit to input because require first a demuxing etc
[01:03:11 CEST] <Dianaxxyyzz> just the output can be a filtered video , so it must be after -i
[01:04:22 CEST] <Dianaxxyyzz> ffmpeg -re -f lavfi -i aevalsrc="sin(400*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://127.0.0.1:1234
[01:04:38 CEST] <Dianaxxyyzz> here aply -re and format of video to -i
[01:05:00 CEST] <Dianaxxyyzz> it tells ffmpeg to (-re) read at native speed and (-f lavfi) format is lavfi
[01:05:02 CEST] <lightslategray> Dianaxxyyzz: so it relates to what it precedes, and in case of outp.webm it relates to output (where -o was omitted), and it formally can be applied to input also (if precedes -i ), but this just not works?
[01:05:35 CEST] <Dianaxxyyzz> -re -f lavfi can be put before -i
[01:05:47 CEST] <Dianaxxyyzz> it tells ffmpeg to read at native speed and the input format is lavfi
[01:05:53 CEST] <Dianaxxyyzz> but this do not inpact output
[01:06:21 CEST] <Dianaxxyyzz> ffmpeg -re -f mpegts -i  a.ts -o x.flv
[01:06:37 CEST] <Dianaxxyyzz> i tell ffmpeg to read input at native speed and inut format is mpegts
[01:06:48 CEST] <lightslategray> Dianaxxyyzz: now I got it. Thank you very much, Dianaxxyyzz!
[01:06:49 CEST] <Dianaxxyyzz> but output will be a flv :)
[01:07:09 CEST] <Dianaxxyyzz> great!
[01:07:10 CEST] <Dianaxxyyzz> :)
[01:07:12 CEST] <Dianaxxyyzz> np
[01:09:02 CEST] <Dianaxxyyzz> normaly you do not need anything before -i
[01:09:03 CEST] <Dianaxxyyzz> :)
[01:10:09 CEST] <Dianaxxyyzz> but in some complicates situations you need to put something before -i , for exampel when ffmpeg do not know what the format input file is , you put before -i the file format option for example (-f mpegts -i a.ts_
[01:13:47 CEST] <lightslategray> Dianaxxyyzz: got it, like when it can't figure it itself, eg when reading weird file or from stdin
[01:13:56 CEST] <Dianaxxyyzz> yes
[01:14:04 CEST] <Dianaxxyyzz> i needed to put before -i
[01:14:14 CEST] <Dianaxxyyzz> when i read input from a socket
[01:14:27 CEST] <Dianaxxyyzz> it did not know what data format comes
[01:14:34 CEST] <Dianaxxyyzz> or from you read raw data
[01:14:36 CEST] <Dianaxxyyzz> of images
[01:14:41 CEST] <Dianaxxyyzz> it do not knwo pix format
[01:14:46 CEST] <Dianaxxyyzz> so you have to tell him
[01:15:24 CEST] <Dianaxxyyzz> you someday will have a file data.raw
[01:15:40 CEST] <Dianaxxyyzz> that contains a raw video
[01:15:52 CEST] <Dianaxxyyzz> so you must specify teh pix format and yuv
[01:16:09 CEST] <Dianaxxyyzz> cause it is a raw file it does nto have a medatada
[01:16:15 CEST] <Dianaxxyyzz> it do not knwo what is inside file
[01:17:20 CEST] <Dianaxxyyzz> it must to know what file contains so he to knwo what demuxes he use
[01:18:01 CEST] <lightslategray> Dianaxxyyzz: seems ffmpeg is a great and extremely versatile tool. thanks for your explanations
[01:18:17 CEST] <Dianaxxyyzz> and in some special cases more often when you deal with sockets or raw data , he can not read metadata of file , cause you dnt have a medatata like in a mp4 so you must put someting before -i
[01:18:31 CEST] <Dianaxxyyzz> np
[01:20:07 CEST] <Dianaxxyyzz> that had to be writed somehwre "do not put anything before -i ,you need in just special cases" , taht wold same me a lot of time too
[01:20:08 CEST] <Dianaxxyyzz> :)
[01:20:23 CEST] <Dianaxxyyzz> *save
[01:26:40 CEST] <jasan> hello everyone
[01:26:47 CEST] <kevev1> Hello everyone
[01:27:06 CEST] <kevev1> I am seeking some help with an hls live stream where the audio is slowly going out of sync.
[01:27:15 CEST] <Guest39397> i'm stumpted trying to change aspect ratio without stretching video
[01:27:18 CEST] <kevev1> I am attempting to use ffprobe to figure out the issue, but am confused.
[01:27:22 CEST] <kevev1> Anyone willing to help?
[01:27:24 CEST] <Guest39397> ffmpeg -i "a.mov" -i "b.mp4" -filter_complex "[0:v]setsar=sar=${SAR}[a];  [1:v]setsar=sar=${SAR}[b]; [a][0:a] [b][1:a] concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" -preset veryfast -crf 18 "merged.mp4"  using the above code I am able to successfully merge a.mov(1920x816) with b.mov(1920x816). However, Youtube end screens require video to be 16:9 aspect ratio. Thus I tried scale filter and setting SAR to 16:9. however this lead
[01:27:56 CEST] <Guest39397> the 816 height video to fit the 1080 height.
[01:28:35 CEST] <Guest39397> using the above code I am able to successfully merge a.mov(1920x816) with b.mov(1920x816). However, Youtube end screens require video to be 16:9 aspect ratio.
[01:28:46 CEST] <Guest39397> Thus I tried scale filter and setting SAR to 16:9. however this leads to the output video stretching to fit the 16:9 ratio
[01:29:03 CEST] <furq> !filter pad @Guest39397
[01:29:03 CEST] <nfobot> Guest39397: http://ffmpeg.org/ffmpeg-filters.html#pad-1
[01:29:06 CEST] <Guest39397> Essentially The remaining vertical space should just be black without stretching the 816 height video to fit the 1080 height.
[01:29:23 CEST] <Guest39397> filter pad
[01:29:34 CEST] <Guest39397> ?
[01:29:40 CEST] <redrabbit> is there a windows software to transcode on demand so i can access my media library from example, my phone
[01:29:52 CEST] <redrabbit> that uses ffmpeg of course
[01:30:37 CEST] <redrabbit> ideally would output a folder that is the mirror of my media library but when i open the files it should serve me a transcoded file
[01:30:58 CEST] <Guest39397> furq where would i add the pad in my code
[01:31:13 CEST] <furq> replace both setsars with pad
[01:31:59 CEST] <furq> or just get rid of them and add pad after concat
[01:32:18 CEST] <furq> also are you sure youtube requires 16:9
[01:32:53 CEST] <Guest39397> so instead of this setsar=sar=${SAR} --> pad=1920:1080:black
[01:34:03 CEST] <Guest39397> yes I received an error while trying to add endscreens on non  16:9 video
[01:35:36 CEST] <kevev1> the source stream is being forwarded as the main stream. This one is being pushed via ffmpeg as an rtmp stream. The nginx server is forwarding this main stream and transcoding. The transcodes streams are beginning to have audio drift slowly.
[01:38:05 CEST] <kevev1> exec_push /root/bin/ffmpeg -r 30 -i rtmp://127.0.0.1:1935/$app/$name -async 1 -vsync -1 -c:v libx264 -c:a aac -b:v 256k -b:a 32k -vf "scale=480:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://127.0.0.1:1935/show/$name_low -c:v libx264 -c:a aac -b:v 768k -b:a 64k -vf "scale=720:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://127.0.0.1:1935/show/$name_mid -c copy -f flv rtmp://127.0.0.1
[01:38:16 CEST] <kevev1> That's the command I use.
[01:38:30 CEST] <kevev1> I can't figure out why the audio drifts out of sync. Driving me nuts.
[01:41:02 CEST] <redrabbit> -async 1
[01:41:40 CEST] <redrabbit> nvm actually idk
[01:41:50 CEST] <kevev1> redrabbit: I already have that.
[01:42:07 CEST] <kevev1> Or do I need to put it for each stream in that command?
[01:45:53 CEST] <kevev1> ./ffmpeg -version ffmpeg version N-86920-g1193301 Copyright (c) 2000-2017 the FFmpeg developers built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609 configuration: --prefix=/root/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --bindir=/root/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libx264
[01:49:25 CEST] <kevev1> Anyone know how I can use ffprobe to check the time stamps?
[01:50:03 CEST] <JEEB> -show_streams -show_frames -of json
[01:50:10 CEST] <JEEB> and then parse the json with python or something
[01:50:58 CEST] <kevev1> ffprobe -show_streams -show_frames -of json -i http://host:port
[01:51:03 CEST] <kevev1> Is that correct?
[01:51:40 CEST] <JEEB> oh, live stuff?
[01:51:48 CEST] <JEEB> also no -i needed for ffprobe
[01:52:07 CEST] <kevev1> yes live stuff
[01:52:08 CEST] <kevev1> m3u8
[01:52:19 CEST] <JEEB> possible but can be funky
[01:52:28 CEST] <kevev1> arg
[01:52:28 CEST] <kevev1> :p
[01:52:41 CEST] <JEEB> ffmpeg -debugts -v debug -i thing -f null -
[01:52:47 CEST] <JEEB> is probably more useful, maybe
[01:52:49 CEST] <kevev1> JEBB trying to figure out why audio drifts out of sync when transcoding live stream. Not sure where to start.
[01:52:56 CEST] <Guest39397> furq I tried to use pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
[01:53:15 CEST] <Guest39397> The term 'ow-iw' is not recognized as the name of a cmdlet
[01:53:23 CEST] <Guest39397> i get the error: The term 'ow-iw' is not recognized as the name of a cmdlet
[01:54:39 CEST] <furq> sounds like powershell is interpreting the parens
[01:54:43 CEST] <furq> try escaping them i guess
[01:54:46 CEST] <kevev1> Unrecognized option 'debugts'. Error splitting the argument list: Option not found
[01:55:57 CEST] <kevev1> -debug_ts :D
[01:58:59 CEST] <kevev1> JEBB not sure how to decipher this stuff :P
[02:00:46 CEST] <kevev1> cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[02:00:53 CEST] <kevev1> I see this a bunch. Is that bad?
[02:06:17 CEST] <kevev1> I rejoined as kevev :)
[03:41:25 CEST] <kevev_1> I'm back
[03:41:44 CEST] <kevev_1> I am still working on the audio sync issue.
[03:42:01 CEST] <kevev_1> hls live stream transcode. Audio drifts out of sync over time.
[03:42:20 CEST] <kevev_1> Have tried -async 1
[10:01:44 CEST] <Nacht> Anyone ever experianced audio sync problems when transmuxing TS to MP4 ? If my automatic process does it at night, I sometimes get shifts in audio sync, but if I re-create it during the day with the exact same source files, it goes correctly
[10:47:58 CEST] <dragmore88> hi, i have some files where the AC3 track is generating some decoding errors on my stb. Is there a way to use FFPROBE to fully decode the whole file and log debug info to a file ?
[11:53:52 CEST] <feliwir> anyone can tell me how to reset a video stream to the beginning?
[11:54:20 CEST] <feliwir> av_seek_frame(format_ctx, 0, 0, AVSEEK_FLAG_BYTE); not working
[12:02:54 CEST] <Wodjin> Hello everyone, anyone has a guess on how to loop an audio track into a video using a complex filter?
[12:14:04 CEST] <feliwir> why does that not work: https://gist.github.com/feliwir/94eba85944d22d94fdf7c1fb7a8eca5c#file-video-cpp-L21 ?
[12:14:47 CEST] <feliwir> i reset my avio context and my format_ctx :(
[12:21:27 CEST] <feliwir> anyone?
[12:33:31 CEST] <feliwir> avcodec_flush_buffers doesn't help either
[14:07:05 CEST] <rom1v> hi
[14:09:11 CEST] <rom1v> suppose I have H.264 NALS stored in-memory (in a huge char[]). Is there a method similar to avformat_open_input() that may open the content from the buffer instead of a char* url?
[14:16:47 CEST] <rom1v> ok, I get it: call avio_alloc_context() and set the result in the AVFormatContext->pb field
[14:58:22 CEST] <dystopia> what settings do i need to add to my line to get "High 4:4:4 Predictive at L4" in the output video?
[14:58:41 CEST] <dystopia> i tried "-pix_fmt yuv444p" but that didn't seem to work
[15:10:15 CEST] <relaxed_> dystopia: pastebin your command and output
[15:20:11 CEST] <dystopia> i worked it out relaxed_
[15:20:44 CEST] <dystopia> i think having it before -vcodec libx264 in line caused the issue, placing it after that worked
[16:43:20 CEST] <YokoBR> hi folks
[16:43:28 CEST] <YokoBR> which audio codec is compatible with flv?
[16:47:41 CEST] <Nacht> In short, MP3 and AAC. Long version: http://download.macromedia.com/f4v/video_file_format_spec_v10_1.pdf#page=76
[16:50:10 CEST] <YokoBR> I'm getting this error:  FFMPEG:WriteN, RTMP send error 32 (42 bytes)
[16:50:17 CEST] <YokoBR> does anyone knows what it means?
[17:06:27 CEST] <YokoBR> please, could someone help me to improve this? https://gist.github.com/jersobh/e3b733d5dff3599bb73b629d10608d01
[17:22:29 CEST] <arpu> hi how can i use ffmpeg as a video live stream mixer ?
[17:22:47 CEST] <arpu> the idea i had is to use a named pipe  fifo  for input and output
[17:23:28 CEST] <arpu> how can i connect the pipe with ffmpeg and write a black frame if no conect is coming from the input pipe (ffmpeg should not get closed on empty input pipe)
[17:23:39 CEST] <arpu> hopefully the question is clear :/
[17:24:16 CEST] <JEEB> I would rather do that with the API
[17:24:49 CEST] <JEEB> because with multiple inputs I bet it wouldn't be able to discern if there was something there or not and would just wait for input
[17:24:54 CEST] <JEEB> possibly jamming the flow
[17:25:01 CEST] <YokoBR> how can I put two outputs? '-c' 'copy' isn't working
[17:26:26 CEST] <alexpigment> YokoBR: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
[17:27:46 CEST] <arpu> JEEB,  what do you mean with the API ?
[17:28:05 CEST] <JEEB> libav{format,codec,filter}
[17:28:20 CEST] <JEEB> the stuff that FFmpeg provides, and ffmpeg.c is one of the clients using them
[17:28:25 CEST] <arpu> so to make my own ffmpeg client
[17:28:26 CEST] <YokoBR> alexpigment: I've read that. Mine is now '-f', 'flv', 'rtmp://live-api-a.facebook.com:80/rtmp/mydata'. But I wan't to send to another rtmp server simultaneously
[17:28:45 CEST] <YokoBR> the '-c copy' option gives me an error
[17:28:49 CEST] <arpu> JEEB,  hm i think this is not needed
[17:29:00 CEST] <JEEB> well godspeed then
[17:29:06 CEST] <arpu> :>
[17:29:16 CEST] <arpu> but i do not know how
[17:29:20 CEST] <alexpigment> YokoBR: what is the error?
[17:39:09 CEST] <YokoBR> alexpigment: https://pastebin.com/7YKLhfcF
[17:39:48 CEST] <alexpigment> i don't think flv supports vp8/opus
[17:40:25 CEST] <YokoBR> but if I use only 1 output it works
[17:40:29 CEST] <alexpigment> i believe what's happening here is that you're copying the *original* stream rather than the re-encoded stream
[17:41:18 CEST] <alexpigment> you're trying to copy streams 1:0 and 1:1 but you're actually copying 0:0 (vp8) and 0:1 (opus)
[17:43:47 CEST] <alexpigment> now this is where my knowledge gets kinda spotty, but I'm not sure if you need to be doing a separate ffmpeg process to copy from the output, or if you can use the -map options to copy the particular streams from the output
[17:44:00 CEST] <alexpigment> someone else here would know better
[17:46:29 CEST] <thebombzen> I'm pretty certain if you want to use ffmpeg.c to mux the same encoded stream twice, you have to split it with a separate process
[17:47:12 CEST] <thebombzen> something like: ffmpeg -i inputs OPTIONS -f matroska - | ffmpeg -f matroska -i - -c copy out1.flv -c copy out2.flv
[17:56:05 CEST] <YokoBR> mine now is https://gist.github.com/jersobh/7e944edc39324496cb2d08440d59d1cf
[17:56:10 CEST] <YokoBR> but still doesn't work
[18:05:08 CEST] <YokoBR> now I'm trying with tee
[18:16:52 CEST] <thebombzen> have you considered perhaps following my advice
[18:26:35 CEST] <furq> thebombzen: the tee muxer will totally do that
[18:26:39 CEST] <furq> that's the whole point
[18:26:54 CEST] <thebombzen> furq: when was the tee muxer added?
[18:26:59 CEST] <thebombzen> I haven't heard of it :O
[18:27:00 CEST] <furq> ages ago
[18:27:03 CEST] <thebombzen> wow
[18:27:04 CEST] <thebombzen> okay
[18:27:05 CEST] <furq> !muxer tee
[18:27:05 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-formats.html#tee-1
[18:27:49 CEST] <thebombzen> huh, that works
[18:27:57 CEST] <thebombzen> I thought they meant /usr/bin/tee :O
[18:27:59 CEST] <thebombzen> which wouldn't
[18:38:55 CEST] <Fenrirthviti> Hmm, anyone heard of nvenc failing to start with ffmpeg while premiere is open? Just looking to gather some facts or see if anyone on the ffmpeg side has looked into it
[18:39:10 CEST] <JEEB> not many have premiere open here most likely :D
[18:47:08 CEST] <relaxed> Fenrirthviti: I think so, doesn't premiere use cuda?
[18:49:13 CEST] <relaxed> pretty sure someone in here said you need a quadro card for multiple instances
[18:52:41 CEST] <jkqxz> Yeah, you're limited to two instances on cheap cards.  If another program is using both of those then making another just doesn't work.
[18:54:53 CEST] <Hopper_> furq: Thanks for all your help recently, it looks like I've got my video feed working!  https://imgur.com/a/hgoXt
[18:55:34 CEST] <Hopper_> I've got data updating for a .txt bottom left every frame, and a frame counter on the right.
[18:55:55 CEST] <Hopper_> And just imagine the 2 720 video streams hstacked above that.
[18:56:10 CEST] <Hopper_> You can see the bottoms of the frames.
[18:56:19 CEST] <Hopper_> Seems stable, running overnight.
[19:05:49 CEST] <Fenrirthviti> relaxed: It does, but this is a recent issue
[19:06:14 CEST] <Fenrirthviti> and it's only ffmpeg that has the problem, apparently.
[19:06:28 CEST] <Fenrirthviti> applications that have "native" nvenc implementations aren't affected
[19:06:32 CEST] <Fenrirthviti> shadowplay still works, etc.
[19:07:25 CEST] <Fenrirthviti> so I'm not convinced it's an nvenc/driver problem.
[19:07:53 CEST] <Fenrirthviti> The error being returned by ffmpeg is: OpenEncodeSessionEx failed: out of memory (10)
[19:08:01 CEST] <Fenrirthviti> which is just nonsense and a red herring here
[19:08:40 CEST] <JEEB> well it is the darn error that was returned
[19:08:53 CEST] <JEEB> the fact that it is not representative of reality is a separate thing
[19:09:12 CEST] <JEEB> unless the strings for the errors are now incorrect, that's the only thing possibly
[19:09:24 CEST] <JEEB> because it prints out the number as well in case that goes wrong
[19:13:51 CEST] <pgorley> hi, configure seems to still set _FILE_OFFSET_BITS=64 when i'm cross compiling for armv7 32 bits on my x64 machine
[19:16:54 CEST] <jkqxz> pgorley:  Yes.  Why would you ever want it not to?
[19:17:29 CEST] <pgorley> jkqxz: ndk 15+ makes it no longer compile
[19:17:38 CEST] <pgorley> https://github.com/android-ndk/ndk/issues/449
[19:18:15 CEST] <pgorley> _FILE_OFFSET_BITS=64 was a noop on android before ndk 15
[19:19:30 CEST] <JEEB> pgorley: I thought FFmpeg already had those workarounds for android
[19:19:38 CEST] <JEEB> since rcombs poked me towards them in mpv
[19:19:54 CEST] <JEEB> and I can build FFmpeg and mpv just fine for ARMv7 with NDK R15C
[19:20:25 CEST] <JEEB> "--arch=armv7 --cpu=armv7-a --enable-cross-compile --target-os=android --cross-prefix=arm-linux-androideabi-"
[19:20:30 CEST] <JEEB> this being the primary part :P
[19:20:59 CEST] <JEEB> and since they switched to clang, "--cc=arm-linux-androideabi-clang"
[19:21:13 CEST] <JEEB> since I cannot tell the configure that base CC name is "clang" and not "gcc"
[19:21:16 CEST] <JEEB> (´4@)
[19:21:23 CEST] <JEEB> or at least I haven't found the option
[19:21:30 CEST] <JEEB> otherwise the cross-prefix is enough
[19:22:40 CEST] <pgorley> if you still have those workarounds handy, could you link me to them?
[19:22:50 CEST] <JEEB> I mean, no. I'm building vanilla
[19:23:00 CEST] <JEEB> the workarounds/fixes are *in* FFmpeg proper
[19:23:20 CEST] <JEEB> so if it doesn't build for you, you've got something really old
[19:23:34 CEST] <pgorley> 3.3.3
[19:23:45 CEST] <pgorley> i'm looking into bumping it further up though
[19:23:53 CEST] <JEEB> dunno, but likely I've built master from around that during 2016
[19:24:00 CEST] <JEEB> although NDK R15C is newer
[19:24:30 CEST] <JEEB> I can vouch that with an API level 21 (no 64bit off_t) standalone toolchain from NDK R15C current master builds JustFine
[19:24:33 CEST] <JEEB> :P
[19:24:46 CEST] <JEEB> with the toolchain in PATH and those parameters
[19:24:57 CEST] <JEEB> (I have more to enable jni/mediacodec etc, but those are the base cross compilation parameters)
[19:26:09 CEST] <JEEB> pgorley: also I then added the same workarounds to mpv https://github.com/mpv-player/mpv/commit/f4c80d44021dd917aee64f58b0e2a79f20610057
[19:26:13 CEST] <JEEB> :P
[19:26:57 CEST] <pgorley> thanks
[19:27:33 CEST] <JEEB> those workarounds are what I gathered from FFmpeg's master already ("based on libavformat's things")
[19:28:00 CEST] <JEEB> there was some mmap stuff if you're aiming for <21, for which rcombs has posted some patches that might have gotten merged
[19:28:04 CEST] <JEEB> but you're not hitting that
[19:28:09 CEST] <JEEB> if it's the 64bit stuff
[19:28:10 CEST] <JEEB> :P
[19:28:34 CEST] <JEEB> anyways, I think android has had the 64bit offset workaround for ages
[19:28:48 CEST] <JEEB> *android in FFmpeg
[19:29:14 CEST] <pgorley> i'll get on a more recent version of ffmpeg and see if it compiles
[19:29:31 CEST] <JEEB> pgorley: http://git.videolan.org/?p=ffmpeg.git;a=commit;h=d5a6f1127263dd3dfcf08d26439ce4276dfda27d
[19:29:37 CEST] <JEEB> so in 2014
[19:29:54 CEST] <JEEB> although no, that's not the exact thing
[19:31:47 CEST] <JEEB> ah right, no, that was it :)
[19:31:52 CEST] <JEEB> so yea
[19:31:57 CEST] <JEEB> 2014-> should be OK
[19:32:23 CEST] <JEEB> most likely the define is defined, but FFmpeg itself should build just fine and it has all of the workarounds :P
[19:32:34 CEST] <JEEB> against android bionic insanity
[19:33:04 CEST] <JEEB> otherwise I wouldn't a) get it built b) have stuff like 64bit offsets work
[19:58:03 CEST] <laforest> Has anyone ever seen a problem where, when fetching an audio and video stream, then outputing them as two separate outputs, the audio is fine, but the video framerate drops from 30fps to ~10fps?
[20:00:05 CEST] <laforest> Command: ffmpeg -re -i <video url> -re -i <audio url> -map 0 -f rawvideo pipe:5 -map 1 -f wav pipe:6  (where file descriptors 5 and 6 are previously created)
[20:00:33 CEST] <laforest> Doing the same with two separate ffmpeg instances works fine, but falls out of sync eventually of course.
[20:11:30 CEST] <Fenrirthviti> JEEB: (belated response, lunch happened) Yeah, I understand. It's just strange that other applications that use NVENC will work fine under the exact same circumstances, and a quick check of available/reserved/used VRAM shows that it can't possibly be an actual out of memory condition.
[20:12:13 CEST] <JEEB> well blame nvidia for the misleading string unless it's an incorrect integer->string mapping in FFmpeg :P (most likely that error message comes also from nvidia, though)
[20:12:24 CEST] <Fenrirthviti> Oh no doubt, yes.
[20:12:36 CEST] <Fenrirthviti> I'm fully leaning toward nvidia being the one to blame here.
[20:13:03 CEST] <Fenrirthviti> I'm just trying to understand what the heck could be going on that other applications who are directly implementing nvenc support themselves work, whereas ffmpeg does not.
[20:13:07 CEST] <Fenrirthviti> Gotta be some difference.
[20:13:20 CEST] <Fenrirthviti> That's my current hangup that I'm trying to understand.
[20:13:36 CEST] <JEEB> you'd have to trace the API calls
[20:13:36 CEST] <Fenrirthviti> Or I wouldn't have even brought it up here, and instead to nvidia directly
[20:13:46 CEST] <JEEB> for FFmpeg you know what it does
[20:13:55 CEST] <pgorley> JEEB: it's still not compiling, it's complaining about an implicit mmap declaration: https://pastebin.com/AecaRyr9
[20:13:59 CEST] <JEEB> for a proprietary thing you'd have to use something like strace :P
[20:14:12 CEST] <JEEB> pgorley: yes the mmap thing happens on API level less than 21
[20:14:26 CEST] <JEEB> rcombs made a patch for that but I guess it wasn't merged yet?
[20:14:55 CEST] <pgorley> i'm on master now, just did a git pull
[20:14:55 CEST] <JEEB> yea, it hasn't been merged yet
[20:15:04 CEST] <JEEB> just checked http://git.videolan.org/?p=ffmpeg.git&a=search&h=HEAD&st=author&s=Rodger+Combs
[20:15:14 CEST] <Fenrirthviti> yeah, but outside my skillset currently to do something like that, was more asking if anyone had come across a similar case and dug into it
[20:15:31 CEST] <Fenrirthviti> But I appreciate the direction regardless.
[20:16:11 CEST] <JEEB> pgorley: https://ffmpeg.org/pipermail/ffmpeg-devel/2017-September/216828.html
[20:16:33 CEST] <JEEB> feel free to comment that it works for you on the mailing list if it works for you
[20:16:55 CEST] <pgorley> alright, thanks! :)
[20:17:12 CEST] <JEEB> because the only negative comment person didn't really understand the issue :P
[20:18:00 CEST] <JEEB> I'm building on API level 21 myself so I don't have this issue
[20:18:32 CEST] <JEEB> (the opengl renderer I'm using worked on zero 4.4.4 devices so it was really a no-problem to raise the minimum API level requirement)
[20:19:09 CEST] <JEEB> or well, one guy adamantly tried to say that it worked on his magical 4.4.4 device, but I will believe it when I see it
[20:27:16 CEST] <pgorley> JEEB: how do i check what api i'm building against?
[20:27:36 CEST] <pgorley> seems to be working as of yet, doesn't fail as quickly as without the patch
[20:27:49 CEST] <JEEB> when you create your standalone toolchain you set it
[20:28:08 CEST] <JEEB> --api 21 being API level 21
[20:28:11 CEST] <JEEB> etc
[20:28:16 CEST] <JEEB> see https://developer.android.com/ndk/guides/standalone_toolchain.html
[20:29:48 CEST] <pgorley> aww, it didn't work
[20:30:59 CEST] <Mista_D> Anyway to reduce Key-frame flickering when force-fey-frame is used? The video goes blurry on keyframe, using strict CBR x264
[20:32:28 CEST] <JEEB> pgorley: ?
[20:32:48 CEST] <pgorley> libavcodec/v4l2_buffers.c:409:44: error: implicit declaration of function 'mmap' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
[20:32:57 CEST] <JEEB> did you apply the patch?
[20:33:03 CEST] <pgorley> as well as: libavcodec/v4l2_fmt.c:125:12: error: implicit declaration of function 'lfind' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
[20:33:05 CEST] <pgorley> yes
[20:33:19 CEST] <JEEB> weird, because it should be fixing exactly that
[20:33:27 CEST] <pgorley> it fails further down the line
[20:33:34 CEST] <JEEB> no, that's an error
[20:33:36 CEST] <JEEB> it failed right ther
[20:33:59 CEST] <JEEB> the patch I mentioned specifically adds the osdep stuff in the v4l2 thing
[20:34:09 CEST] <pgorley> no no, it failed earlier in the build without the patch
[20:34:13 CEST] <JEEB> yes
[20:34:26 CEST] <pgorley> but it's weird that it still fails on mmap
[20:34:41 CEST] <JEEB> osdep not being included somewhere? you not cleaning the build dir?
[20:35:25 CEST] <pgorley> just cleaned it to see
[20:37:52 CEST] <JEEB> pgorley: if it still fails on current master with that patch, poke rcombs on #ffmpeg-devel
[20:38:00 CEST] <JEEB> (he will respond when he's around)
[20:38:12 CEST] <pgorley> alright, cool
[22:13:39 CEST] <doublya> I'm building ffmpeg with --enable-vaapi. vainfo shows hevc profile available.  ./configure of ffmpeg shows h263_vaapi and h264_vaapi in Enabled hwaccles; however, after build, ffmpeg -codecs | grep vaapi is none
[22:14:50 CEST] <jkqxz> You aren't building the encoders?
[22:15:06 CEST] <doublya> i want to is there a flag i'm missing
[22:15:48 CEST] <jkqxz> They should come by default with --enable-vaapi if you have a version supporting them, which H.265 support would indicate.
[22:15:54 CEST] <jkqxz> What is your configure line?
[22:16:48 CEST] <jkqxz> (Though note that encoders appear in the "encoders" section of configure output.)
[22:17:12 CEST] <doublya> quite long, but I'm currently just using  PKG_CONFIG_PATH="<my-path>" ./configure --enable-vaapi
[22:18:42 CEST] <doublya> I don't see any vaapi encoders in the 'Enabled encoders:' section.   Only under Enabled hwaccels
[22:19:44 CEST] <doublya> I'm linking against libva built from source, which is linked against libdrm and mesa built from source.
[22:20:51 CEST] <jkqxz> Is it maybe a very old version?  I'm not sure how else you could get the hwaccels but not the encoders.
[22:21:08 CEST] <doublya> I've build the 2017Q2 graphics stack
[22:21:40 CEST] <doublya> I've built libva with only drm, no X11 as I intend to use this build on a headless machine
[22:22:15 CEST] <jkqxz> Hmm.  That shouldn't make a difference.  Paste your ffbuild/config.log somewhere?
[22:27:05 CEST] <doublya> https://pastebin.com/sBNiXTJN
[22:28:07 CEST] <jkqxz> "fatal error: va/va.h: No such file or directory"  Something very odd is happening at configure time.
[22:32:44 CEST] <jkqxz> But the link test for libva actually working passes.  I think you have something very odd going on with your toolchain; no idea what else to suggest.
[22:34:17 CEST] <doublya> Thank you for looking into it for me. It's very strange that libva-utils build and works, but ffmpeg will not
[22:46:12 CEST] <ChocolateArmpits> Hey, I'm trying to encode a video using mjpeg, however surprisingly at the same bitrate a higher resolution version provides better detail for some reason. Is there any reason for this?
[22:47:06 CEST] <BtbN> well, it's a higher resolution
[22:47:52 CEST] <ChocolateArmpits> well that sort of goes against intuition or lower resolution having higher bits per pixel distribution
[22:47:55 CEST] <BtbN> How do you even define "same bitrate". I don't think you can run mjpeg in any kind of cbr more, can you?
[22:48:01 CEST] <ChocolateArmpits> at 1mbps a 1024x768 has more detail preserved than 640x480, both are scaled from 768x576
[22:48:40 CEST] <ChocolateArmpits> both look like poo, but an upscale much less
[22:48:47 CEST] <ChocolateArmpits> slightly rather than much
[22:50:30 CEST] <YokoBR> please folks, I still couldn't manage to stream multiple outputs using the tee mux
[22:50:33 CEST] <YokoBR> Output #0, tee, to '[f=flv:onfail=ignore]rtmp://a.rtmp.youtube.com/live2/mykey|[f=flv:onfail=ignore]rtmp://live-api-a.facebook.com:80/rtmp/myparams': Output file #0 does not contain any stream
[22:50:47 CEST] <furq> YokoBR: pastebin the full command
[22:51:02 CEST] <ChocolateArmpits> YokoBR, you need -map 0
[22:51:07 CEST] <ChocolateArmpits> or any mapping command
[22:51:15 CEST] <furq> ChocolateArmpits: by "upscale" do you mean less downscaled
[22:51:43 CEST] <YokoBR> https://gist.github.com/jersobh/cb1eba58afc88da0aecec5f3d43ce67e
[22:51:53 CEST] <YokoBR> ChocolateArmpits: where would that command be?
[22:52:21 CEST] <ChocolateArmpits> YokoBR, in the output commands, place it right after -f tee just so you know better
[22:53:19 CEST] <alexpigment> ChocolateArmpits: by chance, are you viewing with VLC? They do some stupid scaling by default that creates aliasing that isn't in the original stream. There's a setting to fix it, but I'd have to look it up.
[22:53:43 CEST] <ChocolateArmpits> alexpigment, I didn't notice anything funny
[22:53:49 CEST] <alexpigment> (I could go on for days about everything that VLC does wrong by default, but oh well)
[22:54:38 CEST] <alexpigment> ChocolateArmpits: you said that 1024x768 looked better than 640x480, even though they were from the same source. my question is if you're viewing them in VLC
[22:54:50 CEST] <ChocolateArmpits> alexpigment, I was
[22:54:53 CEST] <ChocolateArmpits> both of em
[22:55:25 CEST] <alexpigment> lemme, look at the vlc settings on my home system. i finally looked into it one day because it was 1 of a handful of reasons I never use it
[22:55:28 CEST] <YokoBR> Thanks, ChocolateArmpits . Do you know if that command would push the stream to both services?
[22:56:08 CEST] <YokoBR> '-map', '0', '[f=flv:onfail=ignore]rtmp://a.rtmp.youtube.com/live2/'+_config_youtube+'|[f=flv:onfail=ignore]rtmp://live-api-a.facebook.com:80/rtmp/'+_config_facebook
[22:56:24 CEST] <YokoBR> because it's working for youtube only
[22:57:30 CEST] <ChocolateArmpits> YokoBR, the -map 0 is a mapping command that maps input streams to output streams. It's tee that performs stream multiplication if you can call it that
[22:58:36 CEST] <ChocolateArmpits> YokoBR, if you can, try streaming to facebook separately to investigate the issue more closely, I think I had some problems related to it, can't remember exactly though
[22:58:55 CEST] <YokoBR> ChocolateArmpits: without the tee muxer it works flawlessly
[23:01:25 CEST] <alexpigment> ChocolateArmpits: well I can't figure out what the setting is, but either way, VLC is a garbage player. you might try another player and see if 1024x768 still looks better so much better. I only say this because 720x480 videos were unwatchable on my end, but anything high def was fine. Windows Media Player played the files correctly
[23:20:01 CEST] <ChocolateArmpits> ok here it is https://i.imgur.com/4wbp8wi.jpg . The video is initially downscaled to sub-sd, then piped to another instance of ffmpeg where it gets parallelly encoded to two mjpeg variants, one with the incoming resolution, other with upscaled. The upscaled one preserves original detail better (big player) than the incoming resolution encode (smaller player). You can see the command line in the top row
[23:20:37 CEST] <ChocolateArmpits> both are encoded at 500k bitrate
[23:21:18 CEST] <ChocolateArmpits> this isn't a unique scene or some artifact, this keeps happening repeatedly
[23:21:45 CEST] <ChocolateArmpits> I want to understand why this is
[23:24:28 CEST] <qmr> how can I mix together an audio file with existing audio with an offset and change volume levels?  I want to mess with audio without generation loss on video
[23:25:03 CEST] <qmr> I'm editing videos of flying.  cameras are obviously super noisy from the engine and prop, but I have cockpit audio recorded on my phone
[23:27:12 CEST] <klaxa> qmr: you probably want avolume, adelay and amix: https://ffmpeg.org/ffmpeg-filters.html maybe also see: https://stackoverflow.com/questions/44712868/ffmpeg-set-volume-in-amix
[23:27:30 CEST] <furq> it's just volume, not avolume
[23:27:33 CEST] <klaxa> oh
[23:27:54 CEST] <qmr> oh yea the last bit, sort of mentioned but didnt' finish thought, sorry, I want to mix camera audio and cockpit audio
[23:28:10 CEST] <qmr> little bit of vrrrrrrrrrrrrrrr in background for ambience, but the cockpit audio should be loud and clear
[23:28:38 CEST] <furq> well yeah you want amix then
[23:28:42 CEST] <furq> !filter amix @qmr
[23:28:42 CEST] <nfobot> qmr: http://ffmpeg.org/ffmpeg-filters.html#amix
[23:28:43 CEST] <klaxa> maybe it would be easier to do this in a graphical editor like audacity? then mux the audio and video again
[23:30:42 CEST] <alexpigment> ChocolateArmpits: are the actual files the same file size?
[23:31:25 CEST] <ChocolateArmpits> alexpigment, hmm I streamed the encode directly to ffplay for viewing, didn't think of writing them
[23:31:37 CEST] <ChocolateArmpits> could do it
[23:31:38 CEST] <alexpigment> it's possible your bitrates aren't being used
[23:32:29 CEST] <klaxa> also by design (afaik) jpeg images are always divided into 8x8 pixel blocks, so a bigger picture will have more blocks showing more details
[23:32:47 CEST] <furq> qmr: presumably something like -i camera.wav -i cockpit.wav -filter_complex "[0:a]adelay=12345,volume=-6dB,[1:a]amix"
[23:33:06 CEST] <alexpigment> klaxa: I thought of that to, but this does look like more macroblocking due to bandwidth starvation imho
[23:33:39 CEST] <klaxa> maybe
[23:34:06 CEST] <furq> the real question is, why are you using mjpeg in 2017 ad
[23:34:09 CEST] <ChocolateArmpits> alexpigment, welp looks like mjpeg encoder doesn't respect the bitrate in either case
[23:34:16 CEST] <alexpigment> i figured so
[23:34:22 CEST] <ChocolateArmpits> one ends up with ~700kbps other with ~2000kbps
[23:34:27 CEST] <furq> oh nice
[23:34:30 CEST] <furq> yeah that'll do it
[23:35:10 CEST] <alexpigment> another thing to take a look at is why one is 560x240 and one is 1120x480. those aren't the resolutions you mentioned, so it may be worth looking into it
[23:35:14 CEST] <furq> a lot of the builtin encoders tend to ignore you if you give them some hopelessly optimistic bitrate
[23:36:00 CEST] <klaxa> huh, that's what he said though, one is the (below-SD) downscaled one and the other is upscaled
[23:36:02 CEST] <alexpigment> fwiw, I'm looking at my own notes from the past and I always used -q:v 2 for MJPEG. either I didn't want to set the bitrate, or I ran into the same thing
[23:36:03 CEST] <ChocolateArmpits> alexpigment, nah the end effect is the same, so I guess bitrate misrespect is basically happening in the original case too
[23:36:59 CEST] <alexpigment> it may be worth setting both -q:v [whatever] and also the bitrate, and see if it gets honored there
[23:37:07 CEST] <alexpigment> some encoders require both if i recall correctly
[23:38:03 CEST] <alexpigment> ChocolateArmpits: re: the resolutions, I'm not saying it's a factor in what you're saying, it's just that you mentioned 1024x768 and 640x480 specifically before. I'm not seeing those in the ffmpeg info in your screenshot. if the size is fine, no worries
[23:38:34 CEST] <ChocolateArmpits> alexpigment, that was another case, don't have access to those files atm so have to improvise
[23:38:41 CEST] <alexpigment> gotcha
[23:39:04 CEST] <ChocolateArmpits> nope -b:v -q:v don't make it worth
[23:39:28 CEST] <alexpigment> interesting
[23:39:52 CEST] <alexpigment> well, I guess 1mbps bitrate and MJPEG aren't going to be workable in ffmpeg
[23:40:04 CEST] <alexpigment> it does raise the question of why you need MJPEG at that bitrate though
[23:40:11 CEST] <alexpigment> (in 2017, or any year prior)
[23:40:39 CEST] <alexpigment> rather, 500kbps in this case
[23:41:44 CEST] <ChocolateArmpits> the destination is an old version of vlc from  that doesn't support x264 intra refresh decoding so to achieve lower latency only mjpeg can be used
[23:41:48 CEST] <ChocolateArmpits> from '08
[23:41:57 CEST] <alexpigment> gotcha
[23:42:15 CEST] <alexpigment> is DivX / XviD an option?
[23:42:36 CEST] <ChocolateArmpits> actually it is slice based decoding
[23:42:36 CEST] <alexpigment> I don't know about the relative latencies of XviD MJPEG
[23:42:54 CEST] <ChocolateArmpits> the penalty is player buffering
[23:43:02 CEST] <ChocolateArmpits> I don't think those will work
[23:43:31 CEST] <ChocolateArmpits> haven't tested though
[23:43:32 CEST] <alexpigment> ok, I'm going to go out on a limb here
[23:43:38 CEST] <klaxa> i use jpeg for low latency live streaming as well
[23:43:42 CEST] <alexpigment> would using x264 with a gop of 1 work?
[23:44:01 CEST] <alexpigment> that should still be better than MJPEG, although functionally similar
[23:44:05 CEST] <pzich> GOP of 1? :O
[23:44:20 CEST] <klaxa> intra-only is not invalid
[23:44:27 CEST] <klaxa> worth a shot
[23:44:29 CEST] <klaxa> imo
[23:45:05 CEST] <alexpigment> probably worth trying -intra -g 1 -coder 0
[23:45:38 CEST] <alexpigment> and if they works, try taking away -coder 0
[23:45:42 CEST] <alexpigment> *that
[23:47:41 CEST] <ChocolateArmpits> alexpigment, What does -intra do? Can't find it in the docs
[23:48:23 CEST] <alexpigment> it sets it to intra-only mode encoding. I think it's effectively the same as -g 1, but if I recall, the H.264 profile reflects that it's an intra profile
[23:48:37 CEST] <alexpigment> don't quote me on that, I did these tests a long time ago and I'm just looking at my notes ;)
[23:49:04 CEST] <alexpigment> either there was some reason I specified both, or there wasn't. i'm hoping there was a reason
[23:50:50 CEST] <ChocolateArmpits> -tune lowlatency should be enough to replace -coder 0
[23:51:05 CEST] <ChocolateArmpits> coder 0 turns off cabac which lowlatency does too along with a few other
[23:51:31 CEST] <alexpigment> cool, i didn't know what VLC from 2008 supported or not
[23:53:46 CEST] <furq> it's zerolatency
[23:54:00 CEST] <furq> but yeah gop size of 1 is overkill, just turning off bframes is sufficient
[23:54:53 CEST] <furq> intra-refresh doesn't actually reduce latency afaik, it just avoids bitrate spikes at gop boundaries
[23:59:15 CEST] <Hopper_> furq: Mentioned you earlier, did you see my message?
[00:00:00 CEST] --- Wed Oct  4 2017


More information about the Ffmpeg-devel-irc mailing list