[Ffmpeg-devel-irc] ffmpeg.log.20170208

burek burek021 at gmail.com
Thu Feb 9 03:05:01 EET 2017


[00:18:28 CET] <xtina> interestingly, i've continuously had stuttering while piping arecord to ffmpeg. but i am able to get smooth audio if i write arecord to a named pipe and simultaneously read from it
[00:18:47 CET] <xtina> so (arecord -> named pipe -> ffmpeg) and (raspivid -> ffmpeg) is the best performance i can get
[00:19:02 CET] <kerio> xtina: what about using all named pipes
[00:19:04 CET] <xtina> with very low CPU usage
[00:19:27 CET] <xtina> a named pipe for the raspivid as well?
[00:19:32 CET] <kerio> sure
[00:19:41 CET] <xtina> sure, i can try it
[00:19:56 CET] <kerio> it would be cleaner
[00:21:01 CET] <xtina> perhaps it will also help with the pesky issue where my audio is currently playing in realtime but the video is skipping
[00:50:14 CET] <xtina> i must be doing something stupid but i can't figure out how to send raspivid to a named pipe
[00:50:20 CET] <xtina> mkfifo temp_video.h264
[00:50:27 CET] <xtina> raspivid -t 0 -fps 20 -b 3000000 -o temp_video.h264 &
[00:50:44 CET] <xtina> nothing gets put in temp_video.h264 .. weird
[00:57:21 CET] <Diag> i dont get how you think youre gonna encode h264 on a pi
[00:57:33 CET] <furq> maybe because it has a hardware h264 encoder
[00:57:50 CET] <Diag> can ffmpeg use it?
[00:57:57 CET] <xtina> i think so
[00:58:04 CET] <furq> yes
[00:58:19 CET] <Diag> huh
[00:58:29 CET] <Diag> well butter my ass and call me a biscuit
[00:58:47 CET] <kerio> you don't even want to use it like that
[00:58:54 CET] <kerio> if you're going to take video from the camera
[01:35:59 CET] <xtina> this is really bizarre
[01:36:21 CET] <xtina> if i create a regular file temp.v and run raspivid -fps 20 -v -b 3000000 -o temp.v -t 0 &
[01:36:26 CET] <xtina> then video gets written in temp.v
[01:36:33 CET] <xtina> if i run mkfifo temp.v and raspivid -fps 20 -v -b 3000000 -o temp.v -t 0 &
[01:36:36 CET] <xtina> then nothing ever goes into temp.v
[01:37:04 CET] <xtina> and with the raspivid output logs, nothing happens after 'Connecting camera video port to encoder input port'
[01:37:11 CET] <xtina> the next step should be opening the output file, but it doesn't happen
[01:37:21 CET] <xtina> am i doing something stupid?
[01:40:15 CET] <Diag> yes
[01:41:16 CET] <xtina> Diag: what am I doing wrong?
[01:41:47 CET] <hyponic> Willing to pay for a quick mod from a dev or a c programmer familiar with ffmpeg. 1 hour job. Please PM me.
[01:42:16 CET] <Diag> what sort of mod
[01:43:15 CET] <hyponic> Diag PM i will explain.
[01:43:21 CET] <xtina> Diag: can you spot my mistake? i don't know what it is :)
[01:44:39 CET] <Diag> hyponic: im no ffmpeg dude, its just youll probably catch more fish if you dangle your bait rather than keeping it in the boat
[01:45:06 CET] <hyponic> Diag https://trac.ffmpeg.org/ticket/5913
[01:45:38 CET] <Diag> i see
[01:46:01 CET] <hyponic> Diag make libzvbi give a picture format ffmpeg's dvbsub encoder understands, or make ffmpeg understand the output libzvbi has now
[01:49:00 CET] <hyponic> Diag is that something that you potentially can help out with?
[01:50:38 CET] <Diag> no not specifically
[01:51:04 CET] <hyponic> ok
[01:52:54 CET] <xtina> Diag: do you know what I did wrong with my named pipe..?
[02:11:03 CET] <xtina> does anyone else mind testing if the following commands work for them?
[02:11:09 CET] <xtina> mkfifo test.video
[02:11:29 CET] <xtina> raspivid -o test.video -t 0 -b 10000000 &
[02:11:36 CET] <xtina> this does not succeed for me :(
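The symptom above is standard FIFO semantics rather than a raspivid bug: open() on a named pipe for writing blocks until some process opens it for reading, which is why raspivid stalls right after "Connecting camera video port to encoder input port" and never reaches the output-file step until ffmpeg (or any reader) opens the pipe. A minimal sketch of the behavior (file and directory names are placeholders):

```shell
set -e
dir=$(mktemp -d)
mkfifo "$dir/temp_video.h264"

# The writer's open() blocks until a reader attaches -- raspivid behaves
# the same way when -o points at a FIFO, so it must run in the background.
(echo "frame-data" > "$dir/temp_video.h264") &

# Only once this reader opens the FIFO does the writer unblock.
read line < "$dir/temp_video.h264"
echo "$line"

wait
rm -rf "$dir"
```

So starting raspivid with `&` is fine, but nothing flows until the consuming ffmpeg process is started against the same pipe.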
[03:02:45 CET] <xtina> hey guys. i've made a livestream where the audio is playing at a normal speed, but the video is skipping forward like crazy. here's the streamed video: https://www.youtube.com/watch?v=hMD-EwVV-JI
[03:03:11 CET] <xtina> any idea why? my command: http://pastebin.com/UVVLtN0x
[03:04:00 CET] <xtina> essentially the raspivid video is playing *really* fast
[03:04:47 CET] <xtina> also, i recorded about 1 min of stream, but there's only 15 seconds in the result... it seems like the audio in the playback was the first 15s, but the video is skipping through the entire 1 minute
[03:06:05 CET] <Diag> have you tried adding -ID -10T
[03:09:26 CET] <xtina> Diag: did you just call me an idiot?
[03:09:36 CET] <Diag> damn, almost got him
[03:09:52 CET] <xtina> 1st. i'm not a guy
[03:09:55 CET] <xtina> 2nd. why are you being so rude?
[03:10:02 CET] <xtina> are there no mods in this channel? this is ridiculous
[03:10:18 CET] <Diag> Aww come on, im just havin a bit of fun
[03:10:29 CET] <xtina> right, calling people idiot is hilarious and clever
[03:10:32 CET] <Diag> People have been helping you since I was at work >4 hours ago
[03:10:40 CET] <xtina> you're right, everyone has been incredibly helpful
[03:10:46 CET] <Diag> except me
[03:11:11 CET] <xtina> except for you, who've done nothing but call me stupid and an idiot, and suggest my project is impossible when others have pointed out that your assumptions are wrong?
[03:11:29 CET] <xtina> do you have nothing better to do than to troll me?
[03:11:42 CET] <Diag> Listen mate, you were obviously trying to do a software x264 encode first of all
[03:11:44 CET] <Diag> secondly
[03:11:51 CET] <Diag> I did not know it had h264 hardware
[03:11:56 CET] <Diag> And when i was wrong i said so
[03:12:23 CET] <Diag> ID 10T is one of the oldest jokes around
[03:12:25 CET] <xtina> i'm not going to argue with a stranger over IRC, but i'd suggest you find a new 'hobby' if this is your main one
[03:12:58 CET] <Diag> Lol aight man
[03:12:59 CET] <xtina> but keep making hilarious jokes if you like, nobody's gonna stop ya
[03:13:30 CET] <Diag> keep on trying to encode h264 on the fly on a 80 cent processor, nobodys gonna stop ya
[03:14:11 CET] <xtina> you KNOW that that is not what i am trying to do. what is your vendetta against, exactly?
[03:14:26 CET] <Diag> its not? it was really sounding like it was
[03:14:29 CET] <xtina> me, my fanciful project, raspberry pi, ...?
[03:14:31 CET] <Diag> Im sorry, what exactly are you doing
[03:14:44 CET] <xtina> i'm trying to stream video and audio from my Pi Zero to the internet, i don't care how, as long as it works
[03:15:11 CET] <xtina> clearly i know very little about video encoding, so i tried x264 since someone HERE suggested it to me, now i am avoiding it because i've learned a faster way
[03:15:26 CET] <xtina> really no need to belittle me, just because i know less than you
[03:15:35 CET] <Diag> nobody said you know less than i lol
[03:15:44 CET] <Diag> I already mentioned before im not an ffmpeg guy
[03:15:53 CET] <xtina> great, i wasn't asking you for anything, Diag
[03:15:58 CET] <Diag> ????/
[03:15:59 CET] <xtina> i've been talking to others in here
[03:16:02 CET] <Diag> I tell you im wrong
[03:16:04 CET] <Diag> and you insult me
[03:16:06 CET] <Diag> I get it, ok
[03:16:13 CET] <Diag> Thanks
[03:16:21 CET] <xtina> all right. i'm stopping here, continue the convo if you like
[03:16:35 CET] <Diag> asshat
[03:17:11 CET] <xtina> wow, OK. friendly guy.
[03:17:24 CET] <xtina> :)
[03:59:32 CET] <llamapixel> just ignore xtina
[03:59:45 CET] <llamapixel> he obviously had some issues there.
[04:00:03 CET] <furq> you're missing a comma there
[04:00:43 CET] <llamapixel> You are missing a Capital Letter and full stop or period but I tolerate your engrish. ;)
[04:00:46 CET] <xtina> haha, yea, i got it :)
[04:10:51 CET] <thebombzen> xtina: I might be wrong about this, but I don't think a raw h.264 stream stores the framerate
[04:11:05 CET] <thebombzen> try adding -framerate X before -i input.h264
[04:11:13 CET] <thebombzen> (where X is the video's framerate)
[04:11:18 CET] <furq> iirc you need -r, not -framerate
[04:11:20 CET] <thebombzen> FFmpeg will assume 25 unless otherwise stated
[04:11:32 CET] <furq> there is no -framerate option for the h264 demuxer
[04:11:41 CET] <thebombzen> that sounds like an error
[04:11:49 CET] <furq> it sure does
[04:11:51 CET] <thebombzen> because -framerate is used to provide information not otherwise there
[04:11:52 CET] <xtina> i did just pop -framerate 20 in front of -i input.h264 (also going at 20fps in raspivid)
[04:12:05 CET] <thebombzen> whereas -r is used to coax the framerate
[04:12:11 CET] <xtina> now my video is not skipping anymore, it's just 10 seconds faster than my audio :)
[04:12:14 CET] <thebombzen> so if it uses -r and not -framerate that's probably bad
[04:12:21 CET] <furq> yeah i was confused by that
[04:12:43 CET] <xtina> hmmm
[04:12:48 CET] <furq> -r is a global option and -framerate is a demuxer private option
[04:12:57 CET] <xtina> so you're saying i should be using -r (but this is a bug)?
[04:13:01 CET] <thebombzen> xtina: what does raspivid do?
[04:13:07 CET] <thebombzen> does it grab the display?
[04:13:10 CET] <xtina> raspivid -fps 10 -v -b 3000000 -o temp_video.h264 -t 0 & \
[04:13:15 CET] <thebombzen> if so you can do that directly with FFmpeg, did you know that?
[04:13:16 CET] <xtina> grabs it and puts it in a named pipe
[04:13:18 CET] <furq> i'm saying you should use -r and it's not a bug, it's just confusing because it's -framerate for other formats
[04:13:24 CET] <xtina> furq: oh, i see
[04:13:41 CET] <thebombzen> furq: yea it's not a bug, it's just a confusing error
[04:13:42 CET] <furq> thebombzen: it apparently doesn't work properly with ffmpeg
[04:13:49 CET] <thebombzen> what do you mean?
[04:13:51 CET] <xtina> thebombzen: i know, but someone in here told me that if i send from raspivid to ffmpeg then it sends hardware encoded video to ffmpeg and bypasses CPU and keeps my CPU usage low
[04:14:05 CET] <xtina> i've gotten much better cpu usage by sending a raspivid named pipe to ffmpeg
[04:14:07 CET] <thebombzen> xtina: you can also hardware encode with ffmpeg
[04:14:09 CET] <furq> that's what h264_omx does
[04:14:17 CET] <xtina> yea, i was previously hardware encoding with h264_omx
[04:14:24 CET] <xtina> but the cpu usage was much higher
[04:14:31 CET] <xtina> than if i take it from raspivid
[04:14:36 CET] <furq> what format comes out of your camera
[04:14:37 CET] <xtina> it was at like 80-90% (i'm on a pi zero)
[04:15:40 CET] <thebombzen> xtina: what happened with -r 20?
[04:15:43 CET] <xtina> furq: i think h264
[04:15:46 CET] <furq> oh
[04:15:47 CET] <xtina> from the cam
[04:15:50 CET] <furq> why are you reencoding it then
[04:15:53 CET] <thebombzen> also xtina do not use -re with live grabbing
[04:16:04 CET] <xtina> i don't think i'm re encoding, just copying video
[04:16:08 CET] <xtina> here's my full command in pastebin:
[04:16:21 CET] <furq> i'm pretty sure raspivid reencodes
[04:16:23 CET] <xtina> http://pastebin.com/dMQNrbSj
[04:16:25 CET] <thebombzen> xtina: the -re will intentionally read the input file in realtime. it's used for streaming a file on a disk
[04:16:40 CET] <thebombzen> xtina: if you use -re for a live grab it will usually just cause random lags
[04:16:45 CET] <thebombzen> and give you no real benefit
[04:16:53 CET] <xtina> thebombzen: i'm writing both raspivid and arecord to named pipes. i'm grabbing from those pipes to ffmpeg, so i thought this was the right way to make them play in realtime?
[04:17:17 CET] <thebombzen> but the original source is in realtime
[04:17:31 CET] <thebombzen> by default ffmpeg will read as fast as it can, but given that you're using pipes, it'll just block if nothing's there
[04:18:04 CET] <thebombzen> if you use -re then if there's any sort of lag where ffmpeg fails to record one small bit at 1x speed, it won't read the rest at slightly higher to catch up
[04:18:13 CET] <thebombzen> it'll just stay behind forever
[04:18:22 CET] <xtina> hmmm, i see
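Pulling thebombzen's and furq's points together for the pipe-based setup: a raw H.264 elementary stream carries no framerate (ffmpeg assumes 25), for the h264 demuxer the rate is supplied with the global -r option placed before -i (there is no -framerate private option for it), and -re should be dropped because a live pipe already delivers data in real time. Sketched on the video input only; the pipe name and RTMP endpoint are taken from the chat, and the audio options from the pastebin command would stay as they were:

```shell
# -r 20 before -i tells ffmpeg the true rate of the raw stream; no -re,
# since falling behind once would otherwise leave the stream behind forever.
ffmpeg -r 20 -i temp_video.h264 \
       -c:v copy \
       -f flv rtmp://209.85.230.23/live2/KEY
```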
[04:18:25 CET] <furq> xtina: ffmpeg -f alsa -ar 48000 -acodec pcm_s32le -i mic_sv -f v4l2 -framerate 20 -i /dev/video0 -c:v copy -acodec aac -f flv rtmp://209.85.230.23/live2/asdf
[04:18:29 CET] <furq> if your camera spits out h264 then try that
[04:18:58 CET] <thebombzen> also consider -use_libv4l2 true right after -f v4l2
[04:19:05 CET] <furq> what does that do
[04:19:10 CET] <xtina> i will try taking out both -re's then
[04:19:17 CET] <xtina> furq: will try
[04:19:30 CET] <thebombzen> furq: I don't know exactly but in my experiences it has the same effect as LD_PRELOADing v4l1compat.so
[04:19:39 CET] <furq> and what does that do
[04:19:52 CET] <thebombzen> gives you more options. hold on lemme get you a concrete example
[04:21:01 CET] <xtina> furq: [flv @ 0x1850c10] Video codec rawvideo not compatible with flv
[04:21:15 CET] <furq> fun
[04:21:25 CET] <furq> i have no idea why ffmpeg would cause more load than raspivid then, there's nothing to decode
[04:22:19 CET] <furq> actually maybe you need -f h264 and/or -c:v h264 before -i /dev/video0
[04:23:02 CET] <xtina> instead of -f v4l2?
[04:23:07 CET] <furq> both
[04:23:08 CET] <furq> before that though
[04:23:14 CET] <furq> run v4l2-ctl --list-formats-ext
[04:23:21 CET] <furq> or ffmpeg -f v4l2 -list_formats all -i /dev/video0
[04:23:57 CET] <xtina> won't paste long output but h264 is in there
[04:24:07 CET] <xtina> 	Index       : 2 	Type        : Video Capture 	Pixel Format: 'H264' (compressed) 	Name        : H264 		Size: Stepwise 64x64 - 1920x1080 with step 8/8
[04:24:19 CET] <thebombzen> use -f v4l2 -vcodec h264
[04:24:20 CET] <furq> i think you use -c:v h264 as an input option to force that
[04:24:22 CET] <furq> yeah
[04:24:25 CET] <thebombzen> try that
[04:24:47 CET] <thebombzen> furq: http://0x0.st/W4E.txt
[04:24:54 CET] <furq> the video it puts out can't be any worse than the pi's encoder
[04:28:30 CET] <xtina> so i just tried
[04:28:32 CET] <xtina> ~/special/ffmpeg/ffmpeg -f alsa -ar 48000 -acodec pcm_s32le -i mic_sv -f v4l2 -vcodec h264 -c:v h264 -framerate 20 -i /dev/video0 -c:v copy -acodec aac -f flv rtmp://209.85.230.23/live2/KEY
[04:29:01 CET] <xtina> the logs look good, except for a alsa buffer xrun every ~20 seconds
[04:29:13 CET] <xtina> but the youtube stream is stuck in 'starting' for the 1-2 minutes i tried to stream
[04:29:16 CET] <xtina> which i'd never seen before
[04:29:26 CET] <thebombzen> that's actually typical
[04:29:28 CET] <xtina> i delivered ~15fps for 2minutes and nothing ever popped up on stream
[04:29:30 CET] <thebombzen> I've seen it many times
[04:29:38 CET] <thebombzen> the youtube stream is famous for having a lot of latency
[04:29:45 CET] <thebombzen> there isn't really a way around that tbh
[04:29:53 CET] <xtina> i typically get 15 seconds of latency
[04:29:59 CET] <xtina> but not 2 minutes waiting in 'starting'. hmmm.
[04:30:02 CET] <thebombzen> I've seen two minutes before
[04:30:09 CET] <xtina> let me try it again for longer, then
[04:30:14 CET] <thebombzen> okay sure
[04:30:20 CET] <thebombzen> I'd give it five and then start worrying
[04:30:22 CET] <xtina> i thought i was supposed to use -r
[04:30:26 CET] <thebombzen> you are
[04:30:28 CET] <xtina> not -framerate? '~/special/ffmpeg/ffmpeg -f alsa -ar 48000 -acodec pcm_s32le -i mic_sv -f v4l2 -vcodec h264 -c:v h264 -framerate 20 -i /dev/video0 -c:v copy -acodec aac -f flv rtmp://209.85.230.23/live2/KEY'
[04:30:40 CET] <thebombzen> oh
[04:30:43 CET] <thebombzen> for v4l2 use -framerate
[04:30:49 CET] <xtina> oh, i see
[04:30:58 CET] <thebombzen> so certain demuxers implement -framerate, which says "yo, in case you didn't know, the framerate is this"
[04:31:04 CET] <xtina> and isn't h264 not the hardware accelerated encoding?
[04:31:20 CET] <thebombzen> I don't know - if h264 comes out of your /dev/video0 then I'd guess yes
[04:31:30 CET] <thebombzen> my guess is the raspi is recording and encoding with a hardware chip
[04:31:50 CET] <thebombzen> also by the way, I wouldn't recommend -c:a aac with a low bitrate
[04:32:03 CET] <thebombzen> ffmpeg's aac encoder is okay at high bitrates but it drops off at low bitrates
[04:32:06 CET] <xtina> OK. the libmp3lame thing?
[04:32:11 CET] <thebombzen> no mp3 is terrible
[04:32:22 CET] <thebombzen> the issue is... does youtube require -f flv?
[04:32:29 CET] <thebombzen> cause if it does you're sort of forced
[04:32:40 CET] <thebombzen> but if there's any way to feed opus or vorbis I'd figure that out
[04:33:00 CET] <furq> if h264 comes out of the camera then it's not touching the pi's hardware at all
[04:33:03 CET] <thebombzen> if you build ffmpeg yourself you can use -c:a libfdk_aac but because it's nonfree you HAVE to build ffmpeg yourself
[04:33:05 CET] <xtina> it uses rtmp
[04:33:18 CET] <xtina> i did build ffmpeg myself
[04:33:19 CET] <furq> but maybe the pi camera is doing some weird shit with the onboard encoder
[04:33:20 CET] <thebombzen> xtina: oh yea rtmp requires flv hmm
[04:33:33 CET] <xtina> i followed furq's instructions :)
[04:33:34 CET] <thebombzen> xtina: it's legal to use libfdk-aac if you build it yourself from source
[04:33:39 CET] <xtina> but i think libfdk_aac was unfound
[04:33:48 CET] <thebombzen> yea you have to build libfdk-aac yourself
[04:33:49 CET] <xtina> and he said it should be there anyway (if i'm remembering correctly..)?
[04:33:51 CET] <xtina> so i didn't include that
[04:33:55 CET] <furq> i said that about alsa
[04:33:58 CET] <xtina> oh
[04:33:59 CET] <xtina> whoops
[04:34:08 CET] <thebombzen> libfdk-aac is "nonfree" which means that the source code is legal, but the binaries are not legal to be distributed
[04:34:14 CET] <furq> i said fdk didn't really matter because there's a builtin aac encoder which is ok
[04:34:14 CET] <thebombzen> it's totally okay to DL the source and build it yourself tho
[04:34:19 CET] <thebombzen> furq: not for low bitrates
[04:34:21 CET] <xtina> ah, i see
[04:34:29 CET] <thebombzen> the built in aac encoder is only okay at 128+ (or maybe 96)
[04:34:33 CET] <furq> it should be acceptable at low bitrate for voice
[04:34:44 CET] <thebombzen> once you get into HE-AAC territory I was under the impression it was real bad
[04:34:55 CET] <furq> i wouldn't worry about it yet
[04:34:57 CET] <thebombzen> but then again if you're only encoding your voice then don't worry about it
[04:34:58 CET] <furq> if the quality sucks then sure
[04:35:08 CET] <furq> i'd have thought fdk would be faster on a pi as well
[04:35:16 CET] <furq> although i notice native is faster on my desktop
[04:35:18 CET] <xtina> are these two not redundant? '-vcodec h264 -c:v h264'
[04:35:24 CET] <furq> yeah they're the same option
[04:35:29 CET] <furq> -c:v is the new name for -vcodec
[04:35:34 CET] <xtina> why both?
[04:35:39 CET] <xtina> can just use one?
[04:35:43 CET] <furq> sure
[04:35:46 CET] <xtina> OK
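Folding the thread's conclusions into one invocation: ask v4l2 for the camera's native H264 capture format, give the rate with the v4l2 demuxer's -framerate option, and stream-copy the video so nothing is re-encoded. One assumption worth flagging: -input_format h264 is the v4l2 demuxer's option for selecting the capture format listed by -list_formats, used here instead of the input-side -vcodec h264 from the command above (which picks a decoder rather than the capture format). Device names and the endpoint are from the chat:

```shell
ffmpeg -f alsa -ar 48000 -acodec pcm_s32le -i mic_sv \
       -f v4l2 -input_format h264 -framerate 20 -i /dev/video0 \
       -c:v copy -c:a aac \
       -f flv rtmp://209.85.230.23/live2/KEY
```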
[04:41:15 CET] <thebombzen> furq: does fdk have arm optimizations?
[04:41:43 CET] <furq> it's optimised for mobile
[04:41:47 CET] <furq> it's all fixed-point iirc
[04:43:56 CET] <xtina> hmm, so there is no need for me to use hardware accelerated encoding (omx h264) because the pi camera is already giving me hardware encoded video, right?
[04:44:10 CET] <xtina> and if i were to use omx h264, it would just.. re encode already-encoded video?
[04:44:29 CET] <xtina> that option is more for other kinds of cameras?
[04:44:59 CET] <furq> yeah
[04:45:23 CET] <xtina> cool, just wanna make sure i understand
[04:46:54 CET] <xtina> furq: it is really bizarre, but i've never been able to make alsa ffmpeg work without stuttering audio the entire way through. this video is the result of the cmd i just ran: https://www.youtube.com/watch?v=K3DinyccuMU
[04:47:05 CET] <xtina> the video is perfect but the audio is all stuttering
[04:47:15 CET] <xtina> this doesn't happen when i pipe arecord into ffmpeg
[04:47:48 CET] <xtina> (i was counting from 1-30 in the vid, that should be the audio)
[04:48:56 CET] <xtina> here's the cmd again: http://pastebin.com/jDY03SmZ
[04:49:23 CET] <xtina> i do get buffer xrun messages, every 5 seconds, not sure if that's related
[04:49:30 CET] <xtina> (i don't get buffer xruns when i pipe arecord into ffmpeg)
[04:50:05 CET] <furq> maybe get rid of -ar 48000
[04:50:14 CET] <furq> the input option is -sample_rate but it defaults to 48000 anyway
[05:02:25 CET] <xtina> furq: hmm, no luck with that one
[05:03:06 CET] <xtina> i think someone told me the stuttering might happen because ffmpeg is single threaded and is missing audio when it does other things, whereas if it's reading from an arecord file in a separate process, it won't 'miss' anything?
[05:03:22 CET] <xtina> but in that case i'm not sure why there's no missing video... only missing audio..
[05:03:32 CET] <xtina> so that doesn't make sense
[05:09:05 CET] <xtina> i'm also still trying to figure out what 'alsa buffer xrun' means and why i keep getting it when using ffmpeg -alsa but not when piping arecord into ffmpeg
[05:11:20 CET] <kepstin> all "alsa buffer xrun" means is that the audio buffer from the sound card is filling up faster than the application is reading from it
[05:11:40 CET] <furq> you could try setting a bigger buffer_size in asoundrc
[05:11:43 CET] <xtina> oh i see. so it's no problem?
[05:11:49 CET] <kepstin> and any data that doesn't fit just gets thrown out, leaving you with choppy audio
[05:11:51 CET] <furq> not sure why arecord would work in that case though
[05:11:54 CET] <xtina> ohh, i see
[05:11:57 CET] <xtina> it gets thrown out..
[05:12:04 CET] <xtina> well with arecord
[05:12:08 CET] <xtina> it doesn't get thrown out right?
[05:12:10 CET] <kepstin> i'm guessing that the pipe between arecord and ffmpeg is adding a bit of extra buffer to smooth it out
[05:12:11 CET] <xtina> it's all written into the pipe
[05:12:25 CET] <furq> that's true
[05:12:46 CET] <kepstin> but if the pipe buffer gets full because ffmpeg is reading too slow, the same thing could still happen, because it'll block arecord from being able to keep going
[05:13:26 CET] <furq> apparently buffer_size and period_size in asoundrc are important options
[05:13:33 CET] <kepstin> but a bigger buffer would smooth it out if ffmpeg is making big "chunky" reads - like it wants a big chunk of audio, then waits for a bit, then grabs another big chunk
[05:20:30 CET] <xtina> kepstin: why would ffmpeg be making chunky reads of audio, but not video?
[05:20:57 CET] <xtina> i've never had this constant stuttering problem when streaming video directly from ffmpeg
[05:21:53 CET] <xtina> i will play around with buffer_size and period_size, furq
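For reference, buffer_size and period_size are per-PCM parameters in ~/.asoundrc; one common pattern for capture is a dsnoop PCM wrapping the hardware device, whose slave section accepts both. Everything below (the PCM name, hw:1,0, and the sizes) is an assumption to adapt and tune:

```
# ~/.asoundrc sketch: a capture PCM with an enlarged ring buffer.
pcm.mic_buffered {
    type dsnoop
    ipc_key 2048              # any unique integer
    slave {
        pcm "hw:1,0"          # the real capture device
        period_size 8192      # frames per wakeup
        buffer_size 65536     # total ring buffer, in frames
    }
}
```

ffmpeg would then record from -i mic_buffered instead of the hardware name.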
[07:08:20 CET] <satinder___> Thread message queue blocking; consider raising the thread_queue_size option (current value: 1024)
[07:08:32 CET] <satinder___> I am getting above warning
[07:08:51 CET] <satinder___> any problem with this during udp transmission
[07:10:46 CET] <satinder___> please anyone help me
[07:43:31 CET] <thebombzen> satinder___: your queue is really small
[07:43:58 CET] <satinder___> thebombzen : I resolve it Sir
[07:44:01 CET] <thebombzen> it means that you're trying to write to a full queue when streaming udp before your computer's networking takes it
[07:44:03 CET] <thebombzen> ah k
[07:44:25 CET] <satinder___> thebombzen : Sir , Can we reduce delay during streaming
[07:44:36 CET] <thebombzen> oh boy
[07:44:41 CET] <satinder___> I am getting 2 sec delay over udp
[07:44:45 CET] <thebombzen> this is... sort of a very long and complicated question
[07:44:53 CET] <thebombzen> and I've been trying to answer it myself for a long long time
[07:45:05 CET] <thebombzen> and I have gotten better than 2 seconds but never lower than 100 ms, which was my goal
[07:45:18 CET] <satinder___> ok
[07:45:22 CET] <satinder___> good sir
[07:45:32 CET] <satinder___> please help me
[07:45:41 CET] <thebombzen> if you're encoding with libx264, use -tune:v zerolatency and -x264-params intra-refresh=yes
[07:45:55 CET] <satinder___> ok
[07:45:59 CET] <thebombzen> try -fflags +nobuffer -avioflags +direct as output options
[07:46:01 CET] <satinder___> I show you my command
[07:46:04 CET] <thebombzen> try -probesize 32 as an input option
[07:46:11 CET] <thebombzen> (although this will probably require you to set the format)
[07:46:26 CET] <satinder___> ffmpeg -thread_queue_size 1024  -f alsa -ac 2 -i pulse -i /dev/video0  -vcodec libx264 -b:v 3M -maxrate:v 3M -minrate:v 2.7M  -pix_fmt yuv420p  -bufsize 2M -muxrate 3M   -tune zerolatency  -f mpegts udp://@227.40.50.60:1234
[07:46:35 CET] <thebombzen> you should probably increase that
[07:46:47 CET] <thebombzen> you haven't specified your audio codec
[07:46:52 CET] <thebombzen> I'd recommend using opus as it's low delay
[07:47:02 CET] <thebombzen> but opus doesn't go inside mpegts, so you might want to use matroska instead
[07:47:43 CET] <satinder___> what is wrong with above command can you help me for correct it
[07:47:45 CET] <satinder___> ok
[07:47:48 CET] <thebombzen> Never mind opus in mpegts works
[07:48:17 CET] <thebombzen> probably should use -f v4l2 -probesize 32 before -i /dev/video0
[07:48:19 CET] <furq> i was about to say
[07:48:27 CET] <furq> it's either being or been standardised
[07:48:29 CET] <thebombzen> I was confused and then tested it and it worked
[07:48:37 CET] <thebombzen> opus doesn't work in mp4 though, at least according to avformat
[07:48:45 CET] <furq> that i can believe
[07:49:22 CET] <thebombzen> it also doesn't work with -f nut, which is dumb (but probably easily fixable)
[07:49:31 CET] <satinder___> ok if I use -probesize 32 then can I reduce delay
[07:49:56 CET] <thebombzen> satinder___: perhaps. ffmpeg scans the file for the format to autodetect it, including v4l2 inputs like /dev/video0
[07:50:05 CET] <satinder___> ok
[07:50:07 CET] <thebombzen> if you set the probesize to be as low as it goes (32) then it will do that less
[07:50:13 CET] <thebombzen> but you most likely will have to tell it the format yourself
[07:50:23 CET] <furq> two seconds with -tune zerolatency sounds like your player is doing some buffering
[07:51:13 CET] <thebombzen> out of curiosity what is the @ before the IP address in udp:// do
[07:51:22 CET] <satinder___> thebombzen : ok , Sorry sir don't mind can I ask how you achieve 100 ms
[07:51:29 CET] <thebombzen> I didn't figure that out
[07:51:49 CET] <thebombzen> I've been trying but I don't think it's really possible with ffmpeg.c
[07:51:50 CET] <furq> thebombzen: nothing
[07:51:51 CET] <satinder___> furq : ok, how can I overcome
[07:52:09 CET] <furq> consult your player's docs or use a different one
[07:52:37 CET] <thebombzen> satinder___: you can test it with ffplay -f mpegts -probesize 32 -i udp://your_udp_stream
[07:52:39 CET] <furq> getting below 500ms or so with ffmpeg itself is a bit hit and miss, it depends on a lot of buffers you can't control
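Collected into one command line, the suggestions so far applied to satinder___'s invocation (original bitrates and multicast address kept; -c:a libopus is an assumption based on the low-delay recommendation above, and -fflags/-avioflags are placed as output options as suggested):

```shell
ffmpeg -f alsa -thread_queue_size 1024 -ac 2 -i pulse \
       -f v4l2 -probesize 32 -thread_queue_size 1024 -i /dev/video0 \
       -c:v libx264 -preset ultrafast -tune:v zerolatency \
       -x264-params intra-refresh=1 \
       -b:v 3M -maxrate:v 3M -bufsize 2M -pix_fmt yuv420p \
       -c:a libopus \
       -fflags +nobuffer -avioflags +direct \
       -f mpegts udp://227.40.50.60:1234
```

Playback-side buffering matters just as much; testing with ffplay -probesize 32 -fflags nobuffer -f mpegts -i udp://... keeps the player from adding its own seconds of delay.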
[07:52:57 CET] <magican> Hi guys !
[07:53:05 CET] <furq> people who are very serious about realtime tend to write their own tool which gives them better control over the whole pipeline
[07:53:09 CET] <satinder___> ok
[07:53:14 CET] <furq> you can use the ffmpeg libs for that, of course
[07:53:37 CET] <satinder___> furq : but sir that will take very long time
[07:53:41 CET] <furq> it sure will
[07:53:44 CET] <magican> Problem houston, got my ffmpeg command working fine, BUT, sound is out-of-sync. Pretty good at start, but it increases as the video/time goes. Solution please?? Tried -isync, doesn't work.
[07:53:47 CET] <magican> command: ffmpeg -f x11grab -video_size 1920x1080 -i $DISPLAY -f alsa -i default -c:v libx264 -c:a aac -isync video.mkv
[07:53:57 CET] <magican> furq: Idea ? :)
[07:54:14 CET] <furq> shrug
[07:54:24 CET] <furq> i've never used x11grab or alsa
[07:54:53 CET] <furq> in ffmpeg, obviously. i've used alsa to listen to soundtracks from 90s arcade games
[07:55:01 CET] <furq> that probably doesn't help you though
[07:55:06 CET] <magican> alternative command based on what u see ? (Just a thought)
[07:56:03 CET] <furq> well isync hasn't existed for years so that's probably not going to help
[07:56:53 CET] <thebombzen> magican: I hear that OBS is pretty good about keeping that in sync
[07:56:54 CET] <thebombzen> but idk
[08:00:26 CET] <magican> BUT, it could be guvcview that's broken as well? (maybe)
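Since -isync no longer exists, one thing worth trying for magican's drift is timestamping both inputs from the wall clock so they share a clock base; -use_wallclock_as_timestamps is a generic demuxer option and must precede each -i. This is a suggestion rather than a guaranteed fix, since A/V drift between separate capture devices has many causes:

```shell
ffmpeg -use_wallclock_as_timestamps 1 -f x11grab -video_size 1920x1080 -i "$DISPLAY" \
       -use_wallclock_as_timestamps 1 -f alsa -i default \
       -c:v libx264 -c:a aac video.mkv
```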
[08:00:43 CET] <satinder___> furq : another thing is that , ffmpeg have already best code so why we develop again it
[08:00:54 CET] <furq> it doesn't
[08:00:58 CET] <furq> the ffmpeg libs are good
[08:01:10 CET] <furq> ffmpeg itself is a bit ropey in places
[08:01:40 CET] <satinder___> I mean, can we change it a little bit to overcome the delay issue
[08:01:50 CET] <thebombzen> yea libav* are great libraries, but ffmpeg.c is a swiss army knife
[08:01:55 CET] <furq> you can get the delay lower than two seconds
[08:02:03 CET] <satinder___> ok
[08:02:19 CET] <furq> getting it to within a few frames is very difficult without more control than ffmpeg gives you
[08:02:21 CET] <thebombzen> ffmpeg.c does everything, but for specialized applications that have particular hard-to-meet design requirements, it's not the best tool
[08:02:24 CET] <thebombzen> like a swiss army knife
[08:02:39 CET] <thebombzen> yea, even without using the network layer it's hard
[08:02:46 CET] <furq> don't bring a swiss army knife to a knife fight
[08:02:54 CET] <furq> unless your plan is to surprise your opponent with a romantic bottle of wine
[08:03:18 CET] <furq> and then give him a manicure
[08:03:19 CET] <satinder___> hahahha
[08:03:42 CET] <satinder___> furq : you mean I make a C program and read frames then make packets and transfer over the network
[08:03:49 CET] <thebombzen> something like that
[08:03:53 CET] <furq> yeah
[08:03:56 CET] <thebombzen> even without a network layer it's hard
[08:03:57 CET] <furq> the ffmpeg libs will do most of the heavy lifting
[08:04:00 CET] <thebombzen> I tried this: ffmpeg -probesize 32 -avioflags +direct -fflags +nobuffer -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 -c:v libx264 -preset:v ultrafast -crf:v 18 -tune:v zerolatency -x264-params intra-refresh=yes -avioflags +direct -fflags +nobuffer -f mpegts - | ffplay -probesize 32 -avioflags +direct -fflags +nobuffer -f mpegts -
[08:04:13 CET] <thebombzen> even with a pipe I couldn't get sub 100
[08:04:18 CET] <satinder___> furq : yeah you right
[08:04:24 CET] <furq> yeah i've done some brief testing with x264 over udp on localhost
[08:04:28 CET] <furq> i was getting 2-300ms
[08:05:03 CET] <satinder___> furq : localhost is another concept that is totally different there will be unix sockets
[08:05:06 CET] <furq> and that was just using testsrc and no audio, so no input buffers to worry about
[08:05:13 CET] <satinder___> because of that delay is less
[08:05:16 CET] <furq> yeah i know
[08:05:20 CET] <satinder___> ok
[08:05:27 CET] <furq> that was as low as i could get it with ffmpeg even without capture buffers and networking
[08:05:44 CET] <furq> although i didn't really try that hard
[08:05:46 CET] <thebombzen> furq: I tried testing the input delay with ffplay -f x11grab <options> -i :0.0 and I got really good latency there
[08:05:52 CET] <thebombzen> I find it unlikely that it's an input buffer
[08:05:57 CET] <furq> well it all adds up
[08:06:06 CET] <thebombzen> I also tried using OBS to grab the screen
[08:06:18 CET] <thebombzen> but I got stuck at the fact that it won't let you use anything other than mpeg2video inside mpegts
[08:06:21 CET] <furq> especially if you're syncing two different capture devices
[08:06:39 CET] <thebombzen> OBS is really good at syncing them but it sucks because it doesn't let you use libx264 for mpegts
[08:06:47 CET] <thebombzen> it restricts it to mpeg2video for some really silly reason
[08:06:48 CET] <satinder___> furq : I think ffmpeg have some flags which explained by thebombzen , by using these we can overcome as compare to make a new tool
[08:06:50 CET] <furq> inasmuch as ffmpeg can do that, given that 99% of the conversation in here lately has been about alsa desyncs
[08:07:00 CET] <satinder___> are you agree
[08:07:13 CET] <furq> satinder___: it's worth a try
[08:07:18 CET] <thebombzen> satinder___: no real harm in trying
[08:07:24 CET] <thebombzen> just alsa desyncs are kind of a thing
[08:07:35 CET] <thebombzen> I'd probably not try to grab from two sources with ffmpeg.c and sync them
[08:07:42 CET] <thebombzen> it's not designed to sync live inputs well
[08:07:56 CET] <thebombzen> if both have wallclock timestamps (i.e. already done for you) it's probably the only way to do that
[08:08:16 CET] <furq> if 500ms or so is good enough for you then ffmpeg should be capable
[08:08:25 CET] <furq> if you need one or two frames of latency then good luck
[08:08:31 CET] <satinder___> Sir, I think development and R&D take more time; if we can do something better within ffmpeg, that is good
[08:09:45 CET] <furq> thebombzen: no vbv and slice-max-size?
[08:10:46 CET] <satinder___> thebombzen : what about dranger tutorial
[08:10:56 CET] <satinder___> but that is very old
[08:11:00 CET] <thebombzen> furq: I never remember how to set those and it was just a quick test
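For reference, the VBV and slice-size knobs furq mentions would be set roughly like this in a hedged sketch (all numbers are placeholders, not tested values):

```shell
# Illustrative low-latency x11grab test, not a tuned configuration.
# -maxrate/-bufsize set x264's VBV (caps bitrate over a short window);
# slice-max-size splits frames into slices that can be sent before the
# whole frame finishes encoding.
ffmpeg -f x11grab -framerate 30 -i :0.0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -maxrate 2M -bufsize 200k \
  -x264-params slice-max-size=1200 \
  -f mpegts udp://127.0.0.1:1234
```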
[08:11:04 CET] <furq> it's not that old
[08:11:14 CET] <furq> it's still useful but it uses some deprecated 2.x apis
[08:11:26 CET] <satinder___> yes
[08:11:37 CET] <thebombzen> satinder___: this is true that writing your own program takes time and if it were possible in ffmpeg.c it'd be better. unfortunately I've come to the conclusion that it really isn't
[08:11:40 CET] <furq> looks like there are some forks on github which claim to have updated it
[08:11:57 CET] <thebombzen> I have the same issue where I'm working on a project to transmit low-latency video (sub-100ms)
[08:12:09 CET] <satinder___> thebombzen : I agree
[08:12:23 CET] <thebombzen> so I've done extensive testing on this and I've concluded that it really can't be done with just ffmpeg.c alone
[08:12:37 CET] <thebombzen> you need either a specialized tool (that I have yet to find) or I need to write my own
[08:12:54 CET] <furq> i think the people in here who do that for a living have all written their own
[08:13:06 CET] <furq> but presumably not using libx264 or else they'd be obliged to show you the source ;_;
[08:13:43 CET] <satinder___> Guys, I have an idea, please correct me if I am wrong
[08:14:41 CET] <satinder___> that is for only linux
[08:16:12 CET] <satinder___> We can get frames using the v4l2 api (I'm already doing that side by side), then we encode the frames with an h.264 api, and at the end we make a socket call which transfers the whole data over the network
[08:16:55 CET] <satinder___> I think ffmpeg is a huge framework; if we want just streaming, then we need to make a streamer program
[08:17:03 CET] <satinder___> at a low level
[08:17:45 CET] <satinder___> using alsa, v4l2 and BSD sockets; pthreads would also help
[08:20:04 CET] <satinder___> thebombzen : ??
[08:20:11 CET] <satinder___> furq : ??
[08:20:29 CET] <thebombzen> I've never done that
[08:20:33 CET] <thebombzen> so I can't help you there
[08:20:36 CET] <thebombzen> but it sounds like a plan
[08:20:45 CET] <satinder___> yes
[08:21:02 CET] <satinder___> because ffmpeg is doing that internally
[08:21:23 CET] <satinder___> when it is dealing with video and images
[08:21:42 CET] <magican> furq: Found a command with the almighty google i just tried straight off..  : ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 30 -s 1920x1080 -i :0.0+0,0 -acodec pcm_s16le -vcodec libx264  -preset ultrafast  -threads 4 -y Test.mkv
[08:21:46 CET] <magican> 100% in sync.
[08:21:51 CET] <magican> wonder why though..
[08:22:04 CET] <satinder___> furq : what is your opinion
[08:22:06 CET] <satinder___> ??
[08:35:58 CET] <magican> Btw, anyone of you know why changing resolution in guvcview = crash.. ? Super annoying.
[08:36:08 CET] <magican> 640x480 feels stupid with a 1080p cam.
[08:44:00 CET] <magican> A little question again..  45Mb size for 1:20m screencapture @ 1920x1080. Is that considered normal?  Or can I optimize it a bit more?
[08:45:10 CET] <teratorn> satinder___: could you use streaming that ffmpeg already supports?
[08:45:23 CET] <magican> *.mkv or *.mp4 doesn't make any difference in size though.
[08:45:36 CET] <magican> I'm just thinking, a 20-30 min capture would be xx Gb :P
[08:45:45 CET] <teratorn> satinder___: have ffmpeg output to an rtmp:// url for instance, instead of writing your own socket code?
[08:50:48 CET] <bencoh> magican: well, ultrafast with default quality factor I guess
[08:51:32 CET] <bencoh> meaning it won't try to compress much in order to output encoded frames "fast"
[08:52:17 CET] <magican> Great. I'm happy with the command overall, sound in sync etc.. but just a bit big file. But sure, 1920x1080 takes some room :)
[08:52:27 CET] <magican> So it will be big however I do it.
[08:53:44 CET] <bencoh> you could use a different preset
[08:54:51 CET] <magican> This works fine: ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 30 -s 1920x1080 -i :0.0+0,0 -acodec pcm_s16le -vcodec libx264 -c:a aac  -preset ultrafast  -threads 0 -y Test.mkv
[08:54:57 CET] <magican> So, tip for getting size down?
[08:56:04 CET] <furq> at least use -preset superfast
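Putting bencoh's and furq's points together, a hedged variant of magican's command that trades encode speed for file size (preset and CRF values are illustrative, not from the log):

```shell
# Slower preset + explicit CRF compresses harder than ultrafast defaults.
# -framerate/-video_size are the grab-device spellings of -r/-s.
ffmpeg -f alsa -ac 2 -i pulse \
  -f x11grab -framerate 30 -video_size 1920x1080 -i :0.0+0,0 \
  -c:v libx264 -preset veryfast -crf 23 \
  -c:a aac -y Test.mkv
```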
[09:01:56 CET] <magican> Well, 1920x1080 with sound, 1m 10s = 17.5Mb .. feels OK to me.
[09:02:54 CET] <magican> furq: Also, the command I just found on googling syncs the sound 99.8% perfect, dunno what the differences are, and I don't care that much either :P
[09:05:00 CET] <magican> But it actually looks like mp4 syncs it better than mkv.. (?)
[09:05:29 CET] <furq> mp4 will produce an unplayable file if ffmpeg doesn't exit cleanly
[09:08:24 CET] <furq> bye
[09:47:28 CET] <satinder___> [h264 @ 0x7fb09c0b4180] no frame!
[09:47:28 CET] <satinder___> [h264 @ 0x7fb09c000d20] non-existing PPS 0 referenced    0B f=0/0
[09:47:29 CET] <satinder___> [h264 @ 0x7fb09c0fb280] non-existing PPS 0 referenced
[09:47:29 CET] <satinder___> [h264 @ 0x7fb09c0fb280] decode_slice_header error
[09:47:29 CET] <satinder___> [h264 @ 0x7fb09c0fb280] non-existing PPS 0 referenced
[09:47:29 CET] <satinder___> [h264 @ 0x7fb09c0fb280] decode_slice_header error
[09:48:08 CET] <satinder___> I am getting above errors
[09:49:33 CET] <satinder___> when playing udp stream with ffplay
[09:49:41 CET] <satinder___> my command is following :
[09:49:58 CET] <satinder___> ffmpeg -probesize 32 -thread_queue_size 1024  -f alsa -ac 2 -i pulse -i /dev/video0  -vcodec libx264 -b:v 3M -maxrate:v 3M -minrate:v 2.7M  -pix_fmt yuv420p  -bufsize 2M -muxrate 3M   -tune zerolatency  -f mpegts udp://@227.40.50.60:1234
[09:50:05 CET] <satinder___> please anyone help me
[09:50:11 CET] <satinder___> furq :
[09:50:21 CET] <satinder___> teratorn :
[09:50:31 CET] <satinder___> thebombzen :
[09:51:08 CET] <satinder___> anyone have any idea why I am getting above errors
[09:55:16 CET] <satinder___> durandal_1707 : are you there  ?
[10:05:27 CET] <Elirips> Hello. Was there a change in ffmpeg, if I am extracting frames from a stream like 'ffmpeg -i <stream> -r 1 -an -q:v 5 -updatefirst 1 -y <path>', so that newer ffmpeg will write to <path>.tmp and then copy the file to <path> at the end?
[10:05:57 CET] <Elirips> I have an older program that opens a named pipe as path, like \\.\pipe\foo.jpg, and older ffmpeg streams fine into that pipe
[10:06:13 CET] <Elirips> but newer ffmpeg complains that there is no file named \\.\pipe\foo.jpg.tmp (what is correct)
[10:06:59 CET] <Elirips> so, can I prevent ffmpeg from using that tmp file and make it stream directly into foo.jpg ?
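Assuming the .tmp file comes from the image2 muxer's atomic-write behavior (write to a temporary file, then rename), disabling that option may restore direct writes; this is a guess to verify against `ffmpeg -h muxer=image2` on the build in question:

```shell
# Hedged sketch: assumes this ffmpeg build has the image2 muxer's
# atomic_writing option; -update is the non-deprecated spelling of
# -updatefirst.
ffmpeg -i <stream> -r 1 -an -q:v 5 -update 1 -atomic_writing 0 -y <path>
```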
[10:17:43 CET] <DHE> satinder___: a corrupted stream was received. possibly your system can't keep up with the encoding requirements and data is overflowing the buffer?
[10:18:36 CET] <satinder___> DHE : ok
[10:18:57 CET] <satinder___> what can I do now, any suggestion?
[10:19:11 CET] <DHE> try adding: -preset:v veryfast
[10:19:14 CET] <DHE> see if that helps any
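Folding DHE's suggestion into satinder___'s command, and noting that a receiver joining mid-stream will also print "non-existing PPS" until the first keyframe arrives, a hedged revision (the shorter GOP via -g is an assumption, so parameter sets recur more often; it is not from the log):

```shell
ffmpeg -probesize 32 -thread_queue_size 1024 -f alsa -ac 2 -i pulse \
  -i /dev/video0 \
  -c:v libx264 -preset veryfast -tune zerolatency \
  -b:v 3M -maxrate 3M -bufsize 2M -pix_fmt yuv420p \
  -g 25 \
  -f mpegts udp://227.40.50.60:1234
```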
[10:19:52 CET] <satinder___> ok
[10:19:54 CET] <satinder___> thanks
[10:20:03 CET] <satinder___> I will be comeback soon
[10:20:10 CET] <satinder___> with updates
[10:20:12 CET] <satinder___> :)
[10:21:35 CET] <satinder___> DHE : same errors
[11:13:20 CET] <Wizzup> Surprised to see AMD/ATI are not listed here at all - the radeon driver has been doing VDPAU for years - https://trac.ffmpeg.org/wiki/HWAccelIntro
[11:18:26 CET] <krstjns> howdy, has anyone dealt with deallocating h264_cuvid decoders?
[11:20:03 CET] <krstjns> i can't find a way to properly clean them up; after instantiating and cleaning a few of them up i end up with CUDA_ERROR_OUT_OF_MEMORY
[11:41:55 CET] <BtbN> krstjns, is your VRAM actually full at that point?
[11:44:57 CET] <krstjns> it only reaches 1.5 GB (out of 6)
[11:46:11 CET] <BtbN> I'm not aware of any leaks in the cuvid decoder. You also need to deallocate all the frames it created, and the hwframes context if you set one
[11:47:35 CET] <krstjns> the memory leaks appear only after i destroy a decoder and create a new one
[11:48:06 CET] <krstjns> is there anything special i need to do for destroying the cuvid decoder instances?
[11:48:14 CET] <BtbN> no
[11:51:16 CET] <krstjns> does the cuvid have a hard limit on how much memory it can use? Adding an extra gig of usage would at the very least allow me to finish a run
[11:53:12 CET] <BtbN> None I'm aware of either
[11:53:37 CET] <BtbN> Which cuvid/cuda function throws the out of mem error?
[11:53:49 CET] <krstjns> okay, so with the encoder running i do go over the 1.5 GB limit
[11:54:21 CET] <krstjns> this is the full error: ctx->cvdl->cuvidCreateDecoder(&cudec, &cuinfo) failed -> CUDA_ERROR_OUT_OF_MEMORY: out of memory
[11:54:35 CET] <BtbN> that's not even the point where it allocates a lot of memory
[11:55:12 CET] <BtbN> I have heard reports like that from a lot of users though, and I'm tempted to blame it on the nvidia driver.
[11:59:02 CET] <krstjns> welp, I'll try to keep digging, thanks for the info
[11:59:28 CET] <krstjns> hopefully going back to software decoding won't be the solution i need to use -_-
[12:00:44 CET] <BtbN> Are you using ffmpeg.c, or your own application?
[12:01:57 CET] <krstjns> we are using ffmpeg c++ api in our application
[12:03:20 CET] <krstjns> c*
[12:03:57 CET] <krstjns> sorry, the application it self is c++
[12:03:58 CET] <BtbN> I assume you are not supplying an external hw_frames_ctx?
[12:04:30 CET] <krstjns> we are not
[12:05:54 CET] <BtbN> hm, I'm pretty sure that codepath is leak-free on the cuvid.c side
[12:06:22 CET] <BtbN> Not 100% sure about the external context, as lavc does weird stuff with it sometimes
[12:07:11 CET] <BtbN> Does the no-mem error persist over application restarts?
[12:12:55 CET] <krstjns> no, i can even have multiple instances of some test code running, each giving the error at 1.5 GB (after 20+ decoders have been instantiated and deallocated on each)
[12:25:50 CET] <BtbN> weird
[12:29:56 CET] <Chloe> How can I split a subtitle stream with multiple languages so each language gets its own stream?
[12:30:19 CET] <c_14> subtitle stream with multiple languages?
[12:30:25 CET] <c_14> What subtitle codec supports that?
[12:30:40 CET] <Chloe> dvb_teletext apparently
[12:33:05 CET] <Chloe> I'm attempting to fix dvb_teletext -> dvb_subtitle, but it puts it all into one stream (which isn't supported by decoding).
[12:35:29 CET] <Chloe> It's in 4.2 of the dvb spec. 'A single subtitle stream can carry several different subtitle services.'
[12:37:34 CET] <c_14> I'm not sure that's supported in FFmpeg, you might be able to use the txt_page option of the teletext decoder
[12:37:47 CET] <c_14> But then you'd have to know which pages correspond to which language
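The txt_page approach c_14 describes would look roughly like this (page number and stream index are hypothetical; txt_page/txt_format belong to the libzvbi teletext decoder, so ffmpeg must be built with --enable-libzvbi):

```shell
# Hypothetical page number 778; once you know which teletext page
# carries which language, decode only that page and dump it as text.
ffmpeg -txt_page 778 -txt_format text -i input.ts -map 0:5 swe.srt
```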
[12:39:57 CET] <Chloe> Well there's a notice in FFmpeg which says: 'DVB subtitles with multiple languages is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.' So I know it's not supported, but I was wondering if the 'multiple languages in one stream' thing was
[12:39:57 CET] <Chloe> exclusive to DVB and if not could it be adapted.
[12:45:31 CET] <c_14> I'm not sure the FFmpeg design supports multiple languages in a single stream, it would have to be split into separate streams somehow
[12:47:07 CET] <Chloe> Maybe I'm really confused. 'Stream #0:5[0xc2c](swe,nor,dan,fin): Subtitle: dvb_teletext ([6][0][0][0] / 0x0006), 492x250' is what I think it is, right?
[12:48:16 CET] <sora> hello I could use a little help
[12:50:37 CET] <c_14> Chloe: yes, but afaik FFmpeg doesn't support having a decoder output multiple streams, so there's no way to do what you want currently
[12:50:44 CET] <c_14> sora: ask, and if someone can help you they will
[12:50:57 CET] <sora> I try to convert my pictures to a mp4 using ffmpeg , how can i make it work with my picture sequence being img1_2553.png, img2_9949.png, img3_8419.png ,...
[12:51:20 CET] <c_14> -pattern_type glob -i img*.png
[12:51:31 CET] <c_14> you'll probably have to escape that
[12:51:37 CET] <c_14> otherwise your shell will mess it up
[12:51:43 CET] <sora> I tried img%d_*.png but it doesn't seem to work
[12:52:12 CET] <c_14> you can't mix globs with format sequences
[12:54:42 CET] <sora> just using "-pattern_type glob -i img*.png" fixes it ?
[12:54:52 CET] <c_14> yes
[12:55:35 CET] <sora> ok thanks a lot for your help
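Putting c_14's pieces together, a hedged complete invocation (framerate and output settings are assumptions; the quotes keep the shell from expanding the glob, as c_14 warns):

```shell
ffmpeg -framerate 25 -pattern_type glob -i 'img*.png' \
  -c:v libx264 -pix_fmt yuv420p out.mp4
```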
[14:17:15 CET] <Zabuldon> Hi guys, can anybody help me with filter_complex? I have one bg image and 8 input streams and need to position the streams over the background image. I have solved it, but my solution is probably not good; it also looks like it works one by one.... Is it possible to make the videos synchronous?
[14:17:26 CET] <Zabuldon> my command looks like:
[14:17:27 CET] <Zabuldon> ffmpeg -thread_queue_size 1024 -loop 1 -f image2 -i /root/mafia.png -thread_queue_size 512 -i rtmp://localhost/cam1_s/mystream -thread_queue_size 512 -i rtmp://localhost/cam2_s/mystream -thread_queue_size 512 -i rtmp://localhost/cam3_s/mystream -thread_queue_size 512 -i rtmp://localhost/cam4_s/mystream -thread_queue_size 512 -i rtmp://localhost/cam5_s/mystream -thread_queue_size 512 -i rtmp://loca
[14:18:19 CET] <Zabuldon> oh
[14:18:21 CET] <Zabuldon> http://pastebin.com/Uw1ZsaxE
[14:18:24 CET] <Zabuldon> there is full command
[14:24:15 CET] <timothy> hi, is there any way to enable h264_qsv as encoder?
[14:24:26 CET] <timothy> using --enable-libmfx I can only see it as decoder
[14:28:43 CET] <jkqxz> The encoder and decoder should have identical dependencies.
[14:28:53 CET] <jkqxz> Put your config.log somewhere and link to it?
[14:30:55 CET] <Zabuldon> Guys, please help me :(
[14:31:01 CET] <Zabuldon> its really hard for me
[15:38:06 CET] <timothy> jkqxz: I solved by using h264_vaapi instead :)
[15:42:46 CET] <durandal_1707> Zabuldon: how are videos not synchronous?
[15:44:30 CET] <Zabuldon> durandal_1707: I have started 8 streams from my machine (all with my web cam) and when I watch the resulting video it looks like: cam 1 does something; 2 sec pass; cam 2 does something; 2 sec; etc.
[15:45:16 CET] <durandal_1707> what?
[15:45:17 CET] <Zabuldon> I can share live example
[15:46:12 CET] <durandal_1707> Zabuldon: does it stall?
[15:47:10 CET] <Zabuldon> sent rtmp link
[15:50:10 CET] <durandal_1707> does it pause every few seconds?
[15:50:35 CET] <Zabuldon> nope
[15:51:02 CET] <Zabuldon> but looks like each video has delay
[15:51:48 CET] <durandal_1707> are they of same fps and timebase?
[15:51:52 CET] <Zabuldon> yes
[15:56:31 CET] <durandal_1707> Zabuldon: each overlay gives extra delay?
[15:56:38 CET] <Zabuldon> yes
[15:56:44 CET] <Zabuldon> looks like it
[15:58:12 CET] <durandal_1707> and cpu is under 100%?
[15:58:42 CET] <Zabuldon> nope
[15:58:48 CET] <Zabuldon> I have 8 cores CPU
[15:59:07 CET] <Zabuldon> total cpu usage 40%
[16:10:52 CET] <faLUCE> hello. Is AVFrame part of libavutil? Or does it belong to libavformat, libavcodec or something else?
[16:11:23 CET] <Chris0> hello, I have an error during install I've never seen. Can you help me to fix this?
[16:11:39 CET] <DHE> faLUCE: it's in libavutil/frame.h, sure. but it's used all over the place. codecs and filters most significantly.
[16:12:07 CET] <faLUCE> DHE: yeah, sorry for the stupid question
[16:12:07 CET] <DHE> Chris0: state your problem clearly and concisely, don't wait for one-on-one help
[16:12:38 CET] <Chris0> here is the command and log: http://pastebin.com/9gTVey6d
[16:13:11 CET] <faLUCE> in order to use ONLY an AVFrame, do I have to alloc some global stuff, like av_register_all(); ?
[16:13:50 CET] <faLUCE> or must av_register_all() be called before doing coding/decoding/muxing stuff?
[16:14:00 CET] <DHE> faLUCE: well, no I don't think so. but what good is an AVFrame by itself with no filters or codecs or containers supported?
[16:14:15 CET] <faLUCE> DHE: I have to wrap it into another library
[16:14:30 CET] <faLUCE> and I have to decide when to call av_register_all
[16:15:26 CET] <faLUCE> then I wonder if I have to call av_register_all at the first coder/decoder/scaler/muxer instance, or whether I have to call it even when an AVFrame is created
[16:15:26 CET] <DHE> it's safe to call more than once, as long as it's not called in a different thread than another caller
[16:16:07 CET] <faLUCE> DHE: ok, I can call it at each coder/decoder/muxer/scaler instance, but I wonder if it's necessary to call it for an AVFrame too
[16:16:11 CET] <Chris0> I wanted to reinstall ffmpeg with pic to complete the update of vapoursynth. Without it I can't reinstall VS
[16:18:06 CET] <coffee`> hey there, I'm trying to encode a apng file
[16:18:21 CET] <coffee`> % ffmpeg -loop 1 -i .\left_side_clock_member_%01d.png -f apng -ignore_loop 1 -default_fps 1 -y my_apng.png
[16:18:34 CET] <coffee`> ^ I have this command but it doesn't seem to work
[16:19:07 CET] <faLUCE> DHE: the answer should be, as far as I see, "call av_register_all" before using any libavformat stuff
[16:20:14 CET] <faLUCE> but I'm not sure
[16:20:36 CET] <DHE> or any avcodec stuff, since it registers those as well
[16:21:09 CET] <DHE> so, if you're not using any of those, then go ahead and just make use of AVFrame. it's just that from my standpoint I don't see what benefit you could have unless you are implementing your own filtering or encoding
[16:21:54 CET] <faLUCE> DHE: I'm using all of them, but I separated the files in my wrapper, so I have to decide in which file to put av_register_all()
[16:22:17 CET] <faLUCE> I'll put it in any avcodec and avformat stuff
[16:23:43 CET] <faLUCE> DHE: does it need to be called before allocating swscale contexts too?
[16:24:05 CET] <coffee`> I found the answer, thanks to % ffmpeg -h muxer=apng
[16:24:33 CET] <DHE> pretty sure not. it literally just registers the list of codecs and formats into the av_find_* function lists
[16:50:35 CET] <faLUCE> thanks DHE
[17:00:45 CET] <Zabuldon> Guys, in case I have 9 inputs and one of them is not available, is it possible to replace it with a PNG, and swap it back in once the stream is live again?
[17:01:16 CET] <JEEB> you will have to make that with the API
[17:01:23 CET] <JEEB> ffmpeg.c is not that agile
[17:01:31 CET] <JEEB> (or you make ffmpeg.c agile)
[17:02:07 CET] <Zabuldon> I'm using just ffmpeg utility.
[17:05:24 CET] <BtbN> Then it's not possible, no.
[17:06:10 CET] <Zabuldon> ok. thanks
[17:07:37 CET] <Chris0> I can't reinstall ffmpeg. Any idea what this error is? http://pastebin.com/9gTVey6d
[17:07:59 CET] <Chris0> *compile
[17:08:12 CET] <BtbN> someone messed up the docs
[17:09:05 CET] <Chris0> @BtbN any way to bypass this during compiling?
[17:09:14 CET] <BtbN> disable the docs
[17:09:25 CET] <Chris0> Not sure how to do that
[17:09:33 CET] <Chris0> --disable-doc ?
[17:09:37 CET] <atomnuker> do a git pull, I fixed it 10 minutes ago
[17:09:46 CET] <Chris0> OK nice :)
[17:10:22 CET] <Chris0> If any other error I'll come back
[18:35:16 CET] <vans163> anyone know if using a dynamic slice mode where the encoded picture gets divided into slices would lower decode latency?
[18:35:27 CET] <vans163> section 3.5.8  https://developer.nvidia.com/sites/default/files/akamai/designworks/docs/NVIDIA_Capture_SDK_6/NVIDIA%20Capture%20SDK%20Programming%20Guide.pdf
[19:18:36 CET] <LetterRip> hi all, what would be commandline settings to pass through video, while reencoding to 32 bit mono audio?
[19:20:54 CET] <LetterRip> the file is m4v container, video and audio are default handbrake for their 'Apple 240p' preset
[19:29:06 CET] <thebombzen> LetterRip: if you want to pass the video through, use -c:v copy
[19:29:16 CET] <thebombzen> as for the audio, depends on the codec
[19:31:28 CET] <LetterRip> AAC (avcodec)
[19:31:37 CET] <LetterRip> thebombzen: thanks
[19:57:45 CET] <popara> is there any ffmpeg command to remove all codecs automatically that are not supported in the target container?
[19:57:49 CET] <popara> or i have to do it manually
[19:58:05 CET] <popara> i want to use -map 0 , but strip all unsupported codecs
[20:13:26 CET] <llogan> you'll probably have to do it manually
[20:45:33 CET] <acamargo> hello there. is there some formula to calculate the buffer_size and fifo_size params for input UDPs?
[20:47:42 CET] <acamargo> I read on several sites that people used to set fifo_size=1000000 and buffer_size=10000000. but isn't there a formula for those values?
[20:50:40 CET] <DHE> buffer_size is the kernel buffer, while the fifo_size is an in-ffmpeg buffer. usually inputs are handled by a distinct thread so the kernel buffer size at default is fine unless your source has a bad case of the crazies
[20:57:33 CET] <acamargo> I have a process that captures from decklink and outputs an mpegts multicast udp, while other ffmpeg processes take it as udp input and output to streaming services. with a low buffer_size/fifo_size the second output presents video glitches. when I increased the buffers the problem went away. but I wanted to know the "right" value for the buffers
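From DHE's description, a rule of thumb (not an exact formula) is to size the buffers to hold some number of seconds of the stream: buffer_size is in bytes, while fifo_size is documented in units of 188-byte TS packets. A sketch of the arithmetic, with an assumed 16 Mb/s stream and 5-second window:

```shell
# Rule-of-thumb sizing, not from the log: bytes of stream per window,
# then the same amount expressed in 188-byte TS packets for fifo_size.
bitrate=16000000   # bits per second (assumed example)
seconds=5          # how much stream the buffer should absorb
bytes=$(( bitrate / 8 * seconds ))
fifo=$(( bytes / 188 ))
echo "buffer_size=$bytes fifo_size=$fifo"
```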
[20:59:15 CET] <mete> I try to cut a TS stream I recorded earlier, but at the beginning I always get a chop after about 0.5s. here is the command I used and also the output of ffmpeg command line: http://pastebin.com/7jPn5cnG  any ideas what could be the issue?
[20:59:28 CET] <acamargo> well, gotta go. thanks for your time DHE
[21:04:16 CET] <llogan> mete: cutting while stream copying is not guaranteed to be frame accurate with non-intra frame formats
[21:05:26 CET] <llogan> you could do "-map 0 -c copy" to copy everything instead of: A) relying on the default stream selection behavior, and B) having to provide a copy per stream.
[21:06:54 CET] <llogan> ...not that it will resolve your accuracy issue. for that you would need to re-encode.
[21:07:55 CET] <mete> hm, I don't have to cut very accurately... re-encoding always has the downside of quality loss, that's why I don't want to re-encode
[21:08:25 CET] <llogan> yeah, and it takes much longer
[21:08:54 CET] <mete> cpu time is no problem.. it's a high power system and the videos are only ~1hour long
[21:09:47 CET] <mete> I'm trying with a much shorter one currently, of course ;)
[21:13:16 CET] <mete> would it help if I demux the streams before cutting?
[21:16:32 CET] <llogan> you could list keyframes and choose to cut on those which are closest to your desired cut times
[21:17:16 CET] <llogan> a somewhat sloppy example: ffprobe -v error input.ts -select_streams v -show_entries frame=key_frame,pkt_pts_time -of csv=nk=1:p=0 | grep "1,"
[21:20:53 CET] <mete> ok, I get 1,36.780000 as output for example, so at 36.78 seconds there is a key frame, have I understood that correctly?
[21:22:44 CET] <llogan> should be
[21:22:56 CET] <mete> ok, I will check that, thank you llogan :)
[21:26:11 CET] <llogan> if the output plays badly you can see if using -ss as an input option makes any difference.
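Assuming the 36.78 s keyframe from the ffprobe listing, a stream-copy cut aligned to it might look like this (duration is a placeholder; with -c copy the cut is only clean because -ss lands on a keyframe, which is the point of the ffprobe step):

```shell
# -ss before -i seeks by keyframe; -map 0 -c copy keeps every stream
# without re-encoding. -t 600 (10 min) is an illustrative duration.
ffmpeg -ss 36.78 -i input.ts -map 0 -c copy -t 600 out.ts
```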
[21:34:36 CET] <mete> ok, I will try out the options, thank you
[21:56:40 CET] <nohop> hey guys. I downloaded this "decoding_encoding.c" piece of libavcodec API example code and I'm noticing that the AV_CODEC_ID_H264 codec id uses multiple encoder threads but the AV_CODEC_ID_MPEG4 codec id doesn't. Is there a list somewhere of implemented codecs that take advantage of multiple threads/cores?
[21:57:34 CET] <nohop> (we're doing a project at work that involves compressing an enormously ridiculous real-time video stream. We really need the speed here :) )
[22:01:10 CET] <kepstin> nohop: well, those codec IDs talk about the name of the codec, but not actually which implementation you're using...
[22:01:28 CET] <kepstin> so e.g. libx264, the external h264 encoder library that's normally picked by default is multithreaded
[22:01:39 CET] <nohop> oh!  I see
[22:02:05 CET] <kepstin> most of ffmpeg's internal mpeg-style encoders (including the defaults for mpeg2, 4) aren't multithreaded
[22:02:21 CET] <nohop> sorry, I'm a noob with ffmpeg. We were using opencv before, but it looks like its VideoWriter is not going to cut it :)
[22:02:52 CET] <nohop> ah, alright. I'll have to do some searching then, I guess...
[22:03:06 CET] <kepstin> if you need realtime encoding and compression efficiency isn't as important, you might want to look at using a hardware encoder (e.g. nvidia nvenc or intel qsv)
[22:03:25 CET] <kepstin> particularly if your cpu is busy generating the video stream already :)
[22:03:26 CET] <nohop> yeah, that might be an option
[22:04:05 CET] <nohop> meh... the generating is relatively CPU intensive, but not nearly as much as the compression
[22:04:18 CET] <kepstin> but if you've got cpu available, libx264 encoder can probably be tuned to meet your needs, it's a good all-around choice.
[22:04:47 CET] <nohop> generation is mostly grabbing frames from a (tcp) socket and rotating them (by multiples of 90 degrees), so...
[22:04:54 CET] <kepstin> start by using the "preset" option to set the speed tuning.
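kepstin's advice boils down to something like the following hedged sketch (input format, resolution, and preset are all assumptions for illustration; nohop's real input comes from a tcp socket):

```shell
# Assumed: raw yuv420p frames arrive on stdin. The preset picks the
# speed/compression tradeoff; faster presets encode quicker but larger.
ffmpeg -f rawvideo -pixel_format yuv420p -video_size 1920x1080 \
       -framerate 60 -i - \
       -c:v libx264 -preset veryfast out.mkv
```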
[22:05:09 CET] <nohop> ok. I'll talk to my colleagues about using H264 then
[22:05:32 CET] <kepstin> do you have any specific needs, e.g. players supported, lossless encoding?
[22:05:42 CET] <nohop> is that available under windows as well?
[22:06:12 CET] <nohop> Well, 'they' were asking for lossless encoding, but I don't think that's going to work with the enormous amount of data we'll be working with
[22:06:42 CET] <kepstin> most ffmpeg builds for windows will include libx264, and it will be used as the default h264 encoder. x264 supports lossless compression, but if you need lossless, there might be better options depending on video content and cpu req.
[22:06:47 CET] <nohop> (~450 MB/s of raw or so... )
[22:07:10 CET] <nohop> alright, thanks!
[22:07:23 CET] <kepstin> you should give ffv1 a try as a lossless codec too, it's builtin to ffmpeg
[22:07:45 CET] <nohop> yeah, i've noticed that one
[22:10:51 CET] <kerio> ffv1 is way less efficient than lossless h264
[22:11:05 CET] <kerio> it's just standard
[22:11:25 CET] <kerio> whereas nothing but libx264 will read lossless h.264
[22:11:45 CET] <kepstin> kerio: no idea what you're talking about, libx264 isn't a decoder, so it can't read anything :)
[22:11:47 CET] <kerio> nohop: if you're mostly interested in encoding speed, give ffvhuff a try
[22:11:54 CET] <kerio> kepstin: oh hm
[22:12:03 CET] <kepstin> ffmpeg's native h264 decoder can read lossless h264, but you're right that few other things can
[22:12:15 CET] <kerio> ye ok that
[22:12:21 CET] <faLUCE> why do video programs generally not allow changing v4l2 parameters, like format, resolution etc., on the fly?
[22:13:01 CET] <xtina> hey guys, it's me again :) i have a q about syncing audio + video
[22:13:17 CET] <xtina> i streamed on youtube this morning and my audio is perfect, but my video is skipping forward
[22:13:20 CET] <xtina> here's the stream: https://www.youtube.com/watch?v=Bz12pDbay0U
[22:14:02 CET] <xtina> how do i get audio and video in sync? my command: http://pastebin.com/K1xjS9QX
[22:14:48 CET] <nohop> kerio: Thanks again. I've looked into ffvhuff, and that got a little too crazy, data-size wise. :) But yeah, h264 seems like a really good option. I'm going to suggest it
[22:14:53 CET] <xtina> i'm counting off to a timer, theoretically my voice and the screen timer should be perfectly in sync
[22:16:12 CET] <llogan> is it in sync if you output to a local file? i'm wondering if your upload speed can handle the (probably relatively) high bitrate from raspivid.
[22:16:33 CET] <llogan> also, use -framerate instead of -r for raw h264 demuxer
[22:17:26 CET] <xtina> llogan: should i just try replacing -f flv rtmp:blah with -f flv test.mp4 or something?
[22:18:28 CET] <llogan> just "output.flv"
[22:19:06 CET] <llogan> out of curiosity, what is this project for?
[22:21:17 CET] <xtina> llogan: i'm trying your suggestion now. i'm trying to build a livestreaming necklace
[22:21:28 CET] <xtina> a robust prototype, if possible
[22:21:39 CET] <llogan> no wonder you ignored suggestions to use a laptop
[22:21:53 CET] <xtina> my wifi module is theoretically capable of 45mbps upload
[22:22:12 CET] <xtina> yeah :)
[22:22:18 CET] <xtina> i'm very space constrained
[22:22:43 CET] <llogan> got to go
[22:28:30 CET] <xtina> so when i write to -f flv test.flv, the audio and video are perfectly in sync
[22:29:42 CET] <xtina> i just tested the internet speed on my Pi with speedtest-cli
[22:29:51 CET] <xtina> it's 7mbps Down and 4.5 mbps Up
[22:30:23 CET] <xtina> i understand if the alignment is not *perfect* when streaming, but how can i make the video and audio both *attempt* to sync up to the same timestamp?
[22:30:32 CET] <xtina> right now, the video is doing stuff like skipping 10 seconds ahead
[22:30:50 CET] <xtina> if my upload speed is 5mbps, it seems like i could do better than that
[22:31:25 CET] <xtina> here's the example stream i recorded with video skipping: https://www.youtube.com/watch?v=Bz12pDbay0U
[22:32:39 CET] <xtina> can i use async and vsync to do this?
[22:36:08 CET] <xtina> kerio: please, do you have any suggestions?
[23:00:34 CET] <xtina> hmm.. anyone?
[23:31:47 CET] <xtina> furq: sorry to bug, are you around?
[23:59:44 CET] <Threadnaught> It's me again with the weird mystery problems with ffmpeg. Trying to take a series of png frames and combine them with an mp3 to make a video, and it always either comes out as a black screen with 2 minutes of audio, or just the frames. I don't have -shortest specified either
[00:00:00 CET] --- Thu Feb  9 2017

