[Ffmpeg-devel-irc] ffmpeg.log.20151208

burek burek021 at gmail.com
Wed Dec 9 02:05:01 CET 2015


[00:05:12 CET] <satt> hey guys, I'm scaling and padding inputs and the result is not centered.  I thought pad auto-centered if you didn't specify an x/y - I guess I was wrong?
[00:06:25 CET] <satt> here's a line of the filter_complex
[00:06:27 CET] <satt> [0:v]scale=if(gt(ih\,iw)\,-2\,320):if(gt(ih\,iw)\,480\,-2),pad=320:480,setsar=sar=1/1[v0];
[00:07:08 CET] <satt> to make sure it's centered, I'd have to add an x/y to the pad argument, right?
[00:08:20 CET] <llogan> (ow-iw)/2
[00:37:50 CET] <satt> llogan: thanks - so just pad=320:480:(ow-iw)/2:(oh-ih)/2, right?
[00:40:54 CET] <llogan> satt: yes
[00:41:15 CET] <satt> awesome, thanks for your help!
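
A minimal sketch of the command as it ended up, assuming the same 320x480 target (input/output names are placeholders; the scale expressions are quoted here instead of backslash-escaped, which is equivalent):

    ffmpeg -i input.mp4 -filter_complex \
        "[0:v]scale='if(gt(ih,iw),-2,320)':'if(gt(ih,iw),480,-2)',pad=320:480:(ow-iw)/2:(oh-ih)/2,setsar=1[v0]" \
        -map "[v0]" -map 0:a? -c:a copy output.mp4
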
[04:06:41 CET] <shivaya> hello, i am trying to use ffmpeg with some embedded linux inside an ip camera
[04:06:59 CET] <shivaya> the hw is very limited, the whole os runs from 40mb of ram
[04:07:18 CET] <shivaya> is there a way to build a bare-bones ffmpeg?
[04:07:59 CET] <shivaya> it should be as small as possible
[04:12:15 CET] <c_14> It's possible
[04:12:41 CET] <c_14> --disable-everything --disable-all
[04:12:44 CET] <c_14> Then enable whatever you need
[04:12:59 CET] <c_14> (probably at least avformat and avcodec)
[04:13:09 CET] <c_14> Are you using the libraries or the commandline?
[04:16:27 CET] <c_14> And since this is an ip cam, you'll probably also want avdevice
[04:16:43 CET] <c_14> Then it's just a matter of finding all the codecs and muxers/demuxers you need
[04:26:25 CET] <shivaya> command line
[04:26:36 CET] <shivaya> sounds like I will be having some fun :)
[04:26:41 CET] <shivaya> thanks
[04:26:55 CET] <c_14> Then you'll also want --enable-ffmpeg --enable-avfilter --enable-swscale and --enable-swresample
[04:27:01 CET] <c_14> --enable-small will probably also be good
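
A hedged sketch of what such a minimal configure line might look like, following the flags suggested above. The specific decoder, demuxer, muxer, and protocol names are placeholders to be adjusted to the camera's actual formats, and a cross-compilation toolchain would normally be specified as well:

    ./configure --disable-everything --disable-all \
        --enable-small --enable-ffmpeg \
        --enable-avcodec --enable-avformat --enable-avdevice \
        --enable-avfilter --enable-swscale --enable-swresample \
        --enable-decoder=h264 --enable-demuxer=mpegts \
        --enable-muxer=mp4 \
        --enable-protocol=file --enable-protocol=tcp
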
[06:42:08 CET] <pinPoint> where is the libaac github update page again?
[08:51:04 CET] <k_sze> With .nut containers and H.264 and FFV1 codecs, is it possible to quickly get a list of keyframes using one of the ffmpeg command line tools? I know ffprobe can output lines like "key_frame=1", but it seems to actually read the whole video. Are keyframes indexed at the end of the .nut container?
[09:18:47 CET] <k_sze> Do FFV1 and H.264 use keyframes drastically differently?
[09:19:38 CET] <k_sze> If I seek to a keyframe in a H.264 video, and then get X frames forward, the time increases linearly with X (until the next keyframe).
[09:20:11 CET] <waressearcher2> k_sze: hello
[09:20:38 CET] <k_sze> But with a FFV1 video, I seek to a keyframe, and then get X frames forward, the time does not increase linearly with X. keyframe+1 and keyframe+10 essentially use the same time.
[09:21:45 CET] <pzich> what version do you have, and what are you using to seek?
[09:25:17 CET] <k_sze> 2.5.3. And we are using the PyAV wrapper, which actually uses av_seek_frame
[09:57:21 CET] <k_sze> Also, I'm reading this: https://trac.ffmpeg.org/wiki/Encode/FFV1
[09:57:51 CET] <k_sze> Why does it say "For archival use, GOP-size should be "1"."?
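
For reference, one way to list keyframes with ffprobe, sketched below. Note that this still scans the packets, so it does not avoid the full read asked about (input.nut is a placeholder; keyframe packets are marked with a K in the flags column):

    ffprobe -v error -select_streams v:0 \
        -show_entries packet=pts_time,flags \
        -of csv=p=0 input.nut | grep K
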
[09:58:40 CET] <maco> hi everyone
[09:58:42 CET] <maco> Hi, I've got ffmpeg running on my Pi. It was working fine, but suddenly it stopped working and gives me an Illegal instruction error when executing this command: ffserver -f /etc/ffserver.conf & ffmpeg -v verbose -r 5 -s 640x480 -f video4linux2 -i /dev/video0 http://localhost/webcam.ffm
[10:25:23 CET] <DMJC> I need assistance porting from the old libavformat api to the new one, can someone point me to a porting guide?
[10:25:48 CET] <DMJC> the codebase is still using av_register_protocol2
[10:49:13 CET] <JEEB> DMJC: see the examples in the source code repo
[10:49:23 CET] <JEEB> those should tell you about the new API usage
[10:49:35 CET] <JEEB> the APIchanges file should list the API changes, but it's simpler to just look at the examples
[10:58:36 CET] <t4nk722> I want to add a CREDITS screen at the end of my video & I am using the 'drawtext' filter of ffmpeg. It loads the dynamically created credits text from a text file (.txt). I would like to know if there is any way to format the credits text? I can place the text at a specific (x,y) position, but if the text content is large, the text becomes misaligned.
[10:59:45 CET] <JEEB> use ASS rendering with libass for that
[10:59:55 CET] <JEEB> create ASS subtitles with Aegisub
[11:02:56 CET] <t4nk722> Can I set the (x,y) position coordinates if I use ASS? I need the text to be centered in my video.
[11:03:17 CET] <JEEB> yes, there's a whole lot of ways you can do that
[11:03:31 CET] <JEEB> just look at the ASS override tag documentation on aegisub's site
[11:09:38 CET] <t4nk722> Ok, I'll do that. Thanks for the help. :)
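
A minimal sketch of burning in an ASS credits file with the ass filter (requires an ffmpeg built with --enable-libass; credits.ass is a placeholder file made in Aegisub, where override tags such as \an5 or \pos(x,y) control the placement):

    ffmpeg -i input.mp4 -vf "ass=credits.ass" -c:a copy output.mp4
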
[11:10:01 CET] <pkeuter> i have asked this before but i haven't yet found a good way to go about this. i want to play a file that is currently being recorded in the browser's video element. it's an h264-encoded file. i use -movflags frag_keyframe+empty_moov, but the browser wants to fully load the file before it starts playing. is there any workaround that you know of?
[11:12:36 CET] <JEEB> no
[11:12:48 CET] <JEEB> if your thing doesn't support fragments then you're out of luck
[11:13:04 CET] <JEEB> although I'm not sure if those are the correct flags for fragments
[11:17:02 CET] <pkeuter> according to the internets they are the correct flags
[11:17:06 CET] <pkeuter> :)
[11:17:22 CET] <pkeuter> but it's chrome... it does support fragments, right?
[11:18:01 CET] <JEEB> no idea
[11:18:01 CET] <Mavrik> Chrome probably wants DASH for that kind of streaming :/
[11:18:15 CET] <JEEB> wouldn't expect it to support fragments since so few things do :P
[11:18:25 CET] <JEEB> and for DASH you have to use MSE
[11:18:36 CET] <JEEB> because of course only MS Edge supports mpd in video tags :)
[11:18:46 CET] <JEEB> (together with HLS m3u8, funny enough)
[11:19:35 CET] <pkeuter> god
[11:19:49 CET] <pkeuter> that kind of sucks
[11:20:33 CET] <pkeuter> so there is not a way to record and play at the same time?
[11:20:44 CET] <pkeuter> because vlc and wmp and all those players work just fine
[11:26:22 CET] <BtbN> pkeuter, don't record to a static file, but to non-live HLS/DASH instead.
[11:26:39 CET] <BtbN> You can play that in a Browser, and just remux it to a static file once done.
[11:27:38 CET] <furq> pkeuter: https://github.com/dailymotion/hls.js/
[11:27:41 CET] <pkeuter> BtbN, nice. How would i go with that?
[11:27:44 CET] <furq> you'll need that to play hls in a desktop browser
[11:27:54 CET] <BtbN> or dash.js for DASH
[11:28:00 CET] <furq> there is something similar for dash but i've never managed to get it working
[11:28:07 CET] <BtbN> I'd prefer DASH, as it doesn't involve scary remuxing-in-js
[11:28:30 CET] <BtbN> HLS is much easier to handle for everything else though
[11:28:38 CET] <BtbN> you can just get one big file via cat
[11:29:28 CET] <furq> also you need to use a short gop length for dash to work well
[11:29:40 CET] <furq> so if you just want to stream whatever you're encoding then hls is probably a better choice
[11:29:58 CET] <BtbN> same for HLS
[11:30:08 CET] <BtbN> It doesn't really matter for a non-live list though
[11:30:13 CET] <furq> i've had no issues with that over hls
[11:30:15 CET] <BtbN> segments can just be 20s long
[11:30:32 CET] <BtbN> You can't make segments shorter than your gop, that just doesn't work, for both
[11:30:49 CET] <furq> that works fine for me with hls
[11:31:02 CET] <furq> i wasn't splitting with ffmpeg though
[11:31:34 CET] <BtbN> It can't, you can only split at an I/IDR frame.
[11:32:07 CET] <BtbN> The first frame of every ts or mp4 segment has to be independently decodable
[11:32:25 CET] <pkeuter> uhmmmmm huh
[11:32:37 CET] <pkeuter> so do i record with ffmpeg to hls?
[11:32:43 CET] <BtbN> or dash
[11:32:47 CET] <pkeuter> or dash
[11:32:53 CET] <furq> yeah
[11:32:58 CET] <furq> then remux the fragments into your final file
[11:33:15 CET] <pkeuter> man, i really don't have any idea where to start with this
[11:33:25 CET] <BtbN> For hls, you don't need to remux at all, if you're fine with .ts
[11:33:46 CET] <BtbN> Well, you can do a cat-style concat for dash, too. But it's more complex.
[11:34:01 CET] <furq> afaik you just encode as you normally would but use out.m3u8 as your output file
[11:34:13 CET] <BtbN> Make sure to tell ffmpeg that it's not a live playlist though
[11:34:26 CET] <furq> then point the video tag to /output-dir/out.m3u8
[11:34:27 CET] <BtbN> otherwise players will seek to the end
[11:34:46 CET] <furq> or whatever the dash equivalent is
[11:34:48 CET] <pkeuter> so wait, can we go through this step by step?
[11:35:23 CET] <pkeuter> i got this decklink input. i want to record it to h264, for storage purposes. in the meantime i also want to show it in the browser
[11:35:46 CET] <pkeuter> so what i do is start an ffmpeg instance with the decklink input as source and then still encode it to h264?
[11:36:42 CET] <pkeuter> sorry if i am asking dumb questions
[11:36:55 CET] <pkeuter> i just don't know enough about this
[11:37:40 CET] <furq> yes
[11:38:18 CET] <pkeuter> but then how do i make a non-live hls or dash stream?
[11:38:27 CET] <pkeuter> i'm serving it with node.js
[11:39:29 CET] <furq> afaik the default is non-live
[11:40:27 CET] <pkeuter> okay, so i'm just using m3u8 as the output file
[11:40:30 CET] <furq> i think non-live will always play from the beginning of the file though
[11:40:36 CET] <furq> so you might want to use the tee muxer
[11:40:44 CET] <pkeuter> and then at the end i will remux it to an h264 file?
[11:40:53 CET] <furq> that's one way to do it
[11:41:15 CET] <BtbN> Why would you want a raw h264 file?
[11:41:21 CET] <furq> https://www.ffmpeg.org/ffmpeg-formats.html#hls-1
[11:41:36 CET] <pkeuter> it's tv so someone needs to be able to edit it afterwards
[11:42:02 CET] <BtbN> raw h264 is horrible for that, why would you need that?
[11:42:08 CET] <BtbN> ts is just fine
[11:42:12 CET] <furq> oh actually i guess the hls muxer is live by default
[11:42:19 CET] <furq> -hls_list_size 0 for non-live
[11:42:31 CET] <pkeuter> ts is fine, but what is the difference? it's just another container, right?
[11:42:42 CET] <BtbN> It's a container at all, unlike raw h264.
[11:42:54 CET] <BtbN> "hls_flags single_file" also looks useful.
[11:42:56 CET] <pkeuter> huh?
[11:42:57 CET] <furq> depends how much you like having audio
[11:43:08 CET] <pkeuter> i pretty much like having audio
[11:44:00 CET] <pkeuter> but i still don't get it... what does hls have to do with ts?
[11:44:16 CET] <furq> hls creates a bunch of .ts segments
[11:44:30 CET] <BtbN> Or one big file.
[11:44:34 CET] <furq> once you're done you can remux it to whatever you want
[11:44:44 CET] <BtbN> No need to remux them
[11:44:47 CET] <BtbN> just cat them together
[11:44:57 CET] <furq> or you can use the tee muxer and save to a file at the same time
[11:45:10 CET] <BtbN> There's not really a point to that.
[11:45:30 CET] <furq> won't a non-live hls stream always play from the beginning of the recording?
[11:45:46 CET] <furq> that's usually not desirable, although i didn't get an answer for that
[11:46:01 CET] <pkeuter> it's fine that it starts at the beginning, as long as the user can seek
[11:46:25 CET] <furq> you'll only be able to seek through loaded segments
[11:46:34 CET] <BtbN> no.
[11:46:44 CET] <BtbN> The playlist contains all of them.
[11:46:52 CET] <BtbN> And contains all meta information needed for seeking.
[11:47:23 CET] <furq> never mind then
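
Pulling the thread together, a hedged sketch of the kind of command being discussed: encode the decklink input to H.264 once and write a non-live HLS playlist that a browser (via hls.js) can play while recording continues. The device name, GOP length, and encoder settings here are illustrative only, not taken from the discussion; older builds may also need -strict experimental for the native aac encoder:

    ffmpeg -f decklink -i 'DeckLink Mini Recorder@11' \
        -c:v libx264 -preset veryfast -g 50 \
        -c:a aac -b:a 128k \
        -f hls -hls_time 4 -hls_list_size 0 live/out.m3u8

The tee muxer mentioned above could additionally write a second copy to a single file in the same run, at the cost of a more involved output specification.
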
[11:49:22 CET] <pkeuter> lets see
[11:51:31 CET] <furq> 10:31:34 ( BtbN) It can't, you can only split at an I/IDR frame.
[11:51:48 CET] <furq> fwiw i just checked this and apparently nginx-rtmp is just ignoring the fragment size i specified and using the gop length of the source
[11:51:51 CET] <furq> so that's nice to know
[11:52:16 CET] <BtbN> it chooses the closest multiple of the gop length to your desired fragment size
[11:52:22 CET] <furq> that certainly explains why the fragment size was having no effect on latency
[11:52:22 CET] <BtbN> As that's the only option it has
[11:53:52 CET] <furq> i guess it doesn't do that with dash, which explains why this source was unwatchable over dash
[11:54:36 CET] <pkeuter> ok so unfortunately hls_flags single_file doesn't seem to work
[11:54:41 CET] <pkeuter> well, it records to one file
[11:54:54 CET] <pkeuter> so that's fine, but it doesn't play
[11:55:28 CET] <pkeuter> another question: can i somehow share the decklink input between two ffmpeg instances?
[11:56:10 CET] <pkeuter> oh, and i don't get a duration
[11:58:02 CET] <pkeuter> all those problems
[12:04:30 CET] <pkeuter> BtbN, furq?
[12:04:54 CET] <BtbN> you don't get a duration?
[12:04:59 CET] <pkeuter> nope
[12:06:21 CET] <pkeuter> it's all 0 seconds, and scrolling isn't very accurate either
[12:31:16 CET] <pkeuter> BtbN, is that normal when using hls?
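
For reference, the cat-style concat BtbN described earlier: since each HLS segment is a self-contained MPEG-TS slice, the segments can simply be concatenated once recording is done. A sketch assuming segment names that sort correctly in the shell (out0.ts, out1.ts, ...):

    # join the segments into one transport stream
    cat out*.ts > full.ts
    # optional: remux to mp4 without re-encoding
    ffmpeg -i full.ts -c copy full.mp4
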
[13:43:59 CET] <debdog> ahoy everybooody! does ffmpeg have an option to identify the contents of a container? something like "mplayer -identify -frames 0"
[13:55:58 CET] <waressearcher2> debdog: hello
[13:56:25 CET] <debdog> o/
[14:00:03 CET] <waressearcher2> debdog: how are you?
[14:00:57 CET] <debdog> uhm, crappy. but who wants to know that anyway ;)
[14:01:32 CET] <debdog> well, pretty good right now actually. just converted a video with ffmpeg for the first time
[14:01:54 CET] <debdog> from 20 GB down to 1 GB without much quality loss, it seems
[14:02:19 CET] <debdog> but wouldn't an off-topic channel be better for this?
[14:02:35 CET] <debdog> don't want to drown my question in chatter
[14:03:42 CET] <waressearcher2> debdog: no idea
[14:13:19 CET] <DHE> debdog: you want ffprobe
[14:48:04 CET] <debdog> DHE: thanks a lot!
[15:20:29 CET] <pkeuter> so how would i create an h264 dash file with ffmpeg?
[15:37:21 CET] <durandal_1707> isn't there a dash muxer?
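
There is indeed a dash muxer in ffmpeg; a minimal sketch (file names and encoder settings are illustrative, and older builds may need -strict experimental for the native aac encoder):

    ffmpeg -i input.mp4 -c:v libx264 -g 50 -c:a aac -b:a 128k \
        -f dash out.mpd
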
[21:41:04 CET] <TD-Linux> FYI ffmpeg might be seeing more Opus LBRR frames due to webrtc recordings now being a thing
[21:42:23 CET] <TD-Linux> in case someone is on a codec-feature-implementing bender :)
[22:49:35 CET] <KallDrexx> Hello all.  I'm trying to load test transcoding performance on a GPU bare-metal box I have access to, using NVENC.  I'm doing this by transcoding Big Buck Bunny 1080p to an RTMP server, then spawning a bunch of ffmpeg instances to pull down that stream, downscale it to 720p at a 3M bitrate, and push it out to another RTMP server.  I am then trying to watch the video on the 2nd RTMP server.  Once I get to about ~5 ffmpeg instances I get a lot of buffering in the
[22:49:35 CET] <KallDrexx> video, but no dropped frames or anything.  The nvidia cards are running at 8% capacity and CPU is only at 4% on the transcode box.  Any ideas on how to isolate whether it's ffmpeg that's struggling or some other factor?
[00:00:00 CET] --- Wed Dec  9 2015

