burek021 at gmail.com
Mon Jan 15 03:05:01 EET 2018
[00:04:40 CET] <`md> wow, it works :O
[00:05:08 CET] <`md> before it would just give some weird errors about there not being a vaapi device, despite it being there
[00:05:11 CET] <`md> hah
[00:05:15 CET] <`md> great success
[00:05:36 CET] <`md> (had to copy over some libs tho and modify LD_LIBRARY_PATH)
[00:06:22 CET] <oborot> exit
[00:08:29 CET] <BtbN> grunt_, mp4 without a moov atom is broken and unplayable
[00:30:19 CET] <grunt_> BtbN: well, yes and no. looks like the original is indeed er... peculiar (it appears to contain just the CONTENTS of the mdat box, and nothing else), but it is quite playable, at least in VLC it is. i see your point though. problem solved, kudos for the pointer, mate!
[10:55:53 CET] <Fyr> JEEB, it was PGS subs causing FFMPEG to start new cluster due to timestamp.
[10:56:13 CET] <Fyr> any idea how to restamp a PGS stream?
[11:04:49 CET] <Fyr> I can mux an m2ts with FFMPEG, however, I can't remux the m2ts with it.
[11:06:36 CET] <Fyr> FFMPEG returns:
[11:06:36 CET] <Fyr> Unsupported codec with id 100359 for input stream 2
[14:55:42 CET] <JaskaL> Hi, got a question.. what would be the best way to stream ffmpeg output to another ffmpeg instance over the network? Seems like while using tcp + mpeg-ts the receiving end slows down after a while
[14:55:54 CET] <JaskaL> over lan that is.
[14:58:07 CET] <JEEB> if you just want to pass through raw video I'd probably do NUT over UDP or something? and if you are OK with some extra go-arounds TCP might be OK as well
[14:58:20 CET] <JEEB> raw video and audio I mean, since that's the idea behind NUT
[14:58:29 CET] <JEEB> you can get good timestamps, and both video and audio in there
[14:59:12 CET] <JaskaL> ah sounds just what i need
[15:16:22 CET] <JaskaL> the nut container was just what i've been looking for! Thanks a bunch JEEB
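A minimal sketch of what JEEB is suggesting (raw video and audio in NUT over UDP, with a TCP variant); the hostnames, ports, and input file are placeholders:

```shell
# Sender: raw video + PCM audio wrapped in NUT, pushed over UDP.
ffmpeg -i input.mkv -c:v rawvideo -c:a pcm_s16le -f nut udp://receiver-host:5000

# Receiver: read the NUT stream and encode or process it further.
ffmpeg -i udp://0.0.0.0:5000 -c:v libx264 -c:a aac out.mp4

# TCP variant, if the extra round-trips are acceptable:
#   receiver: ffmpeg -i tcp://0.0.0.0:5000?listen=1 ...
#   sender:   ffmpeg -i input.mkv -c:v rawvideo -c:a pcm_s16le -f nut tcp://receiver-host:5000
```

Raw video at any real resolution is a lot of bandwidth, so this only makes sense on a LAN, which matches JaskaL's setup.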
[15:36:53 CET] <Fyr> JEEB, is there a way to get new time stamps for a PGS stream?
[16:11:48 CET] <lightbulb6> consider `ffmpeg -i "http://stream.radioreklama.bg/radio1.opus" -c:a copy capture.opus`; the input is a stream published with icecast, and as soon as you run ffmpeg like this, you can see it quickly processes some 15-20 seconds of the input, and only then starts to copy the stream close to 1.0x speed. this is because the icecast server provides a lengthy buffer when requesting the stream data. is it possible to reliably seek to the "live edge" at the beginning, discarding the buffer?
[16:13:00 CET] <lightbulb6> i tried it with `-sseof 0`, but it fails because the length is not known, so ffmpeg cannot seek to the very end
[16:15:08 CET] <lightbulb6> and if i guess the right value with `-ss` used as an input option, then the behavior is what i want, but that needs to be guessed correctly
[16:15:32 CET] <lightbulb6> so if the buffer at connect is 10 seconds long, i can skip it with `-ss 10`
[16:17:13 CET] <lightbulb6> i can run it with some big value, like 60 seconds, which should cover most icecast streams, but then you have to wait (60 - initial_buffer_len) seconds until ffmpeg starts copying the stream
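The workaround lightbulb6 describes, written out; the 60-second value is the guess from the discussion, not a reliable constant:

```shell
# Discard a guessed amount of input before copying, to skip the burst
# buffer an Icecast server sends at connect time. 60 s should cover most
# streams, at the cost of waiting (60 - actual_buffer_length) seconds
# before ffmpeg starts writing output.
ffmpeg -ss 60 -i "http://stream.radioreklama.bg/radio1.opus" -c:a copy capture.opus
```

Note that `-ss` used as an input option here discards decoded/copied data up to the mark; it cannot detect the live edge itself, which is exactly the limitation raised above.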
[17:35:53 CET] <FLHerne> Good afternoon
[17:37:52 CET] <FLHerne> What would be the best way to convert only streams that aren't already in a set of acceptable codecs?
[17:38:56 CET] <FLHerne> I have user-provided videos in arbitrary formats, where the audio stream might be AAC or MP3 or something weird
[17:39:41 CET] <FLHerne> I'm fine with either AAC or MP3 on the output, so I just want to copy the stream in either of those cases
[17:40:31 CET] <FLHerne> But in case it's not one of those two codecs, I need to convert it (probably to AAC)
[17:42:42 CET] <FLHerne> If I just specify `-c:a aac` for the output, MP3 input streams will be needlessly transcoded
[17:43:26 CET] <FLHerne> And of course with `-c:a copy` it'll copy formats that I don't want
[17:44:29 CET] <FLHerne> My current plan is to have a script test the input codec and set ffmpeg args accordingly, but that seems untidy
[17:46:18 CET] <JEEB> ffprobe -of json -show_streams -show_format INPUT and catch that standard output
[17:46:28 CET] <JEEB> parse as json, make decisions, run ffmpeg.c if you're using it
[17:46:45 CET] <JEEB> outside of that you could just use the API and make the decisions on-the-fly
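A sketch of the probe-then-decide flow JEEB describes, as a shell helper. The `pick_acodec` name is made up for illustration, and the ffprobe/jq pipeline in the comment assumes `jq` is available:

```shell
# Decide the -c:a argument given a detected audio codec name:
# copy the stream if it is already acceptable, otherwise transcode to AAC.
pick_acodec() {
  case "$1" in
    aac|mp3) echo copy ;;   # already acceptable: stream-copy
    *)       echo aac  ;;   # anything else: transcode
  esac
}

# Usage sketch: probe the first audio stream, then run ffmpeg.
#   codec=$(ffprobe -v error -of json -show_streams input.mp4 \
#           | jq -r '[.streams[] | select(.codec_type=="audio")][0].codec_name')
#   ffmpeg -i input.mp4 -c:v copy -c:a "$(pick_acodec "$codec")" output.mp4
```

This keeps the decision in one place instead of scattering conditional ffmpeg arguments through a script, which addresses the "untidy" concern above.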
[18:41:10 CET] <luc4> Hello! I'm working on a project that uses the ffmpeg libs to open an input and extract the streams. I'm now using it with an http mjpeg 1fps stream coming from vlc. Everything works properly, but I see avformat_open_input takes a really long time, like 3 seconds or even more. Connection is over ethernet, and opening it in VLC from a pc seems to take much less. Any idea of a possible explanation?
[19:02:09 CET] <teratorn> luc4: I would correlate it with a packet capture to see if anything stands out
[19:02:37 CET] <luc4> teratorn: you mean wireshark?
[19:02:41 CET] <teratorn> yea
[19:03:30 CET] <teratorn> see if there is any difference between ffmpeg client and vlc pc client
[19:03:48 CET] <luc4> I can also see this simply on a pc with ffplay vs vlc
[19:04:00 CET] <luc4> ffplay is over 5 seconds, vlc below 3
[19:04:17 CET] <teratorn> so this has nothing to do with buffering?
[19:04:59 CET] <luc4> the call to avformat_open_input seems to take long, but I may be wrong
[19:05:14 CET] <teratorn> well it's important..
[19:05:27 CET] <teratorn> because it sounds just like buffering differences between client
[19:05:44 CET] <luc4> maybe I can somehow ask ffplay not to buffer...
[19:05:46 CET] <teratorn> but mjpeg...
[19:05:52 CET] <teratorn> why would it buffer anything?
[19:06:05 CET] <teratorn> maybe still standard practice to buffer some seconds
[19:08:45 CET] <luc4> teratorn: 4.1 seconds for avformat_open_input
[19:09:52 CET] <luc4> vlc starts and shows the first frame in around 3, even less
[19:10:02 CET] <luc4> it seems that the call is a bit too slow
[19:10:15 CET] <teratorn> so you're talking 3 vs 4.1 seconds?
[19:11:09 CET] <luc4> yes, but 4.1 is just that single call, 3 includes everything from the command line to the first frame
[19:11:54 CET] <luc4> So 3 seconds include process startup, VLC interface, analysis of the stream etc...
[19:12:09 CET] <luc4> 4.1 is just the call to avformat_open_input
[19:12:12 CET] <teratorn> I would be curious of vlc is actually using some version of ffmpeg for that format, or not
[19:12:19 CET] <luc4> me too
[19:12:30 CET] <luc4> maybe with some more logs...
[19:13:55 CET] <teratorn> unless it's using lots of cpu it's probably waiting on io or something
[19:14:09 CET] <teratorn> so using a debugger might help figure out what it is doing
[19:14:48 CET] <teratorn> otherwise start instrumenting the source with printfs :)
[19:14:56 CET] <luc4> is vlc even using ffmpeg in this case?
[19:15:04 CET] <teratorn> no idea
[19:15:47 CET] <luc4> avcodec decoder debug: ffmpeg codec (Motion JPEG Video) stopped
[19:15:58 CET] <luc4> it seems the decoder is ffmpeg at least
[19:17:12 CET] <luc4> but I suspect it is not using ffmpeg for the http part and demux
[22:49:49 CET] <BenLubar> How would I convert this Audacity equalizer curve to a filter? https://pastebin.com/raw/LpqEDZyw
[22:50:16 CET] <BenLubar> I'm pretty sure I need to use https://ffmpeg.org/ffmpeg-filters.html#anequalizer but I don't know much about audio terminology
[22:59:45 CET] <BenLubar> Maybe I need highpass with some specific parameters?
[23:00:02 CET] <furq> http://vpaste.net/FiuLl
[23:00:05 CET] <furq> presumably something like that
[23:00:10 CET] <furq> that's the first three points
[23:00:21 CET] <BenLubar> ok, I'll try extrapolating that
[23:00:48 CET] <furq> f and d from audacity map to f and g
[23:01:51 CET] <BenLubar> so the w= is the distance between that one and the next f?
[23:02:17 CET] <furq> it's the width in hz
[23:02:23 CET] <furq> i assume f=50 w=10 affects 45 to 55hz
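Putting furq's mapping together: each Audacity (f, d) point becomes an anequalizer section with `f` (center frequency in Hz), `w` (band width in Hz), and `g` (gain in dB), one chain per channel. The frequency/gain values below are illustrative, not taken from the pasted curve, and a stereo input needs the same sections repeated for c0 and c1:

```shell
# Two example bands applied to both channels of a stereo input.
# f=50 w=10 affects roughly 45-55 Hz, per the discussion above.
ffmpeg -i in.wav -af "anequalizer=c0 f=50 w=10 g=-24|c0 f=100 w=50 g=-12|c1 f=50 w=10 g=-24|c1 f=100 w=50 g=-12" out.wav
```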
[23:07:59 CET] <bobe> Hey everyone! I've got an interesting problem when trying to crossfade many small videos. Ive got an entire writeup of many of my findings here: https://pastebin.com/HaD4vKhp
[23:08:07 CET] <bobe> Would appreciate any help
[23:10:56 CET] <PD__> Hi guys. I need a little help. I have, let's say, 10 video files which I want to merge into one video, BUT I would like a crossfade between each of them. What's a good way of doing it? Now I do it by simply merging the first and second video into one temp video. Then I use the temp video and add the third video, and so on.. I think this is terrible since I encode the same file multiple times...
[23:11:31 CET] <bobe> That's not what I'm doing...
[23:11:51 CET] <bobe> I hope.
[23:12:57 CET] <bobe> Unless you have a similar problem, and you're talking about that lol
[23:13:10 CET] <PD__> bobe: hahahaha I think it's exactly the same problem :)
[23:13:39 CET] <PD__> bobe: however I think you have a problem with ffmpeg version, you should use 2.8 or something like that
[23:14:51 CET] <bobe> Ok well what makes you think the entire file is getting encoded multiple times? Isn't splitting the files into 3 pieces saving processing power? Since I just crossfade the 0.5s sections
[23:14:58 CET] <bobe> I'll give 2.8 a shot
[23:15:00 CET] <bobe> Thx
[23:16:26 CET] <PD__> because that's my workflow... I merge two videos into one temp video and then use the temp video to add the third video and so on... so the temp gets encoded multiple times
[23:17:02 CET] <PD__> Ah no... sorry, I am talking about my problem, not yours :D
[23:22:14 CET] <bobe> Oh
[23:45:57 CET] <bobe> PD__: BTW it does seem like 2.8 reduces memory usage a lot. Still running tests, but it may be low enough for my uses. Do you know why?
[23:49:08 CET] <PD__> bobe: yes, it's a bug :)
[23:49:50 CET] <durandal_1707> bobe: what problem do you have?
[23:50:44 CET] <bobe> durandal_1707: https://pastebin.com/VmmpLAcy
[23:51:21 CET] <bobe> 2.8 was working but now quits with Invalid data found when processing input. But at least we're getting somewhere!
[23:51:49 CET] <atomnuker> using fewer threads should also reduce ram
[23:52:31 CET] <bobe> Yeah I've tried from 1->4, still an obscene amount of RAM used.
[23:55:50 CET] <durandal_1707> bobe: try with git master
[23:56:24 CET] <bobe> durandal_1707: I think I have, but I'll try again in a sec here
[23:57:43 CET] <bobe> durandal_1707: The zeranoe nightly should work right?
[23:58:31 CET] <durandal_1707> acrossfade might be problematic, should port it to new api
[23:59:39 CET] <bobe> durandal_1707: sorry, I don't get what you mean by new api? Either way I'll try without acrossfade and just the video portion
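The single-pass crossfade PD__ is after (no temp file re-encoded at each step) became much simpler in FFmpeg releases newer than this log: 4.3+ ships an `xfade` video filter to pair with `acrossfade`. A sketch for two clips, each 10 s long with a 1 s overlap; file names and timings are placeholders:

```shell
# One encode, no intermediate temp file. xfade blends the video streams
# starting at offset=9 for duration=1; acrossfade overlaps the audio.
# Chaining more xfade/acrossfade pairs in the filtergraph extends this
# to N clips, still in a single pass.
ffmpeg -i a.mp4 -i b.mp4 -filter_complex \
  "[0:v][1:v]xfade=transition=fade:duration=1:offset=9[v]; \
   [0:a][1:a]acrossfade=d=1[a]" \
  -map "[v]" -map "[a]" out.mp4
```

On the 2.8-era builds discussed above this filter does not exist, so the fade/overlay filtergraph or the temp-file workflow were the available options.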
[00:00:00 CET] --- Mon Jan 15 2018