[Ffmpeg-devel-irc] ffmpeg.log.20181116

burek burek021 at gmail.com
Sat Nov 17 03:05:02 EET 2018


[00:11:22 CET] <pzy> haha oh duh, I'm dumb
[00:31:49 CET] <colemickens> so I have some code here that uses ffmpeg to transcode frames from dmabuf to a video file.
[00:32:12 CET] <colemickens> I'd instead like to take that and throw it into some sort of stream that I can consume with plain ole command line ffmpeg, so that I can crop, pick my own transcoding options, etc.
[00:32:29 CET] <colemickens> But I'm unsure whether there's a way to do this, and what combination of transport and codec would be recommended.
[03:07:15 CET] <KombuchaKip> What is the best way to distinguish ffmpeg headers from libav headers at compile time on the user's system?
[03:20:00 CET] <pink_mist> KombuchaKip: look at how mpv does it maybe?
[03:21:11 CET] <pink_mist> KombuchaKip: https://github.com/mpv-player/mpv/blob/master/wscript#L437
[03:21:13 CET] <KombuchaKip> pink_mist: I didn't know it did, but is there a specific place in the build environment or at the source level you'd suggest?
[03:21:18 CET] <KombuchaKip> pink_mist: Thank you.
[03:23:54 CET] <KombuchaKip> pink_mist: That's in Python which isn't really my first language. I see it's checking flags with pkg-config, but it's not really clear to me how it's distinguishing between the two.
[03:25:45 CET] <pink_mist> uhm, well, it's using the 'waf' build system, so it's not simply python that needs to be learned :P but basically it's compiling the piece of C code on line 457 with 'cc' or whatever
[03:25:51 CET] <pink_mist> and checking what the result is
[03:26:05 CET] <pink_mist> wrapping it in a main and whatnot
[03:26:55 CET] <pink_mist> that is, if it fails to compile it knows it's one or the other
[03:27:05 CET] <pink_mist> and if it succeeds it knows it's the reverse :P
[03:27:19 CET] <KombuchaKip> pink_mist: Looks like it's checking the libavcodec/version.h header to see if LIBAVCODEC_VERSION_MICRO >= 100, and if so, then this is the libav fork and not ffmpeg. Is that correct?
[03:27:28 CET] <pink_mist> I believe so
[03:28:06 CET] <pink_mist> on line 467 it does the reverse check, just to be on the safe side
[03:28:23 CET] <pink_mist> see how it switches which case returns 1 and which one returns -1?
[03:29:19 CET] <KombuchaKip> pink_mist: It's a bit weird, but it looks like if the micro version is >= 100 then it thinks it's libav.
[03:29:54 CET] <KombuchaKip> pink_mist: I'm surprised they still don't simply have an explicit macro in their headers to distinguish the two, because what they're doing is pretty easy to break if ffmpeg bumps the micro version to >= 100.
[03:30:33 CET] <pink_mist> I think it's the other way around
[03:31:28 CET] <pink_mist> I'd expect they've gotten some form of assurances that ffmpeg will always keep it at 100 or higher, and that libav will always keep it below
[03:31:33 CET] <pink_mist> but I don't rightly know
[03:32:24 CET] <KombuchaKip> pink_mist: Right
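
As a minimal sketch of the check being discussed (assuming, as pink_mist says, that FFmpeg keeps LIBAVCODEC_VERSION_MICRO at 100 or above and libav keeps it below), the whole test can live in the preprocessor; the FF_IS_FFMPEG name is made up here for illustration:

    #include <libavcodec/version.h>

    /* LIBAVCODEC_VERSION_MICRO is >= 100 on FFmpeg and < 100 on libav
       (per the discussion above).  FF_IS_FFMPEG is a hypothetical name. */
    #if LIBAVCODEC_VERSION_MICRO >= 100
    #  define FF_IS_FFMPEG 1
    #else
    #  define FF_IS_FFMPEG 0
    #endif
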
[13:24:16 CET] <shroomM> hey guys, one question ...
[13:24:28 CET] <shroomM> I'm trying to concat a couple of mp4 files
[13:24:35 CET] <shroomM> using the concat demuxer as described here... https://trac.ffmpeg.org/wiki/Concatenate
[13:25:16 CET] <shroomM> they are served by a web server and ffmpeg is reading them via http
[13:25:42 CET] <shroomM> this works fine, but I would like to speed it up, since all I'm doing is stream copy
[13:26:45 CET] <shroomM> the download of a single file is slow; if I download files in parallel, I get higher speeds
[13:26:59 CET] <shroomM> is there a way to instruct ffmpeg to "pre-fetch" files or download them in parallel?
[13:27:16 CET] <ariyasu> no
[13:27:41 CET] <ariyasu> but you could download them with another process like curl in multiple sessions for parallel downloading
[13:27:50 CET] <ariyasu> then do the concat job with ffmpeg
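
For reference, the concat-demuxer job being described looks roughly like this; the list file and URLs are placeholders, and the protocol_whitelist may need adjusting (e.g. adding https,tls) depending on the server:

    # list.txt contains one line per input:
    #   file 'http://example.com/part1.mp4'
    #   file 'http://example.com/part2.mp4'
    ffmpeg -f concat -safe 0 -protocol_whitelist file,http,tcp \
           -i list.txt -c copy out.mp4
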
[13:28:11 CET] <shroomM> yeah, I'm limited on disk space, so I want to do it all with ffmpeg
[13:28:42 CET] <shroomM> the mp4 files themselves are less than 100MB
[13:29:00 CET] <bencoh> well it's either disk or ram
[13:29:06 CET] <shroomM> ram is better :)
[13:29:09 CET] <shroomM> so I was thinking of launching 4 processes, downloading the files into memory and writing them to named pipes
[13:29:13 CET] <bencoh> so you could always download to tmpfs
[13:29:36 CET] <shroomM> and then specifying named pipes in the concat file
[13:29:38 CET] <bencoh> just make sure to clean up temp files
[13:30:01 CET] <shroomM> ok, good idea, not sure if tmpfs is available in this environment, but I will check
[13:30:23 CET] <shroomM> what do you think about the named pipes, does that sound like something that should work
[13:31:00 CET] <shroomM> the reason I'm asking is that I have an example set-up, but the resulting file that ffmpeg produces is invalid :S
[13:31:09 CET] <shroomM> and I don't get why
[13:33:02 CET] <shroomM> I have a python script with a list of urls; each url gets its own fifo named pipe.
[13:33:50 CET] <shroomM> the script then starts by reading the first url into memory and starts a pipe write operation, which blocks until something starts reading it
[13:34:37 CET] <shroomM> separately I manually run ffmpeg with a concat file with all the named pipes listed
[13:35:12 CET] <shroomM> and this actually finishes, but with a bunch of warnings from ffmpeg about "Non-monotonous DTS in output stream"
[13:35:57 CET] <shroomM> the resulting file does not work as expected :/
[13:36:06 CET] <shroomM> I don't get these errors if I specify http urls in the concat file, and then the resulting file works
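
A rough shell sketch of the FIFO setup shroomM describes (placeholder URLs, with curl standing in for the python downloader); note this is the arrangement that produced the DTS warnings, not a confirmed fix. One thing that may matter is that the mp4 demuxer cannot seek in a pipe, so piped input can behave differently from http input unless each file has its moov atom at the front:

    mkfifo part1.fifo part2.fifo
    curl -s http://example.com/part1.mp4 > part1.fifo &
    curl -s http://example.com/part2.mp4 > part2.fifo &

    printf "file 'part1.fifo'\nfile 'part2.fifo'\n" > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
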
[16:54:28 CET] <atbd> hi everybody, is it possible to add to each AVPacket metadata the current epoch time during reencoding with ffmpeg cli?
[16:55:59 CET] <JEEB> usually you do something like set -itsoffset to the moment you start ffmpeg.c
[16:56:06 CET] <JEEB> API users have more alternatives
[16:56:17 CET] <JEEB> although I have also thought about following upipe's design
[16:56:24 CET] <JEEB> which has three timestamps
[16:56:28 CET] <JEEB> - coded timestamp
[16:56:31 CET] <JEEB> - interpreted timestamp
[16:56:38 CET] <JEEB> - receipt timestamp
[16:56:48 CET] <JEEB> first was what was in the source container
[16:56:55 CET] <JEEB> second one applies stuff like wrap-arounds
[16:57:13 CET] <JEEB> third one is just the timestamp when that AVPacket/Frame was filled
[17:04:14 CET] <atbd> okay thanks
[17:04:35 CET] <atbd> upipe's design as you describe it would be great for what i want
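
A hedged C sketch of what an API user could do along those lines; the TimedPacket wrapper and read_timed_packet() helper are hypothetical names, while av_read_frame() and av_gettime() (libavutil/time.h, epoch time in microseconds) are real library calls:

    #include <libavformat/avformat.h>
    #include <libavutil/time.h>

    /* Hypothetical wrapper carrying the three timestamps described above. */
    typedef struct TimedPacket {
        AVPacket *pkt;           /* allocated by the caller with av_packet_alloc() */
        int64_t coded_ts;        /* timestamp as coded in the source container */
        int64_t interpreted_ts;  /* after wrap-around / offset handling */
        int64_t receipt_ts;      /* wall clock (epoch, microseconds) when read */
    } TimedPacket;

    static int read_timed_packet(AVFormatContext *fmt, TimedPacket *tp)
    {
        int ret = av_read_frame(fmt, tp->pkt);
        if (ret < 0)
            return ret;
        tp->coded_ts       = tp->pkt->pts;
        tp->interpreted_ts = tp->pkt->pts;  /* wrap-around handling would go here */
        tp->receipt_ts     = av_gettime();  /* the "receipt timestamp" */
        return 0;
    }
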
[17:50:37 CET] <trashPanda_> Hello, I have a question regarding the H264 decoder using the API. I am feeding the decoder packets with correct dts/pts, but when I receive the frame, the PTS has been changed to a large negative number.
[17:52:01 CET] <trashPanda_> The received frame's dts is still correct. This only happens with a certain video; does anyone know what could cause the decoder to give back a large negative PTS number? I'm talking like -6142771919317201 large...
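
For context, the send/receive decode loop in question looks roughly like this (dec_ctx, pkt and frame are placeholder names assumed to be allocated and opened elsewhere); when frame->pts comes back looking bogus, frame->best_effort_timestamp is one commonly used fallback, though it does not explain why the decoder rewrites the pts for that particular video:

    #include <libavcodec/avcodec.h>

    static int decode_one(AVCodecContext *dec_ctx, AVPacket *pkt, AVFrame *frame)
    {
        int ret = avcodec_send_packet(dec_ctx, pkt);
        if (ret < 0)
            return ret;
        for (;;) {
            ret = avcodec_receive_frame(dec_ctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;
            if (ret < 0)
                return ret;
            /* frame->pts can come back as AV_NOPTS_VALUE or an implausible
               value for some streams; best_effort_timestamp is the library's
               guess based on pts/dts heuristics. */
            int64_t ts = frame->best_effort_timestamp;
            (void)ts; /* ... use ts ... */
        }
    }
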
[18:23:20 CET] <SpeakerToMeat> Hi all
[18:23:22 CET] <SpeakerToMeat> Question, I'm trying to burn in some ASS subs, but ffmpeg seems to be ignoring the color (and alpha) settings for the style :/ any idea why? everything else in the style (borders etc) works, but not the color changes.
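
For context, a typical ASS burn-in command of the sort being described, using the subtitles filter; force_style (a real option of that filter, shown here with a placeholder colour) is one way to check whether the style values or the filter itself are being ignored:

    ffmpeg -i input.mp4 \
           -vf "subtitles=subs.ass:force_style='PrimaryColour=&H0000FF&'" \
           -c:a copy output.mp4
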
[21:54:21 CET] <xgx> hello, can someone help me understand how to make an audio-only m3u8? I found how to do the other way around, but this one is a bit harder
[21:55:33 CET] <pzy> m3u8 is just a playlist
[21:56:34 CET] <pink_mist> xgx: you do that by specifying an audio file in the playlist
[21:57:11 CET] <xgx> I know, but I would like to send an rtmp stream to a server and have ffmpeg create an audio-only playlist
[21:57:49 CET] <pink_mist> then make an rtmp stream with only audio
[21:58:58 CET] <xgx> sounds simple enough. thought maybe it still needed to have some kind of video data
[21:59:01 CET] <xgx> ill try that
[21:59:20 CET] <pink_mist> I have no idea tbh
[21:59:28 CET] <pink_mist> never worked with rtmp myself
[22:02:55 CET] <xgx> So far I have only done the simple stuff with it
[22:03:07 CET] <xgx> received the stream and made it multi-bitrate
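
A hedged sketch of what that could look like, pulling from an rtmp input and writing an audio-only HLS playlist (URL, bitrate and segment options are placeholders):

    ffmpeg -i rtmp://server/app/stream -vn -c:a aac -b:a 128k \
           -f hls -hls_time 6 -hls_list_size 0 audio_only.m3u8
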
[00:00:00 CET] --- Sat Nov 17 2018

