[Ffmpeg-devel-irc] ffmpeg.log.20180202

burek burek021 at gmail.com
Sat Feb 3 03:05:01 EET 2018


[00:13:34 CET] <alexpigment> bonk: static implies that there is audio
[00:14:42 CET] <alexpigment> unless, of course, you've got static in your speakers when you just generally turn up the audio
[02:26:52 CET] <damarusama> can I make a video darker through ffmpeg?
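(This goes unanswered in the log; for what it's worth, one common way to darken video is the eq filter, e.g. ffmpeg -i in.mp4 -vf eq=brightness=-0.1 out.mp4 - brightness takes values from -1.0 to 1.0, and the filenames here are placeholders.)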
[02:53:38 CET] <leewdch> is it possible to encode a video, stop encoding if a size limit is reached, and apply a fade-out to the audio at the end? maybe using 2-pass?
[02:53:57 CET] <leewdch> otherwise I need to make a bash script for this
[05:08:02 CET] <mkdir> yooo
[05:08:10 CET] <mkdir> I need to find a good .wav library
[05:14:33 CET] <mkdir> Yo
[05:14:34 CET] <mkdir> Johnjay
[05:14:52 CET] <Johnjay> hey
[05:14:52 CET] <Johnjay> sup
[05:15:05 CET] <Johnjay> i think i met you in another channel mkdir but i don't see it now
[05:17:34 CET] <mkdir> Yeah
[05:17:44 CET] <mkdir> Possibly audacity?
[05:21:28 CET] <Johnjay> hmm maybe
[05:21:36 CET] <Johnjay> i'm not using ffmpeg much these days though
[05:21:47 CET] <Johnjay> i had to do a fade in effect on a mp3 file but that's about it
[05:22:33 CET] <mkdir> Ah I see
[05:22:42 CET] <mkdir> I need to find a good .wav library
[05:23:02 CET] <mkdir> Do you know of any good places to find sample .wav sound files?
[05:23:26 CET] <mkdir> English words would be beneficial
[05:23:33 CET] <mkdir> It could have also been in ##C
[05:25:27 CET] <brettdong> ffmpeg: common/cpu.c:251: x264_cpu_detect: Assertion `!(cpu&(X264_CPU_SSSE3|X264_CPU_SSE4))' failed.
[05:25:36 CET] <brettdong> I'm running ffmpeg in a qemu vm, what can I do?
[07:19:11 CET] <butts> hi
[10:35:41 CET] <TheoTheo> I am trying to wrap my head around how filters work using the libavfilter library in C code. Does anyone know of a good guide or can / wants to help me understand?
[10:51:02 CET] <JEEB> TheoTheo: did you look under doc/examples?
[10:51:09 CET] <JEEB> there should be at least one or two filtering examples
[11:03:21 CET] <TheoTheo> Hi JEEB, I did look at the filter_audio and filtering_audio examples, but I still don't quite get it
[11:03:56 CET] <TheoTheo> I would like to crossfade two streams
[11:06:55 CET] <TheoTheo> I know how to get the right filter by using avfilter_get_by_name("crossfade"), but i don't understand how filters are applied to my stream / frames
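For readers with the same question: the key idea in libavfilter is that you never apply a filter to a frame directly. You build a graph containing a buffer source and a buffer sink, push decoded frames in with av_buffersrc_add_frame(), and pull filtered frames out with av_buffersink_get_frame(). Below is a minimal C sketch, not TheoTheo's code; error handling is omitted, the abuffer "args" string is a hypothetical example that must describe your decoder's real output, and note the audio crossfade filter is actually named "acrossfade" (it takes two inputs).

```c
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>

static int filter_one_frame(AVFrame *in, AVFrame *out)
{
    AVFilterGraph *graph = avfilter_graph_alloc();
    AVFilterContext *src_ctx, *sink_ctx;
    /* hypothetical input description; must match your decoded frames */
    const char *args = "sample_rate=44100:sample_fmt=fltp:"
                       "channel_layout=stereo:time_base=1/44100";

    avfilter_graph_create_filter(&src_ctx, avfilter_get_by_name("abuffer"),
                                 "in", args, NULL, graph);
    avfilter_graph_create_filter(&sink_ctx, avfilter_get_by_name("abuffersink"),
                                 "out", NULL, NULL, graph);
    /* trivial pass-through graph; for a crossfade you would create two
     * abuffer sources and link both into the two-input "acrossfade"
     * filter instead of linking source to sink directly */
    avfilter_link(src_ctx, 0, sink_ctx, 0);
    avfilter_graph_config(graph, NULL);

    av_buffersrc_add_frame(src_ctx, in);              /* push a frame in  */
    int ret = av_buffersink_get_frame(sink_ctx, out); /* pull a frame out */

    avfilter_graph_free(&graph);
    return ret;  /* AVERROR(EAGAIN) means the graph needs more input */
}
```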
[11:15:15 CET] <Uzzi> I've restored video files from android phone. I cannot play videos. mp4 demux error: cannot create chunks index
[12:17:36 CET] <kr> Hi, I am using libavformat to mux rtp video stream (vp8) to webm. I am not getting how to set pts and dts values in AVPacket. Can i set rtp timestamp itself as pts or dts or do i need to do any other calculation ?
[12:51:14 CET] <JEEB> kr: usually the input format sets the AVPacket pts/dts values and you might only have to re-scale them to the output time base
[12:52:27 CET] <JEEB> kr: if you don't have those values set, then you will either have to fix the rtp protocol/demuxer, or poke the values yourself
[13:08:44 CET] <kr> by "re-scale them to the output timebase" do you mean the stream timebase or the codec timebase?
[13:10:02 CET] <JEEB> AVPacket timestamps should be in the input stream time base, and you most likely need to scale them to the output stream time base
[13:10:17 CET] <JEEB> decoder's output would be in the decoder's time base
[13:15:09 CET] <kr> pardon me for my naivety, but what do you mean by decoder here? I am not using any decoder. I am receiving an already vp8-encoded rtp live stream at a port on my system. I am assembling the stream frame-wise, then initializing an AVPacket and setting its buffer to the assembled frame byte array.
[13:15:33 CET] <JEEB> yes
[13:15:46 CET] <JEEB> but you asked "codec time base" and that is generally the time base for the decoder or encoder
[13:16:04 CET] <JEEB> AVPackets are in stream time base that they are for
[13:16:33 CET] <JEEB> and then filter graph outputs have time bases as well :P
[13:16:59 CET] <kr> sorry i meant codec context timebase or stream timebase
[13:17:10 CET] <JEEB> codec context is for a decoder anyways
[13:17:26 CET] <JEEB> stream time base is the stream's time base and AVPackets' pts/dts is in that
[13:18:33 CET] <kr> So, how do I calculate that for my particular case? Do I have to use the rtp timestamps that I get, or is there another way?
[13:19:39 CET] <JEEB> as I said in the very beginning, the RTP protocol/demuxer should give you DTS/PTS and the time base. if it doesn't, then that protocol (or the container within that protocol) doesn't have such things and you have to make them up
[13:19:55 CET] <JEEB> (or you fix the RTP protocol / whatever demuxer is getting used)
[13:22:41 CET] <JEEB> raw streams might not have PTS/DTS set, so you either implement giving them timestamps in the raw format "demuxer", or you fix them after-the-fact
[13:24:48 CET] <kr> there is no container inside rtp. it's a real-time transport protocol, which only transfers audio/video frames in a packetized way.
[13:26:08 CET] <JEEB> I've seen MPEG-TS within RTP
[13:26:12 CET] <JEEB> which is why I noted
[13:26:25 CET] <JEEB> and yes, with raw streams the "raw XXX" demuxer gets used after the protocol
[13:26:52 CET] <JEEB> (unless the protocol itself just passes the packets to a parser and skips "demuxers" altogether)
[13:27:05 CET] <kr> by "raw format demuxer" do you mean the rawvideo demuxer? that video stream is not raw video, it is an already vp8-encoded stream.
[13:27:11 CET] <JEEB> no
[13:27:19 CET] <JEEB> h264 demuxer is one of them
[13:27:34 CET] <JEEB> or possibly the parser
[13:27:39 CET] <JEEB> one of those two
[13:49:59 CET] <kr> yes, the protocol passes the packets to my custom parser, where I parse the rtp packets and get frame-wise data. This data I am trying to mux into a webm file, but I am not sure how to set the timestamps.
[13:53:33 CET] <kr> is there a way to achieve this, i.e. without passing the live stream input to any demuxer, directly writing to file? Because my stream is already vp8-encoded, which should have no problem with the webm container.
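To make JEEB's advice concrete for this case: RTP video payloads (VP8 included) use a 90 kHz timestamp clock, so a reasonable approach is to take the RTP timestamp, subtract the first one you saw, and rescale from 1/90000 into the time base the muxer chose for the output stream after avformat_write_header(). A hedged C sketch follows; the function and parameter names are made up for illustration.

```c
#include <libavformat/avformat.h>

static int write_vp8_frame(AVFormatContext *oc, AVStream *st,
                           uint8_t *frame_data, int frame_size,
                           uint32_t rtp_ts, uint32_t first_rtp_ts)
{
    AVRational rtp_tb = { 1, 90000 };  /* standard RTP video clock */
    AVPacket pkt;

    av_init_packet(&pkt);              /* 2018-era API */
    pkt.data = frame_data;
    pkt.size = frame_size;
    pkt.stream_index = st->index;
    /* RTP carries one timestamp per frame and VP8 has no B-frames,
     * so pts == dts; unsigned subtraction handles clock wraparound */
    pkt.pts = pkt.dts = av_rescale_q((uint32_t)(rtp_ts - first_rtp_ts),
                                     rtp_tb, st->time_base);
    return av_interleaved_write_frame(oc, &pkt);
}
```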
[14:16:44 CET] <boingboing> ok chat, here's my issue: I have a command that is constantly fetching video into video.mp4. I then want to stream that to my devices. but when I call another ffmpeg command to read from the mp4, it says it's invalid, and it doesn't work until I stop streaming from the camera first
[14:17:06 CET] <boingboing> what's the best way to make it constantly stream from the camera to the hard drive, and from the hard drive to my phone only when I want to?
[14:17:19 CET] <boingboing> I don't need to store the file on the hard drive
[14:18:06 CET] <JEEB> nginx-rtmp  with hls/dash output?
[14:21:06 CET] <boingboing> doesn't that seem a bit much though? there has to be a way I can just stream to a file and then stream from that file
[14:21:50 CET] <klaxa> don't use mp4
[14:22:09 CET] <klaxa> mkv supports reading partial files, maybe try that instead
[14:22:21 CET] <JEEB> matroska or mpegts, yes
[14:22:30 CET] <klaxa> doing it the way you do will definitely fill up your harddrive though
[14:22:37 CET] <klaxa> in the long term
[14:22:38 CET] <JEEB> but nginx-rtmp would output what phones and web clients like
[14:22:54 CET] <JEEB> and not fill the hfd
[14:22:58 CET] <JEEB> *hdd
[14:23:37 CET] <boingboing> ill try the mkv
[14:24:01 CET] <boingboing> is there a way to have it auto-delete video from, say, 2 seconds behind current, so my hard drive won't fill up?
[14:24:41 CET] <JEEB> with the hls muxer maybe
[14:25:00 CET] <JEEB> since it has an option for how many segments to keep
[14:51:41 CET] <furq> the hls muxer will be fine for that but you'll end up with way more than two seconds of latency
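(For reference, a hedged sketch of the hls approach discussed above: something like ffmpeg -i <camera input> -c copy -f hls -hls_time 2 -hls_list_size 3 -hls_flags delete_segments live.m3u8 keeps only the last few segments on disk, at the cost of the extra latency furq mentions. The input and segment settings are placeholders.)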
[15:58:35 CET] <Nacht> Anyone know why sometimes you have MPEG-TS files that indicate they have an adaptation field, yet the length is set to 0?
[16:35:32 CET] <DHE> Nacht: due to the fixed size frames of mpegts, adaptation fields of varying lengths are sometimes used to simulate a frame of smaller size. a field size of 0 still consumes the 1 byte for the field size itself, resulting in a payload of 183 bytes instead of 184.
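A small C sketch of the arithmetic DHE describes, following the MPEG-TS packet layout: 188-byte packets, a 4-byte header, and (when signalled) one adaptation_field_length byte followed by the field itself.

```c
#include <stdint.h>

static int ts_payload_size(const uint8_t pkt[188])
{
    int afc = (pkt[3] >> 4) & 0x3;  /* adaptation_field_control bits */
    int payload = 188 - 4;          /* 184 bytes follow the header */

    if (afc == 2)                   /* adaptation field only, no payload */
        return 0;
    if (afc == 3)                   /* adaptation field + payload */
        payload -= 1 + pkt[4];      /* length byte + field; a length of 0
                                       still costs 1 byte -> 183 left */
    return payload;
}
```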
[16:41:34 CET] <mkdir> Hi
[16:41:35 CET] <mkdir> There
[16:41:39 CET] <mkdir> It's mkdir
[16:41:42 CET] <mkdir> say it just like that
[16:41:50 CET] <mkdir> okay, is there any good text-to-speech software?
[16:43:51 CET] <Nacht> rmdir
[16:44:34 CET] <kepstin> mkdir: there's lots, but less so if you're limiting yourself to open-source options (also, this is kinda out of scope for ffmpeg)
[16:45:07 CET] <mkdir> oh sorry
[16:45:08 CET] <mkdir> yeah
[16:45:18 CET] <mkdir> I just want one that can convert to .wav
[16:45:27 CET] <mkdir> or saves as a .wav
[17:24:50 CET] <vidaoptics> Hi guys, I have an exotic question - how can I make ffmpeg stream audio and video from udp (multicast) to an Icecast server, with no transcoding? doing .... -f mp3 icecast://source:password@ip:port/mount will stream only audio when the input is udp, but when the input is another http source from icecast with video - it works....
[17:26:01 CET] <furq> it works with -f mp3?
[17:27:51 CET] <vidaoptics> yes, but only when input stream is http://some-server:port/stream.ts
[17:28:37 CET] <furq> i have no idea what it would even be sending as the video stream if you're outputting mp3
[17:29:03 CET] <furq> that doesn't seem like something that should work at all
[17:29:06 CET] <vidaoptics> if stream is udp://239.0.0.1:1234 then ffmpeg fails with many errors
[17:29:28 CET] <furq> pastebin the full command line and output for both cases
[17:34:00 CET] <vidaoptics> failure: https://pastebin.com/5nCYqCMg
[17:35:01 CET] <kepstin> oh, right, 'mp3' takes a video stream because it can attach a picture as cover art
[17:35:01 CET] <mkdir> furq
[17:35:06 CET] <mkdir> what's good with you?
[17:35:19 CET] <furq> kepstin: i get that but surely that wouldn't actually work as far as watching it goes
[17:35:27 CET] <kepstin> definitely not, yeah
[17:36:35 CET] <furq> i'm going to guess the working stream is mjpeg or something
[17:37:19 CET] <kepstin> I believe icecast can do theora+vorbis in ogg and vp8+vorbis in webm
[17:37:22 CET] <kepstin> in terms of video
[17:37:27 CET] <furq> officially, yeah
[17:37:37 CET] <furq> iirc other stuff works but it's not officially supported
[17:38:00 CET] <furq> you've always been able to stream mp3 but it's never been officially supported
[17:38:03 CET] <vidaoptics> all stuff works fine, mpeg-ts is ok and mpeg-ps also fine
[17:38:12 CET] <kepstin> hmm, they list mp3 in the docs now
[17:38:12 CET] <furq> vidaoptics: use -f mpegts instead of -f mp3 then
[17:38:22 CET] <furq> mp3 doesn't support video streams
[17:38:33 CET] <vidaoptics> this is a udp multicast stream from a TV channel in some area, but we can't get it out via a udp proxy due to network conditions...
[17:38:36 CET] <furq> so i'm still not entirely sure how that worked at all
[17:39:04 CET] <furq> either it was just writing one mjpeg frame as cover art and it wasn't actually sending video, or something tremendously hacky was happening
[17:39:16 CET] <furq> like ffmpeg just decided to write the entire mjpeg stream as cover art and this was somehow working
[17:39:37 CET] <vidaoptics> *I am an idiot* ... -f mpegts worked....
[17:40:04 CET] <furq> can you actually watch that
[17:40:30 CET] <furq> icecast will accept a lot of stuff and just forward it unmodified, but it's not actually guaranteed that anything will play it
[17:40:58 CET] <furq> if mpegts actually works then that's pretty interesting as far as streaming stuff goes
[17:41:07 CET] <furq> probably less hassle to get that up and running than nginx-rtmp
[17:41:33 CET] <kepstin> well, the nice thing about mpegts is that forwarding it unmodified is all you really need to do, stuff will just sync up with it eventually.
[17:42:02 CET] <furq> yeah i guess if anything will work over icecast then mpegts is most likely
[17:42:10 CET] <furq> i know there were issues with flv because it doesn't send the right mime-type
[17:42:29 CET] <vidaoptics> furq - it plays fine, no problem on STBs
[17:42:35 CET] <furq> neat
[17:42:44 CET] <furq> i take it it just acts like a regular mpegts over http stream
[17:42:47 CET] <kepstin> i know it has extra code for webm and ogg containers to basically remux per connected client.
[17:43:04 CET] <furq> i'll have to mess around with that later and see if i can get anything useful out of it
[17:44:07 CET] <vidaoptics> nope, you can't - it's a bad story, STBs do not like it at all :(
[17:44:27 CET] <vidaoptics> and in h264 you can get good stream anyway
[17:44:37 CET] <furq> STBs don't like what
[17:44:53 CET] <furq> if you meant webm then i didn't mean that, i already knew it could stream webm
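(To make the working setup concrete: the invocation that ended up working keeps the MPEG-TS container and copies the streams, roughly ffmpeg -i udp://239.0.0.1:1234 -c copy -f mpegts icecast://source:password@ip:port/mount - a sketch assembled from the addresses quoted above, with the credentials as placeholders.)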
[17:57:39 CET] <jfmcarreira> Heyy guys
[17:58:10 CET] <jfmcarreira> I am trying to read an input stream using ffmpeg. is it possible to convert the decoded frame to a different pixel format?
[17:59:49 CET] <JEEB> yes, avfilter
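A minimal sketch of pixel format conversion with libswscale, an alternative to the avfilter route JEEB suggests (in a filter graph, the "format" filter does the same job). The target format here is just an example; error handling is omitted.

```c
#include <libswscale/swscale.h>
#include <libavutil/frame.h>

static AVFrame *convert_to_rgb24(const AVFrame *src)
{
    AVFrame *dst = av_frame_alloc();
    dst->format = AV_PIX_FMT_RGB24;
    dst->width  = src->width;
    dst->height = src->height;
    av_frame_get_buffer(dst, 0);  /* allocates dst->data planes */

    struct SwsContext *sws = sws_getContext(
        src->width, src->height, (enum AVPixelFormat)src->format,
        dst->width, dst->height, AV_PIX_FMT_RGB24,
        SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;
}
```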
[18:18:50 CET] <mkdir> yo
[18:54:07 CET] <furq> so i guess mpegts over icecast sort of works except mpv will just bail out before starting the stream most of the time
[20:15:11 CET] <SortaCore> what's the equivalent to "--disable-doc --disable-encoders --disable-decoders --disable-filters --disable-demuxers --disable-muxers --disable-protocols --disable-parsers --disable-hwaccels --disable-bsfs --disable-indevs --disable-outdevs"
[20:15:53 CET] <kepstin> SortaCore: just not compiling ffmpeg at all should be approximately equivalent to that
[20:16:24 CET] <SortaCore> lol
[20:16:29 CET] <SortaCore> *saves time*
[20:16:38 CET] <SortaCore> nah, I mean before I manually enable all the ones I need
[20:20:13 CET] <c_14> --disable-everything
[20:20:22 CET] <c_14> eh, you still need --disable-doc
[20:20:44 CET] <c_14> but everything covers encoders/decoders/hwaccels/muxers/demuxers/parsers/bsfs/protocols/devices/filters
[20:21:34 CET] <c_14> and you'll probably want --disable-autodetect
[20:31:12 CET] <SortaCore> what does autodetect do?
[20:31:53 CET] <c_14> configure auto-enables certain "system" libraries
[20:32:02 CET] <c_14> that just disables the autodetection in configure
[20:35:13 CET] <SortaCore> so not related to cpu features?
[20:37:23 CET] <c_14> nope
[20:39:30 CET] <furq> there's also --disable-all but that will more or less not build anything that you don't manually enable
[20:39:35 CET] <furq> including things like libavcodec and ffmpeg
[20:41:56 CET] <DHE> --disable-everything will build the libraries but with no codecs, formats and filters. makes them useless but they'll build and follow the APIs
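(Putting c_14's, furq's and DHE's advice together, a minimal-build configure line would look something like ./configure --disable-everything --disable-doc --disable-autodetect --enable-demuxer=matroska --enable-decoder=h264 --enable-protocol=file - the enabled components here are just placeholders for whatever you actually need.)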
[20:59:45 CET] <SortaCore> --enable-hwaccel="h264_nvdec" does nothing (with --disable-autodetect or not)
[21:00:40 CET] <SortaCore> hmm
[21:00:51 CET] <SortaCore> nah, it's disable-autodetect that kills all the hwaccels
[21:04:16 CET] <c_14> check your config.log
[21:04:25 CET] <c_14> might need zlib or something like that that's normally autodetected
[21:05:03 CET] <c_14> wait, --enable-nvdec
[21:05:06 CET] <c_14> did you pass that?
[21:05:11 CET] <c_14> because that's normally autodetected
[21:05:14 CET] <SortaCore> ah, nope
[21:05:42 CET] <SortaCore> but it didn't have any of them, and I passed hwaccels "h264_nvdec,h264_d3d11va2,h264_dxva2,mpeg4_nvdec"
[21:08:09 CET] <SortaCore> hm, probably --disable-dxva2=no
[21:10:19 CET] <c_14> just --enable-dxva2?
[22:03:42 CET] <furq> SortaCore: aren't those decoders, not hwaccels
[23:01:00 CET] <ddubya> can I make hardware decoder priority above software one, e.g. I have an app that opens h264 codec, can it default to h264_cuvid ?
[23:04:38 CET] <BtbN> if you write your code that way, sure
[23:11:32 CET] <ddubya> was hoping for some hacky environment override :-)
[23:11:50 CET] <BtbN> they are entirely separate decoders
[23:12:40 CET] <ddubya> well yeah, but there could be priority
[23:13:27 CET] <BtbN> you select a specific decoder via the API
[23:13:30 CET] <BtbN> not a generic codec
[23:13:41 CET] <BtbN> so any priority logic you will have to do yourself
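A sketch of the manual priority logic BtbN describes: ask for the hardware decoder by name first, and fall back to the generic software decoder for the same codec ID if it isn't available. The returned decoder is then opened as usual with avcodec_alloc_context3() and avcodec_open2(); as BtbN notes, there is no built-in priority list or environment override.

```c
#include <libavcodec/avcodec.h>

static AVCodec *pick_h264_decoder(void)
{
    AVCodec *dec = avcodec_find_decoder_by_name("h264_cuvid");
    if (!dec)  /* not compiled in or not found by name */
        dec = avcodec_find_decoder(AV_CODEC_ID_H264);
    return dec;
}
```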
[23:19:47 CET] <ddubya> If I'm using a hardware codec, context->threads should be fixed to 1, right?
[23:20:05 CET] <BtbN> at least h264_cuvid does not care
[23:20:33 CET] <ddubya> well just in case
[23:30:17 CET] <Cu5tosLimen> hi
[23:30:24 CET] <Cu5tosLimen> so I have some really poorly made avis
[23:31:51 CET] <hojuruku> ddubya: I know how to do it in gstreamer, it involves source code patching. ffmpeg is probably the same - how does ffmpeg choose which decoders / hwaccels to use, if any?
[23:32:33 CET] <ddubya> hojuruku, must be selected manually when opening codec
[23:32:54 CET] <ddubya> defaults to the non-prefixed version, e.g. "h264" instead of "h264_cuvid"
[23:33:05 CET] <ddubya> prefixed/suffixed
[23:33:43 CET] <hojuruku> yeah gstreamer uses priorities
[23:33:55 CET] <hojuruku> you don't patch your app's source you patch your gstreamer plugin's priority setting :)
[23:34:08 CET] <hojuruku> that's what I did to make openmax beat vaapi
[23:40:36 CET] <kerio> Cu5tosLimen: it's porn, isn't it
[23:40:50 CET] <Cu5tosLimen> kerio, I would never ;)
[23:41:09 CET] <Cu5tosLimen> so I'm trying to cut parts from these avis using timecodes recorded with smplayer
[23:41:52 CET] <Cu5tosLimen> using ffmpeg -i in.avi -ss 01:32:13.663 -to 01:32:38.000 -c copy out.avi for example
[23:42:03 CET] <Cu5tosLimen> but the timecodes from smplayer do not match ffmpeg's timecodes
[23:42:35 CET] <Cu5tosLimen> so inevitably I find the frame that ffmpeg starts at, get the same frame from smplayer, work out the delta, and then adjust the timecodes with this delta
[23:42:40 CET] <Cu5tosLimen> how can I get past this?
[23:43:03 CET] <kerio> use mpv :^)
[23:43:14 CET] <Cu5tosLimen> smplayer is running mpv
[23:43:19 CET] <kerio> is 01:32:13.663 a keyframe?
[23:43:29 CET] <Cu5tosLimen> no but the differences are huge
[23:43:32 CET] <Cu5tosLimen> like 2 mins or so
[23:43:51 CET] <Cu5tosLimen> for one avi it was datetime.timedelta(0, 204, 411000)
[23:44:01 CET] <Cu5tosLimen> for other it was datetime.timedelta(0, 88, 400000)
[23:44:08 CET] <Cu5tosLimen> I record timecodes by taking screenshots
[23:44:20 CET] <Cu5tosLimen> with this format specifier: cap_%F_%P_%02n
[23:45:17 CET] <kerio> something something -vsync passthrough?
[23:45:29 CET] <Cu5tosLimen> for smplayer?
[23:45:34 CET] <kerio> no, for ffmpeg
[23:45:43 CET] <Cu5tosLimen> no those are all the options I use
[23:45:45 CET] <kerio> but idk
[23:45:57 CET] <Cu5tosLimen> or are you saying I should try -vsync passthrough?
[23:46:05 CET] <kerio> yep
[23:46:09 CET] <kerio> or the other -vsync options idk
[23:46:19 CET] <Cu5tosLimen> ok thanks will try that for next one
[23:46:21 CET] <kerio> are these vfr or cfr videos?
[23:47:45 CET] <Cu5tosLimen> https://bpaste.net/raw/5559fd8d5422
[23:47:48 CET] <Cu5tosLimen> not sure how to tell
[23:47:53 CET] <Cu5tosLimen> that is mediainfo dump
[23:48:09 CET] <Cu5tosLimen> they use AVC codec
[23:50:45 CET] <Cu5tosLimen> this is ffmpeg output: http://termbin.com/1utc
[23:51:05 CET] <Cu5tosLimen> Stream #0:1[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, progressive), 640x480 [SAR 1:1 DAR 4:3], 30 fps, 30 tbr, 90k tbn, 60 tbc
[23:51:46 CET] <Cu5tosLimen> cfr = constant frame rate and vfr = variable frame rate I assume?
[23:51:50 CET] <Cu5tosLimen> I guess it is cfr
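(One thing worth checking here: the ffmpeg output above shows an MPEG-TS-style stream id ([0x100]) and a 90k tbn, so these "avis" are likely MPEG-TS data with a nonzero timestamp origin. Something like ffprobe -show_entries format=start_time in.avi would show the origin ffmpeg counts from, while mpv - and therefore smplayer - rebases displayed time to zero by default, which could account for a constant delta like the ones reported above.)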
[00:00:00 CET] --- Sat Feb  3 2018

