[Ffmpeg-devel-irc] ffmpeg.log.20180326

burek burek021 at gmail.com
Tue Mar 27 03:05:01 EEST 2018


[00:17:57 CEST] <draetheus> Does anyone have experience syncing multiple inputs in a combined mosaic output applied through a complex filter (like hstack + vstack)?
[00:21:51 CEST] <kepstin> I do stuff like that pretty regularly, what exactly is the issue you're having?
[00:23:09 CEST] <draetheus> So I'm taking in 4 RTMP inputs, applying hstack + vstack + scale down to a single output
[00:23:22 CEST] <draetheus> Even if I use the same input 4 times, the feeds always end up out of sync
[00:23:39 CEST] <draetheus> I do the same thing in OBS studio and it has no issues, all feeds are in sync
[00:24:11 CEST] <kepstin> right, so you're probably just hitting issues with the fact that ffmpeg isn't designed for live sources
[00:24:20 CEST] <draetheus> Ah
[00:24:32 CEST] <kepstin> it opens the inputs one at a time, and any data from the first will get buffered
[00:24:55 CEST] <kepstin> so the end result is that they'll be offset by whatever time there is between opening the inputs
[00:27:21 CEST] <draetheus> Argh, oh well, thanks
[00:27:32 CEST] <draetheus> I was hoping to have a headless solution
[00:28:04 CEST] <kepstin> i've seen people do workarounds like having one ffmpeg per input going to pipes and having a final one read the pipes and do the combining
[00:28:15 CEST] <kepstin> that way if you start the inputs at the same time they'll be synced
[00:28:39 CEST] <kepstin> but the proper fix is using an app better designed for this (which can use the ffmpeg libraries, of course)
[00:32:34 CEST] <draetheus> as far as I have found there is no other CLI/headless solution
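(For reference, a minimal sketch of the kind of 2x2 mosaic command being discussed; the RTMP URLs and the 1280x720 output size are placeholders, and all four inputs are assumed to share resolution and frame rate, which hstack/vstack require. It does nothing about the live-sync problem, it only shows the layout.)

    ffmpeg -i rtmp://in1 -i rtmp://in2 -i rtmp://in3 -i rtmp://in4 \
        -filter_complex "[0:v][1:v]hstack[top];[2:v][3:v]hstack[bottom];[top][bottom]vstack,scale=1280:720[out]" \
        -map "[out]" -c:v libx264 -f flv rtmp://out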
[00:44:57 CEST] <debianuser> user890104: if by any chance you're user234234 - you have some issue with the driver or hardware. "Input/output error" for arecord means that the driver for some reason couldn't get the audio from the card. Maybe some power or usb bandwidth issue. If the driver detected the reason, it should be in `dmesg` - check if there're any hints in there.
[12:05:59 CEST] <barhom> Anyone know of an nvidia patch (nvenc patch) to remove the 2stream limitation?
[12:08:37 CEST] <BtbN> no
[12:08:49 CEST] <BtbN> It's not ffmpeg imposing that limit
[12:09:04 CEST] <BtbN> you need a quadro card to not have it
[12:10:09 CEST] <barhom> BtbN: I know it isnt ffmpeg, its the nvidia driver
[12:10:19 CEST] <BtbN> Which is closed source.
[12:10:27 CEST] <barhom> doesnt mean you cant patch it
[12:10:38 CEST] <barhom> (morals are put aside right now)
[12:10:38 CEST] <BtbN> That's exactly what it means
[12:10:52 CEST] <BtbN> The only thing you can patch is the binding layer that wires it into the Kernel
[12:10:57 CEST] <BtbN> And that's not what generates the limit
[12:11:26 CEST] <barhom> I have it from a good source it does work, some gdb magic and change some instructions
[12:13:11 CEST] <barhom> https://developer.nvidia.com/video-encode-decode-gpu-support-matrix < shows number of chips and such of the tesla/quadro cards. Do we have such info for the desktop ones?
[12:13:31 CEST] <BtbN> nope
[12:13:35 CEST] <BtbN> but the Chip names match up
[12:13:45 CEST] <BtbN> each generation has the same silicon
[12:18:46 CEST] <barhom> BTBN: Which NVENC should I look up for comparison for the GTX960 4gb then?
[12:21:16 CEST] <BtbN> Should be Maxwell
[12:22:36 CEST] Action: JEEB double-blinks at scale=w=-2:h=480 not giving expected DAR
[12:22:50 CEST] <JEEB> I guess it tries to match the SAR of the input
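(A quick way to check what scale actually produced; a sketch, assuming the output file is called out.mkv:)

    ffprobe -v error -select_streams v:0 -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio -of default=nw=1 out.mkv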
[12:52:42 CEST] <barhom> BtbN: Let's assume the 2 stream limitation didn't exist. How much RAM do you think we need to fully utilize the nvenc chip? 2 or 4gb?
[12:52:48 CEST] <barhom> I doubt we need the 6 or 8gb
[12:53:11 CEST] <barhom> RAM could be the bottleneck without the 2 stream limitation
[13:21:27 CEST] <BtbN> barhom, you need vram
[13:21:34 CEST] <BtbN> how much depends entirely on your configuration
[13:21:45 CEST] <BtbN> 4K encoding obviously needs more
[13:31:44 CEST] <barhom> vram?
[13:32:22 CEST] <barhom> BtbN: We're talking about the same thing no? The ram on the GFX, 2,4,6,8gb
[14:00:40 CEST] <DHE> does the video encoder, which is discrete silicon, use the same memory as the main GPU? I don't know. it might...
[14:04:19 CEST] <BtbN> Of course it does
[14:47:47 CEST] <Liam__> Hi
[14:47:59 CEST] <Liam__> I need help installing ffmpeg on mac.
[14:48:10 CEST] <Liam__> is there somebody who can help me?
[14:55:38 CEST] <pmjdebru1jn> Liam__: define "install"
[14:56:19 CEST] <pmjdebru1jn> https://evermeet.cx/ffmpeg/ - you can get a static binary there, it's just a single executable IIRC
[14:57:02 CEST] <pmjdebru1jn> for example https://evermeet.cx/ffmpeg/ffmpeg-3.4.2.dmg
[14:57:10 CEST] Action: pmjdebru1jn hasn't got any personal experience with OS X
[14:58:54 CEST] <dystopia_> you can add it to your paths so it can be called from anywhere
[15:02:55 CEST] <Liam__> thanks
[15:02:58 CEST] <Liam__> i got the file
[15:03:06 CEST] <Liam__> but i dont know how to make it run
[15:04:47 CEST] <pmjdebru1jn> Liam__: ffmpeg is a commandline tool
[15:04:51 CEST] <Liam__> yes
[15:04:57 CEST] <pmjdebru1jn> so just go the path where you have the binary, and run it from a terminal
[15:06:28 CEST] <Liam__> cd ffmpeg
[15:06:32 CEST] <Liam__> and a bunch of files
[15:06:36 CEST] <Liam__> i opened readme
[15:06:42 CEST] <Liam__> and tried to follow instructions
[15:07:13 CEST] <Liam__> wants me to use makefile and build a tre
[15:07:14 CEST] <Liam__> tree
[15:07:50 CEST] <Liam__> i mean install.md
[15:08:03 CEST] <pmjdebru1jn> huh?
[15:08:12 CEST] <pmjdebru1jn> that sounds like sources?
[15:08:14 CEST] <pmjdebru1jn> not a binary
[15:08:23 CEST] <Liam__> oh
[15:08:31 CEST] <Liam__> so a binary lets me use it instantly?
[15:08:36 CEST] <pmjdebru1jn> are you looking inside? https://evermeet.cx/ffmpeg/ffmpeg-3.4.2.dmg ?
[15:11:02 CEST] <Liam__> CONTRIBUTING.md		README.md		libavformat COPYING.GPLv2		RELEASE			libavresample COPYING.GPLv3		compat			libavutil COPYING.LGPLv2.1	config.h		libpostproc COPYING.LGPLv3		configure		libswresample CREDITS			doc			libswscale Changelog		ffbuild			presets INSTALL.md		fftools			tests LICENSE.md		libavcodec		tools MAINTAINERS		libavdevice Makefile		libavfilter
[15:11:10 CEST] <Liam__> thats inside the file
[15:11:39 CEST] <Liam__> is this a binary?
[15:12:04 CEST] <pmjdebru1jn> find -name ffmpeg
[15:15:57 CEST] <Liam__> it says illegal option --n
[15:16:20 CEST] <pmjdebru1jn> find . -name ffmpeg
[15:16:22 CEST] <pmjdebru1jn> maybe
[15:16:33 CEST] <Liam__> command not found
[15:16:42 CEST] <pmjdebru1jn> huh?
[15:16:42 CEST] <Liam__> oh the dot
[15:17:49 CEST] <Liam__> ls CONTRIBUTING.md		README.md		libavformat COPYING.GPLv2		RELEASE			libavresample COPYING.GPLv3		compat			libavutil COPYING.LGPLv2.1	config.h		libpostproc COPYING.LGPLv3		configure		libswresample CREDITS			doc			libswscale Changelog		ffbuild			presets INSTALL.md		fftools			tests LICENSE.md		libavcodec		tools MAINTAINERS		libavdevice Makefile		libavfilter
[15:18:08 CEST] <Liam__> it looks like the same
[15:18:14 CEST] <pmjdebru1jn> I didn't ask ls
[15:18:16 CEST] <pmjdebru1jn> I asked
[15:18:18 CEST] <pmjdebru1jn> find . -name ffmpeg
[15:18:32 CEST] <Liam__> nothing happened
[15:18:48 CEST] <pmjdebru1jn> that would imply the DMG hasn't got the binary
[15:20:08 CEST] <Liam__> okay..
[15:20:25 CEST] <Liam__> im trying this then, no? ... https://www.npmjs.com/package/ffmpeg-static
[15:23:11 CEST] <pmjdebru1jn> Liam__: I just checked the .dmg, it has a single file in it
[15:23:14 CEST] <pmjdebru1jn> called ffmpeg
[15:23:18 CEST] <pmjdebru1jn> so I have no clue what you're doing
[15:24:20 CEST] <pmjdebru1jn> Liam__: just extract the ffmpeg binary from the dmg, and place it where you want
[15:24:20 CEST] <Liam__> ^^
[15:24:45 CEST] Last message repeated 1 time(s).
[15:24:45 CEST] <Liam__> its a exec file
[15:24:59 CEST] <pmjdebru1jn> it's an executable yes
[15:25:10 CEST] <Liam__> when i open it
[15:25:22 CEST] <Liam__> terminal opens and it says process completed
[15:25:47 CEST] <Liam__> but when im in the terminal
[15:25:51 CEST] <pmjdebru1jn> it's a terminal application, you need to start it _from_ a terminal
[15:25:51 CEST] <Liam__> and try ffmpeg
[15:25:57 CEST] <Liam__> it says command not found
[15:26:03 CEST] <pmjdebru1jn> because it's not in your PATH
[15:26:07 CEST] <pmjdebru1jn> as suggested earlier
[15:26:10 CEST] <Liam__> oh
[15:26:14 CEST] <pmjdebru1jn> /Applications/whatever/ffmpeg
[15:26:18 CEST] <Liam__> so i cd into ffmpeg
[15:26:28 CEST] <Liam__> and the ffmpeg blabla
[15:26:39 CEST] <pmjdebru1jn> ?
[15:26:44 CEST] <pmjdebru1jn> if you're in the directory
[15:26:48 CEST] <pmjdebru1jn> you need to start it with
[15:26:49 CEST] <pmjdebru1jn> ./ffmpeg
[15:26:57 CEST] <pmjdebru1jn> as it's still not in your PATH
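(A minimal sketch of the two options described above, assuming the extracted binary ended up in ~/Downloads; the paths are placeholders:)

    cd ~/Downloads
    ./ffmpeg -version                 # run it with an explicit path
    sudo cp ffmpeg /usr/local/bin/    # or copy it somewhere already in PATH
    ffmpeg -version                   # after which plain "ffmpeg" works from any directory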
[15:27:45 CEST] <Liam__> oh
[15:29:23 CEST] <Liam__> ok thank you
[15:29:35 CEST] <Liam__> i think im running it now
[15:38:55 CEST] <Liam__> Oh man Pedrosouza it worked :D
[15:39:28 CEST] <Liam__> thanks a lot for your help and your patience :D :D
[15:47:23 CEST] <colekas> hello friends, I'm looking to do a simple remuxing of packets using mpegts, however I'm kind of confused by the results of setting muxrate before calling the write_header command
[15:47:35 CEST] <colekas> is the muxrate in bits?
[15:48:18 CEST] <colekas> do I need to set anything else?
[15:48:57 CEST] <DHE> mpegts -muxrate setting is in bits per second
[15:49:10 CEST] <DHE> with the usual k,M,G suffixes accepted
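(For example, a constant-rate mpegts remux sketch; the multicast addresses and the 8M figure are placeholders:)

    ffmpeg -i udp://239.0.0.1:1234 -c copy -f mpegts -muxrate 8M udp://239.0.0.2:1234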
[15:57:38 CEST] <th3_v0ice> Does anyone know how I can force the ffmpeg decoder to not care about duplicate frames and just give me whatever it decoded from the packets? I am using the API.
[15:58:46 CEST] <JEEB> the decoder shouldn't duplicate frames if they're not duplicated in the original stream
[16:01:10 CEST] <iive> actually i do not remember what ffmpeg12 does with the picture repeat...
[16:07:46 CEST] <th3_v0ice> JEEB: It seems that in some videos that I process some frames either have similar timestamps or something and ffmpeg marks them as duplicates and gives me one frame. I want the other one also because I need to do frame to frame comparison between input and output. The interesting thing is that while decoding to YUV FFmpeg is also not displaying those frames. For example x264 bitstream has 1500 frames, FFmpeg while decoding to YUV only gives 1490.
[16:09:41 CEST] <JEEB> that sounds like ffmpeg.c logic, not what lavc would be returning
[16:09:50 CEST] <JEEB> also are you sure you are flushing the decoder?
[16:10:48 CEST] <JEEB> https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
[16:10:55 CEST] <JEEB> related to the flushing part here
[16:15:44 CEST] <th3_v0ice> JEEB: I was using avcodec_decode_video2(), not the latest API, is this required for the older API? Because if I saw correctly avcodec_decode_video2() is just calling the new API instead of me.
[16:18:11 CEST] <JEEB> yes, you will have to flush the decoder always
[16:18:15 CEST] <JEEB> with old or new API
[16:18:22 CEST] <transcodeine> howdy.
[16:18:41 CEST] <transcodeine> still working on this 'too many packets buffered' error
[16:18:41 CEST] <JEEB> and the old API can be buggy, and it's been deprecated for more than a year or two now. so if possible you should start moving to the push/pull API
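(A rough sketch of the drain step being described, using the push/pull API; error handling is trimmed and avctx/frame are assumed to be set up elsewhere:)

    #include <libavcodec/avcodec.h>

    /* Call after the last real packet: a NULL packet puts the decoder into
     * draining mode, then receive_frame returns whatever is still buffered. */
    static void flush_decoder(AVCodecContext *avctx, AVFrame *frame)
    {
        avcodec_send_packet(avctx, NULL);
        while (avcodec_receive_frame(avctx, frame) >= 0) {
            /* ... use the drained frame ... */
            av_frame_unref(frame);
        }
    }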
[16:19:42 CEST] <th3_v0ice> JEEB: Ok, will do. Thanks!
[16:19:44 CEST] <transcodeine> cur_dts is invalid (this is harmless if it occurs once at the start per stream)     Last message repeated 2040 times
[16:20:09 CEST] <transcodeine> this repeats about 25x before it bombs out with the 'too many packets buffered' error
[16:21:08 CEST] <th3_v0ice> transcodeine: Are you generating the PTS and DTS for packets?
[16:21:22 CEST] <transcodeine> yes sir, JEEB helped me determine that
[16:21:36 CEST] <transcodeine> via lboxdumper
[16:21:40 CEST] <transcodeine> boxdumper
[16:22:07 CEST] <th3_v0ice> I should've asked first, are you using API or CLI?
[16:22:15 CEST] <transcodeine> cli
[16:22:36 CEST] <th3_v0ice> Did you try with -reset_timestamps?
[16:22:48 CEST] <transcodeine> i did not.  i will
[16:23:10 CEST] <transcodeine> should i leave -max_muxing_queue_size?
[16:23:19 CEST] <transcodeine> doesn't seem to help
[16:23:47 CEST] <transcodeine> although in the bug description some people have indicated it was a temporary workaround..
[16:23:49 CEST] <th3_v0ice> Try without it first.
[16:23:52 CEST] <transcodeine> ok
[16:24:30 CEST] <th3_v0ice> Then you can try with that option as well.
[16:26:31 CEST] <transcodeine> i don't see that option
[16:33:39 CEST] <barhom> BtbN, DHE: Yes it uses the same memory as the GPU. But that was one of my questions. All cards have the same NVENC chip. Question is how much memory you want on the GPU to fully utilize the NVENC chip. 2,3,4gb?
[16:34:57 CEST] <th3_v0ice> transcodeine: -reset_timestamps 1
[16:37:10 CEST] <transcodeine> Too many packets buffered for output stream 0:1.
[16:37:13 CEST] <transcodeine> same error with that option
[16:38:03 CEST] <transcodeine> i'm passing "./ffmpeg -v debug -hwaccel cuvid -c:v h264_cuvid -i src.mov -c:a aac -c:v h264_nvenc -max_muxing_queue_size 9999 test.mp4 -ar 48000 -ac 2 -reset_timestamps 1"
[16:38:14 CEST] <transcodeine> with or without max muxing, doesn't matter.
[16:39:15 CEST] <DHE> barhom: I don't know enough about the details to make a call. but I would imagine much less than 50 megabytes per video... 1080p with ~4 frames lookahead at 4:2:0 doesn't need all that much RAM
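(Rough arithmetic behind that estimate, assuming 8-bit 4:2:0, i.e. 1.5 bytes per pixel:)

    1920 * 1080 * 1.5 bytes                              ~ 3 MB per 1080p frame
    3 MB * (4 lookahead + a handful of reference frames) ~ 20-30 MB per stream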
[16:42:50 CEST] <th3_v0ice> transcodeine: I assume that audio is 0:1 stream. If you turn it off does it work? I am sorry but that is all that I can think of right now.
[16:44:24 CEST] <transcodeine> fwiw it works w/o using hardware encoding if that gives anything away
[16:50:21 CEST] <transcodeine> i'm going to dump the debug to pastebin
[16:50:52 CEST] <transcodeine> https://pastebin.com/KULCVz1V
[16:51:46 CEST] <JEEB> ok, that just sounds like the hw encoder is derping up
[16:51:49 CEST] <JEEB> try with -debug_ts
[16:51:54 CEST] <JEEB> and see what comes out of the encoder
[16:52:00 CEST] <transcodeine> def agree
[16:53:34 CEST] <transcodeine> output:
[16:53:35 CEST] <transcodeine> https://pastebin.com/2YdCWULk
[16:58:04 CEST] <transcodeine> i got a bunch of warnings at compile time, maybe i need to revisit my compile
[16:59:38 CEST] <th3_v0ice> If you output to mov again, does it work?
[17:00:46 CEST] <confusedjoe32> hey
[17:00:50 CEST] <confusedjoe32> can you please help me guys
[17:01:04 CEST] <confusedjoe32> im trying to watch a rtsp stream, but the video is messed up
[17:01:29 CEST] <transcodeine> negative
[17:01:36 CEST] <transcodeine> still bombs
[17:02:47 CEST] <confusedjoe32> hello ?
[17:46:42 CEST] <colekas> how come none of the muxing/remuxing examples in the source code set the muxrate?
[17:47:06 CEST] <colekas> I'm getting PCR errors when trying to use ffmpeg to remux a multicast stream
[17:47:18 CEST] <colekas> so I thought setting muxrate in the av_dict before write_output_headers would work
[17:47:27 CEST] <colekas> but I'm getting odd behavior, like the write blocking
[17:47:32 CEST] <colekas> and getting gigabytes of data
[17:50:30 CEST] <JEEB> because the examples don't really cover any specific use case other than the most basic one the author had in mind, I think
[17:58:06 CEST] <transcodeine> Jeeb and Th3_v0ice- so hw encoding works fine for mp4 > mp4. its > mov to mp4 that causes that too many packets buffered error
[17:58:35 CEST] <transcodeine> if i remove -hwaccel cuvid it will encode- but no faster than cpu :\
[17:59:14 CEST] <transcodeine> i'm getting 14x when it works- too bad i can't get this mov to transcode
[18:01:42 CEST] <JEEB> check the timescale/time base of the input for shits and giggles if the difference is the input file
[18:01:53 CEST] <JEEB> and as I noted, start logging stuff with -debug_ts
[18:02:03 CEST] <JEEB> 2> long_darn.log
[18:02:04 CEST] <JEEB> for example
[18:02:12 CEST] <JEEB> will throw the stderr into the long_darn.log
[18:02:14 CEST] <JEEB> file
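(Putting those pieces together, roughly; the filenames are placeholders:)

    ffmpeg -debug_ts -i src.mov -c:a aac -c:v h264_nvenc test.mp4 2> long_darn.log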
[18:04:47 CEST] <moshisushi> transcodeine: good nickname m8!
[18:05:51 CEST] <pkeroulas> Hello everyone, I'm trying to add support of the interlaced format in libavformat/rtpdec_rfc4175.c and re-encode the result to h264. I can either keep the received fields separated and make AVFrames from each of them OR I can reconstruct progressive frames (with the help of yadif filter). libx264 seems to work either way. So my question is: which method should I use? Is it a good practice to keep interlaced fields in the decoding
[18:05:51 CEST] <pkeroulas> process?
[18:07:30 CEST] <JEEB> decoding in lavc for H.264 interlacism gives you two fields in a single picture
[19:08:29 CEST] <transcodeine> moshi :)
[19:09:05 CEST] <transcodeine> jeeb- def the input file is the issue along with the hw encoding
[19:12:06 CEST] <transcodeine> stripping all audio didn't seem to improve things either- e.g. -an -vcodec copy
[19:17:59 CEST] <transcodeine> is it possible that what's happening is that the src file which is 4:2:2 is being rejected by the hw encoder?
[19:19:05 CEST] <transcodeine> from nvidia: "4:2:2 chroma subsampling is not supported by NVENC hardware (for encoding) or NVDEC hardware (for decoding)."
[19:25:38 CEST] <furq> transcodeine: it's probably being rejected by the decoder then
[19:28:38 CEST] <transcodeine> can i split off the color conversion to software?
[19:38:34 CEST] <JEEB> if it was encoding, yes
[19:38:39 CEST] <JEEB> most likely it's the decoding, though :P
[19:39:59 CEST] <transcodeine> can you suggest the filter/switch for that to offload decoding to cpu?
[19:40:17 CEST] <transcodeine> i'm screwing around but can't seem to find the right mix
[19:40:40 CEST] <transcodeine> oh wait..nevermind. you're saying it's the decoder.
[19:40:43 CEST] <transcodeine> blah
[19:42:33 CEST] <JEEB> or well, it all depends on which part of the process is slow :P but in any case you wouldn't be keeping it all neatly in GPU memory
[19:42:39 CEST] <JEEB> which already introduces speed loss
[19:43:52 CEST] <transcodeine> i can't tell can i?
[19:44:41 CEST] <transcodeine> nvenc won't do 4:2:2 but it will do 4:4:4- so i COULD upsample to 4:4:4 in hw then transcode to 4:2:0- the desired output
[19:48:03 CEST] <JEEB> uhh
[19:48:10 CEST] <JEEB> what will 4:4:4 support in the encoder help you again?
[19:48:25 CEST] <JEEB> if your input is 4:2:2 and that's the part that has to be decoded
[19:48:38 CEST] <JEEB> also no, nvidia does not support decoding of 4:4:4 in hardware
[19:48:46 CEST] <JEEB> it's an encoding-only feature
[19:50:45 CEST] <transcodeine> yeah i'm grasping at straws here
[19:52:36 CEST] <JEEB> just decode in software, do the colorspace conversion in software and push either into x264 preset superfast or the darn hw encoder :P
[19:55:11 CEST] <transcodeine> right.  i'm not certain how to compose that command line to do that.
[19:55:28 CEST] <pkeroulas> JEEB, thank you.
[20:05:24 CEST] <furq> transcodeine: presumably just remove -c:v h264_cuvid and add -vf format=yuv420p
[20:05:42 CEST] <furq> you should still be able to use nvenc after that
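(So, adapted from the command pasted earlier, roughly this; still just a sketch:)

    ffmpeg -i src.mov -vf format=yuv420p -c:v h264_nvenc -c:a aac -ar 48000 -ac 2 test.mp4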
[20:09:06 CEST] <transcodeine> furq- that bombs at the beginning if replaced
[20:12:08 CEST] <transcodeine> ok i got the cmdline right.
[20:12:12 CEST] <transcodeine> it's very slow w/o hw
[20:12:34 CEST] <transcodeine> bg
[20:27:58 CEST] <transcodeine> tweaked it to get about 5.25x
[20:32:46 CEST] <mssng_chrs> Hello there, I'm trying to use ffmpeg with the new libndi output. I'd like to stream my USB mic (audio only) over NDI to another machine so I can use it as input in OBS Studio. I was able to find how to get the mic as input using Alsa, but I haven't found anything on the settings for audio-only NDI output. Has anyone played with NDI yet?
[21:43:24 CEST] <mssng_chrs> Found a solution to my problem. NDI is working flawlessly :D See you
[21:45:40 CEST] <blue_misfit> hey folks! I've been working on a process for ABR encoding of HEVC using ffmpeg + libx265 and I'm trying to get the GOP part of things optimized
[21:45:53 CEST] <blue_misfit> I'm using parallel chunk encoding, so the simplest approach is to start with fixed GOP
[21:46:15 CEST] <blue_misfit> e.g. keyint=48:scenecut=0 to make 2 second fixed GOPs assuming 24p input
[21:47:05 CEST] <blue_misfit> first - is my above assumption correct (that keyint=48:scenecut=0) in the -x265-params will absolutely insert an IDR every 48 frames and no others?
[21:47:44 CEST] <blue_misfit> second - is there a good way to guarantee that I get an IDR every 48 frames, but also let the encoder use non-IDR I frames whenever it wants (to improve quality)?
[21:48:20 CEST] <blue_misfit> seeing some weird advice online about using force_key_frames, setting keyint to twice what I actually want, and setting keyint-min to what I actually want, which seems really really strange
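(For the second question, a sketch of that often-suggested recipe, not verified here: force_key_frames pins the 2-second IDR cadence, while a wider keyint leaves scenecut free to add I-frames in between. For 24p input it would look roughly like this; the filenames are placeholders:)

    ffmpeg -i in.mov -c:v libx265 \
        -force_key_frames "expr:gte(t,n_forced*2)" \
        -x265-params "keyint=96:min-keyint=48" \
        out.mp4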
[22:22:14 CEST] <wfbarksdale> does anyone know the difference between these two compiler options? "--disable-vda            disable Apple Video Decode Acceleration code [autodetect]" and "--disable-videotoolbox" ?
[22:22:38 CEST] <wfbarksdale> i thought videotoolbox WAS the apple hardware acceleration?
[22:22:53 CEST] <wfbarksdale> so what is this "vda" ?
[22:23:37 CEST] <kerio> The following frameworks are no longer part of the OS X SDK as of version 10.11: VideoDecodeAcceleration. Use VideoToolbox.framework instead.
[22:25:59 CEST] <wfbarksdale> thanks kerio!
[22:34:09 CEST] <kerio> o no ffmpeg HEAD doesn't build in brew D:
[22:34:21 CEST] <kerio> pls halp
[22:34:55 CEST] <kerio> use of undeclared identifier 'AV_INPUT_BUFFER_PADDING_SIZE'
[23:58:24 CEST] <BenLubar> ffmpeg isn't allowing me to enter this as the filter for setpts or asetpts: https://pastebin.com/raw/ikYSZp3s
[23:59:43 CEST] <BenLubar> oh, it's because of the commas, isn't it
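(Without the pastebin content, as a general illustration: commas inside an expression are filtergraph separators, so they have to be escaped with \, or the whole expression wrapped in single quotes. The expression below is hypothetical:)

    ffmpeg -i in.mp4 -vf "setpts=if(gte(T\,60)\,PTS+1/TB\,PTS)" out.mp4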
[00:00:00 CEST] --- Tue Mar 27 2018

