[Ffmpeg-devel-irc] ffmpeg.log.20170213

burek burek021 at gmail.com
Tue Feb 14 03:05:01 EET 2017


[01:52:50 CET] <MHD123> HELP
[01:53:00 CET] <MHD123> Hello friends
[01:53:40 CET] <MHD123> i need urgent help, i am no coder .. but i am using ffmpeg
[01:54:41 CET] <MHD123> i need the code to resize a .tif sequence and put the resized files in a new folder with the same naming without quality loss
[02:00:15 CET] <DHE> wonder if using imagemagick might be a better way to do it?
[02:01:14 CET] <Diag> Yeah i agree with DHE, imagemagick would be better
[02:01:40 CET] <MHD123> ok thanks i'll try ..
[02:02:19 CET] <MHD123> will it do it without encoding? only dimensions changing?
[02:08:30 CET] <DHE> isn't a tiff usually lossless? (unless jpeg mode is enabled)
[02:09:57 CET] <Diag> 99% of the time tiff is just like rle or something
[02:10:10 CET] <Diag> at least i think it is
[02:10:33 CET] <MHD123> yeah it is
[02:10:39 CET] <MHD123> you are right
[02:11:23 CET] <Diag> ah yeah
[02:11:30 CET] <Diag> all 3 types of tiff are rle
[02:13:35 CET] <MHD123> i just downloaded imagick
[02:13:53 CET] <MHD123> i need to scale 131k images
[02:14:11 CET] <Diag> holy
[02:14:12 CET] <Diag> fuck
[02:14:17 CET] <Diag> do you have photoshop?
[02:14:17 CET] <MHD123> so it is not possible to do so, i can do that on encoder
[02:14:24 CET] <MHD123> yeah
[02:14:29 CET] <Diag> use photoshop
[02:14:32 CET] <MHD123> encoder will take 10 hours
[02:14:45 CET] <MHD123> so i thought ffmpeg is faster
[02:15:02 CET] <Diag> idunno
[02:15:17 CET] <Diag> ive always used photoshop for shit like that
[02:15:23 CET] <MHD123> i found the command but i can't repeat and rename
[02:15:43 CET] <MHD123> is there any way to do such batch on photoshop?
[02:15:56 CET] <Diag> yes
[02:16:07 CET] <Diag> if you go up to view > actions
[02:16:13 CET] <Diag> start recording actions
[02:16:18 CET] <Diag> resize
[02:16:20 CET] <Diag> save
[02:16:23 CET] <Diag> close
[02:16:34 CET] <Diag> then just do the action on the folder of shit
[02:16:55 CET] <Diag> https://helpx.adobe.com/photoshop/using/processing-batch-files.html
[02:17:57 CET] <MHD123> ok i am trying
[02:27:11 CET] <MHD123> no way this will take a decade
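Neither tool's command line is shown in the log; a minimal sketch of both approaches, assuming the frames are named like img_000001.tif in a folder called frames and the target width is 1920 (the naming pattern and size are hypothetical). Either way the pixels are resampled by the scaler; the TIFF output itself should stay lossless.

    # ImageMagick: resize every frame into ./resized, keeping the original names
    mkdir -p resized
    mogrify -path resized -resize 1920x frames/*.tif

    # ffmpeg: read the frames as an image sequence and write a new sequence
    # (if the numbering doesn't start at 1, add -start_number on input and output)
    ffmpeg -i frames/img_%06d.tif -vf scale=1920:-1 resized/img_%06d.tif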
[02:29:10 CET] <Prelude2004c> hey guys.. using ffmpeg with cuvid to decode and transcode a udp stream
[02:29:20 CET] <Prelude2004c> each one is using about 1 GB of RAM on the GPU card... any way to limit RAM usage ?
[03:09:40 CET] <Prelude2004c> hey guys.. question .. using h264_nvenc with quadro M4000 card.. can it do 10bit encoding?
[03:10:44 CET] <DHE> nvenc does have some 10bit support. gotta check that specific card's capabilities though
[03:10:44 CET] <hatsunearu> hello! i was following this tutorial and someone told me that this is super old and has a lot of deprecated stuff in it. i'm wondering if there is a good community guide that people approve of. http://dranger.com/ffmpeg/tutorial01.html
[03:19:33 CET] <Prelude2004c> how do i turn on 10bit encoding ?
[03:19:39 CET] <Prelude2004c> to check if card supports it
[03:20:44 CET] <Prelude2004c> looks like its just h265
[03:21:05 CET] <Prelude2004c> and only pascal cards
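A quick way to check is to ask the encoder itself; a sketch, assuming a build with nvenc enabled and a Pascal-or-newer GPU for the actual 10-bit encode (file names are placeholders):

    # list the pixel formats the encoder accepts (look for p010le)
    ffmpeg -h encoder=hevc_nvenc

    # 10-bit HEVC encode
    ffmpeg -i input.mkv -c:v hevc_nvenc -profile:v main10 -pix_fmt p010le output.mkv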
[04:16:24 CET] <Werel> I'm following a tutorial to install ffmpeg and it recommends installing other libraries and formats like AAC and MP3 before making and installing ffmpeg ( http://www.jeffreythompson.org/blog/2014/11/13/installing-ffmpeg-for-raspberry-pi/ ). It went through the steps for x264. Does this mean that ffmpeg does not come with those codecs built in? And if not, is there a recommended way to do it? The tutorial I am following looks like it can quickly become overwhelming when trying to make sure I have all my codec bases covered.
[04:16:58 CET] <DHE> some codecs are provided externally. AAC can be provided with libfdk, but an AAC encoder is available (and good enough to use) in recent versions
[04:17:13 CET] <DHE> x264 provides H264 encoding; ffmpeg includes its own decoder
[04:17:15 CET] <DHE> and so on and so forth
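In practice that means enabling the external libraries at configure time; a sketch of the relevant flags (pick only the ones you actually need and have development packages for; the native AAC encoder needs no extra flag):

    # --enable-gpl is needed for libx264, --enable-nonfree for libfdk-aac
    ./configure --enable-gpl --enable-nonfree \
                --enable-libx264 --enable-libmp3lame --enable-libfdk-aac
    make -j4 && sudo make install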
[04:19:19 CET] <Werel> Ok.  My end use is the server feature: to have ffmpeg ( hopefully, I haven't mapped this all out yet ) take HLS or MP4 streams from OBS and serve them to other clients dialing in from, for example, VLC or an embedded video player. Essentially hosting my own small-scale live streaming like Twitch, but just for myself and a couple of friends.
[04:22:11 CET] <DHE> well, ffserver is garbage and not to be used if at all possible
[04:22:32 CET] <DHE> if OBS produces pre-encoded video, you don't need codec support in ffmpeg itself to do format-only conversions
[04:24:56 CET] <Werel> I believe it is pre-encoded as you say.  Really, I'm just looking for something to re-bounce it out to a couple clients, and I wanted to centralize it on my raspberry pi. That's my limitation: using it on the pi, and that it's hls or mp4, just not rtmp. I used MonaServer with great success... until I realized it needed flash players, which mobile doesn't like, so I'm looking for some tool that can accept my OBS stream in those formats and just serve it back out.
[04:27:32 CET] <DHE> well nginx-rtmp does have more formats than just rtmp. and hls is kinda nice in that you can just have ffmpeg process a live stream with an output in the webhosting directory and it works
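A sketch of that format-only step: take the already-encoded feed and remux it into HLS segments inside the web root, with no codec work at all (the URL, paths, and segment lengths here are assumptions):

    ffmpeg -i rtmp://localhost/live/mystream -c copy \
           -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
           /var/www/html/live/mystream.m3u8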
[04:33:48 CET] <Werel> I looked into nginx momentarily, but the ffmpeg webhosting is what I was really looking into
[04:34:02 CET] <Werel> btw, thank you for all your information so far, it's been really helpful :)
[04:36:22 CET] <Werel> well, to go back to my original reason for coming in here: in terms of encoding AAC and MP3 and, hmm.. OGG (not sure what else would be useful, MP4?), is that pretty much built in at this stage, so I don't have to worry about finding additional libraries to support those when building ffmpeg?
[04:37:04 CET] <DHE> MP4 is a format, not a codec. ffmpeg can convert an MP4 into HLS with or without x264 and AAC codecs, provided the MP4 already has H264+AAC in it
[04:37:23 CET] <DHE> this is where OBS' existing conversion comes into play, and is something I can't comment on
[04:38:06 CET] <DHE> and I would expect a Pi has the CPU power to convert the audio to AAC in realtime, but not the video.
[04:38:29 CET] <DHE> so this is already getting over my head because I don't have experience with OBS to know what it offers and how to make use of it
[04:40:03 CET] <Werel> I'm working on the premise that when I choose an output format for OBS, that's the codec it is sending in. And it offers a number of them: HLS, FLV, etc. I wouldn't want the pi to be converting it; it was able to redistribute flv without much issue.. but that's the scope of my project :)
[04:40:36 CET] <Werel> Yeah, I understand containers and codecs, I just don't have the complete vocabulary for which ones are which :)
[04:41:03 CET] <Werel> Thank you again! I'm going to stay in here a bit, but will be continuing my install.
[04:42:46 CET] <echelon> hey
[04:43:00 CET] <Werel> hello!
[04:43:39 CET] <echelon> i'd like to blackout the first few seconds of a video stream, is there an easy way to do that?
[04:53:04 CET] <echelon> i'd like to keep the audio, that's why i don't want to outright crop it
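Nobody answers this in the log; one hedged way to do it is the drawbox filter with a timeline expression, which blacks out the picture for the first few seconds while the audio is stream-copied (the 5-second window and file names are assumptions, it requires a build whose drawbox supports the fill thickness, and the video does get re-encoded):

    ffmpeg -i input.mp4 \
           -vf "drawbox=color=black:t=fill:enable='lt(t,5)'" \
           -c:v libx264 -c:a copy output.mp4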
[05:01:54 CET] <hatsunearu> i'm trying to stream my webcam video with as low latency as possible
[05:02:03 CET] <hatsunearu> like, sub 100ms
[05:02:08 CET] <hatsunearu> is it possible with a laptop?
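Sub-100 ms glass-to-glass is ambitious for a software pipeline, but the encoder-side latency can at least be minimized; a sketch assuming a Linux laptop with a V4L2 webcam and a receiver doing minimal buffering (device, resolution, and destination address are placeholders). Most of the remaining latency tends to sit in the player's buffer rather than in ffmpeg.

    ffmpeg -f v4l2 -framerate 30 -video_size 640x480 -i /dev/video0 \
           -c:v libx264 -preset ultrafast -tune zerolatency -g 30 \
           -f mpegts udp://192.168.1.50:1234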
[05:07:44 CET] <FishPencil> Do the number of cores on a CPU matter for encoding, or does only the clock speed?
[05:12:48 CET] <Diag> both?
[05:28:08 CET] <FishPencil> Can two CPUs be compared for performance? For example, an 8 core and 4 core, assuming I know the clock speeds? I would think it would be: encode time/(cores*GHz)
[05:33:19 CET] <furq> FishPencil: clock speed and core count isn't enough
[05:33:24 CET] <furq> otherwise a pentium d would be better than a core 2 duo
[05:34:05 CET] <FishPencil> What is needed?
[05:34:21 CET] <FishPencil> The math comes out to s^2 anyway, so that doesn't seem right
[05:35:05 CET] <furq> https://www.cpubenchmark.net/cpu_list.php
[05:36:41 CET] <FishPencil> furq: And it wouldn't be faster if you're using the encode time
[06:55:54 CET] <designgears> I'm using ffmpeg with nginx-rtmp and I keep running into an error that causes ffmpeg to fail, it eventually recovers, but I'm trying to track the issue down because it interrupts my playback.
[06:56:07 CET] <designgears> This is the error I get in the ffmpeg logs. http://pastebin.com/jR24tpY8
[06:57:18 CET] <designgears> I'm streaming from another computer with OBS to my nginx server, with the basic recommended twitch settings.
[06:57:46 CET] <designgears> exec ffmpeg -loglevel verbose -i rtmp://localhost/stream/$name -c copy -f flv rtmp://localhost/hls/$name 2>>/var/log/nginx/ffmpeg.log;
[06:58:31 CET] <designgears> that is all I am doing in nginx, it was more complicated but I removed everything to see if it was a specific option I was using.
[07:00:47 CET] <designgears> and here is my ffmpeg build settings. http://pastebin.com/h8ZrE4MP
[07:05:12 CET] <Werel> I'm here for the same end result, but I'm staying the heck away from flv and rtmp; it's really hard to find a method to watch streams on mobile, since mobile doesn't like flash
[07:05:25 CET] <furq> designgears: can't you just use push for that
[07:05:58 CET] <furq> Werel: nginx-rtmp will create hls and dash streams from rtmp inputs
[07:06:10 CET] <designgears> furq: yes, but it still ends up running into the error reading the rtmp stream
[07:06:52 CET] <furq> shrug
[07:07:00 CET] <furq> if it does it with push then that's either obs or nginx which is fucking up
[07:08:07 CET] <Werel> furq, Does nginx do a lot of encoding processing? I'm looking at a raspberry pi application.  So far, I've only been using MonaServer, which seems to just hold and bounce off rtmp only, which runs well on a pi.
[07:08:15 CET] <furq> it doesn't do any encoding
[07:08:28 CET] <furq> it just remuxes rtmp streams into hls or dash
[07:08:46 CET] <furq> it'll obviously serve rtmp as well but that's no use for streaming to browsers
[07:09:00 CET] <furq> although it's preferable if you're streaming to actual players and don't want 20 seconds of latency
[07:09:36 CET] <furq> you won't be able to encode video on a pi at all unless you use the pi's builtin h264 encoder
[07:11:59 CET] <designgears> Werel: you can compile nginx with the rtmp module, along with ffmpeg you can encode to pretty much anything you want. https://github.com/arut/nginx-rtmp-module
[07:12:54 CET] <designgears> furq: I'm of the opinion it's obs or the nginx-rtmp module at this point, going to test some other rtmp streams and see what happens
[07:12:57 CET] <Werel> I am now interested in seriously looking at nginx.. I mean, I want minimal heavy computation because raspberry pi
[07:13:15 CET] <furq> well yeah if the issue happens with push then ffmpeg isn't involved at all
[07:13:40 CET] <furq> if you're getting rtmp_readpacket errors on the server side then i'm inclined to think OBS is just dropping out
[07:13:48 CET] <designgears> Werel: you can do what furq said, just remux it to hls or dash and it will work on anything mobile without any encoding
[07:14:02 CET] <Werel> thank you
[07:14:08 CET] <furq> hls is easier
[07:14:20 CET] <furq> desktop browsers will play it with a bit of js, and it's natively supported on mobile
[07:14:26 CET] <designgears> furq: very possible, tho it's not reporting any drops, going to dig into the logs there and see if there is anything else.
[07:14:27 CET] <furq> dash is more annoying to set up and will never work on iOS
[07:15:09 CET] <furq> even youtube, probably the biggest adopter of dash, uses hls for live streaming because of the iOS situation
[07:15:53 CET] <designgears> I wish they would get onboard already, dash is pretty great
[07:16:10 CET] <furq> they already are on board, but only if your site's domain is netflix.com
[07:16:20 CET] <designgears> lol
[09:39:43 CET] <user128> I have a .3gp file which seems to be partially corrupt (it doesn't play in ffmpeg or in other software). how can I verify the structural integrity of the 3gp container? one way would be to put a lot of printf statements and trace the control flow of ffmpeg when operating on the corrupt 3gp file. do you folks have a better way to do this?
[09:40:12 CET] <kerio> http://mp4parser.com
[10:00:21 CET] <user128> kerio, thanks, that helped. I have now found https://github.com/sannies/isoviewer which gives more details.
[10:08:22 CET] <JEEB> I use l-smash's boxdumper for checking ISOBMFF-like files usually
[10:08:39 CET] <JEEB> `boxdumper --box > file_log.txt`
[10:08:46 CET] <JEEB> with the file name after --box of course
[10:14:26 CET] <user128> JEEB, got a link for boxdumper? a google search didn't help me much.
[10:16:53 CET] <user128> got it
[11:01:14 CET] <user128> JEEB, there is a junk box (with some binary data as the name). this comes after the mdat box. can this cause problems for decoders? or are unknown boxes simply ignored?
[11:01:36 CET] <user128> there is a junk box -> in my 3gp file
[11:07:42 CET] <IntruderSRB> guys, did any of you successfully deploy an fMP4 stream with the 'cbcs' encryption scheme? I'd really appreciate a working example of it :/
[11:22:32 CET] <user128> kerio, user128 there were 16 junk bytes at the end of my .3gp file. I removed them and things are working now. thank you both for your help! I learned something new today :)
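For reference, a sketch of that fix with standard tools, assuming GNU coreutils and working on a copy (the file name is hypothetical):

    cp broken.3gp fixed.3gp
    truncate -s -16 fixed.3gp    # drop the trailing 16 junk bytes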
[11:28:56 CET] <flux> how does ffmpeg deal with it if the track configuration/sample types change during the stream, as might happen with mpegts? how should I deal with it as a developer using the library?
[14:56:36 CET] <wyre> hi guys!
[14:56:45 CET] <wyre> how can I set the container to the output file?
[14:56:53 CET] <wyre> is it the file extension?
[14:57:06 CET] <furq> that or -f
[15:00:36 CET] <wyre> so... to specify an mpeg-4 container, what should the -f parameter be? furq
[15:00:44 CET] <furq> -f mp4
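To illustrate both forms (file names are placeholders):

    ffmpeg -i input.avi -c copy output.mp4          # container inferred from the extension
    ffmpeg -i input.avi -c copy -f mp4 output.bin   # container forced with -f, extension ignored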
[16:40:50 CET] <Demian> Hello ! Any idea how to make force_original_aspect_ratio=decrease work with scale2ref?
[17:04:54 CET] <faLUCE> Hello. I'm still fighting with libav.  I allocate an image with av_image_alloc(mResampledLibAVFrame->data...), then I free it with av_freep(&mResampledLibAVFrame->data[0]); but I obtain a memory leak (ffmpeg version 2.8). What is the right function I have to call in order to properly free the allocated picture?
[18:02:22 CET] <IntruderSRB> guys I'm testing my H264 video NAL parser and I'm trying to isolate the video slice header. Any idea of its usual average byte size?
[18:03:34 CET] <amseir2> i'm attempting to cut a video encoded with H264 and Speex using ffmpeg's -ss and -to arguments, without success. the video itself is several hours long and i've tried both re-encoding and copy for the audio and video codecs, which has not helped. forcing keyframes also did not help; the cut video is several minutes short... any idea on what else i can try?
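The log doesn't resolve this, but one hedged observation: when -ss is placed before -i, ffmpeg seeks in the input and resets the timestamps, so an output-side -to is no longer measured against the original timeline, which easily produces cuts of unexpected length. A sketch that avoids the ambiguity by giving a duration with -t instead (times and file names are examples; -c copy for video would be faster but can only start at a keyframe):

    ffmpeg -ss 01:00:00 -i input.flv -t 00:05:00 -c:v libx264 -c:a copy cut.flv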
[18:32:42 CET] <Mooniac> how can I concatenate two .mkv files into one?
[18:32:54 CET] <Diag> like merge?
[18:33:21 CET] <Diag> https://trac.ffmpeg.org/wiki/Concatenate
[18:51:21 CET] <Mooniac> yes, that's what I'm looking for. thx
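For the record, the concat-demuxer recipe from that wiki page boils down to something like this (file names are examples; both files need matching codecs and parameters for -c copy to work):

    printf "file 'part1.mkv'\nfile 'part2.mkv'\n" > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mkv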
[19:16:57 CET] <Demian> Hello ! Any idea how to make force_original_aspect_ratio=decrease work with scale2ref?
[19:21:21 CET] <jarkko> http://pastebin.com/cxREzLjj
[19:21:24 CET] <JEEB> thank you
[19:21:53 CET] <JEEB> jarkko: are you building within the source tree or out-of-tree?
[19:22:10 CET] <JEEB> it seems like your build env needs cleaning :P
[19:22:12 CET] <jarkko> it should be ffmpeg-git
[19:22:17 CET] <jarkko> how do i do that?
[19:22:27 CET] <JEEB> no, I mean are you building in the directory with the configure file
[19:22:37 CET] <JEEB> or are you making a separate directory and calling the configure script from that
[19:22:51 CET] <JEEB> in-tree or out-of-tree
[19:23:15 CET] <aerodavo_> how do I limit duplicate and dropped frames when capturing video from a camera using avfoundation on a Mac?
[19:23:36 CET] <faLUCE> in a YUV420P format, is the chroma stored in the third or in the first plane?
[19:23:49 CET] <JEEB> second and third
[19:23:59 CET] <JEEB> [0] is luma, chroma is [1] and [2]
[19:24:03 CET] <aerodavo_> http://pastebin.com/exzh5ZZD
[19:24:26 CET] <faLUCE> JEEB: but from what I see only [2] is subsampled, right?
[19:24:32 CET] <aerodavo_> capturing to mpeg-2 using -target ntsc-dvd
[19:24:44 CET] <JEEB> faLUCE: both should be in 4:2:0
[19:25:22 CET] <JEEB> jarkko: basically if you built in-tree you can do `git clean -dfx` and that will destroy anything not part of the repository :P
[19:25:27 CET] <aerodavo_> is there a way to buffer the input somehow?
[19:25:35 CET] <JEEB> and if you were out of tree, you can just remove your build dir and re-create it
[19:25:49 CET] <faLUCE> JEEB: I thought there was a 420 non subsampled, and a 420 subsampled. Is it wrong?
[19:25:53 CET] <aerodavo_> would buffering the input help?
[19:26:01 CET] <JEEB> faLUCE: 4:2:0 by definition means it's subsampled
[19:26:29 CET] <JEEB> 4:2:0 means you have full luma res, and one sample per 2x2 area for chroma
[19:26:51 CET] <JEEB> 4:2:2 means you have full luma and one sample per two luma samples
[19:26:57 CET] <JEEB> 4:4:4 means you have full luma and chroma
[19:27:39 CET] <faLUCE> thanks  JEEB
[19:34:00 CET] <faLUCE> but when I get samples from a YUV420P AVFrame, I can correctly read them by accessing AVFrame->data[0 or 1] from 0 to linesize[0 or 1]*height. Instead, for plane[2] (AVFrame->data[2]) valgrind shows invalid reads from some index, which is less than linesize[2]*height.... does plane[2] contain less data than plane[1] ?
[19:35:58 CET] <kepstin> faLUCE: with 4:2:0 video, planes 1 & 2 are both the same size, and smaller than plane 0. (but plane 2 is usually right after plane 1 - so over-reading data from plane 1 will just read data from plane 2 instead, without giving a valgrind error)
[19:37:13 CET] <JEEB> the stride for a single line is in AVFrame::linesize
[19:37:40 CET] <JEEB> so the next line always starts at N*linesize
[19:41:45 CET] <faLUCE> kepstin: then I don't understand why I obtain that valgrind error. For plane[2] I just read data between the bounds:  (frame->data[2])  and  (frame->data[2]+frame->linesize[2]*height)
[19:42:16 CET] <kepstin> faLUCE: what's "height" there? in 4:2:0 video, the chroma planes are ½ the height of the luma
[19:42:45 CET] <kepstin> (they're half width and half height, so a quarter of the size overall)
[19:43:26 CET] <jarkko> when i type ffmpeg and i see the numbers there, does it tell me my version of the libraries and the latest available?
[19:44:18 CET] <faLUCE> kepstin: how stupid of me...
[19:44:48 CET] <faLUCE> I was confused by linesize[2], which is half of [0]
[19:44:57 CET] <faLUCE> and I forgot to halve the height as well
[19:45:03 CET] <kepstin> jarkko: it tells you which version of the ffmpeg libraries the tool is built against, and which version it's currently using at runtime. If the two sets don't match you probably have an installation problem.
[19:46:08 CET] <faLUCE> kepstin: you know? I did not obtain that error for plane[1] because of what you just said:  "so  over-reading data from plane 1 will just read data from plane 2 instead, without giving a valgrind error"
[19:46:14 CET] <faLUCE> then I was confused
[19:50:04 CET] <JEEB> yes, usually those buffers end up being near each other so you can get such funky stuff even though you are already overreading :P
[19:53:17 CET] <faLUCE> JEEB: LOL, I spent 4 hours solving that. I even thought that there were two different versions of YUV420P (subsampled and not subsampled)
[20:18:17 CET] <theo> Does anyone know if this guide still applies for Ubuntu 16.04 when installing ffmpeg with libfdk-aac http://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu , or should I install it in some other way?
[20:46:32 CET] <Senji> how to achieve the opposite of reducing file size, I have an mp4 file, I want to "visually losslessly" re-encode it to something like 3 MB
[20:46:36 CET] <Senji> from 1MB originally
[20:47:01 CET] <kerio> so do that
[20:47:15 CET] <Senji> yes but, how
[20:47:34 CET] <kerio> set the bitrate to 3mb / duration
[20:51:16 CET] <kepstin> Senji: if you have a target size, then just figure out what the bitrate needed is, then do a 2-pass encode with a good codec (libx264 probably) and the slowest settings you can stand.
[20:51:56 CET] <kepstin> Senji: but why would you want to /increase/ file size? unless your target is a lossless codec, the result will always be bigger and worse
[20:52:49 CET] <kepstin> If you have specific requirements, like a particular h264 profile, or a max keyframe rate for an editor or something, sure, re-encode, but otherwise just don't bother...
[20:53:13 CET] <kepstin> You could append 2MB of null bytes to the file and hope that players ignore them?
[20:53:52 CET] <Senji> I'm trying to get around a stupid "minimum file size" requirement for a video upload on a site
[20:53:58 CET] <Senji> I think I might have managed to do it
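A sketch of the two-pass approach kepstin described, assuming (for the arithmetic only) a 20-second clip and a 3 MB target: 3 * 8192 kbit / 20 s is roughly 1228 kbit/s total, so about 1100 kbit/s video plus 128 kbit/s audio:

    ffmpeg -y -i input.mp4 -c:v libx264 -preset slow -b:v 1100k -pass 1 -an -f mp4 /dev/null
    ffmpeg    -i input.mp4 -c:v libx264 -preset slow -b:v 1100k -pass 2 -c:a aac -b:a 128k output.mp4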
[21:00:27 CET] <ChocolateArmpits> That's such a useless limit
[21:07:24 CET] <Senji> It's moronic because there isn't a "minimum time" limit, they just went with a kludge which is really useless in the case of what I want to upload
[21:19:48 CET] <faLUCE>  well, this is pretty hard: I have to share an int* obtained from a C library. The library wants it to be freed by a lib function, let's call it libfree(void*). I was thinking of using a shared_ptr for sharing the pointer, but what should I do? Should I create a derived class of shared_ptr and override the destructor?
[21:20:00 CET] <faLUCE> sorry, wrong channel
[22:14:33 CET] <gwohl> I cannot find any documentation on the `duration_ts` value that is reported by ffprobe. What does the `duration_ts` value indicate about a media stream?
[22:25:02 CET] <kepstin> gwohl: it's just the duration, except in timebase units instead of seconds
[22:25:23 CET] <gwohl> Oh, duh! makes perfect sense. thanks kepstin :)
[22:25:23 CET] <kepstin> gwohl: duration = duration_ts / time_base
[22:25:55 CET] <kepstin> (or, uh, * time_base? I can never remember which way that goes)
[22:34:13 CET] <gwohl> the time_base is reported as 1/<value> (which is proper, as I understand)
[22:34:19 CET] <gwohl> so it'd be * :)
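A quick way to see all three values side by side (the numbers in the comment are illustrative):

    ffprobe -v error -select_streams v:0 \
            -show_entries stream=time_base,duration_ts,duration \
            -of default=noprint_wrappers=1 input.mp4
    # e.g. time_base=1/90000 and duration_ts=900000  =>  duration = 900000 * (1/90000) = 10 s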
[22:46:26 CET] <restrexxp> hello guys. I got h264 NAL units coming from an encoder, and I am wondering how can I initialise them into an AVPacket. I would like to mux them into an MPEG-TS container and then output it via a udp stream... Any ideas? Thanks a lot!
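The question goes unanswered in the log; purely for reference, the command-line equivalent of the pipeline being described, wrapping a raw Annex B H.264 elementary stream into MPEG-TS and sending it over UDP (the input name and multicast address are placeholders):

    ffmpeg -f h264 -i elementary.264 -c copy -f mpegts "udp://239.0.0.1:1234?pkt_size=1316"

On the API side the same demuxer and muxer are reachable programmatically (av_read_frame feeding av_interleaved_write_frame), but populating an AVPacket by hand from raw NAL units is beyond what the log covers.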
[23:45:07 CET] <jarkko> guys is it possible to use hardware decoding at youtube?
[23:45:17 CET] <jarkko> linux platform
[23:48:28 CET] <faLUCE> jarkko: what do you mean?
[23:48:46 CET] <faLUCE> ffmpeg is software decoding
[23:48:52 CET] <kepstin> jarkko: nothing stopping it from working, assuming you can convince youtube to send you a format your hardware can decode. I don't think any current linux browsers implement it tho.
[23:49:08 CET] <kepstin> faLUCE: ffmpeg supports a number of hardware decoders as well as software decoders.
[23:49:26 CET] <jarkko> well i think the problem is with the browsers
[23:49:47 CET] <jarkko> i don't know if there is one that could do hardware decoding
[23:51:22 CET] <kepstin> you can always use youtube-dl and pipe the video to a player like mpv which supports hardware decoders :)
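A sketch of the pipe kepstin mentions, assuming youtube-dl and mpv with a working VA-API or VDPAU driver (the URL is a placeholder; whether hardware decoding actually engages depends on the format YouTube serves and on the GPU driver). mpv can also be pointed at the URL directly, since it calls youtube-dl itself.

    youtube-dl -f best -o - 'https://www.youtube.com/watch?v=XXXXXXXXXXX' | mpv --hwdec=auto -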
[23:52:04 CET] <jarkko> i tried with vlc but i didn't see any power consumption drop
[00:00:00 CET] --- Tue Feb 14 2017

