[Ffmpeg-devel-irc] ffmpeg.log.20160217

burek burek021 at gmail.com
Thu Feb 18 02:05:02 CET 2016


[00:00:00 CET] <J_Darnley> What CPU does that machine have?
[00:00:18 CET] <TD-Linux> yeah basically, if -speed 7 is still too slow for you
[00:00:38 CET] <cyphix> Intel(R) Atom(TM) CPU N2800   @ 1.86GHz, 4 CPU
[00:00:42 CET] <TD-Linux> rip
[00:00:45 CET] <kepstin> lol, an Atom?
[00:01:04 CET] <kepstin> yeah... good luck with that. It's never gonna be fast.
[00:01:12 CET] <interrogator> how to download youtube-videos with ffmpeg ? im a new ffmpeg intruder
[00:01:14 CET] <cyphix> Ah. Damn.
[00:01:34 CET] <J_Darnley> interrogator: don't.  use youtube-dl
[00:01:44 CET] <kepstin> interrogator: you probably want to try the 'youtube-dl' tool (which does use ffmpeg for some parts)
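A minimal youtube-dl invocation along the lines being suggested (the URL is a placeholder; youtube-dl shells out to ffmpeg when it has to merge separate audio and video streams):

```shell
# Download best video + best audio and let youtube-dl merge them
# (needs ffmpeg on PATH for the merge step)
youtube-dl -f 'bestvideo+bestaudio/best' -o '%(title)s.%(ext)s' \
    'https://www.youtube.com/watch?v=VIDEO_ID'
```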
[00:01:47 CET] <TD-Linux> you could MAYBE try libvpx 1.5.0 and vp9 with threaded encodes at a really high speed setting
[00:03:28 CET] Action: TD-Linux used an overclocked atom as a desktop for half a year
[00:03:41 CET] <interrogator> thanks guys
[00:04:13 CET] <cyphix> TD-Linux: What's vp9?
[00:04:34 CET] <kepstin> cyphix: new and improved version of vp9
[00:04:39 CET] <kepstin> er, of vp8
[00:04:50 CET] <kepstin> supposed to compete with h265, apparently.
[00:04:56 CET] <TD-Linux> cyphix, a newer video codec than vp8 which you are using
[00:05:01 CET] <TD-Linux> also in libvpx
[00:05:06 CET] <J_Darnley> Surely vp9 won't be quicker than vp8
[00:05:28 CET] <furq> it won't
[00:05:38 CET] <TD-Linux> J_Darnley, well libvpx 1.5.0 can do threaded encode of vp9, so it *might* be with a really high -speed setting
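What TD-Linux is suggesting might look roughly like this; a sketch, not a tested recipe (the tile-columns/threads values and the bitrate are placeholders, and -speed 8 only applies with the realtime deadline):

```shell
# VP9 with tile threading (needs libvpx >= 1.5.0); quality will be rough
# at these speed settings, the goal is just throughput on weak hardware
ffmpeg -i input.mkv -c:v libvpx-vp9 \
       -deadline realtime -speed 8 \
       -tile-columns 2 -threads 4 \
       -b:v 2M -c:a libvorbis output.webm
```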
[00:05:52 CET] <furq> can it not do threaded vp8
[00:06:13 CET] <J_Darnley> Did they just abandon vp8 and let it rot in their library?
[00:06:20 CET] <furq> cyphix: i take it you need webm and not mp4
[00:06:30 CET] <TD-Linux> no because I think it only does tile threading
[00:06:44 CET] <furq> for webm your only options are vp8 and vp9
[00:06:44 CET] <TD-Linux> and vp8 doesn't have tiles
[00:07:15 CET] <TD-Linux> x264 is not going to be spectacular on an atom either though.
[00:07:20 CET] <cyphix> furq: yes
[00:07:21 CET] <furq> it'll be better than vpx
[00:07:48 CET] <TD-Linux> J_Darnley, mostly, though they still add speed improvements. tiles are a bitstream feature
[00:08:09 CET] <TD-Linux> basically 100% of webrtc calls are vp8
[00:08:16 CET] <J_Darnley> Did they never invent frame threading?
[00:08:20 CET] <J_Darnley> huehuehue
[00:08:31 CET] <furq> but yeah using an atom as a transcoding server isn't going to work out well
[00:08:53 CET] <jkqxz> If you can accept rather dodgy H.264 then your Bay Trail atom has a lot of hardware capability - 60fps transcode maybe.  (*Support not yet present in ffmpeg, unfortunately.)
[00:08:53 CET] <J_Darnley> libx264 and lame master race for ever!
[00:08:57 CET] <TD-Linux> J_Darnley, I don't think it's in libvpx but I'm not totally sure. doesn't matter for any of Google's use cases
[00:09:21 CET] <TD-Linux> because codec technology hit its peak in 2001 and will never improve :^)
[00:09:48 CET] <jkqxz> When webrtc is the target use, frame threading doesn't make sense (the next frame isn't available when you want to encode the current one, so there is no parallelism).
[00:09:51 CET] <TD-Linux> though technically lame was already worse than vorbis in 2001
[00:11:16 CET] <J_Darnley> Only if you try to encode a shit-tier stream
[00:11:17 CET] <cyphix> it seems that libvpx 1.3 (the version I have) already has vp9
[00:11:25 CET] <furq> you mean x264 and -c:a copy
[00:11:33 CET] <furq> cyphix: 1.3 doesn't support vp9 multithreading
[00:11:48 CET] <furq> which means vp9 will definitely be much slower
[00:11:51 CET] Action: J_Darnley goes away to prevent further trolling
[00:11:52 CET] <cyphix> And I'm a bit afraid to install the 1.5 version from the unstable. But maybe I should
[00:12:05 CET] <J_Darnley> Oh noes!  Unstable!
[00:12:14 CET] Action: J_Darnley really leaves
[00:12:19 CET] <furq> install it from stretch
[00:13:46 CET] <furq> installing packages from other repos is fine if you manually install the .deb to avoid pulling in conflicting deps
[00:14:06 CET] <furq> that way when it all inevitably goes wrong, you'll be able to resolve it by removing one package
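The manual-install approach furq describes, sketched (the package name and release are illustrative, and `apt-get download` with a `/stretch` target assumes stretch is already in your sources.list):

```shell
# Fetch just the one package from the newer release, then install the
# .deb directly so apt doesn't drag in the rest of that release's deps
apt-get download libvpx3/stretch
sudo dpkg -i libvpx3_*.deb
```

If it all goes wrong, `sudo dpkg -r libvpx3` backs that one package out again.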
[00:14:14 CET] <TD-Linux> well the ABI broke between 1.3.0 and 1.5.0
[00:14:26 CET] <furq> oh
[00:14:53 CET] <furq> well then my expert advice, as your lawyer, is to upgrade to testing
[00:15:01 CET] <interrogator> is there any url to get ffmpeg compiled with all its known LIBs ?
[00:15:10 CET] <TD-Linux> also 1.3.0 is from 2013
[00:15:23 CET] <furq> interrogator: http://ffmpeg.org/releases/ffmpeg-3.0.tar.bz2
[00:15:49 CET] <interrogator> is it for windows ?
[00:16:00 CET] <furq> sure
[00:16:06 CET] <interrogator> thanks
[00:16:29 CET] <TD-Linux> cyphix, tbh my vp9 suggestion is kind of a long shot
[00:16:38 CET] <TD-Linux> why do you need fast encodes?
[00:16:48 CET] <TD-Linux> are you trying to transcode realtime?
[00:17:51 CET] <cyphix> TD-Linux: Oh no. I don't even need amazing quality. It's just that if I want to upload a 10 min video on my mediacrush server, it takes... well... more than an hour, if it succeeds. That's not convenient at all.
[00:18:31 CET] <cyphix> furq: I might consider changing to gentoo instead :p
[00:19:09 CET] <furq> if it's not some kind of important business production server then you should be running testing anyway
[00:19:22 CET] <TD-Linux> cyphix, how much do you care about filesize? you could use theora
[00:19:31 CET] <cyphix> furq: it's not indeed
[00:19:33 CET] <furq> and it's easy enough to upgrade
[00:19:34 CET] <TD-Linux> it's great for potatoes
[00:20:05 CET] <cyphix> TD-Linux: You mean that it would be faster, but the resulting files would be bigger? I don't care at all about the size.
[00:20:13 CET] <furq> the former
[00:20:32 CET] <furq> i'm not sure why it insists on webm though
[00:20:37 CET] <cyphix> Well.... it has to be viewable in streaming from a webpage.
[00:20:52 CET] <TD-Linux> cyphix, yeah, maybe try theora, supported in the same browsers as webm
[00:21:10 CET] <furq> mp4 is more widely compatible in browsers than webm
[00:21:18 CET] <cyphix> TD-Linux: So I should replace libvpx by libtheora?
[00:21:36 CET] <TD-Linux> cyphix, yes
[00:21:44 CET] <cyphix> I'll try that
[00:21:49 CET] <TD-Linux> probably also change container to .ogv
[00:22:15 CET] <furq> and say goodbye to android support
[00:22:16 CET] <cyphix> TD-Linux: What part of the command specifies the container? :/
[00:22:25 CET] <TD-Linux> the filename of the output
[00:22:29 CET] <cyphix> These notions still confuse me a bit...
[00:22:33 CET] <cyphix> ah ok
[00:22:39 CET] <furq> cyphix: it guesses the container from the filename if you don't provide one
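Put together, the Theora variant of the command might look like this sketch (note that -speed and -quality are libvpx options with no libtheora equivalent; libtheora takes a -q:v scale of 0-10 instead):

```shell
# Theora video + Vorbis audio in an Ogg container
# (the .ogv extension selects the Ogg muxer automatically)
ffmpeg -i input.mkv -c:v libtheora -q:v 7 -c:a libvorbis -q:a 5 output.ogv
```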
[00:27:16 CET] <cyphix> So I tried this command: "ffmpeg -y -i tmpiQFeWX.mkv -c:v libtheora -speed 7 -c:a libvorbis -q:a 5 -pix_fmt yuv420p -quality good -b:v 5M -crf 5 -vf "scale=trunc(in_w/2)*2:trunc(in_h/2)*2" -map 0:v:0 -map 0:a:0 /var/www/mc.cyphix.org/MediaCrush/storage/h18uoUjcQ6o6.ogv" but I'm still at 2 fps...
[00:31:21 CET] <cyphix> It seems definitely 2-3x faster with libvpx than with libtheora
[00:32:27 CET] <TD-Linux> ok, well I guess vp8 got a lot of love :)
[00:32:52 CET] <TD-Linux> also I don't know how speed settings map to libtheora
[00:35:10 CET] <cyphix> Mkay. I think I'm too limited by the hardware of my server. Too bad...
[00:52:05 CET] <utack> Hi. is the new version 3 "wrapped_avframe" in benchmark mode supposed to be slower than the previous "rawvideo" it decoded to?
[01:47:13 CET] <durandal_1707> utack: faster
[01:47:46 CET] <utack> in that case i have bad news on my arch system, but i will wait for the final version in the community repo, test again and make more than two tests
[01:54:40 CET] <kbarry> I'm looking for an example of comparison of audio with and without the sofalizer filter.
[02:21:22 CET] <Wader8> is it okay to talk about specific codec option tips but tied with my attempt at a custom config for batch processing of files for archival project, since #x265 looks like to be for development only
[05:09:30 CET] <ThomQ> Hi all. I'm trying to make a transparent Webm video from a series of PNGs. Now, for one series, the following code works absolutely fine: ffmpeg -i wow.wav -r 30  -f image2 -i wow%3d.png -vf fps -pix_fmt yuva420p -metadata:s:v:0 alpha_mode="1" -c:v libvpx -b:v 0 -crf 30 output.webm
[05:10:06 CET] <ThomQ> But for another series of transparent PNGs, rendered the exact same way, I get weird blue artifacts / backgrounds, and stuttering near the end of the video
[05:10:53 CET] <ThomQ> I just changed the names in that line, nothing else. Anybody any idea?
[05:58:21 CET] <relaxed> ThomQ: that should be -framerate 30
[06:00:33 CET] <ThomQ> relaxed: it did seem to work though. What part should I change?
[06:05:02 CET] <relaxed> -r 30
[06:05:25 CET] <relaxed> I'm not saying that's going to fix your issue
[06:07:17 CET] <relaxed> ffmpeg -h demuxer=image2
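The distinction relaxed is pointing at: -framerate is the image2 demuxer's own input option, whereas -r is a generic rate option. A sketch with placeholder filenames:

```shell
# -framerate is an *input* option for the image-sequence demuxer;
# it must come before the -i it applies to
ffmpeg -i audio.wav -framerate 30 -f image2 -i frame%03d.png \
       -c:v libvpx -pix_fmt yuva420p -b:v 0 -crf 30 output.webm
```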
[06:10:47 CET] <ThomQ> I've narrowed the problem down to the chromakeying. If not done very precisely (as in having small transparent outlines along the chroma'ed footage), it results in major artifacts all over
[06:10:56 CET] <ThomQ> They're not visible in the PNGs though, of course
[06:12:21 CET] <ThomQ> -h demuxer=image2 instead of -f image2?
[06:12:35 CET] <furq> no -h is help
[06:12:49 CET] <furq> that shows all the private options for the image2 demuxer, but it sounds like the issue is with the source
[06:15:19 CET] <ThomQ> yeah, i tried some more Heavy Duty chroma'ing, getting rid of as much of the edges as i can. In the PNG the edges look very nice though, even when overlaid on a solid color. It's only after using FFMPEG that the blue and now also red artifacts appear
[09:52:24 CET] <termos> should the PTS of my audio and video packets be the same for the video to be in sync with the audio?
[09:52:35 CET] <Mavrik> uhm
[09:52:39 CET] <Mavrik> it should be in the same timebase.
[09:53:02 CET] <Mavrik> But since your audio and video packets almost certainly don't represent the same slices of time, they usually aren't exactly the same
[09:55:49 CET] <termos> hm so they should be the same timebase
[09:58:25 CET] <Mavrik> when encoded
[09:58:31 CET] <termos> because I convert from stream_tb to codec_ts (this will differ between codecs I guess) when encoding and then back to output stream tb
[09:59:01 CET] <Mavrik> yes
[09:59:08 CET] <Mavrik> That's usually how you do it
[09:59:57 CET] <termos> ok, that's good to know
[10:00:50 CET] <termos> so before decoding + filtergraph + encoding: av_packet_rescale_ts(&in_packet.p, in_stream->time_base, in_stream->codec->time_base);
[10:01:06 CET] <termos> before write_frame: av_packet_rescale_ts(&packet, stream->codec->time_base, stream->time_base);
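The rescaling termos describes is just exact rational arithmetic; a small Python sketch of what av_packet_rescale_ts effectively computes per timestamp (ignoring FFmpeg's rounding-mode flags):

```python
from fractions import Fraction

def rescale_ts(ts: int, tb_src, tb_dst) -> int:
    """Convert a timestamp between timebases, like av_rescale_q:
    ts * tb_src / tb_dst, computed exactly and truncated to int.
    Timebases are (num, den) pairs."""
    return int(ts * Fraction(*tb_src) / Fraction(*tb_dst))

# 90000 ticks at 1/90000 (MPEG-TS timebase) is one second,
# which is 1000 ticks at 1/1000 (Matroska's timebase)
print(rescale_ts(90000, (1, 90000), (1, 1000)))  # 1000
```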
[10:25:41 CET] <termos> one strange thing I'm doing is having one thread that pushes to my filter graph and another thread pulling from it, could that lead to any issues?
[10:26:14 CET] <termos> what I'm seeing now is some syncing issues for some streams
[10:28:53 CET] <durandal_1707> what your filtergraphs looks like?
[10:39:54 CET] <termos> the filter string is pretty long and out of order but for video it's: buffer -> yadif -> fps -> scale -> buffersink
[10:41:52 CET] <termos> for audio it's: aeval=val(0)*1.000000|val(1)*1.000000,aformat=sample_fmts=s16:channel_layouts=stereo,aresample=44100,asetnsamples=n=2048:p=0
[11:15:29 CET] <Wader8> Hello, any HEVC GPU Acceleration supported for latest ATI Radeon GPUs (R7 370 latest OpenCL DC and DX12)
[11:15:47 CET] <Wader8> or just nvenc ?
[11:29:01 CET] <jkqxz> Wader8:  VAAPI/VDPAU H.265 decode should work on those, I think.  No encode support.
[11:30:02 CET] <Wader8> jkqxz, that's for the heads up, unfortunately I need encode, building an archive
[11:30:07 CET] <Wader8> thanks*
[11:33:50 CET] <fritsch> only hevc 8 bit, though
[12:12:57 CET] <Wader8> fritsch 8bit ?
[12:15:13 CET] <fritsch> Wader8: hevc exists in 8 10 12 14 bit
[12:15:52 CET] <Wader8> and that would mean what ? sorry I haven't got that far with my research, started 2 days ago
[12:20:17 CET] <jkqxz> Wader8:  Higher-quality streams with greater than eight bit sample depth will not decode on that AMD hardware with current software.  This is mostly an irrelevant problem for now because almost everything is still eight bit, but it will become more of a problem in future.
[12:22:04 CET] <Wader8> oh you mean color, well most of it is historical videos, even if it would be 10 bit, color quality is the least important thing
[12:22:31 CET] <Wader8> well in some cases it would be but quite rare
[12:22:37 CET] <at0m> Wader8: been looking at hw accel for a bit too; seems it conflicts with X, so better on headless machines
[12:22:54 CET] <Wader8> conflicts with X?
[12:23:03 CET] <at0m> Wader8: and nvenc is obviously for nvidia cards (that support cuda)
[12:24:08 CET] <at0m> https://trac.ffmpeg.org/wiki/HWAccelIntro is where i went from
[12:26:23 CET] <at0m> maybe check if your intel CPU (if that's what you got) is listed at http://ark.intel.com/search/advanced?s=t&QuickSyncVideo=true and go for qsv
[12:33:27 CET] <Wader8> at0m i have i7-3820 and there are some with QM at the end, but this naming system is very confusing to me I don't like it, so many similar names in the last 3-4 years for intel CPUs, basically i have Sandy Bridge E , the one without IGPU
[12:33:47 CET] <Wader8> so this one is not on the list i guess
[12:36:33 CET] <at0m> Wader8: lscpu | grep Model
[12:36:42 CET] <at0m> it'll say there
[12:36:57 CET] <Wader8> what does that mean ?
[12:37:33 CET] <at0m> for example on this laptop, i see "i5-3230M" which is in that list
[12:37:34 CET] <Wader8> em, I've bought this PC in 2013, since then I've been out of the loop on hardware
[12:37:59 CET] <Wader8> and this was early 2013, i just followed Mantle, CES and AMD GPU stuff, that's about it
[12:37:59 CET] <at0m> even my core2duo is there, it's way older
[12:38:31 CET] <Wader8> LSCPU .. never heard of that
[12:39:01 CET] <at0m> lscpu lists cpu info
[12:39:11 CET] <at0m> assuming you run linux
[12:39:18 CET] <Wader8> Win7X64
[12:39:20 CET] <at0m> oh
[12:39:45 CET] <Wader8> i have Mint debian and Win10 on another drive but so far only for small stuff testing, it's not on SSD so it's slow
[12:40:15 CET] <Wader8> no it's just Mint, not debian, afaik
[12:40:33 CET] <at0m> not sure how windows goes about displaying CPU info and stepping etc
[12:43:30 CET] <Wader8> well yes I said it's i7-3820  that's the name, now, the family is Sandy Bridge-E, Family 6, Stepping 7, Model D, Ext. Model 2D, Revision C2 , instructions, VT-X, AVX. EMT64T, AES, MMX and 7 SSEs
[12:44:00 CET] <Wader8> 32nm
[12:44:08 CET] <Wader8> LGA 2011
[12:45:00 CET] <jkqxz> That's not a desktop CPU at all, it's a nobbled Xeon E5.  Therefore it doesn't have any hardware for graphics or video at all.
[12:45:23 CET] <Wader8> jkqxz, did i say it does ?
[12:45:34 CET] <Wader8> I know, i didn't want any video at the time
[12:45:43 CET] <Wader8> this one was cheaper
[12:45:53 CET] <at0m> Wader8: we're just trying to find out if your CPU supports QSV
[12:46:06 CET] <Wader8> probably not since I never heard of that
[12:46:31 CET] <at0m> Wader8: so it would do hardware encoding. and seems ATI doesn't support it either, so there you go.
[12:47:07 CET] <at0m> Wader8: i hadn't heard of it till i started to look for hw accelerated encoding
[12:47:43 CET] <Wader8> Sandy Bridge E family doesn't have GPU, it's just CPU, I wanted an i7 but I didn't want the GPU, and I usually pick the top market but pick the lowest , so this is the lowest i7 which cost like 250 Euro, anything up was 500 Eur which I would never pay for a CPU
[12:47:57 CET] <at0m> still, without hw accel you should be able to do 8-10x realtime encoding
[12:48:38 CET] <jkqxz> Pretty much all recent desktop/mobile Intel has the hardware for encoding and decoding.  In Sandy Bridge / Ivy Bridge generations they played some games with disabling it on certain models, but they've given up on that now.
[12:49:13 CET] <Wader8> well i'm planning to use very-slow preset for maximizing quality and low size, (keeping same quality at a smaller size)
[12:49:39 CET] <Wader8> to better use my archive space of 2+2TBs
[12:50:18 CET] <Wader8> can the GPU inside be disabled by BIOS option ?
[12:50:34 CET] <Wader8> probably has separate clocks no ?
[12:51:06 CET] <at0m> Wader8: you don't have a 'GPU inside'
[12:51:18 CET] <Wader8> i didn't know a GPU inside would help for video transcoding, i just wasn't thinking about that at the time
[12:52:18 CET] <jkqxz> If your target is output quality/size rather than transcode speed, hardware is not a sensible thing to use anyway.
[12:53:05 CET] <Wader8> Would be slower ?
[12:53:58 CET] <at0m> Wader8: yes, but neither your videocard nor cpu support hw transcoding. so it's not an option at all.
[12:54:43 CET] <jkqxz> The result is /much/ worse from hardware encoders compared to x26[45].  The only reason to use them is if you need high encoding speed.
[12:56:58 CET] <at0m> i'm looking at a project that required to transcode maybe 100s of streams in realtime. so pursuing the hw accel path...
[12:57:41 CET] <Wader8> Well I don't have a problem leaving it at night, i will make batch processes completely customized for what I need. it's just if it takes... 3 DAYS to finish one video it's a bit weird, but I guess one or two such cases won't be that bad. most of the videos are small, below 500 MB, some are 10-15 GB but only a handful, and some of those are "professionally" encoded. not sure if i'll have the time to fiddle with HEVC params just to lower a few GBs but also weaker quality, plus x265 not being as mature as x264. im not even sure if maturity right now is only about performance or also featureset, but i heard lookahead isn't done yet. any other major thing that can affect quality and size that isn't finished in x265? i did look on the official site, i just don't understand half the coding talk
[12:58:35 CET] <Wader8> well a lot of them are below 100 MB also
[12:59:07 CET] <at0m> Wader8: like i said, even without hw acceleration you should get about 10x realtime encoding
[12:59:54 CET] <Wader8> based on what you say that ?
[12:59:58 CET] <at0m> i can do maybe 6 here in realtime on the laptop
[13:01:03 CET] <Wader8> because I was using all kinds of transcoding and H264 was always the slowest for me on anything below middle
[13:01:13 CET] <Wader8> and I was doing MPEG also
[13:01:32 CET] <Wader8> maybe I was using bad codecs, i forgot,
[13:01:40 CET] <Wader8> choice
[13:02:53 CET] <at0m> if i use 1080p input streams, transcoding goes down a lot, i can do maybe 2, 3 in realtime
[13:03:02 CET] <Wader8> well i was using libx265 on one occasion
[13:03:09 CET] <at0m> so depends on the source material, too
[13:03:25 CET] <at0m> outputting 720p
[13:03:27 CET] <Wader8> 3 months ago, i did a lot of work between, kinda forgot let me check out
[13:04:26 CET] <Wader8> well one video was hard: DVDs. i had to deal with them being interlaced and having weird pulldowns, and i spent like a week trying to put one of them into x265 properly. i did it, took like 11 hours on MEDIUM
[13:05:10 CET] <Wader8> 12.4 hours
[13:05:15 CET] <Wader8> 3.4 FPS average
[13:05:23 CET] <Wader8> output size 1.2 GB
[13:05:43 CET] <Wader8> input size 4GB (VOB)
[13:06:11 CET] <Wader8> ffmpeg version N-76137-gb0bb1dc
[13:08:40 CET] <Wader8> sorry i'll just pastebin it
[13:11:00 CET] <Wader8> at0m so here's what happened last time 3 months ago, note x265 version too, i think 1.9 now, it was 1.8 at the time
[13:11:02 CET] <Wader8> http://pastebin.com/UCaaB3j3
[13:11:51 CET] <Wader8> sorry, the preset was SLOW, not medium, i had 2 config candidates and I used the CRF one (rate factor)
[13:12:58 CET] <Wader8> so for my archival work, most of the videos are X264, so if it has to decode X264 (this was MPEG2) and if I use very-slow, isn't that going to be like super slow then ?
[13:13:06 CET] <at0m> the "-preset slow" will have quite an impact on the speed, obviously. i'd try skipping that and using default, or even 'faster', on smaller files, and then check the difference in results
[13:13:20 CET] <Wader8> so I don't get where you get 6-10 times the realtime from ?
[13:13:37 CET] <at0m> if you're encoding old VHS recordings for example, you might maybe not even notice the difference
[13:14:23 CET] <kevmitch> is there any reason to link to libdcadec anymore?
[13:14:25 CET] <at0m> experiment with a bunch of different settings, on smaller size source file
[13:15:26 CET] <Wader8> oh sorry this probably isn't a fair comparison because this was one exception with the interlaced mess, you can see a lot of additional filters used, most of the videos are all progressive and clean of these ancient methods
[13:15:26 CET] <eynix> Hi, I have a question regarding the usage of the ffmpeg lib in a proprietary software. I read in the legal FAQ : "Notably, MPEG LA is vigilant and diligent about collecting for MPEG-related technologies. ". Does this mean that my company should give money to MPEG LA ? (sorry my english is not really good)
[13:19:35 CET] <Wader8> well the results I'm showing you in the pastebin are from a successful run, the size got decreased by 75% at minimal quality loss. obviously MPEG2 does look a bit better in the original; i probably couldn't have made it better because I had to convert it into progressive and get rid of pulldowns and telecines, and it was really a headache getting the config just right to get the output working, because anything else produced playback issues
[13:20:07 CET] <Wader8> in terms of HEVC speed, it a bad example, but I don't have any other example at the moment
[13:20:15 CET] <blarghlarghl> Hi all. I have a question about streaming via RTMP(S). I have a phone. It has a live streaming app (Broadcast Me, for example.) On this phone, I can type in an address to my RTMP(S) server and then hit 'record.' I would like to have ffmpeg or ffserver or whatever sit at that address and receive that stream and save it to disk. How can I do that?
[13:21:10 CET] <jkqxz> eynix:  Quite possibly - you should look for actual legal advice to answer that question.  (I believe that it is dependent on many things, notably sales volume and location, so there are no useful general remarks we can make.)
[13:21:32 CET] <eynix> jkqxz: thank you
[13:21:41 CET] <blarghlarghl> Or rather - is it even possible?
[13:22:00 CET] <blarghlarghl> All I can find is the other way around - ffmpeg can accept a URI to an RTMP(S) stream and then save it.
[13:22:08 CET] <blarghlarghl> That's not useful for me though, I think.
[13:22:43 CET] <at0m> blarghlarghl: https://en.wikipedia.org/wiki/Nginx-rtmp-module
[13:23:20 CET] <blarghlarghl> at0m: Yeah, I use that, but I'd rather ... not.
[13:23:46 CET] <blarghlarghl> at0m: Isn't it possible to reimplement this using ffserver or ffmpeg?
[13:24:08 CET] <blarghlarghl> All it needs to do is sit and listen on a port and save what it gets...
[13:24:25 CET] <blarghlarghl> (Okay, that's not true. But you get my point.)
[13:24:27 CET] <at0m> blarghlarghl: i have no idea. i did get a feel that ffserver is frowned upon here
[13:25:30 CET] <at0m> ... so i'd stick to ffmpeg instead
[13:25:30 CET] <blarghlarghl> Oh.
[13:26:12 CET] <blarghlarghl> at0m: Okay, sure, so how do I do this in vanilla ffmpeg? :)
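For what it's worth, ffmpeg's native RTMP protocol does have a listen mode; a sketch of the idea (no RTMPS, a single connection, and it exits when the stream ends, so it's a far cry from a real streaming server):

```shell
# Accept one incoming RTMP publish on port 1935 and dump it to disk
# without re-encoding
ffmpeg -f flv -listen 1 -i rtmp://0.0.0.0:1935/live/app -c copy dump.flv
```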
[13:26:56 CET] <at0m> blarghlarghl: i pointed you in a direction i use and know which works. feel free to hint me on what you come up with as alternative ;)
[13:27:15 CET] <Wader8> at0m, what about VP9 and WEBM, since Youtube's not going to support H265 from what I hear, and they say VP9 is faster in FFMPEG also. so if youtube cuts HEVC, what's the point of me DLing H264 and then putting it to HEVC if VP9 is good enough, I could do that directly. but there's the question of future proofing: is VP9 that future proof? and another thing: inside WEBM not everything is VP9, old videos are probably still in VP8, and I want to have some order in my archive, not a bunch of codecs; I'd rather have just one or two
[13:28:10 CET] <blarghlarghl> at0m: We already use nginx-rtmp in our testing infra as a spike. Now we're rewriting it to support rtmps, and I'm taking the opportunity to see if there are cleaner ways of doing it :) Sadly it looks like we're coming up empty :(
[13:30:08 CET] <at0m> just to go to rtmps? can't you just reverse proxy that, or stunnel?
[13:38:49 CET] <blarghlarghl> at0m: no, not just for rtmps. for actually getting it right. second iteration is always better than the first ;)
[13:44:03 CET] <Mavrik> blarghlarghl, ffmpeg isn't a streaming server but a transcoding tool
[13:44:06 CET] <Mavrik> And you need a streaming server.
[13:44:17 CET] <Mavrik> ffserver is a streaming server but it's a mess of unmaintained obsolete code
[13:44:26 CET] <Mavrik> So use nginx-rtmp-server, it's the right tool for the job
[13:44:38 CET] <Mavrik> Or use Wowza or similar product.
[13:45:01 CET] <blarghlarghl> Mavrik: yeah, i'm considering red5.
[13:45:13 CET] <blarghlarghl> it supports rtmps out of the box, then i don't have to faff about with stunnel or whatever.
[15:45:23 CET] <oldcode> is checking the return value of av_read_frame for AVERROR_EOF the best way to test if the video is complete?
[16:23:18 CET] <kbarry> I've never compiled from source before. I'm hoping to get the sofalizer filter into a build, for windows.
[16:25:48 CET] <J_Darnley> You can start by reading http://trac.ffmpeg.org/wiki/CompilationGuide
[16:35:14 CET] <eynix> is there a "free" encoding format  ?
[16:35:24 CET] <eynix> as in "you don't pay royalties to anyone"
[16:35:53 CET] <at0m> "but you want to use it for commercial projects" ?
[16:36:08 CET] <eynix> at0m: yes
[16:37:11 CET] <eynix> at0m: also, since the feature using the encoding format is really small compare to the product, i'm unsure about if we still need to pay royalties..
[16:39:38 CET] <J_Darnley> eynix: then you'd better go oldschool and use something like mpeg3
[16:39:45 CET] <J_Darnley> uh mpeg1
[16:39:46 CET] <eynix> this is completely new to me, sorry if those questions are stupid ^^'
[16:40:03 CET] <kbarry> J_Darnley: I am having some trouble following the guides on the link above.
[16:40:24 CET] <J_Darnley> Well which one are you trying to follow?
[16:40:31 CET] <J_Darnley> MSVC or Mingw?
[16:40:55 CET] <J_Darnley> eynix: the amount of royalites you might have to pay someone depends on who you're paying.
[16:41:12 CET] <J_Darnley> the h264 patent pool is quite well defined
[16:41:35 CET] <kbarry> CompilationGuide/MSVC , But I think i will try to cross-compile.
[16:41:46 CET] <J_Darnley> Wait, are you on Windows or not?
[16:42:08 CET] <eynix> J_Darnley: I thought MPEG-3 and 4 were done by the same people, so we still need to pay royalties
[16:42:18 CET] <eynix> to MPEG LA
[16:42:33 CET] <J_Darnley> All patents on mpeg1 have probably expired by now
[16:43:13 CET] <eynix> J_Darnley: do you know about motion jpeg ?
[16:43:13 CET] <J_Darnley> You should be asking your lawyer these questions
[16:43:46 CET] <J_Darnley> theora's another choice for "patent free"
[16:43:57 CET] <eynix> J_Darnley: thank you for your help. It's a rather small company, I don't believe we have any lawyer.
[16:44:12 CET] <kbarry> I am on windows, But have access to linux.
[16:44:46 CET] <eynix> J_Darnley: Theora seems perfect ! I didn't know about it ! looking into it
[16:45:05 CET] <J_Darnley> I don't know which a newbie will find easier to be honest, setting up a dev environment on Windows or cross-compiling from linux.
[16:45:13 CET] <ritsuka> if your software runs on mac os x or windows you can probably use the system's built-in video libraries, and the royalties should already be paid by apple and microsoft.
[16:45:15 CET] <kbarry> eynix: It is in your best interest to consult legal counsel, be it an in-house legal staff member, or an attorney on retainer. Chances are, you company has a lawyer
[16:45:44 CET] <kbarry> hahaha, right
[16:45:51 CET] <kbarry> Well, I don't know either,
[16:45:55 CET] <J_Darnley> Indeed.  I'm just some arsehole who thinks software can't be patented.
[16:46:04 CET] <eynix> ritsuka: can you give me an example ?
[16:46:32 CET] <kbarry> especially because the linux i'd probably be using is centos 6.5, which isn't the easiest thing to build ffmpeg for.
[16:49:24 CET] <ritsuka> eynix: for example avfoundation on os x or directshow or whatever on windows. firefox for example does use them, so it can playback h.264 and aac without paying a $
[16:50:15 CET] <eynix> ritsuka: I'm not sure to get it, but I'll look into it. thank you very much.
[16:53:15 CET] <eynix> ritsuka: ... oh, so encoding can be done using only the OS API.
[16:53:21 CET] <eynix> and without paying.
[16:53:31 CET] <eynix> this is a great solution
[17:00:39 CET] <furq> eynix: vp8 and vp9 are probably your best modern choice
[17:00:58 CET] <furq> those are royalty-free but it's not impossible that they could be found to be using patented technology
[17:01:43 CET] <furq> mpeg-la claim to be creating a patent pool to use against vp8/vp9, but they said the same thing about theora and that never happened
[17:01:57 CET] <eynix> furq, I'll look, thanks. Currently, I'm looking in Theora. I wonder if I can encode images in a vids with it. Like the motion jpeg thing
[17:02:31 CET] <eynix> but hm, by the way.
[17:02:35 CET] <eynix> those patent
[17:02:41 CET] <eynix> are patent to code, right  ?
[17:02:55 CET] <eynix> since, I'm in EU, does this apply ?
[17:03:00 CET] <eynix> damn, I need a lawyer.
[17:03:07 CET] <furq> are you doing business with the US
[17:03:12 CET] <eynix> probably
[17:03:20 CET] <eynix> yes, we are
[17:03:26 CET] <furq> then yeah, i think it does
[17:03:34 CET] <furq> although it probably makes it less likely that anyone will come after you for it
[17:04:29 CET] <eynix> this is so complicated. I just want to make a movie from my images TT__TT
[17:06:01 CET] <J_Darnley> Then say "fuck imaginary property"
[17:06:13 CET] <furq> if you're a small business you probably fall under the minimum usage exemption anyway
[17:06:28 CET] <eynix> what's that ?
[17:09:56 CET] <furq> with h265 at least you can ship 100,000 units per year, which afaik either means h265 videos or products that encode h265, without paying royalties
[17:10:04 CET] <furq> idk if something similar exists for h264
[17:10:30 CET] <furq> unless they changed the licence again since i last read it, which isn't unlikely
[17:11:41 CET] <furq> you're probably best off sticking with webm though
[17:13:04 CET] <Venti> mpeg-la isn't the only party holding patents on obvious things, and not even the only party accepting money for hevc
[17:13:58 CET] <zeryx> hey guys, I'm looking to build a wrapper that can split a video into multiple intervals (currently using -ss & -t iteratively), but it doesn't seem to split accurately
[17:14:09 CET] <zeryx> I'm looking to recombine the videos later in a stitching operation, but I don't want repeated frames
[17:14:14 CET] <zeryx> is there a way to do this with ffmpeg?
[17:14:36 CET] <furq> zeryx: -ss isn't frame accurate with -c copy, it'll split at the nearest keyframe
[17:15:20 CET] <zeryx> furq, ahh, so what I should do is gather the keyframes of the video and cut at those times instead of cutting at an arbitrary time right?
[17:15:23 CET] <furq> right
[17:15:25 CET] <zeryx> is there a way to do that with ffmpeg or would I need ffprobe?
[17:15:35 CET] <furq> https://www.ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment
[17:15:38 CET] <furq> that might work for you as well
[17:15:48 CET] <zeryx> I was looking at refactoring that in actually
[17:16:13 CET] <furq> that always splits at the next keyframe so you won't get repeated frames
[17:16:18 CET] <zeryx> is there a way to alter the output metadata in the filename from an iterator to a keyframe interval?
[17:16:22 CET] <zeryx> nice
[17:16:31 CET] <zeryx> I know the split to images works great with the FPS arg
[17:17:47 CET] <zeryx> I still don't know why I don't always join irc instead of searching SO for 4 days, its almost always faster :D
[17:17:55 CET] <zeryx> thanks a ton furq
[17:29:11 CET] <zeryx> furq, is segment_time's input in seconds or ms?
[17:29:26 CET] <zeryx> and does it accept the regular HH:MM:SS.mm format as well?
[17:31:48 CET] <furq> seconds and yes
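A minimal sketch of the segment-muxer route discussed above. Filenames are invented, and the lavfi test source only stands in for a real input so the commands can run anywhere:

```shell
# Skip quietly if ffmpeg isn't installed.
command -v ffmpeg >/dev/null 2>&1 || exit 0

# Synthetic 4-second input standing in for a real file.
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=4:size=128x72:rate=10 \
    -c:v mpeg4 -g 10 demo_in.mp4

# Split at keyframes into ~1-second pieces without re-encoding.
# -reset_timestamps 1 makes each piece start at t=0, which also fixes
# players that otherwise seek to the segment's original offset.
ffmpeg -y -loglevel error -i demo_in.mp4 -c copy -f segment \
    -segment_time 1 -reset_timestamps 1 demo_seg_%03d.mp4
```

Per the formats documentation, segment_time accepts either seconds or an HH:MM:SS[.m] timestamp.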
[17:32:21 CET] <pkeuter> hey guys, i know there's a black detection in ffmpeg, is there also a way to detect motion, or the lack thereof?
[17:33:54 CET] <durandal_1707> see tblend
[17:35:43 CET] <pkeuter> thanks!
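One way to build on the tblend suggestion (a community trick, sketched here, not a turnkey motion detector): difference consecutive frames, then let blackdetect flag stretches where the difference frame is near-black, i.e. where almost nothing moved. The thresholds are guesses to tune for your source's noise level:

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0

# Static synthetic input: every frame-to-frame difference is black,
# so blackdetect should report the whole clip as motionless.
ffmpeg -f lavfi -i color=c=gray:duration=2:size=128x72:rate=10 \
    -vf "tblend=all_mode=difference,blackdetect=d=1.0:pix_th=0.10" \
    -an -f null - 2>&1 | grep -i black || true
```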
[17:57:47 CET] <zeryx> furq, ok cool that seems to have worked, but the resulting videos seem to be missing some metadata (IE: when the clip is over, when it starts) vlc player seems to start in the center of the video
[17:57:55 CET] <zeryx> is that normal or is there a way to solve that by resetting the headers or something?
[18:06:05 CET] <zeryx> can force_keyframes accept an interval parameter instead of a list of times?
[18:07:27 CET] <DHE> for an interval I just use "-g 300" and then disable automatic keyframe generation. (x264 option is no-scenecut)
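DHE's fixed-GOP approach, sketched below with illustrative numbers (assumes an ffmpeg build with libx264): a keyframe every N frames, with no-scenecut so x264 never inserts extra ones, making GOP boundaries fall at predictable times.

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0
ffmpeg -hide_banner -encoders 2>/dev/null | grep -q libx264 || exit 0

# Synthetic input standing in for a real file.
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=128x72:rate=15 \
    -c:v mpeg4 gop_in.mp4

# Keyframe exactly every 30 frames (2 s at 15 fps); no-scenecut stops
# x264 from adding unscheduled keyframes at scene changes.
ffmpeg -y -loglevel error -i gop_in.mp4 -c:v libx264 -g 30 \
    -x264opts no-scenecut -an gop_out.mp4
```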
[18:08:24 CET] <zeryx> DHE, -g? I haven't seen that command before, I was initially trying -ss & -t however I need accurate cutting so I can stitch video segments back together
[18:09:20 CET] <DHE> that doesn't sound like the response I expected...
[18:10:03 CET] <zeryx> now using force_keyframes & segment_time commands
[18:10:40 CET] <zeryx> creating a tool that can do work on small accurately segmented components of a video, and then stitch them back together into a coherent video
[18:15:49 CET] <DHE> ah, and you want the segments to be of consistent size for the initial splitting...
[18:16:04 CET] <DHE> yeah, the above is what I do. I've used HLS, which is a narrow version of that
[18:47:21 CET] <EmleyMoor> To create my "front view stamped" dashcam videos, I currently use three calls to ffmpeg - two crops and a vstack. Is there any way to combine them?
[19:01:31 CET] <DHE> should be.... something like:   ffmpeg -i input1.mp4 -i input2.mp4  -filter_complex "[0:v] crop=...  [top];  [1:v] crop=... [bottom] ; [top][bottom] vstack [output]" -map '[output]' -map 0:a    $CODEC_OPTIONS   output.mp4
[19:01:57 CET] <DHE> audio is taken from input1.mp4 in this example
[19:11:54 CET] <kepstin> EmleyMoor: if you have a single video input that you're taking two cropped sections from, change DHE's command to drop the second -i and change the '1:v' to '0:v' in the filter_complex string
[19:12:48 CET] <DHE> .. changing a side-by-side video into a vertical stack video?
[19:14:23 CET] <durandal_1707> see stereo3d filter
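For the single-input case kepstin describes, a sketch (crop geometries are placeholders for the real dashcam layout): turn a side-by-side frame into a vertical stack in one ffmpeg call, using split to feed the one decoded stream to both crops.

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0

# Synthetic side-by-side stand-in for the dashcam file.
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=128x64:rate=10 \
    -c:v mpeg4 sbs_in.mp4

# One pass: split the decoded stream, crop the left and right halves,
# and stack them vertically. Add "-map 0:a? -c:a copy" to carry audio.
ffmpeg -y -loglevel error -i sbs_in.mp4 -filter_complex \
    "[0:v]split[l][r];[l]crop=iw/2:ih:0:0[left];[r]crop=iw/2:ih:iw/2:0[right];[left][right]vstack[out]" \
    -map "[out]" -c:v mpeg4 stacked.mp4
```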
[19:18:26 CET] <Filarius> how to make FFmpeg try to reconnect to live stream while recording if stream laggy or can be turned off for short time and turned on again ?
[19:34:25 CET] <zeryx> if I wanted to force a keyframe every X interval of milliseconds, how would I do that without needing to escape double quotes?
[19:34:45 CET] <zeryx> expr:gte(t, n_forced*$interval) errors out, and seems to require double quotes to work
[19:37:13 CET] <zeryx>  Unable to find a suitable output format for 'n_forced*1.000)
[19:37:25 CET] <zeryx> -force_key_frames 'expr:gte(t, n_forced*$interval)'
[19:39:13 CET] <zeryx> rofl
[19:39:29 CET] <zeryx> expr:gte(t,n_forced*$interval) works, expr:gte(t, n_forced*$interval) doesn't
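The difference zeryx hit is shell quoting, not ffmpeg: the unquoted space after the comma ends the argument, so the rest gets parsed as a new output file (hence "Unable to find a suitable output format"). Quoting the whole expression fixes it. Sketch below, assuming libx264, since -force_key_frames only applies when re-encoding; the interval value is a placeholder:

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0
ffmpeg -hide_banner -encoders 2>/dev/null | grep -q libx264 || exit 0

interval=1.0    # hypothetical keyframe spacing in seconds

# Synthetic stand-in for the real input.
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=3:size=128x72:rate=10 \
    -c:v mpeg4 kf_in.mp4

# The quotes keep the expression (spaces and all) as one argument.
ffmpeg -y -loglevel error -i kf_in.mp4 \
    -force_key_frames "expr:gte(t, n_forced*${interval})" \
    -c:v libx264 -an kf_out.mp4
```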
[19:41:43 CET] <zeryx> ok so I'm cutting with exactly 1 second intervals and creating timestamps at 1 second, however my video lengths are around 4 seconds long, thoughts?
[20:00:38 CET] <furq> zeryx: if you're not reencoding then your gops are probably 4 seconds long
[20:04:46 CET] <zeryx> furq, I'm copying the encoding from the original, but I'm also trying to force keyframes at the timestamps that I plan to cut from
[20:05:05 CET] <zeryx> I want 100 millisecond accuracy on my cuts, but right now I'm getting like 5-6 seconds minimum
[20:05:20 CET] <Mavrik> um
[20:05:22 CET] <zeryx> plus the duration of my videos are all wrong, they don't reset properly
[20:05:36 CET] <furq> yeah that's not a thing
[20:05:39 CET] <Mavrik> you're copying the stream_
[20:05:46 CET] <Mavrik> and you want to force keyframes?
[20:05:51 CET] <furq> if you're copying the stream then you get the keyframes that are in the original stream
[20:06:07 CET] <furq> if you want them in different places then you need to reencode
[20:06:20 CET] <zeryx>  /tmp/ffmpeg-static/ffmpeg -i /tmp/counting.webm -force_key_frames interval -c copy /tmp/7fd19e99-7fad-4010-9a43-3aaa186087ee/formattedInput.webm -y
[20:06:23 CET] <zeryx> that doesn't work?
[20:06:27 CET] <furq> no
[20:06:36 CET] <DHE> you can't specify keyframes when copying pre-encoded material
[20:06:49 CET] <DHE> you need to use "-c:v libx264" or something
[20:07:10 CET] <furq> just get rid of -c copy and it'll use vp9
[20:07:20 CET] <furq> -c:a copy should be ok
[20:07:37 CET] <zeryx> good thing I asked you guys, would literally never figure that out
[20:09:33 CET] <jas99> hey guys
[20:09:34 CET] <zeryx> ok this is getting disgusting, I'm trying to create a universal video splitter + concatenate tool for computer vision tasks
[20:09:45 CET] <jas99> new here
[20:09:53 CET] <jas99> new at even using irc
[20:10:12 CET] <zeryx> my webm sample video file doesn't allow for those codecs, I'm assuming there's no universal video encoding that can be used
[20:10:18 CET] <zeryx> yo jas99
[20:10:29 CET] <jas99> hey zeryx
[20:11:02 CET] <jas99> is this right channel to discuss c++ coding with ffmpeg
[20:11:10 CET] <jas99> or that will be dev one
[20:11:24 CET] <zeryx> I think it's more of a "just ask a question" channel than anything else
[20:11:41 CET] <croepha> so, when streaming from a file? how can I tell ffmpeg to stream the file at a steady pace (1 second of video for 1 second of time)  currently it acts like an upload, and tries to upload as fast as bandwidth seems to allow
[20:11:57 CET] <jas99> i am trying to encode 16bit gray video
[20:12:30 CET] <jas99> hey zeryx
[20:12:38 CET] <jas99> so people help each other here
[20:12:46 CET] <jas99> or some devs live here??
[20:13:08 CET] <furq> zeryx: -c:v libvpx-vp9
[20:13:09 CET] <zeryx> yeah jas99 what you do is you actually just ask a question
[20:13:16 CET] <furq> it should use that automatically if you don't specify a codec
[20:13:50 CET] <furq> you probably also want -b:v 0 -crf 30 or similar so that it doesn't encode at 200kbps
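Putting furq's pieces together, a sketch (assumes an ffmpeg build with libvpx VP9 support): -b:v 0 together with -crf puts libvpx-vp9 into constant-quality mode instead of the default ~200 kbps bitrate cap.

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0
ffmpeg -hide_banner -encoders 2>/dev/null | grep -q libvpx-vp9 || exit 0

# Synthetic input; on a real file this would be
#   ffmpeg -i input.webm -c:v libvpx-vp9 -b:v 0 -crf 30 -c:a copy out.webm
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=128x72:rate=10 \
    -c:v libvpx-vp9 -b:v 0 -crf 30 vp9_out.webm
```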
[20:13:51 CET] <jas99> so i do ffv1 encoding
[20:13:57 CET] <jas99> for gray16
[20:14:08 CET] <furq> croepha: -re as an input option
[20:14:27 CET] <jas99> on muxing.c
[20:14:31 CET] <croepha> furq, ok, Thanks a lot!
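A sketch of -re in use (the RTMP URL in the comment is a placeholder; -f null stands in for a real sink so the command can run without a server):

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0

# -re is an *input* option: it throttles reading to the input's native
# frame rate, so one second of media goes out per wall-clock second
# instead of as fast as bandwidth allows. A real invocation would look
# like:  ffmpeg -re -i input.mp4 -c copy -f flv rtmp://host/app/key
ffmpeg -loglevel error -re -f lavfi -i testsrc=duration=1:size=64x64:rate=10 \
    -f null -
```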
[20:14:33 CET] <jas99> example
[20:14:56 CET] <jas99> by editing config off course
[20:15:41 CET] <furq> 19:09:34 ( zeryx) ok this is getting disgusting, I'm trying to create a universal video splitter + concentate tool for computer vision tasks
[20:15:44 CET] <furq> oh.
[20:15:46 CET] <furq> yeah that's going to be difficult
[20:16:05 CET] <zeryx> like, everything works except this
[20:16:08 CET] <zeryx> absolutely everything
[20:16:21 CET] <zeryx> I actually thought it was working reasonably correctly until I realized I couldn't enforce splits smaller than the video keyframe
[20:16:21 CET] <furq> a segment generally needs to start with a keyframe, so splitting at arbitrary points is going to get very complicated
[20:17:00 CET] <zeryx> yea
[20:17:07 CET] <jas99> sorry you guys were talking i guess sorry to be rude :)
[20:17:17 CET] <zeryx> this is for a very large work project, this is a foundation
[20:17:23 CET] <furq> if it's for splitting and concatenating you might want to use a lossless codec for the segments and then reencode it when you're done
[20:17:45 CET] <zeryx> I would love to do that, the docs aren't the easiest to read for stuff like this
[20:18:03 CET] <furq> that'll obviously take a bunch of space and ideally you'd have the user provide the final encoding settings
[20:18:07 CET] <furq> but that should at least be relatively simple
[20:18:30 CET] <zeryx> I'd like to have a concatenated duplicate as an end result after combining the subvideos
[20:19:17 CET] <zeryx> so the best way would be to re encode the original video to some standard format, while doing so have keyframes every specified interval, then do a segment_times on those keyframes
[20:19:31 CET] <jas99> hey @zeryx @furq any experience encoding 16bit video?
[20:19:37 CET] <furq> well yeah pick a lossless codec and mux it into mkv or some other appropriate container
[20:20:06 CET] <jas99> i tried with 16bit ffv1
[20:20:11 CET] <zeryx> jas99, I don't, I'm only working on AV stuff because I was told to
[20:20:34 CET] <jas99> @zeryx :(
[20:20:45 CET] <furq> jas99: if someone knows, they'll answer
[20:21:09 CET] <jas99> when i update the yuv fill function
[20:21:14 CET] <jas99> gray 8 works
[20:21:26 CET] <jas99> but gray16 produces video
[20:21:31 CET] <jas99> with green output
[20:21:38 CET] <jas99> with vlc
[20:22:04 CET] <jas99> i am trying to encode stream from xtion pro (Kinect)
[20:22:17 CET] <jas99> oh cool
[20:22:44 CET] <jas99> @furq so this get archived ?
[20:22:56 CET] <jas99> and you guys are just users like me ?
[20:24:09 CET] <zeryx> yea
[20:24:15 CET] <zeryx> I have like 10 irc chats open
[20:24:27 CET] <zeryx> I primarily use IRC to ask questions relating to a project I can't easily find help for on stack overflow
[20:24:34 CET] <zeryx> but I also answer questions on irc channels when I know the answer
[20:24:38 CET] <zeryx> kind of a give/take
[20:24:55 CET] <jas99> very nice zeryx
[20:24:56 CET] <zeryx> like whenever someone asks a question on #cuda, I usually can answer it
[20:25:27 CET] <jas99> oh nice you work on gpu projects?
[20:26:12 CET] <zeryx> yeah
[20:26:23 CET] <jas99> cool
[20:26:23 CET] <zeryx> I actually started programming on gpu's and moved onto regular cpu based stuff
[20:26:40 CET] <zeryx> cuda was the first framework I ever used
[20:26:42 CET] <jas99> nice I am computer vision guy
[20:27:09 CET] <jas99> i work from india
[20:27:27 CET] <jas99> like where are you from and what do work on
[20:28:01 CET] <jas99> btw what problem you are having ??
[20:28:16 CET] <jas99> maybe you can also bounce ideas off me??
[20:28:30 CET] <zeryx> I'm from canada, and I build algorithms
[20:28:57 CET] <jas99> nice, a lot of friends from canada, very nice place
[20:29:04 CET] <zeryx> trying to convert arbitrary video content into small interval videos & images, and then concatenate them back together after performing cv operations on them
[20:29:37 CET] <jas99> ah
[20:29:54 CET] <zeryx> so streaming webcam input video for example
[20:29:54 CET] <jas99> i was looking for code to do av_frame to cv myself
[20:30:10 CET] <zeryx> building essentially that tool in java, it'll eventually be on www.algorithmia.com
[20:30:19 CET] <zeryx> available for anyone to use it
[20:30:33 CET] <jas99> chking it out
[20:30:54 CET] <jas99> are you into bitcoin and stuff
[20:31:15 CET] <jas99> this is awesome
[20:31:29 CET] <zeryx> yea
[20:31:42 CET] <jas99> http://colony.io/
[20:32:03 CET] <jas99> know about ethereum??
[20:32:12 CET] <zeryx> furq, I'm currently trying to convert by hand, however I'm getting a seg fault, how would I determine the cause?
[20:32:20 CET] <zeryx> (its actually converting for about 1 second, then faulting)
[20:32:27 CET] <zeryx> yep my company works with them as partners
[20:32:34 CET] <jas99> oh nice
[20:33:52 CET] <zeryx> ahh figured it out
[20:33:59 CET] <zeryx> -b:v 0 was causing some pretty big problems
[20:34:19 CET] <jas99> so i can build algo and sell it at you place??
[20:34:42 CET] <zeryx> yep, and if you need any help I'm part of the QA team :P
[20:34:53 CET] <jas99> nice
[20:35:11 CET] <zeryx> if anyone uses it, you get paid a royalty, but I shouldn't say much more than that, at this point I'm essentially soliciting
[20:35:21 CET] <zeryx> if you want to know more shoot me a private message
[20:35:48 CET] <jas99> like do you have email
[20:36:00 CET] <jas99> like can we discuss stuff like that??
[20:36:09 CET] <jas99> on irc
[20:36:21 CET] <jas99> i guess its public record
[20:36:43 CET] <durandal_1707> private message on irc
[20:36:52 CET] <jas99> oh nice
[20:37:04 CET] <jas99> whats command to do that
[20:37:49 CET] <zeryx>  /pm zeryx
[20:48:25 CET] <zeryx> furq, just spoke with my boss and I misread the use case (doh!), is there a way to segment-split on every keyframe?
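An untested sketch for that question: with -c copy the segment muxer can only cut at keyframes anyway, so asking for a segment_time shorter than any GOP effectively starts a new segment at every keyframe it sees.

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0

# Synthetic input with a keyframe every 5 frames (0.5 s at 10 fps).
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=128x72:rate=10 \
    -c:v mpeg4 -g 5 gopsrc.mp4

# segment_time far below the GOP length => one segment per keyframe.
ffmpeg -y -loglevel error -i gopsrc.mp4 -c copy -f segment \
    -segment_time 0.01 -reset_timestamps 1 perkey_%03d.mp4
```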
[20:53:54 CET] <jas99> hey @zeryx do you know how to convert avframe to cv mat
[20:53:59 CET] <jas99> and vice versa
[20:54:13 CET] <jas99> like code snippet for that in c++
[20:55:51 CET] <zeryx> nope, not that skilled in the AV field
[20:55:59 CET] <zeryx> I know the basics of video/audio codecs, containers
[20:56:01 CET] <zeryx> now keyframes
[20:56:22 CET] <jas99> cool
[20:58:26 CET] <ethe> there doesn't seem to be an --enable-(lib)jack option, is it just automatically detected, and compiled if found?
[21:02:34 CET] <RobertKm> --enable-x11grab Option does not work? Who can help me
[21:03:39 CET] <c_14> RobertKm: bug in current git, go back a few revisions
[21:04:50 CET] <c_14> Any revision before today should be fine.
[21:05:02 CET] <RobertKm> c_14 WoW, Thanks
[21:09:35 CET] <jas99> av frame to cv mat
[21:09:37 CET] <jas99> ??
[21:09:41 CET] <jas99> help :)
[21:09:47 CET] <jas99> and vice versa
[21:44:16 CET] <_jamiejackson> hi folks, i've got ffmpeg version 0.8.17-4:0.8.17-0ubuntu0.12.04.1 and i'm trying to concatenate two mpg video files without re-encoding them...
[21:44:33 CET] <_jamiejackson> I'm looking at https://trac.ffmpeg.org/wiki/Concatenate , but i'm having trouble with the example syntax...
[21:45:19 CET] <_jamiejackson> the example that's given: ffmpeg -f concat -i mylist.txt -c copy output
[21:46:37 CET] <_jamiejackson> http://pastebin.com/ZvaWDpLC
[21:47:46 CET] <_jamiejackson> gives: Unknown input format: 'concat'
[21:48:10 CET] <_jamiejackson> please help me get this figured out
[21:48:45 CET] <furq> that's an ancient version of ffmpeg
[21:48:49 CET] <jkqxz> I suggest starting by upgrading to version 3.0 from your current 0.8.17.
[21:48:51 CET] <furq> -f concat was only added in 1.x
[21:49:13 CET] <_jamiejackson> oh geez, that would explain it
[21:49:34 CET] <furq> http://johnvansickle.com/ffmpeg/
[21:49:40 CET] <furq> you can use that if you're stuck on an old ubuntu
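The wiki example in full, for any ffmpeg 1.x or newer. Filenames are invented; with -c copy, the listed inputs must share codec parameters, and paths in the list are resolved relative to the list file:

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0

# Two short MPEG-PS inputs standing in for the real files.
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=128x72:rate=10 \
    -c:v mpeg2video part1.mpg
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=128x72:rate=10 \
    -c:v mpeg2video part2.mpg

# The list file: one "file" directive per input.
cat > mylist.txt <<'EOF'
file 'part1.mpg'
file 'part2.mpg'
EOF

# Concatenate without re-encoding.
ffmpeg -y -loglevel error -f concat -i mylist.txt -c copy joined.mpg
```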
[21:53:28 CET] <_jamiejackson> furq: thanks, it's rocking its way through the concat with that binary you linked
[21:54:54 CET] <_jamiejackson> this is the most lossless way of concatenating mpg video, right?
[21:56:10 CET] <furq> it's a lossless way
[21:56:15 CET] <furq> lossless isn't really a comparative term
[22:23:56 CET] <_jamiejackson> understood, thanks
[22:36:18 CET] <zeryx> is there a way to automatically generate a list of split files from something like segment_list?
[22:36:45 CET] <zeryx> I'm trying to concatenate and I'd like to automate the process of creating the txt file
[22:37:43 CET] <J_Darnley> A good shell will do that with ease
[22:38:00 CET] <zeryx> yeah, I can do it with a shell
[22:38:08 CET] <zeryx> but I'd like to do it with ffmpeg output preferably
[22:38:20 CET] <zeryx> is it not possible as an output from a -f -segment operation?
[22:38:35 CET] <J_Darnley> Oh, that makes a little more sense
[22:38:54 CET] <zeryx> ffconcat file
[22:38:55 CET] <zeryx> yea
[22:38:56 CET] <J_Darnley> No idea.  Why split only to rejoin
[22:39:02 CET] <zeryx> computer vision
[22:39:07 CET] <zeryx> and parallel video processing
[22:39:19 CET] <zeryx> GPUs and stuff
[22:39:50 CET] <J_Darnley> but why rejoin?  what did you do to the original file?
[22:40:31 CET] <zeryx> the video files will be edited, but they will remain the same codec & format
[22:40:44 CET] <zeryx> there may be some operations done to the videos such as image recognition overlays
[22:41:23 CET] <zeryx> pretty standard stuff, we want to output as both a rejoined video and a streamable video
[22:41:39 CET] <J_Darnley> Then yes it does look like its possible, rtfm
[22:41:53 CET] <zeryx> thanks J_Darnley, been reading it all day
[22:41:55 CET] <zeryx> :D
[22:41:56 CET] <J_Darnley> http://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment
[22:42:04 CET] <zeryx> staring at it
[22:42:05 CET] <J_Darnley> segment_list name
[22:42:05 CET] <J_Darnley>     Generate also a listfile named name. If not specified no listfile is generated.
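A sketch that puts segment_list together with the concat demuxer: -segment_list_type ffconcat makes ffmpeg write a list the concat demuxer can consume directly, so no shell scripting is needed to rejoin. Names are placeholders.

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0

# Synthetic input with regular keyframes.
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=128x72:rate=10 \
    -c:v mpeg4 -g 5 rt_in.mp4

# Split, and have ffmpeg write the list file itself.
ffmpeg -y -loglevel error -i rt_in.mp4 -c copy -f segment -segment_time 0.5 \
    -segment_list parts.ffconcat -segment_list_type ffconcat \
    -reset_timestamps 1 rt_%03d.mp4

# ...per-segment processing would happen here...

# Rejoin straight from the generated list.
ffmpeg -y -loglevel error -f concat -i parts.ffconcat -c copy rejoined.mp4
```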
[22:43:39 CET] <zeryx> also the manual is atrocious to read fyi
[22:43:50 CET] <J_Darnley> It sure is.
[22:43:54 CET] <zeryx> some of the worst documentation I've had to sift through
[22:45:25 CET] <furq> contributions welcome
[23:25:15 CET] <Wader8> hello
[23:27:24 CET] <Wader8> is it possible or is it planned to support automatic fragmental cut-out and concat ? Or is there a prepared script to do this with multiple actions, basically taking H264, taking out a few seconds of both video and audio in a number of areas from the source file, and then recoding that back to either remux or x265 for example
[00:00:00 CET] --- Thu Feb 18 2016

