[Ffmpeg-devel-irc] ffmpeg.log.20150416

burek burek021 at gmail.com
Fri Apr 17 02:05:01 CEST 2015


[00:00:12 CEST] <klaxa> i'm pretty sure ffmpeg can do it too, but i don't know how from the top of my head
[00:00:44 CEST] <kyleogrg> ooh interesting
[00:01:17 CEST] <c_14> ffprobe -show_frames
[00:01:20 CEST] <c_14> check for key_frame
[00:02:57 CEST] <kyleogrg> ffprobe.exe -i VTS_01_1.VOB -show-frames
[00:03:06 CEST] <kyleogrg> Missing argument for option 'show-frames'
[00:03:29 CEST] <klaxa> it's -show_frames not -show-frames
[00:03:48 CEST] <kyleogrg> thx
[00:03:59 CEST] <kyleogrg> holy mackerel
[00:04:17 CEST] <kyleogrg> it's spitting out tons of info
[00:05:22 CEST] <c_14> yeeeaaaah; it's only really useful if you have a program to parse it.
[00:06:19 CEST] <c_14> You can use things like -select_streams v to only get video stream info though
[00:06:49 CEST] <c_14> You can also output to xml/json/csv etc
[00:07:00 CEST] <kyleogrg> so mkvtoolnix can split at idr frames?
[00:08:04 CEST] <klaxa> ffmpeg can too, not sure if it can split one file into multiple parts though
[00:08:53 CEST] <c_14> It can, but it's not trivial.
[00:08:54 CEST] <klaxa> if you invoke ffmpeg for every part you want to have split it takes some time to seek to the position you need, mkvtoolnix doesn't have that overhead
[00:09:23 CEST] <klaxa> oh? maybe it would make sense to make that easier
[00:09:29 CEST] <kyleogrg> what does mkvtoolnix do?  split all the idr frames?
[00:09:46 CEST] <c_14> hmm, you could just use the segment muxer with -segment_times
[00:09:51 CEST] <c_14> that would actually not be that hard
[00:09:58 CEST] <kyleogrg> c_14: in ffmpeg?
[00:10:02 CEST] <c_14> yeyp
[00:10:21 CEST] <kyleogrg> what would the command line look like?
[00:10:33 CEST] <klaxa> mkvtoolnix is a collection of tools to create/edit/analyze matroska files, one of its capabilities is splitting a file at given timecodes/frames
[00:10:45 CEST] <kyleogrg> ok
[00:11:01 CEST] <c_14> ffmpeg -i video -segment_times 5,50,55,756,6541 -c copy out.m3u8
[00:11:13 CEST] <c_14> assuming those are the timestamps of the idr frames
[00:11:32 CEST] <kyleogrg> c_14: this would produce multiple videos?
[00:11:45 CEST] <c_14> yes
[00:12:15 CEST] <kyleogrg> so if i did this automatically for a long vob file, the command would be very very long
[00:12:18 CEST] <kyleogrg> is that ok?
[00:12:28 CEST] <c_14> yep
[00:14:09 CEST] <kyleogrg> cool.  with ffprobe, how can i parse or pull out the idr frame locations?
[00:14:09 CEST] <klaxa> note: to enhance the load-balancing effect it would make more sense to make parts of similar size while still cutting at i-frames, even though adding one i-frame every once in a while doesn't drive bitrate through the roof anyway
[00:14:59 CEST] <kyleogrg> klaxa: for a DVD VOB, would i-frames more or less be distributed evenly?
[00:15:10 CEST] <klaxa> i think so
[00:15:24 CEST] <klaxa> i think the spec says an i-frame every 0.5 seconds or something
[00:16:20 CEST] <klaxa> you actually don't want an i-frame every 0.5 seconds with h265 aimed at small filesize
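The ffprobe-to-segment pipeline sketched in this exchange can be wired together roughly as follows. The sample data below is a hardcoded stand-in (real field names vary by ffprobe version, e.g. `pkt_pts_time` vs `pts_time`); the actual ffprobe/ffmpeg invocations are shown in comments:

```shell
# Real extraction would look something like:
#   ffprobe -v error -select_streams v:0 -show_frames \
#           -show_entries frame=key_frame,pkt_pts_time -of csv=p=0 input.vob
# A hardcoded sample stands in here so the text processing is visible.
sample='1,0.000000
0,0.040000
1,5.000000
1,10.000000'
# Keep only key frames (first field == 1) and join their timestamps with commas.
times=$(printf '%s\n' "$sample" \
  | awk -F, '$1 == 1 { s = s (s ? "," : "") $2 } END { print s }')
echo "$times"
# The split itself would then be (output pattern hypothetical):
#   ffmpeg -i input.vob -c copy -f segment -segment_times "$times" out%03d.ts
```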
[00:19:00 CEST] <kyleogrg> so in my command line for converting a dvd to mkv -- i pasted the link to it earlier -- would the mkv keep all the i-frames from the vob??
[00:19:41 CEST] <c_14> depends on how you do it
[00:19:57 CEST] <klaxa> if you re-encode, that's generally not the case
[00:20:09 CEST] <kyleogrg> ok
[00:34:52 CEST] <kyleogrg> Ever heard of Ripbot264? http://forum.doom9.org/showthread.php?t=127611
[00:35:46 CEST] <kyleogrg> It can apparently do distributed encoding for x264
[00:36:20 CEST] <klaxa> i think i heard about it
[00:37:02 CEST] <klaxa> it's funny how these things get written for windows
[00:37:32 CEST] <kyleogrg> yeah
[00:37:58 CEST] <klaxa> oh yeah, i definitely remember it
[00:38:24 CEST] <klaxa> because i was thinking "cutting at 1500 frames? just like that? does that make sense?"
[00:38:44 CEST] <klaxa> but yeah, adding a few i-frames really shouldn't do much damage
[00:45:02 CEST] <pron> r
[00:47:12 CEST] <kyleogrg> back
[00:47:20 CEST] <kyleogrg> my web irc window froze
[00:48:03 CEST] <kyleogrg> That guy should make RipBot265
[00:48:31 CEST] <klaxa> >Changelog
[00:48:31 CEST] <klaxa> >v1.18.1
[00:48:31 CEST] <klaxa> >Added: Support for HEVC as input format (mkv and mp4 containers only)
[00:48:35 CEST] <klaxa> oh wait
[00:48:36 CEST] <klaxa> nvm
[00:49:03 CEST] <klaxa> actually
[00:49:06 CEST] <klaxa> >v1.18.0
[00:49:06 CEST] <klaxa> >Added: Support for x265 (DE mode works as well)
[00:49:12 CEST] <kyleogrg> what???
[00:49:25 CEST] <kyleogrg> so it would work?
[00:49:32 CEST] <klaxa> probably
[00:50:18 CEST] <kyleogrg> ok, i'll probably look at that when i have time
[02:03:08 CEST] <Guest90273> why is bluray subtitle file "SUP"  so much larger in file size than  dvd subtitle file "idx/vob" file
[02:12:53 CEST] <debianuser> Guest90273: maybe it contains subtitle images in higher resolution?
[02:54:57 CEST] <mtcjayne> Could someone explain to me how "Stream specifier 'main' in filtergraph description [...] matches no streams." in this paste?
[02:55:00 CEST] <mtcjayne> http://pastebin.com/tjbGETfC
[02:55:19 CEST] <mtcjayne> It seems to be that it matches painfully obviously.
[02:56:40 CEST] <kyleogrg> has anyone here had experience with ripbot264?
[02:57:03 CEST] <c_14> mtcjayne: you can't use a pad twice in one filtergraph
[02:57:12 CEST] <mtcjayne> That was a mistake on my part.
[02:57:13 CEST] <c_14> there's a filter
[02:57:41 CEST] <mtcjayne> I was trying to figure out why it was seeing a audio/video concat mismatch.
[02:57:58 CEST] <mtcjayne> http://pastebin.com/xueFGHZ9 is the correct version of the paste, with the audio/video concat mismatch.
[02:58:48 CEST] <mtcjayne> http://pastebin.com/kxNqVwjD is the error I was attempting to correct.
[02:59:11 CEST] <c_14> [pip] is video
[02:59:13 CEST] <c_14> not audio
[02:59:19 CEST] <mtcjayne> Yes.
[02:59:24 CEST] <c_14> That filtergraph doesn't have audio anywhere
[02:59:28 CEST] <c_14> change the concat to a=0
[02:59:38 CEST] <mtcjayne> Gotcha. Makes sense, thanks.
[02:59:42 CEST] <c_14> and get rid of the [a]
[03:00:57 CEST] <mtcjayne> I've really enjoyed seeing the automation that can be done with FFmpeg.
[03:01:27 CEST] <mtcjayne> Plus... for some reason... I seem to get 10x the file size with Sony Vegas Pro. Probably not 100% FFmpeg's victory there though.
[03:02:19 CEST] <c_14> What codec?
[03:07:49 CEST] <mtcjayne> 720p60 H264.
[03:08:16 CEST] <mtcjayne> I was getting >1G files from 15 minutes of footage, where the input file was about 300M.
[03:09:19 CEST] <c_14> mhm, no idea what encoder vegas uses or what settings, but x264 is pretty damn hard to beat
[03:09:34 CEST] <mtcjayne> Hmm I had some problems with my result.
[03:10:48 CEST] <mtcjayne> The audio from EndCard.mp4 wasn't included in its playback section, and I heard some footage from the primary episode footage.
[03:11:28 CEST] <c_14> That's because your command doesn't specify what audio to use.
[03:11:39 CEST] <c_14> map the audio you want from the file you want explicitly
[03:16:32 CEST] <mtcjayne> I'm not seeing how to do that. I want audio for stream 0 during its section, audio from stream 1 during its section (no audio from streams 2 or 3.) http://pastebin.com/nkmXLVMP
[03:17:01 CEST] <mtcjayne> I had that working without the "select" before
[03:17:35 CEST] <mtcjayne> Now I'm trying to give the script user control over what time periods of each file (0, 2, and 3) are used.
[03:18:43 CEST] <mtcjayne> I could simply remove the audio from streams 2 and 3 if you think that'd work.
[03:18:51 CEST] <c_14> eeeeeeeh, I'm pretty sure that concat=n=4 should be giving you an error, but w/e just add another filterchain to the graph with something like "[0:a][2:a][3:a]concat=n=3:v=0:a=1[a]" and map it
[03:19:52 CEST] <c_14> eeeeeh
[03:20:00 CEST] <mtcjayne> Yeah that's another remnant. I'll remove it here.
[03:20:09 CEST] <c_14> you might need to shorten the 0th audio stream to the same as what you put in the select
[03:20:27 CEST] <c_14> you can use aselect or something
[03:21:05 CEST] <c_14> Oh, and aren't 2+3 playing simultaneously?
[03:21:14 CEST] <c_14> You'll have to mix their audio tracks together with amix
[03:21:17 CEST] <mtcjayne> The videos are, the audio hasn't been in the past.
[03:21:34 CEST] <mtcjayne> I don't want it to be playing but I haven't actually done anything to prevent it. It's just worked out that way.
[03:21:50 CEST] <c_14> Oh, if you don't want it to be playing you can just ignore it.
[03:22:00 CEST] <c_14> just concat silence to the end of the first one.
[03:22:07 CEST] <mtcjayne> Ok
[03:22:08 CEST] <c_14> (and use -shortest)
[03:22:26 CEST] <c_14> you can generate silence with aevalsrc=0
[03:24:39 CEST] <mtcjayne> You're thinking concat <length of stream 1> seconds of silence after stream 0 in line 16? (And generate the silence the line before that?)
[03:25:53 CEST] <c_14> should work
[03:28:03 CEST] <mtcjayne> Is there an expression I can use to get the length of streams?
[03:28:49 CEST] <mtcjayne> I see that I'm using some, like "main_w" but I don't know where they're documented.
[03:29:06 CEST] <c_14> you can use ffprobe; what I would do, however, is just concat infinite amounts of silence to the end of the first audio, and then add the -shortest option. That'll make it end when the shortest stream (in this case the video) ends.
[03:29:56 CEST] <c_14> main_w etc are documented in the sections for each filter in ffmpeg-filters
[03:30:04 CEST] <c_14> https://ffmpeg.org/ffmpeg-filters.html
[03:30:26 CEST] <mtcjayne> Ah I thought they were universal or something. Thanks.
[03:38:14 CEST] <mtcjayne> This is what I've got, but I don't think I'm doing it correctly as I get the following error: "No output pad can be associated to link label 'main'." http://pastebin.com/LDzfVgQ7
[03:38:25 CEST] <mtcjayne> Oh
[03:38:38 CEST] <mtcjayne> Well first of all I've got a typo which I'll fix. Missing semicolon.
[03:39:00 CEST] <c_14> you also mispelled silence
[03:39:14 CEST] <c_14> and it should be
[03:39:15 CEST] <mtcjayne> Alright that and the missing \/n are fixed.
[03:39:21 CEST] <c_14> [main][0:a][pip][silence]
[03:39:50 CEST] <mtcjayne> Alright.
[03:39:58 CEST] <mtcjayne> I'll see what results I get here.
[03:40:40 CEST] <c_14> and I don't think the concat filter has a shortest option. What I was referring to was [main][pip]concat=n=2:v=1:a=0[v];[0:a][silence]concat=n=2:v=0:a=1[a]' -map '[v]' -map '[a]' -shortest
[03:41:08 CEST] <mtcjayne> It doesn't have one documented.
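Putting c_14's pieces together, the combined graph looks roughly like this. The `[main]`/`[pip]` labels come from the paste and are assumed to be produced by earlier chains in the same graph; filenames are hypothetical:

```shell
# Video: concatenate the two video chains. Audio: pad the first input's
# audio with generated (infinite) silence, then let -shortest end the
# output when the video stream ends.
fc='[main][pip]concat=n=2:v=1:a=0[v];aevalsrc=0[silence];[0:a][silence]concat=n=2:v=0:a=1[a]'
echo ffmpeg -i episode.mp4 -i EndCard.mp4 \
  -filter_complex "$fc" -map '[v]' -map '[a]' -shortest out.mp4
```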
[03:47:10 CEST] <c_14> Anyway, I hope that works out. I need to go to bed now.
[03:48:44 CEST] <mtcjayne> Thanks for the help
[04:49:11 CEST] <a[0]> I'm having trouble finding documentation on this... in complex filter graphs like "... [x][1:v] paletteuse" what does [1:v] mean?
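(For the record: `[1:v]` is a link label built from a stream specifier, meaning the video stream(s) of input index 1; inputs are zero-indexed. The typical two-pass paletteuse invocation, filenames hypothetical:)

```shell
# Pass 1 computes a palette; pass 2 feeds input 0 (the clip) and
# input 1 (the palette) into paletteuse as its two filter inputs.
fc='[0:v][1:v]paletteuse'
echo ffmpeg -i clip.mp4 -vf palettegen palette.png
echo ffmpeg -i clip.mp4 -i palette.png -filter_complex "$fc" out.gif
```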
[05:34:01 CEST] <quasarlog> hi, I recently pulled from the github ffmpeg repo to update my working directory and I can't stream video using ffserver anymore.this is the command i used to send to ffserver:
[05:34:02 CEST] <quasarlog> ffmpeg -f avfoundation -i "1:"  -maxrate 2500k -threads:0 4 -threads:1 4 -vcodec copyts -movflags +faststart -pix_fmt yuv420p http://localhost:8090/feed1.ffm -loglevel 48
[05:34:12 CEST] <quasarlog> i have ffserver running locally
[05:34:45 CEST] <quasarlog> with the following conf:
[05:34:46 CEST] <quasarlog> conf follows:
[05:34:46 CEST] <quasarlog> <Feed feed1.ffm>
[05:34:46 CEST] <quasarlog>    File /tmp/feed1.ffm
[05:34:46 CEST] <quasarlog>    FileMaxSize 5M
[05:34:46 CEST] <quasarlog> </Feed>
[05:35:06 CEST] <quasarlog> <Stream test.flv>
[05:35:06 CEST] <quasarlog> Feed feed1.ffm
[05:35:06 CEST] <quasarlog> Format flv
[05:35:06 CEST] <quasarlog> #NoVideo
[05:35:06 CEST] <quasarlog> VideoCodec flv1
[05:35:07 CEST] <quasarlog> AVOptionVideo flags +global_header
[05:35:07 CEST] <quasarlog> PixelFormat yuv420p
[05:35:08 CEST] <quasarlog> VideoFrameRate 25
[05:35:08 CEST] <quasarlog> VideoGopSize 12
[05:35:09 CEST] <quasarlog> VideoBufferSize 0
[05:35:09 CEST] <quasarlog> VideoBitRate 1872
[05:35:10 CEST] <quasarlog> NoDefaults
[05:35:10 CEST] <quasarlog> StartSendOnKey
[05:35:34 CEST] <quasarlog> VideoSize 1280x800
[05:35:35 CEST] <quasarlog> PreRoll 0
[05:35:35 CEST] <quasarlog> Noaudio
[05:35:35 CEST] <quasarlog> </Stream>
[05:37:48 CEST] <quasarlog> when i use this, the output that results is this with the recently updated and rebuilt ffmpeg:
[05:37:48 CEST] <quasarlog> Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
[05:37:48 CEST] <quasarlog>   Metadata:
[05:37:48 CEST] <quasarlog>     creation_time   : now
[05:37:48 CEST] <quasarlog>     Stream #0:0, 0, 0/0: Video: flv1 (flv), 1 reference frame, yuv420p, 1280x800 (0x0), 1/25, q=1-10, 1872 kb/s, 1000k fps, 25 tbc
[05:37:48 CEST] <quasarlog>     Metadata:
[05:37:49 CEST] <quasarlog>       encoder         : Lavc56.34.100 flv
[05:37:49 CEST] <quasarlog> Stream mapping:
[05:37:50 CEST] <quasarlog>   Stream #0:0 -> #0:0 (rawvideo (native) -> flv1 (flv))
[05:37:50 CEST] <quasarlog> Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[05:37:51 CEST] <quasarlog> [AVIOContext @ 0x7f990b710ec0] Statistics: 0 seeks, 0 writeouts
[05:44:16 CEST] <quasarlog> http://pastebin.com/WV8D8Dgz
[06:00:40 CEST] <relaxed> quasarlog: -vcodec copyts is wrong
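(Spelled out: `-vcodec copyts` makes ffmpeg look for a codec literally named "copyts". These are two separate options, as in this sketch based on quasarlog's command; whether stream copy is compatible with that ffserver feed config is a separate question:)

```shell
# "-vcodec copy" requests stream copy; "-copyts" separately asks ffmpeg
# to preserve input timestamps rather than regenerating them.
cmd='ffmpeg -f avfoundation -i "1:" -vcodec copy -copyts http://localhost:8090/feed1.ffm'
echo "$cmd"
```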
[06:02:48 CEST] <Guest90273> is there a difference if the source is HD or SD  if output is SD?
[06:06:28 CEST] <zumba_addict> output from HD will be a little sharper
[06:07:00 CEST] <zumba_addict> but if you are not maximizing the output, sd will be sharp
[06:08:18 CEST] <Guest90273> you think if source is HD< it will be little better?
[06:18:33 CEST] <relaxed> Guest90273: yes
[06:19:06 CEST] <Guest90273> why would that be if output is SD
[06:21:54 CEST] <relaxed> because there's more detail
[06:54:13 CEST] <Guest90273> what happens if you play 5.1 audio on  2.0 system ?
[07:05:06 CEST] <DeadSix27> depends on the system and soundcard and software ?
[07:05:11 CEST] <DeadSix27> usually gets downmixed to 2.0
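A sketch of what "downmixed" means here: players fold the surround channels into the front pair with fixed gains. The ITU-style coefficients shown are the common convention, though exact values vary by implementation, and the sample values below are made up:

```shell
# L = FL + 0.707*FC + 0.707*SL  (and symmetrically for R).
# ffmpeg applies such a downmix itself with e.g.: ffmpeg -i in.mkv -ac 2 out.mkv
L=$(awk 'BEGIN { printf "%.3f", 0.5 + 0.707 * 0.2 + 0.707 * 0.1 }')
echo "$L"
```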
[07:14:15 CEST] <urmumstty> hey guys, i have 54 frames taken with a camera rig that does a kind of matrix bullet time effect (i guess) and i was thinking i could clean it up / trick it up a bit with ffmpeg since the original technicians are not interested, basically the cameras all seem poorly calibrated so as the rotation occures it jumps around a little, also i was thinking i could use some kind of interpolation on the pts to give it an ease at the start and end
[07:14:33 CEST] <urmumstty> another idea was to do some kind of decaying motion blur so it blurs a bit at the start and end but is clear through the middle
[07:14:55 CEST] <urmumstty> i've never tried to use ffmpeg for anything like this, i've used the filter graph a fair bit, is this the right tool, should it be possible, what do i do about the jitter
[07:15:03 CEST] <urmumstty> don't think i've seen any kindo f steady cam effect
[07:15:49 CEST] <urmumstty> so i guess i'm asking: is there some kind of frame steading algorithm avaliable, can i apply an equation to the pts, can i apply a motion blur effect with its strength defined by another equation
[07:16:08 CEST] <urmumstty> would be nice to take advantage of the hardware setup and polish it up a bit rather than the current half assed output
[08:29:17 CEST] <benoliver999> I'm getting skewed screenshots when using Debian wheezy (ffmpeg from the Debian repos) - would upgrading to a more recent version fix this?
[08:36:11 CEST] <dsl420> do you use debian testing? afaik a legit ffmpeg is only in the testing repos, debian has a fork in the repos. they changed it in testing afaik. just grab ffmpeg from github
[08:37:03 CEST] <benoliver999> I do not, but thank you for the information. I'll give it a go.
[08:37:30 CEST] <dsl420> and if you want screenshots from an anamorphic video you would need to adjust your command as well
[08:39:54 CEST] <benoliver999> Wait that's the problem I'm having
[08:40:33 CEST] <benoliver999> but I was told somewhere (wrongly?) that a more recent version of ffmpeg solves the problem on its own
[09:44:59 CEST] <jookiyaya> what happens if you have 5.1 surround audio on  2.0 system/speakers
[10:06:21 CEST] <Anoia> depends ont he audio chipset I think
[10:27:48 CEST] <Fish-> hi
[10:28:23 CEST] <Fish-> do you know how to issue the stss box in mp4 format using the ffmpeg api?
[15:18:16 CEST] <DiegoMax> im having some issues with av_seek_frame() on h.265 files
[15:18:19 CEST] <DiegoMax> is this a known issue ?
[15:19:30 CEST] <spaam> DiegoMax: maybe you should tell us what version you are using. then maybe someone know if its a known issue or not :D
[15:19:52 CEST] <DiegoMax> latest version from the repo, pulled lastnight
[15:20:06 CEST] <DiegoMax> let me give you an exact version
[15:20:08 CEST] <DiegoMax> just a sec
[15:20:29 CEST] <DiegoMax> is there any file in the source tree with the version number or should i use the runtime version() method ?
[15:26:58 CEST] <DiegoMax> well
[15:27:00 CEST] <DiegoMax> i found version.sh
[15:27:05 CEST] <DiegoMax> and this is what it shows: N-71425-gf4f3065
[15:27:16 CEST] <DiegoMax> not sure if thats the library version or what
[15:27:38 CEST] <DiegoMax> and at runtime, avformat_version() returns 3677796
[15:27:54 CEST] <DiegoMax> the issue is that only with h.265 files, av_seek_frame() is returning -1 no matter what
[15:28:04 CEST] <DiegoMax> with any other code it works just fine
[16:12:36 CEST] <DiegoMax> i keep failing to understand the licensing model of the popular video codecs
[16:12:55 CEST] <DiegoMax> if i understand correctly, H.264 is a spec, and then we have things like x264 which are open source encoders that adhere to that spec
[16:13:30 CEST] <DiegoMax> so if i write an application that ENCODES h.264 im supposed to pay a licensing fee to that H.264 people, even if im using an open source encoder
[16:14:13 CEST] <DiegoMax> so the obvious question is, why nobody came with an open source spec ? is it something utterly complicated that no more than a few selected people can do ? or am i missing something in this story ?
[16:16:55 CEST] <nasojlsu> it is utterly complicated.
[16:17:19 CEST] <DiegoMax> but i think there are way more complicated things in this world
[16:17:22 CEST] <DiegoMax> than a video encoder
[16:17:25 CEST] <DiegoMax> that are open source and public
[16:17:42 CEST] <DiegoMax> like the linux kernel, to name one.
[16:17:42 CEST] <nasojlsu> lots of engineers come together and create something that IEEE approves and bam...
[16:18:36 CEST] <c_14> DiegoMax: there are open specs, H.264/H.265 just aren't
[16:18:53 CEST] <DiegoMax> yes my question is, why nobody ever came with an open spec ?
[16:18:58 CEST] <DiegoMax> or is there any ?
[16:19:22 CEST] <DiegoMax> what is an example of an open spec that is as good as h264/265 ?
[16:20:36 CEST] <c_14> vp9 is pretty decent (as a codec); vorbis and opus are both great
[16:20:43 CEST] <c_14> As are ogg and matroska
[16:21:21 CEST] <DiegoMax> isnt VP9 owned by Google
[16:21:22 CEST] <DiegoMax> ?
[16:21:59 CEST] <nasojlsu> yup
[16:22:02 CEST] <c_14> Developed by, not owned by
[16:22:07 CEST] <DiegoMax> i think its owned too
[16:22:13 CEST] <DiegoMax> like, they own the spec
[16:22:16 CEST] Action: DiegoMax checks
[16:22:40 CEST] <c_14> daala is the next big open video spec to keep an eye on
[16:22:42 CEST] <DiegoMax> hm interesting
[16:22:55 CEST] <DiegoMax> according to wikipedia: VP9 is an open and royalty free[3] video coding format being developed by Google.
[16:23:08 CEST] Action: DiegoMax looks at [3]
[16:23:34 CEST] <c_14> afaik vp9 is relesed under a BSD license
[16:23:53 CEST] <c_14> So Google might be the majority license holder, but that doesn't mean they own it.
[16:23:56 CEST] <DiegoMax> so it would be safe to assume, that the big browsers dont support vp9 natively because someone is paying them not to ?
[16:24:13 CEST] <DiegoMax> because if they did, im sure it would be widely adopted
[16:24:57 CEST] <c_14> firefox supports vp9
[16:25:01 CEST] <c_14> iirc so does chrome
[16:25:11 CEST] <DiegoMax> chrome does
[16:25:28 CEST] <DiegoMax> i dont know for firefox
[16:25:32 CEST] <c_14> No clue about IE
[16:25:35 CEST] <DiegoMax> but safari and IE im sure they dont
[16:25:50 CEST] <DiegoMax> and as far as i know android doesnt either
[16:25:58 CEST] <DiegoMax> and of course iOS doesnt either
[16:25:59 CEST] <DiegoMax> :/
[16:28:36 CEST] <c_14> Well, google did recently partner up with people for hardware-decoding support for vp9. So it shouldn't be long until at least newer devices get it.
[16:32:27 CEST] <JEEBsv> "< DiegoMax> like, they own the spec" <- are you implying that there is a spec :P
[16:32:44 CEST] <DiegoMax> and am i wrong ?
[16:32:55 CEST] <JEEBsv> there is "kind of a spec" for VP8, but with VP9 they decided that they don't need one
[16:33:15 CEST] <DiegoMax> so how they settle on the format....
[16:33:24 CEST] <JEEBsv> implementation is the definition
[16:33:42 CEST] <JEEBsv> is which is why we effectively have an implementation bug in the "format" right now
[16:33:55 CEST] <JEEBsv> because someone noticed it after the implementation was first frozen'ish
[16:33:58 CEST] <DiegoMax> i was about to say that...
[16:34:10 CEST] <DiegoMax> im by no means a video codec engineer
[16:34:26 CEST] <JEEBsv> and basically they do not expect anyone else to implement it
[16:34:31 CEST] <DiegoMax> but specs are needed for much simpler things.... i cant imagine how u can build that without a spec that everyone agrees to...
[16:34:40 CEST] <JEEBsv> the guy who implemented it in libavcodec worked for G as well
[16:35:29 CEST] <DiegoMax> now let me see if i am understanding this right
[16:35:31 CEST] <JEEBsv> also G generally seems to only care that you can decode it, encoder or realtime coding was never in their plans looking at how long there was zero multithreading in the encoder
[16:35:56 CEST] <DiegoMax> suppose i write a tool that encode VP9 files
[16:35:57 CEST] <JEEBsv> now there is in-picture multithreading but that of course is at the expense of compression
[16:36:02 CEST] <DiegoMax> i then make it closed source
[16:36:07 CEST] <DiegoMax> and i sell it for $1000
[16:36:33 CEST] <stefkos> lo
[16:36:33 CEST] <DiegoMax> can anyone come after me an tell me "you have to pay a license fee" ?
[16:36:37 CEST] <JEEBsv> of course
[16:36:47 CEST] <JEEBsv> that possibility never goes away
[16:37:01 CEST] <DiegoMax> then its not "open" and "bsd licensed"
[16:37:04 CEST] <DiegoMax> as the wiki says
[16:37:10 CEST] <DiegoMax> which goes back to my original rant :p
[16:37:23 CEST] <JEEBsv> the software is open and the code is bsd licensed for libvpx, don't mix software licenses and patents etc
[16:37:30 CEST] <JEEBsv> those are two completely different problem areas
[16:38:04 CEST] <stefkos> I use configure with flags  ./configure --toolchain=msvc --enable-cross-compile --arch=i686
[16:38:21 CEST] <stefkos> and then Im getting a message
[16:38:26 CEST] <stefkos> Must specify target arch and OS when cross-compiling
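(That error appears because `--enable-cross-compile` disables configure's host guessing, so `--target-os` must be given explicitly alongside `--arch`. A sketch; when building natively inside an MSYS shell for MSVC, dropping `--enable-cross-compile` entirely may be the simpler fix:)

```shell
./configure --toolchain=msvc --arch=i686 --target-os=win32
# or, if genuinely cross-compiling:
./configure --toolchain=msvc --enable-cross-compile --arch=i686 --target-os=win32
```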
[16:38:29 CEST] <JEEBsv> the problem is that for AVC and HEVC you have pre-set rules and some licensing (whether or not you actually pay something depends on the licensing, for example you could distribute 100k (I think) copies of a decoder for AVC for fere per year)
[16:38:46 CEST] <JEEBsv> while vp8/9 are supposedly "free of any such stuff"
[16:39:10 CEST] <JEEBsv> in both cases someone can come to you and threaten to sue
[16:39:11 CEST] <stefkos> I want to compile Windows/VisualStudio includes and libs
[16:41:00 CEST] <JEEBsv> also AVC and HEVC are actually formats openly developed (I am registered on the jct-vc mailing list that does HEVC development), and there is a spec
[16:41:19 CEST] <JEEBsv> while G decided to do VPx development just like On² used to do it
[16:41:26 CEST] <JEEBsv> behind closed doors
[16:41:40 CEST] <DiegoMax> stefkos what is your host platform ?
[16:41:43 CEST] <JEEBsv> the only difference is that you now get the source code right away with libvpx
[16:42:15 CEST] <stefkos> DiegoMax, I must go, will back in 30 mins...
[16:42:20 CEST] <stefkos> brb
[16:42:20 CEST] <DiegoMax> ok
[16:42:33 CEST] <DiegoMax> your problem is most likely that you should set ur build to a different directory
[16:42:37 CEST] <DiegoMax> when cross compiling
[16:42:38 CEST] <JEEBsv> and I would be less negative about VPx if only either the formats were brilliant or the implementations fucking awesome
[16:43:06 CEST] <JEEBsv> right now VP8 is a limited copy of AVC and VP9 is a HEVC rip-off'ish thing
[16:43:15 CEST] <DiegoMax> yes
[16:43:27 CEST] <DiegoMax> i have been reading about some very respectable people claiming
[16:43:41 CEST] <DiegoMax> that with VP9 they just modified hevc enough to avoid patent infringement
[16:43:49 CEST] <DiegoMax> if thats true, it is really sad then
[16:44:14 CEST] <JEEBsv> well, if the result is nice then using good ideas is good
[16:44:29 CEST] <JEEBsv> anyways, off to public transport for me...
[16:44:34 CEST] <DiegoMax> lol ok
[16:44:35 CEST] <DiegoMax> cya
[19:22:17 CEST] <bencc> is ffmpeg suitable for mixing dynamic rtmp streams with audio and video?
[19:22:33 CEST] <DiegoMax> what you mean by "mixing" ?
[19:22:41 CEST] <DiegoMax> mixing as in editing live ?
[19:22:52 CEST] <bencc> it's an audio/video conference where participants may come and go
[19:23:03 CEST] <bencc> and start and stop mic and cam dynamically
[19:23:18 CEST] <DiegoMax> im not 100% sure, but i doubt it can do that out of the box
[19:23:32 CEST] <bencc> DiegoMax: not out of the box, by writing C code
[19:23:39 CEST] <DiegoMax> by writing C code, for sure
[19:23:56 CEST] <bencc> the question is, can it handle syncing of streams, mixing, adding and removing streams...
[19:24:05 CEST] <bencc> queueing...
[19:24:21 CEST] <DiegoMax> i doubt it, thats what you would have to write... for sure
[19:24:36 CEST] <DiegoMax> have you considered an streaming server instead ?
[19:25:29 CEST] <bencc> which one?
[19:25:37 CEST] <DiegoMax> as in, with ffmpeg you have the ability to do realtime rtmp, but you will still need to write the part to feed the buffers with your mixed streams and such
[19:26:02 CEST] <bencc> I was hoping ffmpeg can help me with that like gstreamer
[19:27:12 CEST] <DiegoMax> ffmpeg can build the final stream for sure
[19:27:28 CEST] <DiegoMax> what im almost 100% certain ffmpeg can NOT do, is mixing several realtime streams into one, queuing, and stuff
[19:34:59 CEST] <stefkos> I think I set up the whole environment to compile ffmpeg for VS on windows
[19:35:14 CEST] <stefkos> now I can compile whole archive without problems
[19:35:29 CEST] <stefkos> but when I will add --enable-libtheora
[19:35:55 CEST] <stefkos> it cannot find lib. But I have it in minsys/user/lib/  (something like this) and in VC/lib/ directory
[19:36:07 CEST] <stefkos> link -nologo -libpath:ARGEADDRESSAWARE -out:./ffconf.C1yHs8yD.exe ./ffconf.QOqlO0hA.o theora_static.lib ogg_static.lib psapi.lib advapi32.lib shell32.lib
[19:36:07 CEST] <stefkos> LINK : fatal error LNK1181: cannot open input file 'theora_static.lib'
[19:37:27 CEST] <stefkos> I have this message during   ./configure
[19:40:04 CEST] <stefkos> hmm maybe I made mistake, I should pass  --extra-ldflags
[19:40:07 CEST] <stefkos> lets see
[19:49:18 CEST] <stefkos> nope didnt help
[21:33:24 CEST] <Jobeanie123> Hello. I just learned about ffmpeg. It's neat so far.
[23:37:08 CEST] <Prelude2004c> hey guys.. question ... i tried ffmpeg -i < input transport stream > .. -acodec copy and vcodec copy... to mpegts udp://anotherserver:port ..
[23:37:18 CEST] <Prelude2004c> some reason only one program is available on the other stream
[23:37:33 CEST] <Prelude2004c> should it not show all the program id's in the transport stream?
[23:43:31 CEST] <c_14> -map 0
[23:43:36 CEST] <c_14> (probably)
[23:58:41 CEST] <Prelude2004c> um.. if i use -map 0 it will take everyhing?
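(Yes: by default ffmpeg keeps only one "best" stream per type, which is why a single program survives. `-map 0` selects every stream of input 0, though whether the original program structure is preserved depends on the mpegts muxer. Sketch, host/port hypothetical:)

```shell
cmd='ffmpeg -i input.ts -map 0 -c copy -f mpegts udp://anotherserver:1234'
echo "$cmd"
```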
[00:00:00 CEST] --- Fri Apr 17 2015

