[Ffmpeg-devel-irc] ffmpeg-devel.log.20140728

burek burek021 at gmail.com
Tue Jul 29 02:05:02 CEST 2014


[00:00] <BBB> vp9 is pretty good
[00:00] <BBB> googles PR is terrible
[00:00] <BBB> Im like the only person in the world blogging about vp9, and thats only because I dont work at google
[00:00] <Timothy_Gu> and cabac/cavlc/rc are all lossless right?
[00:00] <BBB> (anymore)
[00:00] <BBB> yes, range coders such as rac/cabac are all lossless
[00:00] <Timothy_Gu> PR?
[00:00] <BBB> press
[00:01] <Timothy_Gu> are cabac/rac better than huffman?
[00:01] <BBB> yes
[00:01] <BBB> huffman is like a poor-mans version of trying to be a little bit like range-coding but not really getting it
[00:02] <BBB> like, its not a bad idea, but range-coding is just theoretically much better
[00:02] <BBB> or more granular, if you will
[00:02] <BBB> if youre lucky, one of the x264 people can explain the deep internals of range coding to you, maybe michaelni understands it too
[00:03] <BBB> I understand it at a more basic level but not completely mathematically
[00:03] <Timothy_Gu> but is range coding slower than huffman?
[00:03] <BBB> yes
[00:03] <BBB> huffman is like this
[00:03] <BBB> lets say I have a tree of modes, 3 options
[00:03] <BBB> A B or C
[00:03] <BBB> then I could create a coding tree where first bit zero means A
[00:03] <BBB> first bit 1, second bit 0 means B
[00:04] <BBB> and 11 means C
[00:04] <BBB> right?
[00:04] <BBB> thats a coding tree
[00:04] <Timothy_Gu> ya
[00:04] <BBB> now, what if A happens much less frequently than B or C?
[00:04] <BBB> A is coded most efficiently
[00:04] <Timothy_Gu> wait, why is C 11 and not 10
[00:04] <Timothy_Gu> ?
[00:04] <BBB> (1 bit, versus 2 bits for the others)
[00:04] <BBB> 10 is B
[00:04] <Timothy_Gu> derp
[00:05] <BBB> 0 is A, 10 is B, 11 is C
[00:05] <BBB> so were wasting bits by making A the preferred token in the tree
[00:05] <BBB> we adjust that dynamically by huffman coding these things, which means the token with the highest occurrence will get the shortest tree node
[00:05] <BBB> so B would become 0, C 10, and A 11
[00:05] <BBB> this is all dynamic and automated, and works because encoder and decoder agree
[00:06] <BBB> you can also code it in a frame header as an init pattern
[00:06] <BBB> but that sucks, because I said B and C both occurred a lot, and A didn't
[00:06] <BBB> so only B is efficient, not C
[00:06] <BBB> range-coding makes this more fine-grained and allows sub-bit resolution
[00:06] <BBB> so B/C get 0.8 bits and A gets 4 bits or so
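To put rough numbers on the Huffman-vs-range-coding gap BBB describes, here is a small self-contained C example; the symbol probabilities are invented purely for illustration:

    /* Average cost of the reordered tree (B=0, C=10, A=11) versus the
     * entropy bound that a range coder can approach. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p[3]   = { 0.10, 0.45, 0.45 };  /* P(A), P(B), P(C): A is rare */
        int    len[3] = { 2, 1, 2 };           /* code lengths: A=11, B=0, C=10 */
        double huff = 0, entropy = 0;

        for (int i = 0; i < 3; i++) {
            huff    += p[i] * len[i];       /* bits actually spent */
            entropy += -p[i] * log2(p[i]);  /* theoretical minimum */
        }
        /* Huffman can never spend less than 1 bit per symbol; range coding
         * effectively assigns fractional bits, e.g. ~1.15 bits for B and C. */
        printf("huffman: %.3f bits/symbol, entropy: %.3f bits/symbol\n",
               huff, entropy);
        return 0;
    }

With these made-up probabilities the tree costs 1.55 bits/symbol against an entropy of about 1.37 bits/symbol, which is the "sub-bit resolution" advantage in miniature.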
[00:07] <wm4> so you never encode when the symbols switch?
[00:07] <wm4> and it's implicitly from what's encoded/decoded?
[00:07] <BBB> you could encode it
[00:07] <BBB> its up to the codec to allow that in the frame header or not
[00:07] <kierank> 11:02 PM <BBB> if youre lucky, one of the x264 people can explain the deep internals of range coding to you,  --> the ian richardson book explains it well
[00:07] <BBB> most, if not all, formats allow coding it in the header
[00:07] <BBB> ah, cool
[00:07] <BBB> Timothy_Gu: so there you go, a book :-p
[00:08] <mraulet> nice book for beginners
[00:09] <Timothy_Gu> http://www.amazon.com/H-264-Advanced-Video-Compression-Standard/dp/0470516925 this one?
[00:09] <JEEB> yeah
[00:10] <Timothy_Gu> that's pretty expensive...
[00:10] <mraulet> I'm more familiar with this one
[00:10] <mraulet> http://books.google.co.uk/books/about/H_264_and_MPEG_4_video_compression.html?id=ECVV_G_qsxUC&redir_esc=y
[00:10] <JEEB> that's the older version
[00:10] <JEEB> of the same thing
[00:10] <JEEB> the newer one just has MPEG-4 Part 2 removed
[00:11] <JEEB> for obvious reasons
[00:11] <wm4> what reasons?
[00:11] <JEEB> MPEG-4 Part 2 isn't really /that/ interesting, though it was a nice academic experiment
[00:12] <mraulet> I think it is good to have an introduction with MPEG-4 part 2
[00:12] <mraulet> it is easier to understand new standards
[00:13] <JEEB> I don't disagree with that, just that if you're going to grab the book it's most useful to grab the newest one
[00:13] <JEEB> which just happens to have had MPEG-4 Part 2 removed
[00:13] <JEEB> otherwise you'd have to grab both :P
[00:13] <mraulet> life is like this
[00:13] <JEEB> granted, I guess not that much has changed between the versions otherwise (I guess)
[00:13] <Timothy_Gu> at what age did you guys first read this kind of book?
[00:14] <JEEB> don't worry, I still haven't finished it and I'll be 30 in three years :P
[00:15] <kierank> mraulet: nah mpeg-4 part 2 has crazy stuff
[00:15] <kierank> he explains crazy things in there
[00:15] <kierank> mpeg-2 is a nice place to start
[00:15] <mraulet> not in this book
[00:15] <mraulet> they are just explaining the simple part of it
[00:15] <mraulet> what xvid/divx did extract from the standard
[00:16] <mraulet> not the crazy « other » stuff
[00:17] <BBB> lol mpeg4
[00:17] <BBB> rotational predictors, no?
[00:17] <BBB> and other wtfy stuff
[00:17] <BBB> Timothy_Gu: I wrote my first decoder 6 years ago or so, so youre well ahead of the curve
[00:18] <BBB> Timothy_Gu: this stuff isnt hard, just takes some sitting down and letting it all come together (thats what some people use their PhD for)
[00:19] <Timothy_Gu> ok, thank you all.
[00:19] <Timothy_Gu> gotta go though
[00:20] <trn> Is there some standard way of encoding per-frame metadata in any native container format, or maybe another stream in a NUT or FFM container?
[00:21] <wm4> maybe if NUT supports packet side data?
[00:22] <trn> Right now I'm abusing PTS by offsetting it based on wall clock time at the start.
[00:23] <trn> There are lots of disadvantages to that I can already see myself running up against.
[00:24] <michaelni> nut should support side data
[00:24] <trn> I start this on Monday.  
[00:25] <michaelni> as well as per packet metadata
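As a hedged sketch of what attaching per-packet wall-clock metadata could look like with the libavcodec API of this era (the "wallclock" key and the helper function are invented for illustration, and whether a given muxer, NUT included, writes this side data through is exactly what would need checking):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <libavcodec/avcodec.h>

    /* Attach the current wall-clock time to a packet as a key/value pair.
     * AV_PKT_DATA_STRINGS_METADATA carries a list of NUL-terminated strings. */
    static int attach_wallclock(AVPacket *pkt)
    {
        char val[32];
        int n = snprintf(val, sizeof(val), "%lld", (long long)time(NULL));
        int size = sizeof("wallclock") + n + 1;   /* key NUL + value NUL */
        uint8_t *sd = av_packet_new_side_data(pkt, AV_PKT_DATA_STRINGS_METADATA, size);
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd, "wallclock", sizeof("wallclock"));
        memcpy(sd + sizeof("wallclock"), val, n + 1);
        return 0;
    }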
[00:25] <trn> A similar commercial solution uses an empty MJPEG stream with Exif data.
[00:27] <kierank> in TS there is plenty of space for that
[00:27] <trn> Yeah, they are using ts for it
[00:27] <trn> But there has to be a better way
[00:31] <trn> Thankfully I don't need wall clock, GPS, etc. every frame.  A few ms would be good enough, as long as the operator can put two live streams side-by-side without visible desync.
[00:32] <trn> GPS is also for sports, so the operator can set notifications to themselves for when a received stream is in or out of a certain boundary.
[00:33] <trn> For example, a GoPro sending a live stream on a mountain biker.
[00:33] <trn> So you can automatically alert when he's in the area and it'll make the closest available scene cameras easily available to the operator.
[00:35] <trn> I can't rely on setting the PTS offset in these cases because of the latency introduced by the gopro live itself, the phone's 4g/lte, etc... before it gets to the server to be timestamped.
[00:36] <wm4> you could abuse subtitle streams (lol)
[00:36] <wm4> would make a pretty simple solution that works with common tools
[00:38] <trn> Hrrm, could work, sure :)  lol
[00:39] <trn> I'm not familiar with them at all, are they all image formats like DVD or SVCD?
[00:40] <trn> I can RTFM, sorry :)
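A hedged sketch of wm4's subtitle-stream idea using the command-line tool; the file names are placeholders, and whether every container accepts a text subtitle track alongside A/V needs checking (Matroska does):

    ffmpeg -i live_input.ts -i metadata.srt -map 0 -map 1 -c copy -c:s srt tagged_output.mkv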
[00:44] <Mavrik> uh
[00:44] <Mavrik> isn't TimeCode what you're looking for? O.o
[00:45] <trn> Mavrik: I need actual wallclock time, which might not be monotonic by design.
[00:45] <trn> DST adjustments, leap seconds, and all. :b
[00:46] <Mavrik> Yes, isn't TimeCode used for that? That is, synchronizing video to realtime.
[00:46] <Mavrik> or did I mix something up badly? :)
[00:47] <cone-621> ffmpeg.git 03Mickaël Raulet 07master:3b777db13200: hevc: remove non necessary parameters to ff_hevc_set_qpy
[00:47] <trn> I'm a total noob when it comes to this sort of stuff, I might be reinventing the wheel without knowing it.
[00:48] <trn> Mavrik: I also need to implement multiple time stamps that can be added by anything that does a remux of the stream along the way.
[00:49] <trn> Application of this is to provide endpoints with an estimate of latency.
[00:49] <kierank> Mavrik: no
[00:49] <kierank> timecode is not wallclock
[00:50] <Mavrik> yeah, just read up on SMTPE spec
[00:50] <Mavrik> ignore me then :)
[00:50] <Mavrik> *SMPTE
[00:50] <trn> Yeah, timecode is also monotonic by spec
[00:51] <trn> Wikipedia says otherwise from what I quickly googled: https://en.wikipedia.org/wiki/SMPTE_timecode#Discontinuous_timecode.2C_and_flywheel_processing
[00:53] <cone-621> ffmpeg.git 03Mickaël Raulet 07master:772f7f4eddbc: hevc: fix skip_flag
[01:02] <trn> Final noob questions - is there some sort of easy way to determine what sort of latency I'd be introducing during a remuxing process using ffmpeg's libraries?
[01:06] <cone-621> ffmpeg.git 03Anton Khirnov 07master:77ef9fd1e938: hevc: eliminate unnecessary cbf_c{b,r} arrays
[01:07] <trn> And is there somewhere where it's commonly known what latency ffmpeg or its libraries introduce in libavfilter or libavformat?
[01:08] <Compn> trn : depends on options, there are ways to get ultra low latency live encodings
[01:08] <trn> Like, what sort of latency does the fillin code add, etc?
[01:08] <Compn> people keep those things secret it seems 
[01:08] <trn> hehe
[01:09] <Compn> because they build business models around it
[01:09] <trn> Compn: Yeah, familiar with Streambox Avenir? :)
[01:10] <Compn> for example, gaikai, online video gaming cloud service. play games in your web browser... you know how much latency you have to have to get that working ? :)
[01:10] <trn> That is competition, but my trick isn't the custom layer-2/bonding stuff they do, or their custom codecs.
[01:11] <trn> If I can know the latency ahead of time that certain things introduce, and reduce it where I can, and have good latency estimation, I can use latency hiding techniques ...
[01:12] <trn> Compn: Like for a live interview with three seconds of latency... Instead of asking a short question, make sure the interviewer and interviewee know the latency and you can send a signal along with the video.
[01:12] <trn> ask question, press button, answer question, press button ...
[01:13] <kierank> they don't use custom codecs
[01:13] <Compn> its x264 (h264) :P
[01:13] <Compn> you mean like send the questions ahead of time ? 
[01:13] <trn> Yes, correct. :) 
[01:13] <Compn> trn : or encode two streams, one low quality that streams faster than the hd stream ? :P
[01:14] <Compn> probably reporters couldnt handle such tech. hard part is training them not to trip over the cable on the floor
[01:14] <trn> Really?  I failed trying to figure out what Streambox sends out.
[01:14] <trn> 9300.. they call it ACT-L3
[01:14] <Compn> to answer your question, no i dont know streambox stuff
[01:15] <trn> Anyway tho, this isn't for reporters, this is to allow smaller and smaller teams to cover sporting events, church events, for live production.
[01:15] <Compn> ah
[01:15] <trn> With pan/zoom remote IP cameras, gopro w/wifi, etc.
[01:16] <trn> and an operator who works on a laptop
[01:17] <trn> Using location data and stream metadata allows tons of stuff to be automated...
[01:17] <Compn> oh , so studio in a box type thing ?
[01:17] <trn> Essentially.
[01:18] <trn> Once it works it's going to be a free "on the cloud" kind of thing which can accept input via HLS/RTMP/RTP/etc and stream the video back all via HTML5.
[01:19] <trn> latency will be unacceptable for live production but you can pay me for a local hardware box :)
[01:20] <Compn> press button eh? why not have mic-activated video mixing, so whoever is talking gets on screen or auto splitscreen ? :D
[01:20] <Compn> plus remote control for adv users
[01:20] <trn> I imagine a car race, for example.  With 10 cameras positioned around the track... and camera in the car w/GPS.
[01:20] <trn> with the known locations of the 10 cameras, you can tell the track cameras to follow the car based on location.
[01:20] <trn> and it'll cut them in and out automatically, etc.  
[01:20] <Compn> ah
[01:22] <trn> so being able to accurately estimate the latency is important.  
[01:23] <Compn> just have to run the ole benchmarks yourself after its setup :)
[01:23] <trn> Yeah, I was hoping that it would all be documented someplace.  :)
[01:23] <Compn> i dont think any of us have those benchmarks available in doc
[01:24] <Compn> only people who are using ffmpeg for broadcast stuff
[01:24] <Compn> which ... are keeping it a secret
[01:24] <Compn> maybe ask the dirac guys
[01:24] <trn> I've now got the stream switching, combining, mixing all done.
[01:24] <Compn> you need any venture capital ? :p
[01:24] <trn> I've also got most of the logic done for the operator.
[01:24] <trn> Now I just need to get timestamps into the streams, etc. :)
[01:24] <trn> Compn: You offering? :)
[01:25] <Compn> always looking for investments
[01:25] <trn> I'm probably not the guy to invest with just yet to be honest.
[01:25] <Compn> i know, sounds a bit niche right now :P
[01:26] <trn> I work 10 hours a day on the phone selling roofing and construction stuff and work on this on weekends and nights.
[01:26] <Compn> gopro is on the walmart commercials now so maybe its going to get bigger
[01:26] <trn> I used to do computer stuff professionally however, but now it's starting to come together.
[01:26] <Compn> youtube needs to add more support for live broadcasters
[01:26] <trn> Once I get a first working minimal-feature web version up, I'll remember you. 
[01:28] <trn> I also don't have an Apple dev account yet either, but android stuff is easy to work with.
[01:29] <trn> I mainly just need phones for GPS, etc.  Sports is going to be my primary focus at first.
[01:29] <trn> Produce a great live broadcast with multiple cameras, side-by-side support, etc.  
[01:30] <trn> And then if you saved the original streams, you'll be able to reproduce the live broadcast later from the script file it writes, tweak it, fix stuff, and then export a better edited replay.
[01:31] <trn> So you can live broadcast then upload a slightly corrected version to youtube or whatever.
[01:32] <wm4> [FFmpeg-devel] Reintroducing FFmpeg to Debian
[01:32] <j-b> wm4: while I don't see an issue with that, some facts are a bit distorted...
[01:33] <trn> I'm extensively using ffmpeg stuff for all of this, so if I end up making any money on it, ffmpeg will get my donations.
[01:33] <wm4> j-b: which ones? I didn't switch on my lawyer mode (since it's broken), but all in all it looks correct...
[01:34] <Compn> whoa thats a long mail 
[01:35] <j-b> wm4: mostly the "more codecs" part showing 3 prores decoders (WTF) and introducing more HW-codecs
[01:36] <j-b> wm4: but I don't really care, since VLC compiles fine (of course)
[01:36] <wm4> I don't see anything about that in that mail
[01:38] <Compn> prores isnt in the mail, although there are a lot of debian bug urls 
[01:38] <j-b> it is
[01:38] <j-b> link 4
[01:38] <j-b> and the first argument
[01:38] <Compn> thats what wm4 was saying
[01:38] <Compn> its in a link, not in the mail
[01:39] <wm4> haha
[01:39] <j-b> but it's the number one argument
[01:39] <Compn> lol
[01:39] <Compn> i didnt write it :P
[01:39] <j-b> I know
[01:40] <Compn> theres a few codecs but yeah i dont remember any big things
[01:40] <Compn> probably wise not to mention prores situation :\
[01:43] <Compn> just more of the same ego wars
[01:43] <wm4> big things are of course merged back by libav
[01:44] <Compn> didnt realize crystalhd was removed ?
[01:44] <wm4> what is that even
[01:45] <Compn> its a little pcie? card that decodes h264
[01:45] <Compn> or mpcie something like that
[01:45] <Compn> kind of an add-on for mac puters that didnt have hwaccel gpu
[01:46] <Compn> few years ago
[01:46] <Compn> also works on other os too of course
[01:49] <Compn> i just like to pick on mac computers :)
[01:54] <BBB> ffmpeg should just remove all of its prores decoders except the one it thinks is best
[01:54] <BBB> the whole having 3 decoders is too ridiculous for words
[01:54] <BBB> or encoders
[01:56] <Compn> take benchmarks, quality, compatibility samples between the three and submit which one is best. then we'll kill the other two?
[01:57] <Compn> also which one is still maintained ?
[01:57] <Compn> because some devels have stopped working on them... i think
[02:08] <trn> I can understand having multiple encoders if some are better than others in a particular circumstance.
[02:09] <trn> About the NUT format... I notice there is libnut in ffmpeg/mplayer git, and also an internal NUT support.
[02:09] <wm4> AFAIK libnut is deader than dead
[02:09] <trn> OK, I had much better luck with internal so I disabled libnut.
[02:17] <Compn> no one picked up libnut , so its more of an internal format at this point
[02:17] <trn> Is there any documentation for the zeromq support other than the source?
[02:18] <Compn> which means any editing software wont support it...
[02:18] <Compn> only ffmpeg
[02:18] <Compn> and anything that uses lavf, which is a lot
[02:19] <trn> Compn: I'm using NUT as an internal transport format myself, so I'm fine with that, I just hope it doesn't disappear. :)
[02:19] <trn> If it does I can look into FFM.
[02:21] <Compn> nut isnt going anywhere :)
[02:21] <Compn> i think a few people use nut internally.
[02:28] <trn> Compn: Final thing before I get back to coding ...
[02:28] <trn> I think there might be an issue with how ffmpeg writes segmented MPEG-2 TS where it won't pass Apple's HLS test.
[02:31] <trn> VLC made me notice: it's resetting the continuity counter between the segments produced.
[02:32] <trn> I will probably file a bug for that.
[02:32] <Compn> you are using latest git ffmpeg ?
[02:32] <Compn> there were hls changes recently , although i dont remember the details
[02:32] <trn> Yes, from today.
[02:33] <Compn> then yes report bug
[02:34] <trn> Hrrm, already reported, I just checked. :b
[02:34] <trn> https://trac.ffmpeg.org/ticket/2828
[02:34] <trn> Hasn't been updated, I'll isolate the minimal case and add some info if I can.
[02:37] <Compn> post a request for that guys new patch :P
[02:38] <trn> Might save me a little work, at least.
[02:38] <trn> I have completely rewritten the m3u8 segment stuff, btw, but it's horrible code.
[02:39] <trn> Generally you can specify the same output m3u8 file for multiple outputs, and it writes 1 merged output file that is suitable for adaptive bitrate streaming.
[02:42] <trn> Properly done, it would add the extended hints such as BANDWIDTH, CODECS, and RESOLUTION based on the output.
[02:42] <trn> Right now I just write it based on an option. :)
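For illustration, the merged master playlist trn describes would look roughly like this; bandwidths, paths, and codec strings are invented for the example:

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360,CODECS="avc1.42e01e,mp4a.40.2"
    low/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
    high/index.m3u8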
[02:44] <trn> Compn: In the meantime, there is a workaround for the TS discontinuity thing.
[02:44] <trn> Adding an EXT-X-DISCONTINUITY between each media file, normally used to serve advertisements.
[02:44] <trn> But it's in the spec per apple and a valid workaround.
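In a media playlist the workaround looks like this (segment names and durations are placeholders); the tag tells clients to tolerate, among other things, a continuity-counter reset at that boundary:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:10.0,
    segment0.ts
    #EXT-X-DISCONTINUITY
    #EXTINF:10.0,
    segment1.ts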
[02:47] <trn> Compn: I also have a patch that implements MPEG-2 TS AES-128 encryption per Apple, but it depends on openssl.
[02:48] <trn> Sort of interesting, anything that uses Apple HLS (which is everything) has basically no real DRM at all.
[02:49] <trn> Unless they're using PlayReady, which they can.
[02:49] <trn> But the AES method passes the key location via the playlist and it's accessed via HTTPS.
[02:50] <trn> The only real secure way is to use client certificates so that a client connecting to the key server can't just grab them, but almost no real applications ever do such a thing.
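Concretely, the weakness trn describes is that the key location ships in the playlist itself, so any client that can fetch the playlist can usually fetch the key; a typical (illustrative) key line:

    #EXT-X-KEY:METHOD=AES-128,URI="https://keys.example.com/key1.bin",IV=0x00000000000000000000000000000001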
[02:51] <trn> So I use my VPN, mitmproxy, and squid and a little program I wrote that sends any HLS video to my chromecast. :)
[02:51] <trn> bbl
[03:10] <jamrial> michaelni: regarding versions, the RELEASE file in the master branch should be updated
[03:11] <jamrial> not sure what's proper, if 2.3.git or 2.4.git as Timothy_Gu suggested the other day
[03:16] <Compn> trn : neat. tell me when i can rip hulu again :P
[03:16] <Compn> i think most people are using adobehds.php 
[03:16] <Compn> er thats for adobe
[03:16] <Compn> durr
[03:16] Action: Compn brain hurts
[03:16] <Compn> hulu was just using rtmp
[03:17] <michaelni> jamrial, dunno, ill change it to 2.3 for now, we can always change to 2.4 if thats preferred
[03:17] <michaelni> 2.2 is certainly wrong though
[03:17] <cone-621> ffmpeg.git 03Michael Niedermayer 07master:a7762384cf9b: RELEASE: update, we are after 2.3 not 2.2
[05:22] <cone-621> ffmpeg.git 03Michael Niedermayer 07master:2f717be22a93: avcodec/avdct: Add avcodec_dct_get_class()
[05:23] <cone-621> ffmpeg.git 03Michael Niedermayer 07master:cab8fc624b40: avfilter/vf_scale: fix log message category
[05:23] <cone-621> ffmpeg.git 03Michael Niedermayer 07master:a06c14a48ee9: avfilter/vf_spp: support setting dct avoptions from the filter graph string
[06:28] <michaelni> anyone has any comments about the "frame-mt: hevc: implement and use step progress API" patchset ?
[06:29] Action: michaelni falls asleep
[09:26] <plepere> good morning
[09:50] <ePirat-> morning
[10:54] <ePirat-> ubitux, I discovered that I cannot really guess the content type, since I can only do the checks when the headers are already sent
[10:55] <ePirat-> (I have no stream data in the open function, only when writing)
[10:55] <ubitux> what about assuming and setting "audio/ogg" when content_type is not set and mount_point ends with "ogg"?
[10:55] <ubitux> or just "application/ogg"
[10:55] <ubitux> most icecast version only stream audio
[10:55] <ubitux> i think some icecast forks are able to play with flv, but that's a limited usage
[10:56] <ePirat-> nope
[10:56] <ePirat-> icecast can stream nearly anything
[10:56] <ePirat-> and ogg video and webm is officially supported
[10:57] <ePirat-> yes I can assume ogg, but if I notice later on that it's actually not ogg I would have to say: hey, I guessed ogg, but you don't actually stream ogg
[10:57] <ePirat-> which seems stupid
[10:59] <ePirat-> I could remove the double-check entirely of course and just guess based on file extension
[10:59] <ePirat-> but then I'd have no way to know if I guessed right
[10:59] <ePirat-> and if not, things will just break without the user knowing why
[11:00] <ePirat-> thats a very complex problem and I have thought a long time about it
[11:01] <ePirat-> I could of course only open the connection in the read function but that would be a bit of a hack…
[11:04] <ePirat-> I will guess it based on file extension and warn about it. 
[11:04] <ubitux> ePirat-: you can print a warning and make a guess
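A minimal sketch of the guess-and-warn approach they settle on; the function and the extension mapping are illustrative, not the actual icecast protocol patch:

    #include <string.h>

    /* Guess a Content-Type from the mount point's file extension;
     * returns NULL when no sensible guess exists. */
    static const char *guess_content_type(const char *mount)
    {
        const char *ext = strrchr(mount, '.');
        if (!ext)
            return NULL;
        if (!strcmp(ext, ".ogg") || !strcmp(ext, ".oga"))
            return "application/ogg";
        if (!strcmp(ext, ".mp3"))
            return "audio/mpeg";
        if (!strcmp(ext, ".webm"))
            return "video/webm";
        return NULL;
    }

The caller would use this only when no content_type option was set, and log a warning that the value is a guess.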
[12:29] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:5003b8b9c3ba: MAINTAINERS: update list of releases i maintain
[12:45] <cone-572> ffmpeg.git 03Anton Khirnov 07master:c5fca0174db9: lavc: add a property for marking codecs that support frame reordering
[12:45] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:2e5c8b06491b: Merge commit 'c5fca0174db9ed45be821177f49bd9633152704d'
[12:51] <cone-572> ffmpeg.git 03Anton Khirnov 07master:4b169321b845: codec_desc: fix some typos in long codec names
[12:51] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:c11255ae8b1a: Merge commit '4b169321b84502302f2badb056ebee4fdaea94fa'
[13:01] <cone-572> ffmpeg.git 03Anton Khirnov 07master:e36a2f4c5280: hevc: eliminate an unnecessary array
[13:01] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:453224f10b37: Merge commit 'e36a2f4c5280e2779b0e88974295a711cf8d88be'
[13:08] <cone-572> ffmpeg.git 03Anton Khirnov 07master:53a11135f2fb: hevc: simplify splitting the transform tree blocks
[13:08] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:2fb8aa9b10fc: Merge commit '53a11135f2fb2123408b295f9aaae3d6f861aea5'
[13:15] <cone-572> ffmpeg.git 03Anton Khirnov 07master:0daa2554636b: hevc: do not store the transform inter_split flag in the context
[13:15] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:0a1ffc57882c: Merge commit '0daa2554636ba1d31f3162ffb86991e84eb938a8'
[14:01] <cone-572> ffmpeg.git 03Anton Khirnov 07master:4aa80808bcc2: hevc: eliminate unnecessary cbf_c{b,r} arrays
[14:01] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:4a73fa19ca7a: Merge commit '4aa80808bcc2a30fcd7ce5b38594319df3a85b36'
[14:19] <cone-572> ffmpeg.git 03Anton Khirnov 07master:e76f2d119704: hevc: eliminate the last element from TransformTree
[14:19] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:e0492311c854: Merge commit 'e76f2d11970484266e67a12961f2339a5c2fccf9'
[14:26] <cone-572> ffmpeg.git 03Anton Khirnov 07master:a5c621aa8525: hevc: rename variable in boundary strength to b more explicit
[14:34] <cone-572> ffmpeg.git 03Mickaël Raulet 07master:42ffa226f9a5: hevc: clean up non relevant TODO
[14:34] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:772dfd5f6e5d: avcodec/hevc: add some const to cbf arrays
[14:39] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:243236a6f589: avcodec/hevc: fix "discards const qualifier from pointer target type" warning
[14:50] <cone-572> ffmpeg.git 03Michael Niedermayer 07release/2.3:fcc6568a1025: avcodec: add avdct
[14:50] <cone-572> ffmpeg.git 03Michael Niedermayer 07release/2.3:8f53d32dfbe2: avfilter/vf_spp: use AVDCT
[14:50] <cone-572> ffmpeg.git 03Christophe Gisquet 07release/2.3:65259b4d687a: x86: hevc_mc: replace one lea by add
[14:50] <cone-572> ffmpeg.git 03Michael Niedermayer 07release/2.3:2f71aeb30161: remove VERSION file
[14:50] <cone-572> ffmpeg.git 03Michael Niedermayer 07release/2.3:ee606fd0317d: version.sh: Print versions based on the last git tag for release branches
[15:50] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:f3158c3f29cd: version.sh: Print versions based on the last git tag for release branches
[18:08] <trn> Hello guys.
[18:09] <trn> I added a comment at https://trac.ffmpeg.org/ticket/2828#comment:16 as requested, thought better to report here since I see this bug was already reported.
[18:10] <trn> I did my last git pull 12 hours ago, but nothing has changed in that time.
[18:15] <trn> Compn: About ripping hulu, I think that you actually can do it if you have Hulu Plus.  Unless it uses PlayReady.
[18:16] <trn> Compn: http://www.microsoft.com/playready/licensing/list/ shows Hulu LLC as a licensee, so it probably does.  I don't have Hulu Plus, so I can't use iOS Hulu to intercept the delivery stream.
[18:19] <trn> But Hulu+ content is delivered via HLS for sure to iOS devices.
[18:34] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:d554d004a67c: tests/fate.sh: If cat *.rep fails try it with a for loop.
[18:53] <`md> trn?!
[18:57] <Compn> trn : oh interesting. i wasnt sure if H+ and ios used different transports
[18:57] <trn> `md: Why hello there.  As you see, I'm still alive, but last few months weren't so great and I didn't end up on the plane :b
[18:57] <trn> Gotta catch up!
[18:58] <trn> First time on IRC in months!  
[18:59] <trn> Compn: Apple states that pretty much everything has to be HLS when video is streamed to an iOS device, if it's more than some trivial size or length.
[19:01] <trn> And all HLS streams are interceptable other than PlayReady stuff because the AES keys are delivered via the m3u8, and if you are using something like mitmproxy, you can easily impersonate the session authentication to retrieve the key.
[19:02] <`md> trn: uhm, yea, i noticed, lol :D
[19:04] <trn> Also, most iOS apps and streaming servers suck and do not do really any authentication at all.  Sometimes they rotate the key every nn segments, but the keyfile path as specified by the m3u8 isn't protected from retrieval.  
[19:05] <trn> `md: Sadly trying to make money with ffmpeg now.  :b
[19:06] <`md> :O
[19:06] <llogan> i did it, and now i'm a thousandaire.
[19:07] <`md> trn: hope you at least caught the livestream?
[19:07] <`md> llogan: what currency? i hope btc... :D
[19:07] <trn> `md: Parts of it.
[19:07] <llogan> `md: Ameribucks
[19:07] <`md> trn: well it's all available as vod on youtube
[19:07] <`md> llogan: oh :(
[19:08] <trn> `md: Long story short, my mother had a stroke combined with diabetes complications.
[19:08] <`md> ah fuck
[19:08] <trn> She's OK, but I had to sell her car and deal with getting her medical coverages in order and ended up having to pay tons out of my own pocket, cover her mortgage, etc.
[19:09] <trn> And I'm an asshole for not keeping in touch, but honestly I've been in a daze for a few months trying to get things all straightened out.
[19:10] <`md> it's alright, family comes first
[19:10] <trn> Yeah, sucks when I had tickets and all. :b
[19:10] <`md> uh yea!
[19:10] <`md> well at least you supported the eventy
[19:10] <`md> *event
[19:11] <trn> `md: So you're an ffmpeg dev now or something :)
[19:11] <`md> heh :D
[19:11] <`md> not really no
[19:11] <`md> i've always been here
[19:12] <`md> lurking
[19:12] <`md> one of these days tho!
[19:12] <Compn> large number of multimedia lurkers in the channels
[19:13] <trn> ffmpeg libav* stuff is quite easy to get into once you figure out the basics.
[19:16] <Compn> trn : consider writing some docs so others can ease into it. we have many complaints from devels who cant figure things out :p
[19:17] <Compn> maybe the api is so large. i dont know if the complaints are legit or not as i'm no programmer.
[19:21] <trn> Compn: I got started by looking at https://github.com/chelyaev/ffmpeg-tutorial myself.
[19:28] <trn> `md: New project is an (imho) rather innovative live broadcast studio and broadcast-delayed editing suite.
[19:30] <`md> oho
[19:31] <trn> `md: Finding a way to unobtrusively remove video and maintain the buffer is part of my secret sauce. :)
[19:32] <trn> Works well for sports and such; for something that would be way too jarring with a cut discontinuity, it can revert to standard audio bleeping or user-placed video pixelization :)
[19:38] <trn> Has anyone looked into supporting OpenH264 in ffmpeg?
[19:44] <Compn> as a lib wrapper ?
[19:45] <trn> I'd be interested in seeing how well it works compared to x264 in realtime.
[19:47] <trn> It's limited to baseline, but you need baseline for iPhone 3 support.
[19:48] <iive> x264 has a standalone application, doesn't openh264 have one too?
[19:48] <Compn> yes afaict
[19:49] <Compn> the better question is, is there anything we can steal to make ffmpeg faster/better :P
[19:49] <Compn> probably x264 already did such an audit
[19:49] <Compn> for the encoding side
[19:50] <wm4> isn't the only reason openh264 exists licensing bullshit?
[19:51] <trn> I'm not sure it's the only reason, but Cisco does cover the MPEG-LA license on the binary redistributable.
[19:54] <Compn> trn : if you just want to get out of having your software pay the mpegla license by using cisco binary, thats fine
[19:54] <Compn> and ffmpeg would accept such an openh264 wrapper for that purpose
[19:54] <Compn> but most users i think probably wouldnt need it...
[19:57] <jamrial> kurosu: you forgot to attach the patch in "[PATCH] x86: hevc_mc: fix register count usage"
[20:01] <kurosu> jamrial, thanks
[20:07] <`md> trn: shouldnt ffmpeg look into offloading encoding to a gpu for live stuff?
[20:07] <`md> i can already do that just fine for live streaming on windows
[20:07] <trn> Compn: Problem with the live stuff w/ OpenH264 is baseline profile only for encoding.
[20:09] <trn> `md: Using the ultrafast or similar x264 presets and asm optimizations, I have no issues with live encoding on a server.
[20:10] <trn> `md: But a GPU isn't always available, because eventually I want to get something setup similar to a Streambox Avenir.
[20:10] <`md> trn: sure i can also encode at fast profile up to 720p50 or 1080p25
[20:10] <`md> on the cpu
[20:10] <`md> but offloading it to a gpu or several gpus is a bit nicer
[20:10] <kierank> trn: lol building a streambox avenir
[20:11] <kierank> `md: sure a nice way to waste energy
[20:11] <trn> kierank: Not building an avenir, no :)  
[20:11] <`md> some servers would have capable intel integrated graphics, and some have pcie slots that you could use
[20:11] <`md> kierank: waste?
[20:11] <kierank> yes
[20:11] <`md> is energy something that matters?
[20:11] <trn> kierank: I have no need for any sort of the things they do like link aggregation or whatnot, use case involves mainly local streams.
[20:11] <kierank> some decent low power cpus out there that are better
[20:12] <kierank> trn: and you want to do this with nut
[20:12] <kierank> ...
[20:12] <`md> kierank: better than what? 
[20:12] <kierank> `md: than wasting cycles encoding on the gpu
[20:12] <kierank> better quality, less power
[20:12] <kierank> less marketing though
[20:12] <`md> well it frees up the cpu to do other stuff
[20:12] <`md> like deinterlacing or stuff like that
[20:12] <kierank> buy another cpu
[20:13] <kierank> uses less power
[20:13] <kierank> gpu encoding is only there for marketing purposes
[20:13] <`md> if you have a dual cpu board, sure
[20:13] <kierank> buy a more powerful cpu
[20:13] <kierank> all scenarios will use less energy, cost less money and give a better picture
[20:14] <`md> even the most powerful cpus are still struggling with handling x264 at really good quality at 1080p50 while also deinterlacing it with something like yadif
[20:14] <`md> just saying
[20:14] <`md> i DO agree with you
[20:14] <kierank> "really good quality"
[20:14] <kierank> good luck getting that from a gpu
[20:14] <`md> i am not
[20:14] <`md> but at least i am getting something
[20:15] <kierank> you'll get something on a cpu as well
[20:15] <`md> comparable to what the cpu would output
[20:15] <kierank> 1080p50 is doable on 1-2 cores
[20:15] <`md> depends on the profile you use and the cpu of course
[20:15] <kierank> sigh
[20:15] <cone-572> ffmpeg.git 03Michael Niedermayer 07master:1e51af13c753: avdevice/pulse_audio_enc: use getter function for AVFrame.channels
[20:15] <kierank> can't win against the gpu tards
[20:15] <`md> 20:14:30 < `md> i DO agree with you
[20:15] <kierank> anyway i look forward to your ffmpeg gpu patch
[20:16] <kierank> 7:07 PM <`md> trn: shouldnt ffmpeg look into offloading encoding to a gpu for live stuff?
[20:16] <kierank> 7:07 PM <`md> i can already do that just fine for live streaming on windows
[20:16] <`md> yes...
[20:17] <`md> i am not even sure any of the gpu vendors even have an api yet on linux for it
[20:17] <`md> they and their stupid closed source driver bullshit
[20:18] <trn> Honestly on a local LAN configuration, I can do raw/PCM at 720p over the wire at 270Mbps, that's 33.75MB/s.  
[20:19] <iive> doesn't radeons (and intel) have some specialized silicon for encoding video?
[20:19] <`md> nvidia too....
[20:19] <kierank> trn: do your maths correctly
[20:19] <`md> and they all have nice marketing names for it
[20:19] <iive> it's not like using graphic shaders.
[20:19] <trn> So that's about 3 streams on 1000BaseT in reality.
[20:20] <kierank> trn: 1280*720*2*30 is not 270mbps
[20:20] <`md> what colourspace?
[20:20] <`md> 4:2:0?
[20:21] <trn> kierank: I am very sure I did... 1280x720, 4:2:0, 24fps and PCM audio plus muxing overhead ~= 270,000kbps 
[20:21] <trn> Let me verify.
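Spelling the arithmetic out (8-bit 4:2:0 is 1.5 bytes per pixel; the audio figure assumes 48 kHz 16-bit stereo PCM):

    1280 * 720 px * 1.5 B/px       =  1,382,400 B/frame
    1,382,400 B/frame * 24 fps     = 33,177,600 B/s  (~33.2 MB/s)
    33,177,600 B/s * 8             = ~265 Mbit/s video
    48000 Hz * 16 bit * 2 ch       = ~1.5 Mbit/s audio
    video + audio + mux overhead   = ~270 Mbit/s

kierank's 1280*720*2*30 figure assumes 4:2:2 (2 bytes per pixel) at 30 fps instead.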
[20:21] <kierank> lol 24fps
[20:22] <kierank> ok for 24fps you're right...
[20:22] <kierank> enjoy your blur
[20:22] <trn> Yep ... and 24fps is fine for my application, because the output is TV and film mainly.
[20:22] <`md> kierank: why so negative tonight?
[20:22] <kierank> tv is not 24fps
[20:22] <trn> TV is mostly 25fps and film is 23.976fps ...
[20:22] <kierank> `md: lot of BS being spouted in this channel
[20:23] <trn> kierank: Explain?
[20:23] <`md> kierank: oh and that angers you?
[20:23] <kierank> tv is 25fps and 30fps with some places 50fps and 60fps
[20:23] <kierank> if it's 720p it's the latter
[20:23] <`md> true
[20:24] <trn> kierank: I pretty much do exactly what the broadcast equipment I'm familiar with does.
[20:24] <kierank> 720p24 isn't even a broadcast format
[20:24] <`md> well the tv channels deal with input that is mostly 24fps
[20:24] <`md> they either speed it up or use telecine
[20:24] <trn> kierank: Capture 24p then stretch to 23.976 for NTSC conversion, or speed up to 25fps for PAL/SECAM.
[20:24] <`md> to get it to 25 or 30fps
[20:24] <`md> so yea
[20:24] <kierank> input is a wide variety of formats
[20:25] <`md> sure
[20:25] <trn> That is industry standard everywhere I've worked.
[20:25] <kierank> 720p24 isn't a standard
[20:25] <kierank> anywhere
[20:25] <kierank> it's kiddy web standard sure
[20:26] <`md> kierank: dont underestimate netflix and such...
[20:26] <`md> far from being kiddy anymore
[20:26] <trn> kierank: I'm working on prosumer level stuff, yes.
[20:27] <kierank> `md: sure i can go to one of the conferences and hear about how the entire audience of these web services is the same as one channel I encode
[20:27] <trn> kierank: Panasonic AG-DVX100 and similar are standards there... megachurch stuff uses HVX200's ...
[20:27] <kierank> oh it's changing the world
[20:27] <kierank> OTT
[20:29] <trn> The standard is mostly DVCPRO HD which is 960×720 ... 
[20:29] <trn> It's unusual to see higher than that in the target segment.
[20:30] <kierank> wouldn't even pass QC on my side...
[20:30] <kierank> except maybe for news
[20:32] <trn> kierank: That's the target, providing live-streams of events, target is web broadcast motorsports, house of worship stuff, gopro3/4 wifi-backpack streams, etc.
[20:32] <kierank> ok
[20:32] <trn> kierank: Most of that is using DV.
[20:35] <trn> The point is with cheap gigE you can already do 3 streams at equal or higher quality to what is commonly in use in this segment.
[20:35] <trn> So even the most rudimentary compression would be enough to give enough cameras to cover most events.
[20:36] <kierank> makes more sense as to why you would use nut
[20:36] <kierank> but I would use rtp
[20:39] <trn> kierank: I'm going to look into it actually.  
[20:40] <trn> kierank: But to get something quick and working, and remember the 'freemium'
[20:40] <trn> version runs on (uugh) "the cloud".
[20:41] <trn> So it's easier to get TCP where I need it vs UDP
[20:41] <trn> NAT and such
[20:41] <trn> And using NUT makes it much easier to play with different formats.
[21:06] <ubitux> kierank: xiph, .doc? seriously?
[21:06] <kierank> ubitux: etsi rules
[21:06] <kierank> has to be a doc
[21:06] <ubitux> ._.
[21:07] <ubitux> fear, ok
[21:07] <kierank> ubitux: better than ietf imo
[21:07] <kierank> they draw graphs using ascii art...
[21:07] <ubitux> fine with me
[21:07] <kierank> makes my eyes bleed
[21:08] <ubitux> i do that as well actually :p
[21:08] <ubitux> antiword seems to support you .doc, perfect
[21:08] <ubitux> your…
[21:09] <kierank> not very well
[21:10] <kierank> derf had to use a windows VM to edit it =p
[21:10] <ubitux> antiword is just for reading
[21:10] <ubitux> http://b.pkh.me/ETSI_TS_opus-v0.1.2-draft.txt
[21:10] <kierank> hmm it did the tables as well
[21:10] <kierank> not too badly either
[21:11] <ubitux> you couldn't write it in whatever sane format, make a screenshot, and paste it in the doc?
[21:11] <ubitux> i used to do that @ school
[21:12] <ubitux> (until i realized they couldn't differentiate adobe reader from word)
[21:13] <kierank> I would have done it in LaTeX
[21:13] <JEEB> > standardization bodies > LaTeX
[21:13] <JEEB> all the things are in doc
[21:33] <kierank> ubitux: if you have any thoughts please feel free to share them
[21:54] <cone-572> ffmpeg.git 03James Almer 07master:f137876182f6: x86/hevc_idct: add a colon to labels
[22:06] <cone-572> ffmpeg.git 03James Almer 07master:664e9e433119: x86/hevc_deblock: load less data in hevc_h_loop_filter_luma_8
[23:24] <kurosu> jamrial, looks like some of the MASKED_COPY could use SWAP instead of mova in hevc_deblock.asm
[23:25] <kurosu> I can't understand why it doesn't work, so I guess I'll look at it with a fresher mind tomorrow
[23:26] <cone-572> ffmpeg.git 03Carl Eugen Hoyos 07master:63c0b41904bc: Fix standalone compilation of the adts muxer.
[23:39] <trn> OK.  I have a quick patch for ticket #2828 but it involves three new data fields in AVFormatContext struct.
[23:39] <trn> Not sure if that is acceptable.
[23:40] <trn> Probably the same as the other guys patch.. should I send to mailing list anyway?
[23:43] <michaelni> what "others guys patch" ?
[23:43] <michaelni> but without really knowing anything about the specific case, sending patches to the ML is always a good idea
[23:43] <ubitux> http://git.libav.org/?p=libav.git;a=commitdiff;h=942269fd ????
[23:44] <trn> michaelni: https://trac.ffmpeg.org/ticket/2828#comment:12 <- he described it but didn't include it.
[23:45] <trn> michaelni: It's rather obvious after reading the whole thread.  I included full debug output from latest git head and full command-lines to replicate the issues by using ffmpeg directly.
[23:45] <trn> I'll send the patch along anyway.  It just seems like it should be handled another way.
[23:47] <trn> Or like I mention in comment #17, a workaround is adding an EXT-X-DISCONTINUITY hint to the M3U8 output, which should make the ffmpeg output pass Apple validation.
[23:48] <trn> That would at least not require adding new entries to the AVFormatContext struct.
[23:48] <michaelni> well, i cant comment without seeing the patch
[23:48] <michaelni> that this is about
[23:52] <trn> I'll try to send two tonight, one that patches AVFormatContext and one that just adds EXT-X-DISCONTINUITY to M3U8 output (but that is trivial).
[23:52] <trn> I'm just not sure how AVFormatContext struct is used and if adding new fields to it would break ABI.
[23:54] <michaelni> you can add fields at the end, i mostly wonder if the fields wouldnt belong elsewhere
[23:55] <michaelni> also if the fields are to be accessed from outside libavformat that needs MAKE_ACCESSORS
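For reference, the MAKE_ACCESSORS pattern michaelni means: the macro below mirrors the one in libavutil/internal.h, while the struct and field here are invented for illustration, not the actual patch:

    #include <stdint.h>

    /* Generates an av_<name>_get_<field>() / av_<name>_set_<field>() pair */
    #define MAKE_ACCESSORS(str, name, type, field)                          \
        type av_##name##_get_##field(const str *s) { return s->field; }     \
        void av_##name##_set_##field(str *s, type v) { s->field = v; }

    typedef struct DemoFormatContext {
        /* ...existing fields stay in place so the ABI is preserved... */
        int64_t cc_offset;  /* hypothetical new field, appended at the end */
    } DemoFormatContext;

    /* Expands to av_demo_get_cc_offset() and av_demo_set_cc_offset() */
    MAKE_ACCESSORS(DemoFormatContext, demo, int64_t, cc_offset)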
[23:59] <trn> leaving work, thanks again guys.
[00:00] --- Tue Jul 29 2014

