[FFmpeg-devel-irc] IRC log for 2010-05-15

irc at mansr.com irc at mansr.com
Sun May 16 02:00:28 CEST 2010


[01:06:15] <thick_mcrunfast> Just asked this in #ffmpeg, forgot about this channel:
[01:06:22] <thick_mcrunfast> hey, trying to compile "segmenter", which is supposed to segment mpeg2ts into files ready for live streaming; unfortunately I can't seem to get it to compile
[01:06:29] <thick_mcrunfast> Here's the log: http://paste.pocoo.org/show/214151/
[01:06:35] <thick_mcrunfast> Does this indicate which library I'm missing?
[02:11:21] <Dark_Shikari> what's a simple formula to convert 0..63 64..127 -> 0..63 63..0
[02:11:26] <Dark_Shikari> in as few ops as possible
[02:11:29] <Dark_Shikari> no branches
[02:23:58] <pengvado> remember, we can renumber the states
[02:24:21] <pengvado> 0..63 64..127 -> 0..63 0..63 works just as well
[02:25:16] <pengvado> or 0..126 1..127
[02:34:09] <nefrir> min(x, 127-x) seems short
[02:35:10] <nefrir> if it is for mmx
[02:36:52] <Dark_Shikari> pengvado: oh... hmm, true
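For context, the mapping being discussed (0..63 64..127 -> 0..63 63..0) is just min(x, 127-x), as nefrir notes; a minimal C sketch, with branch-freeness depending on the compiler and ISA (cmov on x86, pmin/vmin in SIMD):

    /* Fold x in 0..127 to 0..63 63..0 without an explicit branch:
     * min(x, 127 - x). Compilers typically emit a cmov for the ternary,
     * and the SIMD equivalent is pminub/pminsw (x86) or vmin (NEON). */
    static inline int fold_state(int x)
    {
        int y = 127 - x;
        return y < x ? y : x;
    }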
[03:48:39] <hyc> gahh. ffserver always shows "AAC with no global headers is currently not supported." even though I have AVOptionAudio flags +global_header in my <stream> def
[03:49:57] <hyc> can't get anything working with rtsp. has anyone tested it successfully recently?
[03:50:08] <hyc> google doesn't show any success stories, only people with the same failures
[09:20:42] <hyc> hmm, spent all this time screwing around with different container formats and ffserver, only to finally realize this: http://code.google.com/p/android/issues/detail?id=1513
[09:20:52] <hyc> Android's rtsp client only supports mp4/3gpp container
[12:34:25] <CIA-7> ffmpeg: stefano * r23142 /trunk/libavutil/pixfmt.h:
[12:34:25] <CIA-7> ffmpeg: Clarify descriptions for RGB4, BGR4, NV12, NV21,
[12:34:25] <CIA-7> ffmpeg: RGB48BE, and RGB48LE pixel formats.
[13:00:20] <_av500_> hyc: why not just ask before :)
[13:18:48] <hyc> that would have been too easy :P
[13:19:52] <hyc> This kinda sucks, can't transcode flv to mp4 in realtime and stream the mp4 concurrently, because you need to add hint tracks after the transcode
[13:20:52] <hyc> I was considering adding a URL reader to ffserver, so that it can take an rtmpe stream as input and push it out again as rtsp
[13:21:05] <hyc> but it looks like ffserver just plain doesn't work
[13:21:42] <hyc> now I'm looking at feng, which uses ffmpeg libraries
[13:31:40] <hyc> hmmm. this test stream plays OK on my G1 phone, but not with mplayer or ffplay
[13:31:43] <hyc> rtsp://v4.cache7.c.youtube.com/CkYLENy73wIaPQlAxD6Aps4FuxMYEiASFEIJbXYtZ29vZ2xlSARSBWluZGV4Wg5DbGlja1RodW1ibmFpbGD82u-DktbbwUsM/0/0/0/video.3gp
[13:32:04] <hyc> found from this page http://forums.fedoraforum.org/showthread.php?t=242205
[13:32:47] <hyc> ffplay sorta plays it but the video is corrupted
[13:33:17] <_av500_> guess that's what YouTube mobile streams
[13:35:08] <hyc> it must be a valid stream, if the phone plays it. but ffplay scrolls line after line of MV errors and other messages
[13:35:44] <hyc> laptop and phone are on the same wifi network, so it's not a difference in routing / firewalls or whatnot
[14:16:36] <wbs> hyc: if you want to stream live rtsp stuff from an url, you can set up DSS, then do ffmpeg -i <inputurl> -f rtsp -vcodec mpeg4 -acodec libfaac rtsp://dss.server/name.sdp, and then you should be able to watch the rtsp url on the phone
[14:17:37] <hyc> wbs: how does that work? how can the index stuff be generated on the fly?
[14:18:19] <wbs> hyc: when streaming with DSS, there's two options. either you feed it a realtime stream over RTSP/RTP, and it just mirrors it out, that's what the commandline above does
[14:18:56] <wbs> or you store it into a file, add RTP hinting, put the file in DSS's movie folder and let it serve it from there
[14:19:02] <wbs> but that's not for the realtime/live case
[14:19:08] <hyc> ok
[14:19:41] <wbs> the RTSP muxer I added in February adds support for the live case; for offline stuff, you can either just mux it into a normal 3gp/mp4 and do MP4Box -hint <file>, or apply the movenc/rtphint patches I sent a few weeks ago
[14:19:46] <wbs> ... which are still waiting for review
[14:19:52] <hyc> then that's not what I'm after. I want to grab an rtmp stream (flv H264) and send it out as rtsp (mp4 H264)
[14:20:21] <wbs> you can do that with the ffmpeg commandline
[14:20:34] <wbs> ffmpeg -i rtmp://whatever -f rtsp rtsp://dss/name.sdp
[14:20:42] <hyc> but you can't do the hinting in realtime
[14:20:54] <wbs> uh, you don't need hinting in that case
[14:21:04] <hyc> you have to have the entire flv converted to mp4 first
[14:21:08] <wbs> no, you don't
[14:21:36] <wbs> it reads it, packet by packet, puts them in RTP packets and sends them over the network to the DSS server
[14:21:46] <wbs> which then relays them to anybody connected to that URL at the moment
[14:21:53] <hyc> hmmmm
[14:22:00] <wbs> the "hinting" aka RTP packetization is done by the RTP muxer, aka lavf/rtpenc*
[14:22:22] <wbs> hinting is only needed when serving archived content from file
[14:22:27] <hyc> ok, well that's good news
[14:26:06] <hyc> I was hoping to get more clever ... send an rtmp URL to ffserver in params tacked onto the rtsp URL
[14:26:23] <hyc> so that it could do the fetch / transcode / serve all in one
[14:26:48] <wbs> it might work, I don't really know ffserver at all
[14:27:15] <hyc> ffserver appears to be mostly abandonware.
[14:27:29] <hyc> lots of questions on the ffserver mailing list, no answers.
[14:27:54] <wbs> yeah, if all of its features worked, it would at least be easier to evaluate _what_ it does; I'm not really sure I know, quite frankly
[14:28:00] <hyc> http://www.mail-archive.com/ffserver-user@mplayerhq.hu/maillist.html
[14:28:18] <hyc> yeah, it's puzzling to say the least.
[14:28:29] <hyc> a lot of it is undocumented
[14:28:52] <wbs> all the people I've talked with either say that it works automatically and does wonders, or say that they didn't even figure out what it was supposed to do ;P
[14:28:57] <hyc> most of the simple examples I tried didn't work
[14:29:34] <hyc> even streaming from a file source over http was chunky / stuttered
[14:30:16] <hyc> I read every email in that list archive, never found anything that helped
[14:30:46] <hyc> perhaps it works best for live streaming from cameras
[14:30:52] <hyc> dunno
[14:31:40] <hyc> I couldn't even get it to serve an aac stream, it kept on complaining that global headers are required but were missing
[14:31:58] <hyc> and no combination of "flags +global_header" would make that go away
[14:49:26] <pross-au> OT: I love it how somebody has found a creative use for defective inter-frame refreshes: http://vimeo.com/3139412
[14:58:40] <Compn> lol fun stuff
[16:32:00] <hyc> wbs: hm, my DSS seems to want a username and password before accepting a broadcast
[16:55:25] <wbs> hyc: yeah, you set it up through a web ui at http://server:1220/
[16:55:45] <wbs> hyc: then use rtsp://username:password@dss/name.sdp as destination url
[16:56:26] <wbs> hyc: if you run ffmpeg and DSS on the same machine, you can send to it without a password, if you connect to it over localhost
[17:05:32] <hyc> ok
[17:05:43] <hyc> most of my attempts with ffmpeg just hang
[17:05:54] <hyc> Could not write header for output
[17:06:06] <wbs> hmm, weird.. the error reporting isn't exactly great
[17:06:08] <hyc> twice I've seen it start encoding successfully, out of 20 tries
[17:06:25] <wbs> are you using a normal file as source, or a rtmp url?
[17:06:33] <hyc> right now just a flv file
[17:06:48] <wbs> if you use a normal file, you need to use -re, to feed the data in realtime pace instead of as fast as it can
[17:07:00] <hyc> ok
[17:07:20] <wbs> but other than that, it should work; can you wireshark it and see where it stops?
[17:07:41] <hyc> damn, it started right up with that
[17:08:22] <wbs> oh the irony ;P
[17:08:27] <hyc> ;)
[17:09:02] <wbs> but let's say there's room for improvement in the error reporting :-)
[17:09:57] <hyc> indeed...
[17:10:00] <hyc> ok going to try it again
[17:11:49] <hyc> yeah, twice in a row, success
[17:12:36] <wbs> great!
[17:13:11] <wbs> you may want to use h263, mpeg4 or h264 as video codec, and amr or aac as audio codecs then.. don't know if it works with stream copy, but at least when you allow ffmpeg to transcode, it should work
[17:13:11] <hyc> thanks for your help ;)
[17:13:30] <hyc> yeah, using h264
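For reference, the pipeline that ended up working in this exchange was a command of roughly this shape (server, credentials, stream name and codec choices are placeholders based on the discussion above):

    ffmpeg -re -i input.flv -vcodec libx264 -acodec libfaac \
        -f rtsp rtsp://user:password@dss.example.com/name.sdp

The -re flag paces a file source at realtime speed, and DSS then relays the incoming RTP packets to any client that opens the rtsp:// URL.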
[17:16:39] <BBB> error reporting is poor for rtsp/http
[17:16:44] <BBB> I looked into that in the past
[17:16:49] <BBB> but didn't really get anywhere practical yet
[17:16:58] <BBB> but patches for that are greatly appreciated </hint>
[17:17:16] <wbs> i usually just fire up wireshark instead of trying to look at ffmpeg log level options ;P
[17:18:26] <hyc> heh
[17:18:40] <hyc> wireshark tends to get directly to the heart of the matter
[17:18:57] <wbs> yeah
[17:19:23] <wbs> especially in stuff like rtsp which is complex to say the least, the error message reported by the application would probably not really tell the actual problem but just be misleading
[17:19:32] <BBB> not true
[17:19:39] <BBB> the error from the server generally helps
[17:19:42] <BBB> 403 forbidden
[17:19:43] <BBB> or whatever
[17:19:52] <wbs> yeah, in those cases saying just that would be very helpful
[17:19:52] <BBB> that would help a lot as a AV_LOG_ERROR msg
[17:20:02] <BBB> a patch for that isn't very hard
[17:20:23] <BBB> if (num != 400) av_log(..);
[17:20:39] <wbs> but e.g. if trying to watch a stream with unsupported payload formats, it usually gets stuck for a long time when trying to detect the stream parameters
[17:20:51] <BBB> I mean 200
[17:21:01] <hyc> hm yeah that would make a big difference
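A sketch of the kind of patch BBB is describing, surfacing non-200 RTSP replies as an error message; the helper name, its placement and the status_code parameter are illustrative, not the actual libavformat code:

    #include "libavformat/avformat.h"

    /* Hypothetical helper: log the server's reply code at AV_LOG_ERROR
     * instead of failing silently, then return an error to the caller. */
    static int check_rtsp_reply(AVFormatContext *s, int status_code)
    {
        if (status_code != 200) {
            av_log(s, AV_LOG_ERROR,
                   "RTSP command failed, server replied with status %d\n",
                   status_code);
            return AVERROR(EIO);
        }
        return 0;
    }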
[17:21:10] <BBB> anyway
[17:21:16] <BBB> wbs: I know that issue also
[17:21:23] <BBB> in the past it motivated me to add support for new payloads
[17:21:26] <BBB> right now it's annoying
[17:21:37] <BBB> where's our soc student josh btw?
[17:21:59] <wbs> he's been here a few times since he did his qual task
[17:22:13] <BBB> ok, good
[17:35:35] <CIA-7> ffmpeg: stefano * r23143 /trunk/ffplay.c:
[17:35:35] <CIA-7> ffmpeg: Avoid mixed declaration and code, fix C89 compatibility.
[17:35:35] <CIA-7> ffmpeg: Patch by François Revol revol free fr.
[17:49:38] <mru> saste: mmu_man is a committer, he can apply his own patches
[17:56:39] <merbanan> mru: how much slower is the arm float 2 int code ?
[17:57:10] <mru> which code?
[17:57:16] <mru> and slower than what?
[17:58:14] <merbanan> I'll elaborate
[17:58:29] <merbanan> I'm gonna switch the dca decoder to outputting float
[17:58:36] <mru> pretty please, don't
[17:58:53] <mru> you'll make it 3x slower on cortex-a8
[17:59:24] <merbanan> ok, what is missing on arm
[17:59:30] <merbanan> to not make it 3x slower
[17:59:31] <mru> s/arm//
[17:59:44] <mru> audioconvert.c is awful
[17:59:54] <merbanan> jolly good
[18:00:01] <mru> no simd at all
[18:00:08] <mru> and bad even for scalar code
[18:00:20] <merbanan> so we need dsp accelerated interleavers
[18:00:28] <mru> I am so tired of having this discussion
[18:00:30] <merbanan> and float2int
[18:00:41] <mru> we already have blazing fast float2int
[18:00:45] <mru> in dsputil
[18:00:46] <wbs> audioconvert isn't part of the external api, btw, so applications using lavc directly have to go through av_resample (without doing any actual resampling) if they don't want to code the audio conversion themselves
[18:01:06] <mru> so we need to fix it up and make it public
[18:01:11] <mru> BUT FIX IT FIRST
[18:01:44] <mru> merbanan: can you at least leave the scaling in the dca decoder?
[18:02:23] <mru> the existing fast float2int code needs input in +-32k range
[18:02:37] <merbanan> mru: no
[18:02:40] <mru> why?
[18:02:43] <merbanan> just the c path
[18:02:53] <mru> wrong
[18:03:06] <mru> the C code needs +-1 with bias of 384
[18:03:06] <merbanan> hmm
[18:03:40] <merbanan> ok ok
[18:03:49] <mru> if you remove the scaling in the decoder you'll ruin thousands of lines of asm
[18:04:28] <merbanan> for float2int ?
[18:04:39] <mru> and in the decoder
[18:04:39] <merbanan> or float2int16
[18:04:50] <mru> float2int16
[18:04:55] <mru> that's the only conversion we have
[18:05:02] <merbanan> if you need scaling you do it last
[18:05:31] <mru> the scaling is combined with the synth filter now
[18:05:46] <merbanan> um, doesn't audioconvert use float2int if the codec outputs float ?
[18:05:56] <mru> it uses lrintf and scalar mult
[18:06:04] <mru> slower than anything you can imagine
[18:06:12] <merbanan> but why :(
[18:06:18] <mru> because I haven't had time to fix it
[18:06:24] <mru> and everybody else refuses to see the problem
[18:06:54] <mru> and before you ask, this code is used for real on cortex-a8 devices
[18:06:57] <merbanan> float output from codecs is scaled to +-1 IIRC
[18:07:06] <mru> gaaaaaaaaaaaahhhhhhhhhhhhh
[18:08:07] <mru> did you pay attention to ANYTHING of what I said?
[18:08:23] <merbanan> I read every line you write
[18:08:30] <mru> that's not what I asked
[18:08:33] <merbanan> I think you need a hug
[18:08:36] <merbanan> :)
[18:08:52] <mru> I need 48-hour days
[18:09:35] <merbanan> I'll go and read some code, I just don't understand some things with the audio stuff
[18:09:59] <mru> the neon float2int16 is insanely fast
[18:10:08] <mru> it does interleaving and clipping
[18:10:23] <mru> but it requires pre-scaled input
[18:10:54] <merbanan> so are the sse and 3dnow ones, i.e. fast
[18:10:56] <mru> the old codecs, dca, ac3, vorbis, etc all scale internally
[18:11:21] <mru> the new ones, wmapro and I forget what else, don't
[18:11:38] <Dark_Shikari> could we make it so that float2int16 takes a parameter telling it how much to scale?
[18:11:43] <Dark_Shikari> which is passed via the api from the decoder?
[18:11:44] <mru> on cortex-a8 decoding wmapro to s16 spends OVER 50% of the time in audioconvert.c
[18:11:48] <Dark_Shikari> thus kill 3 birds with one stone
[18:11:50] <Dark_Shikari> or something
[18:11:52] <mru> Dark_Shikari: no
[18:11:55] <Dark_Shikari> why not?
[18:12:07] <mru> it's cheaper to scale as part of another step
[18:12:12] <Dark_Shikari> really?
[18:12:22] <mru> some codecs do it simply by premultiplying some tables
[18:12:22] <Dark_Shikari> you mean like during the idct?
[18:12:24] <Dark_Shikari> ah
[18:12:25] <mru> yes
[18:12:30] <merbanan> you can do it in the transform for free
[18:12:44] <Dark_Shikari> so what's the debate about
[18:12:45] <mru> and the fast float2int16 asm already exists
[18:12:47] <Dark_Shikari> why can't wmapro do that too?
[18:12:49] <mru> I don't want to rewrite it
[18:13:20] <mru> wmapro doesn't because a) the author was lazy, and b) there's no API to request it in the first place
[18:13:23] <merbanan> so the "new codec api" needs to be able to adjust the scalefactor
[18:13:24] <mru> no api
[18:13:28] <mru> yes
[18:13:46] <Dark_Shikari> so audioconvert needs to handle scale factors, but only _some_ of the time?
[18:13:47] <mru> we need to add scale and bias to AVCodecContext
[18:13:52] <merbanan> add a scalefactor to avcodec ?
[18:14:09] <merbanan> and let decode_audio3 populate it ?
[18:14:27] <mru> there's a slight problem
[18:14:28] <merbanan> AVCodecContext->scale_factor ?
[18:14:46] <mru> the scale factor depends on the output format
[18:14:55] <merbanan> sure
[18:15:02] <mru> and on the specific implementation of any conversion functions used
[18:15:10] <mru> so we have to
[18:15:12] <merbanan> but all codecs should output their native format
[18:15:16] <mru> 1. query codec output format
[18:15:24] <mru> 2. set up conversion
[18:15:36] <mru> 3. pass scale/bias from converter to decoder
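A toy model of that three-step handshake, just to make the flow concrete; every name here is invented for illustration and none of it is existing FFmpeg API:

    /* Toy model: the decoder reports its native output range, the converter
     * states what scale/bias its fast path wants, and the decoder folds the
     * factors into its own tables so no extra per-sample pass is needed. */
    struct toy_decoder   { float native_peak; float out_scale, out_bias; };
    struct toy_converter { float want_peak, want_bias; };

    static void negotiate(struct toy_decoder *dec, struct toy_converter *conv)
    {
        float native = dec->native_peak;   /* 1. query codec output format    */
        conv->want_peak = 32768.0f;        /* 2. set up conversion: fast int16 */
        conv->want_bias = 0.0f;            /*    paths want +-32k, no bias     */
        dec->out_scale = conv->want_peak / native;  /* 3. pass scale/bias back */
        dec->out_bias  = conv->want_bias;           /*    to the decoder       */
    }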
[18:15:51] <merbanan> bias can go to hell
[18:15:53] <merbanan> imo
[18:15:58] <mru> there goes the fast C code
[18:16:02] <merbanan> or maybe not
[18:16:20] <mru> it really is a lot faster sometimes
[18:16:51] <merbanan> planar buffers also
[18:16:57] <mru> that too
[18:17:03] <Dark_Shikari> the float C code crap with the bitmath can die
[18:17:07] <Dark_Shikari> it's caused too much bikeshedding
[18:17:15] <mru> fine, so kill the bias
[18:17:18] <mru> I don't care
[18:17:22] <Dark_Shikari> Also, IMO it's useless
[18:17:34] <Dark_Shikari> because if you're processing float data -- you're going to be on a cpu with some half-decent fpu
[18:17:40] <Dark_Shikari> obviously if you're not, you shouldn't be using float
[18:18:24] <mru> indeed
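For context, the "bias of 384" bitmath path being argued about works roughly like this; a sketch only, not the exact FFmpeg code. With the bias added, the sample sits in a float range where the IEEE-754 exponent is constant, so the 16-bit value falls out of the mantissa with an integer subtract instead of an lrintf per sample:

    #include <stdint.h>

    /* Assumes the decoder already added 384.0f to a sample nominally in -1..1,
     * putting it in [383.0, 385.0) where the exponent bits are fixed. */
    static inline int16_t biased_float_to_int16(float biased_sample)
    {
        union { float f; int32_t i; } u;
        u.f = biased_sample;
        if (u.i > 0x43c07fff)                /* above 384 + 32767/32768: clip */
            return 32767;
        if (u.i < 0x43bf8000)                /* below 383.0: clip             */
            return -32768;
        return (int16_t)(u.i - 0x43c00000);  /* 0x43c00000 is 384.0f          */
    }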
[18:18:53] <merbanan> oh, we need downmixing also
[18:19:03] <mru> that's separate
[18:19:07] <mru> but yes, we need it
[18:19:19] <merbanan> hmmm, ok
[18:19:20] <mru> it should also be possible to do in the decoder though
[18:19:24] <mru> like ac3 and dca do
[18:19:48] <merbanan> but we need generic routines
[18:20:02] <mru> yes, for aac and vorbis
[18:20:09] <mru> and whatever else can't downmix pre-transform
[18:20:51] <saste> mru: I get *thousands* of warnings of the kind - 'xxx' defined but not used - when compiling
[18:21:06] <mru> saste: why are you telling me?
[18:21:26] <saste> mru: because you're the configure/make guru :-)
[18:21:42] <mru> I'm not the one writing unused code
[18:21:50] <saste> mru: I wonder if I'm the only one affected by this...
[18:21:57] <mru> did you use --enable-small ?
[18:22:02] <saste> since I usually compile *without* optimizations
[18:22:12] <saste> so I was trying to look at how to avoid that
[18:22:18] <mru> why on earth do you do that?
[18:22:41] <saste> uh... I don't remember, it had something to do with some gcc bugs
[18:22:54] <saste> I haven't changed my configure line in ages
[18:23:00] <mru> maybe you should
[18:24:52] <merbanan> omg :/
[18:25:13] * merbanan looked at audioconvert
[18:25:32] <mru> good boy
[18:25:49] <mru> now do you understand why I object?
[18:26:27] <saste> mru: well indeed they disappeared... I suppose I'll enable optimizations from now on
[18:27:14] <merbanan> saste: use --disable-optimizations --enable-debug=3 --disable-mmx if you need to source level debug something
[18:27:32] * mru has never needed that
[18:27:58] * merbanan is a visual studio fanboy
[18:28:15] * mru is a fan of writing correct code to begin with
[18:28:31] <mru> or failing that, looking at the code until the error becomes obvious
[18:29:37] <merbanan> sure that is no problem, but try maintaining 100k lines of code you didn't write yourself by just looking at it
[18:30:02] <mru> so write it yourself
[18:30:09] <saste> that's why I'd prefer to continue to compile without optimizations...
[18:30:23] <mru> jokes aside, debuggers are only useful for looking at core dumps
[18:30:31] <mru> stepping through code is a waste of time
[18:30:43] <mru> adding a few printfs is much more efficient
[18:31:02] <merbanan> well have you heard of globals ?
[18:31:18] <mru> they're evil
[18:31:30] <mru> the ones we have are mostly static or write-once
[18:31:35] <merbanan> it was very much used in the code I maintain
[18:31:49] <mru> not that I see the connexion with debuggers
[18:34:02] <merbanan> well, you never know what thread modifies what member in the global structs so I prefer to use source level debuggers in that kind of area
[18:34:36] <mru> I've occasionally used jtag debuggers for such things
[18:34:40] <mru> on mmuless systems
[18:35:02] <merbanan> what kind of scaling does the cortex need ?
[18:35:14] <mru> same as all the other asm
[18:35:21] <mru> -32k..32k
[18:35:49] <merbanan> ok, if we drop the c bias we don't need any scaling
[18:35:58] <mru> yes we do
[18:36:14] <mru> "native" float range is -1..1
[18:36:22] <mru> that's what wmapro outputs
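In other words, the fast paths expect the multiply to have happened already; a rough scalar reference of that contract (an illustration, not the actual dsputil code):

    #include <math.h>
    #include <stdint.h>

    /* Input floats are expected to be pre-scaled to the int16 range, so a
     * -1..1 decoder such as wmapro would have to fold a 32768 factor into
     * its tables first; the conversion itself is then just clip + round. */
    static void float_to_int16_prescaled(int16_t *dst, const float *src, int len)
    {
        for (int i = 0; i < len; i++) {
            float v = src[i];
            if      (v >  32767.0f) dst[i] =  32767;
            else if (v < -32768.0f) dst[i] = -32768;
            else                    dst[i] = (int16_t)lrintf(v);
        }
    }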
[18:37:24] <merbanan> and that can be remuxed directly into compliant .wav ?
[18:37:50] <mru> I've no idea
[18:38:04] <mru> I've never owned anything that could handle float samples
[18:39:18] <merbanan> I think we need to sort out what range/resolution and scale formats have
[18:39:25] <merbanan> and what we need in ffmpeg
[18:40:53] <merbanan> or what is commonly used in different containers etc
[18:40:55] <mru> the wav spec says only "PCM audio in IEEE floating-point format"
[18:41:17] <merbanan> how nice :/
[18:41:35] <mru> for format code 3
[18:42:23] <merbanan> I'd like to know how the formats all relate to each other
[18:48:03] * _av500_ thanks mru for fighting his cause
[18:54:20] <merbanan> if you treat int16 as Q1.15 then float at -1..1 makes sense, but what about int24 then ...
[18:54:43] <merbanan> is it just Q1.23 ?
[18:54:58] <mru> _av500_: everything is pointing towards an avcodec_decode_audio4()
[18:55:18] <merbanan> and a 5 when we add downmixing :)
[18:55:29] <mru> we already have that
[18:55:32] <mru> in the api
[18:55:46] <mru> it's the standalone downmixing that's missing
[18:56:58] <merbanan> yes correct
[18:57:16] <merbanan> but downmixing ac3 and dts isn't the same
[18:57:23] <_av500_> mru: fine with me as long as i get to select the fixed point output somehow
[18:57:27] <merbanan> they use different coeffs
[18:57:44] <mru> ac3 and dts downmixing is defined by their specs
[19:00:02] <Kovensky> CC      libavcodec/aaccoder.o
[19:00:02] <Kovensky> gcc: Internal error: Segmentation fault (program cc1)
[19:00:02] <Kovensky> yay
[19:00:11] <Kovensky> not reproducible though :/
[19:00:15] <mru> bad ram
[19:00:15] <Kovensky> rerunning make worked
[19:00:22] <Kovensky> hmm, maybe
[19:00:25] <_av500_> compiling on XM?
[19:00:38] <Kovensky> I'm getting some creepy kernel Oops when running wine apps too
[19:00:38] <Kovensky> http://pastebin.org/240444
[19:01:46] <mru> bad ram
[19:02:53] <Kovensky> oh well
[19:02:57] * Kovensky installs memtest86
[19:03:16] <mru> too much overclocking?
[19:03:36] <Kovensky> well, I did overclock my CPU, but only by 100MHz ._.
[19:04:20] <Kovensky> the multiplier is locked, I can only mess with FSB freq, and increasing it too much makes the RAM not work
[19:04:33] <mru> of course
[19:04:33] <Kovensky> lol 1600MHz K8 CPU on PCCHIPS mobo
[19:09:04] * Kovensky goes turn the overclocking down a notch or two and run memtest
[19:18:07] <mru> there, flame started
[19:20:49] <mru> I'm going out for a while, please don't do anything rash while I'm gone
[19:21:04] <mru> that is, don't commit any float-audio stuff until I've had a look
[19:22:28] <elenril> or you'll remove their commit rights? ;)
[20:28:00] <Kovensky> yep, bad ram, errors on 0x2256bcc4 even @ stock clocks
[20:36:49] <iive> Kovensky: time to buy something more expensive with heatsinks on it.
[20:42:56] * Kovensky has $-150
[21:49:25] <hyc> wbs: thanks for your help. I've now got Hulu streaming to my G1 in realtime
[21:49:57] <wbs> hyc: congrats!
[21:50:35] <hyc> yeah it's pretty cool. someone else could take it and turn it into an app I suppose
[21:50:52] <wbs> hyc: you may want to tweak the reflector_buffer_size_sec value in streamingserver.xml btw, the default of 10 is a bit high, you can adjust it down to 1, to get lower latency, if that's of interest
[21:51:18] <hyc> i don't think it's a problem right now
[21:51:23] <wbs> ok
[21:51:34] <wbs> I guess this is the second user of the RTSP muxer, except me ;P
[21:51:41] <hyc> lol
[21:51:58] <wbs> i debugged the youtube rtsp url you sent btw
[21:52:11] <hyc> rtmp and rtsp all at once
[21:52:19] <hyc> oh yes? what did you find?
[21:52:40] <wbs> well, the audio format isn't supported at all, so it not working is just to be expected, but the video track is quite problematic too
[21:52:52] <wbs> it sends the video packets out of order for some strange reason
[21:53:13] <wbs> not sure if that's in the original data on their servers, or just always reordered in transport
[21:53:35] <wbs> the libavformat rtpdec code needs to buffer packets and reorder them to get it decoded sanely
[21:53:38] <hyc> ah... well this is UDP after all
[21:54:04] <wbs> yeah, but I have never seen any such problems before, so it may actually be reordered in that way at the sending end, for some strange reason
[21:54:33] <wbs> I'm hoping for a good explanation from lu_zero, I'm writing a RFC on the issue now
[21:55:43] <hyc> sounds like it'll be interesting. how is anyone supposed to know how many packets to buffer
[21:57:16] <wbs> yeah
[22:14:03] <kierank> oh dear mp4a-latm in rtsp...that's not going to end well :/
[23:35:33] <Kovensky> oh, so people will finally have "motivation" to make LATM work? =p
[23:35:54] <iive> what is latm?
[23:36:05] <Kovensky> some aac thing
[23:36:14] <iive> oh.
[23:36:14] <Kovensky> used by some obscure DVB and ISDB broadcasters

