[Ffmpeg-devel] RTP patches & RFC
Mon Oct 9 18:35:03 CEST 2006
On Mon, Oct 09, 2006 at 10:53:31AM -0500, Ryan Martell wrote:
> >>>5) My code in rtp_h264 uses linked lists. It creates packets,
> >>>stuffs data into them, resizes them, etc. It means at best 2
> >>>copies per packet (from the UDP buffer, to my packet, to the
> >>>AVPacket), and at worst it could be many copies (I have to
> >>>aggregate all of the packets for a given timestamp together).
> >>>My philosophy is "get it working, then make it fast." Are these
> >>>acceptable issues? I could keep a pool of packets around, but
> >>>the payloads are various sizes. Alternatively, I could set it
> >>>up the way TCP handles its streams, but that's a lot of
> >>>overhead and room for error.
> >>Can't you put some of this part in the framer layer?
> >framer == AVParser (so you have a chance to find it ...)
> >anyway, code which does unneeded memcpys when they can be avoided
> >will be rejected
> >and I don't understand why you need 2 copies; just have a few
> >AVPackets (one for each frame) and get_buffer() the data into them
> >if the final size isn't known (misdesigned protocol ...) then you
> >need some av_realloc(); for out-of-order packets, which IMO should
> >be rare, memcpy() should be fine
> Okay, I'll take a look at the framer. I was only looking at the rtp/
> rtsp stuff, and have no idea what happens to the AVPacket once I hand
> it up from the rtp code.
> I don't think I'm using unneeded memcpys right now; this is what
> happens:
> 1) A UDP packet comes in.
> 2) There are three things that can happen to that packet:
> a) I split it into several NAL units (creating my own internal
> packets).
> b) I pass it through unchanged (to my own internal packet)
> c) I accumulate it into a partial packet which, when complete
> (it takes multiple UDP packets to compose), gets added to my own
> internal packet queue.
> 3) I then take all the packets on my own internal queue, get all of
> them that are for the same timestamp (assuming there is a different
> timestamp beyond them in the queue, meaning I have everything for a
> given frame), and accumulate them into a single AVPacket.
> Previously, I was not using my own packet queue, and was just handing
> them up as I got them. But the h264 codec must have all NALs for a
> given frame at the same time, so that didn't work.
just set AVStream->need_parsing = 1; in the "demuxer" and an AVParser
shall merge and chop up the packets into complete frames, there's only
one thing it cannot do and that is reorder packets ...
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
In the past you could go to a library and read, borrow or copy any book
Today you'd get arrested for merely telling someone where the library is