[Ffmpeg-devel] RTP patches & RFC

Ryan Martell rdm4
Mon Oct 9 17:53:31 CEST 2006


>>> 5) My code in rtp_h264 uses linked lists.  It creates packets,  
>>> copies
>>> stuff into them, resizes them, etc.  It means at best 2 copies per
>>> packet (from the UDP buffer, to my packet, to the AVPacket), and at
>>> worst it could be many copies (I have to aggregate all of the  
>>> packets
>>> for a given timestamp together).  My philosophy is "get it  
>>> working, then
>>> make it fast.".  Are these acceptable issues?  I could keep a  
>>> pool of
>>> packets around, but the payloads are various sizes.   
>>> Alternatively I could
>>> set it up the way TCP handles its streams, but that's a lot of  
>>> pointer
>>> overhead and room for error.
> [...]
>> Can't you put some of this part in the framer layer?
> framer == AVParser (so you have a chance to find it ...)
> anyway, code which does unneeded memcpies when they can be avoided  
> easily
> will be rejected
> and I don't understand why you need 2 copies, just have a few  
> AVPackets (one
> for each frame) and get_buffer() the data into them
> if the final size isn't known (misdesigned protocol ...) then you  
> need
> some av_realloc() for out-of-order packets which IMO should be rare
> memcpy() should be fine

Okay, I'll take a look at the framer.  I was only looking at the rtp/ 
rtsp stuff, and have no idea what happens to the AVPacket once I hand  
it up from the rtp code.

I don't think I'm using unneeded memcpys right now; this is what  
happens:
1) A UDP packet comes in.
2) There are three things that can happen to that packet:
	a) I split it into several NAL units (creating my own internal packets)
	b) I pass it through unchanged (to my own internal packet)
	c) I accumulate it into a partial packet, which, when complete  
(it takes multiple UDP packets to compose), gets added to my own  
internal packet queue.

3) I then take all the packets on my own internal queue, get all of  
them that are for the same timestamp (assuming there is a different  
timestamp beyond them in the queue, meaning I have everything for a  
given frame), and accumulate them into a single AVPacket.

Previously, I was not using my own packet queue, and was just handing  
them up as I got them.  But the H.264 decoder must have all NALs for a  
given frame at the same time, so that didn't work.

The final size of a given frame packet is unknown. (I didn't design  
the protocol! ;-) ).

I'll go take a peek at the AVParser stuff, and see if that will help.