[FFmpeg-devel] [RFC] rtpdec: Reordering RTP packets
Wed May 19 15:28:26 CEST 2010
On Wed, May 19, 2010 at 12:05:41PM +0300, Martin Storsjö wrote:
> On Tue, 18 May 2010, Ronald S. Bultje wrote:
> > On May 18, 2010, at 8:17 PM, Michael Niedermayer <michaelni at gmx.at>
> > wrote:
> > > On Tue, May 18, 2010 at 11:37:57PM +0300, Martin Storsj? wrote:
> > >> On Tue, 18 May 2010, Luca Barbato wrote:
> > >>
> > >>>>> Default reorder queue size - the idea was to move the decision
> > >>>>> on the
> > >>>>> queue size out of rtpdec, so that it could be dynamically
> > >>>>> choosable.
> > >>>>> For now there's no such code, but the decision on the queue size
> > >>>>> is
> > >>>>> outside of rtpdec at least.
> > >>>>>
> > >>>>> This perhaps still could be a define (where should it be in that
> > >>>>> case,
> > >>>>> rtpdec.h?) even if it isn't hardcoded as a static array in
> > >>>>> RTPDemuxContext
> > >>>>> as in the previous attempt.
> > >>>>
> > >>>> Changed to a define now.
> > >>>
> > >>> I'd consider it as rtp proto param, but I know will be annoying
> > >>> forwarding it from rtsp://path/to/resource?buffer=10&tcp to it but
> > >>> might
> > >>> be something nice for downstream users. Otherwise use the
> > >>> AVOption...
> > >>
> > >> Yes, Ronald suggested something such, too. Although, I see that as a
> > >> separate feature that can be added later, after the general
> > >> reordering
> > >> support within rtpdec...
> > >
> > > I am scared.
> > >
> > > The user cares about one thing, and that is delay, not packet count,
> > > which is just a meaningless number, unfit to be a constant by
> > > default.
> > Or size in bytes (more from a security/resource/admin point of view),
> Usually, the RTP packets have a fixed max size, and as long as the max
> queue size is capped by a fixed number of packets (in addition to a max
> delay given in time), I don't see this as such a big threat from a
> resource usage point of view. If the source sends huge packets, that will
> force the whole pipeline to allocate more resources. The queue just uses a
> bit more resources, scaling linearly with the increase in packet size.
If you want to cap resources like bytes of memory, then cap bytes of
memory directly, not a count of variable-size packets.
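That distinction can be sketched in a few lines of C. This is a hypothetical illustration, not existing FFmpeg code; the `ReorderQueue` and `queue_try_add` names are mine. The admission check is on total buffered bytes, so oversized packets cannot inflate memory use past a fixed cap:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch (names are mine, not FFmpeg API): admit packets
 * into the reorder queue based on total buffered bytes, so that
 * variable-size packets cannot push memory use past a fixed cap. */
typedef struct QueuedPacket {
    struct QueuedPacket *next;
    int size;
    uint8_t *data;
} QueuedPacket;

typedef struct ReorderQueue {
    QueuedPacket *head;
    int queued_bytes;   /* sum of sizes of all queued packets */
    int max_bytes;      /* the hard memory cap */
} ReorderQueue;

/* Returns 0 on success, -1 if the cap (or allocation failure) prevents
 * queueing; on -1 the caller has to flush or drop. */
int queue_try_add(ReorderQueue *q, const uint8_t *buf, int size)
{
    QueuedPacket *pkt;
    if (q->queued_bytes + size > q->max_bytes)
        return -1;
    pkt = malloc(sizeof(*pkt));
    if (!pkt)
        return -1;
    pkt->data = malloc(size);
    if (!pkt->data) {
        free(pkt);
        return -1;
    }
    memcpy(pkt->data, buf, size);
    pkt->size = size;
    pkt->next = q->head;
    q->head   = pkt;
    q->queued_bytes += size;
    return 0;
}
```

With a 100-byte cap, a second 60-byte packet is rejected regardless of how few packets are queued, which is the point: the bound is on memory, not on a packet count.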
> > but yes, I think I agree with Michael here. We should implement both,
> > and a number of packets isn't terribly useful then...
> Yes, a max delay would be useful. To implement it properly, it needs code
> in two places, though:
> Within rtpdec: code that checks the timestamp range of packets in the
> queue before choosing whether to forcibly output the first one,
> regardless of whether we're waiting for a possibly dropped packet.
> Within the RTSP demuxer: say we receive a packet once a second, but
> have a max reordering delay of 50 ms. If we miss one packet and receive
> the next one, it will be kept in the reorder queue until the next
> received packet (one second later), since the rtpdec code has no chance
> to flush it from the queue until it gets called again, when the next
> packet is received. This would require additional code in
> udp_read_packet to call rtp_parse_packet with a NULL buffer, forcibly
> fetching the first queued packet (if there is one) once we've been
> waiting longer than the max reordering delay for the next packet.
> It gets even worse if we're receiving two streams, one which gets
> packets very often and another which gets them very seldom. Then we
> have to keep track of the fact that we haven't received any packets on
> one stream for a while, and try to flush packets from its reordering
> queue.
> Does this sound necessary, or overly complicated? If this really is a
> problem, the user should at least be able to limit or disable the
> reordering, either as a number of packets or as a delay in ms.
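The timeout check described above boils down to a tiny predicate. The following is a hypothetical helper, not existing rtpdec code; `first_queue_time_us` and `now_us` are assumed to come from the caller's wall clock (e.g. av_gettime()):

```c
#include <stdint.h>

/* Hypothetical helper (not existing rtpdec code): if the oldest queued
 * packet has been waiting longer than max_delay_us, force it out even
 * though a gap (a possibly dropped packet) still precedes it. */
int should_force_flush(int64_t first_queue_time_us, int64_t now_us,
                       int64_t max_delay_us, int queue_len)
{
    if (queue_len == 0)
        return 0;                 /* nothing queued, nothing to flush */
    return now_us - first_queue_time_us > max_delay_us;
}
```

The two-stream problem is then just a matter of who evaluates this predicate: rtpdec can only evaluate it when a packet arrives on the same stream, so a caller-side timer (in udp_read_packet, per the above) has to evaluate it for streams that have gone quiet.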
The obvious way to handle streaming (overall, not just libav) is that
1. packets are received and put in a buffer
2. each packet is decoded and displayed once its time has come
3. the initial delay is set so that display likely won't get stuck, and
   so that the delay doesn't annoy the user too much
4. if the next packet is unavailable by the time it is needed, then
   either one has to decode without it or one has to wait, which implies
   that the delay between receive and display is larger from now on
5. clocks used to decode/display have to be synchronized between server
   and client (using SCR/PCR in MPEG and NTP in RTP)
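Steps 2-4 above can be sketched as follows. This is a hypothetical illustration (the `PlayoutClock` and `schedule_display` names are mine); the only point it demonstrates is that taking the "wait" branch of step 4 permanently grows the receive-to-display delay:

```c
#include <stdint.h>

/* Hypothetical sketch of steps 2-4: a packet is displayed at
 * pts + delay; if it is already late when scheduled, the
 * receive-to-display delay grows permanently (the "wait" branch
 * of step 4). */
typedef struct PlayoutClock {
    int64_t delay_us;   /* current receive-to-display delay (steps 3/4) */
} PlayoutClock;

int64_t schedule_display(PlayoutClock *c, int64_t pts_us, int64_t now_us)
{
    int64_t display_time = pts_us + c->delay_us;
    if (display_time < now_us) {
        c->delay_us += now_us - display_time;  /* late: grow the delay */
        display_time = now_us;
    }
    return display_time;
}
```

An on-time packet leaves the delay untouched; a packet that arrives after its slot has passed is shown immediately and shifts every later packet by the same amount, which is exactly the delay-vs.-robustness trade-off under discussion.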
I might be missing some things, and there may be better and different
approaches, and I don't mind at all if it's done differently.
But I do mind if it's done in a way that is worse in terms of delay vs.
reordering resistance, because I don't think the world needs another RTP
implementation that doesn't work properly.
Now, if we assume the packet queue is inside lavf, with any queue between
lavf and lavc in the user app being as small as possible, then the
application will call av_read_frame() when it _needs_ the next packet,
and the RTP code simply has to return the next packet in this case.
I see no complexity here.
This might need minor adjustments in ffplay related to queue sizes,
some two-line initial delay buildup loop between av_open* and
av_read_frame(), and we may need a way for the user app to indicate which
stream it needs a packet from first for av_read_frame().
But it really does not look all that complex.
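The "initial delay buildup loop" could look roughly like this. As a hypothetical, testable stand-in, an array of packet timestamps replaces the pts values av_read_frame() would return; the real loop would simply keep reading until the buffered timestamp span covers the chosen delay:

```c
#include <stdint.h>

/* Hypothetical sketch of the initial-delay buildup between av_open*()
 * and the main av_read_frame() loop: keep queueing packets until the
 * buffered timestamp span covers the initial delay.  pts_us[] stands in
 * for the timestamps av_read_frame() would return. */
int prebuffer_count(const int64_t *pts_us, int n_pkts,
                    int64_t initial_delay_us)
{
    int n = 1;  /* always queue at least the first packet */
    while (n < n_pkts && pts_us[n - 1] - pts_us[0] < initial_delay_us)
        n++;
    return n;   /* packets queued before playback starts */
}
```

With packets 20 ms apart and a 50 ms initial delay, four packets get queued before the first av_read_frame()-driven display, which matches the "some 2 line loop" scale claimed above.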
As said, I don't mind at all if this is done differently, if the end
result is equally good from the user's point of view in performance and
features.
I also don't mind if just a subset of this is implemented, to keep work
and complexity small.
But I do mind if work goes off toward some dead end that doesn't get us
closer to a proper implementation.
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB