[FFmpeg-devel] [PATCH] Add an RTSP muxer
Tue Jan 5 12:04:51 CET 2010
On 05/01/10 11:31, Martin Storsjö wrote:
>>> - I need to better understand your rtpenc.c patches. Well, I understand what
>>> the patches do, but I do not understand why all those changes are needed (for
>>> example, patch 10/24 says "Use the AVStream st member from RTPMuxContext
>>> instead of AVFormatContext->streams", but it does not explain why. BTW, I
>>> just noticed that the "st" field of RTPMuxContext seems to be unused...).
>>> It seems that your goal is to use the rtpenc code without using an
>>> output format context... Why? Cannot the RTSP muxer create one or more
>>> proper RTP muxers, and use them, instead of exposing some of the RTP
>>> muxer internals?
>> Yes, that was my goal. I wasn't really sure how to chain the muxers, i.e.
>> how to connect the AVStream* that the user set in the RTSP AVFormatContext
>> to the internal RTP AVFormatContexts (just copy the pointers?), and thus I
>> thought using RTPMuxContext directly would be leaner.
>> From both your and Ronald's reactions, I see that you'd prefer me to use
>> the full AVFormatContext interface in between.
> I tried creating full proper RTP muxers for each of the streams
Thanks for trying this.
> and while it does work, it feels more messy to me.
OK; I need some time to study the problem and to see whether this
approach can be taken without introducing any mess in the code.
> Downsides to this approach, in my opinion:
> - I store the RTP AVFormatContext in rtsp_st->transport_priv
This looks fine, I think.
> and thus
> initialize it long before the proper initialization point in
> rtsp_open_transport_ctx. This is because I need to create the
> AVFormatContexts before doing the setup, in order to use the contexts for
> creating the SDP (which, as I understood from Luca, is the correct approach).
Without having looked at the code yet, this looks fine too... Obviously,
rtsp_open_transport_ctx() will need some modifications (or we call a
different function in the RTSP output case). But I hope it can be done.
> - To get the codec parameters available to the RTP muxer, I simply free
> the newly created AVCodecContext and copy in the pointer to the
> AVCodecContext of the original AVStream in the RTSP muxer. Instead of
> copying the pointer, the relevant fields (extradata, frame_size, etc.)
> could of course be copied individually.
I need to look at it more deeply. But why free the AVCodecContext?
Maybe both the RTSP's AVStream and the RTP's AVStream can contain a
pointer to the same AVCodecContext? Or am I missing something? I'll have
a better look later.
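For reference, a minimal sketch of what the pointer-sharing variant might
look like (API names as of this era; `chain_rtp_muxer`, the error handling,
and the use of `s->streams[i]` are my assumptions, not the actual patch):

```c
#include "avformat.h"

/* Hedged sketch, not the submitted patch: create one chained RTP
 * muxer for stream i of the RTSP context s, and let its AVStream
 * share the caller's AVCodecContext instead of duplicating
 * extradata, frame_size, etc. */
static AVFormatContext *chain_rtp_muxer(AVFormatContext *s, int i)
{
    AVFormatContext *rtpctx = avformat_alloc_context();
    AVStream *st;

    if (!rtpctx)
        return NULL;
    rtpctx->oformat = guess_format("rtp", NULL, NULL);

    st = av_new_stream(rtpctx, 0);
    if (!st) {
        av_free(rtpctx);
        return NULL;
    }

    /* Drop the freshly allocated codec context and point at the
     * RTSP stream's one, so both AVStreams see the same settings. */
    av_free(st->codec);
    st->codec = s->streams[i]->codec;

    return rtpctx;
}
```

The shared pointer would of course have to be reset before the chained
context is freed, so the codec context is not freed twice.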
> Given this comparison, what's your opinion - internal RTP muxer interface
> or proper chained AVFormatContexts within the RTSP muxer?
I still think the "chained muxers" approach is better, but maybe some
comments from Michael would help.
Of course, we have to implement chained muxers in a clean way, without
making the code too complex, and this might require some work... As I
wrote in the previous email, I am willing to help in this.