[FFmpeg-devel] [PATCH] ffserver rtsp bug fixes

Luca Abeni lucabe72
Wed May 19 09:26:51 CEST 2010


On 05/19/2010 09:10 AM, Howard Chu wrote:
[...]
>>>> I think this is the wrong solution. The (IMHO) correct one would be to
>>>> use a bitstream filter to convert the NAL syntax.
>>>
>>> Why is that a better solution?
>>
>> Because this way we can keep different functionalities in different parts
>> of the code: the bitstream filter converts the bitstream syntax, and the
>> RTP muxer packetises the bitstream. We also avoid code duplication and
>> reduce the complexity of the RTP muxer code.
>> AFAIR, the needed bitstream filter is already there; it just needs to be
>> used (I used this solution in one of my programs some time ago... If I
>> remember correctly, I only needed to fix some minor issue, and then it
>> worked).
>
> I don't think the muxer's complexity has been harmed by these patches.

They add (probably duplicated) code to rtpenc_h264.c and to sdp.c.

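To make the alternative concrete: reusing the existing h264_mp4toannexb
filter on the application side would only need something like the sketch
below (untested; the function name and the surrounding variables are
invented, and I am writing the av_bitstream_filter_* calls from memory):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

/* Convert one H.264 packet from MP4 ("avcC") syntax to Annex B and hand
 * it to the RTP muxer.  The filter allocates a new buffer only when it
 * actually has to rewrite the packet. */
static int send_packet_annexb(AVFormatContext *rtp_ctx, AVStream *st,
                              AVBitStreamFilterContext *bsfc, AVPacket *pkt)
{
    uint8_t *out_buf  = NULL;
    int      out_size = 0;
    int ret;

    ret = av_bitstream_filter_filter(bsfc, st->codec, NULL,
                                     &out_buf, &out_size,
                                     pkt->data, pkt->size,
                                     pkt->flags & AV_PKT_FLAG_KEY);
    if (ret < 0)
        return ret;

    if (ret > 0) {                      /* a new buffer was allocated */
        AVPacket filtered = *pkt;
        filtered.data = out_buf;
        filtered.size = out_size;
        ret = av_write_frame(rtp_ctx, &filtered);
        av_free(out_buf);
        return ret;
    }

    /* the packet was already in Annex B syntax, send it as it is */
    return av_write_frame(rtp_ctx, pkt);
}

The filter context is created once per stream with
av_bitstream_filter_init("h264_mp4toannexb") and released with
av_bitstream_filter_close() when the stream is torn down.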

> On the other hand, I've just taken a look at the h264_mp4toannexb filter
> source code and it is quite dense. Also, anything that does
> "alloc_and_copy" is a bad idea from a memory and CPU resource perspective.
>
> If bitstream filters were capable of operating in-place on the data, I
> might agree with you. But to me it just looks like a useless pig.

Well, there can be two cases:
1) bitstream filters are useless -> in this case I agree that something like
    your patch (with the code duplication removed) can be OK
2) bitstream filters are useful and should be used -> in this case, I believe
    a bitstream filter should be used here.

I am not qualified to say whether 1) or 2) is the case (maybe someone else
can comment on this), but the fact that bitstream filters exist seems to
suggest that 2) is the case...


>> In my opinion, a muxer (or a demuxer) should accept (or produce) only one
>> bitstream syntax per codec. If the application wants to use a different
>> bitstream syntax, then it should use a bitstream filter for the
>> conversion.
>
> How does the application know that it needs to use a filter?

By looking at the bitstream source (probably a demuxer) and at the
destination (the RTP muxer, in this case): for H.264, for example, the
mov/mp4 demuxer produces packets in MP4 syntax, while the RTP muxer expects
Annex B.

> How does it
> know that the syntaxes are different? Why should the application author
> worry about details like this that are otherwise hidden under several
> layers of binary encoding?

In my opinion, for the same reason why an application author has to worry
about converting pixel formats, etc.: if a decoder produces RGB frames and
the application wants to feed them into an encoder which only accepts YUV
frames, the application is in charge of converting them. I think this
situation is similar...
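For example, with libswscale the conversion is completely in the
application's hands; something like this (just a sketch; error handling and
frame allocation are omitted, and I am assuming the current sws_scale()
prototype):

#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

/* Convert a decoded RGB24 frame into the YUV420P frame the encoder
 * expects; both frames are assumed to be already allocated. */
static int rgb_to_yuv420p(AVFrame *rgb, AVFrame *yuv, int width, int height)
{
    struct SwsContext *sws =
        sws_getContext(width, height, PIX_FMT_RGB24,
                       width, height, PIX_FMT_YUV420P,
                       SWS_BICUBIC, NULL, NULL, NULL);

    if (!sws)
        return -1;
    sws_scale(sws, (const uint8_t * const *)rgb->data, rgb->linesize,
              0, height, yuv->data, yuv->linesize);
    sws_freeContext(sws);
    return 0;
}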

>>> so that libavcodec/h264.c doesn't have to do exactly these
>>> same steps (after all, that's where I lifted this code from)?
>>
>> This smells like code duplication... :)
>
> Indeed. These functions ought to be collected into a single place. I
> believe the "clean split" between decoder, muxer, and network layer
> you're alluding to is a distinct disadvantage here. Anything that
> touches H264 has to understand how to parse NALUs; that code belongs in
> a single place usable from all of those layers.

So a helper function doing this can be introduced and used in all the
relevant places...
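Something as simple as this (the name is invented, and a real helper will
probably have to handle a few more corner cases) could live in one shared
file and be called from h264.c, rtpenc_h264.c and the mp4toannexb filter:

#include <stdint.h>

/* Return a pointer to the first byte after the next 00 00 01 start code
 * in [p, end), or end if no start code is found.  A 4-byte 00 00 00 01
 * start code is handled too, since its last three bytes form a 3-byte
 * start code. */
static const uint8_t *find_next_nal(const uint8_t *p, const uint8_t *end)
{
    while (end - p >= 3) {
        if (p[0] == 0 && p[1] == 0 && p[2] == 1)
            return p + 3;
        p++;
    }
    return end;
}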


				Luca


