[FFmpeg-soc] [soc]: r3757 - libavfilter/diffs/01_ffplay_filters.diff

Stefano Sabatini stefano.sabatini-lala at poste.it
Sun Oct 5 16:16:16 CEST 2008


On date Saturday 2008-10-04 16:17:20 +0200, Michael Niedermayer encoded:
> On Sat, Oct 04, 2008 at 12:39:41PM +0200, Stefano Sabatini wrote:
> > On date Saturday 2008-10-04 00:37:49 +0200, Michael Niedermayer encoded:
> > > On Fri, Oct 03, 2008 at 04:50:00PM +0200, Stefano Sabatini wrote:
> > > > On date Friday 2008-10-03 02:11:39 +0200, Michael Niedermayer encoded:
> > > > > On Thu, Oct 02, 2008 at 10:48:41PM +0200, stefano wrote:
> > > > > > Author: stefano
> > > > > > Date: Thu Oct  2 22:48:41 2008
> > > > > > New Revision: 3757
> > > > > > 
> > > > > > Log:
> > > > > > Fix a crash when using the rawvideo decoder in ffplay.
> > > > > > 
> > > > > > The rawvideo decoder uses the packet data to set the data in the
> > > > > > output AVFrame, so when the packet containing the data was released in
> > > > > > get_video_frame(), then the AVFrame data was released as well, and
> > > > > > when accessed caused a crash.
> > > > > 
> > > > > this change looks wrong
> > > > > and i dont mean just the indention
> > > > 
> > > > Do you mean only the code or the idea also is in itself wrong? Please
> > > > give me some hint, so I can fix it as fast as possible.
> > > 
> > > checking for a specific codec_id and then handling packets differently
> > > is not a good idea at all
> > 
> > Yes I agree.
> > 
> > Currently the input buffer in raw_decode() is used to fill the output
> > picture in avpicture_fill().
> > 
> > The first trivial solution which I can see is to create a new buffer
> > and memcpy the old one into it, then use this one to fill the
> > picture. This way we can safely free the input data buffer and still
> > be able to access the output frame, check the attached patch.
> > 
> > This would affect performance of course but I cannot see other safe
> > solutions.
> 
> simply not freeing packets too early ...

Are we sure this is a viable solution?

In ffmpeg/ffplay, outside of libavfilter, this works because we
process the input packet and *then* free it in the same routine,
after we have used the decoded output. In general, though, when
processing an input packet we don't know (and we shouldn't need to
know) when we have finished processing the corresponding decoded
*output* frame/frames.

For example, in ffmpeg.c this works because av_encode() processes the
input packet (via output_packet()), so by the time we delete the input
packet we're sure it has already been processed and that no one is
still accessing it.

When libavfilter is used this doesn't seem to work anymore; indeed the
same problem occurs with ffmpeg:

ffmpeg -s 200x200 -f rawvideo -i /dev/urandom -vfilters scale=100:200,vflip -y ~/rand.avi

Furthermore, what I dislike most about this approach is that we're
introducing a special case to deal with the rawvideo decoder (the
input buffer itself is used to store the data of the output AVFrame),
and so we're potentially introducing the *need* for a codec_id-specific
check.

If you don't like the memcpy solution, then we should add this
information to the codec context (e.g. a CODEC_CAP_USE_INPUT_BUFF
capability), and then set the packet containing the data to a
"don't free the data" mode just after decoding the corresponding
data.

I think this is possible, but I'm not sure it is worth the complexity
it adds; maybe a memcpy is just cheaper.

Regards.
