[Libav-user] Memory Leaks?

wm4 nfxjfg at googlemail.com
Sat Apr 5 23:01:38 CEST 2014


On Sat, 5 Apr 2014 22:32:31 +0200
Harald Schlangmann <harry at gps-laptimer.de> wrote:

> Hello everybody,
> 
> I have a video demuxing / decoding / filtering / encoding / muxing process up and running using the NDK on Android. During testing, I discovered continuous heap growth of roughly 200 kB per second of video processed. The value depends on the video resolution, which is HD / 720p in my test case. As I have not been able to find the leak despite several code reviews, I extracted the code into a test program and compiled and ran it on my Mac desktop. Since Valgrind does not seem to be supported on OS X Mavericks (10.9), I used the Leaks template from Instruments, which is part of the OS X development environment. Here is a screenshot showing the result: 
> 
> 	http://www.gps-laptimer.com/Leaks.png
> 
> As you can see, Instruments detects 6059 leaks from av_buffer_realloc() calls and an additional 12122 smaller ones from av_mallocz(). Oddly, I have not found a way to include the full caller hierarchy so far, but it seems pretty obvious that the leaks happen with every video frame or packet processed: the test video has 6275 frames, which is pretty close to the 6059 bigger leaks, and the smaller leaks are roughly double that. Please don't misread the red bars, they appear every 10 seconds because that is the leak sampling rate (i.e. the aggregated value for the last 10 seconds).
> 
> Despite the missing caller hierarchy, is anyone able to derive a theory about what's wrong, especially regarding the av_buffer_realloc() calls? I pulled the ffmpeg sources today, so they are recent. I have extracted part of the sources below (just to show the pattern used; several placeholders are included, and all error handling and pts/dts calculation is removed), so maybe someone can point me to issues in my code that generate the problem:
> 
> Code to read one frame; video frames are decoded and converted from the native pixel format to RGB565, audio packets are simply passed through…
> 
>         AVPacket    packet;
>         
>         av_init_packet (&packet);
>         packet.data = NULL;
>         packet.size = 0;
>         
>         int         ret = av_read_frame (videoSource->pFormatCtx, &packet);
>         
>         while (ret>=0)
>         {            
>             if (packet.stream_index==videoSource->videoStream)
>             {
>                 int     frameFinished;
>                 
>                 ret = avcodec_decode_video2 (videoSource->pVideoCodecCtx, videoSource->pVideoFrameRaw, &frameFinished, &packet);
> 
>                 if (ret>=0)
>                 {
>                     if (frameFinished)
>                     {
> 
>                         if (!videoSource->pImageConvertRGB565Ctx)
>                             videoSource->pImageConvertRGB565Ctx = sws_getContext (
>                                                                         videoSource->pVideoCodecCtx->width, videoSource->pVideoCodecCtx->height,
>                                                                         videoSource->pVideoCodecCtx->pix_fmt,
>                                                                         videoSource->pVideoCodecCtx->width, videoSource->pVideoCodecCtx->height,
>                                                                         PIX_FMT_RGB565,
>                                                                         SWS_BICUBIC, NULL, NULL, NULL);
>                         
>                         sws_scale (videoSource->pImageConvertRGB565Ctx,
>                                    videoSource->pVideoFrameRaw->data, videoSource->pVideoFrameRaw->linesize,
>                                    0, videoSource->pVideoCodecCtx->height,
>                                    videoSource->pFrameRGB565->data, videoSource->pFrameRGB565->linesize);
>                         
>                         av_frame_unref (videoSource->pVideoFrameRaw);
>                         
>                         break;
>                     }
> 
>                     //  The packet has been consumed by the ffmpeg engine (whether or not a frame was finished),
>                     //  so we must free its content now
>                     av_free_packet (&packet);
>                 }
>             }
>             else if (packet.stream_index==videoSource->audioStream)
>             {
>                 //  We do not free the packet here as it is used by AssetWriter (and freed there once it is written)
>                 videoSource->audioPacket = packet;
>                 break;
>             }
>             else
>                 av_free_packet (&packet); //  Not needed
>             
>             if (ret>=0)
>                 ret = av_read_frame (videoSource->pFormatCtx, &packet);
>         }
>    
> Code to write changed video frames:
> 
>     if (!flush)
>     {
>         if (!assetWriter->pImageConvertFromRGB565Ctx)
>             assetWriter->pImageConvertFromRGB565Ctx = sws_getContext (c->width, c->height, AV_PIX_FMT_RGB565,
>                                                                       c->width, c->height, c->pix_fmt,
>                                                                       SWS_BICUBIC, NULL, NULL, NULL);
>         
>         uint8_t *rgb565Data [AV_NUM_DATA_POINTERS] = { assetWriter->masterVideoSource->pFrameRGB565Buffer, NULL, NULL, NULL };
>         int 	rgb565Linesize [AV_NUM_DATA_POINTERS] = { c->width*2 /*linesize is twice the width*/, 0, 0, 0 };
>         
>         sws_scale (assetWriter->pImageConvertFromRGB565Ctx,
>                    rgb565Data, rgb565Linesize,
>                    0, c->height, assetWriter->dst_picture.data, assetWriter->dst_picture.linesize);
>     }
>     
>     int                         got_packet;
>     AVPacket                    packet;
> 
>     av_init_packet (&packet);
>     packet.data = NULL;
>     packet.size = 0;
>     
>     ret = avcodec_encode_video2 (c, &packet, flush?NULL:assetWriter->frame, &got_packet);
>     
>     if (got_packet)
>     {
>         ret = write_frame (assetWriter,
>                            &assetWriter->masterVideoSource->pFormatCtx->streams [assetWriter->masterVideoSource->videoStream]->time_base,
>                            assetWriter->video_st, &packet);
>     }
>     
>     av_free_packet (&packet);
> 
> Code to write unchanged audio packets:
> 
>     int         ret = 0;
>     
>     ret = write_frame (assetWriter,
>                        &assetWriter->masterVideoSource->pFormatCtx->streams [assetWriter->masterVideoSource->audioStream]->time_base,
>                        assetWriter->audio_st,
>                        &assetWriter->masterVideoSource->audioPacket);
>     
>     av_free_packet (&assetWriter->masterVideoSource->audioPacket);
>     
> 
> Thanks, and please let me know if you need more input.
> 
> Greetings Harald
> 
> -
> Harald Schlangmann
> Antwerpener Str. 52, 50672 Köln, Germany
> +49 151 2265 4439
> Harry at gps-laptimer.de
> www.gps-laptimer.de
> 

Did you unref the frame before passing it to the decoder? Some versions
of ffmpeg require this, though with some earlier versions it might be
incorrect and cause memory corruption. Either way, it's probably an API
usage error in the AVFrame management.
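
For illustration, here is a minimal sketch of a decode loop that unrefs
the frame before each decode call and frees the packet on every path. It
borrows the field names from your snippet (videoSource, pVideoFrameRaw,
and so on), so treat it as an outline under those assumptions rather
than drop-in code:

    AVPacket packet;

    av_init_packet (&packet);
    packet.data = NULL;
    packet.size = 0;

    while (av_read_frame (videoSource->pFormatCtx, &packet) >= 0)
    {
        if (packet.stream_index == videoSource->videoStream)
        {
            int frameFinished = 0;

            /* Drop any references the frame still holds from the previous
               iteration before the decoder writes into it again. */
            av_frame_unref (videoSource->pVideoFrameRaw);

            int ret = avcodec_decode_video2 (videoSource->pVideoCodecCtx,
                                             videoSource->pVideoFrameRaw,
                                             &frameFinished, &packet);

            /* The decoder has seen the packet, whether or not a full frame
               came out, so its content can be freed unconditionally. */
            av_free_packet (&packet);

            if (ret >= 0 && frameFinished)
            {
                /* ... sws_scale / further processing of pVideoFrameRaw ... */
            }
        }
        else
        {
            /* Packets of streams you do not keep must be freed as well. */
            av_free_packet (&packet);
        }
    }

Whether the early unref is needed depends on your ffmpeg version, but
freeing each packet on every code path - including the one where
frameFinished is set and you break out of your loop - is required in any
case.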

