[FFmpeg-user] memory leak with vaapi decode since ffmpeg_vaapi: Convert to use hw_frames_ctx only
Andy Furniss
adf.lists at gmail.com
Mon May 29 01:32:05 EEST 2017
Mark Thompson wrote:
>
> Hmph, looks like the parameter buffer confusion strikes again.
>
>
> The original VAAPI specification contained the following functions:
>
> """ /** * After this call, the buffer is deleted and this buffer_id
> is no longer valid * Only call this if the buffer is not going to be
> passed to vaRenderBuffer */ VAStatus vaDestroyBuffer ( ... /** * Send
> decode buffers to the server. * Buffers are automatically destroyed
> afterwards */ VAStatus vaRenderPicture ( """
>
> The newer lavc implementation follows this, and does not call
> vaDestroyBuffer() on buffers passed to vaRenderPicture(). Destroying
> them anyway would cause random crashes in multithreaded programs,
> because another buffer with the same ID could be created between the
> two calls and would then be destroyed by mistake.
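
To make the ordering concrete, here is a minimal sketch of submitting one
parameter buffer under the original contract. This is my own illustration,
not code from lavc; setup of the display/context/surface and all error
handling are omitted.

#include <va/va.h>

/* Hypothetical helper: submit one parameter buffer under the original
 * (< 0.40) contract, where vaRenderPicture() itself destroys the
 * buffers passed to it. */
static void submit_param_buffer(VADisplay dpy, VAContextID ctx,
                                VASurfaceID surface,
                                void *param, unsigned int size)
{
    VABufferID buf;

    vaCreateBuffer(dpy, ctx, VAPictureParameterBufferType,
                   size, 1, param, &buf);

    vaBeginPicture(dpy, ctx, surface);
    vaRenderPicture(dpy, ctx, &buf, 1);
    vaEndPicture(dpy, ctx);

    /* Under the old specification the buffer is already gone here.
     * Calling vaDestroyBuffer(dpy, buf) at this point is the racy case:
     * another thread may have created a new buffer that happens to
     * reuse the same ID, and that buffer would be destroyed instead. */
}
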
>
> However, the Intel implementation never actually followed the
> specification - it leaked the buffers passed to vaRenderPicture().
> So, a special driver quirk was added to detect that driver and
> additionally destroy the buffers in that case:
> <http://git.videolan.org/?p=ffmpeg.git;a=commit;h=4926fa9a4aa03f3b751f52e900b9efb87fea0591>.
> This restored the same behaviour as old lavc with the Intel driver
> without breaking other implementations.
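
Roughly, the quirk amounts to something like the following sketch. The
names and the vendor-string match are my own assumptions, not lifted from
lavc.

#include <string.h>
#include <va/va.h>

/* Assumed detection: check the driver's vendor string for the Intel
 * i965 driver (the exact match string is an assumption here). */
static int driver_leaks_render_buffers(VADisplay dpy)
{
    const char *vendor = vaQueryVendorString(dpy);
    return vendor && strstr(vendor, "Intel i965 driver") != NULL;
}

/* After vaEndPicture(), only destroy the buffers if the driver is
 * known to leak them; a spec-conformant (< 0.40) driver has already
 * freed them in vaRenderPicture(). */
static void cleanup_after_render(VADisplay dpy,
                                 VABufferID *bufs, int nb_bufs,
                                 int quirk_destroy_render_buffers)
{
    if (!quirk_destroy_render_buffers)
        return;
    for (int i = 0; i < nb_bufs; i++)
        vaDestroyBuffer(dpy, bufs[i]);
}
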
>
> That worked for a bit.
>
> Unfortunately, Intel got tired of people complaining that their
> driver was broken. Rather than fixing it, they decided to change the
> specification:
> <https://github.com/01org/libva/commit/3eb038aa13bdd785808286c0a4995bd7a1ef07e9>
> (this is an ABI and API break, and the SONAME was not changed).
>
> Thus we have the state now:
>
> VAAPI < 0.40:
>   A) Destroy buffers: wrong, will crash in multithreaded programs.
>   B) Don't destroy buffers: correct, driver quirk restores behaviour
>      for Intel.
>
> VAAPI >= 0.40:
>   A) Destroy buffers: correct, but may crash old drivers which haven't
>      been updated (not detectable, because the library allows loading
>      old drivers without telling the user).
>   B) Don't destroy buffers: wrong, driver quirk restores correct
>      behaviour for Intel; others may leak memory.
>
> lavc currently chooses (B) in both cases on the grounds that leaking
> memory is preferable to crashing randomly. There is some thought of
> switching to (A) for VAAPI >= 0.40, but given that we can't detect
> the version at runtime and the libraries consider everything to be
> compatible this seems dangerous.
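
For illustration, the only version information available without trusting
the loaded driver is the compile-time one, which is exactly why the choice
is awkward. The macro name defined below is mine, not lavc's.

#include <va/va.h>
#include <va/va_version.h>

/* VA_CHECK_VERSION() reflects the libva headers we built against, not
 * the driver that will actually be loaded at run time, so switching to
 * behaviour (A) based on it could still crash an old, un-updated
 * driver. */
#if VA_CHECK_VERSION(0, 40, 0)
#  define RENDER_DESTROYS_PARAM_BUFFERS 0  /* new contract: caller destroys */
#else
#  define RENDER_DESTROYS_PARAM_BUFFERS 1  /* old contract: vaRenderPicture destroys */
#endif
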
>
>
> For Mesa, I feel like I remember it having a correct < 0.40
> implementation, but looking again it doesn't seem to and there isn't
> any obvious change (maybe I am just wrong). Do you know if this
> changed? Adding the driver quirk, as for the Intel driver, is probably
> the right thing to do if it has always been like that; if it hasn't,
> then we need more magic to be able to distinguish versions and prevent
> older ones from crashing.
Thanks for the explanation.
I haven't seen anything change in mesa, and a search of st/va on cgit
doesn't turn up anything obvious.
Your github link points to a bug report which states that gstreamer
always released the buffers, and in testing it doesn't leak.
Since the AMD devs develop with that, they wouldn't have seen this, or, I
guess, needed to implement it in the driver.