[FFmpeg-user] memory leak with vaapi decode since ffmpeg_vaapi: Convert to use hw_frames_ctx only
adf.lists at gmail.com
Mon May 29 01:32:05 EEST 2017
Mark Thompson wrote:
> Hmph, looks like the parameter buffer confusion strikes again.
> The original VAAPI specification contained the following functions:
> """
> /**
>  * After this call, the buffer is deleted and this buffer_id is no longer valid
>  * Only call this if the buffer is not going to be passed to vaRenderBuffer
>  */
> VAStatus vaDestroyBuffer (
> ...
> /**
>  * Send decode buffers to the server.
>  * Buffers are automatically destroyed afterwards
>  */
> VAStatus vaRenderPicture (
> """
> The newer lavc implementation follows this, and does not call
> vaDestroyBuffer() on buffers passed to vaRenderPicture().  Destroying
> them anyway would cause random crashes in multithreaded programs,
> because another buffer with the same ID could be created between the
> two calls.
> However, the Intel implementation never actually followed the
> specification - it leaked the buffers passed to vaRenderPicture().
> So, a special driver quirk was added to detect that driver and
> additionally destroy the buffers in that case.
> This restored the same behaviour as old lavc with the Intel driver
> without breaking other implementations.
> That worked for a bit.
> Unfortunately, Intel got tired of people complaining that their
> driver was broken.  Rather than fixing it, they decided to change the
> specification to match their implementation (this is an ABI and API
> break, and the SONAME was not changed).
> Thus we have the state now:
> VAAPI < 0.40:
>   A) Destroy buffers: wrong, will crash in multithreaded programs.
>   B) Don't destroy buffers: correct; the driver quirk restores
>      behaviour for Intel.
> VAAPI >= 0.40:
>   A) Destroy buffers: correct, but may crash old drivers which
>      haven't been updated (not detectable, because the library allows
>      loading old drivers without telling the user).
>   B) Don't destroy buffers: wrong; the driver quirk restores correct
>      behaviour for Intel, others may leak memory.
> lavc currently chooses (B) in both cases, on the grounds that leaking
> memory is preferable to crashing randomly.  There is some thought of
> switching to (A) for VAAPI >= 0.40, but given that we can't detect
> the version at runtime and the libraries consider everything to be
> compatible, this seems dangerous.
> For Mesa, I feel like I remember it having a correct < 0.40
> implementation, but looking again it doesn't seem to, and there isn't
> any obvious change (maybe I am just wrong).  Do you know if this
> changed?  Adding the driver quirk like the Intel driver is probably
> the right thing to do if it has always been like that; if it hasn't,
> then we need more magic to distinguish versions and prevent older
> ones from crashing.
Thanks for the explanation.
I haven't seen anything change in mesa, and an st/va search on cgit
doesn't come up with anything obvious.
In your github link there is a link to a bug which states that
gstreamer always released the buffers, and testing it doesn't leak.
Since the AMD devs develop with that, they wouldn't have seen this,
or, I guess, needed to implement it in the driver.