[FFmpeg-devel] [PATCH 2/2] lavd/pulse_audio_enc: allow non monotonic streams

Nicolas George george at nsup.org
Sat Jan 4 12:24:56 CET 2014


On quartidi 14 nivôse, an CCXXII, Michael Niedermayer wrote:
> that's not practical; even 15 years ago, players had 3 buffers in video
> memory: while one was being loaded from the decoder and one was being
> displayed, the third was the next to display, and at the correct time the
> display was switched from one buffer to the other by a page flip.
> 
> a write_packet() call takes time and can get interrupted by other
> processes, so there is no way the presentation timing can be done from
> outside, unless that "outside" were inside the kernel
> 
> 
> [...]
> 
> more generally, there are 2 kinds of output devices:
> ones that can present (audio/video/...) at a time provided by the
> application, and ones that cannot do that
> 
> the 2nd kind, like a TV with a fixed 50 Hz output or most audio hardware,
> can (hopefully) provide feedback on when the actual display happened
> (using av_get_output_timestamp(), for example), and that feedback can then
> be used by an audio resampler or an fps conversion filter to stretch or
> squish the stream so that presentation happens when it is intended
> 
> output APIs/libs that lack their own buffers and just do "display it now"
> should be wrapped in their own thread that has a FIFO and performs the
> presentation when the user-provided timestamps call for it; that would
> happen in an FFmpeg device/muxer, or could use some utilities from common
> code
> 
> better ideas & solutions welcome of course
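
The feedback loop described in the quote above could look roughly like the
following; this is only a sketch, assuming the device actually implements
the query, and it leaves out how the resulting (stream time, wall time)
pairs would be fed to a resampler or fps filter:

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Sketch: ask the output device for the DTS of the last packet it output
 * and the wall-clock time at which it was output.  Comparing successive
 * pairs lets a resampler or fps-conversion filter estimate whether the
 * device clock runs faster or slower than the stream, and stretch or
 * squish accordingly. */
static int query_presentation(AVFormatContext *oc, int stream_index,
                              int64_t *stream_time_us, int64_t *wall_time_us)
{
    int64_t dts = AV_NOPTS_VALUE;
    int ret = av_get_output_timestamp(oc, stream_index, &dts, wall_time_us);

    if (ret < 0)
        return ret;              /* this device provides no feedback */
    if (dts == AV_NOPTS_VALUE)
        return AVERROR(EAGAIN);  /* no data output yet */

    /* dts is in the stream time base; convert to microseconds. */
    *stream_time_us = av_rescale_q(dts, oc->streams[stream_index]->time_base,
                                   AV_TIME_BASE_Q);
    return 0;
}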

What you describe is very beautiful in theory, but I am afraid it is not
very realistic right now.

First, AFAIK, none of the devices currently in lavd implements that kind of
time control, and I suspect none of the few applications that use them
expects it.

Second, this kind of behaviour would require additional API elements to
work. At the very least, applications would need a way of informing the
device of a timestamp continuity break (e.g. if the user has hit the "pause"
button), and the device would need a way of reporting under-runs.
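
Purely as a hypothetical illustration of what such additions might look
like (none of these names exists in lavd; they are invented here):

#include <libavformat/avformat.h>

/* Hypothetical: the application tells the device that the timestamps are
 * about to jump (e.g. the user hit "pause"), so the device should not try
 * to honour the gap. */
int avdevice_signal_discontinuity(AVFormatContext *device);

/* Hypothetical: the device reports that it ran out of data before the next
 * packet arrived, so the application can resynchronize. */
typedef void (*avdevice_underrun_cb)(AVFormatContext *device, void *opaque);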

So what you describe seems very nice, but I am afraid that implementing it
would be a huge amount of work.

As a side note, if we consider having specialized threads for output
devices, then I believe we should also consider having these threads report
user interaction to the application: for example, lavd/sdl or lavd/xv would
report window resize, keystrokes and clicks.
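
A hypothetical sketch of what such a device-to-application event could look
like (again, nothing of the sort exists in lavd; the names are invented
here):

/* Hypothetical event that a device thread could push back to the
 * application. */
typedef struct AVDeviceUserEvent {
    enum { DEV_EVENT_RESIZE, DEV_EVENT_KEY, DEV_EVENT_CLICK } type;
    int width, height;   /* for DEV_EVENT_RESIZE */
    int key;             /* for DEV_EVENT_KEY    */
    int x, y, button;    /* for DEV_EVENT_CLICK  */
} AVDeviceUserEvent;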


OTOH, I am afraid this patchset is wrong for another reason: if a timestamp
discontinuity is fed to the device, then av_get_output_timestamp() will
return strange results while the samples around the discontinuity are in the
device buffer. I believe the application should not pass the input
timestamps through, but should rather synthesize its own monotonic
timestamps.
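
For audio, synthesizing monotonic timestamps can be as simple as counting
the samples already sent. A minimal sketch (not from the patch; the
packet/stream plumbing is assumed to exist in the application):

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Sketch: derive the pts sent to the output device from the number of
 * samples already written, so the device never sees the input's
 * discontinuities. */
static void rewrite_audio_pts(AVPacket *pkt, AVStream *ost,
                              int64_t *samples_sent, int nb_samples,
                              int sample_rate)
{
    pkt->pts = pkt->dts = av_rescale_q(*samples_sent,
                                       (AVRational){ 1, sample_rate },
                                       ost->time_base);
    *samples_sent += nb_samples;
}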

Regards,

-- 
  Nicolas George