[FFmpeg-devel] Network IO in FFmpeg (was: abstractable threading api)

Roger Pack rogerdpack2 at gmail.com
Mon Dec 23 22:30:29 CET 2013


>The maximum size of the buffer seems to be around 300k, at least on Linux
> (it could be higher with the help of root privileges, but we cannot count
> on it). That is about half a second of digital TV, unless I am mistaken. A
> video processing application can easily be busy for that amount of time,
> even if it is on average fast enough to work in real time.
> Therefore, we need something whose task is to read from the socket as fast
> as possible, storing the data into a larger buffer.

I'd be happy to come up with a patch that increases the thread
priority of the UDP reader thread (on Windows), if it has a chance of
being accepted and would be helpful :)

> A thread for TCP too
>
> TCP has the same kind of problem: the kernel buffer is the same size, and
> the same treatment happens inside FFmpeg. The difference is that TCP has
> flow control, therefore when the buffer is full, the data is not discarded.
> But it can lead to other kinds of unpleasantness. The least of it would be
> to achieve a lower throughput than actually possible. At worst, the flow
> control could reach the sending application and cause it to lose some of
> its
> input.

Agreed; to me a dedicated thread seems less useful for TCP (since flow
control prevents outright data loss), though simplifying the API might
still be worthwhile.

My question was about "a thread for reading from file handles", so that
it could be caching/buffering future data from files before it is
requested (if that doesn't happen already).  It might result in a nice
speedup.  The same goes for writing to them, though that is a different
story.


> A thread for devices too
>
> Some devices have the same kind of problem. ALSA may do buffer overruns,
> x11grab may skip frames, etc. Unlike the protocols, the devices are at the
> demuxer level, so it is not possible to handle everything together, but the
> solution will likely be similar.

dshow does something similar (the current code discards frames if it
can't buffer them; in general, DirectShow's upstream filters discard
frames that aren't consumed in time, at least AFAICT).

> Integration in a generic event loop
>
> Some projects may need to avoid uncontrollable threads and to integrate the
> network IO in their own event loop. Making sure this is possible, at least
> for well known event loop APIs (libev2 for example) would be a good idea.
> That would probably require making ffurl_get_file_handle() public again
> (but
> probably with some extensions), and adding a public API to tell a protocol
> "do the shortest non-blocking operation you can".

libev doesn't have great Windows support (it supports only select, not
IOCP), but might be an option.  Combining lots of sockets into one
evented loop might slow things down though (where today a single
thread services a single port, trying to avoid any lost UDP packets),
since the loop has to execute the callback for each packet before it
can recv() again.

Anyway thanks for your help on this.
-roger-

