[FFmpeg-devel] [PATCH] Make RTP work with IPv6 enabled

Ronald S. Bultje rsbultje
Tue Oct 30 13:53:08 CET 2007

Hi Luca,

On 10/30/07, Luca Abeni <lucabe72 at email.it> wrote:
> Ronald S. Bultje wrote:
> [...]
> > For me, the bind() fails with EINVAL, as in ffserver...
> Now I am even more confused: is it bind() to fail, or is it sendto()?
> I understood that sendto() was the problem?

In udp.c, yes, because we use the family of the bind() address to set up the
socket. In my test.c, I use the family of the sendto() address to set up the
socket, so the opposite occurs. The problem is the same.


> The first UDP socket is used for receiving RTP packets (and I think the
> rtsp
> demuxer should NEVER use this socket for sending anything), while the
> second
> one is used for sending/receiving RTCP packets. Note that the destination
> address for this socket is known since the beginning (it's the rtsp server
> address).

But it is not configured; rtsp.c does only the bare minimum. You'll notice it
opens rtp://?localport=X, then sets up the reading by sending a SETUP request
(after having done a bind() on the socket, so the port is known), which tells
the server which port we can receive data on. Then it receives the transports,
from which we learn the port that data will come in on.
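For reference, the negotiation described above looks roughly like this on the wire (server name, ports, and session ID made up for illustration; see RFC 2326 for the Transport header syntax):

```
SETUP rtsp://example.com/stream/trackID=1 RTSP/1.0
CSeq: 3
Transport: RTP/AVP;unicast;client_port=5000-5001

RTSP/1.0 200 OK
CSeq: 3
Session: 12345678
Transport: RTP/AVP;unicast;client_port=5000-5001;server_port=6970-6971
```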

One way to solve this is to open rtp://<host>?localport=X, then later call
rtp_set_remote_uri() again once we have received the transports and know the
ports; but then you resolve the same host multiple times, which is extra
network traffic and thus sort of stupid. Below is a patch doing this.

> I am sure I am missing something, but I really do not see any problem here.
> Are you talking about the RTP sockets or the TCP socket used for RTSP
> commands?
> If you are talking about the RTP sockets, I believe that the target
> address and its family are known, because when RTP sockets are created
> the RTSP connection is already up (so, the server has already been
> contacted).

See above, rtsp.c doesn't use this when opening the rtp uri.

> Also note that the SDP should tell you if we are going to send/receive
> traffic over IPv6 or IPv4.

So I guess another way to solve this is that I could add a ?family=ipv4/ipv6
param to the udp/rtp options. Kind of ugly...

Attached patch ffmpeg-udp-use_correct_family.patch implements the first
proposed solution and replaces my previously submitted patch
ffmpeg-udp-send_and_connect.patch. It changes rtpproto.c to not increase the
provided portnum if none is given (which is on purpose). It changes rtsp.c
to open rtp://host?localport=X instead of rtp://?localport=X, so that we
know the host's address family. Lastly, it changes udp.c to use the host to
set a family prerequisite on the local port it bind()s to, so that the
address family of the target address and of the bind() address are the
same; it initializes dest_addr (or rather the whole UDPContext struct) to
zero, so that we don't use uninitialized memory there; and it adds error
reporting to udp_read()/udp_write() (maybe that is DoS'able if people use
stderr for log files - should I leave it or remove it?). With this patch, I
can stream rtp/udp from ffserver.
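The core idea of the udp.c change - letting the remote host's family constrain the local bind() lookup - can be sketched like this (this is a standalone illustration, not the actual patch; "127.0.0.1" stands in for the remote host and 5004 for the local port):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

/* Resolve "host" and return its address family (AF_INET / AF_INET6),
 * or AF_UNSPEC on failure. */
static int host_family(const char *host)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_DGRAM;
    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return AF_UNSPEC;
    int family = res->ai_family;
    freeaddrinfo(res);
    return family;
}

int main(void)
{
    /* Constrain the local (bind) lookup to the remote host's family,
     * so the bind() address and the target address can't disagree. */
    struct addrinfo hints, *local;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = host_family("127.0.0.1");  /* stand-in remote host */
    hints.ai_socktype = SOCK_DGRAM;
    hints.ai_flags    = AI_PASSIVE;

    if (getaddrinfo(NULL, "5004", &hints, &local) != 0) {
        fprintf(stderr, "resolve failed\n");
        return 1;
    }
    printf("local family matches remote: %s\n",
           local->ai_family == hints.ai_family ? "yes" : "no");
    freeaddrinfo(local);
    return 0;
}
```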

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ffmpeg-udp-use_correct_family.patch
Type: application/octet-stream
Size: 3292 bytes
Desc: not available
URL: <http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/attachments/20071030/855b102a/attachment.obj>
