[Ffmpeg-devel] [Question] avio/ByteIOContext/Circular Buffers?

Ryan Martell rdm4
Tue Dec 12 05:21:18 CET 2006


Hi...

On Dec 11, 2006, at 8:45 PM, Michael Niedermayer wrote:

> Hi
>
> On Mon, Dec 11, 2006 at 04:50:06PM -0600, Ryan Martell wrote:
>> Hi--
>>
>> I'm trying to get the mms stuff working, and having a tough time
>> integrating with asf.c.  I can do this rather easily as a URLContext,
>> but I need seeking to work, and the other nice features that the
>> AVInputFormat structure gives me. (Pausing, etc.)
>
> please elaborate on what the problem is with URLContext and seeking
> note, i know nothing about mms ...

The problem is that when I pause, I need to pause the network
stream, and fast forwarding/rewinding (seeking) means sending a
message to the server with the proper playtime and flushing queues.
I can't just seek to an arbitrary point in a file.  AFAIK,
AVInputFormat supports that, but URLProtocol does not.  Please
correct me if I'm wrong.

>> Anyway.
>>
>> I'm trying to use a ByteIOContext as a circular queue, and it doesn't
>> look like they're made for that.  They appear to be either read or
>> write, but not both- am I missing something?
>
> no, ByteIOContext is not intended for read+write, it would be possible
> to have 2 ByteIOContexts one for reading and one for writing but they
> would need something to store any data written and as mms streams  
> arent
> "writeable" this doesnt make any sense to me ...

I've figured out another way to do this that's cleaner anyway.  My
initial reason here is that asf.c uses a ByteIOContext for reading,
and since I'm interfacing with it from my own AVInputFormat, I don't
really have a ByteIOContext.  So I fake one and fill it with the
packets I receive.  I'll upload the patch soon (I finally got it
working today; now I have to fix up the seeking issues and then clean
everything up).


>> Essentially, I need
>> put_byte() and read_byte() to maintain their own indexes, and read
>> from separate places.
>>
>> I setup the ByteIOContext at init, then I put the asf header in it as
>> a i read it.  I then pad it with the necessary bytes (it must be a
>> multiple of packet_size).  At this point, I call the standard packet
>> read function in asf.  It seeks to the end of my header, and then
>> keeps reading (garbage) because it doesn't call the read_packet
>> function in my ByteIOContext.
>>
>> Any suggestions on ways to proceed would be appreciated.  Rewriting
>> asf.c is an option, but it has lots of seeks and tells in places, and
>> I worry about breaking it on the non-seeking version.  My thought
>> might be to add a flag to init_put_byte() that turns it into a
>> circular queue. (By that I mean that the it has a read and a write
>> ptr, and they are incremented separately and wrap when they hit the
>> end of the buffer.  The number of bytes available is calculated by
>> the difference between the two, taking into consideration buffer end,
>> etc.)
>
> please explain why you want to hack ByteIOContext, and no i dont like
> the idea at all

I've got a better way now, but for the sake of information:

Whenever I have done networking before (a fair number of times, in a
couple of games), I would use a state machine with circular queues
(probably a bad name for them; my nomenclature: essentially a buffer,
a buffer size, a read ptr, and a write ptr).  Since this was before
reliable threading (think Mac OS 9/Win9x), I would have an interrupt
routine that fired on packet arrival, put the bytes into a circular
queue, and atomically incremented the write pointer.  Then at idle
time in the game (or frame time, however you want to look at it) the
networking code would read as many packets from the circular queue as
were available.  So essentially I had the same structure for reading
and writing, with one producer and one consumer, and they could run
without conflict at different interrupt levels.  The reason I thought
of using that here is that the asf code is so tightly integrated with
the ByteIOContext, and I was thinking a ByteIOContext could work as
one of those.

And for completeness, my better way (in ugly form) is:

From asf.c:

// This was asf_read_packet; asf_read_packet calls this with the last
// two parameters as NULL.
// The pb will go away before the final patch.
int ff_asf_read_packet(AVFormatContext *s, AVPacket *pkt, ASFContext *asf,
                       ByteIOContext *pb, ASFLoadPacketProcPtr load_packet,
                       void *opaque)
{
    ASFStream *asf_st = 0;

    for (;;) {
        int rsize = 0;
        if (asf->packet_size_left < FRAME_HEADER_SIZE
            || asf->packet_segments < 1) {
            int ret;
            if (!load_packet) {

                .... Do what it did before ....

            } else {
                // the 2* part is a hack currently
                asf->packet_pos = (2 * asf->packet_size)
                                  + asf->packet_seq * asf->packet_size;
                ret = load_packet(opaque, pb);
                if (ret < 0 || url_feof(pb))
                    return AVERROR_IO;

                ret = asf_get_packet(s, asf, pb);
                if (ret < 0 || url_feof(pb))
                    return AVERROR_IO;
            }
            asf->packet_time_start = 0;
            continue;
        }





More information about the ffmpeg-devel mailing list