[Libav-user] (no subject)

Eric Hsieh (Psychesnet) psychesnet at gmail.com
Thu Sep 8 09:13:29 CEST 2011


Dear Luke,

Thanks for your advice. Unfortunately, my raw data is stored in memory and I
read it directly, so I cannot use av_read_frame(). Also, I am working on an
embedded system with a memory-usage limit for apps and libraries, so we
slimmed down our ffmpeg build by removing the encoder and decoder functions,
which means I cannot use avcodec_decode_video2() either, so sad~
So, do you know how to calculate the pts values for an mpegts container?
This is a big issue for me. Please help, thanks a lot.
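
For reference, I understand that mpegts timestamps use a 90 kHz clock, so for
a constant frame rate I am trying something like the sketch below
(frame_index, fps_num and fps_den are just my own variable names, not ffmpeg
API; please correct me if the approach is wrong):

#include <stdint.h>

/* Sketch: pts for frame N of a constant-frame-rate stream, in the 90 kHz
 * units that mpegts PES timestamps use. For fps = fps_num/fps_den, each
 * frame lasts fps_den/fps_num seconds. */
int64_t mpegts_pts(int64_t frame_index, int fps_num, int fps_den)
{
    return frame_index * 90000 * (int64_t)fps_den / fps_num;
}

For example, at 25 fps frame 50 would get pts 50 * 90000 / 25 = 180000.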

Regards, Eric Hsieh, 09/08

2011/9/8 Luke Clemens <lclemens at gmail.com>

> The magic is in avcodec_encode_video() and avcodec_decode_video2(). The
> documentation for them is in the header files where they are declared, and
> there are some examples online. You set up a codec context, then pass an
> uncompressed image to avcodec_encode_video(), and it will spit out a
> compressed buffer.
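>
> Roughly like this (untested sketch; these are the 2011-era API names and
> the settings are just examples, so check avcodec.h for your build):
>
> #include <libavcodec/avcodec.h>
>
> /* One-time setup: find an encoder and open a context for it. */
> avcodec_register_all();
> AVCodec *codec = avcodec_find_encoder(CODEC_ID_MPEG4);
> AVCodecContext *ctx = avcodec_alloc_context();
> ctx->width     = 640;
> ctx->height    = 480;
> ctx->pix_fmt   = PIX_FMT_YUV420P;  /* convert BGR24 first, e.g. libswscale */
> ctx->time_base = (AVRational){1, 60};
> ctx->bit_rate  = 1000000;
> ctx->gop_size  = 12;
> ctx->max_b_frames = 0;             /* no B frames keeps latency down */
> avcodec_open(ctx, codec);
>
> /* Per frame: 'picture' is an AVFrame already filled with YUV420P data. */
> uint8_t outbuf[256 * 1024];
> int out_size = avcodec_encode_video(ctx, outbuf, sizeof(outbuf), picture);
> /* the first out_size bytes of outbuf are the compressed frame */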
>
> There are different options for sending over the network.
>
> 1) You can use the network method you have now, and just send those
> compressed packets instead of uncompressed frames. The only catch is that
> you may have to send some setup information to your client as well so it
> knows how to set up the decoder. If you always use the same format, then
> you won't have to worry about that. It's slightly more complicated if you
> use P frames, and with a 16ms-or-less budget you won't want to use B frames
> at all. The simplest codec is mjpeg, but it will use about 3x more
> bandwidth than mpeg4 or H.264.
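>
> For example, over the TCP socket you already have (sketch; send_all() is a
> hypothetical helper that loops over send() until everything is written):
>
> /* Length-prefix each compressed frame so the client can reframe the
>  * byte stream back into packets. */
> uint32_t len = htonl((uint32_t)out_size);
> send_all(sock, &len, sizeof(len));
> send_all(sock, outbuf, out_size);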
>
> 2) The other option is to use libavformat to send over the network. You
> set up an output context and then write those frames to it with
> av_write_frame(), and it will send the packets via RTP or another format
> that you specify.
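>
> In outline (untested; av_new_stream()/av_write_header() were the names at
> the time of writing, and the address is just a placeholder):
>
> #include <libavformat/avformat.h>
>
> av_register_all();
> AVFormatContext *oc = avformat_alloc_context();
> oc->oformat = av_guess_format("rtp", NULL, NULL);
> snprintf(oc->filename, sizeof(oc->filename), "rtp://10.0.0.2:5004");
>
> AVStream *st = av_new_stream(oc, 0);
> /* fill st->codec with the same parameters you gave the encoder */
>
> avio_open(&oc->pb, oc->filename, AVIO_FLAG_WRITE);
> av_write_header(oc);
>
> AVPacket pkt;
> av_init_packet(&pkt);
> pkt.data         = outbuf;   /* one compressed frame from the encoder */
> pkt.size         = out_size;
> pkt.stream_index = st->index;
> av_write_frame(oc, &pkt);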
>
> At the client end you also have two choices, which depend on the option you
> chose for the server.
>
> If you chose option 1, then you'll need to set up a decoder context,
> receive the packets you sent, and feed them to avcodec_decode_video2(). You
> might need extra information about the codec, so you may have to receive
> that as well and use it to determine how to set up the decoder context.
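>
> Along these lines (untested; recv_packet() is a hypothetical helper that
> reads one length-prefixed packet off the socket into rxbuf/rxlen):
>
> AVCodec *dec = avcodec_find_decoder(CODEC_ID_MPEG4);
> AVCodecContext *dctx = avcodec_alloc_context();
> avcodec_open(dctx, dec);
> AVFrame *picture = avcodec_alloc_frame();
>
> AVPacket pkt;
> av_init_packet(&pkt);
> pkt.data = rxbuf;
> pkt.size = rxlen;
>
> int got_picture = 0;
> avcodec_decode_video2(dctx, picture, &got_picture, &pkt);
> if (got_picture) {
>     /* picture->data[] now holds the decoded planes */
> }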
>
> If you chose option 2, then you'll set up an input format context and
> repeatedly call av_read_frame(), feeding the resulting packets
> into avcodec_decode_video2().
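>
> The loop looks roughly like this (untested; av_open_input_file() was the
> name at the time, and for RTP you'd normally point it at an SDP
> description rather than a bare URL):
>
> AVFormatContext *ic = NULL;
> av_open_input_file(&ic, "session.sdp", NULL, 0, NULL);
> av_find_stream_info(ic);
>
> AVPacket pkt;
> while (av_read_frame(ic, &pkt) >= 0) {
>     int got_picture = 0;
>     avcodec_decode_video2(dctx, picture, &got_picture, &pkt);
>     if (got_picture) {
>         /* hand the decoded frame to the display path */
>     }
>     av_free_packet(&pkt);
> }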
>
> There are a few examples and tutorials on the web if you google around,
> especially for the compression/decompression parts. Unfortunately, those
> few tutorials are all that's out there. One of the popular ones for
> decoding is here: http://dranger.com/ffmpeg/. There is one for encoding in
> libavcodec/api-example.c. And of course the source code for ffplay,
> mplayer, vlc, and other open-source players is available, but if you're a
> C++ guy like me, you'll probably find it pretty ugly.
>
> Also, make sure you leave enough space in your buffers for padding at the
> end - I didn't realize that at first and it took a long time to figure out
> why I was getting errors. The details are in the documentation for the
> encode/decode functions.
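>
> Concretely (FF_INPUT_BUFFER_PADDING_SIZE is the constant in avcodec.h):
>
> /* Buffers fed to the decoder need FF_INPUT_BUFFER_PADDING_SIZE zeroed
>  * bytes past the payload, because the bitstream reader over-reads. */
> uint8_t *rxbuf = av_malloc(rxlen + FF_INPUT_BUFFER_PADDING_SIZE);
> memset(rxbuf + rxlen, 0, FF_INPUT_BUFFER_PADDING_SIZE);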
>
> --luke
>
> On Thu, Aug 25, 2011 at 1:49 PM, <david.weber at l-3com.com> wrote:
>
>> Hey all, I have searched quite a bit, but haven't found a good answer,
>> so I thought I'd post it here.
>>
>> I have a custom video device which gives me a simple framebuffer (24-bit
>> BGR).
>>
>> I am trying to stream it elsewhere, but would like to encode/compress it
>> in realtime.
>>
>> On the client side, I would decode it and display it in an OpenGL window.
>>
>> I have it working today with uncompressed images, but it is killing the
>> network (as one would expect).
>>
>> Are there any examples for making something like this work?
>>
>> In short:
>>
>> [Custom Video Framebuffer] -> [encode] -> [network] -> [decode] ->
>> [display]
>>
>> Considerations that I have to deal with:
>>
>> 1.) Relatively realtime. I have <16ms to encode a 640x480x3 frame
>> buffer and ship it off.
>> 2.) I need a streaming-type on-the-wire format. The clients may be
>> turned on/off at any point, so I need to support this.
>>
>> The naive implementation is simply running the video frames through
>> zlib, which is shrinking the data nicely, but it's rather expensive and
>> makes me feel dirty. All of the examples I can find are reading off an
>> existing encoded file, which is not quite what I need.
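>>
>> Roughly what I'm doing now, for reference (sketch only, error handling
>> omitted; 'framebuffer' is my raw BGR buffer):
>>
>> #include <stdlib.h>
>> #include <zlib.h>
>>
>> /* Naive per-frame compression with zlib's one-shot API. */
>> uLongf comp_len = compressBound(640 * 480 * 3);
>> Bytef *comp_buf = malloc(comp_len);
>> compress2(comp_buf, &comp_len, framebuffer, 640 * 480 * 3,
>>           Z_BEST_SPEED);
>> /* send comp_len, then comp_buf, over the socket */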
>>
>> Thanks for your input
>>
>> --dw
>>
>
>
> --
> Luke Clemens
> http://clemens.bytehammer.com
>