[Libav-user] Help needed on using the ffmpeg libraries in an application
Bearak, Arnold - 0999 - MITLL
abearak at ll.mit.edu
Mon Apr 25 16:54:28 CEST 2011
I am trying to figure out how to use the ffmpeg libraries inside an application.
I have used the ffmpeg binary directly on a set of pgm files to generate an mp4 file using the following command:
ffmpeg -r 1 -b 18000 -i frame_%05d.pgm -vcodec libx264 -vpre slow foobar.mp4
The source data used to create the pgm files is jp2 files.
My goal is to be able to do RTSP streaming of buffers created from the jp2 files to a remote server running ffserver.
To start though, I am trying to create an mp4 file from a list of jp2 files without having to go through the step of creating the pgm files. That is, I just want to encode the data from a series of buffers created from the jp2 files.
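To make the decode side concrete, here is a minimal sketch of what I have in mind for turning one jp2 buffer into an AVFrame with libavcodec directly. The function and parameter names (decode_jp2_buffer, jp2_buf, jp2_size) are mine, error handling is omitted, and I am assuming the current libavcodec API:

```c
/* Sketch: decode one JPEG-2000 image from an in-memory buffer into an
 * AVFrame using libavcodec.  jp2_buf/jp2_size are hypothetical names for
 * the buffer produced from one jp2 file.  Assumes av_register_all() (or
 * avcodec_register_all()) has already been called once at startup. */
#include <libavcodec/avcodec.h>

AVFrame *decode_jp2_buffer(const uint8_t *jp2_buf, int jp2_size)
{
    AVCodec        *dec   = avcodec_find_decoder(CODEC_ID_JPEG2000);
    AVCodecContext *ctx   = avcodec_alloc_context();
    AVFrame        *frame = avcodec_alloc_frame();
    AVPacket        pkt;
    int             got_picture = 0;

    avcodec_open(ctx, dec);

    /* Wrap the raw jp2 bytes in a packet; the decoder parses them itself. */
    av_init_packet(&pkt);
    pkt.data = (uint8_t *)jp2_buf;
    pkt.size = jp2_size;

    /* got_picture becomes nonzero once a complete frame was decoded. */
    avcodec_decode_video2(ctx, frame, &got_picture, &pkt);

    return got_picture ? frame : NULL;
}
```

The returned AVFrame could then be converted (e.g. with libswscale) to the pixel format libx264 expects and handed to the encoder, instead of going through pgm files on disk.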
I have spent a fair amount of time looking at the ffmpeg.c code, instrumenting it and running it under gdb. I have found that, when it creates the mp4 file, it writes out a 48-byte header, then writes each of the frames, and finally writes a trailer of over 1000 bytes.
By including pieces of ffmpeg.c inside my application, I have been able to write out the header, perform all of the encoding, write out the frames, and write out most of the trailer. However, a number of the tags inside the trailer are missing. I am quite sure those tags are missing because my AVFormatContext structure is not set up correctly, in particular the data that resides in the priv_data area. I have not been able to figure out how the ffmpeg binary actually fills in that area when it generates its AVFormatContext object.
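For reference, here is how I believe the header/frames/trailer sequence I observed maps onto libavformat calls. This is only a sketch under my assumptions (names and the fixed frame size are mine, error handling omitted); in particular, my understanding is that the mp4 muxer allocates and fills priv_data itself inside av_write_header(), and the trailer tags (the moov box) come from av_write_trailer(), so the AVFormatContext should be built through the API rather than by copying fields over:

```c
#include <libavformat/avformat.h>

/* Sketch: set up an mp4 muxing context and write header/frames/trailer.
 * The encoding loop is elided; this shows only the container side. */
void mux_to_mp4(void)
{
    AVFormatContext *oc;
    AVOutputFormat  *fmt;
    AVStream        *st;

    av_register_all();

    fmt = av_guess_format(NULL, "foobar.mp4", NULL);  /* selects the mp4 muxer */
    oc  = avformat_alloc_context();
    oc->oformat = fmt;
    snprintf(oc->filename, sizeof(oc->filename), "foobar.mp4");

    st = av_new_stream(oc, 0);
    st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codec->codec_id   = CODEC_ID_H264;
    st->codec->width      = 640;                 /* assumed frame size */
    st->codec->height     = 480;
    st->codec->time_base  = (AVRational){1, 1};  /* matches -r 1 */

    /* url_fopen()/URL_WRONLY on older builds */
    avio_open(&oc->pb, "foobar.mp4", AVIO_FLAG_WRITE);

    av_write_header(oc);   /* the small file header; muxer fills priv_data here */

    /* ... av_interleaved_write_frame(oc, &pkt) for each encoded frame ... */

    av_write_trailer(oc);  /* the 1000+ byte "trailer" (the mp4 moov box) */
    avio_close(oc->pb);
}
```

If that is right, the missing trailer tags would be explained by av_write_header() never having run against a properly initialized mp4 muxer in my code.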
So I am looking for some help on how to do this. The libavcodec/api-example.c does not provide the needed info on how to do this type of encoding. Are there other examples that might help me?