[FFmpeg-devel] Fw: [foms] Paper submissions to LCA

Frank Barchard fbarchard
Fri Jul 17 03:46:43 CEST 2009


On Thu, Jul 16, 2009 at 5:43 PM, David Conrad <lessen42 at gmail.com> wrote:


> I've been doing benchmarks of ffmpeg's Theora decoder, and libtheora is
> only significantly (~30%) faster in two situations, for different reasons:
>

cool.  30% is acceptable.  Decoding is only about 30% of the overall
time, so we're talking 30% of 30%: roughly 10% overall.  Most of the time
goes to rendering, which is done in software.
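Back of the envelope (illustrative numbers, not measurements):

#include <stdio.h>

/* If decode is ~30% of total runtime and the decoder gets ~30% faster,
 * the overall saving is roughly the product of the two fractions. */
int main(void)
{
    double decode_share  = 0.30;  /* decode's share of total runtime */
    double decode_saving = 0.30;  /* fractional decode-time saving   */
    printf("overall saving: ~%.0f%%\n",
           decode_share * decode_saving * 100.0);
    return 0;
}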

>
> 1: HD (720p and up). This isn't particularly important for the internet
> since the bitrate needed for Theora at these resolutions is too high to be
> feasible. This will be solved by not sorting coefficients into raster order
> when they're decoded and instead doing MC/IDCT in Hilbert order.
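If I follow, the reconstruction loop would look roughly like this.  A
sketch only; every name below is a hypothetical stand-in, not Theora's
actual internals:

#include <stdint.h>

typedef struct { int16_t coeffs[64]; } Block;   /* hypothetical block type */

static void decode_block_coeffs(Block *b) { (void)b; /* entropy-decode one block */ }
static void idct_block(Block *b)          { (void)b; /* inverse transform        */ }
static void motion_compensate(Block *b)   { (void)b; /* add the predictor        */ }

/* Visit blocks in coding (Hilbert) order and finish each one right away,
 * instead of buffering coefficients for a separate raster-order pass. */
static void reconstruct_frame(Block *blocks, const int *hilbert_order, int n)
{
    for (int i = 0; i < n; i++) {
        Block *blk = &blocks[hilbert_order[i]];
        decode_block_coeffs(blk);
        idct_block(blk);
        motion_compensate(blk);
    }
}

That should save a pass over the coefficient buffer and keep each block
hot in cache.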


Render performance is obviously sensitive to resolution, although in full
screen mode it's mainly the display resolution that matters.
But decode performance is more directly proportional to bitrate.
For internet use with MP4, I aim at 2 megabits/s for 720p for benchmarking
purposes.
Ogg quality/bitrate varies more.  You can either use the same bitrate and
accept the quality loss, or try to match the quality (PSNR); the matching
bitrate varies, but is probably around 3 megabits/s.
The internet is the same speed regardless of the codec you choose, so it's
probably best to aim at the same bitrate.
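For the quality-matching route, PSNR is the usual yardstick.  A minimal
sketch of the textbook formula, not taken from any particular tool:

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* PSNR between two 8-bit planes: 10 * log10(255^2 / MSE). */
static double psnr_8bit(const uint8_t *a, const uint8_t *b, size_t n)
{
    double sse = 0.0;
    for (size_t i = 0; i < n; i++) {
        double d = (double)a[i] - (double)b[i];
        sse += d * d;
    }
    if (sse == 0.0)
        return INFINITY;  /* identical planes: PSNR is unbounded */
    return 10.0 * log10(255.0 * 255.0 * (double)n / sse);
}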
ffmpeg's Ogg Theora encoder is pretty good at hitting a target bitrate.
The Xiph encoder, so far, is not.


> 2: On the ARM11...

Apparently Chrome will run on ARM (
http://googleblog.blogspot.com/2009/07/introducing-google-chrome-os.html) so
it's good to know it might work there.
Intel claims Atom machines can decode 1080p H.264, but that's using the GPU.


