[FFmpeg-cvslog] r18860 - in trunk/libavcodec: ac3dec.c ac3dec_data.c ac3dec_data.h eac3dec.c

Justin Ruggles justin.ruggles
Thu May 21 03:21:11 CEST 2009

Michael Niedermayer wrote:
> On Tue, May 19, 2009 at 10:13:34PM -0400, Justin Ruggles wrote:
>> Michael Niedermayer wrote:
>>> On Tue, May 19, 2009 at 06:23:38PM -0400, Justin Ruggles wrote:
>>>> Michael Niedermayer wrote:
>>>>> On Sun, May 17, 2009 at 08:53:27AM +0200, jbr wrote:
>>>>>> Author: jbr
>>>>>> Date: Sun May 17 08:53:24 2009
>>>>>> New Revision: 18860
>>>>>> Log:
>>>>>> eac3dec: use 16-bit pre-mantissas instead of 24-bit in AHT decoding. it is
>>>>>> simpler and also fixes a bug in GAQ dequantization.
>>>>> are you sure 16bit precision is enough for that?
>>>>> what effect does this compared to double precision floats have on the
>>>>> PSNR ?
>>>> stddev:    7.95 PSNR: 78.30 bytes:  3999744/  3999744
>>> and what is that?
>>> we have raw -> encoder -> ac3 -> double fp decoder -> raw2
>>>                               -> 16bit     decoder -> raw3
>>> PSNR of raw2 - raw vs. PSNR raw3 - raw is what we care about.
>>> That would make 2 PSNR scores though, a before and a after
>>> the second part we care about is the difference to a reference decoder
>>> that should mention which reference decoder is used though
>> I can't compare to raw in this case because I don't have access to a
>> professional E-AC-3 encoder which uses the AHT quantization.  But I can
>> compare to decoded output from Nero's E-AC-3 decoder.
>> nero vs. ffac3 svn
>> stddev:  131.16 PSNR: 53.96 bytes:  3998716/  3999744
> why does the output differ so much to begin with?

That I'm really not sure of.  I don't have access to the Nero decoder
myself, other than asking madshi to decode samples for me, as I did in
this case.

I suspect Nero does dithering during the float-to-int16 conversion.
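For reference, dithering in a float-to-int16 conversion means adding sub-LSB noise before rounding so the quantization error decorrelates from the signal. This is a minimal sketch of one common scheme (TPDF dither); it illustrates the general technique only, and is not a claim about what Nero actually does:

```python
import numpy as np

def float_to_int16_tpdf(x, rng=None):
    """Convert float samples in [-1.0, 1.0] to int16 with TPDF dither.

    Triangular-PDF dither is the sum of two independent uniform noises,
    giving a peak-to-peak amplitude of 2 LSB centered on zero.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    scaled = np.asarray(x, dtype=np.float64) * 32767.0
    dither = (rng.uniform(-0.5, 0.5, np.shape(x))
              + rng.uniform(-0.5, 0.5, np.shape(x)))
    # Add the noise, round to the nearest integer, and clamp to int16 range.
    return np.clip(np.round(scaled + dither), -32768, 32767).astype(np.int16)
```

An undithered decoder rounds without the noise term, so two otherwise-identical decoders can legitimately differ by a couple of LSB per sample, which already shows up in a stddev comparison.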

Somehow Nero's output contains bandwidth above what the number of encoded
frequency bins in that E-AC-3 stream should allow.  I don't know how.

Part of it is certainly the random mantissas, but that probably doesn't
account for much.  I could test how much the random mantissas affect the
difference by crafting two AC-3 files from the same source, one with all
dither flags set to 1 and another with all dither flags set to 0.
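The stddev/PSNR figures quoted above compare two raw 16-bit PCM streams. A minimal sketch of that metric, modeled on FFmpeg's tiny_psnr test tool (the function name and framing here are my own):

```python
import numpy as np

def pcm_compare(a, b):
    """Return (stddev, psnr_db) for two int16 PCM sample streams.

    'stddev' here is the RMS of the sample differences, matching the
    label printed by FFmpeg's tiny_psnr; PSNR is measured relative to
    16-bit full scale (32767).
    """
    n = min(len(a), len(b))  # decoders may emit slightly different lengths
    diff = a[:n].astype(np.float64) - b[:n].astype(np.float64)
    stddev = np.sqrt(np.mean(diff ** 2))
    psnr = float("inf") if stddev == 0 else 20.0 * np.log10(32767.0 / stddev)
    return stddev, psnr
```

Decoding the two crafted files and comparing the outputs with a tool like this would isolate the contribution of the random mantissas from the rest of the difference.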

We do lose some precision by using 24-bit coefficients, but I tested
converting all of the precision-reducing fixed-point calculations to
floating point, and the result was no closer to Nero's output.
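For scale, the worst-case error from quantizing a normalized coefficient to signed 24-bit fixed point is half an LSB, about 2^-24 (roughly -144 dB relative to full scale), far smaller than the error level implied by the 53.96 dB PSNR against Nero. A sketch of that round-trip (illustrative only, not FFmpeg's actual code):

```python
def fixed24_roundtrip(x):
    """Quantize a coefficient in [-1.0, 1.0) to signed 24-bit fixed point
    (1 sign bit, 23 fractional bits) and convert back to float.

    The representable range is [-1.0, 1.0 - 2**-23]; rounding to the
    nearest step introduces at most half an LSB (2**-24) of error.
    """
    scale = 1 << 23
    q = max(-scale, min(scale - 1, round(x * scale)))
    return q / scale
```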

