[FFmpeg-cvslog] r14956 - trunk/libavformat/matroskadec.c

Måns Rullgård mans
Tue Aug 26 10:26:10 CEST 2008


"Robert Swain" <robert.swain at gmail.com> writes:

> 2008/8/26 Aurelien Jacobs <aurel at gnuage.org>:
>> Robert Swain wrote:
>>
>>> 2008/8/26 Mike Melanson <mike at multimedia.cx>:
>>> > Michael Niedermayer wrote:
>>> >> On Mon, Aug 25, 2008 at 11:51:52PM +0200, Aurelien Jacobs wrote:
>>> >>> Mike Melanson wrote:
>>> >>>
>>> >>>> aurel wrote:
>>> >>>>> Author: aurel
>>> >>>>> Date: Mon Aug 25 01:57:29 2008
>>> >>>>> New Revision: 14956
>>> >>>>>
>>> >>>>> Log:
>>> >>>>> matroskadec: don't try to seek to negative timestamp
>>> >>>>> matroska timestamps are unsigned
>>> >>>>>
>>> >>>>>
>>> >>>>> Modified:
>>> >>>>>    trunk/libavformat/matroskadec.c
>>> >>>> Either 14955 or 14956 (both from you) broke the seek regressions.
>>> >>>> Representative breakage:
>>> >>> Grmlll... I was pretty sure those commits would change the seek
>>> >>> regressions. But when I wanted to test this, I just couldn't
>>> >>> run the test due to the wma regression (floating-point issue IIRC,
>>> >>> running on amd64).
>>> >>
>>> >> Which commit did cause this wma regression test failure?
>>
>> I thought it was due to r14698 as this was already discussed, and I
>> got the same md5. But in fact it's not.
>>
>>> > One of the 3 revisions following 14755 broke WMA. According to an
>>> > earlier email, it was 14758, specifically.
>>
>> Almost. It's in fact r14757 which broke reg tests for me.
>
> Before r14757:
>
> /* init MDCT windows : simple sinus window */
> for(i = 0; i < s->nb_block_sizes; i++) {
>     int n, j;
>     float alpha;
>     n = 1 << (s->frame_len_bits - i);
>     window = av_malloc(sizeof(float) * n);
>     alpha = M_PI / (2.0 * n);

This is a floating-point division...

>     for(j=0;j<n;j++) { window[j] = sin((j + 0.5) * alpha);

and a floating-point multiplication.

>     }
>     s->windows[i] = window;
> }
>
> Now:
>
> // Generate a sine window.
> void ff_sine_window_init(float *window, int n) {
>     int i;
>     for(i = 0; i < n; i++)
>         window[i] = sin((i + 0.5) / (2 * n) * M_PI);

Here only 2 * n is an integer product; since i + 0.5 is a double, the
division itself is floating-point, and it now happens before the
multiplication by M_PI, the reverse of the old ordering.

> }
>
> Why does this work fine with some compilers and not with others? Why
> does it work on some archs but not on others with the same compiler?

On x86-32, gcc defaults to x87 floating-point instructions, which keep
intermediate values in registers at 80-bit extended precision.  On
x86-64, the default is SSE, which rounds every operation to 64-bit
double precision.  PPC likewise rounds each operation to 64 bits.  On
top of that, the two versions apply the operations in a different
order, so the intermediate results are rounded differently even at
the same precision.

-- 
Måns Rullgård
mans at mansr.com




More information about the ffmpeg-cvslog mailing list