[FFmpeg-devel] [PATCH] Optimization of AMR NB and WB decoders for MIPS

Babic, Nedeljko nbabic at mips.com
Wed May 23 18:54:51 CEST 2012


Hi Vitor,

> av_always_inline in a .c file? I don't think it can work...
You are right. I'll change this.
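
For reference, a minimal sketch of the distinction (hypothetical names, not
from the patch): av_always_inline is only meaningful together with static
linkage, e.g. in a header, while an externally visible definition in a .c
file should be a plain function:

    /* in a _mips.h header: inlined into every includer */
    static av_always_inline float scale_mips(float x)
    {
        return 2.0f * x; /* placeholder body */
    }

    /* in a .c file: ordinary external definition, no inline attribute */
    float ff_scale_mips(float x)
    {
        return scale_mips(x);
    }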

> Here and in several other places: in general, it is a good idea to do
> your loops also in ASM. That way, you can be sure that the compiler
> cannot slow down an otherwise good code.
Good point. However, compilers generally do a good job optimizing this
kind of code for MIPS. In any case, loops are left in C only in places where
the compiler can't do much damage, or where it would be too costly to do the
loops in ASM (for example, because of the constraint on the number of named
registers that can be used in inline assembly).

> If you do such unrolling, please say in the documentation of the
> function (in acelp_filter.h) that n should be a multiple of eight or
> any other thing that it requires. Also, please note that this code is
> not only used for AMR, but also for other vocoders. Can you check if
> the other uses respect the multiple-of-eight constraint?
We checked that unrolling can be done (with respect to the multiple-of-eight
constraint) in all the places where this function is called. The same goes for
all the functions where we did loop unrolling.
A comment will be added to the documentation of the function. Should I specify
that the comment applies only to the MIPS optimizations?

> Hmm, am I missing something or you are doing all this to save a single
> pre-calculation of (1 + lsp[lp_order - 1])?
You are correct. We decided to optimize this function (based on our profiling
results), and this was the only meaningful thing to do with our instruction set.
We got some speedup on this function and decided to leave it in the code, taking
into account the document optimization.txt (section: When is an optimization
justified?).
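
To spell out the equivalence, the single madd.d computes, in C terms:

    /* what "madd.d %[paf], %[paf], %[paf], %[lsp_lp_o]" computes */
    paf = paf * lsp_lp_o + paf;   /* == paf * (1.0 + lsp_lp_o) */

so the 1 + lsp[lp_order - 1] pre-calculation is folded into the multiply-add.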

> > \ No newline at end of file
> Please add one.
Will be done

> Can you please apply to your ASM version the change made in
> 37ddd3833219fa7b913fff3f5cccc6878b047e6b? Invalid reads are bad.
Will be done

> You mean outer loop?
Again, you are right.

> > +static av_always_inline float ff_dot_productf(const float* a, const float* b,
> > +                                              int length)
> > +{
> > +    float sum;
> > +
> > +    __asm__ __volatile__ (
> > +        "mtc1   $zero,      %[sum]                      \n\t"
> > +        "blez   %[length],  ff_dot_productf_end%=       \n\t"
> > +        "ff_dot_productf_madd%=:                        \n\t"
> > +        "lwc1   $f2,        0(%[a])                     \n\t"
> > +        "lwc1   $f1,        0(%[b])                     \n\t"
> > +        "addiu  %[length],  -1                          \n\t"
> > +        "addiu  %[a],       %[a],   4                   \n\t"
> > +        "addiu  %[b],       %[b],   4                   \n\t"
> > +        "madd.s %[sum],     %[sum], $f1, $f2            \n\t"
> > +        "bnez   %[length],  ff_dot_productf_madd%=      \n\t"
> > +        "ff_dot_productf_end%=:                         \n\t"
> > +
> > +        : [sum] "=&f" (sum), [a] "+r" (a), [b] "+r" (b),
> > +          [length] "+r" (length)
> > +        :
> > +        : "$f1", "$f2"
> > +    );
> > +    return sum;
> > +}
> 
> I know almost nothing about mips, so I wonder why not set a register
> to reg=(a + length) before the loop, replace "bnez   %[length],
> ff_dot_productf_madd%=" by "bne %[a], %[reg], ff_dot_productf_madd%="
> and remove the "addiu %[length], -1". Naively, that would make one
> instruction less in the main loop...
Good suggestion, but the architecture we optimized this for has a
dual-issue pipeline and can execute memory-access and ALU operations in
parallel. So we wouldn't gain anything by removing
"addiu  %[length],  -1" from the loop, and we would have two additional
instructions before the loop.
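
For illustration, here is a sketch of the suggested variant (register names
are hypothetical): the end pointer needs two extra setup instructions, while
the "addiu  %[length],  -1" it replaces already dual-issues with the loads:

    # setup, before the loop (the two additional instructions):
    sll    $t0,    %[length], 2                 # length * sizeof(float)
    addu   %[end], %[a],      $t0               # end = a + length
    ...
    # loop tail, instead of "bnez %[length], ff_dot_productf_madd%=":
    bne    %[a],   %[end],    ff_dot_productf_madd%=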


Thanks,
Nedeljko
________________________________________
From: Vitor Sessak [vitor1001 at gmail.com]
Sent: Wednesday, May 23, 2012 8:44
To: FFmpeg development discussions and patches
Cc: Babic, Nedeljko; Lukac, Zeljko
Subject: Re: [FFmpeg-devel] [PATCH] Optimization of AMR NB and WB decoders for MIPS

Hi!

On 05/18/2012 03:47 PM, Nedeljko Babic wrote:
> AMR NB and WB decoders are optimized for MIPS architecture.
> Appropriate Makefiles are changed accordingly.

Just a few comments:

> diff --git a/libavcodec/mips/acelp_filters_mips.c b/libavcodec/mips/acelp_filters_mips.c
> new file mode 100644
> index 0000000..d219898
> --- /dev/null
> +++ b/libavcodec/mips/acelp_filters_mips.c
> @@ -0,0 +1,211 @@
> + /*
> + * Copyright (c) 2012
> + *      MIPS Technologies, Inc., California.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its
> + *    contributors may be used to endorse or promote products derived from
> + *    this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + *
> + * Author:  Nedeljko Babic (nbabic at mips.com)
> + *
> + * various filters for ACELP-based codecs optimized for MIPS
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * Reference: libavcodec/acelp_filters.c
> + */
> +#include "config.h"
> +
> +#if HAVE_MIPSFPU && HAVE_INLINE_ASM
> +
> +#include "libavutil/attributes.h"
> +#include "libavcodec/acelp_filters.h"
> +
> +av_always_inline void ff_acelp_interpolatef(float *out, const float *in,
> +                           const float *filter_coeffs, int precision,
> +                           int frac_pos, int filter_length, int length)

av_always_inline in a .c file? I don't think it can work...

> +{
> +    int n, i;
> +    int prec = precision * 4;
> +    int fc_offset = precision - frac_pos;
> +    float in_val_p, in_val_m, fc_val_p, fc_val_m;
> +
> +    for (n = 0; n < length; n++) {
> +        /**
> +        * four pointers are defined in order to minimize number of
> +        * computations done in inner loop
> +        */
> +        const float *p_in_p = &in[n];
> +        const float *p_in_m = &in[n-1];
> +        const float *p_filter_coeffs_p = &filter_coeffs[frac_pos];
> +        const float *p_filter_coeffs_m = filter_coeffs + fc_offset;
> +        float v = 0;
> +
> +        for (i = 0; i < filter_length; i++) {
> +            __asm__ __volatile__ (
> +                "lwc1   %[in_val_p],           0(%[p_in_p])                    \n\t"
> +                "lwc1   %[fc_val_p],           0(%[p_filter_coeffs_p])         \n\t"
> +                "lwc1   %[in_val_m],           0(%[p_in_m])                    \n\t"
> +                "lwc1   %[fc_val_m],           0(%[p_filter_coeffs_m])         \n\t"
> +                "addiu  %[p_in_p],             %[p_in_p],              4       \n\t"
> +                "madd.s %[v],%[v],             %[in_val_p],%[fc_val_p]         \n\t"
> +                "addiu  %[p_in_m],             %[p_in_m],              -4      \n\t"
> +                "addu   %[p_filter_coeffs_p],  %[p_filter_coeffs_p],   %[prec] \n\t"
> +                "addu   %[p_filter_coeffs_m],  %[p_filter_coeffs_m],   %[prec] \n\t"
> +                "madd.s %[v],%[v],%[in_val_m], %[fc_val_m]                     \n\t"
> +
> +                : [v] "=&f" (v),[p_in_p] "+r" (p_in_p), [p_in_m] "+r" (p_in_m),
> +                  [p_filter_coeffs_p] "+r" (p_filter_coeffs_p),
> +                  [in_val_p] "=&f" (in_val_p), [in_val_m] "=&f" (in_val_m),
> +                  [fc_val_p] "=&f" (fc_val_p), [fc_val_m] "=&f" (fc_val_m),
> +                  [p_filter_coeffs_m] "+r" (p_filter_coeffs_m)
> +                : [prec] "r" (prec)
> +            );
> +        }
> +        out[n] = v;
> +    }
> +}

Here and in several other places: in general, it is a good idea to do
your loops also in ASM. That way, you can be sure that the compiler
cannot slow down an otherwise good code.

> +av_always_inline void ff_acelp_apply_order_2_transfer_function(float *out, const float *in,
> +                                              const float zero_coeffs[2],
> +                                              const float pole_coeffs[2],
> +                                              float gain, float mem[2], int n)
> +{
> +    /**
> +    * loop is unrolled eight times
> +    */

If you do such unrolling, please say in the documentation of the
function (in acelp_filter.h) that n should be a multiple of eight or
any other thing that it requires. Also, please note that this code is
not only used for AMR, but also for other vocoders. Can you check if
the other uses respect the multiple-of-eight constraint?

> +    __asm__ __volatile__ (
> +        "lwc1   $f0,    0(%[mem])                                              \n\t"
> +        "blez   %[n],   ff_acelp_apply_order_2_transfer_function_end%=         \n\t"
> +        "lwc1   $f1,    4(%[mem])                                              \n\t"
> +        "lwc1   $f2,    0(%[pole_coeffs])                                      \n\t"
> +        "lwc1   $f3,    4(%[pole_coeffs])                                      \n\t"
> +        "lwc1   $f4,    0(%[zero_coeffs])                                      \n\t"
> +        "lwc1   $f5,    4(%[zero_coeffs])                                      \n\t"
> +
> +        "ff_acelp_apply_order_2_transfer_function_madd%=:                      \n\t"
> +
> +        "lwc1   $f6,    0(%[in])                                               \n\t"
> +        "mul.s  $f9,    $f3,      $f1                                          \n\t"
> +        "mul.s  $f7,    $f2,      $f0                                          \n\t"
> +        "msub.s $f7,    $f7,      %[gain], $f6                                 \n\t"
> +        "sub.s  $f7,    $f7,      $f9                                          \n\t"
> +        "madd.s $f8,    $f7,      $f4,     $f0                                 \n\t"
> +        "madd.s $f8,    $f8,      $f5,     $f1                                 \n\t"
> +        "lwc1   $f11,   4(%[in])                                               \n\t"
> +        "mul.s  $f12,   $f3,      $f0                                          \n\t"
> +        "mul.s  $f13,   $f2,      $f7                                          \n\t"
> +        "msub.s $f13,   $f13,     %[gain], $f11                                \n\t"
> +        "sub.s  $f13,   $f13,     $f12                                         \n\t"
> +        "madd.s $f14,   $f13,     $f4,     $f7                                 \n\t"
> +        "madd.s $f14,   $f14,     $f5,     $f0                                 \n\t"
> +        "swc1   $f8,    0(%[out])                                              \n\t"
> +        "lwc1   $f6,    8(%[in])                                               \n\t"
> +        "mul.s  $f9,    $f3,      $f7                                          \n\t"
> +        "mul.s  $f15,   $f2,      $f13                                         \n\t"
> +        "msub.s $f15,   $f15,     %[gain], $f6                                 \n\t"
> +        "sub.s  $f15,   $f15,     $f9                                          \n\t"
> +        "madd.s $f8,    $f15,     $f4,     $f13                                \n\t"
> +        "madd.s $f8,    $f8,      $f5,     $f7                                 \n\t"
> +        "swc1   $f14,   4(%[out])                                              \n\t"
> +        "lwc1   $f11,   12(%[in])                                              \n\t"
> +        "mul.s  $f12,   $f3,      $f13                                         \n\t"
> +        "mul.s  $f16,   $f2,      $f15                                         \n\t"
> +        "msub.s $f16,   $f16,     %[gain], $f11                                \n\t"
> +        "sub.s  $f16,   $f16,     $f12                                         \n\t"
> +        "madd.s $f14,   $f16,     $f4,     $f15                                \n\t"
> +        "madd.s $f14,   $f14,     $f5,     $f13                                \n\t"
> +        "swc1   $f8,    8(%[out])                                              \n\t"
> +        "lwc1   $f6,    16(%[in])                                              \n\t"
> +        "mul.s  $f9,    $f3,      $f15                                         \n\t"
> +        "mul.s  $f7,    $f2,      $f16                                         \n\t"
> +        "msub.s $f7,    $f7,      %[gain], $f6                                 \n\t"
> +        "sub.s  $f7,    $f7,      $f9                                          \n\t"
> +        "madd.s $f8,    $f7,      $f4,     $f16                                \n\t"
> +        "madd.s $f8,    $f8,      $f5,     $f15                                \n\t"
> +        "swc1   $f14,   12(%[out])                                             \n\t"
> +        "lwc1   $f11,   20(%[in])                                              \n\t"
> +        "mul.s  $f12,   $f3,      $f16                                         \n\t"
> +        "mul.s  $f13,   $f2,      $f7                                          \n\t"
> +        "msub.s $f13,   $f13,     %[gain], $f11                                \n\t"
> +        "sub.s  $f13,   $f13,     $f12                                         \n\t"
> +        "madd.s $f14,   $f13,     $f4,     $f7                                 \n\t"
> +        "madd.s $f14,   $f14,     $f5,     $f16                                \n\t"
> +        "swc1   $f8,    16(%[out])                                             \n\t"
> +        "lwc1   $f6,    24(%[in])                                              \n\t"
> +        "mul.s  $f9,    $f3,      $f7                                          \n\t"
> +        "mul.s  $f15,   $f2,      $f13                                         \n\t"
> +        "msub.s $f15,   $f15,     %[gain], $f6                                 \n\t"
> +        "sub.s  $f15,   $f15,     $f9                                          \n\t"
> +        "madd.s $f8,    $f15,     $f4,     $f13                                \n\t"
> +        "madd.s $f8,    $f8,      $f5,     $f7                                 \n\t"
> +        "swc1   $f14,   20(%[out])                                             \n\t"
> +        "lwc1   $f11,   28(%[in])                                              \n\t"
> +        "mul.s  $f12,   $f3,      $f13                                         \n\t"
> +        "mul.s  $f16,   $f2,      $f15                                         \n\t"
> +        "msub.s $f16,   $f16,     %[gain], $f11                                \n\t"
> +        "sub.s  $f16,   $f16,     $f12                                         \n\t"
> +        "madd.s $f14,   $f16,     $f4,     $f15                                \n\t"
> +        "madd.s $f14,   $f14,     $f5,     $f13                                \n\t"
> +        "swc1   $f8,    24(%[out])                                             \n\t"
> +        "addiu  %[out], 32                                                     \n\t"
> +        "addiu  %[in],  32                                                     \n\t"
> +        "addiu  %[n],   -8                                                     \n\t"
> +        "swc1   $f15,   4(%[mem])                                              \n\t"
> +        "mov.s  $f1,    $f15                                                   \n\t"
> +        "mov.s  $f0,    $f16                                                   \n\t"
> +        "swc1   $f16,   0(%[mem])                                              \n\t"
> +        "swc1   $f14,   -4(%[out])                                             \n\t"
> +        "bnez   %[n],   ff_acelp_apply_order_2_transfer_function_madd%=        \n\t"
> +
> +        "ff_acelp_apply_order_2_transfer_function_end%=:                       \n\t"
> +
> +         : [out] "+r" (out),
> +           [in] "+r" (in), [gain] "+f" (gain),
> +           [n] "+r" (n), [mem] "+r" (mem)
> +         : [zero_coeffs] "r" (zero_coeffs),
> +           [pole_coeffs] "r" (pole_coeffs)
> +         : "$f0", "$f1", "$f2", "$f3", "$f4", "$f5",
> +           "$f6", "$f7",  "$f8", "$f9", "$f10", "$f11",
> +           "$f12", "$f13", "$f14", "$f15", "$f16"
> +    );
> +}
> +#endif /* HAVE_MIPSFPU && HAVE_INLINE_ASM */
> diff --git a/libavcodec/mips/acelp_vectors_mips.h b/libavcodec/mips/acelp_vectors_mips.h
> new file mode 100644
> index 0000000..b47a695
> --- /dev/null
> +++ b/libavcodec/mips/acelp_vectors_mips.h
> @@ -0,0 +1,93 @@
> +
> +/**
> + * @file
> + * Reference: libavcodec/acelp_vectors.c
> + */
> +
> +#ifndef AVCODEC_ACELP_VECTORS_MIPS_H
> +#define AVCODEC_ACELP_VECTORS_MIPS_H
> +static av_always_inline void ff_weighted_vector_sumf(
> +                             float *out, const float *in_a, const float *in_b,
> +                             float weight_coeff_a, float weight_coeff_b, int length)
> +{
> +    /* loop unrolled two times */

Again, unrolling is fine if you check all usages and document the assumptions.

> +    __asm__ __volatile__ (
> +        "blez   %[length], ff_weighted_vector_sumf_end%=                \n\t"
> +
> +        "ff_weighted_vector_sumf_madd%=:                                \n\t"
> +        "lwc1   $f0,       0(%[in_a])                                   \n\t"
> +        "lwc1   $f3,       4(%[in_a])                                   \n\t"
> +        "lwc1   $f1,       0(%[in_b])                                   \n\t"
> +        "lwc1   $f4,       4(%[in_b])                                   \n\t"
> +        "mul.s  $f2,       %[weight_coeff_a], $f0                       \n\t"
> +        "mul.s  $f5,       %[weight_coeff_a], $f3                       \n\t"
> +        "madd.s $f2,       $f2,               %[weight_coeff_b], $f1    \n\t"
> +        "madd.s $f5,       $f5,               %[weight_coeff_b], $f4    \n\t"
> +        "addiu  %[length], -2                                           \n\t"
> +        "addiu  %[in_a],   8                                            \n\t"
> +        "addiu  %[in_b],   8                                            \n\t"
> +        "swc1   $f2,       0(%[out])                                    \n\t"
> +        "swc1   $f5,       4(%[out])                                    \n\t"
> +        "addiu  %[out],    8                                            \n\t"
> +        "bnez   %[length], ff_weighted_vector_sumf_madd%=               \n\t"
> +
> +        "ff_weighted_vector_sumf_end%=:                                 \n\t"
> +
> +        : [out] "+r" (out),
> +          [in_a] "+r" (in_a),   [in_b] "+r" (in_b),
> +          [length] "+r" (length)
> +        : [weight_coeff_a] "f" (weight_coeff_a),
> +          [weight_coeff_b] "f" (weight_coeff_b)
> +        : "$f0", "$f1", "$f2", "$f3", "$f4", "$f5"
> +    );
> +}
> +#endif /* AVCODEC_ACELP_VECTORS_MIPS_H */
> diff --git a/libavcodec/mips/amrwb_lsp2lpc.h b/libavcodec/mips/amrwb_lsp2lpc.h
> new file mode 100644
> index 0000000..e0303bc
> --- /dev/null
> +++ b/libavcodec/mips/amrwb_lsp2lpc.h
> @@ -0,0 +1,97 @@
> +/*
> + * Copyright (c) 2012
> + *      MIPS Technologies, Inc., California.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its
> + *    contributors may be used to endorse or promote products derived from
> + *    this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + *
> + * Author:  Nedeljko Babic (nbabic at mips.com)
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * Reference: libavcodec/lsp.c
> + */
> +
> +#ifndef AVCODEC_AMRWB_LSP2LPC_H
> +#define AVCODEC_AMRWB_LSP2LPC_H
> +av_always_inline void ff_amrwb_lsp2lpc(const double *lsp, float *lp, int lp_order)
> +{
> +    int lp_half_order = lp_order >> 1;
> +    double buf[lp_half_order + 1];
> +    double pa[lp_half_order + 1];
> +    double *qa = buf + 1;
> +    double lsp_lp_o = lsp[lp_order - 1];
> +    int i,j;
> +    double paf, qaf;
> +
> +    qa[-1] = 0.0;
> +
> +    ff_lsp2polyf(lsp    , pa, lp_half_order    );
> +    ff_lsp2polyf(lsp + 1, qa, lp_half_order - 1);
> +
> +    for (i = 1, j = lp_order - 1; i < lp_half_order; i++, j--) {
> +        paf =  pa[i];
> +        qaf = (qa[i] - qa[i-2]) * (1 - lsp_lp_o);
> +
> +        __asm__ __volatile__ (
> +            "madd.d %[paf], %[paf], %[paf], %[lsp_lp_o] \n\t"
> +
> +            : [paf]"+f"(paf)
> +            : [lsp_lp_o]"f"(lsp_lp_o)
> +        );
> +        lp[i-1]  = (paf + qaf) * 0.5;
> +        lp[j-1]  = (paf - qaf) * 0.5;
> +    }
> +
> +    paf = pa[lp_half_order] * 0.5;
> +
> +    __asm__ __volatile__ (
> +        "madd.d %[paf], %[paf], %[paf], %[lsp_lp_o]     \n\t"
> +
> +        : [paf]"+f"(paf)
> +        : [lsp_lp_o]"f"(lsp_lp_o)
> +    );
> +
> +    lp[lp_half_order - 1] = paf;
> +
> +    lp[lp_order - 1] = lsp_lp_o;
> +}

Hmm, am I missing something or you are doing all this to save a single
pre-calculation of (1 + lsp[lp_order - 1])?

> +#endif /* AVCODEC_AMRWB_LSP2LPC_H */

> \ No newline at end of file

Please add one.

> diff --git a/libavcodec/mips/celp_filters_mips.c b/libavcodec/mips/celp_filters_mips.c
> new file mode 100644
> index 0000000..a01bfde
> --- /dev/null
> +++ b/libavcodec/mips/celp_filters_mips.c
> @@ -0,0 +1,289 @@
> +/*
> + * Copyright (c) 2012
> + *      MIPS Technologies, Inc., California.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its
> + *    contributors may be used to endorse or promote products derived from
> + *    this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + *
> + * Author:  Nedeljko Babic (nbabic at mips.com)
> + *
> + * various filters for CELP-based codecs optimized for MIPS
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * Reference: libavcodec/celp_filters.c
> + */
> +#include "config.h"
> +
> +#if HAVE_MIPSFPU && HAVE_INLINE_ASM
> +
> +#include "libavutil/attributes.h"
> +#include "libavutil/common.h"
> +#include "libavcodec/celp_filters.h"
> +
> +av_always_inline void ff_celp_lp_synthesis_filterf(float *out,
> +                                  const float *filter_coeffs,
> +                                  const float* in, int buffer_length,
> +                                  int filter_length)
> +{
> +    int i,n;
> +
> +    float out0, out1, out2, out3;
> +    float old_out0, old_out1, old_out2, old_out3;
> +    float a,b,c;
> +    const float *p_filter_coeffs;
> +    float *p_out;
> +
> +    a = filter_coeffs[0];
> +    b = filter_coeffs[1];
> +    c = filter_coeffs[2];
> +    b -= filter_coeffs[0] * filter_coeffs[0];
> +    c -= filter_coeffs[1] * filter_coeffs[0];
> +    c -= filter_coeffs[0] * b;
> +
> +    old_out0 = out[-4];
> +    old_out1 = out[-3];
> +    old_out2 = out[-2];
> +    old_out3 = out[-1];
> +    for (n = 0; n <= buffer_length - 4; n += 4) {
> +        p_filter_coeffs = filter_coeffs;
> +        p_out = out;
> +
> +        out0 = in[0];
> +        out1 = in[1];
> +        out2 = in[2];
> +        out3 = in[3];
> +
> +        __asm__ __volatile__(
> +            "lwc1       $f2,     8(%[filter_coeffs])                        \n\t"
> +            "lwc1       $f1,     4(%[filter_coeffs])                        \n\t"
> +            "lwc1       $f0,     0(%[filter_coeffs])                        \n\t"
> +            "nmsub.s    %[out0], %[out0],             $f2, %[old_out1]      \n\t"
> +            "nmsub.s    %[out1], %[out1],             $f2, %[old_out2]      \n\t"
> +            "nmsub.s    %[out2], %[out2],             $f2, %[old_out3]      \n\t"
> +            "lwc1       $f3,     12(%[filter_coeffs])                       \n\t"
> +            "nmsub.s    %[out0], %[out0],             $f1, %[old_out2]      \n\t"
> +            "nmsub.s    %[out1], %[out1],             $f1, %[old_out3]      \n\t"
> +            "nmsub.s    %[out2], %[out2],             $f3, %[old_out2]      \n\t"
> +            "nmsub.s    %[out0], %[out0],             $f0, %[old_out3]      \n\t"
> +            "nmsub.s    %[out3], %[out3],             $f3, %[old_out3]      \n\t"
> +            "nmsub.s    %[out1], %[out1],             $f3, %[old_out1]      \n\t"
> +            "nmsub.s    %[out0], %[out0],             $f3, %[old_out0]      \n\t"
> +
> +            : [out0]"+f"(out0), [out1]"+f"(out1),
> +              [out2]"+f"(out2), [out3]"+f"(out3)
> +            : [old_out0]"f"(old_out0), [old_out1]"f"(old_out1),
> +              [old_out2]"f"(old_out2), [old_out3]"f"(old_out3),
> +              [filter_coeffs]"r"(filter_coeffs)
> +            : "$f0", "$f1", "$f2", "$f3", "$f4"
> +        );
> +
> +        old_out3 = out[-5];

Can you please apply to your ASM version the change made in
37ddd3833219fa7b913fff3f5cccc6878b047e6b? Invalid reads are bad.

> +        for (i = 5; i <= filter_length; i += 2) {
> +            __asm__ __volatile__(
> +                "lwc1    $f5,         16(%[p_filter_coeffs])                \n\t"
> +                "addiu   %[p_out],    -8                                    \n\t"
> +                "addiu   %[p_filter_coeffs], 8                              \n\t"
> +                "nmsub.s %[out1],     %[out1],      $f5, %[old_out0]        \n\t"
> +                "nmsub.s %[out3],     %[out3],      $f5, %[old_out2]        \n\t"
> +                "lwc1    $f4,         12(%[p_filter_coeffs])                \n\t"
> +                "lwc1    %[old_out2], -16(%[p_out])                         \n\t"
> +                "nmsub.s %[out0],     %[out0],      $f5, %[old_out3]        \n\t"
> +                "nmsub.s %[out2],     %[out2],      $f5, %[old_out1]        \n\t"
> +                "nmsub.s %[out1],     %[out1],      $f4, %[old_out3]        \n\t"
> +                "nmsub.s %[out3],     %[out3],      $f4, %[old_out1]        \n\t"
> +                "mov.s   %[old_out1], %[old_out3]                           \n\t"
> +                "lwc1    %[old_out3], -20(%[p_out])                         \n\t"
> +                "nmsub.s %[out0],     %[out0],      $f4, %[old_out2]        \n\t"
> +                "nmsub.s %[out2],     %[out2],      $f4, %[old_out0]        \n\t"
> +
> +                : [out0]"+f"(out0), [out1]"+f"(out1),
> +                  [out2]"+f"(out2), [out3]"+f"(out3), [old_out0]"+f"(old_out0),
> +                  [old_out1]"+f"(old_out1), [old_out2]"+f"(old_out2),
> +                  [old_out3]"+f"(old_out3),[p_filter_coeffs]"+r"(p_filter_coeffs),
> +                  [p_out]"+r"(p_out)
> +                :
> +                : "$f4", "$f5"
> +            );
> +            FFSWAP(float, old_out0, old_out2);
> +        }
> +
> +        __asm__ __volatile__(
> +            "nmsub.s    %[out3], %[out3], %[a], %[out2]                     \n\t"
> +            "nmsub.s    %[out2], %[out2], %[a], %[out1]                     \n\t"
> +            "nmsub.s    %[out3], %[out3], %[b], %[out1]                     \n\t"
> +            "nmsub.s    %[out1], %[out1], %[a], %[out0]                     \n\t"
> +            "nmsub.s    %[out2], %[out2], %[b], %[out0]                     \n\t"
> +            "nmsub.s    %[out3], %[out3], %[c], %[out0]                     \n\t"
> +
> +            : [out0]"+f"(out0), [out1]"+f"(out1),
> +              [out2]"+f"(out2), [out3]"+f"(out3)
> +            : [a]"f"(a), [b]"f"(b), [c]"f"(c)
> +        );
> +
> +        out[0] = out0;
> +        out[1] = out1;
> +        out[2] = out2;
> +        out[3] = out3;
> +
> +        old_out0 = out0;
> +        old_out1 = out1;
> +        old_out2 = out2;
> +        old_out3 = out3;
> +
> +        out += 4;
> +        in  += 4;
> +    }
> +
> +    out -= n;
> +    in -= n;
> +    for (; n < buffer_length; n++) {
> +        float out_val, out_val_i, fc_val;
> +        p_filter_coeffs = filter_coeffs;
> +        p_out = &out[n];
> +        out_val = in[n];
> +        for (i = 1; i <= filter_length; i++) {
> +            __asm__ __volatile__(
> +                "lwc1    %[fc_val],          0(%[p_filter_coeffs])                        \n\t"
> +                "lwc1    %[out_val_i],       -4(%[p_out])                                 \n\t"
> +                "addiu   %[p_filter_coeffs], 4                                            \n\t"
> +                "addiu   %[p_out],           -4                                           \n\t"
> +                "nmsub.s %[out_val],         %[out_val],          %[fc_val], %[out_val_i] \n\t"
> +
> +                : [fc_val]"=&f"(fc_val), [out_val]"+f"(out_val),
> +                  [out_val_i]"=&f"(out_val_i), [p_out]"+r"(p_out),
> +                  [p_filter_coeffs]"+r"(p_filter_coeffs)
> +            );
> +        }
> +        out[n] = out_val;
> +    }
> +}
> +
> +av_always_inline void ff_celp_lp_zero_synthesis_filterf(float *out,
> +                                       const float *filter_coeffs,
> +                                       const float *in, int buffer_length,
> +                                       int filter_length)
> +{
> +    int i,n;
> +    float sum_out8, sum_out7, sum_out6, sum_out5, sum_out4, fc_val;
> +    float sum_out3, sum_out2, sum_out1;
> +    const float *p_filter_coeffs, *p_in;
> +
> +    for (n = 0; n < buffer_length; n += 8) {
> +        p_in = &in[n];
> +        p_filter_coeffs = filter_coeffs;
> +        sum_out8 = in[n+7];
> +        sum_out7 = in[n+6];
> +        sum_out6 = in[n+5];
> +        sum_out5 = in[n+4];
> +        sum_out4 = in[n+3];
> +        sum_out3 = in[n+2];
> +        sum_out2 = in[n+1];
> +        sum_out1 = in[n];
> +        i = filter_length;
> +
> +        /* i is always greater than 0
> +        * inner loop is unrolled eight times so there is less memory access
> +        */

You mean outer loop?

> +        __asm__ __volatile__(
> +            "filt_lp_inner%=:                                               \n\t"
> +            "lwc1   %[fc_val],   0(%[p_filter_coeffs])                      \n\t"
> +            "lwc1   $f7,         6*4(%[p_in])                               \n\t"
> +            "lwc1   $f6,         5*4(%[p_in])                               \n\t"
> +            "lwc1   $f5,         4*4(%[p_in])                               \n\t"
> +            "lwc1   $f4,         3*4(%[p_in])                               \n\t"
> +            "lwc1   $f3,         2*4(%[p_in])                               \n\t"
> +            "lwc1   $f2,         4(%[p_in])                                 \n\t"
> +            "lwc1   $f1,         0(%[p_in])                                 \n\t"
> +            "lwc1   $f0,         -4(%[p_in])                                \n\t"
> +            "addiu  %[i],        -2                                         \n\t"
> +            "madd.s %[sum_out8], %[sum_out8],          %[fc_val], $f7       \n\t"
> +            "madd.s %[sum_out7], %[sum_out7],          %[fc_val], $f6       \n\t"
> +            "madd.s %[sum_out6], %[sum_out6],          %[fc_val], $f5       \n\t"
> +            "madd.s %[sum_out5], %[sum_out5],          %[fc_val], $f4       \n\t"
> +            "madd.s %[sum_out4], %[sum_out4],          %[fc_val], $f3       \n\t"
> +            "madd.s %[sum_out3], %[sum_out3],          %[fc_val], $f2       \n\t"
> +            "madd.s %[sum_out2], %[sum_out2],          %[fc_val], $f1       \n\t"
> +            "madd.s %[sum_out1], %[sum_out1],          %[fc_val], $f0       \n\t"
> +            "lwc1   %[fc_val],   4(%[p_filter_coeffs])                      \n\t"
> +            "mov.s  $f7,         $f6                                        \n\t"
> +            "mov.s  $f6,         $f5                                        \n\t"
> +            "mov.s  $f5,         $f4                                        \n\t"
> +            "mov.s  $f4,         $f3                                        \n\t"
> +            "mov.s  $f3,         $f2                                        \n\t"
> +            "mov.s  $f2,         $f1                                        \n\t"
> +            "mov.s  $f1,         $f0                                        \n\t"
> +            "lwc1   $f0,         -8(%[p_in])                                \n\t"
> +            "addiu  %[p_filter_coeffs], 8                                   \n\t"
> +            "addiu  %[p_in],     -8                                         \n\t"
> +            "madd.s %[sum_out8], %[sum_out8],          %[fc_val], $f7       \n\t"
> +            "madd.s %[sum_out7], %[sum_out7],          %[fc_val], $f6       \n\t"
> +            "madd.s %[sum_out6], %[sum_out6],          %[fc_val], $f5       \n\t"
> +            "madd.s %[sum_out5], %[sum_out5],          %[fc_val], $f4       \n\t"
> +            "madd.s %[sum_out4], %[sum_out4],          %[fc_val], $f3       \n\t"
> +            "madd.s %[sum_out3], %[sum_out3],          %[fc_val], $f2       \n\t"
> +            "madd.s %[sum_out2], %[sum_out2],          %[fc_val], $f1       \n\t"
> +            "madd.s %[sum_out1], %[sum_out1],          %[fc_val], $f0       \n\t"
> +            "bgtz   %[i],        filt_lp_inner%=                            \n\t"
> +
> +            : [sum_out8]"+f"(sum_out8), [sum_out7]"+f"(sum_out7),
> +              [sum_out6]"+f"(sum_out6), [sum_out5]"+f"(sum_out5),
> +              [sum_out4]"+f"(sum_out4), [sum_out3]"+f"(sum_out3),
> +              [sum_out2]"+f"(sum_out2), [sum_out1]"+f"(sum_out1),
> +              [fc_val]"=&f"(fc_val), [p_filter_coeffs]"+r"(p_filter_coeffs),
> +              [p_in]"+r"(p_in), [i]"+r"(i)
> +            :
> +            : "$f0", "$f1", "$f2", "$f3", "$f4", "$f5", "$f6", "$f7"
> +        );
> +
> +        out[n+7] = sum_out8;
> +        out[n+6] = sum_out7;
> +        out[n+5] = sum_out6;
> +        out[n+4] = sum_out5;
> +        out[n+3] = sum_out4;
> +        out[n+2] = sum_out3;
> +        out[n+1] = sum_out2;
> +        out[n] = sum_out1;
> +    }
> +}
> +
> +#endif /* HAVE_MIPSFPU && HAVE_INLINE_ASM */
> diff --git a/libavcodec/mips/celp_math_mips.h b/libavcodec/mips/celp_math_mips.h
> new file mode 100644
> index 0000000..c3c373f
> --- /dev/null
> +++ b/libavcodec/mips/celp_math_mips.h
> @@ -0,0 +1,82 @@
> +/*
> + * Copyright (c) 2012
> + *      MIPS Technologies, Inc., California.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its
> + *    contributors may be used to endorse or promote products derived from
> + *    this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + *
> + * Author:  Nedeljko Babic (nbabic at mips.com)
> + *
> + * Math operations optimized for MIPS
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * Reference: libavcodec/celp_math.c
> + */
> +
> +#ifndef AVCODEC_CELP_MATH_MIPS_H
> +#define AVCODEC_CELP_MATH_MIPS_H

> +static av_always_inline float ff_dot_productf(const float* a, const float* b,
> +                                              int length)
> +{
> +    float sum;
> +
> +    __asm__ __volatile__ (
> +        "mtc1   $zero,      %[sum]                      \n\t"
> +        "blez   %[length],  ff_dot_productf_end%=       \n\t"
> +        "ff_dot_productf_madd%=:                        \n\t"
> +        "lwc1   $f2,        0(%[a])                     \n\t"
> +        "lwc1   $f1,        0(%[b])                     \n\t"
> +        "addiu  %[length],  -1                          \n\t"
> +        "addiu  %[a],       %[a],   4                   \n\t"
> +        "addiu  %[b],       %[b],   4                   \n\t"
> +        "madd.s %[sum],     %[sum], $f1, $f2            \n\t"
> +        "bnez   %[length],  ff_dot_productf_madd%=      \n\t"
> +        "ff_dot_productf_end%=:                         \n\t"
> +
> +        : [sum] "=&f" (sum), [a] "+r" (a), [b] "+r" (b),
> +          [length] "+r" (length)
> +        :
> +        : "$f1", "$f2"
> +    );
> +    return sum;
> +}

I know almost nothing about mips, so I wonder why not set a register
to reg=(a + length) before the loop, replace "bnez   %[length],
ff_dot_productf_madd%=" by "bne %[a], %[reg], ff_dot_productf_madd%="
and remove the "addiu %[length], -1". Naively, that would make one
instruction less in the main loop...

-Vitor

