[FFmpeg-devel] [PATCH 1/2] lavu/fixed_dsp: optimise R-V V fmul_reverse
Rémi Denis-Courmont
remi at remlab.net
Sun Nov 19 13:39:41 EET 2023
Gathers are (unsurprisingly) a notable exception to the rule that R-V V
gets faster with larger group multipliers. So roll the function, i.e. use
a smaller group multiplier at the cost of more loop iterations, to speed
it up.
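For context, a scalar sketch of the operation being vectorised (the
function and variable names here are illustrative, and the Q31
multiply-with-rounding is assumed to match the fixed-point DSP reference);
reading src1 backwards is what requires the index gather in the RVV code:

#include <stdint.h>

/* dst[i] = src0[i] * src1[len - 1 - i] in Q31 with rounding. */
static void fmul_reverse_fixed_ref(int *dst, const int *src0,
                                   const int *src1, int len)
{
    for (int i = 0; i < len; i++)
        dst[i] = (int)(((int64_t)src0[i] * src1[len - 1 - i] +
                        0x40000000) >> 31);
}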
Before:
vector_fmul_reverse_fixed_c: 2840.7
vector_fmul_reverse_fixed_rvv_i32: 2430.2
After:
vector_fmul_reverse_fixed_c: 2841.0
vector_fmul_reverse_fixed_rvv_i32: 962.2
It might be possible to further optimise the function by moving the
reverse-subtract out of the loop and adding ad-hoc tail handling.
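As a rough illustration of that idea (plain C standing in for the RVV
strip-mining; the names and structure are only a sketch, not the actual
assembly): build the descending index pattern once for the maximal vector
length, reuse it for every full block, and give the final partial block
its own tail code instead of re-deriving the indices on every iteration.

#include <stdint.h>

/* Hypothetical restructuring sketch: idx[] plays the role of the
 * vrsub-generated index vector and is computed only once. */
static void fmul_reverse_fixed_blocked(int *dst, const int *src0,
                                       const int *src1, int len, int vlmax)
{
    unsigned short idx[vlmax];           /* e16 indices: [VL-1, ..., 1, 0] */
    for (int j = 0; j < vlmax; j++)
        idx[j] = vlmax - 1 - j;

    int i = 0;
    for (; i + vlmax <= len; i += vlmax) /* full blocks reuse idx[] as-is */
        for (int j = 0; j < vlmax; j++)
            dst[i + j] = (int)(((int64_t)src0[i + j] *
                                src1[len - vlmax - i + idx[j]] +
                                0x40000000) >> 31);

    for (; i < len; i++)                 /* ad-hoc tail handling */
        dst[i] = (int)(((int64_t)src0[i] * src1[len - 1 - i] +
                        0x40000000) >> 31);
}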
---
libavutil/riscv/fixed_dsp_rvv.S | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/libavutil/riscv/fixed_dsp_rvv.S b/libavutil/riscv/fixed_dsp_rvv.S
index 2bece88685..46bb591352 100644
--- a/libavutil/riscv/fixed_dsp_rvv.S
+++ b/libavutil/riscv/fixed_dsp_rvv.S
@@ -127,16 +127,17 @@ endfunc
 
 func ff_vector_fmul_reverse_fixed_rvv, zve32x
         csrwi   vxrm, 0
-        vsetvli t0, zero, e16, m4, ta, ma
+        // e16/m4 and e32/m8 are possible but slow the gathers down.
+        vsetvli t0, zero, e16, m1, ta, ma
         sh2add  a2, a3, a2
         vid.v   v0
         vadd.vi v0, v0, 1
 1:
-        vsetvli t0, a3, e16, m4, ta, ma
+        vsetvli t0, a3, e16, m1, ta, ma
         slli    t1, t0, 2
         vrsub.vx v4, v0, t0 // v4[i] = [VL-1, VL-2... 1, 0]
         sub     a2, a2, t1
-        vsetvli zero, zero, e32, m8, ta, ma
+        vsetvli zero, zero, e32, m2, ta, ma
         vle32.v v8, (a2)
         sub     a3, a3, t0
         vle32.v v16, (a1)
--
2.42.0