[FFmpeg-devel] [PATCH] avfilter: add audio surround upmixer

Michael Niedermayer michael at niedermayer.cc
Fri May 26 22:55:38 EEST 2017

On Fri, May 26, 2017 at 06:54:33PM +0200, Paul B Mahol wrote:
> On 5/26/17, Michael Niedermayer <michael at niedermayer.cc> wrote:
> > On Fri, May 26, 2017 at 01:11:38PM +0200, Paul B Mahol wrote:
> >> On 5/26/17, Nicolas George <george at nsup.org> wrote:
> >> > On septidi, 7 Prairial, CCXXV, Paul B Mahol wrote:
> >> >> > This belongs in libswresample
> >> >> No it does not.
> >> >
> >> > I think it does too.
> >>
> >> You want to link libswresample with libavcodec?
> >
> > While this question was directed at Nicolas ...
> >
> > I don't think audio upmix code should depend on a lib of encoders
> > and decoders (libavcodec), no matter if the upmix code would be in
> > libavfilter or libswresample.
> >
> > I believe a temporary dependency would be OK, if there is intent to
> > factor/move things to remove the dependency in the future.
> >
> > But IMO libavcodec is the wrong place to export generic FFT
> > functionality.
> > We need FFTs in codecs, we need them in filters, we need them in
> > swresample (the soxr resampler we support also uses an FFT internally)
> >
> > Also, moving FFT to a different lib should be quite easy compared to
> > other ugly dependencies we have (as in snow motion estimation, which
> > is not as easy to move; nonetheless, none of these ugly dependencies
> > should be there except temporarily)

> This code does upmixing, and there could be myriad variants of upmixing.

This is true for any format conversion, or more generically for
anything; there are always many ways to do something.

The way FFmpeg is modularized, we have one lib for audio format
conversion, resampling, and rematrixing, including upmixing and
downmixing, ...

> Having it in libswresample is flawed design.
> So I will not do the transitions.
> If you still object to leaving it as it is, in lavfi, you will need
> to take care of the necessary changes yourself.

Do you agree that we need code and an API that do audio format
conversion, among that upmixing and downmixing? Something that is used
by default.

If you agree on the need for such code, why would it be flawed design
to add an improved upmixing implementation there so it gets used?
(Be that by default, by the user's choice of what to use by default, or
 by specific choice for a specific instance.)

I want the best and most correct code to be used.
I don't want to object or demand anything. I believe, though, that
putting upmixing code in two different places and two different libs
will give us headaches in the future.

Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Many things microsoft did are stupid, but not doing something just because
microsoft did it is even more stupid. If everything ms did were stupid they
would be bankrupt already.