[FFmpeg-devel] [PATCH 9/9] ffplay: add -af option

Stefano Sabatini stefasab at gmail.com
Sun Feb 3 16:30:11 CET 2013


On date Tuesday 2012-06-26 01:03:52 +0200, Marton Balint encoded:
> 
> 
> On Mon, 25 Jun 2012, Stefano Sabatini wrote:
> 
> [...]
> 
> >>>+    layouts[0] = avctx->channel_layout;
> >>>+
> >>>+    snprintf(abuffer_args, sizeof(abuffer_args), "sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
> >>>+             avctx->sample_rate,
> >>>+             av_get_sample_fmt_name(avctx->sample_fmt),
> >>>+             avctx->channel_layout);
> >>>+    ret = avfilter_graph_create_filter(&filt_asrc,
> >>>+                                       avfilter_get_by_name("abuffer"), "ffplay_abuffer",
> >>>+                                       abuffer_args, NULL, is->agraph);
> >>>+    if (ret < 0) goto fail;
> >>>+
> >>>+    abuffersink_params->sample_fmts     = sample_fmts;
> >>>+    abuffersink_params->channel_layouts = layouts;
> >>
> >
> >>Before audio_open is called, using these is fine for determining
> >>the normal output of the audio filters. But once audio_open has
> >>been called, you should enforce the settings stored in
> >>VideoState->audio_tgt.
> >
> >So what about calling audio_open() and *then* configuring the
> >filtergraph? This should be simpler than:
> >
> >configure_filters()
> >audio_open()
> >configure_filters() again
> 
> 
> The only problem is that audio_open has to be given some channel
> count and sample rate to request. Ideally you request the number of
> channels and the sample rate the filter would normally output, but
> you can only determine those by configuring the filters once.
> 
> That is why I proposed configuring the filters before audio_open to
> determine the filter's preferred output channel count and sample
> rate, requesting those from audio_open, and, if audio_open cannot
> honor them, reconfiguring the filters with the actual output settings
> audio_open has fallen back to.

Updated work in progress.

I'm not sure how to handle configuration in audio_open(). Right now
I'm configuring the filtergraph first, *then* opening the audio device.
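
Concretely, once avfilter_graph_config() has succeeded I read the
negotiated output parameters from the sink and hand them to
audio_open(). A rough, untested sketch (audio_open()'s signature is
simplified here, and the av_buffersink_get_*() accessors just stand
for whatever way we end up querying the output link):

#include "libavfilter/buffersink.h"

/* untested sketch: query the configured graph's output format and open
 * the audio device with it; filt_asink is the abuffersink context,
 * audio_open() is ffplay's device-opening helper (signature simplified
 * for the example) */
static int open_audio_from_graph(VideoState *is, AVFilterContext *filt_asink)
{
    int     sample_rate    = av_buffersink_get_sample_rate(filt_asink);
    int     nb_channels    = av_buffersink_get_channels(filt_asink);
    int64_t channel_layout = av_buffersink_get_channel_layout(filt_asink);

    /* request from SDL what the filtergraph would naturally produce;
     * whatever the device actually accepts ends up in is->audio_tgt */
    return audio_open(is, channel_layout, nb_channels, sample_rate,
                      &is->audio_tgt);
}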

To minimize changes, in audio_decode_frame() (which should probably be
renamed to something like audio_process_frame()) I pass is->frame to
the filtergraph and, if I manage to pull a frame out of it, store the
result back into is->frame. From there the data is handled the usual
way.
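
In code, the push/pull step looks more or less like this (untested
sketch; in_audio_filter / out_audio_filter would be the abuffer and
abuffersink contexts stored in VideoState at configuration time):

#include "libavfilter/buffersrc.h"
#include "libavfilter/buffersink.h"

/* untested sketch of the filtering step in audio_decode_frame(): feed
 * the decoded frame to the source buffer, then try to pull a filtered
 * frame back into is->frame; returns <0 on error, 0 if the graph needs
 * more input, 1 if is->frame now holds filtered audio */
static int filter_audio_frame(VideoState *is)
{
    int ret;

    ret = av_buffersrc_add_frame(is->in_audio_filter, is->frame);
    if (ret < 0)
        return ret;

    ret = av_buffersink_get_frame(is->out_audio_filter, is->frame);
    if (ret == AVERROR(EAGAIN))
        return 0;   /* no output yet, decode more input */
    if (ret < 0)
        return ret; /* real error (or AVERROR_EOF on flush) */
    return 1;
}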

If the device target configuration differs from the output of the
filtergraph, the normalization code in audio_decode_frame() converts
the data to the target configuration (possibly not very efficient).
Alternatively: we configure the filters, open the device, then
*reconfigure* the filters to take the device configuration into
account, which virtually removes the need for the output
normalization.
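
For the reconfiguration variant, after audio_open() has filled
is->audio_tgt the graph would be built again with the sink pinned to
exactly the device parameters, so libavfilter inserts the conversion
itself and the normalization step becomes a no-op. An untested sketch
of pinning a freshly created (not yet configured) abuffersink, using
the AudioParams field names from ffplay:

#include "libavutil/opt.h"

/* untested sketch: constrain the sink to the device parameters
 * negotiated by audio_open(), before avfilter_graph_config() is run
 * again on the rebuilt graph */
static int force_sink_format(AVFilterContext *filt_asink,
                             const struct AudioParams *tgt)
{
    const enum AVSampleFormat sample_fmts[] = { tgt->fmt, AV_SAMPLE_FMT_NONE };
    const int64_t channel_layouts[]         = { tgt->channel_layout, -1 };
    const int     sample_rates[]            = { tgt->freq, -1 };
    int ret;

    if ((ret = av_opt_set_int_list(filt_asink, "sample_fmts", sample_fmts,
                                   AV_SAMPLE_FMT_NONE, AV_OPT_SEARCH_CHILDREN)) < 0)
        return ret;
    if ((ret = av_opt_set_int_list(filt_asink, "channel_layouts", channel_layouts,
                                   -1, AV_OPT_SEARCH_CHILDREN)) < 0)
        return ret;
    return av_opt_set_int_list(filt_asink, "sample_rates", sample_rates,
                               -1, AV_OPT_SEARCH_CHILDREN);
}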

The current design doesn't handle the case where the input format
changes; that would require either reconfiguring the filtergraph or
adding another normalization layer between the decoder and the
filtergraph.
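
For the input change case, a minimal check could look like this
(untested sketch; audio_filter_src would be a new struct AudioParams
field in VideoState remembering what the abuffer source was created
with, and configure_audio_filters() a hypothetical helper that frees
and rebuilds is->agraph):

#include "libavutil/frame.h"

/* untested sketch: detect a change in the decoded audio parameters and
 * rebuild the filtergraph from the new source format */
static int maybe_reconfigure_audio_filters(VideoState *is, const AVFrame *frame)
{
    int changed =
        frame->sample_rate    != is->audio_filter_src.freq ||
        frame->format         != is->audio_filter_src.fmt  ||
        frame->channel_layout != is->audio_filter_src.channel_layout;

    if (!changed)
        return 0;

    is->audio_filter_src.freq           = frame->sample_rate;
    is->audio_filter_src.fmt            = frame->format;
    is->audio_filter_src.channel_layout = frame->channel_layout;

    return configure_audio_filters(is);
}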

A long-term solution would be to implement reconfiguration at the
filtering library level, and/or port/extend -filter_complex from
ffmpeg.c, but for the moment I prefer a minimal working solution which
can be delivered in a short time.
-- 
FFmpeg = Fierce and Fundamental Maxi Prodigious Emblematic Guru
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0005-ffplay-add-af-option.patch
Type: text/x-diff
Size: 7857 bytes
Desc: not available
URL: <http://ffmpeg.org/pipermail/ffmpeg-devel/attachments/20130203/7a35f31a/attachment.bin>

