[Ffmpeg-devel] upsampling of subsampled video data

Rich Felker dalias
Mon Sep 11 05:39:04 CEST 2006


On Sun, Sep 10, 2006 at 10:25:25PM +0200, Attila Kinali wrote:
> On Sun, 10 Sep 2006 12:17:41 +0200
> Michael Niedermayer <michaelni at gmx.at> wrote:
> 
> > well, let's see, first here's a list of the common formats, with ideal sample
> > positions shown below too
> > progressive 4:2:0 (mpeg1)
> > Y Y Y Y
> >  C   C
> > Y Y Y Y
> > 
> > Y Y Y Y
> >  C   C
> > Y Y Y Y
> 
> Does this mean that the C samples are in between the Y
> samples and thus have to be interpolated in the vertical
> direction too before converting to RGB?
> (apart from the missing sample in every second line)

Yes.
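
To make the vertical part concrete, here's a minimal sketch of that
interpolation for the progressive 4:2:0 case above: a plain bilinear
blend with 3:1 weights, matching the MPEG-1 "centered" siting where
each stored chroma row sits halfway between two luma rows. Function
and parameter names are made up for illustration; real scalers use
longer filters than this.

#include <stdint.h>

/* Upsample one chroma plane vertically from dst_height/2 to
 * dst_height rows.  Each output row blends the two nearest source
 * rows with 3:1 weights; the border row is repeated at the edges. */
static void chroma_upsample_vert(const uint8_t *src, int src_stride,
                                 uint8_t *dst, int dst_stride,
                                 int width, int dst_height)
{
    int src_height = dst_height / 2;

    for (int y = 0; y < dst_height; y++) {
        int up, w;                  /* upper source row, its weight/4 */
        if (y & 1) { up = (y - 1) / 2; w = 3; }  /* closer to row above */
        else       { up = y / 2 - 1;   w = 1; }  /* closer to row below */

        /* clamp at the top and bottom edges */
        int c0 = up < 0 ? 0 : up;
        int c1 = up + 1 > src_height - 1 ? src_height - 1 : up + 1;

        const uint8_t *a = src + c0 * src_stride;
        const uint8_t *b = src + c1 * src_stride;
        uint8_t       *d = dst + y  * dst_stride;

        for (int x = 0; x < width; x++)
            d[x] = (w * a[x] + (4 - w) * b[x] + 2) >> 2;
    }
}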

> > interlaced 4:2:0 (mpeg2/mpeg4)
> > field A     field B
> > Y Y Y Y
> > C   C
> >             Y Y Y Y
> > 
> > Y Y Y Y
> >             C   C
> >             Y Y Y Y
> > note: it's very important that only luma and chroma samples from the same field
> >       are used in building RGB values, or pretty ugly artifacts appear
> 
> Hmpf... which means that a flag is needed to
> specify interlaced content, which will make the hardware
> even more complex.

No, it doesn't need to be treated specially because you simply cannot
present interlaced content on a progressive display without
deinterlacing it. Either you'll double the stride when passing it to
the video hardware and adjust the offsets for a simple 'bob
deinterlace' effect, or you'll do advanced deinterlacing in software
first. There's no harm in scaling interlaced content incorrectly if
you're scaling it in hardware, since it's _already_ going to be
displayed horribly wrong. :)
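
A minimal sketch of that stride trick, assuming a hypothetical
display_frame() overlay entry point (the names are made up; real
driver interfaces differ):

#include <stdint.h>

struct plane { const uint8_t *data; int stride; };

/* Hypothetical overlay hook, standing in for the real driver call. */
void display_frame(struct plane y, struct plane u, struct plane v,
                   int width, int height);

/* Present one field of an interleaved 4:2:0 frame as a half-height
 * picture: offset the base pointers into the chosen field and double
 * every stride, so luma and chroma always come from the same field,
 * and let the hardware scaler stretch it back to full height. */
static void bob_present(const uint8_t *y, int y_stride,
                        const uint8_t *u, const uint8_t *v, int c_stride,
                        int width, int height, int bottom)
{
    struct plane py = { y + (bottom ? y_stride : 0), 2 * y_stride };
    struct plane pu = { u + (bottom ? c_stride : 0), 2 * c_stride };
    struct plane pv = { v + (bottom ? c_stride : 0), 2 * c_stride };

    display_frame(py, pu, pv, width, height / 2);
}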

> > the following codecs depend on 4:1:1 support (or a sw converter to make 4:2:2
> [...]
> > the following codecs depend on 4:1:0 support (or a sw converter to make 4:2:0
> > out of their 4:1:0)
> 
> From the list of codecs that use these two, I guess that
> these are special cases that can be left out if there is
> not enough space (which there isn't)
> 
> BTW: while we are at it, how important is it to have a
> YUV->RGB converter that supports different standards?
> I.e., IIRC there are three different coefficient sets for
> YUV<->RGB conversion from different standards groups. Is it
> important to have a programmable converter to select one of
> these, or would one constant set be enough without too much
> quality loss for the others?

Normal video cards only implement one of them AFAIK, or at least the
drivers only support one. Still, I doubt it matters much which one you
support...
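
FWIW, the three sets differ only in the Kr/Kb luma weights (BT.601,
BT.709 and SMPTE 240M), so a "programmable" converter is just a
parameterized matrix. A rough sketch in normalized floating point,
with range scaling omitted:

/* Kr/Kb pairs for the three common standards. */
typedef struct { double kr, kb; } yuv_coeffs;

static const yuv_coeffs bt601     = { 0.299,  0.114  };
static const yuv_coeffs bt709     = { 0.2126, 0.0722 };
static const yuv_coeffs smpte240m = { 0.212,  0.087  };

/* Y in [0,1], Cb/Cr in [-0.5,0.5].  Derived from the definitions
 * Y = Kr*R + Kg*G + Kb*B, Cr = (R-Y)/(2*(1-Kr)), Cb = (B-Y)/(2*(1-Kb)). */
static void yuv2rgb(yuv_coeffs c, double y, double cb, double cr,
                    double *r, double *g, double *b)
{
    double kg = 1.0 - c.kr - c.kb;

    *r = y + 2.0 * (1.0 - c.kr) * cr;
    *b = y + 2.0 * (1.0 - c.kb) * cb;
    *g = (y - c.kr * *r - c.kb * *b) / kg;
}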

Also, it's ideal for the YUV->RGB conversion to be done at the DAC
level, not into a quantized RGB framebuffer. The reason is that the
quantization levels for YUV are different from those for RGB, and
requantizing the Y as RGB will introduce ugly banding. I dunno if
existing cards work around this or not, but it's yet another reason
why YUV should be done as an overlay and not with the blitter.
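
As a back-of-the-envelope check, assuming the usual 16-235 studio
range for Y: expanding 220 input codes across 256 output codes is
injective, so 36 RGB codes can never be produced, and those gaps show
up as contouring in smooth gradients. A tiny standalone demo:

#include <stdio.h>

int main(void)
{
    int used[256] = {0};

    /* expand studio-range Y (16..235) to full-range 0..255 */
    for (int y = 16; y <= 235; y++)
        used[((y - 16) * 255 + 219 / 2) / 219] = 1;

    int skipped = 0;
    for (int i = 0; i < 256; i++)
        skipped += !used[i];

    printf("RGB codes never produced: %d of 256\n", skipped);  /* 36 */
    return 0;
}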

Rich
