[FFmpeg-devel] [libav-devel] [RFC] M-bit to N-bit YUV conversions
michaelni at gmx.at
Thu Aug 25 06:34:03 CEST 2011
Hi Oskar, Ronald
On Wed, Aug 24, 2011 at 06:55:10PM -0700, Ronald S. Bultje wrote:
> On Wed, Aug 24, 2011 at 5:18 PM, Oskar Arvidsson <oskar at irock.se> wrote:
> > There have been complaints about how ffmpeg/libav converts M-bit to N-bit YUV,
> > e.g. see http://forum.doom9.org/showthread.php?t=161915 which I've been asked
> > to look into.
> > a) What method should be used for conversion? Current method? Simple shifting?
> Neither. I believe the shifting is OK,
No, a shift is correct for 8->10 bit (strictly speaking only the
limited-range variant, not full range), but it is really wrong for, let's
say, 1->8 bit: your black-and-white source image would become black and
50% gray.
For 8->10 full range it is a compromise between losing the 3 brightest
levels vs. some slight loss of smoothness. Here I'd say we should pick
whatever is less work. If it ends up easy to do either, it could be made
an option.
> but we can do it in the same
> step as hscale(). E.g. if we have 8bit input and 14bit hscale coeffs
> (sum = 0x4000, or 1,0000,0000,0000,00b), then instead of shifting
> input, we can scale to sum = 1,0000,0001,0000,00b - i.e. a 1 every 8th
> bit, and for 10bit input it'd look like sum = 1,0000,0000,0100,00b
> instead) instead to achieve the same effect. (This doesn't solve the
> luma problem, which is completely orthogonal, and you already know how
> to fix it anyway.) I've sort-of started working towards this by
> removing the shifting in the input operations for the scaler, although
> they still exist in the unscaled case. They will be reintroduced at a
> later point by changing hscale coeffs sum as per above.
The scaling can be done for free at 3-5 points, not just 1:
there's the hscale, the vscale and the JPEG-range conversion code.
It should indeed be quite easy to do it during one of the h/v scales.
> Dither is all just because of ordered dither and I don't feel it's
> worth looking into. I've been planning to replace it with
> Atkinson-style dither (that's like f-s, which x264 uses) so let's just
> work towards that and be done with it.
I suggest a double-blind comparison on a video (not a still image).
I did one 5(?) years ago (not double blind), and FS on 8->4 bit or so
just flickered terribly compared to ordered dither.
IMHO, if one can see any difference at all between an FS-type dither and
ordered dither on 8->10, it won't be in FS's favor.
The situation may be different once it hits an encoder, and it certainly
is different for still images, where FS is better.
Also, ordered is faster, which is also something to consider when they
look indistinguishable.
> > b) Should we distinguish between limited and full range YUV?
For 8->10 it likely doesn't make a visible difference. For things
like 1->8, pure shifting does not work, and for 5->8 (as in RGB555)
I also wouldn't just shift: it would make the image ~3% darker
(31 << 3 = 248 vs. 255).
> Doesn't convertJpegRange already do that?
> Are you afraid that we
> become "out of range" when we scale between bitdepths with different
> ranges? (I admit I haven't looked much into that yet, i.e. don't know
> how to handle it yet.)
> > c) Other comments, ideas?
One comment I have is that the libav and FFmpeg swscale codebases differ;
I suggest the work be based on FFmpeg's.
And please CC ffmpeg-devel on discussions about this.
Michael

GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
Freedom in capitalist society always remains about the same as it was in
ancient Greek republics: Freedom for slave owners. -- Vladimir Lenin