[FFmpeg-devel] [WIP/RFC]012V decoder
Reimar.Doeffinger at gmx.de
Wed Jul 11 22:13:37 CEST 2012
On Tue, Jul 10, 2012 at 08:57:45PM +0200, Carl Eugen Hoyos wrote:
> I originally wanted to implement a12v, but there is no
> PIX_FMT_YUVA422P16 (or PIX_FMT_YUVA422P10) and I have no sample with
> actual transparency to test.
> The more difficult problem that attached patch has is that the
> rightmost pixels are missing: There are remaining bytes in every
> line that contain (afaict) the 8 least significant bits of the
> missing samples, but I couldn't find the msb's: The unused two bits
> of every four-byte sample are always 0.
Can't say much about that without more information, for example what
exactly the width is and what those additional bytes look like.
> + const uint32_t *src = (const uint32_t *)avpkt->data;
You're casting this back and forth between uint32_t and uint8_t,
but I don't really see any reason not to leave it uint8_t.
You already use AV_RL32 anyway, so you just need to increment
by 4 instead of by one (and doing the increment inside the macro
argument is dangerous anyway, since the macro may evaluate its
argument more than once, leading to undefined behaviour).
Of course using the bytestream functions would avoid the need for any
of this.
> + int stride = avctx->width * 8 / 3;
I don't think this can be right (and it can actually cause overreads):
if width were 1 it would claim a stride of 2 bytes, but the code will
always read at least 4 bytes.
On the other hand, getting the stride wrong should give rather obvious
artefacts, so how sure are you this is correct?
> + y = (uint16_t *)pic->data + line * pic->linesize / 2;
While it should not matter in practice due to alignment, this is
IMHO a bit strange since it would round odd values of linesize down.
I would rather go for
> + y = (uint16_t *)(pic->data + line * pic->linesize);
(as said, it does not really matter: if linesize ever were odd, that
would be a bug that would cause crashes on some architectures anyway)