[Ffmpeg-devel] libavcodec h264 decoder

Ivan Kalvachev ikalvachev
Sat Dec 16 14:47:14 CET 2006


2006/12/15, michael benzon chua <chibiakaii at gmail.com>:
> I seem to have overestimated my understanding of the decoding process... it
> doesn't look very similar to the encoding function used in  the x264
> encoder. Some questions:
>
> How are the matrices containing the dct coefficients represented in h264.c?
> In encoding, they were contained in 2-dimensional arrays that were
> explicitly labeled according to their size. (e.g. dct4x4, dct8x8, dct2x2,
> etc.) I cannot find any 2-dimensional arrays in decode_mb_cabac or
> decode_mb_cavlc that are labeled in such a manner.
>
> How do I distinguish between the types of macroblocks that are being
> decoded? The selection statements seem to be the ones that pick out whether
> an mb is a i4x4 luma or an i8x8 chroma, or an inter-mb... how do i tell
> which is which?

I strongly recommend you abandon your attempts to tamper with the
codec internals.

1. It makes compression worse. The encoder exploits correlations in
the coefficients so it can compress them better. Randomizing or
obfuscating them would defeat the purpose of compression.
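You can see the effect with a toy experiment. The snippet below is only illustrative: zlib stands in for the codec's real entropy coder (CAVLC/CABAC), and the coefficient values are a made-up stand-in for a quantized residual block, not real h264.c data.

```python
import random
import zlib

# Hypothetical stand-in for a block of quantized DCT coefficients:
# after quantization most high-frequency values are zero, and that
# structure is exactly what the entropy coder exploits.
coeffs = bytes([12, 7, 4, 2, 1, 1, 1] + [0] * 249)

random.seed(42)
# "Encrypting" the coefficients by XOR-ing them with a keystream
# destroys the structure before entropy coding ever sees it.
scrambled = bytes(b ^ random.randrange(256) for b in coeffs)

plain_size = len(zlib.compress(coeffs))      # small: one long zero run
scram_size = len(zlib.compress(scrambled))   # near-incompressible
print(plain_size, scram_size)
```

The scrambled block compresses to roughly the full 256 bytes plus overhead, while the original shrinks to a handful of bytes.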

2. Good cryptography assumes that the input stream is sufficiently
random. Predictable input can make the cipher much easier to break.
The entropy of the raw coefficients is much lower than that of the
resulting bitstream (it's no surprise that the final stage is called
entropy coding). This means it is much better to encrypt the resulting
bitstream, not the internal codec data.
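Encrypting after entropy coding is also trivial to bolt on, because the encoded bitstream is just bytes. A minimal sketch, with all names hypothetical: the SHA-256-in-counter-mode keystream here is only for illustration, a real design would use an established stream cipher such as AES-CTR or ChaCha20.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Illustrative keystream: SHA-256 run in counter mode.
    # Not a vetted cipher; use AES-CTR or ChaCha20 in practice.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_bitstream(bitstream: bytes, key: bytes, nonce: bytes) -> bytes:
    # XOR the finished, entropy-coded bytes with the keystream.
    # Compression is untouched because it already happened.
    ks = keystream(key, nonce, len(bitstream))
    return bytes(b ^ k for b, k in zip(bitstream, ks))

# Dummy placeholder for an entropy-coded H.264 access unit:
encoded = b"...entropy-coded bitstream bytes..."
ct = encrypt_bitstream(encoded, b"secret key", b"nonce-0")
assert encrypt_bitstream(ct, b"secret key", b"nonce-0") == encoded
```

The XOR construction is symmetric, so the same function decrypts, and the ciphertext is exactly the same size as the bitstream it replaces.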



