[FFmpeg-devel] GSoC 2008: Snow
Mon Mar 24 17:53:17 CET 2008
On Mon, Mar 24, 2008 at 03:53:52PM +0100, Marco Gerards wrote:
> As some of you might know, I am considering applying for GSoC again.
> Although I cannot promise that I will choose FFmpeg again (I will
> also submit other applications, I want to be open about this), one
> task in particular drew my attention: Snow. I had a look at what has
> to be done and selected a certain subset.
> For starters, I am not sure how much can be done during one summer.
> So I am not sure if my selection is realistic. Especially because the
> tasks are design related and not limited to implementation only.
> Reading papers and understanding things requires time, at least for a
> mere mortal like me :-). Here is the selected subset; I have added
> some questions where things were not clear to me.
> - More optimal quantization
> What has to be done? Choosing other weights and determining how the SNR
> changes? Or should I look into other algorithms/papers about this and
> try them?
This point was about optimally choosing the quantized values. Simple
scalar quantization is not optimal because the wavelet transform is not
orthogonal. The wavelet transform is also not linear (due to rounding).
And the number of bits needed to store the values is not exactly considered.
My attempt to improve on this is under #if QUANTIZE2; it's still not
considering the bits used, IIRC, and of course it will only find a
local minimum ...
It is also significantly too slow. But there are some quality gains at
the high-bitrate end ...
What we would need here is to make this MUCH faster, and I don't really
know how ...
It should also use some approximation for the bits used instead of
ignoring them.
Choosing other weights / another bias is rather low priority because it
just needs a volunteer with some time to do subjective (what looks better)
tests; that person doesn't even need to know C.
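To make the point concrete, here is a toy sketch (not snow.c code; the transform, Q and the greedy search are all invented for this example): with a non-orthogonal transform, rounding each coefficient independently is not optimal, and a greedy per-coefficient search can lower the reconstruction error, at the cost of one inverse transform per trial, which is exactly why such a search is slow.

```python
# Toy example: scalar quantization vs. a greedy local search, for a
# deliberately non-orthogonal transform. Hypothetical code, not snow.c.

Q = 4  # made-up quantizer step

# A non-orthogonal 2x2 analysis transform (a shear) and its inverse.
T     = [[1.0,  0.8], [0.0, 1.0]]
T_inv = [[1.0, -0.8], [0.0, 1.0]]

def mat_vec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def distortion(x, levels):
    """Squared reconstruction error for a set of quantization levels."""
    rec = mat_vec(T_inv, [l * Q for l in levels])
    return sum((a - b) ** 2 for a, b in zip(x, rec))

def quantize_plain(x):
    """Plain scalar quantization: round each transform coefficient."""
    return [round(c / Q) for c in mat_vec(T, x)]

def quantize_greedy(x):
    """Start from plain rounding, then nudge each level by +-1 while the
    reconstruction error drops. Finds only a local minimum, and every
    trial needs a full inverse transform -- hence the speed problem."""
    levels = quantize_plain(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(levels)):
            for delta in (-1, 1):
                trial = list(levels)
                trial[i] += delta
                if distortion(x, trial) < distortion(x, levels):
                    levels = trial
                    improved = True
    return levels
```

For the input [0.0, 2.4], plain rounding picks levels with a much larger reconstruction error than the greedy search, because rounding the second coefficient "correctly" in isolation drags the first reconstructed sample far off through the shear term.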
> - Faster halfpel interpolation
> This could be straightforward. Perhaps I can reuse the code from
> Dirac, after adapting it to Snow?
In principle maybe, but this code must be as fast as possible and clean.
(it also obviously must be faster than the current snow code)
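For illustration only, here is half-pel interpolation of one row with the well-known 6-tap {1,-5,20,20,-5,1}/32 filter (the one H.264 uses; whether snow uses the same taps should be checked against snow.c, so treat the filter choice here as an assumption):

```python
# Sketch of half-pel interpolation of a single row with the classic
# 6-tap {1,-5,20,20,-5,1}/32 filter. Edge pixels are clamped.

def halfpel_row(row):
    """Return the half-sample positions between row[i] and row[i+1]."""
    def px(i):                      # clamp out-of-range indices to the edge
        return row[max(0, min(len(row) - 1, i))]
    out = []
    for i in range(len(row) - 1):
        acc = (px(i - 2) - 5 * px(i - 1) + 20 * px(i) +
               20 * px(i + 1) - 5 * px(i + 2) + px(i + 3))
        out.append(max(0, min(255, (acc + 16) >> 5)))   # round, scale, clip
    return out
```

A fast implementation would of course do this with SIMD over whole blocks, not per-pixel in a scripting language; the sketch only pins down the arithmetic.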
> - Support for 4x4 blocksize
> Perhaps I should look into variable block-size motion compensation?
> Or is this supported already?
> - 1/8th pel motion compensation
The only real problem with these 2 is that our motion estimation code
has been written with mpeg1/2/4 in mind and thus lacks support for 1/8pel
and 4x4 blocks.
I don't remember it 100%, but possibly the decoder supports 4x4 and 1/8
already (of course, if so, it's untested and there might be bugs ...)
> - Using wavelet transformed MC-ed reference frame as context for the
> entropy encoder.
> - Iterative motion estimation based on subsampled images
> With subsampling, I think of interpolation. I noticed that iterative
> motion estimation is implemented. So should this be removed from the
> TODO? Otherwise I do not understand what has to be done...
Iterative motion estimation is slow. If, instead of a 640x480 frame, you
did motion estimation on 320x240, it would be faster; this of course
would require one or more final passes at the full 640x480 resolution,
but it might be faster overall. Of course no one knows until it has been
tried.
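A minimal sketch of that idea (block size, search ranges and helper names are all invented for the example): a full SAD search on 2x-downsampled frames, then a small refinement at full resolution around the scaled-up vector.

```python
# Toy hierarchical (coarse-to-fine) motion estimation. Hypothetical
# code for illustration; real ME would use better subsampling, early
# termination, predictors, etc.

def downsample(img):
    """2x2 box downsampling with rounding."""
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1] + 2) // 4
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

def sad(cur, ref, bx, by, dx, dy, size):
    """Sum of absolute differences; out-of-frame candidates are rejected."""
    s = 0
    for y in range(size):
        for x in range(size):
            ry, rx = by + dy + y, bx + dx + x
            if not (0 <= ry < len(ref) and 0 <= rx < len(ref[0])):
                return float('inf')
            s += abs(cur[by + y][bx + x] - ref[ry][rx])
    return s

def search(cur, ref, bx, by, size, cx, cy, radius):
    """Full search in a square window centered on (cx, cy)."""
    best = (float('inf'), 0, 0)
    for dy in range(cy - radius, cy + radius + 1):
        for dx in range(cx - radius, cx + radius + 1):
            best = min(best, (sad(cur, ref, bx, by, dx, dy, size), dx, dy))
    return best[1], best[2]

def hierarchical_me(cur, ref, bx, by, size):
    # Coarse full search on the half-resolution images ...
    cdx, cdy = search(downsample(cur), downsample(ref),
                      bx // 2, by // 2, size // 2, 0, 0, 4)
    # ... then a small refinement around the scaled-up coarse vector.
    return search(cur, ref, bx, by, size, 2 * cdx, 2 * cdy, 1)
```

The win is that the expensive wide search happens on a quarter of the pixels; the full-resolution pass only probes a 3x3 neighborhood.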
> - Use multiple threads
> I guess this can become as hard as you want? :-)
> - Different transforms like the Lapped Biorthogonal Transform (instead of wavelets)
> Actually, I never heard about this transform. Besides that, which
> other transforms should be studied? This sounds like a *VERY*
> interesting subtask to me, although potentially time consuming. How
> important is this to any of you?
I did read some papers (if needed I can try to find them again) that claimed
that LBT and MP (matching pursuit) are much more efficient for coding inter
frames than wavelets.
That is, they claimed, IIRC, that wavelets are only really good for intra
frames.
Of course it is just a claim in a paper, written by a human using a
particular wavelet coder (and almost certainly one designed to code images,
not inter frames), so this may or may not be true ...
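For reference, the matching pursuit core itself is simple to sketch (a generic 1-D toy with a made-up dictionary, nothing taken from any particular paper's coder): greedily pick the unit-norm atom best correlated with the residual and subtract its projection.

```python
# Toy 1-D matching pursuit. Atoms are assumed to have unit L2 norm,
# so the projection coefficient is just the inner product.

def matching_pursuit(signal, dictionary, n_atoms):
    """Return (picks, residual): picks is a list of (atom_index, coeff)."""
    residual = list(signal)
    picks = []
    for _ in range(n_atoms):
        best_i, best_c = 0, 0.0
        for i, atom in enumerate(dictionary):
            c = sum(r * a for r, a in zip(residual, atom))
            if abs(c) > abs(best_c):
                best_i, best_c = i, c
        picks.append((best_i, best_c))
        residual = [r - best_c * a
                    for r, a in zip(residual, dictionary[best_i])]
    return picks, residual
```

The coding-efficiency claim in those papers comes from the dictionary: for inter-frame residue, a dictionary tuned to typical prediction-error shapes can capture more energy per coded atom than a fixed wavelet basis.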
> - Put more comments in the code and make the code easier to understand
> Perhaps I am stupid, but I had a hard time understanding some of the
> Snow code :-). Although writing documentation is not allowed when
> working on gsoc, I consider comments part of the code.
Don't hesitate to commit comments to snow.c (ask Diego for an account if
you don't have write access already ... I never remember these things).
Especially the slice/line/buffer stuff is horribly obfuscated ...
Cleanup is very welcome ...
> Furthermore, I noticed a range coder is used. Is it worthwhile to
> test something else like arithmetic coding? From what I read,
> arithmetic coding is optimal, but slower and patented. What is the
> design criterion of Snow?
Simple, good quality/bitrate, fast and usable.
Some optional error robustness is nice as well, but it's secondary ...
An arithmetic coder == a range coder (technically), and real implementations
are very similar.
Range coders are described in old papers, for which any patent must have
expired, while arithmetic coders are described in more recent ones. AFAIK :)
If you want to try a better coder, there is one based on snow's range coder
and paq8l.cpp; its code is very similar to the range coder in snow, thus it
might be easy to adapt.
In the end, the question of a better entropy coder will mostly be one of
how much compression we gain and how much speed we lose ...
The one in snow can surely be beaten by a few % compression-wise ...
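To make the arithmetic coder / range coder equivalence concrete, here is a toy exact binary arithmetic coder. It uses Python fractions, so there is no fixed-precision renormalization or carry handling at all; a real coder like snow's byte-oriented range coder does exactly that part, and that is where the implementations differ.

```python
# Toy exact adaptive binary arithmetic coder. Illustration only: real
# range coders use fixed-precision integers with renormalization.
import math
from fractions import Fraction

def encode(bits):
    """Encode a bit sequence; returns (numerator, k) for the dyadic
    code point numerator / 2**k inside the final interval."""
    lo, hi = Fraction(0), Fraction(1)
    c0 = c1 = 1                           # adaptive counts (Laplace estimator)
    for b in bits:
        mid = lo + (hi - lo) * Fraction(c0, c0 + c1)
        if b == 0:
            hi = mid; c0 += 1
        else:
            lo = mid; c1 += 1
    k = 0
    while True:                           # shortest dyadic point in [lo, hi)
        k += 1
        num = math.ceil(lo * 2 ** k)
        if Fraction(num, 2 ** k) < hi:
            return num, k

def decode(num, k, n):
    """Decode n bits; mirrors the encoder's interval subdivision."""
    x = Fraction(num, 2 ** k)
    lo, hi = Fraction(0), Fraction(1)
    c0 = c1 = 1
    out = []
    for _ in range(n):
        mid = lo + (hi - lo) * Fraction(c0, c0 + c1)
        if x < mid:
            out.append(0); hi = mid; c0 += 1
        else:
            out.append(1); lo = mid; c1 += 1
    return out
```

Because the decoder rebuilds the same probabilities from the same counts, any coder of this family is defined entirely by its interval-subdivision and renormalization rules; swapping the entropy coder in snow means swapping exactly those rules.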
If I stumble across any other snow improvement ideas, I'll reply here ...
Michael

GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
No snowflake in an avalanche ever feels responsible. -- Voltaire