[FFmpeg-devel] Psygnosis YOP decoding
Fri Aug 7 06:43:08 CEST 2009
Thomas Higdon wrote:
> I'm interested in getting involved in ffmpeg development. In order to
> get more familiar with the code base, I've decided to take a look at
> one of the suggested small FFmpeg tasks:
Great, welcome! Taking on one of those small tasks is indeed a nice way
to get a feel for participating in the project.
> I've been using this page:
> as a spec for the decoder. I understand that a couple of people
> expressed interest in this task last year as a SoC qualifier, but as
> far as I can tell, nothing came of it, because there's no decoder in
> the tree that I can see, nor any reference to it in the svn history or
> on the mailing list. Let me know if I'm wrong.
> I've had some luck so far. I've been able to decode the audio data by
> writing a demuxer and sending it to the Westwood IMA ADPCM decoder.
Yes, that shows that you are on the right track.
> I've written some code for decoding the video, but I'm a little
> confused by what's on the wiki. I have a few questions:
> 1. How does the palette work? Each frame apparently carries its own
> palette part, which is PalColors * 3 bytes long, one byte for each RGB
> color component. I'm pretty sure this is correct, because the audio
> data starts where I expect it to. However, I'm not clear on the roles
> of FirstColor1 and FirstColor2. What does it mean to "update PalColors
> entries starting from FirstColor1 for odd and FirstColor2 for even
> frames respectively"? Does the encoder carry some palette state that
> is updated by each frame?
I'm not the author of this wiki page, but if I understood correctly,
the algorithm modifies the current palette as follows:
Suppose that PalColors == 10, FirstColor1 == 3 and FirstColor2 == 20.
The first frame then updates the 10 palette entries at indices 3
through 12. The second frame updates indices 20 through 29. The third
frame, like the first, updates 3 through 12; the fourth, as an "even"
frame, updates like the second, and so on. So yes, the decoder would
carry palette state that each frame partially overwrites.
> 2. I'm assuming when the algorithm says to use a byte to "paint" a
> pixel, I'm taking the byte and indexing into a palette, and painting
> that pixel with that color. Is this correct?
I would guess so.
> 3. Does the decoding proceed from top-to-bottom and left-to-right?
Probably yes. This is the simplest question to settle: if the image
comes out flipped, it is easy to fix.
> That is, when I look at the first tag in some frame's video data, and
> consume the next byte to see what color to paint, am I on the upper
> left macroblock at that point? Does the decoding proceed to the next
> block to the right after the first is painted? I ask this because in
> the "copy previous block" part of the algorithm, all of the possible
> offsets have negative numbers, and would seem to refer to
> already-painted macroblocks if things are decoded top-to-bottom,
> left-to-right. Am I missing something?
It is quite possible that the decoder simply uses data from the
previous frame when the referenced area hasn't been painted yet in the
current one.
Since this is an FMV game codec, the original decoder may just have
fetched whatever was in the video buffer at that position.