[FFmpeg-devel] [PATCH 1/2] lavu/frame: add new side data type for ICC profiles

Rostislav Pehlivanov atomnuker at gmail.com
Fri Jul 21 02:55:15 EEST 2017

On 21 July 2017 at 00:49, Nicolas George <george at nsup.org> wrote:

> On tridi 3 Thermidor, an CCXXV, Rostislav Pehlivanov wrote:
> > It can be quite big. In some insane cases it can be bigger than the
> > actual packet, or even than the uncompressed frame. It's also not
> > strictly necessary in order to display something more or less
> > okay-looking, but it is necessary to display something correctly. I
> > think it's better treated like we currently treat HDR metadata: by
> > defining a side data type and letting the API users cache and use it,
> > which is what this patch does.
> All this could apply to a dedicated field. Using side data only brings a
> type pruning of the actual type. Except:
Yes, it could. However, I still think having it as side data is better,
since it's the easiest way, and side data is also what all API users
currently use to retrieve HDR metadata.
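To make the comparison concrete, here is a minimal sketch of how an API user could fetch the proposed side data from a decoded frame, using the existing libavutil side data API. It assumes the patch is applied, i.e. that the `AV_FRAME_DATA_ICC_PROFILE` enum value exists; everything else (`av_frame_get_side_data`, the `metadata` dictionary on `AVFrameSideData`) is the current API.

```c
#include <stdio.h>
#include <libavutil/frame.h>
#include <libavutil/dict.h>

/* Sketch: look up the ICC profile side data on a decoded AVFrame.
 * AV_FRAME_DATA_ICC_PROFILE is the type this patch proposes. */
static void dump_icc_profile(const AVFrame *frame)
{
    AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_ICC_PROFILE);
    if (!sd)
        return; /* no profile attached to this frame */

    /* The optional profile name lives in the side data's metadata dict. */
    AVDictionaryEntry *name = av_dict_get(sd->metadata, "name", NULL, 0);

    printf("ICC profile: %d bytes%s%s\n", sd->size,
           name ? ", name: " : "", name ? name->value : "");

    /* sd->data points at the raw ICC bitstream, ready to be handed to a
     * CMS library; the API user can cache it across frames as described. */
}
```

This mirrors how HDR metadata such as `AV_FRAME_DATA_MASTERING_DISPLAY_METADATA` is retrieved today, which is the consistency argument being made above.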

> >                                Side data is also refcounted so it even
> > solves the size issue.
> Indeed. A separate type could be refcounted too, though, but that would
> require a little more code. Short-term convenience vs. long-term
> convenience.
> > > > +     * The data contains an ICC profile with an optional name
> > > > +     * defined in the metadata entry.
> > > Not being a specialist of ICC profiles, I have no idea what that means
> > > in practice. What data structure is it?
> > There's a 300-page document describing the bitstream and how it's
> > used. The smallest implementation, lcms, is still a few tens of
> > thousands of lines.
> This is not what I am asking. Look at the doxy for
> AV_FRAME_DATA_SPHERICAL. Explaining the semantics of the structure
> requires at least a few pages of documentation, but the doxy still
> explains the data structure used in a few words.
> What I am asking is very simple: if I want to exploit this side data
> through a library, to what type shall I cast the pointer?
Nothing; it's a bitstream. You hand it to a library to decode, and the
library is then ready to take e.g. RGB in and output color-corrected RGB.
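As a hypothetical illustration of that workflow, here is a sketch using Little CMS 2 (lcms2): the library parses the raw ICC bitstream from the side data and builds a transform to color-corrected 8-bit RGB. The function name, the assumption of an sRGB output device, and the minimal error handling are all illustrative choices, not part of the patch.

```c
#include <stdint.h>
#include <lcms2.h>

/* Sketch: feed the raw ICC profile bytes (e.g. sd->data / sd->size from
 * the frame side data) to lcms2 and transform packed 8-bit RGB pixels
 * into sRGB. Returns 0 on success, -1 on failure. */
int correct_rgb(const uint8_t *icc, uint32_t icc_size,
                const uint8_t *in, uint8_t *out, uint32_t npixels)
{
    cmsHPROFILE src = cmsOpenProfileFromMem(icc, icc_size);
    if (!src)
        return -1; /* not a valid ICC profile */

    cmsHPROFILE dst = cmsCreate_sRGBProfile(); /* assume an sRGB display */
    cmsHTRANSFORM xf = cmsCreateTransform(src, TYPE_RGB_8,
                                          dst, TYPE_RGB_8,
                                          INTENT_PERCEPTUAL, 0);
    int ret = -1;
    if (xf) {
        cmsDoTransform(xf, in, out, npixels);
        cmsDeleteTransform(xf);
        ret = 0;
    }
    cmsCloseProfile(src);
    cmsCloseProfile(dst);
    return ret;
}
```

This is why the side data is documented as an opaque bitstream: there is no struct for the API user to cast to, only bytes to pass through to a CMS.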
