[FFmpeg-devel] Process (Was: [PATCH][7/8] Add VA API accelerated H.264 decoding (take 4))

Gwenole Beauchesne gbeauchesne
Tue Feb 10 11:18:08 CET 2009


Hi,

On Mon, 9 Feb 2009, Michael Niedermayer wrote:

> And above all, do you consider my suggestions bad? If so that should be
> discussed, maybe i miss something and you have a better idea for the
> optimal design but if not, then where is the problem? we arent disagreeing

Yes, your factorisation model is a little suboptimal. I don't object to 
factorisation, but I can only see two reasonable entry points for it.

1) At the very beginning. However, you have to have an overview of other 
work, otherwise you tend toward over-engineered academic work. If 
someone shows you a patch doing X, see if there is any Y or Z related to 
X, i.e. lookahead. In particular: "oh, a video-acceleration patch? are 
there other video-acceleration APIs around?" Yes - No? Yes => try to 
factor out already. This only works if you know about the other specs.

2) Third iteration, i.e. third point of view. Either a 3rd party 
factoring out code from 2 concurrent implementations, or one of the 
original authors willing to add support for a 3rd spec. A third point of 
view is needed to get things right, the somewhat "2:1 majority" Diego 
likes so much. IMHO, bipolar models never work correctly: each party 
thinks the way he implemented something was the right one.

The lookahead model applies to anything. e.g. memory allocation => when 
you allocate something, think beforehand about when you will deallocate 
it; this will spare you memory leaks. Correct use of data types => when 
you want to cast pointers to integers, think about how large they need 
to be, and you will avoid portability issues. Really, this makes your 
life simpler...
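The two examples above can be sketched in C. The `Ctx` struct and the function names below are hypothetical illustrations of the lookahead principle, not code from FFmpeg:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical context struct, just to illustrate the idea. */
typedef struct Ctx {
    uint8_t *buf;
    size_t   size;
} Ctx;

/* The allocation is written together with its release path: every field
 * set in ctx_init() has a matching line in ctx_free(), decided up front. */
static int ctx_init(Ctx *c, size_t size)
{
    c->buf = malloc(size);
    if (!c->buf)
        return -1;
    c->size = size;
    return 0;
}

static void ctx_free(Ctx *c)
{
    free(c->buf);
    c->buf  = NULL;   /* defensive: calling ctx_free() twice stays safe */
    c->size = 0;
}

/* Pointer <-> integer: uintptr_t is guaranteed wide enough to hold a
 * data pointer, unlike int or long on some platforms. */
static uintptr_t ptr_to_uint(void *p)
{
    return (uintptr_t)p;
}
```

Choosing `uintptr_t` at the point of the cast, rather than fixing a `long` later, is exactly the kind of lookahead meant here.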

>> At the very least, you should have commented on the real implementation
>> earlier/beforehand, not many weeks after commenting on pure *cosmetics*.
>> That's a terrible waste of time, and totally suboptimal process. IOW,
>> cosmetics are the last things to care about in a process, especially if
>> those are things that could be done through an automated tool.
>
> Why did you not realize the problems immedeatly?

I did, but like any newcomer, I think, I assumed existing practice (i.e. 
code already committed) was the right one and didn't want to bother 
others with what they would consider "pointless discussions" or trivial 
things to check from the code.

So we (well, at least I; I don't know about others) then base our work 
on current practice and make the effort to read the existing code. Isn't 
that what you'd expect? Otherwise, I have already seen reactions like: 
"the code has your answers, if you bother to look"; "if you can't read 
code, you are a beginner programmer", etc. That also has a psychological 
effect: some people enjoy bitching at others, and newcomers want to 
avoid that. IOW, in order to keep one's mental health sane, it's better 
not to attempt things that would trigger those usual reactions. => base 
your code on current practice.

Now, you could also warn people: "Be careful, FFmpeg code is probably 
architecturally broken, don't assume anything! Come and discuss first; 
we promise we won't bite you. And if we do, OK, as long as we accept 
your patch without compromise. ;-)"

> if so why do you belive i knew more when even the author of the patch 
> apparently didnt notice every problem.

Because you are actually the one who knows the surrounding code best and 
has the best overview of it. So you are a reference: if you accepted a 
patch, then it was correct practice and an incentive for others to do 
the same.

Moreover, if you don't change the current code for weeks/months/years, 
it means either that it is the most correct approach to follow, or that 
the code is totally obsolete and should be marked as such, i.e. 
documented as "bad practice, please don't follow this model, needs to be 
changed, etc." + one or two references to the list archives.

BTW, don't tell me you didn't have the curiosity to check whether there 
were any other APIs hanging around for video acceleration. I am sure you 
knew there were. ;-)

>>> you can maybe gain some advantage for VAAPI over VDPAU ;)
>>
>> You know, VA API already has a clear advantage over VDPAU: it supports all
>> three major APIs and implementations. ;-) So, I'd even say, if you want to
>> have a common FFmpeg base struct, you'd better deriving it from VA API as
>> it's descriptive enough to suit your purposes.
>
> maybe and maybe iam not opposed but then this sounds all to familiar ...
> SDL supports it all X11,XV, ... ffplay supports just SDL so it has to be
> great ...
> seriously i wish ffplay had native XV & X11 support _instead_ of SDL
>
> thus the idea of just supporting VAAPI has to be considered carefully

That's not the point; that would matter for approach (A), where you want 
common data structures directly at the FFmpeg level and then converters 
to the target APIs. In that case, VA API would indeed have all the 
necessary stuff already (except the MPEG-2 IDCT-level acceleration, 
point granted). So it would have been enough to grab/rename/whatever the 
data structures in that case.
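Approach (A) can be sketched as a common struct plus a per-backend converter. Both structs below are hypothetical stand-ins, not real FFmpeg or VA API types; the point is only the shape of the design:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical codec-neutral parameters kept at the FFmpeg level. */
typedef struct CommonPicParams {
    uint16_t width, height;
    uint8_t  profile;
} CommonPicParams;

/* Hypothetical backend-specific struct (stand-in for a VA API-style type). */
typedef struct BackendPicParams {
    uint16_t picture_width;
    uint16_t picture_height;
    uint8_t  profile_idc;
} BackendPicParams;

/* Converter from the common representation to the target API: one such
 * function per supported backend, all fed from the same common struct. */
static void to_backend(const CommonPicParams *src, BackendPicParams *dst)
{
    dst->picture_width  = src->width;
    dst->picture_height = src->height;
    dst->profile_idc    = src->profile;
}
```

The argument in the text is that if the common struct is derived from a sufficiently descriptive API, these converters stay close to plain field copies.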

Besides, having a single implementation is the role of the upper levels, 
I think. That's exactly the route I took: implement other backends to VA 
API. i.e. it's a distribution-{level,policy} choice to make. FFmpeg 
exposes building blocks.

Regards,
Gwenole.
