[FFmpeg-devel] [RFC] H.264/SQV3 separation: h264data.h
Wed Dec 17 00:39:57 CET 2008
On Tue, 2008-12-16 at 22:43 +0100, Michael Niedermayer wrote:
> On Tue, Dec 16, 2008 at 10:54:35PM +0200, Uoti Urpala wrote:
> > On Tue, 2008-12-16 at 20:45 +0100, Michael Niedermayer wrote:
> > > > > Uoti Urpala wrote:
> > > > > > On Tue, 2008-12-16 at 13:31 +0100, Michael Niedermayer wrote:
> > > > > >> On Tue, Dec 16, 2008 at 11:01:34AM +0100, Panagiotis Issaris wrote:
> > > > > >>> Besides that, in my opinion you can't benchmark code on one particular
> > > > > >>> machine and expect a 0.5% performance loss to say anything about the
> > > > > >>> code's performance on other machines (other than that the code hasn't
> > > > > >>> introduced a substantial performance change on similar machines).
> > > > > >> What evidence is there to support this claim?
> > > > > >
> > > > > > Many known cases where changes that are known not to affect the real
> > > > > > quality of the code alter performance by more than 0.5%. I just tried
> > > > > > adding an unused global function to h264.c. The first attempt didn't
> > > > > > show any quickly measurable difference. Then I made the function a bit
> > > > > > larger, and now the benchmark ran consistently 0.8% faster. Removing the
> > > > > > unused function made things correspondingly 0.8% slower again.
> > > What I am wondering is how a speed effect from random unused code on a
> > > single machine is related to your claim above about the lack of correlation
> > > between the speed on different machines.
> > Do you really expect adding an unused function to consistently increase
> > speed everywhere?
> Please refrain from randomly clipping parts of the discussion silently, and
> by that placing what I said under some different text.
The only part I clipped was Baptiste asking about the particular code.
Do you think that affected the context?
> Also please refrain from putting words in my mouth, or replying in such
> a way as to pretend that I made some ridiculous claim that you now would
> try to correct.
I don't think I did.
> Also please refrain from trolling in the style of:
> U: > It's always A, that's fact, known by everyone
> M: no it's not always A
> M: > no it's not always A
> U: Do you really believe it's never A?
I did not do that either. There was no such "always" / "not never"
difference above, only "changes are consistent" or "changes are not".
You: How is it known that small speed changes are not consistent across
machines?
Me: You can see that the speed is affected by random effects like this
even on a single machine.
You: How does that show all the machines do not behave the same?
Me: Once we know there's a random component, it's unlikely all the
machines would get the same random result, isn't it?
Did you interpret the original claim to say that even _major_
performance differences would not correlate between different enough
machines? I don't think that was meant, and I certainly didn't interpret
it that way. If you see a small change in a benchmark result, then you
can't say much about what'll happen on other machines, except that the
changes are unlikely to be too big on similar enough machines (there's
more chance they could vary significantly on substantially different
machines, but there's still correlation).
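To make the point concrete, here is a minimal sketch (not FFmpeg code; the
noise magnitude, machine count, and function names are all invented for
illustration). If a change's true effect is near zero and each machine's
measured delta is dominated by its own random component, the sign of the
delta is essentially a coin flip per machine, so agreement across several
machines by pure chance is rare:

```python
# Hypothetical simulation: per-machine benchmark delta = true effect + noise.
# All numbers here are invented for illustration, not real measurements.
import random

random.seed(1)

def measured_delta(true_effect_pct, noise_pct):
    # One machine's measured delta: the true effect plus machine-specific
    # random noise (code layout, alignment, cache effects, etc.).
    return true_effect_pct + random.gauss(0.0, noise_pct)

def all_same_sign(deltas):
    return all(d > 0 for d in deltas) or all(d < 0 for d in deltas)

def agreement_rate(true_effect_pct, noise_pct=0.5, machines=5, trials=10000):
    # Estimate how often all machines report the same change direction.
    hits = 0
    for _ in range(trials):
        deltas = [measured_delta(true_effect_pct, noise_pct)
                  for _ in range(machines)]
        if all_same_sign(deltas):
            hits += 1
    return hits / trials

# With no real effect, 5 machines agree only ~2 * 0.5**5 = 6.25% of the time;
# with a large real effect (e.g. -8%), agreement is essentially certain.
print(agreement_rate(0.0))
print(agreement_rate(-8.0))
```

So a consistent change direction across machines is evidence of a real
effect, while a single small delta on one machine is not.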
> > > One actual case here, that is the table split, showed a slowdown consistent
> > > across machines, thus contradicting your claim.
> > Why would it contradict my claim (even if verified to be consistent, I'm
> > not sure whether there were enough reports for that yet)? I myself
> > reported an 8% slowdown, which is more than the ordinary random variation.
> > Of course you'll see consistent change _direction_ if there's a big
> > enough real effect on speed.
> > What should be rare according to my claim is a nontrivial(*) change
> > having a consistent small negative or consistent small positive effect
> > across machines.
> well the split seems to have a rather consistent effect across machines,
> that is, it's slower. And people's reactions, like Diego posting an IRC log
> from you, show that you made people believe that your claims were related
> to the actual code changes.
What I said on IRC was that a 0.28% change in the h264 decoder tells
essentially nothing. A 0.28% slowdown in a benchmark is not a much
bigger sign of real slowdown than a 0.28% benchmark speedup would be. So
my claims there were about the interpretation of that one benchmark
result. After hearing of Diego's -0.28% result alone, there was little
reason to expect the speed change to be consistently up or to be
consistently down. Only after also knowing of my -8% result was there
evidence of some effect that might make results consistently slower.
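As a minimal sketch of that interpretation (the run times below are
invented, not real FFmpeg benchmark numbers), one way to judge a single
small delta is to compare it against the run-to-run noise of the baseline
itself:

```python
# Hypothetical example: is a single small benchmark delta distinguishable
# from run-to-run noise? All run times below are invented for illustration.
from statistics import mean, stdev

baseline_runs = [10.02, 9.97, 10.05, 9.94, 10.08, 9.99]   # seconds, hypothetical
patched_runs  = [10.04, 10.01, 9.98, 10.07, 9.96, 10.06]  # seconds, hypothetical

delta_pct = 100.0 * (mean(patched_runs) - mean(baseline_runs)) / mean(baseline_runs)
noise_pct = 100.0 * stdev(baseline_runs) / mean(baseline_runs)

# A delta well inside the baseline's own run-to-run scatter says essentially
# nothing about which version is really faster.
print(f"delta = {delta_pct:+.2f}%, run-to-run noise ~ {noise_pct:.2f}%")
print("within noise" if abs(delta_pct) < noise_pct else "possibly real")
```

On this invented data the measured delta is well inside the noise band,
which is the sense in which a lone 0.28% figure tells essentially nothing.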