[FFmpeg-devel] [RFC] H.264/SQV3 separation: h264data.h

Uoti Urpala
Tue Dec 16 01:57:52 CET 2008


On Tue, 2008-12-16 at 00:43 +0100, Michael Niedermayer wrote:
> On Mon, Dec 15, 2008 at 04:52:34AM +0200, Uoti Urpala wrote:
> > On Mon, 2008-12-15 at 01:33 +0100, Michael Niedermayer wrote:
> > > One has to weigh the advantage against the disadvantage
> > > 
> > > here, the advantage of the change is largely cosmetic
> > > disadvantage is a 0.5% speed loss, after 10 such changes our decoder is 5%
> > > slower.
> > 
> > A real speed change, yes, but a 0.5% change in a benchmark result
> > tells you very little about whether the speed really changed ("real"
> > speed could have become either faster or slower).
> 
> Please see my mail i just posted for an actual test of what can reliably
> be detected.

Your other mail seems to be about something else, namely benchmarking a
particular binary. You can benchmark the performance of one binary to a
precision higher than 0.5%; but the performance of that binary will have
a component determined by "random" compile-time effects that is larger
than 0.5%. As I wrote in the part of my earlier mail you didn't quote:
"I have benchmarked various irrelevant changes to the h264 decoder
earlier and they typically have effects at the percent or couple level
(even when each resulting binary is benchmarked enough that the
randomness caused by per run variation is much less)". You're only
measuring the "randomness caused by per run variation", which I already
said can be much less.

The "random" variation I'm talking about is NOT between different runs
of the same binary, but between slightly different compiles of the same
code. For example using a slightly different compiler version, changing
code whose performance doesn't matter or which is completely unused, or
even changing code in another translation unit from which nothing is
executed in your benchmark. Such "irrelevant" changes can change
performance by a couple of percent, and any real change can change the
"random" effects in addition to its consistent performance effects. You
can benchmark change X to the h264 decoder and note that it makes h264
decoding 1% slower. Then someone changes the vorbis decoder, and after
that, change X makes h264 decoding 1% faster on your computer. On someone
else's machine the effects could be reversed (first X looks beneficial,
then harmful).

> [...]
> > > also ive developed and optimized other codecs like the mpeg4 variants based
> > > on this concept and they are pretty much the fastest around. Had i applied
> > > random changes that made the code 0.5% slower they would not be as
> > > fast.
> > 
> > But benchmarking is not a reliable way to tell whether the h264 decoder
> > really became 0.5% slower (at least you'd need a big collection of
> > various possible "irrelevant" code changes and of compiler versions to
> > benchmark against).
> 
> Please see my mail i just posted for an actual test of what can reliably
> be detected.

It shows that the random result from the compile can be detected. That
doesn't mean it could detect the real quality of the code before the
compile randomness is added to it. Also, real code would likely have
more runtime randomness than the code in your benchmark (at the least,
memory access patterns would vary more with heap and stack location
randomization in a real program).

> > > doing optimization well is not a matter of asking why its slower and then
> > > ignoring it when its not obvious, its a matter of picking the faster.
> > > Like in evolution, the question is not why some species is consistently
> > > 0.5% less effective than another just that it is.
> > 
> > Evolution relies on a large collection of individuals with varying
> > traits. It wouldn't work if it abandoned a trait when one individual
> > with it dies.
> 
> are we speaking about the same thing? it seems not

If you kept a huge collection of programs with various features (one
with patches X, Y and Z applied, another with X, Z and W applied, and a
few thousand other variations) and occasionally pruned the overall
slowest variations, then that could work like evolution. Abandoning a
patch because one binary with it applied is slower would be more like
evolution abandoning a gene because one individual with it died, which
wouldn't work.

> > > if one can find out why its 0.5% worse one may be able to find a solution
> > > that makes the wanted change and avoids the speedloss, but if not the
> > > change isnt ok.
> > 
> > What about all the other changes you applied that made it 0.5% slower?
> 
> which change?

Lots of them, as I explained below (exactly which ones they are depends
on whose machine they're benchmarked on...).

> > Since the speed varies back and forth with most changes, about half of
> > them (other than clearly beneficial optimizations) have made it slower,
> > and a significant portion of those by 0.5% or more. Even changes in
> > other decoders can affect h264 speed that much (probably some memory
> > aliasing effect).
> 
> The speed very likely does vary, yes; if its half faster half slower is
> something you are guessing, not knowing, i suspect. Same for the relation
> to 0.5%

Half faster, half slower is the logical effect of "random" changes. The
relation to 0.5% is based on what I've seen in practice while
benchmarking h264 decoding in different versions.

> > > and about other random changes making the code 0.5% faster, these are VERY
> > > welcome, adding a noinline here, and inline there and such, really!
> > > [of course only when the patch is clean and doesnt mix unrelated stuff]
> > 
> > Such random changes have about 50% chance of being beneficial on any
> > other combination of compiler, machine and exact FFmpeg source used. 
> 
> > The
> > changes I tested probably wouldn't have the same effect in ffplay as in
> > my test under MPlayer. 
> 
> > The results really are essentially "random".
> 
> So what? is it "probably" or "really are essentially"

You're confusing the subjects of the sentences. There are not two
statements about a single "it", but two separate statements.

The performance effect of that particular change probably wouldn't be
the same under ffplay (but there's a chance it would be; it depends on
which way the "random" roll would land).

And the results are essentially "random" (strictly speaking they are
deterministic, but "random" in the same sense as the output of a hash
function).

> Diego tested the change in ffmpeg and mplayer, the results were the same
> you just said you didnt test except one case (mplayer). Now you extrapolate
> from 1 case that it would be random. Are you studying cosmology by chance?

I do not extrapolate from one case. I base it on several prior
measurements of h264 decoder performance; and it doesn't take much
extrapolation to say that the performance variation of the current
decoder probably isn't much smaller than what was measured before.




