
Evaluating Java profilers

1 reply
Iulian Dragos
You may be interested in the following paper:
Evaluating the accuracy of Java profilers (PLDI 2010)
"Performance analysts profile their programs to find methods that are worth optimizing: the "hot" methods. This paper shows that four commonly-used Java profilers (xprof , hprof , jprofile, and yourkit) often disagree on the identity of the hot methods. If two profilers disagree, at least one must be incorrect. Thus, there is a good chance that a profiler will mislead a performance analyst into wasting time optimizing a cold method with little or no performance improvement.

This paper uses causality analysis to evaluate profilers and to gain insight into the source of their incorrectness. It shows that these profilers all violate a fundamental requirement for sampling-based profilers: to be correct, a sampling-based profiler must collect samples randomly."
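
To make the randomness requirement concrete, here is a minimal sketch (assuming Scala 2.13; all names invented, not from the paper) of a sampling profiler whose inter-sample delays are exponentially distributed, so that samples arrive as a Poisson process rather than in lockstep with periodic program behaviour:

import scala.util.Random
import scala.collection.mutable
import scala.jdk.CollectionConverters._

// Minimal sketch of a sampling profiler. The essential detail is the
// randomized inter-sample delay: sampling at a fixed period can
// synchronize with periodic phases of the program and bias the
// resulting "hot method" ranking.
object TinySampler {
  private val counts = mutable.Map.empty[String, Long].withDefaultValue(0L)
  @volatile private var running = true

  def start(meanIntervalMs: Int): Thread = {
    val t = new Thread(() => {
      val rnd = new Random
      while (running) {
        // Exponentially distributed delay => samples form a Poisson
        // process, uncorrelated with the program's own periodicity.
        val delay = (-meanIntervalMs * math.log(1.0 - rnd.nextDouble())).toLong.max(1L)
        Thread.sleep(delay)
        for ((thread, frames) <- Thread.getAllStackTraces.asScala
             if thread.getState == Thread.State.RUNNABLE && frames.nonEmpty) {
          val top = frames.head
          counts.synchronized {
            counts(s"${top.getClassName}.${top.getMethodName}") += 1
          }
        }
      }
    })
    t.setDaemon(true)
    t.start()
    t
  }

  def stopAndReport(): Unit = {
    running = false
    counts.synchronized {
      counts.toSeq.sortBy(-_._2).take(10).foreach { case (m, n) => println(f"$n%6d  $m") }
    }
  }
}

One caveat: Thread.getAllStackTraces can itself only observe threads at JVM safepoints, which is exactly the bias the paper identifies in the four profilers, so truly random sampling needs support below the safepoint mechanism (e.g. HotSpot's AsyncGetCallTrace).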


--
"I hate mountains: they hide the landscape."
Alphonse Allais
Johannes Rudolph
Re: Evaluating Java profilers

I like it.

I've often had doubts about profiling results, but I mostly used
instrumentation, which is often marketed as more accurate. I came to
the conclusion that instrumentation, particularly in combination with
the JIT, must have some observer effect.
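
To make that observer effect concrete, here is a minimal sketch (names invented, not taken from any real profiler) of what method-level instrumentation effectively does to each method it measures:

// Minimal sketch of what bytecode instrumentation amounts to:
// wrapping each method with timing bookkeeping. The bookkeeping
// itself takes time and defeats JIT optimizations such as inlining,
// so the instrumented program no longer behaves like the original.
object Instrumented {
  import java.util.concurrent.atomic.AtomicLong
  val timeInHot = new AtomicLong(0L)

  // Original method: trivially small, a prime inlining candidate.
  def hot(x: Int): Int = x * x + 1

  // Instrumented version: the nanoTime calls and the atomic add can
  // easily cost more than the method body itself, and the extra code
  // makes the JIT far less likely to inline the call.
  def hotInstrumented(x: Int): Int = {
    val t0 = System.nanoTime()
    try hot(x)
    finally timeInHot.addAndGet(System.nanoTime() - t0)
  }
}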

That they found shortcomings in sampling as well is really
interesting, and it gives hope for even better profilers in the
future if these issues are addressed.

On Wed, Oct 27, 2010 at 3:41 PM, iulian dragos wrote:
> You may be interested in the following paper:
> Evaluating the accuracy of Java profilers (PLDI 2010)
> [...]
