This is cairo's micro-benchmark performance test suite.

One of the simplest ways to run this performance suite is:

    make perf

which will give a report of the speed of each individual test. See
below for more details on other options for running the suite.

A macro test suite (with full traces and more intensive benchmarks) is
also available; for this, see https://cgit.freedesktop.org/cairo-traces.
The macro-benchmarks are better measures of actual real-world
performance, and should be preferred over the micro-benchmarks (and over
make perf) for identifying performance regressions or improvements.  If
you copy or symlink this repository at cairo/perf/cairo-traces, then
make perf will run those tests as well.
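
For example, a minimal sketch of that setup, (the clone URL and the
paths here are placeholders, not verified values):

    # Fetch the traces, then link them where make perf looks for them
    git clone <cairo-traces-url> ~/src/cairo-traces
    ln -s ~/src/cairo-traces /path/to/cairo/perf/cairo-traces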

Running the micro-benchmarks
----------------------------
The micro-benchmark performance suite is composed of a series of
hand-written, short, synthetic tests that measure the speed of a
simple operation such as painting a surface or showing glyphs. These
aim to give very good feedback on whether a performance-related patch
is successful without causing any performance degradations elsewhere.

The micro-benchmarks are compiled into a single executable called
cairo-perf-micro, which is what "make perf" executes. Some
examples of running it:

    # Report on all tests with default number of iterations:
    ./cairo-perf-micro

    # Report on 100 iterations of all gradient tests:
    ./cairo-perf-micro -i 100 gradient

    # Generate raw results for 10 iterations into cairo.perf:
    ./cairo-perf-micro -r -i 10 > cairo.perf
    # Append 10 more iterations of the paint test:
    ./cairo-perf-micro -r -i 10 paint >> cairo.perf

Raw results aren't useful for reading directly, but are quite useful
when using cairo-perf-diff to compare separate runs (see more
below). The advantage of using the raw mode is that test runs can be
generated incrementally and appended to existing reports.
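
For instance, a later session can top up the cairo.perf report from
the examples above, (a sketch reusing only options shown earlier):

    # Append 100 more iterations of the gradient tests
    ./cairo-perf-micro -r -i 100 gradient >> cairo.perf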

Generating comparisons of separate runs
---------------------------------------
It's often useful to generate a chart showing the comparison of two
separate runs of the cairo performance suite, (for example, after
applying a patch intended to improve cairo's performance). The
cairo-perf-diff script can be used to compare two report files
generated by cairo-perf.

Again, by way of example:

    # Show performance changes from cairo-orig.perf to cairo-patched.perf
    ./cairo-perf-diff cairo-orig.perf cairo-patched.perf

This will work whether the data files were generated in raw mode (with
cairo-perf -r) or cooked, (cairo-perf without -r).
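
A complete round trip might look like this, (a sketch; only the file
names from the example above are assumed):

    # Collect a baseline, then re-measure after applying your patch
    ./cairo-perf-micro -r > cairo-orig.perf
    # ... apply the patch and rebuild cairo ...
    ./cairo-perf-micro -r > cairo-patched.perf
    ./cairo-perf-diff cairo-orig.perf cairo-patched.perf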

Finally, in its most powerful mode, cairo-perf-diff accepts two git
revisions and will do all the work of checking each revision out,
building it, running cairo-perf for each revision, and finally
generating the report. Obviously, this mode only works if you are
using cairo within a git repository, (and not from a tar file). Using
this mode is as simple as passing the git revisions to be compared to
cairo-perf-diff:

    # Compare cairo 1.2.6 to cairo 1.4.0
    ./cairo-perf-diff 1.2.6 1.4.0

    # Measure the impact of the latest commit
    ./cairo-perf-diff HEAD~1 HEAD

As a convenience, this common desire to measure a single commit is
supported by passing a single revision to cairo-perf-diff, in which
case it will compare it to the immediately preceding commit. So for
example:

    # Measure the impact of the latest commit
    ./cairo-perf-diff HEAD

    # Measure the impact of an arbitrary commit by SHA-1
    ./cairo-perf-diff aa883123d2af90

Also, when passing git revisions to cairo-perf-diff like this, it will
automatically cache results and re-use them rather than re-running
cairo-perf over and over on the same versions. This means that if you
ask for a report that you've generated in the past, cairo-perf-diff
should return it immediately.
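
For example, repeating a comparison generated earlier should come
straight back from the cache:

    # Re-uses the cached measurements; no rebuild or re-run needed
    ./cairo-perf-diff 1.2.6 1.4.0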

Now, sometimes it is desirable to generate more iterations rather than
re-using cached results. In this case, the -f flag can be used to
force cairo-perf-diff to generate additional results on top of what
has been cached:

    # Measure the impact of the latest commit (force more measurement)
    ./cairo-perf-diff -f

And finally, the -f mode is most useful in conjunction with the --
option to cairo-perf-diff, which allows you to pass options to the
underlying cairo-perf runs. This allows you to restrict the additional
test runs to a limited subset of the tests.

For example, a frequently used trick is to first generate a chart with
a very small number of iterations for all tests:

    ./cairo-perf-diff HEAD

Then, if any of the results look suspicious, (say there's a slowdown
reported in the text tests, but you think the text tests shouldn't be
affected), you can force more iterations to be tested for only those
tests:

    ./cairo-perf-diff -f HEAD -- text

Generating comparisons of different backends
--------------------------------------------
An alternate question that is often asked is, "how does the speed of
one backend compare to another?". cairo-perf-compare-backends reads
files generated by cairo-perf and produces a comparison of the
backends for every test.

Again, by way of example:

    # Show relative performance of the backends
    ./cairo-perf-compare-backends cairo.perf

This will work whether the data files were generated in raw mode (with
cairo-perf -r) or cooked, (cairo-perf without -r).
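
Since raw reports are plain appendable text, reports collected in
separate runs can be combined first, (a sketch resting on that
assumption; the file names are illustrative):

    # Concatenate raw reports from separate runs, then compare
    cat image.perf xlib.perf > cairo.perf
    ./cairo-perf-compare-backends cairo.perf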


Creating a new performance test
-------------------------------
This is where we could use everybody's help. If you have encountered a
sequence of cairo operations that are slower than you would like, then
please provide a performance test. Writing a test is very simple: it
requires you to write only a small C file with a couple of functions,
one of which exercises the cairo calls of interest.

Here is the basic structure of a performance test file:

    /* Copyright © 2006 Kind Cairo User
     *
     * ... Licensing information here ...
     * Please copy the MIT blurb as in other tests
     */

    #include "cairo-perf.h"

    static cairo_time_t
    do_my_new_test (cairo_t *cr, int width, int height)
    {
	cairo_perf_timer_start ();

	/* Make the cairo calls to be measured */

	cairo_perf_timer_stop ();

	return cairo_perf_timer_elapsed ();
    }

    void
    my_new_test (cairo_perf_t *perf, cairo_t *cr, int width, int height)
    {
	/* First do any setup for which the execution time should not
	 * be measured. For example, this might include loading
	 * images from disk, creating patterns, etc. */

	/* Then launch the actual performance testing. */
	cairo_perf_run (perf, "my_new_test", do_my_new_test);

	/* Finally, perform any cleanup from the setup above. */
    }

That's really all there is to writing a new test. The first function
above is the one that does the real work and returns a timing
number. The second function is the one that will be called by the
performance test rig (see below for how to accomplish that), and
allows for multiple performance cases to be written in one file,
(simply call cairo_perf_run once for each case, passing the
appropriate callback function to each).

We go through this dance of indirectly calling your own function
through cairo_perf_run so that cairo_perf_run can call your function
many times and measure statistical properties over the many runs.
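
For instance, a file measuring two cases might end with the following,
(a sketch; the do_* callbacks are assumed to be written exactly like
do_my_new_test in the template above):

    void
    my_new_test (cairo_perf_t *perf, cairo_t *cr, int width, int height)
    {
	/* One cairo_perf_run call per case in this file. */
	cairo_perf_run (perf, "my_new_test_fill", do_my_new_test_fill);
	cairo_perf_run (perf, "my_new_test_stroke", do_my_new_test_stroke);
    }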

Finally, to fully integrate your new test case you just need to add
your new test to three different lists. (TODO: We should set this up
better so that the lists are maintained automatically---computed from
the list of files in cairo/perf, for example). Here's what needs to be
added, (a combined sketch follows the list):

 1. Makefile.am: Add the new file name to the cairo_perf_SOURCES list

 2. cairo-perf.h: Add a new CAIRO_PERF_DECL line with the name of your
    function, (my_new_test in the example above)

 3. cairo-perf-micro.c: Add a new row to the list at the end of the
    file. A typical entry would look like:

	{ my_new_test, 16, 64 }

    The last two numbers are a minimum and a maximum image size at
    which your test should be exercised. If these values are the same,
    then only that size will be used. If they are different, then
    intermediate sizes will be used by doubling. So in the example
    above, three tests would be performed at sizes of 16x16, 32x32 and
    64x64.
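
Putting steps 2 and 3 together, (a sketch using the my_new_test name
from above; step 1 is simply adding your file name to the
cairo_perf_SOURCES list in Makefile.am):

    /* cairo-perf.h: declare the entry point */
    CAIRO_PERF_DECL (my_new_test);

    /* cairo-perf-micro.c: register the test and its size range */
    { my_new_test, 16, 64 },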


How to run cairo-perf-diff on WINDOWS
-------------------------------------
This section explains the specifics of running cairo-perf-diff under
win32 platforms. It assumes that you have installed a UNIX-like shell
environment such as MSYS (distributed as part of MinGW).

 1. From your MinGW32 window, be sure to have all of your MSVC
    environment variables set up for proper compilation using 'make'.

 2. Add the %GitBaseDir%/Git/bin path to your environment, replacing
    %GitBaseDir% with whatever directory your Git version is installed
    in.

 3. Comment out the "unset CDPATH" line in the git-sh-setup script
    (located inside the ...Git/bin directory) by putting a "#" at the
    beginning of the line.

You should be ready to go!

From your MinGW32 window, go to your cairo/perf directory and run the
cairo-perf-diff script with the right arguments.
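
For example, (the path is illustrative):

    # From the MSYS shell, inside your cairo checkout:
    cd /c/src/cairo/perf
    ./cairo-perf-diff HEAD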

Thanks for your contributions and have fun with cairo!

TODO
----
Add a control language for crafting and running small sets of
micro-benchmarks.