NAME

BigBench - benchmark groups of opcodes under different versions/conditions

SYNOPSIS

Usage  : ./bb [options]
Options: --help              print this screen and exit
         --accuracy=digits   round results to so many digits
         --base=number       print relative summary based on number
         --code=sourcecode   bench code snippet and ignore definitions
         --definitions=file  from where to read benchmark definitions
         --duration=seconds  run each op for at least this time
         --nosummary         don't print summary
         --nounlink          don't unlink temporary files (for debug)
         --path=libpath      path to libraries used by templates
         --runs=number       run benchmark more than once (see --take)
         --simulate=sr       simulate results by using srand(sr)
         --skew=factor       scale reported numbers by factor
         --take=run          take lowest|average|highest|last
         --templates=path    path to templates to be used
         --terse             terse summary (unless --nosummary)
         --tight             tighter summary (smaller spacing)

Options may be abbreviated, and their case does not matter.

DESCRIPTION

BigBench lets you define groups of opcodes (source code snippets), which are placed into different templates and then benchmarked. The definitions are stored in a file and can thus be reused, allowing you to redo benchmarks with reproducible results. It is also possible to run a short benchmark snippet directly from the command line.

The templates can load different module or Perl versions, or just set different flags. This allows comparisons between versions.

Unlike with use Benchmark;, you can specify on a per-op basis what the empty loop should look like and how the benchmark should be set up.

The benchmark results are rounded, and a nice summary is printed.

There are many options that control how long benchmarks run, how the results are rounded, what the summary should look like, and so on.

OPTIONS

The following command-line options exist:

--help

Print the help screen and exit. All other options are ignored.

--accuracy=digits

Round results to the given number of digits. The default is 3.

--base=number

Print a relative summary based on number instead of absolute results. With --base=100 you can simulate perlbench.
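
For the relative numbers, a plausible sketch (the exact formula bb uses is an assumption here, and the values are made up) is to scale every result against a reference result so that the reference maps to the --base value:

    # Illustrative only -- not bb's actual code.
    my $base      = 100;                           # from --base=100
    my $reference = 125_000;                       # ops/s of the baseline op
    my $result    = 150_000;                       # ops/s of the op compared
    my $relative  = $base * $result / $reference;  # 120, i.e. 20% faster
    printf "%.0f\n", $relative;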

--code=sourcecode

Bench the given source code snippet. Any --definitions=file is ignored.
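
For example, to bench a single snippet without any definitions file:

./bb --code='my $x = 2**64; $x * $x;'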

--definitions=file

The file from which to read the benchmark definitions. Defaults to 'bigint.def'.

--duration=seconds

Run each op for at least this time (in seconds). Must be at least one second. The timings are quite accurate with only 2 or 3 seconds, so going higher will not give you much better results. If you just want a quick peek at what the output will look like, use --simulate=sr (see there).
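
For example:

./bb --duration=2 --runs=1    # quick run, usually accurate enough
./bb --duration=5 --runs=3    # slower, but more stable numbers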

--nosummary

Don't print a summary. When given, --terse and --tight are ignored.

--nounlink

Don't unlink temporary files. If something goes wrong, you can have a look at what bb creates.

--path=libpath

Specifies the path to the libraries used by the templates. You can put unpacked modules or Perl versions there. This string will be interpolated into ##path## in the template files.
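
As a minimal sketch of how a template might use this (only the ##path## placeholder is part of bb; the surrounding Perl lines are made up for illustration):

    # hypothetical template fragment
    use lib '##path##';   # ##path## is replaced by the --path value
    use Math::BigInt;     # picks up whatever version lives under that path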

--runs=number

Run each benchmark this many times. See "--take" for which run's result will be used.

--simulate=sr

Simulate results by using srand(sr). This is a quick way to see what the output will look like; it is also used by the test suite.
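
The following is illustrative only (not bb's actual code), but it shows why seeding makes simulated results reproducible:

    # srand() with a fixed seed makes the rand() sequence repeatable,
    # so the simulated numbers come out identical on every run:
    srand(42);
    printf "%.3f\n", rand() for 1 .. 3;   # same three numbers each time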

--skew=factor

Scale all reported numbers by this factor. This makes for nicer output.

--take=run

Take the run that is 'lowest', 'average', 'highest', or 'last'.

Generally, when doing multiple runs, you want to take the highest one. Any noise introduced by the system while the benchmarks run makes the numbers worse, and since the results are in ops/s, the highest number represents the peak performance, which comes as close as possible to the real performance.

Take the average of all runs when you want to know what performance can be expected on a typical system.

The worst performance can be found by taking the lowest run.

The last run is for benchmarks that involve things that get cached (for instance, filesystem-dependent benchmarks). In this case, --take=last --runs=2 will use the first run to warm the caches, discard its result, and use the result from the second run.

See also "--runs".
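
For example:

./bb --runs=5 --take=highest   # peak performance, least noise
./bb --runs=5 --take=average   # typical expected performance
./bb --runs=2 --take=last      # warm caches in run 1, report run 2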

--templates=path

Specifies the path to the templates to be used for the actual benchmarking.

--terse

Create a more terse summary. It prints only the group averages and suppresses the individual op results.

--tight

Create a tighter summary with smaller spacing.

EXAMPLES

./bb --def=math.def --terse --skew=2.1      # nicer to print
./bb --def=str.def --inc=math --duration=5  # really fine-grained
./bb --def=some.def --nosummary             # detailed
./bb --def=some.def --terse --base=100      # simulate perlbench
./bb --code='"ababba" =~ /a+/;'             # only this

LICENSE

This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself.

AUTHORS

Tels http://bloodgate.com in late 2001. (C) Tels 2001-2004.