NAME
Benchmark::Timer - Benchmarking with statistical confidence
SYNOPSIS
    # Non-statistical usage
    use Benchmark::Timer;

    $t = Benchmark::Timer->new(skip => 1);
    for (1 .. 1000) {
        $t->start('tag');
        &long_running_operation();
        $t->stop('tag');
    }
    print $t->get_report;

    # --------------------------------------------------------------------

    # Statistical usage
    use Benchmark::Timer;

    $t = Benchmark::Timer->new(skip => 1, confidence => 97.5, error => 2);
    while ($t->need_more_samples('tag')) {
        $t->start('tag');
        &long_running_operation();
        $t->stop('tag');
    }
    print $t->get_report;
DESCRIPTION
The Benchmark::Timer class allows you to time portions of code conveniently, as well as benchmark code by allowing timings of repeated trials. It is perfect for when you need more precise information about the running time of portions of your code than the Benchmark module will give you, but don't want to go all out and profile your code.
The methodology is simple: create a Benchmark::Timer object, and wrap portions of code that you want to benchmark with start() and stop() method calls. You supply a unique tag, or event name, to those methods. This allows one Benchmark::Timer object to benchmark many pieces of code. If you provide error and confidence values, you can also use need_more_samples() to determine, statistically, whether you need to collect more data.
When you have run your code (one time or over multiple trials), you can obtain information about the running time by calling the results() method, or get a descriptive benchmark report by calling get_report().
If you run your code over multiple trials, the average time is reported. This is wonderful for benchmarking time-critical portions of code in a rigorous way. You can also optionally choose to skip any number of initial trials to cut down on initial case irregularities.
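For example, one timer object can compare two alternative implementations side by side. This is a sketch: the two sort calls are placeholder workloads, and the tag names are arbitrary.

    use Benchmark::Timer;

    my $t = Benchmark::Timer->new(skip => 2);    # ignore the first 2 trials

    for (1 .. 100) {
        $t->start('numeric sort');
        my @sorted = sort { $a <=> $b } map { rand } 1 .. 1000;
        $t->stop('numeric sort');

        $t->start('string sort');
        my @also_sorted = sort map { rand } 1 .. 1000;
        $t->stop('string sort');
    }
    print $t->get_report;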
METHODS
- $t = Benchmark::Timer->new( [options] );
Constructor for the Benchmark::Timer object; returns a reference to a timer object. Takes the following named arguments:
- skip
The number of trials (if any) to skip before recording timing information.
- minimum
The minimum number of trials to run.
- error
A percentage between 0 and 100 which indicates how much error you are willing to tolerate in the average time measured by the benchmark. For example, a value of 1 means that you want the reported average time to be within 1% of the real average time. need_more_samples() will use this value to determine when it is okay to stop collecting data. If you specify an error you must also specify a confidence.
- confidence
A percentage between 0 and 100 which indicates how confident you want to be in the error measured by the benchmark. For example, a value of 97.5 means that you want to be 97.5% confident that the real average time is within the error margin you have specified. need_more_samples() will use this value to compute the estimated error for the collected data, so that it can determine when it is okay to stop. If you specify a confidence you must also specify an error.
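Taken together, a fully specified constructor call might look like this (the particular values are illustrative):

    $t = Benchmark::Timer->new(
        skip       => 5,     # discard the first 5 trials
        minimum    => 10,    # always collect at least 10 trials
        confidence => 95,    # be 95% confident...
        error      => 1,     # ...that the mean is within 1% of the true mean
    );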
- $t->reset;
Reset the timer object to the pristine state it started in. Erase all memory of events and any previously accumulated timings. Returns a reference to the timer object. It takes the same arguments the constructor takes.
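Because reset() accepts the same arguments as the constructor, one timer object can be reconfigured between benchmarks (values illustrative):

    $t->reset(skip => 0, confidence => 99, error => 5);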
- $t->start($tag);
Record the current time so that when stop() is called, we can calculate an elapsed time. Supply a $tag, which is simply a string that is the descriptive name of the event you are timing. If you do not supply a $tag, the last event tag is used; if there is none, a "_default" tag is used instead.
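The tag is therefore optional; if you never supply one, timings simply accumulate under the "_default" tag:

    $t->start;                      # falls back to the "_default" tag
    &long_running_operation();
    $t->stop;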
- $t->stop($tag);
Record timing information. The optional $tag is the event you are timing, and defaults to the $tag supplied to the last start() call. If a $tag is supplied, it must correspond to one given to a previously called start() call. It returns the elapsed time in milliseconds. stop() throws an exception if the timer gets out of sync (e.g. the number of start()s does not match the number of stop()s).
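Since stop() returns the elapsed time, you can also inspect individual trials directly (a sketch; the operation is a placeholder):

    $t->start('fetch');
    &long_running_operation();
    my $elapsed = $t->stop('fetch');    # elapsed time in milliseconds
    print "this trial took $elapsed ms\n";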
- $t->need_more_samples($tag);
Compute the estimated error in the average of the data collected thus far, and return true if that error exceeds the user-specified error. The optional $tag is the event you are timing, and defaults to the $tag supplied to the last start() call, or the default tag if start() has not yet been called. If a $tag is supplied, it must correspond to one given to a previously called start() call. This routine assumes that the data are normally distributed.
- $t->get_report($tag);
Return a simple report on the collected timings, either for a single specified tag or for all tags if none is specified. The report shows the number of trials run, the total time taken, and, if more than one trial was run, the average time needed to run one trial along with error information. Events are reported in the order they were start()ed.
- $t->result($event);
Return the time it took for $event to elapse, or the mean time it took for $event to elapse once, if $event happened more than once. result() will complain (via a warning) if an event is still active.
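For example, to pull out just one event's (mean) time ('tag' here is a placeholder):

    my $time = $t->result('tag');
    print "each 'tag' trial took $time on average\n";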
- $t->results;
Return the timing data as a hash keyed on event tags, where each value is the time it took to run that event, or the average time it took if that event ran more than once. In scalar context it returns a reference to that hash. The return value is actually an array, so that the original event order is preserved.
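Because the underlying return value is an ordered array, assigning it to an array rather than a hash preserves the order in which events were first started (a sketch):

    my %results = $t->results;    # tag => time, order lost
    my @results = $t->results;    # ( tag1, time1, tag2, time2, ... ) in event order
    my $results = $t->results;    # hash reference, in scalar context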
- $t->data($event), $t->data;
These methods are useful if you want to recover the full internal timing data to roll your own reports.
If called with an $event, returns the raw timing data for that $event as an array (or a reference to that array if called in scalar context). This is useful for feeding to something like the Statistics::Descriptive package.
If called with no arguments, returns the raw timing data as a hash keyed on event tags, where the values of the hash are lists of timings for each event. In scalar context, it returns a reference to that hash. As with results(), the data is internally represented as an array, so you can recover the original event order by assigning to an array instead of a hash.
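For instance, the per-event timings can be handed to Statistics::Descriptive for richer analysis (a sketch; 'tag' is a placeholder):

    use Statistics::Descriptive;

    my @timings = $t->data('tag');    # raw times for every 'tag' trial
    my $stat    = Statistics::Descriptive::Full->new;
    $stat->add_data(@timings);
    printf "median %g, standard deviation %g\n",
        $stat->median, $stat->standard_deviation;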
BUGS
Benchmarking is an inherently futile activity, fraught with uncertainty not dissimilar to that experienced in quantum mechanics. But things are a little better if you apply statistics.
SEE ALSO
Benchmark, Time::HiRes, Time::Stopwatch, Statistics::Descriptive
AUTHOR
The original code (written before April 20, 2001) was written by Andrew Ho <andrew@zeuscat.com>, and is copyright (c) 2000-2001 Andrew Ho. Versions up to 0.5 are distributed under the same terms as Perl.
Maintenance of this module is now being done by David Coppit <david@coppit.org>.