NAME
Test::FAQ - Frequently Asked Questions about testing with Perl
DESCRIPTION
Frequently Asked Questions about testing in general and specific issues with Perl.
Is there any tutorial on testing?
Are there any modules for testing?
A whole bunch. Start with Test::Simple, then move on to Test::More. Then go to http://search.cpan.org and search for "Test".
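For instance, a minimal Test::More script looks something like this (the things being checked are made up for illustration):

    #!/usr/bin/perl -w
    use strict;
    use Test::More tests => 2;

    # ok() checks that something is true, is() compares two values.
    ok( 1 + 1 == 2,         'basic arithmetic still works' );
    is( lc("PERL"), 'perl', 'lc() lowercases a string' );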
Are there any modules for testing web pages/CGI programs?
Test::WWW::Mechanize, Test::WWW::Selenium
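A rough Test::WWW::Mechanize sketch might look like this; the URL, page title and content are placeholders, and it assumes your application is already running:

    use strict;
    use Test::More tests => 3;
    use Test::WWW::Mechanize;

    my $mech = Test::WWW::Mechanize->new;

    # Point this at your own application.
    $mech->get_ok( 'http://localhost:8080/login', 'fetched the login page' );
    $mech->title_is( 'Login', 'page has the expected title' );
    $mech->content_contains( 'Password', 'login form mentions a password' );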
Are there any modules for testing external programs?
Can you do xUnit/JUnit style testing in Perl?
Yes, Test::Class allows you to write test methods while continuing to use all the usual CPAN testing modules. It is the best and most perlish way to do xUnit style testing.
Test::Unit is a more direct port of xUnit to Perl, but it does not use the Perl conventions and does not play well with other CPAN testing modules. As of this writing, it is abandoned. Do not use.
Test::Inline (aka Pod::Tests) is also worth mentioning: it allows you to put tests into the POD in the same file as the code.
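A minimal Test::Class sketch, using an invented TestPirate class and a plain hash as a stand-in for a real object:

    package TestPirate;
    use base 'Test::Class';
    use Test::More;

    # Runs before every test method.
    sub setup : Test(setup) {
        my $self = shift;
        $self->{pirate} = { name => 'Roberts' };
    }

    sub name : Test(2) {
        my $self = shift;
        ok( exists $self->{pirate}{name}, 'pirate has a name' );
        is( $self->{pirate}{name}, 'Roberts', 'and it is the expected one' );
    }

    package main;
    TestPirate->runtests;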
How do I test my module is backwards/forwards compatible?
First, install a bunch of perls of commonly used versions. At the moment, you could try these:
5.7.2
5.6.1
5.005_03
5.004_05
If you're feeling brave, you might also want to have these on hand:
bleadperl
5.6.0
5.004_04
5.004
Going back beyond 5.003 is probably beyond the call of duty.
You can then add something like this to your Makefile.PL. It overrides the ExtUtils::MakeMaker test_via_harness()
method to run the tests against several different versions of Perl.
    # If PERL_TEST_ALL is set, run "make test" against
    # other perls as well as the current perl.
    {
        package MY;

        sub test_via_harness {
            my($self, $orig_perl, $tests) = @_;

            # Names of your other perl binaries.
            my @other_perls = qw(perl5.004_05 perl5.005_03 perl5.7.2);

            my @perls = ($orig_perl);
            push @perls, @other_perls if $ENV{PERL_TEST_ALL};

            my $out;
            foreach my $perl (@perls) {
                $out .= $self->SUPER::test_via_harness($perl, $tests);
            }

            return $out;
        }
    }
Then re-run your Makefile.PL with the PERL_TEST_ALL environment variable set:

    PERL_TEST_ALL=1 perl Makefile.PL

Now "make test" will run against each of your other perls.
If I'm testing Foo::Bar, where do I put tests for Foo::Bar::Baz?
How do I know when my tests are good enough?
Use tools for measuring the code coverage of your tests, i.e. how many of your source code lines/subs/expressions/paths are executed (aka covered) by the test suite. The more, the better, of course, although you may not be able to achieve 100%. Whatever your test suite does not cover is, basically, untested, which means it may work in surprising ways (e.g. not do what is intended or documented), have bugs (e.g. return wrong results), or not work at all.
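Devel::Cover is the usual tool for this. Assuming a MakeMaker-based distribution, a typical run looks something like:

    cover -delete
    HARNESS_PERL_SWITCHES=-MDevel::Cover make test
    cover

which clears any old coverage data, runs the test suite with coverage collection switched on, and then prints a report of statement, branch and subroutine coverage.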
How do I measure the coverage of my test suite?
How do I get tests to run in a certain order?
Tests run in alphabetical order, so simply name your test files in the order you want them to run. Numbering your test files works, too.
    t/00_compile.t
    t/01_config.t
    t/zz_teardown.t

0 runs first, z runs last.
To achieve a specific order, try Test::Manifest.
Typically you do not want your tests to require being run in a certain order, but it can be useful to do a compile check first or to run the tests on a very basic module before everything else. This gives you early warning when a basic module fails, since that failure will bring everything else down.
Another use is a suite-wide setup/teardown, such as creating and deleting a large test database, which may be too expensive to do for every test.
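A minimal sketch of such a setup file; the file name and the "database" (here just a plain file) are invented for illustration:

    # t/00_setup.t -- runs first because of the leading "00".
    use strict;
    use warnings;
    use Test::More tests => 1;

    # Stand-in for creating a real, expensive fixture.
    open my $db, '>', 't/test.db' or die "Can't create t/test.db: $!";
    print {$db} "fake fixture data\n";
    close $db;

    ok( -e 't/test.db', 'suite-wide test fixture created' );

A matching t/zz_teardown.t would unlink t/test.db at the end of the run.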
We recommend against numbering every test file. For most files this ordering will be arbitrary and the leading number obscures the real name of the file. See "What should I name my test files?" for more information.
What should I name my tests?
What should I name my test files?
A test filename serves three purposes:
Most importantly, it serves to identify what is being tested. Each test file should test a clear piece of functionality. This could be a single class, a single method, even a single bug.
The order in which tests are run is usually dictated by the filename. See "How do I get tests to run in a certain order?" for details.
Finally, the grouping of tests into common bits of functionality can be achieved by directory and filenames. For example, all the tests for Test::Builder are in the t/Builder/ directory.
As an example, t/Builder/reset.t contains the tests for Test::Builder->reset. t/00compile.t checks that everything compiles, and it will run first. t/dont_overwrite_die_handler.t checks that we don't overwrite the $SIG{__DIE__} handler.
How do I deal with tests that sometimes pass and sometimes fail?
How do I test with a database/network/server that the user may or may not have?
What's a good way to test lists?
is_deeply() from Test::More, as well as Test::Deep.
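For example, with is_deeply() you compare a reference to the list you got against a reference to the list you expect (the lists here are made up):

    use strict;
    use Test::More tests => 1;

    my @got      = map { $_ * 2 } 1 .. 3;
    my @expected = ( 2, 4, 6 );

    is_deeply( \@got, \@expected, 'doubling 1..3 gives 2, 4, 6' );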
Is there such a thing as untestable code?
There's always compile/export checks.
Code must be written with testability in mind: separate form from functionality.
What do I do when I can't make the code do the same thing twice?
Force it to do the same thing twice.
Even a random number generator can be tested.
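For example, rand() repeats itself if you seed it, so a deterministic test might look like this (the seed is arbitrary):

    use strict;
    use Test::More tests => 1;

    # Record a sequence, reseed, and check we get the same sequence back.
    srand(42);
    my @first = map { rand } 1 .. 5;

    srand(42);
    my @second = map { rand } 1 .. 5;

    is_deeply( \@first, \@second, 'same seed gives the same random sequence' );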
How do I test a GUI?
How do I test an image generator?
How do I test that my code handles failures gracefully?
How do I check the right warnings are issued?
How do I test code that prints?
I want to test that my code dies when I do X
I want to print out more diagnostic info on failure.
    ok(...) || diag "...";
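For example (the $pirate hash and the values are made up):

    use strict;
    use Test::More tests => 1;

    my $pirate = { name => 'Wesley' };

    ok( $pirate->{name} eq 'Roberts', "pirate's name" )
        || diag "got the name '$pirate->{name}', expected 'Roberts'";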
How can I simulate failures to make sure that my code does the Right Thing in the face of them?
Why use an ok() function?
On Tue, Aug 28, 2001 at 02:12:46PM +0100, Robin Houston wrote:

    > Michael Schwern wrote:
    > > Ah HA! I've been wondering why nobody ever thinks to write a simple
    > > ok() function for their tests! perlhack has bad testing advice.
    >
    > Could you explain the advantage of having a "simple ok() function"?
Because writing:
print "not " unless some thing worked;
print "ok $test\n"; $test++;
gets rapidly annoying. This is why we made up subroutines in the first place. It also looks like hell and obscures the real purpose.
Besides, that will cause problems on VMS.
    > As somebody who has spent many painful hours debugging test failures,
    > I'm intimately familiar with the _disadvantages_. When you run the
    > test, you know that "test 113 failed". That's all you know, in general.
The second advantage is that you can easily upgrade the ok() function to fix this, either by slapping this line in:

    printf "# Failed test at line %d\n", (caller)[2];

or by simply junking the whole thing and switching to Test::Simple or Test::More, which do all sorts of nice diagnostics-on-failure for you. Their ok() function is backwards compatible with the above.
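Putting those pieces together, a hand-rolled ok() might look something like this sketch (not Test::Simple's real implementation, just the idea):

    my $test = 1;
    sub ok ($;$) {
        my( $ok, $name ) = @_;

        print "not " unless $ok;
        print "ok $test";
        print " - $name" if defined $name;
        print "\n";

        printf "# Failed test at line %d\n", (caller)[2] unless $ok;

        $test++;
        return $ok;
    }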
There are some issues with using Test::Simple to test really basic Perl functionality; you have to choose on a per-test basis. Since Test::Simple doesn't use pack(), it's safe for t/op/pack.t to use Test::Simple. I just didn't want to make the perlhack patching example too complicated.
Dummy Mode
    > One compromise would be to use a test-generating script, which allows
    > the tests to be structured simply and _generates_ the actual test
    > code. One could then grep the generated test script to locate the
    > failing code.
This is a very interesting, and very common, response to the problem. I'm going to make some observations about reactions to testing; they're not specific to you.
If you've ever read the Bastard Operator From Hell series, you'll recall the Dummy Mode.
The words "power surging" and "drivers" have got her. People hear
words like that and go into Dummy Mode and do ANYTHING you say. I
could tell her to run naked across campus with a powercord rammed
up her backside and she'd probably do it... Hmmm...
There seems to be a Dummy Mode with respect to testing. An otherwise competent person goes to write a test and they suddenly forget all basic programming practice.
The reasons for using an ok() function above are the same reasons to use functions in general; we should all know them. We'd laugh our heads off at code that repeated as much as your average test does. These are newbie mistakes.
And the normal 'can do' flair seems to disappear. I know Robin. I *know* that in any other situation he would have come up with the caller() trick in about 15 seconds flat. Instead, weird, elaborate, inelegant hacks are thought up to solve the simplest problems.
I guess there are certain programming idioms that are foreign enough to throw your brain into reverse if you're not ready for them. Like trying to think in Lisp, for example. Or being presented with OO for the first time. I guess writing tests is one of those.
How do I use Test::More without depending on it?
Install Test::More into t/lib under your source directory. Then in your tests say use lib 't/lib'.
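A test file would then start something like this:

    use strict;
    use lib 't/lib';            # the bundled copy is found first
    use Test::More tests => 1;

    pass('loaded Test::More from t/lib or from the system');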
How do I deal with threads and forking?
Why do I need more than ok?
Since every test can be reduced to checking if a statement is true, ok() can test everything. But ok() doesn't tell you why the test failed. For that you need to tell the test more... which is why you need Test::More.
    ok $pirate->name eq "Roberts", "person's name";

    not ok 1 - person's name
    # Failed test at pirates.t line 23.
If the above fails, you don't know what $pirate->name returned. You have to go in and add a diag call. This is time consuming. If it's a heisenbug, it might not fail again! If it's a user reporting a test failure, they might not be bothered to hack the tests to give you more information.
    is $pirate->name, "Roberts", "person's name";

    not ok 1 - person's name
    # Failed test at pirates.t line 23.
    #      got: 'Wesley'
    # expected: 'Roberts'
Using is() from Test::More, you now know what value you got and what value you expected. The most useful functions in Test::More are is(), like() and is_deeply().
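For example, a quick sketch of like() and is_deeply() (the values are made up):

    use strict;
    use Test::More tests => 2;

    # like() checks a value against a regex, is_deeply() compares structures.
    like( "Dread Pirate Roberts", qr/^Dread Pirate/, 'title comes first' );
    is_deeply( { name => 'Roberts' }, { name => 'Roberts' }, 'structures match' );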
What's wrong with print $test ? "ok" : "not ok"?
How do I check for an infinite loop?
On Mon, Mar 18, 2002 at 03:57:55AM -0500, Mark-Jason Dominus wrote:

    > Michael The Schwern <schwern@pobox.com> says:
    > > Use alarm and skip the test if $Config{d_alarm} is false (see
    > > t/op/alarm.t for an example). If you think the infinite loop is due
    > > to a programming glitch, as opposed to a cross-platform issue, this
    > > will be enough.
    >
    > Thanks very much!
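A sketch along those lines; possibly_looping_code() is a stand-in for whatever you suspect of looping forever, and the 10 second limit is arbitrary:

    use strict;
    use Test::More tests => 1;
    use Config;

    SKIP: {
        skip "alarm() is not available on this platform", 1
            unless $Config{d_alarm};

        my $finished = eval {
            local $SIG{ALRM} = sub { die "timed out\n" };
            alarm 10;
            possibly_looping_code();
            alarm 0;
            1;
        };
        ok( $finished, 'code returned before the alarm went off' );
    }

    sub possibly_looping_code { return 1 }    # stand-in for the real code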
How can I check that flock works?
How do I use the comparison functions of a testing module without it being a test?
Any testing function based on Test::Builder (most are) can be quieted so it does not do any testing; it simply returns true or false. Use the following code:
    use Test::More;     # or any testing module
    use Test::Builder;
    use File::Spec;

    # Get the internal Test::Builder object
    my $tb = Test::Builder->new;
    $tb->plan("no_plan");

    # Keep Test::Builder from displaying anything
    $tb->no_diag(1);
    $tb->no_ending(1);
    $tb->no_header(1);
    $tb->output( File::Spec->devnull );

    # Now you can use the testing function.
    print is_deeply( "foo", "bar" ) ? "Yes" : "No";