This test isn't so much a unit test as a simple tool for verifying that generated values fall within an acceptable range of bias. However, the distributed version of this test uses the null seed so that it is deterministic: with a random seed there is a chance that some short-term run of random values would trigger a spurious failure (they are random numbers, after all).
Of course, as the total number of values generated increases, we become more confident that the values are unbiased. Because Session::Token extracts individual bytes from the PRNG, any bias in its output characters will be fairly pronounced. The most difficult alphabet in which to detect bias would be something like this:
[ map { chr } (0, 0 .. 254) ]
Note that 0 appears twice, so the character probabilities are:
P(0) = 2/256
P(1) = 1/256
...
P(254) = 1/256
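To see where those probabilities come from, here is a short sketch that builds the same 256-entry alphabet and tallies it. The module itself is Perl; Python is used here purely as illustration:

```python
from collections import Counter
from fractions import Fraction

# Equivalent of the Perl expression [ map { chr } (0, 0 .. 254) ]:
# chr(0) appears twice, chr(1) .. chr(254) appear once each.
alphabet = [chr(c) for c in [0] + list(range(255))]

counts = Counter(alphabet)
probs = {ch: Fraction(n, len(alphabet)) for ch, n in counts.items()}

print(len(alphabet))   # 256 entries
print(probs[chr(0)])   # 1/128 (i.e. 2/256)
print(probs[chr(1)])   # 1/256
```

With only a 1/256 excess on a single character, a frequency test needs a very large sample before the deviation stands out from ordinary sampling noise.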
Algorithms that extract entire words from the PRNG and use those words in the modulus computation will have bias that is more difficult to detect.
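To illustrate why word-based extraction hides bias (again a Python sketch, not the module's actual code): reducing an 8-bit value modulo 3 gives residue 0 one extra chance out of 256, while with 32-bit words the same single-value excess is spread over 2^32 possibilities and is far too small to observe in any practical sample:

```python
from collections import Counter

# Mod bias for 8-bit words: 256 = 3 * 85 + 1, so residue 0 gets one extra value.
byte_counts = Counter(b % 3 for b in range(256))
print(byte_counts)  # Counter({0: 86, 1: 85, 2: 85})

# Per-residue probability excess over the ideal 1/3:
bias_8 = byte_counts[0] / 256 - 1 / 3
print(bias_8)  # about 2.6e-3

# For 32-bit words: 2**32 = 3 * 1431655765 + 1, so the excess shrinks to
# roughly 1 part in 2**32.
bias_32 = ((2**32 // 3) + 1) / 2**32 - 1 / 3
print(bias_32)  # about 1.6e-10
```

A frequency test that can spot a 2.6e-3 excess with a modest sample would need on the order of 10^20 samples to spot the 32-bit version.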
For further investigation, set the S_T_NON_DETERMINISTIC environment variable to run the test with a random seed (it will fail every now and then due to pure chance).
The default alphabet, total number of characters to generate, and tolerance threshold can be controlled with the S_T_ALPHABET, S_T_TOTAL, and S_T_TOLERANCE environment variables.
Example: To see this test fail, run it with the alphabet set to "aabc", the alphabet used in Session::Token's mod-bias documentation:
$ S_T_ALPHABET=aabc make test
...
t/no-mod-bias.t .... Not within tolerance: a (97): 0.99636 > 0.01 at t/no-mod-bias.t line 95.
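The reported value of roughly 1.0 for "a" can be reproduced arithmetically. The sketch below assumes (inferred from the failure message, not from the test's source) that the tolerance check compares each character's observed frequency against a uniform per-entry expectation of 1/length as a relative deviation; since "a" fills two of the four alphabet slots, it shows up about twice as often as that expectation:

```python
import random

random.seed(0)  # deterministic, analogous to the test's fixed seed

alphabet = "aabc"
total = 100_000
counts = {}
for _ in range(total):
    ch = random.choice(alphabet)  # uniform over the 4 slots; 'a' fills two
    counts[ch] = counts.get(ch, 0) + 1

expected = total / len(alphabet)  # 1/4 of the draws per alphabet entry
# Assumed form of the tolerance check: relative deviation from expected.
deviation = abs(counts["a"] - expected) / expected
print(deviation)  # close to 1.0, far above a 0.01 tolerance
```

The characters "b" and "c" each occupy one slot, so their deviations stay near zero and only "a" trips the threshold, matching the failure output above.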