NAME
Bencher::Scenario::TextTableModules - Benchmark modules that generate text tables
VERSION
This document describes version 0.05 of Bencher::Scenario::TextTableModules (from Perl distribution Bencher-Scenario-TextTableModules), released on 2016-06-26.
SYNOPSIS
To run benchmark with default options:
% bencher -m TextTableModules
To run module startup overhead benchmark:
% bencher --module-startup -m TextTableModules
For more options (dump scenario, list/include/exclude/add participants, list/include/exclude/add datasets, etc), see bencher or run bencher --help.
BENCHMARKED MODULES
Version numbers shown below are the versions used when running the sample benchmark.
Text::ANSITable 0.48
Text::ASCIITable 0.20
Text::FormatTable 1.03
Text::MarkdownTable 0.3.1
Text::Table 1.130
Text::Table::Tiny 0.04
Text::Table::Org 0.02
Text::Table::CSV 0.01
Text::TabularDisplay 1.38
BENCHMARK PARTICIPANTS
Text::ANSITable (perl_code)
Text::ASCIITable (perl_code)
Text::FormatTable (perl_code)
Text::MarkdownTable (perl_code)
Text::Table (perl_code)
Text::Table::Tiny (perl_code)
Text::Table::Org (perl_code)
Text::Table::CSV (perl_code)
Text::TabularDisplay (perl_code)
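Each participant renders the same two-dimensional array of rows into a text table using its module's API. The scenario's actual participant code is not reproduced here; the following is a minimal, hypothetical sketch of rendering a small table with two of the benchmarked modules (the sample rows and column names are made up for illustration):

use Text::Table::Tiny ();
use Text::ASCIITable;

# Hypothetical sample data; the scenario generates its own datasets.
my @rows = (
    ['col1', 'col2', 'col3'],          # header row
    ['r1c1', 'r1c2', 'r1c3'],
    ['r2c1', 'r2c2', 'r2c3'],
);

# Text::Table::Tiny: a single function call returns the rendered table.
print Text::Table::Tiny::table(rows => \@rows, header_row => 1), "\n";

# Text::ASCIITable: object-oriented; set columns, add rows, then draw.
my $t = Text::ASCIITable->new;
$t->setCols(@{ $rows[0] });
$t->addRow(@$_) for @rows[1 .. $#rows];
print $t->draw;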
BENCHMARK DATASETS
tiny (1x1)
small (3x5)
wide (30x5)
long (3x300)
large (30x300)
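The dataset names encode the table dimensions. Assuming the first number is the column count and the second the row count (an assumption; the scenario's own generator code is not shown here), a dataset such as small (3x5) can be built as a plain array of array-refs:

# Hypothetical generator; the scenario ships its own dataset builder.
sub make_table {
    my ($num_cols, $num_rows) = @_;
    my @table = ([ map { "col$_" } 1 .. $num_cols ]);   # header row
    for my $r (1 .. $num_rows) {
        push @table, [ map { "r${r}c$_" } 1 .. $num_cols ];
    }
    return \@table;
}

my $small = make_table(3, 5);   # roughly the "small (3x5)" dataset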
SAMPLE BENCHMARK RESULTS
Run on: perl: v5.22.2, CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz (4 cores), OS: GNU/Linux Debian version 8.0, OS kernel: Linux version 3.16.0-4-amd64.
Benchmark with default options (bencher -m TextTableModules):
+----------------------+----------------+-----------+-------------+------------+---------+---------+
| participant | dataset | rate (/s) | time (ms) | vs_slowest | errors | samples |
+----------------------+----------------+-----------+-------------+------------+---------+---------+
| Text::ANSITable | large (30x300) | 3.66 | 273 | 1 | 8e-05 | 21 |
| Text::ASCIITable | large (30x300) | 8.34 | 120 | 2.28 | 5.5e-05 | 21 |
| Text::FormatTable | large (30x300) | 22.2 | 45 | 6.08 | 1.5e-05 | 20 |
| Text::ANSITable | long (3x300) | 32.8 | 30.5 | 8.97 | 1.5e-05 | 20 |
| Text::TabularDisplay | large (30x300) | 55 | 18.2 | 15 | 5.5e-06 | 20 |
| Text::ASCIITable | long (3x300) | 83.4 | 12 | 22.8 | 9.3e-06 | 20 |
| Text::MarkdownTable | large (30x300) | 111 | 8.97 | 30.5 | 3.1e-06 | 20 |
| Text::Table | large (30x300) | 130 | 7.6 | 36 | 8.1e-06 | 20 |
| Text::ANSITable | wide (30x5) | 160 | 6.26 | 43.7 | 2.9e-06 | 20 |
| Text::FormatTable | long (3x300) | 194 | 5.16 | 53 | 2.2e-06 | 21 |
| Text::Table::CSV | large (30x300) | 250 | 3.9 | 70 | 1e-05 | 21 |
| Text::Table::Org | large (30x300) | 331 | 3.02 | 90.5 | 6.4e-07 | 20 |
| Text::ASCIITable | wide (30x5) | 380 | 2.63 | 104 | 1.5e-06 | 20 |
| Text::TabularDisplay | long (3x300) | 393 | 2.54 | 107 | 1.8e-06 | 20 |
| Text::Table::Tiny | large (30x300) | 403 | 2.48 | 110 | 6.4e-07 | 20 |
| Text::MarkdownTable | long (3x300) | 509 | 1.96 | 139 | 4.3e-07 | 20 |
| Text::Table | long (3x300) | 580 | 1.7 | 160 | 2.2e-06 | 20 |
| Text::FormatTable | wide (30x5) | 843 | 1.19 | 231 | 4.3e-07 | 20 |
| Text::Table | wide (30x5) | 1000 | 0.8 | 300 | 1.5e-05 | 20 |
| Text::ANSITable | small (3x5) | 1250 | 0.803 | 341 | 4e-07 | 23 |
| Text::Table::CSV | long (3x300) | 2040 | 0.49 | 558 | 2.1e-07 | 20 |
| Text::Table::Org | long (3x300) | 2300 | 0.434 | 630 | 5.3e-08 | 20 |
| Text::TabularDisplay | wide (30x5) | 2600 | 0.384 | 711 | 2.1e-07 | 20 |
| Text::Table::Tiny | long (3x300) | 2700 | 0.37 | 738 | 4.7e-08 | 26 |
| Text::ASCIITable | small (3x5) | 3400 | 0.3 | 920 | 4.3e-07 | 20 |
| Text::MarkdownTable | wide (30x5) | 4100 | 0.244 | 1120 | 2.2e-07 | 29 |
| Text::ANSITable | tiny (1x1) | 4480 | 0.223 | 1220 | 1.9e-07 | 26 |
| Text::FormatTable | small (3x5) | 8000 | 0.13 | 2200 | 2.1e-07 | 21 |
| Text::Table | small (3x5) | 8400 | 0.12 | 2300 | 2.1e-07 | 20 |
| Text::Table::Org | wide (30x5) | 11500 | 0.0867 | 3150 | 2.5e-08 | 22 |
| Text::ASCIITable | tiny (1x1) | 13000 | 0.078 | 3500 | 9.9e-08 | 23 |
| Text::Table::Tiny | wide (30x5) | 13000 | 0.077 | 3600 | 1.1e-07 | 20 |
| Text::Table::CSV | wide (30x5) | 13300 | 0.0749 | 3650 | 2.4e-08 | 24 |
| Text::MarkdownTable | small (3x5) | 14339.7 | 0.0697365 | 3919.63 | 1.2e-11 | 20 |
| Text::TabularDisplay | small (3x5) | 17000 | 0.06 | 4600 | 9e-08 | 28 |
| Text::Table | tiny (1x1) | 22400 | 0.0446 | 6130 | 4.4e-08 | 30 |
| Text::MarkdownTable | tiny (1x1) | 29000 | 0.035 | 7800 | 5.3e-08 | 20 |
| Text::FormatTable | tiny (1x1) | 38000 | 0.026 | 10000 | 5.3e-08 | 20 |
| Text::Table::Org | small (3x5) | 66000 | 0.0151 | 18000 | 6.1e-09 | 24 |
| Text::TabularDisplay | tiny (1x1) | 67000 | 0.015 | 18000 | 2e-08 | 20 |
| Text::Table::Tiny | small (3x5) | 67800 | 0.0147 | 18500 | 6.5e-09 | 21 |
| Text::Table::CSV | small (3x5) | 88600 | 0.0113 | 24200 | 9.8e-09 | 21 |
| Text::Table::Tiny | tiny (1x1) | 150000 | 0.0066 | 42000 | 1.3e-08 | 21 |
| Text::Table::Org | tiny (1x1) | 170000 | 0.0059 | 47000 | 6.7e-09 | 20 |
| Text::Table::CSV | tiny (1x1) | 325000 | 0.00308 | 88800 | 7.2e-10 | 27 |
+----------------------+----------------+-----------+-------------+------------+---------+---------+
Benchmark module startup overhead (bencher -m TextTableModules --module-startup):
+----------------------+-----------+------------------------+------------+---------+---------+
| participant | time (ms) | mod_overhead_time (ms) | vs_slowest | errors | samples |
+----------------------+-----------+------------------------+------------+---------+---------+
| Text::ANSITable | 36 | 33.2 | 1 | 5.8e-05 | 20 |
| Text::MarkdownTable | 31 | 28.2 | 1.2 | 6.8e-05 | 22 |
| Text::Table | 15 | 12.2 | 2.4 | 4.3e-05 | 20 |
| Text::ASCIITable | 15 | 12.2 | 2.4 | 4.3e-05 | 20 |
| Text::FormatTable | 7.8 | 5 | 4.6 | 2.4e-05 | 20 |
| Text::Table::Tiny | 6.9 | 4.1 | 5.2 | 2.7e-05 | 20 |
| Text::TabularDisplay | 5.7 | 2.9 | 6.3 | 1.7e-05 | 20 |
| Text::Table::Org | 3.2 | 0.4 | 11 | 1.2e-05 | 20 |
| Text::Table::CSV | 3.1 | 0.3 | 12 | 1.9e-05 | 21 |
| perl -e1 (baseline) | 2.8 | 0 | 13 | 2.1e-05 | 20 |
+----------------------+-----------+------------------------+------------+---------+---------+
DESCRIPTION
Packaging a benchmark script as a Bencher scenario makes it convenient to include/exclude/add participants or datasets (either via CLI or Perl code), send the results to a central repository, and so on. See Bencher and bencher (CLI) for more details.
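For example, the scenario can in principle be driven from Perl code rather than the CLI via Bencher::Backend. This is a hedged sketch, assuming bencher() accepts the action and scenario_module arguments as shown; consult the Bencher documentation for the authoritative interface:

use Bencher::Backend;
use Data::Dumper;

# Assumed interface: run the TextTableModules scenario and get back
# an enveloped result ([status, message, payload, ...]).
my $res = Bencher::Backend::bencher(
    action          => 'bench',
    scenario_module => 'TextTableModules',
);
print Dumper($res);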
HOMEPAGE
Please visit the project's homepage at https://metacpan.org/release/Bencher-Scenario-TextTableModules.
SOURCE
Source repository is at https://github.com/perlancar/perl-Bencher-Scenario-TextTableModules.
BUGS
Please report any bugs or feature requests on the bugtracker website https://rt.cpan.org/Public/Dist/Display.html?Name=Bencher-Scenario-TextTableModules
When submitting a bug or request, please include a test-file or a patch to an existing test-file that illustrates the bug or desired feature.
AUTHOR
perlancar <perlancar@cpan.org>
COPYRIGHT AND LICENSE
This software is copyright (c) 2016 by perlancar@cpan.org.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.