NAME
Hash::Ordered::Benchmarks - Ordered hash benchmarking
VERSION
version 0.005
INTRODUCTION
The Hash::Ordered internals are simple: a hash of data and an array of ordered keys. I thought this would perform well for common tasks and likely outperform more complicated ordered hash implementations, so I decided to do some benchmarking to test it.
Note: since the initial benchmarking, Hash::Ordered gained just-in-time indexing of the keys array to support faster tombstone deletion. It also now supports tie. The revised benchmarks include the tie mode for consistency with the other tied hash implementations.
MODULES TESTED
In my review of alternatives to Hash::Ordered, six seemed sufficiently general-purpose to be worth benchmarking. The modules tested are listed in the benchmark output in shorthand:
Array::AsHash — denoted a:ah
Array::OrdHash — denoted a:oh
Data::XHash — denoted d:xh
Hash::Ordered — denoted h:o and marked with "*"
Tie::Hash::Indexed — denoted t:h:i
Tie::IxHash — denoted t:ix
Tie::LLHash — denoted t:llh
Note that Tie::Hash::Indexed is written in XS and may require forced installation, as its tests often fail on Perl 5.18+ due to the hash randomization change.
If there are different methods of doing something with a module, the variations are described in each section below.
BENCHMARKS
I conducted benchmarking with the Benchmark module. The test script is in the devel directory of the distribution. Tests were run on Perl 5.20.2 on a MacBook Pro (darwin-thread-multi-2level). Each benchmark ran for 5 CPU seconds.
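The real driver lives in the devel directory; as a hedged illustration of the approach, a Benchmark comparison run for 5 CPU seconds per case might look like this (the labels and the 100-element size are just for the example):

  use Benchmark qw/cmpthese/;
  use Hash::Ordered;
  use Tie::IxHash;

  my @pairs = map { $_ => $_ } 1 .. 100;

  # A negative count tells Benchmark to run each case for at least
  # that many CPU seconds (5 here, as described above).
  cmpthese( -5, {
      'h:o_oo'  => sub { my $oh = Hash::Ordered->new(@pairs) },
      't:ix_oo' => sub { my $ix = Tie::IxHash->new(@pairs) },
  } );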
Benchmarks were run at several different scales to reveal differences in efficiency as hash size grows. The details are described in each section below.
A seed list of keys and values was generated from random integers using Math::Random::MT::Auto. The same seed list was used for all benchmarks unless otherwise noted.
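For illustration, a seed list of random integer keys and values could be generated along these lines (the variable names are hypothetical; the actual script may differ):

  use Math::Random::MT::Auto qw/irand/;

  my $n = 1000;                                  # number of key/value pairs
  my @seed_list = map { irand() } 1 .. 2 * $n;   # alternating keys and values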
I did not test advanced features of these modules, as apples-to-apples comparison is difficult. Still, the performance on common, simple measures could suggest how features that combine these operations might perform.
Ordered hash creation
I tested hash creation for 10, 100 and 1000 elements. For some modules there were different options for creating a hash:
Array::AsHash takes an array reference, with an option to use it directly or to clone it. In one case I provided the seed list array reference with the clone option set to true ("a:ah_cp"). In another I created a new array reference from the seed list and provided it directly ("a:ah_rf").
Hash::Ordered can be initialized either with new ("h:o_oo") or via tie ("h:o_th"), as sketched below.
Tie::IxHash can be initialized either with new ("t:ix_oo") or via tie ("t:ix_th").
Data::XHash can be created with a list ("d:xh_ls") or an array reference ("d:xh_rf").
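As a sketch of the new-versus-tie variants above, assuming a @seed_list of key/value pairs (the Array::AsHash and Data::XHash calls are omitted for brevity):

  use Hash::Ordered;
  use Tie::IxHash;

  my @seed_list = ( a => 1, b => 2, c => 3 );

  my $oh = Hash::Ordered->new(@seed_list);      # h:o_oo
  tie my %oh, 'Hash::Ordered';                  # h:o_th
  %oh = @seed_list;                             # pairs stored in list order via the tie interface

  my $ix = Tie::IxHash->new(@seed_list);        # t:ix_oo
  tie my %ix, 'Tie::IxHash', @seed_list;        # t:ix_th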
As expected, when Array::AsHash gets an array reference, it's very fast. Tie::Hash::Indexed does well here, also. Of the non-XS, more hash-like choices, Hash::Ordered does well.
Results for ordered hash creation for 10 elements

  t:h:i    128982/s
  a:ah_rf  105873/s
  h:o_oo    81461/s *
  h:o_th    79219/s *
  t:ix_oo   61948/s
  t:ix_th   61672/s
  a:ah_cp   59530/s
  a:oh      49399/s
  t:llh     32642/s
  d:xh_rf   14260/s
  d:xh_ls   14193/s

Results for ordered hash creation for 100 elements

  t:h:i    15807/s
  a:ah_rf  14453/s
  h:o_oo    9414/s *
  h:o_th    9309/s *
  t:ix_th   7391/s
  t:ix_oo   7318/s
  a:oh      6646/s
  a:ah_cp   5552/s
  t:llh     3408/s
  d:xh_rf   1593/s
  d:xh_ls   1589/s

Results for ordered hash creation for 1000 elements

  a:ah_rf  1479/s
  t:h:i    1455/s
  h:o_th    873/s *
  h:o_oo    844/s *
  t:ix_oo   760/s
  t:ix_th   754/s
  a:ah_cp   752/s
  a:oh      715/s
  t:llh     331/s
  d:xh_ls   160/s
  d:xh_rf   157/s
Getting hash elements
I tested retrieving values for 10% of the keys, randomly selected, from hashes of 10, 100 and 1000 elements. The hash was created beforehand so the benchmarks reflect only element access.
Some modules had choices for how to retrieve a value, usually between a method (denoted with "_oo"), tied hash access ("_th") or with a dereference ("_rf").
Generally, method calls turned out faster than other approaches for a given module, demonstrating the inefficiency of tied objects.
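For example, for Hash::Ordered the method and tied variants of a fetch look roughly like this (a minimal sketch; $oh and %oh_tied are hypothetical names):

  use Hash::Ordered;

  my $oh = Hash::Ordered->new( a => 1, b => 2 );
  my $x  = $oh->get('a');                       # h:o_oo (method call)

  tie my %oh_tied, 'Hash::Ordered';
  %oh_tied = ( a => 1, b => 2 );
  my $y = $oh_tied{a};                          # h:o_th (tied hash access)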
Results for fetching ~10% of 10 elements

  h:o_oo   1807637/s *
  d:xh_oo  1351641/s
  t:ix_oo  1248157/s
  t:h:i     947034/s
  h:o_th    835048/s *
  d:xh_rf   721437/s
  t:ix_th   661992/s
  a:oh      568502/s
  t:llh     516942/s
  a:ah      258923/s

Results for fetching ~10% of 100 elements

  h:o_oo   288778/s *
  d:xh_oo  182530/s
  t:ix_oo  165729/s
  t:h:i    128128/s
  h:o_th   108201/s *
  t:ix_th   83358/s
  d:xh_rf   69972/s
  a:oh      69119/s
  t:llh     58588/s
  a:ah      27311/s

Results for fetching ~10% of 1000 elements

  h:o_oo   29847/s *
  d:xh_oo  19056/s
  t:ix_oo  17098/s
  t:h:i    13097/s
  h:o_th   11072/s *
  d:xh_rf   9363/s
  t:ix_th   8339/s
  a:oh      6901/s
  t:llh     5911/s
  a:ah      2748/s
Setting hash elements
I tested changing values for 10% of the keys, randomly selected, from hashes of 10, 100 and 1000 elements. The hash was created beforehand so the benchmarks reflect only element mutation. No new keys were added.
Some modules had choices for how to modify a value, usually between a method (denoted with "_oo"), tied hash access ("_th") or with a dereference ("_rf").
Again, methods outperformed.
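Sketching the Hash::Ordered case again, mutation of an existing key via method versus tied access (illustrative names only):

  use Hash::Ordered;

  my $oh = Hash::Ordered->new( a => 1 );
  $oh->set( a => 42 );                          # h:o_oo (method call)

  tie my %oh_tied, 'Hash::Ordered';
  $oh_tied{a} = 1;
  $oh_tied{a} = 42;                             # h:o_th (tied hash access)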
Results for replacing ~10% of 10 elements

  h:o_oo   1389487/s *
  t:h:i     981169/s
  d:xh_oo   967494/s
  t:ix_oo   906419/s
  h:o_th    674507/s *
  t:llh     607517/s
  d:xh_rf   554796/s
  a:oh      543047/s
  t:ix_th   524334/s
  a:ah      159412/s

Results for replacing ~10% of 100 elements

  h:o_oo   197605/s *
  t:h:i    133501/s
  d:xh_oo  120446/s
  t:ix_oo  110297/s
  h:o_th    84731/s *
  t:llh     73805/s
  d:xh_rf   65410/s
  a:oh      64499/s
  t:ix_th   48432/s
  a:ah      16573/s

Results for replacing ~10% of 1000 elements

  h:o_oo   20366/s *
  t:h:i    13805/s
  d:xh_oo  12302/s
  t:ix_oo  11241/s
  h:o_th    8539/s *
  t:llh     7485/s
  d:xh_rf   6792/s
  a:oh      6596/s
  t:ix_th   6132/s
  a:ah      1641/s
Adding hash elements
I tested adding 10, 100 and 1000 elements to an empty hash.
Some modules had choices for how to append a value, usually between a method (denoted with "_oo"), tied hash access ("_th") or with a dereference ("_rf").
For Tie::LLHash, I did not use the "lazy" option, but did the equivalent using tied and a method call:
tied(%tllh)->last( irand(), 42 ) for 1 .. $n;
Generally, the differences were smaller here than in the other benchmarks, but methods still outperformed tied access.
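For Hash::Ordered, the equivalent appends can be sketched as follows (assumed names; irand() comes from Math::Random::MT::Auto as noted earlier):

  use Hash::Ordered;
  use Math::Random::MT::Auto qw/irand/;

  my $n = 100;

  my $oh = Hash::Ordered->new;
  $oh->set( irand(), 42 ) for 1 .. $n;          # h:o_oo: set() on a new key appends it

  tie my %oh_tied, 'Hash::Ordered';
  $oh_tied{ irand() } = 42 for 1 .. $n;         # h:o_th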
Results for adding 10 elements to empty hash

  h:o_oo   332252/s *
  t:h:i    295026/s
  t:ix_oo  251117/s
  h:o_th   244197/s *
  t:ix_th  210624/s
  t:llh    186667/s
  a:oh     137321/s
  a:ah     113216/s
  d:xh_oo   87559/s
  d:xh_rf   81455/s

Results for adding 100 elements to empty hash

  h:o_oo   59359/s *
  t:h:i    57851/s
  t:ix_oo  46438/s
  h:o_th   42761/s *
  t:ix_th  38606/s
  a:oh     35236/s
  t:llh    25875/s
  d:xh_oo  24931/s
  d:xh_rf  22013/s
  a:ah     13754/s

Results for adding 1000 elements to empty hash

  t:h:i    5655/s
  h:o_oo   5232/s *
  t:ix_oo  5059/s
  t:ix_th  4126/s
  a:oh     3763/s
  h:o_th   3473/s *
  t:llh    2728/s
  d:xh_oo  2436/s
  d:xh_rf  2434/s
  a:ah     1310/s
Deleting hash elements
I tested creating hashes of 10, 100 and 1000 elements and then deleting 10% of the keys, chosen randomly. I would have liked to have isolated creation from deletion, but I couldn't figure out a way to do that given how Benchmark runs the same tests over and over.
Some modules had choices for how to delete a value, usually between a method (denoted with "_oo"), tied hash access ("_th") or with a dereference ("_rf").
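As a sketch of the Hash::Ordered variants, with a hypothetical setup that picks ~10% of the keys at random:

  use Hash::Ordered;
  use List::Util qw/shuffle/;

  my $n  = 100;
  my $oh = Hash::Ordered->new( map { $_ => $_ } 1 .. $n );

  # choose ~10% of the keys at random
  my @doomed = ( shuffle( $oh->keys ) )[ 0 .. $n / 10 - 1 ];

  $oh->delete($_) for @doomed;                  # h:o_oo (method call)

  tie my %oh_tied, 'Hash::Ordered';
  %oh_tied = map { $_ => $_ } 1 .. $n;
  delete $oh_tied{$_} for @doomed;              # h:o_th (tied hash access)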
Interestingly, the performance of the different modules changes a lot at the three different sizes, revealing implementation differences. (Recall that some of this reflects creation performance as well as deletion.)
For example, the XS-based Tie::Hash::Indexed does very well, which could be due to its fast creation as much as to fast deletion. Hash::Ordered originally did a linear search to delete a key, which was slow at larger sizes; the just-in-time indexing and tombstone deletion noted in the introduction avoid that. Tie::LLHash also holds up comparatively well at larger sizes, as deleting from a linked list is faster than splicing an element out of an array.
Results for creating 10 element hash then deleting ~10%

  t:h:i    129737/s
  h:o_oo    81617/s *
  h:o_th    72121/s *
  a:ah      61683/s
  t:ix_oo   51891/s
  t:ix_th   49217/s
  a:oh      43608/s
  t:llh     28101/s
  d:xh_oo   12547/s
  d:xh_rf   12522/s

Results for creating 100 element hash then deleting ~10%

  t:h:i    15657/s
  h:o_oo    8145/s *
  h:o_th    7569/s *
  a:oh      5044/s
  t:ix_oo   4947/s
  t:ix_th   4749/s
  a:ah      3600/s
  t:llh     2999/s
  d:xh_oo   1418/s
  d:xh_rf   1374/s

Results for creating 1000 element hash then deleting ~10%

  t:h:i    1493/s
  h:o_oo    752/s *
  h:o_th    698/s *
  t:llh     298/s
  a:oh      213/s
  d:xh_oo   139/s
  d:xh_rf   137/s
  t:ix_oo    64/s
  t:ix_th    63/s
  a:ah       36/s
Extracting the hash as a list
I tested getting an ordered list of pairs from hashes of 10, 100 and 1000 elements. The hash was created beforehand so the benchmarks reflect only conversion to a list.
Oddly, the modules that usually offer more than one way of doing things offer only one here. Even Tie::IxHash doesn't really have an OO way to do it, so I did it longhand:
@list = map { $_ => $tix_oo->FETCH($_) } $tix_oo->Keys;
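For comparison, Hash::Ordered can do this directly with its as_list method; a minimal sketch:

  use Hash::Ordered;

  my $oh   = Hash::Ordered->new( a => 1, b => 2, c => 3 );
  my @list = $oh->as_list;                      # key/value pairs in insertion order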
Because Array::AsHash keeps its internal representation as an ordered list of pairs, it outperforms the rest handily.
Results for listing pairs of 10 element hash

  a:ah     322132/s
  h:o_oo   147906/s *
  t:ix_oo   82783/s
  t:h:i     73645/s
  h:o_th    49342/s *
  a:oh      46406/s
  t:ix_th   45253/s
  t:llh     35747/s
  d:xh      35658/s

Results for listing pairs of 100 element hash

  a:ah     33566/s
  h:o_oo   16587/s *
  t:ix_oo   8515/s
  t:h:i     7189/s
  h:o_th    5549/s *
  a:oh      4687/s
  t:ix_th   4567/s
  d:xh      3830/s
  t:llh     3580/s

Results for listing pairs of 1000 element hash

  a:ah     3334/s
  h:o_oo   1610/s *
  t:ix_oo   824/s
  t:h:i     713/s
  h:o_th    559/s *
  a:oh      485/s
  t:ix_th   450/s
  d:xh      390/s
  t:llh     348/s
CONCLUSION
With the exception of hash creation and element deletion, Hash::Ordered generally outperformed the other ordered hash implementations. Even for creation, it was the fastest of the pure-Perl, hash-based implementations, often by a large margin.
In the original release of Hash::Ordered, deletion got worse as the hash size grew. The new JIT indexing with tombstones now makes deletion far faster than in any other pure-Perl implementation.
Array::AsHash, with the opposite internal implementation compared to Hash::Ordered, performs best at creation and listing pairs, but is dead last at element access and modification. I believe the poor performance is mostly due to extra indirection (e.g. an extra function call) and logic in the element access methods. For uses that don't require much element access and have lots of creation/serialization, it could still be a useful choice.
Generally, every module that depends on tie for some portion of its implementation pays a substantial performance penalty. Comparing Hash::Ordered benchmarks with and without tie for individual element operations shows how severe this penalty can be. Tie::Hash::Indexed — likely because of its XS implementation — performs decently, but not well enough in my opinion to justify its use.
As the author of Hash::Ordered, I'm clearly biased, but I think these benchmarks make a very good case for it being the "go to" module for pure-Perl, general-purpose ordered hashes.
AUTHOR
David Golden <dagolden@cpan.org>
COPYRIGHT AND LICENSE
This software is Copyright (c) 2014 by David Golden.
This is free software, licensed under:
The Apache License, Version 2.0, January 2004