NAME
Schedule::Parallel - run a queue of closures in parallel via fork, with a maximum job count
SYNOPSIS
use Schedule::Parallel;
@unused_portion = runqueue jobcount, closure, [closure ...];
DESCRIPTION
Fork with a bit of boilerplate and a maximum number of jobs to run at one time (jobcount). The queue (whatever is left on @_ after the count is shifted off) is run in parallel by forking perl and exiting each sub-process with its closure's status.
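A minimal sketch of building and running a queue (the item names and the work inside each closure are placeholders):

    use Schedule::Parallel;

    # one closure per piece of work; each must return zero on success.

    my @queue = map
    {
        my $item = $_;

        sub { print "processing $item\n"; 0 }
    }
    qw( alpha beta gamma delta );

    # run the queue with at most two forked jobs at any one time.

    my @leftover = runqueue 2, @queue;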
The caller gets back the unexecuted portion of the queue. In scalar context this is false if the entire queue succeeded; in list context it is the unused portion itself, which simplifies re-execution once the calling code has fixed whatever went wrong (e.g., if the closures store recovery information).
The fork + exit semantics require that code called from the closures return zero on success (i.e., shell-like status). Any non-zero exit from a forked job aborts further processing.
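A sketch of shell-like status handling and recovery, assuming each closure wraps an external command ("fetch" and @urls are hypothetical):

    my @queue = map
    {
        my $url = $_;

        # 0 on success, 1 otherwise: keeps the status in the
        # 0-255 range expected by exit.

        sub { system( 'fetch', $url ) ? 1 : 0 }
    }
    @urls;

    if( my @unused = runqueue 4, @queue )
    {
        # something exited non-zero; the unexecuted closures come
        # back and can be re-submitted once the problem is fixed.

        warn scalar(@unused), " jobs were not executed\n";

        runqueue 4, @unused;
    }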
For debugging: if jobcount is zero then the queue is run without forking -- basically via $_->() for @_ over the whole queue. This avoids dealing with fork issues during development.
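For instance, the queue from the sketch above can be run serially in the current process:

    # no forks: each closure runs in order, in this process.

    runqueue 0, @queue;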
Notes
Running N jobs in parallel assumes that the jobs consume roughly constant system resources during execution. Where that is not true it may be useful to submit a large queue in sections, with some monitoring in between, adjusting the jobcount parameter as pieces of the queue complete. Examples would be raising jobcount for long-running, mostly-blocked queues (e.g., web searches) or lowering it to avoid bombarding the network when return times are fast.
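A sketch of sectioned submission, assuming a large @queue of closures; how jobcount gets adjusted between passes is left to whatever monitoring is available:

    my $section  = 32;  # closures submitted per pass
    my $jobcount = 4;   # starting point, adjusted between passes

    while( my @chunk = splice @queue, 0, $section )
    {
        if( my @unused = runqueue $jobcount, @chunk )
        {
            die scalar(@unused), " jobs in this section were not run\n";
        }

        # adjust $jobcount here: raise it for long-running,
        # mostly-blocked jobs, lower it if replies come back quickly.
    }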
AUTHOR
Steven Lembark
Workhorse Computing, LLC
lembark@wrkhors.com
Copyright
(C) 2001-2004 Workhorse Computing, LLC
This code is released under the same terms as Perl itself. Please see the Perl-5.6.1 distribution (or later) for a full description.
In any case, this code is released as-is, with no implied warranty of fitness for a particular purpose or warranty of merchantability.
SEE ALSO
perl(1)