NAME
dbmerge - merge all inputs in sorted order based on the specified columns
SYNOPSIS
dbmerge --input A.fsdb --input B.fsdb [-T TemporaryDirectory] [-nNrR] column [column...]
or cat A.fsdb | dbmerge --input - --input B.fsdb [-T TemporaryDirectory] [-nNrR] column [column...]
or dbmerge [-T TemporaryDirectory] [-nNrR] column [column...] --inputs A.fsdb [B.fsdb ...]
DESCRIPTION
Merge all provided, pre-sorted input files, producing one sorted result. Inputs can all be specified with --input, or one can come from standard input and the others from --input. With --xargs, each line of standard input is a filename for input.
Inputs must have identical schemas (columns, column order, and field separators).
Unlike dbmerge2, dbmerge supports an arbitrary number of input files.
Because this program is intended to merge multiple sources, it does not default to reading from standard input. If you wish to read from standard input, list - as an explicit input source.
Also, because we deal with multiple input files, this module does not produce any output until it is run.
dbmerge consumes a fixed amount of memory regardless of input size. It therefore buffers output on disk as necessary. (Merging is implemented as a series of two-way merges, so disk space is O(number of records).)
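As a sketch of that strategy (in Python rather than dbmerge's Perl), many pre-sorted runs can be combined as a series of two-way merges. Memory stays bounded because each two-way merge only holds the current head of each stream; in dbmerge the intermediate results would spill to temporary files.

```python
# Illustration only, not Fsdb's implementation: merge many sorted
# runs as a series of lazy two-way merges.
import heapq

def merge2(a, b, key=lambda r: r):
    """Lazily merge two sorted iterables into one sorted stream."""
    yield from heapq.merge(a, b, key=key)

def merge_all(runs, key=lambda r: r):
    """Merge many sorted runs, pairing them level by level."""
    runs = list(runs)
    while len(runs) > 1:
        merged = [merge2(runs[i], runs[i + 1], key=key)
                  for i in range(0, len(runs) - 1, 2)]
        if len(runs) % 2:          # odd run passes through unchanged
            merged.append(runs[-1])
        runs = merged
    return runs[0] if runs else iter(())
```

Because each level halves the number of runs, a merge of N runs is a tree of depth about log2(N), which matches the merge-tree description under "Merging Strategy" below.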
dbmerge will merge data in parallel, if possible. The --parallelism option can control the degree of parallelism, if desired.
OPTIONS
General options:
- --xargs
-
Expect that input filenames are given, one-per-line, on standard input. (In this case, merging can start incrementally.)
- --removeinputs
-
Delete the source files after they have been consumed. (Defaults off, leaving the inputs in place.)
- -T TmpDir
-
Where to put temporary files. Also uses the environment variable TMPDIR, if -T is not specified. Default is /tmp.
- --parallelism N
-
Allow up to N merges to happen in parallel. Default is the number of CPUs in the machine.
- --endgame (or --noendgame)
-
Enable endgame mode, extra parallelism when finishing up. (On by default.)
Sort specification options (can be interspersed with column names):
- -r or --descending
-
sort in reverse order (high to low)
- -R or --ascending
-
sort in normal order (low to high)
- -n or --numeric
-
sort numerically
- -N or --lexical
-
sort lexicographically
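Since the sort flags can be interspersed with column names, each flag affects the columns that follow it. The sketch below (a hypothetical Python helper, not Fsdb's parser) shows one way such an argument list could be folded into per-column sort specs and a sort key:

```python
# Hypothetical sketch: fold interspersed -nNrR flags into
# (column, numeric, descending) triples; each flag applies to the
# column names that follow it.
def parse_sort_spec(args):
    specs, numeric, descending = [], False, False
    for a in args:
        if a in ('-n', '--numeric'):
            numeric = True
        elif a in ('-N', '--lexical'):
            numeric = False
        elif a in ('-r', '--descending'):
            descending = True
        elif a in ('-R', '--ascending'):
            descending = False
        else:
            specs.append((a, numeric, descending))
    return specs

def sort_key(row, specs):
    """Sort key for one row (a dict of column name -> string value).
    Descending order is shown for numeric columns only; descending
    strings would need a comparator rather than a key."""
    key = []
    for col, numeric, descending in specs:
        v = float(row[col]) if numeric else row[col]
        key.append(-v if (descending and numeric) else v)
    return tuple(key)
```

For example, `['-n', 'cid', '-N', 'cname']` sorts numerically by cid, then lexicographically by cname.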
This module also supports the standard fsdb options:
- -d
-
Enable debugging output.
- -i or --input InputSource
-
Read from InputSource, typically a file name, or - for standard input, or (if in Perl) an IO::Handle, Fsdb::IO, or Fsdb::BoundedQueue object.
- -o or --output OutputDestination
-
Write to OutputDestination, typically a file name, or - for standard output, or (if in Perl) an IO::Handle, Fsdb::IO, or Fsdb::BoundedQueue object.
- --autorun or --noautorun
-
By default, programs process automatically, but Fsdb::Filter objects in Perl do not run until you invoke the run() method. The --(no)autorun option controls that behavior within Perl.
- --help
-
Show help.
- --man
-
Show full manual.
SAMPLE USAGE
Input:
File a.fsdb:
#fsdb cid cname
11 numanal
10 pascal
File b.fsdb:
#fsdb cid cname
12 os
13 statistics
These two files are both sorted by cname, and they have identical schemas.
Command:
dbmerge --input a.fsdb --input b.fsdb cname
or
cat a.fsdb | dbmerge --input b.fsdb cname
Output:
#fsdb cid cname
11 numanal
12 os
10 pascal
13 statistics
# | dbmerge --input a.fsdb --input b.fsdb cname
SEE ALSO
dbmerge2(1), dbsort(1), Fsdb(3)
CLASS FUNCTIONS
new
$filter = new Fsdb::Filter::dbmerge(@arguments);
Create a new object, taking command-line arguments.
set_defaults
$filter->set_defaults();
Internal: set up defaults.
parse_options
$filter->parse_options(@ARGV);
Internal: parse command-line arguments.
_pretty_fn
_pretty_fn($fn)
Internal: pretty-print a filename or Fsdb::BoundedQueue.
segment_next_output
$out = $self->segment_next_output($output_type)
Internal: return a Fsdb::IO::Writer as $OUT that either points to our output or a temporary file, depending on how things are going.
The $OUTPUT_TYPE can be 'final' or 'ipc' or 'file'.
segment_cleanup
$out = $self->segment_cleanup($file);
Internal: Clean up a file, if necessary. (Sigh, used to be function pointers, but not clear how they would interact with threads.)
_unique_id
$id = $self->_unique_id()
Generate a sequence number for debugging.
segments_merge2_run
$out = $self->segments_merge2_run($out_fn, $is_final_output,
$in0, $in1);
Internal: do the actual merge2 work (maybe our parent put us in a thread, maybe not).
enqueue_work
$self->enqueue_work($depth, $work);
Internal: put $WORK on the queue at $DEPTH, updating the max count.
segments_merge_one_depth
$self->segments_merge_one_depth($depth);
Merge queued files, if any.
Also release any queued threads.
segments_xargs
$self->segments_xargs();
Internal: read new filenames to process (from stdin) and send them to the work queue.
Making a separate Fred to handle xargs is a lot of work, but it guarantees the filenames come in on an IO::Handle that is selectable.
segments_merge_all
$self->segments_merge_all()
Internal: Merge queued files, if any. Iterates over all depths of the merge tree, and handles any forked threads.
Merging Strategy
Merging is done in a binary tree managed through the _work queue, which has an array of depth entries, one for each level of the tree.
Items are processed in order at each level of the tree, and only level-by-level, so the sort is stable.
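The level-by-level processing can be sketched as follows (an illustration in Python, not the Perl internals; the names _work and enqueue_work merely echo the functions documented above). Runs finished at depth d are paired in arrival order and merged into depth d+1, so records with equal keys keep their original input order and the merge is stable:

```python
# Illustration of a depth-indexed work queue for a stable,
# level-by-level binary merge tree.
from collections import defaultdict
import heapq

class MergeTree:
    def __init__(self):
        self._work = defaultdict(list)    # depth -> list of sorted runs

    def enqueue_work(self, depth, run):
        self._work[depth].append(run)

    def merge_one_depth(self, depth):
        """Pair runs at this depth, in order, and push merges up."""
        runs = self._work[depth]
        while len(runs) >= 2:
            a, b = runs.pop(0), runs.pop(0)
            # heapq.merge is stable: equal keys keep first-input order
            self.enqueue_work(depth + 1, list(heapq.merge(a, b)))
        if runs:                          # odd run promotes unchanged
            self.enqueue_work(depth + 1, runs.pop())

    def merge_all(self):
        depth = 0
        while True:
            self.merge_one_depth(depth)
            depth += 1
            if len(self._work[depth]) <= 1:
                break
        return self._work[depth][0] if self._work[depth] else []
```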
Parallelism Model
Parallelism is also managed through the _work queue, each element of which consists of one file or stream suitable for merging. The work queue contains both ready output (files or BoundedQueue streams) that can be immediately handled, and pairs of semaphore/pending output for work that is not yet started. All manipulation of the work queue happens in the main thread (with segments_merge_all and segments_merge_one_depth).
We start a thread to handle each item in the work queue, and limit parallelism to _max_parallelism, defaulting to the number of available processors.
There are two kinds of parallelism, regular and endgame. For regular parallelism we pick two items off the work queue, merge them, and put the result back on the queue as a new file. Items in the work queue may not be ready; for in-progress items we wait until they are done, and for not-yet-started items we start them, then wait until they are done.
Endgame parallelism handles the final stages of a large merge, when there are enough processors that we can start merge jobs for all remaining levels of the merge tree. At this point we switch from merging into files to merging into Fsdb::BoundedQueue pipelines that connect merge processes, which start and run concurrently.
The final merge is done in the main thread so that the main thread can handle the output stream and record the merge action.
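Endgame-style pipelining can be sketched like this (a Python illustration using queue.Queue in place of Fsdb::BoundedQueue, not the Fsdb implementation): each merge stage feeds the next through a bounded queue, so all remaining stages run concurrently instead of waiting for intermediate files.

```python
# Illustration of pipelined "endgame" merging: bounded queues
# connect concurrently running merge stages.
import heapq
import queue
import threading

SENTINEL = object()

def stream(q):
    """Yield items from a queue until the sentinel arrives."""
    while (item := q.get()) is not SENTINEL:
        yield item

def merge_stage(a, b, out):
    """Merge two sorted streams into an output queue."""
    for item in heapq.merge(a, b):
        out.put(item)
    out.put(SENTINEL)

def endgame_merge(runs):
    """Connect all remaining merges with bounded queues."""
    streams = [iter(r) for r in runs]
    while len(streams) > 1:
        next_streams = []
        for i in range(0, len(streams) - 1, 2):
            q = queue.Queue(maxsize=32)   # bounded, like BoundedQueue
            t = threading.Thread(target=merge_stage,
                                 args=(streams[i], streams[i + 1], q),
                                 daemon=True)
            t.start()
            next_streams.append(stream(q))
        if len(streams) % 2:
            next_streams.append(streams[-1])
        streams = next_streams
    return list(streams[0]) if streams else []
```

The final stream is drained in the calling thread, mirroring how dbmerge keeps the last merge in the main thread to manage the output.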
setup
$filter->setup();
Internal: setup, parse headers.
run
$filter->run();
Internal: run over each row of input.
AUTHOR and COPYRIGHT
Copyright (C) 1991-2013 by John Heidemann <johnh@isi.edu>
This program is distributed under terms of the GNU general public license, version 2. See the file COPYING with the distribution for details.