NAME

DBD::RAM - a DBI driver for files and data structures

SYNOPSIS

# This sample creates a database, inserts a record, then reads
# the record and prints it.  The output is "Hello, new world!"
#
use DBI;
my $dbh = DBI->connect( 'DBI:RAM:' );
$dbh->func( [<DATA>], 'import' );
print $dbh->selectrow_array('SELECT col2 FROM table1');
__END__
1,"Hello, new world!",sample

All syntax supported by SQL::Statement and all methods supported
by DBD::CSV are also supported; see their documentation for
details.

DESCRIPTION

DBD::RAM allows you to import almost any type of Perl data
structure into an in-memory table and then use DBI and SQL
to access and modify it.  It also allows direct access to
almost any kind of file, supporting SQL manipulation
of the file without converting the file out of its native
format.

The module allows you to prototype a database without an RDBMS or
other database engine, and can operate either with or without
creating or reading disk files.

DBD::RAM works with three different kinds of tables: tables
stored only in memory, tables stored in flat files, and tables
stored in various DBI-accessible databases.  Users may, for
most purposes, mix and match these different kinds of tables
within a script.

Tables may be created from these kinds of files:

FIX   fixed-width record files
CSV   comma separated values (or other "delimited") files
INI   name=value .ini files
XML   XML files (requires XML::Parser)
MP3   MP3 music files! (reads the ID3v1 tags)

And these kinds of data structures

   ARRAY array of arrayrefs
   HASH  array of hashrefs
   CSV   array of comma-separated value strings
   INI   array of name=value strings
   FIX   array of fixed-width record strings
   DBI   statement handle from a DBI database
   USR   array of user-defined data structures

With a data type of 'USR', you can pass a reference to a subroutine
that parses the data, making the module easily extensible.

With a data type of 'XML' you can specify either local or remote files.
If remote, and if you have LWP installed, DBD::RAM will create the
database table by first fetching the data from the remote location.

WARNING

This module is in a rapid development phase and it is likely to
change quite often for the next few days/weeks.  I will try to keep
the interface as stable as possible, but if you are going to start
using this in places where it will be difficult to modify, you might
want to ask me about the stability of the features you are using.

INSTALLATION & PREREQUISITES

This module should work on any platform that DBI works on.

You don't need an external SQL engine or a running server, or a
compiler.  All you need are Perl itself and installed versions of DBI
and SQL::Statement. If you do *not* also have DBD::CSV installed you
will need to either install it, or simply copy File.pm into your DBD
directory.

For this first release, there is no makefile; just copy RAM.pm
into your DBD directory.

WORKING WITH IN-MEMORY DATABASES

CREATING TABLES

In-memory tables may be created using standard CREATE/INSERT
statements, or using the DBD::RAM specific import method:

   $dbh->func( $spec, $data, 'import' );

The $spec parameter is a hashref containing:

    table_name   a string holding the name of the table
    col_names    a string with column names separated by commas
    data_type    one of: ARRAY, HASH, etc.; see below for the full list
    pattern      a string containing an unpack pattern (FIX type only)
    parser       a reference to a parsing subroutine (USR type only)

The $data parameter is an arrayref containing an array of the type
specified in the data_type parameter, holding the actual table data.

Data types for the data_type parameter currently include: ARRAY, HASH,
FIX (fixed-width), CSV, INI (name=value), DBI, and USR. See below for
examples of each of these types.

From an array of CSV strings

$dbh->func(
  {
    table_name => 'phrases',
    data_type  => 'CSV',
    col_names  => 'id,phrase',
  },
  [
    qq{1,"Hello, new world!"},
    qq{2,"Junkity Junkity Junk"},
  ],'import' );

From an array of ARRAYS

$dbh->func(
    { table_name => 'phrases',
      col_names  => 'id,phrase',
      data_type  => 'ARRAY',
    },
    [
      [1,'Hello new world!'],
      [2,'Junkity Junkity Junk'],
    ],
'import' );

From an array of HASHES

$dbh->func(
    { table_name => 'phrases',
      col_names  => 'id,phrase',
      data_type  => 'HASH',
    },
    [
      {id=>1,phrase=>'Hello new world!'},
      {id=>2,phrase=>'Junkity Junkity Junk'},
    ],
'import' );

From an array of NAME=VALUE strings

$dbh->func(
    { table_name => 'phrases',    # ARRAY OF NAME=VALUE PAIRS
      col_names  => 'id,phrase',
      data_type  => 'INI',
    },
    [
      '1=Hello new world!',
      '2=Junkity Junkity Junk',
    ],
'import' );

From an array of FIXED-WIDTH RECORD strings

$dbh->func(
    { table_name => 'phrases',
      col_names  => 'id,phrase',
      data_type  => 'FIX',
      pattern    => 'a1 a20',
    },
    [
      '1Hello new world!    ',
      '2Junkity Junkity Junk',
    ],
'import' );

The $spec{pattern} value should be a string describing the fixed-width
record.  See the Perl documentation on "unpack()" for details.
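
As a quick illustration, here is how such a pattern behaves in plain
Perl (no DBD::RAM required); the record and widths match the 'a1 a20'
example above:

```perl
# Split a fixed-width record with the pattern from the example above:
# one 1-character field followed by one 20-character field.
my $record = '1Hello new world!    ';
my ($id, $phrase) = unpack 'a1 a20', $record;
$phrase =~ s/\s+$//;           # trim the padding that fills the field
print "$id: $phrase\n";        # prints "1: Hello new world!"
```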

From directories containing MP3 MUSIC FILES

$dbh->func(
    { data_type => 'MP3', dirs => $dirlist },
'import' );

$dirlist should be a reference to an array of absolute paths to
directories containing mp3 files.  Each file in those directories
will become a record containing the fields: file_name, song_name,
artist, album, year, comment, genre.  The fields will be filled
in automatically from the ID3v1x tag information in the mp3 file
itself, assuming, of course, that the mp3 file contains a
valid ID3v1x tag.

From another DBI DATABASE

You can import information from any other DBI-accessible database with
the data_type set to 'DBI' in the import() method.  First connect to the
other database via DBI and get a database handle for it separate from the
database handle for DBD::RAM.  Then do a prepare and execute to get a
statement handle for a SELECT statement into that database.  Then pass the
statement handle to the DBD::RAM import() method which will perform the
fetch and insert the fetched fields and records into the DBD::RAM table.
After the import() statement, you can then close the database connection
to the other database.

Here's an example using DBD::CSV --

 my $dbh_csv = DBI->connect('DBI:CSV:','','',{RaiseError=>1});
 my $sth_csv = $dbh_csv->prepare("SELECT * FROM mytest_db");
 $sth_csv->execute();
 $dbh->func(
     { table_name => 'phrases',
       col_names  => 'id,phrase',
       data_type  => 'DBI',
     },
     [$sth_csv],
     'import'
 );
 $dbh_csv->disconnect();

From USER-DEFINED DATA STRUCTURES

$dbh->func(
   { table_name => 'phrases',    # USER DEFINED STRUCTURE
     col_names  => 'id,phrase',
     data_type  => 'USR',
     parser     => sub { split /=/,shift },
   },
   [
       '1=Hello new world!',
       '2=Junkity Junkity Junk',
   ],
'import' );

This example shows a way to implement a simple name=value parser.
The subroutine can be as complex as you like however and could, for
example, call XML or HTML or other parsers, or do any kind of fetches
or massaging of data (e.g. put in some LWP calls to websites as part
of the data massaging).  [Note: the actual name=value implementation
in the DBD uses a slightly more complex regex to be able to handle equal
signs in the value.]

The parsing subroutine must accept a single element of the data array
in the user-defined format and return a list of field values in the
same order as the column names specified in the import() call.  The
import() method cycles through the data array, passing each element
to your parser subroutine and inserting the fields it returns as a
row of the table.  In the example above, the sub accepts a string and
returns a two-element list.
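
For instance, a parser that splits only on the first equal sign, so
that values may themselves contain '=', can be sketched in plain Perl
like this:

```perl
# A name=value parser that splits on the FIRST '=' only, so the value
# may itself contain equal signs.  The third argument to split limits
# the number of fields returned to two.
my $parser = sub { split /=/, shift, 2 };

my ($id, $phrase) = $parser->('3=E=mc squared');
print "$id: $phrase\n";    # prints "3: E=mc squared"
```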

PLEASE NOTE: If you develop generally useful parser routines that others
might also be able to use, send them to me and I can incorporate them
into the DBD itself.

From SQL STATEMENTS

You may also create tables with standard SQL syntax using CREATE
TABLE and INSERT statements.  Or you can create a table with the
import method and later populate it using INSERT statements.  However
the table is created, it can be modified and accessed with all SQL
syntax supported by SQL::Statement.

USING DEFAULTS FOR QUICK PROTOTYPING

If no table type is supplied, an in-memory type 'RAM' will be
assumed.  If no table_name is specified, a numbered table name
will be supplied (table1, or if that exists table2, etc.).  The
same also applies to column names (col1, col2, etc.).  If no
data_type is supplied, CSV will be assumed. If the $spec parameter
to import is missing, then defaults for all values will be used.
Thus, the two statements below have the same effect:

   $dbh->func( [
       qq{1,"Hello, new world!"},
       qq{2,"Junkity Junkity Junk"},
       ],'import' );

   $dbh->func(
       {
           table_name => 'table1',
           data_type  => 'CSV',
           col_names  => 'col1,col2',
       },
       [
         qq{1,"Hello, new world!"},
         qq{2,"Junkity Junkity Junk"},
       ],'import' );

WORKING WITH FLAT FILES

This module now supports working with several different kinds of flat
files and will soon support many more varieties.  Currently supported are
fixed-width record files, comma separated values files, name=value ini
files, and XML files.  See below for details.

To work with these kinds of files, you must first enter the table in a
catalog specifying the table name, file name, file type, and optionally
other information.

Catalogs are created with the $dbh->func('catalog') command.

    $dbh->func([[
                 $table_name,
                 $table_type,
                 $file_name,
                 {optional params}
              ]],'catalog' );

For example this sets up a catalog with three tables of type CSV, FIX, and
INI:

   $dbh->func([
       ['my_csv', 'CSV', 'my_db.csv'],
       ['my_ini', 'INI', 'my_db.ini',{col_names=>'idCol,testCol'}],
       ['my_fix', 'FIX', 'my_db.fix',{pattern=>'a1 a25'}],
   ],'catalog' );

Optional parameters include col_names -- if the column names are not
specified with this parameter, then the module will look for the column
names as a comma-separated list on the first line of the file.

A table only needs to be entered into the catalog once.  After that all
SQL statements operating on $table_name will actually be carried out on
$file_name.  Thus, given the example catalog above, 
"CREATE TABLE my_csv ..." will create a file called 'my_db.csv' and
"SELECT * FROM my_ini" will open and read data from a file called
'my_db.ini'.

In all cases the files will be expected to be located in the directory
named in the $dbh->{f_dir} parameter (similar to DBD::CSV).  This
parameter may be specified in the connect() statement as part of the
DSN, or changed at any point with $dbh->{f_dir} = $directory.  If no
f_dir is specified, the current working directory of the script is
assumed.

CSV FILES

This works similarly to DBD::CSV (which you may want to use instead
if you are only working with CSV files).  It supports specifying the
column names either in the catalog statement, or as the first line of
the file.  It does not yet support delimiters other than commas or
record separators other than newlines.

FIXED WIDTH RECORD FILES

Column names may be specified on the first line of the file (as a
comma separated list), or in the catalog.  A pattern must be specified
listing the widths of the fields in the catalog.  The pattern should
be in Perl unpack format e.g. "a2 a7 a14" would indicate a table with
three columns with widths of 2,7,14 characters.  When data is inserted
or updated, it will be truncated or padded to fill exactly the amount
of space allotted to each field.
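
The pad-or-truncate behavior can be sketched in plain Perl with
sprintf; this mirrors the effect described above, but is not the
module's own code:

```perl
# Pad or truncate a value to exactly $width characters, as a FIX table
# must when storing a field declared with, say, 'a20'.
sub fit { my ($val, $width) = @_; sprintf "%-*.*s", $width, $width, $val }

print fit('Hi', 20), "|\n";                            # padded out to 20
print fit('A value that is far too long', 20), "|\n";  # cut off at 20
```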

NAME=VALUE INI FILES

Column names may be specified on the first line of the file (as a
comma separated list), or in the catalog.

XML

You must have XML::Parser installed in order to use the XML feature of
DBD::RAM.  If you wish to use remote XML files, you also need the LWP
module installed.

As an example, there is a file called slashdot.xml which is kept at
www.slashdot.org and is structured like this:

  <?xml version="1.0" ?>
   <backslash xmlns:backslash="http://slashdot.org/backslash.dtd">
     <story>
       <title>Neal Stephenson on Digital Village</title>
       <url>http://slashdot.org/article.pl?sid=00/04/21/126206</url>
       <time>2000-04-23 23:06:22</time>
       <author>Hemos</author>
       <department>good-listen</department>
       <topic>news</topic>
       <comments>85</comments>
       <section>articles</section>
       <image>topicnews.gif</image>
     </story>
     <story> ...</story>
   </backslash>

You can import this into a DBD::RAM table, with this one command:

 $dbh->func({
     remote_source => 'http://www.slashdot.org/slashdot.xml',
     data_type     => 'XML',
     record_tag    => 'backslash story',
     col_names     => [qw(title url time author department
                          topic comments section image)],
 },'import');

 Notice that the "record_tag" is a space separated list of all of the tags
 that enclose the fields you want to capture, starting at the highest level
 with the <backslash> tag.  In this case the names of the database columns
 are the same as the XML tags.  If the two lists differ (for example
 if the XML tag names are not legitimate SQL column names), you can specify
 an additional attribute called "col_mappings", which should be a hashref
 whose keys are the XML tag names and whose values are the table column
 names.  E.g.:

    col_mappings => { Article_Title => 'title', Article_URL => 'url', ... }

 That would read the data from the XML tag <Article_Title> but store it in
 the database column "title" as listed in the col_names attribute.

 If the XML file is local rather than remote, simply omit the remote_source
 attribute and put in

      file_source => $full_path_to_local_XML_file,

 If the XML is from a string rather than a file, simply omit the remote_source
 attribute and put in

      data_source => $xml_string,

 DBD::RAM treats tag attributes as fields, just as it treats tag text,
 so the following three records are exactly the same:

   <quiz>
     <question id='1' Q='An orange is blue.' A='F'/>
     <question id='1'>
       <Q>An orange is blue.</Q>
       <A>F</A>
     </question>
     <question>
       <id>1</id>
       <Q>An orange is blue.</Q>
       <A>F</A>
     </question>
   </quiz>

USING MULTIPLE TABLES

A single script can create as many tables as your RAM will support and you
can have multiple statement handles open to the tables simultaneously. This
allows you to simulate joins and multi-table operations by iterating over
several statement handles at once.
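
A nested-loop join simulated this way might look like the sketch below;
plain arrayrefs stand in for rows fetched from two statement handles,
and the table contents are invented for illustration:

```perl
# Simulate a join by iterating one result set inside another.  In a
# real script each inner arrayref would come from a statement handle
# via fetchrow_arrayref(); the data here is made up.
my @people  = ( [1, 'Alice'], [2, 'Bob'] );
my @phrases = ( [1, 'Hello, new world!'], [2, 'Junkity Junkity Junk'] );

for my $p (@people) {
    for my $ph (@phrases) {
        print "$p->[1] says: $ph->[1]\n" if $p->[0] == $ph->[0];
    }
}
```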

TO DO

Lots of stuff.  An export() method -- dumping the data from in-memory
tables back into files.  Support for a variety of other easily parsed
formats such as Mail files, web logs.  Support for HTML files with
the directory considered as a table, each HTML file considered as a
record and the filename, <TITLE> tag, and <BODY> tags considered as
fields.  More robust SQL (coming when I update Statement.pm)
including RLIKE (a regex-based LIKE), joins, alter table, typed fields?,
authorization mechanisms?  transactions?

Let me know what else...

AUTHOR

Jeff Zucker <jeff@vpservices.com>

    Copyright (c) 2000 Jeff Zucker. All rights reserved. This program is
    free software; you can redistribute it and/or modify it under the same
    terms as Perl itself as specified in the Perl README file.

    This is alpha software, no warranty of any kind is implied.

SEE ALSO

DBI, DBD::CSV, SQL::Statement