NAME
DBIx::DataModel::Doc::Cookbook - Helpful recipes
DOCUMENTATION CONTEXT
This chapter is part of the DBIx::DataModel manual.
DESCRIPTION
This chapter provides some recipes for common ORM tasks.
SCHEMA DECLARATION
Automatically generate a schema
A schema skeleton can be produced automatically from any of the following external sources: a DBI connection, a SQL::Translator parser, or a DBIx::Class schema. See DBIx::DataModel::Schema::Generator.
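For instance, here is a typical command-line invocation for generating a skeleton from a live DBI connection, adapted from the Schema::Generator documentation; the connection string and the target schema name are placeholders to adapt:

  perl -MDBIx::DataModel::Schema::Generator      \
       -e "fromDBI('dbi:connection:string')" --  \
       -schema My::New::Schema > My/New/Schema.pm

The generated file is a plain Perl module that can then be edited by hand, for instance to add custom methods as described in the next recipe.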
Add custom methods into a generated table class
Defining methods in a Perl class does not require a file corresponding to that class; it suffices to define the method within the appropriate package. So the easiest way to add methods to tables is to first let DBIx::DataModel create the schema and table classes, and then switch to those packages, all in the same file:
  # define schema, tables, associations (current package doesn't matter)
  DBIx::DataModel->Schema('Some::Schema')
    ->Table(qw/Foo foo foo_id/)
    ->Table(...)
    ->Association(...)
    ->...;

  # add a method into table 'Foo'
  package Some::Schema::Foo;
  sub my_added_method {
    my $self = shift;
    ...
  }

  # go back to main package
  package main;
  ...
Another way to achieve the same result is to use DBIx::DataModel's internal utility method for injecting methods into classes:
  use DBIx::DataModel::Meta::Utils;
  DBIx::DataModel::Meta::Utils->define_method(
    class => 'Some::Schema::Foo',
    name  => 'my_added_method',
    body  => sub {my $self = shift; ...},
  );
Object inflation/deflation
Here is an example of inflating/deflating a scalar value from the database into a Perl object:

  # declare column type
  use Date::Simple;
  $schema->Type(Date_simple =>
    from_DB => sub {$_[0] = Date::Simple->new($_[0]) if $_[0]},
    to_DB   => sub {$_[0] = $_[0]->as_str            if $_[0]},
  );

  # apply column type to columns
  My::Table1->metadm->define_column_type(Date_simple => qw/d_start d_end/);
  My::Table2->metadm->define_column_type(Date_simple => qw/d_birth/);
Caveat: the from_DB / to_DB functions do not apply automatically within -where conditions. So the following would not work:
  use Date::Simple qw/today/;
  my $rows = $schema->table($name)->select(
    -where => {d_end => {'<' => today()}},    # BOGUS
  );
because today() returns a Date::Simple object that will not be understood by SQL::Abstract when generating the SQL query. DBIx::DataModel is not clever enough to inspect the -where conditions and decide which column types to apply, so you have to do it yourself:
  my $today = today()->as_str;
  my $rows  = $schema->table($name)->select(
    -where => {d_end => {'<' => $today}},
  );
SQL Types
At places where a plain value is expected, you can put an arrayref of 2 elements, where the first element is the value and the second element is a type specification (the same shape as in the "DBD Datatypes" recipe below). This is convenient when the DBD driver needs additional information about the values used in the statement. See "BIND VALUES WITH TYPES" in SQL::Abstract::More for explanations and examples. This can also be automated within a to_DB handler, as shown above for object inflation.
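Here is a minimal sketch of such a to_DB handler; the column type name, the column names and the choice of SQL_BLOB are illustrative assumptions to adapt to your driver:

  use DBI qw/:sql_types/;

  # column type whose to_DB handler wraps the value together with
  # a datatype specification, so that the driver binds it as a blob
  $schema->Type(Blob =>
    to_DB => sub {$_[0] = [$_[0], {sql_type => SQL_BLOB}] if defined $_[0]},
  );

  # apply it to some (hypothetical) blob columns
  My::Table1->metadm->define_column_type(Blob => qw/image attachment/);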
Quoting table and column names
By default, table and column names are inserted "as is" in the generated SQL; but sometimes this could cause conflicts with SQL reserved words. The solution is to quote table and column names, by activating the quote_char option of SQL::Abstract. Here is an example (assuming single-schema mode):
  # define the schema
  DBIx::DataModel->Schema('FOO');

  # feed the schema with a custom instance of SQL::Abstract::More
  my $sqlam = SQL::Abstract::More->new(quote_char => "`");
  FOO->singleton->sql_abstract($sqlam);

  # define a table
  FOO->Table(qw/Config CONFIG KEY/);

  # produce SQL with quoted table and column names
  my ($sql, @bind) = FOO::Config->select(
    -columns   => [qw/KEY VALUE/],
    -where     => {KEY => 123},
    -result_as => 'sql',
  );
  print $sql;  # SELECT `KEY`, `VALUE` FROM `CONFIG` WHERE ( `KEY` = ? )
Schema versioning
Currently DBIx::DataModel has no specific support for schema versioning. See the CPAN module DBIx::VersionedSchema, or switch to the DBIx::Class ORM, which has good support for schema versioning.
DATA RETRIEVAL
Aggregator functions
Use normal SQL syntax for aggregators, and give them column aliases (with a vertical bar |) in order to retrieve the results.
  my $row = $source->select(
    -columns   => [qw/MAX(col1)|max_col1
                      AVG(col2)|foo
                      COUNT(DISTINCT(col3))|bar/],
    -where     => ...,
    -result_as => 'firstrow',
  );
  print "max is : $row->{max_col1}, average is $row->{foo}";
Or you can dispense with column aliases, and retrieve the results directly into an arrayref, using -result_as => 'flat_arrayref':
  my $array_ref = $source->select(
    -columns   => [qw/MAX(col1)
                      AVG(col2)
                      COUNT(DISTINCT(col3))/],
    -where     => ...,
    -result_as => 'flat_arrayref',
  );
  my ($max_col1, $avg_col2, $count_col3) = @$array_ref;
Caveat: currently, from_DB handlers do not apply to aggregator functions. So if the aggregated result needs any transformation, you have to specify a column type for it:
  my $row = $source->select(
    -columns      => [qw/MAX(d_begin)|max_d_begin MIN(d_end)|min_d_end .../],
    -where        => ...,
    -column_types => {Date_simple => [qw/max_d_begin min_d_end/]},
    -result_as    => 'firstrow',
  );
Database functions or stored procedures
Same technique as above: normal SQL syntax and column aliases.
  my $rows = $source->select(
    -columns => [qw/FUNC(col1,col2)|func
                    (col3+99)|big_col3/],
    -where   => ...,
  );
  print "$_->{func} and $_->{big_col3}" foreach @$rows;
Nested queries
  my $subquery = $source1->select(..., -result_as => 'subquery');
  my $rows     = $source2->select(
    -columns => ...,
    -where   => {foo => 123, bar => {-not_in => $subquery}},
  );
Hashref inflation
There is no need for a hashref inflator: rows returned by a select() can be used directly as hashrefs. For example, here is a loop that prints a hash slice from each row:
  my $rows       = $schema->table($name)->select(...);
  my @print_cols = qw/col3 col6 col7/;
  foreach my $row (@$rows) {
    print @{$row}{@print_cols};
  }
In fact, each row is a blessed hashref. This can be a problem with some external modules like JSON that croak when encoding a blessed reference. In that case you can use the unbless function:
  use JSON qw/to_json/;

  foreach my $row (@$rows) {
    $schema->unbless($row);
    print to_json($row);
  }
Custom SQL
Create a 'Perl view' to encapsulate your SQL, i.e. a DBIx::DataModel::Source::Table, possibly with a where clause:
  $meta_schema->define_table(
    name     => 'MyView',
    db_table => 'TABLE1 EXOTIC JOIN TABLE2 ON ...',
    where    => {col1 => $filter1, col2 => $filter2},
    parents  => [map {$meta_schema->table($_)} qw/Table1 Table2/],
  );
DATA UPDATE
Transaction
  # anonymous sub containing the work to do
  my $to_do = sub {
    $table1->insert(...);
    $table2->delete(...);
  };

  # so far nothing has happened in the database;
  # now do the transaction
  $schema->do_transaction($to_do);
Nested transaction
  $schema->do_transaction(sub {
    do_something();
    $schema->do_transaction(sub { some_nested_code();       });
    $schema->do_transaction(sub { some_other_nested_code(); });
  });
Nested transaction involving another database
  $schema->dbh($initial_dbh);
  $schema->do_transaction(sub {

    # start working in $initial_dbh
    do_something();

    # now some work in $other_dbh
    $schema->do_transaction(sub { some_nested_code(); }, $other_dbh);

    # here, implicitly we are back in $initial_dbh
    $schema->do_transaction(sub { some_other_nested_code(); });
  });
  # commits in both $initial_dbh and $other_dbh are performed here
Generating random keys
Override the _singleInsert() method:
  package MySchema::SomeTable;
  use Carp;

  # $MAX_ATTEMPTS, $MAX_RANDOM and $DUPLICATE_ERROR are configuration
  # variables, assumed to be defined elsewhere in this package
  sub _singleInsert {
    my ($self) = @_;
    my $class  = ref $self;

    my ($key_column) = $class->primKey;

    for (1 .. $MAX_ATTEMPTS) {
      my $random_key = int(rand($MAX_RANDOM));
      $self->{$key_column} = $random_key;
      eval {$self->_rawInsert; 1}
        and return $random_key;    # SUCCESS

      # if duplication error, try again; otherwise die
      last unless $DBI::errstr =~ $DUPLICATE_ERROR;
    }
    croak "cannot generate a random key for $class: $@";
  }
Cascaded insert
First insert an arrayref of subrecords within the main record hashref; then call insert on that main record, as shown in the sketch below. See also the example in insert(). This only works if the two classes are associated through a Composition. A datastructure containing the keys of all generated records can be retrieved by using the -returning option:

  my $tree_of_keys = $table->insert(..., -returning => {});
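For instance, with hypothetical classes Author and Distribution associated through a Composition (under role name 'distributions', as in the cascaded delete example below), a cascaded insert could look like this:

  # insert an author together with two distribution subrecords, in one call
  my $tree_of_keys = My::DB::Author->insert(
    { name          => 'Alice',
      distributions => [ {dist_name => 'Foo-Bar'},
                         {dist_name => 'Baz'}     ],
    },
    -returning => {},
  );
  # $tree_of_keys now contains the primary key of the author record
  # and, nested under 'distributions', the keys of the subrecords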
Cascaded delete
  # first gather information tree from the database
  my $author   = My::DB::Author->fetch($author_id);
  my $distribs = $author->expand('distributions');
  $_->expand('modules') foreach @$distribs;

  # then delete the whole tree from the database
  $author->delete;
This only works if the two classes are associated through a Composition. The expand operations retrieve related records and add them into a tree in memory. Then delete removes from the database all records found in the tree; therefore this is not a "true" cascaded delete, because the client code is responsible for fetching the related records.

True cascaded delete is best implemented directly in the database, rather than at the ORM layer.
Timestamp validation
Goal: make sure that the record was not touched between the time it was presented to the user (in a display form) and the time the user wants to update or delete that record.

In order to do this, we will suppose that every record in every table has a timestamp field TS_MODIF, updated automatically by a trigger within the database. Below is a callback function that checks if the timestamp is still valid:
  sub _check_time_stamp {
    my ($record, $table, $where) = @_;
    if ($where) { # this is an update, not an insert
      my $displayed_timestamp = delete $record->{TS_MODIF};
      my $db_record = $record->schema->table($table)->select(
        -columns   => 'TS_MODIF',
        -where     => $where,
        -for       => 'update', # optional, depends on your RDBMS
        -result_as => 'firstrow',
      )
        or croak "fetch timestamp: could not find record "
               . join(" / ", %$where);
      my $db_timestamp = $db_record->{TS_MODIF};
      $db_timestamp == $displayed_timestamp
        or croak "record in $table was modified by somebody else; please "
               . "refresh your screen and try again";
    }
  }
This callback function can then be registered among the auto_update_columns when defining the schema:
  DBIx::DataModel->define_schema(
    class               => 'My::Schema',
    auto_update_columns => {TS_MODIF => \&_check_time_stamp},
  );
DATA CONVERSION
JSON
  use JSON;
  my $json_converter = JSON->new->convert_blessed(1);
  my $json_text      = $json_converter->encode($data_row);
By default, the JSON module refuses to encode any blessed object; however, with the convert_blessed option, it accepts to convert objects, provided that they possess a TO_JSON method. Such a method is implemented in the DBIx::DataModel::Source class, so any data row can be converted into JSON.
DBD Datatypes
DBD drivers sometimes need additional information about the datatypes of values in SQL statements. At the DBI level, this is done with explicit calls to the bind_param() method. Within DBIx::DataModel, datatype specifications can be passed directly as arrayrefs of shape [$orig_value, \%datatype], everywhere an ordinary value would be used. Here are some examples:
  my $rows = $source->select(
    -where => {col => [$val, {sql_type => 'some_type'}]},
  );

  $source->insert(
    {key => $pk, some_col => [$val, {sql_type => 'some_type'}]},
  );

  $record->update(
    {some_col => [$val, {sql_type => 'some_type'}]},
  );