NAME
DBIx::Class::Manual::Cookbook - Miscellaneous recipes
SEARCHING
Paged results
When you expect a large number of results, you can ask DBIx::Class for a paged resultset, which will fetch only a defined number of records at a time:
my $rs = $schema->resultset('Artist')->search(
undef,
{
page => 1, # page to return (defaults to 1)
rows => 10, # number of results per page
},
);
return $rs->all(); # all records for page 1
The page attribute does not have to be specified in your search:
my $rs = $schema->resultset('Artist')->search(
undef,
{
rows => 10,
}
);
return $rs->page(1); # DBIx::Class::ResultSet containing first 10 records
In either of the above cases, you can get a Data::Page object for the resultset (suitable for use in e.g. a template) using the pager method:
return $rs->pager();
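The pager gives you a Data::Page object, so you can build page navigation from it. A minimal sketch (the output format is purely illustrative):
my $pager = $rs->pager();
printf "Page %d of %d (%d artists in total)\n",
    $pager->current_page, $pager->last_page, $pager->total_entries;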
Complex WHERE clauses
Sometimes you need to formulate a query using specific operators:
my @albums = $schema->resultset('Album')->search({
artist => { 'like', '%Lamb%' },
title => { 'like', '%Fear of Fours%' },
});
This results in something like the following WHERE clause:
WHERE artist LIKE '%Lamb%' AND title LIKE '%Fear of Fours%'
Other queries might require slightly more complex logic:
my @albums = $schema->resultset('Album')->search({
-or => [
-and => [
artist => { 'like', '%Smashing Pumpkins%' },
title => 'Siamese Dream',
],
artist => 'Starchildren',
],
});
This results in the following WHERE clause:
WHERE ( artist LIKE '%Smashing Pumpkins%' AND title = 'Siamese Dream' )
OR artist = 'Starchildren'
For more information on generating complex queries, see "WHERE CLAUSES" in SQL::Abstract.
Arbitrary SQL through a custom ResultSource
Sometimes you have to run arbitrary SQL because your query is too complex (e.g. it contains Unions, Sub-Selects, Stored Procedures, etc.) or has to be optimized for your database in a special way, but you still want to get the results as a DBIx::Class::ResultSet. The recommended way to accomplish this is by defining a separate ResultSource for your query. You can then inject complete SQL statements using a scalar reference (this is a feature of SQL::Abstract).
Say you want to run a complex custom query on your user data; here's what you have to add to your User class:
package My::Schema::User;
use base qw/DBIx::Class/;
# ->load_components, ->table, ->add_columns, etc.
# Make a new ResultSource based on the User class
my $source = __PACKAGE__->result_source_instance();
my $new_source = $source->new( $source );
$new_source->source_name( 'UserFriendsComplex' );
# Hand in your query as a scalar reference
# It will be added as a sub-select after FROM,
# so pay attention to the surrounding brackets!
$new_source->name( \<<SQL );
( SELECT u.* FROM user u
INNER JOIN user_friends f ON u.id = f.user_id
WHERE f.friend_user_id = ?
UNION
SELECT u.* FROM user u
INNER JOIN user_friends f ON u.id = f.friend_user_id
WHERE f.user_id = ? )
SQL
# Finally, register your new ResultSource with your Schema
My::Schema->register_source( 'UserFriendsComplex' => $new_source );
Next, you can execute your complex query using bind parameters like this:
my $friends = [ $schema->resultset( 'UserFriendsComplex' )->search( {},
{
bind => [ 12345, 12345 ]
}
) ];
... and you'll get back a perfect DBIx::Class::ResultSet.
Using specific columns
When you only want specific columns from a table, you can use columns to specify which ones you need. This is useful to avoid loading columns with large amounts of data that you aren't about to use anyway:
my $rs = $schema->resultset('Artist')->search(
undef,
{
columns => [qw/ name /]
}
);
# Equivalent SQL:
# SELECT artist.name FROM artist
This is a shortcut for select and as (see below). columns cannot be used together with select and as.
Using database functions or stored procedures
The combination of select and as can be used to return the result of a database function or stored procedure as a column value. You use select to specify the source for your column value (e.g. a column name, function, or stored procedure name). You then use as to set the column name you will use to access the returned value:
my $rs = $schema->resultset('Artist')->search(
{},
{
select => [ 'name', { LENGTH => 'name' } ],
as => [qw/ name name_length /],
}
);
# Equivalent SQL:
# SELECT name, LENGTH( name )
# FROM artist
Note that the as attribute has absolutely nothing to do with the SQL syntax SELECT foo AS bar (see the documentation in "ATTRIBUTES" in DBIx::Class::ResultSet). If your alias exists as a column in your base class (i.e. it was added with add_columns), you just access it as normal. Our Artist class has a name column, so we just use the name accessor:
my $artist = $rs->first();
my $name = $artist->name();
If on the other hand the alias does not correspond to an existing column, you have to fetch the value using the get_column accessor:
my $name_length = $artist->get_column('name_length');
If you don't like using get_column, you can always create an accessor for any of your aliases using either of these:
# Define accessor manually:
sub name_length { shift->get_column('name_length'); }
# Or use DBIx::Class::AccessorGroup:
__PACKAGE__->mk_group_accessors('column' => 'name_length');
SELECT DISTINCT with multiple columns
my $rs = $schema->resultset('Foo')->search(
{},
{
select => [
{ distinct => [ $source->columns ] }
],
as => [ $source->columns ] # remember 'as' is not the same as SQL AS :-)
}
);
SELECT COUNT(DISTINCT colname)
my $rs = $schema->resultset('Foo')->search(
  {},
  {
    select => [
      { count => { distinct => 'colname' } }
    ],
    as => [ 'count' ]
  }
);
my $count = $rs->next->get_column('count');
Grouping results
DBIx::Class supports GROUP BY as follows:
my $rs = $schema->resultset('Artist')->search(
{},
{
join => [qw/ cds /],
select => [ 'name', { count => 'cds.cdid' } ],
as => [qw/ name cd_count /],
group_by => [qw/ name /]
}
);
# Equivalent SQL:
# SELECT name, COUNT( cds.cdid ) FROM artist me
# LEFT JOIN cd cds ON ( cds.artist = me.artistid )
# GROUP BY name
Please see "ATTRIBUTES" in DBIx::Class::ResultSet documentation if you are in any way unsure about the use of the attributes above ( join
, select
, as
and group_by
).
Predefined searches
You can write your own DBIx::Class::ResultSet class by inheriting from it and defining frequently used searches as methods:
package My::DBIC::ResultSet::CD;
use strict;
use warnings;
use base 'DBIx::Class::ResultSet';
sub search_cds_ordered {
my ($self) = @_;
return $self->search(
{},
{ order_by => 'name DESC' },
);
}
1;
To use your resultset, first tell DBIx::Class to create an instance of it for you, in your My::DBIC::Schema::CD class:
__PACKAGE__->resultset_class('My::DBIC::ResultSet::CD');
Then call your new method in your code:
my $ordered_cds = $schema->resultset('CD')->search_cds_ordered();
Using SQL functions on the left hand side of a comparison
Using SQL functions on the left hand side of a comparison is generally not a good idea since it requires a scan of the entire table. However, it can be accomplished with DBIx::Class when necessary.
If you do not have quoting on, simply include the function in your search specification as you would any column:
$rs->search({ 'YEAR(date_of_birth)' => 1979 });
With quoting on, or for a more portable solution, use the where attribute:
$rs->search({}, { where => \'YEAR(date_of_birth) = 1979' });
JOINS AND PREFETCHING
Using joins and prefetch
You can use the join attribute to allow searching on, or sorting your results by, one or more columns in a related table. To return all CDs matching a particular artist name:
my $rs = $schema->resultset('CD')->search(
{
'artist.name' => 'Bob Marley'
},
{
join => [qw/artist/], # join the artist table
}
);
# Equivalent SQL:
# SELECT cd.* FROM cd
# JOIN artist ON cd.artist = artist.id
# WHERE artist.name = 'Bob Marley'
If required, you can now sort on any column in the related tables by including it in your order_by attribute:
my $rs = $schema->resultset('CD')->search(
{
'artist.name' => 'Bob Marley'
},
{
join => [qw/ artist /],
order_by => [qw/ artist.name /]
}
);
# Equivalent SQL:
# SELECT cd.* FROM cd
# JOIN artist ON cd.artist = artist.id
# WHERE artist.name = 'Bob Marley'
# ORDER BY artist.name
Note that the join attribute should only be used when you need to search or sort using columns in a related table. Joining related tables when you only need columns from the main table will make performance worse!
Now let's say you want to display a list of CDs, each with the name of the artist. The following will work fine:
while (my $cd = $rs->next) {
print "CD: " . $cd->title . ", Artist: " . $cd->artist->name;
}
There is a problem, however. We have searched both the cd and artist tables in our main query, but we have only returned data from the cd table. To get the artist name for any of the CD objects returned, DBIx::Class will go back to the database:
SELECT artist.* FROM artist WHERE artist.id = ?
A statement like the one above will run for each and every CD returned by our main query. Five CDs, five extra queries. A hundred CDs, one hundred extra queries!
Thankfully, DBIx::Class has a prefetch attribute to solve this problem. This allows you to fetch results from related tables in advance:
my $rs = $schema->resultset('CD')->search(
{
'artist.name' => 'Bob Marley'
},
{
join => [qw/ artist /],
order_by => [qw/ artist.name /],
prefetch => [qw/ artist /] # return artist data too!
}
);
# Equivalent SQL (note SELECT from both "cd" and "artist"):
# SELECT cd.*, artist.* FROM cd
# JOIN artist ON cd.artist = artist.id
# WHERE artist.name = 'Bob Marley'
# ORDER BY artist.name
The code to print the CD list remains the same:
while (my $cd = $rs->next) {
print "CD: " . $cd->title . ", Artist: " . $cd->artist->name;
}
DBIx::Class has now prefetched all matching data from the artist table, so no additional SQL statements are executed. You now have a much more efficient query.
Note that as of DBIx::Class 0.05999_01, prefetch can be used with has_many relationships.
Also note that prefetch should only be used when you know you will definitely use data from a related table. Pre-fetching related tables when you only need columns from the main table will make performance worse!
Multi-step joins
Sometimes you want to join more than one relationship deep. In this example, we want to find all Artist objects that have CDs whose LinerNotes contain a specific string:
# Relationships defined elsewhere:
# Artist->has_many('cds' => 'CD', 'artist');
# CD->has_one('liner_notes' => 'LinerNotes', 'cd');
my $rs = $schema->resultset('Artist')->search(
{
'liner_notes.notes' => { 'like', '%some text%' },
},
{
join => {
'cds' => 'liner_notes'
}
}
);
# Equivalent SQL:
# SELECT artist.* FROM artist
# JOIN ( cd ON artist.id = cd.artist )
# JOIN ( liner_notes ON cd.id = liner_notes.cd )
# WHERE liner_notes.notes LIKE '%some text%'
Joins can be nested to an arbitrary level. So if we decide later that we want to reduce the number of Artists returned based on who wrote the liner notes:
# Relationship defined elsewhere:
# LinerNotes->belongs_to('author' => 'Person');
my $rs = $schema->resultset('Artist')->search(
{
'liner_notes.notes' => { 'like', '%some text%' },
'author.name' => 'A. Writer'
},
{
join => {
'cds' => {
'liner_notes' => 'author'
}
}
}
);
# Equivalent SQL:
# SELECT artist.* FROM artist
# JOIN ( cd ON artist.id = cd.artist )
# JOIN ( liner_notes ON cd.id = liner_notes.cd )
# JOIN ( author ON author.id = liner_notes.author )
# WHERE liner_notes.notes LIKE '%some text%'
# AND author.name = 'A. Writer'
Multi-step prefetch
From 0.04999_05 onwards, prefetch can be nested more than one relationship deep using the same syntax as a multi-step join:
my $rs = $schema->resultset('Tag')->search(
{},
{
prefetch => {
cd => 'artist'
}
}
);
# Equivalent SQL:
# SELECT tag.*, cd.*, artist.* FROM tag
# JOIN cd ON tag.cd = cd.cdid
# JOIN artist ON cd.artist = artist.artistid
Now accessing our cd and artist relationships does not need additional SQL statements:
my $tag = $rs->first;
print $tag->cd->artist->name;
ROW-LEVEL OPERATIONS
Retrieving a row object's Schema
It is possible to get a Schema object from a row object like so:
my $schema = $cd->result_source->schema;
# use the schema as normal:
my $artist_rs = $schema->resultset('Artist');
This can be useful when you don't want to pass around a Schema object to every method.
Getting the value of the primary key for the last database insert
AKA getting last_insert_id
If you are using PK::Auto (which is a core component as of 0.07), this is straightforward:
my $foo = $rs->create(\%blah);
# do more stuff
my $id = $foo->id; # foo->my_primary_key_field will also work.
If you are not using autoincrementing primary keys, this will probably not work, but then you already know the value of the last primary key anyway.
Stringification
Employ the standard stringification technique by using the overload module.
To make an object stringify itself as a single column, use something like this (replace name with the column/method of your choice):
use overload '""' => sub { shift->name}, fallback => 1;
For more complex stringification, you can use an anonymous subroutine:
use overload '""' => sub { $_[0]->name . ", " .
$_[0]->address }, fallback => 1;
Stringification Example
Suppose we have two tables: Product and Category. The table specifications are:
Product(id, Description, category)
Category(id, Description)
category is a foreign key into the Category table.
If you have a Product object $obj and write something like
print $obj->category
things will not work as expected.
To obtain, for example, the category description, you should add this stringification overload to the class defining the Category table:
use overload "" => sub {
my $self = shift;
return $self->Description;
}, fallback => 1;
Want to know if find_or_create found or created a row?
Just use find_or_new instead, then check in_storage:
my $obj = $rs->find_or_new({ blah => 'blarg' });
unless ($obj->in_storage) {
$obj->insert;
# do whatever else you wanted if it was a new row
}
Dynamic Sub-classing DBIx::Class proxy classes
AKA multi-class object inflation from one table
DBIx::Class classes are proxy classes, therefore some different techniques need to be employed for more than basic subclassing. In this example we have a single user table that carries a boolean bit for admin. We would like to give the admin users' objects (DBIx::Class::Row) the same methods as a regular user, but also special admin-only methods. It doesn't make sense to create two separate proxy-class files for this; we would be copying all the user methods into the Admin class. There is a cleaner way to accomplish this.
Overriding the inflate_result method within the User proxy-class gives us the effect we want. This method is called by DBIx::Class::ResultSet when inflating a result from storage. So we grab the object being returned, inspect the values we are looking for, bless it if it's an admin object, and then return it. See the example below:
Schema Definition
package DB::Schema;
use base qw/DBIx::Class::Schema/;
__PACKAGE__->load_classes(qw/User/);
Proxy-Class definitions
package DB::Schema::User;
use strict;
use warnings;
use base qw/DBIx::Class/;
### Define what our admin class is, for ensure_class_loaded()
my $admin_class = __PACKAGE__ . '::Admin';
__PACKAGE__->load_components(qw/Core/);
__PACKAGE__->table('users');
__PACKAGE__->add_columns(qw/user_id email password
firstname lastname active
admin/);
__PACKAGE__->set_primary_key('user_id');
sub inflate_result {
my $self = shift;
my $ret = $self->next::method(@_);
if ( $ret->admin ) { ### If this is an admin, rebless for the extra functions
$self->ensure_class_loaded( $admin_class );
bless $ret, $admin_class;
}
return $ret;
}
sub hello {
print "I am a regular user.\n";
return ;
}
package DB::Schema::User::Admin;
use strict;
use warnings;
use base qw/DB::Schema::User/;
sub hello
{
print "I am an admin.\n";
return;
}
sub do_admin_stuff
{
print "I am doing admin stuff\n";
return ;
}
Test File test.pl
use warnings;
use strict;
use DB::Schema;
my $user_data = { email => 'someguy@place.com',
password => 'pass1',
admin => 0 };
my $admin_data = { email => 'someadmin@adminplace.com',
password => 'pass2',
admin => 1 };
my $schema = DB::Schema->connection('dbi:Pg:dbname=test');
$schema->resultset('User')->create( $user_data );
$schema->resultset('User')->create( $admin_data );
### Now we search for them
my $user = $schema->resultset('User')->single( $user_data );
my $admin = $schema->resultset('User')->single( $admin_data );
print ref $user, "\n";
print ref $admin, "\n";
print $user->password , "\n"; # pass1
print $admin->password , "\n";# pass2; inherited from User
print $user->hello , "\n";# I am a regular user.
print $admin->hello, "\n";# I am an admin.
### The statement below will NOT print
print "I can do admin stuff\n" if $user->can('do_admin_stuff');
### The statement below will print
print "I can do admin stuff\n" if $admin->can('do_admin_stuff');
Skip object creation for faster results
DBIx::Class is not built for speed; it's built for convenience and ease of use. But sometimes you just need to get the data and skip the fancy objects.
To do this, simply use DBIx::Class::ResultClass::HashRefInflator.
my $rs = $schema->resultset('CD');
$rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
my $hash_ref = $rs->find(1);
Wasn't that easy?
Get raw data for blindingly fast results
If the HashRefInflator solution above is not fast enough for you, you can fetch the data through the resultset's cursor, which returns values exactly as they come out of the database with none of the convenience methods wrapped around them.
This is used like so:
my $cursor = $rs->cursor;
while (my @vals = $cursor->next) {
  # use $vals[0..n] here
}
You will need to map the array offsets to particular columns (you can use the select attribute of search() to force the column ordering).
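For example, a minimal sketch (assuming the CD source used elsewhere in this document has cdid, title and year columns):
my $fast_rs = $schema->resultset('CD')->search(
  {},
  { select => [qw/ cdid title year /] }   # fixes the order of the returned values
);
my $cursor = $fast_rs->cursor;
while (my ($cdid, $title, $year) = $cursor->next) {
  print "$cdid: $title ($year)\n";
}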
RESULTSET OPERATIONS
Getting Schema from a ResultSet
To get the schema object from a result set, do the following:
$rs->result_source->schema
Getting Columns Of Data
AKA Aggregating Data
If you want to find the sum of a particular column there are several ways; the obvious one is to use search:
my $rs = $schema->resultset('Items')->search(
{},
{
select => [ { sum => 'Cost' } ],
as => [ 'total_cost' ], # remember this 'as' is for DBIx::Class::ResultSet not SQL
}
);
my $tc = $rs->first->get_column('total_cost');
Or, you can use a DBIx::Class::ResultSetColumn, which gets returned when you ask the ResultSet for a column using get_column:
my $cost = $schema->resultset('Items')->get_column('Cost');
my $tc = $cost->sum;
With this you can also do:
my $minvalue = $cost->min;
my $maxvalue = $cost->max;
Or just iterate through the values of this column only:
while ( my $c = $cost->next ) {
print $c;
}
foreach my $c ($cost->all) {
print $c;
}
ResultSetColumn only has a limited number of built-in functions. If you need one that it doesn't have, you can use the func method instead:
my $avg = $cost->func('AVERAGE');
This will cause the following SQL statement to be run:
SELECT AVERAGE(Cost) FROM Items me
Which will of course only work if your database supports this function. See DBIx::Class::ResultSetColumn for more documentation.
USING RELATIONSHIPS
Create a new row in a related table
$book->create_related('author', { name => 'Fred' });
Search in a related table
This searches only for books named 'Titanic' by the author in $author.
$author->search_related('books', { name => 'Titanic' });
Delete data in a related table
This deletes only the book named 'Titanic' by the author in $author.
$author->delete_related('books', { name => 'Titanic' });
Ordering a relationship result set
If you always want a relation to be ordered, you can specify this when you create the relationship.
To order $book->pages by descending page_number:
Book->has_many('pages' => 'Page', 'book', { order_by => \'page_number DESC'} );
Many-to-many relationships
This is straightforward using ManyToMany:
package My::DB;
# ... set up connection ...
package My::User;
use base 'My::DB';
__PACKAGE__->table('user');
__PACKAGE__->add_columns(qw/id name/);
__PACKAGE__->set_primary_key('id');
__PACKAGE__->has_many('user_address' => 'My::UserAddress', 'user');
__PACKAGE__->many_to_many('addresses' => 'user_address', 'address');
package My::UserAddress;
use base 'My::DB';
__PACKAGE__->table('user_address');
__PACKAGE__->add_columns(qw/user address/);
__PACKAGE__->set_primary_key(qw/user address/);
__PACKAGE__->belongs_to('user' => 'My::User');
__PACKAGE__->belongs_to('address' => 'My::Address');
package My::Address;
use base 'My::DB';
__PACKAGE__->table('address');
__PACKAGE__->add_columns(qw/id street town area_code country/);
__PACKAGE__->set_primary_key('id');
__PACKAGE__->has_many('user_address' => 'My::UserAddress', 'address');
__PACKAGE__->many_to_many('users' => 'user_address', 'user');
$rs = $user->addresses(); # get all addresses for a user
$rs = $address->users(); # get all users for an address
TRANSACTIONS
As of version 0.04001, there is improved transaction support in DBIx::Class::Storage and DBIx::Class::Schema. Here is an example of the recommended way to use it:
my $genus = $schema->resultset('Genus')->find(12);
my $coderef2 = sub {
$genus->extinct(1);
$genus->update;
};
my $coderef1 = sub {
$genus->add_to_species({ name => 'troglodyte' });
$genus->wings(2);
$genus->update;
$schema->txn_do($coderef2); # Can have a nested transaction
return $genus->species;
};
my $rs;
eval {
$rs = $schema->txn_do($coderef1);
};
if ($@) {                            # Transaction failed
  die "the sky is falling!"
    if ($@ =~ /Rollback failed/);    # Rollback failed
  deal_with_failed_transaction();
}
Nested transactions will work as expected. That is, only the outermost transaction will actually issue a commit to the $dbh, and a rollback at any level of any transaction will cause the entire nested transaction to fail. Support for savepoints and for true nested transactions (for databases that support them) will hopefully be added in the future.
SQL
Creating Schemas From An Existing Database
DBIx::Class::Schema::Loader will connect to a database and create a DBIx::Class::Schema and associated sources by examining the database.
The recommended way of achieving this is to use the make_schema_at method:
perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:./lib -e 'make_schema_at("My::Schema", { debug => 1 }, [ "dbi:Pg:dbname=foo","postgres" ])'
This will create a tree of files rooted at ./lib/My/Schema/ containing source definitions for all the tables found in the foo database.
Creating DDL SQL
The following functionality requires you to have SQL::Translator (also known as "SQL Fairy") installed.
To create a set of database-specific .sql files for the above schema:
my $schema = My::Schema->connect($dsn);
$schema->create_ddl_dir(['MySQL', 'SQLite', 'PostgreSQL'],
'0.1',
'./dbscriptdir/'
);
This will create schema files for MySQL, SQLite and PostgreSQL in the given directory. If the version and directory arguments are omitted, the files are created in the current directory using the $VERSION from your Schema.pm.
To create a new database using the schema:
my $schema = My::Schema->connect($dsn);
$schema->deploy({ add_drop_tables => 1});
To import created .sql files using the mysql client:
mysql -h "host" -D "database" -u "user" -p < My_Schema_1.0_MySQL.sql
To create ALTER TABLE conversion scripts to update a database to a newer version of your schema at a later point, first set a new $VERSION in your Schema file, then:
my $schema = My::Schema->connect($dsn);
$schema->create_ddl_dir(['MySQL', 'SQLite', 'PostgreSQL'],
'0.2',
'./dbscriptdir/',
'0.1'
);
This will produce new database-specific .sql files for the new version of the schema, plus scripts to convert from version 0.1 to 0.2. This requires that the files for 0.1 as created above are available in the given directory to diff against.
select from dual
Dummy tables are needed by some databases to allow calling functions or expressions that aren't based on table content; for examples of how this applies to various database types, see http://troels.arvin.dk/db/rdbms/#other-dummy_table.
Note: If you're using Oracle's DUAL table, don't ever do anything other than a SELECT; if you INSERT, UPDATE or DELETE on your DUAL table you *will* break your database.
Make a table class as you would for any other table:
package MyAppDB::Dual;
use strict;
use warnings;
use base 'DBIx::Class';
__PACKAGE__->load_components("Core");
__PACKAGE__->table("Dual");
__PACKAGE__->add_columns(
"dummy",
{ data_type => "VARCHAR2", is_nullable => 0, size => 1 },
);
Once you've loaded your table class, select from it using select and as instead of columns:
my $rs = $schema->resultset('Dual')->search(undef,
{ select => [ 'sysdate' ],
as => [ 'now' ]
},
);
All you have to do now is be careful how you access your resultset; the code below will not work because there is no column called 'now' in the Dual table class:
while (my $dual = $rs->next) {
print $dual->now."\n";
}
Can't locate object method "now" via package "MyAppDB::Dual" at headshot.pl
line 23.
You could of course use 'dummy' in as instead of 'now', or use add_columns to add whatever you want to select from dual to your Dual class, but that's just silly. Instead, use get_column:
while (my $dual = $rs->next) {
print $dual->get_column('now')."\n";
}
Or use a cursor:
my $cursor = $rs->cursor;
while (my @vals = $cursor->next) {
print $vals[0]."\n";
}
Or use DBIx::Class::ResultClass::HashRefInflator:
$rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
while ( my $dual = $rs->next ) {
print $dual->{now}."\n";
}
Here are some example select conditions to illustrate the different syntax you could use for doing stuff like oracles.heavily(nested(functions_can('take', 'lots'), OF), 'args'):
# get a sequence value
select => [ 'A_SEQ.nextval' ],
# get create table sql
select => [ { 'dbms_metadata.get_ddl' => [ "'TABLE'", "'ARTIST'" ]} ],
# get a random num between 0 and 100
select => [ { "trunc" => [ { "dbms_random.value" => [0,100] } ]} ],
# what year is it?
select => [ { 'extract' => [ \'year from sysdate' ] } ],
# do some math
select => [ {'round' => [{'cos' => [ \'180 * 3.14159265359/180' ]}]}],
# which day of the week were you born on?
select => [{'to_char' => [{'to_date' => [ "'25-DEC-1980'", "'dd-mon-yyyy'"
]}, "'day'"]}],
# select 16 rows from dual
select => [ "'hello'" ],
as => [ 'world' ],
group_by => [ 'cube( 1, 2, 3, 4 )' ],
Adding Indexes And Functions To Your SQL
Often you will want indexes on columns in your table to speed up searching. To do this, create a method called sqlt_deploy_hook in the relevant source class:
package My::Schema::Artist;
__PACKAGE__->table('artist');
__PACKAGE__->add_columns(id => { ... }, name => { ... });
sub sqlt_deploy_hook {
my ($self, $sqlt_table) = @_;
$sqlt_table->add_index(name => 'idx_name', fields => ['name']);
}
1;
Sometimes you might want to change the index depending on the type of the database for which SQL is being generated:
(my $db_type = $sqlt_table->schema->translator->producer_type)
  =~ s/^SQL::Translator::Producer:://;
You can also add hooks at the schema level, for example to stop certain tables being created:
package My::Schema;
...
sub sqlt_deploy_hook {
my ($self, $sqlt_schema) = @_;
$sqlt_schema->drop_table('table_name');
}
You could also add views or procedures to the output using "add_view" in SQL::Translator::Schema or "add_procedure" in SQL::Translator::Schema.
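For example, here is a minimal sketch of adding a view from a schema-level hook (the view name and SQL are purely illustrative):
package My::Schema;
...
sub sqlt_deploy_hook {
  my ($self, $sqlt_schema) = @_;
  $sqlt_schema->add_view(
    name => 'cd_with_artist',   # hypothetical view name
    sql  => 'SELECT cd.*, artist.name AS artist_name '
          . 'FROM cd JOIN artist ON cd.artist = artist.artistid',
  );
}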
Schema versioning
The following example shows simplistically how you might use DBIx::Class to deploy versioned schemas to your customers. The basic process is as follows:
Create a DBIx::Class schema
Save the schema
Deploy to customers
Modify schema to change functionality
Deploy update to customers
Create a DBIx::Class schema
This can either be done manually, or generated from an existing database as described under "Creating Schemas From An Existing Database".
Save the schema
Call "create_ddl_dir" in DBIx::Class::Schema as above under "Creating DDL SQL".
Deploy to customers
There are several ways you could deploy your schema. These are probably beyond the scope of this recipe, but might include:
Require customer to apply manually using their RDBMS.
Package along with your app, making database dump/schema update/tests all part of your install.
Modify the schema to change functionality
As your application evolves, it may be necessary to modify your schema to change functionality. Once the changes are made to your schema in DBIx::Class, export the modified schema and the conversion scripts as in "Creating DDL SQL".
Deploy update to customers
Add the DBIx::Class::Schema::Versioned schema component to your Schema class. This will add a new table to your database called SchemaVersions, which will keep track of which version is installed and warn if the user tries to run a newer schema version than the database thinks it has.
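A minimal sketch of what that might look like (see DBIx::Class::Schema::Versioned for the authoritative setup; the upgrade directory path is illustrative):
package My::Schema;
use base 'DBIx::Class::Schema';
our $VERSION = '0.2';
__PACKAGE__->load_components(qw/ Schema::Versioned /);
__PACKAGE__->upgrade_directory('./dbscriptdir/');  # where the .sql and diff files live
__PACKAGE__->load_classes();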
Alternatively, you can send the conversion sql scripts to your customers as above.
Setting quoting for the generated SQL
If the database contains column names with spaces and/or reserved words, they need to be quoted in the SQL queries. This is done using:
__PACKAGE__->storage->sql_maker->quote_char([ qw/[ ]/] );
__PACKAGE__->storage->sql_maker->name_sep('.');
The first sets the quote characters: either a pair of matching brackets, or a " or ':
__PACKAGE__->storage->sql_maker->quote_char('"');
Check the documentation of your database for the correct quote characters to use. name_sep needs to be set to allow the SQL generator to put the quotes in the correct place.
In most cases you should set these as part of the arguments passed to "connect" in DBIx::Class::Schema:
my $schema = My::Schema->connect(
'dbi:mysql:my_db',
'db_user',
'db_password',
{
quote_char => '"',
name_sep => '.'
}
);
Setting limit dialect for SQL::Abstract::Limit
In some cases, SQL::Abstract::Limit cannot determine the dialect of the remote SQL server by looking at the database handle. This is a common problem when using DBD::JDBC, since the DBD driver only knows that it has a Java driver available, not which JDBC driver the Java component has loaded. The following specifically sets the limit dialect to Microsoft SQL Server (see SQL::Abstract::Limit for more dialect names):
__PACKAGE__->storage->sql_maker->limit_dialect('mssql');
The JDBC bridge is one way of getting access to an MSSQL server from a platform that Microsoft doesn't deliver native client libraries for (e.g. Linux).
The limit dialect can also be set at connect time by specifying a limit_dialect key in the final hash as shown above.
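For example, a minimal sketch (the $dsn, $user and $password values are placeholders):
my $schema = My::Schema->connect(
  $dsn,
  $user,
  $password,
  { limit_dialect => 'mssql' },
);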
BOOTSTRAPPING/MIGRATING
Easy migration from class-based to schema-based setup
You want to start using the schema-based approach to DBIx::Class (see SchemaIntro.pod), but have an established class-based setup with lots of existing classes that you don't want to move by hand. Try this nifty script instead:
use MyDB;
use SQL::Translator;
my $schema = MyDB->schema_instance;
my $translator = SQL::Translator->new(
debug => $debug || 0,
trace => $trace || 0,
no_comments => $no_comments || 0,
show_warnings => $show_warnings || 0,
add_drop_table => $add_drop_table || 0,
validate => $validate || 0,
parser_args => {
'DBIx::Schema' => $schema,
},
producer_args => {
'prefix' => 'My::Schema',
},
);
$translator->parser('SQL::Translator::Parser::DBIx::Class');
$translator->producer('SQL::Translator::Producer::DBIx::Class::File');
my $output = $translator->translate(@args) or die
"Error: " . $translator->error;
print $output;
You could use Module::Find to search for all subclasses in the MyDB::* namespace, which is currently left as an exercise for the reader.
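A minimal sketch of that search, assuming Module::Find is installed and your classes live under the MyDB::* namespace used above:
use Module::Find;
# find and load every class under the MyDB:: namespace
my @classes = useall 'MyDB';
print "$_\n" for @classes;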
OVERLOADING METHODS
DBIx::Class uses the Class::C3 package, which provides for redispatch of method calls, useful for things like default values and triggers. You have to use calls to next::method to overload methods. More information on using Class::C3 with DBIx::Class can be found in DBIx::Class::Manual::Component.
Setting default values for a row
It's as simple as overriding the new method. Note the use of next::method.
sub new {
my ( $class, $attrs ) = @_;
$attrs->{foo} = 'bar' unless defined $attrs->{foo};
my $new = $class->next::method($attrs);
return $new;
}
For more information about next::method, look in the Class::C3 documentation. See also DBIx::Class::Manual::Component for more ways to write your own base classes to do this.
People looking for ways to do "triggers" with DBIx::Class are probably just looking for this.
Changing one field whenever another changes
For example, say that you have three columns, id, number, and squared. You would like to make changes to number and have squared be automagically set to the value of number squared. You can accomplish this by overriding store_column:
sub store_column {
my ( $self, $name, $value ) = @_;
if ($name eq 'number') {
$self->squared($value * $value);
}
$self->next::method($name, $value);
}
Note that the hard work is done by the call to next::method, which redispatches your call to store_column in the superclass(es).
Automatically creating related objects
You might have a class Artist which has many CDs. Suppose you want to create a CD object every time you insert an Artist object. You can accomplish this by overriding insert on your objects:
sub insert {
my ( $self, @args ) = @_;
$self->next::method(@args);
$self->cds->new({})->fill_from_artist($self)->insert;
return $self;
}
where fill_from_artist is a method you specify in CD which sets values in CD based on the data in the Artist object you pass in.
Wrapping/overloading a column accessor
Problem:
Say you have a table "Camera" and want to associate a description with each camera. For most cameras, you'll be able to generate the description from the other columns. However, in a few special cases you may want to associate a custom description with a camera.
Solution:
In your database schema, define a description field in the "Camera" table that can contain text and null values.
In DBIC, we'll overload the column accessor to provide a sane default if no custom description is defined. The accessor will either return or generate the description, depending on whether the field is null or not.
First, in your "Camera" schema class, define the description field as follows:
__PACKAGE__->add_columns(description => { accessor => '_description' });
Next, we'll define the accessor-wrapper subroutine:
sub description {
my $self = shift;
# If there is an update to the column, we'll let the original accessor
# deal with it.
return $self->_description(@_) if @_;
# Fetch the column value.
my $description = $self->_description;
# If there's something in the description field, then just return that.
return $description if defined $description && length $description;
# Otherwise, generate a description.
return $self->generate_description;
}
DEBUGGING AND PROFILING
DBIx::Class objects with Data::Dumper
Data::Dumper can be a very useful tool for debugging, but sometimes it can be hard to find the pertinent data in all the data it can generate. Specifically, if one naively tries to use it like so,
use Data::Dumper;
my $cd = $schema->resultset('CD')->find(1);
print Dumper($cd);
several pages worth of data from the CD object's schema and result source will be dumped to the screen. Since usually one is only interested in a few column values of the object, this is not very helpful.
Luckily, it is possible to modify the data before Data::Dumper outputs it. Simply define a hook that Data::Dumper will call on the object before dumping it. For example,
package My::DB::CD;
sub _dumper_hook {
$_[0] = bless {
%{ $_[0] },
result_source => undef,
}, ref($_[0]);
}
[...]
use Data::Dumper;
local $Data::Dumper::Freezer = '_dumper_hook';
my $cd = $schema->resultset('CD')->find(1);
print Dumper($cd);
# dumps $cd without its ResultSource
If the structure of your schema is such that there is a common base class for all your table classes, simply put a method similar to _dumper_hook in the base class and set $Data::Dumper::Freezer to its name, and Data::Dumper will automagically clean up your data before printing it. See "EXAMPLES" in Data::Dumper for more information.
Profiling
When you enable DBIx::Class::Storage's debugging it prints the SQL executed as well as notifications of query completion and transaction begin/commit. If you'd like to profile the SQL you can subclass the DBIx::Class::Storage::Statistics class and write your own profiling mechanism:
package My::Profiler;
use strict;
use base 'DBIx::Class::Storage::Statistics';
use Time::HiRes qw(time);
my $start;
sub query_start {
my $self = shift();
my $sql = shift();
my @params = @_;
$self->print("Executing $sql: ".join(', ', @params)."\n");
$start = time();
}
sub query_end {
my $self = shift();
my $sql = shift();
my @params = @_;
my $elapsed = sprintf("%0.4f", time() - $start);
$self->print("Execution took $elapsed seconds.\n");
$start = undef;
}
1;
You can then install that class as the debugging object:
__PACKAGE__->storage->debugobj(My::Profiler->new());
__PACKAGE__->storage->debug(1);
A more complicated example might involve storing each execution of SQL in a hash of arrays (here %calls is assumed to be declared at an enclosing scope, e.g. my %calls; at file level):
sub query_end {
my $self = shift();
my $sql = shift();
my @params = @_;
my $elapsed = time() - $start;
push(@{ $calls{$sql} }, {
params => \@params,
elapsed => $elapsed
});
}
You could then create average, high and low execution times for an SQL statement and dig down to see if certain parameters cause aberrant behavior. You might want to check out DBIx::Class::QueryLog as well.
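A minimal sketch of summarising those timings, using the %calls hash from the example above (List::Util ships with Perl):
use List::Util qw(min max sum);
for my $sql (keys %calls) {
  my @times = map { $_->{elapsed} } @{ $calls{$sql} };
  printf "%s\n  runs: %d  avg: %.4fs  min: %.4fs  max: %.4fs\n",
    $sql, scalar @times, sum(@times) / @times, min(@times), max(@times);
}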