NAME
WWW::Mechanize::Cookbook - Recipes for using WWW::Mechanize
INTRODUCTION
First, please note that many of these recipes are possible just using LWP::UserAgent. Since WWW::Mechanize is a subclass of LWP::UserAgent, whatever works on LWP::UserAgent should also work on WWW::Mechanize. See the lwpcook man page included with the libwww-perl distribution.
BASICS
Launch the WWW::Mechanize browser
    use WWW::Mechanize;
    my $mech = WWW::Mechanize->new( autocheck => 1 );
The autocheck => 1 tells Mechanize to die if any request fails, so you don't have to check each call by hand. It's easier that way. If you want to do your own error checking, leave it out (or pass autocheck => 0 explicitly).
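Here is a minimal sketch of doing your own error checking with autocheck switched off; the URL is only an example.

    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new( autocheck => 0 );
    $mech->get( "http://search.cpan.org" );

    if ( !$mech->success ) {
        # status() is the HTTP status code of the last response
        die "GET failed with status ", $mech->status, "\n";
    }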
Fetch a page
$mech->get( "http://search.cpan.org" );
print $mech->content;
$mech->content contains the raw HTML from the web page. It is not parsed or handled in any way, at least not by the content method.
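If you need more than the raw HTML, you can feed the content to any HTML parser yourself. Here is a sketch using HTML::TreeBuilder (a separate CPAN module, not part of Mechanize) to pull out the page title; it assumes the page actually has a <title> element.

    use HTML::TreeBuilder;

    my $tree  = HTML::TreeBuilder->new_from_content( $mech->content );
    my $title = $tree->look_down( _tag => 'title' );
    print $title->as_text, "\n" if $title;
    $tree->delete;    # free the parse tree when done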
Fetch a page into a file
Sometimes you want to dump your results directly into a file. For example, there's no reason to read a JPEG into memory if you're only going to write it out immediately. This can also help with memory issues on large files.
$mech->get( "http://www.cpan.org/src/stable.tar.gz",
":content_file" => "stable.tar.gz" );
Fetch a password-protected page
Generally, just call credentials before fetching the page.
    $mech->credentials( 'admin' => 'password' );
    $mech->get( 'http://10.11.12.13/password.html' );
    print $mech->content();
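Since Mechanize is a subclass of LWP::UserAgent, the four-argument form of credentials also works when you want to restrict the username and password to a particular host and realm; the realm name below is just an illustration.

    # host:port, realm, username, password
    $mech->credentials( '10.11.12.13:80', 'Admin Area', 'admin', 'password' );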
LINKS
Find all image links
Find all links that point to a JPEG, GIF or PNG.
    my @links = $mech->find_all_links(
        tag => "a", url_regex => qr/\.(jpe?g|gif|png)$/i );
Find all download links
Find all links that have the word "download" in them.
    my @links = $mech->find_all_links(
        tag => "a", text_regex => qr/\bdownload\b/i );
APPLICATIONS
Check all pages on a web site
Use Abe Timmerman's WWW::CheckSite, <http://search.cpan.org/dist/WWW-CheckSite/>.
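If you would rather roll your own with plain Mechanize, here is a minimal single-host crawler sketch. The starting URL is hypothetical, autocheck is left off so that broken links get reported instead of killing the program, and it makes no attempt at politeness (robots.txt, throttling).

    use WWW::Mechanize;
    use URI;

    my $start = URI->new( "http://www.example.com/" );   # hypothetical site
    my $mech  = WWW::Mechanize->new( autocheck => 0 );

    my %seen  = ( $start => 1 );
    my @queue = ( $start );

    while ( my $url = shift @queue ) {
        $mech->get( $url );
        printf "%s %s\n", $mech->status, $url;
        next unless $mech->success && $mech->is_html;

        for my $link ( $mech->find_all_links() ) {
            my $abs = $link->url_abs;
            next unless $abs->can( 'host' ) && $abs->host eq $start->host;
            push @queue, $abs unless $seen{$abs}++;
        }
    }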
SEE ALSO
WWW::Mechanize, and the lwpcook man page in the libwww-perl distribution.
AUTHORS
Copyright 2005-2010 Andy Lester <andy@petdance.com>
Later contributions by Peter Scott, Mark Stosberg, and others. See the Acknowledgements section in WWW::Mechanize for more.