Security Advisories (4)
CVE-2010-2253 (2010-07-06)

lwp-download in libwww-perl before 5.835 does not reject downloads to filenames that begin with a . (dot) character, which allows remote servers to create or overwrite files via (1) a 3xx redirect to a URL with a crafted filename or (2) a Content-Disposition header that suggests a crafted filename, and possibly execute arbitrary code as a consequence of writing to a dotfile in a home directory.

CPANSA-libwww-perl-2001-01 (2001-03-14)

If LWP::UserAgent::env_proxy is called in a CGI environment, the case-insensitive lookup of "http_proxy" also matches "HTTP_PROXY", a variable that a web client can trivially set by sending a "Proxy:" request header.
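A minimal defensive sketch of the issue. The helper name below is illustrative, not part of LWP's API: under CGI, the web server maps request headers to HTTP_* environment variables, so a client-supplied "Proxy:" header arrives as $ENV{HTTP_PROXY} and would be picked up by env_proxy.

```perl
use strict;
use warnings;

# Hypothetical helper: drop the attacker-controllable HTTP_PROXY variable
# before trusting the environment. REQUEST_METHOD is set by the web server
# for CGI requests only, so its presence indicates a CGI context.
sub scrub_cgi_proxy_env {
    delete $ENV{HTTP_PROXY} if defined $ENV{REQUEST_METHOD};
}

# After scrubbing, env_proxy can no longer be steered by the client:
#   my $ua = LWP::UserAgent->new;
#   scrub_cgi_proxy_env();
#   $ua->env_proxy;
```

The lowercase "http_proxy" variable is left alone, since CGI header mapping only produces uppercase HTTP_* names.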

CVE-2011-0633 (2011-01-20)

The Net::HTTPS module in libwww-perl (LWP) before 6.00, as used in WWW::Mechanize, LWP::UserAgent, and other products, when running in environments that do not set the If-SSL-Cert-Subject header, does not enable full validation of SSL certificates by default, which allows remote attackers to spoof servers via man-in-the-middle (MITM) attacks involving hostnames that are not properly validated.

CPANSA-libwww-perl-2017-01 (2017-11-06)

LWP::Protocol::file can open an existing file via the file:// scheme. However, the affected versions of LWP use the two-argument form open FILEHANDLE,EXPR, which interprets special characters in the filename and can therefore execute arbitrary commands.
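The distinction can be sketched in a few lines. The safe_read helper below is illustrative, not part of LWP: with two-argument open, a "filename" such as "echo pwned |" is treated as a shell pipe and executed; with three-argument open, the string is treated purely as a filename.

```perl
use strict;
use warnings;

# Two-argument open interprets magic characters in the filename, so
#   open(my $fh, "echo pwned |");
# runs a shell command instead of reading a file of that name.
# Three-argument open with an explicit mode avoids this.
sub safe_read {
    my ($path) = @_;
    open(my $fh, '<', $path) or return undef;  # mode is fixed; no magic
    local $/;                                   # slurp the whole file
    my $data = <$fh>;
    close $fh;
    return $data;
}
```

With the three-argument form, safe_read('echo pwned |') simply fails, because no file of that name exists.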

NAME

WWW::RobotRules - Parse robots.txt files

SYNOPSIS

use WWW::RobotRules;

my $robotsrules = WWW::RobotRules->new('MOMspider/1.0');

$robotsrules->parse($url, $content);

if ($robotsrules->allowed($url)) {
    ...
}

DESCRIPTION

This module parses a "/robots.txt" file as specified in "A Standard for Robot Exclusion", available at http://web.nexor.co.uk/users/mak/doc/robots/norobots.html.

Webmasters can use this file to forbid conforming robots access to parts of their WWW server.

The parsed file is kept as a Perl object that supports methods to check whether a given URL is prohibited.

Note that the same RobotRules object can parse multiple files.

WWW::RobotRules->new('MOMspider/1.0')

The argument given to new() is the name of the robot.

parse($url, $content)

The parse() method takes the URL that was used to retrieve the /robots.txt file, and the contents of the file.

allowed($url)

Returns TRUE if this robot is allowed to retrieve this URL.
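Putting the three methods together, a minimal sketch (the host and rules below are made up for illustration; no network access is needed because parse() accepts the file contents directly):

```perl
use strict;
use warnings;
use WWW::RobotRules;

# Construct rules for a robot named "MOMspider/1.0".
my $rules = WWW::RobotRules->new('MOMspider/1.0');

# robots.txt content as it might have been fetched from the site.
my $robots_txt = <<'EOT';
User-agent: *
Disallow: /private/
EOT

# The URL tells the object which host these rules apply to.
$rules->parse('http://example.com/robots.txt', $robots_txt);

# Check individual URLs against the parsed rules.
print $rules->allowed('http://example.com/index.html')
    ? "index: allowed\n" : "index: denied\n";
print $rules->allowed('http://example.com/private/secret.html')
    ? "private: allowed\n" : "private: denied\n";
```

Since the same object can parse multiple files, the parse() call can be repeated with robots.txt files from other hosts, and allowed() will consult the rules for whichever host the queried URL belongs to.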