NAME

Regression.pm - weighted linear regression package (line+plane fitting)

SYNOPSIS

use Statistics::Regression;

# Create regression object
my $reg = Statistics::Regression->new( 
  3, "sample regression", 
  [ "const", "someX", "someY" ] 
);

# Add data points
$reg->include( 2.0, [ 1.0, 3.0, -1.0 ] );
$reg->include( 1.0, [ 1.0, 5.0, 2.0 ] );
$reg->include( 20.0, [ 1.0, 31.0, 0.0 ] );
$reg->include( 15.0, [ 1.0, 11.0, 2.0 ] );

# Print the result
$reg->print(); 

# Prints the following:
# ****************************************************************
# Regression 'sample regression'
# ****************************************************************
# Theta[0='const']=       0.2950
# Theta[1='someX']=       0.6723
# Theta[2='someY']=       1.0688
# R^2= 0.808, N= 4
# ****************************************************************

# Or, to get the values of the coefficients and R^2
my @theta = $reg->theta;
my $rsq   = $reg->rsq;

DESCRIPTION

Regression.pm is a multivariate linear regression package. That is, it estimates the c coefficients for a line-fit of the type

y = c(0)*x(0) + c(1)*x(1) + c(2)*x(2) + ... + c(k)*x(k)

given a data set of N observations, each with k independent x variables and one y variable. Naturally, N must be greater than k, and preferably considerably greater. Any reasonable undergraduate statistics book will explain what a regression is. Most of the time, the user will provide a constant ('1') as x(0) for each observation in order to allow the regression package to fit an intercept.
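Once the coefficients are estimated, a fitted value is just that sum evaluated at one observation. A minimal sketch (predict is a hypothetical helper written for illustration; it is not a method of the module):

```perl
use strict;
use warnings;

# Hypothetical helper (not part of the module): evaluate
# y-hat = c(0)*x(0) + c(1)*x(1) + ... + c(k)*x(k) at one observation.
sub predict {
    my ( $coef, $x ) = @_;    # two array references of equal length
    my $yhat = 0.0;
    $yhat += $coef->[$_] * $x->[$_] for 0 .. $#{$coef};
    return $yhat;
}

# With coefficients (1, 2, 3) and x = (1, 4, 5), where x(0) = 1 is the
# constant that produces the intercept:
print predict( [ 1, 2, 3 ], [ 1, 4, 5 ] ), "\n";    # 1*1 + 2*4 + 3*5 = 24
```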

ALGORITHM

Original Algorithm (ALGOL-60):

W.  M.  Gentleman, University of Waterloo, "Basic Description
For Large, Sparse Or Weighted Linear Least Squares Problems
(Algorithm AS 75)," Applied Statistics (1974) Vol 23; No. 3
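The core of Gentleman's method is a sequence of planar rotations that fold one weighted observation at a time into a triangular system. The following is a sketch under stated assumptions: the state layout (fields d, rbar, thetabar, sse, n, with rbar stored as a flattened upper triangle and an implicit unit diagonal) follows the paper's notation and is not necessarily identical to the module's internals.

```perl
use strict;
use warnings;

# Sketch of Gentleman's AS 75 update with hypothetical state layout:
# d = diagonal scaling, rbar = flat upper triangle of R (unit diagonal
# implicit), thetabar = transformed y, sse = error sum of squares.
sub reg_new {
    my ($k) = @_;
    return {
        k        => $k,
        d        => [ (0.0) x $k ],
        rbar     => [ (0.0) x ( $k * ( $k - 1 ) / 2 ) ],
        thetabar => [ (0.0) x $k ],
        sse      => 0.0,
        n        => 0,
    };
}

sub reg_include {
    my ( $s, $y, $xref, $w ) = @_;
    $w = 1.0 unless defined $w;
    my @x = @$xref;                # local copy; the rotations modify it
    my $k = $s->{k};
    my $r = 0;                     # running index into the flat rbar
    for my $i ( 0 .. $k - 1 ) {
        last if $w == 0.0;         # remaining rotations would be no-ops
        my $xi = $x[$i];
        if ( $xi == 0.0 ) { $r += $k - $i - 1; next; }
        my $di     = $s->{d}[$i];
        my $dprime = $di + $w * $xi * $xi;
        my $cbar   = $di / $dprime;
        my $sbar   = $w * $xi / $dprime;
        $w = $cbar * $w;
        $s->{d}[$i] = $dprime;
        for my $j ( $i + 1 .. $k - 1 ) {
            my $xj = $x[$j];
            $x[$j] = $xj - $xi * $s->{rbar}[$r];
            $s->{rbar}[$r] = $cbar * $s->{rbar}[$r] + $sbar * $xj;
            $r++;
        }
        my $yy = $y;
        $y = $yy - $xi * $s->{thetabar}[$i];
        $s->{thetabar}[$i] = $cbar * $s->{thetabar}[$i] + $sbar * $yy;
    }
    $s->{sse} += $w * $y * $y;
    return ++$s->{n};
}

# Back-substitution: recover the coefficients from rbar and thetabar.
sub reg_theta {
    my ($s) = @_;
    my $k = $s->{k};
    my @theta = @{ $s->{thetabar} };
    for my $i ( reverse 0 .. $k - 1 ) {
        my $r = $i * $k - $i * ( $i + 1 ) / 2;   # offset of rbar[i][i+1]
        for my $j ( $i + 1 .. $k - 1 ) {
            $theta[$i] -= $s->{rbar}[ $r++ ] * $theta[$j];
        }
    }
    return @theta;
}

# Fit y = 2 + 3*x exactly from three points:
my $s = reg_new(2);
reg_include( $s, 5.0,  [ 1.0, 1.0 ] );
reg_include( $s, 8.0,  [ 1.0, 2.0 ] );
reg_include( $s, 11.0, [ 1.0, 3.0 ] );
my @theta = reg_theta($s);
printf "intercept=%.4f slope=%.4f\n", @theta;   # intercept=2.0000 slope=3.0000
```

Note that nothing in the update depends on how many observations came before: each call touches only the O(k^2) triangular state.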

INTERNALS

R=Rbar is an upper-right triangular matrix, kept in normalized form with implicit 1's on the diagonal. D is a diagonal scaling matrix. These correspond to "standard regression usage" as

X' X  = R' D R

A back-substitution routine (in thetacov) inverts the R matrix (the inverse is upper-right triangular, too!). Call this matrix H, that is, H=R^(-1).

(X' X)^(-1) = [ (R' D^(1/2)) (D^(1/2) R) ]^(-1)
            = [ R^(-1) D^(-1/2) ] [ R^(-1) D^(-1/2) ]'
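Because R has an implicit unit diagonal, its inverse H is unit upper-triangular as well and falls out of a plain back-substitution: from (R H)[i][j] = 0 for j > i and R[i][i] = 1, one gets H[i][j] = -( R[i][j] + sum of R[i][m]*H[m][j] over i < m < j ). A sketch (invert_unit_upper is a hypothetical helper, written here with the unit diagonal stored explicitly for clarity, unlike the module's normalized form):

```perl
use strict;
use warnings;

# Invert a unit upper-triangular matrix by back-substitution.
# The inverse is unit upper-triangular, too.
sub invert_unit_upper {
    my ($R) = @_;            # k x k array-of-arrays with 1's on the diagonal
    my $k = scalar @$R;
    my @H;
    for my $i ( 0 .. $k - 1 ) {
        for my $j ( 0 .. $k - 1 ) { $H[$i][$j] = $i == $j ? 1.0 : 0.0; }
    }
    for my $i ( reverse 0 .. $k - 1 ) {
        for my $j ( $i + 1 .. $k - 1 ) {
            # From (R H)[i][j] = 0 with R[i][i] = 1 and H[j][j] = 1:
            my $sum = $R->[$i][$j];
            for my $m ( $i + 1 .. $j - 1 ) {
                $sum += $R->[$i][$m] * $H[$m][$j];
            }
            $H[$i][$j] = -$sum;
        }
    }
    return \@H;
}

my $H = invert_unit_upper( [ [ 1, 2, 3 ], [ 0, 1, 4 ], [ 0, 0, 1 ] ] );
printf "%g %g %g\n", @{ $H->[0] };    # 1 -2 5
```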

Remarks

This algorithm is the statistical "standard." Observations can be inserted one at a time, each with an optional weight, and each insertion takes only quadratic time in the number of independent variables. The storage requirement is likewise quadratic in the number of independent variables and does not grow with the number of observations, so a practically unlimited number of observations can be processed.

METHODS

new

my $reg = Statistics::Regression->new($n, $name, \@var_names)

Receives the number of variables in each observation (i.e., an integer) and returns the blessed data structure as a Statistics::Regression object. Also takes an optional name for this regression, as well as a reference to a k-element array of names for the X coefficients.

dump

$reg->dump

Used for debugging.

print

$reg->print

Prints the estimated coefficients, R^2, and N. For an example, see the SYNOPSIS.

include

$n = $reg->include( $y, [ $x1, $x2, $x3 ... $xk ], $weight );

Adds one new observation. The weight is optional and defaults to 1. Note that re-including an observation with a weight of -1 deletes it from the regression.

Returns the number of observations so far included.
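The weight -1 trick works because each observation enters the accumulated sums multiplied by its weight, so a second inclusion with the opposite weight cancels the first. A toy analogue using a running weighted mean rather than the regression itself (this is not the module's code, just the same bookkeeping idea):

```perl
use strict;
use warnings;

# Running weighted mean: each point contributes w and w*y to the sums,
# so including it again with w = -1 backs it out exactly.
my ( $sumw, $sumwy ) = ( 0.0, 0.0 );

sub wmean_include {
    my ( $y, $w ) = @_;
    $w = 1.0 unless defined $w;
    $sumw  += $w;
    $sumwy += $w * $y;
    return $sumwy / $sumw;
}

wmean_include(10);                  # mean is now 10
wmean_include(20);                  # mean is now 15
my $m = wmean_include( 10, -1 );    # weight -1 deletes the first point
print "$m\n";                       # 20
```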

theta

$theta = $reg->theta
@theta = $reg->theta

Estimates and returns the vector of coefficients. In scalar context, it returns an array reference; in list context, it returns the list of coefficients.

rsq, adjrsq, sigmasq, ybar, sst, k, n

$rsq = $reg->rsq; # etc...

These methods provide common auxiliary information. rsq, adjrsq, sigmasq, sst, and ybar have not been fully verified but are likely correct. The results are cached for later use, although this is somewhat unnecessary because the computations are simple.

BUGS/PROBLEMS

Missing

This package lacks routines to compute the standard errors of the coefficients. This requires access to a matrix inversion package, and I do not have one at my disposal. If you want to add one, please let me know.

Perl Problem

Perl does not expose IEEE number representations. This makes it a pain to test whether an observation contains any missing variables (coded as 'NaN' in Regression.pm).
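One portable workaround is the self-comparison test: NaN is the only value that compares unequal to itself. A sketch, assuming the underlying C library follows IEEE-754 semantics (true on essentially all modern platforms); is_nan is an illustrative helper, not part of the module:

```perl
use strict;
use warnings;

# NaN is the only value for which $v != $v holds.
sub is_nan {
    my ($v) = @_;
    return $v != $v;
}

my $nan = 9**9**9 - 9**9**9;    # inf - inf yields NaN on IEEE platforms
print is_nan($nan) ? "missing\n" : "present\n";    # missing
print is_nan(3.14) ? "missing\n" : "present\n";    # present
```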

VERSION

0.15

AUTHOR

Naturally, Gentleman invented this algorithm. Adaptation by ivo welch. Alan Miller (alan@dmsmelb.mel.dms.CSIRO.AU) pointed out nicer ways to compute the R^2. Ivan Tubert-Brohman helped wrap the module as a standard CPAN distribution.

LICENSE

This module is released for free public use under a GPL license.

(C) Ivo Welch, 2001,2004.