NAME
Paws::MachineLearning - Perl Interface to AWS Amazon Machine Learning
SYNOPSIS
use Paws;
my $obj = Paws->service('MachineLearning');
my $res = $obj->Method(
Arg1 => $val1,
Arg2 => [ 'V1', 'V2' ],
# if Arg3 is an object, the HashRef will be used as arguments to the constructor
# of the arguments type
Arg3 => { Att1 => 'Val1' },
# if Arg4 is an array of objects, the HashRefs will be passed as arguments to
# the constructor of the arguments type
Arg4 => [ { Att1 => 'Val1' }, { Att1 => 'Val2' } ],
);
DESCRIPTION
Definition of the public APIs exposed by Amazon Machine Learning
For the AWS API documentation, see https://docs.aws.amazon.com/goto/WebAPI/machinelearning-2014-12-12
METHODS
AddTags
- ResourceId => Str
- ResourceType => Str
- Tags => ArrayRef[Paws::MachineLearning::Tag]
Each argument is described in detail in: Paws::MachineLearning::AddTags
Returns: a Paws::MachineLearning::AddTagsOutput instance
Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you add a tag using a key that is already associated with the ML object, AddTags updates the tag's value.
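A minimal sketch of tagging an ML object; the resource ID and tag keys/values below are hypothetical placeholders, and region/credentials are assumed to come from your Paws configuration:

use Paws;
my $ml = Paws->service('MachineLearning');
# Attach two tags to a hypothetical MLModel; Tag objects are built from HashRefs
my $res = $ml->AddTags(
  ResourceId   => 'ml-exampleModelId',
  ResourceType => 'MLModel',
  Tags         => [
    { Key => 'project', Value => 'churn' },
    { Key => 'owner',   Value => 'data-team' },
  ],
);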
CreateBatchPrediction
- BatchPredictionDataSourceId => Str
- BatchPredictionId => Str
- MLModelId => Str
- OutputUri => Str
- [BatchPredictionName => Str]
Each argument is described in detail in: Paws::MachineLearning::CreateBatchPrediction
Returns: a Paws::MachineLearning::CreateBatchPredictionOutput instance
Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.

CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.

You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
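As a rough sketch of that workflow (all IDs and the S3 output path are placeholders), you might start the batch prediction and then poll GetBatchPrediction until its Status accessor reports a terminal state:

use Paws;
my $ml = Paws->service('MachineLearning');
$ml->CreateBatchPrediction(
  BatchPredictionId           => 'bp-example',
  BatchPredictionDataSourceId => 'ds-example',
  MLModelId                   => 'ml-example',
  OutputUri                   => 's3://example-bucket/batch-output/',
  BatchPredictionName         => 'Example batch prediction',
);
# Poll until the asynchronous operation finishes (simplistic loop, no error handling)
my $status = 'PENDING';
while ($status eq 'PENDING' or $status eq 'INPROGRESS') {
  sleep 30;
  $status = $ml->GetBatchPrediction(BatchPredictionId => 'bp-example')->Status;
}
print "BatchPrediction finished with status $status\n";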
CreateDataSourceFromRDS
- DataSourceId => Str
- RDSData => Paws::MachineLearning::RDSDataSpec
- RoleARN => Str
- [ComputeStatistics => Bool]
- [DataSourceName => Str]
Each argument is described in detail in: Paws::MachineLearning::CreateDataSourceFromRDS
Returns: a Paws::MachineLearning::CreateDataSourceFromRDSOutput instance
Creates a DataSource object from an Amazon Relational Database Service (http://aws.amazon.com/rds/) (Amazon RDS). A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRDS is an asynchronous operation. In response to CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used only to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.
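A small sketch of checking whether a datasource created this way was accepted; the datasource ID is a placeholder, and the Status and Message accessors are read from the GetDataSource response as described above:

use Paws;
my $ml = Paws->service('MachineLearning');
# Inspect the datasource created by CreateDataSourceFromRDS
my $ds = $ml->GetDataSource(DataSourceId => 'ds-rds-example');
if ($ds->Status eq 'FAILED') {
  # The error message explains why the input source was rejected
  die "DataSource creation failed: " . $ds->Message . "\n";
}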
CreateDataSourceFromRedshift
- DataSourceId => Str
- DataSpec => Paws::MachineLearning::RedshiftDataSpec
- RoleARN => Str
- [ComputeStatistics => Bool]
- [DataSourceName => Str]
Each argument is described in detail in: Paws::MachineLearning::CreateDataSourceFromRedshift
Returns: a Paws::MachineLearning::CreateDataSourceFromRedshiftOutput instance
Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource references data that can be used to perform either CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used to perform only CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to transfer the result set of the SelectSqlQuery query to S3StagingLocation.

After the DataSource has been created, it's ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource also requires a recipe. A recipe describes how each input variable will be used in training an MLModel. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.

You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing datasource and copy the values to a CreateDataSource call. Change the settings that you want to change and make sure that all required fields have the appropriate values.
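A rough, untested sketch of such a call; every identifier, credential, and S3 location is a placeholder, and the DataSpec attribute names are assumed to follow the AWS RedshiftDataSpec shape (check Paws::MachineLearning::RedshiftDataSpec for the exact names):

use Paws;
my $ml = Paws->service('MachineLearning');
$ml->CreateDataSourceFromRedshift(
  DataSourceId      => 'ds-redshift-example',
  DataSourceName    => 'Redshift training data',
  RoleARN           => 'arn:aws:iam::123456789012:role/AmazonMLRole',
  ComputeStatistics => 1,
  DataSpec          => {
    # Assumed attribute names; values are placeholders
    DatabaseInformation => { DatabaseName => 'dev', ClusterIdentifier => 'example-cluster' },
    DatabaseCredentials => { Username => 'ml_user', Password => 'example-password' },
    SelectSqlQuery      => 'SELECT * FROM observations',
    S3StagingLocation   => 's3://example-bucket/staging/',
    DataSchemaUri       => 's3://example-bucket/schemas/observations.schema',
  },
);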
CreateDataSourceFromS3
- DataSourceId => Str
- DataSpec => Paws::MachineLearning::S3DataSpec
- [ComputeStatistics => Bool]
- [DataSourceName => Str]
Each argument is described in detail in: Paws::MachineLearning::CreateDataSourceFromS3
Returns: a Paws::MachineLearning::CreateDataSourceFromS3Output instance
Creates a DataSource object. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromS3 is an asynchronous operation. In response to CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource has been created and is ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used to perform only CreateMLModel, CreateEvaluation or CreateBatchPrediction operations.

If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observation data used in a DataSource should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the DataSource.

After the DataSource has been created, it's ready to use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource also needs a recipe. A recipe describes how each input variable will be used in training an MLModel. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.
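A minimal sketch of creating a datasource from .csv data in S3; bucket names, paths, and IDs are placeholders:

use Paws;
my $ml = Paws->service('MachineLearning');
$ml->CreateDataSourceFromS3(
  DataSourceId      => 'ds-s3-example',
  DataSourceName    => 'S3 training data',
  ComputeStatistics => 1,   # needed if the datasource will be used to train an MLModel
  DataSpec          => {
    DataLocationS3       => 's3://example-bucket/data/observations.csv',
    DataSchemaLocationS3 => 's3://example-bucket/data/observations.csv.schema',
  },
);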
CreateEvaluation
Each argument is described in detail in: Paws::MachineLearning::CreateEvaluation
Returns: a Paws::MachineLearning::CreateEvaluationOutput instance
Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated to a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effective the MLModel functions on the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE or MulticlassAvgFScore based on the corresponding MLModelType: BINARY, REGRESSION or MULTICLASS.

CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetEvaluation operation to check progress of the evaluation during the creation operation.
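A rough sketch of creating an evaluation; the argument list is not shown above, so the parameter names below (EvaluationId, EvaluationName, MLModelId, EvaluationDataSourceId) are taken from the underlying AWS CreateEvaluation API and should be confirmed against Paws::MachineLearning::CreateEvaluation. All IDs are placeholders:

use Paws;
my $ml = Paws->service('MachineLearning');
# The evaluation datasource should hold held-out observations containing the target variable
$ml->CreateEvaluation(
  EvaluationId           => 'ev-example',
  EvaluationName         => 'Example evaluation',
  MLModelId              => 'ml-example',
  EvaluationDataSourceId => 'ds-test-example',
);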
CreateMLModel
- MLModelId => Str
- MLModelType => Str
- TrainingDataSourceId => Str
- [MLModelName => Str]
- [Parameters => Paws::MachineLearning::TrainingParameters]
- [Recipe => Str]
- [RecipeUri => Str]
Each argument is described in detail in: Paws::MachineLearning::CreateMLModel
Returns: a Paws::MachineLearning::CreateMLModelOutput instance
Creates a new MLModel using the DataSource and the recipe as information sources.

An MLModel is nearly immutable. Users can update only the MLModelName and the ScoreThreshold in an MLModel without creating a new MLModel.

CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING. After the MLModel has been created and is ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetMLModel operation to check the progress of the MLModel during the creation operation.

CreateMLModel requires a DataSource with computed statistics, which can be created by setting ComputeStatistics to true in the CreateDataSourceFromRDS, CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.
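A minimal sketch of training a binary model; IDs and the recipe location are placeholders:

use Paws;
my $ml = Paws->service('MachineLearning');
# The training datasource must have been created with ComputeStatistics set to true
$ml->CreateMLModel(
  MLModelId            => 'ml-example',
  MLModelName          => 'Example binary model',
  MLModelType          => 'BINARY',   # or REGRESSION / MULTICLASS
  TrainingDataSourceId => 'ds-s3-example',
  RecipeUri            => 's3://example-bucket/recipes/default.recipe',   # optional
);
# Check progress with GetMLModel (a single call shown; poll as needed)
my $model = $ml->GetMLModel(MLModelId => 'ml-example');
print "MLModel status: " . $model->Status . "\n";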
CreateRealtimeEndpoint
Each argument is described in detail in: Paws::MachineLearning::CreateRealtimeEndpoint
Returns: a Paws::MachineLearning::CreateRealtimeEndpointOutput instance
Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the MLModel; that is, the location to send real-time prediction requests for the specified MLModel.
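A small sketch of creating an endpoint; the argument list is not shown above, so MLModelId is the assumed parameter name, the ID is a placeholder, and the RealtimeEndpointInfo/EndpointUrl accessors on the response are assumptions to verify against Paws::MachineLearning::CreateRealtimeEndpointOutput:

use Paws;
my $ml = Paws->service('MachineLearning');
my $res = $ml->CreateRealtimeEndpoint(MLModelId => 'ml-example');
# Accessor names assumed; the URL is what Predict's PredictEndpoint parameter expects
print "Endpoint URL: " . $res->RealtimeEndpointInfo->EndpointUrl . "\n";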
DeleteBatchPrediction
Each argument is described in detail in: Paws::MachineLearning::DeleteBatchPrediction
Returns: a Paws::MachineLearning::DeleteBatchPredictionOutput instance
Assigns the DELETED status to a BatchPrediction, rendering it unusable.

After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation to verify that the status of the BatchPrediction changed to DELETED.

Caution: The result of the DeleteBatchPrediction operation is irreversible.
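A short sketch of deleting and then verifying; the ID is a placeholder and BatchPredictionId is the assumed parameter name, since the argument list is not shown above:

use Paws;
my $ml = Paws->service('MachineLearning');
# Deletion is irreversible
$ml->DeleteBatchPrediction(BatchPredictionId => 'bp-example');
# Verify the status transition described above
my $status = $ml->GetBatchPrediction(BatchPredictionId => 'bp-example')->Status;
print "BatchPrediction status is now $status\n";   # expected: DELETED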
DeleteDataSource
Each argument is described in detail in: Paws::MachineLearning::DeleteDataSource
Returns: a Paws::MachineLearning::DeleteDataSourceOutput instance
Assigns the DELETED status to a DataSource, rendering it unusable.

After using the DeleteDataSource operation, you can use the GetDataSource operation to verify that the status of the DataSource changed to DELETED.

Caution: The results of the DeleteDataSource operation are irreversible.
DeleteEvaluation
Each argument is described in detail in: Paws::MachineLearning::DeleteEvaluation
Returns: a Paws::MachineLearning::DeleteEvaluationOutput instance
Assigns the DELETED status to an Evaluation, rendering it unusable.

After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation to verify that the status of the Evaluation changed to DELETED.

Caution: The results of the DeleteEvaluation operation are irreversible.
DeleteMLModel
Each argument is described in detail in: Paws::MachineLearning::DeleteMLModel
Returns: a Paws::MachineLearning::DeleteMLModelOutput instance
Assigns the DELETED status to an MLModel, rendering it unusable.

After using the DeleteMLModel operation, you can use the GetMLModel operation to verify that the status of the MLModel changed to DELETED.

Caution: The result of the DeleteMLModel operation is irreversible.
DeleteRealtimeEndpoint
Each argument is described in detail in: Paws::MachineLearning::DeleteRealtimeEndpoint
Returns: a Paws::MachineLearning::DeleteRealtimeEndpointOutput instance
Deletes a real-time endpoint of an MLModel.
DeleteTags
Each argument is described in detail in: Paws::MachineLearning::DeleteTags
Returns: a Paws::MachineLearning::DeleteTagsOutput instance
Deletes the specified tags associated with an ML object. After this operation is complete, you can't recover deleted tags.
If you specify a tag that doesn't exist, Amazon ML ignores it.
DescribeBatchPredictions
- [EQ => Str]
- [FilterVariable => Str]
- [GE => Str]
- [GT => Str]
- [LE => Str]
- [Limit => Int]
- [LT => Str]
- [NE => Str]
- [NextToken => Str]
- [Prefix => Str]
- [SortOrder => Str]
Each argument is described in detail in: Paws::MachineLearning::DescribeBatchPredictions
Returns: a Paws::MachineLearning::DescribeBatchPredictionsOutput instance
Returns a list of BatchPrediction operations that match the search criteria in the request.
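A small sketch of a filtered listing; the filter values are placeholders, and the Results/NextToken accessors on the output are the assumed way to read a page:

use Paws;
my $ml = Paws->service('MachineLearning');
# List completed batch predictions, newest first, 25 at a time
my $page = $ml->DescribeBatchPredictions(
  FilterVariable => 'Status',
  EQ             => 'COMPLETED',
  SortOrder      => 'dsc',        # 'asc' or 'dsc'
  Limit          => 25,
);
for my $bp (@{ $page->Results }) {
  print $bp->BatchPredictionId, "\n";
}
# Pass $page->NextToken to a follow-up call, or use the DescribeAll* paginators below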
DescribeDataSources
- [EQ => Str]
- [FilterVariable => Str]
- [GE => Str]
- [GT => Str]
- [LE => Str]
- [Limit => Int]
- [LT => Str]
- [NE => Str]
- [NextToken => Str]
- [Prefix => Str]
- [SortOrder => Str]
Each argument is described in detail in: Paws::MachineLearning::DescribeDataSources
Returns: a Paws::MachineLearning::DescribeDataSourcesOutput instance
Returns a list of DataSource that match the search criteria in the request.
DescribeEvaluations
- [EQ => Str]
- [FilterVariable => Str]
- [GE => Str]
- [GT => Str]
- [LE => Str]
- [Limit => Int]
- [LT => Str]
- [NE => Str]
- [NextToken => Str]
- [Prefix => Str]
- [SortOrder => Str]
Each argument is described in detail in: Paws::MachineLearning::DescribeEvaluations
Returns: a Paws::MachineLearning::DescribeEvaluationsOutput instance
Returns a list of Evaluation that match the search criteria in the request.
DescribeMLModels
- [EQ => Str]
- [FilterVariable => Str]
- [GE => Str]
- [GT => Str]
- [LE => Str]
- [Limit => Int]
- [LT => Str]
- [NE => Str]
- [NextToken => Str]
- [Prefix => Str]
- [SortOrder => Str]
Each argument is described in detail in: Paws::MachineLearning::DescribeMLModels
Returns: a Paws::MachineLearning::DescribeMLModelsOutput instance
Returns a list of MLModel that match the search criteria in the request.
DescribeTags
Each argument is described in detail in: Paws::MachineLearning::DescribeTags
Returns: a Paws::MachineLearning::DescribeTagsOutput instance
Describes one or more of the tags for your Amazon ML object.
GetBatchPrediction
Each argument is described in detail in: Paws::MachineLearning::GetBatchPrediction
Returns: a Paws::MachineLearning::GetBatchPredictionOutput instance
Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request.
GetDataSource
Each argument is described in detail in: Paws::MachineLearning::GetDataSource
Returns: a Paws::MachineLearning::GetDataSourceOutput instance
Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource.

GetDataSource provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.
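A minimal sketch of the verbose form; the datasource ID is a placeholder, DataSourceId and Verbose are the assumed parameter names (the argument list is not shown above), and DataSourceSchema is the assumed accessor for the schema added by verbose output:

use Paws;
my $ml = Paws->service('MachineLearning');
my $ds = $ml->GetDataSource(
  DataSourceId => 'ds-example',
  Verbose      => 1,   # include the schema description in the response
);
print $ds->DataSourceSchema, "\n";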
GetEvaluation
Each argument is described in detail in: Paws::MachineLearning::GetEvaluation
Returns: a Paws::MachineLearning::GetEvaluationOutput instance
Returns an Evaluation that includes metadata as well as the current status of the Evaluation.
GetMLModel
Each argument is described in detail in: Paws::MachineLearning::GetMLModel
Returns: a Paws::MachineLearning::GetMLModelOutput instance
Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel.

GetMLModel provides results in normal or verbose format.
Predict
- MLModelId => Str
- PredictEndpoint => Str
- Record => Paws::MachineLearning::Record
Each argument is described in detail in: Paws::MachineLearning::Predict
Returns: a Paws::MachineLearning::PredictOutput instance
Generates a prediction for the observation using the specified ML Model.

Note: Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
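A rough sketch of a real-time prediction; the model ID, endpoint URL, and feature names/values are placeholders (the endpoint URL would come from CreateRealtimeEndpoint):

use Paws;
my $ml = Paws->service('MachineLearning');
my $out = $ml->Predict(
  MLModelId       => 'ml-example',
  PredictEndpoint => 'https://realtime.machinelearning.us-east-1.amazonaws.com',
  Record          => { age => '42', plan => 'basic' },   # all record values are strings
);
# The output carries a Prediction object; which fields are populated
# (a predicted label vs. a predicted value) depends on the model type
my $prediction = $out->Prediction;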
UpdateBatchPrediction
Each argument is described in detail in: Paws::MachineLearning::UpdateBatchPrediction
Returns: a Paws::MachineLearning::UpdateBatchPredictionOutput instance
Updates the BatchPredictionName of a BatchPrediction.

You can use the GetBatchPrediction operation to view the contents of the updated data element.
UpdateDataSource
Each argument is described in detail in: Paws::MachineLearning::UpdateDataSource
Returns: a Paws::MachineLearning::UpdateDataSourceOutput instance
Updates the DataSourceName of a DataSource.

You can use the GetDataSource operation to view the contents of the updated data element.
UpdateEvaluation
Each argument is described in detail in: Paws::MachineLearning::UpdateEvaluation
Returns: a Paws::MachineLearning::UpdateEvaluationOutput instance
Updates the EvaluationName of an Evaluation.

You can use the GetEvaluation operation to view the contents of the updated data element.
UpdateMLModel
Each argument is described in detail in: Paws::MachineLearning::UpdateMLModel
Returns: a Paws::MachineLearning::UpdateMLModelOutput instance
Updates the MLModelName and the ScoreThreshold of an MLModel.

You can use the GetMLModel operation to view the contents of the updated data element.
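A minimal sketch; the model ID, name, and threshold are placeholders, and the parameter names are assumed from the API shape since the argument list is not shown above:

use Paws;
my $ml = Paws->service('MachineLearning');
# Rename a model and adjust its score threshold
$ml->UpdateMLModel(
  MLModelId      => 'ml-example',
  MLModelName    => 'Churn model v2',
  ScoreThreshold => 0.75,
);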
PAGINATORS
Paginator methods are helpers that repeatedly call methods that return partial results.
DescribeAllBatchPredictions(sub { },[EQ => Str, FilterVariable => Str, GE => Str, GT => Str, LE => Str, Limit => Int, LT => Str, NE => Str, NextToken => Str, Prefix => Str, SortOrder => Str])
DescribeAllBatchPredictions([EQ => Str, FilterVariable => Str, GE => Str, GT => Str, LE => Str, Limit => Int, LT => Str, NE => Str, NextToken => Str, Prefix => Str, SortOrder => Str])
If passed a sub as the first parameter, it will call the sub for each element found in:
- Results, passing the object as the first parameter, and the string 'Results' as the second parameter
If not, it will return a Paws::MachineLearning::DescribeBatchPredictionsOutput instance with all the params from all the responses. Please take into account that this mode can potentially consume vast amounts of memory.
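A short sketch of the callback form, which streams results instead of accumulating every page in memory; the filter values are placeholders:

use Paws;
my $ml = Paws->service('MachineLearning');
$ml->DescribeAllBatchPredictions(
  sub {
    my ($bp, $field) = @_;    # $field is the string 'Results'
    print $bp->BatchPredictionId, "\n";
  },
  FilterVariable => 'Status',
  EQ             => 'COMPLETED',
);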
DescribeAllDataSources(sub { },[EQ => Str, FilterVariable => Str, GE => Str, GT => Str, LE => Str, Limit => Int, LT => Str, NE => Str, NextToken => Str, Prefix => Str, SortOrder => Str])
DescribeAllDataSources([EQ => Str, FilterVariable => Str, GE => Str, GT => Str, LE => Str, Limit => Int, LT => Str, NE => Str, NextToken => Str, Prefix => Str, SortOrder => Str])
If passed a sub as the first parameter, it will call the sub for each element found in:
- Results, passing the object as the first parameter, and the string 'Results' as the second parameter
If not, it will return a Paws::MachineLearning::DescribeDataSourcesOutput instance with all the params from all the responses. Please take into account that this mode can potentially consume vast amounts of memory.
DescribeAllEvaluations(sub { },[EQ => Str, FilterVariable => Str, GE => Str, GT => Str, LE => Str, Limit => Int, LT => Str, NE => Str, NextToken => Str, Prefix => Str, SortOrder => Str])
DescribeAllEvaluations([EQ => Str, FilterVariable => Str, GE => Str, GT => Str, LE => Str, Limit => Int, LT => Str, NE => Str, NextToken => Str, Prefix => Str, SortOrder => Str])
If passed a sub as the first parameter, it will call the sub for each element found in:
- Results, passing the object as the first parameter, and the string 'Results' as the second parameter
If not, it will return a Paws::MachineLearning::DescribeEvaluationsOutput instance with all the params from all the responses. Please take into account that this mode can potentially consume vast amounts of memory.
DescribeAllMLModels(sub { },[EQ => Str, FilterVariable => Str, GE => Str, GT => Str, LE => Str, Limit => Int, LT => Str, NE => Str, NextToken => Str, Prefix => Str, SortOrder => Str])
DescribeAllMLModels([EQ => Str, FilterVariable => Str, GE => Str, GT => Str, LE => Str, Limit => Int, LT => Str, NE => Str, NextToken => Str, Prefix => Str, SortOrder => Str])
If passed a sub as the first parameter, it will call the sub for each element found in:
- Results, passing the object as the first parameter, and the string 'Results' as the second parameter
If not, it will return a Paws::MachineLearning::DescribeMLModelsOutput instance with all the params from all the responses. Please take into account that this mode can potentially consume vast amounts of memory.
SEE ALSO
This service class forms part of Paws
BUGS and CONTRIBUTIONS
The source code is located here: https://github.com/pplu/aws-sdk-perl
Please report bugs to: https://github.com/pplu/aws-sdk-perl/issues