@Generated(value="software.amazon.awssdk:codegen") public interface MachineLearningAsyncClient extends SdkClient, SdkAutoCloseable
An asynchronous client for Amazon Machine Learning, created via the static builder() method.

Definition of the public APIs exposed by Amazon Machine Learning.

| Modifier and Type | Field and Description |
|---|---|
| static String | SERVICE_NAME |
| Modifier and Type | Method and Description |
|---|---|
| default CompletableFuture<AddTagsResponse> | addTags(AddTagsRequest addTagsRequest) Adds one or more tags to an object, up to a limit of 10. |
| default CompletableFuture<AddTagsResponse> | addTags(Consumer<AddTagsRequest.Builder> addTagsRequest) Adds one or more tags to an object, up to a limit of 10. |
| static MachineLearningAsyncClientBuilder | builder() Create a builder that can be used to configure and create a MachineLearningAsyncClient. |
| static MachineLearningAsyncClient | create() Create a MachineLearningAsyncClient with the region loaded from the DefaultAwsRegionProviderChain and credentials loaded from the DefaultCredentialsProvider. |
| default CompletableFuture<CreateBatchPredictionResponse> | createBatchPrediction(Consumer<CreateBatchPredictionRequest.Builder> createBatchPredictionRequest) Generates predictions for a group of observations. |
| default CompletableFuture<CreateBatchPredictionResponse> | createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest) Generates predictions for a group of observations. |
| default CompletableFuture<CreateDataSourceFromRDSResponse> | createDataSourceFromRDS(Consumer<CreateDataSourceFromRDSRequest.Builder> createDataSourceFromRDSRequest) Creates a DataSource object from an Amazon Relational Database Service (Amazon RDS). |
| default CompletableFuture<CreateDataSourceFromRDSResponse> | createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest) Creates a DataSource object from an Amazon Relational Database Service (Amazon RDS). |
| default CompletableFuture<CreateDataSourceFromRedshiftResponse> | createDataSourceFromRedshift(Consumer<CreateDataSourceFromRedshiftRequest.Builder> createDataSourceFromRedshiftRequest) Creates a DataSource from a database hosted on an Amazon Redshift cluster. |
| default CompletableFuture<CreateDataSourceFromRedshiftResponse> | createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest) Creates a DataSource from a database hosted on an Amazon Redshift cluster. |
| default CompletableFuture<CreateDataSourceFromS3Response> | createDataSourceFromS3(Consumer<CreateDataSourceFromS3Request.Builder> createDataSourceFromS3Request) Creates a DataSource object. |
| default CompletableFuture<CreateDataSourceFromS3Response> | createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request) Creates a DataSource object. |
| default CompletableFuture<CreateEvaluationResponse> | createEvaluation(Consumer<CreateEvaluationRequest.Builder> createEvaluationRequest) Creates a new Evaluation of an MLModel. |
| default CompletableFuture<CreateEvaluationResponse> | createEvaluation(CreateEvaluationRequest createEvaluationRequest) Creates a new Evaluation of an MLModel. |
| default CompletableFuture<CreateMLModelResponse> | createMLModel(Consumer<CreateMLModelRequest.Builder> createMLModelRequest) Creates a new MLModel using the DataSource and the recipe as information sources. |
| default CompletableFuture<CreateMLModelResponse> | createMLModel(CreateMLModelRequest createMLModelRequest) Creates a new MLModel using the DataSource and the recipe as information sources. |
| default CompletableFuture<CreateRealtimeEndpointResponse> | createRealtimeEndpoint(Consumer<CreateRealtimeEndpointRequest.Builder> createRealtimeEndpointRequest) Creates a real-time endpoint for the MLModel. |
| default CompletableFuture<CreateRealtimeEndpointResponse> | createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest) Creates a real-time endpoint for the MLModel. |
| default CompletableFuture<DeleteBatchPredictionResponse> | deleteBatchPrediction(Consumer<DeleteBatchPredictionRequest.Builder> deleteBatchPredictionRequest) Assigns the DELETED status to a BatchPrediction, rendering it unusable. |
| default CompletableFuture<DeleteBatchPredictionResponse> | deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest) Assigns the DELETED status to a BatchPrediction, rendering it unusable. |
| default CompletableFuture<DeleteDataSourceResponse> | deleteDataSource(Consumer<DeleteDataSourceRequest.Builder> deleteDataSourceRequest) Assigns the DELETED status to a DataSource, rendering it unusable. |
| default CompletableFuture<DeleteDataSourceResponse> | deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest) Assigns the DELETED status to a DataSource, rendering it unusable. |
| default CompletableFuture<DeleteEvaluationResponse> | deleteEvaluation(Consumer<DeleteEvaluationRequest.Builder> deleteEvaluationRequest) Assigns the DELETED status to an Evaluation, rendering it unusable. |
| default CompletableFuture<DeleteEvaluationResponse> | deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest) Assigns the DELETED status to an Evaluation, rendering it unusable. |
| default CompletableFuture<DeleteMLModelResponse> | deleteMLModel(Consumer<DeleteMLModelRequest.Builder> deleteMLModelRequest) Assigns the DELETED status to an MLModel, rendering it unusable. |
| default CompletableFuture<DeleteMLModelResponse> | deleteMLModel(DeleteMLModelRequest deleteMLModelRequest) Assigns the DELETED status to an MLModel, rendering it unusable. |
| default CompletableFuture<DeleteRealtimeEndpointResponse> | deleteRealtimeEndpoint(Consumer<DeleteRealtimeEndpointRequest.Builder> deleteRealtimeEndpointRequest) Deletes a real-time endpoint of an MLModel. |
| default CompletableFuture<DeleteRealtimeEndpointResponse> | deleteRealtimeEndpoint(DeleteRealtimeEndpointRequest deleteRealtimeEndpointRequest) Deletes a real-time endpoint of an MLModel. |
| default CompletableFuture<DeleteTagsResponse> | deleteTags(Consumer<DeleteTagsRequest.Builder> deleteTagsRequest) Deletes the specified tags associated with an ML object. |
| default CompletableFuture<DeleteTagsResponse> | deleteTags(DeleteTagsRequest deleteTagsRequest) Deletes the specified tags associated with an ML object. |
| default CompletableFuture<DescribeBatchPredictionsResponse> | describeBatchPredictions() Returns a list of BatchPrediction operations that match the search criteria in the request. |
| default CompletableFuture<DescribeBatchPredictionsResponse> | describeBatchPredictions(Consumer<DescribeBatchPredictionsRequest.Builder> describeBatchPredictionsRequest) Returns a list of BatchPrediction operations that match the search criteria in the request. |
| default CompletableFuture<DescribeBatchPredictionsResponse> | describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest) Returns a list of BatchPrediction operations that match the search criteria in the request. |
| default DescribeBatchPredictionsPublisher | describeBatchPredictionsPaginator() Returns a list of BatchPrediction operations that match the search criteria in the request. |
| default DescribeBatchPredictionsPublisher | describeBatchPredictionsPaginator(DescribeBatchPredictionsRequest describeBatchPredictionsRequest) Returns a list of BatchPrediction operations that match the search criteria in the request. |
| default CompletableFuture<DescribeDataSourcesResponse> | describeDataSources() Returns a list of DataSource that match the search criteria in the request. |
| default CompletableFuture<DescribeDataSourcesResponse> | describeDataSources(Consumer<DescribeDataSourcesRequest.Builder> describeDataSourcesRequest) Returns a list of DataSource that match the search criteria in the request. |
| default CompletableFuture<DescribeDataSourcesResponse> | describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest) Returns a list of DataSource that match the search criteria in the request. |
| default DescribeDataSourcesPublisher | describeDataSourcesPaginator() Returns a list of DataSource that match the search criteria in the request. |
| default DescribeDataSourcesPublisher | describeDataSourcesPaginator(DescribeDataSourcesRequest describeDataSourcesRequest) Returns a list of DataSource that match the search criteria in the request. |
| default CompletableFuture<DescribeEvaluationsResponse> | describeEvaluations() Returns a list of DescribeEvaluations that match the search criteria in the request. |
| default CompletableFuture<DescribeEvaluationsResponse> | describeEvaluations(Consumer<DescribeEvaluationsRequest.Builder> describeEvaluationsRequest) Returns a list of DescribeEvaluations that match the search criteria in the request. |
| default CompletableFuture<DescribeEvaluationsResponse> | describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest) Returns a list of DescribeEvaluations that match the search criteria in the request. |
| default DescribeEvaluationsPublisher | describeEvaluationsPaginator() Returns a list of DescribeEvaluations that match the search criteria in the request. |
| default DescribeEvaluationsPublisher | describeEvaluationsPaginator(DescribeEvaluationsRequest describeEvaluationsRequest) Returns a list of DescribeEvaluations that match the search criteria in the request. |
| default CompletableFuture<DescribeMLModelsResponse> | describeMLModels() Returns a list of MLModel that match the search criteria in the request. |
| default CompletableFuture<DescribeMLModelsResponse> | describeMLModels(Consumer<DescribeMLModelsRequest.Builder> describeMLModelsRequest) Returns a list of MLModel that match the search criteria in the request. |
| default CompletableFuture<DescribeMLModelsResponse> | describeMLModels(DescribeMLModelsRequest describeMLModelsRequest) Returns a list of MLModel that match the search criteria in the request. |
| default DescribeMLModelsPublisher | describeMLModelsPaginator() Returns a list of MLModel that match the search criteria in the request. |
| default DescribeMLModelsPublisher | describeMLModelsPaginator(DescribeMLModelsRequest describeMLModelsRequest) Returns a list of MLModel that match the search criteria in the request. |
| default CompletableFuture<DescribeTagsResponse> | describeTags(Consumer<DescribeTagsRequest.Builder> describeTagsRequest) Describes one or more of the tags for your Amazon ML object. |
| default CompletableFuture<DescribeTagsResponse> | describeTags(DescribeTagsRequest describeTagsRequest) Describes one or more of the tags for your Amazon ML object. |
| default CompletableFuture<GetBatchPredictionResponse> | getBatchPrediction(Consumer<GetBatchPredictionRequest.Builder> getBatchPredictionRequest) Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request. |
| default CompletableFuture<GetBatchPredictionResponse> | getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest) Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request. |
| default CompletableFuture<GetDataSourceResponse> | getDataSource(Consumer<GetDataSourceRequest.Builder> getDataSourceRequest) Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource. |
| default CompletableFuture<GetDataSourceResponse> | getDataSource(GetDataSourceRequest getDataSourceRequest) Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource. |
| default CompletableFuture<GetEvaluationResponse> | getEvaluation(Consumer<GetEvaluationRequest.Builder> getEvaluationRequest) Returns an Evaluation that includes metadata as well as the current status of the Evaluation. |
| default CompletableFuture<GetEvaluationResponse> | getEvaluation(GetEvaluationRequest getEvaluationRequest) Returns an Evaluation that includes metadata as well as the current status of the Evaluation. |
| default CompletableFuture<GetMLModelResponse> | getMLModel(Consumer<GetMLModelRequest.Builder> getMLModelRequest) Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel. |
| default CompletableFuture<GetMLModelResponse> | getMLModel(GetMLModelRequest getMLModelRequest) Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel. |
| default CompletableFuture<PredictResponse> | predict(Consumer<PredictRequest.Builder> predictRequest) Generates a prediction for the observation using the specified MLModel. |
| default CompletableFuture<PredictResponse> | predict(PredictRequest predictRequest) Generates a prediction for the observation using the specified MLModel. |
| default CompletableFuture<UpdateBatchPredictionResponse> | updateBatchPrediction(Consumer<UpdateBatchPredictionRequest.Builder> updateBatchPredictionRequest) Updates the BatchPredictionName of a BatchPrediction. |
| default CompletableFuture<UpdateBatchPredictionResponse> | updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest) Updates the BatchPredictionName of a BatchPrediction. |
| default CompletableFuture<UpdateDataSourceResponse> | updateDataSource(Consumer<UpdateDataSourceRequest.Builder> updateDataSourceRequest) Updates the DataSourceName of a DataSource. |
| default CompletableFuture<UpdateDataSourceResponse> | updateDataSource(UpdateDataSourceRequest updateDataSourceRequest) Updates the DataSourceName of a DataSource. |
| default CompletableFuture<UpdateEvaluationResponse> | updateEvaluation(Consumer<UpdateEvaluationRequest.Builder> updateEvaluationRequest) Updates the EvaluationName of an Evaluation. |
| default CompletableFuture<UpdateEvaluationResponse> | updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest) Updates the EvaluationName of an Evaluation. |
| default CompletableFuture<UpdateMLModelResponse> | updateMLModel(Consumer<UpdateMLModelRequest.Builder> updateMLModelRequest) Updates the MLModelName and the ScoreThreshold of an MLModel. |
| default CompletableFuture<UpdateMLModelResponse> | updateMLModel(UpdateMLModelRequest updateMLModelRequest) Updates the MLModelName and the ScoreThreshold of an MLModel. |
static final String SERVICE_NAME

static MachineLearningAsyncClient create()
Create a MachineLearningAsyncClient with the region loaded from the DefaultAwsRegionProviderChain and credentials loaded from the DefaultCredentialsProvider.

static MachineLearningAsyncClientBuilder builder()
Create a builder that can be used to configure and create a MachineLearningAsyncClient.
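The lines below are a minimal sketch of both construction paths; the region and credentials provider shown are illustrative choices, not requirements.

```java
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.machinelearning.MachineLearningAsyncClient;

// Shortest path: region and credentials resolved from the default provider chains.
MachineLearningAsyncClient client = MachineLearningAsyncClient.create();

// Explicit configuration through the builder; the region here is only an example.
MachineLearningAsyncClient configured = MachineLearningAsyncClient.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(DefaultCredentialsProvider.create())
        .build();
```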
default CompletableFuture<AddTagsResponse> addTags(AddTagsRequest addTagsRequest)
Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you
add a tag using a key that is already associated with the ML object, AddTags updates the tag's
value.
Parameters: addTagsRequest -

default CompletableFuture<AddTagsResponse> addTags(Consumer<AddTagsRequest.Builder> addTagsRequest)
Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you
add a tag using a key that is already associated with the ML object, AddTags updates the tag's
value.
This is a convenience which creates an instance of the AddTagsRequest.Builder avoiding the need to create
one manually via AddTagsRequest.builder()
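As a hedged sketch of the Consumer-style call, assuming a client built as above; the resource id, resource type string, and tag key/value are placeholders, and the setter names follow the Amazon ML AddTags request shape rather than text on this page.

```java
CompletableFuture<AddTagsResponse> future = client.addTags(b -> b
        .resourceId("ml-exampleModelId")        // ML object to tag (placeholder id)
        .resourceType("MLModel")                // BatchPrediction, DataSource, Evaluation, or MLModel
        .tags(Tag.builder().key("project").value("demo").build()));
```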
Parameters: addTagsRequest - A Consumer that will call methods on AddTagsInput.Builder to create a request.

default CompletableFuture<CreateBatchPredictionResponse> createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest)
Generates predictions for a group of observations. The observations to process exist in one or more data files
referenced by a DataSource. This operation creates a new BatchPrediction, and uses an
MLModel and the data files referenced by the DataSource as information sources.
CreateBatchPrediction is an asynchronous operation. In response to
CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the
BatchPrediction status to PENDING. After the BatchPrediction completes,
Amazon ML sets the status to COMPLETED.
You can poll for status updates by using the GetBatchPrediction operation and checking the
Status parameter of the result. After the COMPLETED status appears, the results are
available in the location specified by the OutputUri parameter.
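A hedged sketch of the submit-then-poll flow described above; the ids and output URI are placeholders, the fixed-interval loop is illustrative only, and accessor names such as statusAsString are assumptions based on the generated model classes.

```java
client.createBatchPrediction(CreateBatchPredictionRequest.builder()
        .batchPredictionId("bp-example")                     // caller-chosen id (placeholder)
        .mlModelId("ml-exampleModelId")                      // model used for scoring (placeholder)
        .batchPredictionDataSourceId("ds-exampleInput")      // observations to score (placeholder)
        .outputUri("s3://amzn-s3-demo-bucket/batch-output/") // where results are written (placeholder)
        .build()).join();

// Poll GetBatchPrediction until the status leaves PENDING (checked exceptions omitted in this fragment).
String status;
do {
    Thread.sleep(30_000);
    status = client.getBatchPrediction(r -> r.batchPredictionId("bp-example")).join().statusAsString();
} while ("PENDING".equals(status) || "INPROGRESS".equals(status));
```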
Parameters: createBatchPredictionRequest -

default CompletableFuture<CreateBatchPredictionResponse> createBatchPrediction(Consumer<CreateBatchPredictionRequest.Builder> createBatchPredictionRequest)
Generates predictions for a group of observations. The observations to process exist in one or more data files
referenced by a DataSource. This operation creates a new BatchPrediction, and uses an
MLModel and the data files referenced by the DataSource as information sources.
CreateBatchPrediction is an asynchronous operation. In response to
CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the
BatchPrediction status to PENDING. After the BatchPrediction completes,
Amazon ML sets the status to COMPLETED.
You can poll for status updates by using the GetBatchPrediction operation and checking the
Status parameter of the result. After the COMPLETED status appears, the results are
available in the location specified by the OutputUri parameter.
This is a convenience which creates an instance of the CreateBatchPredictionRequest.Builder avoiding the
need to create one manually via CreateBatchPredictionRequest.builder()
Parameters: createBatchPredictionRequest - A Consumer that will call methods on CreateBatchPredictionInput.Builder to create a request.

default CompletableFuture<CreateDataSourceFromRDSResponse> createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest)
Creates a DataSource object from an Amazon Relational Database
Service (Amazon RDS). A DataSource references data that can be used to perform
CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
CreateDataSourceFromRDS is an asynchronous operation. In response to
CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the
DataSource status to PENDING. After the DataSource is created and ready
for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in
the COMPLETED or PENDING state can be used only to perform
CreateMLModel, CreateEvaluation, or CreateBatchPrediction
operations.
If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and
includes an error message in the Message attribute of the GetDataSource operation
response.
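A hedged sketch of the request shape only; every literal is a placeholder, just a representative subset of the RDS data-spec fields is shown, and the class and setter names (RDSDataSpec, rdsData, roleARN, and so on) are assumptions derived from the Amazon ML API shapes rather than text on this page.

```java
client.createDataSourceFromRDS(CreateDataSourceFromRDSRequest.builder()
        .dataSourceId("ds-rds-example")                       // caller-chosen id (placeholder)
        .computeStatistics(true)                              // required if the DataSource will train an MLModel
        .rdsData(RDSDataSpec.builder()
                .databaseInformation(d -> d.instanceIdentifier("my-db-instance").databaseName("mydb"))
                .selectSqlQuery("SELECT * FROM observations")
                .databaseCredentials(c -> c.username("ml_user").password("example-password"))
                .s3StagingLocation("s3://amzn-s3-demo-bucket/staging/")
                .resourceRole("ec2-role-example")             // EC2/ECS resource role (placeholder)
                .serviceRole("service-role-example")          // Data Pipeline service role (placeholder)
                .subnetId("subnet-1234abcd")
                .securityGroupIds("sg-1234abcd")
                .build())
        .roleARN("arn:aws:iam::111122223333:role/AmazonML-example")
        .build());
```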
Parameters: createDataSourceFromRDSRequest -

default CompletableFuture<CreateDataSourceFromRDSResponse> createDataSourceFromRDS(Consumer<CreateDataSourceFromRDSRequest.Builder> createDataSourceFromRDSRequest)
Creates a DataSource object from an Amazon Relational Database
Service (Amazon RDS). A DataSource references data that can be used to perform
CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
CreateDataSourceFromRDS is an asynchronous operation. In response to
CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the
DataSource status to PENDING. After the DataSource is created and ready
for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in
the COMPLETED or PENDING state can be used only to perform
CreateMLModel, CreateEvaluation, or CreateBatchPrediction
operations.
If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and
includes an error message in the Message attribute of the GetDataSource operation
response.
This is a convenience which creates an instance of the CreateDataSourceFromRDSRequest.Builder avoiding
the need to create one manually via CreateDataSourceFromRDSRequest.builder()
Parameters: createDataSourceFromRDSRequest - A Consumer that will call methods on CreateDataSourceFromRDSInput.Builder to create a request.

default CompletableFuture<CreateDataSourceFromRedshiftResponse> createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest)
Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource
references data that can be used to perform either CreateMLModel, CreateEvaluation, or
CreateBatchPrediction operations.
CreateDataSourceFromRedshift is an asynchronous operation. In response to
CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the
DataSource status to PENDING. After the DataSource is created and ready
for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in
COMPLETED or PENDING states can be used to perform only CreateMLModel,
CreateEvaluation, or CreateBatchPrediction operations.
If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and
includes an error message in the Message attribute of the GetDataSource operation
response.
The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified
by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to
transfer the result set of the SelectSqlQuery query to S3StagingLocation.
After the DataSource has been created, it's ready for use in evaluations and batch predictions. If
you plan to use the DataSource to train an MLModel, the DataSource also
requires a recipe. A recipe describes how each input variable will be used in training an MLModel.
Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it
be combined with another variable or will it be split apart into word combinations? The recipe provides answers
to these questions.
You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon
Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing
datasource and copy the values to a CreateDataSource call. Change the settings that you want to
change and make sure that all required fields have the appropriate values.
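A hedged sketch of the Redshift request shape; all literals are placeholders and the nested builder names (RedshiftDataSpec and its members) are assumptions based on the Amazon ML API shapes rather than text on this page.

```java
client.createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest.builder()
        .dataSourceId("ds-redshift-example")                  // caller-chosen id (placeholder)
        .computeStatistics(true)
        .dataSpec(RedshiftDataSpec.builder()
                .databaseInformation(d -> d.clusterIdentifier("ml-cluster").databaseName("dev"))
                .selectSqlQuery("SELECT * FROM observations") // result set is unloaded to the staging location
                .databaseCredentials(c -> c.username("ml_user").password("example-password"))
                .s3StagingLocation("s3://amzn-s3-demo-bucket/staging/")
                .build())
        .roleARN("arn:aws:iam::111122223333:role/AmazonML-example")
        .build());
```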
Parameters: createDataSourceFromRedshiftRequest -

default CompletableFuture<CreateDataSourceFromRedshiftResponse> createDataSourceFromRedshift(Consumer<CreateDataSourceFromRedshiftRequest.Builder> createDataSourceFromRedshiftRequest)
Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource
references data that can be used to perform either CreateMLModel, CreateEvaluation, or
CreateBatchPrediction operations.
CreateDataSourceFromRedshift is an asynchronous operation. In response to
CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the
DataSource status to PENDING. After the DataSource is created and ready
for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in
COMPLETED or PENDING states can be used to perform only CreateMLModel,
CreateEvaluation, or CreateBatchPrediction operations.
If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and
includes an error message in the Message attribute of the GetDataSource operation
response.
The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified
by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to
transfer the result set of the SelectSqlQuery query to S3StagingLocation.
After the DataSource has been created, it's ready for use in evaluations and batch predictions. If
you plan to use the DataSource to train an MLModel, the DataSource also
requires a recipe. A recipe describes how each input variable will be used in training an MLModel.
Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it
be combined with another variable or will it be split apart into word combinations? The recipe provides answers
to these questions.
You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon
Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing
datasource and copy the values to a CreateDataSource call. Change the settings that you want to
change and make sure that all required fields have the appropriate values.
This is a convenience which creates an instance of the CreateDataSourceFromRedshiftRequest.Builder
avoiding the need to create one manually via CreateDataSourceFromRedshiftRequest.builder()
Parameters: createDataSourceFromRedshiftRequest - A Consumer that will call methods on CreateDataSourceFromRedshiftInput.Builder to create a request.

default CompletableFuture<CreateDataSourceFromS3Response> createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request)
Creates a DataSource object. A DataSource references data that can be used to perform
CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
CreateDataSourceFromS3 is an asynchronous operation. In response to
CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the
DataSource status to PENDING. After the DataSource has been created and is
ready for use, Amazon ML sets the Status parameter to COMPLETED.
DataSource in the COMPLETED or PENDING state can be used to perform only
CreateMLModel, CreateEvaluation or CreateBatchPrediction operations.
If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and
includes an error message in the Message attribute of the GetDataSource operation
response.
The observation data used in a DataSource should be ready to use; that is, it should have a
consistent structure, and missing data values should be kept to a minimum. The observation data must reside in
one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that
describes the data items by name and type. The same schema must be used for all of the data files referenced by
the DataSource.
After the DataSource has been created, it's ready to use in evaluations and batch predictions. If
you plan to use the DataSource to train an MLModel, the DataSource also
needs a recipe. A recipe describes how each input variable will be used in training an MLModel. Will
the variable be included or excluded from training? Will the variable be manipulated; for example, will it be
combined with another variable or will it be split apart into word combinations? The recipe provides answers to
these questions.
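A hedged sketch of the S3 request shape; the bucket, schema location, and id values are placeholders, and the setter names are assumptions based on the Amazon ML API shapes.

```java
client.createDataSourceFromS3(CreateDataSourceFromS3Request.builder()
        .dataSourceId("ds-s3-example")                        // caller-chosen id (placeholder)
        .dataSourceName("Example S3 data source")
        .computeStatistics(true)                              // needed if the DataSource will train an MLModel
        .dataSpec(S3DataSpec.builder()
                .dataLocationS3("s3://amzn-s3-demo-bucket/observations.csv")
                .dataSchemaLocationS3("s3://amzn-s3-demo-bucket/observations.csv.schema")
                .build())
        .build());
```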
Parameters: createDataSourceFromS3Request -

default CompletableFuture<CreateDataSourceFromS3Response> createDataSourceFromS3(Consumer<CreateDataSourceFromS3Request.Builder> createDataSourceFromS3Request)
Creates a DataSource object. A DataSource references data that can be used to perform
CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
CreateDataSourceFromS3 is an asynchronous operation. In response to
CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the
DataSource status to PENDING. After the DataSource has been created and is
ready for use, Amazon ML sets the Status parameter to COMPLETED.
DataSource in the COMPLETED or PENDING state can be used to perform only
CreateMLModel, CreateEvaluation or CreateBatchPrediction operations.
If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and
includes an error message in the Message attribute of the GetDataSource operation
response.
The observation data used in a DataSource should be ready to use; that is, it should have a
consistent structure, and missing data values should be kept to a minimum. The observation data must reside in
one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that
describes the data items by name and type. The same schema must be used for all of the data files referenced by
the DataSource.
After the DataSource has been created, it's ready to use in evaluations and batch predictions. If
you plan to use the DataSource to train an MLModel, the DataSource also
needs a recipe. A recipe describes how each input variable will be used in training an MLModel. Will
the variable be included or excluded from training? Will the variable be manipulated; for example, will it be
combined with another variable or will it be split apart into word combinations? The recipe provides answers to
these questions.
This is a convenience which creates an instance of the CreateDataSourceFromS3Request.Builder avoiding the
need to create one manually via CreateDataSourceFromS3Request.builder()
Parameters: createDataSourceFromS3Request - A Consumer that will call methods on CreateDataSourceFromS3Input.Builder to create a request.

default CompletableFuture<CreateEvaluationResponse> createEvaluation(CreateEvaluationRequest createEvaluationRequest)
Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set
of observations associated to a DataSource. Like a DataSource for an
MLModel, the DataSource for an Evaluation contains values for the
Target Variable. The Evaluation compares the predicted result for each observation to
the actual outcome and provides a summary so that you know how effective the MLModel functions on
the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE or
MulticlassAvgFScore based on the corresponding MLModelType: BINARY,
REGRESSION or MULTICLASS.
CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon
Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After
the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.
You can use the GetEvaluation operation to check progress of the evaluation during the creation
operation.
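A hedged sketch of creating an evaluation against a held-out DataSource; the ids are placeholders and the setter names are assumptions based on the Amazon ML API shapes.

```java
client.createEvaluation(CreateEvaluationRequest.builder()
        .evaluationId("ev-example")                 // caller-chosen id (placeholder)
        .mlModelId("ml-exampleModelId")             // model being evaluated (placeholder)
        .evaluationDataSourceId("ds-held-out")      // DataSource containing the target variable (placeholder)
        .build());

// Progress can then be checked with getEvaluation until the status reaches COMPLETED.
```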
Parameters: createEvaluationRequest -

default CompletableFuture<CreateEvaluationResponse> createEvaluation(Consumer<CreateEvaluationRequest.Builder> createEvaluationRequest)
Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set
of observations associated to a DataSource. Like a DataSource for an
MLModel, the DataSource for an Evaluation contains values for the
Target Variable. The Evaluation compares the predicted result for each observation to
the actual outcome and provides a summary so that you know how effective the MLModel functions on
the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE or
MulticlassAvgFScore based on the corresponding MLModelType: BINARY,
REGRESSION or MULTICLASS.
CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon
Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After
the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.
You can use the GetEvaluation operation to check progress of the evaluation during the creation
operation.
This is a convenience which creates an instance of the CreateEvaluationRequest.Builder avoiding the need
to create one manually via CreateEvaluationRequest.builder()
Parameters: createEvaluationRequest - A Consumer that will call methods on CreateEvaluationInput.Builder to create a request.

default CompletableFuture<CreateMLModelResponse> createMLModel(CreateMLModelRequest createMLModelRequest)
Creates a new MLModel using the DataSource and the recipe as information sources.
An MLModel is nearly immutable. Users can update only the MLModelName and the
ScoreThreshold in an MLModel without creating a new MLModel.
CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon
Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING
. After the MLModel has been created and is ready for use, Amazon ML sets the status to
COMPLETED.
You can use the GetMLModel operation to check the progress of the MLModel during the
creation operation.
CreateMLModel requires a DataSource with computed statistics, which can be created by
setting ComputeStatistics to true in CreateDataSourceFromRDS,
CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.
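A hedged sketch of a model-creation request; the ids and model type are placeholders, and the setter names are assumptions based on the Amazon ML API shapes.

```java
client.createMLModel(CreateMLModelRequest.builder()
        .mlModelId("ml-exampleModelId")             // caller-chosen id (placeholder)
        .mlModelType("BINARY")                      // BINARY, REGRESSION, or MULTICLASS
        .trainingDataSourceId("ds-s3-example")      // DataSource created with ComputeStatistics = true
        .build());

// GetMLModel can then be used to watch the status move from PENDING to COMPLETED.
```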
Parameters: createMLModelRequest -

default CompletableFuture<CreateMLModelResponse> createMLModel(Consumer<CreateMLModelRequest.Builder> createMLModelRequest)
Creates a new MLModel using the DataSource and the recipe as information sources.
An MLModel is nearly immutable. Users can update only the MLModelName and the
ScoreThreshold in an MLModel without creating a new MLModel.
CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon
Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING
. After the MLModel has been created and is ready for use, Amazon ML sets the status to
COMPLETED.
You can use the GetMLModel operation to check the progress of the MLModel during the
creation operation.
CreateMLModel requires a DataSource with computed statistics, which can be created by
setting ComputeStatistics to true in CreateDataSourceFromRDS,
CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.
This is a convenience which creates an instance of the CreateMLModelRequest.Builder avoiding the need to
create one manually via CreateMLModelRequest.builder()
Parameters: createMLModelRequest - A Consumer that will call methods on CreateMLModelInput.Builder to create a request.

default CompletableFuture<CreateRealtimeEndpointResponse> createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest)
Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the
MLModel; that is, the location to send real-time prediction requests for the specified
MLModel.
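A hedged sketch of creating the endpoint and then requesting a real-time prediction; the model id, endpoint URI, and record values are placeholders, and the setter names are assumptions based on the Amazon ML API shapes.

```java
client.createRealtimeEndpoint(r -> r.mlModelId("ml-exampleModelId")).join();

// Once the endpoint is ready, send observations to it with Predict.
PredictResponse prediction = client.predict(PredictRequest.builder()
        .mlModelId("ml-exampleModelId")
        .predictEndpoint("https://realtime.machinelearning.us-east-1.amazonaws.com") // endpoint URI for the model (placeholder)
        .record(Map.of("feature1", "42", "feature2", "red"))                         // observation to score (placeholder values)
        .build()).join();
```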
Parameters: createRealtimeEndpointRequest -

default CompletableFuture<CreateRealtimeEndpointResponse> createRealtimeEndpoint(Consumer<CreateRealtimeEndpointRequest.Builder> createRealtimeEndpointRequest)
Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the
MLModel; that is, the location to send real-time prediction requests for the specified
MLModel.
This is a convenience which creates an instance of the CreateRealtimeEndpointRequest.Builder avoiding the
need to create one manually via CreateRealtimeEndpointRequest.builder()
Parameters: createRealtimeEndpointRequest - A Consumer that will call methods on CreateRealtimeEndpointInput.Builder to create a request.

default CompletableFuture<DeleteBatchPredictionResponse> deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest)
Assigns the DELETED status to a BatchPrediction, rendering it unusable.
After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation
to verify that the status of the BatchPrediction changed to DELETED.
Caution: The result of the DeleteBatchPrediction operation is irreversible.
Parameters: deleteBatchPredictionRequest -

default CompletableFuture<DeleteBatchPredictionResponse> deleteBatchPrediction(Consumer<DeleteBatchPredictionRequest.Builder> deleteBatchPredictionRequest)
Assigns the DELETED status to a BatchPrediction, rendering it unusable.
After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation
to verify that the status of the BatchPrediction changed to DELETED.
Caution: The result of the DeleteBatchPrediction operation is irreversible.
This is a convenience which creates an instance of the DeleteBatchPredictionRequest.Builder avoiding the
need to create one manually via DeleteBatchPredictionRequest.builder()
Parameters: deleteBatchPredictionRequest - A Consumer that will call methods on DeleteBatchPredictionInput.Builder to create a request.

default CompletableFuture<DeleteDataSourceResponse> deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest)
Assigns the DELETED status to a DataSource, rendering it unusable.
After using the DeleteDataSource operation, you can use the GetDataSource operation to verify
that the status of the DataSource changed to DELETED.
Caution: The results of the DeleteDataSource operation are irreversible.
Parameters: deleteDataSourceRequest -

default CompletableFuture<DeleteDataSourceResponse> deleteDataSource(Consumer<DeleteDataSourceRequest.Builder> deleteDataSourceRequest)
Assigns the DELETED status to a DataSource, rendering it unusable.
After using the DeleteDataSource operation, you can use the GetDataSource operation to verify
that the status of the DataSource changed to DELETED.
Caution: The results of the DeleteDataSource operation are irreversible.
This is a convenience which creates an instance of the DeleteDataSourceRequest.Builder avoiding the need
to create one manually via DeleteDataSourceRequest.builder()
Parameters: deleteDataSourceRequest - A Consumer that will call methods on DeleteDataSourceInput.Builder to create a request.

default CompletableFuture<DeleteEvaluationResponse> deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest)
Assigns the DELETED status to an Evaluation, rendering it unusable.
After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation
to verify that the status of the Evaluation changed to DELETED.
The results of the DeleteEvaluation operation are irreversible.
Parameters: deleteEvaluationRequest -

default CompletableFuture<DeleteEvaluationResponse> deleteEvaluation(Consumer<DeleteEvaluationRequest.Builder> deleteEvaluationRequest)
Assigns the DELETED status to an Evaluation, rendering it unusable.
After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation
to verify that the status of the Evaluation changed to DELETED.
The results of the DeleteEvaluation operation are irreversible.
This is a convenience which creates an instance of the DeleteEvaluationRequest.Builder avoiding the need
to create one manually via DeleteEvaluationRequest.builder()
Parameters: deleteEvaluationRequest - A Consumer that will call methods on DeleteEvaluationInput.Builder to create a request.

default CompletableFuture<DeleteMLModelResponse> deleteMLModel(DeleteMLModelRequest deleteMLModelRequest)
Assigns the DELETED status to an MLModel, rendering it unusable.
After using the DeleteMLModel operation, you can use the GetMLModel operation to verify
that the status of the MLModel changed to DELETED.
Caution: The result of the DeleteMLModel operation is irreversible.
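A hedged sketch of the delete-then-verify pattern described above; the model id is a placeholder and the statusAsString accessor name is an assumption based on the generated model classes.

```java
client.deleteMLModel(r -> r.mlModelId("ml-exampleModelId")).join();

// Confirm that the MLModel now reports the DELETED status.
String status = client.getMLModel(r -> r.mlModelId("ml-exampleModelId")).join().statusAsString();
```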
Parameters: deleteMLModelRequest -

default CompletableFuture<DeleteMLModelResponse> deleteMLModel(Consumer<DeleteMLModelRequest.Builder> deleteMLModelRequest)
Assigns the DELETED status to an MLModel, rendering it unusable.
After using the DeleteMLModel operation, you can use the GetMLModel operation to verify
that the status of the MLModel changed to DELETED.
Caution: The result of the DeleteMLModel operation is irreversible.
This is a convenience which creates an instance of the DeleteMLModelRequest.Builder avoiding the need to
create one manually via DeleteMLModelRequest.builder()
Parameters: deleteMLModelRequest - A Consumer that will call methods on DeleteMLModelInput.Builder to create a request.

default CompletableFuture<DeleteRealtimeEndpointResponse> deleteRealtimeEndpoint(DeleteRealtimeEndpointRequest deleteRealtimeEndpointRequest)
Deletes a real-time endpoint of an MLModel.

Parameters: deleteRealtimeEndpointRequest -

default CompletableFuture<DeleteRealtimeEndpointResponse> deleteRealtimeEndpoint(Consumer<DeleteRealtimeEndpointRequest.Builder> deleteRealtimeEndpointRequest)
Deletes a real-time endpoint of an MLModel.
This is a convenience which creates an instance of the DeleteRealtimeEndpointRequest.Builder avoiding the
need to create one manually via DeleteRealtimeEndpointRequest.builder()
Parameters: deleteRealtimeEndpointRequest - A Consumer that will call methods on DeleteRealtimeEndpointInput.Builder to create a request.

default CompletableFuture<DeleteTagsResponse> deleteTags(DeleteTagsRequest deleteTagsRequest)
Deletes the specified tags associated with an ML object. After this operation is complete, you can't recover deleted tags.
If you specify a tag that doesn't exist, Amazon ML ignores it.
Parameters: deleteTagsRequest -

default CompletableFuture<DeleteTagsResponse> deleteTags(Consumer<DeleteTagsRequest.Builder> deleteTagsRequest)
Deletes the specified tags associated with an ML object. After this operation is complete, you can't recover deleted tags.
If you specify a tag that doesn't exist, Amazon ML ignores it.
This is a convenience which creates an instance of the DeleteTagsRequest.Builder avoiding the need to
create one manually via DeleteTagsRequest.builder()
Parameters: deleteTagsRequest - A Consumer that will call methods on DeleteTagsInput.Builder to create a request.

default CompletableFuture<DescribeBatchPredictionsResponse> describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest)
Returns a list of BatchPrediction operations that match the search criteria in the request.
Parameters: describeBatchPredictionsRequest -

default CompletableFuture<DescribeBatchPredictionsResponse> describeBatchPredictions(Consumer<DescribeBatchPredictionsRequest.Builder> describeBatchPredictionsRequest)
Returns a list of BatchPrediction operations that match the search criteria in the request.
This is a convenience which creates an instance of the DescribeBatchPredictionsRequest.Builder avoiding
the need to create one manually via DescribeBatchPredictionsRequest.builder()
Parameters: describeBatchPredictionsRequest - A Consumer that will call methods on DescribeBatchPredictionsInput.Builder to create a request.

default CompletableFuture<DescribeBatchPredictionsResponse> describeBatchPredictions()
Returns a list of BatchPrediction operations that match the search criteria in the request.
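A hedged sketch of a filtered listing; the filter variable, value, and limit are placeholders, and the setter and accessor names are assumptions based on the Amazon ML DescribeBatchPredictions shapes.

```java
DescribeBatchPredictionsResponse page = client.describeBatchPredictions(r -> r
        .filterVariable("Status")   // variable to filter on (placeholder)
        .eq("COMPLETED")            // only completed batch predictions
        .limit(10))
        .join();

page.results().forEach(bp -> System.out.println(bp.batchPredictionId() + " " + bp.statusAsString()));
```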
default DescribeBatchPredictionsPublisher describeBatchPredictionsPaginator(DescribeBatchPredictionsRequest describeBatchPredictionsRequest)
Returns a list of BatchPrediction operations that match the search criteria in the request.
This is a variant of the
describeBatchPredictions(software.amazon.awssdk.services.machinelearning.model.DescribeBatchPredictionsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages. The
SDK will internally handle making service calls for you.

When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe
method will result in a new Subscription, i.e., a new contract to stream data from the
starting request.

The following are a few ways to use the response class:

1) Using the forEach helper method

software.amazon.awssdk.services.machinelearning.paginators.DescribeBatchPredictionsPublisher publisher = client.describeBatchPredictionsPaginator(request);
CompletableFuture<Void> future = publisher.forEach(res -> { /* Do something with the response */ });
future.get();

2) Using a custom subscriber

software.amazon.awssdk.services.machinelearning.paginators.DescribeBatchPredictionsPublisher publisher = client.describeBatchPredictionsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.machinelearning.model.DescribeBatchPredictionsResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { subscription.request(Long.MAX_VALUE); }
    public void onNext(software.amazon.awssdk.services.machinelearning.model.DescribeBatchPredictionsResponse response) { /* Do something with the response page */ }
    public void onError(Throwable t) { /* Handle the failure */ }
    public void onComplete() { /* All pages have been delivered */ }
});

As the response is a publisher, it can work well with third-party reactive streams implementations like RxJava2.

Note: If you prefer to have control over service calls, use the
describeBatchPredictions(software.amazon.awssdk.services.machinelearning.model.DescribeBatchPredictionsRequest)
operation.

Parameters: describeBatchPredictionsRequest -

default DescribeBatchPredictionsPublisher describeBatchPredictionsPaginator()
Returns a list of BatchPrediction operations that match the search criteria in the request.
This is a variant of the
describeBatchPredictions(software.amazon.awssdk.services.machinelearning.model.DescribeBatchPredictionsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages. The
SDK will internally handle making service calls for you.

When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe
method will result in a new Subscription, i.e., a new contract to stream data from the
starting request.

The following are a few ways to use the response class:

1) Using the forEach helper method

software.amazon.awssdk.services.machinelearning.paginators.DescribeBatchPredictionsPublisher publisher = client.describeBatchPredictionsPaginator(request);
CompletableFuture<Void> future = publisher.forEach(res -> { /* Do something with the response */ });
future.get();

2) Using a custom subscriber

software.amazon.awssdk.services.machinelearning.paginators.DescribeBatchPredictionsPublisher publisher = client.describeBatchPredictionsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.machinelearning.model.DescribeBatchPredictionsResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { subscription.request(Long.MAX_VALUE); }
    public void onNext(software.amazon.awssdk.services.machinelearning.model.DescribeBatchPredictionsResponse response) { /* Do something with the response page */ }
    public void onError(Throwable t) { /* Handle the failure */ }
    public void onComplete() { /* All pages have been delivered */ }
});

As the response is a publisher, it can work well with third-party reactive streams implementations like RxJava2.

Note: If you prefer to have control over service calls, use the
describeBatchPredictions(software.amazon.awssdk.services.machinelearning.model.DescribeBatchPredictionsRequest)
operation.
default CompletableFuture<DescribeDataSourcesResponse> describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest)
Returns a list of DataSource that match the search criteria in the request.
Parameters: describeDataSourcesRequest -

default CompletableFuture<DescribeDataSourcesResponse> describeDataSources(Consumer<DescribeDataSourcesRequest.Builder> describeDataSourcesRequest)
Returns a list of DataSource that match the search criteria in the request.
This is a convenience which creates an instance of the DescribeDataSourcesRequest.Builder avoiding the
need to create one manually via DescribeDataSourcesRequest.builder()
Parameters: describeDataSourcesRequest - A Consumer that will call methods on DescribeDataSourcesInput.Builder to create a request.

default CompletableFuture<DescribeDataSourcesResponse> describeDataSources()
Returns a list of DataSource that match the search criteria in the request.
default DescribeDataSourcesPublisher describeDataSourcesPaginator(DescribeDataSourcesRequest describeDataSourcesRequest)
Returns a list of DataSource that match the search criteria in the request.
This is a variant of the
describeDataSources(software.amazon.awssdk.services.machinelearning.model.DescribeDataSourcesRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages. The
SDK will internally handle making service calls for you.

When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe
method will result in a new Subscription, i.e., a new contract to stream data from the
starting request.

The following are a few ways to use the response class:

1) Using the forEach helper method

software.amazon.awssdk.services.machinelearning.paginators.DescribeDataSourcesPublisher publisher = client.describeDataSourcesPaginator(request);
CompletableFuture<Void> future = publisher.forEach(res -> { /* Do something with the response */ });
future.get();

2) Using a custom subscriber

software.amazon.awssdk.services.machinelearning.paginators.DescribeDataSourcesPublisher publisher = client.describeDataSourcesPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.machinelearning.model.DescribeDataSourcesResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { subscription.request(Long.MAX_VALUE); }
    public void onNext(software.amazon.awssdk.services.machinelearning.model.DescribeDataSourcesResponse response) { /* Do something with the response page */ }
    public void onError(Throwable t) { /* Handle the failure */ }
    public void onComplete() { /* All pages have been delivered */ }
});

As the response is a publisher, it can work well with third-party reactive streams implementations like RxJava2.

Note: If you prefer to have control over service calls, use the
describeDataSources(software.amazon.awssdk.services.machinelearning.model.DescribeDataSourcesRequest)
operation.

Parameters: describeDataSourcesRequest -

default DescribeDataSourcesPublisher describeDataSourcesPaginator()
Returns a list of DataSource that match the search criteria in the request.
This is a variant of the
describeDataSources(software.amazon.awssdk.services.machinelearning.model.DescribeDataSourcesRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages. The
SDK will internally handle making service calls for you.

When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe
method will result in a new Subscription, i.e., a new contract to stream data from the
starting request.

The following are a few ways to use the response class:

1) Using the forEach helper method

software.amazon.awssdk.services.machinelearning.paginators.DescribeDataSourcesPublisher publisher = client.describeDataSourcesPaginator(request);
CompletableFuture<Void> future = publisher.forEach(res -> { /* Do something with the response */ });
future.get();

2) Using a custom subscriber

software.amazon.awssdk.services.machinelearning.paginators.DescribeDataSourcesPublisher publisher = client.describeDataSourcesPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.machinelearning.model.DescribeDataSourcesResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { subscription.request(Long.MAX_VALUE); }
    public void onNext(software.amazon.awssdk.services.machinelearning.model.DescribeDataSourcesResponse response) { /* Do something with the response page */ }
    public void onError(Throwable t) { /* Handle the failure */ }
    public void onComplete() { /* All pages have been delivered */ }
});

As the response is a publisher, it can work well with third-party reactive streams implementations like RxJava2.

Note: If you prefer to have control over service calls, use the
describeDataSources(software.amazon.awssdk.services.machinelearning.model.DescribeDataSourcesRequest)
operation.
default CompletableFuture<DescribeEvaluationsResponse> describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest)
Returns a list of DescribeEvaluations that match the search criteria in the request.
Parameters: describeEvaluationsRequest -

default CompletableFuture<DescribeEvaluationsResponse> describeEvaluations(Consumer<DescribeEvaluationsRequest.Builder> describeEvaluationsRequest)
Returns a list of DescribeEvaluations that match the search criteria in the request.
This is a convenience which creates an instance of the DescribeEvaluationsRequest.Builder avoiding the
need to create one manually via DescribeEvaluationsRequest.builder()
Parameters: describeEvaluationsRequest - A Consumer that will call methods on DescribeEvaluationsInput.Builder to create a request.

default CompletableFuture<DescribeEvaluationsResponse> describeEvaluations()
Returns a list of DescribeEvaluations that match the search criteria in the request.
default DescribeEvaluationsPublisher describeEvaluationsPaginator(DescribeEvaluationsRequest describeEvaluationsRequest)
Returns a list of DescribeEvaluations that match the search criteria in the request.
This is a variant of the
describeEvaluations(software.amazon.awssdk.services.machinelearning.model.DescribeEvaluationsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages. The
SDK will internally handle making service calls for you.

When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe
method will result in a new Subscription, i.e., a new contract to stream data from the
starting request.

The following are a few ways to use the response class:

1) Using the forEach helper method

software.amazon.awssdk.services.machinelearning.paginators.DescribeEvaluationsPublisher publisher = client.describeEvaluationsPaginator(request);
CompletableFuture<Void> future = publisher.forEach(res -> { /* Do something with the response */ });
future.get();

2) Using a custom subscriber

software.amazon.awssdk.services.machinelearning.paginators.DescribeEvaluationsPublisher publisher = client.describeEvaluationsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.machinelearning.model.DescribeEvaluationsResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { subscription.request(Long.MAX_VALUE); }
    public void onNext(software.amazon.awssdk.services.machinelearning.model.DescribeEvaluationsResponse response) { /* Do something with the response page */ }
    public void onError(Throwable t) { /* Handle the failure */ }
    public void onComplete() { /* All pages have been delivered */ }
});

As the response is a publisher, it can work well with third-party reactive streams implementations like RxJava2.

Note: If you prefer to have control over service calls, use the
describeEvaluations(software.amazon.awssdk.services.machinelearning.model.DescribeEvaluationsRequest)
operation.

Parameters: describeEvaluationsRequest -

default DescribeEvaluationsPublisher describeEvaluationsPaginator()
Returns a list of DescribeEvaluations that match the search criteria in the request.
This is a variant of the
describeEvaluations(software.amazon.awssdk.services.machinelearning.model.DescribeEvaluationsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages. The
SDK will internally handle making service calls for you.

When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe
method will result in a new Subscription, i.e., a new contract to stream data from the
starting request.

The following are a few ways to use the response class:

1) Using the forEach helper method

software.amazon.awssdk.services.machinelearning.paginators.DescribeEvaluationsPublisher publisher = client.describeEvaluationsPaginator(request);
CompletableFuture<Void> future = publisher.forEach(res -> { /* Do something with the response */ });
future.get();

2) Using a custom subscriber

software.amazon.awssdk.services.machinelearning.paginators.DescribeEvaluationsPublisher publisher = client.describeEvaluationsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.machinelearning.model.DescribeEvaluationsResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { subscription.request(Long.MAX_VALUE); }
    public void onNext(software.amazon.awssdk.services.machinelearning.model.DescribeEvaluationsResponse response) { /* Do something with the response page */ }
    public void onError(Throwable t) { /* Handle the failure */ }
    public void onComplete() { /* All pages have been delivered */ }
});

As the response is a publisher, it can work well with third-party reactive streams implementations like RxJava2.

Note: If you prefer to have control over service calls, use the
describeEvaluations(software.amazon.awssdk.services.machinelearning.model.DescribeEvaluationsRequest)
operation.
default CompletableFuture<DescribeMLModelsResponse> describeMLModels(DescribeMLModelsRequest describeMLModelsRequest)
Returns a list of MLModel that match the search criteria in the request.
Parameters: describeMLModelsRequest -

default CompletableFuture<DescribeMLModelsResponse> describeMLModels(Consumer<DescribeMLModelsRequest.Builder> describeMLModelsRequest)
Returns a list of MLModel that match the search criteria in the request.
This is a convenience method that creates an instance of the DescribeMLModelsRequest.Builder, avoiding the need
to create one manually via DescribeMLModelsRequest.builder().
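As a hedged illustration (assuming client is an existing MachineLearningAsyncClient and the same limit(...) setter and results() accessor as above), the Consumer variant collapses the request construction into a lambda:
// Equivalent to building a DescribeMLModelsRequest manually and passing it in.
CompletableFuture<DescribeMLModelsResponse> future =
        client.describeMLModels(r -> r.limit(10));                      // assumed builder setter
future.thenAccept(response -> System.out.println(response.results())); // assumed accessor for the result list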
describeMLModelsRequest - A Consumer that will call methods on DescribeMLModelsInput.Builder to create a request.
default CompletableFuture<DescribeMLModelsResponse> describeMLModels()
Returns a list of MLModel that match the search criteria in the request.
default DescribeMLModelsPublisher describeMLModelsPaginator(DescribeMLModelsRequest describeMLModelsRequest)
Returns a list of MLModel that match the search criteria in the request.
This is a variant of the
describeMLModels(software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
The SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe
method will result in a new Subscription, i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the forEach helper method
software.amazon.awssdk.services.machinelearning.paginators.DescribeMLModelsPublisher publisher = client.describeMLModelsPaginator(request);
CompletableFuture<Void> future = publisher.forEach(res -> {
    // Do something with the response
});
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.machinelearning.paginators.DescribeMLModelsPublisher publisher = client.describeMLModelsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { /* e.g. subscription.request(Long.MAX_VALUE) */ }
    public void onNext(software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsResponse response) { /* handle a page of results */ }
    public void onError(Throwable t) { /* handle errors */ }
    public void onComplete() { /* all pages have been received */ }
});
As the response is a publisher, it works well with third-party Reactive Streams implementations such as RxJava2.
Note: If you prefer to have control over service calls, use the
describeMLModels(software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsRequest)
operation.
describeMLModelsRequest -
default DescribeMLModelsPublisher describeMLModelsPaginator()
Returns a list of MLModel that match the search criteria in the request.
This is a variant of the
describeMLModels(software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
The SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe
method will result in a new Subscription, i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the forEach helper method
software.amazon.awssdk.services.machinelearning.paginators.DescribeMLModelsPublisher publisher = client.describeMLModelsPaginator(request);
CompletableFuture<Void> future = publisher.forEach(res -> {
    // Do something with the response
});
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.machinelearning.paginators.DescribeMLModelsPublisher publisher = client.describeMLModelsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { /* e.g. subscription.request(Long.MAX_VALUE) */ }
    public void onNext(software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsResponse response) { /* handle a page of results */ }
    public void onError(Throwable t) { /* handle errors */ }
    public void onComplete() { /* all pages have been received */ }
});
As the response is a publisher, it works well with third-party Reactive Streams implementations such as RxJava2.
Note: If you prefer to have control over service calls, use the
describeMLModels(software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsRequest)
operation.
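To make the note above concrete, here is a rough sketch of manual paging with the non-paginated describeMLModels operation. It assumes the request builder accepts a nextToken(...) setter and that the response exposes results() and nextToken() accessors, which is the usual shape for this API but is not verified here.
import software.amazon.awssdk.services.machinelearning.MachineLearningAsyncClient;
import software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsRequest;
import software.amazon.awssdk.services.machinelearning.model.DescribeMLModelsResponse;

public class ManualPagingExample {
    public static void main(String[] args) {
        MachineLearningAsyncClient client = MachineLearningAsyncClient.create();
        String nextToken = null;
        do {
            DescribeMLModelsRequest.Builder builder = DescribeMLModelsRequest.builder();
            if (nextToken != null) {
                builder.nextToken(nextToken); // assumed setter for the pagination token
            }
            // Each iteration issues exactly one service call, so the caller controls the paging rate.
            DescribeMLModelsResponse page = client.describeMLModels(builder.build()).join();
            page.results().forEach(model -> System.out.println(model.mlModelId())); // assumed accessors
            nextToken = page.nextToken(); // assumed accessor; null when there are no more pages
        } while (nextToken != null);
        client.close();
    }
}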
default CompletableFuture<DescribeTagsResponse> describeTags(DescribeTagsRequest describeTagsRequest)
Describes one or more of the tags for your Amazon ML object.
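A minimal sketch of calling describeTags, assuming client is an existing MachineLearningAsyncClient; the resourceId, resourceType, tags(), key(), and value() names are assumptions based on the API's field names, not quoted from this page, and the id value is hypothetical.
CompletableFuture<DescribeTagsResponse> future = client.describeTags(r -> r
        .resourceId("ml-exampleModelId") // hypothetical resource id
        .resourceType("MLModel"));       // assumed setter for the resource type
future.thenAccept(response -> response.tags() // assumed accessor for the returned tag list
        .forEach(tag -> System.out.println(tag.key() + "=" + tag.value())));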
describeTagsRequest -
default CompletableFuture<DescribeTagsResponse> describeTags(Consumer<DescribeTagsRequest.Builder> describeTagsRequest)
Describes one or more of the tags for your Amazon ML object.
This is a convenience method that creates an instance of the DescribeTagsRequest.Builder, avoiding the need to
create one manually via DescribeTagsRequest.builder().
describeTagsRequest - A Consumer that will call methods on DescribeTagsInput.Builder to create a request.
default CompletableFuture<GetBatchPredictionResponse> getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest)
Returns a BatchPrediction that includes detailed metadata, status, and data file information for a
Batch Prediction request.
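A hedged sketch of retrieving a single batch prediction, assuming client is an existing MachineLearningAsyncClient; the batchPredictionId setter and the id value are assumptions for illustration.
CompletableFuture<GetBatchPredictionResponse> future =
        client.getBatchPrediction(r -> r.batchPredictionId("bp-exampleId")); // hypothetical id
future.thenAccept(bp -> System.out.println(bp)); // the response carries the metadata, status, and data file information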
getBatchPredictionRequest -
default CompletableFuture<GetBatchPredictionResponse> getBatchPrediction(Consumer<GetBatchPredictionRequest.Builder> getBatchPredictionRequest)
Returns a BatchPrediction that includes detailed metadata, status, and data file information for a
Batch Prediction request.
This is a convenience method that creates an instance of the GetBatchPredictionRequest.Builder, avoiding the
need to create one manually via GetBatchPredictionRequest.builder().
getBatchPredictionRequest - A Consumer that will call methods on GetBatchPredictionInput.Builder to create a request.
default CompletableFuture<GetDataSourceResponse> getDataSource(GetDataSourceRequest getDataSourceRequest)
Returns a DataSource that includes metadata and data file information, as well as the current status
of the DataSource.
GetDataSource provides results in normal or verbose format. The verbose format adds the schema
description and the list of files pointed to by the DataSource to the normal format.
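To show the normal-versus-verbose distinction, a small sketch follows; the dataSourceId and verbose setters and the dataSourceSchema() accessor are assumed from the API's field names, not quoted from this page, and client is an existing MachineLearningAsyncClient.
// Verbose format additionally returns the schema description and the list of files backing the DataSource.
CompletableFuture<GetDataSourceResponse> future = client.getDataSource(r -> r
        .dataSourceId("ds-exampleId") // hypothetical id
        .verbose(true));              // assumed Boolean setter selecting the verbose format
future.thenAccept(ds -> System.out.println(ds.dataSourceSchema())); // assumed accessor, populated only in verbose mode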
getDataSourceRequest -
default CompletableFuture<GetDataSourceResponse> getDataSource(Consumer<GetDataSourceRequest.Builder> getDataSourceRequest)
Returns a DataSource that includes metadata and data file information, as well as the current status
of the DataSource.
GetDataSource provides results in normal or verbose format. The verbose format adds the schema
description and the list of files pointed to by the DataSource to the normal format.
This is a convenience method that creates an instance of the GetDataSourceRequest.Builder, avoiding the need to
create one manually via GetDataSourceRequest.builder().
getDataSourceRequest - A Consumer that will call methods on GetDataSourceInput.Builder to create a request.
default CompletableFuture<GetEvaluationResponse> getEvaluation(GetEvaluationRequest getEvaluationRequest)
Returns an Evaluation that includes metadata as well as the current status of the
Evaluation.
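For completeness, a comparable sketch for getEvaluation, with client as an existing MachineLearningAsyncClient; the evaluationId setter, the performanceMetrics() accessor, and the id value are assumptions.
client.getEvaluation(r -> r.evaluationId("ev-exampleId"))              // hypothetical id
      .thenAccept(ev -> System.out.println(ev.performanceMetrics()))   // assumed accessor for the evaluation metrics
      .join();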
getEvaluationRequest -
default CompletableFuture<GetEvaluationResponse> getEvaluation(Consumer<GetEvaluationRequest.Builder> getEvaluationRequest)
Returns an Evaluation that includes metadata as well as the current status of the
Evaluation.
This is a convenience method that creates an instance of the GetEvaluationRequest.Builder, avoiding the need to
create one manually via GetEvaluationRequest.builder().
getEvaluationRequest - A Consumer that will call methods on GetEvaluationInput.Builder to create a request.
default CompletableFuture<GetMLModelResponse> getMLModel(GetMLModelRequest getMLModelRequest)
Returns an MLModel that includes detailed metadata, data source information, and the current status
of the MLModel.
GetMLModel provides results in normal or verbose format.
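A minimal sketch, again with client as an existing MachineLearningAsyncClient and assuming the mlModelId and verbose setters and the mlModelType() accessor implied by the API field names.
CompletableFuture<GetMLModelResponse> future = client.getMLModel(r -> r
        .mlModelId("ml-exampleModelId") // hypothetical id
        .verbose(true));                // assumed Boolean setter selecting the verbose format
future.thenAccept(model -> System.out.println(model.mlModelType())); // assumed accessor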
getMLModelRequest -
default CompletableFuture<GetMLModelResponse> getMLModel(Consumer<GetMLModelRequest.Builder> getMLModelRequest)
Returns an MLModel that includes detailed metadata, data source information, and the current status
of the MLModel.
GetMLModel provides results in normal or verbose format.
This is a convenience method that creates an instance of the GetMLModelRequest.Builder, avoiding the need to
create one manually via GetMLModelRequest.builder().
getMLModelRequest - A Consumer that will call methods on GetMLModelInput.Builder to create a request.
default CompletableFuture<PredictResponse> predict(PredictRequest predictRequest)
Generates a prediction for the observation using the specified ML Model.
Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
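To make the call shape concrete, a hedged sketch follows; client is an existing MachineLearningAsyncClient, the mlModelId, record, and predictEndpoint setters and the prediction() accessor follow the API's field names but are assumptions here, and the id, feature values, and endpoint URL are purely illustrative.
// The observation is a map of feature name to string value; the real-time endpoint must already exist for the model.
java.util.Map<String, String> record = new java.util.HashMap<>();
record.put("feature1", "42");   // hypothetical feature values
record.put("feature2", "red");
client.predict(r -> r
        .mlModelId("ml-exampleModelId")                                                // hypothetical id
        .record(record)                                                                // assumed setter for the observation map
        .predictEndpoint("https://realtime.machinelearning.us-east-1.amazonaws.com"))  // illustrative endpoint URL
      .thenAccept(resp -> System.out.println(resp.prediction()))                       // assumed accessor
      .join();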
predictRequest -
default CompletableFuture<PredictResponse> predict(Consumer<PredictRequest.Builder> predictRequest)
Generates a prediction for the observation using the specified ML Model.
Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
This is a convenience method that creates an instance of the PredictRequest.Builder, avoiding the need to create
one manually via PredictRequest.builder().
predictRequest - A Consumer that will call methods on PredictInput.Builder to create a request.
default CompletableFuture<UpdateBatchPredictionResponse> updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest)
Updates the BatchPredictionName of a BatchPrediction.
You can use the GetBatchPrediction operation to view the contents of the updated data element.
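A short sketch of the rename call, with client as an existing MachineLearningAsyncClient; the setter names and values are assumptions based on the API's field names, and the same id-plus-new-name pattern applies to updateDataSource and updateEvaluation below.
client.updateBatchPrediction(r -> r
        .batchPredictionId("bp-exampleId")           // hypothetical id of the existing BatchPrediction
        .batchPredictionName("nightly-run-renamed")) // hypothetical new, human-readable name
      .thenAccept(resp -> System.out.println(resp))
      .join();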
updateBatchPredictionRequest -
default CompletableFuture<UpdateBatchPredictionResponse> updateBatchPrediction(Consumer<UpdateBatchPredictionRequest.Builder> updateBatchPredictionRequest)
Updates the BatchPredictionName of a BatchPrediction.
You can use the GetBatchPrediction operation to view the contents of the updated data element.
This is a convenience method that creates an instance of the UpdateBatchPredictionRequest.Builder, avoiding the
need to create one manually via UpdateBatchPredictionRequest.builder().
updateBatchPredictionRequest - A Consumer that will call methods on UpdateBatchPredictionInput.Builder to create a request.
default CompletableFuture<UpdateDataSourceResponse> updateDataSource(UpdateDataSourceRequest updateDataSourceRequest)
Updates the DataSourceName of a DataSource.
You can use the GetDataSource operation to view the contents of the updated data element.
updateDataSourceRequest -
default CompletableFuture<UpdateDataSourceResponse> updateDataSource(Consumer<UpdateDataSourceRequest.Builder> updateDataSourceRequest)
Updates the DataSourceName of a DataSource.
You can use the GetDataSource operation to view the contents of the updated data element.
This is a convenience method that creates an instance of the UpdateDataSourceRequest.Builder, avoiding the need
to create one manually via UpdateDataSourceRequest.builder().
updateDataSourceRequest - A Consumer that will call methods on UpdateDataSourceInput.Builder to create a request.
default CompletableFuture<UpdateEvaluationResponse> updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest)
Updates the EvaluationName of an Evaluation.
You can use the GetEvaluation operation to view the contents of the updated data element.
updateEvaluationRequest -
default CompletableFuture<UpdateEvaluationResponse> updateEvaluation(Consumer<UpdateEvaluationRequest.Builder> updateEvaluationRequest)
Updates the EvaluationName of an Evaluation.
You can use the GetEvaluation operation to view the contents of the updated data element.
This is a convenience method that creates an instance of the UpdateEvaluationRequest.Builder, avoiding the need
to create one manually via UpdateEvaluationRequest.builder().
updateEvaluationRequest - A Consumer that will call methods on UpdateEvaluationInput.Builder to create a request.
default CompletableFuture<UpdateMLModelResponse> updateMLModel(UpdateMLModelRequest updateMLModelRequest)
Updates the MLModelName and the ScoreThreshold of an MLModel.
You can use the GetMLModel operation to view the contents of the updated data element.
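A final sketch showing both updatable fields, with client as an existing MachineLearningAsyncClient; the setter names, the Float type of the score threshold, and the values are assumptions based on the API's field names.
client.updateMLModel(r -> r
        .mlModelId("ml-exampleModelId") // hypothetical id
        .mlModelName("churn-model-v2")  // hypothetical new name
        .scoreThreshold(0.75f))         // assumed Float setter; relevant for binary classification models
      .thenAccept(resp -> System.out.println(resp))
      .join();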
updateMLModelRequest -
default CompletableFuture<UpdateMLModelResponse> updateMLModel(Consumer<UpdateMLModelRequest.Builder> updateMLModelRequest)
Updates the MLModelName and the ScoreThreshold of an MLModel.
You can use the GetMLModel operation to view the contents of the updated data element.
This is a convenience method that creates an instance of the UpdateMLModelRequest.Builder, avoiding the need to
create one manually via UpdateMLModelRequest.builder().
updateMLModelRequest - A Consumer that will call methods on UpdateMLModelInput.Builder to create a request.
Copyright © 2017 Amazon Web Services, Inc. All Rights Reserved.