

# CreateMLEndpoint
<a name="API_CreateMLEndpoint"></a>

Creates a new Neptune ML inference endpoint that lets you query one specific model that the model-training process constructed. See [Managing inference endpoints using the endpoints command](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-api-endpoints.html).

When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the [neptune-db:CreateMLEndpoint](https://docs.aws.amazon.com/neptune/latest/userguide/iam-dp-actions.html#createmlendpoint) IAM action in that cluster.

## Request Syntax
<a name="API_CreateMLEndpoint_RequestSyntax"></a>

```
POST /ml/endpoints HTTP/1.1
Content-type: application/json

{
   "id": "string",
   "instanceCount": number,
   "instanceType": "string",
   "mlModelTrainingJobId": "string",
   "mlModelTransformJobId": "string",
   "modelName": "string",
   "neptuneIamRoleArn": "string",
   "update": boolean,
   "volumeEncryptionKMSKey": "string"
}
```
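The request body above can be assembled programmatically before signing and sending the POST. A minimal Python sketch (the helper name `build_create_ml_endpoint_request` is illustrative, not part of any AWS SDK) that builds the JSON body, omits unset optional fields, and enforces the documented rule that you must supply either `mlModelTrainingJobId` or `mlModelTransformJobId`:

```python
import json

def build_create_ml_endpoint_request(
    *,
    id=None,
    instanceCount=None,
    instanceType=None,
    mlModelTrainingJobId=None,
    mlModelTransformJobId=None,
    modelName=None,
    neptuneIamRoleArn=None,
    update=None,
    volumeEncryptionKMSKey=None,
):
    """Return the JSON body for POST /ml/endpoints, omitting unset fields."""
    # The API requires one of the two job IDs (see the field descriptions below).
    if mlModelTrainingJobId is None and mlModelTransformJobId is None:
        raise ValueError(
            "Supply either mlModelTrainingJobId or mlModelTransformJobId"
        )
    body = {
        "id": id,
        "instanceCount": instanceCount,
        "instanceType": instanceType,
        "mlModelTrainingJobId": mlModelTrainingJobId,
        "mlModelTransformJobId": mlModelTransformJobId,
        "modelName": modelName,
        "neptuneIamRoleArn": neptuneIamRoleArn,
        "update": update,
        "volumeEncryptionKMSKey": volumeEncryptionKMSKey,
    }
    # All fields are optional; drop the ones the caller did not set.
    return json.dumps({k: v for k, v in body.items() if v is not None})
```

Note that the actual request must still be signed with AWS Signature Version 4 when IAM authentication is enabled on the cluster.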

## URI Request Parameters
<a name="API_CreateMLEndpoint_RequestParameters"></a>

The request does not use any URI parameters.

## Request Body
<a name="API_CreateMLEndpoint_RequestBody"></a>

The request accepts the following data in JSON format.

 ** [id](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-id"></a>
A unique identifier for the new inference endpoint. The default is an autogenerated timestamped name.  
Type: String  
Required: No

 ** [instanceCount](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-instanceCount"></a>
The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction. The default is 1.  
Type: Integer  
Required: No

 ** [instanceType](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-instanceType"></a>
The type of Neptune ML instance to use for online servicing. The default is `ml.m5.xlarge`. Choosing the ML instance for an inference endpoint depends on the task type, the graph size, and your budget.  
Type: String  
Required: No

 ** [mlModelTrainingJobId](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-mlModelTrainingJobId"></a>
The job ID of the completed model-training job that created the model that the inference endpoint will point to. You must supply either the `mlModelTrainingJobId` or the `mlModelTransformJobId`.  
Type: String  
Required: No

 ** [mlModelTransformJobId](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-mlModelTransformJobId"></a>
The job ID of the completed model-transform job. You must supply either the `mlModelTrainingJobId` or the `mlModelTransformJobId`.  
Type: String  
Required: No

 ** [modelName](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-modelName"></a>
The model type for the endpoint. By default, the Neptune ML model type is based on the `modelType` used in data processing, but you can specify a different model type here. The default is `rgcn` for heterogeneous graphs and `kge` for knowledge graphs. The only valid value for heterogeneous graphs is `rgcn`. Valid values for knowledge graphs are: `kge`, `transe`, `distmult`, and `rotate`.  
Type: String  
Required: No

 ** [neptuneIamRoleArn](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-neptuneIamRoleArn"></a>
The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This role must be listed in your DB cluster parameter group, or an error occurs.  
Type: String  
Required: No

 ** [update](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-update"></a>
If set to `true`, `update` indicates that this is an update request. The default is `false`. You must supply either the `mlModelTrainingJobId` or the `mlModelTransformJobId`.  
Type: Boolean  
Required: No

 ** [volumeEncryptionKMSKey](#API_CreateMLEndpoint_RequestSyntax) **   <a name="neptunedata-CreateMLEndpoint-request-volumeEncryptionKMSKey"></a>
The AWS Key Management Service (AWS KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that host the endpoint. The default is None.  
Type: String  
Required: No
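The `modelName` constraints described above can be checked client-side before submitting the request. A small hypothetical helper (the function and table names are illustrative, not part of any SDK) that resolves the effective model name for a given graph type:

```python
# Valid modelName values per graph type, as documented above.
VALID_MODELS = {
    "heterogeneous": {"rgcn"},
    "knowledge": {"kge", "transe", "distmult", "rotate"},
}

# Documented defaults when no modelName is supplied.
DEFAULT_MODEL = {"heterogeneous": "rgcn", "knowledge": "kge"}

def resolve_model_name(graph_type, model_name=None):
    """Return the effective modelName, validating it against the graph type."""
    if model_name is None:
        return DEFAULT_MODEL[graph_type]
    if model_name not in VALID_MODELS[graph_type]:
        raise ValueError(f"{model_name!r} is not valid for {graph_type} graphs")
    return model_name
```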

## Response Syntax
<a name="API_CreateMLEndpoint_ResponseSyntax"></a>

```
HTTP/1.1 200
Content-type: application/json

{
   "arn": "string",
   "creationTimeInMillis": number,
   "id": "string"
}
```

## Response Elements
<a name="API_CreateMLEndpoint_ResponseElements"></a>

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.

 ** [arn](#API_CreateMLEndpoint_ResponseSyntax) **   <a name="neptunedata-CreateMLEndpoint-response-arn"></a>
The ARN for the new inference endpoint.  
Type: String

 ** [creationTimeInMillis](#API_CreateMLEndpoint_ResponseSyntax) **   <a name="neptunedata-CreateMLEndpoint-response-creationTimeInMillis"></a>
The endpoint creation time, in milliseconds.  
Type: Long

 ** [id](#API_CreateMLEndpoint_ResponseSyntax) **   <a name="neptunedata-CreateMLEndpoint-response-id"></a>
The unique ID of the new inference endpoint.  
Type: String
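A successful response can be unpacked as follows. This sketch uses a sample payload with illustrative values and assumes `creationTimeInMillis` is epoch milliseconds:

```python
import json
from datetime import datetime, timezone

# Sample response body; the ARN, time, and ID shown are illustrative only.
sample = """
{
   "arn": "arn:aws:sagemaker:us-east-1:123456789012:endpoint/example",
   "creationTimeInMillis": 1700000000000,
   "id": "example"
}
"""

resp = json.loads(sample)
# Convert epoch milliseconds to an aware UTC datetime.
created = datetime.fromtimestamp(resp["creationTimeInMillis"] / 1000, tz=timezone.utc)
print(resp["id"], resp["arn"], created.isoformat())
```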

## Errors
<a name="API_CreateMLEndpoint_Errors"></a>

For information about the errors that are common to all actions, see [Common Error Types](CommonErrors.md).

 ** BadRequestException **   
Raised when a request is submitted that cannot be processed.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the bad request.
HTTP Status Code: 400

 ** ClientTimeoutException **   
Raised when a request timed out in the client.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request in question.
HTTP Status Code: 408

 ** ConstraintViolationException **   
Raised when a value in a request field did not satisfy required constraints.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request in question.
HTTP Status Code: 400

 ** IllegalArgumentException **   
Raised when an argument in a request is not supported.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request in question.
HTTP Status Code: 400

 ** InvalidArgumentException **   
Raised when an argument in a request has an invalid value.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request in question.
HTTP Status Code: 400

 ** InvalidParameterException **   
Raised when a parameter value is not valid.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request that includes an invalid parameter.
HTTP Status Code: 400

 ** MissingParameterException **   
Raised when a required parameter is missing.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request in which the parameter is missing.
HTTP Status Code: 400

 ** MLResourceNotFoundException **   
Raised when a specified machine-learning resource could not be found.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request in question.
HTTP Status Code: 404

 ** PreconditionsFailedException **   
Raised when a precondition for processing a request is not satisfied.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request in question.
HTTP Status Code: 400

 ** TooManyRequestsException **   
Raised when the number of requests being processed exceeds the limit.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request that could not be processed for this reason.
HTTP Status Code: 429

 ** UnsupportedOperationException **   
Raised when a request attempts to initiate an operation that is not supported.    
 ** code **   
The HTTP status code returned with the exception.  
 ** detailedMessage **   
A detailed message describing the problem.  
 ** requestId **   
The ID of the request in question.
HTTP Status Code: 400
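All of the error shapes above expose the same `code`, `detailedMessage`, and `requestId` fields, so a client can handle them uniformly. A hypothetical sketch (not part of any SDK) that formats an error payload and flags whether a retry might help, keyed on the HTTP status codes listed above:

```python
# Statuses from the table above that a client might reasonably retry:
# 408 (ClientTimeoutException) and 429 (TooManyRequestsException).
RETRYABLE_STATUSES = {408, 429}

def summarize_error(status, payload):
    """Return a human-readable summary and whether a retry may succeed."""
    msg = "{}: {} (request {})".format(
        payload.get("code"),
        payload.get("detailedMessage"),
        payload.get("requestId"),
    )
    return msg, status in RETRYABLE_STATUSES
```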

## See Also
<a name="API_CreateMLEndpoint_SeeAlso"></a>

For more information about using this API in one of the language-specific AWS SDKs, see the following:
+  [AWS Command Line Interface V2](https://docs.aws.amazon.com/goto/cli2/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for .NET V4](https://docs.aws.amazon.com/goto/DotNetSDKV4/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for C++](https://docs.aws.amazon.com/goto/SdkForCpp/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for Go v2](https://docs.aws.amazon.com/goto/SdkForGoV2/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for Java V2](https://docs.aws.amazon.com/goto/SdkForJavaV2/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for JavaScript V3](https://docs.aws.amazon.com/goto/SdkForJavaScriptV3/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for Kotlin](https://docs.aws.amazon.com/goto/SdkForKotlin/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for PHP V3](https://docs.aws.amazon.com/goto/SdkForPHPV3/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for Python](https://docs.aws.amazon.com/goto/boto3/neptunedata-2023-08-01/CreateMLEndpoint) 
+  [AWS SDK for Ruby V3](https://docs.aws.amazon.com/goto/SdkForRubyV3/neptunedata-2023-08-01/CreateMLEndpoint) 