/AWS1/CL_BDKGETMODELINVCJOBRSP
GetModelInvocationJobResponse
CONSTRUCTOR
IMPORTING
Required arguments:
iv_jobarn TYPE /AWS1/BDKMODELINVOCATIONJOBARN
The Amazon Resource Name (ARN) of the batch inference job.
iv_modelid TYPE /AWS1/BDKMODELID
The unique identifier of the foundation model used for model inference.
iv_rolearn TYPE /AWS1/BDKROLEARN
The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
iv_submittime TYPE /AWS1/BDKTIMESTAMP
The time at which the batch inference job was submitted.
io_inputdataconfig TYPE REF TO /AWS1/CL_BDKMDELINVJOBINPDAT00
Details about the location of the input to the batch inference job.
io_outputdataconfig TYPE REF TO /AWS1/CL_BDKMDELINVJOBOUTDAT00
Details about the location of the output of the batch inference job.
Optional arguments:
iv_jobname TYPE /AWS1/BDKMODELINVCJOBNAME
The name of the batch inference job.
iv_clientrequesttoken TYPE /AWS1/BDKMDELINVIDEMPOTENCYTOK
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
iv_status TYPE /AWS1/BDKMODELINVCJOBSTATUS
The status of the batch inference job.
The following statuses are possible:
Submitted – This job has been submitted to a queue for validation.
Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
Your IAM service role has access to the Amazon S3 buckets containing your files.
Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
InProgress – This job has begun. You can start viewing the results in the output S3 location.
Completed – This job has successfully completed. View the output files in the output S3 location.
PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
Stopped – This job was stopped by a user.
Stopping – This job is being stopped by a user.
iv_message TYPE /AWS1/BDKMESSAGE
If the batch inference job failed, this field contains a message describing why the job failed.
iv_lastmodifiedtime TYPE /AWS1/BDKTIMESTAMP
The time at which the batch inference job was last modified.
iv_endtime TYPE /AWS1/BDKTIMESTAMP
The time at which the batch inference job ended.
io_vpcconfig TYPE REF TO /AWS1/CL_BDKVPCCONFIG
The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
iv_timeoutdurationinhours TYPE /AWS1/BDKMDELINVJOBTODURINHO00
The number of hours after which the batch inference job was set to time out.
iv_jobexpirationtime TYPE /AWS1/BDKTIMESTAMP
The time at which the batch inference job times out or timed out.
iv_modelinvocationtype TYPE /AWS1/BDKMODELINVOCATIONTYPE
The invocation endpoint for the ModelInvocationJob.
iv_totalrecordcount TYPE /AWS1/BDKNONNEGATIVELONG
The total number of records in the batch inference job.
iv_processedrecordcount TYPE /AWS1/BDKNONNEGATIVELONG
The number of records that have been processed in the batch inference job.
iv_successrecordcount TYPE /AWS1/BDKNONNEGATIVELONG
The number of records that were successfully processed in the batch inference job.
iv_errorrecordcount TYPE /AWS1/BDKNONNEGATIVELONG
The number of records that failed to process in the batch inference job.
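In practice this response object is not constructed by hand; it is returned by the GetModelInvocationJob operation. A minimal sketch, where the session profile name 'DEMO', the factory/session creation calls, the parameter name iv_jobidentifier, and the job ARN are assumptions to verify against your SDK version:

```abap
" Hedged sketch: 'DEMO' and the job ARN below are placeholders.
DATA(lo_session) = /aws1/cl_rt_session_aws=>create( 'DEMO' ).
DATA(lo_bdk)     = /aws1/cl_bdk_factory=>create( lo_session ).

" The operation returns an instance of /AWS1/CL_BDKGETMODELINVCJOBRSP.
DATA(lo_job) = lo_bdk->getmodelinvocationjob(
  iv_jobidentifier = 'arn:aws:bedrock:us-east-1:111122223333:model-invocation-job/abc123' ).

WRITE: / lo_job->get_jobarn( ),
       / lo_job->get_status( ).
```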
Queryable Attributes
jobArn
The Amazon Resource Name (ARN) of the batch inference job.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_JOBARN() | Getter for JOBARN, with configurable default |
| ASK_JOBARN() | Getter for JOBARN w/ exceptions if field has no value |
| HAS_JOBARN() | Determine if JOBARN has a value |
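Each optional attribute exposes the same GET_/ASK_/HAS_ triad. A short sketch of the three access styles, assuming lo_job is an instance of this class; the exception class name /aws1/cx_rt_value_missing is an assumption to check against your SDK version:

```abap
" HAS_ + GET_: test for presence, then read (GET_ returns a default if unset).
IF lo_job->has_jobarn( ) = abap_true.
  DATA(lv_arn) = lo_job->get_jobarn( ).
ENDIF.

" ASK_: read directly and handle the missing-value exception instead.
TRY.
    lv_arn = lo_job->ask_jobarn( ).
  CATCH /aws1/cx_rt_value_missing.
    " Field was not set in the response.
ENDTRY.
```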
jobName
The name of the batch inference job.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_JOBNAME() | Getter for JOBNAME, with configurable default |
| ASK_JOBNAME() | Getter for JOBNAME w/ exceptions if field has no value |
| HAS_JOBNAME() | Determine if JOBNAME has a value |
modelId
The unique identifier of the foundation model used for model inference.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_MODELID() | Getter for MODELID, with configurable default |
| ASK_MODELID() | Getter for MODELID w/ exceptions if field has no value |
| HAS_MODELID() | Determine if MODELID has a value |
clientRequestToken
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_CLIENTREQUESTTOKEN() | Getter for CLIENTREQUESTTOKEN, with configurable default |
| ASK_CLIENTREQUESTTOKEN() | Getter for CLIENTREQUESTTOKEN w/ exceptions if field has no value |
| HAS_CLIENTREQUESTTOKEN() | Determine if CLIENTREQUESTTOKEN has a value |
roleArn
The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_ROLEARN() | Getter for ROLEARN, with configurable default |
| ASK_ROLEARN() | Getter for ROLEARN w/ exceptions if field has no value |
| HAS_ROLEARN() | Determine if ROLEARN has a value |
status
The status of the batch inference job.
The following statuses are possible:
Submitted – This job has been submitted to a queue for validation.
Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
Your IAM service role has access to the Amazon S3 buckets containing your files.
Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
InProgress – This job has begun. You can start viewing the results in the output S3 location.
Completed – This job has successfully completed. View the output files in the output S3 location.
PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
Stopped – This job was stopped by a user.
Stopping – This job is being stopped by a user.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_STATUS() | Getter for STATUS, with configurable default |
| ASK_STATUS() | Getter for STATUS w/ exceptions if field has no value |
| HAS_STATUS() | Determine if STATUS has a value |
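The status lifecycle above lends itself to a simple polling loop. A hedged sketch, assuming lo_bdk, lv_job_arn, and the iv_jobidentifier parameter name from earlier context, and that the status literals match the API's JSON values; verify both against your SDK version:

```abap
" Poll until the job leaves its transient states.
DATA(lo_job)    = lo_bdk->getmodelinvocationjob( iv_jobidentifier = lv_job_arn ).
DATA(lv_status) = lo_job->get_status( ).
WHILE lv_status = 'Submitted'  OR lv_status = 'Validating'
   OR lv_status = 'Scheduled'  OR lv_status = 'InProgress'
   OR lv_status = 'Stopping'.
  WAIT UP TO 60 SECONDS.
  lo_job    = lo_bdk->getmodelinvocationjob( iv_jobidentifier = lv_job_arn ).
  lv_status = lo_job->get_status( ).
ENDWHILE.

" On failure, the message attribute explains why (see message below).
IF lv_status = 'Failed'.
  WRITE: / lo_job->get_message( ).
ENDIF.
```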
message
If the batch inference job failed, this field contains a message describing why the job failed.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_MESSAGE() | Getter for MESSAGE, with configurable default |
| ASK_MESSAGE() | Getter for MESSAGE w/ exceptions if field has no value |
| HAS_MESSAGE() | Determine if MESSAGE has a value |
submitTime
The time at which the batch inference job was submitted.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_SUBMITTIME() | Getter for SUBMITTIME, with configurable default |
| ASK_SUBMITTIME() | Getter for SUBMITTIME w/ exceptions if field has no value |
| HAS_SUBMITTIME() | Determine if SUBMITTIME has a value |
lastModifiedTime
The time at which the batch inference job was last modified.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_LASTMODIFIEDTIME() | Getter for LASTMODIFIEDTIME, with configurable default |
| ASK_LASTMODIFIEDTIME() | Getter for LASTMODIFIEDTIME w/ exceptions if field has no value |
| HAS_LASTMODIFIEDTIME() | Determine if LASTMODIFIEDTIME has a value |
endTime
The time at which the batch inference job ended.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_ENDTIME() | Getter for ENDTIME, with configurable default |
| ASK_ENDTIME() | Getter for ENDTIME w/ exceptions if field has no value |
| HAS_ENDTIME() | Determine if ENDTIME has a value |
inputDataConfig
Details about the location of the input to the batch inference job.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_INPUTDATACONFIG() | Getter for INPUTDATACONFIG |
outputDataConfig
Details about the location of the output of the batch inference job.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_OUTPUTDATACONFIG() | Getter for OUTPUTDATACONFIG |
vpcConfig
The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_VPCCONFIG() | Getter for VPCCONFIG |
timeoutDurationInHours
The number of hours after which the batch inference job was set to time out.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_TIMEOUTDURATIONINHOURS() | Getter for TIMEOUTDURATIONINHOURS, with configurable default |
| ASK_TIMEOUTDURATIONINHOURS() | Getter for TIMEOUTDURATIONINHOURS w/ exceptions if field has no value |
| HAS_TIMEOUTDURATIONINHOURS() | Determine if TIMEOUTDURATIONINHOURS has a value |
jobExpirationTime
The time at which the batch inference job times out or timed out.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_JOBEXPIRATIONTIME() | Getter for JOBEXPIRATIONTIME, with configurable default |
| ASK_JOBEXPIRATIONTIME() | Getter for JOBEXPIRATIONTIME w/ exceptions if field has no value |
| HAS_JOBEXPIRATIONTIME() | Determine if JOBEXPIRATIONTIME has a value |
modelInvocationType
The invocation endpoint for the ModelInvocationJob.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_MODELINVOCATIONTYPE() | Getter for MODELINVOCATIONTYPE, with configurable default |
| ASK_MODELINVOCATIONTYPE() | Getter for MODELINVOCATIONTYPE w/ exceptions if field has no value |
| HAS_MODELINVOCATIONTYPE() | Determine if MODELINVOCATIONTYPE has a value |
totalRecordCount
The total number of records in the batch inference job.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_TOTALRECORDCOUNT() | Getter for TOTALRECORDCOUNT, with configurable default |
| ASK_TOTALRECORDCOUNT() | Getter for TOTALRECORDCOUNT w/ exceptions if field has no value |
| HAS_TOTALRECORDCOUNT() | Determine if TOTALRECORDCOUNT has a value |
processedRecordCount
The number of records that have been processed in the batch inference job.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_PROCESSEDRECORDCOUNT() | Getter for PROCESSEDRECORDCOUNT, with configurable default |
| ASK_PROCESSEDRECORDCOUNT() | Getter for PROCESSEDRECORDCOUNT w/ exceptions if field has no value |
| HAS_PROCESSEDRECORDCOUNT() | Determine if PROCESSEDRECORDCOUNT has a value |
successRecordCount
The number of records that were successfully processed in the batch inference job.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_SUCCESSRECORDCOUNT() | Getter for SUCCESSRECORDCOUNT, with configurable default |
| ASK_SUCCESSRECORDCOUNT() | Getter for SUCCESSRECORDCOUNT w/ exceptions if field has no value |
| HAS_SUCCESSRECORDCOUNT() | Determine if SUCCESSRECORDCOUNT has a value |
errorRecordCount
The number of records that failed to process in the batch inference job.
Accessible with the following methods
| Method | Description |
|---|---|
| GET_ERRORRECORDCOUNT() | Getter for ERRORRECORDCOUNT, with configurable default |
| ASK_ERRORRECORDCOUNT() | Getter for ERRORRECORDCOUNT w/ exceptions if field has no value |
| HAS_ERRORRECORDCOUNT() | Determine if ERRORRECORDCOUNT has a value |
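Taken together, the four record counters allow a simple progress report: processedRecordCount accumulates toward totalRecordCount, and on completion splits into successRecordCount plus errorRecordCount. A hedged sketch, assuming lo_job is an instance of this class and that the counters are populated (they are optional fields):

```abap
" Derive a completion percentage; guard against an unset or zero total.
IF lo_job->has_totalrecordcount( ) = abap_true
   AND lo_job->get_totalrecordcount( ) > 0.
  DATA(lv_pct) = lo_job->get_processedrecordcount( ) * 100
                 / lo_job->get_totalrecordcount( ).
  WRITE: / |Processed { lv_pct }% | &&
           |({ lo_job->get_successrecordcount( ) } succeeded, | &&
           |{ lo_job->get_errorrecordcount( ) } failed)|.
ENDIF.
```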