

# Use a deployment for on-demand inference


After you deploy your custom model for on-demand inference, you can use it to generate responses by making inference requests. For `InvokeModel` or `Converse` operations, you specify the deployment's Amazon Resource Name (ARN) as the `modelId`, as shown in the following example.
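
The following is a minimal sketch using the AWS SDK for Python (Boto3) to send a `Converse` request to a custom model deployment. The deployment ARN, Region, and prompt shown here are placeholders; substitute the ARN of your own deployment.

```python
import boto3

# Placeholder: replace with the ARN of your custom model deployment.
deployment_arn = (
    "arn:aws:bedrock:us-east-1:111122223333:custom-model-deployment/abcdef012345"
)

# The Amazon Bedrock Runtime client handles InvokeModel and Converse requests.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Pass the deployment ARN as the modelId for the Converse operation.
response = client.converse(
    modelId=deployment_arn,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the benefits of on-demand inference."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.5},
)

# Print the generated text from the model's response.
print(response["output"]["message"]["content"][0]["text"])
```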

For information about making inference requests, see the following topics:
+ [Submit prompts and generate responses with model inference](https://docs.aws.amazon.com/bedrock/latest/userguide/inference.html)
+ [Prerequisites for running model inference](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-prereq.html)
+ [Submit prompts and generate responses using the API](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-api.html)