

# Comparing Retrieval Augmented Generation and fine-tuning
<a name="rag-vs-fine-tuning"></a>

The following table describes the advantages and disadvantages of the fine-tuning and RAG-based approaches.


| Approach | Advantages | Disadvantages |
| --- | --- | --- |
| Fine-tuning | Adapts the model's weights to your domain's vocabulary, style, and tasks. No retrieval infrastructure is needed at inference time. | Requires labeled training data and training compute. The model's knowledge is fixed at training time, so incorporating new documents requires retraining. Risks overfitting or degrading the model's general capabilities. |
| RAG | Knowledge can be updated by updating the document store, without retraining. Answers are grounded in retrieved sources, which reduces hallucinations and supports citations. | Requires retrieval infrastructure, such as an embedding model and a vector store. Answer quality depends on retrieval quality. Larger prompts increase latency and inference cost. |

If you need to build a question-answering solution that references your custom documents, we recommend that you start with a RAG-based approach. Use fine-tuning if you need the model to perform additional tasks, such as summarization.

You can combine the fine-tuning and RAG approaches in a single model. In that case, the RAG architecture does not change, but the LLM that generates the answer is also fine-tuned with the custom documents. This combines the best of both worlds and might be the optimal solution for your use case. For more information about how to combine supervised fine-tuning with RAG, see the [RAFT: Adapting Language Model to Domain Specific RAG](https://arxiv.org/pdf/2403.10131) research paper from the University of California, Berkeley.
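
To make the RAG flow concrete, the following is a minimal, illustrative sketch of the two steps that stay the same whether or not the generating LLM is fine-tuned: retrieve the most relevant custom documents, then assemble a grounded prompt for the model. The keyword-overlap scoring, the sample documents, and the function names are simplified assumptions for illustration; a production system would typically use embeddings and a vector store instead.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of alphanumeric terms."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by the number of query terms they share (a stand-in
    for embedding similarity search in a real RAG system)."""
    query_terms = tokenize(query)
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & tokenize(doc)),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Assemble a prompt that grounds the LLM's answer in the retrieved
    context. The prompt is then sent to a base or fine-tuned LLM."""
    context = "\n".join(context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

documents = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Shipping is free for orders over 50 dollars.",
]
question = "What is the return policy?"
prompt = build_prompt(question, retrieve(question, documents))
```

In the combined approach described above, only the final generation step changes: the same `prompt` is sent to an LLM that has also been fine-tuned on the custom documents, so the model benefits from both retrieved context and domain-adapted weights.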