A COMPARATIVE STUDY OF FINE-TUNING, RETRIEVAL-AUGMENTED GENERATION AND HYBRID APPROACHES FOR LARGE LANGUAGE MODELS
DOI: https://doi.org/10.25215/8194288770.03

Abstract
Large Language Models (LLMs) are adapted to domain-specific tasks through either fine-tuning or Retrieval-Augmented Generation (RAG), but empirical comparisons of these two approaches within a unified framework remain limited. This study evaluates three adaptation strategies, fine-tuning alone, RAG alone, and a hybrid of the two, using the same base model, data, and evaluation configuration. Performance is measured by cosine-similarity scoring against reference answers generated by a higher-capacity evaluator model, supplemented by a qualitative error analysis and a discussion of practical implementation issues. The findings indicate that fine-tuning greatly improves domain alignment, RAG strengthens factual grounding, and the hybrid method is both more stable and more accurate. The paper concludes with guidance on choosing an adaptation method based on computational cost, domain specificity, and real-world application requirements.

Published
2026-03-11
Section
Articles
