MEDICAL CHATBOT EFFICIENCY USING RAG
DOI:
https://doi.org/10.25215/8194288770.10

Abstract
Timely access to reliable healthcare information is a significant challenge, especially for individuals seeking preliminary guidance on symptoms before consulting a doctor. This paper presents a Generative AI-based system that integrates LangChain, embedding models, and vector databases to enable intelligent and context-aware responses. The model uses sentence-transformers/all-MiniLM-L6-v2 to generate text embeddings, which are stored and retrieved using Pinecone for efficient semantic search. LangChain acts as the core framework connecting the data loading, text splitting, embedding, and retrieval components. Through prompt engineering, the Large Language Model (LLM) is guided to produce accurate and contextually relevant responses. This approach enhances the performance of Retrieval-Augmented Generation (RAG) systems and demonstrates how generative AI can improve knowledge accessibility in real-world applications such as healthcare chatbots and information assistants.

Published
2026-03-11
Section: Articles
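The abstract describes a retrieval step in which text embeddings are stored in Pinecone and the closest matches to a query embedding are returned. The toy sketch below illustrates that idea only: the hand-made 3-dimensional vectors stand in for all-MiniLM-L6-v2 embeddings (which are really 384-dimensional), and a plain dictionary stands in for a Pinecone index; all document strings and vectors are illustrative, not from the paper.

```python
from math import sqrt

# Toy vectors standing in for all-MiniLM-L6-v2 embeddings; the dict
# stands in for a Pinecone index. All data here is illustrative.
DOCS = {
    "Fever and cough are common flu symptoms.":      [0.9, 0.1, 0.0],
    "Drink fluids and rest to recover from a cold.": [0.7, 0.3, 0.1],
    "Pinecone stores vectors for semantic search.":  [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query embedding close to the symptom-related documents.
print(retrieve([1.0, 0.2, 0.0], k=1))
# → ['Fever and cough are common flu symptoms.']
```

In the system described above, the query vector would come from the same MiniLM model as the stored documents, and the nearest neighbours would be passed to the LLM as context for answer generation.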
