Abstract
<jats:p>Large language models (LLMs) based on generative artificial intelligence (GAI) sometimes compromise on accuracy. Despite their remarkable abilities, LLMs tend to generate incorrect content because they rely on limited or outdated information. Retrieval-augmented generation (RAG) was proposed to counter this by incorporating external knowledge, but it remains vulnerable when the retrieved information is itself incorrect. To address this accuracy trade-off in RAG, a newer method, Corrective-RAG (CRAG), improves accuracy through a self-correction process that filters and refines the retrieved information, substantially reducing errors. Self-Corrective-RAG is implemented for text retrieval and evaluated on four datasets: PopQA, Biography, PubHealth, and ARC-Challenge. The accuracy of Self-CRAG and Self-RAG is compared on the PopQA dataset, where Self-CRAG outperforms Self-RAG in improving robustness to imperfect retrieval.</jats:p>