Foundation language models like GPT have sparked public interest in large language models (LLMs) and have paved the way to experiment with, experience, and democratize the benefits of AI. However, LLMs, including the GPT models, currently face several challenges, the most prominent of which is hallucination: assertions, claims, or information that sound plausible and authoritative but are verifiably incorrect.
In this paper, we detail our strategy and solution approach for controlling foundation models like GPT so that they do not hallucinate in their responses. We followed a novel, structured approach that constrains the GPT models to formulate query responses using information drawn only from an underlying knowledge base of documents in PDF, HTML, and Word formats. The paper describes our implementation of a production-grade query-bot application for a customer in a regulatory domain that requires hallucination-free query responses. We used the Retrieval-Augmented Generation (RAG) technique, coupled with a unique prompt-engineering technique and a post-processing step, to accomplish this.
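The sketch below illustrates the general idea of grounding a GPT response in retrieved passages so the model answers only from the knowledge base. It is a minimal illustration, not the authors' exact implementation: the function names, prompt wording, and model name are placeholders.

```python
# Minimal sketch (not the authors' exact implementation) of restricting a GPT
# response to retrieved knowledge-base passages. Names and prompt wording are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GROUNDING_INSTRUCTION = (
    "Answer the question using ONLY the context passages below. "
    "If the answer is not contained in the context, reply exactly with "
    "'I could not find this in the knowledge base.' Do not use outside knowledge."
)

def answer_from_knowledge_base(question: str, passages: list[str]) -> str:
    """Build a context-restricted prompt from retrieved passages and query GPT."""
    context = "\n\n".join(f"[Passage {i + 1}]\n{p}" for i, p in enumerate(passages))
    response = client.chat.completions.create(
        model="gpt-4",   # placeholder model name
        temperature=0,   # deterministic decoding reduces fabricated content
        messages=[
            {"role": "system", "content": GROUNDING_INSTRUCTION},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```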
Because the GPT model does not return a score for the answer it generates, we also defined a scoring mechanism to determine an answer score when multiple passages are sent to the model as context (an illustrative sketch follows below). The solution we built delivered substantial business impact, reducing turnaround time by 150%, enabling query status tracking, and improving the user experience.
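One simple way to assign such a score is to compare the generated answer against the retrieved passages in embedding space and take the best match. The heuristic below is an assumption for illustration only, not necessarily the scoring mechanism defined in the paper; the embedding model name is a placeholder.

```python
# Illustrative answer-scoring heuristic: since the GPT API returns no
# confidence score, rate the answer by its best cosine similarity to any
# context passage. This is an assumed example, not the paper's mechanism.
import numpy as np
from openai import OpenAI

client = OpenAI()

def _embed(texts: list[str]) -> np.ndarray:
    """Embed texts with the OpenAI embeddings endpoint (model name is a placeholder)."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

def answer_score(answer: str, passages: list[str]) -> float:
    """Return the highest cosine similarity between the answer and the passages."""
    vectors = _embed([answer] + passages)
    answer_vec, passage_vecs = vectors[0], vectors[1:]
    sims = passage_vecs @ answer_vec / (
        np.linalg.norm(passage_vecs, axis=1) * np.linalg.norm(answer_vec)
    )
    return float(sims.max())
```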
Access the research paper: Lattice | Vol 5, Issue 1