Implementing RAG-as-a-Service using Vectara

Discover Vectara and simplify RAG-as-a-Service for seamless generative AI application building.

RAG-as-a-Service refers to the set of techniques and managed offerings that let users implement retrieval augmented generation (RAG) without the inconvenience of setting it up themselves. It combines RAG with cloud-based services and is emerging as a best practice for organisations and users who want to avoid unnecessary complexity, focus on building their generative AI applications, and scale more easily.

This article explores Vectara, an end-to-end platform, offering RAG-as-a-Service for generative AI application builders. 

Table of Contents

  1. Understanding RAG-as-a-Service
  2. Vectara’s Competency Framework
  3. Using the Vectara Platform for implementing RAG-as-a-Service

Understanding RAG-as-a-Service

Retrieval augmented generation is a promising solution to hallucination, outdated knowledge, and non-traceable reasoning. RAG incorporates data from external sources and knowledge bases to enhance the accuracy and credibility of generated responses. However, RAG can be difficult to implement and maintain from an infrastructure standpoint. RAG-as-a-Service solves this problem: as a cloud-based service, it provides access to pre-built, managed RAG functionality. 

RAG-as-a-Service is much easier to use and implement: it lets users focus on application development and integrate RAG functionality through the service's API. Users can start quickly without extensive technical knowledge or infrastructure setup, which is useful for individuals and businesses that want to leverage RAG technology while keeping cost-effectiveness, scalability, and simplified development in view. 
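To make concrete what such a service abstracts away, here is a minimal, self-contained sketch of the retrieve-then-generate pattern that RAG-as-a-Service platforms manage for you. The corpus, keyword-overlap scoring, and prompt template are purely illustrative and not any vendor's API:

```python
# Minimal retrieve-then-generate sketch: the pattern RAG-as-a-Service hosts.
# Corpus, scoring, and prompt template below are illustrative only.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the generator in retrieved passages to curb hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Vectara is a RAG-as-a-Service platform.",
    "Retrieval augmented generation grounds answers in external data.",
    "Bananas are yellow.",
]
question = "What is retrieval augmented generation?"
prompt = build_prompt(question, retrieve("retrieval augmented generation", corpus))
```

A hosted service replaces the toy retriever with vector indexing and hybrid search, and sends the grounded prompt to an LLM on your behalf.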

Vectara’s Competency Framework

Vectara offers a remarkable set of tools and functionalities that help users build and deploy generative AI applications using retrieval augmented generation. Its key capabilities are indexing, retrieval, metadata search filtering, summarizer models, evaluations, and generative prompts. 

Indexing – For data ingestion, Vectara provides vectara-ingest, an open-source Python project (https://github.com/vectara/vectara-ingest) that offers pre-built crawlers and supports building custom ones. Vectara also provides three indexing APIs – the File Upload API, the Standard Indexing API, and the Low-Level Indexing API. 
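As a rough sketch, a Standard Indexing API request is a JSON document with an ID, metadata, and one or more text sections. The field names below follow Vectara's v1 REST conventions as I understand them, and the customer/corpus IDs are placeholders; verify everything against the current API reference before use:

```python
import json

# Illustrative payload for Vectara's Standard Indexing API (v1 REST style).
# customer_id / corpus_id are placeholders; field names may differ in newer
# API versions, so check the official API reference before relying on them.
def build_index_request(customer_id: int, corpus_id: int, doc_id: str,
                        text: str, metadata: dict) -> dict:
    return {
        "customerId": customer_id,
        "corpusId": corpus_id,
        "document": {
            "documentId": doc_id,
            "metadataJson": json.dumps(metadata),  # document-level metadata
            "section": [{"text": text}],           # one or more text sections
        },
    }

payload = build_index_request(
    123456, 1, "doc-001", "Vectara offers RAG-as-a-Service.", {"lang": "en"}
)
```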

Retrieval – The platform supports various types of retrieval techniques such as Hybrid Search, Keyword Search, Reranking, Pagination and Semantic Recommendation System. 
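Hybrid search is typically controlled by a lexical interpolation weight (often called lambda) that blends keyword matching with semantic scores. The sketch below builds an illustrative query payload in the v1 REST style; treat the field names and IDs as assumptions to check against Vectara's documentation:

```python
# Illustrative Vectara-style query payload enabling hybrid search via a
# lexical interpolation weight ("lambda"). IDs are placeholders and field
# names should be verified against Vectara's current API reference.
def build_query(text: str, customer_id: int, corpus_id: int,
                lambda_: float = 0.025, num_results: int = 10) -> dict:
    return {
        "query": [{
            "query": text,
            "numResults": num_results,
            "corpusKey": [{
                "customerId": customer_id,
                "corpusId": corpus_id,
                # 0.0 = purely semantic; larger values add keyword weighting
                "lexicalInterpolationConfig": {"lambda": lambda_},
            }],
        }]
    }

q = build_query("What does Vectara do?", 123456, 1)
```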

Metadata Search Filtering – Users control the search over the corpus using metadata filters. Vectara supports a wide range of functions, operators, and data types for these filter expressions.
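Filter expressions are attached per corpus in the query. The SQL-like expression syntax over `doc.` and `part.` metadata fields shown below reflects Vectara's documented filter language, but the exact payload field names are assumptions to verify:

```python
# Attach a metadata filter to an illustrative Vectara-style query payload.
# The filter language uses SQL-like expressions over indexed metadata
# fields (e.g. doc.lang, doc.year); verify the syntax in Vectara's docs.
def filtered_query(text: str, customer_id: int, corpus_id: int,
                   metadata_filter: str) -> dict:
    return {
        "query": [{
            "query": text,
            "numResults": 10,
            "corpusKey": [{
                "customerId": customer_id,
                "corpusId": corpus_id,
                "metadataFilter": metadata_filter,  # e.g. "doc.lang = 'en'"
            }],
        }]
    }

q = filtered_query("pricing details", 123456, 1,
                   "doc.lang = 'en' and doc.year >= 2023")
```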

Summarizer Models – Vectara provides two official summarizer models – vectara-summary-ext-v1.2.0 (gpt-3.5-turbo) and vectara-summary-ext-v1.3.0 (gpt-4.0). Along with them, Vectara also offers citation summarizer models that let users specify custom citation styles in summary requests – vectara-summary-ext-24-05-sml (gpt-3.5-turbo), vectara-summary-ext-24-05-med (gpt-4.0), and vectara-summary-ext-24-05-large (gpt-4.0-turbo). 
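In a query request, the summarizer is selected by name. The sketch below attaches a summarization block using the vectara-summary-ext-v1.3.0 model named above; the surrounding field names follow the v1 REST style and should be verified against the docs, and the IDs are placeholders:

```python
# Attach an illustrative summarization block to a Vectara-style query.
# The model name is one of Vectara's documented summarizers; the other
# field names follow the v1 REST style and should be verified in the docs.
def query_with_summary(text: str, customer_id: int, corpus_id: int,
                       summarizer: str = "vectara-summary-ext-v1.3.0") -> dict:
    return {
        "query": [{
            "query": text,
            "numResults": 10,
            "corpusKey": [{"customerId": customer_id, "corpusId": corpus_id}],
            "summary": [{
                "summarizerPromptName": summarizer,  # which summarizer model
                "maxSummarizedResults": 5,           # passages fed to the model
                "responseLang": "en",                # language of the summary
            }],
        }]
    }

q = query_with_summary("Summarize the refund policy", 123456, 1)
```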

Evaluations – Vectara implements a factual consistency score that detects hallucinations in the generated RAG responses. The score ranges between 0.0 and 1.0 where a higher value indicates a higher probability of factual accuracy. 
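Because the score is a probability-like value in [0.0, 1.0], a simple application-side guard is to flag low-scoring answers for human review. The 0.5 threshold below is an arbitrary illustration, not a Vectara recommendation:

```python
# Gate generated answers on Vectara's factual consistency score (FCS),
# which lies in [0.0, 1.0]. The 0.5 threshold is an arbitrary example;
# tune it per application based on tolerance for hallucinations.
def triage_answer(fcs: float, threshold: float = 0.5) -> str:
    if not 0.0 <= fcs <= 1.0:
        raise ValueError("FCS must lie in [0.0, 1.0]")
    return "accept" if fcs >= threshold else "flag_for_review"

status_high = triage_answer(0.92)  # high score: likely factually consistent
status_low = triage_answer(0.12)   # low score: likely hallucinated
```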

Using the Vectara Platform for implementing RAG-as-a-Service

Step 1 – Log in to the Vectara platform (https://vectara.com/) and click Create Corpus. A corpus is a data container that holds your data, from which relevant information is later extracted through queries. Vectara supports document types such as Markdown, PDF, Open Office, Word, PowerPoint, Text, HTML, LXML, RTF, EPUB, and JSON. 

Step 2 – Specify the type of application. Vectara supports two application types: RAG-based chat and semantic search. 

Step 3 – Name your corpus. This name will serve as a way to reference the corpus for application building and working on Vectara. 

Step 4 – Upload your data files. These files provide the context from which responses are generated.

Step 5 – Use the Query tab for experimentation. Users can select their application type using the left panel and change the type on the fly. 

Step 6 – Vectara allows configurable retrieval: users can choose whether to enable Hybrid Search or apply RAG reranking. The reranker allows configuring the diversity factor and controlling the number of results that are generated. 
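The same reranking choices can be expressed programmatically in a query payload. The reranker ID below is a placeholder and the field names (an MMR-style diversity bias in the v1 REST style) are assumptions; Vectara's docs list the actual reranker identifiers:

```python
# Illustrative reranking block for a Vectara-style query. RERANKER_ID is a
# placeholder; real reranker identifiers are listed in Vectara's docs, and
# the field names below should be verified against the API reference.
RERANKER_ID = 100  # placeholder, not a real Vectara reranker ID

def add_reranking(query_payload: dict, reranker_id: int,
                  diversity_bias: float, num_results: int) -> dict:
    query_payload["query"][0]["rerankingConfig"] = {
        "rerankerId": reranker_id,
        # diversity_bias in [0, 1]: higher values trade relevance for variety
        "mmrConfig": {"diversityBias": diversity_bias},
    }
    query_payload["query"][0]["numResults"] = num_results
    return query_payload

base = {"query": [{"query": "compare plans", "numResults": 10,
                   "corpusKey": [{"customerId": 123456, "corpusId": 1}]}]}
q = add_reranking(base, RERANKER_ID, diversity_bias=0.3, num_results=25)
```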

Step 7 – Check the response generated as per the Semantic Search algorithm. 

Step 8 – Check the factual consistency score along with the facts on which the results were based. 

Step 9 – Vectara also permits applying filters to constrain generation based on user requirements. 

Step 10 – The user conversation history is logged with the necessary details and can be accessed from the Conversations option available on the left pane. 

Step 11 – Vectara's GitHub repositories provide comprehensive code samples and generator functions that can be used for developing applications instead of writing code from scratch. 

Using the steps above, users can implement RAG-as-a-Service on the Vectara platform with ease. The implementation requires no additional setup by the user; the platform handles everything on its own. 
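The UI steps above have a programmatic equivalent: assemble one query payload combining hybrid search, filtering, and summarization, and POST it to the query endpoint with your API key. Everything below (endpoint path, header names, IDs, API key) is a hedged sketch to verify against Vectara's API reference; the request is built but deliberately not sent:

```python
import json
import urllib.request

API_KEY = "zqt_REPLACE_ME"  # placeholder personal API key
CUSTOMER_ID = 123456        # placeholder customer ID

def build_request(question: str, corpus_id: int) -> urllib.request.Request:
    """Assemble an illustrative Vectara-style query request (not sent here)."""
    payload = {
        "query": [{
            "query": question,
            "numResults": 10,
            "corpusKey": [{
                "customerId": CUSTOMER_ID,
                "corpusId": corpus_id,
                "metadataFilter": "doc.lang = 'en'",
                "lexicalInterpolationConfig": {"lambda": 0.025},
            }],
            "summary": [{
                "summarizerPromptName": "vectara-summary-ext-v1.2.0",
                "maxSummarizedResults": 5,
                "responseLang": "en",
            }],
        }]
    }
    return urllib.request.Request(
        "https://api.vectara.io/v1/query",  # verify endpoint in the docs
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "customer-id": str(CUSTOMER_ID),
            "x-api-key": API_KEY,           # header names: verify in docs
        },
        method="POST",
    )

req = build_request("What is RAG-as-a-Service?", corpus_id=1)
# urllib.request.urlopen(req) would send it; omitted to keep the sketch offline.
```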

Final Words

The synergy between retrieval augmented generation and cloud services represents a remarkable leap forward in implementing RAG-based services. This integration not only addresses concerns about computational demands but also mitigates worries about hardware and software requirements and the inconveniences that often accompany RAG implementation. Vectara is a great platform for deploying conversational chat or semantic search applications without worrying about the internal mechanics. This lets users focus on more relevant tasks and use Vectara's search and indexing capabilities to derive relevant insights and generate appropriate responses. 



Sachin Tripathi

Sachin Tripathi is the Manager of AI Research at AIM, with over a decade of experience in AI and Machine Learning. An expert in generative AI and large language models (LLMs), Sachin excels in education, delivering effective training programs. His expertise also includes programming, big data analytics, and cybersecurity. Known for simplifying complex concepts, Sachin is a leading figure in AI education and professional development.
