
Understanding Okapi BM25: A Guide to Modern Information Retrieval

A ranking algorithm that enhances the relevance of search results

In today’s digital age, the fusion of information retrieval and generative AI is transforming how we access and interact with vast amounts of data. At the heart of this transformation lies Okapi BM25, a powerful ranking algorithm that enhances the relevance of search results in AI-driven systems. As generative AI continues to evolve, understanding the intricacies of Okapi BM25 becomes crucial for developers and data scientists aiming to create more intelligent and responsive search engines. In this guide, we explore the key concepts, benefits, and applications of Okapi BM25 in the context of generative AI. 

Table of Contents

  1. Overview of Information Retrieval in LLMs
  2. Understanding Okapi BM25
  3. Real-World Applications
  4. Challenges and Considerations

Let’s understand information retrieval in large language models (LLMs).

Overview of Information Retrieval in LLMs

In an era where data is generated at an unprecedented rate, the ability to retrieve relevant information efficiently is more important than ever. Information retrieval (IR) systems are the backbone of this process, enabling users to find specific data within vast collections of documents, web pages, and other data sources. As generative AI technologies like GPT series and beyond continue to advance, the synergy between generative models and IR systems is reshaping the landscape of search and data retrieval.

Generative AI models have shown remarkable capabilities in generating human-like text, answering questions, and even creating content. However, their effectiveness is significantly amplified when combined with robust IR techniques that ensure the generated content is both relevant and accurate. This is where ranking algorithms like Okapi BM25 come into play.

Ranking algorithms are crucial components of IR systems, determining the order in which documents are presented to the user based on their relevance to a given query. The quality of the ranking directly impacts the user’s experience, as more relevant results lead to more efficient and satisfactory information retrieval.

Okapi BM25, a probabilistic ranking algorithm, has emerged as one of the most effective and widely used methods for this purpose. By considering factors such as term frequency, document length, and inverse document frequency, BM25 provides a nuanced approach to ranking that enhances the relevance of search results.

Understanding Okapi BM25

Okapi BM25 is a probabilistic information retrieval model that belongs to the BM (Best Matching) family of algorithms. The name “Okapi” originates from the Okapi information retrieval system, which was the project under which this algorithm was developed. BM25 stands for “Best Matching 25,” with 25 being a version number in the series of ranking functions.

Key Concepts

To fully grasp the power and utility of Okapi BM25, it’s essential to understand its core concepts: 

Term Frequency (TF)

Term Frequency refers to the number of times a term appears in a document. The underlying idea is that the more frequently a term appears in a document, the more relevant that document is likely to be for a search query containing that term. However, BM25 adjusts this frequency to prevent overly high scores for documents with very high term frequencies. This adjusted term frequency is represented as:

TF(qi, D) = f(qi, D) · (k1 + 1) / (f(qi, D) + k1)

where f(qi, D) is the frequency of term qi in document D, and k1 is a tuning parameter that controls how quickly the score saturates.
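The saturation effect described above can be sketched in a few lines of Python (the function name and the default k1 = 1.5 are our illustrative choices, not part of the original formulation):

```python
def saturated_tf(freq: float, k1: float = 1.5) -> float:
    """BM25's saturated term-frequency component (length normalization omitted)."""
    return freq * (k1 + 1) / (freq + k1)

# The component grows with the raw count but levels off: it can never
# exceed k1 + 1, so a term repeated 100 times cannot dominate the score.
once = saturated_tf(1)    # a single occurrence
often = saturated_tf(50)  # many occurrences, only modestly higher
```

Doubling a term's raw count therefore raises its contribution by much less than 2x, which is exactly the damping behaviour the adjustment is designed to produce.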

Inverse Document Frequency (IDF)

Inverse Document Frequency measures the importance of a term across the entire document collection. Terms that are common across many documents are considered less significant, while rare terms are deemed more important. The IDF component helps in penalizing common terms and boosting rare ones. It is calculated as:

IDF(qi) = ln( (N − n(qi) + 0.5) / (n(qi) + 0.5) + 1 )

where N is the total number of documents in the collection, and n(qi) is the number of documents containing the term qi.
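A minimal sketch of this component, using the common "+1 inside the log" variant so the value stays non-negative (the function name is ours):

```python
import math

def bm25_idf(N: int, n_q: int) -> float:
    """IDF of a term that appears in n_q of N documents (non-negative variant)."""
    return math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)

# A term found in 1 of 1000 documents is weighted far more heavily
# than a term found in 900 of them.
rare = bm25_idf(1000, 1)
common = bm25_idf(1000, 900)
```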

Document Length Normalization

Document Length Normalization adjusts the relevance score based on the length of the document. The idea is to prevent bias towards longer documents, which may contain more instances of a term simply due to their length. The normalization factor in BM25 is:

1 − b + b · |D| / avgdl

where |D| is the length of the document, avgdl is the average document length in the collection, and b is a tuning parameter controlling the strength of the normalization.
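In code, the factor is a one-liner (the function name and the b = 0.75 default are our choices):

```python
def length_norm(doc_len: int, avgdl: float, b: float = 0.75) -> float:
    """BM25 length-normalization factor: exactly 1.0 for an average-length document."""
    return 1 - b + b * doc_len / avgdl

# Longer-than-average documents are penalized (the factor enlarges the
# denominator of the score); shorter ones receive a mild boost.
```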

Formula

The BM25 score of a document D with respect to a query Q is given by:

score(D, Q) = Σ_{qi in Q} IDF(qi) · [ f(qi, D) · (k1 + 1) ] / [ f(qi, D) + k1 · (1 − b + b · |D| / avgdl) ]

Where:

  • f(qi, D) is the frequency of the query term qi in document D.
  • |D| is the length of document D.
  • avgdl is the average document length in the collection.
  • IDF(qi) is the inverse document frequency of qi, as defined above.
  • k1 and b are free parameters. Typically, k1 is set to a value between 1.2 and 2.0, and b is set to 0.75.
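Putting the three components together, a toy end-to-end scorer might look like the following. This is a didactic sketch over pre-tokenized documents, not a production implementation; the corpus and all names are invented for illustration:

```python
import math
from collections import Counter

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    """BM25 score of one tokenized document `doc` against a tokenized `query`."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        n_q = sum(1 for d in corpus if term in d)  # documents containing the term
        if n_q == 0:
            continue
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)
        denom = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * tf[term] * (k1 + 1) / denom
    return score

corpus = [
    "the cat sat on the mat".split(),
    "dogs and cats living together".split(),
    "the quick brown fox jumps".split(),
]
scores = [bm25_score("cat mat".split(), d, corpus) for d in corpus]
```

The first document contains both query terms and receives the highest score, while the second scores zero even though "cats" is closely related to "cat" — a small preview of the exact-term-matching limitation discussed under Challenges below.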

Real World Applications

The BM25 ranking algorithm is a widely used term-based ranking model that can be effectively combined with large language models (LLMs) to improve search and retrieval performance in real-world applications:

Integrating BM25 with LLMs using Retrieval-Augmented Generation (RAG)

RAG is a paradigm that integrates LLMs with external knowledge sources, such as document databases. By combining the strengths of BM25 for efficient term-based retrieval and LLMs for semantic understanding, RAG can overcome the limitations of pure LLMs in handling large and dynamic data. The BM25 component retrieves relevant documents, which are then used by the LLM to generate high-quality, context-aware responses.
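The flow can be sketched as follows. Everything here is illustrative: the tiny corpus, the inline BM25 ranker, and the prompt template are our stand-ins, and a real system would use a proper index and an actual LLM client where the final comment indicates:

```python
import math
from collections import Counter

def rank_passages(query_tokens, corpus, k1=1.5, b=0.75):
    """Rank tokenized passages by BM25, best first."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    def score(doc):
        tf, s = Counter(doc), 0.0
        for t in set(query_tokens):
            n = sum(1 for d in corpus if t in d)
            if n and tf[t]:
                idf = math.log((N - n + 0.5) / (n + 0.5) + 1)
                s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        return s
    return sorted(corpus, key=score, reverse=True)

def build_prompt(question, passages):
    """Ground the LLM: instruct it to answer only from the retrieved passages."""
    context = "\n".join(" ".join(p) for p in passages)
    return f"Use only the context below to answer.\n\nContext:\n{context}\n\nQuestion: {question}"

corpus = [
    "bm25 ranks documents by term frequency and idf".split(),
    "vector search captures semantic similarity".split(),
    "okapi was the retrieval system where bm25 originated".split(),
]
question = "where did bm25 originate"
top = rank_passages(question.split(), corpus)[:2]
prompt = build_prompt(question, top)
# `prompt` would now be sent to an LLM API call -- omitted here.
```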

Hybrid Search with BM25 and Vector Search

Many real-world search applications use a combination of traditional BM25-based search and vector-based semantic search powered by LLMs. The BM25 scores provide a strong baseline for term-level relevance, while the vector search results capture deeper semantic relationships. A learning-to-rank model can then be used to optimally combine the outputs of these different retrieval methods.
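Short of training a learning-to-rank model, a simple and widely used way to fuse the two result lists is reciprocal rank fusion (RRF). The k = 60 constant is the conventional default, and the document ids below are made up:

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists of doc ids; returns ids best-first."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1 / (k + position); top positions count most.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_results = ["d1", "d3", "d2"]    # term-level ranking
vector_results = ["d3", "d2", "d1"]  # semantic ranking
fused = rrf([bm25_results, vector_results])
```

RRF uses only rank positions, so it sidesteps the problem that BM25 scores and cosine similarities live on incomparable scales.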

Leveraging BM25 as a Cost-effective Semantic Cache

When integrating LLMs into production systems, the high API costs associated with frequent LLM queries can be a challenge. By using vector databases (VecDBs) as a semantic cache and combining them with BM25-based retrieval, the overall system can achieve cost-effective end-to-end LLM applications.
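A toy version of the caching idea: reuse a stored LLM answer when a new query is close enough to a cached one. Real systems compare embedding vectors in a vector database; here Jaccard token overlap and the threshold value are crude stand-ins for illustration:

```python
def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b)

class SemanticCache:
    def __init__(self, threshold: float = 0.6):
        self.threshold = threshold
        self.entries = []  # list of (token_set, cached_answer)

    def lookup(self, query: str):
        q = set(query.lower().split())
        for tokens, answer in self.entries:
            if jaccard(q, tokens) >= self.threshold:
                return answer  # cache hit: no LLM call, no API cost
        return None  # cache miss: caller falls through to the LLM

    def store(self, query: str, answer: str):
        self.entries.append((set(query.lower().split()), answer))

cache = SemanticCache()
cache.store("what is bm25", "A probabilistic ranking function.")
hit = cache.lookup("what is bm25 ?")   # near-duplicate query
miss = cache.lookup("who wrote okapi")  # unrelated query
```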

Handling Hallucinations in LLMs

One of the key challenges with pure LLMs is their tendency to generate plausible but factually incorrect information, known as hallucinations. By incorporating BM25-based retrieval as a component in the overall system, the outputs can be grounded in actual document content, reducing the risk of hallucinations.

Challenges and Considerations

The key challenges and considerations in applying the BM25 ranking algorithm in real-world applications include:

Handling Long Documents

BM25 tends to overly penalize very long documents, because the document-length normalization component can become too dominant. This can cause relevant long-form content to be ranked lower than it should be. Addressing this issue may require modifications to the BM25 formula or hybrid approaches that combine BM25 with other ranking signals.

Lack of Semantic Understanding

BM25 is a term-based ranking algorithm and does not inherently capture the semantic meaning of queries and documents. This is a limitation for queries that require a deeper understanding of context and intent. Combining BM25 with large language models (LLMs), as in the Retrieval-Augmented Generation (RAG) approach described above, helps close this gap.

Difficulty with Rare Terms

The inverse document frequency (IDF) component of BM25 can struggle with rare terms that appear in only a few documents. While IDF is meant to boost the importance of rare terms, in some cases the boost is not enough to overcome the low term frequency. Techniques like pseudo-relevance feedback or query expansion may be needed to better handle queries with rare terms.

Lack of Personalization

BM25 treats all queries equally and does not incorporate personalization or user-specific preferences. In many real-world applications, personalized search results are crucial. Extending BM25 to leverage user profiles, search histories, or other personalization signals can help address this limitation.

Computational Complexity

Calculating the BM25 scores for large document collections can be computationally expensive, especially when dealing with dynamic content or real-time search scenarios. Optimizations, such as caching or approximation techniques, may be necessary to ensure efficient BM25 implementation in production systems.

Conclusion

Okapi BM25 is a cornerstone in information retrieval, bridging traditional search algorithms and generative AI. Its robust probabilistic approach effectively balances term frequency, document length, and inverse document frequency, outperforming simpler ranking methods. Integrating BM25 with large language models (LLMs) and vector search enhances generative AI accuracy and reduces hallucinations. 


Sourabh Mehta
