
Analysing Generative AI Embeddings with Azure AI Hub: A Hands-on Guide

Explore and implement various embedding models in Azure AI Hub to enhance contextual search and query responses.

Azure AI services offer several embedding models that provide different ways to understand and respond to user queries in a contextually relevant and efficient manner. These embeddings allow us to query our knowledge bases for relevant documents using semantic search. The text-embedding-ada-002 model, for instance, converts textual data into vector representations, which can be compared using similarity metrics such as cosine similarity. In this article, we will understand and implement the different embedding models offered in Azure AI Hub.

Table of Contents

  1. Understanding Azure AI Embeddings
    1. Benefits and Use Cases
    2. Types of Embeddings offered by Azure AI
      1. Similarity Embeddings
      2. Text Search Embeddings
      3. Code Search Embeddings
  2. Implementing Azure OpenAI Embeddings
  3. Final Words
  4. References

Understanding Azure AI Embeddings

Embeddings are high-dimensional vectors where each dimension represents a semantic feature. Words or phrases that are semantically similar in terms of meaning or logic are grouped, making it easy for AI algorithms to perform tasks such as semantic search. The distance between two vectors measures their relevance; a smaller distance corresponds to a higher degree of similarity or relevance. 

Embeddings are generated by training a model on a large textual corpus or image data. Training proceeds by adjusting the vector representations to minimise a loss function, which measures how well the model can classify text or make a prediction. Once training is finished, the model can transform new data into numerical vectors, which can be used in generative AI models such as GPT-3.5 Turbo for different tasks or for computing similarities between texts. Text-embedding-ada-002 is one of the most prominent text embedding models developed by OpenAI and is available in Azure AI Hub for generative AI application development. It can be used for tasks such as similarity search, text search and code search. 
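To make the distance idea concrete, here is a minimal sketch of cosine similarity between embedding vectors. The three-dimensional vectors below are made up for illustration; real embeddings such as those from text-embedding-ada-002 have 1,536 dimensions:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings"; real models produce much longer vectors.
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.82, 0.15])
apple = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # close to 1.0 -> semantically similar
print(cosine_similarity(king, apple))  # much lower -> less related
```

A smaller angle between vectors yields a cosine closer to 1, which is exactly the "smaller distance, higher relevance" relationship described above.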

Benefits and Use Cases

Embeddings have the following advantages: 

  1. By capturing the semantic meanings and connections between various words or phrases, embeddings allow machines to comprehend and analyse natural language, such as English.
  2. By transforming high-dimensional vector data into a low-dimensional representation, embeddings facilitate working with large datasets and reduce algorithmic computing complexity.
  3. Reusability is facilitated as a generated embedding can be used in different models and applications. 

Different use cases of embeddings are listed below: 

  1. Embeddings can be used to create models for sentiment analysis and opinion mining.
  2. Information retrieval and search engine optimisation can be performed using embeddings as they can find relevant documents based on semantic similarity rather than simple keyword matching. 
  3. QA systems can be built using embeddings where understanding and retrieving relevant responses from a text corpus is needed. 
  4. Embeddings are useful in areas where text generation and machine translation are required, as they can assist large language models in understanding and translating text based on multiple languages.

Types of Embeddings offered by Azure AI 

Azure AI Portal hosts a multitude of embedding models that users can employ in their generative AI application development. These models cover tasks based on text, image or multimodal data. 

Azure AI Hub Embeddings (Model Catalog)

Similarity Embeddings

Similarity embeddings aim to capture the overall semantic similarity between different chunks of text. Users can use these types of embeddings to identify and compare different texts in meaning. Applications of these embeddings can include text summarisation, finding similar text chunks or documents, plagiarism detection, etc. 

Text Search Embeddings

The idea behind text search embeddings is to retrieve relevant text based on the user query. These embeddings are optimised for fast search over large amounts of text data, and they consider both the semantic meaning and the keywords in the user query. They are used in tasks such as information retrieval, search engine optimisation, QA systems, and recommendation engines. 
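The retrieval idea can be sketched in a few lines of Python. The toy vectors below stand in for real query and document embeddings, which would normally come from an embedding model:

```python
import numpy as np

def rank_documents(query_vec: np.ndarray, doc_vecs: list) -> list:
    """Return document indices sorted by cosine similarity to the query, best first."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cos(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

# Toy embeddings: the query points in nearly the same direction as document 1.
query = np.array([1.0, 0.2, 0.0])
docs = [
    np.array([0.0, 1.0, 0.3]),    # doc 0: off-topic
    np.array([0.9, 0.25, 0.05]),  # doc 1: relevant
    np.array([0.2, 0.1, 1.0]),    # doc 2: off-topic
]
print(rank_documents(query, docs))  # doc 1 is ranked first
```

A production system would embed the query with the same model used to embed the documents, then rank as above (usually via an index such as Azure AI Search rather than a linear scan).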

Code Search Embeddings

Users can find relevant code snippets based on natural language queries using code search embeddings. These embeddings relate code syntax and structure to the natural language used in user queries, making them useful in code development and testing applications. 

Implementing Azure OpenAI Embeddings

Let’s understand and deploy the Azure OpenAI text-embedding-ada-002 model and check its behaviour with different data sources.

Step 1: Visit Azure AI Studio (https://ai.azure.com) and create a new project. 

Step 2: The project requires Azure AI Hub for resource management and collaboration, so we will set up an Azure AI Hub for our project using the create project dialogue box. Make sure to create the related Azure services during Hub creation: AI Search, AI Services and a Resource Group. 

Step 3: Once the project and hub set-up are complete, visit the deployment section using the left panel to deploy the OpenAI embedding model. We will deploy the GPT-3.5 Turbo and text-embedding-ada-002 models for our chat application. 

Check the deployment section to verify the model deployments.
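Once the deployment shows as succeeded, the embedding model can also be called programmatically. Below is a hedged sketch using the `openai` Python SDK; the endpoint, API key and deployment name are placeholders you must replace with your own values from the Azure portal:

```python
def get_embedding(client, text: str, deployment: str = "text-embedding-ada-002"):
    """Request an embedding vector for `text` from a deployed Azure OpenAI model."""
    response = client.embeddings.create(model=deployment, input=text)
    return response.data[0].embedding

if __name__ == "__main__":
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
        api_key="<your-api-key>",                                    # placeholder
        api_version="2024-02-01",
    )
    vector = get_embedding(client, "Winter is coming.")
    print(len(vector))  # text-embedding-ada-002 returns 1536-dimensional vectors
```

Note that `model` here refers to your deployment name in Azure AI Hub, which may differ from the underlying model name.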

Step 4: Using the project’s playground chat option, we will add our data, index it, and use it for response generation. For a comparative analysis, I have used two text files based on Game of Thrones as inputs to our deployed embedding and chat model. 

Input 1: A shorter version of the Game of Thrones script

Input 2: A larger version of the Game of Thrones script

Add the data and index it for use with the models. Select the add a new data source option and upload your data file. We will upload inputs 1 and 2 in turn and check their responses; each input requires its own index.
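Behind the scenes, indexing typically splits each document into overlapping chunks before every chunk is embedded and stored. The following is a naive character-based illustration of that step; Azure AI Search's actual chunking strategy is configurable and more sophisticated:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list:
    """Split text into overlapping chunks so passages keep some surrounding context."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

script = "Winter is coming. " * 50  # stand-in for a Game of Thrones script file
chunks = chunk_text(script, chunk_size=200, overlap=50)
print(len(chunks), len(chunks[0]))  # each chunk is embedded and indexed separately
```

The overlap ensures that a sentence cut at a chunk boundary still appears intact in the neighbouring chunk, which tends to improve retrieval quality.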

Step 5: Checking the responses generated based on inputs 1 and 2 shows different outputs: 

Output based on Input 1 (shorter version of Game of Thrones script)

Output based on Input 2 (larger  version of Game of Thrones script)

We can observe how the generated answer changes when we limit the embedding model to the data we provide. Because input 1 contains less data, its response is brief in context; in contrast, the response based on input 2 is considerably more thorough and contextually relevant. 

Final Words

In conclusion, embeddings are essential for working with large amounts of data and understanding their context in effective and efficient generative AI application development. The integration of Azure AI services, Resource Manager, AI Search and Azure OpenAI within Azure AI Hub makes it straightforward to deploy generative AI models and operate on them securely and collaboratively. 

References

  1. Azure Documentation
  2. Azure AI Services Documentation
  3. OpenAI Embeddings

Sachin Tripathi

Sachin Tripathi is the Manager of AI Research at AIM, with over a decade of experience in AI and Machine Learning. An expert in generative AI and large language models (LLMs), Sachin excels in education, delivering effective training programs. His expertise also includes programming, big data analytics, and cybersecurity. Known for simplifying complex concepts, Sachin is a leading figure in AI education and professional development.
