LLMFlows for Building Flow-Based Chat Applications: A Hands-on Guide

Build advanced conversational AI applications with LLMFlows through practical examples.

In today’s rapidly evolving landscape of conversational AI, creating sophisticated and seamless chat flows is crucial for delivering exceptional user experiences. Whether building a customer service chatbot, a virtual assistant, or any interactive AI application, managing complex conversations is key to maintaining engagement and satisfaction. In this article, we will explore LLMFlows, an innovative framework designed to simplify the development of LLM applications by providing a clear and structured approach to managing interactions.

Table of Contents

  1. Overview of Conversational AI
  2. Understanding the Core Concepts of LLMFlows
  3. Building Complex Chat Flows with LLMFlows
  4. Use case 

Let’s start with understanding conversational AI and its benefits in modern applications.

Overview of Conversational AI

The journey of Conversational AI began with early-stage chatbots that followed scripted responses. These early systems were limited in scope, often unable to handle queries outside predefined scenarios. However, the advent of machine learning and natural language processing (NLP) marked a significant leap forward. Machine learning algorithms enabled systems to learn from vast amounts of data, improving their ability to understand and generate human language.

The introduction of neural networks, particularly deep learning models, brought about a new era for Conversational AI. Models like Google’s BERT and OpenAI’s GPT series demonstrated unprecedented capabilities in understanding context and generating human-like text. These models could handle a variety of tasks, including language translation, sentiment analysis, and conversational agents, with remarkable accuracy.

Large Language Models (LLMs) like the GPT-3 and GPT-4 series and their successors have been game-changers in the field of Conversational AI. These models are pre-trained on vast datasets encompassing diverse topics, allowing them to generate human-like responses and understand complex queries. Their ability to perform a variety of language-related tasks with minimal fine-tuning makes them ideal for building sophisticated conversational agents.

Understanding the Core Concepts of LLMFlows

LLMFlows is designed to make the development of LLM-based applications straightforward and intuitive. It provides a minimalistic set of abstractions that help developers manage complex interactions between LLMs and other components of a chat application. By focusing on simplicity, explicitness, and transparency, LLMFlows ensures that developers have full control over their applications and can easily monitor, maintain, and debug their chat flows.

Key Components

Flows and FlowSteps

At the heart of LLMFlows are Flows and FlowSteps. These abstractions allow developers to define and manage the sequence of operations in a chat application.

  • Flow: A Flow represents the entire conversation or a significant part of it. It is composed of multiple FlowSteps, each of which performs a specific task.
  • FlowStep: A FlowStep is a single step in a Flow, representing an interaction with an LLM or another operation. Each FlowStep can depend on the outputs of previous FlowSteps, allowing for complex dependencies and sequences.
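To make the Flow/FlowStep relationship concrete, the dependency-driven execution idea can be sketched in plain Python. Note that `Step` and `run_flow` below are hypothetical names for illustration, not the LLMFlows API itself:

```python
class Step:
    """Toy stand-in for a FlowStep: a named task with named dependencies."""
    def __init__(self, name, func, depends_on=()):
        self.name = name
        self.func = func              # callable producing this step's output
        self.depends_on = depends_on  # names of outputs this step consumes

def run_flow(steps, **initial):
    """Run steps in order, feeding each one the outputs of its dependencies."""
    outputs = dict(initial)
    for step in steps:  # assumes steps are listed in dependency order
        kwargs = {name: outputs[name] for name in step.depends_on}
        outputs[step.name] = step.func(**kwargs)
    return outputs

steps = [
    Step("movie_title", lambda: "The Friendship Pact"),
    Step("song_title",
         lambda movie_title: f"Theme from {movie_title}",
         depends_on=("movie_title",)),
]
results = run_flow(steps)
print(results["song_title"])  # -> Theme from The Friendship Pact
```

Each step's output is stored under its name, so later steps can consume it by key, which is exactly the role `output_key` plays in LLMFlows.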

Prompt Templates

Prompt templates are a powerful feature in LLMFlows that allow for the dynamic creation of prompts. By using templates, developers can insert variables into prompts and generate text based on specific inputs.
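Conceptually, a prompt template is a string with named placeholders that are filled in at runtime. Here is a minimal plain-Python sketch of the idea; `SimplePromptTemplate` is a hypothetical toy class, not LLMFlows' actual `PromptTemplate`:

```python
import string

class SimplePromptTemplate:
    """Toy prompt template: a string with {named} slots filled at call time."""
    def __init__(self, template):
        self.template = template
        # collect placeholder names, e.g. {"topic"}
        self.variables = {
            name for _, name, _, _ in string.Formatter().parse(template) if name
        }

    def get_prompt(self, **kwargs):
        missing = self.variables - kwargs.keys()
        if missing:
            raise ValueError(f"missing variables: {missing}")
        return self.template.format(**kwargs)

template = SimplePromptTemplate("What is a good title of a movie about {topic}?")
prompt = template.get_prompt(topic="friendship")
print(prompt)  # -> What is a good title of a movie about friendship?
```

Validating that every placeholder is supplied before the LLM call is one reason templates are preferable to ad hoc string concatenation.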

Integrating LLMs with LLMFlows

LLMFlows simplifies the integration of LLMs such as OpenAI’s models. Developers can easily configure and call these models, handle retries for failed calls, and format responses.
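Retry behavior of the kind described here can be sketched as a simple wrapper with exponential backoff. This is a hypothetical helper for illustration, not LLMFlows' internal implementation:

```python
import time

def call_with_retries(fn, max_retries=3, base_delay=0.01):
    """Call fn(), retrying on exceptions with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)

# Simulate an API that fails twice before succeeding.
calls = {"n": 0}
def flaky_llm_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "ok"

result = call_with_retries(flaky_llm_call)
print(result)  # succeeds on the third attempt
```

In a real application the delay would typically be on the order of seconds, and only transient errors (rate limits, timeouts) would be retried.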

One of the core philosophies of LLMFlows is to provide full transparency and control over each component of a chat application. This includes:

  • Explicit API: LLMFlows’ explicit API enables developers to write clear and maintainable code.
  • Detailed Monitoring: Developers can trace the execution of each FlowStep, monitor input and output variables, and access comprehensive logs for debugging and maintenance.
  • Customization: LLMFlows supports extensive customization through callbacks and custom functions, allowing developers to tailor their applications to specific needs.

Building Complex Chat Flows with LLMFlows

We will walk through the setup, design, and execution of a detailed example that showcases the capabilities of LLMFlows in managing sophisticated conversational patterns.

Setting Up Your Environment

Before we begin, ensure you have LLMFlows installed in your development environment. Here is the code snippet.

!pip install llmflows

Additionally, you will need an API key from OpenAI or any other supported LLM provider to utilize their language models.

Designing and Structuring Chat Flows

The design of our chat flow involves multiple steps, each contributing to the overall conversation. Our example will create a flow that generates a movie title, identifies the main characters, suggests a song title, and finally writes song lyrics based on the generated content, followed by reviews from music critics. Here is the code snippet.

import json
from llmflows.flows import Flow, FlowStep, ChatFlowStep
from llmflows.llms import OpenAI, OpenAIChat, MessageHistory
from llmflows.prompts import PromptTemplate
from google.colab import userdata  # for retrieving the API key in Colab

In the block above, we import the necessary libraries. Next, we set up the OpenAI LLM with our API key and define prompt templates for generating a movie title, song title, main characters, and song lyrics. These templates will help us create structured and relevant prompts for each step of our chat flow. Here is the code snippet.

open_ai_llm = OpenAI(model='gpt-3.5-turbo-instruct', api_key=userdata.get('OPENAI_API_KEY'))

title_template = PromptTemplate("What is a good title of a movie about {topic}?")
song_template = PromptTemplate("What is a good song title of a soundtrack for a movie called {movie_title}?")
characters_template = PromptTemplate("What are two main characters for a movie called {movie_title}?")
lyrics_template = PromptTemplate("Write lyrics of a movie song called {song_title}. The main characters are {main_characters}")

Define FlowSteps

Each FlowStep represents a specific task within the flow. We will create FlowSteps for generating the movie title, song title, main characters, and song lyrics. Here is the code snippet.

flowstep1 = FlowStep(
    name="Flowstep 1",
    llm=open_ai_llm,
    prompt_template=title_template,
    output_key="movie_title",
)

flowstep2 = FlowStep(
    name="Flowstep 2",
    llm=open_ai_llm,
    prompt_template=song_template,
    output_key="song_title",
)

flowstep3 = FlowStep(
    name="Flowstep 3",
    llm=open_ai_llm,
    prompt_template=characters_template,
    output_key="main_characters",
)

flowstep4 = FlowStep(
    name="Flowstep 4",
    llm=open_ai_llm,
    prompt_template=lyrics_template,
    output_key="song_lyrics",
)

Here, we define the individual FlowSteps for our chat flow. Each FlowStep uses the OpenAI LLM and a specific prompt template to generate an output. The output_key parameter specifies the variable name that stores the result of each step. This setup allows us to break down the conversation into manageable and reusable steps.

Add Music Critic Reviews

We will also include steps where music critics review the generated song lyrics. This involves creating a ChatFlowStep for each critic.

critics = []
critic_system_prompt = "You are a music critic who writes short reviews of song lyrics"
critic_message_template = PromptTemplate("Hey, what is your opinion on the following song: {song_lyrics}")

for i in range(3):
    message_history = MessageHistory()
    message_history.system_prompt = critic_system_prompt

    critics.append(
        ChatFlowStep(
            name=f"Critic Flowstep {i+1}",
            llm=OpenAIChat(model='gpt-3.5-turbo', api_key=userdata.get('OPENAI_API_KEY')),
            message_history=message_history,
            message_prompt_template=critic_message_template,
            message_key="song_lyrics",
            output_key=f"song_review_{i+1}",
        )
    )

In this block, we add three ChatFlowSteps for music critics to review the generated song lyrics. Each critic is set up with a system prompt that defines their role and a message template to review the lyrics. The MessageHistory class manages the conversation history, ensuring the critics have the context needed for their reviews.
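The role a message history plays can be sketched with a plain list of role/content messages, which is the shape chat-style LLM APIs generally expect. `SimpleMessageHistory` below is a hypothetical toy class, not LLMFlows' `MessageHistory`:

```python
class SimpleMessageHistory:
    """Toy message history: a system prompt plus user/assistant turns."""
    def __init__(self, system_prompt=""):
        self.system_prompt = system_prompt
        self.messages = []

    def add_user_message(self, content):
        self.messages.append({"role": "user", "content": content})

    def add_assistant_message(self, content):
        self.messages.append({"role": "assistant", "content": content})

    def to_api_format(self):
        """Prepend the system prompt so the model always sees its role first."""
        return [{"role": "system", "content": self.system_prompt}] + self.messages

history = SimpleMessageHistory(
    "You are a music critic who writes short reviews of song lyrics"
)
history.add_user_message("Hey, what is your opinion on the following song: ...")
payload = history.to_api_format()
print([m["role"] for m in payload])  # -> ['system', 'user']
```

Keeping the system prompt separate from the turn-by-turn messages is what lets each critic retain its persona across the conversation.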

Connect FlowSteps

To create a coherent flow, we need to define the dependencies between FlowSteps. This ensures that each step receives the necessary inputs from the preceding steps. Here is the code snippet.

flowstep1.connect(flowstep2, flowstep3, flowstep4)
flowstep2.connect(flowstep4)
flowstep3.connect(flowstep4)
flowstep4.connect(*critics)

Here, we connect the FlowSteps to define the execution order and data dependencies. The connect method ensures that each FlowStep only runs when its dependencies are satisfied. This setup allows us to manage complex interactions and ensure the correct sequence of operations in our chat flow.

Create and Run the Flow

With the FlowSteps connected, we can now create the flow and execute it. This will process the input through each step and generate the final output. Here is the code snippet.

soundtrack_flow = Flow(flowstep1)
results = soundtrack_flow.start(topic="friendship", verbose=True)
print(json.dumps(results, indent=4))

Finally, we create an instance of the Flow class with the first FlowStep as the entry point. We then start the flow with a specific topic and print the results in a readable JSON format. This block initiates the entire conversation flow, executing each step in the defined order and generating the final output, including the critics’ reviews.

Conclusion

LLMFlows offers a structured approach to managing dependencies and sequences within chat applications, ensuring that each component operates seamlessly within the overall flow. By leveraging prompt templates, FlowSteps, and ChatFlowSteps, developers can design dynamic and context-aware interactions that enhance user experience and operational efficiency.

Sourabh Mehta
