LLM-based applications, such as conversational AI, often fail to maintain state, memory, and context, which degrades their output quality and performance. To overcome this drawback, LangChain introduced LangGraph, which simplifies the creation and management of AI agents and their runtimes. It is well suited to building reliable, fault-tolerant agent-based systems: it uses a StateGraph to define the states of agents and supports loops, allowing it to handle more ambiguous inputs than simple chains can. In this article, we will break down what LangGraph is and see how to build a customer support AI agent using LangChain, supporting its runtime through the LangGraph framework.
Table of Contents
- Understanding LangGraph
- Key Components of LangGraph
- How Can LangGraph Enhance the AI Chat Agent Experience?
- Supporting LangChain Conversational Agent Runtimes Using LangGraph
Let’s start with an introduction to LangGraph and see how it enhances customer support AI chat agents.
Understanding LangGraph
LangGraph is a specialized library in the LangChain ecosystem for building stateful, multi-actor applications with LLMs. Its main advantage is that it coordinates and checkpoints different chains or actors using regular Python functions.
LangGraph introduces a significant advancement in handling cyclic computational steps, for which traditional Directed Acyclic Graphs (DAGs) are inadequate. Inspired by systems like Pregel and Apache Beam, and with an interface similar to NetworkX, it extends the LangChain Expression Language to coordinate multiple actors and chains in sophisticated workflows. These capabilities let developers orchestrate the complex program structures needed by applications that require intelligent, agent-like behavior and iterative decision-making.
Key Components of LangGraph
LangGraph simplifies AI agent development by focusing on three key components (a minimal code sketch follows these descriptions):
State: The state is a snapshot of the agent’s current status.
Node: Nodes are the building blocks that execute computations. A node can be an LLM call or plain Python code. Each graph execution creates a state that is passed between nodes; each node reads that state and returns an update to it after executing.
Edges: Through conditional branches and cycles, edges bring flexibility to the agents’ control flow. Edges are like a web of roadways on which cars (the data) take different routes in response to signals or choices.
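To make these components concrete, here is a minimal, hypothetical sketch using StateGraph (the AgentState fields and the answer_node name are illustrative, not part of this article’s example): the state is a typed dictionary, each node is a plain Python function that returns a state update, and edges wire the nodes together.
from typing import TypedDict
from langgraph.graph import StateGraph, END

# State: a typed dictionary describing the agent's current status
class AgentState(TypedDict):
    question: str
    answer: str

# Node: a plain Python function that reads the state and returns an update
def answer_node(state: AgentState) -> dict:
    return {"answer": f"You asked: {state['question']}"}

builder = StateGraph(AgentState)
builder.add_node("answer", answer_node)
builder.set_entry_point("answer")
builder.add_edge("answer", END)  # Edge: controls where execution flows next

graph = builder.compile()
print(graph.invoke({"question": "Is my fan under warranty?", "answer": ""}))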
LangGraph uses a graph algorithm that processes information by passing messages between different points, or “nodes,” in a network. Each node can perform a task; when a node finishes, it sends a message to one or more other nodes, which perform their own tasks and pass the results on, creating a chain of tasks. This happens in “super-steps,” during which many tasks can run simultaneously. When the graph starts running, all nodes are inactive, waiting to be triggered by a message.
A node becomes active when it receives a message, performs its task, and sends out updates. At the end of each super-step, nodes decide if they should become inactive again by checking if they have more messages to process. The whole process stops when all nodes are inactive and no messages are left to be passed around. This method ensures that tasks are processed efficiently and in an organized manner, similar to how a team works together, passing information and updates until the project is complete.
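You can watch these super-steps directly: streaming a compiled graph in “updates” mode yields one chunk per node as each message is passed along. A short sketch, reusing the hypothetical graph built above:
# Each chunk maps a node name to the update it produced in that super-step
for chunk in graph.stream(
    {"question": "Is my fan under warranty?", "answer": ""}, stream_mode="updates"
):
    print(chunk)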
How Can LangGraph Enhance the AI Chat Agent Experience?
Unlike traditional NLP frameworks that depend on statistical analysis, LangGraph’s graph-based approach offers several advantages:
Flow Engineering
LangGraph enables a more iterative and controllable approach to working with LLMs, leading to superior results compared to single-prompt interactions. It facilitates an iterative “flow” where the LLM is queried in a loop, allowing it to influence subsequent actions.
Stateful Workflows
LangGraph supports complex workflows that require maintaining and referencing past information. It allows for the creation of cyclical graphs, which are particularly useful for agent runtimes, enabling the introduction of cycles into chains and enhancing the reasoning capabilities of AI systems.
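As an illustration of such a cycle (a toy sketch, not this article’s agent; LoopState, refine, and good_enough are made-up names), a conditional edge can route execution back to the same node until a condition on the state is met:
from typing import TypedDict
from langgraph.graph import StateGraph, END

class LoopState(TypedDict):
    attempts: int

def refine(state: LoopState) -> dict:
    # Stand-in for querying the LLM again with feedback from the previous pass
    return {"attempts": state["attempts"] + 1}

def good_enough(state: LoopState) -> str:
    # Loop back until three refinement passes have run
    return "done" if state["attempts"] >= 3 else "retry"

builder = StateGraph(LoopState)
builder.add_node("refine", refine)
builder.set_entry_point("refine")
builder.add_conditional_edges("refine", good_enough, {"retry": "refine", "done": END})

print(builder.compile().invoke({"attempts": 0}))  # {'attempts': 3}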
Supporting LangChain Conversational Agent Runtimes Using LangGraph
Here, we are going to use LangGraph to build a customer support AI chat agent with the gpt-3.5-turbo model. A MessageGraph object (a StateGraph whose state is the list of exchanged messages) defines the structure of our chat agent as a state machine. We’ll add nodes to represent the LLM and the functions our chat agent and user can call, and edges to specify how the bot should transition between them.
LangGraph simplifies the creation of cyclical graphs, a key element in advanced agent runtimes. Here, we use LangGraph to provide clear, suitable responses to user queries and to solve the user’s problem. Its cyclic flow passes feedback and responses between the agents and updates their state accordingly, enhancing the chat agent’s efficiency.
To begin with, we will install the required packages and import all the required libraries. Then, we will set up the environment by connecting to our OpenAI API key.
!pip install --q openai langchain_community langchain_openai langgraph
import os
from typing import List

import openai
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage
from langchain_community.adapters.openai import convert_message_to_dict
from langgraph.graph import END, MessageGraph
from langgraph.checkpoint.sqlite import SqliteSaver

os.environ["OPENAI_API_KEY"] = "sk-****"
Define an AI chat agent of your choice, giving it a role and a system message. The Python function below, my_chat_bot, creates a conversational interface using OpenAI’s gpt-3.5-turbo model. It first defines a system message establishing the assistant’s role as a customer support agent, then prepends this message to the list of messages provided as input. Using OpenAI’s API, it generates a response based on the conversation history and returns the model’s output as a dictionary, allowing seamless interaction between users and the AI chat agent.
def my_chat_bot(messages: List[dict]) -> dict:
    # Prepend the system message that defines the bot's role
    system_message = {
        "role": "system",
        "content": "You are a customer support agent for a product company.",
    }
    messages = [system_message] + messages
    completion = openai.chat.completions.create(
        messages=messages, model="gpt-3.5-turbo"
    )
    # Return the assistant's reply as a plain dictionary
    return completion.choices[0].message.model_dump()

my_chat_bot([{"role": "user", "content": "hi!"}])
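If the call succeeds, the returned dictionary follows OpenAI’s chat message schema; a representative (not exact) result looks like this:
# Representative output; the content varies between runs:
# {'content': 'Hello! How can I assist you today?',
#  'role': 'assistant',
#  'function_call': None,
#  'tool_calls': None}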
Create a simulated customer and give it the proper instructions. This piece of code sets up a scenario in which Mahesh, a customer, contacts support about a faulty fan he bought recently. It guides the conversation and instructs the simulated user to respond with ‘TERMINATE’ when finished. The setup includes instructions explaining Mahesh’s problem: he wants a full refund for a fan that broke just two days after he bought it. It then kicks off a simulated chat between Mahesh and the AI support agent.
system_prompt_template = """You are a customer of a company that sells charging fans. \
You are interacting with a user who is a customer support person. \
{instructions}
When you are finished with the conversation, respond with a single word 'TERMINATE'"""

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt_template),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

instructions = """Your name is Mahesh. You are trying to get a refund for the charging fan. \
You want them to give you ALL the money back. \
You bought the fan 2 days back. \
And it is not working properly."""

prompt = prompt.partial(name="Mahesh", instructions=instructions)
model = ChatOpenAI()
simulated_user = prompt | model

messages = [HumanMessage(content="Hi! How can I help you?")]
simulated_user.invoke({"messages": messages})
The function below, chat_bot_node, connects LangChain to the OpenAI chat agent. It converts messages from LangChain format to the OpenAI format our chatbot function expects, calls the chat bot, and returns the reply as an AIMessage. This keeps the interaction between users and the AI chat agent seamless.
def chat_bot_node(messages):
    # Convert from LangChain format to the OpenAI format, which our chatbot function expects
    messages = [convert_message_to_dict(m) for m in messages]
    # Call the chat bot
    chat_bot_response = my_chat_bot(messages)
    # Respond with an AIMessage
    return AIMessage(content=chat_bot_response["content"])
The _swap_roles function takes a list of messages, checks their types, and swaps their roles accordingly. The simulated_user_node function uses _swap_roles to put the messages in the right format, then calls the simulated user with the adjusted messages. Finally, it converts the AI-generated response back into a HumanMessage before returning it, so the chat bot sees it as a user turn.
def _swap_roles(messages):
    new_messages = []
    for m in messages:
        if isinstance(m, AIMessage):
            new_messages.append(HumanMessage(content=m.content))
        else:
            new_messages.append(AIMessage(content=m.content))
    return new_messages

def simulated_user_node(messages):
    # Swap roles so the simulated user sees the bot's turns as human turns
    new_messages = _swap_roles(messages)
    # Call the simulated user
    response = simulated_user.invoke({"messages": new_messages})
    # This response is an AI message - we need to flip it to be a human message
    return HumanMessage(content=response.content)
This Python function, named should_continue, checks whether a conversation should continue or end based on the number of messages exchanged. If there are more than six messages or if the last message is “TERMINATE,” it returns “end”; otherwise, it suggests continuing the conversation. It’s a simple yet effective tool for managing dialogue flow in automated systems.
def should_continue(messages):
    # Stop after six messages, or when the simulated user says it is done
    if len(messages) > 6:
        return "end"
    elif messages[-1].content == "TERMINATE":
        return "end"
    else:
        return "continue"
Now, we are going to create our AI chat agent graph. We add the nodes “user” and “chat_bot”, add an edge from “chat_bot” to “user”, and set “chat_bot” as the graph’s entry point. The conditional edge evaluates should_continue after each user turn: once its finish criteria are met, the conversation ends; otherwise control loops back to “chat_bot”.
graph_builder = MessageGraph()
graph_builder.add_node("user", simulated_user_node)
graph_builder.add_node("chat_bot", chat_bot_node)
# The input will first go to your chat bot
graph_builder.set_entry_point("chat_bot")
# Every chat bot turn is followed by a simulated user turn
graph_builder.add_edge("chat_bot", "user")
graph_builder.add_conditional_edges(
    "user",
    should_continue,
    # If the finish criteria are met, we will stop the simulation;
    # otherwise, the virtual user's message will be sent to your chat_bot
    {
        "end": END,
        "continue": "chat_bot",
    },
)

# Compile once with a SQLite checkpointer so the conversation is persisted in memory...
memory = SqliteSaver.from_conn_string(":memory:")
graph_1 = graph_builder.compile(checkpointer=memory)
# ...and once without a checkpointer, for the simple simulation below
simulation = graph_builder.compile()
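Because graph_1 was compiled with a checkpointer, you need to pass a thread_id in the config when running it, so LangGraph knows which conversation to persist and resume. A minimal sketch (the thread_id value is arbitrary):
# The thread_id identifies this conversation in the SQLite checkpoint store
config = {"configurable": {"thread_id": "support-demo-1"}}
for chunk in graph_1.stream([], config=config):
    if END not in chunk:
        print(chunk)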
Let’s plot the flow of the chat:
from IPython.display import Image, display

try:
    display(Image(graph_1.get_graph(xray=True).draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now, initiate the chat, and the output will be something like this:
for chunk in simulation.stream([]):
    # Print out all events aside from the final end chunk
    if END not in chunk:
        print(chunk)
        print("----")
Output: a streamed transcript in which the chat bot and the simulated user (Mahesh) alternate turns until should_continue returns “end”.
From the example above, we can see that, with LangGraph’s help, an AI chat agent can remember a user’s preferences, history, interests, and issues, and provide more personalized responses and solutions. We used LangGraph to create better user-agent interaction and a smooth conversational flow, visualized that flow as a graph, and stored the conversation in memory via the checkpointer.
In conclusion, LangGraph speeds up development and makes it simpler to create AI applications with different designs. We can define actors, their attributes, relationships, and behaviors using a graph-based system. Another distinctive feature of LangGraph is its support for cyclic data flows, which means the system can learn from past interactions and improve future responses.
Possible Developments in AI Chat Agents through LangGraph
The future of AI chat agents built with LangGraph looks promising. As a specialized library within the LangChain ecosystem, LangGraph empowers developers to build sophisticated chat agents with advanced capabilities.
By leveraging LangGraph, the future of AI chat agents is poised to witness significant advancements in several key areas:
- Enhanced Conversational Experiences
- Iterative Interactions and Control
- Customizable Conversational Flows
- Multi-Agent Collaboration
- Adaptive RAG Integration
Conclusion
The LangGraph library marks a major change in how we approach language understanding. By harnessing the power of graphs, it opens the door for machines to comprehend the full complexity of human language. With the potential to transform how humans communicate with machines, it could pave the way for more intelligent, intuitive, and deeper communication in the future.