AnythingLLM is one of the top choices for running and inferencing large language models locally. It provides a comprehensive suite of features designed to maximize the potential of locally run language models while preserving information security and privacy. It offers a flexible, no-code way to use state-of-the-art LLMs, embedding models, vector databases and AI agents on a single platform. This article explains how AnythingLLM works using a hands-on approach.
Table of Contents
- Overview of AnythingLLM
- Functionalities of AnythingLLM
- Hands-on Tutorial on AnythingLLM
Overview of AnythingLLM
AnythingLLM is an open-source, all-in-one desktop application that allows users to create and run their own LLM-based AI agents locally. Its primary USP is that it requires no coding or complex infrastructure setup. It is a zero-setup, private application for local LLMs, RAG and AI agents, with support for embedding models and vector databases.
It supports a wide range of LLM and embedding model providers, depending on the data modality and usage, including OpenAI, Anthropic, Ollama and LM Studio.
AnythingLLM also supports a number of vector databases, such as LanceDB, Chroma and Pinecone.
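To make these pieces concrete, the short sketch below walks through the retrieval-augmented generation (RAG) loop that AnythingLLM automates behind its interface: chunks are embedded, stored, retrieved by similarity to the query, and stitched into the prompt sent to the LLM. The code is purely illustrative; the toy bag-of-words "embedding" and helper names are placeholders, not AnythingLLM APIs.

```python
# Illustrative sketch of the RAG loop AnythingLLM automates behind its UI.
# The "embedding" here is a toy bag-of-words vector; real deployments use an
# embedding model and a vector database such as LanceDB.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "AnythingLLM runs large language models locally on your desktop.",
    "LanceDB is the default local vector database in AnythingLLM.",
    "Agent skills include web scraping and document summarization.",
]
index = [(doc, embed(doc)) for doc in documents]   # 1. chunk + embed + store

query = "Which vector database does AnythingLLM use by default?"
scores = [(cosine(embed(query), vec), doc) for doc, vec in index]
top_chunks = [doc for _, doc in sorted(scores, reverse=True)[:2]]   # 2. retrieve

prompt = (
    "Answer using the context below.\n\nContext:\n"
    + "\n".join(top_chunks)
    + f"\n\nQuestion: {query}"
)
print(prompt)   # 3. this augmented prompt is what gets sent to the LLM
```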
Functionalities of AnythingLLM
The key functionalities and features of AnythingLLM are summarized in the table below:
Hands-on Tutorial on AnythingLLM
Step 1: AnythingLLM can be installed and used as a desktop application or through Docker; for this tutorial we will use the desktop application. Visit https://docs.useanything.com/installation/overview, download the setup file for your OS and install it.
Step 2: After installation, launch the application and create a workspace –
Step 3: Go to the settings and select LLM under the AI Providers option. Here we will use the LLaVa-Llama3 8B multimodal LLM. Select the LLM and click on Save changes to start the model download –
Step 4: Once the model is downloaded, select the Embedder option and use the LLaVa-Llama3:latest embedding model. You can set the chunk length as per your requirement –
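The chunk length setting controls how documents are split before they are embedded: longer chunks preserve more context per vector, while shorter chunks give more precise retrieval. A minimal sketch of fixed-length chunking with overlap is shown below; the 1000-character size and 100-character overlap are arbitrary illustration values, not AnythingLLM defaults.

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so that
    sentences cut at a boundary still appear intact in a neighbouring chunk."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

# Each chunk is then passed to the embedder and stored as one vector.
print(len(chunk_text("lorem ipsum " * 500)))
```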
Step 5: The Vector Database option can be used to connect the database to be used for embedding storage and search. LanceDB is the default local vector database in AnythingLLM –
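To see what the vector database actually does with those chunks, here is a small sketch using the LanceDB Python client (assuming `pip install lancedb`); the table name, three-dimensional vectors and texts are made up for illustration.

```python
import lancedb

# Connect to (or create) a local LanceDB directory -- no server process needed.
db = lancedb.connect("./anythingllm-demo-db")

# Store a few records, each pairing a text chunk with its embedding vector.
# The 3-dimensional vectors are placeholders; real embeddings have hundreds
# or thousands of dimensions.
table = db.create_table(
    "chunks",
    data=[
        {"vector": [0.1, 0.3, 0.5], "text": "AnythingLLM supports local LLMs."},
        {"vector": [0.2, 0.1, 0.4], "text": "LanceDB is the default vector database."},
    ],
    mode="overwrite",
)

# Retrieve the chunk nearest to a query embedding.
results = table.search([0.15, 0.25, 0.45]).limit(1).to_list()
print(results[0]["text"])
```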
Step 6: AnythingLLM provides RAG, document summarization and web scraping as default agent skills, along with optional skills such as generating and saving files to the browser, generating charts, web search and an SQL connector. We will use the default skills for this tutorial –
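Conceptually, an agent skill is a function the platform runs on the model's behalf: the LLM names a skill and its arguments, the application executes it and feeds the result back into the conversation. The sketch below is a generic illustration of that dispatch loop, not AnythingLLM's internal implementation; the skill names and the simulated model output are hypothetical.

```python
# Generic illustration of how an agent skill is dispatched.
# Skill names and the fake model output below are hypothetical.
import json

def summarize_document(path: str) -> str:
    return f"(summary of {path})"

def scrape_website(url: str) -> str:
    return f"(scraped text from {url})"

SKILLS = {
    "summarize_document": summarize_document,
    "scrape_website": scrape_website,
}

# Pretend the LLM responded with a structured tool call.
llm_tool_call = json.dumps({"skill": "scrape_website", "args": {"url": "https://example.com"}})

call = json.loads(llm_tool_call)
result = SKILLS[call["skill"]](**call["args"])   # run the requested skill
print(result)                                     # the result is fed back to the LLM
```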
Step 7: Let’s input a prompt with image data and see the response –
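For readers who want to reproduce the same image-plus-text interaction outside the desktop app, multimodal models such as LLaVa-Llama3 can also be served with Ollama, which exposes a local HTTP API. The sketch below assumes an Ollama server running on its default port 11434 with the llava-llama3 model already pulled; photo.jpg is a placeholder path.

```python
import base64
import requests

# Assumes a local Ollama server on its default port (11434) with the
# llava-llama3 model already pulled; "photo.jpg" is a placeholder path.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llava-llama3",
        "prompt": "Describe what is shown in this image.",
        "images": [image_b64],
        "stream": False,
    },
    timeout=300,
)
print(response.json()["response"])
```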
Step 8: We can also select an OpenAI LLM and change the workspace settings to run RAG over our own textual data –
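AnythingLLM can also be driven programmatically through its developer API, which is enabled from the settings screen where an API key is generated. The port, endpoint path and payload fields in the sketch below reflect the AnythingLLM API documentation as best I understand it and should be treated as assumptions; the workspace slug and key are placeholders.

```python
import requests

# Assumptions: the developer API is enabled, AnythingLLM is listening on the
# default local port 3001, and API_KEY / "my-workspace" are placeholders.
API_KEY = "YOUR-ANYTHINGLLM-API-KEY"
WORKSPACE_SLUG = "my-workspace"

response = requests.post(
    f"http://localhost:3001/api/v1/workspace/{WORKSPACE_SLUG}/chat",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"message": "Summarize the uploaded documents.", "mode": "query"},
    timeout=120,
)
print(response.json())
```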
Final Words
AnythingLLM is an important privacy-focused AI solution that pushes toward decentralized AI development. By enabling local LLM execution and inference, it empowers users to harness the power of LLMs without compromising data security or managing complex infrastructure and code. It represents a significant step towards easy and efficient implementation of AI agents and RAG.