-
A Deep Dive into Elasticsearch and Kibana’s Semantic Capabilities
Elasticsearch’s vector search capabilities enable intelligent, context-aware applications through AI-powered semantic understanding.
-
DSPy-Based Prompt Optimization: A Hands-On Guide
DSPy simplifies prompt and parameter optimization for LLMs by automating adjustments, freeing developers from manual tweaks to focus on building impactful systems.
-
Hands-On Guide to Building an AI-Driven Local Search Engine with Ollama
A step-by-step walkthrough of building an AI-driven search engine with Ollama that runs entirely on your local machine.
-
A Practitioner’s Guide to Inference over 1-bit LLMs Using bitnet.cpp
Explore 1-bit LLMs and bitnet.cpp for faster, more efficient inference with large language models.
-
A Hands-on Guide to Airtrain AI: A No-code Compute Platform
Airtrain AI simplifies LLM fine-tuning with a no-code interface and high-quality models.
-
A Practical Guide to Janus 1.3B’s Multimodal AI Capabilities
Janus is a cutting-edge AI system designed to handle both image and text tasks, excelling in understanding and generating images.
-
Multi-agent Orchestration through OpenAI’s Swarm – A Hands-on Guide
OpenAI’s Swarm framework explores multi-agent orchestration, showcasing simple routines and handoffs in action.
-
A Guide to Running LLMs Locally with No-Code Framework Dify
Running LLMs locally on your CPU with Dify and Ollama opens up a world of possibilities for AI enthusiasts, developers, and privacy-conscious users.
-
Optimizing LLM Inference for Faster Results Using Quantization – A Hands-On Guide
Optimizing LLM inference through quantization is a powerful strategy that can dramatically enhance performance with only a slight reduction in accuracy.
-
Multilingual Tokenization Efficiency in Large Language Models: A Study on Indian Languages
Authors: Mohamed Azharudeen M, Balaji Dhamodharan