
A Deep Dive into Federated Learning of LLMs
Federated Learning (FL) enables privacy-preserving training of Large Language Models (LLMs) across decentralized data sources.
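To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical aggregation step in federated learning: each client trains on its own private data, and only model weights (never the raw data) are sent to the server, which averages them weighted by client dataset size. All function names and the toy scalar "model" below are illustrative assumptions, not the API of any particular FL framework.

```python
# Hypothetical FedAvg sketch. The "model" is a single scalar weight
# fit to each client's data mean via gradient descent on squared error,
# standing in for a full LLM's parameter vector.

def local_update(weight, client_data, lr=0.1):
    """One local gradient step on a client's private data."""
    grad = sum(weight - x for x in client_data) / len(client_data)
    return weight - lr * grad

def fedavg_round(global_weight, clients, local_steps=5):
    """One communication round: each client trains locally, then the
    server averages the resulting weights, weighting each client by
    its number of samples. Only weights cross the network."""
    updates, sizes = [], []
    for data in clients:
        w = global_weight
        for _ in range(local_steps):
            w = local_update(w, data)
        updates.append(w)
        sizes.append(len(data))
    total = sum(sizes)
    return sum(w * n for w, n in zip(updates, sizes)) / total

# Three clients holding private datasets the server never sees.
clients = [[1.0, 2.0], [3.0], [5.0, 6.0, 7.0]]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
print(round(w, 2))  # converges to the sample-weighted global mean, 4.0
```

In a real LLM setting, `local_update` would be several optimizer steps over a parameter tensor (often only a small adapter, e.g. LoRA weights, to keep communication cheap), but the aggregation logic is the same weighted average shown here.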
Author(s): Mohamed Azharudeen M, Balaji Dhamodharan