
Mastering Long Context AI through MiniMax-01
MiniMax-01 achieves up to 4M tokens with lightning attention and MoE, setting new standards for long-context modeling.
Constitutional Classifiers provide a robust framework to defend LLMs against universal jailbreaks, leveraging adaptive filtering of harmful inputs and outputs.
Author(s): Mohamed Azharudeen M, Balaji Dhamodharan
Author(s): Varun Malhotra, Gaurav Adke, Ameya Divekar
Author(s): Chandan Kumar Agarwal, Aditi Raghuvanshi, Suresh S K, Sovan Gosh
Kimi K1.5 revolutionizes LLM scaling by leveraging RL for long-context reasoning, policy optimization, and multimodal reasoning.
Janus-Pro advances multimodal AI by decoupling visual understanding and generation, optimizing training strategies for superior visual understanding and text-to-image generation.
Transformer2 is a revolutionary framework enhancing LLMs with self-adaptive capabilities through Singular Value Fine-Tuning and dynamic expert-vector composition at inference time.
Smolagents enables large language models (LLMs) to handle dynamic workflows with ease. Learn how its minimalist, code-centric agent design simplifies building agentic applications.
AI hallucinations challenge generative models’ reliability in critical applications. Learn about advanced techniques for detecting and mitigating them.
DeepSeek-R1 harnesses reinforcement learning to achieve cutting-edge reasoning capabilities, outperforming traditional SFT approaches. Discover its training pipeline and distilled models.
Titans redefine neural memory by integrating short- and long-term components for efficient retention and retrieval.
CAG (cache-augmented generation) eliminates retrieval latency and simplifies knowledge workflows by preloading and caching context. Learn how it compares to traditional RAG pipelines.