
Mastering Data Compression with LLMs via LMCompress
LMCompress uses large language models to achieve state-of-the-art lossless compression across text, image, audio, and video by approximating Solomonoff induction.
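The claim rests on a single principle: an autoregressive model that assigns probability P(x) to a file can, via arithmetic coding, store that file losslessly in roughly -log2 P(x) bits, so the better the model's next-token predictions, the shorter the output, and an ideal universal predictor would approach the Solomonoff/Kolmogorov limit. The sketch below illustrates only that general principle and is not LMCompress's implementation: a Laplace-smoothed unigram counter (`AdaptiveModel`) stands in for an LLM, and `encode`/`decode` are illustrative names rather than any published API.

```python
"""Toy illustration of model-driven lossless compression via arithmetic coding."""
import math
from collections import defaultdict
from fractions import Fraction


class AdaptiveModel:
    """Toy stand-in for an LLM: Laplace-smoothed adaptive unigram counts."""

    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.counts = defaultdict(lambda: 1)  # start every symbol at count 1

    def distribution(self):
        total = sum(self.counts[s] for s in self.alphabet)
        return {s: Fraction(self.counts[s], total) for s in self.alphabet}

    def update(self, symbol):
        self.counts[symbol] += 1


def _intervals(dist, alphabet):
    """Map each symbol to its cumulative sub-interval of [0, 1)."""
    low, out = Fraction(0), {}
    for s in alphabet:
        out[s] = (low, low + dist[s])
        low += dist[s]
    return out


def encode(data, alphabet):
    """Arithmetic-encode `data`, returning a point in the final interval and the ideal bit count."""
    model = AdaptiveModel(alphabet)
    low, high = Fraction(0), Fraction(1)
    for symbol in data:
        span = high - low
        s_low, s_high = _intervals(model.distribution(), model.alphabet)[symbol]
        low, high = low + span * s_low, low + span * s_high
        model.update(symbol)
    # Interval width equals the model's probability of the whole sequence,
    # so -log2(width) is the cross-entropy of the data in bits.
    ideal_bits = -math.log2(float(high - low))
    return (low + high) / 2, ideal_bits


def decode(code, length, alphabet):
    """Invert `encode` by replaying the same adaptive model."""
    model = AdaptiveModel(alphabet)
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        span = high - low
        target = (code - low) / span
        for s, (s_low, s_high) in _intervals(model.distribution(), model.alphabet).items():
            if s_low <= target < s_high:
                out.append(s)
                low, high = low + span * s_low, low + span * s_high
                model.update(s)
                break
    return "".join(out)


if __name__ == "__main__":
    text = "abracadabra"
    alphabet = sorted(set(text))
    code, ideal_bits = encode(text, alphabet)
    assert decode(code, len(text), alphabet) == text
    print(f"'{text}' ({8 * len(text)} raw bits) -> about {ideal_bits:.1f} bits under the toy model")
```

Swapping the toy counter for an LLM's next-token distribution, conditioned on the already-decoded prefix, is what turns this scheme into an LLM-based compressor: arithmetic coding is guaranteed to come within about two bits of -log2 P(x), so essentially all of the compression gain comes from the quality of the predictive model.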