1. 📘 Topic and Domain: The paper focuses on recursive reasoning models for solving complex puzzle tasks using small neural networks in the domain of machine learning and artificial intelligence.
2. 💡 Previous Research and New Ideas: Based on the Hierarchical Reasoning Model (HRM), the paper proposes a simpler Tiny Recursive Model (TRM) that uses a single tiny network instead of two networks recursing at different frequencies.
3. ❓ Problem: The paper aims to achieve high performance on complex puzzle tasks (such as Sudoku, Maze, and ARC-AGI) with minimal parameters, while avoiding the architectural complexity and theoretical assumptions (e.g., fixed-point convergence arguments) of existing approaches.
4. 🛠️ Methods: TRM uses a single tiny 2-layer network that recursively improves its latent reasoning feature and predicted answer across multiple deep-supervision steps, incorporating an exponential moving average (EMA) of weights and a simplified adaptive computational time (ACT) mechanism.
5. 📊 Results and Evaluation: TRM achieved better results than HRM and large language models on multiple benchmarks while using fewer parameters (7M vs 27M), including 87.4% accuracy on Sudoku-Extreme, 85.3% on Maze-Hard, 44.6% on ARC-AGI-1, and 7.8% on ARC-AGI-2.
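The recursive improvement loop described in point 4 can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the real TRM uses a 2-layer transformer over token embeddings, whereas here a single random weight matrix stands in for the shared tiny network, and the dimensions, step counts, and update rule details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16  # toy embedding width (illustrative; not the paper's setting)

# Hypothetical stand-in for TRM's single shared tiny network:
# one weight matrix applied to the concatenated (input, answer, latent).
W = rng.normal(scale=0.1, size=(3 * D, D))

def tiny_net(x, y, z):
    """One shared network for both the latent and the answer update."""
    return np.tanh(np.concatenate([x, y, z]) @ W)

def improvement_step(x, y, z, n_latent=6):
    """One supervision step: refine the latent reasoning feature z
    several times, then refine the predicted answer y once."""
    for _ in range(n_latent):
        z = tiny_net(x, y, z)
    y = tiny_net(x, y, z)  # simplified: the paper updates y from (y, z)
    return y, z

x = rng.normal(size=D)   # puzzle input embedding
y = np.zeros(D)          # initial answer embedding
z = np.zeros(D)          # initial latent reasoning feature

# Deep supervision: repeat the step; during training a loss would be
# applied to the decoded answer y after every step.
for _ in range(3):
    y, z = improvement_step(x, y, z)

print(y.shape)  # (16,)
```

The key design point this sketch captures is that one tiny network is reused for every refinement, so depth comes from recursion rather than from parameters.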