1. 📘 Topic and Domain: This paper surveys efficient agents in the domain of Large Language Model (LLM)-based autonomous systems, focusing on memory, tool learning, and planning components.
2. 💡 Previous Research and New Ideas: The paper builds on existing LLM agent research, observing that while agent effectiveness has steadily improved, efficiency (latency, token consumption, and computational cost) has been largely overlooked. It proposes a comprehensive framework that analyzes efficiency across the memory, tool-learning, and planning components.
3. ❓ Problem: The paper addresses the critical efficiency bottleneck in LLM-based agents: because multi-step execution re-sends the accumulated history at every step, the context grows with each turn, so resource consumption compounds rapidly through token accumulation, context window saturation, and escalating computational cost.
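The cost compounding described above can be made concrete with a toy calculation (hypothetical token counts, not from the paper): when an agent re-sends its full history at every step, the *total* tokens processed over a run grows quadratically in the number of steps.

```python
def total_tokens_processed(steps: int, tokens_per_step: int = 500) -> int:
    """Total tokens an agent processes over a run when each step re-sends
    the entire accumulated history (hypothetical uniform step size)."""
    # At step k the context contains k steps' worth of tokens, so the
    # run total is tokens_per_step * (1 + 2 + ... + steps).
    return sum(step * tokens_per_step for step in range(1, steps + 1))

# Doubling the number of steps roughly quadruples the cumulative cost:
short_run = total_tokens_processed(10)  # 27,500 tokens
long_run = total_tokens_processed(20)   # 105,000 tokens
```

This quadratic blow-up is why the survey's memory and planning techniques focus on shrinking or pruning the per-step context rather than just shortening runs.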
4. 🛠️ Methods: The authors conduct a systematic literature review categorizing efficiency techniques into three core components: efficient memory (construction, management, access), efficient tool learning (selection, calling, reasoning), and efficient planning (single-agent and multi-agent strategies).
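As one illustration of the "efficient tool selection" sub-category, a minimal sketch (my own example, not a method from the paper) of pre-filtering tools so only the most relevant descriptions are sent in the prompt, rather than the full tool catalog:

```python
def select_tools(query: str, tools: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank tools by naive keyword overlap between the user query and each
    tool's description, and keep only the top_k most relevant tool names.
    Real systems would use embedding similarity, but the principle is the
    same: shrink the tool context before calling the LLM."""
    query_words = set(query.lower().split())

    def overlap(name: str) -> int:
        return len(query_words & set(tools[name].lower().split()))

    ranked = sorted(tools, key=overlap, reverse=True)
    return ranked[:top_k]

tools = {
    "calculator": "perform arithmetic math operations",
    "weather": "get current weather forecast",
    "search": "search the web for information",
}
chosen = select_tools("what is the weather forecast today", tools, top_k=1)
```

Here only one tool schema reaches the prompt instead of three, which is the token-saving lever the tool-selection literature targets.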
5. 📊 Results and Evaluation: The survey synthesizes efficiency metrics across benchmarks and methods, revealing common principles such as context compression, reinforcement learning to minimize tool invocations, and controlled search mechanisms, while identifying the lack of standardized efficiency evaluation frameworks as an open gap.
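To make the "context compression" principle concrete, a minimal sliding-window sketch (an assumed simplification; surveyed methods typically use an LLM to write the summary rather than a stub):

```python
def compress_context(history: list[str], keep_last: int = 3) -> list[str]:
    """Keep the most recent turns verbatim and collapse all older turns
    into a single summary placeholder, bounding context growth."""
    if len(history) <= keep_last:
        return history  # nothing to compress yet
    older, recent = history[:-keep_last], history[-keep_last:]
    # In practice this stub would be replaced by an LLM-generated summary.
    summary = f"[summary of {len(older)} earlier turns]"
    return [summary] + recent

turns = ["turn 1", "turn 2", "turn 3", "turn 4", "turn 5"]
compressed = compress_context(turns, keep_last=2)
```

The design choice is the trade-off the survey highlights: a bounded context caps per-step cost, at the risk of losing detail from early turns.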