1. 📘 Topic and Domain: The paper presents ReCode, a paradigm for Large Language Model (LLM) based agents that focuses on universal granularity control in decision-making through recursive code generation.
2. 💡 Previous Research and New Ideas: Building on prior LLM agent frameworks such as ReAct and planner-based agents, the paper introduces a novel approach that unifies planning and action into a single code representation, treating high-level plans as abstract placeholder functions.
3. ❓ Problem: The paper addresses the limitation of current LLM-based agents that have rigid separation between high-level planning and low-level actions, preventing flexible decision granularity control across different task complexities.
4. 🛠️ Methods: ReCode uses recursive code generation where placeholder functions are progressively decomposed into finer-grained sub-functions until reaching primitive actions, implementing this through a unified variable namespace and error handling system.
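The recursive decomposition described above can be illustrated with a minimal sketch. This is not the paper's implementation: the primitive set, the expansion table (standing in for LLM-generated sub-functions), and all function names are invented for illustration; in ReCode the expansions would be produced by the model at each recursion step, not looked up.

```python
# Hypothetical sketch of ReCode-style recursive decomposition.
# Placeholder functions are expanded into finer-grained sub-steps until
# only primitive actions remain; all steps share one variable namespace.

PRIMITIVES = {"goto", "pick_up", "put_down"}  # example primitive actions

# Stand-in for LLM-generated refinements of placeholder functions.
EXPANSIONS = {
    "make_coffee": [("fetch_cup",), ("goto", "machine"), ("pick_up", "coffee")],
    "fetch_cup": [("goto", "cupboard"), ("pick_up", "cup")],
}

def decompose(step, namespace):
    """Recursively expand a step; primitives are 'executed' into the trace."""
    name = step[0]
    if name in PRIMITIVES:
        namespace.setdefault("trace", []).append(step)
        return
    for sub in EXPANSIONS[name]:  # placeholder -> finer-grained sub-steps
        decompose(sub, namespace)

ns = {}  # unified variable namespace shared across all recursion levels
decompose(("make_coffee",), ns)
print(ns["trace"])
# -> [('goto', 'cupboard'), ('pick_up', 'cup'), ('goto', 'machine'), ('pick_up', 'coffee')]
```

The key design point mirrored here is that abstract plans and concrete actions use the same representation (a function call), so the agent can stop decomposing at whatever granularity the task requires.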
5. 📊 Results and Evaluation: Across three environments (ALFWorld, ScienceWorld, WebShop), ReCode achieved significant improvements over baselines, with an average score increase of 20.9% on inference tasks, and demonstrated superior data efficiency in training, achieving better performance while using 3.7x less data.