1. 📘 Topic and Domain: The paper introduces Chain of Mindset (CoM), a framework for large language model reasoning that enables dynamic switching between different cognitive modes during problem-solving across mathematics, coding, and multimodal reasoning tasks.
2. 💡 Previous Research and New Ideas: The paper builds on cognitive-science research identifying distinct reasoning modes (e.g., spatial, convergent, and divergent thinking) and on existing LLM reasoning methods such as Chain-of-Thought, proposing step-level adaptive mindset orchestration, in which a model dynamically switches among four heterogeneous cognitive modes within a single reasoning process.
3. ❓ Problem: The paper addresses the limitation that existing LLM reasoning methods apply a single fixed mindset throughout problem-solving, which prevents models from adapting their cognitive approach when different stages of the same problem require fundamentally different reasoning strategies.
4. 🛠️ Methods: The authors developed a three-layer architecture in which a Meta-Agent orchestrates four specialized mindsets (Spatial, Convergent, Divergent, and Algorithmic), combined with a bidirectional Context Gate mechanism that filters the information flow between components to prevent interference during mindset transitions.
5. 📊 Results and Evaluation: CoM achieved state-of-the-art performance across six challenging benchmarks, outperforming the strongest baseline by 4.96% with Qwen3-VL-32B-Instruct and 4.72% with Gemini-2.0-Flash, with results evaluated using pass@1 accuracy while maintaining computational efficiency.
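The orchestration loop described in point 4 can be sketched in Python. This is a hypothetical illustration under stated assumptions, not the paper's implementation: the class and method names (`MetaAgent`, `select_mindset`, `context_gate`, `solve_step`) and the keyword-based selection policy are invented for clarity, whereas the real system presumably uses LLM-driven components for both mindset selection and gating.

```python
# Hypothetical sketch of step-level mindset orchestration with a context gate.
# All names and the selection/compatibility rules are illustrative assumptions,
# not the paper's actual Meta-Agent or Context Gate.
from dataclasses import dataclass, field

MINDSETS = ("spatial", "convergent", "divergent", "algorithmic")

@dataclass
class Step:
    mindset: str
    output: str

@dataclass
class MetaAgent:
    history: list = field(default_factory=list)

    def select_mindset(self, problem_state: str) -> str:
        # Placeholder policy: a real Meta-Agent would prompt an LLM to pick
        # the mindset best suited to the current reasoning stage.
        if "enumerate" in problem_state:
            return "divergent"
        if "verify" in problem_state:
            return "convergent"
        return "algorithmic"

    def context_gate(self, context: list, target: str) -> list:
        # Gate (sketch): expose only prior steps whose mindset is deemed
        # compatible with the target, to limit cross-mode interference.
        compatible = {
            "convergent": {"convergent", "algorithmic"},
            "divergent": {"divergent"},
            "spatial": {"spatial", "algorithmic"},
            "algorithmic": set(MINDSETS),
        }
        return [s for s in context if s.mindset in compatible[target]]

    def solve_step(self, problem_state: str) -> Step:
        mindset = self.select_mindset(problem_state)
        visible = self.context_gate(self.history, mindset)
        # A real implementation would invoke the mindset-specific solver here.
        step = Step(mindset, f"[{mindset}] saw {len(visible)} prior steps")
        self.history.append(step)
        return step

agent = MetaAgent()
print(agent.solve_step("enumerate candidate plans").mindset)  # divergent
print(agent.solve_step("verify the chosen plan").mindset)     # convergent
```

The key design point the sketch captures is that mindset choice happens per step, and each mindset reasons over a filtered view of the history rather than the full trace.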