Advancements in the capabilities of Large Language Models (LLMs) have created a promising foundation for developing autonomous agents. With the right tools, these agents could learn to solve tasks in new environments by accumulating and updating their knowledge. Current LLM-based agents process past experiences through full histories of observations, summarization, or retrieval augmentation. However, these unstructured memory representations do not facilitate the reasoning and planning essential for complex decision-making. In our study, we introduce AriGraph, a novel method wherein the agent constructs and updates a memory graph that integrates semantic and episodic memories while exploring the environment. We demonstrate that our Ariadne LLM agent, consisting of the proposed memory architecture augmented with planning and decision-making, effectively handles complex tasks within interactive text game environments that are difficult even for human players. Results show that our approach markedly outperforms other established memory methods and strong RL baselines on a range of problems of varying complexity. Additionally, AriGraph demonstrates competitive performance compared to dedicated knowledge graph-based methods in static multi-hop question answering.
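The abstract describes a memory graph that combines semantic memory (accumulated world knowledge) with episodic memory (specific past observations). The sketch below is a minimal, hypothetical illustration of that idea and not the authors' implementation: semantic memory is held as relation triplets, each episodic entry records an observation together with the triplets extracted from it, and the agent can query the graph by entity. All names (`AriGraphSketch`, `update`, `query`) are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AriGraphSketch:
    """Illustrative semantic + episodic memory store (not the paper's code)."""
    semantic: set = field(default_factory=set)    # (subject, relation, object) triplets
    episodic: list = field(default_factory=list)  # (step, observation, extracted triplets)

    def update(self, step, observation, triplets):
        """Record an episodic entry and merge its triplets into semantic memory."""
        new = [t for t in triplets if t not in self.semantic]
        self.semantic.update(new)
        self.episodic.append((step, observation, list(triplets)))
        return new  # triplets that were not already known

    def query(self, entity):
        """Return semantic triplets that mention the given entity."""
        return [t for t in self.semantic if entity in (t[0], t[2])]

# Example: two steps of a text-game episode.
memory = AriGraphSketch()
memory.update(1, "You see a key on the table.", [("key", "on", "table")])
memory.update(2, "You take the key.", [("key", "in", "inventory")])
print(sorted(memory.query("key")))
```

The real method also updates the graph as the world changes (e.g., pruning outdated facts when the key is no longer on the table); this sketch only accumulates triplets and is meant to convey the two-layer structure, not the update policy.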
Anokhin et al. (2025) studied this question.
www.synapsesocial.com/papers/68d46aa631b076d99fa67352 — DOI: https://doi.org/10.24963/ijcai.2025/2
Petr Anokhin
Nikita Semenov
Artyom Sorokin
University of Oxford
Skolkovo Institute of Science and Technology
London Institute for Mathematical Sciences