Large Language Models (LLMs) demonstrate remarkable reasoning capabilities but suffer from static knowledge bases frozen at training time and an inability to persistently accumulate new information from interactions. Despite progress in memory-augmented LLMs, no existing system provides a structured, interactive framework for resolving the inevitable conflicts that arise when humans teach knowledge to an AI through dialogue. This research proposal presents a novel cognitive architecture for knowledge elicitation in which a frozen LLM builds and maintains an external knowledge graph through natural-language dialogue, starting from a tabula rasa. We introduce the first hierarchical, interactive conflict-resolution taxonomy designed specifically for dialogue-driven knowledge-graph construction in frozen-LLM architectures, systematically addressing temporal state changes, cardinality violations, entity-canonicalization conflicts, and logical contradictions. The architecture decouples reasoning (the LLM) from memory (a hybrid vector store and property graph), enabling model-agnostic operation while preserving full explainability through an externalized knowledge representation. This work advances Explainable AI by making the system's mental model fully inspectable, correctable, and transferable across LLM backends.
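The four conflict categories named in the abstract can be pictured as the dispatch layer of such a system. The following Python sketch is purely illustrative and is not taken from the proposal: the triple representation, the `functional` and `negates` flags, and the classification rules are assumptions chosen to make the taxonomy concrete, not the authors' actual method.

```python
from enum import Enum, auto
from typing import Optional

class ConflictType(Enum):
    # The four conflict categories from the proposed taxonomy.
    TEMPORAL_STATE_CHANGE = auto()     # a new fact supersedes an older state
    CARDINALITY_VIOLATION = auto()     # a relation exceeds its allowed arity
    ENTITY_CANONICALIZATION = auto()   # two surface forms name one entity
    LOGICAL_CONTRADICTION = auto()     # two facts cannot both hold

def classify_conflict(existing: dict, incoming: dict) -> Optional[ConflictType]:
    """Classify how an incoming triple conflicts with a stored one.

    Triples are dicts with 'subject', 'predicate', 'object' keys.
    'functional' marks predicates that admit one current value
    (e.g. lives_in); 'negates' marks an explicit negation. Both
    flags are hypothetical conveniences for this sketch.
    """
    if existing["subject"] != incoming["subject"]:
        return None  # different subjects: no conflict at this level
    if incoming.get("negates") == existing["predicate"]:
        return ConflictType.LOGICAL_CONTRADICTION
    if existing["predicate"] == incoming["predicate"] and \
            existing["object"] != incoming["object"]:
        if incoming.get("functional", False):
            # One-value-at-a-time predicate: treat as a state update.
            return ConflictType.TEMPORAL_STATE_CHANGE
        # Multi-valued predicate exceeding its schema-declared arity.
        return ConflictType.CARDINALITY_VIOLATION
    return None
```

In a full system, each returned `ConflictType` would route to a different interactive resolution dialogue (e.g. asking the user whether a new `lives_in` value supersedes or coexists with the old one) rather than being resolved silently.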
Leszek J. Cierniak
www.synapsesocial.com/papers/69d8968f6c1944d70ce08149 — DOI: https://doi.org/10.5281/zenodo.19470983