Agentic AI systems extend large language models beyond single-turn text generation into tool use, task execution, planning, environmental feedback, and iterative interaction with external systems. As these systems become more action-oriented, a central control problem emerges: agents must decide not only what they can do, but what kind of operation is appropriate for a given task state. Current agent architectures often rely on procedural loops, tool-selection heuristics, or task-specific prompting, but they do not always provide an explicit structure for determining when an agent should search, retrieve memory, reason, execute, ask for clarification, revise its plan, stop, or redirect its behaviour.

This paper proposes the Ontology–Process–Trajectory (OPT) framework as a cognitive control layer for agentic AI. OPT conceptualizes cognition as pathway-based signal propagation between generative sources and stabilization sinks. In the context of AI agents, the framework is not used to claim machine consciousness or to describe the internal neural mechanisms of language models. Rather, it is proposed as an external orchestration layer that can classify task states, constrain tool use, guide reasoning and action modes, define stopping conditions, and generate interpretable traces of agent behaviour.

From this perspective, agent behaviour is treated not as a flat sequence of tool calls but as a process of pathway-based stabilization. Different task states may require different routes: responding to current external feedback, reconstructing prior context, inferring an underlying structure, generating possible solutions, adapting communication to an audience, or checking alignment with the user’s original goal. These pathways can guide when an agent should act, when it should delay action, when it should shift modes, and when the task has been sufficiently stabilized.
The paper develops a conceptual architecture for OPT-guided agents, including pathway classification, pathway-to-tool policy mapping, feedback-based pathway adjustment, pathway-specific stopping conditions, pathway logging, and possible extensions to multi-agent orchestration. The framework is presented as a theoretical design rather than an implemented system. Its aim is to explore whether agentic AI can become more transparent, bounded, and structurally interpretable by making its external decision process explicit.
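Since the framework is presented as a theoretical design, a concrete implementation is left open. As one illustration only, the first three components above (pathway classification, pathway-to-tool policy mapping, and pathway-specific stopping conditions) could be sketched as follows; the pathway names, task-state fields, and tool labels are assumptions introduced here for illustration, not part of the framework's specification.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Pathway(Enum):
    """Illustrative pathway labels, one per route described in the abstract."""
    RESPOND = auto()   # respond to current external feedback
    RECALL = auto()    # reconstruct prior context from memory
    INFER = auto()     # infer an underlying structure
    GENERATE = auto()  # generate possible solutions
    ALIGN = auto()     # check alignment with the user's original goal

# Pathway-to-tool policy: each pathway constrains which tools may fire.
# Tool names here are hypothetical placeholders.
POLICY = {
    Pathway.RESPOND:  {"tool_call", "reply"},
    Pathway.RECALL:   {"memory_lookup"},
    Pathway.INFER:    {"reason"},
    Pathway.GENERATE: {"reason", "tool_call"},
    Pathway.ALIGN:    {"ask_user"},
}

@dataclass
class TaskState:
    """A toy task-state record; real classifiers would use richer signals."""
    has_feedback: bool = False
    context_missing: bool = False
    goal_ambiguous: bool = False
    stabilized: bool = False

def classify(state: TaskState) -> Pathway:
    """Map a task state to a single pathway (toy priority ordering)."""
    if state.goal_ambiguous:
        return Pathway.ALIGN
    if state.context_missing:
        return Pathway.RECALL
    if state.has_feedback:
        return Pathway.RESPOND
    return Pathway.GENERATE

def allowed_tools(state: TaskState) -> set[str]:
    """Constrain tool use to the active pathway's policy."""
    return POLICY[classify(state)]

def should_stop(state: TaskState) -> bool:
    """Pathway-specific stopping condition, reduced here to one flag."""
    return state.stabilized
```

The point of such a layer is that every tool call is preceded by an explicit, loggable pathway decision, which is what would make the agent's external decision process inspectable.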
Eve Liu
www.synapsesocial.com/papers/69f6e67c8071d4f1bdfc729b — DOI: https://doi.org/10.5281/zenodo.19954668