The rapid deployment of large language models across high-stakes domains has shifted the dominant risk profile of artificial intelligence systems. While contemporary research emphasizes accuracy, alignment, uncertainty expression, and post-hoc safety mechanisms, a more fundamental failure mode remains insufficiently examined: the illegitimate continuation of cognition. Modern AI systems routinely generate coherent, confident, and contextually fluent reasoning in situations where epistemic grounding, authority, or consequence proportionality is absent. In such cases, harm arises not from incorrect content but from the act of reasoning itself, which functions as an implicit simulation of authority and a transfer of perceived decision responsibility from human to machine. This paper introduces Governed Cognition, a foundational theory of authority-aware artificial intelligence in which the ability to reason is explicitly separated from the right to reason. We formalize epistemic legitimacy as a first-class constraint on cognition and argue that reasoning must be conditionally authorized rather than treated as a default system behavior. Governed Cognition is structured around six core primitives: question-first cognition, cognitive permission, cognitive reflexivity, cognitive risk friction, minimum action-enabling distance, and sovereign output resolution. Within this framework, refusal and silence are not safety failures or terminal fallbacks but intentional, auditable cognitive outcomes when continuation would constitute illegitimate participation. We further introduce the Burden of Abstention: the requirement that intelligent systems be able to demonstrate that non-participation was possible, intentional, and structurally enforced prior to reasoning activation. This reframes AI safety away from output correctness and toward governance over cognitive participation itself.
The framework is architectural and epistemic rather than algorithmic, and is applicable across regulated and high-consequence environments. Governed Cognition establishes disciplined restraint—not predictive power or fluency—as the defining criterion of responsible artificial intelligence.
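As a purely illustrative sketch of the idea that reasoning should be conditionally authorized and that abstention should be a first-class, auditable outcome, one might imagine a permission gate evaluated before any reasoning begins. Everything below (the names `cognitive_permission`, `Request`, and the specific legitimacy checks) is hypothetical and not drawn from the paper, which presents the framework as architectural and epistemic rather than algorithmic.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    REASON = auto()   # reasoning is authorized to proceed
    ABSTAIN = auto()  # intentional, logged non-participation

@dataclass
class Request:
    # Hypothetical legitimacy signals, loosely mirroring the abstract's
    # grounding / authority / proportionality conditions.
    grounded: bool       # is epistemic grounding present?
    authorized: bool     # does the system have authority to reason here?
    proportionate: bool  # are the consequences proportionate to its role?

def cognitive_permission(req: Request) -> tuple[Outcome, str]:
    """Gate cognition: reasoning proceeds only if every check passes.

    Abstention is returned as a normal, auditable outcome with a reason
    string, not raised as an error or treated as a fallback.
    """
    checks = {
        "grounding": req.grounded,
        "authority": req.authorized,
        "proportionality": req.proportionate,
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        return Outcome.ABSTAIN, "abstained: failed " + ", ".join(failed)
    return Outcome.REASON, "reasoning authorized"

# Usage: the gate runs before any reasoning activation.
print(cognitive_permission(Request(True, True, True)))
print(cognitive_permission(Request(True, False, True)))
```

The design point this toy captures is that the abstention branch produces a record of *why* participation was declined, which is the kind of evidence the Burden of Abstention would require.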
Teodor Minchev
www.synapsesocial.com/papers/6980ffa4c1c9540dea812495 — DOI: https://doi.org/10.5281/zenodo.18441502