Artificial intelligence systems frequently generate fluent and internally coherent outputs that are nevertheless false. These failures are commonly described as hallucinations and attributed to optimization errors, data deficiencies, or alignment gaps. This paper argues that such explanations misdiagnose the phenomenon. Building on the Explanatory Constraint Knife, it analyzes hallucination as a structural consequence of insufficient constraint persistence: AI systems succeed at perceptual completion while lacking the survival-grade feedback mechanisms that would render false outputs impossible. Hallucination is therefore not a failure of perception but a failure of constraint enforcement. The paper introduces a formal framework based on constraint-admissible sets and perceptual attractors, and shows that hallucination emerges when perceptual attractors extend beyond enforced constraints. This analysis generalizes across artificial, biological, and non-living systems. This work is part of the YU-AI series.
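As a minimal formal sketch of the core condition stated above (the notation here is illustrative, not the paper's own): let $S$ be a system's state space, let $C \subseteq S$ be the constraint-admissible set, i.e., the states its enforced constraints permit, and let $A \subseteq S$ be the set of states reachable under its perceptual attractors. Hallucination then corresponds to

\[
A \setminus C \neq \emptyset ,
\]

the attractor dynamics admitting states that no enforced constraint excludes, while veridical operation requires $A \subseteq C$.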
Aruna Reddy Katanguri
www.synapsesocial.com/papers/69e3213840886becb65405bf — DOI: https://doi.org/10.5281/zenodo.19617031