Contemporary artificial intelligence systems are frequently described as possessing representations, internal world models, or understanding-like states. These descriptions persist despite repeated failures in reliability, grounding, and generalization. This paper argues that such failures are not incidental but structural. We introduce the Explanatory Constraint Knife (ECK): a criterion that distinguishes genuine explanation from representational description. Under ECK, an explanation is valid only insofar as it constrains what cannot occur. Representational accounts, we argue, systematically fail this criterion. They describe internal correlations but do not enforce exclusion. As a result, they misclassify perceptual fluency as understanding and optimization success as explanatory adequacy. This paper establishes ECK as a necessary epistemic filter for AI explanation and prepares the ground for subsequent analyses of hallucination and long-horizon failure.
Aruna Reddy Katanguri
www.synapsesocial.com/papers/69e321aa40886becb6540b7a — DOI: https://doi.org/10.5281/zenodo.19614522