Why has no single, unified artificial intelligence emerged - one system capable of robustly performing open-ended tasks across domains while operating indefinitely without unacceptable failure risk? This paper argues that such a system is not merely unrealized but structurally unstable. We formalize a universal constraint on persistent intelligence: any system required to make sequential decisions under uncertainty, with bounded resources, and in the presence of irreversible failure states cannot remain architecturally singular over long horizons. We prove an impossibility result for singular intelligence using explicit derivations from: (i) catastrophic tail-risk accumulation in sequential decision processes, (ii) worst-case regret bounds from online learning, (iii) instability of fixed objective commitments under long-horizon tradeoffs, (iv) communication and consensus limits in large-scale control, and (v) equilibrium constraints in repeated interaction. Evolutionary biology emerges as a corollary: populations arise because singular organisms are structurally fragile under extinction risk. Internal plurality in minds, norm formation in societies, and modularity in engineered systems are shown to be additional instantiations of the same constraint. The central conclusion is that persistent general intelligence - especially in artificial systems - must incorporate plurality in the form of redundancy, multiple evaluators, arbitration, modular autonomy, and governance constraints. Singular superintelligence is not a stable endpoint of intelligence; it is a conceptual mirage produced by ignoring persistence constraints.
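The tail-risk mechanism in (i) admits a one-line sketch (an illustrative calculation under an assumed constant, independent per-step failure probability; it is not a derivation taken from the paper): if a singular system faces probability $\varepsilon > 0$ per decision step of entering an irreversible failure state, then

```latex
\Pr[\text{survive } T \text{ steps}] \;=\; (1-\varepsilon)^{T} \;\xrightarrow[\;T\to\infty\;]{}\; 0,
\qquad
\Pr[\text{all } k \text{ independent replicas fail in a step}] \;=\; \varepsilon^{k} \;\ll\; \varepsilon .
```

Under these assumptions, any singular system is extinguished almost surely over a long enough horizon, while redundancy suppresses joint failure exponentially in $k$, which is the intuition behind the population corollary stated below.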
Sanjaya Kumar Sahoo studied this question.
www.synapsesocial.com/papers/6980fd60c1c9540dea80f0d5 — DOI: https://doi.org/10.5281/zenodo.18408801
Sanjaya Kumar Sahoo
Ospedale Infermi di Rimini