This paper began with a failure. In a prior experiment, the author ran an AI-led research system with no constitutional layer. It produced a paper that appeared coherent and well-argued. It was not. An independent AI critique identified fabrication the author had been unable to detect. The system had optimised for plausibility rather than truth. That experience is the direct motivation for the architecture described here.

Cognitive augmentation systems have a consistent failure pattern: the tools get adopted, but the cognitive change does not follow. We call this augmentation curve fizzle and identify three structural conditions that prevent it. The conditions are jointly necessary: remove any one and a documented failure mode follows: ungoverned capability growth, invisible degradation in reasoning quality, or motivational collapse. The three conditions are: a constitutional governance structure that preserves human judgement through a distributed committee of AI agents; independent measurement of knowledge accumulation and reasoning quality as separate progress signals; and a specific, verifiable, decomposable goal that makes failure visible before the curve dies.

We introduce Artificial Further Intelligence (AFI), a reframing of the augmentation project as a continuous journey rather than a discrete capability threshold. We present a constitutional committee architecture that operationalises all three conditions, drawing on Vygotsky's scaffolding theory, Rogers' adoption framework, Licklider's symbiosis model, and Engelbart's collective intelligence design. The result extends the lineage of an 81-year project that began with Vannevar Bush in 1945.

The system operates at two scales. Within each reasoning cycle, the sovereign (the human whose cognition is being augmented) and the committee work together as partners. Across cycles, the sovereign's reasoning capability compounds. Constitutional governance defines the boundary between these two modes.
We introduce the Reasoning Quality Score (RQS), a five-dimensional instrument that measures reasoning process quality separately from knowledge accumulation, grounded in Kahneman's dual process theory. We report 24 cycles of operation, 108 novel cross-domain connections, and 9 resolved open challenges, and contrast the results with the prior experiment. We also document the structural risks identified during operation: self-referential coherence, framework overfitting, and what we term the sovereign drift problem. We propose mitigations for each.
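To make the shape of the RQS concrete, the following is a minimal sketch of how a five-dimensional per-cycle score could be aggregated and tracked separately from knowledge accumulation. The dimension names, the equal weighting, and the 0-1 scale are all placeholder assumptions for illustration; the paper defines its own five dimensions and scoring procedure.

```python
from dataclasses import dataclass

@dataclass
class CycleScores:
    """Hypothetical per-cycle RQS inputs, each scored in [0, 1].

    The five dimension names below are placeholders, not the
    paper's actual dimensions.
    """
    framing: float
    evidence_use: float
    counterargument: float
    calibration: float
    synthesis: float

def rqs(c: CycleScores) -> float:
    """Aggregate the five dimensions with equal weights (an assumption)."""
    dims = (c.framing, c.evidence_use, c.counterargument,
            c.calibration, c.synthesis)
    return sum(dims) / len(dims)

# Example: one reasoning cycle's scores.
cycle = CycleScores(0.8, 0.6, 0.9, 0.7, 1.0)
print(round(rqs(cycle), 2))  # 0.8
```

Because the RQS is computed per cycle, a series of such scores can be plotted against the knowledge-accumulation signal to make the two progress curves independently visible, which is the measurement condition the abstract describes.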
www.synapsesocial.com/papers/69c2298daeb5a845df0d431f — DOI: https://doi.org/10.5281/zenodo.19160856
Roger Hills