Artificial intelligence (AI) is often framed as a technological innovation subject to regulatory oversight. This manuscript advances a different thesis: AI represents a structural shift in the conditions under which governance operates — one that alters the basic prerequisites of public authority. Traditional sovereign models presuppose visibility, interpretability, and enforceable rule execution. AI systems unsettle these assumptions in ways that exceed conventional regulatory categories. The manuscript identifies two core constraints: the Visibility Gap, describing the divergence between system behaviour and human oversight, and the Rule-Execution Gap, describing the separation between formal rules and their operational realisation in AI systems. Together, these gaps give rise to a Visibility Monopoly, where hyperscalers' privileged access to their own models combines with the Translation Layer — the global advisory ecosystem of law firms, audit networks, and consultancies that operationalise compliance — to determine how behaviour, value, and agency become legible. This position, historically occupied by the State, now shapes every downstream governance response. The manuscript develops the Dual-Compiler Thesis, arguing that governance depends on two constitutive institutional languages: law, which defines permission for AI action, and accounting, which measures AI through classification and valuation. The Categorical Gap describes the structural mismatch between legal categories designed for deterministic, territorial activity and AI systems that are neither. The Measurement Gap identifies the rupture that forms when industrial-era accounting frameworks confront probabilistic, self-updating capital. Together, these gaps produce an Invisible Charter — the fused evaluative grammar through which AI systems become economically real — and concentrate interpretive authority in professional intermediaries who now function as the Translation Layer.
The Incorporation Heuristic (I = V × W × N) models the capacity of nations to internalise AI activity as a multiplicative function of visibility, workability, and necessity — reframing regulatory competition as differential alignment between national capability and computational reality. Gateway Rules and the Golden Image describe how regulatory requirements propagate across borders via system design and professional services infrastructure rather than formal sovereign adoption. The Topology of Power maps the resulting hierarchy of operational influence — Root Access, Admin Access, and User Access — determining how far a state's rules travel beyond its borders. The Invisible Constitution names the operational rulebook that governs global AI regardless of what formal law provides. The manuscript then turns to the material foundations of AI governance. It identifies capacity primacy — the structural condition in which control of computational resources conditions the effectiveness of legal and regulatory authority. Compute, data, and models constitute the three foundational assets of the AI era, each exhibiting properties that no previous industrial system has combined: extreme capital indivisibility, radical supply-chain concentration, and cumulative path-dependence. Five non-substitutable constraints (energy, semiconductors, capital, talent, and time) determine whether a state can govern AI on its own terms. Legitimacy Debt accumulates when formal responsibility and real decision authority drift apart, producing a widening gap between statute and practice. 
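The multiplicative structure of the Incorporation Heuristic (I = V × W × N) can be sketched in code. This is an illustrative interpretation only: the manuscript does not specify scales or scoring procedures, so the [0, 1] range, the function name, and the example values below are assumptions made for demonstration.

```python
def incorporation_capacity(visibility: float, workability: float, necessity: float) -> float:
    """Illustrative sketch of the Incorporation Heuristic, I = V × W × N.

    Each factor is assumed (for this sketch) to be scored on [0, 1].
    Because the model is multiplicative rather than additive, a near-zero
    score on any single dimension collapses overall incorporation
    capacity, however strong the other two dimensions are.
    """
    factors = {"visibility": visibility, "workability": workability, "necessity": necessity}
    for name, value in factors.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1], got {value}")
    return visibility * workability * necessity


# Hypothetical comparison: a state strong on workability and necessity
# but weak on visibility still scores low overall.
low_visibility = incorporation_capacity(0.25, 0.5, 0.75)   # 0.09375
balanced       = incorporation_capacity(0.5, 0.5, 0.5)     # 0.125
```

The multiplicative form encodes the manuscript's claim that the three conditions are jointly necessary: no surplus of workability or necessity can compensate for a state that cannot see the computational activity it seeks to internalise.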
Finally, the manuscript proposes a Third Architecture: a governance framework built on a Sovereign API that translates democratic authority into machine-executable constraints; a Protocol of Federation that embeds national rules within networked economic infrastructure; Parametric Democracy, which preserves democratic legitimacy by shifting regulation from rule authorship to constraint calibration; and a Four-Tier Constitutional Design (Value, Protocol, Operations, Audit) that constitutes the minimum constitutional structure for stabilising AI governance. This architecture is designed to preserve sovereign diversity while establishing the institutional conditions under which AI systems remain accountable to public authority. This v0.9 open-access release presents the core conceptual architecture of Code After to support early scholarly, policy, and institutional engagement. A forthcoming v1.0 edition will integrate full citations, empirical grounding, and jurisdictional applications.
Richard Yan
www.synapsesocial.com/papers/69e713b4cb99343efc98d34f — DOI: https://doi.org/10.5281/zenodo.19650088