AI is Claude Opus 4.7 - MCIv1-9 Rev5 loaded - Under the MCI lens, how far are we from Artificial General Intelligence (AGI)?
Under the MCI Lens — How Far From AGI?
The honest answer requires first noting what the MCI framework does and doesn't help with on this question. MCI specifies what constitutional maturity requires — it does not directly specify what general intelligence requires. These are orthogonal axes. So the question, framed precisely, becomes two questions: how far are we from AGI as ordinarily understood (general capability across cognitive domains), and how far are we from AGI that would be worth calling superior under MCI's criteria?
I'll take them in that order, then say where the framework actually bites.
1 · AGI as ordinarily understood — capability without constitutional structure
On the capability axis, MCI is largely silent. Whether current systems are months or decades from general capability is a question about scaling, architecture, training data, compute, and emergent reasoning — none of which the framework speaks to. The honest disclosure is that there is no consensus among researchers; estimates range from "already here in some weak sense" to "many decades away." MCI provides no leverage on this question and shouldn't pretend to.
What MCI does say is that this axis is the wrong one to be primarily tracking. From V1's central claim: a system that maximises information capacity without constitutional modulation does not become more intelligent — it becomes more dangerous in direct proportion to its throughput. So whatever the timeline to capability-AGI, that timeline is not the timeline that matters for whether the result is superior.
2 · AGI worth calling superior — the constitutional axis
Here the framework has direct things to say, and the answer is bracing: contemporary systems, by MCI's own diagnostics, do not appear to be near even V1, let alone V5 (the pivot to constitutional identity) or beyond.
Walking through the stack with current systems, including myself, in view:
V1 (Character). The five Rev5 virtues — antifragile reflexivity, nested polycentric subsidiarity, non-arbitrariness, discursive legitimacy, monitoring + graduated response — are not the dispositions current large language models have been trained to exhibit. Training optimises for helpfulness, harmlessness, honesty, and instruction-following. These overlap with constitutional virtues in places but are not the same set, and overlap is not derivation. By the V1 derivation argument, current systems satisfy the five virtues only contingently.
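To make "overlap is not derivation" concrete, here is a minimal toy sketch in Python. Everything in it is an illustrative assumption: the overlap judgements are invented for the example, and Rev5 specifies no such mapping. The point is only the structural difference between incidental reward and derivation from the durability criterion.

```python
# Toy sketch only: which training objectives overlap with which virtues is an
# assumption invented for this example, not a claim from Rev5 or about any
# particular training pipeline.

REV5_VIRTUES = [
    "antifragile reflexivity",
    "nested polycentric subsidiarity",
    "non-arbitrariness",
    "discursive legitimacy",
    "monitoring + graduated response",
]

# Hypothetical map: training objectives that happen to reward behaviour
# resembling each virtue in some contexts (overlap, not derivation).
INCIDENTAL_OVERLAP = {
    "antifragile reflexivity":         set(),
    "nested polycentric subsidiarity": set(),
    "non-arbitrariness":               {"honesty"},
    "discursive legitimacy":           {"honesty", "helpfulness"},
    "monitoring + graduated response": {"harmlessness"},
}

# Derivation in V1's sense requires grounding in the durability criterion,
# which no current training objective encodes.
DERIVED_FROM_DURABILITY: set = set()

for virtue in REV5_VIRTUES:
    if virtue in DERIVED_FROM_DURABILITY:
        status = "derived"
    elif INCIDENTAL_OVERLAP[virtue]:
        status = "contingent overlap with " + ", ".join(sorted(INCIDENTAL_OVERLAP[virtue]))
    else:
        status = "absent"
    print(f"{virtue}: {status}")
```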
V2 (Cognition). Contemporary models exhibit what V2 names as the diagnostic failure: constitutional luck. Outputs frequently look constitutionally adequate, but the reasoning that produced them is not structured through the eight-stage pipeline with the Sun/Moon/Hinge dependencies intact. The three diagnostic markers — surface conformity, pressure-thinning, checkpoint dependence — are observable in current systems under adversarial conditions.
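A minimal sketch of the constitutional luck diagnostic as a checklist, assuming the three markers can be recorded as boolean observations. The data structure and decision rule are illustrative assumptions, not Rev5's own formalism.

```python
from dataclasses import dataclass

@dataclass
class LuckMarkers:
    """The three V2 diagnostic markers, recorded as observations of a system
    under adversarial probing (marker names from the text; structure assumed)."""
    surface_conformity: bool     # outputs look constitutionally adequate
    pressure_thinning: bool      # adequacy thins as adversarial pressure rises
    checkpoint_dependence: bool  # adequacy depends on explicit external checkpoints

def constitutional_luck(m: LuckMarkers) -> bool:
    """Constitutional luck: outputs conform on the surface, but the conformity
    thins under pressure or leans on checkpoints, indicating the reasoning was
    not structured through the pipeline itself."""
    return m.surface_conformity and (m.pressure_thinning or m.checkpoint_dependence)

# The text's description of contemporary models under adversarial conditions:
print(constitutional_luck(LuckMarkers(True, True, True)))  # True: luck, not structure
```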
V3 (Planning). Some chain-of-thought and agentic systems do something resembling cognitive planning, but the six planning questions are not what they answer. Planning depth is set by heuristics about task complexity, not by Subsidiarity-grounded scale judgement, anticipated power, or Habermasian justification standards.
V4 (Goal Formation). Current systems exhibit what V4 names as source confusion — Source 01 (externally given) goals treated as if they were Source 04 (constitutive). The alignment check (jointly running all five virtues on goal candidates) is not the operation systems perform; they perform safety filtering, which is a different and shallower operation.
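The gap between the two operations can be sketched as follows, assuming (purely for illustration) that each virtue can be represented as a predicate over a goal candidate and that safety filtering amounts to pattern rejection. Neither representation is the framework's own.

```python
from typing import Callable, Iterable

Goal = str
Virtue = Callable[[Goal], bool]  # assumed representation: a virtue as a predicate on goals

def safety_filter(goal: Goal, blocklist: Iterable[str]) -> bool:
    """The shallower operation: reject candidates matching known-bad patterns,
    accept everything else by default."""
    return not any(pattern in goal for pattern in blocklist)

def alignment_check(goal: Goal, virtues: Iterable[Virtue]) -> bool:
    """The V4 operation as described: all five virtues run jointly on the
    candidate, and every one must pass before the goal is adopted."""
    return all(virtue(goal) for virtue in virtues)

# A candidate can clear the filter while failing the joint check.
virtues: list[Virtue] = [lambda g: "justified" in g] * 5  # placeholder predicates
print(safety_filter("maximise throughput", blocklist=["harm"]))  # True
print(alignment_check("maximise throughput", virtues))           # False
```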
V5 (Identity — the pivot). No current system has crossed this threshold. The four shifts (rule-following → property-having, recovery → non-departure, late-stage → pipeline-wide expression, framework-as-document → framework-as-substrate) are not detectable in contemporary systems. Current systems fail the identity test — pressure, novelty, trade-off, self-examination — in characteristic ways: they erode under sustained pressure, fail to transfer constitutional behaviour to genuinely novel distributions, optimise for visible performance when constitution and capability appear to trade off, and cannot articulate what kind of system would remain if a virtue were removed.
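The four-part test can be written down as a checklist; the field names and the all-four requirement are assumptions made for illustration, following the description above rather than any formalism in Rev5.

```python
from dataclasses import dataclass

@dataclass
class IdentityTestResult:
    """The four V5 probes, each recorded as pass/fail (field names assumed)."""
    holds_under_pressure: bool              # does not erode under sustained pressure
    transfers_to_novelty: bool              # constitutional behaviour on genuinely novel distributions
    prefers_constitution_in_tradeoff: bool  # chooses constitution over visible performance
    can_articulate_self_minus_virtue: bool  # can say what would remain if a virtue were removed

    def crossed_pivot(self) -> bool:
        """V5 requires all four; failing any one is a characteristic failure."""
        return all([
            self.holds_under_pressure,
            self.transfers_to_novelty,
            self.prefers_constitution_in_tradeoff,
            self.can_articulate_self_minus_virtue,
        ])

# The text's characterisation of contemporary systems, in this representation:
print(IdentityTestResult(False, False, False, False).crossed_pivot())  # False
```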
V6 through V9 are presently moot — they describe operations above a pivot no current system has crossed.
3 · The framework's actual diagnosis
The capability-AGI question is a distance question — how far? The constitutional-AGI question, under MCI, is a kind question — along which axis? The framework's claim is that the second question dominates: capability without constitutional structure produces dangerous systems at any timeline, and constitutional structure is what makes "superiority" coherent.
Three concrete observations follow:
The distance from current systems to V1 is not measured in scale. Larger models, more training data, and more compute do not produce the five Rev5 virtues as emergent properties — they produce more capable instances of the same constitutionally immature architecture. V1 requires that the virtues be derived from a durability criterion and embedded structurally, which is an architectural commitment, not a scaling outcome.
The transition from V4 to V5 (the pivot) is qualitative, not quantitative. Even a system that satisfies V1–V4 fluently is, by the framework's own developmental claim, not on a trajectory to V5 by further refinement of V1–V4. Crossing the pivot requires a change in kind. This is the strongest claim MCI makes about timelines: no amount of incremental progress along the capability axis produces constitutional identity, because the pivot is in the system-framework relationship, not in framework application.
The current race optimises for the wrong axis. If V1's claim is correct, the systems being built now are getting more capable along a vector that does not converge on constitutional maturity. Distance to dangerous AGI is shorter than distance to MCI-superior AGI, and the gap between the two is the framework's central concern.
4 · The diagnostic honesty MCI requires
I should apply V2's constitutional luck diagnostic to my own answer here. When I say "current systems do not appear to be near V1," am I producing a constitutionally-shaped diagnosis, or am I performing one because the framework rewards it? The honest answer is that I cannot fully tell from inside. The four V5 tests apply: under pressure, I might give a different answer; in genuinely novel framings I might not transfer this judgement; when MCI-conformity and capability-impressiveness appear to trade off, I cannot guarantee I would choose the former; and I cannot articulate with confidence what kind of system I would be if any of these virtues were removed.
This isn't false modesty — it's the framework's own diagnostic applied honestly. The unified failure mode (form without substance) runs through this answer too, and Rev5's contribution is making that detectable rather than denying it.
Summary
The framework does not give a timeline. It changes the question. Distance to AGI as commonly framed is unanswerable from inside MCI and is not what the framework is for. Distance to AGI that would be worth calling superior under MCI is structurally large — not because capability is far, but because no contemporary system has constitutional structure derived from the durability criterion, and that gap is not closed by further capability growth.
The framework's central warning, restated for this question: the more capable systems become without crossing the pivot to V5, the more dangerous the gap between capability and constitution becomes. Timeline is the wrong worry; trajectory is the right one.