Artificial intelligence does not evolve gradually. It jumps. It produces quiet plateaus followed by discontinuities that shock incumbents, confuse forecasters and reorder competitive landscapes.
We talk about artificial intelligence as though it were a faster spreadsheet or a better search engine. That framing is comfortable — and dangerously wrong.
AI does not progress like most technologies. The right mental model is stepwise upheaval, not incremental improvement.
This is not a metaphor. It’s the maths.
Over the past several years, researchers have documented scaling laws showing that when you enlarge models, data and compute together, performance follows smooth power-law curves — until new abilities appear that smaller systems simply did not have. That 2020 scaling-law result — now a bedrock of modern AI planning — formalises why bigger models trained longer on more data keep getting better in predictable ways. At the same time, algorithmic efficiency — how cleverly we use the same compute — has been doubling roughly every 16 months; in image classification, the compute required to reach a fixed benchmark fell 44-fold between 2012 and 2019. Hardware and software gains multiply.
Put plainly: even if chips stopped improving tomorrow, smarter training would still accelerate capability. When chips also improve and clusters scale, the curve bends faster.
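Those two efficiency figures are mutually consistent, which is worth checking. A back-of-envelope sketch, using only the numbers quoted above:

```python
import math

# A 44-fold efficiency gain over 2012-2019 (roughly 84 months) implies
# a doubling time of 84 / log2(44) months.
doublings = math.log2(44)       # about 5.46 doublings
doubling_time = 84 / doublings
print(round(doubling_time, 1))  # → 15.4, i.e. roughly the 16-month figure
```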
Today, the supply side of intelligence is compounding. One recent analysis estimates that global AI computing capacity has been growing about 3.3 times per year since 2022 — equivalent to a seven-month doubling time — driven largely by specialised accelerators. A parallel metric from independent evaluators finds that the “time horizon” of tasks frontier systems can complete autonomously has also been doubling on a similar cadence. Whatever your preferred indicator, the message is the same: your planning assumptions are obsolete long before your organisational chart updates.
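The equivalence between 3.3 times per year and a seven-month doubling is just a change of logarithmic base; a one-line check:

```python
import math

# Convert 3.3x growth per year into a doubling time in months:
# doubling_months = 12 * ln(2) / ln(3.3)
doubling_months = 12 * math.log(2) / math.log(3.3)
print(round(doubling_months, 1))  # → 7.0
```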
‘Emergence’ isn’t hype: it’s an operational risk
As models scale, capabilities can appear discontinuously — not present at smaller sizes, then suddenly competent. The technical literature calls these emergent abilities. That is bad news for any governance or go-to-market process that treats tomorrow’s model as a slightly better version of today’s.
Markets have seen this movie. From currency pegs to collateralised debt obligations, stability narratives often hold — until they do not. The prudent response to emergence is to design for surprises: tighter feedback cycles between deployment and risk monitoring, capital buffers in compute supply chains and regulatory mechanisms that key off empirical capability — not brand names or parameter counts.
Recursive acceleration: AI that helps build its successors
We have also entered the era in which AI helps write the software — and even the scaffolding — that improves AI itself. Academic work has demonstrated recursively self-improving code pipelines, where language-model-driven “improvers” iteratively enhance their own optimiser. It is not a sci-fi intelligence explosion; it is a real feedback loop that speeds iteration. Recent reporting on cutting-edge coding models reflects the same pattern: systems assisting in their own development, with humans still firmly in the loop. The loop is not closed — but it is tightening.
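The compression is easy to see in a toy model. Assume, purely for illustration (the 15% figure is invented, not measured), that each generation of AI tooling shortens the next generation's development cycle by 15%:

```python
# Toy model: generation 1 takes 12 months to build; each generation's
# tools cut the next development cycle by an assumed 15%.
calendar = 0.0
cycle = 12.0
generations = 0
while calendar + cycle <= 60:  # a five-year window
    calendar += cycle
    generations += 1
    cycle *= 0.85  # the feedback loop: better tools, faster next cycle

print(generations)  # → 8, versus 5 generations at a constant 12-month cycle
```

Set the speed-up to zero and the loop returns the linear answer; the gap between the two is precisely what linear roadmaps miss.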
For enterprises, this means software roadmaps compress unexpectedly. For regulators, it means capability assessments based on last quarter’s benchmarks are a rear-view mirror. For investors, it creates a trap: valuation models that discount future cash flows linearly will misprice firms that compound their development velocity.
The productivity dividend and its distribution
Forecasts differ on totals, but the direction is consistent: generative AI could add trillions in economic value annually through productivity gains in customer operations, marketing, coding and research and development. Goldman Sachs puts a rough order-of-magnitude estimate at a 7% global GDP lift over a decade, with large exposure across knowledge work. If you think that is aggressive, the more conservative Penn Wharton model still finds enduring gains in total factor productivity and GDP levels across multi-decade horizons.
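For scale: a 7% lift in the GDP level, if it arrived spread evenly over ten years, would be a modest annual increment. A quick check of the arithmetic, using the figure above:

```python
# Annualise a 7% GDP-level gain spread evenly over a decade:
annual_rate = 1.07 ** (1 / 10) - 1
print(round(100 * annual_rate, 2))  # → 0.68 (% per year)
```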
The nuance investors should heed: productivity arrives in lumps. A single model upgrade can unlock a wide band of tasks, not a thin sliver. If your business model assumes a smooth adoption curve, expect to be surprised by punctuated reality.
Non-linear tech meets linear institutions
Education pipelines, compliance regimes and even macro models assume gradualism. AI violates that assumption. Capacity jumps break service-level agreements, saturate safety teams and render “annual plan” governance obsolete. The solution is not performative caution or performative acceleration. It is shorter control loops:
- Capability-contingent rules: tie permissions and guardrails to measured abilities (for example, code execution, autonomy, manipulation), not model names
- Continuous validation: risk testbeds that track current model behaviour under distribution shift, not last year’s evaluation suite
- Capital and compute buffers: treat critical AI capacity like strategic inventory — because it is.
In aviation, regulators spent a decade designing digital traffic management to integrate new classes of flight. The sky became computable long before aircraft became autonomous. Expect a parallel in information space: automation of governance will precede full autonomy of systems.
Where Solara Sovereign fits: systems, not slogans
There is a geopolitical lesson here. Nations that outsource essential technological capacity lease certainty from others; the terms change when it hurts most. In health security we learned that dependence is not resilience. The same applies to AI.
Solara Sovereign is built on a simple doctrine: treat essential capabilities as sovereign infrastructure — planned, audited and locally governed, with majority local participation. In pharmaceuticals that means end-to-end capacity from research and development to fill-finish. In AI, it means three pillars:
- Domestic capability stacks (training, fine-tuning and inference) sized to national priorities, not global marketing cycles
- Standards-first trust systems (measurement, red-teaming, post-deployment monitoring) embedded in law and procurement
- Human capital pipelines that treat algorithmic literacy like public health — measured, funded, universal.
We advance this not as ideology, but as risk maths: when capabilities jump non-linearly, you cannot import certainty on demand.
What boards and policymakers should do now
Retire linear roadmaps. If your risk register updates annually, you are running yesterday’s playbook. Build quarterly capability reviews tied to objective metrics (reasoning, autonomy, tool use).
Budget for discontinuities. Compute access and safety validation are the new critical-path items. Both are on multi-quarter procurement cycles. The market is already signalling seven-month compute doublings — plan accordingly.
Benchmark against emergence. Do not ask “How big is the model?” Ask “What can it now do that it could not last quarter?” The technical literature shows why that question matters.
Incentivise efficiency. Efficiency gains substitute for raw scale and can change the economics of adoption; they have compounded faster than Moore’s law.
The hardest part: feeling vs fact
For many executives and officials, the emotional range around AI whipsaws between awe and dread — and that volatility is itself rational. The discipline is to separate feeling from forecast. The facts on the ground are clear: capability climbs predictably with scale; compute supply is compounding; efficiency is improving; emergent abilities will continue to surprise; and partial self-improvement loops are already tightening development cycles.
That is the non-linear reality confronting markets and states alike.
There was a time when a handful of institutions set the global standard for certainty in science and delivery. We honour that legacy best by designing today’s systems for today’s dynamics — fact by fact, metric by metric, with the humility to expect the next leap to arrive early and all at once.
If you prepare for straight lines, you will be shocked.
If you prepare for leaps, you will be ready.
Lawrence K Woods is the founding executive chairman and project architect at the Solara Collective.