Beyond the Lab #001

AI Is Growing Up Fast. Luckily, You're Not As Behind As You Think.

The gap between imagination and execution just collapsed—AI went from annoying assistant to 10x capability amplifier—but most people are still debating whether it's useful while others are already building.

Published
14 Feb 2026
Read time
8 min
Archetype
Observer · Dabbler · Explorer
Series
Foundations of AI Alchemy

Somewhere in the last twelve months, the ground shifted. Not in the breathless, "everything just changed" way the AI hype machine announces every Tuesday — but in the quiet, measurable way that shows up when you actually sit down and try to build something.

If you're still deciding whether AI is worth your time, I get it. I do. Lucky for you, the window to step in is still open, but it won't stay that way forever.

The Ground Shifted While You Were Looking Away

Here's the thing — the scepticism was earned. If you tried ChatGPT in 2023, got a confidently wrong answer, watched someone on LinkedIn explain "prompt engineering" like it was a priesthood, and thought this is exhausting bullshit — you were right. The tools were rough. The hallucinations were real. The gap between what was promised and what was delivered was wide enough to drive a truck through.

So you tuned out. Reasonable.

But while you were tuned out, the money moved. Enterprise AI spending surged from $11.5 billion in 2024 to $37 billion in 2025 — a 3.2x year-over-year increase.¹ That's not hype — that's procurement decisions made by people whose jobs depend on getting it right.

The window where "still deciding" was a defensible position quietly closed. Not because the conversation changed — the conversation is still mostly noise. Because the tools changed. The usability changed. The capability gap between what these models could do a year ago and what they can do now isn't incremental. It's a different category of thing.

What Actually Changed

I need to be specific here, because "AI got better" is the kind of vague hand-wave that makes people's eyes glaze over.

In 2023, AI models couldn't hold a full codebase in their head or debug across multiple files without losing the thread. Today, Claude Opus can coordinate agent teams, autonomously fix bugs across entire systems, and work with a million tokens of context² — that's roughly 750,000 words held in active memory at once. That's not a better calculator. That's a colleague.
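If you want to see where that 750,000 figure comes from, it's just the common rule of thumb that English prose averages about 0.75 words per token — a back-of-envelope ratio, not any tokenizer's official output:

```python
# Back-of-envelope conversion (rule of thumb, not an official tokenizer figure):
# English prose averages roughly 0.75 words per token.
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int) -> int:
    """Approximate how many words fit in a given token budget."""
    return round(tokens * WORDS_PER_TOKEN)

print(tokens_to_words(1_000_000))  # → 750000
```

Real ratios vary with the content — code and dense jargon tokenize less efficiently than plain prose — but it's close enough for the point.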

The maths tells a similar story. The American Invitational Mathematics Examination is an invite-only competition for high-school students who score in the top 5% nationally. The median competitor gets about a third of the questions right. Gemini 3 Pro scored 95%³ — without using any tools. We went from models that stumbled over word problems to ones outperforming the best young mathematicians in the country.

And then there's image generation.

This is easier to show than describe. Early AI image generators gave you melted faces, extra fingers, and text that looked like it was written by a drunk spider. Current models like Nano Banana Pro produce 4K resolution output with real-world grounding — referencing live web data for accuracy. Whatever your position on AI-generated imagery (and yes, that is a complex space), the technical leap is undeniable.

The shift wasn't "models got faster." It was models going from strong junior — enthusiastic, occasionally useful, frequently wrong — to subject matter expert. They ask better questions now. They validate context before diving in. They make fewer assumptions. The experience of working with them changed as much as the raw capability did.

What 10x Looks Like on the Ground

That shift from junior to subject matter expert? It compounds. When the tools stop making basic mistakes, you stop babysitting them — and the speed multipliers get real.

I'm not going to theorise about what's possible. Let me show you what's actually happening in my work.

My consulting work involves solution architecture, SaaS integrations, scripted automations, and more recently, building full applications with Claude Code acting as an autonomous development partner. The speed gains aren't uniform — they scale with the nature of the task:

  • Ideation and document creation: 2–3x faster
  • Solution architecture — where I used to spend hours Googling across tech stacks to figure out how systems talk to each other: 4–5x
  • Formulas and scripted automations with debugging: 5–10x
  • Full application development with autonomous agents: 10–20x

That last number sounds absurd until you experience it. I built a full-stack authenticated wine cellar management app in five days — complete with AI integration — despite not being a full-stack developer. Five days. For something that would have taken months of learning foundations before I could even start.
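As a rough plausibility check, here's the arithmetic behind that multiplier range. The three-month baseline is a hypothetical round number, not a figure I measured:

```python
# Toy sanity check on the 10-20x claim. The baseline is hypothetical:
# suppose the traditional learn-then-build route takes ~3 months full-time.
baseline_hours = 12 * 40  # ~12 working weeks at 40 h/week

for multiplier in (10, 20):
    days = baseline_hours / multiplier / 8  # convert back to 8-hour days
    print(f"at {multiplier}x: {days:.0f} working days")
```

A 10–20x multiplier turns that three-month slog into three to six working days — which is exactly the bracket the wine cellar app landed in.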

The Execution Bottleneck Just Collapsed

When I was 20, I had my first website built for a business I was running. It was a six-month process. A $20,000 investment — and they were doing us a decent deal. There was endless back and forth, long stretches with no visible progress, and what we got at the end was good — but probably not quite what we imagined.

That wine cellar app? I had a concept and in less than a week it was real. Actual people using it. Giving feedback. Iterating. I could take that feedback and turn around changes in hours, not months. That is what AI is unlocking — not just speed, but the collapse of the gap between idea and reality.

Now — is the output perfect every time? No. I operate on something I think of as the 95% rule. In most contexts, the output is good enough to move fast and refine. The remaining 5% matters enormously in some domains — medical diagnosis, legal analysis, financial modelling — and barely at all in others. Context determines whether that gap is catastrophic or negligible.

What AI removes isn't the need for expertise. It removes the execution drag — that long, expensive stretch between "I know what I want to build" and "I have the skills to build it." That stretch used to be months or years. Now it's hours or days.

What Becomes Scarce When Knowledge Isn't

There's a phrase that's been embedded in our cultural firmware for generations: knowledge is power.

If that's true — and I think it largely is — then what happens when knowledge is suddenly at your fingertips? When the bottleneck isn't what you know, but what you do with it? The power shifts. Not toward the people who've memorised the most, but toward something else entirely.

So what becomes scarce? Three things, from what I can see:

  • Imagination — how big can you dream when execution isn't holding you back? The quality of your ideas becomes the constraint, not the cost of building them.
  • Discernment — can you tell when the output is solid and when it's confidently wrong? The models don't always get it right, and knowing the difference is what separates useful from dangerous.
  • Persistence — are you willing to learn an entirely new way of working, with all the discomfort that entails? Because it is uncomfortable. It's a new competency, and new competencies feel clumsy before they feel powerful.

Think about it this way. Google didn't make libraries obsolete — it made "knowing how to find information" more important than "knowing facts." AI is doing the same thing to search itself. Not replacing human judgment, but raising the bar for what "literate" means in a professional context. This is the next literacy requirement.

A lot of people think AI will make you dumber, lazier, worse at your job. I see it differently. It doesn't just execute faster — it pushes back on my thinking. As a solo operator, I don't have a team to pressure-test ideas against. These models can hold enough context and challenge me with enough substance that my strategic thinking has genuinely improved. That's not a shortcut. That's a capability amplifier. A whole new competency.

The people who develop that competency are playing a different game than the ones still debating whether the tools are worth trying.

The Window Is Open — But Not Forever

Here's the thing that surprised me most when I started researching this piece.

Despite all the noise, despite the breathless headlines and the LinkedIn evangelists and the billions flowing in — according to the US Census Bureau, AI adoption among American firms sat at just 9.7% as of mid-2025.⁴ Up from 3.7% in 2023, so it's moving fast. But fewer than one in ten businesses are actually using this.

I say that not to minimise the shift, but to reframe it. If you've been sitting on the sidelines thinking you're already too late — you're not. The window is still wide open. Most people haven't started.

But that won't last. Maybe you have strong opinions about AI: that it's unethical (a big conversation for another day), that it's scary, confusing, hard to use. All valid. But society rewards productivity, innovation, competency. It doesn't particularly care about your reservations. When a workforce starts seeing what this unlocks for individuals, it becomes the new baseline. Ambivalence isn't neutral. It's a position, and it has consequences.

The Takeaway

So here's what I'd sit with: if knowledge wasn't the bottleneck anymore — if you could access foundations in minutes, not years — what would you actually build? What have you been putting off because it felt too far outside your skillset? Pick one of those things. Spend an hour with one of the current models walking through it. Not to finish it — just to see where the real bottleneck is now. You might find it's no longer where you assumed.

References

¹ Menlo Ventures, 2025: The State of Generative AI in the Enterprise, January 2026.

² Anthropic, Introducing Claude Opus 4.6, February 2026.

³ Google DeepMind, Gemini 3: Introducing the latest Gemini AI model from Google, November 2025. AIME context via Vals.ai AIME Benchmark.

⁴ US Census Bureau, Business Trends and Outlook Survey, as cited in Anthropic, Economic Index Report, September 2025.


Louis Razuki


Founder & Guide

I write about working with AI — the tools, the mindsets, the builds that actually deliver. Three years of daily AI practice distilled into experiments, insights, and honest takes on what's real and what's just hype.
