The recent social media frenzy over agentic orchestration tools such as OpenClaw and GasTown, vibe-coded and quickly released into the wild, is yet another symptom of overhyped expectations for generative AI models. If a few months of vibe coding produces something that goes viral in three weeks, it’s a trivial solution: it will be copied, relentlessly, and everyone will move on to something else. It’s not clear what that will be.
We’ve arrived at the point where everyone can be 1000x, but with few good ideas about where to apply all that capacity for real value. “When everyone is super…” Perhaps this will be a good thing. Once the hangover wears off and we start looking for harder problems to solve, we’ll find bigger nails. What should those problems look like?
1000x Productivity = Same Idea x 1000?
How many internal dashboards, custom support bots, and AI-enhanced document search tools do we really need? One thing that’s becoming very clear is that armies of agents will be able to create endlessly bespoke versions of these in hours (possibly minutes), on demand. If your business model derives its value from finding or analyzing things better than anyone else, you might want to think about creating new things to find instead.
We should be shifting from “exploit” to “explore” post-haste, while iteration is getting cheaper. This is an area where large language models can provide real value: they make excellent sounding boards, and the agent armies will help. But it’s not the abundance of capacity we should be chasing; it’s better execution, judgment, and domain insight, which are what raise value, and therefore profits.
When “Anything” Becomes Possible
When boilerplate code is instant, CRUD apps are trivial, API wrappers are disposable, and infrastructure scaffolding is one prompt away, what changes? The scarcity frontier moves. The scarce resource is no longer syntax knowledge and coding skill. It becomes problem selection, taste, systems thinking, and an understanding of the messy reality of constraints imposed by physics, law, and economics.
Elon Musk’s idea of “Data Centers in Space” comes to mind as a masterclass in ignoring constraints.
If everyone can wrap an LLM, add tool calls, bolt on vector search, and run an agent loop, then that stack is infrastructure, not differentiation. The moat moves to proprietary data, deep domain insight, distribution, workflow integration, regulatory defensibility, and trust and reliability. Companies chasing “AI features” without asking “what hard problem are we uniquely positioned to solve?” or “what do we understand better than anyone else?” will end up shipping demos rather than defensible, value-added systems.
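To make that concrete, here is a minimal sketch of the commodity stack in Python: an LLM wrapper, tool-call dispatch, toy vector search, and an agent loop. The `llm_complete` and `embed` functions are hypothetical stand-ins for whichever model provider you use; this is a sketch of the pattern, not any particular product’s implementation.

```python
import math

def embed(text: str) -> list[float]:
    """Hypothetical embedding call; swap in any provider's API."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Hypothetical completion call; swap in any provider's API."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def vector_search(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Toy retrieval: rank documents by similarity to the query embedding."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Tool registry: the model asks for a tool by name, we dispatch it.
TOOLS = {
    "search_docs": lambda arg, docs: "\n".join(vector_search(arg, docs)),
}

def agent_loop(task: str, docs: list[str], max_steps: int = 5) -> str:
    """Prompt, dispatch any tool call the model requests, repeat until it answers."""
    context = ""
    for _ in range(max_steps):
        reply = llm_complete(
            f"Task: {task}\nContext so far:\n{context}\n"
            "Respond with either TOOL:<name>:<argument> or ANSWER:<text>."
        )
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("TOOL:"):
            _, name, arg = reply.split(":", 2)
            context += TOOLS[name](arg, docs) + "\n"
    return context.strip()  # no answer emerged; return whatever was gathered
```

Everything that actually matters lives outside this skeleton: which documents are worth retrieving, which tools are worth exposing, and whether anyone needed the answer in the first place.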
What This Means for Engineers
The career question: if you can get anything you ask for by prompting, then what should you strive for?
Engineers in tech have some short-term advantages:
- Understanding failure modes: how systems and models break
- Understanding data and its provenance
- Understanding problem selection
- Understanding boundaries and constraints
Longer term, the focus will need to move to systems thinking and constraint-heavy domains, such as energy, risk modeling, things with atoms in them, design, architecture, and research.
If an LLM (or agentic orchestration system) can write the code, then the value moves from “I can implement” to “I can decide what should exist.”
What This Means for Companies
If your AI strategy is to “add an LLM to the product,” you don’t have a strategy; you have a feature that will be table stakes within a quarter. The companies that win won’t be the ones that marshal agent hordes best. They’ll be the ones that understand which problems are worth solving in the first place.
The tooling is getting commoditized. The execution layer is getting commoditized. What isn’t getting commoditized is judgment, taste, and deep understanding of where value actually lives. The shift from exploit to explore is the only move that doesn’t end with you competing on price against a thousand identical wrappers.
Build for the hard problems. The easy ones are already solved, a thousand times over, by agents that don’t sleep.