
Anthropic’s hackathon just shattered the biggest myth in tech: that coding is the most valuable skill in the AI era. It isn’t. Here’s what actually is.
Imagine showing up to the Olympics as a competitive swimmer — only to lose to someone who’s never been in a pool.
That’s essentially what happened at Anthropic’s “Built with Claude” hackathon in February 2026.
Thirteen thousand people applied. Five hundred were selected. Most of them were engineers — people who’d spent years writing code, shipping products, and building software for a living. The kind of people you’d expect to dominate an AI building competition.
They didn’t.
When the dust settled, first place went to Mike Brown — a California lawyer who had never shipped software in his life. Third place went to Dr. Michał Nedoszytko — a cardiologist from Brussels. The “Keep Thinking” prize went to Kyeyune Kazibwe — a road engineer from Uganda. And of the five final winners, exactly one was a professional software developer.
The tech industry saw this and said: “Wow, anyone can build AI products now.”
That conclusion is true. But it’s also dangerously incomplete — and it’s making both individuals and companies miss the far more important lesson hiding underneath.
What Actually Happened in That Hackathon Room
Let’s slow down and really look at what each of these winners built, because the details matter enormously.
Mike Brown is a California attorney. His friend builds backyard cottages — accessory dwelling units, or ADUs — in the state. And like thousands of Californians trying to address the housing crisis, his friend spends months fighting permit rejections. Not because his designs are wrong. Because a code citation is slightly off. Because a local rule quietly overrides a state rule that nobody documented cleanly. California ADU permits have a 90%+ rejection rate on first submission, and most rejections are bureaucratic: missing signatures, incorrect code citations, incomplete forms. The average six-month permit delay costs homeowners $30,000.
Brown built CrossBeam — an AI-powered permit assistant that navigates this labyrinth — in six days. And he won.
Dr. Michał Nedoszytko is a cardiologist who has spent a decade watching patients forget half of what he tells them before they reach their car. He knows which questions they call back with the next day, which discharge instructions nobody reads, which parts of the medical explanation evaporate first. No product team could interview their way into that knowledge. He built Postvisit.ai — a tool that helps patients actually understand and act on their medical visits — in a week.
Kyeyune Kazibwe is a road technician in Uganda whose job is to drive roads, assess damage, and estimate repair costs. There are more roads than technicians, so schools and clinics wait while paperwork catches up. He built TARA, a system that turns dashcam footage into infrastructure investment recommendations — and tested it on an actual road under construction in Uganda.
A musician from Estonia built an AI band that plays with you in real time. He won the creativity prize.
Notice the pattern? None of these people won because they were clever about AI. They won because they were intimately familiar with a broken, painful, specific system — and they finally had a tool powerful enough to let them fix it themselves.
The Myth That Just Died — And Why It Was Always a Lie
Here’s the myth: coding is the moat.
For decades, the ability to write software was a genuine superpower. It was the scarce skill that separated the people who could build things from the people who could only imagine them. If you had a great idea for a product, you either learned to code, hired someone who could, or watched your idea die.
That scarcity created an entire cultural narrative: coders are the builders. Everyone else is just a user.
That narrative was always a simplification. Software has always been a means to an end. The actual value was never in the syntax — it was in understanding what needed to be built and why. It was in the problem, not the plumbing.
AI just made that truth impossible to ignore.
The bottleneck moved. It used to be “can you code this?” Now it’s “do you know what needs to be coded and why?”
That single shift changes everything about who wins at software. And it’s creating a new kind of person — what some observers are calling the Hybrid Domain Expert — who is quietly becoming the most dangerous builder in the room.
Why Developers Actually Lost
This is the uncomfortable part that most coverage has glossed over.
The developers who entered Anthropic’s hackathon weren’t bad at building. Many of them are excellent engineers. They lost for a specific, structural reason: they optimized for technical impressiveness at the expense of problem clarity.
When you’ve spent years learning how to build things, you naturally gravitate toward showcasing what you can build. Multi-agent pipelines. Elegant architectures. Systems that demonstrate technical sophistication.
But the judges weren’t evaluating sophistication. They were evaluating utility — how well a product solved a real, felt problem for real people.
Mike Brown didn’t build CrossBeam to show off Claude’s capabilities. He built it because his friend was losing money every month to a process that shouldn’t be that hard. The problem came first. The technology was just the wrench.
A developer who does not understand permit law will build a permit app that hallucinates plausible nonsense. A lawyer who understands permit law will build one that gives correct answers.
That gap — between a plausible-sounding answer and a correct answer — is everything in high-stakes domains. And it cannot be bridged by better prompting. It requires lived, hard-won expertise in the problem itself.
The New Competitive Advantage Nobody Is Talking About
So if coding isn’t the moat anymore, what is?
The popular answer is “domain expertise.” And that’s directionally right, but still too vague.
Let me be more precise: The new moat is intimate familiarity with institutional friction.
Every industry, in every country, is full of processes that are broken in ways that are invisible to outsiders but agonizingly obvious to insiders. Permit rejections. Insurance denials. Discharge instructions nobody reads. Infrastructure reports that pile up while roads deteriorate. These aren’t abstract inefficiencies — they’re specific, documented, reproducible sources of human suffering.
For decades, fixing these problems required either political power (to change the systems) or technical power (to build around them). Most domain experts had neither.
Now they have both — in the form of AI that can translate their problem-knowledge directly into working software.
The people who will win the next decade aren’t primarily the best coders. They’re the people who have spent years inside a broken system, accumulated a deep map of where exactly it breaks, who it hurts, and why existing solutions keep missing the mark — and who now have a tool that can act on that map.
Your years of frustration are, suddenly, a strategic asset.
The Three Profiles of the New Builder
The Anthropic hackathon didn’t produce one archetype. It produced three, and each one is instructive.
The Practitioner-Builder (Mike Brown, Dr. Nedoszytko): Deep expertise in a regulated, high-stakes field. Builds AI tools that require intimate knowledge of domain-specific rules, edge cases, and failure modes that no outside team could replicate. Their advantage is precision — they know what “correct” looks like in ways that protect users from confident-sounding AI errors.
The Infrastructure Bridger (Kyeyune Kazibwe): Operating in resource-constrained environments where the gap between need and capacity is enormous. Builds AI tools that multiply the effectiveness of scarce human expertise. Kazibwe had no team and no budget, yet he tested TARA on an actual road under construction in Uganda, showing that raw utility can substitute for the resources venture funding usually buys.
The Creative Synthesizer (Asep Bagja P., the Estonian musician): Brings expertise from a non-technical creative domain and uses AI to explore what wasn't previously possible. Less about fixing broken systems, more about opening new creative territory.
All three profiles share one thing: they brought knowledge that money couldn’t buy and courses couldn’t teach. They brought lived experience with a specific problem.
What This Means for You
If you’re reading this and you’re not a developer, here’s what I want you to hear clearly: you are sitting on something valuable, and you may not realize it yet.
The question to ask yourself isn’t “can I build AI products?” — the answer is increasingly yes, almost regardless of technical background. The question is: “What do I know, from years of lived experience, that an outsider could never fully understand?”
That’s your starting point. Here’s a framework for finding it:
1. Map the friction. What processes in your field routinely fail in predictable ways? Where do things get stuck? What information falls through the cracks? Brown mapped California’s permit rejection patterns. Nedoszytko mapped post-visit patient comprehension collapse. What’s your equivalent?
2. Find the institutional knowledge gap. What do you know that’s not written down anywhere? What would take a new person two years of hard experience to internalize? That tacit knowledge — the stuff that lives in your head, not in any manual — is precisely what makes your AI tool defensible.
3. Start with the smallest painful thing. The winners didn’t try to fix their entire industry. They fixed one specific, repeatedly painful step in a broken process. CrossBeam isn’t a full legal practice management system — it’s a permit assistant. Postvisit.ai isn’t an EMR replacement — it’s a post-visit comprehension tool. Specificity is strength.
4. Use AI as a translator, not a replacement. Your job isn't to think less because AI can code; it's to think more precisely about the problem. The precision with which you describe the problem drives output quality more than which AI tool you use.
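To make the framework concrete, here is a minimal, purely illustrative sketch of step 3: encoding one small, painful step of a broken process as a checkable rule set. The field names, the superseded-citation map, and the `precheck_submission` function are all hypothetical examples in the spirit of CrossBeam, not its actual logic.

```python
# Illustrative sketch: turning a domain expert's tacit checklist into code.
# Every rule below is a hypothetical example, not CrossBeam's real rules.

REQUIRED_FIELDS = {"applicant_signature", "parcel_number", "code_citation", "site_plan"}

# A hypothetical map of insider knowledge: citations where a local
# ordinance quietly overrides the state rule that applicants usually cite.
SUPERSEDED_CITATIONS = {
    "Gov. Code 65852.2(a)": "Local ADU Ordinance 12-3 (overrides the state rule here)",
}

def precheck_submission(submission: dict) -> list[str]:
    """Return fixable problems before the permit office ever sees them."""
    problems = []
    missing = REQUIRED_FIELDS - submission.keys()
    for field in sorted(missing):
        problems.append(f"missing field: {field}")
    citation = submission.get("code_citation")
    if citation in SUPERSEDED_CITATIONS:
        problems.append(
            f"citation '{citation}' is superseded; cite {SUPERSEDED_CITATIONS[citation]}"
        )
    return problems

issues = precheck_submission({
    "parcel_number": "123-456-789",
    "code_citation": "Gov. Code 65852.2(a)",
})
print(issues)
```

The point of the sketch is the shape, not the rules: each entry is a piece of tacit knowledge an outsider would not think to check, written down once and enforced every time.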
The Warning That Most Coverage Missed
Here’s where I want to push beyond the celebration, because there’s a real trap lurking in this story.
A hackathon demo is not a product. Six days of building proves you can express domain knowledge as software. It proves nothing about whether that software will still work correctly six months from now, whether it will handle the edge cases that only surface in production, or whether its outputs can be audited.
CrossBeam works brilliantly for California ADU permits in early 2026. But California revises its ADU code regularly. Permit rules change. Local ordinances get updated. What happens to the tool when the underlying legal landscape shifts?
This is the next frontier: not just building AI tools from domain expertise, but building them in ways that stay correct over time. The domain expert advantage is real and it’s durable — but only for builders who treat their tools as living products, not demos.
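One minimal way to treat a tool as a living product is to make its domain knowledge expire. The sketch below is a hypothetical illustration, assuming a rule store where every rule records when it was last verified against the current code, so stale knowledge is excluded rather than silently trusted; the rule names, dates, and one-year window are invented for the example.

```python
from datetime import date

# Illustrative sketch: domain rules that expire instead of silently rotting.
# All rule names, dates, and thresholds here are hypothetical.

RULES = {
    "adu_setback": {"text": "4 ft rear setback", "verified_on": date(2026, 1, 15)},
    "adu_height": {"text": "16 ft height limit", "verified_on": date(2024, 3, 1)},
}

MAX_AGE_DAYS = 365  # the expert's judgment call: re-verify rules yearly

def usable_rules(today: date) -> dict:
    """Return only rules recently re-verified against the current code."""
    return {
        name: rule
        for name, rule in RULES.items()
        if (today - rule["verified_on"]).days <= MAX_AGE_DAYS
    }

current = usable_rules(date(2026, 2, 20))
print(sorted(current))  # the stale height rule drops out
```

A refusal to answer from a stale rule is exactly the kind of auditable, correctable behavior that earns the institutional trust described above.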
The winners who will matter five years from now aren’t the ones who built something impressive in six days. They’re the ones who build something that earns institutional trust — tools that can be audited, corrected, and improved as the domains they serve evolve.
Domain expertise is the raw material. Governed, compounding domain expertise is the actual moat.
A Structural Shift That’s Already Happening
This hackathon wasn’t a curiosity. It was a data point in a much larger trend.
Total programmer employment has decreased by 27.5% in the last two years, with junior developer hiring hit hardest. The entry-level coding job market that existed five years ago is being fundamentally restructured — not because developers are being replaced, but because the value chain of software development is shifting. The scarcest resource is no longer code — it’s the clear, precise, expert understanding of what the code needs to do.
Of the five final winners at Anthropic’s “Built with Claude” hackathon, exactly one was a professional software engineer. That’s not an accident or a fluke. That’s a signal.
The age of the Hybrid Domain Expert has begun — and it belongs to doctors who understand patients, lawyers who understand systems, engineers who understand infrastructure, teachers who understand how children learn, and countless others who have spent years accumulating knowledge that AI can now help them act on.
The Most Important Question You Can Ask Right Now
Here’s the reframe I want to leave you with.
We’ve spent years asking: “How do I learn to build with AI?”
That’s the wrong question. The right question is: “What do I already know that AI can now help me build with?”
The gap between those two questions is the difference between chasing a skill and leveraging one you already have.
Mike Brown didn’t win Anthropic’s hackathon by becoming a better developer. He won by becoming a more powerful lawyer — one who could finally act on everything he understood about a broken system.
The developers in that room were excellent at building. They just didn’t always know what needed to be built, or precisely why it kept failing.
You might.
And in the AI era, that might be enough to win.
If this reframe changed how you see your own expertise, share it with someone who’s been waiting for “permission” to build something. They might be sitting on exactly the insight the world needs.
The Developers Didn’t Win. A Lawyer, a Cardiologist, and a Road Engineer from Uganda Did was originally published in Level Up Coding on Medium, where people are continuing the conversation by highlighting and responding to this story.