The Great Library Unbundling: How AI is Eating the Software Stack

More than a year ago, I told people that ORMs like Entity Framework were going to feel like relics. That LLMs would just generate the SQL you need, making all that object-relational mapping overhead pointless. A few people told me I was crazy and doing things wrong (and to be honest I wasn't good enough at explaining why I thought this).

Well, time seems to have been on my side (who's laughing now? Hahahaha!!).

Now, this is a hot take, and I'm going to present it as one. If I put my devil's advocate hat on, I would probably rage against whoever wrote this post. But because I LOVE MYSELF, I won't do it. I can certainly see flaws in my own arguments. So buckle up—this is not rage bait, just a hot take from a Mexican developer who spent the weekend arguing with AI agents instead of sleeping like a normal person.

The ORM Prediction, or How I Accidentally Became Right About Something

Here's the thing about ORMs: they exist because writing SQL is supposedly "hard" and "error-prone." Meanwhile, I've been writing raw SQL since my first job at a tiny shop in Tepic where we didn't have the luxury of Entity Framework or Hibernate or whatever fancy abstraction layer the cool kids were using. We just wrote the queries, crossed ourselves, and hit execute. Catholic upbringing has some unexpected engineering applications.

Now, to be fair—we did end up building our own tailored ORM for the common cases. Because once you've written the same parameterized INSERT fifteen times, even the most stubborn raw-SQL purist starts thinking "maybe I should abstract this." And ORMs do earn their keep in some areas: SQL injection prevention, enforcing parameterized queries, handling connection lifecycle stuff that nobody wants to think about. Security guardrails that your 2 AM brain will absolutely forget to implement on its own. I get it. ORMs aren't stupid—they solved real problems.

But here's the trap: as soon as you get into complex queries, you're no longer just maintaining your application logic—you're also maintaining the ORM and the complex queries. You end up fighting the abstraction to make it do what raw SQL does naturally, and now you have two problems instead of one. The thing that was supposed to simplify your life becomes another layer you have to debug, optimize, and keep in your head at the same time. And anyone who has run a high-performance application—like an ecommerce site during Black Friday—knows exactly what I'm talking about. That query you spent weeks trying to optimize through the ORM, doing quirky reflection tricks and fighting with expression trees? You ended up raw-dogging the SQL anyway, because that was the only way to get the performance you needed. The ORM was never going to get you there. It was just standing in the way, politely.

Now Claude just writes you the exact SQL you need—parameterized, injection-safe, the whole deal. No mapping, no configuration, no surprises. Just the data access pattern you actually want. It's like having a DBA who never sleeps, never complains about schema changes, and doesn't passive-aggressively CC your manager when you write a bad join. The abstraction layer that was supposed to save us from SQL is now the thing standing between us and the AI that's better at SQL than most of us ever were.
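To make "parameterized, injection-safe, no mapping" concrete, here's a minimal sketch. I'm using Python's sqlite3 as a stand-in (this post's world is .NET, but the placeholder mechanics are the same everywhere), and the orders table and values are invented purely for illustration—this is the shape of query an agent can hand you directly, not anyone's production code.

```python
import sqlite3

# Hypothetical schema, just to have something to query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("ana", 120.0), ("luis", 80.0), ("ana", 45.5)],
)

# The ? placeholder means user input is bound as data, never spliced into
# the SQL string—injection-safe with no ORM, no mapping layer in between.
customer = "ana"
row = conn.execute(
    "SELECT customer, SUM(total) FROM orders WHERE customer = ? GROUP BY customer",
    (customer,),
).fetchone()
print(row)  # ('ana', 165.5)
```

The point isn't that this query is hard to write—it's that the safety properties ORMs were sold on (parameterization, no string concatenation) live in the database driver, not in the mapping layer.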

Agentic Development and the Death of One-Size-Fits-None

This isn't just about ORMs. I've been thinking about how agentic development fundamentally changes the whole premise of general-purpose libraries and building blocks.

Platform teams across the industry are going to start questioning their investments in broad, one-size-fits-all libraries. And honestly? Good. Because "one-size-fits-all" always meant "fits nobody perfectly but everyone tolerably." Like those beach ponchos they sell at every tourist shop in Nayarit—technically covers you, technically functions, but you look ridiculous and you know it.

Take CSS frameworks. Bootstrap, Bulma, Fluent UI—they're great for getting started quickly. But you end up learning their specific class naming conventions, carrying tons of CSS you'll never use, fighting against the framework the moment you want something custom, and—my personal favorite—looking like every other Bootstrap site on the internet. Your site ends up with more unused CSS than empty beer bottles after a Sunday carne asada in Nayarit. And that's a lot of bottles, trust me.

With agentic development? You just tell the agent what you want your UI to look like. It generates exactly the CSS you need. No framework lock-in, no unused code, no learning curve. The agent doesn't need to know Bootstrap's grid system (although it was probably trained on plenty of such codebases)—it just writes "vanilla" CSS that does what you asked for. I know this because that's exactly what happened with this blog. The CSS you're looking at right now? Generated. By an agent. One that understood what I wanted better than I understood what I wanted (debatable).

MCP is Already Feeling Old

Speaking of things that are aging quickly: the Model Context Protocol. MCP. Remember when everyone was wrapping every CLI tool as an MCP server like it was the new hotness? That was, what, months ago? In AI time that's basically the Paleolithic era.

Here's the thing—CLI tools already come with perfect documentation in the form of man pages. They're basically documented APIs already. Your agent doesn't need a special protocol wrapper to use gh pr create or az webapp deploy. It just reads the docs, fumbles the first attempt (like any of us), and then figures it out. Combine Claude Code with existing CLI tools—GitHub CLI, Azure CLI, kubectl, whatever—and you've got everything MCP promised, but without the ceremony.

Microsoft's already doing something interesting with their dotnet/skills repo. Skills that are just prompts guiding the agent through repeatable processes. No protocol, no server, no serialization format drama. Just a well-written prompt. Turns out the best API for an AI agent is just... words. Who knew. (Okay, a lot of people knew. But still.)

Hugo Drove Me to Build My Own (Again, Because I Never Learn)

I just finished redoing my entire publishing workflow. Hugo felt limiting and bloated—all these features I didn't use but still had to maintain, builds constantly broken by dependency updates, GitHub workflows failing for mysterious reasons. Every time I pushed a new post it was a coin flip whether the build would succeed or I'd spend an hour debugging some Go template issue that made me question my life choices.

So I rewrote it in .NET. Dead simple. I don't expect anyone else to use it—exclusive distribution, zero units available—but damn, it felt liberating. The CSS and HTML of this blog changed completely, and I actually understand every piece of it now. No more cargo-culting Hugo partials I copied from a theme three years ago and was afraid to touch.

This is the pattern I keep seeing: when AI can generate exactly what you need, the appeal of heavyweight, general-purpose solutions just evaporates. Like fog burning off in the morning sun in Tepic. One moment it's there, thick and omnipresent, and then it's just... gone.

The Accountability Problem

Here's something that's been bugging me. I saw a conversation on the fediverse about more and more tools being created with no clear ownership. Developers suspect some might be entirely AI-generated, with humans just shepherding them into existence like zookeepers who can't actually control the animals.

That sucks. Humans should be in charge and taking responsibility for the software they put into the world. There's a difference between using AI as a tool and letting AI be the architect while you nod along pretending you reviewed the blueprints.

When something breaks, when there's a security issue, when users need support—who's accountable? You can't file a bug report against GPT-4. And "the AI did it" is not an incident postmortem. Not yet, anyway. Give it a year (debatable).

Copilot's Infinite Loop of Self-Criticism

Speaking of AI quirks: Copilot keeps finding issues in my PRs, I ask it to fix them, it fixes them, and then when I ask it to review again, it finds more issues. In the code it just wrote. In the code. It. Just. Wrote.

This is like watching someone argue with themselves in the mirror. Except the mirror is burning tokens, the person is burning my budget, and the argument never ends. The AI equivalent of "works on my machine" syndrome, except it doesn't even work on its own machine.

Blog Posts Are Better Than Repos for Teaching AI

Here's something that genuinely surprised me. A friend at Microsoft pointed his AI agent to my blog series to implement ActivityPub. Not to a GitHub repo with code samples. Not to the official W3C spec. To my blog posts. The ones I write at 1 AM like a gremlin-raccoon, full of rambling asides and half-baked metaphors.

Turns out prose explanations with context and reasoning are way more effective for agents than just dumping code at them. Blog posts tell the story of why decisions were made, not just what the final result looks like. The agent needs to understand intent, not just syntax. It needs the narrative—the "I tried this and it broke spectacularly, so I did this other thing instead" part that never makes it into a README.

So all those years of writing meandering blog posts about my projects instead of writing proper documentation? Turns out I was ahead of my time. Or just lazy. Probably both.

Where This Is All Heading

I think we're heading toward a world where general-purpose libraries become luxury items—nice to have, but not essential. Where AI-generated, purpose-built solutions become the norm. Where documentation and prose become more important than code artifacts. And where human accountability becomes the thing that actually differentiates good software from the rest.

This doesn't mean libraries disappear overnight. But the incentives are shifting. Why build and maintain a framework used by millions when everyone can have their own custom solution?

The future might be less about sharing code and more about sharing knowledge, context, and decision-making frameworks. Less "here's my npm package" and more "here's the blog post explaining why I built it this way, so your agent can build something even better."

Anyway

This post is a collection of half-formed thoughts and observations. I'm not claiming to have all the answers—hell, I'm not even sure these are the right questions. But something is shifting under our feet, and pretending it isn't doesn't make it stop.

Are we heading toward a more fragmented, AI-generated software landscape? Or am I just another old developer yelling at algorithmic clouds from a small corner of the internet?

Honestly, I don't know. But I'd rather be wrong and loud about it than quiet and surprised when the whole toolchain landscape looks unrecognizable in five years.