The case against general-purpose engineers

There's a piece of conventional wisdom in engineering that I think is wrong. The idea is that the strongest engineers should be generalists who can figure out any technology or programming language you throw at them. You'll hear this as "good engineers understand fundamentals."

More specifically, there's an expectation that engineers should be able to do system design for all the software they interact with in their day-to-day life. It's considered fair game to ask: "Can you design Netflix? Can you design YouTube? Can you design Facebook live comments?" The expectation is that you'll trace the flow from end-to-end and identify both the functional and non-functional requirements for any of these systems.

I think this approach is fundamentally misguided, at least for most engineers in most contexts.

Why this mindset persists

I suspect this general-purpose engineer ideal persists because it works well at big tech companies. Engineers there often produce so much business value that they can afford to spend time on intellectual exercises. They're solving genuinely novel problems where generalist thinking pays off.

But I don't want to confuse "getting away with it" with "optimal." Just because something works doesn't make it the best approach.

The startup context is different

This becomes clear when you look at post-Series A startups. I'm talking about companies with around $5M ARR, maybe 12 salespeople—the kind of place where you can eat lunch with the sales team twice a week if you want to.

I've found that as an engineer, it's worth seeking out these lunches. At a small enough startup, you can often just show up without it being weird. Try that at a 10,000-person company and you'll get strange looks. But at a Series A startup, the boundaries are much more fluid.

The value isn't in giving your opinion about sales strategy—it's in listening. You get to absorb real data about what's coming into the business, what customers are asking for, and what deals are getting stuck on. This context makes you much better at prioritizing technical work.

At this stage, the hard work of product-market fit is done. The founders identified a capability that could stand alone and proved it's valuable. Now you're bundling additional capabilities to reduce churn, expand market opportunity, and make deals easier for prospects who aren't slam-dunks for your core offering.

In this context, being a generalist is expensive. What matters is shipping capabilities that help sales close deals, and doing it quickly.

Engineers as Bloom filters

Instead of generalists, I think engineers should optimize themselves as Bloom filters. When someone presents you with 10 potential features, you should be able to immediately say "I don't know how to do that" to 9 of them. And that should be the end of the conversation: no further exploration, no "but I could learn it," no technical deep-dives.

This sounds limiting, but it's actually optimal. The business person can keep iterating through ideas until they find something you can execute immediately. Why spend time figuring things out when you could deliver value right now with skills you already have?
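The metaphor is apt: a real Bloom filter is precisely a cheap test that answers "definitely not" for almost everything outside the set, and "maybe" only for things it has actually seen. A minimal sketch in Python (the skill strings are purely illustrative):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a cheap membership test with no false negatives.
    A False answer is definitive; a True answer means "maybe, check further"."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# Hypothetical skill set: only things you've actually shipped go in the filter.
skills = BloomFilter()
for skill in ["rest apis", "postgres", "react dashboards"]:
    skills.add(skill)

skills.might_contain("postgres")        # True: worth scoping out a cost
skills.might_contain("ml recommender")  # almost certainly False: "I don't know how to do that"
```

The property that makes instant rejection safe is that a Bloom filter never gives a false "no": rejecting a request costs one cheap lookup, and the expensive work of estimating cost is reserved for the handful of "maybe" answers.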

If you're constantly learning on the job, that suggests the company has no other way to make money except by doing things they don't know how to do. That's a dangerous position.

The real question is cost, not possibility

When evaluating features, the question isn't "how would I build this end-to-end?" It's "can I build this with my existing expertise, and what will it cost?"

More importantly, frame your response to business leaders in terms of cost: "I can't do that cheaply." Provide estimates for both the unit economics and your time to implement. Tell them "I don't know how to do that within a month," then ask "what other ideas do you have?"

Make them exhaust their entire list of ideas before you move to prioritization. Only after they've gone through everything should you say "OK, if I don't know how to do any of those cheaply, let's figure out which one is most important."

The fact that something is technically possible is worthless information. What matters is the cost in time and money to deliver it, and whether you can do it within reasonable business constraints.

What about learning new skills?

If you want to expand your capabilities, do it strategically. Invest in skills during your own time so you can say "yes" to more types of business requests. But the goal isn't intellectual exploration—it's expanding your ability to immediately deliver value.

Think of it as expanding from saying "yes" to 1 out of 10 requests to 2 out of 10 requests. This is a much better use of your side-bet energy than exploring low-priority projects that might not matter to the business.

In my experience, learning on the job has never paid off. When you're trying to figure out how to accomplish a business goal, I've never seen someone stumble onto a gold mine. They just eventually solve the problem—after paying all the development costs along the way.

There's a crucial difference between serendipity and exploration. Serendipity is when you can spot patterns and put things together because you already know how to connect them. Exploration is when you set out deciding you're going to put two things together, but you have to figure out what they are first.

You may eventually succeed with exploration, but when you look at the business outcome, you have all these costs accumulated along the way. Compare that with serendipity, where you paid no development cost and got there in a single step.

The iteration trap

There's a popular idea that you should ship something minimal, get user feedback, and iterate toward success. I've watched this play out many times: an engineer gets excited about a project, talks about it for weeks, ships something that works, then never mentions it again.

Not because it failed, but because it achieved "replacement level" success. You got some money, but you could have picked any other project in that timeslot and gotten similar results. It wasn't an outlier. It wasn't something you'd point to and say "I did that."

The classic tell: ask an engineer's partner what the job is like. They'll say their partner is always excited about some project, it's always very important, they talk about it for a couple of weeks, and then it never comes up again. It shipped, it was basically replacement level, and any other product in that timeslot would have brought in the same amount of money.

The harsh reality is that iteration reliably gets you to "unit economically positive" but rarely to breakthrough success. You're not iterating your way to breakthrough outcomes—you're iterating your way to "this was fine, it roughly paid for itself."

When specialization creates breakthroughs

There's a crucial difference between steady value creation and breakthrough capability creation. You can think of it like a blacksmith working with scrap metal—breaking it down, refining it, turning it into useful items. It's honest work, but it's not transformative.

Compare that to the blacksmith who forges a sword and sticks it in a stone, then hypes up the kingdom and sells everyone on the fear of missing their chance to become king by pulling it out. That one sword becomes immensely valuable because you've created the capability of becoming king. All kinds of people will travel to your door to make the attempt, and all that inbound interest creates an economic boom for your town. There are countless ways to cross-sell these weary travelers as they cluster around your core capability.

The key insight: both blacksmiths are using the same metalworking skills. The difference is recognizing the right opportunity and executing immediately when it presents itself.

Many fortunes have been made quickly by people executing things they already knew how to do, in contexts that created breakthrough capability. They didn't iterate their way to success—they pattern-matched existing skills to solve problems that generated enormous demand.

What actually counts as shipping

I think there's a useful way to think about what counts as "shipping" something. Put yourself in your customer's shoes—what do they physically see? What can you imagine yourself experiencing through their eyes?

If you've made the system more stable so there are fewer errors or no more pages at night, they can't see that. Compare that to adding something they couldn't do before and now they can. The difference is obvious—one is invisible to them, the other is a new capability they can immediately use.

In my experience, this customer-visible definition of shipping is what actually matters for business outcomes.

Feeding salespeople rope

The capabilities you build should help your sales team close deals. I like to think of this as feeding your salespeople rope so they can lasso customers. You have to keep the rope coming to them.

The capabilities that are eligible for development have usually already been pre-selected by your sales organization. They know what prospects are asking for, what objections they're hearing, and what would turn a "maybe" into a "yes."

Refactoring and architectural improvements don't give salespeople any rope at all. Engineers often love the idea that paying down tech debt is necessary for long-term velocity, but that's a hard sell when deals are waiting to be closed.

The worst-case scenario is when engineering takes control, stops the world, and cuts off all rope exports to work on internal improvements. Engineers level up, but it doesn't help the business bring in more money.

Avoiding tech debt through specialization

There's a better approach: when you're shipping capabilities that are deeply in your wheelhouse, you naturally avoid creating tech debt in the first place.

If you're implementing things you really understand—things you've done multiple times—you'll sidestep most architectural problems. Tech debt is often a manifestation of people figuring things out as they go.

When systems do become problematic, I've found the solution isn't usually to stop shipping features while your team refactors. It's to hire someone with specific expertise who can solve the architectural problems in a couple of months, rather than having your team spend six months figuring it out while also trying to ship features.

The practical approach

This isn't about being intellectually lazy. It's about optimizing for business outcomes in competitive environments where speed matters.

Instead of asking "how would I design this system?" ask:

- "What's the fastest way I can deliver this using what I already know?"
- "What existing tools can I combine to solve this?"
- "How can I minimize implementation time?"

Train yourself to see opportunities in these terms: Can I do it quickly with my existing skills? If not, move on. If yes, what exactly will it cost?
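Put together, the screening process above amounts to a simple triage loop: reject instantly anything outside your skills, and estimate cost only for what's left. A sketch, where the skill names and day counts are made up for illustration:

```python
# Hypothetical mapping from skills you've already shipped to days-to-deliver.
KNOWN_SKILLS = {"rest api": 5, "postgres schema": 3, "react dashboard": 8}

def triage(requests):
    """Split feature requests into (name, cost) pairs you can execute now
    and rejections, without any exploratory deep-dives."""
    doable, rejected = [], []
    for name, needed_skill in requests:
        if needed_skill in KNOWN_SKILLS:
            doable.append((name, KNOWN_SKILLS[needed_skill]))
        else:
            rejected.append(name)  # "I don't know how to do that", end of conversation
    return doable, rejected

doable, rejected = triage([
    ("usage report endpoint", "rest api"),
    ("realtime ML pricing", "ml pipeline"),
    ("billing tables", "postgres schema"),
])
# doable   -> [("usage report endpoint", 5), ("billing tables", 3)]
# rejected -> ["realtime ML pricing"]
```

The point of the sketch is what the loop never does: it never tries to price the rejected items. The business side gets cost estimates for the doable list and can either prioritize within it or come back with more ideas.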

Conclusion

The general-purpose engineer ideal works well in some contexts, but it's not universally optimal. In fast-moving environments where shipping capabilities matters more than architectural elegance, specialization and speed often win.

The goal isn't to be the engineer who can figure out anything given enough time. It's to be the engineer who can immediately execute when the right opportunity presents itself.
