
Beyond Scale: What Growth Means in B2B


The conventional playbook for engineering growth comes from companies like Google and Facebook: places optimizing for massive scale. But many of us work in B2B, where the challenges are fundamentally different.

Through many conversations with junior engineers over the years, I've seen the confusion this mismatch creates. Here's what growth actually means when complexity, not scale, is your primary challenge.

About me

I have been writing software professionally for 13 years now, working across the stack with different languages and in different domains (Private Equity, Legal Tech, Finance). While the language, market and domain kept changing, one thing remained constant: it was always B2B.

I've built software and software teams, and this article combines my experience as an IC with conversations I've had as a manager.

The Scale Assumption

"Scale" carries outsized weight in software engineering. It's the word that gets thrown around in architecture reviews, used to justify technical decisions, and cited as the benchmark for engineering excellence.

When junior engineers ask me about growth, they often reference the same sources: blog posts from Google engineers optimizing for billions of requests, Meta's approach to managing 129K-GPU clusters, or Shopify handling 284 million requests per minute on Black Friday. That advice is solid, but only if you're working at that scale.

The problem is, most of us aren't. In B2B software, you're rarely optimizing for billions of users. Your database isn't collapsing under petabytes of data. When performance issues do arise, the solution is usually straightforward: fix an N+1 query, add a queue and worker, or provision more resources.
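To make the N+1 point concrete, here is a minimal sketch using Python's built-in sqlite3 module. The schema and names (clients, invoices) are invented for illustration: the first function issues one query per client, which is the classic N+1 shape, while the fix collapses everything into a single joined, aggregated query.

```python
import sqlite3

# Hypothetical schema for illustration: clients and their invoices.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE invoices (id INTEGER PRIMARY KEY, client_id INTEGER, total REAL);
    INSERT INTO clients VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO invoices VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
""")

def totals_n_plus_one(conn):
    # N+1: one query for the client list, then one more query per client.
    totals = {}
    for client_id, name in conn.execute("SELECT id, name FROM clients"):
        (total,) = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM invoices WHERE client_id = ?",
            (client_id,),
        ).fetchone()
        totals[name] = total
    return totals

def totals_single_query(conn):
    # Fix: one joined, aggregated query, regardless of how many clients exist.
    return dict(conn.execute("""
        SELECT c.name, COALESCE(SUM(i.total), 0)
        FROM clients c LEFT JOIN invoices i ON i.client_id = c.id
        GROUP BY c.id
    """))
```

Both functions return the same totals; the difference only shows up in query count, which is exactly why N+1 problems hide until a table grows.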

The conventional wisdom around engineering growth assumes scale is your primary constraint. But in B2B, complexity is what slows you down.

Managing Scale vs (Business) Complexity

If you've worked in B2B long enough, you've seen this pattern: You build a logical solution to a well-defined problem. Then you take it to customers. They love it, but they need a pre-approval workflow. And a different calculation method for European clients. And an integration with their legacy system. And custom notifications. And exceptions for certain user roles.

Each request makes sense in isolation. But requirements accumulate. What started as a clean, understandable system becomes a maze of conditionals, feature flags, and special cases. Your elegant solution disappears under layers of "if this client, then that behavior."
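One common way to keep that variation from swallowing the codebase (a sketch under assumed names, not a prescription; the tax-rule example is hypothetical) is to push each client- or region-specific behavior behind a single lookup point, so "if this client, then that behavior" lives in one registry instead of scattered conditionals:

```python
from typing import Callable, Dict

# Hypothetical example: a per-region tax calculation. Each variation is a
# small function; the registry is the only place that knows they all exist.
TaxRule = Callable[[float], float]

def default_tax(amount: float) -> float:
    return amount * 0.10

def eu_tax(amount: float) -> float:
    # "A different calculation method for European clients."
    return amount * 0.21

TAX_RULES: Dict[str, TaxRule] = {"default": default_tax, "eu": eu_tax}

def tax_for(amount: float, region: str = "default") -> float:
    # Unknown regions fall back to the default rule instead of branching
    # inline wherever tax happens to be computed.
    return TAX_RULES.get(region, default_tax)(amount)
```

Adding the next customer-specific rule then means registering one new function, not threading another conditional through every call site.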

This is the primary challenge in B2B: building something that is malleable and extensible while it remains simple to reason about. You need software that can absorb customer-specific needs without collapsing, code that can handle variation without becoming incomprehensible to the next person who touches it. How do you get better at this?

First, the fundamentals still matter. Clean code, solid abstractions, good tests - these don't change, whether you're managing scale or complexity. If anything, complexity makes poor engineering fundamentals more painful: you can't hide bad design by throwing more hardware at the problem. But beyond the fundamentals, here are some actionable practices specific to the B2B environment.

Using retros

Most retros ask what went well and what went badly. That's fine for high-level feedback, but it doesn't give you much to act on. More useful questions: what slowed us down this sprint? What took longer than expected? What felt like a grind?

Not just surprise requirements or missed planning elements: focus on the engineering papercuts, the things you can live with but that slow you down anyway.

This shifts the conversation from general feedback to specific friction points. You start noticing patterns: the same integration keeps breaking, everyone struggles with the same part of the codebase, estimates are always off for a particular type of work.

From these, you can derive engineering projects: opportunities to rethink and improve the codebase, or to try a newer pattern or library.

As an IC, you might not be able to act on everything immediately, but coming to these meetings prepared to answer these questions forces you to think differently about problems. You develop ideas about what could be better and start learning from the patterns you see.

I learned this late. The subtle shift from "what went wrong" to "what was more complicated than it should have been" leads to a different set of answers, and it was a useful one.

Learn from on-call, don't just firefight

Being on-call gives you a unique view of your product that you don't get anywhere else. You see bugs across different subsystems, notice patterns in what breaks, and understand how customers actually use the software in ways that surprise you.

On-call often reveals where complexity manifests as fragility: edge cases you didn't account for, integrations that break under specific conditions, workflows you didn't know existed.

After you've put out the immediate fire, take time to learn from it. This might seem obvious, but in real life, this gets lost under time pressures.

This is your opportunity to identify which subsystems are fragile, where observability is lacking, and what patterns keep causing issues.

It's a chance to build the internal tooling that would have made debugging faster, or to advocate for hardening parts of the system that break too often.

On-call isn't just about fixing things; it's an opportunity to make the next incident easier to handle.

Question why things are the way they are

My belief is that people grow when they try to make improvements every day instead of waiting for a large refactoring project to come around. Engineers can incorporate this into their day-to-day work by questioning the friction they encounter.

Your build pipeline takes 8 minutes and everyone's learned to context-switch during builds. There's a pattern in the codebase that feels overly complex for what it does, but it's been there forever. Your tests are flaky, so the team just reruns them. You're copying the same boilerplate across files because that's how things are structured. These things become background noise.

Good engineers don't accept this. They ask: why am I writing code that doesn't make sense? Why does this need to be structured this way? Why are the builds this slow? These questions feel obvious, but they rarely get asked under delivery pressure.

The answers point to real improvements and learnings: profiling builds to find bottlenecks, refactoring confusing code, fixing flaky tests. You don't need permission to ask these questions or to chip away at the problems you find.

Share what you've noticed with your manager and team. If you've done some research or a small proof of concept, bring that too. Often, you'll find others have noticed the same friction. These conversations surface problems that affect everyone and create opportunities for team-wide improvements.

Road to Staff Engineer

As you go up the IC ladder, the projects you take on shift from work assigned to you to work you find for yourself. The impact of your work widens accordingly: from the task, to the project, to your whole team.

The underlying theme of the previous three suggestions is to develop higher-order thinking. Retros teach you to identify systemic problems, not just surface-level issues. On-call teaches you to see patterns across the product, not just individual bugs. Questioning friction teaches you to spot improvement opportunities others have normalized. This is how you move from executing tasks to identifying what needs to be done.

At senior and staff levels, a significant part of your value comes from knowing where to focus effort. You recognize which complexity is essential to the business and which is accidental. You know when to refactor and when to ship. You can look at a system and see not just what's broken, but what could be better.

These aren't skills you develop through large projects alone; they come from paying attention every day to what slows you down and why.

Measuring Growth

You'll know you're getting better at managing complexity when the gap between your expectations and reality starts to shrink. You estimate two days, it takes two days. You think a feature will be straightforward, and it mostly is. You anticipate where edge cases will appear, and you're right.

You're developing accurate intuition about where complexity hides in B2B systems. That intuition comes from paying attention to what slows you down, to what breaks in production, to what feels harder than it should be. The practices above are how you build it.

🎤 Audio

I got Notebook LM to generate a conversation with this article at its core; you can listen to the ideas here - https://drive.google.com/file/d/1HanvXh2z_VfvDbJB6DNdPPfAkvzBilF8/view?usp=sharing

Note: This article was initially published on LinkedIn - https://www.linkedin.com/pulse/beyond-scale-what-growth-means-b2b-anirudh-varma-xs2rc/?trackingId=HDE1JFUmeEhAOQ13IqJrlg%3D%3D
