The Modern Development Burnout Machine
Welcome to my rant on what I’m calling the modern development burnout machine.
Unfortunately, this is not about doing burnouts in the pictured car.
It’s about the experience of being a person whose job is to think for a living, and what happens when our systems, unintentionally, make sustained thinking harder than it needs to be.
Over my career, I’ve noticed the day-to-day cognitive strain of building software: it shows up in calendars, Slack, Jira, design files, incident channels—and in that end-of-day feeling where we worked hard, but can’t quite name what moved.
And this isn’t any one discipline’s fault, or any one discipline’s to solve alone. It’s a shared systems problem in the way we coordinate.
More tools, less thinking

Software engineering hasn’t necessarily gotten harder in the sense of raw capability.
In many ways, we’re more powerful than ever. We have better tooling, libraries, infrastructure, and automation. We have more tools than ever—and, increasingly, tools to manage our tools, and dashboards to monitor whether our tools are managing our tools. We can spin up environments in minutes, deploy safely, observe systems in real time, roll back quickly, run experiments, and even coordinate across continents.
And yet many of us feel like we’re running just to stay in place—touching more systems, shipping more output, and getting fewer uninterrupted stretches to think about what we’re actually doing.
This isn’t because people care less, or because quality suddenly matters less. It’s that we have less usable thinking time: fewer uninterrupted stretches where we can hold a complex model in our heads long enough to make a good decision.
Imagine I read out the names of all 1,025 Pokémon and then ask everyone to pick six. How is anyone meant to remember even half the names by the time I’m done reading?
That’s what we’re losing: those long arcs of attention where we can move from “what’s the problem?” to “what’s the right solution?” to “what are the second‑order consequences?” and then still have enough left to make it humane, accessible, maintainable, and real.
We’ve increased capability, but we’ve increased cognitive load right along with it.
The cognitive stack (every role feels it)

What we call “software development” now comes with a cognitive stack—everything we have to keep loaded to make good decisions. It’s a stack in the same sense my kitchen counter is a ‘storage system’: technically holding, spiritually collapsing.
We don’t just write code.
We hold build systems, linters, CI pipelines, deployments, ticketing systems, design specs, analytics dashboards, incident channels, and documentation. We hold multiple repos and multiple service boundaries. We hold security and privacy constraints. We hold user experience, accessibility, and performance.
And crucially, we hold the social model too: who owns what, who to ask, which channel has the real answer, what’s safe to change, what will blow up in production.
That last piece—ownership, trust, tacit knowledge—is part of the stack too. It’s often the difference between safe change and accidental chaos. And none of it is in the README.
Here’s the constraint underneath it all: working memory is limited. We can’t keep stacking abstractions and channels and handoffs and still expect consistent precision.
Designers feel this when the design system has drifted, the components are inconsistent, and every “small” UI decision implies a dozen downstream states.
Product managers feel it when every decision depends on analytics that require interpretation, experiments that require coordination, and tradeoffs that aren’t visible from any single dashboard.
Engineers feel it when a “simple” change spans multiple repos, multiple environments, feature flags, migrations, rollout plans, observability checks, and a permissions model nobody fully understands anymore—including, critically, the person who wrote it.
Interns feel it when they ask ‘where do I start?’ and three seniors point in three different directions.
So when people say “everything feels harder than it should,” it’s not that people got worse. It’s that the active context expanded past what anyone can comfortably hold for long.
Context switching is not a virtue

The modern workday can feel like a cognitive relay race. The baton is always being handed off—sometimes by other people, sometimes by our tools, sometimes by the structure of the system itself. And the baton isn’t just “a task.” It’s a whole mental model: this repo, that repo, this pipeline, that cloud service, this data shape, that user journey, this product constraint, that design intent. Each handoff isn’t just time—it’s a reset. And like every reset, you lose whatever you hadn’t saved. Which is why half of modern engineering is just trying to remember what you were about to do.
But there’s a grounded way to talk about it: context switching isn’t multitasking. It’s just rerunning npm install on your brain every time someone asks “got a minute?”
Attention has a real energy cost. Context switching is a biochemical reset. Every time we switch tasks, our brains have to rebuild the active model: what matters, what’s risky, what we already tried, what the goal is.
If we’re forced to rebuild constantly, we spend our days paying “reload costs” instead of doing the actual work. We’re not lazy. We’re just buffering. That’s why a day can feel exhausting without being productive.
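To make the reload cost concrete, here’s a back-of-envelope sketch in TypeScript. The ~23-minute refocus figure is the commonly cited average from interruption research; treat all the numbers as illustrative assumptions, not measurements of any particular team.

```typescript
// Back-of-envelope reload cost. The ~23-minute recovery figure is the
// commonly cited average from interruption research; it is an assumption
// here, not a measurement.
const REFOCUS_MINUTES = 23;

function reloadCost(interruptionsPerDay: number, workdayHours = 8): string {
  const lostMinutes = interruptionsPerDay * REFOCUS_MINUTES;
  const lostShare = Math.round((lostMinutes / (workdayHours * 60)) * 100);
  return `${interruptionsPerDay} interruptions ≈ ${lostMinutes} min of rebuilding context (${lostShare}% of the workday)`;
}

console.log(reloadCost(6)); // "6 interruptions ≈ 138 min of rebuilding context (29% of the workday)"
```

Six “got a minute?”s, and nearly a third of the day has gone to reloading before any actual work is counted.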
For years, many of us internalised the story that if we struggled with this, it meant we needed to “get good” at context switching. Like it was a virtue, and if we couldn’t do it indefinitely, we were less capable.
But context switching is a cost everyone pays—and some roles, weeks, and projects simply charge more of it. And unlike AWS, it doesn’t even send you a bill—it just takes it straight from your ability to think clearly.
When we treat attention like it’s free, we build workflows that look efficient on paper but are expensive in reality—because attention is one of the scarcest resources we have.
The problem vs. the environment

But the complexity we feel isn’t always the complexity of the problem. It’s the complexity of the environment. We’re not just building software; we’re navigating ecosystems. And when that starts to crack, we usually don’t interpret it as “maybe the system is asking too much.”
We interpret it as: “maybe I’m doing something wrong.” Which is exactly what Big CI Pipeline wants you to believe. That interpretation is rough on individuals, and it distracts us from the more useful question: what can we change about the system?
Because if overload is a personal failing, the solution is self-improvement: better habits, more hustle, maybe a new subscription to Grit-as-a-Service, more learning on weekends, more “just be full stack,” more “just keep up.”
But if overload is structural—if the job quietly became “hold ten mental models at once and switch between them at machine tempo”—then the solution is design. Organisational, workflow, tooling, team, and expectation design.
The full-stack trap

This brings me to one of the biggest cultural contributors: the quiet expectation that everyone should be full stack.
At one point, “full stack” was kind of reasonable because the stack was smaller. You could be full stack when the stack was HTML, a CGI script, and a prayer. The system fit in our heads because the system was sized for humans.
Then we had a period where specialisation felt like progress. We acknowledged that front-end is a discipline, back-end is a discipline, infrastructure is a discipline, design is a discipline, accessibility is a discipline. We built interfaces between those disciplines and treated collaboration as an engineering problem in its own right.
But now we’ve drifted back toward “everyone does everything,” except we did it at the exact moment when the breadth of knowledge required is basically immeasurable. Today “full stack” often means: be a specialist in twelve disciplines simultaneously—plus on-call therapist for the build pipeline—under constant interruption, with shrinking time to think.
When that breaks down, the temptation is to treat it as a hiring or performance problem. But often it’s a boundary and system-design problem. More and more, we’re asking people to do precision work in environments that make precision unnecessarily hard.
AI: relief, or the treadmill speeding up?

Now, AI enters the chat—literally, in most cases. It’s in my IDE, inbox, PR reviews, docs, Slack, and Zoom, saying Tai said this, Arnold asked that, Rashmita was interrupted by Zeus, and Alex is off joining another team again.
The pitch is: “Don’t worry, the machine will carry some of that load.” And AI can help. Like any tool, it’s powerful in the right spots and a waste of resources in the wrong ones.
But like I mentioned last year, there’s a trap: when we increase productivity in an environment that already optimises heavily for throughput, we don’t automatically get breathing room—we often just get higher expectations. If AI lets us generate a first draft in minutes instead of hours, the schedule doesn’t magically stay the same. The schedule tightens. The “time saved” becomes the new baseline. The treadmill speeds up.
And there’s something especially strange about what the industry is choosing to automate. It’s rushing to use AI on the parts of the work that benefit most from being human: taste, clarity, accessibility, creativity, the subtle art of making something understandable and kind.
People auto-generate interfaces, visual language, front-end code, as if the user-facing layer is packaging we should get through as quickly as possible. Meanwhile, they leave humans to do the tedious parts that machines are actually great at: repetitive admin, status churn, mechanical refactors, yak shaving disguised as “alignment.”
It can start to feel like we’re automating the parts that benefit from taste and judgment, while leaving humans with the busywork. And this isn’t just a tooling choice. It’s a value choice.
Front‑end and design aren’t “just polish.” They’re where software meets people—in the messy conditions of real life: different devices, imperfect networks, tight time, high stakes, stress, distraction, disability. So if anything deserves a human touch, it’s the part humans actually touch.
And then the question isn’t “who’s failing?” The question is: what changes when attention becomes a real constraint we design for?
What we do about it

So what do we actually do?
The good news is: I think we can design for this—and the smallest changes tend to pay back the most.
I’m going to share a few ideas—some will fit our context, some won’t—because I’m looking at modern development in general, not just us. If something doesn’t fit, ignore it.
And per Billy’s standing policy: throw tomatoes at the end. Preferably directed at Billy.
Audit the reload cost
First: audit the reload cost. Before we add anything—a new tool, dashboard, process, standup, channel—ask who pays the reload cost.
Nothing is free. Every new tool is another tab. Every dashboard is another place to check. Every process is another thing to remember. And often the costs show up later, distributed across the whole team a little bit at a time, until nobody can point to the single thing that broke them—but everyone’s exhausted anyway.
If it doesn’t reduce cognitive load, it increases it. There is no neutral.
Reclaim “fail fast”
Second: reclaim “fail fast.” It’s been corrupted. The original idea was good: learn quickly, test assumptions, don’t over-invest.
But “fail fast” often became “ship undercooked things and call it iteration”—a permission structure for not thinking. If we’re failing but not learning, we’re not failing fast. We’re just failing.
So reframe it as: learn fast, fail small. Scope experiments so failure is cheap and contained. Build in reflection time after something doesn’t work—not as a luxury, but as the point.
Know who holds what
Third: know who holds what. For any system, be able to answer: who actually understands it end-to-end? Who can wake up at 2 AM and fix it? Then ask: is that a reasonable load for one human brain?
If not, we don’t have a resilient system—we have a single point of cognitive failure. That person will burn out or leave, and then we’ll discover how much institutional knowledge was never written down.
Time the small change
Fourth: time the small change. Literally measure how long a trivially small change takes: a copy tweak, a config update, a one-line fix.
Watch what actually happens: connecting to a VPN; finding the relevant repos; spinning up whatever proxies, servers, and projects are necessary; checking dashboards, undocumented tribal knowledge, and designs; then the build pipeline, the deploy, and the verification.
If a five-minute change takes two hours, that’s not a skill issue. That’s a complexity tax. And it compounds until it eats our roadmap alive while everyone blames individuals for being slow.
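If we want to make the tax visible, a stopwatch script is enough. A minimal sketch follows; the stage names are hypothetical placeholders, and the five-minute “ideal” is an assumption you should set per change.

```typescript
// Minimal stage timer for auditing how long a "trivial" change really takes.
// Stage names below are hypothetical placeholders, not a prescribed workflow.
const stages: { name: string; ms: number }[] = [];
let last = Date.now();

// Call stage() as each step of the change finishes.
function stage(name: string): void {
  const now = Date.now();
  stages.push({ name, ms: now - last });
  last = now;
}

function report(idealMinutes = 5): void {
  let totalMin = 0;
  for (const s of stages) {
    const min = s.ms / 60_000;
    totalMin += min;
    console.log(`${s.name}: ${min.toFixed(1)} min`);
  }
  console.log(`Total: ${totalMin.toFixed(1)} min (complexity tax: ${(totalMin / idealMinutes).toFixed(1)}x the ideal)`);
}

// Example run for a one-line fix:
// stage("connect to VPN"); stage("find the right repo");
// stage("make the edit"); stage("build + CI"); stage("deploy + verify");
// report();
```

The total matters less than watching which stage dominates; that’s usually where the roadmap is leaking.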
Budget attention like infrastructure
Fifth: budget attention like infrastructure. We’d never run a server at 100 percent CPU and act surprised when it falls over. We’d build headroom, because headroom isn’t waste—it’s capacity for spikes.
Meetings aren’t free. Every meeting is a context switch, and every context switch has recovery time. If someone has six meetings in a day, all they have left are fragments too short for deep work, plus cognitive residue from six different conversations competing for attention.
Deep work needs structural protection.
Here’s a metric we can track without turning it into surveillance: the longest uninterrupted block per person per week. If that trends down over time, our workflow is eating our product.
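Computing it is just a gap scan over busy intervals. A minimal sketch, assuming meetings as minute offsets from the start of the day rather than any real calendar API:

```typescript
// Longest uninterrupted block in one day, given busy intervals in minutes
// from the start of the workday. The data shape is an assumption; wire it
// to whatever calendar export you actually have.
interface Meeting { start: number; end: number }

function longestFreeBlock(meetings: Meeting[], dayStart = 0, dayEnd = 480): number {
  const sorted = [...meetings].sort((a, b) => a.start - b.start);
  let cursor = dayStart;
  let longest = 0;
  for (const m of sorted) {
    longest = Math.max(longest, m.start - cursor); // gap before this meeting
    cursor = Math.max(cursor, m.end);              // merge overlapping meetings
  }
  return Math.max(longest, dayEnd - cursor);       // gap after the last meeting
}

// Six 30-minute meetings spread across an 8-hour day:
const day: Meeting[] = [
  { start: 30, end: 60 }, { start: 90, end: 120 }, { start: 180, end: 210 },
  { start: 270, end: 300 }, { start: 330, end: 360 }, { start: 420, end: 450 },
];
console.log(longestFreeBlock(day)); // 60: nothing long enough for deep work
```

Take the weekly maximum per person and watch the trend, not the absolute number.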
Flip the automation
Sixth: flip the automation. When we’re deciding what to automate—especially with AI—ask: are we automating drudgery, or automating judgment? Are we freeing people for deeper work, or just raising throughput expectations?
And if we’re using AI to generate the user-facing layer so humans can spend more time on the machine-facing layer… sit with that. That’s an inverted value stack.
Scope for excellence
Seventh: scope for excellence. Ask, for every role: can this person realistically achieve excellence in their core responsibility with the time and focus they have? And when we staff up quickly, are we setting people up with clear goals, context, and ownership, or are we hoping headcount alone will substitute for clarity?
If the job is scoped so five things get baseline competence and nothing gets depth, we’re building a team spread too thin to do anything well. Sometimes teams still perform heroically under that kind of load—but the cost tends to show up later in quality and people’s health.
Build in recovery
Finally: build in recovery. Not as a nice-to-have—as structure. High intensity must be paired with lower intensity. Sprints without cooldowns aren’t sustainable; they’re death marches we normalised. Crunch without rest doesn’t build resilience. It builds attrition.
People don’t get tougher—they get tired. And “we’ll rest after this” becomes a lie we tell until retention makes the decision for us.
Good software design always starts with constraints

I don’t think it’s accurate to label people as “not good enough” when the system ignores the physics of attention.
What I’m suggesting is this: shape tools, workflows, and expectations that adapt to human cognition. That means fewer needless interrupts, clearer boundaries that reduce how much any one person has to hold in their head, respect for specialisation as a route to excellence, and AI used only where it genuinely helps, built into the system, so humans can spend more time on judgment, empathy, creativity, and care.
Because the brain’s limits aren’t obstacles. They’re design constraints. If you ignore constraints long enough, reality files a bug report. In production.
And good software design always starts with constraints.