Search for "capacity planning formula" and you'll find plenty of clean equations.
Available hours minus scheduled hours equals remaining capacity. Billable hours divided by available hours equals utilization. Plot it on a dashboard. Green means go, red means stop.
The math is simple. The dashboards are satisfying. The decisions feel confident.
Except the math is wrong. Not the arithmetic—the assumptions underneath it.
How capacity formulas work
Most capacity planning tools follow the same logic:
- Define available hours per person (say, 40 per week)
- Schedule work against those hours
- Calculate what's left over
- Display utilization as a percentage
When Sarah has 32 hours scheduled against 40 available, she's at 80% utilization with 8 hours of remaining capacity. Simple.
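In code, the standard formula is about as minimal as math gets. Here's a sketch of the hours-based calculation most tools run under the hood (the function and variable names are illustrative, not any particular tool's API):

```python
# Minimal sketch of the standard hours-based capacity math.
# Names are illustrative, not taken from any specific tool.

def hours_based_capacity(available_hours: float, scheduled_hours: float) -> dict:
    """Remaining capacity and utilization the way most dashboards compute them."""
    remaining = available_hours - scheduled_hours
    utilization = scheduled_hours / available_hours  # fraction of available time booked
    return {"remaining_hours": remaining, "utilization_pct": round(utilization * 100, 1)}

# Sarah: 32 hours scheduled against 40 available
print(hours_based_capacity(40, 32))
# {'remaining_hours': 8, 'utilization_pct': 80.0}
```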
This approach works beautifully in manufacturing. An hour of machine time produces an hour of output. Capacity is literal—you can measure it, schedule against it, and predict results.
Professional services adopted the same logic. Track hours. Calculate utilization. Manage capacity like inventory.
The problem: creative and knowledge work doesn't behave like machines.
The assumption that breaks everything
Capacity formulas assume all hours are equal.
They're not.
Four hours of focused writing is not the same as four hours of client calls. An easy client is not the same as a difficult one. Work that energizes you is not the same as work that drains you.
Two people can both show 80% utilization on the dashboard while having completely different experiences. One has energy to spare. One is drowning. The formula can't tell the difference.
This is why capacity tools confidently tell you "there's room for more" when your team is already stretched. Or "you're maxed out" when someone could easily take on more of the right kind of work.
The numbers are precise. The insight is useless.
What the formulas can't capture
Scheduled hours tell you nothing about what those hours actually cost. Several factors that matter enormously don't show up in the math:
Cognitive load. Some work takes more mental energy per hour. A complex strategy session drains differently than routine production work, even if both take the same time.
Context switching. The hidden cost of interruptions and task juggling. A day fragmented across six clients is harder than a day focused on two, even if the hours are identical.
Emotional labor. Difficult clients, sensitive conversations, relationship management. This work is exhausting in ways that don't correlate with duration.
Complexity variance. The same deliverable for Client A versus Client B can require three times the effort. One client is organized and decisive. The other changes direction weekly.
Experience gaps. What's routine for a senior team member is challenging for a junior one. The work takes different amounts of time—but more importantly, it takes different amounts of capacity.
None of this shows up in scheduled hours. So your capacity math is technically correct but practically meaningless.
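A toy illustration makes the blind spot concrete. The two weeks below are invented, but notice which fields the formula reads and which it ignores:

```python
# Toy illustration: identical scheduled hours, very different weeks.
# All numbers are made up; the point is what the formula can and cannot see.

week_a = {"scheduled_hours": 32, "clients_touched": 2, "difficult_conversations": 0}
week_b = {"scheduled_hours": 32, "clients_touched": 6, "difficult_conversations": 4}

def dashboard_view(week: dict, available_hours: float = 40) -> float:
    # The only field the hours-based formula ever reads:
    return week["scheduled_hours"] / available_hours

print(dashboard_view(week_a), dashboard_view(week_b))
# 0.8 0.8  -> both show 80% utilization; context switching and
#             emotional labor never enter the calculation
```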
The dashboard is green. The team is burned out. The math was right. The insight was wrong.
Why this matters for the decisions that count
The whole point of capacity planning is to answer real questions:
"Can we take on this new client?" The formula says yes—hours are available. But your team is already stretched thin on high-complexity work. Taking on more means something breaks.
"When do we need to hire?" The formula says not yet—utilization is below target. But you're about to lose someone because their workload feels unsustainable, even if the numbers look fine.
"Who has room for more work?" The formula points to Jordan—lowest utilization on the team. But Jordan's clients are twice as demanding as the numbers suggest. Adding more pushes them over the edge.
These are the questions that matter. Capacity formulas give you confident wrong answers.
The cost: bad hiring timing, preventable turnover, lost clients, a team that doesn't trust the data because the data doesn't match their reality.
What works instead: effort, not hours
Instead of tracking time, score effort.
Effort captures what hours miss. When you ask "how demanding is this work when it shows up in someone's week?" you get answers that reflect cognitive load, complexity, emotional labor—all the things that determine whether someone has real capacity or just empty hours.
Here's how it works:
Effort scores attach to work types, not people. Monthly reporting is a 3. Campaign setup is a 5. Social media management is a 2. The work is what it is, regardless of who does it.
Role levels have different capacity targets. A junior team member might have a target of 400 total effort. A senior might have 600. This reflects what you already know: experienced people can handle more.
The ratio is what makes it work. That monthly report (effort score of 3) represents a different proportion of capacity depending on who's doing it:
- For a junior at 400 capacity, it's 0.75% of their bandwidth
- For a senior at 600 capacity, it's 0.5% of their bandwidth
The senior completes it faster, yes. But more importantly, it takes less out of them relative to what they can handle. The math reflects reality without anyone tracking hours.
Compare this to time-based systems. The junior logs 4 hours, the senior logs 2 hours. Same work, different time, confusing utilization percentages. Did the senior work less hard? Are they less utilized? The numbers can't tell you.
With effort scoring, the same work has the same score. Different capacity targets mean different proportions. The math just works.
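Here's a minimal sketch of that arithmetic using the numbers above. The catalog entries and targets are examples, not prescriptions; yours come from your own team's calibration:

```python
# Sketch of effort-based capacity math using the example numbers above.
# Scores and targets are illustrative; real ones come from team calibration.

WORK_CATALOG = {          # effort score per work type, same for everyone
    "monthly_reporting": 3,
    "campaign_setup": 5,
    "social_media_management": 2,
}

CAPACITY_TARGETS = {      # total effort a role level can absorb
    "junior": 400,
    "senior": 600,
}

def effort_share(work_type: str, role: str) -> float:
    """Proportion of a person's capacity one instance of this work consumes."""
    return WORK_CATALOG[work_type] / CAPACITY_TARGETS[role]

print(f"{effort_share('monthly_reporting', 'junior'):.2%}")  # 0.75%
print(f"{effort_share('monthly_reporting', 'senior'):.2%}")  # 0.50%
```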
The conversation the formulas skip
Effort scoring requires something most teams haven't done: actually agreeing on what work costs.
Not manager estimates based on assumptions. Not historical averages from time tracking. A real conversation where the team decides together how demanding each type of work is.
This is uncomfortable. Someone will say "client reporting is a 2" and someone else will say "it's a 4." Don't average it. Dig in. Why do you see it differently?
You'll often find that experience shapes perception—and that's exactly the insight you need. The work might be a 3 for the team overall. But now you understand that newer team members experience it as harder, which is already accounted for in their lower capacity target.
This conversation is the foundation. Without it, you're calculating capacity of undefined work. With it, you can actually answer "can we handle more?" and "when do we hire?"
Making the shift
You don't have to throw out your tools entirely. But you need to build the foundation they're missing:
- Define what work actually exists. Not projects or clients—types of work that repeat. This becomes your work catalog.
- Score effort collaboratively. The team decides together. Disagreements surface insights. The numbers reflect real experience, not management guesses.
- Set capacity targets by role level. What can a junior handle? A mid-level? A senior? Now you have a denominator that means something.
Then—and only then—the capacity math becomes useful. You're no longer tracking utilization of undefined work against arbitrary targets. You're measuring real effort against realistic capacity.
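At that point, "can we take on this new client?" becomes a comparison of effort rather than hours. A hypothetical sketch, reusing the example catalog and targets from earlier (the client's work mix and the current load are invented for illustration):

```python
# Hypothetical sketch: answering "can we take on this new client?"
# with effort instead of hours. Work mix and current load are invented.

WORK_CATALOG = {"monthly_reporting": 3, "campaign_setup": 5, "social_media_management": 2}
CAPACITY_TARGETS = {"junior": 400, "senior": 600}

def can_absorb(role: str, current_effort: int, new_work: dict) -> bool:
    """True if the new client's work mix fits within the remaining effort budget."""
    added = sum(WORK_CATALOG[work] * count for work, count in new_work.items())
    return current_effort + added <= CAPACITY_TARGETS[role]

# A senior already carrying 570 effort, offered a client needing
# 4 campaign setups and 8 social media tasks per cycle:
print(can_absorb("senior", 570, {"campaign_setup": 4, "social_media_management": 8}))
# False -> 570 + 36 = 606, which exceeds the 600 target
```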
The formulas start working because you've fixed the assumptions underneath them.
Capacity formulas aren't wrong. They're just incomplete. They work great once you've had the conversations about what work actually costs.
We put together a guide with the 10 capacity conversations your team isn't having—including "How demanding is this work—really?" and "When do we need to hire?"
Or if you're ready to move beyond formulas: Try Capysaurus free




