
The SPACE Framework vs. DORA: Measuring Developer Productivity Without Destroying It

Balanced metrics without surveillance. When to use DORA. When to use SPACE. How to avoid turning metrics into hammers.

Craig Hoffmeyer · 8 min read


I've watched good engineering leaders turn into surveillance autocrats the moment they discover DORA metrics.

Suddenly, it's not about shipping. It's about "hitting our deployment frequency target" or "keeping lead time under 24 hours." The metrics become the goal instead of the proxy for the goal.

This is when you know you've made a mistake.

DORA metrics (Deployment Frequency, Lead Time for Changes, Mean Time to Recovery, Change Failure Rate) are real. They correlate with business outcomes. But they're incomplete. They tell you how fast you ship, not what you're building or who's at capacity. If you optimize only for speed, you end up with burnout, technical debt, and eventually, a slower system.

That's where SPACE comes in. Not instead of DORA, but alongside it.

What DORA Gets Right (and Why It Matters)

DORA came out of the DevOps Research and Assessment program's multi-year State of DevOps research (the team was later acquired by Google). The signal is strong: high-performing teams deploy more frequently, recover faster from failures, and ship more reliably. These behaviors correlate with revenue growth and customer satisfaction.

The four metrics:

Deployment Frequency: How often do you ship to production? Daily? Weekly? Monthly? Higher frequency (daily to weekly) correlates with better outcomes.

Lead Time for Changes: From first commit to production. Two days? Two weeks? Months? Shorter is better. It means you're getting feedback faster.

Mean Time to Recovery (MTTR): How long does it take to fix a production incident? 15 minutes? Two hours? A day? Faster recovery means less damage, less toil.

Change Failure Rate: What percentage of deployments cause a production incident? 5%? 15%? Lower is better.
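All four numbers fall out of two simple records: when a change started, when it shipped, and what broke. A minimal sketch of the arithmetic, assuming hypothetical `Deploy` and `Incident` shapes (your deploy tool and incident tracker will have their own):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deploy:
    first_commit_at: datetime   # when work on the change began
    deployed_at: datetime       # when it reached production
    caused_incident: bool       # did this deploy trigger an incident?

@dataclass
class Incident:
    started_at: datetime
    resolved_at: datetime

def dora_metrics(deploys: list[Deploy], incidents: list[Incident], days: int = 30) -> dict:
    """Compute the four DORA metrics over a window of `days`."""
    lead_times = [d.deployed_at - d.first_commit_at for d in deploys]
    return {
        "deployment_frequency_per_day": len(deploys) / days,
        "lead_time": sum(lead_times, timedelta()) / len(deploys),
        "mttr": sum((i.resolved_at - i.started_at for i in incidents),
                    timedelta()) / len(incidents),
        "change_failure_rate": sum(d.caused_incident for d in deploys) / len(deploys),
    }
```

Averages are the simplest choice here; in practice medians or percentiles resist outliers better, but the definitions are the same.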

These matter because they're operational leading indicators. High DORA performance predicts business success. It's not magic; it's physics. Fast feedback loops compound. Small batches reduce risk. Quick recovery means you learn from failures instead of drowning in them.

The problem: DORA only looks at speed. It doesn't tell you if your team is burned out, if they're building the right things, or if key engineers are blocked.

What SPACE Adds (and Why It's Needed)

The SPACE framework came out of Microsoft and GitHub research in 2021 (Forsgren, Storey, Maddila, Zimmermann, Houck, and Butler). It's a deliberate answer to the surveillance problem. It defines five dimensions:

Satisfaction & Well-being: Is your team happy? Burned out? Engaged? Measured via surveys and retention.

Performance: Not just DORA metrics, but business outcomes. Are you shipping features users care about? Is revenue growing? Impact per engineer.

Activity: What are engineers actually doing? Building features? Fixing bugs? Fighting fires? Time-tracking, PR volume, commit frequency. (This is where teams often screw up.)

Communication & Collaboration: Is the team working together? Are async workflows effective? Is knowledge getting shared? Measured via surveys, PR review time, pairing sessions.

Efficiency & Flow: Can engineers focus for blocks of time? How many interruptions? Context switching tax? Measured via meeting load, Slack activity, calendar fragmentation.

The genius of SPACE is that it's balanced. You can't max all five by sprinting harder. You hit Satisfaction by removing interruptions. You hit Performance by shipping the right things. You hit Efficiency by cutting meeting load.

These metrics are supposed to inform leadership decisions, not performance reviews. They're about system health, not individual productivity.

A Concrete Example: The Team That DORA-ed Themselves Into the Ground

I worked with a 15-person team at a B2B SaaS company. Their leader was obsessed with DORA. "We need to be a high-performer," she'd say. Deployment frequency was the goal.

They optimized for it. By month three, they were deploying 8x per day. Lead time was 4 hours. Change Failure Rate was 2%. By DORA, they were world-class.

But:

  • Two engineers burned out and quit
  • Technical debt had ballooned (fast shipping meant skipping tests, deferring refactors)
  • They weren't building the right things (high velocity in the wrong direction)
  • Their MTTR was only good because failures didn't matter; users didn't care about half the features being deployed

I asked them to measure SPACE for a month. The results:

  • Satisfaction: 4.2/10. Half the team wanted to leave.
  • Performance: Revenue per engineer was declining even as shipping increased.
  • Activity: 60% of deploys were small bug fixes or technical debt cleanup, not features.
  • Communication: Asynchronous PRs were bottlenecked; no one had time to review properly.
  • Efficiency: 18 meetings per week for a 15-person team. Context-switching was constant.

DORA was lying. Or rather, DORA was right about speed, but speed alone was harming the system.

We dialed back. Deployment frequency fell to 2–3x per week. Meeting load dropped to 6 per week. Engineers started shipping fewer, more substantial features. Revenue per engineer went back up. Satisfaction went to 7.8/10 in three months.

DORA metrics were still good. But SPACE metrics were what made the business sustainable.

DORA vs. SPACE: How to Use Both

Here's the real insight: DORA and SPACE aren't competing frameworks. They're looking at different things.

Use DORA for operational health: Is your delivery pipeline working? Are you shipping frequently? Can you recover from failures? These are table stakes. If DORA metrics are bad, you have a technical problem.

Use SPACE for team health and strategy: Are we building the right things? Is the team sustainable? Are we communicating? Where are the bottlenecks? These tell you if you're optimizing for the right outcome.

The right cadence:

  • DORA metrics: weekly or monthly. Track trend lines. Are they stable? Getting better? These should inform your CI/CD work and incident response, not individual performance.
  • SPACE metrics: monthly or quarterly. Measure satisfaction via pulse surveys. Measure performance via business outcomes. Activity is easy to track but easy to misinterpret; use it alongside satisfaction. Communication and Efficiency can be measured quarterly.
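"Track trend lines" can be as simple as comparing the first and second half of a weekly series. A toy heuristic (my own sketch, not part of either framework; it reports direction only, since "up" is good for deployment frequency but bad for lead time):

```python
def trend(samples: list[float], tolerance: float = 0.05) -> str:
    """Classify a weekly metric series as 'up', 'down', or 'flat' by
    comparing the mean of the earlier half against the later half."""
    if len(samples) < 2:
        return "flat"
    half = len(samples) // 2
    earlier = sum(samples[:half]) / half
    later = sum(samples[half:]) / (len(samples) - half)
    # Within `tolerance` relative change, call it stable.
    if abs(later - earlier) / max(abs(earlier), 1e-9) < tolerance:
        return "flat"
    return "up" if later > earlier else "down"
```

Deliberately crude: it won't catch oscillation or a single bad week, which is why it supplements the chart rather than replacing it.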

The rule I give all my clients:

If DORA is perfect but SPACE is suffering, you're optimizing for the wrong thing. Course-correct.

If SPACE is perfect but DORA is lagging, you probably have a technical or process problem. Fix the CI, reduce batch sizes, invest in testing.

If both are good, you're in the top 5% of engineering organizations. Stay here.

The Counterpoint: Metrics Can Mislead No Matter What

Here's the honest truth: metrics can be gamed, misinterpreted, or used as weapons regardless of which framework you pick.

The fix isn't a better framework. It's leadership discipline.

Don't use metrics to measure individuals. "This engineer has low commit frequency." That's not a signal; that's surveillance. Use metrics to understand system health.

Don't optimize for the metric itself. If you care about lead time, the next step isn't "make lead time shorter"; it's "what's blocking fast shipping?"

Don't forget context. A team supporting a legacy monolith will have different DORA metrics than a team building a new microservice. That doesn't mean one is better.

Don't use SPACE as permission to be slow. "Our team is happy, so it's okay we deploy monthly." No. Happy teams should also ship frequently. If they're not, there's a technical problem.

Metrics are inputs to decisions, not decisions themselves.

Where I Come In

I've built DORA and SPACE dashboards for a dozen engineering teams. I know where they help (identifying CI bottlenecks, spotting burnout patterns early) and where they mislead (gaming activity metrics, ignoring context). I also know how to set them up so they inform leadership without becoming a surveillance tool.

If you're trying to measure developer productivity or understand why your team feels slow despite looking good on paper, I can help you design the right metrics dashboard.

Let's talk about it.

Action Checklist

  • Measure your DORA metrics for the last month. Deployment frequency, lead time, MTTR, change failure rate. Write them down.
  • Run a satisfaction survey. Ask: "On a scale of 1–10, how engaged are you?" and "Do you plan to stay at this company for the next 12 months?"
  • Measure meeting load. How many hours of meetings is your team in per week? If it's >8, you have a communication problem.
  • Pick one DORA metric and one SPACE metric you want to improve. Don't try to improve everything at once.
  • Set a baseline. Measure for a month.
  • Decide: is the bottleneck technical (DORA) or systemic (SPACE)? Work on that first.
  • Recheck in a month. Are things improving? Are you seeing trade-offs?
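The survey and meeting-load steps above can be folded into one baseline snapshot. A sketch under assumed shapes (the two survey questions and the 8-hour threshold come straight from the checklist; the function and record format are hypothetical):

```python
from statistics import mean

def team_health_snapshot(responses: list[tuple[int, bool]],
                         meeting_hours_per_week: float) -> dict:
    """Summarize pulse-survey responses and meeting load.

    Each response is (engagement 1-10, plans_to_stay_12_months).
    """
    return {
        "avg_engagement": mean(score for score, _ in responses),
        # Fraction of respondents planning to stay 12 months.
        "retention_intent": sum(stay for _, stay in responses) / len(responses),
        # Threshold from the checklist: >8 hours/week flags a problem.
        "meeting_overload": meeting_hours_per_week > 8,
    }
```

Recording a snapshot like this monthly gives you the baseline the checklist asks for, and makes the "recheck in a month" step a diff instead of a guess.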
