
The Metric Definition Governance Playbook
- Jason B. Hart
- Revenue operations
- April 8, 2026
- Updated April 7, 2026
What Is Metric Definition Governance?
Metric definition governance is the operating discipline that decides what a number means, which system is allowed to produce it, who approves changes, and how the rest of the company is supposed to use it.
That sounds obvious until you watch one SaaS company use pipeline, bookings, ARR, or CAC three different ways in the same quarter.
Most teams do not have a dashboard problem first. They have a definition problem that eventually shows up as a dashboard problem.
If marketing, sales, finance, and data all have defensible reasons for using slightly different versions of the same metric, the fix is not just a prettier report. The fix is deciding which definition is official for which decision and giving that decision an operating home.
Why Metric Definitions Drift So Fast
Definition drift usually looks small at first.
- a finance adjustment gets added to the board deck but not the warehouse model
- a sales stage gets renamed in the CRM without a downstream governance step
- marketing keeps using sourced pipeline logic from last year because nobody ever approved a new standard
- RevOps builds a practical spreadsheet workaround that becomes the de facto source of truth
None of those changes feel dramatic in isolation. Together, they turn routine reporting into negotiation.
The Expensive Version of the Problem
Here is what metric conflict often looks like in practice:
| Team | What they say the metric means | Why the conflict shows up |
|---|---|---|
| Marketing | Sourced pipeline tied to campaign influence rules | The definition is optimized for channel evaluation, not finance-grade reporting |
| Sales | Bookings or committed revenue | The definition is optimized for forecast and rep accountability |
| Finance | Recognized revenue or finance-approved ARR | The definition is optimized for accounting and formal planning |
| RevOps / Data | Warehouse-modeled pipeline or revenue view | The definition is optimized for consistency and reproducibility |
The mistake is not that these teams care about different things. The mistake is pretending one unlabeled metric should answer every one of those questions at once.
When You Need a Governance Playbook, Not Just a New Dashboard
You need metric definition governance when:
- leadership meetings start with caveats about whose number is being shown
- the board deck uses a number that teams cannot reproduce outside one spreadsheet
- CAC, pipeline, or revenue changes depending on which system somebody screenshots
- compensation or budget decisions are being made on definitions that are still politically contested
- every reporting disagreement gets routed to the data team even when the real issue is ownership
If those patterns sound familiar, you are not dealing with a one-off reconciliation project. You are dealing with an operating system gap.
Start Small: Which Metrics Actually Need Governance First?
Do not govern everything. Govern the numbers that create the most expensive confusion.
A good first pass is usually two to five metrics pulled from this list:
- net new ARR
- bookings
- sourced pipeline
- qualified pipeline
- CAC
- recognized revenue
- gross margin by channel or segment
A useful selection rule is simple:
If a metric regularly changes a budget, forecast, board answer, or compensation conversation, it deserves governance before a hundred lower-stakes KPIs do.
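If it helps to make that rule concrete, here is a minimal sketch of the filter as code. Everything here is illustrative: the `decisions_influenced` field, the function name, and the example metrics are assumptions for the sketch, not part of any tool.

```python
# Sketch of the selection rule above; all names are illustrative assumptions.
HIGH_STAKES = {"budget", "forecast", "board_answer", "compensation"}

def needs_governance_first(metric: dict) -> bool:
    """A metric earns early governance if it regularly changes a high-stakes conversation."""
    return bool(HIGH_STAKES & set(metric.get("decisions_influenced", [])))

candidates = [
    {"name": "net new ARR", "decisions_influenced": ["board_answer", "forecast"]},
    {"name": "blog pageviews", "decisions_influenced": ["content_calendar"]},
]
governed_first = [m["name"] for m in candidates if needs_governance_first(m)]
# governed_first keeps "net new ARR" and drops "blog pageviews"
```

The point of the sketch is the asymmetry: one high-stakes decision is enough to qualify a metric, while a long list of low-stakes uses is not.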
The Five-Part Governance Playbook
1. Run one alignment workshop before you start writing definitions
Do not begin with documentation. Begin with comparison.
The first workshop should force the room to answer:
- what does this metric mean here?
- what decision is it used for?
- what system currently produces it?
- what exclusions or caveats are hiding in the current version?
- who believes the current version is misleading, and why?
If you skip that workshop, you usually end up documenting one team’s preferred definition and calling it governance.
2. Create a definition record that is usable in the real world
A metric definition document should be short enough to read and specific enough to survive pressure.
What should go into a metric definition record?
Use one definition record per governed metric.
| Field | What it needs to answer |
|---|---|
| Metric name | What is the exact label we will use publicly and internally? |
| Business definition | What does the number actually mean in plain language? |
| Formula / logic | How is it calculated? |
| System of record | Which system or model is authoritative? |
| Owner | Who approves changes or exceptions? |
| Refresh cadence | How often is it updated? |
| Included / excluded elements | What is explicitly in or out? |
| Primary use case | Which decision is this definition designed to support? |
| Confidence level | Directional, decision-grade, or board-grade? |
| Known caveats | What should users know before leaning too hard on the number? |
That last row matters more than most teams think. A definition is stronger when it names its limits instead of pretending uncertainty is gone.
A simple metric definition template you can actually use
If you want a practical starting format, keep it this plain:
Metric: Net New ARR
Definition: New annual recurring revenue added in the period, net of churned recurring revenue and excluding one-time services.
System of record: `fct_revenue_movements` model in the warehouse.
Owner: RevOps, with finance approval for changes.
Refresh cadence: Daily.
Primary use case: Leadership reporting and quarterly planning.
Confidence level: Decision-grade.
Known caveats: Multi-year deal normalization is still reconciled manually before board reporting.
That is already more useful than most internal wiki pages teams call governance.
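If you want the same record in machine-readable form alongside the prose version, the template maps cleanly onto a small data structure. This is a sketch under assumptions: the `MetricDefinition` class and its field names are illustrative, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    """One governed metric, mirroring the one-page template above. Illustrative only."""
    name: str
    definition: str
    system_of_record: str
    owner: str
    refresh_cadence: str
    primary_use_case: str
    confidence_level: str  # "directional" | "decision-grade" | "board-grade"
    known_caveats: list = field(default_factory=list)

net_new_arr = MetricDefinition(
    name="Net New ARR",
    definition=("New annual recurring revenue added in the period, net of churned "
                "recurring revenue and excluding one-time services."),
    system_of_record="fct_revenue_movements model in the warehouse",
    owner="RevOps, with finance approval for changes",
    refresh_cadence="Daily",
    primary_use_case="Leadership reporting and quarterly planning",
    confidence_level="decision-grade",
    known_caveats=["Multi-year deal normalization is reconciled manually "
                   "before board reporting."],
)
```

A structured record like this is easy to version-control next to the warehouse models, which keeps the definition and the implementation drifting together instead of apart.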
3. Set a governance RACI so changes stop happening by accident
Governance fails when everyone is involved and nobody is accountable.
Who should be responsible for metric governance?
A lightweight RACI is usually enough.
| Governance activity | RevOps | Finance | Marketing / Sales | Data / Analytics | Executive sponsor |
|---|---|---|---|---|---|
| Propose a new or revised definition | R | C | C | C | I |
| Approve the canonical definition | A | A/C | C | C | I |
| Implement reporting/model changes | C | I | I | R | I |
| Communicate the new definition | R | C | C | I | I |
| Escalate unresolved conflicts | R | C | C | C | A |
| Run quarterly governance review | R | C | C | C | I |
A few practical notes:
- RevOps often owns the operating process because it sits closest to the recurring reporting conflict.
- Finance should be tightly involved when the metric enters planning, investor, or board use.
- Data should not be made the sole owner when the real disagreement is business meaning rather than SQL.
- One executive sponsor matters when the room cannot settle priority or tradeoffs on its own.
4. Give definition changes a visible path
If the only way a definition changes is informally, the company will create shadow governance whether it means to or not.
A usable change path usually looks like this:
- someone proposes a change and names the reason
- the owner checks whether the change affects a governed decision
- impacted teams review the tradeoff
- the approved definition record is updated
- reporting and downstream assets are updated together
- the change is communicated before the next decision cycle depends on it
This is what keeps a CRM tweak from quietly becoming a board-level reporting fork.
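The change path above can be sketched as a simple gate: a proposed change ships only when every step has been completed. The step names and function signatures are assumptions for illustration, not a prescribed schema.

```python
# Illustrative gate for definition changes; step names are assumptions.
REQUIRED_STEPS = [
    "reason_documented",
    "owner_impact_check",
    "impacted_teams_reviewed",
    "definition_record_updated",
    "reporting_updated",
    "change_communicated",
]

def missing_steps(change_request: dict) -> list:
    """Return the governance steps not yet completed for a proposed change."""
    return [s for s in REQUIRED_STEPS if not change_request.get(s)]

def can_ship(change_request: dict) -> bool:
    """A definition change is ready only when no required step is missing."""
    return not missing_steps(change_request)
```

Even as pseudocode in a ticket template, an explicit checklist like this makes the informal shortcut visible: if a change skipped a step, you can name which one.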
5. Review governed metrics quarterly
Governance is not finished when the document exists. It is finished when the document still matches reality three months later.
What should a quarterly metric governance review cover?
Keep the review short and operational.
| Review prompt | Why it matters |
|---|---|
| Did any source system or stage logic change this quarter? | Small workflow changes often break reproducibility before anyone notices |
| Did any team start using a local variant of the metric? | That is usually the first sign the official definition is not serving the real workflow |
| Are the owner and approver still the right people? | Ownership drifts after org changes faster than most teams expect |
| Did any known caveat become a blocker? | Caveats should shrink over time, not quietly become permanent |
| Does the metric still match its intended decision use case? | A metric that worked for planning may still be wrong for board reporting |
This is where governance stops being a one-time workshop and becomes a repeatable operating behavior.
A 90-Minute Metric Governance Workshop Agenda
If you need a practical way to get started, here is a clean first-session structure.
0-10 minutes: frame the problem
State the rule clearly:
We are not here to prove one team is right. We are here to decide which definition fits which decision and which version becomes canonical.
10-25 minutes: compare current definitions
Have each team show:
- the metric definition it currently uses
- the report or dashboard it trusts most
- the system behind that number
- the decision it uses that number for
25-45 minutes: trace the current system of record
Pick the most contentious metric and work backward:
- where is it shown?
- where is it calculated?
- which upstream systems feed it?
- where do manual adjustments happen?
- who currently owns the definition?
45-65 minutes: make the actual governance decisions
For each in-scope metric, decide:
- canonical definition
- official system of record
- owner / approver
- key exclusions
- confidence level
- follow-up implementation work
65-80 minutes: assign the operating model
Confirm:
- the RACI
- the change path
- the communication plan
- the quarterly review cadence
80-90 minutes: lock the next actions
End with three things only:
- which metric definitions are now official
- which systems or models need follow-up work
- how the decisions will be communicated to the broader company
That is enough to create momentum without turning the first workshop into governance theater.
Directional vs. Decision-Grade vs. Board-Grade Definitions
A useful governance system does not just define the metric. It also defines how hard the business can lean on it.
| Confidence level | What it means | When to use it |
|---|---|---|
| Directional | Good enough to spot a pattern or triage a question | Fast operating read, channel checks, early exploration |
| Decision-grade | Reliable enough to support budget, prioritization, or operating changes | Quarterly planning, team targets, channel allocation |
| Board-grade | Reconciled enough for executive commitments and formal reporting | Board materials, investor updates, official leadership narratives |
This framing is useful because it stops people from pretending every metric needs perfection before it becomes usable. It also stops the opposite problem, where clearly fragile numbers get presented with executive certainty.
What to communicate after the workshop
After you settle the definitions, send one short written summary.
It should include:
- the official definition for each governed metric
- the system of record for each
- who owns change approval
- what changed from the previous approach
- what follow-up work is still in flight
- where questions or exceptions should go
Keep it plain. Governance messages should reduce interpretation, not create more of it.
Where this playbook fits in the broader single-source-of-truth journey
This playbook is not the entire single-source-of-truth program. It is the governance layer inside it.
If the bigger journey is:
- audit what exists
- align on definitions
- architect the data model
- build the reporting layer
- govern it over time
then this article lives deepest in steps 2 and 5.
That is why a company can have solid warehouse work and still have weak metric trust. The build is necessary. The governance keeps it from drifting back into politics.
Download the Governance Kit
The PDF version includes the workshop agenda, one-page definition template, lightweight RACI, quarterly review checklist, and rollout prompts in one place.
Download the Metric Definition Governance Kit (PDF)
If your teams are already arguing about whose number counts, start with Three Teams, Three Numbers. If the definition fight is exposing deeper warehouse, CRM, or system-of-record issues, the next step is usually Data Foundation. For proof that this kind of alignment work can stick, review the B2B SaaS single-source-of-truth case study.
Common questions about metric definition governance
What is metric definition governance, really?
It is the operating discipline that decides what a number means, which system is allowed to produce it, who approves changes, and how the rest of the company uses it.
How many metrics should we govern first?
Start with two to five, chosen because they regularly change a budget, forecast, board answer, or compensation conversation.
Who should own metric definitions?
RevOps usually owns the operating process, finance is tightly involved once a metric enters planning or board use, and one executive sponsor handles escalation.
What if the systems are the real problem, not the definitions?
Governance still decides which definition is canonical, but if the fight keeps exposing warehouse, CRM, or system-of-record gaps, the data foundation has to be fixed alongside it.

About the author
Jason B. Hart
Founder & Principal Consultant
Founder & Principal Consultant at Domain Methods. Helps mid-size SaaS and ecommerce teams turn messy marketing and revenue data into decisions leaders trust.
