What Should You Fix First: Definitions, Source Data, or Dashboards?


What is the first thing to fix when reporting trust breaks?

The first thing to fix is the layer creating the next expensive decision failure: metric definitions when teams mean different things by the same number, source data when the records or logic underneath the number are unstable, and dashboards only when the underlying truth is already trusted but the presentation still blocks action.

That distinction sounds obvious when written down. In practice, most teams skip it.

They say the dashboard is wrong. What they usually mean is one of three different things:

  • marketing, sales, finance, and data are not using the same definition
  • the source records are messy enough that nobody trusts the rollup
  • the output is technically accurate enough, but still not helping anyone decide what to do next

Those are not the same job. Treating them like the same job is how a lot of companies end up rebuilding the dashboard before they settle the argument underneath it.

Salesforce’s State of Data and Analytics report says data and analytics leaders estimate that 26% of their data is untrustworthy [1]. That is a useful gut check, because the reporting fight is rarely only about charts. A lot of the time, the chart is just the visible place where hidden trust debt finally becomes social.

Why teams pick the wrong first move

The wrong first move usually feels easier, not smarter.

A dashboard rebuild sounds concrete. A warehouse cleanup sounds technical. A definitions workshop sounds like progress.

The trap is that each one can be the wrong answer if it is chosen before the team names the actual failure layer.

Here is the pattern I see most often:

  1. a forecast, planning, or board conversation goes sideways
  2. everyone agrees the reporting is not working
  3. the team grabs the most visible object in the system, usually the dashboard
  4. the real trust break survives and comes back in the next meeting

That loop is expensive because it creates motion without relief.

Agile Data’s summary of Gartner research puts the average annual cost of poor data quality at $12.9 million per organization [2]. Even if your company is nowhere near that exact number, the point still lands: fixing the wrong layer first creates more labor, more caveats, and more executive patience burn.

The three failure layers

If you want to decide what to fix first, separate the problem into three layers.

1. Definitions

A definitions problem shows up when people use the same words for different business realities.

You hear things like:

  • “pipeline is down”
  • “marketing-sourced revenue is off”
  • “the dashboard does not match finance”
  • “the board number is wrong”

Then you ask what the metric includes, excludes, or is supposed to support, and the room immediately forks.

That is not a chart problem. That is not even a source-data problem yet. That is a business-definition problem.

The operator clue is that the argument becomes political before it becomes technical. People are defending meanings, not records.

2. Source data

A source-data problem shows up when the intended definition is at least directionally known, but the systems feeding it are unreliable.

That can mean:

  • CRM stages are inconsistent
  • campaign mapping is incomplete
  • finance timing rules do not line up with commercial reporting
  • identity resolution is brittle
  • warehouse models are patching around upstream mess
  • the trusted version still depends on exports and spreadsheets

This is where teams often say, “The dashboard is wrong,” when the dashboard is only repeating what the source path gave it.

The operator clue is repeated manual rescue work. Somebody always has to patch the number before the meeting.

3. Dashboards and reporting artifacts

A dashboard problem is real, but it is narrower than most teams think.

It usually means the underlying number is good enough for the decision, but the output is still failing because:

  • the audience is wrong
  • the layout hides the signal
  • the cadence is off
  • the report is trying to serve too many jobs at once
  • the team actually needed a decision brief, board pack, or workflow alert instead of a dashboard

The operator clue is this: people trust the number more than they trust the artifact. They are not asking whether the metric is true. They are asking why the report still does not help.

If that is the real issue, the dashboard may actually be first in line. But only then.

The fast triage question I use first

Before I ask about tools, charts, or even data models, I ask one question:

What exactly happened in the last important meeting that made everyone say the reporting was broken?

That answer usually tells you the first move faster than a long architecture review.

| What happened in the meeting? | Most likely first layer to fix | Why |
| --- | --- | --- |
| Two teams used the same metric name but defended different logic | Definitions | The room is fighting over meaning before it gets to implementation |
| Everyone agreed what the metric should mean, but the records still did not line up | Source data | The logic path underneath the number is unstable |
| The number was trusted enough, but the report still created confusion or no action | Dashboard / artifact | The issue is the delivery shape, not the core truth |
| The report needed a spoken disclaimer every time it appeared | Definitions or source data | Presentation is not the first blocker if caveats are still doing the real work |
| Someone exported the dashboard into a spreadsheet before leadership saw it | Source data first, artifact second | The patching behavior is diagnosing the trust break for you |

That is the first cut. Not the whole diagnosis, but enough to stop defaulting to a rebuild.

A practical scoring model: words, records, or presentation?

When the problem is muddy, score the symptoms across the three layers.

Signals that definitions should go first

Definitions should usually go first when you see three or more of these signs at once:

  • the same metric label means different things in adjacent meetings
  • every team has a different “trusted” version for the same use case
  • the debate turns into ownership and use-case questions before anyone looks at SQL or dashboards
  • leaders ask for one number, but nobody can agree which version is fit for which decision
  • the same caveat keeps showing up because the metric was never actually locked

This is the lane for Three Teams, Three Numbers. If the number itself has become a cross-functional argument, trying to fix the chart first just turns the chart into the argument.

Signals that source data should go first

Source data should usually go first when you see signs like these:

  • the definition is mostly understood, but underlying records still fail reconciliation
  • the warehouse and source systems diverge in repeatable ways
  • finance or RevOps keeps maintaining side logic outside the official reporting path
  • the business can describe the metric, but the team cannot reproduce it cleanly without manual cleanup
  • the same freshness, stage, mapping, or join issue keeps resurfacing every cycle

This is the lane for Data Foundation. The team does not need prettier reporting. It needs a more reliable path from record to decision.

BigDATAwire’s summary of dbt Labs’ State of Analytics Engineering 2024 notes that 57% of respondents named data quality as one of the top three challenges in data preparation, up from 41% in 2022 [3]. That maps closely to what this layer feels like in practice: the logic exists, but the records keep stealing confidence from the output.

Signals that the dashboard should go first

The dashboard or reporting artifact should go first only when the underlying truth is already good enough and the output still fails.

That usually looks like:

  • people trust the number, but the meeting still drags
  • the report serves too many audiences at once
  • operators need a workflow trigger while leadership keeps getting a dashboard
  • the dashboard hides the one comparison or threshold that matters
  • the issue is visual hierarchy, reporting cadence, or artifact mismatch rather than data confidence

This is where pieces like The Reporting Artifact Hierarchy become more useful than another trust workshop. The number may be fine. The format may not be.
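The three signal lists above can be turned into a rough tally. The sketch below is hypothetical, not a tool from the worksheet: the signal phrasings are abbreviated, and the tie-break order (definitions, then source data, then dashboards) is my reading of the FAQ guidance at the end of this piece.

```python
from __future__ import annotations

# Hypothetical sketch of the symptom-scoring triage. Signal phrasings are
# abbreviated from the lists above; in a real session you would check them
# off as a group, not feed them to a script.
SIGNALS = {
    "definitions": {
        "same metric label means different things in adjacent meetings",
        "each team keeps its own trusted version of the same metric",
        "the debate turns political before it turns technical",
    },
    "source data": {
        "records fail reconciliation even with an agreed definition",
        "warehouse and source systems diverge in repeatable ways",
        "someone manually patches the number before every meeting",
    },
    "dashboards": {
        "the number is trusted but the meeting still drags",
        "one report is trying to serve too many audiences",
        "the real need is a brief or alert, not a dashboard",
    },
}

# Tie-break: when scores are even, start with the layer closest to meaning
# (definitions), then records, then presentation.
PRIORITY = ["definitions", "source data", "dashboards"]

def triage(observed: set[str]) -> str:
    """Return the layer with the most observed symptoms, breaking ties by PRIORITY."""
    scores = {layer: len(signals & observed) for layer, signals in SIGNALS.items()}
    return max(PRIORITY, key=lambda layer: (scores[layer], -PRIORITY.index(layer)))

observed = {
    "same metric label means different things in adjacent meetings",
    "records fail reconciliation even with an agreed definition",
    "someone manually patches the number before every meeting",
}
print(triage(observed))  # -> source data (two symptoms beat one)
```

The point of the score is not precision. It is forcing the room to name which layer most of the symptoms actually belong to before anyone opens a BI tool.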

Definitions vs. source data vs. dashboards at a glance

| Layer to fix first | What it sounds like | What is actually broken | Best first move |
| --- | --- | --- | --- |
| Definitions | “We all use the same number, but it never means the same thing.” | Metric scope, use case, exclusions, owner, confidence level | Run a metric-alignment session and lock the definition record |
| Source data | “We agree on the metric, but the system path still does not support it.” | CRM hygiene, mappings, joins, warehouse logic, freshness, handoffs | Trace the logic path and remove the recurring manual rescue points |
| Dashboards | “We trust the number, but the report still does not help us act.” | Artifact shape, audience fit, cadence, layout, decision routing | Redesign the output or choose a tighter artifact |

A lot of teams are dealing with all three. That is normal. The goal is not to pretend only one layer is broken. The goal is to choose the first intervention that changes the next decision fastest.

A worked example: one complaint, three different first moves

Say a leadership team says, “Our pipeline dashboard is not trustworthy.”

That sentence can point to three very different first moves.

If the real problem is definitions

Marketing is using sourced pipeline one way. Sales is talking about qualified pipeline another way. Finance is reacting to recognized revenue timing. Everyone is looking at the same dashboard and still talking past each other.

The first move is not a dashboard rebuild. It is forcing the room to decide which metric exists for which decision, who owns it, and what confidence level it has to carry.

If the real problem is source data

The team already agrees on what qualified pipeline means. But CRM stage hygiene is weak, campaign mapping is incomplete, and the revenue rollup only becomes believable after RevOps patches it in a spreadsheet the night before the forecast meeting.

The first move is not a definitions workshop. It is tracing the source path and fixing the manual rescue work that keeps the official report from being trusted.

If the real problem is the dashboard itself

The metric is trusted enough. The source path is stable enough. But the meeting is still a mess because one artifact is trying to serve the CRO, the board, and the day-to-day operators at the same time.

Now the dashboard or reporting artifact is first in line. Maybe the real answer is a decision brief for leadership, a simpler operator dashboard for the team, or a workflow alert for follow-up instead of one giant page that pleases nobody.

The question to ask before approving any reporting project

Before anyone starts redesigning, rebuilding, or buying, ask this:

If we fixed only one layer before the next important meeting, which fix would remove the most confusion?

Use that answer to make the first move smaller and sharper.

A practical version looks like this:

| Question | If yes, start here |
| --- | --- |
| Are teams still using the same metric label to mean different things? | Definitions |
| Does the trusted number still depend on spreadsheet cleanup, overrides, or undocumented mapping logic? | Source data |
| Would the meeting improve even if the number stayed exactly the same, just presented differently? | Dashboard / artifact |
| Is the real issue that the output is solving the wrong job altogether? | Dashboard / artifact choice |
| Are people arguing about who is right more than what the record says? | Definitions |
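One hedged way to run that checklist is top to bottom, stopping at the first “yes.” The ordering rule is my assumption, not something the article prescribes; the question wording is condensed and the layer labels come straight from the table.

```python
from __future__ import annotations

# Sketch of the pre-approval checklist. Question wording is condensed from
# the table above; treating it as "first yes wins" is an assumption, since
# the article only maps each question to a starting layer.
QUESTIONS = [
    ("same metric label still means different things across teams", "Definitions"),
    ("trusted number depends on spreadsheet cleanup or overrides", "Source data"),
    ("the meeting would improve with the same number presented differently",
     "Dashboard / artifact"),
    ("the output is solving the wrong job altogether", "Dashboard / artifact choice"),
    ("people argue about who is right more than what the record says", "Definitions"),
]

def first_fix(answers: dict[str, bool]) -> str | None:
    """Return the starting layer for the first question answered yes, else None."""
    for question, layer in QUESTIONS:
        if answers.get(question):
            return layer
    return None

print(first_fix({"trusted number depends on spreadsheet cleanup or overrides": True}))
# -> Source data
```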

That is the whole game. Not complexity for its own sake. Just enough honesty to stop solving the wrong layer first.

Download the triage worksheet and use it in a real meeting

If this problem is live right now, download the Reporting Trust Triage Worksheet and use it in the next working session with RevOps, finance, marketing, or data.

The worksheet is built to do one job: help the room score whether the first intervention belongs in definitions, source data, or dashboards before another reporting project expands into three projects.

That is also the easiest way to tell whether you need a metric-alignment diagnostic or deeper foundation work.

The mistake to avoid

The mistake is not fixing the wrong thing eventually. Teams usually get around to the real issue.

The mistake is fixing the wrong thing first, then using the failed first pass as evidence that the reporting problem is bigger, weirder, or more mysterious than it really is.

Most of the time, it is not mysterious. It is just layered.

Name the layer. Pick the first move. Make the next meeting cleaner. Then decide whether the second fix is still necessary.

Sources

  1. Salesforce, State of Data and Analytics: data and analytics leaders estimate that 26% of organizational data is untrustworthy. Source.
  2. Agile Data summary citing Gartner research: poor data quality costs organizations an average of $12.9 million per year. Source.
  3. BigDATAwire summary of dbt Labs' State of Analytics Engineering 2024: 57% of respondents named data quality as one of the top three data-prep challenges, up from 41% in 2022. Source.



Common questions about fixing definitions, source data, or dashboards first

What is the difference between a definitions problem and a source-data problem?

A definitions problem means teams use the same label for different business logic. A source-data problem means the underlying records, joins, mappings, freshness, or system handoffs are unstable even if the definition is already agreed in principle.

When is the dashboard actually the first thing to fix?

Only when the metric logic is already trusted, the source path is stable enough for the decision, and the real blocker is presentation: layout, cadence, audience fit, or the wrong artifact for the meeting.

Can more than one layer be broken at the same time?

Yes. That is normal. The point of triage is not pretending only one thing is wrong. The point is deciding which layer is creating the most decision drag right now so the first intervention is honest and useful.

What should we do if the score lands close across all three layers?

Start with the layer causing the most expensive meeting failure. If leadership cannot even agree what the metric means, fix definitions first. If everyone agrees on the definition but nobody trusts the records, fix source data first. If trust is good but the output is still unusable, fix the dashboard or reporting artifact.


About the author

Jason B. Hart

Founder & Principal Consultant

Founder & Principal Consultant at Domain Methods. Helps mid-size SaaS and ecommerce teams turn messy marketing and revenue data into decisions leaders trust.

