
The Fastest Way to Waste a dbt Investment Is to Treat It Like a Documentation Project
- Jason B. Hart
- Data Engineering
- April 25, 2026
- Updated April 24, 2026
What It Means to Waste a dbt Investment
The fastest way to waste a dbt investment is to treat it like a documentation project.
Not because documentation is bad. Good dbt documentation matters. Clear model descriptions, lineage, owners, and tests are all part of a serious analytics engineering practice.
The waste happens when the team mistakes documented artifacts for business trust.
A dbt project can have tidy folders, named marts, lineage graphs, model descriptions, and a polished docs site while the VP of Marketing still asks, “Which revenue number am I supposed to use?” It can pass a technical review while finance still does not trust the pipeline dashboard. It can look mature inside the data team and still fail in the meeting where the number has to support a decision.
That is the real test. A dbt investment is not wasted because the project lacks enough metadata. It is wasted when the work never turns into decision-grade confidence for the people using the numbers.
What Looks Mature Versus What Creates Trust
A lot of dbt projects improve the visible surface area first. That is understandable. It is easier to audit a repo than to fix an ownership fight between finance, RevOps, and growth.
But those are not the same job.
| What looks mature in dbt | What creates business trust |
|---|---|
| Every model has a description | Critical metrics say what they include, exclude, and match |
| The DAG is organized | The important dependency paths are tested and monitored |
| The docs site is published | Non-data stakeholders know where to find the decision caveats |
| Models have owners in YAML | Owners have authority to approve definition changes |
| CI runs on pull requests | Releases that change leadership numbers have a handoff plan |
| Naming conventions are cleaner | Teams stop building parallel spreadsheets because they trust the canonical mart |
The left column is useful. The right column is the point.
This is where many mid-size SaaS teams get stuck. They finally put transformation logic in the warehouse. They adopt dbt. They create standards. The data team can explain why the project is better than the old mess.
Then the business keeps operating around it.
A marketing ops lead keeps a spreadsheet because campaign cost logic still changes too often. Finance keeps a separate revenue view because exclusions are not explicit. Product trusts one activation metric for roadmap planning while growth uses another for lifecycle reporting. The dbt project improved the technical environment, but it did not resolve the operating disagreement.
Documentation Is Not the Same as Definition Discipline
The easiest documentation to write is the least useful kind.
`total_revenue` equals "total revenue." `customer_id` is "customer id." `created_at` is "created at timestamp." The repo looks less empty, but nobody learns anything they can use when the number is challenged.
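As a sketch of the difference in a dbt `schema.yml` (the model and column names here are hypothetical, not from a real project):

```yaml
models:
  - name: fct_revenue            # hypothetical mart
    columns:
      - name: total_revenue
        # Circular version: description: "Total revenue."
        # Decision-grade version records inclusions, exclusions,
        # and which reported number it matches:
        description: >
          Recognized subscription revenue, net of refunds and credits.
          Excludes professional services and usage overages. Matches
          the revenue line in the monthly board deck, not the billing
          system's gross view.
```

The second description costs a few more minutes to write, but it is the version that survives a challenge in a revenue meeting.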
Real definition discipline answers the questions that create tension:
- Does revenue include services, refunds, credits, and usage overages?
- Does CAC include only media spend, or also agency fees and tooling?
- Does active customer mean billed, logged in, paid, or eligible to use the product?
- Which version of pipeline matches the board deck?
- Who can approve a definition change after the metric is live?
That last question is the one most teams avoid.
A definition without an owner is just a well-written argument waiting to reopen. The data team may know where the logic lives, but that does not mean it has the political authority to decide what the metric should mean. If a revenue definition affects compensation, board reporting, or hiring decisions, the owner cannot be a comment in a YAML file. The owner has to be a person or function with the authority to settle the tradeoff.
This is where dbt documentation should become a record of business decisions, not just a description of tables. If the docs cannot tell a new VP which number is safe for which decision, they are not done.
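dbt's free-form `meta` config is one place to record that decision alongside the model. The key names below are arbitrary team conventions, not anything dbt defines, and the model name is hypothetical:

```yaml
models:
  - name: fct_revenue                  # hypothetical mart
    meta:
      business_owner: "VP Finance"     # holds authority to approve definition changes
      data_owner: "analytics-eng"      # implements, tests, documents, monitors
      definition_approved: "2026-03-10"
      safe_for: ["board deck", "forecast reviews"]
      caveats: "Excludes services revenue; restated before FY2025."
```

The YAML does not create the authority, but it makes the decision and its owner findable when a new VP asks which number is safe to use.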
For a broader scoring approach, the dbt Project Health Scorecard is useful. This piece is narrower: it is about the moment a clean dbt project still leaves the business unsure which number deserves trust.
Tests Need to Protect Decisions, Not Just Columns
dbt makes it easier to add tests. That does not mean the tests are protecting the right risk.
A project can have `not_null`, `unique`, `accepted_values`, and `relationships` tests and still miss the failure that matters. Those tests can catch broken plumbing. They do not automatically catch business-rule drift.
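Those generic tests look like this in a `schema.yml` (model and values are illustrative):

```yaml
models:
  - name: fct_pipeline             # hypothetical model
    columns:
      - name: opportunity_id
        tests:
          - not_null
          - unique
      - name: pipeline_category
        tests:
          - accepted_values:
              values: ['new_business', 'expansion', 'renewal']
```

Every value here can pass while the mapping that assigns opportunities to those categories silently changes.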
The operator-level question is sharper:
If this model broke in the way the business would actually care about, would the test suite catch it before the meeting?
That changes what the team tests.
For a pipeline model, the risky failure may not be a null primary key. It may be a stage mapping change that quietly moves expansion pipeline into new-business reporting. For an attribution mart, the risky failure may not be row duplication. It may be spend arriving before the campaign taxonomy is stable enough to join cleanly. For a retention model, the risky failure may be a billing-status edge case that makes churn look better in the month when leadership is deciding whether to slow hiring.
Generic tests are the floor. Decision-risk tests are where trust begins.
That does not mean every model needs an elaborate custom test suite. It means the team should know which models sit under high-stakes decisions and test those models differently. A staging model for an internal ops report does not carry the same risk as the mart behind board-level ARR, payback, or pipeline conversion.
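As one sketch of a decision-risk test, a dbt singular test can assert the business rule directly. Table, column, and category names below are hypothetical; a singular test passes when the query returns zero rows:

```sql
-- tests/assert_expansion_not_in_new_business.sql
-- Guards against a stage-mapping change quietly moving expansion
-- pipeline into new-business reporting. Any row returned is a failure.
select
    opportunity_id,
    pipeline_category,
    is_existing_customer
from {{ ref('fct_pipeline') }}
where is_existing_customer
  and pipeline_category = 'new_business'
```

A handful of tests like this, on the few marts under board-level numbers, protects more trust than blanket generic coverage across every staging model.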
This is the same practical distinction behind what to fix first in dbt: the first fix should be the one that reduces business risk, not the one that makes the project look most complete.
Ownership Has to Be Operational, Not Decorative
Many dbt projects now list owners. That is progress. It is also easy to fake.
An owner field does not answer the operating question unless the owner can actually do the work the role implies.
Can they approve a metric definition change? Can they tell finance that the new revenue mart will not match last quarter’s deck until a handoff is complete? Can they decide whether a model change is safe to ship before a board meeting? Can they say no when a stakeholder asks for a shortcut that would make the metric easier this week and less trustworthy next month?
If not, the owner is decorative.
The practical pattern I see is that data teams often own the implementation while the business owns the consequences. That gap creates bad behavior on both sides. The business treats dbt as a vending machine for numbers. The data team ships changes that are technically correct but socially surprising. Then everyone acts confused when trust drops.
A stronger model separates the roles:
| Role | Real responsibility |
|---|---|
| Business owner | Approves the definition, caveats, and decision use |
| Data owner | Implements, tests, documents, and monitors the model |
| RevOps or finance partner | Confirms cross-system alignment where revenue or pipeline is involved |
| Executive consumer | Understands the confidence level and any transition risk |
That may feel heavier than just adding an owner key. It is also the difference between a model that exists and a metric that can survive executive scrutiny.
If the problem is that business stakeholders keep handing the data team vague asks, the upstream issue may be translation, not dbt. The business-to-data translation gap usually shows up before the repo does.
Release Discipline Is Where Trust Gets Won or Lost
Most dbt trust failures are not dramatic outages. They are quiet changes that arrive without context.
A model changes. A dashboard shifts. A field gets renamed. A definition gets tightened. The data team knows why. The stakeholder sees a different number and assumes the old problem is back.
That is not just a communication failure. It is a release-discipline failure.
If a dbt change affects a number used in a revenue meeting, board deck, compensation review, forecast, or budget decision, the release needs more than a merged pull request. It needs a business handoff.
At minimum, the handoff should answer:
- What changed?
- Which reports or dashboards move because of it?
- Which old number should no longer be used?
- What caveat should travel with the new number for the first review cycle?
- Who is accountable if finance, RevOps, and growth see different impacts?
This is where technically strong teams can still lose the room. They assume trust comes from better engineering. It partly does. But trust also comes from not surprising the people who have to defend the number.
A clean release note in GitHub is not enough if the person using the metric never sees it.
What This Does Not Mean
This is not an argument against dbt docs.
It is not an argument against naming conventions, exposures, tests, model ownership, CI, or a cleaner repo. Those things matter. A chaotic dbt project is expensive to maintain and painful to trust.
The argument is simpler: those practices are inputs, not outcomes.
Documentation is valuable when it records usable business context. Tests are valuable when they protect decisions from silent drift. Ownership is valuable when it gives someone the authority to resolve disputes. Release discipline is valuable when it prevents stakeholders from being surprised by technically correct changes.
If the dbt project stops at artifacts, it becomes an expensive place to store unresolved disagreement.
The dbt Investment Trust Checklist
Use this quick review when a dbt project looks better than it feels.
| Question | Warning sign | Better next move |
|---|---|---|
| Which decision is this model supposed to support? | The answer is “reporting” or “analytics” in general | Name the specific meeting, metric, or decision |
| Who owns the business definition? | The owner is only the data team by default | Assign business approval rights before the model becomes canonical |
| What failure would damage trust most? | Tests only cover generic column health | Add tests for the risky business-rule drift |
| How will a definition change be released? | The change ships through Git but not through the business rhythm | Create a handoff note for affected reports and stakeholders |
| What caveat travels with the number? | Stakeholders learn caveats live in the meeting | Put caveats in the docs, dashboard, and release note |
| What parallel workaround should disappear? | The spreadsheet stays because nobody trusts the mart yet | Use adoption of the canonical model as the success measure |
If you cannot answer those questions, the next dbt investment probably should not be more documentation. It should be definition cleanup, test coverage tied to decision risk, ownership repair, and release discipline.
Download the checklist
Use the checklist to review one high-value dbt model or mart before the next planning, revenue, or reporting meeting. The point is not to grade every model. The point is to find the place where dbt looks mature but the business still cannot use the number with confidence.
Download the dbt Investment Trust Checklist (PDF)
A lightweight working-session checklist for connecting dbt documentation, testing, ownership, and release discipline to business trust.
Instant download. No email required.
The Better Question
Do not ask, “Is our dbt project documented?”
Ask, “Can the business use the number without a data person translating the caveats live?”
If the answer is no, the project is not done. It may be cleaner. It may be more maintainable. It may be a real step forward from the spreadsheet era.
But the business did not buy dbt to admire the docs site. It bought a path toward trusted decisions.
If your dbt project is stuck between technical maturity and business trust, start with the operating layer: definitions, owners, tests, release discipline, and the few models that matter most. That is where the investment starts paying back.
If the dbt project needs a stronger operating foundation
Data Foundation
Use this engagement when dbt, warehouse, governance, testing, and reporting trust need to become one maintained business system instead of disconnected cleanup work.
See Data Foundation

If the real issue is translating the business ask
Translate the Ask
Use the sprint when the data team keeps getting vague requests, shifting definitions, or stakeholder feedback that arrives too late to shape the model correctly.
See the translation sprint

About the author
Jason B. Hart
Founder & Principal Consultant
Helps mid-size SaaS and ecommerce teams turn messy marketing and revenue data into decisions leaders trust.


