Power BI audit: the checklist I use on recovery engagements
A concrete checklist for auditing a legacy Power BI estate: what to look at first, in what order, and which signals should set off alarms.
Auditing a legacy Power BI estate is not a code review. It is closer to archaeology: you have to understand what was built, why, and how it has drifted over time. Below is the checklist I run on recovery engagements. It is not exhaustive; it is pragmatic.
Scope & governance
- Inventory of active workspaces, with their owners and status.
- Naming conventions: datasets, reports, measures, tables.
- Access and sharing policy: who sees what, how.
- Is there a dev / UAT / production workspace separation?
- Number of orphan or unused reports.
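Naming checks are tedious by hand but easy to script once you have exported the inventory. A minimal sketch, assuming an estate standard where workspaces carry a DEV/UAT/PRD prefix; the conventions and names below are invented for illustration, not a Power BI requirement:

```python
import re

# Illustrative naming conventions (adjust to the estate's own standard).
RULES = {
    "workspace": re.compile(r"^(DEV|UAT|PRD) - .+"),
    "measure":   re.compile(r"^[A-Z][A-Za-z0-9 %]+$"),
}

def check_names(kind: str, names: list[str]) -> list[str]:
    """Return the names that violate the convention for `kind`."""
    rule = RULES[kind]
    return [n for n in names if not rule.match(n)]

violations = check_names("workspace", ["PRD - Finance", "sales_old", "UAT - HR"])
# "sales_old" has no environment prefix, so it is the one flagged.
```

Run the same pass over dataset, report, and measure names; the list of violations doubles as a worksheet for the renaming backlog.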
Datasets & modelling
- Schema: star, snowflake, flat tables.
- Relationships: cardinalities, bidirectional flags, ambiguous paths.
- Technical tables (date dimension, measure tables) properly isolated.
- Calculated columns vs measures: which columns could become measures, or be pushed upstream into Power Query.
- Table sizes, unused columns, data types.
- Compression and in-memory dataset size.
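Ambiguous paths are easiest to spot mechanically: any relationship that closes a cycle in the model graph creates a second filter path between two tables. A simplified union-find sketch over an exported relationship list (table names are invented; real ambiguity also depends on cross-filter direction, which this ignores):

```python
def find_ambiguous_edges(relationships):
    """relationships: list of (from_table, to_table) pairs.
    Returns the edges that connect tables which were already connected,
    i.e. edges that close a cycle and create a second filter path."""
    parent = {}
    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t
    ambiguous = []
    for a, b in relationships:
        ra, rb = find(a), find(b)
        if ra == rb:
            ambiguous.append((a, b))  # cycle: a and b already connected
        else:
            parent[ra] = rb
    return ambiguous

rels = [("Sales", "Date"), ("Sales", "Product"),
        ("Budget", "Date"), ("Budget", "Product")]
# The last edge closes the cycle Sales-Date-Budget-Product-Sales.
```

Each flagged edge is a relationship to review: either one of the paths should be inactive, or the model needs restructuring.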
DAX measures
- Most used measures and their execution times.
- Risky patterns: nested iterators, FILTER over whole tables where a column filter would do.
- Whether VAR is used to avoid recomputing the same expression.
- Duplicated measures, orphan measures, measures referenced nowhere.
- Consistency of naming and formatting.
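Duplicated measures rarely look identical: they differ in casing and whitespace. A crude normalization pass over exported measure definitions (for example a Tabular Editor export) catches most of them; the measure names and expressions here are invented:

```python
import re
from collections import defaultdict

def normalize(dax: str) -> str:
    """Crude normalization: strip all whitespace and lowercase, so two
    measures that differ only in formatting compare equal."""
    return re.sub(r"\s+", "", dax).lower()

def find_duplicates(measures: dict[str, str]) -> list[list[str]]:
    """measures: {measure name: DAX expression}.
    Returns groups of measures that share one normalized expression."""
    groups = defaultdict(list)
    for name, dax in measures.items():
        groups[normalize(dax)].append(name)
    return [names for names in groups.values() if len(names) > 1]

dups = find_duplicates({
    "Total Sales": "SUM ( Sales[Amount] )",
    "Sales Total": "sum(Sales[Amount])",
    "Margin":      "[Total Sales] - [Total Cost]",
})
# → [["Total Sales", "Sales Total"]]
```

It will not catch semantic duplicates (same number computed two different ways), but those usually surface later, in the cross-report discrepancy check.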
Power Query (M)
- Query folding: which queries fold back to the source, which do not.
- Useless or redundant steps.
- Loaded volumes vs needed volumes.
- Robustness to source variations (types, nulls, missing rows).
- Sensitive sources: local Excel files, unversioned APIs.
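Fragile sources can be flagged by pattern-matching the exported M code. A heuristic sketch: the regexes and the M snippets are illustrative assumptions, and a hit is a prompt to investigate, not a verdict:

```python
import re

# Heuristic patterns that suggest a fragile source.
FRAGILE = {
    "local file":   re.compile(r"""File\.Contents\("C:\\""", re.IGNORECASE),
    "raw web call": re.compile(r"Web\.Contents\(", re.IGNORECASE),
}

def flag_queries(queries: dict[str, str]) -> dict[str, list[str]]:
    """queries: {query name: M source}. Returns the flags per query."""
    out = {}
    for name, m_code in queries.items():
        hits = [label for label, rx in FRAGILE.items() if rx.search(m_code)]
        if hits:
            out[name] = hits
    return out

flags = flag_queries({
    "Budget": 'Excel.Workbook(File.Contents("C:\\Users\\jdoe\\budget.xlsx"))',
    "Sales":  'Sql.Database("srv", "dw")',
})
# → {"Budget": ["local file"]}
```

A query reading from someone's C: drive is exactly the kind of dependency that breaks the week that person leaves.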
Refresh & infrastructure
- Refresh success rate over the last 30 / 90 days.
- Refresh durations: average and peaks.
- On-premises gateways used, redundancy, service accounts.
- Refresh time windows and their impact on transactional sources.
- Failure alerts: who is notified, how.
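The success rate and duration peaks fall out of the refresh history in a few lines. A sketch assuming records shaped like the Power BI REST API "Get Refresh History" response (status, startTime, endTime); the timestamps below are invented, and the real API returns ISO 8601 strings that may need the trailing `Z` stripped before parsing on older Python versions:

```python
from datetime import datetime

def refresh_stats(history):
    """history: list of refresh records with 'status', 'startTime',
    'endTime' (ISO 8601, no timezone suffix assumed here).
    Returns (success rate, longest refresh in minutes)."""
    done = [r for r in history if r["status"] in ("Completed", "Failed")]
    ok = sum(r["status"] == "Completed" for r in done)
    durations = [
        (datetime.fromisoformat(r["endTime"]) -
         datetime.fromisoformat(r["startTime"])).total_seconds() / 60
        for r in done
    ]
    return ok / len(done), max(durations)

rate, peak = refresh_stats([
    {"status": "Completed", "startTime": "2024-05-01T02:00:00",
     "endTime": "2024-05-01T02:12:00"},
    {"status": "Failed", "startTime": "2024-05-02T02:00:00",
     "endTime": "2024-05-02T02:45:00"},
])
# rate == 0.5, peak == 45.0
```

A success rate below roughly 95% over 90 days, or durations trending upward, goes straight into the findings.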
Security & RLS
- Whether Row-Level Security is implemented or not.
- Consistency with business roles.
- RLS tests documented.
- Privileged accounts: who, why, until when.
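Checking RLS against business roles reduces to a coverage diff: business roles with no RLS role behind them, and RLS roles nobody maps to anymore. A minimal sketch; the role names are invented:

```python
def rls_coverage(business_roles: set[str], rls_roles: set[str]):
    """Compare the business roles that should be filtered against the
    RLS roles actually defined in the model."""
    return {
        "missing": sorted(business_roles - rls_roles),  # no RLS role at all
        "orphan":  sorted(rls_roles - business_roles),  # RLS role mapped to nobody
    }

report = rls_coverage(
    business_roles={"Sales FR", "Sales DE", "Finance"},
    rls_roles={"Sales FR", "Sales DE", "Legacy"},
)
# report == {"missing": ["Finance"], "orphan": ["Legacy"]}
```

"Missing" entries are potential data exposure; "orphan" entries are dead weight that makes the security model harder to reason about.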
Three red flags that mean you must act fast
Three signals that, in my experience, almost always mean the environment needs a quick intervention:
- No reverse engineering is possible without calling the original author. That is a critical vendor dependency.
- Refreshes that fail silently. Nobody is watching, and reports keep serving stale data for days.
- Discrepancies between reports that should show the same number. Classic symptom of a duplicated model or measures redefined several times.
Auditing is not the end. It is what lets you decide, with eyes open, what can be fixed, what must be rebuilt, and what can stay as-is for now because it works.
Running into something similar?
Tell me about your situation. I reply within one business day.