DAX optimisation: the patterns that slow Power BI reports down
DAX optimisation gets discussed in the abstract a lot. Here are the concrete patterns that show up most often in legacy Power BI models, and how to fix them.
When a Power BI report is slow, the first instinct is to blame the data volume. It is rarely the real cause. In most legacy models I pick up, performance drops because of a handful of classic DAX patterns that show up on almost every engagement.
1. Hidden iterations
A measure like SUMX(table, table[col1] * table[col2]) is perfectly legitimate. The trouble starts when you nest several of them, or when a plain SUM is replaced by SUMX on a large table for no reason.
Simple rule: if the operation can be written as a direct aggregate on a column, do not run it through an iterator. And if you must iterate, do it on the smallest table you can.
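A minimal sketch of the difference (the Sales table and the Quantity, UnitPrice and LineAmount columns are illustrative names, not from any specific model):
// Iterator: the expression is evaluated row by row over the whole fact table
Revenue (iterated) =
SUMX(Sales, Sales[Quantity] * Sales[UnitPrice])
// Direct aggregate: if the line amount already exists as a column,
// a plain SUM lets the storage engine do the work in one scan
Revenue (direct) =
SUM(Sales[LineAmount])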
2. Filter contexts that go off the rails
CALCULATE is the most powerful tool in DAX, and the one that produces the most side effects. A filter context tweaked without a clear intent is all it takes to turn a measure into a performance black hole, or a wrong number.
Three questions to ask yourself at every CALCULATE:
- Which filter am I adding, and which one am I removing?
- Is that filter on the fact table or on a dimension?
- Do I really need REMOVEFILTERS or ALL, or could I be more surgical with KEEPFILTERS?
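For the last question, a minimal sketch of the contrast (the 'Product' dimension and the [Sales] measure are assumed names):
// REMOVEFILTERS overwrites whatever the visual has filtered on Category:
// any category selection made by the user is ignored
Sales All Categories =
CALCULATE([Sales], REMOVEFILTERS('Product'[Category]))
// KEEPFILTERS intersects with the existing context instead of overwriting it:
// only "Accessories" rows already visible in the current context are counted
Sales Accessories =
CALCULATE([Sales], KEEPFILTERS('Product'[Category] = "Accessories"))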
3. Forgotten variables
A lot of legacy measures recompute the same sub-expression several times. A VAR could have evaluated it once and reused it. On a measure used in a matrix with ten thousand cells, the performance gain is spectacular.
// Before
Sales YoY % =
DIVIDE(
    [Sales] - CALCULATE([Sales], DATEADD(Calendar[Date], -1, YEAR)),
    CALCULATE([Sales], DATEADD(Calendar[Date], -1, YEAR))
)
// After
Sales YoY % =
VAR _ly =
    CALCULATE([Sales], DATEADD(Calendar[Date], -1, YEAR))
RETURN
    DIVIDE([Sales] - _ly, _ly)

4. Calculated columns that should have been measures
A calculated column is evaluated at every refresh, stored in memory, and grows the dataset. A measure is evaluated on the fly and weighs nothing in memory. When a calculation can be a measure, it should be a measure.
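A minimal illustration, assuming hypothetical Amount and Cost columns on the Sales table:
// Calculated column: evaluated at every refresh and stored for each row of Sales
Margin = Sales[Amount] - Sales[Cost]
// Measure: stored nowhere, evaluated on demand in the visual's filter context
Total Margin =
SUM(Sales[Amount]) - SUM(Sales[Cost])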
5. Flattened models
A well-built star schema, with a central fact table and clearly separated dimensions, will almost always outperform a single wide flat table. Legacy models often drifted toward wide tables because they were easier to load, but they pay the price on every render.
DAX optimisation is not virtuoso work. It is a hygiene discipline: name, isolate, reuse, measure.
Measure before optimising
Last point, and the most important: you cannot optimise what you do not measure. Performance Analyzer in Power BI Desktop, DAX Studio, and a SQL Profiler trace on Analysis Services are the three tools I use systematically. Any unmeasured optimisation is an act of faith.
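As an illustration, a single measure can be isolated in a plain DAX query and run in DAX Studio with Server Timings enabled; the Calendar[Year] grouping column is an assumed name:
// Reproduces the work of one matrix column outside the report, so
// storage-engine and formula-engine time can be read separately
EVALUATE
SUMMARIZECOLUMNS(
    Calendar[Year],
    "Sales YoY %", [Sales YoY %]
)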
Running into something similar?
Tell me about your situation. I reply within one business day.