Groupers are your friends
Luckily DAX has a few Groupers that can handle this quite nicely.
The other day I helped someone in the forums with a grouping question like this, and he asked for some resources to learn more about this useful technique. So here's my spin on it from a PowerPivot expression perspective.
Ok, so let's work through this with a demo scenario. We'll use AdventureWorks since everybody knows that dataset. Let's take Internet Sales (the FactInternetSales table in AdventureWorksDW). We've got sales data at the line item level (i.e., the individual items purchased within a given shopping basket or ticket). And we want to analyze average sales over time or sliced by segment. But we want to analyze at the ticket level, not the item level. So a regular Average won't work here. Our challenge then is to roll up sales to the ticket level (sum it up by ticket), and then average across tickets.
This is a very common requirement, and one that is easily satisfied in DAX. There are actually several ways to accomplish this, so let's walk through them one at a time.
Approach #1: Total Sum of Sales / Distinct Tickets
This is probably the simplest approach, and it actually doesn't do any grouping at all. I'm just including it to be thorough in solving the scenario at hand. It works here, but you won't always be able to calculate the result in one pass like this. Regardless, here's the formula for reference.
=SUM( FactInternetSales[SalesAmount] )
/ DISTINCTCOUNT( FactInternetSales[SalesOrderNumber] )
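Just to make the arithmetic concrete, here's a rough plain-Python analogue of that one-pass formula. The sample rows are made up purely for illustration; in DAX, of course, this evaluates against the model under the current filter context.

```python
# Hypothetical sample rows standing in for FactInternetSales:
# (SalesOrderNumber, SalesAmount)
sales = [
    ("SO100", 10.0),
    ("SO100", 20.0),  # second line item on the same ticket
    ("SO101", 5.0),
    ("SO102", 15.0),
]

total_sales = sum(amount for _, amount in sales)       # SUM(SalesAmount)
distinct_tickets = len({order for order, _ in sales})  # DISTINCTCOUNT(SalesOrderNumber)
avg_ticket = total_sales / distinct_tickets            # 50.0 / 3
```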
Approach #2: AggX(VALUES)
With the next two approaches, instead of rolling all the way up like in #1, we want to take a two-step approach. We group by the "ticket" and sum up the underlying item sales amounts, and then calculate an average across all of the tickets.
Here's the actual formula for our scenario, and then we'll break it into its components.
=AVERAGEX(VALUES(FactInternetSales[SalesOrderNumber])
,CALCULATE(SUM(FactInternetSales[SalesAmount]))
)
There are three key parts to this formula.
1. The VALUES function behaves like a subquery and returns a list of the distinct SalesOrderNumbers that we want to group by (the list of tickets).
2. The CALCULATE(SUM()) defines how we will roll SalesAmount up to the ticket level (we're grouping by SalesOrderNumber here).
3. The AVERAGEX is the outer function that iterates over each of the items in the list (each SalesOrderNumber or "ticket"). For each "ticket", it executes the CALCULATE(SUM) to roll SalesAmount up to the ticket level. And then it calculates the average of those values for the final result.
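If it helps, the two-step logic above can be sketched in plain Python (again with made-up sample rows). The grouping dict plays the role of VALUES plus the per-ticket CALCULATE(SUM); the final average plays the role of AVERAGEX.

```python
from collections import defaultdict

# Hypothetical line-item rows: (SalesOrderNumber, SalesAmount)
sales = [("SO100", 10.0), ("SO100", 20.0), ("SO101", 5.0), ("SO102", 15.0)]

# Step 1: roll item amounts up to one sum per ticket.
# VALUES supplies the distinct tickets; CALCULATE(SUM) does each per-ticket sum.
ticket_sums = defaultdict(float)
for order, amount in sales:
    ticket_sums[order] += amount

# Step 2: AVERAGEX -- average across the ticket-level sums.
avg_ticket = sum(ticket_sums.values()) / len(ticket_sums)
```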
Approach #3: AggX(SUMMARIZE)
Essentially, this approach accomplishes the same thing as approach #2 above. But instead of using VALUES to get the list of SalesOrderNumbers, we'll use SUMMARIZE to do the grouping.
The SUMMARIZE function works like this:
SUMMARIZE(<table>, <groupBy_columnName>[, <groupBy_columnName>]…[, <name>, <expression>]…)
The first parameter is the table you want to roll up (FactInternetSales).
The second parameter is the column to group by (FactInternetSales[SalesOrderNumber]).
Optionally, you can group by additional columns, and you can add aggregation expressions, similar to a GROUP BY in SQL.
Those last two optional parts give SUMMARIZE an extra bit of flexibility that can come in handy when you need more than just a single-column list back. Anyway, it's probably easier with an example, so here's our formula:
=AVERAGEX(
SUMMARIZE( FactInternetSales
,FactInternetSales[SalesOrderNumber]
)
,CALCULATE( SUM( FactInternetSales[SalesAmount] ) )
)
In this formula, the inner SUMMARIZE is working over the FactInternetSales table and grouping by SalesOrderNumber. The result is a list of the distinct SalesOrderNumbers we need (just like VALUES produced). With that list, the outer AVERAGEX iterates through each one, calculating the sum of the underlying SalesAmounts, and then calculating an average across all of the ticket-level sums.
Another approach that leverages SUMMARIZE, and that can be useful in a variety of situations, is to perform aggregations while grouping and return both the grouping columns and the aggregations to the outer function. So in our scenario, we can actually move the CALCULATE(SUM()) that rolls item-level sales up to the ticket level inside the SUMMARIZE. Like this:
=AVERAGEX(
SUMMARIZE( FactInternetSales
,FactInternetSales[SalesOrderNumber]
,"SumTicket"
,CALCULATE( SUM( FactInternetSales[SalesAmount] ) )
)
,[SumTicket]
)
With that, the inner SUMMARIZE behaves like a SQL subquery that returns not only the list of tickets (SalesOrderNumbers) but also the rolled-up SUM of item-level SalesAmounts for each ticket. Then, with that two-column table, the outer AVERAGEX iterates over each row, picks up the SumTicket value, and calculates an average over the list for the final result.
Pretty straightforward. The only twist is that, within the SUMMARIZE, you give the aggregate expression an "alias" (in our case "SumTicket"). The table that SUMMARIZE returns includes that column, and you can reference it by name in the outer AggX function. Not really tricky, but IntelliSense doesn't pick it up, so you have to make sure you reference the name correctly.
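To see how the alias fits in, here's a rough Python sketch of the same pattern (made-up sample rows): the SUMMARIZE step builds a little two-column "table" of (SalesOrderNumber, SumTicket) rows, and the outer iterator references the named column.

```python
from collections import defaultdict

# Hypothetical line-item rows: (SalesOrderNumber, SalesAmount)
sales = [("SO100", 10.0), ("SO100", 20.0), ("SO101", 5.0), ("SO102", 15.0)]

# SUMMARIZE with a named aggregate: group by ticket and compute
# "SumTicket" while grouping, returning a two-column table.
sums = defaultdict(float)
for order, amount in sales:
    sums[order] += amount
summary = [{"SalesOrderNumber": o, "SumTicket": s} for o, s in sums.items()]

# AVERAGEX then just averages the named column across the summary rows.
avg_ticket = sum(row["SumTicket"] for row in summary) / len(summary)
```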
These grouping techniques (#2 and #3) can be absolutely critical in situations where you have data at different granularities that you need to aggregate in a multi-pass fashion like above. Or in situations where you need to grab a date or value from each segment in your dataset.
Just to give you some more ideas for applying this approach, here is another example that I recently came up against in the forums.
The requirement was for a fitness training club, and they needed to calculate the number of sessions attended and the number of sessions purchased summarized for each customer and across all customers. The shape of the data was like this:
CustomerID | CoachingSession | SessionsPurchased |
100 | 1 | 3 |
100 | 2 | 3 |
100 | 3 | 3 |
100 | 4 | 3 |
101 | 1 | 1 |
101 | 2 | 1 |
And the goal output was:
Clients | SessionsPurchased |
2 | 4 |
Essentially, we have a fact table with session attendance transactions. The real challenge was with the sessions purchased, since they are pivoted out and duplicated on each session attendance row.
Given time and resources, I would recommend breaking the sessions purchased data out into a separate fact table. This would be the proper way to model it dimensionally, since the two events are independent and occur at different grains.
But for this exercise we'll deal with the data in the shape it's in.
In this scenario, we can use the SUMMARIZE function to group by the customer and get the MAX sessions purchased for each like this:
=SUMX(
SUMMARIZE( Coaching
,Coaching[CustomerID]
,"MaxPurchased"
,MAX( Coaching[SessionsPurchased] )
)
,[MaxPurchased]
)
Or, equivalently, using the VALUES technique from approach #2:
=SUMX(
VALUES( Coaching[CustomerID] )
,CALCULATE( MAX( Coaching[SessionsPurchased] ) )
)
In both cases, notice that the measure uses an outer SUMX to iterate through each of the rows from the inner grouping function, and then sums the results of those iterations. Ultimately, we've removed the duplicate values and simply summed the sessions purchased at the customer level.
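The max-then-sum logic can be sketched in plain Python against the sample table above; the numbers match the goal output (2 clients, 4 sessions purchased).

```python
# Rows from the sample Coaching table:
# (CustomerID, CoachingSession, SessionsPurchased)
coaching = [
    (100, 1, 3), (100, 2, 3), (100, 3, 3), (100, 4, 3),
    (101, 1, 1), (101, 2, 1),
]

# Group by customer and take MAX(SessionsPurchased) for each,
# collapsing the purchase value duplicated across attendance rows.
max_purchased = {}
for customer, _, purchased in coaching:
    max_purchased[customer] = max(max_purchased.get(customer, 0), purchased)

clients = len(max_purchased)                      # 2
sessions_purchased = sum(max_purchased.values())  # 3 + 1 = 4
```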
That's it for now. Happy holidays!
Comments

I can't tell you how much this helped me out. Thanks for the post!

Thanks for the feedback, Elaine. Glad it was useful.

Nice article, Brent -- very helpful -- keep up the great work!
Hi, that was very helpful. But what if in the second pass we need to calculate MAX or MIN? MAX and MIN take only a column reference as an argument. Thanks in advance!
I keep coming back to this article. I have used the methods a few times but even so I find your explanation really helps me to know what I am doing. And this topic serves as a good springboard for a more general understanding of DAX. Thanks a lot!
Thanks for the article, Brent. I'm trying to rank the Sales Orders by Sales amount and return a scalar value showing the rank of a particular Sales Order. I have used RANKX to return a table of Sales Orders and their respective rank values, but I'd want to return a single value showing the rank of the latest Sales Order. Any ideas on how I would do this?
Very, very helpful. Thanks -- great explanation!

Awesome post, very helpful! Thank you.

Thank you so much for this post... I can only say I did not understand well the possibilities of AggX until I read this in detail. Fantastic work, thanks again!
ReplyDeleteHello, Is there possibly a way to apply your grouping logic to replace the hard-coded content_id values (in the calculation below) with a grouped set of content_id values that are specific to a row attribute?
ReplyDeleteBasically, I would like to replace the content_id values below with the IDs that are relevant to a row filter.
test:=CALCULATE ( [sum_adj_sentiment_es],Filter(ALL(DimContent),DimContent[content_id] = 26430405 || DimContent[content_id] = 26425331))
Many thanks in advance!
Ben
Hello, I'm stuck trying to implement a dynamic nested filter in what should be a simple DAX calculation:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/ed5ad223-bda0-4013-8a6a-9008428ed3e1/dynamic-nested-filter-in-dax?forum=sqlkjpowerpivotforexcel
Any help would be much appreciated, since this thread seems relevant to nested grouping. Thank you!
Ben
Great article!

Thank you for the article! I stopped at approach number two, but I think I used approach number three.

Thanks for the article. I get it to work with a single table, but when my roll-up or category list is in another table I can't get it to apportion out the values per category.

This was very useful! Approach #3 is exactly what I've been searching for over the past two days: Group By > Distinct Count > Average.

Finally; fantastic; thanks!