When looking at time series data, it helps to rely on a metric that reveals the underlying trend — something robust enough to smooth out volatility and short-term fluctuations. A question that Looker users frequently pose is: how does average sale price fluctuate over time? Answering it calls for a moving average and sum calculation in SQL over a monthly interval. There are several ways of accomplishing this; I'm going to demonstrate two approaches: correlated subqueries and derived tables.
My example uses a simple purchases table to create rolling sums of revenue. The sample code below can be modified to calculate many other aggregates and to compare other timeframes, such as daily or hourly. Here's how the raw data looks:
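Since the raw table isn't reproduced here, the snippet below builds a small hypothetical version of it. The column names (`created_at`, `amount`) and the sample rows are assumptions for illustration, and I'm using SQLite from Python so the example is self-contained and runnable:

```python
import sqlite3

# Hypothetical stand-in for the purchases table. The column names
# (created_at, amount) and the sample rows are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE purchases (
        id         INTEGER PRIMARY KEY,
        created_at TEXT,   -- purchase date, ISO 8601
        amount     REAL    -- sale price
    )
""")
conn.executemany(
    "INSERT INTO purchases (created_at, amount) VALUES (?, ?)",
    [("2014-03-04", 100.0), ("2014-03-11", 200.0), ("2014-03-19", 300.0),
     ("2014-03-25", 400.0), ("2014-04-01", 500.0)],
)
for row in conn.execute(
        "SELECT created_at, amount FROM purchases ORDER BY created_at"):
    print(row)
```

Each purchase falls into a different Monday-through-Sunday week, which keeps the rolling sums easy to check by hand.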
Here is the result set we expect:
|Derived Table: Purchases weekly rolling on monthly interval|
The finished result set makes it faster and easier to create reports and to discover interesting insights. We can also add dimensions to the final form so we can see how different facets alter our rolling aggregate. Now let's take a look at how to get to this form.
The easiest way to produce our desired result set is by using a simple correlated subquery.
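Here's a runnable sketch of that approach. The `purchases` schema (`created_at`, `amount`) is an assumption, the date arithmetic is SQLite dialect (`'weekday 0', '-6 days'` truncates a date to the Monday of its week), and the four-week window here includes the current week plus the three before it; shift the `BETWEEN` bounds if your window should exclude the current week:

```python
import sqlite3

# Sample setup (hypothetical schema and data, one purchase per week).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (created_at TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?)",
    [("2014-03-04", 100.0), ("2014-03-11", 200.0), ("2014-03-19", 300.0),
     ("2014-03-25", 400.0), ("2014-04-01", 500.0)],
)

rolling = conn.execute("""
    SELECT w.week,
           w.weekly_revenue,
           -- Second pass (correlated subquery): for each outer week, sum
           -- revenue for the four-week window ending at that week.
           (SELECT SUM(p2.amount)
              FROM purchases p2
             WHERE date(p2.created_at, 'weekday 0', '-6 days')
                   BETWEEN date(w.week, '-21 days') AND w.week) AS rolling_revenue
      FROM (
            -- First pass: one row per week with that week's total revenue.
            SELECT date(created_at, 'weekday 0', '-6 days') AS week,
                   SUM(amount) AS weekly_revenue
              FROM purchases
             GROUP BY 1
           ) w
     ORDER BY w.week
""").fetchall()
for row in rolling:
    print(row)  # last row: ('2014-03-31', 500.0, 1400.0)
```

The inner `SELECT` is re-evaluated for every row of the outer result set, which is exactly the performance problem discussed below.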
In the first pass, you can see we're grabbing all the weeks for which we have data and summing the revenue for each week. In the second pass, we're calculating, for each week, the rolling revenue over its four-week window.
Unfortunately, this technique has some drawbacks. The simplicity and elegance of the query come at a great cost to performance: the database reruns the inner SELECT once for every row (and every correlated column) of the result set. With a large dataset (one spanning many weeks, for example), this query may run for a very long time. The correlated subquery approach is best reserved for small datasets or for a small subset of the data (using a WHERE clause to limit the query range).
Please note that correlated subqueries are not implemented in every database. Popular MPP databases, such as Redshift and Vertica, only partially support correlated subqueries.
So how can we answer our question when dealing with very large datasets? We want to avoid scanning any row of the raw data more than once. The best method is to use derived tables. Here's the code:
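A runnable sketch of the derived-table approach follows (SQLite dialect via Python; the schema, the week truncation, and the window bounds are the same assumptions as before). The CTE is scanned once to build weekly totals, then joined to itself to fan each week out into its four-week window:

```python
import sqlite3

# Sample setup (hypothetical schema and data, one purchase per week).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (created_at TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?)",
    [("2014-03-04", 100.0), ("2014-03-11", 200.0), ("2014-03-19", 300.0),
     ("2014-03-25", 400.0), ("2014-04-01", 500.0)],
)

result = conn.execute("""
    -- CTE: one row per week with that week's revenue (single scan of raw data).
    WITH weekly_revenue AS (
        SELECT date(created_at, 'weekday 0', '-6 days') AS week,
               SUM(amount) AS weekly_revenue
          FROM purchases
         GROUP BY 1
    )
    -- Self-join fans each week out into up to four rows, one per week in
    -- its window; then we sum and average the joined-on weekly revenues.
    SELECT wr1.week,
           wr1.weekly_revenue,
           SUM(wr2.weekly_revenue) AS rolling_sum,
           AVG(wr2.weekly_revenue) AS rolling_avg
      FROM weekly_revenue wr1
      JOIN weekly_revenue wr2
        ON wr2.week BETWEEN date(wr1.week, '-21 days') AND wr1.week
     GROUP BY wr1.week, wr1.weekly_revenue
     ORDER BY wr1.week
""").fetchall()
for row in result:
    print(row)  # last row: ('2014-03-31', 500.0, 1400.0, 350.0)
```

Note that early weeks fan out into fewer than four rows, since there is no earlier data to join onto; the sums and averages for those weeks cover only the weeks that exist.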
This query may seem a bit strange, so let's go over it step by step. First, we create a simple common table expression, or CTE, with every week and its revenue. Next, we join this CTE onto itself to create four rows, one for every prior week. We're fanning out the data and then summing on each of the new joined-on weekly_revenues. Here's how the form looks right after the JOIN, but before the SUM:
|Intermediate Form: Fanned out weekly_revenue|
There are now four rows for the week starting 2014-03-31. Each row has a weekly revenue from one of the four prior weeks. To get to the result set we want, we simply sum and average the wr2.weekly_revenue and then group by the original week's date and revenue value.
|Derived Table: Our result set|
This query will run much faster than the subquery method, since we only scan the raw table once.
(I used a WITH statement in this example query. Your syntax may be different, depending on your SQL database. You may prefer to use CREATE TEMPORARY TABLE or CREATE TABLE.)
Sometimes our data isn't perfect. There may be some weeks when no purchases are made. To make sure our query addresses that, we simply add a table with all the weeks we're interested in:
|Derived Table: weeks|
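One way to build that weeks table is with a recursive CTE and a LEFT JOIN, so weeks with no purchases still appear with zero revenue. This is a sketch under the same assumptions as before (SQLite dialect, hypothetical `purchases` schema, and a hard-coded date range for illustration):

```python
import sqlite3

# Sample setup with gaps: no purchases in the weeks of
# 2014-03-10 and 2014-03-24 (hypothetical schema and data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (created_at TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?)",
    [("2014-03-04", 100.0), ("2014-03-19", 300.0), ("2014-04-01", 500.0)],
)

filled = conn.execute("""
    -- Generate every week start in the range we're interested in.
    WITH RECURSIVE weeks(week) AS (
        SELECT '2014-03-03'
        UNION ALL
        SELECT date(week, '+7 days') FROM weeks WHERE week < '2014-03-31'
    ),
    weekly_revenue AS (
        SELECT date(created_at, 'weekday 0', '-6 days') AS week,
               SUM(amount) AS weekly_revenue
          FROM purchases
         GROUP BY 1
    )
    -- LEFT JOIN keeps empty weeks; COALESCE turns missing revenue into 0.
    SELECT w.week,
           COALESCE(wr.weekly_revenue, 0) AS weekly_revenue
      FROM weeks w
      LEFT JOIN weekly_revenue wr ON wr.week = w.week
     ORDER BY w.week
""").fetchall()
for row in filled:
    print(row)  # weeks with no purchases show revenue 0
```

Without this scaffold, empty weeks would silently drop out of the rolling window and skew the sums and averages.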
Now we can use this table in our queries by incorporating it into the second pass of our correlated subquery or into the join of the derived-table query.
With these two approaches, we can now create rolling calculations in SQL — which enables us to understand complex metrics, such as ratios and medians, across meaningful intervals of time.