AppsFlyer gets creative with analyzing and delivering mobile app marketing metrics
Mar 18, 2021
AppsFlyer provides marketers of mobile apps with the metrics that drive their success. We measure and track every data point within their apps, including clicks, installs, and purchases — and then aggregate and present that data in attractive and easy-to-use dashboards. Our tools allow them to get exceptionally granular in analyzing the performance of their ad campaigns.
We are often situated in between the advertisers that utilize our insights — leading brands such as Walmart, HBO, and Samsung — and the partners through which their ads are delivered, such as Facebook, Google, and Pinterest. We work with many leading ad agencies in this middle ground, including Havas, Omnicom Group, and 360.
Big data is our business
With these huge brands as advertisers, you can imagine the massive amounts of data flowing through our systems. We process about 100 billion events daily. We use Amazon Web Services (AWS) to store the 90 terabytes of data we generate each day. We also use Google's BigQuery database for longer-term storage, where we keep about 40 petabytes of data. We're storing two-and-a-half times more data each year. And Looker is the platform that powers all of this analysis, for both our advertisers' data and our own internal data.
Our best practices for optimizing performance
We turned to Looker’s in-house customer support team for advice on how to get the most out of the product. Once we had our basic structure in place, we wanted to know whether there was anything else we needed to learn or best practices we could adopt. We discovered that we were not properly reusing code, so we changed our logic so that everything is written in one place and extended from there. Now, if we change the base, everyone sees the same results. This took our modeling to the next level.
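The "write once, extend everywhere" pattern maps directly onto LookML's `extends` parameter. A minimal sketch of the idea — view, table, and field names here are illustrative, not our actual schema:

```lookml
# Base view defined once: the single place where shared logic lives.
view: events_base {
  sql_table_name: analytics.events ;;

  dimension: event_id {
    primary_key: yes
    sql: ${TABLE}.event_id ;;
  }

  dimension: event_type {
    sql: ${TABLE}.event_type ;;
  }

  measure: event_count {
    type: count
  }
}

# A team-specific view extends the base instead of copying it, adding
# only its own fields. A fix to events_base is picked up here automatically.
view: install_events {
  extends: [events_base]

  measure: install_count {
    type: count
    filters: [event_type: "install"]
  }
}
```

Because extending views inherit every dimension and measure from the base, a change to the base definition propagates to all teams, which is what keeps everyone seeing the same results.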
Performance is something we have to be mindful of since we’re handling petabytes of data. We address it in numerous ways:
- Using Looker's caching feature, which lets us set up datagroups and triggers. For example, we can set a trigger so that Looker checks for a new version of a table only after a new ETL run has completed.
- Limiting the number of elements we put in a dashboard.
- Avoiding large data sets when possible.
- Scheduling a cache refresh early in the morning, before our users open Looker.
- In LookML, using derived tables to aggregate data to a different level of granularity. We use predefined views with materialized views on top of them to analyze the data with better performance.
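The first and last points above can be sketched in LookML as a datagroup tied to an ETL completion table, plus a persisted aggregate derived table that rebuilds on the same trigger. The table and field names are illustrative assumptions, not our production schema:

```lookml
# Datagroup that invalidates the cache only when a new ETL run has landed.
# The sql_trigger fires a rebuild whenever its result changes.
datagroup: daily_etl {
  sql_trigger: SELECT MAX(completed_at) FROM etl_log ;;
  max_cache_age: "24 hours"
}

# Aggregate derived table persisted on the same trigger, so dashboard
# queries hit a small pre-aggregated table instead of raw event data.
view: daily_installs {
  derived_table: {
    sql:
      SELECT app_id,
             DATE(event_time) AS event_date,
             COUNT(*) AS installs
      FROM analytics.events
      WHERE event_type = 'install'
      GROUP BY 1, 2 ;;
    datagroup_trigger: daily_etl
  }

  dimension: app_id {}

  dimension_group: event {
    type: time
    timeframes: [date]
    sql: ${TABLE}.event_date ;;
  }

  measure: total_installs {
    type: sum
    sql: ${TABLE}.installs ;;
  }
}
```

Dashboards built on this view serve from cache until the datagroup's trigger value changes, which is how a pre-dawn scheduled refresh can warm the cache before users arrive.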
Internal analytics on Looker
We rely heavily on Looker for our internal data needs as well. For example, our product managers use Looker to analyze how our customers use our product, to understand what needs to be fixed, and to determine what needs to be added in terms of features and functionality. Our research and development team uses Looker to measure the product and to understand how it can perform better. In fact, all our business departments — from customer service, sales, finance, and operations to human resources — use Looker for their own needs and KPIs.
We also have a business intelligence (BI) team specifically tasked with managing data sources, storage, and analytics. This BI team makes sure everything is available in Looker and that Looker is being utilized in the best way possible. This team provides data governance, manages user permissions, and does other administrative work within Looker.
On our BI team, we have full-stack developers who write the extract, transform, and load (ETL) processes using Spark and Python. We also have data engineers, BI developers, administrators, and analysts. We use a self-serve approach within our BI team to avoid bottlenecks. Our main goal is to provide one source of truth for all our people. We want this BI tool to be reliable, useful, and easy to use for everyone, company-wide.
The main challenge we faced when implementing Looker was keeping up with our rapid growth. In two years, we went from 200 Looker users to more than 800 daily active users. We went from five analysts to about 30.
Powering our rapid growth with the Looker Guild
As we grew, we put analysts in different departments outside of the BI team. They are our specialists within their business units and know the unique KPIs and business needs of their departments, essentially acting as liaisons. All of them needed to be supported in terms of training and access to the relevant data sources, projects, models, and permissions in Looker.
Eventually, we tapped the most Looker-obsessed of these analysts to form what we call our Looker Guild.
Looker Guild members are responsible for:
- Optimizing the interface between the BI team and the analytics teams
- Keeping Looker as one source of truth for all of AppsFlyer
- Governing data access
- Leading the implementation of Looker in their departments
- Helping us to implement new features and LookML projects
- Ensuring best practices are followed
- Managing the content permissions
- Onboarding and training new users of Looker on their team
- Engaging with the Slack channel dedicated to Looker developers, answering questions from team members
- Defining the business logic and KPIs relevant to their departments
Structuring our systems for ongoing success
With so many different business units and people accessing Looker, we wanted to set up a project structure that would support the numerous projects across our organization and keep everything organized in the database. Each business team has its own project folder inside Looker and creates the content according to their needs. They are responsible for providing and sharing that information with the other relevant departments, since some of our teams work on the same KPIs or need access to the same data. Each team takes ownership of setting up the permissions to the data in their projects.
Our central BI team owns the core model, which is made up of the base tables that are created as a result of our ETL processes. We open up those tables to the various projects and teams who need them and allow them to build their business logic and KPIs on top. Within the central BI team, a Looker data owner and a data governor work with Looker Guild members from each business unit to ensure that the core model is still supporting their business needs. They act as the key interface with the central BI team and feed in the requirements for the core model, helping to strike the right balance of what is centrally managed versus what is team managed.
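This hub-and-spoke setup corresponds to LookML's project import mechanism: each team project declares a dependency on the central project and includes its views. A minimal sketch, with hypothetical project and file names:

```lookml
# manifest.lkml in a team's project, importing the central core model.
# "core_model" is an illustrative project name.
project_name: "marketing_analytics"

local_dependency: {
  project: "core_model"
}
```

```lookml
# In the team's model file: include base views from the imported
# project (the // prefix references the imported project by name),
# then layer team-specific logic and KPIs on top of them.
include: "//core_model/views/*.view.lkml"
```

Imported files are read-only from the team project's side, which keeps the core tables centrally managed while each team owns the business logic it builds on top.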
As we’ve grown, we’ve recently added a new department — a business analytics department — that is responsible for defining the cross-business KPIs to make sure they’re aligned between the different teams. In this way we have a system that is efficient, organized, and optimized.
Using Looker to measure how we use Looker
The last way we use Looker is to measure how we use data. We can see how our people are using Looker in the system activity dashboards. There, we measure our users' content activity, keep an eye on performance issues, and take note when there is unused content that needs to be deleted. This way we can see how we are doing as a BI team and data provider. We want everyone to use Looker to its full advantage.
To hear even more details on our Looker journey, watch my JOIN@Home session.