At Looker our customers are “Good Lookers,” Frank, our CEO, is “FrankenLooker,” and Margaret, who fosters and grows our customer community, is “Looker Afterer.” We have almost as much fun with the name “Looker” as we do with the product we make. This is the first post of many that explains how we use Looker internally.
Looker is a data exploration product that sits on top of any SQL database. Because the platform is delivered as a web server, we use it to share data between teams, embed it into other applications, and analyze the data we collect from our users' and teammates' activities.
Approximately 70% of Looker's 100+ employees use our internal “Meta” Looker on a given weekday, logging a median 2,500 “usage minutes” (a proxy metric we invented) per day over the past few weeks (March 2015). We don't have any internal-facing data analysts, but everyone builds and edits our LookML data models collaboratively (via git).
This post explains how we use Looker to analyze our sales process, improve our product, manage customer support, and find the best surf near Santa Cruz.
Our data warehouse contains all the metadata we collect from Looker instances out in the world (via in-house tools and Snowplow), plus regular data dumps from Salesforce, Marketo, Zendesk, and other third-party services.
We move data from each source into our MySQL (Amazon RDS) database with a few scripts and tools. We copy data to Amazon Redshift, where our clickstream event data lives, because the event logs are too big to analyze in MySQL at this point.
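Our exact scripts are internal, but a minimal sketch of the pattern (hypothetical table and column names) would batch rows pulled out of MySQL into CSV files that Redshift can ingest with a COPY statement:

```python
import csv
import io

def rows_to_csv(rows, columns):
    """Serialize query results into a CSV string, suitable for
    staging on S3 and loading into Redshift with COPY."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    for row in rows:
        writer.writerow([row.get(col, "") for col in columns])
    return buf.getvalue()

# Hypothetical clickstream events fetched from MySQL:
events = [
    {"event_id": 1, "user_id": "u42", "action": "run_query"},
    {"event_id": 2, "user_id": "u7", "action": "save_look"},
]
csv_text = rows_to_csv(events, ["event_id", "user_id", "action"])
# The staged file is then loaded on the Redshift side, e.g.:
# COPY events FROM 's3://our-bucket/events.csv' CSV IGNOREHEADER 1;
```

The real pipeline also has to handle incremental extracts and retries; the sketch only shows the serialization step.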
Once we have all the data in one place, we build a data model around it that describes the relationships and business logic. All of our customers and trial accounts have a unique ID set in Salesforce (Account API ID). That ID maps to a customer's license key, Looker instance identifier, and Zendesk organization. We wrote this logic into our LookML model so we can jump from one question to another and have a map of the full sales funnel, from lead to initial meeting, trial to customer, and beyond.
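The join logic itself lives in LookML, but conceptually it is just keyed lookups on that shared account ID. A sketch with illustrative records and field names:

```python
# Illustrative records, all keyed by the Salesforce Account API ID.
salesforce_accounts = {"001A": {"name": "Acme Co", "stage": "Customer"}}
license_keys = {"001A": "LK-9F3A"}
zendesk_orgs = {"001A": {"org_id": 555, "open_tickets": 2}}

def account_360(account_id):
    """Stitch sales, licensing, and support data together on one key,
    the way our LookML joins let us hop between questions."""
    return {
        "account": salesforce_accounts.get(account_id),
        "license_key": license_keys.get(account_id),
        "support": zendesk_orgs.get(account_id),
    }

profile = account_360("001A")
```

In the warehouse these are SQL joins rather than dict lookups, but the shared key is what makes the lead-to-customer funnel navigable.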
For sales analysis, we prefer the flexibility of LookML + SQL to the reporting capabilities in Salesforce. Centralizing our efforts around one platform helps us be successful together. Storing Salesforce data in our own database also means we can join it with other data sources, such as product usage statistics.
We also decided early on to track things we thought might be useful someday, even without a clear use case at the time. For example, we track our sales team's emails. This has let us answer questions like “What is the best time of day to send emails to customers in South Africa?”
Some of our teammates live in other tools, and that's okay. We bring the data to them by embedding Looker dashboards and charts into Salesforce itself, for example, to give the sales team an informed view of how a prospect or customer is using our software:
Embedding Looker into Salesforce requires that each user have Looker access, as data is embedded as a parameterized iframe, not passed to the Salesforce platform. (Thanks to the biz ops team at Heroku for showing us how they do this!)
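A parameterized-iframe embed boils down to building a dashboard URL with the record's filter values baked in. A minimal sketch (the host, dashboard ID, and filter name here are hypothetical):

```python
from urllib.parse import urlencode

def embed_iframe(dashboard_id, filters, host="looker.example.com"):
    """Build a parameterized dashboard URL and wrap it in an iframe tag.
    Each viewer still authenticates against Looker itself."""
    query = urlencode(filters)
    src = f"https://{host}/embed/dashboards/{dashboard_id}?{query}"
    return f'<iframe src="{src}" width="100%" height="600"></iframe>'

# Filter the embedded dashboard to the Salesforce record being viewed:
tag = embed_iframe(42, {"Account ID": "001A"})
```

Because only the URL is parameterized, no warehouse data ever passes through Salesforce; Looker enforces its own login inside the iframe.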
Looker comes with built-in chat support for all of our admin and developer / data analyst users. This is primarily for LookML support, but our team fields hundreds of questions every day about visualizations, security features, and more.
Because our customers are all over the world, it's important to set the right hours for our support team. We tag every support chat in Zendesk and use the data we collect to monitor shifts in ticket root cause and spikes in support for new customers.
We also tag each support chat with metadata like tone and root cause so we can tie it to other customer account data and share it across teams (e.g. a salesperson might need to know if one of her trial accounts is having trouble setting up a database connection).
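Monitoring those shifts conceptually reduces to counting tagged chats over time. A sketch with made-up tag names and data:

```python
from collections import Counter

# Illustrative tagged chats as they might land in the warehouse:
chats = [
    {"tags": ["root:db_connection", "tone:frustrated"], "week": "2015-W10"},
    {"tags": ["root:db_connection"], "week": "2015-W10"},
    {"tags": ["root:viz"], "week": "2015-W11"},
]

# Count root-cause tags per week, ignoring tone tags:
root_causes = Counter(
    (chat["week"], tag)
    for chat in chats
    for tag in chat["tags"]
    if tag.startswith("root:")
)
# e.g. root_causes[("2015-W10", "root:db_connection")] == 2
```

In practice this aggregation is a LookML measure grouped by week and tag, but the shape of the question is the same.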
GitHub + Looker
We use GitHub Issues for all feature request, bug, and issue tracking. GitHub has a robust API, so Nate on our engineering team wrote a script to bring all the issues into our data warehouse every 15 minutes. From there, we generated a simple LookML model and built some dashboards and scheduled reports:
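Nate's script is internal, but a minimal sketch of the same pattern against the public GitHub issues API (the column subset is our choice for illustration) might look like:

```python
import json
from urllib.request import Request, urlopen

API = "https://api.github.com/repos/{owner}/{repo}/issues"

def fetch_issues(owner, repo, token):
    """Pull the current page of open issues from the GitHub API."""
    req = Request(API.format(owner=owner, repo=repo),
                  headers={"Authorization": f"token {token}"})
    with urlopen(req) as resp:
        return json.load(resp)

def flatten(issue):
    """Reduce an issue payload to the columns we warehouse."""
    return {
        "number": issue["number"],
        "title": issue["title"],
        "state": issue["state"],
        "labels": ",".join(label["name"] for label in issue["labels"]),
    }

# Abridged example of the payload shape the API returns:
row = flatten({"number": 101, "title": "Crash on login",
               "state": "open", "labels": [{"name": "critical"}]})
```

A cron job running a script like this every 15 minutes keeps the warehouse copy fresh enough for dashboards and alerts.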
Our engineers and support team get an email every time a critical issue is posted or fixed, and we schedule reminders and digests via the Look scheduler.
Looker is based in Santa Cruz, a coastal town in Northern California known for surfing. Phil C on the marketing team tracks swell, tide, wind, and other conditions using exported Surfline data, and then adds notes to his dashboards on our internal Looker.
Phil can filter on upcoming conditions and on where he's surfed in the past to decide where to go.
Segah, who previously worked in data science consulting for financial institutions and startups, does research with the help of a LookML model on top of open Lending Club data:
It's not hard to imagine life without Looker, because we've already lived it: handwriting hundreds or thousands of SQL queries, publishing tedious reports, and reconciling differences in how key metrics are calculated.
Tired of being the bottleneck between your data and your coworkers? Want to help build a data modeling language and web application that leverage the knowledge of a few to serve the needs of many? We're based in sunny Santa Cruz with teammates all over the world. Come join us!