Looker | Blog

Looker 7: Making Data Experiences a Reality
http://looker.com/blog/looker-7

The explosion of data generated by SaaS applications is fundamentally changing the data and analytics industry. IDC1 predicts that the collective sum of the world’s data will grow to 175 ZB by 2025. To put this in perspective, if each gigabyte in a zettabyte were a brick, we could build the Great Wall of China (made of 3,873,000,000 bricks)... 258 times!

Data of this volume and complexity is something we would not have envisioned just a few years ago, and it has been a driving force behind recent advancements in data and analytics technologies. Data is no longer isolated in a single monolithic software suite; it is spread across multiple applications in the cloud. Modern databases are more powerful, faster, and cheaper. The traditional ETL paradigm is giving way to data transformation on demand.

But it’s not just tools and technologies that have evolved. We’re also seeing a profound change in the way companies use data. The first generation of the digital, internet-native workforce expects to leverage data in their day-to-day work. Forward-looking organizations no longer see data simply as something that’s displayed on a screen to be analyzed. These companies and their workforces want data integrated into everyday business workflows.

According to Gartner2, “spending and focus on traditional query and reporting BI environments will shift dramatically to analytics integrated into business operations and digital solutions.” In other words, unlocking the full potential of data consists of broadening its use from informational to operational.

This doesn’t mean that our beloved BI dashboards will disappear. These tools are and will always be essential to inform our understanding of our business. What it does mean is that traditional BI is becoming just one of many ways to interact with data.

Today, there are already many other ways in which companies use data beyond traditional BI.

  • Marketing departments are using data to automatically adjust bids on ads up or down depending on their performance.
  • Some organizations are acting as a data supplier to deliver a data pipeline to feed their customers’ data science workflows.
  • Solutions providers are creating new revenue streams by bringing a data product to market (DaaS — Data as a Service).

These are just some examples of companies whose use of data is not limited to reports and dashboards. They are transcending traditional BI by infusing what we call “data experiences” into day-to-day tools and workflows, using the abundance of available data for closing the gap between discovering insights and taking action.

Forrester3 writes that “insights-driven businesses harness and implement digital insights strategically and at scale to drive growth and create differentiating experiences, products, and services.” This is the vision behind our next major product release: to better support the new era in data and analytics by building a platform where companies can create any number of data experiences imaginable. Looker 7 is that platform.

Looker 7 delivers new and enhanced capabilities in three key areas:

  • A rich development framework to power custom data experiences across the organization. Looker 7 includes an in-product Marketplace for customers to deploy and developers to monetize pre-built models, applications, integrations, plug-ins, data sources, and custom visualizations. With Looker, companies can jumpstart their development efforts with updated SDKs, new UI components, and rich developer resources.
  • A reimagined BI experience out-of-the-box. Looker 7 delivers dashboard capabilities in a modern interface that makes them more interactive, more intuitive, and easier to edit and customize. A new closed-loop Slack integration delivers data insights from Looker directly into Slack conversations for seamless collaboration. New SQL Runner visualizations give data analysts the ability to understand raw data on the fly.
  • Built for enterprise-grade workflows. A new managed data integration and database solution connects to 45+ data sources and allows customers to unify data in disparate systems seamlessly. Looker 7 offers a range of multi-cloud hosting choices powered by Kubernetes, including Amazon Web Services, Google Cloud Platform, and Azure. New system activity analytics give administrators the ability to analyze user behavior, adoption of data products, and performance issues.

We are excited to announce this new major release of our platform and to continue furthering the vision of data experiences.

Learn more about Looker 7 or join our Looker 7 launch webinar with Looker Chief Product Officer Nick Caldwell on November 20th.


1 IDC: Expect 175 zettabytes of data worldwide by 2025, Andy Patrizio, December 3, 2018

2 Gartner 100 Data and Analytics Predictions Through 2022, Douglas Laney, Guido De Simoni, Rick Greenwald, Cindi Howson, Ankush Jain, Valerie Logan, Alan D. Duncan, May 21, 2018

3 "Insights-Driven Businesses Set The Pace For Global Growth,” Forrester Research, Inc., December 14, 2018

2019-11-06T09:00:00-08:00
Empowering Developers to Create Powerful Custom Data Experiences
http://looker.com/blog/empowering-developers-to-create-powerful-custom-data-experiences

“What tools do developers and analysts need to build better data solutions for their internal teams and customers?”

The Looker platform team has answered this question over the past year by making our platform more approachable and flexible for developers who are building solutions for a wide range of operational use cases.

From data products to automated workflows, highly custom data experiences, custom visualizations, deep integrations, and more, our new developer tools and framework unlock powerful ways to extend the Looker platform.

We listened carefully to analyst and developer feedback to understand how we can improve current tools and find new ways we can empower developers to build data experiences in and out of Looker.

We’re excited to introduce a rich set of new tools that will enable you to build better, more tailored data experiences.

Customize embedded experiences more easily

Our customers embed customized Looker content in many ways, from embedding in external websites to adding a dashboard to internal Salesforce accounts.

To enable you to continue pushing the envelope, you can now take advantage of our new dashboard customization features, which allow you to:

  • choose from a whole new selection of filters to fit your data use case
  • customize tiles by adjusting the spacing, alignment, and shadow
  • theme your embedded dashboard with more fine-grain control

In addition, you’ll soon be able to add pre-built, custom tiles to your dashboard. Want a real-time view of what people are saying about your company on social media? Or maybe a live stock ticker to track your company’s share price? Developers can use pre-built components to ensure these custom experiences fit in Looker naturally. And, after you build out your dashboard, you can easily embed it to enrich your website with Looker insights.

Build with more approachable APIs and software development tools

Looker developers have built some amazing things with our APIs, but we know that getting started can be a challenge that requires a lot of developer sophistication.

To help break down these barriers, we’ve made our core APIs more approachable to developers in the languages they know best, with SDKs for JavaScript/TypeScript, Python, Kotlin, and Swift. We’re also packaging up common Looker components (filters, visualizations, navigation, layout, etc.) so developers can build custom data experiences much more quickly.

Create Looker hosted applications using our extension framework

We don’t want people building outside of Looker to have all the fun, so we’re making it possible for developers to build these experiences right inside Looker.

With the extension framework, developers can build Looker hosted applications and plug-ins using our new SDKs and components, all within a secure sandbox environment. Content created on this framework can be built as a full-screen experience, embedded into an external website, or embedded inside Looker (as a custom tile you plug into your dashboard, for example). We hope these new capabilities inspire your creativity and imagination.

Discover, install, and distribute in our marketplace

In addition to a framework for building these experiences, we are providing developers a place to share their creations with their peers and other users: the Looker marketplace.

In the Looker marketplace, developers can deploy what they’ve built, share it with other users, and discover and install applications, plugins, and integrations with ease and speed.

For instance, users can now install a block or visualization in seconds, and an entire robust application in only a few minutes.

At launch, the marketplace will contain 10 plug-ins and visualizations, 15 models, and 50 connections — with more additions to come. Stay tuned to see this content expand throughout 2020.

Better products, better experiences

Whether you’re building a better data product for your customers, or a better Looker experience for your internal stakeholders and business users, the solutions built by the brilliant minds in the Looker Platform Ecosystem of developers can get you where you want to go, faster.

Share what you’re excited about with the Community. We can’t wait to see where your creativity takes you.

2019-11-06T08:59:00-08:00
Designing A Better Data Experience: Extending LookML Development with Customized Tools
http://looker.com/blog/designing-a-better-data-experience-interview

Editor’s note: Carl Anderson will be sharing his thoughts around open-source tooling at JOIN’s Extending LookML Development with Customized Tools session on Wednesday, 11/6, at 4:15 pm. Don’t have a ticket to JOIN yet? Reach out to our team to get your ticket to take part in the conversation.


Everyone has opinions about how to build a great data experience for end-users.

But how do you boost the experience for the developers and analysts building reports for a business?

Carl Anderson has some answers to that question.

Carl is the Senior Director of Data Science at WW International (and author of “Creating a Data-Driven Organization”) and has been an advocate of the developer experience at Looker for years. Carl’s new suite of LookML modeling enhancements (called lookml-tools) boosts the developer experience by providing new ways to enforce consistency for developers, understand relationships among LookML files, and keep LookML up to date.

We sat down with Carl to talk about his approach to making Looker successful for companies at scale, how to set LookML developers up for success, and the value of open-source development on top of a data platform.

Looker: Carl, you’ve been part of deploying Looker at Warby Parker, WeWork, and now at WW International (formerly Weight Watchers). What are some challenges you faced as you were scaling a data culture to hundreds of people?

The first challenge is onboarding — dedicating time to get new users up and running.

Once people start digging in, the challenge becomes managing their requests. We get loads of requests for additional dimensions and measures: tweak this, rename that, and so on. We started having sessions on Friday afternoon to decide which requests to accept or reject.

Sometimes we’d have to be strict and say no. Or we’d recommend a workaround that might be challenging to execute.

What are some of the most important things that a data admin should be looking for when delivering a great data experience for their end-users?

A team might have a lot of existing dashboards that are informational but don’t drive decisions.

I would sit down with each team and ask them what they wanted to have control over or what decision they wanted a dashboard to help inform. I’d then work backward from there.

Typically that process resulted in a small, discrete set of dimensions, measurements, and models behind the dashboards that enabled them to make data-driven decisions.

So it's really about focusing the discussion toward actionable decisions?

Yes, especially if you inherit a legacy reporting system. We had one last year that had around 2000 reports, many of which were one-offs. Others could be confusing; it was hard to figure out how often they were run or if they were still valuable.

Switching from a legacy system to Looker often provides a fresh slate to determine what truly matters.

Once we had answers to how a team works, what levers they had, and what their KPIs were, we knew what to focus on — how we could help a team deliver and improve their KPIs.

What aspects of Looker are particularly suited to handle scale for developers? What did you see we were missing for those aspects?

When you're developing LookML, you want to adopt best software practices, such as keeping your code DRY (Don’t Repeat Yourself), using inheritance, having baselines that you extend, and so on.

One of the drivers for starting on our own LookML linter was to establish some basic coding standards to tame the codebase and let the structure shine through.

Enforcing naming conventions such as where things go, where to find them, what they're called, etc. lets people who may be new to the organization — or to this repository — dig in.

Did you create the toolkit to enforce that methodology of DRY code standards for developers?

We have a set of LookML developers who are used to software or BI engineering. We also have analysts that know SQL very well but aren’t software developers.

Creating a linter gave them all guidance and best practices to use. Getting everyone to follow the same coding standards provides immense value.

We basically ate our own dog food first to make it easier for an analyst to write clean LookML code.

Did these rules come from your experience in software development? Or did some of these rules come from an awareness of what was cluttering up Looker instances for those in a different role?

It's a mix. Some, like Describe all fields surfaced for users, are important regardless of what you’re doing. It's critical for users to know the specifics of what something is. That knowledge helps to avoid confusion.

Rules like Drill Downs are meant to enhance Looker as a data discovery tool. There's always going to be one or more dimensions you should be able to drill down to for any given measure. We want to make sure there are obvious places to go once a user opens up a field.

Others like the Name Rules are more about developer experience than the end-user experience.

And some rules help smooth experiences for both developers and end-users. Don’t SHOUT! adds consistency to the UI for the end-user and the developer alike.

Last but not least, Hello Members is important because we changed our internal language around how we talk about members and subscriptions and so on. We want to enforce that rule because it helps ground Looker as a source of truth.
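
[Editor’s note: as an illustration, fields written to pass rules like these might look roughly like the LookML below. The view, table, and field names are hypothetical, not taken from WW’s actual model.]

view: orders {
  sql_table_name: public.orders ;;

  # Lowercase, descriptive name -- no SHOUTING
  dimension: order_status {
    description: "Current fulfillment status of the order"
    type: string
    sql: ${TABLE}.status ;;
  }

  # Every field surfaced to users gets a description and somewhere to drill
  measure: total_revenue {
    description: "Sum of item sale price, in USD"
    type: sum
    sql: ${TABLE}.sale_price ;;
    drill_fields: [order_status]
  }
}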

What do you see as the particular value of your lookml-tools being open-source (rather than native features within Looker)?

Great question. Some of these tools — and not just lookml-tools — may be open-source simply because Looker hasn’t built them natively. For instance, an API endpoint that I could post some LookML to and receive a “yes, this is valid” or “no, it isn’t” response would be helpful.

Open-source tools signal the start of a thriving community where people recognize a need and share their solutions, which is great.

With regard to lookml-tools, what contributions to the project would be most helpful for you from the community?

I would be happy if people used them!

No one's going to have exactly the same set of linter rules that I have. They’re going to add their own classes of rules, which has a multiplier effect with other members of the community contributing different types of rules. I can imagine that there are other major types of rules that people are looking for in a linter.

LookML has so much conditional logic. I worry that there's some edge case I haven't thought about and the parser or whatever is going to fall over. Having a richer test suite or a corpus of valid open-source LookML files that cover the breadth of LookML functionality would be handy for everyone.

I also think that while there’s certain value in native support for a feature, there’s also tangible value to open-sourcing a project, because it communicates that you're not necessarily locked into a certain way of doing it.

This might be a good time to mention what happened yesterday.

Josh Temple, a Senior Data Engineer at Milk Bar, reached out to me with an open-sourced Python LookML parser and asked if I was interested in trying it out. He’d seen my post about lookml-tools.

So I spent my morning seeing whether I can incorporate that as a parser into my Python project and get my test suite to pass. After a couple of hours, I had like 90% of my test suite passing.

It’s great that someone else created a Python parser and opened it up to the community. And it helps me out because I can manage the dependencies and make it easy to install and run my code.

I think it speaks to the community that through Discourse or some other channel, a person found out about this and essentially offered a contribution to this project to make it even better — not just for me, but for everyone using this tool now.

(Update: the latest lookml-tools now uses Josh’s Python parser exclusively.)


This is an edited and condensed version of this interview.

2019-10-31T06:00:00-07:00
Using Snowflake MVs in Looker
http://looker.com/blog/using-snowflake-mvs-in-looker

Snowflake recently announced the general availability of materialized views (MVs). If you aren’t familiar with MVs, they are physical database objects that contain the result of a query, so they lie somewhere between a table, which is a physical object, and a view, which is based on a query but is a logical object.

MVs have many uses, including enhancing query performance, encapsulating complex logic, and aggregating data.

Enhancing Query Performance

Say you frequently report on a customer’s month-to-date spend and you have a table that contains line-item information about a customer’s purchase history. Using MVs, you could create a view that rolls the purchases up by customer for the current month. And since that aggregated data would always be as current as the data in the base table, the result would be highly performant queries when accessing data on customer month-to-date spend.

Without MVs, your options would be to aggregate the data each time the query was run to ensure that the most current data was included, which could have performance implications depending on data volumes, or to create a rollup table that is refreshed on a schedule. The rollup table would yield excellent performance, but the data may not be current.

The Value of Snowflake MVs

There are two cost components to be aware of, but not deterred by, with Snowflake MVs. One is the additional storage required for the MV, and the other is the cost of maintaining the MV (keeping it current), which is handled by a Snowflake service. That service is being continuously improved to reduce the maintenance cost. Over the last few weeks alone, Snowflake has released improvements to the service which can result in a 30%-40% cost savings over earlier versions (with more improvements coming soon). It’s worth keeping track of how much your MVs are costing to maintain vs. the amount of time saved by having faster queries. As we’ll see in our demo use case, that time savings can be significant.

In addition, it’s important to note that while the compute resources for creating the MVs come from your own warehouse, the compute resources used to maintain the MVs are provided by Snowflake. This is good news because it means that you don’t have to have an active warehouse in order for your MVs to stay up-to-date.

Snowflake’s Materialized Views vs Looker PDTs

Looker users may be familiar with Looker’s Persistent Derived Tables (PDTs), which are physical tables that are created based on the result of a query. This sounds quite a bit like an MV, so customers frequently ask whether they should be using MVs or PDTs for their use cases. Here are a few guidelines to help you decide:

You should use Looker PDTs when:

  • You don’t have access to a DBA or you just want to run a quick test. Typically, most data analysts can create a PDT, whereas creating an MV may require a Database Administrator.
  • The query that you want to materialize includes more than one table. Currently, MVs in Snowflake do not support joins, although you can join data between two MVs.

You should use Snowflake MVs when:

  • Having a real-time view of data is important. Data in an MV will always be in sync with the data in the base table, whereas data in a PDT will only be current as of the last time it was recreated.
  • You’ll be accessing the data outside of Looker. Each time Looker rebuilds the PDT, the name of the table changes, making it challenging to query from outside of Looker. The name of an MV stays consistent.

Snowflake MVs with Looker

Snowflake’s Robert Fehrmann wrote a great blog post demonstrating how MVs can be used to provide different access paths into the same set of data, improving performance based on the types of queries that are run. Before continuing on, I would highly recommend giving it a read.

Robert’s blog assumes that you understand there are two different access paths available and that, from there, you can choose to query either the base table or the MV built on it, depending on which access path you need. These are valid assumptions if you are writing your own SQL, but what if you wanted to accomplish this in Looker with your LookML model, so users wouldn’t even have to know that there are two database objects to choose between?

Let’s take a look at how to get everything set up in Looker.

Taking Advantage of Snowflake MVs in Looker: Step by Step Example

In the following example, I followed all of the steps in Robert’s blog post with one exception. When I created the MV, I included all of the columns from the base table, so my MV create statement looked like this:

CREATE OR REPLACE MATERIALIZED VIEW MV_TIME_TO_LOAD
   (CREATE_MS, PAGE_ID, TIME_TO_LOAD_MS, METRIC2, METRIC3, METRIC4,
   METRIC5, METRIC6, METRIC7, METRIC8, METRIC9) CLUSTER BY (PAGE_ID)
AS
SELECT
   CREATE_MS, PAGE_ID, TIME_TO_LOAD_MS, METRIC2, METRIC3, METRIC4,
   METRIC5, METRIC6, METRIC7, METRIC8, METRIC9
FROM WEBLOG

After I built the table, loaded the 10 billion rows and created the MV, it was time to get things set up in Looker.

For the purposes of this example, I’ll assume that you are familiar with the basic Looker developer functions, such as creating a connection, creating a project and generating a LookML model. If you aren’t, this video explains how to do all of those things: Looker — Database To Dashboard

Before we start running queries, we’ll need to add a new measure for AVG_TIME_TO_LOAD_MS to our weblog view file.
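
A minimal sketch of that measure, assuming the TIME_TO_LOAD_MS column from the WEBLOG table created above, looks roughly like this:

measure: avg_time_to_load_ms {
  type: average
  sql: ${TABLE}.time_to_load_ms ;;
}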

Now we are ready to start running queries, so let’s follow the examples in Robert’s blog and test our performance.

First, we’ll query the weblog table filtering it on CREATE_MS. We’d expect this query to run pretty quickly because the WEBLOG table is clustered on CREATE_MS.

SELECT COUNT(*) CNT, AVG(TIME_TO_LOAD_MS) AVG_TIME_TO_LOAD
FROM WEBLOG
WHERE CREATE_MS BETWEEN 1000000000 AND 1000001000;

Using our weblog Explore, we’ll build the query by selecting Count and Avg Time To Load MS and filtering on Create MS.

As expected, the query takes just over a second.

Now we’ll try selecting the same columns, but filtering on PAGE_ID instead, based on the query from the Snowflake blog post:

SELECT COUNT(*) CNT, AVG(TIME_TO_LOAD_MS)  AVG_TIME_TO_LOAD
FROM WEBLOG
WHERE PAGE_ID=100000;

This query takes almost 90x longer to run due to the fact that there is not an efficient retrieval path for data filtered by page_id.

Incorporating MVs Into a Query

Let’s see how we can incorporate our MV, which is clustered by page_id, into our query to give us faster query times on data filtered by page_id.

Our current weblog view file looks like the screenshot below with PUBLIC.WEBLOG specified as the sql_table_name.

We’ll add in some logic using the Liquid templating language to dynamically choose between the WEBLOG table and the MV_TIME_TO_LOAD MV, depending on which filter criteria are selected. The logic is very simple — if the filter criteria include page_id, then use the MV that is clustered on page_id. Otherwise, use the base table, which is clustered on create_ms.
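
A rough sketch of that logic in the weblog view file, using Liquid’s _is_filtered variable (table and field names follow this example; your exact model may differ), might look like this:

view: weblog {
  # If the query filters on page_id, read from the MV clustered on page_id;
  # otherwise, read from the base table clustered on create_ms.
  sql_table_name:
    {% if weblog.page_id._is_filtered %}
      PUBLIC.MV_TIME_TO_LOAD
    {% else %}
      PUBLIC.WEBLOG
    {% endif %} ;;

  # ... existing dimensions and measures remain unchanged ...
}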

If you aren’t using Liquid in your LookML, you’re missing out. Take a look at some of the many Looker Community articles on using Liquid in your LookML to get an idea of the use cases.

If everything is working as expected, Looker should dynamically select whether to use the base table or the Materialized View as the source of the query, depending on the filter criteria selected. You can see below that with the data filtered by Create MS, Looker will use the WEBLOG table as the source of the query.

But, if we change our filter criteria to page_id, Looker automatically updates the query to access the MV instead. With a couple of lines of code, Looker takes care of selecting the right source for the data based on the selected filter criteria to ensure that your self-service users always get the best performance possible.

Now, let’s try re-running our query filtered by page_id. This time Looker queries the MV and the results are returned in under two seconds, a 45x improvement over our previous attempt!

Start Using Snowflake MVs with Looker

MVs provide another great tool for working with Snowflake. They can enhance the experience of end-users by improving query performance, or simplify access to data in complex structures by presenting a flattened view. MVs can also benefit developers by encapsulating complex logic in a reusable way.

As MVs are still a relatively new feature, we expect there will be more enhancements coming in future releases to make MVs even more useful. A few of the enhancements that we are hoping to see in the not-too-distant future include support for joins as well as the automatic selection of the most performant path between the base table and any associated MVs by the optimizer. If you have plans for how you will be using Snowflake’s MVs with Looker to enhance your end user’s experience, we’d love to hear them! Want to learn more about Looker + Snowflake? Reach out to our team for a demo today.

2019-10-19T06:00:00-07:00
Marketing Analytics: Measuring Corporate Communications with Looker
http://looker.com/blog/measuring-corporate-comms-with-looker

I’ve always preferred words to numbers. I’ve fallen asleep with my nose in a book since I learned to read, so a career in Corporate Communications (Corp Comm) felt like the perfect fit for me. My interests and strengths seemed pretty predictable until I found myself joining a data company where everyone—even the English majors—relied on numbers.

Still, I saw myself as a marketing person at a data company. The Looker marketing team has used Looker to manage marketing analytics since there was a Looker marketing team (this all started with Lissa Daniels). We use Looker for demand generation, event ROI, website optimization, and much more.

But as a Corp Comm team, we didn’t expect access to data would enable us to analyze the impact of our efforts so easily.

Looker—both the platform and the culture—changed that.

While much of what communications teams do can’t be measured easily (or happens behind the scenes), we wanted to find the best way to track, analyze, and share the things we CAN measure. We wanted to go beyond metrics such as “share of voice” and “number of social media followers,” which are helpful but tend to look backward rather than suggest improvements, and focus instead on communications metrics that provide actionable insights.

Because, at Looker, we believe data isn’t just about reporting—it’s about improving.

Measuring Communications Analytics with Looker

Building our team a high-level communications dashboard seemed like an excellent place to start. To do this, I worked with our Marketing Analytics Manager. He and I began by thinking about what data in our internal Looker instance would hold the most valuable insights for our team.

Because we have Google Analytics 360 (GA 360) and Salesforce (SFDC) data accessible via Looker, we knew we could analyze activities on Looker.com as well as activities associated with campaigns and leads in SFDC. From here, we thought about our available team data in three buckets:

  • Awareness
  • Content (i.e. press releases & blogs)
  • Analyst Relations

For this first dashboard, we focused on the activities that we drive on Looker.com.

Measuring Awareness

Awareness is a common Corp Comm goal, but how do you measure it? Is it a brand survey? A count of external mentions or impressions?

It can be all of these things! And while a combination is probably the best approach, we use organic search and direct web traffic as a proxy for general Looker awareness.

Our entire marketing team tracks this change over time by region, but I wanted to identify the Corp Comm impact on this metric.

Years ago, when we first built a social media strategy at Looker, we started adding UTM tracking into our links that we leverage in both GA 360 and SFDC. We have UTM tracking in our various social media channels, employee advocacy tools, press releases, and large campaigns.

Using GA 360, we can see what channels drive the most traffic and web activity. Using SFDC, we can see what acquisition campaigns drive downloads and progress down the sales funnel.

By combining this data in Looker, our team can identify how we contribute to the increase in web traffic as well as how we help create Marketing Qualified Leads (MQLs), Sales Qualified Leads (SQLs), pipeline activity, and closed business. This information allows us to identify what channels are the most valuable.

For example, while we have the most followers on LinkedIn, we see that Twitter drives more traffic to Looker.com. We also see that press releases and employee advocacy social posts lead to more form-fills than corporate social posts.

We believe this is because folks following our corporate account are already familiar with Looker while connections of our employees may still want to learn more and download content. We’ve taken action on these insights by putting increased effort into our employee advocacy tool and program.

Measuring Content Performance

Most of the content our team produces is published outside of the Looker domain, which makes it much more difficult to accurately attribute our content’s influence on the business.

In the future, we will share how we analyze some of that data in Looker, but for this post, I will focus on an analysis of our newsroom and blog pages, as published on our website.

By analyzing page data, our team can identify our top-performing press releases and blogs as well as spot trends to help us understand what types of news are most interesting to our audience and what kind of blogs they want to read.

One interesting finding is that our newsroom and blog readership have very different interests. For press releases, we see the most traffic for corporate announcements sharing business growth and major hires.

Meanwhile, our blog audience is most interested in detailed and how-to pieces as well as reading about insights from data (Game of Thrones, Education & Literacy, and Fantasy Football, etc.).

Our team uses these insights when deciding what belongs in our newsroom or on our blog as well as how to focus our time on side projects (e.g., analyzing interesting datasets).

Measuring Analyst Relations

Some of the richest data available to our team has to do with tracking the ROI on content derived from our work with Industry Analysts (think Gartner and other research firms).

When we license the rights to a report or a webinar with one of these firms, we create a custom campaign in SFDC to track the leads who interact with each piece of content—as well as how they move through our sales funnel and (hopefully) into our customer funnel.

By analyzing this data in Looker, we’re able to easily compare pieces of content across campaigns and firms to identify the impact and value on our business. Our team uses this data to help determine which firms we hold seats with, which reports we license—and how we prioritize our time (filling out requests for information (RFIs), researching, and creating content for webinars, etc.).

This in-depth analysis has helped us identify which firms and reports bring in the most MQLs and which reports create the highest value MQLs (leads who are most likely to move through our sales funnel). Interestingly, we have found that the highest volume and highest converting pieces of content are not necessarily the same.

All of this data helps our team determine how we spend our time and money and ask for additional resources and support.

When we’re getting ready to launch content that we KNOW is high value but that may require additional support from other teams, we can share this data as proof that their effort will be well spent. And, when we test something and see a low return, we’re confident in passing on similar opportunities in the future.

For a team of ‘word folks’ who are quite new to data, having access to these communications metrics as answers and ‘proof’ of our work has been game-changing.

And we’re just getting started.

2019-10-11T06:00:00-07:00
What’s New in Looker? October 2019 Features & Updates
http://looker.com/blog/new-looker-features

Welcome to “What’s New in Looker?”, a blog post aimed at making you say things like, “Whoa, I had no idea I could do that with Looker!”

This post focuses on new product features of dashboards, Looks, and Explores. We’ll leave the coding to the developers.

I have two great things to share with you today.

Never miss an important data change again

Get notified of important changes in your data as soon as they happen with Conditional Alerts. Set the alert conditions yourself or subscribe to an alert someone else made. Either way, you’ll be in the know.

For example, as a customer success manager, you can elect to be notified when your customer’s month-over-month product usage drops by 25%.

If you manage an inventory, you can receive notifications when your average shipping time rises above a certain threshold so that you can investigate what’s causing the holdup.

How to do it

To set up an alert, hover your cursor over the dashboard tile you want to make the alert for. Click the bell icon in the upper right corner.

Fill out the dialog box that pops up according to your preferences.

In this example, if the “Average Days to Process” value changes to something above 3.4, I’ll get an alert in my inbox every day at 5 a.m. until it falls below 3.4 again.

Click Create Alert to finish.

If you want to subscribe to an alert someone else has created... do it! Tiles with existing alerts display a number over the bell icon. Click it to open the alert pop-up.

Click Follow on the window that appears next. You’ll get on the recipient list like a nice fall breeze goes through an open front door... effortlessly.

Which data changes are most important to you and your team? Think over your favorite dashboards and consider creating an alert using this new Looker product feature today.

Know instantly if your metric is on track or not

Apply conditional formatting to a single value visualization, and you’ll know how your metric is performing at a glance.

In the example below, the number will display in red if the customer health score is below 50. If it’s above 70 (that is, worthy of a celebration dance), the metric will display in green.

How to do it

To apply conditional formatting to a single value visualization, edit the tile.

Select the Formatting tab and click the Enable Conditional Formatting toggle.

Set your rules. In this example, if the number of open orders is more than 1000, the value will appear in orange.

And that’s it!

If you want to get creative, you can even use conditional alerts and conditional formatting in tandem. You can set up the value to go through a series of color changes, and send an alert when it hits the final color threshold.

It will be like a pressure gauge on a hot water heater with a warning alarm — except for your data!

Have any feedback or questions? Get in touch via the Community discussion on Exploring.

Until next time,
Jill Hardy
Content Strategist, Customer Experience

2019-10-09T06:00:00-07:00
Why is Using the Looker Action Hub like Ordering Takeout?
http://looker.com/blog/looker-action-hub-examples

Modern food delivery amazes me.

As soon as I feel hungry I can open my phone, click a few times, and food magically appears on my doorstep. It’s my lazy Sunday afternoon paradise.

The Looker Action Hub is like that, but for data: it enables you to take action on information from right where you are in your workflow. As soon as you see interesting data, you can use an action to automatically create a JIRA ticket, send it to a colleague via Slack with a few clicks, or send it to an S3 bucket.

See? Exactly like instant food delivery from the comfort of your couch... Okay, okay, maybe not exactly like that, but it’s still pretty great. And it’s better for your business because you can send your data pretty much anywhere your heart desires.

You can even revolutionize your business with automated alerts like Alto did.

Automated Alerts with Slack and Twilio

Ride-hailing service Alto (Alto Experience) hums along like a machine thanks to the Slack and Twilio Action Hub integrations they use to handle automated alerts.

Traffic issue? Drivers are automatically notified. Experiencing higher than anticipated trip volume in a particular area? No problem. Alto's Dispatch team is notified to position vehicles to handle the demand.

Airport pickups can be hectic. When a customer requests one, Alto automatically sends a text (via the Action Hub Twilio integration) asking for an exact meeting point. Since both waiting at an airport and driving in circles at an airport are about as fun as scrubbing the floor, I’m calling this a double win.

So how did they do it?

Alto Experience uses Looker to search for patterns in data that indicate a problem on the road. When a “problem” pattern is found, they take action automatically.

For example, if a passenger is making a coffee stop during his trip while another passenger is in the queue behind him, Alto quickly recognizes that and matches the waiting passenger to a nearby vehicle. Notifications go out so everyone is kept up to date.

Boom: alerts handled. No time spent building, testing, or iterating on a custom notification system.

Predictive Analytics with Amazon SageMaker

The Action Hub’s capabilities extend beyond messaging: you can also use it to nail your next marketing campaign with predictive analytics.

Imagine you’re working on building an email series for your SaaS product and you know exactly which customers are most likely to upsell. Focus on targeting only them, and you can trigger that first email with confidence.

How do you get to that point? You need machine learning capabilities to get those predictions. The Action Hub’s built-in integration with Amazon SageMaker makes them accessible.

Building a predictive analytics model requires three data sets:

  • a training data set (this is what “trains” your model to make predictions)
  • a validation data set (used to refine the model)
  • a test data set (used for a final evaluation of how well the model was trained)

So, back to our imaginary SaaS company and its upsell campaign. Let’s say you’re using a legacy on-premise database. You can use Looker to separate your three data sets — then send all that data to SageMaker via the Action Hub to train your machine learning model.

The model will then help you identify common characteristics of customers who upsell. Maybe the age of the customer’s account is an indicator, or the industry the customer is in, the last time their administrator logged in, or how involved they are in your campaigns... whatever the indicators of an upsell are, it’s time to find out!

Powerful stuff.

But maybe you want to use your data in a more... unique fashion. Maybe you even want to...

Purchase Carbon Offsets from a Dashboard using a Custom Action

Yep, really.

Looker’s Community Manager Izzy created a custom action that enables businesses to offset the CO2 footprint of their shipping orders, right from a dashboard.

A carbon offset compensates for the carbon dioxide pollution you produce by preventing an equivalent amount from being emitted elsewhere. Purchasing an offset often means investing in renewable energy, funding tree planting, or protecting an existing forest. While not a secret shortcut to sustainability, offsets can get you the rest of the way to carbon neutral after you’ve reduced your impact as much as possible.

The idea behind this action is to make it extremely easy for an environmentally-minded business to act on its values. Just a few clicks to help the atmosphere! The Lorax would be proud.

Right from the Looker dashboard, a user can view the CO2 impact of an individual order or their entire business, take action against it, and then instantly see their updated offset spend and net impact without ever leaving the page.

Here’s a screenshot of the action:

Izzy even implemented threshold options to prevent someone from accidentally spending too much on offsets. Pretty neat!

But maybe you want to do something entirely different with your data... like send it to a messaging application we don’t already have an action set up for. Or update contacts in your marketing tool. Heck, maybe you want to send an order to the printing and framing shop every time a deal over a certain dollar amount comes in to celebrate.

The point is that you can build a custom action and make those data dreams come true.

What Actions Will You Take?

And there you have it: three ways the Action Hub can help you do exactly what you want with your data without the rigmarole of a manual process (the equivalent of going to the store, buying ingredients, and cooking meals yourself instead of just clicking a few times to make food appear).

Whether you want to automate alerts, find out which of your customers want to hear from you, “green” your business, or something else entirely, the possibilities are wide open.

What do you want to do with your data? Bring your ideas to the Community Action Hub discussion.

Until next time,
Jill Hardy
Content Strategist, Customer Education

2019-10-03T01:00:00-07:00
The Next Wave of Data: Top Takeaways from Barcelona
http://looker.com/blog/top-takeaways-from-barcelona-event

The demand for access to actionable data has never been higher.

The challenge for data professionals lies in meeting this demand while ensuring that:

  • Access to data is managed in a secure, governed, and scalable way.
  • Adoption of data tools and technology occurs.

To delve into this, we hosted a recent meetup in Barcelona, a thriving hub for data-driven organizations. We brought together some of the brightest regional minds in data from CIS Consulting, Looker, Snowflake, Fivetran, King, Marfeel, and more.

Here are the core themes we observed:

Three Key Considerations for your Data Tech Stack

Traditional data infrastructures have evolved significantly. The increase in cloud-based solutions has given organizations unlimited and uninhibited capabilities to store, process, and analyze their ever-growing databases.

Because of this, companies can now tap their data's real potential with the help of new and reimagined data solutions.

Data Integration

There is a new approach to ingesting data from multiple sources that involves extracting, loading, and then transforming data (ELT), as opposed to traditional ETL.

This new blueprint for data integration tools (which leaves the transformation stage until last) allows engineers to create a more flexible stack. With it, they can easily apply changes to the business model at a later stage — saving time and money.

Data Storage

Data storage, like data integration, has also evolved. Modern tech stacks need a database solution that can support organizations as they scale, handling increasing volumes of data without compromising on reliability or performance.

Security and data protection must exist in every aspect of a cloud warehouse architecture. Whereas warehousing used to be complex and inflexible, newer cloud-based databases have turned the industry on its head.

By separating storage from computing, both data providers and consumers are now able to share live data concurrently in a secure and managed environment.

Business Intelligence

Today’s data analytics platforms need to fit into the workflow of the entire company and provide a single version of the truth. By doing this, organizations enable their business users to carry out the analytics they require on a daily basis.

This approach to analytics, where quality information is easily accessed, means that high-functioning and efficient teams can make critical data-based decisions.

Cost Saving and Growth Aiding

Most businesses are actively looking for ways to reduce unnecessary costs and drive growth. Understanding the new data technologies available can be the key to a data-driven strategy. This strategy enables lowered operational costs, improved profit margins, and — ultimately — a competitive advantage in the market.

King, the mobile gaming giant that brought Candy Crush to the world, implemented an incident management process that reduced the operational cost of incidents by 70%. They highlighted how Looker has been pivotal in the process by providing the ability to run root cause analysis and anomaly detection on vast amounts of data.

In general, there is a significant inefficiency in IT staff and analysts spending endless hours on laborious and time-consuming “data cleansing” tasks.

Implementing a data stack that removes the complexities of data preparation and transformation allows these teams to focus their efforts on creating real value for the enterprise and data consumers.

This reallocation of brainpower to focus on data-driven strategy is often the key to driving creativity and unlocking new market potential.

In the End, It’s About Culture

Organizations are focused on creating and embracing a data-driven culture as a part of their core data initiatives. Driving adoption is the key to unlocking true business value from a data stack.

For adoption to happen, the value of technology must be communicated and shared across the company. Utilizing a data stack with agile technologies that grows with the company, inspires new ideas, and unifies all business departments makes this value obvious.

Once a culture achieves data appreciation — and there is a desire to shift the culture to revolve around data — being “data-driven” can go beyond theoretical.

When guided by the use of the right technologies, it can become one of the operational foundations of the company.


See what else happened at the Barcelona Meetup here and check out future events with Looker near you.

2019-09-19T01:00:00-07:00
Looker Lets You Choose What Works Best For Your Data
http://looker.com/blog/looker-supports-multi-cloud-solutions

Looker prides itself on helping customers choose the data stack that best serves their specific needs. Looker’s unique architecture lets our customers take advantage of public, private, hybrid, and multi-cloud environments—along with the features and benefits each provides.

To continue expanding our multi-cloud offering, we’re excited to be announcing the following features and capabilities that will provide greater choice for our customers.

Looker now:

  • Supports more databases than ever, with 50+ SQL dialects supported (including new additions such as Actian Avalanche, BlinkDB, Mongo, Vector, and more)
  • Supports OAuth with Snowflake to improve data governance and control.
  • Has achieved SOC 2 Type 1 compliance when hosted on Google Cloud Platform.
  • Continues to support data export (via the Action Hub) to cloud platforms such as Amazon S3, Azure storage, DigitalOcean, Google Cloud Storage, and more.

Also, beginning in November 2019, Looker will allow customers the choice to have their instances hosted on Amazon Web Services (AWS) or Google Cloud Platform (GCP), with plans to offer Azure hosting in early 2020.

And, as always, you can self-host.

We continue to invest in capabilities that enhance our multi-cloud approach—as well as improving interoperability with a wide ecosystem of technologies.

Expanded choice of databases and dialects

Looker speaks to databases in the language they understand—SQL. But because every database is different and speaks a slightly different dialect of SQL, creating universally applicable SQL queries is virtually impossible.

So, Looker takes a different approach. Our platform abstracts your query from the underlying SQL dialect and allows data teams to write a query once, leaving the SQL creation to Looker.

Looker now speaks 50+ different dialects of SQL, including those of the most popular modern database and data warehouse technologies. The latest database integrations from Looker include Actian Avalanche, BlinkDB, Mongo, and Vector.

“We are proud to partner with Looker to provide our customers powerful modern data infrastructure on premises or in the cloud environment of their choosing. Together we’re helping our customers realize the true value of their data virtually anywhere and at any scale.”
— Jason Wakeam, VP Business Development and Alliances, MemSQL

Supporting multiple databases and their SQL dialects has direct business value. Looker customers can choose the database that best suits their data needs—and Looker’s support for a wide range of databases can simplify migration.

One organization using Looker with two different databases as they migrate between enterprise data warehouses (EDWs) is HR tech company Namely. By leveraging this mix of technologies, they’re continuing to bring intuitive, powerful HR tools to midsize companies.

“At Namely, data security and privacy are extremely important to us, and so is the database we choose. With Looker, we don’t need to rewrite all our queries to make them work with a new database. Looker helps us focus on putting data in the hands of users, wherever it’s located.”
— Jessica Ray, Sr. Product Manager, Reporting & Analytics, Namely

Simplified authentication and control for Snowflake users

In addition to supporting new SQL dialects, Looker is continually updating and improving how it supports databases. As a part of this announcement, we now support OAuth with Snowflake to help our customers using Snowflake authenticate and authorize data access between the two systems.

OAuth is an open-standard protocol that allows supported clients authorized access to Snowflake without sharing or storing user login credentials.

SOC 2 Type 1 compliant

Looker hosts and manages Looker deployments for the vast majority of our customers. As a Looker customer, you can now choose which underlying cloud provider hosts your Looker instance.

Deployment of Looker-hosted instances has historically been on Looker’s virtual private cloud (VPC) on Amazon Web Services (AWS). Beginning in November, customers can choose between hosting on AWS or Google Cloud Platform (GCP).

Customers can even self-host on private infrastructure if necessary. Looker plans to support hosting on additional cloud providers soon, with Azure hosting planned for early 2020.

“While we deepen the integration of Looker into Google Cloud Platform (GCP), customers will continue to benefit from Looker’s multi-cloud functionality and its ability to bring together data from SaaS applications like Salesforce, Marketo, and Zendesk, as well as traditional data sources. This empowers companies to create a cohesive layer built on any cloud database, including Amazon Redshift, Azure SQL, Snowflake, Oracle, Microsoft SQL Server, or Teradata, as well as on other Public Clouds, and in on-premise data centers.”
— Thomas Kurian, CEO, Google Cloud

Earlier this year Looker achieved Service Organization Control 2 Type I (SOC 2 Type 1) certification for our hosting environment on GCP. Looker already maintains a SOC 2 Type 2 report for Looker Cloud instances hosted on Amazon Web Services (AWS).

The SOC 2 report for GCP demonstrates Looker’s commitment to security, availability, and confidentiality in our hosted production environments.

This report includes design and operating effectiveness tests for our existing hosted environment. The report also provides information on our practices — ranging from vulnerability management to endpoint protection — that affirm your information has appropriate safeguards in place.

Data where you need it, even across clouds

Looker continues to make it easy to operationalize data and insights using Looker Actions. Actions allow Looker users to deliver data directly into workflows using connected systems such as Slack or Jira.

Actions also allow users to deliver data between clouds, with query results dropped into a range of cloud storage “buckets” for use within those clouds. These pre-integrations include out-of-the-box data delivery into Azure, Google Cloud Storage, Amazon S3, DigitalOcean Storage, and other systems.

Using Looker Actions to deliver query results between clouds allows customers to leverage features that are available in specific clouds.

For example, say you use Google BigQuery as your enterprise data warehouse but want to use Amazon SageMaker for machine learning. Looker can automatically deliver training, validation, or test data sets into an Amazon S3 bucket, on a schedule you define, for use by your data science team.

In other words, Looker Actions enable the use of cloud features across a range of cloud providers.

Looker supports your cloud strategy

Every organization’s data environment is as unique as its business. Success requires an analytics platform that supports your unique data stack.

Looker can do that.

With Looker, you’re able to build on cloud technologies regardless of the cloud provider. You can consolidate into a single cloud, or leverage the benefits of multi-cloud quickly, securely, and in a consistent, governed manner.

Whatever your strategy may include, Looker is here to help you do more with your data.

Experience platform freedom. Request a demo or proof of concept to see it in action.

2019-09-12T06:00:00-07:00
Looker Achieves SOC 2 Type 1 Certification for Google Cloud
http://looker.com/blog/looker-soc-2-type-1-certification

Looker is excited to announce new hosting options for our customers hosted in the Looker Cloud. Beginning in November, Looker customers can choose to have their instances hosted on either Amazon Web Services (AWS) or Google Cloud Platform (GCP), with plans to expand these hosting options to include Microsoft Azure in early 2020.

Customers who choose to have Looker host their environments can focus on data and insights while avoiding unnecessary infrastructure management burden. Looker-hosted customers receive benefits that include system performance monitoring, managed upgrades, backups and recovery, and more.

As we continue to support a choice of clouds, our customers’ growing multi-cloud needs, and a range of hosting options, our commitment to security and compliance best practices remains strong. In June 2019, Looker achieved Service Organization Control 2 (SOC 2) Type 1 for Looker Cloud hosted on GCP, well before hosting on GCP becomes generally available. The SOC 2 report demonstrates Looker’s commitment to the principles of security, availability, and confidentiality in our hosted production environments.

Looker already maintains a SOC 2 Type 2 report for Looker Cloud instances hosted on Amazon Web Services (AWS). This report includes tests of design and operating effectiveness for our existing hosted environment as well as information on our practices, ranging from vulnerability management to endpoint protection, providing assurance that your information has appropriate safeguards in place.

Looker has also chosen to transition to a Kubernetes architecture, which provides more hosting flexibility than traditional virtual environments. The SOC 2 Type 1 for GCP report provides tests of design for our new Kubernetes-based infrastructure hosted on GCP. Now that we have our SOC 2 Type 1 for GCP, we will test this infrastructure, along with our AWS infrastructure, during our Type 2 testing at the beginning of next year.

If you’re interested in learning more about Looker’s approach to security, compliance, and the security responsibility we share with our customers, check out our security page or reach out to our team to speak directly with a Looker expert and see the platform for data in action.

]]>
2019-09-12T05:59:00-07:00
<![CDATA[Building with Looker: What I Learned During My Internship]]> http://looker.com/blog/building-with-looker-during-my-internship http://looker.com/blog/building-with-looker-during-my-internship If there’s one thing I learned during my time at Looker, it’s that the Looker platform is really really cool. Being able to answer any business question with a few clicks of a button is a powerful ability, especially when everyone is enabled to do so.

Connor, our resident Looker Marketing Analytics Manager (and my mentor), knows our platform and how to leverage data with it exceptionally well. He's so knowledgeable that marketers often recruit him when they are building out their own explores or dashboards.

He realized that many of the marketing team’s data requests are similar. However, there wasn’t one single dashboard he could point folks to so they could find and use the metrics they were looking for.

My project was born to help solve this bottleneck and enable the marketing team to easily find and pull the metrics they needed—when they needed them.

There’s more to marketing than just SWOT

To kick off my project, I interviewed people within the Demand Generation team to learn more about their business and how each of their jobs impacted the marketing team and beyond. It was a great way to learn the ins and outs of marketing and how teams impact Looker as a whole. It was especially fun to learn how intricate the Marketing org is.

I took a Fundamentals of Marketing class at Cal Poly, but I didn’t realize just how much more I had to learn. I made a joke with Connor and Brenda, my manager, that there is way more to marketing than what I learned in school.

One of the main things I learned in that class was how to create a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis. But my time at Looker taught me there is so much more to marketing than just SWOT analyses.

The beauty of the unknown was that it gave me so much room to learn and grow. I learned the different parts of the marketing funnel, how Marketing and Sales intertwine, and the marketing metrics used to measure all of this in Looker (which was the best part).

Did someone say HTML?

After conducting all the necessary interviews, it was time to get started on my dashboard. I began by compiling a list of all common data requests, such as the number of Marketing Qualified Leads (MQLs) in a quarter and Stage 1 to Stage 2 opportunity conversions.

From there, I started to create Looker explores based on these common requests. As I was building them out, I would occasionally notice I had missed adding a filter and would have to go back and edit my work.

When I told my mentor about these instances and asked him how to check my work best, he replied “Do you see why we need your project? It’s not just you that’s missing a filter here and there.”

Hearing this was so comforting and made working on this project an even better learning experience. I began to realize that my project would have a huge impact once it was completed.

Once my explores were built and pre-filtered for common marketing metrics, I began working on the layout of the dashboard. One thing I definitely wanted to include was Text Tiles.

You can include anything on a Text Tile, from a header for your dashboard to a button that links out to a website or explore. To make the most of Looker’s Text Tiles, I needed to format them using HTML; whatever HTML you put in the tile is then rendered on the dashboard.

Since my only coding experience came from classes at school, I worked through learning HTML and getting the formatting right to display what I wanted. It took some trial and error, but an internal Looker dashboard created to guide anyone diving into the world of Text Tiles helped a ton.


How to use the dashboard


So how will marketers at Looker use this dashboard?

Say you’re a new member of the Marketing team at Looker. In your role, you need to track how many Stage 1 opportunities convert to Stage 2 opportunities, but you’re still learning how to use Looker to find your answer. Or perhaps you know how to make an explore, but you’re not completely positive that the data you’re pulling is correct.

With this dashboard, you can read through, find exactly what you’re looking for, click on a button that takes you to an already created, pre-filtered explore, and then add that explore to your own dashboard. This will enable everyone—in marketing and beyond—to use this information to learn and build dashboards with the metrics they need.

What comes next?

The best (and possibly most exciting) thing about my project is that it can continue to be iterated on. While my dashboard specifically focuses on top-of-the-funnel metrics, Looker lets you add middle- and bottom-of-the-funnel metrics or any other metrics you may need for successful data analysis.

Everyone, regardless of technical expertise, should feel empowered to build their own explores and dashboards. I’m proud that my project will enable folks to do just that. I hope that everyone will be able to make the most of their data with dashboards like the one I built.

]]>
2019-09-11T06:00:00-07:00
<![CDATA[A Collective Impact: Interns of Looker]]> http://looker.com/blog/interns-of-looker http://looker.com/blog/interns-of-looker Uncharted Territory

Larry Tran walks into orientation with the confidence of familiarity.

This summer, he has created a dashboard to “guide and give pulse to how EMEA Marketing, Sales, and Sales Development reps are doing throughout the year or quarter” — a project that has given him insight into the challenge of international business.

He spends his mornings on calls with the European offices and feels his time is valued — that his peers recognize the challenge of his task and appreciate his efforts to build a globally beneficial tool.

“[Europe] is very different from the US... different territories within Europe have different needs and cultures. Thus, I created a dashboard specific to each of their needs that helps guide decision making and gives insight into how they’re doing.”

This is his second summer interning at Looker — an opportunity that came with an abrupt change in perceived expectations.

At Cal Poly, the path of a BIS major is clearly defined. Business Information Systems majors are ushered towards internships, and eventual employment, with the Big Four — major accounting and consulting firms partnered with the university. Larry has watched his brother’s future take shape through these opportunities and is excited to do the same.

A recruiter, however, tells him he is “not as focused on BIS as other candidates” because of his interest in marketing. He finds Looker at a career fair — he knows nothing about the company but they are excited by his BIS background and curiosity for other fields.

Now on his second summer interning for Looker, he is thankful he leaned into what spurred his interests.

“Without being told no, I would not have ended up here.”

Empowered By Looker

For Saloni Agrawal, the Looker internship comes with unexpected revelations.

As an intern in the Department of Customer Love (DCL), Looker’s customer support team, she has two projects. The first is her thesis — an ongoing project hosted by DCL meant to provide lessons in storytelling through Looker to develop customer empathy. For Saloni’s thesis, she takes Airbnb data and uses Looker to discover the optimal travel options depending on location, season, and price. The second is to revamp a section of the DCL ramp for new hires. “DCL is evolving and scaling rapidly,” Saloni explains, “so we have to develop a new onboarding system by expanding on the current one to pass on information without it falling through the cracks.”

Saloni initially found Looker through a tour set up by the University of California, Santa Cruz Information Systems Management Association. There she was invited to ask questions to a panel of recent graduates working at Looker who gave her insight into the culture and work available. As fate would have it, she found Looker at a career fair soon after and applied for the internship program. “I didn’t think I had the experience necessary... but I wanted to work here.”

She remembers being engulfed in the positive energy and excitement of her mentor and manager as they burst through the door at orientation, ready to sweep her away for introductions. From the moment she meets her team, she knows this is where she wants to be.

“The other DCL interns have been amazing. When we get stuck, we ask each other first and are able to help each other find a solution... I’ve learned so much from them. I’ve found friends within them and gained a new perspective on life because of the conversations we have had. Truly happy I was able to meet them.”

Looker has been an opportunity for Saloni to realize her potential. She was unsure what she wanted to do but Looker gave her the opportunity to explore her options. She explains, “I want to be a powerhouse but sometimes I don’t know how to do that. Looker is right there, next to you, while you’re figuring it out.”

Making An Impact

Working as an intern on the security operations team, Roy is building an automated system that responds to alerts and notifies security of unexpected behavior. “We're snapshotting a Kubernetes container and using a malware detection algorithm I designed to alert to potential threats.” He explains, “We also developed a pre-approved whitelist that analyzes the changes in the snapshotted container and alerts if some event occurs in the container that is not on the whitelist.”

Not only has Roy built this system from scratch, but it also implements an idea from his manager that is currently patent-pending. This makes Roy among the first to ever work with the concept.

Having interned at three companies prior to Looker, he recognizes “what makes Looker unique is the culture.” Roy discusses his experience through his interactions with his fellow employees. Every day he enters the building, he is warmly welcomed with what he describes as “vibrant happiness.” His workday is defined by community and collaboration and a desire to help each other so that everyone can experience success.

Roy is a Ph.D. candidate at UCSC. In his undergrad, he wanted to study business, but a single class became a pivotal moment in his education: a computer science class where he built a replica of Battlestar Galactica. That is where he fell in love with his trade.

Now Roy finds himself in the position to shape both the company and data security as a whole. “I want my work to impact Looker and keep our systems secure, however, if I build a tool that could help other companies' systems, I'd be more than honored to have that wide of an impact in the security community... To contribute to a more systems secure world is something I pride myself in.”

Redefining Success

As an engineering intern, Grace Lin’s project requires her to go into the backend of the Looker platform to shorten the time it takes for Looker to connect with a client’s database. She explains, “Even if it takes a second, those seconds add up.”

“Looker is representative of the reason that I'm interested in tech. Coming into college and not really knowing what I wanted to do, I chose computer science because I saw that a lot of real change was being made through technology. I didn't know what I wanted to do but wanted to be able to make some kind of positive impact. I think it's really beautiful and special how Looker enables others to utilize tech.”

Handshake, a career network for college students and recent grads, introduces her to the Looker Internship Program. The vetting process is not what she expects. “Applying to technical positions is very impersonal... with Looker, it was the complete opposite. They carry their values from HR and recruiting to the rest of the company.”

To Grace, Looker is “perfect in that it combines computer science with data science applications.” She looks forward to learning through problem-solving as a software engineer. More than that, she appreciates her team for their desire to help her grow.

Situated between her managers’ workspace and the co-founder’s office, Grace’s desk is a center for collaboration. With the position of her workspace, she gains inspiration from observing this environment, especially the interactions of Looker Co-Founder and Chief Technical Officer, Lloyd Tabb. “I can see how much he loves this company... He’s coding every day and talking to people — always helping them be the best they can be.”

Grace’s intern experience allows her to partake in a truer definition of success — becoming a person who works toward compassionate understanding to further others and, thus, the company as a whole.

A note from the author

I spent most of the time leading up to this internship worried about whether or not this was the right fit for me.

Performance and writing are my passions — pursuits I momentarily felt I was giving up.

But I’ve already touched on my experiences, so I’ll be brief.

It took five minutes to change that.

Five minutes of talking to other interns, meeting them for the first time and realizing they were nervous too. For their own reasons, but nervous nonetheless.

There is not a single experience that you are alone in — my peers see that. We have been built up from that foundation — supported and cultivated in an environment teaching us to recognize success as something we experience together.

It does not matter if you’re a musician, a lawyer, or an intern, Looker helps you build the mindset to continue to find success in all endeavors.

]]>
2019-09-05T09:00:00-07:00
<![CDATA[Data Storytelling in Action: Using Data to Guide your Fantasy Football Draft Strategy (Part II)]]> http://looker.com/blog/data-driven-fantasy-football-draft-strategy-part-ii http://looker.com/blog/data-driven-fantasy-football-draft-strategy-part-ii Last year, I wrote a post in preparation for the 2018 Fantasy Football season about two concepts to keep in mind when approaching your draft: Holistic Value and Pick Value.

This year, I’ll expand on these concepts while introducing new ways to prepare for your draft. You can find all of this information on this interactive dashboard.

One mistake I’ve made in the past (and I’m sure I’m not alone) was overly focusing on the top of the draft at the expense of my picks in later rounds.

I’ve found that it’s just as important to round out the bottom of your roster as it is to draft your stars at the top.

You can learn from my mistakes. I’m going to lay out the exact strategies I now use to prepare for every stage of the draft.

Here are some concepts to be aware of that I’ll discuss throughout the post.

Key Concepts

Round Composition

Round composition is the strategy of considering multiple rounds at a time to understand the makeup of players to be drafted by position — rather than solely focusing on a single pick or a single round. You can then match that strategy to your team’s draft needs.

While this sounds simple, many people default to looking pick-by-pick or round-by-round instead of considering the bigger picture.

For example, in rounds three and four, there are twelve Running Backs (RBs) to only seven Wide Receivers (WRs). Even if you need a RB, it may make sense to still target the best available receiver in the third round knowing there will be plenty more RBs to grab in the fourth.

Player Trajectory

Each player’s year-over-year trajectory is something I always like to keep in mind. Simply put, is a player expected to take a leap forward — or take a step back — this year? You want to watch out for guys in for a significant regression, especially with respect to their draft position.

Point regression usually comes when a player is aging, injured, or coming off a “fluke” year that may be hard to repeat. The goal is to mitigate risk by avoiding players who fit any of the above.

Looking at the first couple of rounds, you’ll see all the top guys are expected to regress a little (which makes sense considering they are all coming off of spectacular seasons).

But if you look at the Projected Points Per ADP vs. 2018, it's only a small regression compared to their overall value. Players like Nick Chubb, Odell Beckham Jr., and Dalvin Cook are other examples of guys trending in the right direction.

Player Point Composition

The composition of how a player scores fantasy points is crucial in later rounds. For example, the point composition of a top-five RB in 2018 looked like this:

This tells me to look for RBs who also catch a lot of balls. This is a great benchmark for finding RBs in later rounds who match this composition in hopes of finding a breakout star.

Okay, now onto the round breakdowns where you can see these concepts in action.

Premier Rounds (Rounds 1-4)

These are the rounds that should determine your weekly starters. Remember, it's about total points, not the team that looks best on paper when the draft ends.

Running Backs

Last year I recommended spending your premium picks on RBs, specifically targeting “bell cow” RBs who average more than twenty attempts per game.

Well, one year changes a lot.

Last year there was only one RB (Ezekiel Elliott) who averaged more than twenty rushes per game, compared to eight backs the year before. This year, zero RBs are expected to have that volume.

With that information, it makes sense to explore RBs with more involvement in the passing game. To do so, I brought in a new metric this year to evaluate RBs: receiving targets per game.

Now our list is starting to resemble mock draft RB rankings:

I like that Ezekiel Elliott is falling a little, allowing someone with a mid-round pick to steal him (assuming he ends his contract dispute). Other guys to watch out for are Le’Veon Bell (coming off a year of not playing), David Johnson, and Nick Chubb. Joe Mixon was surprisingly high on this list as well (5th).

RBs are plentiful in rounds three and four, but WRs are not, so I recommend taking the best player available (or a WR) in the third and waiting for RBs like Chris Carson, Mark Ingram, or Josh Jacobs in the fourth.

If somehow Damien Williams falls to you, I like him as well. All are expected to improve from last year’s performance (Jacobs is a rookie).

Wide Receivers

I couldn’t believe it when I looked at the top forty projected players and saw only fourteen receivers. Contrary to my belief in prioritizing RBs early, it looks like WRs should be the focus this year (aside from the top four or five picks).

Let’s take a quick look at our star WR point composition. The 22% on touchdowns stands out to me. Focus on finding WRs who don’t rely on big yardage every week and are projected to score touchdowns instead.

Last year, the guys with high touchdown counts were DeAndre Hopkins, Davante Adams, Tyreek Hill, and Antonio Brown.

This year, Davante Adams and DeAndre Hopkins are both expected to reach double digits again and should be taken very early, with JuJu Smith-Schuster and Odell Beckham Jr. joining Tyreek Hill and Antonio Brown in the next tier.

A sleeper who seems to be falling in drafts is Amari Cooper, the only WR expected to increase his average points per game this year.

Tight Ends

Travis Kelce is a safe pick, and George Kittle is acceptable as well, but I think there are TEs you can find in the later rounds that will be just fine.

Quarterbacks

You don’t need to focus on QBs in the early rounds. Take Patrick Mahomes if you want, but generally, you can wait.

Later Rounds (Rounds 5-10)

While you’ll still be rounding out your roster with starters in these rounds, it's important to think about your premier bench players here as well. These are players you can mix into your lineup throughout the year, or who may become starters if other players get hurt. Quality depth is important.

Running Backs

The mix of RBs and WRs is more even in these rounds, and people will likely draft the rest of the starting QBs:

As you can see, guys like Sony Michel and Phillip Lindsay are expected to be near double digits in projected touchdown volume. Phillip Lindsay also had ten touchdowns last year.

Overall, the RBs in these rounds are projected to have similar average points per game as the WRs (strengthening the argument for prioritizing them early).

An interesting player from a point composition standpoint is Kenyan Drake. Out of all the RBs in the later rounds, his point composition most resembles the star players since he is involved in the passing game.

Watch out for him as a sneaky sleeper.

Wide Receivers

WRs in the later rounds can be tricky as many of them won’t be the number one option on their teams. If you can find players who are the number one option, they could have good value this late.

If not, look for players with a high trajectory from the previous year, and receivers expected to get a lot of yards (or who play for teams with good offenses).

It looks like many of the WRs in the later rounds are expected to improve. Guys like Will Fuller, Curtis Samuel, and Christian Kirk stand out. For players who are likely to have the most targets on their team, I like Julian Edelman, Kenny Golladay, and D.J. Moore.

If you are targeting players with high touchdown projections, Calvin Ridley, Cooper Kupp and Mike Williams are good picks.

Another note on D.J. Moore: he is expected to improve his points per game by two points. This makes sense now that he is in his second year and likely the number one receiver for Carolina. Could be a great late-round pick.

Tight Ends

I like to depend a lot on previous-year stats when I look at Tight Ends. If you take a look at last year’s stats for the Tight Ends projected to go in rounds five through ten, Zach Ertz is a clear front-runner. If he is hanging around, it makes sense to grab him. Eric Ebron also stands out with his fourteen touchdowns from a year ago.

Generally speaking, from Zach Ertz down to David Njoku, it's only a 2.2 point difference in projected points per game over the five-round span. Don’t stress too much about TEs as all of the ones listed here will be serviceable. Focus on rounding out your WR and RB starters before grabbing one of them.

Quarterbacks

Similar to TEs, I think there are plenty of QBs to go around. There are usually more starting quality QBs than there are teams in your league.

Again, looking at projected points per game, there is only a one-point difference between Deshaun Watson and Cam Newton.

With this in mind, let’s try and find guys projected to make a leap forward this year:

Baker Mayfield and Carson Wentz stand out. I think Baker Mayfield is in for a big season. Kyler Murray is a rookie, hence his expected improvement. And lastly, similar to finding RBs who catch the ball this year, people seem to be gravitating towards QBs who can run.

I don’t necessarily hate this strategy, but let's go back to the composition of the top five QBs last year:

Only 5% of points came from rushing. I’d be careful about going all-in on that strategy. Guys like Baker Mayfield, Matt Ryan, and Carson Wentz stick out as potential top-five QBs this year, in my opinion.

Happy drafting!

In conclusion, make sure you are looking at your draft holistically (two, three, or even four rounds at a time) to understand when it makes sense to draft the best player available versus your team’s needs, and where you may find value at different positions.

Use this dashboard to scout out which players are expected to be available in different stages of the draft. Additionally, make sure you’re looking at both last year’s stats and this year’s projections when picking players. You want players expected to improve, not regress.

Finally, have fun...after all, it’s only fantasy football.

]]>
2019-08-29T06:00:00-07:00
<![CDATA[Your Data, Integrated: Embed Looker in Your Tools & Bring Analytics into Your Team’s Everyday Workflows]]> http://looker.com/blog/private-embedding-examples-workflow http://looker.com/blog/private-embedding-examples-workflow One of Looker’s coolest capabilities is bringing data to people in the tools where they already spend their time through embedded analytics. People can get their answers in the same window, without having to go somewhere else or ask anyone for help.

And data analysts are freed up from a ton of everyday data requests — hello, time for awesome projects!

You can empower your organization this way too. Embedding Looker dashboards, Explores, and Looks in your internal tools and applications for analytics is so easy we explained how to do it in five steps.

Embedded reports come with the Looker security you expect since they’re presented through an iframe (not passed into the tool).

Sounds great in theory, but what are people actually doing with embedded reports? How might you integrate them into your teams’ workflows?

Here’s some inspiration to get your wires firing. Below are two really effective examples of embedding Looker. Get ready for some sparks!

Embedded Analytics in Zendesk

Our beloved Department of Customer Love (if you’ve ever talked with Looker chat support, you’ve met them) spend their time in Zendesk. Their chats move quickly, and they need information in real-time to be as helpful to Looker customers as possible.

To that end, our experts embedded data from Looker in the Zendesk side panel to give our support team some context about the folks they’re chatting with.

They can quickly get information about the version of Looker that someone is running, which types of users are active in their instance, and other details that provide helpful context.

Now you know the secret behind our DCL team’s success. Not only are they a highly technical team of fun-loving people, but their awesomeness is also fortified by data.

Embedded Analytics in Salesforce

We also embedded Looker in our Salesforce instance. Check it out:

The embedded bar chart at the bottom gives our customer success team a customer’s license information right on an account page. This way, the team can quickly see if an account is over-deployed and reach out if necessary.

Where Will You Embed?

These are just two of the many possibilities of privately embedding Looker. Compare different platforms for embedded analytics by downloading our whitepaper.

Which tools in your company would benefit from embedded data?

Get the process rolling today with our 5-step "cheat sheet."

Cheers,
Jill Hardy
Content Strategist, Customer Education

]]>
2019-08-22T06:00:00-07:00
<![CDATA[5 Tips to Make Your Next Dashboard Your Best Yet (Designing Dashboards for UX/UI)]]> http://looker.com/blog/dashboard-ux-ui http://looker.com/blog/dashboard-ux-ui Today I’m going to describe five principles that will help you create dashboards that serve the people that count, rather than just serving up data.

The principles are:

  • finding your dashboard’s “big idea”
  • getting buy-in with a wireframe
  • ensuring clarity
  • keeping it simple
  • creating a good flow of information

I like to think of the first two as the research phase because they take place before I start developing my dashboard. And I think of the last three as the creation phase, since I’m thinking about them as I build.

A clear dashboard that focuses on a central theme speaks for itself. You’ll spend less time explaining the dashboard, and data-driven decisions can be made more easily because the right information is readily accessible.

Sounds like a solid way to work, doesn’t it?

Well then, let’s get started.

The Research Phase

What’s the Big Idea?

Knowing what you want to convey is the foundation of building an amazing dashboard. What’s the big idea behind your dashboard? Its raison d’être?

The best way to get these answers is by having a conversation with your audience. It’s crucial to understand who the audience is, what they hope to accomplish with this data, and the actions they will take based on the information they see.

For instance, an executive making strategic business decisions needs different information than an operations manager who is keeping things running smoothly on a day-to-day basis.

Both have a goal. Asking your audience what they’re hoping to get out of their dashboard is the first step in making it happen. Your conversation might go something like this:

You: What’s your role here at Housing Inc.?

Audience: I’m a housing development manager. I oversee the development process from conception to ribbon-cutting.

You: Great, thanks! I understand you want some information about the housing markets in California. What specifically do you want to get out of this dashboard?

Audience: Oh, thanks for asking. I want something that will help me determine where to pursue our next development.

You: Good to know. What kind of information helps you determine that?

Audience: I need to see trends in the marketplace… what are the local rents like for different unit sizes? How much is property selling for? What’s the median income for the area? Are there affordable housing requirements in the local area? How long is the typical permitting process?

It would be great if we could set all of this data up and then I could change which location I’m looking at to compare different markets.

You: If you had this information in front of you, what would you do with it? What action would you take?

Audience: If I saw something that looked promising, I would pick up the phone and start making calls to people in the area to get our process rolling.

You: If you had access to all of that information from different areas, would it be enough to pick up the phone? Is there anything else you might need?

Audience: You know, another thing that can have a surprisingly significant influence on the decision to build is parking. If there is ample parking in the area, meaning we don’t have to build it ourselves, we’ll save a ton of money. I’d like to see what parking is like in each area as well.

You: Good to know — I’ll make sure we get you the parking information too. Thanks for the chat today!

Easy, right?

Even if you aren’t talking to a housing development manager, this method of drilling into the details and making sure that the dashboard will be actionable applies universally.

To help, we put together a guide about how to talk to your audience, complete with suggested questions and a space to take notes. You can download it here.

Get Buy-in with a Wireframe

Not only will these conversations ensure your dashboard is useful, they’re also a way to get buy-in from your audience before you begin. That means you lessen the chances of your audience changing their requirements after you spend time and energy building your dashboard.

To be extra sure you’re delivering what your audience wants, create a wireframe. A wireframe of a dashboard represents what it will look like when it’s finished. Include which types of visualizations will represent the data.

Simply drawing on a piece of paper can do the trick. The point is to give your audience a preview of what the dashboard will look like and work with them to refine it — before you create it.

Once you have agreement on the purpose of the dashboard and its content, it’s time to start building. As you do so, keep in mind the principles of clarity, simplicity, and flow.

The Creation Phase

Make it Clear

Clarity ensures viewers understand what the content of a dashboard means.

Use descriptive titles, labels, and notes to make it clear what people are looking at. They should know what each number and visualization is saying without having to ask. The dashboard pictured below exemplifies the use of these features.

Use your audience’s lingo as you title visualizations and add descriptions.

For instance, business users aren’t as familiar with your data as you are, and they won’t know how to translate database column names into business definitions.

As a data analyst, you can bridge that gap by using language that business users are acquainted with.

Make it Simple

Everything on the dashboard should have a purpose. Think about the “big idea,” the action your audience wants to take after seeing this information. Does every tile support and inform that action? If you find one that doesn’t, show no mercy — remove it.

Ideally, you’ll provide viewers the option to drill into more detail if they’re curious.

The sales manager’s dashboard shown below exemplifies simplicity. Every tile answers a piece of the question, “How is my team performing?”

At the top are real numbers against quota. Underneath that are person-by-person numbers, where I can see how everyone is doing booking meetings.

With this visualization, I can pinpoint who might have strategies to share with everyone, as well as who might need them.

Now, notice what isn’t here:

  • Extraneous text: There is no Y-axis label on the two-column charts because it’s obvious what the numbers mean.
  • Too many decimal places: The value_format parameter is used to limit the number of decimal places shown.
  • A rainbow: The color scheme here is simple. For instance, the Total Won ACV chart stands out immediately because it’s displayed in purple, while everything else is blue or green.

If you’re a LookML developer, you have some extra tools to simplify your dashboards. You can use drills and linking to provide details that fall outside of the scope of the dashboard.

Note: If you’re creating embedded dashboards with Looker, be extra careful to make sure that your audience can access any links or drills you provide. Embed users typically have very specific permission sets — they may not be able to drill, for instance, so be sure to create a smooth experience by providing content that is within their permissions.

Let it Flow

Flow means the dashboard contains a steady, logical, well-organized stream of information. It’s all about where you put which content.

Positioning Content

Think of your dashboard like a news website — you want the headlines at the top. People can then scroll for more details.

Take advantage of the way people read information. For English readers, that pattern is top left, then right, then down and to the left (if your audience reads a different language, follow their common reading pattern).

In the example below, the Campaign Performance section is the headline. If someone were curious for more detail, they could look to the Profit Analysis section that starts below it.

Use margins and titles to frame the visualizations and break up the story into sections (keeping each section to about a page whenever possible).

Lastly, notice the alignment of the tiles. Standardizing the sizes of the tiles and neatly lining them up keeps your dashboard easy to read.

Colors in Embedded Dashboards

If you’re embedding, you can customize the colors of your dashboards to match the app or webpage where it appears. This maintains flow within your product, as in the example below.

You can even create a customized theme to apply across all of your embedded content.

If you customize the theme of your embedded dashboards:

  • Create a high contrast between the text and background colors so that the text is easily readable.
  • Don’t get too carried away with colors. Playing is fun, but remember that the data is the star of the show. The point of having a dashboard is for your customers to take action on the information that they see.

Conclusion

Well, there you have it — all my best dashboarding tips in one blog post. Which one surprised you the most?

Can you think of a dashboard you already created that could be made even better by implementing some of these ideas?

Know of any helpful tips I missed?

Tell us about it in the Community.

Until next time,
Jill Hardy
Content Strategist, Customer Education

]]>
2019-08-19T09:00:00-07:00
<![CDATA[onPeak + Looker: Increasing Hotel Bookings with Automated Workflows ]]> http://looker.com/blog/onpeak-workflow-automation http://looker.com/blog/onpeak-workflow-automation At onPeak, we manage hotel negotiation and reservation processes for clients that organize meetings, trade shows, and large conferences. We are focused on providing excellent hotel booking experiences for our attendees.

To do this, we needed a solution that gave everyone in the company access to accurate data. We implemented Looker as our company-wide data solution in 2017.

As we implemented Looker, we realized it was so much more than a BI tool. Once we got the foundation in place, we could automate entire workflows and fully capitalize on our data. Here are some examples of how we did just that.

The pre-Looker manual workflow

One of our top priorities is ensuring that our room blocks are full and that both our clients and the hotels are happy. Naturally, we wanted to reach people who had registered for an event but hadn’t booked their room yet. Identifying these people was difficult because our registration and booking data lived in separate databases.

To create our email lists, we needed to tie these two sources together. Unfortunately, this required one of my analysts to manually pull data from both systems, stitch the CSVs together, dedupe the bad data, and rationalize this in Excel.

We would then email the Excel file to the marketing department, who would manually upload it to their system and add it to a campaign.

Automating the flow with Looker

Step 1: Make the list

Because Looker transforms data at query time, we were able to dump our somewhat raw data into Amazon Aurora. We then joined our two data sets together to see which email addresses existed in the registration table but not in the hotel bookings table.
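
For anyone curious what that logic looks like outside of Looker, the “registered but not yet booked” list is a classic anti-join. Here is a small illustrative sketch in pandas; the table and column names are hypothetical stand-ins for our actual schema.

```python
# Sketch: find registrants with no matching hotel booking (an anti-join).
import pandas as pd

registrations = pd.DataFrame({
    "email": ["ana@example.com", "bo@example.com", "cy@example.com"],
    "event_id": [101, 101, 102],
})
bookings = pd.DataFrame({
    "email": ["bo@example.com"],
    "event_id": [101],
})

# Left-join registrations to bookings, then keep only the rows with no booking match
merged = registrations.merge(bookings, on=["email", "event_id"], how="left", indicator=True)
not_booked = merged.loc[merged["_merge"] == "left_only", ["email", "event_id"]]

print(not_booked)  # ana@example.com and cy@example.com still need a room
```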

Step 2: Send the data to marketing

We knew the data that came out of Looker, rather than our manual process, was going to be up-to-date and accurate. That gave us (and our marketing team) the confidence to automate the next few steps. We scheduled the data to be sent to an S3 bucket every night. From there, we set up a trigger in Salesforce Marketing Cloud to fetch new results every morning.

Step 3: Send emails

The marketing team was then able to automate their process and set up campaigns with parameterized emails that ran off the automated data. Once the list came in, we sent registrants a tokenized email with promotional information about the hotels available for their event.

Win-Win

I am thrilled with this new automated workflow because it gives back hours of analyst time to our teams every week. But the real winners were our customers (and by extension, our business). By automating this workflow, we were able to increase the percentage of email recipients that booked a hotel room by 50%, resulting in increased revenue for us — and our customers.


Stay up to date on trending topics in big data, data analytics stories, product news, and more by subscribing to the Looker blog.

]]>
2019-08-19T06:00:00-07:00
<![CDATA[3 API Tools to Delight Your Embed Customers]]> http://looker.com/blog/api-tools-inspiration http://looker.com/blog/api-tools-inspiration In this post, I’m going to show you three API tools to make an interface that will wow your customers—while also being fun to use.

I probably don’t need to tell you that great user experience can dramatically grow your business and solidify your position as a trustworthy brand.

In fact, according to this Forrester study1, implementing a better user experience has the potential to raise conversion rates by 400%.

That’s an inspirational metric.

And since inspiration is what this post is all about, I gathered some truly awesome examples of what you can do with the Looker API. Whether they shine a light on the exact “oomph” you want to add to your Looker experience or simply illuminate possibilities, I hope you enjoy them.

Custom Filter Bar

The dashboard below uses icons to put a picturesque spin on filtering.

How’d we do it? The filter icons aren’t a part of the dashboard; they’re part of the website where it’s embedded. The iframe that houses the dashboard “listens” to the filter bar and automatically updates when someone clicks a filter icon.

You can get the code here to try it out yourself.

Report Selector for Embed Users

This example allows embedded users to switch between different reports easily.


The “Select Report” section is set up with an API call to Looker that grabs a list of reports from a designated folder. Adding or removing anything from that folder automatically updates what the user sees. No extra development work is required to push new content out to customers.
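
If you want a feel for the call behind that section before diving into the repo below, here is a minimal sketch using the Looker Python SDK. The folder ID is a placeholder, the exact method names should be verified against your SDK version, and a production embed app would make this call server-side rather than exposing API credentials.

```python
# Sketch: list the reports in a designated folder to populate a "Select Report" menu.
import looker_sdk

sdk = looker_sdk.init40()
REPORT_FOLDER_ID = "123"  # hypothetical folder curated for embed customers

looks = sdk.folder_looks(folder_id=REPORT_FOLDER_ID, fields="id,title")
dashboards = sdk.folder_dashboards(folder_id=REPORT_FOLDER_ID, fields="id,title")

# Anything added to (or removed from) the folder shows up here automatically
menu = [
    {"id": item.id, "title": item.title, "type": kind}
    for kind, items in (("look", looks), ("dashboard", dashboards))
    for item in items
]

for entry in menu:
    print(f'{entry["type"]}: {entry["title"]} ({entry["id"]})')
```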

Put the report in the right place, et voila!

Gotta have it for yourself? The resources in our GitHub repo will get you started.

Create a Living Data Dictionary

Why build a data dictionary? For context.

Context makes the difference between something you quickly throw in the trash (a simple stick figure portrait) and something you treasure (a simple stick figure portrait...that your kid drew).

In the world of data analysis, it’s the difference between your customer glossing over a report they don’t understand and finding an insight that helps their business grow so much they start recommending your product.

But maybe your dictionary “customer” is internal.

In that case, context is the difference between your colleague coming to you for help picking fields for the report they’re building and knowing they can check a reference whenever needed—leaving you time to work on some of those stunning visuals we covered in the previous section.

The dictionary’s definitions are dynamically pulled from your LookML model and include each field’s data type, description, and the associated SQL parameter.

You can even style it as you please with CSS and HTML. That’s right, your dictionary will always be fresh and looking great—and all you have to do is set it up once.
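
For the curious, the heart of the dictionary is a single explore-metadata call. The sketch below shows the general shape using the Python SDK; the model and explore names are placeholders, and the attribute names should be checked against your SDK version (the linked code below is the real starting point).

```python
# Sketch: pull field metadata from a LookML model to feed a data dictionary.
import looker_sdk

sdk = looker_sdk.init40()

explore = sdk.lookml_model_explore(
    lookml_model_name="ecommerce",  # hypothetical model name
    explore_name="order_items",     # hypothetical explore name
    fields="fields",
)

# Combine dimensions and measures into simple dictionary rows
rows = []
for field in list(explore.fields.dimensions or []) + list(explore.fields.measures or []):
    rows.append({
        "field": field.name,
        "type": field.type,
        "description": field.description or "",
        "sql": field.sql or "",
    })

for row in rows:
    print(f'{row["field"]} ({row["type"]}): {row["description"]}')
```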


Interested? Get the code here.

Bonus Eye Candy: Gorgeous API-powered Visualizations

What’s more enticing in a dashboard than sleek, highly interactive visualizations? Tacos.

But also—finding those visualizations in a page boasting seamless tab navigation with a one-click dark mode! Yep, all of this beauty is possible with the API.

Which is almost as good as street tacos. Almost.

There’s more to the “how” of this gorgeous website than I can divulge in a blog post, but you can register for an interactive demo of the website on this page to check it out for yourself.

Conclusion

Which tool are you going to try first? Let us know how it goes in the Community API discussion.

Until next time,

Jill Hardy
Content Strategist, Customer Education


1 The Six Steps For Justifying Better UX Business Case: The Digital Customer Experience Improvement Playbook, Forrester Research, Inc., December 28, 2016

]]>
2019-08-15T09:00:00-07:00
<![CDATA[Click Attribution: Types of Models & Attribution Strategy]]> http://looker.com/blog/click-attribution-and-its-models http://looker.com/blog/click-attribution-and-its-models What is click attribution?

Click attribution is a way to determine which sources or campaigns are driving the most results for online companies. Many people like click attribution because each click can be traced back to its site, email, or source, and click-through links can be programmed to include several attributes. Click attribution also lets people see the relative performance of different messages, executions, or marketing techniques. Plus, a click is a strong signal of intent or interest: whatever content was clicked was compelling enough to incite action from that user.

What types of click attribution models are there?

The most common click attribution models are first-click attribution, last-click attribution, and linear attribution. There can be many variations of attribution algorithms that assign different values based on the type of transaction and channels involved. These three attribution models are common and not proprietary or algorithmic, so they are a great introduction to attribution.

What is first-click attribution?

First-click attribution is a model that assigns 100% of the credit for a sale to the first channel that a user clicked through. Some customers convert on their very first interaction with a company, but many will have at least two interactions during their journey to purchase. The first-click attribution model rewards the marketing channels or activities that introduce customers to the brand.

What is last-click attribution?

Last-click attribution is a model that assigns 100% of the credit for a sale to the last known channel that a user clicked through. It is, in effect, an extreme time-decay model: rather than spreading fractional credit across the channels a user touched, it gives all of the credit to the final one. Last-click attribution tends to be the most common model across companies, regardless of their web analytics platform.

What is linear attribution?

Linear attribution breaks the credit for a sale or action into equal parts depending on how many touchpoints were measured in the course of the customer’s purchase journey. If the user had four marketing channel interactions that ultimately resulted in a sale, each of those interactions would be assigned 25% of the credit for the sale.
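
To make the three models concrete, here is a small illustrative sketch of how each one splits credit across an ordered list of clicked channels. The journey itself is made up, and real attribution systems deal with far messier data than this.

```python
# Sketch: credit one order under first-click, last-click, and linear attribution.
from collections import defaultdict

def first_click(journey):
    return {journey[0]: 1.0}           # all credit to the introducing channel

def last_click(journey):
    return {journey[-1]: 1.0}          # all credit to the converting channel

def linear(journey):
    credit = defaultdict(float)
    for channel in journey:            # equal credit per touchpoint
        credit[channel] += 1.0 / len(journey)
    return dict(credit)

journey = ["podcast", "paid_social", "email", "paid_social"]  # hypothetical touchpoints

print(first_click(journey))  # {'podcast': 1.0}
print(last_click(journey))   # {'paid_social': 1.0}
print(linear(journey))       # {'podcast': 0.25, 'paid_social': 0.5, 'email': 0.25}
```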

How to choose click attribution models

Most companies will choose one attribution model to use in standard reporting, and often this is last-click attribution. Last-click attribution will favor channels or marketing activities that are lower in the funnel, meaning that the customer is ready to make a purchase rather than being in their discovery or shopping phase.

When evaluating the results of last-click attribution, companies should consider their entire marketing mix and targeting strategies. The truest measure for last-click attribution is an email or text channel. Almost immediately upon receiving these messages, customers or clients either do or don’t take action. Other channels, such as search, paid social, podcasts, etc. are likely driving one another. How your attribution rules are configured can make a difference in the end result of which channels or activities get ‘credit’ for the conversion.

Click attribution example

Company A runs a marketing campaign that includes paid social ads, podcast ads, and online display banners. The customer hears a podcast ad and is curious, so they look up the company in a search engine and visit the website to learn more, but they do not make a purchase. After visiting the website, the customer begins receiving online display banners and paid social ads advertising the company and product. They later hear another podcast ad the following week and note that there is a promo code offered for a discount. Later that day, the customer clicks a paid social ad, shops on the site, and at checkout they enter the promo code from the podcast before submitting their order.

Depending on how attribution rules are configured, this order could be attributed in two ways. It would either be classified as Paid Social, since that was the last channel where a click occurred, or as Podcast, since that channel had the associated promo code. Ultimately, the order can only be attributed to one channel. Which channel do you think should be deemed responsible for driving this purchase?

This example may seem complex, but in reality, it is a simple one. It does not include further complicating factors like marketplace sales or brick-and-mortar stores.

It’s for this reason that understanding attribution is both art and science. There are many algorithms available on the market and countless companies trying to crack the code on the most accurate tracking, but none of them can solve this for every piece of information or every touchpoint a consumer has. This is why comparing first-click and last-click attribution models is a good place to start. Google Analytics attribution models are great for this too, because the default suite includes both first-click and last-click. With this, you can easily compare sales measured both ways, side by side, across multiple channels.

Additional ways to improve attribution

As the example above shows, promotional codes are another method for improving attribution. They’re often used as a measurement and attribution tactic for social influencers, podcasts, radio, TV, and direct mail. A great way to add an additional attribution layer is to ask customers what caused them to purchase or how they learned about the company. By introducing this one question, you can gain a better understanding of which interaction the customer found most memorable.

Combining all of these data sources to draw insights using a marketing analytics platform will give you a good idea of how your marketing activities are performing. Ultimately, you will have a range of performance depending on which data sources you have. Understanding which activities are upper funnel (introducing your brand to new potential customers) and which are lower funnel (capturing the sale from someone ready to purchase) will further help you determine what the corresponding metrics should be.

At the end of the day, there is no silver bullet to having the perfect attribution model. By collecting as much data as possible and considering the role your media mix plays in a customers' path to purchase, you can optimize your marketing spend to customer conversion based on what your optimal channel mix looks like.

Check out these tips to learn more tips on creating an effective attribution model.

Learn more with Daasity + Looker

Daasity has approached attribution analysis in multiple ways in our direct-to-consumer (D2C) Analytics Suite, which integrates seamlessly with Looker. The data model can use additional data beyond Google Analytics to prioritize attributes such as specific promo code usage or post-checkout survey results, or to map orders to marketing channels. Using that data mapping with Looker to visualize results, users can slice and dice data by initial order marketing channel to better determine financial metric targets.

Additionally, the D2C Analytics Suite allows users to easily view results by first click, last-click, and ad platform (view + click) in one simple graph to help gauge results.

Daasity and Looker continue to find ways to make it easier for eCommerce and D2C brands to access and see the data they need to inform strategies and tactics for growth.

For more information, visit www.daasity.com and subscribe to the Looker blog to stay up to date on future how-to’s, best practices, and data stories.

]]>
2019-08-15T06:00:00-07:00
<![CDATA[How Milk Bar is Driving Data Adoption with Looker]]> http://looker.com/blog/driving-data-adoption-at-milk-bar http://looker.com/blog/driving-data-adoption-at-milk-bar

Data-driven desserts

“Wait, why does a bakery need a data engineer?” I get that a lot. I’m a data team of one at Milk Bar, the popular dessert brand by chef Christina Tosi, of Chef’s Table and MasterChef fame. In addition to sampling literally every cake, cookie, or truffle that our R&D team sends over to our office, I’m responsible for wrangling information across our omni-channel business. Like any modern retail company, we rely on strong business intelligence and a data-driven culture to open new stores, launch new products, streamline operations, and unify our customer experience online and in store.

Looker allows my colleagues to engage with the ever-growing pool of data generated by our business. Every week at Milk Bar, our leaders discuss a scheduled performance report on the health of our company, store managers receive updates on the previous day of sales, our demand planner pulls sales volumes for forecasting, and our marketers run various analyses to guide their spend. I’m proud to say that about 75% of our 30 users are actively using Looker in a given week.

But it wasn’t always like this!

Preheating the oven

In the past, Milk Bar operated like most restaurants or food companies, which lag far behind when it comes to analytics. Our finance team was the conduit for information, and every request required pulling reports from multiple systems and stitching them together in Excel. We were in the 7th level of shared sheet hell. It was a time-consuming process and discouraged people from asking questions or hunting down data to inform their decisions.

As a one-person team, I knew that I would need to build a platform where my colleagues could answer data questions on their own. I didn’t want to tuck data away in databases that required SQL queries and become the new bottleneck for information. At the same time, I knew that even the best self-serve platform wouldn’t be enough to cultivate a data-driven mindset across the company. I was certain I would need to develop a culture of data seeking and exploring at Milk Bar.

Getting a taste for data

When it comes to rolling out new tools, engineers and developers tend to have a terrible habit. We’ll spend days or weeks building a tool or a feature and only spend a few hours on documentation, training, and communication. If we want to drive data adoption and encourage a data-driven mindset for our companies, that has to change. We have to become like Steve Jobs, convincing people why they should buy an iPad when they already own a phone and a computer. That requires real empathy for our users and some persuasive skills. If you build it, they might come, but they probably won’t unless you’ve stepped into their shoes and answered, “Why should I use this?” and “Okay, so how do I use this?”

Practical steps for driving data adoption

Here are some practical steps that I’ve taken at Milk Bar to help our company drive data adoption. None of these ideas are a silver bullet on their own, but cumulatively, they teach, nudge, and encourage our users to develop a more data-driven mindset.

Train early and often

Milk Bar is a small company, so I can afford to train every new Looker user in group sessions. I hold beginner and advanced training sessions to make sure each person with a license knows how to use the tool, including Browse and Explore. I aim to train new employees during their first week of work (even if it means sinking an hour of my day in a one-on-one session). Why? I want Looker’s data platform to be ingrained in their habits and workflows from day one.

I can’t stress this enough — if you teach someone to use a tool and they don’t get it or they later find it frustrating, they’ll look for another way to answer their questions. Before you know it, their alternate path to a solution will become a habit. Good luck getting them back to your tool!

During training, I emphasize that anyone can reach out to me for a follow-up training session, ask me to check their work, or sit down together and build a query. I want the barrier to entry to be so low that it would be strange to seek out the same information any other way.

Develop data explorers, not data consumers

We are an Explore-first company, and we encourage our users to self-serve their own requests. Dashboards are great for busy people, but dashboards primarily describe, not diagnose. Additionally, dashboard-heavy instances usually require an army of analysts making tweaks and changes for users who don’t feel empowered to take responsibility for their own questions. I want my colleagues to be active users of Looker, not passive users. Throughout our use of Looker, I’ve noticed that individuals spend a lot more time in Looker when they feel empowered to answer their own questions.

Find and support your data ambassadors

Each team has one or two people who have organically developed into Looker power users. These people are ambassadors for Looker across the company, and the word-of-mouth credibility they provide is powerful. Identifying these folks is critical to continued data adoption. Here at Milk Bar, I support them by meeting with them quarterly to gather feature requests and concerns. In return, I can point new Looker users toward the power users on their teams, so they know they have someone else to go to with questions.

Communicate updates thoroughly

I add each new user to a contact group that I email monthly with release notes: new explores, dashboards and Looks, examples of cool things other Looker users are doing, and so on. These are all updates I should document anyway, so my thought is: why not email them out as an extra touchpoint for people who have forgotten about or lost interest in Looker? Who knows — maybe the new explore I just released will suddenly make Looker relevant to one of my colleagues’ jobs when it previously was not.

Consider a weekly performance report

Our weekly performance report goes out to most people in the corporate office every Monday afternoon. This helps reinforce Looker as the single source of truth for performance metrics, and is a natural invitation for people to go deeper in Looker when they have questions about the weekly report.
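
If you want to script this rather than set it up in the Looker UI, here’s a rough sketch of what that could look like with the Looker Python SDK. This is not our actual configuration: the dashboard ID, recipient address, timezone, and output format below are all placeholders.

    # schedule_weekly_report.py -- a sketch of creating a Monday-afternoon schedule
    # via the Looker API. Dashboard ID, recipients, and timezone are placeholders.
    import looker_sdk
    from looker_sdk import models40 as models

    sdk = looker_sdk.init40()  # reads credentials from looker.ini or environment variables

    plan = sdk.create_scheduled_plan(
        body=models.WriteScheduledPlan(
            name="Weekly performance report",
            dashboard_id="42",            # placeholder: ID of the performance dashboard
            crontab="0 15 * * 1",         # every Monday at 3:00 pm
            timezone="America/New_York",  # placeholder timezone
            scheduled_plan_destination=[
                models.ScheduledPlanDestination(
                    type="email",
                    format="wysiwyg_pdf",
                    address="corporate-office@example.com",  # placeholder distribution list
                )
            ],
        )
    )
    print(f"Created scheduled plan {plan.id}")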

Collect feedback (and automate it!)

People won’t give much unprompted feedback unless their ability to do their job is significantly hampered. Your users simply don’t have time to explain their minor frustrations or confusion to you. Instead, they’ll probably just stop using your platform. To avoid this, you have to ask them. One way I did this was by setting up an Airflow job that uses i__looker and the Looker API to email inactive and active users different email templates asking for feedback on their experience. This is just one example of how to gather user input, and yours doesn’t have to be that technical. If you’re looking for a simple way to begin collecting feedback, start by setting a recurring calendar reminder to contact new users and ask for feedback on their experience.
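
For anyone curious what that kind of automation might look like, here’s a minimal sketch using the Looker Python SDK. It isn’t the production job: the i__looker explore and field names, the 30-day inactivity cutoff, and the SMTP and sender details are illustrative assumptions; it only considers users who have run at least one query in the lookback window (a fuller version would also pull the full user list); and it could run from Airflow, cron, or any other scheduler.

    # feedback_outreach.py -- a minimal sketch, not the production Milk Bar job.
    # Assumes looker_sdk is configured via looker.ini or environment variables.
    # The i__looker field names below are illustrative; check System Activity
    # in your own instance for the exact explore and field names.
    import json
    import smtplib
    from datetime import date, timedelta
    from email.message import EmailMessage

    import looker_sdk
    from looker_sdk import models40 as models

    INACTIVE_AFTER_DAYS = 30  # assumed cutoff: no queries in 30 days counts as inactive

    ACTIVE_TEMPLATE = (
        "Hi {name},\n\nYou've been using Looker recently -- what's working well, "
        "and what's getting in your way?\n"
    )
    INACTIVE_TEMPLATE = (
        "Hi {name},\n\nI noticed you haven't been in Looker lately. "
        "Is something missing or confusing? I'd love to help.\n"
    )

    def fetch_last_query_dates(sdk):
        """Return {email: (name, most recent query date)} from Looker usage data."""
        query = models.WriteQuery(
            model="i__looker",   # "system__activity" on newer instances
            view="history",
            fields=["user.name", "user.email", "history.created_date"],  # assumed field names
            filters={"history.created_date": "90 days"},
            limit="5000",
        )
        rows = json.loads(sdk.run_inline_query(result_format="json", body=query))
        last_seen = {}
        for row in rows:
            email = row.get("user.email")
            if not email:
                continue
            day = date.fromisoformat(row["history.created_date"])
            name, prev_day = last_seen.get(email, (row.get("user.name", ""), date.min))
            last_seen[email] = (name, max(prev_day, day))
        return last_seen

    def send_feedback_emails(last_seen, smtp_host="localhost", sender="data@example.com"):
        """Email active and inactive users different feedback templates."""
        cutoff = date.today() - timedelta(days=INACTIVE_AFTER_DAYS)
        with smtplib.SMTP(smtp_host) as smtp:
            for email, (name, last_day) in last_seen.items():
                template = ACTIVE_TEMPLATE if last_day >= cutoff else INACTIVE_TEMPLATE
                msg = EmailMessage()
                msg["Subject"] = "Quick question about Looker"
                msg["From"] = sender
                msg["To"] = email
                msg.set_content(template.format(name=name))
                smtp.send_message(msg)

    if __name__ == "__main__":
        sdk = looker_sdk.init40()
        send_feedback_emails(fetch_last_query_dates(sdk))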

Be a user

I try to spend some time in Looker exploring and answering my own questions. Sometimes, this helps me find bugs before my users do! Other times, I find interesting insights that I can pass on to my stakeholders: insights they may not have known how to pull or may not have thought to investigate. It’s a lot easier to provide good user experiences if you take some time to be a user yourself.

Start making your company more data-driven

I’m a firm believer that to have real success with any product, you have to:

  • Remind people it exists and demonstrate how it makes their lives easier
  • Understand the frustrations of active and inactive users and fix them
  • Create a community of users who can help and support each other

I believe the same goes for driving data adoption at your company by using Looker. It’s important to be an engineer or an analyst, but don’t forget to be a product manager too. Remember that good things come when you train, demo, document, and communicate! You’ll find that the extra effort has a compounding effect on data adoption and (hopefully) the success of your business.


Join the conversation and share your own insights about data culture and data adoption in the Looker Community.

]]>
2019-08-14T06:00:00-07:00
<![CDATA[Why We Built a Data Culture at Fivetran]]> http://looker.com/blog/why-we-built-a-data-culture-at-fivetran http://looker.com/blog/why-we-built-a-data-culture-at-fivetran

At Fivetran, we build technology that centralizes data from different applications into data warehouses, so enabling organizations to be data-driven is an essential part of our mission.

But what does it mean to have a data-driven culture?

My teammates think of this in a few ways:

"A data-driven culture means that individuals are tuned in to start thinking about problems and their solutions with a data-focused mindset. Enabled by a data-driven culture, users across the organization can ask questions like: What can the data tell me about this problem?, What hypotheses can I test with the data we have available? and How will the changes I’m going to implement affect the data we’re collecting and other people who are leveraging this data?"

Christine Ndege, Solutions Engineer/Data Analyst at Fivetran

"Creating a data-driven culture is something that requires buy-in from across the organization. Not only does it mean making decisions based on evidence and analysis, but it also requires team members’ hard work to populate and provide the data behind this analysis, and thus is truly a whole team effort."

Ryan Muething, Data Analyst at Fivetran

To me, being data-driven has more than one advantage. Generally, when decisions are made based on facts and not best guesses, important discussions happen naturally. People ask: Is this data showing what we expected? If so, they know they’ve confirmed their suspicions. More often than not, however, the initial result is surprising and unexpected. It reveals things you weren’t aware of, both positive and negative.

Why a data-driven culture?

With numerous interpretations of what a data-driven culture is, you may be wondering why an organization would strive to be data-driven.

Think about the decisions teams within an organization make every day. Product teams are continuously iterating to deliver value to customers. Account reps are tracking their actions against quarterly targets. Marketing teams are building their go-to-market strategies. Finance teams are determining quarterly budgets — and the list goes on.

At any given time, all of these teams are leveraging data to measure, adjust, and deliver on their goals. While everyone works rapidly to help the organization succeed, the data they’re using can influence results.

The difference a data-driven culture makes is that when everyone bases business decisions on the same data, confidence in the decision-making process increases. Product teams can prioritize development based on the same data that marketing teams use to inform go-to-market strategies. Finance teams can be sure that their budgets are rooted in the same data that the sales teams use to forecast their pipelines — and so on.

Data-driven with Looker

At Fivetran, we build connectors that deliver ready-to-query data into cloud warehouses. Internally, we use Looker as our centralized data hub. All of our data, including (but not limited to) data from our product, sales, engineering, support, operations, and marketing departments, is centralized in a data warehouse and modeled for access in Looker. Many of us use scheduled reports in addition to daily queries to stay on top of alerts and changes.

The mission of my team is to ensure that our BI layer is a truly useful single source of truth for all of our teams. For instance, we use Looker to highlight progress towards company goals during every companywide meeting, allowing us to give teams across Fivetran a window into different departments' activities and successes.

Acknowledging challenges

Most of the time, the hardest part about continuing to build on our data culture is simply getting people started with Looker. Once people learn how to use it, our team doesn’t need to do much to keep their momentum going.

What is important, however, is the accuracy of the data and the data models. If folks notice incorrect numbers just as they’re starting to use Looker regularly, they may begin to question how much they should trust the tool. It’s crucial that the information we provide via Looker is accurate and consistent with any other data sources people may be using, so that confidence and trust can continue to grow as users gain experience with data and with Looker.

Building on the foundation

We’ve found that having a general understanding of what questions teams are attempting to answer is a good place to start when looking to encourage data-driven decision-making. Speaking to team members and understanding what they want to track helps our team deliver value to them.

A great way to continue building on this is to get people excited about finding answers and going deeper into the data. Instead of delivering explores and dashboards to answer questions for them, holding individuals accountable for meeting objectives and goals encourages them to be data-driven and track their progress more closely. In addition to this, providing regular opportunities for anyone to ask questions and learn in real-time helps to build on that excitement and trust in the platform. At Fivetran, we hold Looker office hours three times a week to help people get started, learn how to set up their first few dashboards, and do complex merged explores and offsets.

The point of establishing a strong data culture is to drive data usage, so making sure your data model is accurate and easy to understand is key. If the only people who can build reports in Looker are folks with a background in SQL, the non-technical majority won’t be able to use the Explore functionality at all, and they’ll miss out on useful insights. To create a data culture that everyone can be a part of, keep things simple and provide explores that aren’t visibly complicated — the data model behind an explore can be complicated, but that complexity should stay hidden from the user. For the user, it should just work.

Join the conversation and share your own insights about data culture and data adoption in the Looker Community.

]]>
2019-08-09T06:00:00-07:00